Tuesday, July 10, 2018

Common Koha Bugs

In Koha, you sometimes get an "Internal server error" message when trying to open a MARC record after a migration.
To get the error details, go to the default Koha log path "/var/log/koha/library" and open the log file "plack-error.log".

Let us focus on two common errors from that log and show how to fix them.

If the error message appears after upgrading from a very old version of Koha to the latest version:

Software error:
Can't call method "branchname" on an undefined value at /usr/share/koha/lib/C4/Biblio.pm line 1570.

The first thing to try is a full Zebra reindex. Run the following commands (-f forces a full rebuild, -a covers authorities, -b covers biblios, -v is verbose, and "library" is the instance name):

sudo su
koha-rebuild-zebra -f -a -b -v library

If this does not succeed, try patching the Koha source code as described below.



1) Use of uninitialized value $tag in hash element at /usr/share/perl5/MARC/Record.pm line 202.

Update the file "/usr/share/perl5/MARC/Record.pm" and add a definedness check around line 202 so that fields with an undefined tag are skipped (for example, a guard such as "next unless defined $tag;" before the tag is used as a hash key).
2) Can't call method "branchname" on an undefined value at /usr/share/koha/lib/C4/Biblio.pm line 1570.

Update the file "/usr/share/koha/lib/C4/Biblio.pm" and add a check before line 1570 so that "branchname" is only called when the branch lookup actually returned a defined value (skip the call, or fall back to an empty string, when it did not).


Friday, July 6, 2018

React Native with firebase online free DB

1) Create an account on Firebase at https://console.firebase.google.com and create a new project.

2) Install the Firebase package in your React Native project using:
npm install firebase --save

3) From the Firebase console, copy your project's configuration object (apiKey, authDomain, databaseURL, and so on); we will need it in the next file.

React Native Code 

4) Open Firebase DB connection on new page with name 'firebase.js'

import * as firebase from 'firebase';

// Initialize Firebase
const firebaseConfig = {
  apiKey: "xxxxxxxxxxxxxxx",
  authDomain: "xxxxxxxxx.firebaseapp.com",
  databaseURL: "https://xxxxxxxxxx.firebaseio.com",
  projectId: "xxxxxxxxx",
  storageBucket: "xxxxxxxxxx.appspot.com",
  messagingSenderId: "xxxxxxxxxxxx"
};

// Singleton wrapper so initializeApp only runs once
let instance = null;

class FirebaseService {
  constructor() {
    try {
      if (!instance) {
        this.app = firebase.initializeApp(firebaseConfig);
        instance = this;
      }
      return instance;
    }
    catch (e) {
      console.log('Firebase error');
      console.log(e);
      return null;
    }
  }
}

const firebaseService = new FirebaseService().app;
export default firebaseService;



5) Create code that uses the Firebase connection
The code below covers Create Account / Login / Sign Out, as well as Insert / Update / Select / Delete.

import React, { Component } from 'react';
import { Text, View, StyleSheet, LayoutAnimation, Platform, UIManager, TouchableOpacity } from 'react-native';
import firebaseService from './firebase';

export default class FireBaseTest extends React.Component {
  componentWillMount() {
    // Create account
    // firebaseService.auth().createUserWithEmailAndPassword('test@test.com', '123456').catch(function(error) {
    //   console.log(error);
    // });

    // Login
    // firebaseService.auth().signInWithEmailAndPassword('test@test.com', '123456').catch(function(error) {
    //   console.log(error);
    // });

    console.log('Code version 2');

    // Insert
    firebaseService.database().ref('News/001').set({
      Title: 'Test',
      Contents: 'Test'
    }).then(() => {
      //console.log('Insert Done');
    }).catch((error) => {
      //console.log(error);
    });

    // Update
    firebaseService.database().ref('News/001').update({
      Title: 'Update Title Only!'
    }).then(() => {
      //console.log('Update Done');
    }).catch((error) => {
      //console.log(error);
    });

    //========= Select
    // Method 1: runs only once, during loading
    firebaseService.database().ref('News').once('value', (data) => {
      var returnResults = data.toJSON();
      console.log(returnResults);
    }).then(() => {
      //console.log('Select Done');
    }).catch((error) => {
      //console.log(error);
    });

    // Method 2: runs every time the DB data changes
    // firebaseService.database().ref('News').on('value', (data) => {
    //   var returnResults = data.toJSON();
    //   console.log(returnResults);
    // });

    // Delete
    // firebaseService.database().ref('News/001').remove().then(() => {
    //   console.log('Done');
    // }).catch((error) => {
    //   console.log(error);
    // });

    // Sign out (note: call auth() on the initialized app, not the bare firebase import)
    // firebaseService.auth().signOut().then(function() {
    //   console.log('Sign-out successful');
    // }).catch(function(error) {
    //   console.log(error);
    // });
  }

  render() {
    return (
      <View>
        <View>
          <TouchableOpacity>
            <Text>Title</Text>
          </TouchableOpacity>
          <View>
            <Text>
              Contents
            </Text>
          </View>
        </View>
      </View>
    );
  }
}









Saturday, June 30, 2018

React Native commands

How to create your first project fast?

Get the command line tool

npm install exp --global


Create your first project

exp init my-new-project
cd my-new-project
exp start

npm install react-navigation@1.1.2

npm install --save react-native-elements

npm install native-base --save
npm install @expo/vector-icons --save


Start the build

exp build:android
exp build:ios

A good, complete sample project:
https://expo.io/@geekyants/nativebasekitchensink

Source code:
https://github.com/GeekyAnts/NativeBase-KitchenSink/tree/CRNA


Thursday, June 28, 2018

Install Solr 7.4

Step 1 – Install Java
Because Solr is Java-based, we need a Java environment (as the Solr wiki advises: prefer a full JDK to a simple JRE).
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo apt-get install oracle-java8-set-default
Step 2 – Set the JAVA_HOME and JRE_HOME variables
nano /etc/environment
and add these two lines:

JAVA_HOME=/usr/lib/jvm/java-8-oracle
JRE_HOME=/usr/lib/jvm/java-8-oracle/jre

Reboot, then validate with:

echo $JAVA_HOME




Install Solr



wget http://archive.apache.org/dist/lucene/solr/7.4.0/solr-7.4.0.tgz



tar xzf solr-7.4.0.tgz solr-7.4.0/bin/install_solr_service.sh --strip-components=2


sudo bash ./install_solr_service.sh  solr-7.4.0.tgz


Create Solr Collection with name TestCollection1

sudo su - solr -c "/opt/solr/bin/solr create -c TestCollection1 -n data_driven_schema_configs"

This will create a new folder named TestCollection1 under /var/solr/data.


Start/Stop Solr
sudo service solr stop
sudo service solr start
sudo service solr status



After installation, Solr will be available at
http://<server-ip>:8983/solr/


View Live Solr Logs
tail -f /var/solr/logs/solr.log

Friday, May 4, 2018

Work with Solr

Terminology

1) Solr instance: Zero or more cores can be configured to run inside a Solr instance. Each Solr instance requires a reference to a separate Solr home directory.

2) Solr core: Each of your indexes, together with the files required by that index, makes up a core. So if your application requires multiple indexes, you can run multiple cores inside a Solr instance.

3) Solr home: The directory that Solr refers to for almost everything. It contains all the information regarding the cores and their indexes, configurations, and dependencies.

4) Solr shard: This term is used in distributed environments, in which you partition the data between multiple Solr instances. Each chunk of data on a particular instance is called a shard. The shard contains a subset of the whole index. For example, say you have 30 million documents and plan to distribute these in three shards, each containing 10 million documents. You’ll need three Solr instances, each having one core with the same name and schema. While serving queries, any shard can receive the request and distribute it to the other two shards for processing, get all the results and respond back to the client with the merged result.
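The routing idea in the shard example above can be sketched in a few lines of Python (illustrative only; Solr's real document router uses a different hash function):

```python
import zlib

def shard_for(doc_id, num_shards=3):
    # Map a document id deterministically onto one of num_shards buckets,
    # so documents spread roughly evenly across the shards.
    return zlib.crc32(doc_id.encode()) % num_shards
```

Because the mapping is deterministic, every node agrees on which shard owns a given document without any central coordination.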


Solr supports two types of distributed architecture:

1) Master-slave architecture [old]: the index is created on the master server and replicated to one or more slave servers dedicated to searching. This approach has several limitations.

2) SolrCloud [new]: sets up a cluster of Solr servers to provide fault tolerance and high availability, and to offer features such as distributed indexing, centralized configuration, automatic load balancing, and failover.


SolrCloud Terminology

Node: A single instance of Solr

Cluster: All the nodes in your environment together.

Collection: A complete logical index in a cluster.

Shard: A logical portion, or slice, of a collection.

Replica: The physical copy of a shard, which runs in a node as a Solr core.

Leader: Among all the replicas of a shard, one is elected as a leader. SolrCloud forwards all requests
to the leader of the shard, which distributes it to the replicas.

ZooKeeper: ZooKeeper is an Apache project widely used by distributed systems for centralized configuration and coordination. SolrCloud uses it for managing the cluster and electing a leader.


SolrCloud

Apache Solr includes the ability to set up a cluster of Solr servers that combines fault tolerance and high availability. Called SolrCloud, these capabilities provide distributed indexing and search capabilities, supporting the following features:
  • Central configuration for the entire cluster
  • Automatic load balancing and fail-over for queries
  • ZooKeeper integration for cluster coordination and configuration.
SolrCloud provides flexible distributed search and indexing without a master node allocating nodes, shards, and replicas. Instead, Solr uses ZooKeeper to manage these locations, based on configuration files and schemas. Queries and updates can be sent to any server; Solr uses the information in the ZooKeeper database to figure out which servers need to handle the request.

Launch a SolrCloud cluster on your local workstation

bin/solr start -e cloud

(Run this command from the Solr home directory. For the prompts that follow, you can accept the defaults by pressing Enter.)

How many Solr nodes would you like to run in your local cluster (specify 1-4 nodes) [2]:?
Please enter the port for node1 [8983]:
Please enter the port for node2 [7574]:
Create a new collection, Please provide a name for your new collection [gettingstarted]:
How many shards would you like to split new collection into? [2]
How many replicas per shard would you like to create? [2]




Notice that two instances of Solr have started on two nodes, one on port 7574 and one on port 8983.
One collection has been created: a two-shard collection, with two replicas per shard.
Solr Admin UI URL: http://localhost:8983/solr

Solr has two main configuration files: the schema file (named either managed-schema or schema.xml), and solrconfig.xml.
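To illustrate the schema file's role, a single field declaration in managed-schema looks roughly like the following (the field name and type here are assumptions, not taken from any particular install):

```xml
<!-- Declares an indexed, stored text field in managed-schema (names illustrative) -->
<field name="title" type="text_general" indexed="true" stored="true"/>
```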


solrconfig.xml:
configure the <slowQueryThresholdMillis> element in the query section 
<slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
Any queries that take longer than the specified threshold will be logged as "slow" queries at the WARN level.






Delete all Solr records

<delete><query>*:*</query></delete>
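This delete-by-query payload is posted to a core's update handler. The sketch below builds such a request in Python with the standard library (the host, port, and core name are placeholders):

```python
import urllib.request

def build_delete_request(core, host="localhost", port=8983):
    # Delete-by-query payload posted to the core's /update handler;
    # commit=true makes the deletion visible immediately.
    url = "http://%s:%d/solr/%s/update?commit=true" % (host, port, core)
    payload = b"<delete><query>*:*</query></delete>"
    return urllib.request.Request(url, data=payload,
                                  headers={"Content-Type": "text/xml"})
```

Send it with urllib.request.urlopen(req) once a core by that name actually exists.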

Thursday, May 3, 2018

Ubuntu important commands

lastb only shows login failures. Use last to see successful logins.



Check whether a package is installed or not

dpkg --list | grep phpmyadmin




How to keep processes running after ending ssh session?
  • ssh into your remote box. Type screen Then start the process you want.
  • Press Ctrl-A then Ctrl-D. This will "detach" your screen session but leave your processes running. You can now log out of the remote box.
  • If you want to come back later, log on again and type screen -r This will "resume" your screen session, and you can see the output of your process.
OR use byobu:
sudo aptitude install byobu
Start byobu by typing byobu.
Press:
F2 to create a new window within the current session,
F3/F4 to switch between the windows,
F6 to detach from byobu and keep it running.




Get Installation directory 
whereis tomcat7

Compress a folder and save it as a file in the current directory
tar -zcvf FileName.tar.gz -C /Folder/Name/To/Compress .

Extract a compressed file
tar zxf solr-7.0.0.tgz


Install PHP version 5.6
------------------------------
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install -y php5.6

Un-Install PHP 7
-----------------------
sudo apt-get purge php7.0-common
sudo apt-get purge php7.*


Install JDK 8.0
-------------------
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo apt-get install oracle-java8-set-default

#Setup JAVA_HOME and JRE_HOME Variable
nano /etc/environment
JAVA_HOME=/usr/lib/jvm/java-8-oracle
JRE_HOME=/usr/lib/jvm/java-8-oracle/jre


Install Apache2
---------------------
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install apache2 apache2-doc apache2-utils


Install MySQL
-------------------
sudo apt-get update
sudo apt-get install mysql-client-core-5.5
sudo apt-get install mysql-server-5.5
sudo mysql_secure_installation

Un-Install MySQL
----------------------
sudo apt-get remove --purge mysql-server mysql-client mysql-common
sudo apt-get autoremove
sudo apt-get autoclean
sudo rm -rf /var/lib/mysql
sudo rm -rf /etc/init.d/mysql
sudo rm -rf /etc/init/mysql.conf




Login to Mysql, create DB, Import Backup, Create user, and add PRIVILEGES
-------------------------------------------------------------------------------------------------------
mysql -u root -p


CREATE DATABASE ___my_database_name____ CHARACTER SET utf8 COLLATE utf8_general_ci;

mysql -u root -p ddl < /root/backupFile.sql
CREATE USER 'xxxx'@'localhost' IDENTIFIED BY 'Password';
GRANT ALL ON DatabaseName.* TO 'xxx'@'localhost';

CREATE USER 'xxxx'@'127.0.0.1' IDENTIFIED BY 'Password';
GRANT ALL ON DatabaseName.* TO 'xxx'@'127.0.0.1';

CREATE USER 'root'@'%' IDENTIFIED BY 'some_pass';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';

FLUSH PRIVILEGES;




Install Solr and upgrade old Solr version

Step 1 – Install Java
Because Solr is Java-based, we need a Java environment (as the Solr wiki advises: prefer a full JDK to a simple JRE).
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo apt-get install oracle-java8-set-default
Step 2 – Set the JAVA_HOME and JRE_HOME variables
nano /etc/environment
and add these two lines:

JAVA_HOME=/usr/lib/jvm/java-8-oracle
JRE_HOME=/usr/lib/jvm/java-8-oracle/jre

Reboot, then validate with:

echo $JAVA_HOME





Install Default Solr version 


sudo add-apt-repository universe

Option 1: Install Solr with Tomcat
sudo apt-get install solr-tomcat
Option 2: Install Solr with Jetty
sudo apt-get install solr-jetty
Open the URL http://localhost:8080/solr/admin/ (assuming your Tomcat is listening on port 8080).

Notes
If you get the following error during installation:
tomcat7[4983]:  * no JDK or JRE found - please set JAVA_HOME

then do the next steps

nano  /etc/default/tomcat7
add this line
JAVA_HOME=/usr/lib/jvm/java-8-oracle

Change the default Tomcat port as follows:
nano /etc/tomcat7/server.xml
  • Search for "Connector port" and replace 8080 with the new port.

Uninstall
sudo apt-get remove solr-tomcat
sudo apt-get remove solr-jetty 
sudo apt-get remove tomcat7-common
sudo apt autoremove




Install Specific version of Solr

1) Install Java
2) Get the URL from       http://archive.apache.org/dist/lucene/solr/
3) Run the next commands

For Solr version 7.3.0

Steps: download the package, extract the install script from the archive, then run the script, passing the archive as a parameter.

cd ~
wget http://www-eu.apache.org/dist/lucene/solr/7.3.0/solr-7.3.0.tgz
tar xzf solr-7.3.0.tgz solr-7.3.0/bin/install_solr_service.sh --strip-components=2
sudo bash ./install_solr_service.sh  solr-7.3.0.tgz

For Solr version 5.3.1

cd ~
wget http://archive.apache.org/dist/lucene/solr/5.3.1/solr-5.3.1.tgz
tar xzf solr-5.3.1.tgz solr-5.3.1/bin/install_solr_service.sh --strip-components=2
sudo chmod +x install_solr_service.sh
sudo ./install_solr_service.sh solr-5.3.1.tgz

OR


For Solr version 4.10.4

wget https://archive.apache.org/dist/lucene/solr/4.10.4/solr-4.10.4.tgz
tar -xvf solr-4.10.4.tgz
cp -R solr-4.10.4/example /opt/solr
cd /opt/solr
java -jar start.jar

Solr will now be available at http://your_server_ip:8983/solr


Uninstall via
sudo service solr stop
sudo rm -r /var/solr
sudo rm -r /opt/solr-5.3.1
sudo rm -r /opt/solr
sudo rm /etc/init.d/solr
sudo deluser --remove-home solr
sudo deluser --group solr

Use following commands to Start, Stop and check the status of Solr service.
sudo service solr stop
sudo service solr start
sudo service solr status

Create First Solr Collection
sudo su - solr -c "/opt/solr/bin/solr create -c TestCollection1 -n data_driven_schema_configs"
  
This will create a new folder named TestCollection1 under /var/solr/data.


For more information check https://www.howtoforge.com/tutorial/how-to-install-and-configure-solr-on-ubuntu-1604/




Upgrade old Solr version

Use the following script:     https://github.com/cominvent/solr-tools/tree/master/upgradeindex

Usage:

Script to Upgrade old indices from 3.x -> 4.x -> 5.x -> 6.x format, 
so it can be used with Solr 6.x or 7.x
Usage: ./upgradeindex.sh [-s] [-t target-ver] <indexdata-root>

Example: ./upgradeindex.sh -s -t 6 /opt/solr

Solr upgradeindex
https://github.com/cradules/bash_scripts/tree/master/solr-tools/upgradeindex
https://github.com/cominvent/solr-tools/tree/master/upgradeindex


Importing/Indexing database (MySQL or SQL Server) in Solr using Data Import Handler
https://gist.github.com/maxivak/3e3ee1fca32f3949f052

Monday, April 30, 2018

Python code to extract Title, Author, and Dewey from KOHA Database


sudo apt-get install python-bs4
sudo apt-get install python-mysqldb


Extract MARC data and insert it into a new table

1) Create the following table

DROP TABLE IF EXISTS `newmarcrecords`;
CREATE TABLE `NewMarcRecords` (
    `BibID` VARCHAR(30) NOT NULL ,
    `Leader` VARCHAR(30)  NULL ,
    `Title` VARCHAR(500) NOT NULL ,
    `Auther` VARCHAR(500)  NULL ,
    `Publisher` VARCHAR(500)  NULL ,
   
    `PublishYear` VARCHAR(500)  NULL ,
    `PublishLocation` VARCHAR(500)  NULL ,
    `Subject` VARCHAR(500)  NULL ,
    `Classification` VARCHAR(500)  NULL ,

    `RecordSource` VARCHAR(500)  NULL ,
    `Pages` VARCHAR(500)  NULL ,
    `URL` VARCHAR(500)  NULL ,
    `CoverURL` VARCHAR(500)  NULL ,
    `Price` VARCHAR(30) NULL ,
    `RecordStatus` VARCHAR(1) NULL ,
     PRIMARY KEY (`BibID`)) DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci ENGINE = InnoDB;


2) Run the following Python code

# -*- coding: utf-8 -*-
#!/usr/bin/python

"""
DROP TABLE IF EXISTS `newmarcrecords`;
CREATE TABLE `NewMarcRecords` (
    `BibID` VARCHAR(30) NOT NULL ,
    `Leader` VARCHAR(30)  NULL ,
    `Title` VARCHAR(500) NOT NULL ,
    `Auther` VARCHAR(500)  NULL ,
    `Publisher` VARCHAR(500)  NULL ,
    `PublishYear` VARCHAR(500)  NULL ,
    `PublishLocation` VARCHAR(500)  NULL ,
    `Subject` VARCHAR(500)  NULL ,
    `Classification` VARCHAR(500)  NULL ,
    `RecordSource` VARCHAR(500)  NULL ,
    `Pages` VARCHAR(500)  NULL ,
    `URL` VARCHAR(500)  NULL ,
    `CoverURL` VARCHAR(500)  NULL ,
    `Price` VARCHAR(30) NULL ,
    `RecordStatus` VARCHAR(1) NULL ,
     PRIMARY KEY (`BibID`)) DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci ENGINE = InnoDB;
"""

import MySQLdb
from bs4 import BeautifulSoup

def GetValue( Tag, Field ):
    # Reads from the global BeautifulSoup object y (set inside the loop below);
    # returns the subfield text, or '---' when the tag/subfield is absent.
    check = y.find(tag=Tag)
    if check is not None:
        check = check.find(code=Field)
        if check is not None:
            return check.get_text()
    return '---'



db = MySQLdb.connect(host="localhost",    # your host, usually localhost
                     user="root",         # your username
                     passwd="admin123",   # your password
                     db="koha" ,          # name of the data base
                     charset='utf8')         

# you must create a Cursor object. It will let
#  you execute all the queries you need
cur = db.cursor()

# Use all the SQL you like
cur.execute("SELECT biblionumber,metadata FROM `biblio_metadata` where CAST(biblionumber AS CHAR) not in (select BibID from NewMarcRecords)")

# print all the first cell of all the rows
for row in cur.fetchall():
    biblionumber= row[0]
    xmlMarc = row[1]
    y=BeautifulSoup(xmlMarc,"html5lib")
    Leader=y.record.leader.get_text()
    title=GetValue( "245","a")
    author=GetValue( "100","a")
    Publisher=GetValue( "260","b")        # 260$b = publisher name
    PublishYear=GetValue( "260","c")      # 260$c = date of publication
    PublishLocation=GetValue( "260","a")  # 260$a = place of publication
    Subject=GetValue( "650","a")
    Classification=GetValue( "082","a")
    RecordSource=GetValue( "956","s")
    Pages=GetValue( "300","a")
    URL=GetValue( "856","u")
    CoverURL=GetValue( "956","u")
    Price=GetValue( "956","p")
    RecordStatus='0'

    try:
        cur.execute("""
                    INSERT INTO `newmarcrecords`  (`BibID`, `Leader`, `Title`, `Auther`, `Publisher`, `PublishYear`, `PublishLocation`
                    , `Subject`, `Classification`, `RecordSource`, `Pages`, `URL`, `CoverURL`, `Price`, `RecordStatus`)
                    VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
                    """
                    ,(biblionumber,Leader,title,author,Publisher,PublishYear,PublishLocation
                    ,Subject,Classification,RecordSource,Pages,URL,CoverURL,Price,RecordStatus))
        db.commit()
        print "success"
    except MySQLdb.Error as e:
        db.rollback()
        print "Fail!"
        print e


    print title
    print "========================="

db.close()