Wednesday, May 04, 2022

Step by Step YugabyteDB 2.11 (Open Source) Distributed DB - Multi-node Cluster Setup on RHEL

Scope -

·       Infrastructure planning and requirements for installation of the multi-node cluster database

·       Prerequisite software, network ports, and storage requirements for YugabyteDB



Prerequisites for YugabyteDB 2.11 configuration

A) Node List:-

Node      Host        IP Address
Node 1    yblab101    192.168.0.101
Node 2    yblab102    192.168.0.102
Node 3    yblab103    192.168.0.103
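If these hostnames are not resolvable through DNS, a minimal sketch of the /etc/hosts entries (added as the root user on each node; adjust to your environment) could look like this:

192.168.0.101   yblab101
192.168.0.102   yblab102
192.168.0.103   yblab103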

B) Required network ports and software for YugabyteDB:

S.No.   Software / Port / Action Item                        Requirement      Value
1       YEDIS API                                            Port             6379
2       YB-TServer Admin UI                                  Port             9000
3       Node Exporter                                        Port             9300
4       YB-Master Admin UI                                   Port             7000
5       YCQL API                                             Port             12000
6       SSH                                                  Port             22
7       YSQL API                                             Port             5433
8       YCQL                                                 Port             9042
9       Yugabyte software 2.11                               Software
10      YB-Master RPC communication                          Port             7100
11      YB-TServer RPC communication                         Port             9100
12      NTP package                                          Software
13      psycopg2 (PostgreSQL database adapter for Python)    Software
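For items 12 and 13 above, a minimal installation sketch on RHEL 7 is shown below (the package names are assumptions for a standard RHEL 7 repository setup; chrony may be used in place of ntp if preferred):

sudo yum install -y ntp python-psycopg2
sudo systemctl enable ntpd
sudo systemctl start ntpd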


C) The following values required by the YugabyteDB software should be set in /etc/security/limits.conf (as the root user):-

 

*                -       core            unlimited

*                -       data            unlimited

*                -       fsize           unlimited

*                -       sigpending      119934

*                -       memlock         64

*                -       rss             unlimited

*                -       nofile          1048576

*                -       msgqueue        819200

*                -       stack           8192

*                -       cpu             unlimited

*                -       nproc           12000

*                -       locks           unlimited
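After editing limits.conf, log in again as the yugabytedb user so the new limits take effect, and verify them (a quick check; the values should match the settings above):

su - yugabytedb
ulimit -n     # expect 1048576 (nofile)
ulimit -u     # expect 12000 (nproc)
ulimit -a     # full list of effective limits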

 

D) Transparent Hugepages

Transparent hugepages should be set to "always". If the current setting is "madvise" or "never", ensure it is updated as the root user.

cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never
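If the output does not show [always], a minimal runtime change (as the root user) is shown below; note that this does not persist across reboots:

echo always > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled     # verify that [always] is now selected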

 

E) Ensure the below kernel boot parameters are set at the OS level, for example on Red Hat Enterprise Linux Server release 7.9:-

 

[root@hqliorpv108 ~]# cat /etc/default/grub | grep -i "GRUB_CMDLINE_LINUX"

GRUB_CMDLINE_LINUX="crashkernel=auto spectre_v2=retpoline rd.lvm.lv=VG00/root rd.lvm.lv=VG00/swap rd.lvm.lv=VG00/usr biosdevname=0 ipv6.disable=1 net.ifnames=0 rhgb quiet transparent_hugepage=never"

[root@hqliorpv108 ~]#
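If /etc/default/grub is edited, the change takes effect only after regenerating the GRUB configuration and rebooting. A minimal sketch for a BIOS-based RHEL 7 system is shown below (the output path differs on UEFI systems):

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot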

 

F) Installation Steps:-

1) Enable the below ports as the root user on each node:

sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=9300/tcp --permanent
sudo firewall-cmd --zone=public --add-port=7000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=9100/tcp --permanent
sudo firewall-cmd --zone=public --add-port=12000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=13000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=7100/tcp --permanent
sudo firewall-cmd --zone=public --add-port=11000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=9042/tcp --permanent
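The permanent rules take effect only after a reload. Assuming firewalld is the active firewall, they can be applied and verified as follows:

sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports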

 

2) Create the group, user, and folders for the Yugabyte software on all three nodes (Node1/Node2/Node3):

=>

root>  groupadd  yugabytedb

root> useradd -d /home/yugabyte -g yugabytedb yugabytedb

root> groupadd prometheus

root> useradd -g prometheus prometheus   (may be required for monitoring purposes)

 

=> Check the /etc/passwd file after the yugabytedb user is created:-

yugabytedb:x:3000:3000:00000 Yugabytedb software owner:/opt/yugabytedb:/bin/bash

 

[root@hqliorpv107 ~]# cat /etc/group | grep yugabytedb
yugabytedb:x:3000:
[root@hqliorpv107 ~]#
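As a quick sanity check on each node, confirm the users and groups exist (the UID/GID of 3000 simply mirrors the /etc/passwd entry above; your values may differ):

id yugabytedb
id prometheus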

 

3) Create two data directories:

   data1 => PostgreSQL data directory | data2 => Yugabyte data directory:-

cd /yugabyte01
mkdir YUGABYTE
chown yugabytedb:yugabytedb YUGABYTE
sudo su - yugabytedb
mkdir -p /yugabyte01/YUGABYTE/data1 /yugabyte01/YUGABYTE/data2
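To confirm the directories were created with the correct ownership (a quick check; /yugabyte01 is assumed to be a dedicated mount point for the database data):

ls -ld /yugabyte01/YUGABYTE /yugabyte01/YUGABYTE/data1 /yugabyte01/YUGABYTE/data2
df -h /yugabyte01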

 

4) Install Yugabyte software from the Nexus Repo and run post_install.sh as follows on all three nodes.

cd /opt/yugabyte
wget http://repository.emirates.com/repository/dbateam_repo/Yugabyte/yugabyte-2.11.0.1-b1-linux-x86_64.tar.gz
tar -zxvf yugabyte-2.11.0.1-b1-linux-x86_64.tar.gz
cd yugabyte-2.11.0.1/

Run the post_install script after extracting the software:

./bin/post_install.sh
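As a quick check that the binaries are usable after post_install.sh, print their versions (both should report the installed 2.11.0.1 build):

./bin/yb-master --version
./bin/yb-tserver --version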

 

5) Start YB master services as follows on each node:-

 

Node 1: yblab101 (192.168.0.101)

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-master \
  --master_addresses 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  --rpc_bind_addresses 192.168.0.101:7100 \
  --fs_data_dirs "/yugabyte01/YUGABYTE/data1,/yugabyte01/YUGABYTE/data2" \
  --placement_cloud hq \
  --placement_region hq \
  --placement_zone hq \
  --webserver_interface=0.0.0.0 \
  --master_enable_metrics_snapshotter=true \
  --webserver_port=6516 \
  >& /opt/yugabyte/yb-master.out &
cd /yugabyte01/YUGABYTE/data1/yb-data/master/logs
cat yb-master.INFO | grep "This master"

 

Node 2: yblab102 (192.168.0.102)

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-master \
  --master_addresses 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  --rpc_bind_addresses 192.168.0.102:7100 \
  --fs_data_dirs "/yugabyte01/YUGABYTE/data1,/yugabyte01/YUGABYTE/data2" \
  --placement_cloud hq \
  --placement_region hq \
  --placement_zone hq \
  >& /opt/yugabyte/yb-master.out &

cd /yugabyte01/YUGABYTE/data1/yb-data/master/logs
cat yb-master.INFO | grep "This master"

 

Node 3: yblab103 (192.168.0.103)

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-master \
  --master_addresses 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  --rpc_bind_addresses 192.168.0.103:7100 \
  --fs_data_dirs "/yugabyte01/YUGABYTE/data1,/yugabyte01/YUGABYTE/data2" \
  --placement_cloud hq \
  --placement_region hq \
  --placement_zone hq \
  >& /opt/yugabyte/yb-master.out &

 

Find the leader node using the log output below, running the check on each node:

cd /yugabyte01/YUGABYTE/data1/yb-data/master/logs
cat yb-master.INFO | grep "This master"

 

 :/yugabyte01/YUGABYTE/data1/yb-data/master/logs>cat yb-master.yblab101.yugabytedb.log.INFO.20220207-063044.57683 | grep "This master"
I0207 06:30:45.855286 57707 sys_catalog.cc:384] T 00000000000000000000000000000000 P b36db44b23a4490b9514bccc1fab2e8e [sys.catalog]: This master's current role is: FOLLOWER
I0207 06:30:45.855345 57707 sys_catalog.cc:384] T 00000000000000000000000000000000 P b36db44b23a4490b9514bccc1fab2e8e [sys.catalog]: This master's current role is: FOLLOWER
I0207 06:52:20.829769 63103 sys_catalog.cc:384] T 00000000000000000000000000000000 P b36db44b23a4490b9514bccc1fab2e8e [sys.catalog]: This master's current role is: LEADER
 :/yugabyte01/YUGABYTE/data1/yb-data/master/logs>
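Alternatively, the Raft role of every master can be listed in one place with yb-admin (a minimal sketch; run from the install directory on any node):

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-admin \
  -master_addresses 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  list_all_masters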

 

6) Start the YB tablet servers on each node as follows:-

Node 1: yblab101 (192.168.0.101)

 cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-tserver \
  --tserver_master_addrs 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  --rpc_bind_addresses 192.168.0.101:9100 \
  --start_pgsql_proxy \
  --pgsql_proxy_bind_address 192.168.0.101:6518 \
  --cql_proxy_bind_address 192.168.0.101:9042 \
  --fs_data_dirs "/yugabyte01/YUGABYTE/data1,/yugabyte01/YUGABYTE/data2" \
  --placement_cloud doz \
  --placement_region doz \
  --placement_zone doz \
  >& /opt/yugabyte/yb-tserver.out &

 

Node 2: yblab102 (192.168.0.102)

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-tserver \
  --tserver_master_addrs 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  --rpc_bind_addresses 192.168.0.102:9100 \
  --start_pgsql_proxy \
  --pgsql_proxy_bind_address 192.168.0.102:6518 \
  --cql_proxy_bind_address 192.168.0.102:9042 \
  --fs_data_dirs "/yugabyte01/YUGABYTE/data1,/yugabyte01/YUGABYTE/data2" \
  --placement_cloud hq \
  --placement_region hq \
  --placement_zone hq \
  >& /opt/yugabyte/yb-tserver.out &

 

Node 3: yblab103 (192.168.0.103)

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-tserver \
  --tserver_master_addrs 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  --rpc_bind_addresses 192.168.0.103:9100 \
  --start_pgsql_proxy \
  --pgsql_proxy_bind_address 192.168.0.103:6518 \
  --cql_proxy_bind_address 192.168.0.103:9042 \
  --fs_data_dirs "/yugabyte01/YUGABYTE/data1,/yugabyte01/YUGABYTE/data2" \
  --placement_cloud doz \
  --placement_region doz \
  --placement_zone doz \
  >& /opt/yugabyte/yb-tserver.out &

 

Verify the cluster by checking the tablet server log output below, running the check on each node:-

cd /yugabyte01/YUGABYTE/data1/yb-data/tserver/logs

 :/yugabyte01/YUGABYTE/data1/yb-data/tserver/logs>cat yb-tserver.yblab102.yugabytedb.log.INFO.20220208-121828.112811 | grep -i "Connected to a leader master server"
I0208 12:18:28.482936 112848 heartbeater.cc:305] P 8e87587653304075a23a19c7e0b43f98: Connected to a leader master server at 192.168.0.101:7100
 :/yugabyte01/YUGABYTE/data1/yb-data/tserver/logs>
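As a further check, list the registered tablet servers with yb-admin and test a YSQL connection through the pgsql proxy port configured above (6518 in this setup; the YugabyteDB default is 5433):

cd /opt/yugabyte/yugabyte-2.11.0.1
./bin/yb-admin \
  -master_addresses 192.168.0.101:7100,192.168.0.102:7100,192.168.0.103:7100 \
  list_all_tablet_servers
./bin/ysqlsh -h 192.168.0.101 -p 6518 -U yugabyte -c '\l'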

 
