ORACLE RAC INTRODUCTION & RAC NODE SETUP
----------------------------------------------------------------------
Supports multiple instances in a clustered environment (up to 100 instances)
Supports high availability and scalability
To install RAC, two software components are needed
-------------------------------------
Grid Infrastructure ($GRID_HOME) ---> Grid Infrastructure bundles both the Clusterware and ASM software under a separate home
/u01/app/12.1.0.1/grid/bin
---> RDBMS Software (12c database) (ORACLE_HOME)
/u01/app/12.1.0.1/rdbms/bin
Oracle Restart, a feature that is part of the Grid software, manages the resources on start/stop
Grid is used to monitor and manage resources (DB, LISTENER, SCAN); Clusterware is the software that provides the interfaces and services that enable and support a cluster
Oracle clusterware components:
-------------------------------------
Two components in clusterware
1. Voting disk : keeps node membership information (when a node joins or leaves the cluster), is located on shared storage available to all nodes, and can be multiplexed up to 15 voting disks
# ./crsctl query css votedisk
2. Oracle Cluster Registry : the OCR file resides on shared storage and stores resource information (database, listener, VIP, SCAN, ASM, instances); it can also be multiplexed up to 5 disks
# ./ocrcheck
Oracle Real Application Cluster
-------------------------------------
RAC startup sequence
--------------------------
1. Lower stack ----> Oracle High Availability Services ---> level 0
spawns the ohasd (Oracle High Availability Services daemon); when root.sh was executed during installation, it added an entry in /etc/inittab to spawn the ohasd service
# more /etc/inittab
following entry: /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
2. Upper stack -----> Clusterware processes ----> the OHASD daemon then spawns additional Clusterware processes at each startup level
CRSD --> Cluster Ready Services ----> manages high availability operations in a cluster
OCSSD --> Cluster Synchronization Services ---> manages node membership information when a node joins or leaves
EVMD ---> Event Management daemon ---> a background process that publishes events that Oracle Clusterware creates
ONS ---> Oracle Notification Service --> a publish-and-subscribe service for communicating Fast Application Notification (FAN) events
OPROCD ---> Oracle process monitor daemon ---> periodically wakes up and checks that the interval since it last woke up is within the expected time; if not, OPROCD resets the processor and restarts the node
an OPROCD failure results in Oracle Clusterware restarting the node
# ./crsctl check crs
# ./crsctl check css
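To confirm these daemons are actually running at the OS level, a minimal check (process names can vary slightly by version and platform):
# ps -ef | egrep -i 'ohasd|crsd|ocssd|evmd|ons' | grep -v grep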
RAC Tools
------------
CRSCTL ---> root user ---> Cluster Ready Services control utility
SRVCTL ---> oracle user ---> server control utility
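A minimal sketch of typical commands from each utility (the database name racdb is only an assumed example):
# crsctl check crs                  -- as root: verify CRS, CSS and EVM are online
# crsctl stat res -t                -- as root: tabular status of all cluster resources
$ srvctl status asm                 -- as grid: ASM status on each node
$ srvctl status database -d racdb   -- as oracle: instance status of database racdb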
RAC NODE SETUP
------------------------
Installation of the SAN software (we are using the Openfiler 64-bit SAN software on VMware; all the nodes create and access the centralized SAN storage. The Openfiler software can be downloaded
from www.openfiler.com. 4-node setup: node 1 is cloned to the remaining nodes)
Node1
---------
1) Oracle Linux operating system installed
2) Three network adapters
public
private
ASM
3) VM configuration
4 GB RAM
1 processor
100 GB HDD
4) Adding kernel parameters (see the sysctl sketch after this list)
5) Installing RPMs
6) ASM libraries
7) ASM configuration: fdisk, label, scan, list
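A representative /etc/sysctl.conf snippet for step 4 (these are the commonly documented Oracle 12c minimums; the shared-memory values depend on your RAM, so verify them against the install guide for your release):
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
apply the values with: # sysctl -p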
Oracle Real Application Clusters 12c R1 Grid installation (12cR1)
Node2
-----
# xhost +
# neat
Network configuration
----------------------
(the VM was cloned from node 1; delete the old eth0.bkp, eth1.bkp, eth2.bkp devices, configure eth0, eth1, eth2, then deactivate and activate each Ethernet card once, and set the DNS search path to cluster.com with the hostname node2.cluster.com; a sample ifcfg file follows the address lists below)
eth0 public ip
-----------------------
node2   ip address 147.43.0.2   subnet mask 255.255.255.0
node3   ip address 147.43.0.3   subnet mask 255.255.255.0
node4   ip address 147.43.0.4   subnet mask 255.255.255.0
eth1 private ip
------------------
node2   ip address 192.168.0.2   subnet mask 255.255.255.0
node3   ip address 192.168.0.3   subnet mask 255.255.255.0
node4   ip address 192.168.0.4   subnet mask 255.255.255.0
eth2 asm network
----------------
node2   ip address 192.168.1.2   subnet mask 255.255.255.0
node3   ip address 192.168.1.3   subnet mask 255.255.255.0
node4   ip address 192.168.1.4   subnet mask 255.255.255.0
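A minimal sketch of one of these interface files, assuming the standard RHEL/OEL network-scripts layout (shown for node2's public interface; adjust DEVICE, IPADDR and the file name per node and per interface):
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=147.43.0.2
NETMASK=255.255.255.0
ONBOOT=yes
after editing, restart networking: # service network restart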
Node 1 : configure the /etc/hosts file in the vi editor and assign the IP addresses for public, private, ASM, VIP, SCAN and localhost.localdomain
------
# cat /etc/hosts
# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
## PUBLIC IP
147.43.0.1 node1.cluster.com node1
147.43.0.2 node2.cluster.com node2
147.43.0.3 node3.cluster.com node3
147.43.0.4 node4.cluster.com node4
## PRIVATE IP
192.168.0.1 node1-pri.cluster.com node1-pri
192.168.0.2 node2-pri.cluster.com node2-pri
192.168.0.3 node3-pri.cluster.com node3-pri
192.168.0.4 node4-pri.cluster.com node4-pri
## ASM IP
192.168.1.1 node1-asm.cluster.com node1-asm
192.168.1.2 node2-asm.cluster.com node2-asm
192.168.1.3 node3-asm.cluster.com node3-asm
192.168.1.4 node4-asm.cluster.com node4-asm
##VIP
147.43.0.11 node1-vip.cluster.com node1-vip
147.43.0.12 node2-vip.cluster.com node2-vip
147.43.0.13 node3-vip.cluster.com node3-vip
147.43.0.14 node4-vip.cluster.com node4-vip
## SCANIP
147.43.0.51 node-scan.cluster.com node-scan
147.43.0.52 node-scan.cluster.com node-scan
147.43.0.53 node-scan.cluster.com node-scan
save and quit the file
copy the /etc/hosts file to node2, node3 and node4 so it is reflected on all the nodes; check whether all the nodes and all the IP addresses are present using the cat command
# scp /etc/hosts node2:/etc/hosts
# cat /etc/hosts
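A small sketch that copies the hosts file to the remaining nodes and verifies it in one pass (assumes the node2/node3/node4 names already resolve and root can scp/ssh to them):
# for n in node2 node3 node4; do scp /etc/hosts $n:/etc/hosts; ssh $n "grep -c cluster.com /etc/hosts"; done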
node1
------
To access the SAN through a GUI (web-based administration) from node1:
# firefox https://147.43.0.5:446/   (Firefox will open the Openfiler SAN console; enter the username and password credentials)
username : openfiler
pwd : openfiler
SAN information : san.cluster.com (147.43.0.5)
mount       type    partition
/           ext3    /dev/sda1
/var        ext3    /dev/sda2
/tmp        ext3    /dev/sda3
/usr        ext3    /dev/sda5
/dev/shm    tmpfs   tmpfs
in the SAN Openfiler menu, under the System tab
Network interface configuration
interface   boot protocol   ip address   network mask    speed     mtu   link
eth0        static          147.43.0.5   255.255.255.0   100mbps
Network access configuration
device name         network/host (public ip)   subnet mask
node1.cluster.com   147.43.0.1                 255.255.255.255
node2.cluster.com   147.43.0.2                 255.255.255.255
node3.cluster.com   147.43.0.3                 255.255.255.255
node4.cluster.com   147.43.0.4                 255.255.255.255
adding all nodes to the SAN network
after that, click on the ---> Volumes tab
create a new volume group
under Block Device Management
/dev/sda   scsi   ata vmware virtual      12 gb ---> the SAN system disk, 6 partitions
/dev/sdb   scsi   vmware vmware virtual   60 gb ---> 0 partitions
click on /dev/sdb and create a partition in /dev/sdb
click on create ---> click on the Volume Manager tab ---> volume group name ---> volgroup --> /dev/sdb1 57.22 gb ---> add volume group
click on the Services tab
under ----> Manage Services
iSCSI target: enabled ---> start
click on the Volumes tab
create a volume in volgroup
vol1 ---> for rac ---> 58560 MB ---> volume type: iSCSI
Right click on iSCSI Targets
|
|-----------> Add new iSCSI target
click on LUN Mapping -----> map the LUN (logical unit number)
click on Network ACL
node1.cluster.com   147.43.0.1 -------> Allow
node2.cluster.com   147.43.0.2 -------> Allow
node3.cluster.com   147.43.0.3 -------> Allow
node4.cluster.com   147.43.0.4 -------> Allow
click on ---> Update
log out of the SAN (Openfiler)
Node1
------
# iscsiadm -m discovery -t st -p 147.43.0.5   (the iSCSI target should be reflected on all nodes by using this command; run the same command on node1, node2, node3 and node4 to check that each node can access the SAN)
Node1
-----
# service iscsi restart   (restart the iSCSI service on all the nodes so they can access the partitions)
Node1
----
# fdisk -l
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sda4
/dev/sda5
/dev/sda6
/dev/sda7
disk /dev/sdb
# fdisk /dev/sdb
command (m for help): n
primary partition - p
partition number - 1
first cylinder: use default value 1
last cylinder or size: +40G
command (m for help): n
primary partition - p
partition number - 2
last cylinder or size: +20G
command (m for help): w   (save; creating the partitions is over)
check with fdisk -l on all nodes that the 40 GB and 20 GB partitions are reflected
node1
-----
# partprobe /dev/sdb
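A small sketch to re-read the partition table and confirm the new partitions on the remaining nodes in one pass (assumes root ssh to each node; otherwise run the two commands locally on every node):
# for n in node2 node3 node4; do ssh $n "partprobe /dev/sdb; fdisk -l /dev/sdb"; done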
node1
-----
creating the users & groups
# userdel -r oracle
# groupdel oinstall
# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),101(pkcs11)
# groupdel dba
# groupadd -g 5001 oinstall
# groupadd -g 5002 dba
# groupadd -g 5003 oper
# groupadd -g 5004 asmdba
# groupadd -g 5005 asmadmin
# groupadd -g 5006 asmoper
# useradd -u 5007 -g oinstall -G asmadmin,asmdba,dba -d /u01/home -m grid
# chown -R grid:oinstall /u01/home
# chmod -R 777 /u01/home
# passwd grid
new password: grid@123
all the users and groups should be created on all nodes (node1, node2, node3, node4), along with the permissions for the users
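A quick sketch to verify that the grid user and its groups exist identically on every node (assumes the node names resolve and ssh works as root; otherwise run id grid locally on each node):
# for n in node1 node2 node3 node4; do echo "== $n =="; ssh $n "id grid"; done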
copy all 6 ASM RPMs to all nodes
the next step is to install the ASM RPMs on all the nodes; copy all the RPMs to all the nodes
node1
-------
# rpm -ivh oracleasmlib-2.0.4.1.el5_x86_64.rpm --nodeps --force   (repeat for each of the ASM RPMs)
# scp oracleasm* node2:/u01/home
# scp oracleasm* node3:/u01/home
# scp oracleasm* node4:/u01/home
node2
-----
cd /u01/home
ls
the next step is to configure the Linux ASM libraries
# oracleasm configure -h
# oracleasm -exec -path
# oracleasm -h
# oracleasm -v
# oracleasm configure -i
default user to own the driver interface : grid
default group to own the driver interface : oinstall
start Oracle ASM library driver on boot : y
scan for Oracle ASM disks on boot : y
writing Oracle ASM library driver configuration : done
# oracleasm exit
# oracleasm init
# oracleasm createdisk DATA /dev/sdb1
# oracleasm createdisk OCR_VD /dev/sdb2
# oracleasm listdisks
node2
-----
# oracleasm scandisks
# oracleasm configure -i
# oracleasm exit
# oracleasm init
# oracleasm listdisks
# oracleasm scandisks
# oracleasm listdisks
DATA
OCR_VD
repeat the same steps on node3 and node4
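A short sketch that runs the same scan/verify on the remaining nodes in one go (assumes root ssh between the nodes; otherwise run the two commands on each node):
# for n in node3 node4; do echo "== $n =="; ssh $n "oracleasm scandisks; oracleasm listdisks"; done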
node1
------
# mv /etc/ntp.conf /etc/ntp.conf_bkp
# service ntpd restart
shutting down ntpd:                 failed
(renaming ntp.conf prevents ntpd from starting, so Oracle Clusterware's Cluster Time Synchronization Service (CTSS) runs in active mode and keeps the node clocks synchronized)
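A minimal sketch to disable NTP the same way on the remaining nodes (assumes root ssh; otherwise run the three commands on each node):
# for n in node2 node3 node4; do ssh $n "service ntpd stop; chkconfig ntpd off; mv /etc/ntp.conf /etc/ntp.conf_bkp"; done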
ssh setup: passwordless connectivity must be established between all nodes; if the ssh setup is not established, node1, node2, node3 and node4 will not be able to get each other's node information
node1
-----
# su - grid
$ rm -rfv .ssh/
removed '.ssh/known_hosts'
removed directory '.ssh/'
the ssh setup script is sshUserSetup.sh (in the sshsetup folder of the grid media)
$ cd /mnt/hgfs/grid_12c/
$ ls
$ cd sshsetup
$ ls
sshUserSetup.sh
$ ./sshUserSetup.sh -user grid -hosts "node1"   (ssh verification is completed for node1; this determines who is allowed to connect, and the command is used on all nodes)
$ cd /u01/home
$ cd .ssh/
$ ls
authorized_keys
$ vi authorized_keys
$ cd /mnt/hgfs/grid_12c
$ ls
$ scp sshUserSetup.sh node2:/u01/home
$ scp sshUserSetup.sh node3:/u01/home
$ scp sshUserSetup.sh node4:/u01/home
copy the sshUserSetup.sh script to all the nodes
node1
------
$ vi authorized_keys   (opening the authorized_keys file we find the key material and signature; the public keys of all 4 nodes are combined into this one authorized_keys file so that we can connect to any node without a password prompt. This authorized_keys file should be copied to all nodes: on node2, node3 and node4 remove the existing authorized_keys and copy the fresh one, after which we can connect to all nodes without a password prompt. An authorized_keys entry looks like: ssh-rsa AAAAABBCl+fdvi6+n7yntps+= grid@node1.cluster.com, and similarly for node2, node3, node4)
$ scp authorized_keys node2:/u01/home/.ssh
node2
-----
$ rm -rfv authorized_keys
removed authorized_keys
$ scp authorized_keys node3:/u01/home/.ssh
node3
-----
$ rm -rfv authorized_keys
removed authorized_keys
$ scp authorized_keys node4:/u01/home/.ssh
[grid@node1 .ssh]$ ssh node4
[grid@node4 ~]$ ssh node1
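A short sketch to confirm passwordless ssh now works for the grid user (run as grid from node1; each hostname and date should print without any password prompt):
$ for n in node1 node2 node3 node4; do ssh $n "hostname; date"; done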
node1
------
$ cd /mnt/hgfs/grid_12c/
$ ls
runcluvfy.sh   (the runcluvfy.sh file in the grid_12c folder performs pre-checks for the cluster services installation)
checking node reachability
node reachability check passed from node "node1"
checking user equivalence
user equivalence check passed for user grid
checking node connectivity
checking multicast communication: passed
checking ASMLib configuration: passed
package existence check
kernel parameters
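For reference, a representative full pre-install check for this 4-node cluster using the documented cluvfy stage syntax (adjust the node list to your environment):
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2,node3,node4 -verbose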
node1
-----
xhost +
./runInstaller
skip software updates
install and configure Oracle Grid Infrastructure for a cluster (12c Release 1 Oracle Grid Infrastructure)
configure standard cluster
advanced installation
english
uncheck configure GNS
cluster name: node-cluster
scan name : node-scan
scan port : 1521
public host name virtual host name
node1.cluster.com node1-vip.cluster.com
node2.cluster.com node2-vip.cluster.com
node3.cluster.com node3-vip.cluster.com
node4.cluster.com node4-vip.cluster.com
ssh connectivity
os user : grid os password : grid@123
click on -->next
eth0 147.43.0.0 public
eth1 192.168.0.0 private
eth2 192.168.1.0 asm
click on--> next
configure Grid Infrastructure Management Repository
use Standard ASM for storage
use Oracle Flex ASM for storage ---> click on this
select disk group
disk group name DATA
redundancy -->external
candidate disks
ORCL:DATA ----> click on this
ORCL:OCR_VD
use same password
oracle base : /u01/home/grid
software location : /u01/home/12.1.0/grid
inventory location : /u01/home/oraInventory
automatically run configuration scripts (in 12c the installer can run the root scripts automatically on all nodes)
use root user credentials ---> select this
# /tmp/cvu_12.1.0.1.0_grid/runfixup.sh   (this script will run on all nodes)
grid infrastructure installation is completed
node1
-----
# pwd
/root
# vi grid.env
export GRID_HOME=/u01/home/12.1.0/grid
export PATH=$GRID_HOME/bin:$PATH
save and exit
# chown -R grid:oinstall grid.env
# chmod -R 777 grid.env
now copy the grid.env file to the grid home location on all nodes
# scp grid.env node2:/u01/home/12.1.0/grid
# scp grid.env node3:/u01/home/12.1.0/grid
# scp grid.env node4:/u01/home/12.1.0/grid
# cp grid.env /u01/home/12.1.0/grid/
# su - grid
$ cd /u01/home/12.1.0/grid/
ls
$ . grid.env
$ crsctl check cluster -all
$ cd bin/
$ crsctl check cluster -all
cluster ready service
cluster synchronization
event management
# crsctl stat res -t
[grid@node1 bin]$ srvctl status asm
ASM is running on node1,node2,node3
$ ps -ef | grep -i smon
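A few additional status checks that are handy right after the grid install (run as grid with grid.env sourced; these are standard srvctl/crsctl commands):
$ srvctl config scan             -- shows the SCAN name and its three IPs
$ srvctl status scan_listener    -- SCAN listener status
$ srvctl status nodeapps         -- VIP, network and ONS status per node
$ crsctl stat res -t             -- full tabular resource view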
node2
-----
$ su - grid
$ cd 12.1.0/grid
$ . grid.env
$ ps -ef | grep -i smon
So far: installed the SAN storage and added the two disks as iSCSI shared storage on the Linux nodes; next is the
Oracle 12c database software installation in the cluster environment
node1
-----
# su - grid
$ cd /u01/home/12.1.0/grid
$ . grid.env
$ asmca   (this command runs the ASM Configuration Assistant)
disk group name size
DATA
click on create
disk group name : OCR
redundancy : external
ORCL:OCR_VD ---> click on this
disk group OCR created (redundancy: high ---> 3 copies, normal ---> 2 copies, external ---> no mirror copies)
DATA
OCR
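A quick command-line check that both disk groups are created and mounted (asmcmd ships with the grid home; run as grid with grid.env sourced):
$ asmcmd lsdg
the output should list DATA and OCR with their state, redundancy and free space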
node1
------
# useradd -u 5008 -g oinstall -G dba,asmdba oracle
# chown -R oracle:oinstall /u01/home   (repeat the same user creation and permission steps for the oracle user on all nodes)
# chmod -R 777 /u01/home/
# passwd oracle
install the Oracle software; log in as the oracle user
# su - oracle
$ cd /mnt/hgfs/
$ ls
grid_12c   12c database
$ ls
$ ./runInstaller   (launches the Oracle Universal Installer, 12c Release 1)
skip software update
install database software only
Oracle Real Application Clusters database installation (by default the local node is always selected)
node2
node3
node4
select all nodes
ssh connectivity
os user : oracle    password : ----
ssh setup (establish SSH connectivity between the selected nodes)
click on next
product language : english
enterprise edition
oracle base /u01/home/oracle
software location /u01/home/oracle/12.1.0/dbhome_1
privileged operating system groups
osdba ---> dba
osoper ---> oper, dba
installing the 12c database software
node1
-----
$ log in as the root user
# /u01/home/oracle/12.1.0/dbhome_1/root.sh   (log in as root; run the script manually on all nodes as prompted by runInstaller)
node1
-----
# vi rdbms.env
export ORACLE_HOME=/u01/home/oracle/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
# cat rdbms.env
# chown oracle:oinstall rdbms.env
# chmod -R 777 rdbms.env
# scp rdbms.env node2:/u01/home/oracle/12.1.0/dbhome_1
copy the rdbms.env file to all nodes
# su - oracle
$ cd /u01/home/oracle/12.1.0/dbhome_1/
$ ls
# cp rdbms.env /u01/home/oracle/12.1.0/dbhome_1/
$ su - oracle
$ cd /u01/home/oracle/12.1.0/dbhome_1/
$ ls
rdbms.env
$ dbca   (the Database Configuration Assistant will assist with the database creation process)
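For reference, a representative silent-mode alternative to the dbca GUI run (the database name racdb, the passwords and the template are illustrative assumptions only; check dbca -help for the exact option names in your release):
$ dbca -silent -createDatabase -templateName General_Purpose.dbc \
    -gdbName racdb -sysPassword oracle123 -systemPassword oracle123 \
    -storageType ASM -diskGroupName DATA -nodelist node1,node2,node3,node4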
Note : this information on RAC node setup may differ in your environment (production, testing, development): IP addresses, directory structures, etc.
THANK YOU FOR VIEWING MY BLOG FOR MORE UPDATES FOLLOW ME