Sunday, August 16, 2020

olsnodes command in oracle rac



olsnodes, oifcfg, cluvfy


Introduction


The olsnodes command lists the nodes participating in the cluster, along with other information about each node.


crsctl tool

The crsctl tool verifies the health of the cluster services. From any one node, you can check the status of all the nodes.



#cd /u01/app/11.2.0/grid/bin

#./crsctl check cluster -all
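Output similar to the following is typical (node names and exact CRS messages will vary with your environment):

**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************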


crs

css

evm - these daemons form the upper stack (the Cluster Ready Services stack)

ohasd - the lower stack (the Oracle High Availability Services stack)

To check only a specific node, run the command without -all; it reports the service status of that particular node.


./crsctl check cluster


Note: On server boot, the cluster services start automatically.
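To confirm or change that autostart behavior, crsctl provides config/enable/disable commands (run as root from the grid bin directory):

#./crsctl config crs

#./crsctl enable crs

#./crsctl disable crs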

----------------------------------------------------

To stop the clusterware services


#./crsctl stop cluster -all ( stops the upper stack on all the nodes)

But make sure to stop the lower stack manually on every node:

./crsctl stop crs (execute on every node)

-------------------------------------------


To start the services manually, just start crs; both the lower and upper stacks get started.

./crsctl start crs

Monitor with ./crsctl check cluster -all

------------------------------------------


 Verify Unix logs and clusterware logs.


For every service, we have a log in grid home.

Where are the clusterware logs located?

Under the $GRID_HOME/log path:


eg: /u01/app/11.2.0/grid/log/rac1

For node2

/u01/app/11.2.0/grid/log/rac2


There you will find folders such as:

crsd

cssd

evmd


How to read the log file?

# tail -100f crsd.log


also

# tail -1000f crsd.log | more
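To search the same log for problems instead of just tailing it, a plain grep works as well (the pattern here is only a sketch; adjust it to whatever you are hunting for):

# grep -iE 'error|fail' crsd.log | tail -20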




Using diagcollection script

diagcollection.pl


Steps: #

# ./diagcollection.pl --collect


Once the zip files are generated, use WinSCP/FTP to pull them to your desktop, then upload them to My Oracle Support (MetaLink).


When Oracle Support needs diagnostic information from the cluster logs to identify an issue, these diagcollection archives are what you provide.
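For housekeeping, the same script can remove the archives it generated earlier; run it as root from the grid bin directory (option names can differ by version, so check ./diagcollection.pl -help on yours):

# cd /u01/app/11.2.0/grid/bin

# ./diagcollection.pl --clean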





Diagnostic files - Retention policy - housekeeping

----------------------------------

$GRID_HOME/log/rac1

crsd.log

cssd.log

ohasd.log

read them with the tail command


Retention Policy for these logs.


# Rotation / retention policy

The "10x10" rule governs automatic rotation and retention:

10 copies of cssd.log, each up to 50 MB, are retained and rotated;

the ohasd, evmd, crsd, etc. logs likewise retain 10 copies of up to 10 MB each.


The policy does not apply to the alert<hostname>.log file.


If a log file gets removed, stop and start the corresponding service to recreate it.
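To see the rotation on disk, list a daemon's log directory; rotated copies carry numbered extensions (the names below are illustrative and vary slightly by version):

# ls -lh /u01/app/11.2.0/grid/log/rac1/cssd/

ocssd.log   ocssd.l01   ocssd.l02   ...   ocssd.l10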



Changing parameter file :

--------------------------

Use the sid clause to control which instances in the cluster a parameter change applies to; sid='*' applies it to all instances.


sql>alter system set undo_retention=2600 scope=both sid='*';
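To target one instance instead of all, name it in the sid clause (the instance name prod1 below is just an example):

sql>alter system set undo_retention=3600 scope=both sid='prod1';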


------------------------------------

About - olsnodes, oifcfg, cluvfy



olsnodes


    On a cluster - to list the cluster nodes

# cd /u01/app/11.2.0/grid/bin/

#./olsnodes


# ./olsnodes -i ( also shows each node's VIP )


# ./olsnodes -n ( shows node numbers )

rac1    1

rac2    2

rac3    3


# ./olsnodes -s ( shows node status )

rac1    Active

rac2    Active

rac3    Inactive
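The flags can also be combined in one call; for example -n -i -s -t additionally shows the VIP and the pin state (the output layout below is illustrative):

# ./olsnodes -n -i -s -t

rac1    1    rac1-vip    Active    Unpinned
rac2    2    rac2-vip    Active    Unpinned
rac3    3    rac3-vip    Inactive  Unpinned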

---------------------------------------------------------


Oracle Interface Configuration Tool (OIFCFG)

        used to administer the network interfaces.


    # oifcfg iflist (For listing interfaces)


   

# ./oifcfg iflist

eth0  192.169.2.0

eth1  10.10.10.0


# ./oifcfg getif ( verify - list of public/private interfaces)

eth0  192.169.2.0  global  public

eth1  10.10.10.0  global  cluster_interconnect



#./oifcfg setif -global <if_name>/<subnet>:cluster_interconnect ( to classify an interface; valid types are public and cluster_interconnect )
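As a sketch, re-pointing the interconnect to a new interface could look like the following (eth2 and the 10.10.20.0 subnet are hypothetical; confirm the change with getif afterwards):

# ./oifcfg setif -global eth2/10.10.20.0:cluster_interconnect

# ./oifcfg delif -global eth1/10.10.10.0

# ./oifcfg getif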


Verify - scan configuration

---------------------------

su - oracle

. grid_env

$ srvctl config scan
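Output resembles the following (the SCAN name and addresses shown are purely illustrative):

SCAN name: rac-scan, Network: 1/192.169.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan/192.169.2.201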



Cluster Verification Utility – Environment Integrity Check tool

--------------------------------------------------------------

Before installation, how do you ensure cluster integrity and verify the cluster configuration?

        Using the cluvfy tool.

        It comes within the grid software and is also present in the installed binaries.

   

    two stages:

    pre

    post ( in this setup the post check failed because there was no DNS server )


    cd /u01/app/11.2.0/grid/bin

    ls -ltr cluvfy*

    cluvfy

   

    to be run as the oracle user *


    Verifying shared storage accessibility

   

    $cluvfy comp ssa -n all -verbose



    Cluster Verification Utility Connectivity Verifications    


    $cluvfy comp nodereach -n rac1,rac2,rac3 -verbose

    $cluvfy comp nodecon -n all -verbose


    Post - stage

    cluvfy stage -post crsinst -n rac1,rac2 -verbose
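The matching pre-stage check, run before installing the clusterware, follows the same pattern:

    cluvfy stage -pre crsinst -n rac1,rac2 -verbose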


    Cluster integrity

     ./cluvfy comp clu




crsctl is used to manage the grid (clusterware) services;

srvctl is used to manage/monitor resources - database, listener, SCAN, VIP.


Check the Status of the RAC

---------------------------

$ srvctl config database -d prod

$ srvctl status database -d prod


SQL> SELECT inst_number, inst_name FROM v$active_instances;


Verify instance status using the global (GV$) views in the grid environment.

--------------------------------------------

SQL> select inst_id,instance_name,status,thread# from gv$instance;


$srvctl status instance -d prod -i prod1,prod2

$srvctl status instance -d prod -n rac1
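Typical output looks like this (database and instance names follow the demo environment used above):

Instance prod1 is running on node rac1
Instance prod2 is running on node rac2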



Note: The olsnodes info shown here may differ from your environment (prod, dev, test, naming conventions, directories, etc.).


THANKS FOR VIEWING MY BLOG. FOR MORE UPDATES, FOLLOW ME.



