1. Locating the OCR
The user-specified location is recorded in /etc/oracle/ocr.loc (on Linux) or /var/opt/oracle/ocr.loc (on some other UNIX platforms).
[oracle@rac4 opt]$ cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw1
local_only=FALSE
2. Locating the voting disk
[oracle@rac4 opt]$ crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
3. The three key daemons
If the EVMD or CRSD process terminates abnormally, the system simply respawns it; but if the CSSD process terminates abnormally, the node reboots immediately.
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
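These inittab fields are colon-separated (id:runlevels:action:process). As a small sketch of how to read them, the three entries above can be parsed to show which daemon is started in "fatal" mode (CSSD, whose death reboots the node); the sample lines are written to a scratch file so the snippet runs anywhere:

```shell
# Write the three CRS inittab entries from the text to a scratch file.
cat <<'EOF' > /tmp/inittab.crs
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
EOF
# Field 4 is the process; its second word is the start mode (run/fatal).
awk -F: '{ split($4, cmd, " "); print cmd[1], cmd[2] }' /tmp/inittab.crs
```

Only init.cssd carries the fatal flag, which is why a CSSD failure is the one that triggers a reboot.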
4. The two CSS heartbeat timeout parameters
[oracle@rac4 opt]$ crsctl get css disktimeout
200
[oracle@rac4 opt]$ crsctl get css misscount
60
To change the network heartbeat timeout (use with caution):
crsctl set css misscount 100
5. Listing the cluster's node information
[oracle@rac4 opt]$ olsnodes --help
Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect name with the node name
                -i print virtual IP name with the node name
                <node> print information for the specified node
                -l print information for the local node
                -g turn on logging
                -v run in verbose mode
[oracle@rac4 opt]$ olsnodes -n
rac3    1
rac4    2
[oracle@rac4 opt]$ olsnodes -n -p
rac3    1       rac3-priv
rac4    2       rac4-priv
[oracle@rac4 opt]$ olsnodes -n -p -i
rac3    1       rac3-priv       rac3-vip
rac4    2       rac4-priv       rac4-vip
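As a quick follow-up check, the private interconnect names can be pulled out of the olsnodes -n -p listing and probed one by one. A minimal sketch (the sample output is embedded so the snippet is self-contained; on a live cluster you would pipe olsnodes in and replace echo with a real ping):

```shell
# Sample "olsnodes -n -p" output from the text.
olsnodes_out='rac3    1       rac3-priv
rac4    2       rac4-priv'
# Column 3 is the private interconnect name.
privs=$(printf '%s\n' "$olsnodes_out" | awk '{ print $3 }')
for p in $privs; do
  echo "would check: ping -c 1 $p"   # on a live cluster, run the real ping here
done
```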
Note: some of the examples in this article come from Zhang Xiaoming's book 《大话Oracle RAC：集群 高可用性 备份与恢复》.
6. Configuring whether the CRS stack starts automatically
[root@rac3 bin]# ./crsctl
crsctl enable crs - enables startup for all CRS daemons
crsctl disable crs - disables startup for all CRS daemons
What crsctl enable crs actually modifies is the file /etc/oracle/scls_scr/<nodename>/root/crsstart:
[root@rac3 root]# more crsstart
enable
You can also edit this file by hand and set it to "disable" or "enable", since crsctl enable crs / crsctl disable crs simply rewrite this file.
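As a sketch of what that amounts to, the toggle can be reproduced with sed against a scratch copy of the file (on a real node the path is /etc/oracle/scls_scr/<nodename>/root/crsstart and it must be edited as root):

```shell
# Scratch copy standing in for /etc/oracle/scls_scr/<nodename>/root/crsstart.
f=/tmp/crsstart
echo enable > "$f"
# Flip the single word, which is effectively what "crsctl disable crs" does.
sed -i 's/^enable$/disable/' "$f"
cat "$f"
```

For the scratch file this prints `disable`.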
7. Checking the status of RAC resources
[oracle@rac4 ~]$ srvctl status nodeapps -n rac3
VIP is running on node: rac3
GSD is running on node: rac3
Listener is running on node: rac3
ONS daemon is running on node: rac3
[oracle@rac4 ~]$ srvctl status asm -n rac3
ASM instance +ASM1 is running on node rac3.
[oracle@rac4 ~]$ srvctl status database -d racdb
Instance racdb2 is running on node rac4
Instance racdb1 is running on node rac3
[oracle@rac4 ~]$ srvctl status service -d racdb
Service racdbserver is running on instance(s) racdb2
8. Checking the clusterware status
[oracle@rac3 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac4 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac4 ~]$ crsctl check cssd
CSS appears healthy
[oracle@rac4 ~]$ crsctl check crsd
CRS appears healthy
[oracle@rac4 ~]$ crsctl check evmd
EVM appears healthy
9. Using the oifcfg command
The oifcfg command has the following four subcommands, each with its own options; run oifcfg -help for details.
[oracle@rac4 ~]$ oifcfg --hlep
PRIF-9: incorrect usage
Name:
        oifcfg - Oracle Interface Configuration Tool.
Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]
        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public | storage }
<1>. iflist - list the network interfaces
<2>. getif  - show the configuration of a single interface
<3>. setif  - configure a single interface
<4>. delif  - delete an interface configuration
--Use iflist to list the network interfaces
[oracle@rac4 ~]$ oifcfg iflist
eth0  192.168.1.0
eth1  192.168.2.0
--Use the getif subcommand to view each interface's attributes
[oracle@rac4 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
Note: interface configurations fall into two classes: global and node-specific. A global configuration means every node in the cluster has the same (symmetric) configuration; a node-specific configuration means the node's configuration differs from the other nodes (asymmetric).
--Query the global configuration for nodes rac4/rac3
[oracle@rac4 ~]$ oifcfg getif -global rac4
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
[oracle@rac4 ~]$ oifcfg getif -global rac3
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
--Query the node-specific configuration for nodes rac3/rac4
[oracle@rac4 ~]$ oifcfg getif -node rac3
[oracle@rac4 ~]$ oifcfg getif -node rac4
Neither node produces any output, which means the cluster has no node-specific configuration.
--View interface configuration by type (public/cluster_interconnect)
[oracle@rac4 ~]$ oifcfg getif -type public
eth0  192.168.1.0  global  public
[oracle@rac4 ~]$ oifcfg getif -type cluster_interconnect
eth1  192.168.2.0  global  cluster_interconnect
--Add a new interface with setif
[oracle@rac4 ~]$ oifcfg setif -global livan@net/10.0.0.0:public    --note: this command does not check whether the interface actually exists
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
livan@net  10.0.0.0  global  public
--Delete an interface configuration with delif
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
livan@net  10.0.0.0  global  public
[oracle@rac4 ~]$ oifcfg delif -global livan@net
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
[oracle@rac4 ~]$ oifcfg delif -global    --delete all interface configuration
[oracle@rac4 ~]$ oifcfg getif -global
[oracle@rac4 ~]$ oifcfg setif -global eth0/192.168.1.0:public
[oracle@rac4 ~]$ oifcfg setif -global eth1/192.168.2.0:cluster_interconnect
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
[oracle@rac4 ~]$ oifcfg delif -global
[oracle@rac4 ~]$ oifcfg setif -global eth0/192.168.1.0:public eth1/192.168.2.0:cluster_interconnect
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
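Before wiping the configuration with delif -global, it is handy to save the current getif output and turn it back into ready-to-run setif commands. A minimal sketch with the sample output embedded (on a live cluster you would capture the real `oifcfg getif -global` output instead):

```shell
# Saved "oifcfg getif -global" output (columns: interface, subnet, scope, type).
getif_out='eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect'
# Rebuild the setif syntax <if_name>/<subnet>:<if_type> from columns 1, 2 and 4.
printf '%s\n' "$getif_out" | awk '{ printf "oifcfg setif -global %s/%s:%s\n", $1, $2, $4 }'
```

For the sample above this prints the two setif commands used in the transcript, ready to replay after the delif.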
10. Locating the voting disk
[oracle@rac4 ~]$ crsctl query css votedisk
0. 0 /dev/raw/raw2
located 1 votedisk(s).
The output above shows the votedisk is located at /dev/raw/raw2.
11. Listing the clusterware software and active versions
[oracle@rac4 ~]$ crsctl query crs softwareversion rac3
CRS software version on node [rac3] is [10.2.0.1.0]
[oracle@rac4 ~]$ crsctl query crs softwareversion rac4
CRS software version on node [rac4] is [10.2.0.1.0]
[oracle@rac4 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.1.0]
12. Listing the CRS service modules
CRS consists of three services: CRS, CSS, and EVM. Each service is in turn made up of a series of modules, and crsctl can enable tracing on each module and record the trace output to a log file.
--List the modules of the CRS service
[oracle@rac4 ~]$ crsctl lsmodules crs
The following are the CRS modules ::
CRSUI
CRSCOMM
CRSRTI
CRSMAIN
CRSPLACE
CRSAPP
CRSRES
CRSCOMM
CRSOCR
CRSTIMER
CRSEVT
CRSD
CLUCLS
CSSCLNT
COMMCRS
COMMNS
--List the modules of the CSS service
[oracle@rac4 ~]$ crsctl lsmodules css
The following are the CSS modules ::
CSSD
COMMCRS
COMMNS
--List the modules of the EVM service
[oracle@rac4 ~]$ crsctl lsmodules evm
The following are the EVM modules ::
EVMD
EVMDMAIN
EVMCOMM
EVMEVT
EVMAPP
EVMAGENT
CRSOCR
CLUCLS
CSSCLNT
COMMCRS
COMMNS
13. Tracing the CSSD module (must be run as root)
[root@rac4 bin]# ./crsctl debug log css "CSSD:1"
Configuration parameter trace is now set to 1.
Set CRSD Debug Module: CSSD  Level: 1
[root@rac4 10.2.0]# more ./crs_1/log/rac4/cssd/ocssd.log
......
[    CSSD]2015-01-26 09:02:12.891 [1084229984] >TRACE:   clssscSetDebugLevel: The logging level is set to 1 ,the cache level is set to 2
[    CSSD]2015-01-26 09:02:46.587 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7c3bf0) proc(0x7c0850) pid() proto(10:2:1:1)
[    CSSD]2015-01-26 09:03:46.948 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7a4bf0) proc(0x7c0900) pid() proto(10:2:1:1)
[    CSSD]2015-01-26 09:04:47.299 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7c3bf0) proc(0x7c0850) pid() proto(10:2:1:1)
[    CSSD]2015-01-26 09:05:47.553 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7a4bf0) proc(0x7c0900) pid() proto(10:2:1:1)
......
14. Adding and removing voting disks
Adding or removing voting disks is a risky operation: the database, ASM, and the CRS stack must all be stopped first, and the operation requires the -force option.
Note: even with CRS shut down, you still need -force to add or remove a votedisk, and -force is only safe to use while CRS is down. Because the number of voting disks should be odd, disks are added and removed in pairs.
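The odd-count rule can be checked mechanically by counting the numbered lines in the `crsctl query css votedisk` output. A sketch with sample output embedded (on a live node, capture the real command output instead):

```shell
# Sample "crsctl query css votedisk" output with three voting disks.
votedisk_out=' 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
 2.     0    /dev/raw/raw8
located 3 votedisk(s).'
# Count the lines that name a voting disk device.
n=$(printf '%s\n' "$votedisk_out" | grep -c '/dev/raw/')
if [ $((n % 2)) -eq 1 ]; then
  echo "OK: $n voting disks (odd)"
else
  echo "WARNING: $n voting disks (should be odd)"
fi
```

For the embedded sample this prints `OK: 3 voting disks (odd)`.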
We added two raw devices to the RAC, each 2 GB in size.
--Before adding
[root@rac3 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
[root@rac3 ~]#
Aside:
Adding a shared disk (raw device) in a VMware Workstation RAC environment:
1. On one node, create the virtual disk, preallocating its space, and set the disk's virtual device node (VM Settings -> select the disk -> Advanced options on the right).
2. On the other node, add a virtual disk: choose the option to use an existing disk, select the disk created on the first node, and set its virtual device node to the same value as on the first node.
3. Run fdisk -l on both nodes to confirm the disk is visible, then partition it.
4. Edit /etc/sysconfig/rawdevices to add the mapping between the raw devices and the disk partitions.
5. Restart the raw-device service: service rawdevices start
--After adding
[root@rac3 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
/dev/raw/raw7: bound to major 8, minor 97     --newly added
/dev/raw/raw8: bound to major 8, minor 113    --newly added
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7    --the -force option is required
Cluster is not in a ready state for online disk addition
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force    --CRS is still running, so the add fails
Now formatting voting disk: /dev/raw/raw7
CLSFMT returned with error [4].
failed 9 to initailize votedisk /dev/raw/raw7.
[root@rac4 bin]#
[root@rac4 bin]# ./crsctl stop crs    --stop CRS on all nodes
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force    --adding again reports it is already configured (to be safe, delete it and re-add)
votedisk named /dev/raw/raw7 already configured as /dev/raw/raw7.
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw7    --remove the previous add attempt
successful deletion of votedisk /dev/raw/raw7.
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force    --still fails
Now formatting voting disk: /dev/raw/raw7
CLSFMT returned with error [4].
failed 9 to initailize votedisk /dev/raw/raw7.
--After a system reboot the add succeeded (add or remove in pairs). Since new raw devices were added, rebooting the system is safer than merely restarting the raw-device service. The raw devices used for this test are a bit large, so formatting takes a while.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force
Now formatting voting disk: /dev/raw/raw7
successful addition of votedisk /dev/raw/raw7.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
located 2 votedisk(s).
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw8 -force
Now formatting voting disk: /dev/raw/raw8
successful addition of votedisk /dev/raw/raw8.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
 2.     0    /dev/raw/raw8
located 3 votedisk(s).
--Check from the other node
[root@rac3 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
 2.     0    /dev/raw/raw8
--Removing voting disks
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw7
Cluster is not in a ready state for online disk removal
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw7 -force
successful deletion of votedisk /dev/raw/raw7.
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw8 -force
successful deletion of votedisk /dev/raw/raw8.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
15. Backing up the voting disk
<1>. Liu Xianjun's 《Oracle RAC 11g实战指南》, p. 94, line 9: "Starting with Oracle 11.2, voting files no longer need to be backed up manually; whenever the Clusterware structure is modified, the voting files are automatically backed up into the OCR file."
<2>. Liu Binglin's 《构建最高可用Oracle数据库系统 Oracle11gR2 RAC管理、维护与性能优化》, p. 326, line 2: "In Clusterware 11gR2 there is no need to back up the voting disks. Any change to a voting disk is automatically backed up into the OCR backup files, and the relevant information is automatically restored to any newly added voting disk."
[oracle@rac3 ~]$ dd if=/dev/raw/raw2 of=/home/oracle/votedisk.bak
208864+0 records in
208864+0 records out
The corresponding restore command is:
dd if=/home/oracle/votedisk.bak of=/dev/raw/raw2
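The same dd backup-and-restore round trip can be rehearsed against an ordinary file instead of /dev/raw/raw2 (the real device requires root and a quiesced cluster); cmp verifies the copy is byte-identical:

```shell
# Build a small stand-in "device": 100 blocks of 512 bytes, with a marker at the front.
dd if=/dev/zero of=/tmp/fake_votedisk bs=512 count=100 2>/dev/null
echo "marker" | dd of=/tmp/fake_votedisk conv=notrunc 2>/dev/null
# Backup, then restore from the backup.
dd if=/tmp/fake_votedisk of=/tmp/votedisk.bak 2>/dev/null
dd if=/tmp/votedisk.bak of=/tmp/fake_votedisk 2>/dev/null
# cmp is silent and returns 0 when the two copies are identical.
cmp /tmp/fake_votedisk /tmp/votedisk.bak && echo "backup matches device"
```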
16. Wiping raw devices (the two raw devices added earlier must be wiped before reinstalling CRS)
[root@rac3 bin]# dd if=/dev/zero of=/dev/raw/raw7 bs=10M
dd: writing `/dev/raw/raw7': No space left on device
205+0 records in
204+0 records out
[root@rac3 bin]# dd if=/dev/zero of=/dev/raw/raw8 bs=10M
dd: writing `/dev/raw/raw8': No space left on device
205+0 records in
204+0 records out
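After a wipe like the one above, it is worth confirming the device really contains only zeros. A sketch using a stand-in file: `cmp -n` compares the first N bytes against /dev/zero, so a silent, successful cmp means the wipe took:

```shell
# Stand-in "device" with some old content on it.
dev=/tmp/fake_raw
printf 'old OCR data' > "$dev"
# The wipe (on a real node: dd if=/dev/zero of=/dev/raw/rawN bs=10M, as root).
dd if=/dev/zero of="$dev" bs=1M count=2 2>/dev/null
# Compare every byte of the file against /dev/zero; success means all zeros.
cmp -n "$(stat -c %s "$dev")" "$dev" /dev/zero && echo "wipe verified"
```

For the stand-in file this prints `wipe verified`.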
17. Dumping OCR contents with ocrdump
The ocrdump command prints the contents of the OCR in ASCII form. Note that it cannot be used for OCR backup and recovery: the file it produces is for reading only and cannot be used to restore the OCR.
Run ocrdump -help for usage:
ocrdump [-stdout] [filename] [-keyname name] [-xml]
[-stdout]        print the contents to standard output
[filename]       write the contents to a file
[-keyname name]  print only the given key and its subkeys
[-xml]           print the output in XML format
--Print the SYSTEM.css key in XML format to the screen
[root@rac3 bin]# ./ocrdump -stdout -keyname SYSTEM.css -xml|more
<OCRDUMP>
<TIMESTAMP>01/27/2015 10:37:04</TIMESTAMP>
<COMMAND>./ocrdump.bin -stdout -keyname SYSTEM.css -xml </COMMAND>

<KEY>
<NAME>SYSTEM.css</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>
......
While ocrdump runs it creates a log file named ocrdump_<pid>.log under $CRS_HOME/log/<nodename>/client;
if the command fails, check this log for the cause.
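When troubleshooting, the newest ocrdump_<pid>.log is the one to read. A sketch that locates it; the directory layout is simulated under /tmp here, and on a real node $CRS_HOME would point at the clusterware home (e.g. /opt/ora10g/product/10.2.0/crs_1):

```shell
# Simulate the client log directory with two logs of different ages.
CRS_HOME=/tmp/crs_home_demo
mkdir -p "$CRS_HOME/log/rac3/client"
touch "$CRS_HOME/log/rac3/client/ocrdump_26850.log"
sleep 1
touch "$CRS_HOME/log/rac3/client/ocrdump_29423.log"
# ls -t sorts newest first, so head -1 is the most recent log.
newest=$(ls -t "$CRS_HOME"/log/rac3/client/ocrdump_*.log | head -1)
echo "$newest"
```

For the simulated directory this prints the path ending in ocrdump_29423.log.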
[root@rac3 client]# pwd
/opt/ora10g/product/10.2.0/crs_1/log/rac3/client
[root@rac3 client]# ll -ltr ocrdump_2*
-rw-r----- 1 root root 245 Jan 27 10:35 ocrdump_26850.log
-rw-r----- 1 root root 823 Jan 27 10:39 ocrdump_29423.log
18. Checking OCR consistency with ocrcheck
The ocrcheck command verifies the consistency of the OCR contents. While it runs it writes a log file at $CRS_HOME/log/<nodename>/client/ocrcheck_<pid>.log. The command takes no parameters.
[root@rac3 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     104344
         Used space (kbytes)      :       4340
         Available space (kbytes) :     100004
         ID                       :  777521936
         Device/File Name         : /dev/raw/raw1    --location of the OCR disk
                                    Device/File integrity check succeeded    --contents are consistent; on a mismatch this would read "Device/File needs to be synchronized with the other device"

                                    Device/File not configured

         Cluster registry integrity check succeeded
--View the log generated by the command
[root@rac3 client]# pwd
/opt/ora10g/product/10.2.0/crs_1/log/rac3/client
[root@rac3 client]# ll -ltr ocrcheck_*
-rw-r----- 1 oracle oinstall 370 Apr 18  2014 ocrcheck_25577.log
-rw-r----- 1 root   root     370 Jan 27 10:44 ocrcheck_7947.log
19. Maintaining the OCR disk with ocrconfig
The ocrconfig command maintains the OCR disks. During Clusterware installation, if you choose External Redundancy you can enter only one OCR location, but Oracle allows two OCR disks to be configured as mirrors of each other, protecting the OCR against a single point of failure. Unlike voting disks, there can be at most two OCR disks: one Primary OCR and one Mirror OCR.
[root@rac3 bin]# ./ocrconfig -help
Name:
        ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
        ocrconfig [option]
        option:
                -export <filename> [-s online]      - Export cluster register contents to a file
                -import <filename>                  - Import cluster registry contents from a file
                -upgrade [<user> [<group>]]         - Upgrade cluster registry from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade cluster registry to the specified version
                -backuploc <dirname>                - Configure periodic backup location
                -showbackup                         - Show backup information
                -restore <filename>                 - Restore from physical backup
                -replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair ocr|ocrmirror <filename>    - Repair local OCR configuration
                -help                               - Print out this help information
Note:
        A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log.
        Please ensure you have file creation privileges in the above directory before running this tool.
ocrconfig is a very important command; the following experiments explore it:
http://www.cnblogs.com/myrunning/p/4253696.html
## Some RAC application-layer commands
20. Managing CRS resources with crs_stat
The crs_stat command shows the run-time state of every resource maintained by CRS. With no arguments it shows summary information for all resources, displaying each resource's attributes: name, type, target state, current state, and so on.
--View detailed information for all resources
[oracle@rac3 ~]$ crs_stat
NAME=ora.rac3.ASM1.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac3.LISTENER_RAC3.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac3.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac3.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3
......
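One common use of this output is spotting resources whose STATE has drifted from their TARGET (e.g. a resource that should be ONLINE but is not). A sketch with sample crs_stat output embedded; on a live node you would pipe crs_stat in instead:

```shell
# Sample crs_stat output: one healthy resource, one that has gone OFFLINE.
crs_out='NAME=ora.rac3.gsd
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac4.ons
TARGET=ONLINE
STATE=OFFLINE'
# Report every resource whose STATE (first word) differs from its TARGET.
printf '%s\n' "$crs_out" | awk -F= '
  $1 == "NAME"   { name = $2 }
  $1 == "TARGET" { target = $2 }
  $1 == "STATE"  { split($2, s, " "); if (s[1] != target) print name, "is", s[1], "but target is", target }'
```

For the embedded sample this prints `ora.rac4.ons is OFFLINE but target is ONLINE`.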
--View the state of a specified resource
[oracle@rac3 ~]$ crs_stat ora.racdb.racdb1.inst
NAME=ora.racdb.racdb1.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3
--Use the -v option for more detail
[oracle@rac3 ~]$ crs_stat -v ora.racdb.racdb1.inst
NAME=ora.racdb.racdb1.inst
TYPE=application
RESTART_ATTEMPTS=5
RESTART_COUNT=0
FAILURE_THRESHOLD=0
FAILURE_COUNT=0
TARGET=ONLINE
STATE=ONLINE on rac3
--Use the -p option for full details
[oracle@rac3 ~]$ crs_stat -p ora.racdb.racdb1.inst
NAME=ora.racdb.racdb1.inst
TYPE=application
ACTION_SCRIPT=/opt/ora10g/product/10.2.0/db_1/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for Instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=rac3
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.rac3.vip ora.rac3.ASM1.asm
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
--Use the -ls option to view each resource's permission settings
[oracle@rac3 ~]$ crs_stat -ls
Name           Owner          Primary PrivGrp          Permission
-----------------------------------------------------------------
ora....SM1.asm oracle         oinstall                 rwxrwxr--
ora....C3.lsnr oracle         oinstall                 rwxrwxr--
ora.rac3.gsd   oracle         oinstall                 rwxr-xr--
ora.rac3.ons   oracle         oinstall                 rwxr-xr--
ora.rac3.vip   root           oinstall                 rwxr-xr--
ora....SM2.asm oracle         oinstall                 rwxrwxr--
ora....C4.lsnr oracle         oinstall                 rwxrwxr--
ora.rac4.gsd   oracle         oinstall                 rwxr-xr--
ora.rac4.ons   oracle         oinstall                 rwxr-xr--
ora.rac4.vip   root           oinstall                 rwxr-xr--
ora.racdb.db   oracle         oinstall                 rwxrwxr--
ora....b1.inst oracle         oinstall                 rwxrwxr--
ora....b2.inst oracle         oinstall                 rwxrwxr--
ora....rver.cs oracle         oinstall                 rwxrwxr--
ora....db2.srv oracle         oinstall                 rwxrwxr--
21. Understanding the srvctl command
srvctl is the most frequently used and most complex command in RAC maintenance. It can operate on the Database, Instance, ASM, Service, Listener, and Node Application resources registered in CRS; the Node Application resources in turn include GSD, ONS, and VIP.
Some of these resources also have their own dedicated management tools, for example:
ONS can be managed with the onsctl command: http://www.cnblogs.com/myrunning/p/4265522.html
The listener can also be managed with the lsnrctl command: http://www.cnblogs.com/myrunning/p/3977931.html
Understanding the srvctl command: http://www.cnblogs.com/myrunning/p/4265539.html