Reposted from: http://www.askmaclean.com/archives/add-node-to-11-2-0-2-grid-infrastructure.html
In an earlier article I described the detailed steps for adding a node to a 10g RAC cluster. In 11gR2, Oracle CRS was reworked into Grid Infrastructure (GI). GI makes it easier to manage CRS resources such as the VIP and ASM, but it also means that adding a node to an 11.2 GI cluster differs considerably from the 10gR2 procedure.
Below is an outline of the main points of an ADD NODE operation on 11.2 GI:
I. Preparation
The preparation work must not be skipped. The prerequisites listed in the 10g RAC node-addition article still apply to 11.2 GI, but pay attention to the following points:
1. Configure SSH user equivalence not only for the oracle user but also for the grid user (the GI installation owner), unless you install both GI and the RDBMS as oracle, which is not recommended.
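A quick way to confirm that user equivalence actually works for grid is the sketch below, using the node names from this example; every command must return without prompting for a password, and the cluvfy admprv component check can serve as a cross-check:
su - grid
[grid@vrh1 ~]$ ssh vrh3 date
[grid@vrh1 ~]$ ssh vrh3-priv date
[grid@vrh1 ~]$ cluvfy comp admprv -n vrh1,vrh3 -o user_equiv -verbose
Repeat the ssh tests from vrh3 back to vrh1 and vrh2 as well.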
2. 11.2 GI introduces octssd (the Oracle Cluster Time Synchronization Service daemon). If you intend to rely on octssd for time synchronization, it is recommended to disable the ntpd service, as follows:
# service ntpd stop
Shutting down ntpd: [ OK ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
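Once ntpd is disabled, you can confirm on a node where the GI stack is running that CTSS has taken over in Active mode (rather than Observer mode); the cluvfy post-check at the end of this article performs the same verification:
[grid@vrh1 ~]$ crsctl check ctss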
3. Use the cluster verify utility to validate that the new node meets the cluster's requirements:
cluvfy stage -pre nodeadd -n <NEW NODE>
In this example:
su - grid
[grid@vrh1 ~]$ cluvfy stage -pre nodeadd -n vrh3
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was unsuccessful on all the nodes.
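Note that the pre-check also reports a failed hard limit for open file descriptors on vrh3. A minimal sketch of the usual fix (the 65536 value is the commonly cited Oracle recommendation; adjust it to your own standard) is to raise the nofile hard limit for the grid and oracle users in /etc/security/limits.conf on vrh3, log in again, and repeat the check:
# on vrh3, as root -- illustrative values
cat >> /etc/security/limits.conf <<'EOF'
grid    soft    nofile    1024
grid    hard    nofile    65536
oracle  soft    nofile    1024
oracle  hard    nofile    65536
EOF
# then, as grid on vrh1, re-run the pre-check
cluvfy stage -pre nodeadd -n vrh3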
Generally speaking, if we are not using DNS for name resolution, the resolv.conf inconsistency can be ignored; under silent installation mode, however, it may prevent the operation from completing, as described later.
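To decide whether that warning matters, a simple loop such as the sketch below (run as grid from vrh1, relying on the SSH equivalence configured earlier) compares the file across all nodes:
for h in vrh1 vrh2 vrh3; do echo "== $h =="; ssh $h cat /etc/resolv.conf; done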
II. Adding the new node to the GI
Note that addNode.sh, the key script for adding a node to 11.2.0.2 GI, may be affected by a bug. According to the official documentation, simply running addNode.sh should bring up the OUI in interactive mode for the node addition; in practice that is not what happens.
The documentation says:
Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing nodes.
Oracle Universal Installer runs in add node mode and the Welcome page displays.
Click Next and the Specify Cluster Nodes for Node Addition page displays.
What we actually did: addNode.sh must be run as the GI owner, normally the grid user, on one of the existing nodes that is already running GI:
[grid@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin
[grid@vrh1 bin]$ ./addNode.sh
ERROR:
Value for CLUSTER_NEW_NODES not specified.
USAGE:
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl {-pre|-post}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={} CLUSTER_NEW_VIRTUAL_HOSTNAMES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] -responseFile
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -post [-silent]
Our intention was to add the node through the graphical, interactive OUI (runInstaller -addnode); instead addNode.sh demands a set of parameters, and the check_nodeadd.pl script it invokes runs in silent mode.
After searching MOS and Google, virtually every document recommends adding nodes in silent mode, so we had no choice but to switch to a silent-mode addition. In fact the silent method needs only a few parameters, which is probably one reason it is so widely favored, but here we ran into another problem.
SYNTAX:
./addNode.sh -silent
"CLUSTER_NEW_NODES={node2}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"
In our example the exact command is:
./addNode.sh -silent "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
Because of the -silent flag the command above produces no screen output (it actually writes to the /tmp/silentInstall.log log file). Dropping the -silent parameter:
./addNode.sh "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Pre-check for node addition was unsuccessful on all the nodes.
Before it actually adds the node, addNode.sh also invokes the cluvfy utility to verify that the new node meets the requirements, and it refuses to continue if the check fails. Since we have already verified the new node's readiness, we can safely skip addNode.sh's own verification. Let's look at the content of the addNode.sh script:
[grid@vrh1 bin]$ cat addNode.sh
#!/bin/sh
OHOME=/g01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
$ADDNODE
else
CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre $*"
$CHECK_NODEADD
if [ $? -eq 0 ]
then
$ADDNODE
fi
fi
As you can see, an IGNORE_PREADDNODE_CHECKS environment variable controls whether the pre-add-node checks are run. We set this variable manually and then run addNode.sh again:
export IGNORE_PREADDNODE_CHECKS=Y
./addNode.sh "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}" > add_node.log 2>&1
In another window you can follow the node-addition progress from the log:
tail -f add_node.log
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5951 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
Performing tests to see whether nodes vrh2,vrh3 are available
............................................................... 100% Done. .
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /g01/11.2.0/grid
New Nodes
Space Requirements
New Nodes
vrh3
/: Required 6.66GB : Available 32.40GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.2.0
Sun JDK 1.5.0.24.08
Installer SDK Component 11.2.0.2.0
Oracle One-Off Patch Installer 11.2.0.0.2
Oracle Universal Installer 11.2.0.2.0
Oracle USM Deconfiguration 11.2.0.2.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.3
Oracle DBCA Deconfiguration 11.2.0.2.0
Oracle RAC Deconfiguration 11.2.0.2.0
Oracle Quality of Service Management (Server) 11.2.0.2.0
Installation Plugin Files 11.2.0.2.0
Universal Storage Manager Files 11.2.0.2.0
Oracle Text Required Support Files 11.2.0.2.0
Automatic Storage Management Assistant 11.2.0.2.0
Oracle Database 11g Multimedia Files 11.2.0.2.0
Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
Oracle Core Required Support Files 11.2.0.2.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.2.0
Oracle Quality of Service Management (Client) 11.2.0.2.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.2.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.2.0
Oracle JDBC/OCI Instant Client 11.2.0.2.0
Oracle Multimedia Client Option 11.2.0.2.0
LDAP Required Support Files 11.2.0.2.0
Character Set Migration Utility 11.2.0.2.0
Perl Interpreter 5.10.0.0.1
PL/SQL Embedded Gateway 11.2.0.2.0
OLAP SQL Scripts 11.2.0.2.0
Database SQL Scripts 11.2.0.2.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.2.0
SQL*Plus Files for Instant Client 11.2.0.2.0
Oracle Net Required Support Files 11.2.0.2.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.2.0
RDBMS Required Support Files Runtime 11.2.0.2.0
XML Parser for Java 11.2.0.2.0
Oracle Security Developer Tools 11.2.0.2.0
Oracle Wallet Manager 11.2.0.2.0
Enterprise Manager plugin Common Files 11.2.0.2.0
Platform Required Support Files 11.2.0.2.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.2.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.2.0
Oracle Java Client 11.2.0.2.0
Cluster Verification Utility Files 11.2.0.2.0
Oracle Notification Service (eONS) 11.2.0.2.0
Oracle LDAP administration 11.2.0.2.0
Cluster Verification Utility Common Files 11.2.0.2.0
Oracle Clusterware RDBMS Files 11.2.0.2.0
Oracle Locale Builder 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Buildtools Common Files 11.2.0.2.0
Oracle RAC Required Support Files-HAS 11.2.0.2.0
SQL*Plus Required Support Files 11.2.0.2.0
XDK Required Support Files 11.2.0.2.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.2.0
Precompiler Required Support Files 11.2.0.2.0
Installation Common Files 11.2.0.2.0
Required Support Files 11.2.0.2.0
Oracle JDBC/THIN Interfaces 11.2.0.2.0
Oracle Multimedia Locator 11.2.0.2.0
Oracle Multimedia 11.2.0.2.0
HAS Common Files 11.2.0.2.0
Assistant Common Files 11.2.0.2.0
PL/SQL 11.2.0.2.0
HAS Files for DB 11.2.0.2.0
Oracle Recovery Manager 11.2.0.2.0
Oracle Database Utilities 11.2.0.2.0
Oracle Notification Service 11.2.0.2.0
SQL*Plus 11.2.0.2.0
Oracle Netca Client 11.2.0.2.0
Oracle Net 11.2.0.2.0
Oracle JVM 11.2.0.2.0
Oracle Internet Directory Client 11.2.0.2.0
Oracle Net Listener 11.2.0.2.0
Cluster Ready Services Files 11.2.0.2.0
Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Monday, August 15, 2011 10:15:35 PM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Monday, August 15, 2011 10:15:38 PM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Monday, August 15, 2011 10:21:02 PM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session.
However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/g01/oraInventory/orainstRoot.sh'
with root privileges on nodes 'vrh3'.
If you do not register the inventory, you may not be able to update or
patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/g01/oraInventory/orainstRoot.sh #On nodes vrh3
/g01/11.2.0/grid/root.sh #On nodes vrh3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /g01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
The GI software installation above succeeded. Next we still need to run two crucial scripts on the newly added node; do not forget this step!
The orainstRoot.sh and root.sh scripts must be run as root:
su - root
[root@vrh3]# cat /etc/oraInst.loc
inventory_loc=/g01/oraInventory    -- this is the oraInventory location
inst_group=asmadmin
[root@vrh3 ~]# cd /g01/oraInventory
[root@vrh3 oraInventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /g01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /g01/oraInventory to asmadmin.
The execution of the script is complete.
Then run the root.sh script under the CRS_HOME; it may print warnings, but that is nothing to worry about:
[root@vrh3 ~]# cd $ORA_CRS_HOME
[root@vrh3 g01]# /g01/11.2.0/grid/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /g01/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node vrh1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/g01/11.2.0/grid/bin/srvctl start listener -n vrh3 ... failed
Failed to perform new node configuration at /g01/11.2.0/grid/crs/install/crsconfig_lib.pm line 8255.
/g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install
/g01/11.2.0/grid/crs/install/rootcrs.pl execution failed
Two minor errors appear above:
1. The failure to start the LISTENER on the new node can be ignored: the RDBMS_HOME has not been installed yet, but CRS already tries to start the associated listener.
[root@vrh3 g01]# /g01/11.2.0/grid/bin/srvctl start listener -n vrh3
PRCR-1013 : Failed to start resource ora.CRS_LISTENER.lsnr
PRCR-1064 : Failed to start resource ora.CRS_LISTENER.lsnr on node vrh3
CRS-5010: Update of configuration file "/s01/orabase/product/11.2.0/dbhome_1/network/admin/listener.ora" failed: details at "(:CLSN00014:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2674: Start of 'ora.CRS_LISTENER.lsnr' on 'vrh3' failed
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "clean": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2678: 'ora.CRS_LISTENER.lsnr' on 'vrh3' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
PRCC-1015 : LISTENER was already running on vrh3
PRCR-1004 : Resource ora.LISTENER.lsnr is already running
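Once an RDBMS home has been installed on vrh3 the listener resource can be checked and started again; until then the failure above is cosmetic. A sketch of the follow-up check, using the resource name from the output above and standard 11.2 crsctl/srvctl syntax:
[grid@vrh3 ~]$ crsctl stat res ora.CRS_LISTENER.lsnr -t
[grid@vrh3 ~]$ srvctl start listener -n vrh3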
2. If the rootcrs.pl script fails, simply rerunning it once is usually enough:
[root@vrh3 bin]# /g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
PRKO-2190 : VIP exists for node vrh3, VIP name vrh3-vip
PRKO-2420 : VIP is already started on node(s): vrh3
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
3. Finally, it is recommended to restart CRS on the new node and use cluvfy to verify that the node addition completed successfully:
[root@vrh3 ~]# crsctl stop crs
[root@vrh3 ~]# crsctl start crs
[root@vrh3 ~]# su - grid
[grid@vrh3 ~]$ cluvfy stage -post nodeadd -n vrh1,vrh2,vrh3
Performing post-checks for node addition
Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking cluster integrity...
Cluster integrity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "vrh3,vrh2,vrh1"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "vrh.cluster.oracle.com"...
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
ERROR:
PRVF-4657 : Name resolution setup check for "vrh.cluster.oracle.com" (IP address: 192.168.1.190) failed
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
Verification of SCAN VIP and Listener setup failed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Post-check for node addition was successful.
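As a final sanity check (a hedged sketch using standard 11.2 GI commands), confirm from the new node that all three nodes are active cluster members and review the resource states:
[grid@vrh3 ~]$ olsnodes -n -s -t
[grid@vrh3 ~]$ crsctl check cluster -all
[grid@vrh3 ~]$ crsctl stat res -t
The SCAN name-resolution errors above (PRVF-4664/PRVF-4657) typically stem from resolving the SCAN through /etc/hosts rather than DNS and, like the resolv.conf warning earlier, can usually be ignored in such a configuration.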