Oracle RAC 11g Installation Test Notes (Not Yet Successful)

Date: 2021-12-09 17:22:41



I started working on an Oracle RAC 11g installation test before the National Day holiday. I first read Liu Xianjun's《Oracle RAC 11g实战指南》(Oracle RAC 11g Practical Guide), then consulted other people's installation guides and videos online. I planned to verify it all myself over the holiday, and the book's warning proved true: a first RAC installation has a 100% failure rate. I tried three times.


References:

[Installation & Deployment] [Video Tutorial] Maclean shows how to install Oracle 11gR2 RAC on Linux 6.3 with VBox
http://t.askmaclean.com/thread-2007-1-1.html


Oracle 11g RAC installation series
http://www.oracleonlinux.cn/rac/

Installing Oracle 11g RAC (11.2.0.3) on Oracle Linux 6 with UDEV (Part 1)
http://blog.csdn.net/staricqxyz/article/details/8447495

Installing Oracle 11g RAC (11.2.0.3) on Oracle Linux 6 with UDEV (Part 2)
http://blog.csdn.net/staricqxyz/article/details/8447684

Installing Oracle 11g RAC (11.2.0.3) on Oracle Linux 6 with UDEV (Part 3)
http://blog.csdn.net/staricqxyz/article/details/8447985

Installing Oracle 11g RAC (11.2.0.3) on Oracle Linux 6 with UDEV (Part 4)
http://blog.csdn.net/staricqxyz/article/details/8448611

Installing Oracle 11g RAC (11.2.0.3) on Oracle Linux 6 with UDEV (Part 5)
http://blog.csdn.net/staricqxyz/article/details/8449850

Installing the 11.2.0.3.5 14727347 PSU GI/RDBMS patch on RAC Grid Infrastructure
http://blog.csdn.net/askmaclean/article/details/8703698


VMware + Linux + Oracle 10g RAC full illustrated walkthrough
http://www.linuxidc.com/Linux/2011-02/31976p5.htm


Detailed procedure for installing Oracle 11g RAC on VBox + OEL 5.7
http://blog.csdn.net/haibusuanyun/article/details/11557661


lsof -i
http://blog.itpub.net/21162451/viewspace-721938

libcap.so.1:cannot open shared object file: No such file or directory
http://blog.itpub.net/21754115/viewspace-1118529/

Oracle 11gR2 RAC ohasd failed to start: solution
http://blog.csdn.net/meteorlet/article/details/8363745

Oracle ASM + 11gR2 installation
http://blog.itpub.net/26224914/viewspace-1290146/

Oracle 11g RAC ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443: solution
http://blog.csdn.net/tianlesoftware/article/details/7697366

RAC uninstall notes
http://blog.csdn.net/tianlesoftware/article/details/5892225


"ADVM/ACFS is not supported" error when installing Oracle 11.2 RAC on Red Hat 6.2
http://blog.itpub.net/25133597/viewspace-1058569/

Step-by-step Oracle 11gR2 RAC installation on Linux (Part 8, final)
http://www.oracleonlinux.cn/2012/06/step-by-step-install-11gr2-rac-on-linux-8/



1. First attempt

Test OS: CentOS 5.5 (32-bit)

Database software: linux_11gR2_database_1of2.zip, linux_11gR2_database_2of2.zip, linux_11gR2_grid.zip


The first attempt took a long time just to get shared disks working between the virtual machines.

Eventually I reached the pre-installation check before installing Grid:

[grid@server1 grid]$ ./runcluvfy.sh stage -post hwos -n server1,server2 -verbose
The check generates a fixup script; run it as prompted.


A problem came up during installation.

Investigation pointed to:
Failed to create or upgrade OLR
http://blog.csdn.net/leshami/article/details/8294969

For Oracle 11g RAC, as with Oracle 10g Clusterware, orainstroot.sh and root.sh must be run after the grid installation finishes. On AMD chips, Oracle reportedly fails to recognize the CPU.

(Heh, so only at this point does it tell me it doesn't recognize AMD chips...)

Download the patch:
[grid@server1 grid]$ cd /u01/app/11.2.0/grid/OPatch
[grid@server1 OPatch]$ ./opatch apply /mnt/hgfs/sharefiles/oracle11g/OPatch/8670579


After tracking down the patch I applied it and tried again; still no luck, so I decided to start over.

# Note: when patching, check the ORACLE_HOME environment variable and run perl -v; the Perl version should be above 5.00503.
# The 32-bit build of Oracle 11g seems to have quite a few problems; test with the 64-bit build whenever possible.

One more note: after installing the Oracle database software, patch 8670579 must be applied there as well, otherwise dbca fails.



2. Second attempt

Test OS: CentOS 6.5 (64-bit)

Database software: oracle11g_64bit_11.2.0.1


During 11.2 GI installation, OUI hangs at 65% while copying to the remote node
http://blog.csdn.net/msdnchina/article/details/43879097


Symptom:
The 11.2 GI OUI install hangs at 65%, while copying files to the remote node.
The last step shown is copying the GRID software directory to the remote node.
Checking the remote node shows that no software has been copied over.

Cause:
A firewall between the two nodes.

Solution:
Disable the firewall, then retry the installation.
Steps:
  1. Cancel the current installation
  2. Work with your system and network administrators to make sure no firewall is running on any node in the cluster
  3. Reinstall


So after stopping the firewall, remember to also change its boot-time configuration:

Disable the firewall and SELinux:
vi /etc/selinux/config   ==> SELINUX=disabled
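The same edit can be done non-interactively. This sketch rehearses the sed on a scratch copy; the real target is /etc/selinux/config (as root), and on EL5/EL6 the firewall itself is stopped with service/chkconfig:

```shell
# Stop iptables now and keep it off across reboots (EL5/EL6 syntax):
#   service iptables stop
#   chkconfig iptables off
# Then flip SELINUX=... to disabled. Rehearsed here on a scratch copy;
# point sed at /etc/selinux/config (as root) for the real change.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux_config_demo
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux_config_demo
grep '^SELINUX=' /tmp/selinux_config_demo   # SELINUX=disabled
```

A reboot (or `setenforce 0` for the current session) is still needed for the SELinux change to take effect.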
 
Because of the firewall, the files on node 2 were never fully copied, and node 1 then hit the classic 11.2.0.1 problem:
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/grid/11.2.0/grid_1/crs/install/rootcrs.pl line 443.
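The commonly published explanation for this 11.2.0.1-on-EL6 failure (e.g. in the tianlesoftware post referenced above) is that init.ohasd is never launched because EL6 replaced /etc/inittab processing with upstart. A sketch of the usual workaround, to be verified against your platform notes:

```shell
# Run as root in a second terminal while root.sh is waiting on ohasd.
# One-shot kick: reading the named pipe unblocks the ohasd startup.
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

# More durable: an upstart job so init.ohasd respawns at boot
# (content as commonly published; verify for your release).
cat > /etc/init/oracle-ohasd.conf <<'EOF'
start on runlevel [35]
stop on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
EOF
```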

At the same time, CentOS was missing some packages that I simply could not find: the installed system is 64-bit, yet the checks demand the i386 versions.

libaio-0.3.105 (i386)     failed    
compat-libstdc++-33-3.2.3 (i386)  failed    
libaio-devel-0.3.105 (i386)  failed    
libgcc-3.4.6 (i386)       failed    
libstdc++-3.4.6 (i386)    failed    
unixODBC-2.2.11 (i386)    failed    
unixODBC-devel-2.2.11 (i386)  failed  
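On a 64-bit EL6 system those 32-bit packages are shipped under the i686 architecture, so one way to pull them in is with yum arch suffixes. Package names are taken from the failed-check list above; whether your configured repositories actually carry them is another matter.

```shell
# Install the 32-bit companions the checker wants; on EL6 the 32-bit
# arch suffix is .i686. Names taken from the failed checks above;
# availability depends on the repositories you have configured.
yum -y install libaio.i686 libaio-devel.i686 \
    compat-libstdc++-33.i686 libgcc.i686 libstdc++.i686 \
    unixODBC.i686 unixODBC-devel.i686
```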


Having already sunk too much time into it, I gave up on this round.


3. Third attempt

Test environment: OracleLinux-R6-U3-x86_64

Database software: oracle11g_64bit_11.2.0.3

Oracle's own Linux distribution makes this easy: a single system update pulls in all the required packages.
yum -y install oracle-rdbms-server-11gR2-preinstall


The pre-installation check later flagged insufficient swap space; at least 4 GB is required.

Reference:

Fixing insufficient swap space
http://www.wenkuxiazai.com/doc/c4aaa91afad6195f312ba666.html
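The swap figures in the cluvfy output that follows (required swap equal to the ~2.9 GB of RAM) match Oracle's documented 11gR2 sizing table: roughly 1.5x RAM below 2 GB, equal to RAM from 2 GB to 16 GB, capped at 16 GB above that. A sketch of that rule:

```shell
# Required swap (KB) for Oracle 11gR2 as a function of RAM (KB),
# following the documented sizing table:
#   RAM < 2 GB           -> 1.5 x RAM
#   2 GB <= RAM <= 16 GB -> RAM
#   RAM > 16 GB          -> 16 GB
required_swap_kb() {
    local mem_kb=$1
    if   [ "$mem_kb" -lt 2097152 ];  then echo $(( mem_kb * 3 / 2 ))
    elif [ "$mem_kb" -le 16777216 ]; then echo "$mem_kb"
    else                                  echo 16777216
    fi
}

required_swap_kb 3040728    # the 2.9 GB nodes here -> prints 3040728
```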


Finally, the Grid pre-installation check passed:

[grid@server1 ~]$ cd /mnt/hgfs/sharefiles/oracle11g_64bit/11.2.0.3/grid
[grid@server1 grid]$ ./runcluvfy.sh stage -pre crsinst -n server1,server2 -fixup -verbose


Performing pre-checks for cluster services setup 


Checking node reachability...


Check: Node reachability from node "server1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  server2                               yes                     
  server1                               yes                     
Result: Node reachability check passed from node "server1"




Checking user equivalence...


Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  server2                               passed                  
  server1                               passed                  
Result: User equivalence check passed for user "grid"


Checking node connectivity...


Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  server2                               passed                  
  server1                               passed                  


Verification of the hosts config file successful




Interface information for node "server2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.25.12   192.168.25.0    0.0.0.0         192.168.3.1     00:0C:29:E5:C0:4D 1500  
 eth0   192.168.3.183   192.168.3.0     0.0.0.0         192.168.3.1     00:0C:29:E5:C0:43 1500  




Interface information for node "server1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.25.10   192.168.25.0    0.0.0.0         192.168.3.1     00:0C:29:87:85:BD 1500  
 eth0   192.168.3.181   192.168.3.0     0.0.0.0         192.168.3.1     00:0C:29:87:85:B3 1500  




Check: Node connectivity of subnet "192.168.25.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  server2[192.168.25.12]          server1[192.168.25.10]          yes             
Result: Node connectivity passed for subnet "192.168.25.0" with node(s) server2,server1




Check: TCP connectivity of subnet "192.168.25.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  server1:192.168.25.10           server2:192.168.25.12           passed          
Result: TCP connectivity check passed for subnet "192.168.25.0"




Check: Node connectivity of subnet "192.168.3.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  server2[192.168.3.183]          server1[192.168.3.181]          yes             
Result: Node connectivity passed for subnet "192.168.3.0" with node(s) server2,server1




Check: TCP connectivity of subnet "192.168.3.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  server1:192.168.3.181           server2:192.168.3.183           passed          
Result: TCP connectivity check passed for subnet "192.168.3.0"




Interfaces found on subnet "192.168.3.0" that are likely candidates for VIP are:
server2 eth0:192.168.3.183
server1 eth0:192.168.3.181


Interfaces found on subnet "192.168.25.0" that are likely candidates for a private interconnect are:
server2 eth1:192.168.25.12
server1 eth1:192.168.25.10
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.25.0".
Subnet mask consistency check passed for subnet "192.168.3.0".
Subnet mask consistency check passed.


Result: Node connectivity check passed


Checking multicast communication...


Checking subnet "192.168.25.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.25.0" for multicast communication with multicast group "230.0.1.0" passed.


Checking subnet "192.168.3.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.3.0" for multicast communication with multicast group "230.0.1.0" passed.


Check of multicast communication passed.


Checking ASMLib configuration.
  Node Name                             Status                  
  ------------------------------------  ------------------------
  server2                               passed                  
  server1                               passed                  
Result: Check for ASMLib configuration passed.


Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       2.8999GB (3040728.0KB)    1.5GB (1572864.0KB)       passed    
  server1       2.8999GB (3040728.0KB)    1.5GB (1572864.0KB)       passed    
Result: Total memory check passed


Check: Available memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       2.7005GB (2831728.0KB)    50MB (51200.0KB)          passed    
  server1       2.54GB (2663384.0KB)      50MB (51200.0KB)          passed    
Result: Available memory check passed


Check: Swap space 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       3.8672GB (4055032.0KB)    2.8999GB (3040728.0KB)    passed    
  server1       3.8672GB (4055032.0KB)    2.8999GB (3040728.0KB)    passed    
Result: Swap space check passed


Check: Free disk space for "server2:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              server2       /             29.3457GB     1GB           passed      
Result: Free disk space check passed for "server2:/tmp"


Check: Free disk space for "server1:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              server1       /             29.3281GB     1GB           passed      
Result: Free disk space check passed for "server1:/tmp"


Check: User existence for "grid" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  server2       passed                    exists(1100)            
  server1       passed                    exists(1100)            


Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed 
Result: User existence check passed for "grid"


Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  server2       passed                    exists                  
  server1       passed                    exists                  
Result: Group existence check passed for "oinstall"


Check: Group existence for "dba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  server2       passed                    exists                  
  server1       passed                    exists                  
Result: Group existence check passed for "dba"


Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           yes           yes           yes           yes           passed      
  server1           yes           yes           yes           yes           passed      
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed


Check: Membership of user "grid" in group "dba" 
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  ------------  ----------------
  server2           yes           yes           yes           passed          
  server1           yes           yes           yes           passed          
Result: Membership check for user "grid" in group "dba" passed


Check: Run level 
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       5                         3,5                       passed    
  server1       5                         3,5                       passed    
Result: Run level check passed


Check: Hard limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  server2           hard          65536         65536         passed          
  server1           hard          65536         65536         passed          
Result: Hard limits check passed for "maximum open file descriptors"


Check: Soft limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  server2           soft          1024          1024          passed          
  server1           soft          1024          1024          passed          
Result: Soft limits check passed for "maximum open file descriptors"


Check: Hard limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  server2           hard          16384         16384         passed          
  server1           hard          16384         16384         passed          
Result: Hard limits check passed for "maximum user processes"


Check: Soft limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  server2           soft          2047          2047          passed          
  server1           soft          2047          2047          passed          
Result: Soft limits check passed for "maximum user processes"


Check: System architecture 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       x86_64                    x86_64                    passed    
  server1       x86_64                    x86_64                    passed    
Result: System architecture check passed


Check: Kernel version 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       2.6.39-200.24.1.el6uek.x86_64  2.6.32                    passed    
  server1       2.6.39-200.24.1.el6uek.x86_64  2.6.32                    passed    
Result: Kernel version check passed


Check: Kernel parameter for "semmsl" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           250           250           250           passed          
  server1           250           250           250           passed          
Result: Kernel parameter check passed for "semmsl"


Check: Kernel parameter for "semmns" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           32000         32000         32000         passed          
  server1           32000         32000         32000         passed          
Result: Kernel parameter check passed for "semmns"


Check: Kernel parameter for "semopm" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           100           100           100           passed          
  server1           100           100           100           passed          
Result: Kernel parameter check passed for "semopm"


Check: Kernel parameter for "semmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           128           128           128           passed          
  server1           128           128           128           passed          
Result: Kernel parameter check passed for "semmni"


Check: Kernel parameter for "shmmax" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           4398046511104  4398046511104  1556852736    passed          
  server1           4398046511104  4398046511104  1556852736    passed          
Result: Kernel parameter check passed for "shmmax"


Check: Kernel parameter for "shmmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           4096          4096          4096          passed          
  server1           4096          4096          4096          passed          
Result: Kernel parameter check passed for "shmmni"


Check: Kernel parameter for "shmall" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           4294967296    4294967296    2097152       passed          
  server1           4294967296    4294967296    2097152       passed          
Result: Kernel parameter check passed for "shmall"


Check: Kernel parameter for "file-max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           6815744       6815744       6815744       passed          
  server1           6815744       6815744       6815744       passed          
Result: Kernel parameter check passed for "file-max"


Check: Kernel parameter for "ip_local_port_range" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed          
  server1           between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed          
Result: Kernel parameter check passed for "ip_local_port_range"


Check: Kernel parameter for "rmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           262144        262144        262144        passed          
  server1           262144        262144        262144        passed          
Result: Kernel parameter check passed for "rmem_default"


Check: Kernel parameter for "rmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           4194304       4194304       4194304       passed          
  server1           4194304       4194304       4194304       passed          
Result: Kernel parameter check passed for "rmem_max"


Check: Kernel parameter for "wmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           262144        262144        262144        passed          
  server1           262144        262144        262144        passed          
Result: Kernel parameter check passed for "wmem_default"


Check: Kernel parameter for "wmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           1048576       1048576       1048576       passed          
  server1           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "wmem_max"


Check: Kernel parameter for "aio-max-nr" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  server2           1048576       1048576       1048576       passed          
  server1           1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "aio-max-nr"


Check: Package existence for "binutils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       binutils-2.20.51.0.2-5.34.el6  binutils-2.20.51.0.2      passed    
  server1       binutils-2.20.51.0.2-5.34.el6  binutils-2.20.51.0.2      passed    
Result: Package existence check passed for "binutils"


Check: Package existence for "compat-libcap1" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
  server1       compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
Result: Package existence check passed for "compat-libcap1"


Check: Package existence for "compat-libstdc++-33(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
  server1       compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"


Check: Package existence for "libgcc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       libgcc(x86_64)-4.4.7-16.el6  libgcc(x86_64)-4.4.4      passed    
  server1       libgcc(x86_64)-4.4.7-16.el6  libgcc(x86_64)-4.4.4      passed    
Result: Package existence check passed for "libgcc(x86_64)"


Check: Package existence for "libstdc++(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       libstdc++(x86_64)-4.4.7-16.el6  libstdc++(x86_64)-4.4.4   passed    
  server1       libstdc++(x86_64)-4.4.7-16.el6  libstdc++(x86_64)-4.4.4   passed    
Result: Package existence check passed for "libstdc++(x86_64)"


Check: Package existence for "libstdc++-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       libstdc++-devel(x86_64)-4.4.7-16.el6  libstdc++-devel(x86_64)-4.4.4  passed    
  server1       libstdc++-devel(x86_64)-4.4.7-16.el6  libstdc++-devel(x86_64)-4.4.4  passed    
Result: Package existence check passed for "libstdc++-devel(x86_64)"


Check: Package existence for "sysstat" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
  server1       sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
Result: Package existence check passed for "sysstat"


Check: Package existence for "gcc" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       gcc-4.4.7-16.el6          gcc-4.4.4                 passed    
  server1       gcc-4.4.7-16.el6          gcc-4.4.4                 passed    
Result: Package existence check passed for "gcc"


Check: Package existence for "gcc-c++" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       gcc-c++-4.4.7-16.el6      gcc-c++-4.4.4             passed    
  server1       gcc-c++-4.4.7-16.el6      gcc-c++-4.4.4             passed    
Result: Package existence check passed for "gcc-c++"


Check: Package existence for "ksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       ksh-20120801-28.el6_7.3   ksh-20100621              passed    
  server1       ksh-20120801-28.el6_7.3   ksh-20100621              passed    
Result: Package existence check passed for "ksh"


Check: Package existence for "make" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       make-3.81-20.el6          make-3.81                 passed    
  server1       make-3.81-20.el6          make-3.81                 passed    
Result: Package existence check passed for "make"


Check: Package existence for "glibc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       glibc(x86_64)-2.12-1.166.el6_7.3  glibc(x86_64)-2.12        passed    
  server1       glibc(x86_64)-2.12-1.166.el6_7.3  glibc(x86_64)-2.12        passed    
Result: Package existence check passed for "glibc(x86_64)"


Check: Package existence for "glibc-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       glibc-devel(x86_64)-2.12-1.166.el6_7.3  glibc-devel(x86_64)-2.12  passed    
  server1       glibc-devel(x86_64)-2.12-1.166.el6_7.3  glibc-devel(x86_64)-2.12  passed    
Result: Package existence check passed for "glibc-devel(x86_64)"


Check: Package existence for "libaio(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
  server1       libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
Result: Package existence check passed for "libaio(x86_64)"


Check: Package existence for "libaio-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  server2       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
  server1       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
Result: Package existence check passed for "libaio-devel(x86_64)"


Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed 


Check: Current group ID 
Result: Current group ID check passed


Starting check for consistency of primary group of root user
  Node Name                             Status                  
  ------------------------------------  ------------------------
  server2                               passed                  
  server1                               passed                  


Check for consistency of root user's primary group passed


Starting Clock synchronization checks using Network Time Protocol(NTP)...


NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running


Result: Clock synchronization check using Network Time Protocol(NTP) passed


Checking Core file name pattern consistency...
Core file name pattern consistency check passed.


Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  server2       passed                    does not exist          
  server1       passed                    does not exist          
Result: User "grid" is not part of "root" group. Check passed


Check default user file creation mask
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  server2       0022                      0022                      passed    
  server1       0022                      0022                      passed    
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes


Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
  Node Name                             Status                  
  ------------------------------------  ------------------------
  server2                               passed                  
  server1                               passed                  
The DNS response time for an unreachable node is within acceptable limit on all nodes


File "/etc/resolv.conf" is consistent across nodes


Check: Time zone consistency 
Result: Time zone consistency check passed


Pre-check for cluster services setup was successful. 
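The cluvfy transcript above is long, and a single failed check is easy to miss. A small script can scan the output mechanically for any check line that did not pass. A minimal sketch in Python (the sample text is abbreviated from the output above):

```python
import re

# Abbreviated sample of cluvfy output, as pasted above.
cluvfy_output = """\
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
Result: Default user file creation mask check passed
Result: Time zone consistency check passed
Pre-check for cluster services setup was successful.
"""

def failed_checks(text):
    """Return 'Result:'/'check' lines that do not report passed/successful."""
    suspicious = []
    for line in text.splitlines():
        if re.search(r"\b(Result:|check)\b", line) and \
           not re.search(r"passed|successful", line):
            suspicious.append(line)
    return suspicious

print(failed_checks(cluvfy_output))  # -> [] when every check passed
```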


However, after the installation completed on node 1:

Node 1:
[root@server1 Desktop]# /u01/app/11.2.0/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/11.2.0/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.


Changing groupname of /u01/app/11.2.0/oraInventory to oinstall.
The execution of the script is complete.
[root@server1 Desktop]# /u01/app/grid/11.2.0/grid_1/root.sh
Performing root user operation for Oracle 11g 


The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0/grid_1


Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...




Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0/grid_1/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'server1'
CRS-2676: Start of 'ora.mdnsd' on 'server1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'server1'
CRS-2676: Start of 'ora.gpnpd' on 'server1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'server1'
CRS-2672: Attempting to start 'ora.gipcd' on 'server1'
CRS-2676: Start of 'ora.gipcd' on 'server1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'server1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'server1'
CRS-2672: Attempting to start 'ora.diskmon' on 'server1'
CRS-2676: Start of 'ora.diskmon' on 'server1' succeeded
CRS-2676: Start of 'ora.cssd' on 'server1' succeeded


ASM created and started successfully.


Disk Group DATA created successfully.


clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 5c831af75c3d4f04bf58c3d72cb658ad.
Successful addition of voting disk 5a767c7fa41e4f0bbfc81a9070610c02.
Successful addition of voting disk 9571fbc484db4facbf263ea636f72ba3.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5c831af75c3d4f04bf58c3d72cb658ad (/dev/asm-diskb) [DATA]
 2. ONLINE   5a767c7fa41e4f0bbfc81a9070610c02 (/dev/asm-diskc) [DATA]
 3. ONLINE   9571fbc484db4facbf263ea636f72ba3 (/dev/asm-diskd) [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'server1'
CRS-2676: Start of 'ora.asm' on 'server1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'server1'
CRS-2676: Start of 'ora.DATA.dg' on 'server1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


The installation on node 2, however, ran into a problem:





Node 2:


[root@server2 Desktop]# /u01/app/11.2.0/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/11.2.0/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.


Changing groupname of /u01/app/11.2.0/oraInventory to oinstall.
The execution of the script is complete.
[root@server2 Desktop]# /u01/app/grid/11.2.0/grid_1/root.sh
Performing root user operation for Oracle 11g 


The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0/grid_1


Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...




Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0/grid_1/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart


Message from syslogd@server2 at Oct  6 18:24:53 ...
 kernel:Stack:


Message from syslogd@server2 at Oct  6 18:24:53 ...
 kernel:Call Trace:


Message from syslogd@server2 at Oct  6 18:24:53 ...
 kernel:Code: 30 07 00 00 49 8b be 38 07 00 00 4c 8b a0 a0 01 00 00 4c 8b a8 a8 01 00 00 48 81 c7 08 08 00 00 e8 3c bb fb ff 66 90 fb 66 66 90 
CRS-2672: Attempting to start 'ora.mdnsd' on 'server2'
CRS-2676: Start of 'ora.mdnsd' on 'server2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'server2'
CRS-2676: Start of 'ora.gpnpd' on 'server2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'server2'
CRS-2672: Attempting to start 'ora.gipcd' on 'server2'
CRS-2676: Start of 'ora.gipcd' on 'server2' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'server2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'server2'
CRS-2672: Attempting to start 'ora.diskmon' on 'server2'
CRS-2676: Start of 'ora.diskmon' on 'server2' succeeded
CRS-2676: Start of 'ora.cssd' on 'server2' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
You have new mail in /var/spool/mail/root


Note the "kernel:Code" messages above: this error directly caused node 1 to reboot automatically.
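When a node reboots like this, the "Message from syslogd" lines are the tail of a kernel oops; grouping them by timestamp makes it easier to line the crash up with whatever CRS action was running at that moment. A sketch, assuming the syslog wall-message format shown above:

```python
import re

# Sample wall messages in the format broadcast by syslogd, as seen above.
syslog_wall = """\
Message from syslogd@server2 at Oct  6 18:24:53 ...
 kernel:Stack:
Message from syslogd@server2 at Oct  6 18:24:53 ...
 kernel:Call Trace:
Message from syslogd@server2 at Oct  6 18:24:53 ...
 kernel:Code: 30 07 00 00 49 8b be 38 07 00 00
"""

def oops_events(text):
    """Return (timestamp, marker) pairs for kernel Stack/Call Trace/Code lines."""
    events = []
    ts = None
    for line in text.splitlines():
        m = re.match(r"Message from syslogd@\S+ at (.+?) \.\.\.", line)
        if m:
            ts = m.group(1)                     # remember the broadcast time
        elif ts and line.strip().startswith("kernel:"):
            marker = line.strip()[len("kernel:"):].split(":")[0]
            events.append((ts, marker))
    return events

print(oops_events(syslog_wall))
# -> [('Oct  6 18:24:53', 'Stack'), ('Oct  6 18:24:53', 'Call Trace'), ('Oct  6 18:24:53', 'Code')]
```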

Check the installation result on node 2:
[root@server2 ~]# su - grid
[grid@server2 ~]$ olsnodes
server1
server2
[grid@server2 ~]$ olsnodes -i
server1 server1-vip
server2 server2-vip
[grid@server2 ~]$ olsnodes -s
server1 Inactive
server2 Active
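The `olsnodes -s` output above already shows the problem: server1 is Inactive. In a larger cluster this is worth checking mechanically; a sketch that parses the same two-column output:

```python
# Sample `olsnodes -s` output, as captured above on server2.
olsnodes_s = """\
server1 Inactive
server2 Active
"""

def inactive_nodes(text):
    """Return node names whose status column is not 'Active'."""
    bad = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] != "Active":
            bad.append(parts[0])
    return bad

print(inactive_nodes(olsnodes_s))  # -> ['server1']
```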

References:

Five common problems when Oracle Grid Infrastructure fails to start
http://blog.sina.com.cn/s/blog_8317516b01015ipp.html

Troubleshooting Grid Infrastructure startup issues (Doc ID 1623340.1)
http://blog.csdn.net/compard/article/details/24699959

[Oracle ASM notes] the ohasd command
http://blog.csdn.net/jason_asia/article/details/7935952

How to collect CRS logs on Oracle 10g and 11g
http://jishu.zol.com.cn/3738.html


Check the logs:
# Grid Infrastructure alert log:
/u01/app/grid/11.2.0/grid_1/log/server1/alertserver1.log

Linux mail for root: /var/spool/mail/root


cd /u01/app/grid/11.2.0/grid_1/bin
1. Ensure the environment variable ORA_CRS_HOME is set.
2. Run:
   ./diagcollection.pl -crshome=$ORA_CRS_HOME -collect
   The script will create: crsData_.tar.gz, ocrData_.tar.gz, oraData_.tar.gz, coreData_.tar.gz, os_.tar.gz

For 11gR2:
1. Run:
   /bin/diagcollection.sh
   The script will create: crsData_.tar.gz, ocrData_.tar.gz, oraData_.tar.gz, coreData_.tar.gz, os_.tar.gz

Running ./diagcollection.pl -collect directly also works; the logs are written to the current directory.
In my environment settings I added ORA_CRS_HOME=/u01/app/11.2.0/grid/log_collect
On node server1: /u01/app/11.2.0/grid/log_collect
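diagcollection writes its archives into the current directory, so a quick sanity check after it finishes is to glob for the expected `*Data_*.tar.gz` names. A sketch (the directory and file names are illustrative, not from a real collection run):

```python
import glob
import os
import tempfile

def collected_archives(directory):
    """List diagcollection-style archives (crsData_*, ocrData_*, ...) in a directory."""
    return sorted(os.path.basename(p)
                  for p in glob.glob(os.path.join(directory, "*Data_*.tar.gz")))

# Demonstrate against a throwaway directory with illustrative file names.
with tempfile.TemporaryDirectory() as d:
    for name in ("crsData_server1.tar.gz", "ocrData_server1.tar.gz"):
        open(os.path.join(d, name), "w").close()
    print(collected_archives(d))  # -> ['crsData_server1.tar.gz', 'ocrData_server1.tar.gz']
```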
//------------------------------------------------------------------------------//

References:

Resolving CRS-4639: Could not contact Oracle High Availability Services (walkthrough)
http://www.bkjia.com/oracle/480083.html

Resolving CRS-4639: Could not contact Oracle High Availability Services
http://blog.csdn.net/laoshangxyc/article/details/11903001

Node 1:
CRS-4639: Could not contact Oracle High Availability Services

......
Message from syslogd@server1 at Oct  8 15:14:53 ...
 kernel:Code: de f2 92 00 49 89 f4 65 48 8b 3c 25 00 c4 00 00 85 c0 75 72 41 c7 44 24 28 00 00 00 00 48 89 cf e8 8d fe fd ff 66 90 fb 66 66 90 


CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node server2, number 2, and is terminating 
An active cluster was found during exclusive startup, restarting to join the cluster
........

On node 1:
[grid@server1 bin]$ 
Message from syslogd@server1 at Oct  8 15:14:53 ...
 kernel:Stack:


Message from syslogd@server1 at Oct  8 15:14:53 ...
 kernel:Call Trace:


Message from syslogd@server1 at Oct  8 15:14:53 ...
 kernel:Code: de f2 92 00 49 89 f4 65 48 8b 3c 25 00 c4 00 00 85 c0 75 72 41 c7 44 24 28 00 00 00 00 48 89 cf e8 8d fe fd ff 66 90 fb 66 66 90 

On node 2:


[grid@server2 bin]$ 
Message from syslogd@server2 at Oct  8 15:14:52 ...
 kernel:Stack:


Message from syslogd@server2 at Oct  8 15:14:52 ...
 kernel:Call Trace:


Message from syslogd@server2 at Oct  8 15:14:52 ...
 kernel:Code: 55 c0 e9 4e ff ff ff 66 90 48 8b 05 31 e4 93 00 


Message from syslogd@server2 at Oct  8 15:14:53 ...
 kernel:Stack:


Message from syslogd@server2 at Oct  8 15:14:53 ...
 kernel:Call Trace:


Message from syslogd@server2 at Oct  8 15:14:53 ...
 kernel:Stack:


Message from syslogd@server2 at Oct  8 15:14:54 ...
 kernel:Call Trace:


Message from syslogd@server2 at Oct  8 15:14:54 ...
 kernel:Code: 


Message from syslogd@server2 at Oct  8 15:14:54 ...
 kernel:Code: 48  [<ffffffff812453e0>] scsi_cmd_ioctl+0x2a0/0x4c0

//------------------------------------------------------------------------------//


Attempting a fix on node 1:

[root@server1 bin]# cd /u01/app/grid/11.2.0/grid_1/crs/install/
[root@server1 install]#  ./roothas.pl -deconfig -force -verbose
(Note: roothas.pl deconfigures an Oracle Restart/standalone home; for a clustered Grid Infrastructure home the documented deconfiguration script is rootcrs.pl -deconfig -force.)



[root@server1 install]# cd /u01/app/grid/11.2.0/grid_1
[root@server1 grid_1]# ./root.sh 


But when the same fix was attempted on node 2, node 1 still rebooted automatically. I suspect an OS-level problem, specifically a bug in the Linux kernel.

At this point I was stuck: I have not studied the Linux kernel, so I had no way to deal with this kernel issue.



Next, I plan to repeat the Oracle RAC 11g installation test using Oracle VM VirtualBox.