How to resolve "Oracle Cluster Verification Utility failed"

Posted: 2022-08-10 10:58:09
The Oracle Cluster Verification Utility configuration error appears after running root.sh during the CRS installation.

Environment: Linux AS 4.0, Oracle 10.2.0.1 RAC, two nodes

The configuration followed the documentation and went smoothly until the CRS installation: running root.sh on the second node reported an error. I then ran vipca on node 1 and configured it following the steps below, up to clicking OK. The error is shown in the screenshot (not preserved here).

References: [url]http://yangtingkun.itpub.net/post/468/277075[/url]
[url]http://yangtingkun.itpub.net/post/468/276589[/url]

-----------------------------------------------

Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "ce0" is not public. Public interfaces should be used to configure virtual IPs.

This error is described in detail in the Solaris 8 RAC installation documentation; the fix is to start vipca manually and configure it by hand.
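A commonly reported reason the silent vipca run rejects these interfaces is that their addresses fall in the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), which 10gR2 vipca treats as non-public; the interactive GUI lets you select them anyway. A minimal sketch of that classification (my assumption about the nature of the check, not vipca's actual code):

```shell
# Classify an IPv4 address as RFC 1918 private or public.
# Assumption: silent vipca refuses to place a VIP on an interface whose
# address is in one of these private ranges; the GUI does not enforce this.
is_rfc1918() {
  case "$1" in
    10.*)                                   echo private ;;
    192.168.*)                              echo private ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;
    *)                                      echo public ;;
  esac
}
is_rfc1918 172.25.198.44   # prints "private" -- hence the complaint about ce0
```

Both VIP subnets seen in this thread (172.25.x and 172.20.x) fall inside 172.16.0.0/12, which is consistent with the error appearing on both systems.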

root@ahrac1 # cd /data/oracle/product/10.2/crs/bin
root@ahrac1 # ./vipca

Open a terminal in Xmanager and launch the vipca GUI. Click Next; all available network interfaces appear. Since ce0 is configured as the PUBLIC interface, select ce0 and click Next. On the configuration screen, enter ahrac1-vip and ahrac2-vip for IP Alias Name, and 172.25.198.44 and 172.25.198.45 for IP Address. If your configuration is correct, Oracle fills in the remaining three fields automatically once you complete the first IP. Click Next to reach the summary page; verify it and click Finish.

Oracle then runs six steps: Create VIP application resource, Create GSD application resource, Create ONS application resource, Start VIP application resource, Start GSD application resource, and Start ONS application resource.

Once all of them succeed, click OK to finish the VIPCA configuration.

Then return to the Clusterware installation screen and click OK. Oracle automatically launches the two tools and reruns the verification; once it completes successfully, the installation is finished.

----------------------------------------------------------

 

2007-12-12 14:29 tolywang

SEVERE: OUI-25031: Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
2. Fix the errors causing these failures.
3. Select the failed assistants and click the 'Retry' button to retry them.

 

2007-12-12 14:35 yinheng8066

I hit this on Windows 2003 SP1 as well. At the time I ran VIPCA first to configure the IPs, then ignored this error, and the cluster installation completed.

I just asked a colleague who has also seen this problem, on Solaris; he likewise ignored it and ran vipca.

-------------------------
Watching this thread

[[i] Last edited by yinheng8066 at 2007-12-12 14:41 [/i]]

 

2007-12-12 14:41 tolywang

GND-RAC01</u01/product/oraInventory/logs>$cat installActions2007-12-12_11-53-29AM.log




INFO: Starting to execute configuration assistants
INFO: Command = /u01/product/crs/bin/cluvfy stage -post crsinst -n GND-RAC01,GND-RAC02


Checking Cluster manager integrity...


Checking CSS daemon...
Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...

All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...

CRS health check passed.

CRS integrity check passed.

Checking node application existence...


Checking existence of VIP node application (required)
Check failed.
Check failed on nodes:
        GND-RAC02,GND-RAC01

Checking existence of ONS node application (optional)
Check ignored.

Checking existence of GSD node application (optional)
Check ignored.


Post-check for cluster services setup was unsuccessful on all the nodes.

Command = /u01/product/crs/bin/cluvfy has failed

INFO: Configuration assistant "Oracle Cluster Verification Utility" failed
-----------------------------------------------------------------------------
*** Starting OUICA ***
Oracle Home set to /u01/product/crs
Configuration directory is set to /u01/product/crs/cfgtoollogs. All xml files under the directory will be processed
INFO: The "/u01/product/crs/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
-----------------------------------------------------------------------------
SEVERE: OUI-25031:Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
2. Fix the errors causing these failures.
3. Select the failed assistants and click the 'Retry' button to retry them.
INFO: User Selected: Yes/OK

[[i] Last edited by tolywang at 2007-12-12 16:02 [/i]]

 

2007-12-12 14:48 tolywang

[quote]Originally posted by [i]yinheng8066[/i] at 2007-12-12 14:35
I hit this on Windows 2003 SP1 as well. At the time I ran VIPCA first to configure the IPs, then ignored this error, and the cluster installation completed.

I just asked a colleague who has also seen this problem, on Solaris; he likewise ignored it and ran vipca.

-------------------------
Watching this thread [/quote]


But I already got the error when running sh root.sh on rac02, and then ran vipca once on node 1. Do I still need to run it again now?

:rose:

[[i] Last edited by tolywang at 2007-12-12 14:49 [/i]]

 

2007-12-12 14:51 tolywang

vipca is only run on the primary CRS installation node (rac01), right? Then it should already have taken effect, and CRS does start. Strange!

Running vipca on node 1 should come after sh root.sh fails on the second node, and before OUI-25031 and the Oracle Cluster Verification Utility failure appear, right?

[[i] Last edited by tolywang at 2007-12-12 14:56 [/i]]

 

2007-12-12 14:56 yinheng8066

[quote]Originally posted by [i]tolywang[/i] at 2007-12-12 14:48
But I already got the error when running sh root.sh on rac02, and then ran vipca once on node 1. Do I still need to run it again now?

:rose: [/quote]


When you ran root.sh on rac2, was the error this one?

"The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs."

If so, invoke VIPCA manually on the second node as root.

 

2007-12-12 15:01 yinheng8066

As root, run the following scripts in order, one at a time; let each finish before starting the next.
Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac1.
Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac2.
Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac1.
Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2.

root.sh on rac2 will report: "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." At that point run vipca on rac2 as root, then return to the Clusterware screen.

This is the usual order; I tried it on Linux AS 4 before, and running the scripts out of order caused long hangs or other errors.

[[i] Last edited by yinheng8066 at 2007-12-12 15:10 [/i]]

 

2007-12-12 15:13 tolywang

[quote]Originally posted by [i]yinheng8066[/i] at 2007-12-12 15:01
As root, run the following scripts in order, one at a time; let each finish before starting the next.
Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac1.
Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac2.
Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac1.
Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2.

This is the usual order; I tried running them out of order before, and got long hangs or other errors. [/quote]


Fortunately, that is the order I followed.


Each of my servers has two public NICs and one private NIC.
The VIP I need to specify is on the subnet of the eth2 public NIC, not eth0. Could that be the cause? Do I have to disable eth0 first?

The second time I ran vipca, only the two public cards eth0 and eth2 appeared, not all three NICs as on the first run. I selected eth2, the same as last time.

[[i] Last edited by tolywang at 2007-12-12 15:15 [/i]]

 

2007-12-12 15:17 tolywang

[quote]Originally posted by [i]yinheng8066[/i] at 2007-12-12 14:56
When you ran root.sh on rac2, was the error this one?

"The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs."

If so, invoke VIPCA manually on the second node as root. [/quote]


Running sh root.sh on RAC02 reports:




[root@GND-RAC02 crs]# sh root.sh
WARNING: directory '/u01/product' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/product' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: gnd-rac01 gnd-pri01 gnd-rac01
node 2: gnd-rac02 gnd-pri02 gnd-rac02
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        gnd-rac01
        gnd-rac02
CSS is active on all nodes.
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid node name "GND-RAC01" entered in an input argument.
Invalid node name "GND-RAC01" entered in an input argument.

 

2007-12-12 15:18 tolywang

Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid node name "GND-RAC01" entered in an input argument.
Invalid node name "GND-RAC01" entered in an input argument.

I ran vipca based on this message and on documents found through Google.

 

2007-12-12 15:20 tolywang

The following appeared twice:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid node name "GND-RAC01" entered in an input argument.
Invalid node name "GND-RAC01" entered in an input argument.




Invalid node name "GND-RAC01" entered in an input argument.
Invalid node name "GND-RAC01" entered in an input argument.


Could this be related to the public NIC setup?

GND-RAC01</etc>$cat hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
# 127.0.0.1     GND-RAC01

127.0.0.1               localhost.localdomain    localhost

10.155.4.95     GND-RAC01
172.20.1.19     GND-RAC01
10.1.0.1        GND-PRI01
172.20.1.29     GND-VIP01

10.155.4.96     GND-RAC02
172.20.1.18     GND-RAC02
10.1.0.2        GND-PRI02
172.20.1.28     GND-VIP02




I have two public cards:

10.155.4.95     GND-RAC01
172.20.1.19     GND-RAC01

[[i] Last edited by tolywang at 2007-12-12 15:22 [/i]]

 

2007-12-12 15:26 yinheng8066

[quote]Originally posted by [i]tolywang[/i] at 2007-12-12 15:13
Fortunately, that is the order I followed.

Each of my servers has two public NICs and one private NIC.
The VIP I need to specify is on the subnet of the eth2 public NIC, not eth0. Could that be the cause? Do I have to disable eth0 first?

The second time I ran vipca, only the two public cards eth0 and eth2 appeared, not all three NICs as on the first run. I selected eth2, the same as last time. [/quote]


My environment at the time had one public NIC (eth0) and one private NIC (eth1).
You shouldn't need to disable it; that doesn't seem to be the cause.

 

2007-12-12 15:34 yinheng8066

The error you are getting is:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid node name "GND-RAC01" entered in an input argument.
Invalid node name "GND-RAC01" entered in an input argument.

=========================================
The normal case should look like this:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

 

2007-12-12 15:52 tolywang

PRKV-1060: "Invalid node name \"{0}\" entered in an input argument."
Cause: An attempt has been made to configure CRS node applications for the node that is not part of the cluster.
Action: Check if the node is configured in the cluster using '<CRS home>/bin/olsnodes' Enter only the nodes that are part of the cluster, or add the node to the cluster before configuring nodeapps.



GND-RAC02</u01/product/crs/bin>$olsnodes
gnd-rac01
gnd-rac02


GND-RAC01</u01/product/crs/bin>$olsnodes
gnd-rac01
gnd-rac02
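Note that olsnodes prints the node names in lower case, while the failing argument was the upper-case GND-RAC01. A quick case-insensitive membership check along these lines would have shown the name matches only when case is ignored (node_in_cluster is a hypothetical helper; a real check would run $CRS_HOME/bin/olsnodes itself rather than take its output as an argument):

```shell
# Check whether a node name appears in olsnodes output, ignoring case.
# A case-only mismatch (GND-RAC01 vs gnd-rac01) is exactly the PRKV-1060
# situation described above.
node_in_cluster() {
  target=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  printf '%s\n' "$2" | tr '[:upper:]' '[:lower:]' | grep -qx "$target"
}

nodes="gnd-rac01
gnd-rac02"
node_in_cluster GND-RAC01 "$nodes" && echo "in cluster (case-insensitively)"
```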

[[i] Last edited by tolywang at 2007-12-12 15:54 [/i]]

 

2007-12-12 15:57 tolywang

After the error above, I ran sh root.sh on node 2 a second time (just as a test):


[root@GND-RAC02 crs]# sh root.sh
WARNING: directory '/u01/product' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

 

2007-12-12 16:07 tolywang

[url]http://www.dbasupport.com/forums/archive/index.php/t-52297.html[/url]



merlin_rbs08-04-2006, 06:37 AM
Hi All,

I resolved this issue by removing the cluster software and renaming everything to lowercase. Everything being the hostnames, the entries in the host files, and the config in the cluster.conf file.

Regards,
Ryan
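The quoted fix (lowercasing the hostnames and the host-file entries) can be sketched as a small filter. lowercase_hosts is a hypothetical helper, and you should back up /etc/hosts before applying anything like it:

```shell
# Print an /etc/hosts-style file with every hostname column lowercased.
# Addresses (first column), comments, and blank lines are left untouched.
lowercase_hosts() {
  awk '/^[[:space:]]*#/ || NF == 0 { print; next }
       { out = $1
         for (i = 2; i <= NF; i++) out = out "\t" tolower($i)
         print out }' "$1"
}
```

An entry such as `10.155.4.95 GND-RAC01` becomes `10.155.4.95 gnd-rac01`; as the quoted post notes, the hostname itself and the cluster configuration still have to be renamed separately.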

 

2007-12-12 17:03 tolywang

[quote]Originally posted by [i]yinheng8066[/i] at 2007-12-12 15:34
The error you are getting is:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid node name "GND-RAC01" entered in an input argument.
Invalid node name "GND-RAC01" entered in an input argument.

=========================================
The normal case should look like this:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs. [/quote]

Yes, the errors are different; mine probably cannot be fixed by vipca.

I will test changing the host names to lowercase :(

 

2007-12-12 19:45 tolywang

It was indeed caused by having multiple public NICs configured:

I reconfigured the public and private NICs so there is exactly one of each; the extra public NIC was marked "Do not use".

After that, running sh root.sh on node 2 produced the normal message:




[root@gnd-rac02 crs]#
[root@gnd-rac02 crs]# sh root.sh
WARNING: directory '/u01/product' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/product' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: gnd-rac01 gnd-pri01 gnd-rac01
node 2: gnd-rac02 gnd-pri02 gnd-rac02
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        gnd-rac01
        gnd-rac02
CSS is active on all nodes.
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth2" is not public. Public interfaces should be used to configure virtual IPs.




After running vipca, the installation proceeded normally; the Oracle software has now been installed successfully.
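For completeness: with multiple NICs it is worth checking which interface roles CRS actually recorded. The oifcfg tool in $CRS_HOME/bin does this in 10gR2. The commands below follow the documented syntax, but they are a sketch for this scenario, not output captured from these systems; eth2 and the subnets come from this thread's hosts file, while eth1 as the private device name is my assumption.

```shell
# List the interfaces CRS knows about and their roles
./oifcfg getif

# If an unwanted interface is registered as public, drop it and register
# the intended public and interconnect interfaces:
./oifcfg delif -global eth0
./oifcfg setif -global eth2/172.20.1.0:public
./oifcfg setif -global eth1/10.1.0.0:cluster_interconnect
```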

[[i] Last edited by tolywang at 2007-12-12 20:12 [/i]]