『ORACLE』 Adding a RAC Node (11g)

Date: 2022-01-26 08:35:33

1. Host Planning

 

            Node 1         Node 2         Node 3
Hostname    node1          node2          node3
Public IP   10.10.10.10    10.10.10.20    10.10.10.30
VIP         10.10.10.11    10.10.10.22    10.10.10.33
Private IP  192.168.1.10   192.168.1.20   192.168.1.30
Scan IP     10.10.10.100 (shared by all nodes)
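This plan typically maps to /etc/hosts entries on every node. A sketch under assumed naming: the node3-vip name appears later in this article, but the -priv suffixes and the rac-scan name are illustrative, and in production the SCAN should resolve through DNS to three addresses rather than a single hosts entry:

```text
# Public
10.10.10.10   node1
10.10.10.20   node2
10.10.10.30   node3
# VIP
10.10.10.11   node1-vip
10.10.10.22   node2-vip
10.10.10.33   node3-vip
# Private interconnect (illustrative names)
192.168.1.10  node1-priv
192.168.1.20  node2-priv
192.168.1.30  node3-priv
# SCAN (illustrative name; prefer DNS with three addresses)
10.10.10.100  rac-scan
```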

1.1. Adding the Cluster Node

1) On node 1, log in as the grid user and add the grid node in silent mode:

[grid@node1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y

[grid@node1 ~]$ cd $ORACLE_HOME/oui/bin

[grid@node1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"

 

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 3997 MB    Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

 

Performing tests to see whether nodes node2,node3 are available

............................................................... 100% Done.

 

.

-----------------------------------------------------------------------------

Cluster Node Addition Summary

Global Settings

   Source: /u01/app/grid

.........

-----------------------------------------------------------------------------

Instantiating scripts for add node (Wednesday, June 28, 2017 6:19:39 PM CST)

.                                                                 1% Done.

Instantiation of add node scripts complete

 

Copying to remote nodes (Wednesday, June 28, 2017 6:19:41 PM CST)

...............................................................................................                                 96% Done.

Home copied to new nodes

 

Saving inventory on nodes (Wednesday, June 28, 2017 6:22:05 PM CST)

.                                                               100% Done.

Save inventory complete

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oraInventory/orainstRoot.sh #On nodes node3

/u01/app/grid/root.sh #On nodes node3

To execute the configuration scripts:

    1. Open a terminal window

    2. Log in as "root"

    3. Run the scripts in each cluster node  

The Cluster Node Addition of /u01/app/grid was successful.

Please check '/tmp/silentInstall.log' for more details.
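Setting IGNORE_PREADDNODE_CHECKS=Y above bypasses the pre-add validation. When time allows, it is worth running the standard cluvfy pre-check first. A minimal sketch, run as the grid user with the grid home's bin directory on PATH:

```shell
# Pre-add validation that IGNORE_PREADDNODE_CHECKS=Y bypasses.
# cluvfy ships under the grid home; "node3" matches this article's plan.
precheck_nodeadd() {
  if command -v cluvfy >/dev/null 2>&1; then
    cluvfy stage -pre nodeadd -n "$1" -fixup -verbose
  else
    echo "cluvfy not on PATH - run from the grid home as grid"
  fi
}

precheck_nodeadd node3
```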

 

## If an error occurs, check that the /u01/app/grid path is owned by grid:oinstall.

Also check whether the inventory.xml file under /u01/app/oraInventory/ContentsXML records node3; if it does not, the following script can attempt to rebuild inventory.xml:

[grid@node1 ~]$ cd /u01/app/grid/oui/bin/

[grid@node1 bin]$ ./runInstaller -silent -ignoreSysPrereqs -attachHome ORACLE_HOME="/u01/app/grid" ORACLE_HOME_NAME="Ora_gridinfrahome1" CLUSTER_NODES=node1,node2 CRS=true "INVENTORY_LOCATION=/u01/app/oraInventory/ContentsXML" LOCAL_NODE=node1

#### The ContentsXML directory should contain only these three files: comps.xml, inventory.xml, and libs.xml. Delete any other files, or the script will fail.

#### If, after the script succeeds, there is no prompt for /u01/app/oraInventory/orainstRoot.sh #On nodes node3, you can copy orainstRoot.sh from node 1 to node 3.
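The inventory check described above can be scripted. A minimal sketch: the has_node helper is hypothetical, and a sample document stands in for the real /u01/app/oraInventory/ContentsXML/inventory.xml so the logic can be demonstrated anywhere:

```shell
# Check whether an inventory.xml records a given node for the grid home.
has_node() {
  # $1 = path to inventory.xml, $2 = node name
  grep -q "<NODE NAME=\"$2\"/>" "$1"
}

# Sample document (real path: /u01/app/oraInventory/ContentsXML/inventory.xml).
sample=$(mktemp)
cat > "$sample" <<'EOF'
<HOME NAME="Ora_gridinfrahome1" LOC="/u01/app/grid" TYPE="O" IDX="1" CRS="true">
  <NODE_LIST>
    <NODE NAME="node1"/>
    <NODE NAME="node2"/>
  </NODE_LIST>
</HOME>
EOF

if has_node "$sample" node3; then
  echo "node3 present"
else
  echo "node3 missing - try runInstaller -attachHome"
fi
rm -f "$sample"
```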

 

2) After the script above finishes, it prompts you to run the two scripts on node 3 as root:

[root@node3 ~]# /u01/app/grid/root.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to upstart

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Preparing packages for installation...

cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
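Once root.sh succeeds, membership can be confirmed with olsnodes from the grid home. A sketch; a sample listing stands in for the real command when Grid Infrastructure is not available:

```shell
# Confirm node3 is now a cluster member after root.sh completes.
if command -v olsnodes >/dev/null 2>&1; then
  nodes=$(olsnodes)
else
  nodes=$(printf 'node1\nnode2\nnode3')   # sample output for illustration
fi

if printf '%s\n' "$nodes" | grep -qx node3; then
  echo "node3 is a cluster member"
fi
```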

 

1.2. Adding the Database Node

1) Log in as the oracle user and add the Oracle home to the new node in silent mode:

[grid@node1 bin]$ su - oracle

Password:

[oracle@node1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin/

[oracle@node1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"

Performing pre-checks for node addition

 

Checking node reachability...

Node reachability check passed from node "node1"

 

 

Checking user equivalence...

User equivalence check passed for user "oracle"

 

WARNING:

Node "node3" already appears to be part of cluster

 

Pre-check for node addition was successful.

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

 

 

Performing tests to see whether nodes node2,node3 are available

............................................................... 100% Done.

 

.

-----------------------------------------------------------------------------

Cluster Node Addition Summary

Global Settings

   Source: /u01/app/oracle/product/11.2.0/db_1

   New Nodes

Space Requirements

   New Nodes

      node3

         /: Required 4.59GB : Available 16.17GB

Installed Products

   Product Names

      Oracle Database 11g 11.2.0.4.0

.......

 

Instantiating scripts for add node (Thursday, June 29, 2017 4:32:04 PM CST)

.                                                                 1% Done.

Instantiation of add node scripts complete

 

Copying to remote nodes (Thursday, June 29, 2017 4:32:07 PM CST)

...............................................................................................                                 96% Done.

Home copied to new nodes

 

Saving inventory on nodes (Thursday, June 29, 2017 4:36:58 PM CST)

.                                                               100% Done.

Save inventory complete

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes node3

To execute the configuration scripts:

    1. Open a terminal window

    2. Log in as "root"

    3. Run the scripts in each cluster node

    

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.

Please check '/tmp/silentInstall.log' for more details.

2) Run the script above on node 3:

[root@node3 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

1.3. Adding the Database Instance

On node 1, log in as the oracle user and add the instance with dbca:

[oracle@node1 ~]$ dbca -silent -addInstance -nodeList node3 -gdbName rac11g -instanceName rac11g3 -sysDBAUserName sys -sysDBAPassword oracle

Adding instance

1% complete

2% complete

6% complete

13% complete

20% complete

26% complete

33% complete

40% complete

46% complete

53% complete

66% complete

Completing instance management.

76% complete

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/rac11g/rac11g0.log" for further details.
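Before the full verification below, the single new instance can be checked with srvctl. A sketch; sample output stands in when srvctl is not on PATH:

```shell
# Confirm the new instance rac11g3 after dbca -addInstance completes.
if command -v srvctl >/dev/null 2>&1; then
  status=$(srvctl status instance -d rac11g -i rac11g3)
else
  status="Instance rac11g3 is running on node node3"   # sample output
fi
echo "$status"
```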

1.4. Verification

1)sqlplus

SQL> select thread#,status,instance from v$thread;

 

   THREAD# STATUS INSTANCE

---------- ------ --------------------------------------------------------------------------------

         1 OPEN   rac11g1

         2 OPEN   rac11g2

         3 OPEN   rac11g3

2)crsctl

[grid@node1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ARCH.dg

               ONLINE  ONLINE       node1                                    

               ONLINE  ONLINE       node2                                     

               ONLINE  ONLINE       node3                                     

ora.DATA.dg

               ONLINE  ONLINE       node1                                     

               ONLINE  ONLINE       node2                                     

               ONLINE  ONLINE       node3                                    

ora.LISTENER.lsnr

               ONLINE  ONLINE       node1                                     

               ONLINE  ONLINE       node2

               ONLINE  ONLINE       node3

ora.OCR.dg

               ONLINE  ONLINE       node1                                     

               ONLINE  ONLINE       node2

               ONLINE  ONLINE       node3

ora.asm

               ONLINE  ONLINE       node1                    Started

               ONLINE  ONLINE       node2                    Started

               ONLINE  ONLINE       node3                    Started           

ora.gsd

               OFFLINE OFFLINE      node1                                      

               OFFLINE OFFLINE      node2                                      

               OFFLINE OFFLINE      node3                                        

ora.net1.network

               ONLINE  ONLINE       node1                                     

               ONLINE  ONLINE       node2                                     

               ONLINE  ONLINE       node3                                     

ora.ons

               ONLINE  ONLINE       node1                                     

               ONLINE  ONLINE       node2                                     

               ONLINE  ONLINE       node3                                     

ora.registry.acfs

               ONLINE  ONLINE       node1                                     

               ONLINE  ONLINE       node2                                     

               ONLINE  ONLINE       node3                                     

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       node2                                     

ora.cvu

      1        ONLINE  ONLINE       node2                                     

ora.node1.vip

      1        ONLINE  ONLINE       node1                                     

ora.node2.vip

      1        ONLINE  ONLINE       node2                                     

ora.node3.vip

      1        ONLINE  ONLINE       node3                                     

ora.oc4j

      1        ONLINE  ONLINE       node2                                     

ora.rac11g.db

      1        ONLINE  ONLINE       node1                    Open             

      2        ONLINE  ONLINE       node2                    Open             

      3        ONLINE  ONLINE       node3                    Open             

ora.scan1.vip

      1        ONLINE  ONLINE       node2             

3)srvctl

[oracle@node1 ~]$ srvctl config database -d rac11g

Database unique name: rac11g

Database name: rac11g

Oracle home: /u01/app/oracle/product/11.2.0/db_1

Oracle user: oracle

Spfile: +DATA/rac11g/spfilerac11g.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: rac11g

Database instances: rac11g1,rac11g2,rac11g3

Disk Groups: DATA,ARCH

Mount point paths:

Services:

Type: RAC

Database is administrator managed