A Baidu search turns up plenty of articles on how to add and remove RAC nodes, and the operation itself is not technically demanding. Still, I tested it myself and am recording the steps here for later reference, in the hope that they will also help anyone who needs them.
Environment
The environment is a two-node RAC: racnode1 and racnode2.
The test removes racnode1.
The Oracle version is 11.2.0.1.
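Before starting, the cluster membership and clusterware version can be confirmed from either node. A minimal check, assuming the grid OS user and the Grid home paths used throughout this post:
- [grid@racnode1 ~]$ olsnodes -n -s -t                 # list cluster nodes with node number, state and pin status
- [grid@racnode1 ~]$ crsctl query crs activeversion    # confirm the active clusterware version (11.2.0.1 here)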
---- Operations on racnode1 -----
First, disable and stop the listener on racnode1, then check the cluster resources to confirm the operation succeeded.
- [grid@racnode1 ~]$ srvctl disable listener -n racnode1
- [grid@racnode1 ~]$ srvctl stop listener -n racnode1
- [grid@racnode1 ~]$ crsctl status res -t
- ora.LISTENER.lsnr
- OFFLINE OFFLINE racnode1
- ONLINE ONLINE racnode2
- ora.LISTENER_2.lsnr
- OFFLINE OFFLINE racnode1
- ONLINE ONLINE racnode2
- --------------------------------------------------------------------------------
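The same result can be double-checked with srvctl; a quick sketch:
- [grid@racnode1 ~]$ srvctl status listener -n racnode1    # should report the listener as not running on racnode1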
Change into the OUI directory under the GI home and update the node list recorded in the Oracle inventory so that it no longer includes the node being removed (this step can be run on any running node).
- [grid@racnode1 ~]$ cd $ORACLE_HOME/oui/bin
- [grid@racnode1 bin]$ pwd
- /u01/app/11.2.0/grid/oui/bin
- [grid@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racnode2" <<<<< Note: list the nodes you want to keep here; if several nodes remain, list every node except the one being removed, separated by commas
- Starting Oracle Universal Installer...
- Checking swap space: must be greater than 500 MB. Actual 2353 MB Passed
- The inventory pointer is located at /etc/oraInst.loc
- The inventory is located at /u01/app/oraInventory
- /u01/app/oraInventory/ContentsXML
- [grid@racnode1 bin]$ olsnodes -s -t
- racnode1 Active Unpinned
- racnode2 Active Unpinned
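To verify what -updateNodeList actually changed, the node list kept in the central inventory can be inspected directly; a sketch using the inventory location reported above:
- [grid@racnode1 bin]$ grep -i "node name" /u01/app/oraInventory/ContentsXML/inventory.xml    # racnode1 should no longer appear in the node list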
Stop and remove the VIP.
[root@racnode1 bin]# srvctl stop vip -i racnode1-vip
[root@racnode1 bin]# crsctl status res -t
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
OFFLINE OFFLINE racnode1
ONLINE ONLINE racnode2
ora.LISTENER_2.lsnr
OFFLINE OFFLINE racnode1
ONLINE ONLINE racnode2
ora.racnode1.vip
1 OFFLINE OFFLINE
ora.racnode2.vip
1 ONLINE ONLINE racnode2
ora.scan1.vip
1 ONLINE ONLINE racnode1
[root@racnode1 bin]# srvctl remove vip -i racnode1-vip
Please confirm that you intend to remove the VIPs racnode1-vip (y/[n]) y
[root@racnode1 bin]# crsctl status res -t
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
OFFLINE OFFLINE racnode1
ONLINE ONLINE racnode2
ora.LISTENER_2.lsnr
OFFLINE OFFLINE racnode1
ONLINE ONLINE racnode2
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racnode1
ora.hd.db
1 ONLINE ONLINE racnode1 Open
2 ONLINE ONLINE racnode2 Open
ora.oc4j
1 OFFLINE OFFLINE
ora.racnode2.vip
1 ONLINE ONLINE racnode2
ora.scan1.vip
1 ONLINE ONLINE racnode1
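To confirm the VIP has been removed from the cluster configuration, srvctl can be queried again; a sketch:
- [root@racnode1 bin]# srvctl config vip -n racnode1    # should now report that racnode1 has no VIP configured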
Before the node can be deleted, CRS must be stopped first; otherwise the following error is reported:
- [root@racnode1 bin]# crsctl delete node -n racnode1
- CRS-4658: The clusterware stack on node racnode1 is not completely down.
- CRS-4000: Command Delete failed, or completed with errors.
- [root@racnode1 bin]# crsctl stop crs
Run the rootcrs.pl script to deconfigure the clusterware stack on this node.
- [root@racnode1 grid]# cd /u01/app/11.2.0/grid/crs/install/
- [root@racnode1 install]# ./rootcrs.pl -deconfig -force
- Successfully deconfigured Oracle clusterware stack on this node
If the rootcrs.pl script is not run, the following error appears when the node is configured again later:
- CRS is already configured on this node for crshome=0
- Cannot configure two CRS instances on the same cluster.
- Please deconfigure before proceeding with the configuration of new home.
------ Operations on racnode2 ---------
- cd /u01/app/11.2.0/grid/oui/bin
- ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={racnode2}" CRS=TRUE
- [grid@racnode2 bin]$ crsctl status res -t
- --------------------------------------------------------------------------------
- NAME TARGET STATE SERVER STATE_DETAILS
- --------------------------------------------------------------------------------
- Local Resources
- --------------------------------------------------------------------------------
- ora.DG_CRS.dg
- ONLINE ONLINE racnode2
- ora.DG_DATA.dg
- ONLINE ONLINE racnode2
- ora.DG_FRA.dg
- ONLINE ONLINE racnode2
- ora.LISTENER.lsnr
- ONLINE INTERMEDIATE racnode2
- ora.LISTENER_2.lsnr
- ONLINE INTERMEDIATE racnode2
- ora.asm
- ONLINE ONLINE racnode2 Started
- ora.eons
- ONLINE ONLINE racnode2
- ora.gsd
- OFFLINE OFFLINE racnode2
- ora.net1.network
- ONLINE ONLINE racnode2
- ora.ons
- ONLINE ONLINE racnode2
- ora.registry.acfs
- ONLINE ONLINE racnode2
- --------------------------------------------------------------------------------
- Cluster Resources
- --------------------------------------------------------------------------------
- ora.LISTENER_SCAN1.lsnr
- 1 ONLINE ONLINE racnode2
- ora.hd.db
- 1 ONLINE OFFLINE
- 2 ONLINE ONLINE racnode2 Open
- ora.oc4j
- 1 OFFLINE OFFLINE
- ora.racnode2.vip
- 1 ONLINE ONLINE racnode2
- ora.scan1.vip
- 1 ONLINE ONLINE racnode2
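As the earlier attempt on racnode1 showed, crsctl delete node fails while the clusterware stack is still up. Once racnode1 has been stopped and deconfigured, the node is normally deleted from the cluster as root from a surviving node; a sketch reusing the commands already shown above:
- [root@racnode2 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n racnode1
- [root@racnode2 ~]# /u01/app/11.2.0/grid/bin/olsnodes -s -t    # racnode1 should no longer be listed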
How to add the node back to the cluster
If the machine is a completely clean install, running the command below copies the full GI and DB software from another node and then completes the OCR registration and related steps.
- ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode1} CLUSTER_NEW_PRIVATE_NODE_NAMES={racnode1-priv} CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode1-vip}"
If the GI and DB software is already present on this node, there is no need to copy it again; the node can be added directly with the command below. This is much faster, since it only updates the OCR and then starts the instance.
- ./addNode.sh -noCopy "CLUSTER_NEW_NODES={racnode1} CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode1-vip}"
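Before running either form of addNode.sh, the readiness of the node to be added can be verified with cluvfy from an existing node; a sketch, assuming cluvfy from the GI home is on the PATH:
- [grid@racnode2 ~]$ cluvfy stage -pre nodeadd -n racnode1 -verbose    # pre-checks for adding racnode1 back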
Run the root.sh script on the node being added to complete the addition (the root.sh call itself is sketched after the listing below), then update the inventory node list:
- cd /u01/app/11.2.0/grid/oui/bin
- ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={racnode1,racnode2}" CRS=TRUE
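The root.sh run referenced above is not captured in the listing; on the node being added it is invoked as root from the Grid home once addNode.sh finishes. A sketch, assuming the same Grid home used throughout:
- [root@racnode1 ~]# /u01/app/11.2.0/grid/root.sh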
That completes both the removal and the addition; there are not many steps, and they are fairly simple.