Gavinsoorma.com.au



Notes on adding and deleting a node from a cluster

Environment

New VM to be added: rac03.localdomain
Public IP Address: 192.168.56.109
Private IP Address: 192.168.10.109
Hub or Leaf: Hub
Node RAM: 4300 MB

Create the rac03 VM

Import the .ova file which we had exported earlier and create the rac03 VM.

Change the name to rac03 in Appliance Settings.

Start the rac03 VM and change the hostname, the MAC addresses and the IP addresses.

[root@rac01 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac03.localdomain
# oracle-rdbms-server-12cR1-preinstall : Add NOZEROCONF=yes
NOZEROCONF=yes

Change the last two characters of the MAC address (eth0, eth1 and eth2) to give it a unique address which is different to the rac01 and rac02 VMs.

Change the Public and Private IP addresses.

Rename 70-persistent-net.rules – it will be recreated on the next VM start.

[root@rac01 ~]# cd /etc/udev/rules.d/
[root@rac01 rules.d]# mv 70-persistent-net.rules 70-persistent-net.rules.old

Shutdown the VM.

Add the shared VirtualBox disks to rac03. Do the same for the ASM2, ASM3, ASM4, ASM5 and ASM6 disks.

Change the MAC address of all three network adapters for the VM. Note – we change the last two characters of the MAC address field using the same value we used earlier while configuring the network for the rac03 VM – '99'. Do the same for Adapter 2 and Adapter 3.

Add entries for the Public IP into the DNS for rac03

[root@dns named]# vi /var/named/localdomain.db
$TTL 1H ; Time to live
$ORIGIN localdomain.
@ IN SOA dns01 root.localdomain. (
    2018060701 ; serial (todays date + todays serial #)
    3H         ; refresh 3 hours
    1H         ; retry 1 hour
    1W         ; expire 1 week
    1D )       ; minimum 24 hour
;
        A  192.168.56.102
        NS dns01 ; name server for localdomain
dns01   A  192.168.56.102
rac01   A  192.168.56.100
rac02   A  192.168.56.101
rac03   A  192.168.56.109
rac-gns A  192.168.56.150 ; A record for the GNS
;
;sub-domain(rac.localdomain) definitions
$ORIGIN rac.localdomain.
@ IN NS rac-gns.localdomain. ;

[root@dns named]# vi 192.168.56.db
$TTL 1H
@ IN SOA dns01 root.localdomain. (
    2018060701 ; serial (todays date + todays serial #)
    3H         ; refresh 3 hours
    1H         ; retry 1 hour
    1W         ; expire 1 week
    1D )       ; minimum 24 hour
;
    NS dns01.localdomain.
100 PTR rac01.localdomain.
101 PTR rac02.localdomain.
102 PTR dns01.localdomain.
109 PTR rac03.localdomain.
150 PTR rac-gns.localdomain. ; reverse mapping for GNS

[root@dns named]# service named restart
Stopping named: [ OK ]
Starting named: [ OK ]

[root@dns named]# nslookup rac03
Server:    192.168.56.102
Address:   192.168.56.102#53

Name:      rac03.localdomain
Address:   192.168.56.109
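Since the reverse zone was updated as well, it is worth confirming the new PTR record before moving on. A minimal check (a sketch; it can be run from any host that uses dns01 at 192.168.56.102 as its name server):

nslookup 192.168.56.109 192.168.56.102

This should return rac03.localdomain from the 192.168.56.db reverse zone.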
Start the rac03 VM

Pre-Node Addition Steps

Configure SSH Equivalence (from rac01)

[root@rac01 bin]# su - grid
[grid@rac01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@rac01 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac01 bin]$ ./runSSHSetup.sh -user grid -hosts "rac01 rac02 rac03" -advanced -exverify
This script will setup SSH Equivalence from the host 'rac01.localdomain' to specified remote hosts.
ORACLE_HOME = /u01/app/12.1.0/grid
JAR_LOC = /u01/app/12.1.0/grid/oui/jlib
SSH_LOC = /u01/app/12.1.0/grid/oui/jlib
OUI_LOC = /u01/app/12.1.0/grid/oui
JAVA_HOME = /u01/app/12.1.0/grid/jdk
Checking if the remote hosts are reachable.
ClusterLogger - log file location: /u01/app/12.1.0/grid/oui/bin/Logs/remoteInterfaces2017-09-20_11-58-09-AM.log
Failed Nodes : rac01 rac03
Remote host reachability check succeeded.
All hosts are reachable. Proceeding further...
NOTE: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. You may be prompted for the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
If The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'. If you type 'yes', the script will remove the old private/public key files and, any previous SSH user setups would be reset.
Enter 'yes', 'no'
no
Enter the password:
Logfile Location : /u01/app/12.1.0/grid/oui/bin/SSHSetup2017-09-20_11-58-19-AM
ClusterLogger - log file location: /u01/app/12.1.0/grid/oui/bin/Logs/remoteInterfaces2017-09-20_11-58-19-AM.log
Checking binaries on remote hosts...
Doing SSHSetup...
Please be patient, this operation might take sometime...Dont press Ctrl+C...
Validating remote binaries..
Remote binaries check succeeded
Local Platform:- Linux
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
--------------------------------------------------------------------------
rac01:--
Running /usr/bin/ssh -x -l oracle rac01 date to verify SSH connectivity has been setup from local host to rac01.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Sep 20 11:58:43 AWST 2017
--------------------------------------------------------------------------
rac03:--
Running /usr/bin/ssh -x -l oracle rac03 date to verify SSH connectivity has been setup from local host to rac03.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Sep 20 11:58:45 AWST 2017
--------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac01 to rac01
--------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Sep 20 11:58:44 AWST 2017
--------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac01 to rac03
--------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Sep 20 11:58:45 AWST 2017
--------------------------------------------------------------------------
Verification from rac01 complete
--------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac03 to rac01
--------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Sep 20 11:58:44 AWST 2017
--------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac03 to rac03
--------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Sep 20 11:58:46 AWST 2017
--------------------------------------------------------------------------
Verification from rac03 complete
SSH verification complete.
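As a quick sanity check that is independent of runSSHSetup.sh, a short loop run as the grid user on rac01 should print one date line per node and never prompt for a password (a sketch using the node names from this environment; BatchMode makes ssh fail rather than prompt):

for h in rac01 rac02 rac03
do
   ssh -o BatchMode=yes $h date
done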
Turn off NTP on rac03

[root@rac03 u01]# chkconfig ntpd off

Note: Ensure adequate disk space exists – at least 7.00 GB of free space in /. Here we can see that the root partition has only 3.4 GB available – so we used the yum clean all command to free up additional disk space.

[root@rac03 u01]# df -h |grep /
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_rac03-lv_root   13G  8.7G  3.4G  72% /
……

[root@rac03 u01]# yum clean all
Loaded plugins: refresh-packagekit, security, ulninfo
Cleaning repos: public_ol6_UEKR4 public_ol6_latest
Cleaning up Everything

[root@rac03 u01]# df -h |grep /
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_rac03-lv_root   13G  4.0G  8.1G  34% /
……

Copy the cvuqdisk-1.0.10-1.rpm to rac03 and install the RPM

[grid@rac01 bin]$ cd $ORACLE_HOME
[grid@rac01 grid]$ cd cv/
[grid@rac01 cv]$ ls
admin baseline cvdata cvutl init log remenv report rpm
[grid@rac01 cv]$ cd rpm
[grid@rac01 rpm]$ ls
cvuqdisk-1.0.10-1.rpm
[grid@rac01 rpm]$ scp -rp cvuqdisk-1.0.10-1.rpm grid@rac03:/tmp
cvuqdisk-1.0.10-1.rpm    100% 8860   8.7KB/s   00:00

[root@rac03 ~]# cd /tmp
[root@rac03 tmp]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@rac03 tmp]#
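To confirm the package is present at the same release on every node, a quick check from rac01 (a sketch; rpm -q simply queries the local RPM database on each host):

for h in rac01 rac02 rac03
do
   echo -n "$h: "; ssh $h rpm -q cvuqdisk
done

Each node should report the cvuqdisk-1.0.10-1 package.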
Check the ASM disks on shared storage are visible on rac03

[root@rac03 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac03 ~]# /usr/sbin/oracleasm listdisks
ASM1
ASM2
ASM3
ASM4
ASM5
ASM6
[root@rac03 ~]#

Check node readiness for addition to the cluster (from rac01)

[grid@rac01 rpm]$ cluvfy stage -pre nodeadd -n rac03 -verbose

Ignore these memory-related errors …

Failures were encountered during execution of CVU verification request "stage -pre nodeadd".
Verifying Physical Memory ...FAILED
rac03: PRVF-7530 : Sufficient physical memory is not available on node "rac03" [Required physical memory = 8GB (8388608.0KB)]
rac01: PRVF-7530 : Sufficient physical memory is not available on node "rac01" [Required physical memory = 8GB (8388608.0KB)]

Add rac03 node to the cluster via the addnode.sh script (from rac01)

[grid@rac01 addnode]$ pwd
/u01/app/12.2.0/grid/addnode

./addnode.sh "CLUSTER_NEW_NODES={rac03}" "CLUSTER_NEW_NODE_ROLES={hub}"

[root@rac03 12.2.0]# cd /u01/app/oraInventory/
[root@rac03 oraInventory]# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac03 oraInventory]# cd /u01/app/12.2.0/grid/
[root@rac03 grid]# ./root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/12.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...Creating /etc/oratab file...Entries will be added to the /etc/oratab file as needed byDatabase Configuration Assistant when a database is createdFinished running generic part of root script.Now product-specific root actions will be performed.Relinking oracle with rac_on optionUsing configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_paramsThe log of current session can be found at: /u01/app/grid/crsdata/rac03/crsconfig/rootcrs_rac03_2018-08-03_12-52-50AM.log2018/08/03 12:52:55 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.2018/08/03 12:52:55 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.2018/08/03 12:53:26 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.2018/08/03 12:53:26 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.2018/08/03 12:53:36 CLSRSC-363: User ignored prerequisites during installation2018/08/03 12:53:36 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.2018/08/03 12:53:37 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.2018/08/03 12:53:39 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.2018/08/03 12:53:43 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.2018/08/03 12:53:43 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.2018/08/03 12:53:44 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.2018/08/03 12:53:46 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.2018/08/03 12:53:49 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.2018/08/03 12:53:49 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.2018/08/03 12:53:51 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.2018/08/03 12:54:07 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'2018/08/03 12:54:28 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.2018/08/03 12:54:31 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completedCRS-4133: Oracle High Availability Services has been stopped.CRS-4123: Oracle High Availability Services has been started.2018/08/03 12:55:15 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.2018/08/03 12:55:17 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completedCRS-4133: Oracle High Availability Services has been stopped.CRS-4123: Oracle High Availability Services has been started.CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac03'CRS-2677: Stop of 'ora.drivers.acfs' on 'rac03' succeededCRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completedCRS-4133: Oracle High Availability Services has been stopped.2018/08/03 12:55:32 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.CRS-4123: Starting Oracle High Availability Services-managed resourcesCRS-2672: Attempting to start 'ora.mdnsd' on 'rac03'CRS-2672: Attempting to start 'ora.evmd' on 'rac03'CRS-2676: Start of 'ora.mdnsd' on 'rac03' succeededCRS-2676: Start of 'ora.evmd' on 'rac03' 
succeededCRS-2672: Attempting to start 'ora.gpnpd' on 'rac03'CRS-2676: Start of 'ora.gpnpd' on 'rac03' succeededCRS-2672: Attempting to start 'ora.gipcd' on 'rac03'CRS-2676: Start of 'ora.gipcd' on 'rac03' succeededCRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac03'CRS-2676: Start of 'ora.cssdmonitor' on 'rac03' succeededCRS-2672: Attempting to start 'ora.cssd' on 'rac03'CRS-2672: Attempting to start 'ora.diskmon' on 'rac03'CRS-2676: Start of 'ora.diskmon' on 'rac03' succeededCRS-2676: Start of 'ora.cssd' on 'rac03' succeededCRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac03'CRS-2672: Attempting to start 'ora.ctssd' on 'rac03'CRS-2676: Start of 'ora.ctssd' on 'rac03' succeededCRS-2672: Attempting to start 'ora.crf' on 'rac03'CRS-2676: Start of 'ora.crf' on 'rac03' succeededCRS-2672: Attempting to start 'ora.crsd' on 'rac03'CRS-2676: Start of 'ora.crsd' on 'rac03' succeededCRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac03' succeededCRS-2672: Attempting to start 'ora.asm' on 'rac03'CRS-2676: Start of 'ora.asm' on 'rac03' succeededCRS-6017: Processing resource auto-start for servers: rac03CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac02'CRS-2672: Attempting to start 'work' on 'rac03'CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac03'CRS-2672: Attempting to start 'ora.chad' on 'rac03'CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac02' succeededCRS-2676: Start of 'work' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.' on 'rac02'CRS-2672: Attempting to start 'ora.ons' on 'rac03'CRS-2677: Stop of 'ora.' on 'rac02' succeededCRS-2672: Attempting to start 'ora.' on 'rac03'CRS-2676: Start of 'ora.chad' on 'rac03' succeededCRS-2676: Start of 'ora.' on 'rac03' succeededCRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac03'CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac03' succeededCRS-2672: Attempting to start 'ora.asm' on 'rac03'CRS-2676: Start of 'ora.ons' on 'rac03' succeededCRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac03' succeededCRS-2676: Start of 'ora.asm' on 'rac03' succeededCRS-2672: Attempting to start 'ora.DATA.dg' on 'rac03'CRS-2676: Start of 'ora.DATA.dg' on 'rac03' succeededCRS-6016: Resource auto-start has completed for server rac03CRS-6024: Completed start of Oracle Cluster Ready Services-managed resourcesCRS-4123: Oracle High Availability Services has been started.2018/08/03 12:58:01 CLSRSC-343: Successfully started Oracle Clusterware stack2018/08/03 12:58:01 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.clscfg: EXISTING configuration version 5 detected.clscfg: version 5 is 12c Release 2.Successfully accumulated necessary OCR keys.Creating OCR keys for user 'root', privgrp 'root'..Operation successful.2018/08/03 12:58:22 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.2018/08/03 12:58:33 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... 
succeeded
[root@rac03 grid]#

Verify the addition of the new node rac03

[grid@rac01 addnode]$ olsnodes -a
rac01   Hub
rac02   Hub
rac03   Hub

Note a new VIP has been allocated dynamically by GNS

[grid@rac01 addnode]$ srvctl config nodeapps
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node rac01
VIP Name: rac01
VIP IPv4 Address: -/rac01-vip/192.168.56.27
VIP IPv6 Address:
VIP is is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node rac02
VIP Name: rac02
VIP IPv4 Address: -/rac02-vip/192.168.56.24
VIP IPv6 Address:
VIP is is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node rac03
VIP Name: rac03
VIP IPv4 Address: -/rac03-vip/192.168.56.30
VIP IPv6 Address:
VIP is is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
[grid@rac01 addnode]$

SQL> select inst_id,host_name,instance_name from gv$instance;

   INST_ID HOST_NAME            INSTANCE_NAME
---------- -------------------- ----------------
         1 rac01.localdomain    +ASM1
         2 rac02.localdomain    +ASM2
         3 rac03.localdomain    +ASM3

[grid@rac01 addnode]$ cluvfy stage -post nodeadd -n rac03 -verbose

Verifying Node Connectivity ...
  Verifying Hosts File ...
  Node Name     Status
  ------------  ------------
  rac01         passed
  rac03         passed
  rac02         passed
  Verifying Hosts File ...PASSED

Interface information for node "rac01"
 Name  IP Address      Subnet        Gateway  Def. Gateway  HW Address         MTU
 ----  --------------- ------------- -------- ------------- ------------------ ----
 eth0  192.168.56.100  192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:19  1500
 eth0  192.168.56.27   192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:19  1500
 eth0  192.168.56.25   192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:19  1500
 eth1  192.168.10.100  192.168.10.0  0.0.0.0  10.0.4.2      08:00:27:BF:9E:BD  1500

Interface information for node "rac02"
 Name  IP Address      Subnet        Gateway  Def. Gateway  HW Address         MTU
 ----  --------------- ------------- -------- ------------- ------------------ ----
 eth0  192.168.56.101  192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:00  1500
 eth0  192.168.56.24   192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:00  1500
 eth0  192.168.56.150  192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:00  1500
 eth0  192.168.56.28   192.168.56.0  0.0.0.0  10.0.4.2      08:00:27:14:3D:00  1500
 eth1  192.168.10.101  192.168.10.0  0.0.0.0  10.0.4.2      08:00:27:BF:9E:00  1500

Interface information for node "rac03"
 Name  IP Address  Subnet  Gateway  Def.
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.109 192.168.56.0 0.0.0.0 10.0.4.2 08:00:27:14:3D:22 1500 eth0 192.168.56.29 192.168.56.0 0.0.0.0 10.0.4.2 08:00:27:14:3D:22 1500 eth0 192.168.56.30 192.168.56.0 0.0.0.0 10.0.4.2 08:00:27:14:3D:22 1500 eth1 192.168.10.109 192.168.10.0 0.0.0.0 10.0.4.2 08:00:27:BF:9E:22 1500 Check: MTU consistency on the private interfaces of subnet "192.168.10.0" Node Name IP Address Subnet MTU ---------------- ------------ ------------ ------------ ---------------- rac03 eth1 192.168.10.109 192.168.10.0 1500 rac01 eth1 192.168.10.100 192.168.10.0 1500 rac02 eth1 192.168.10.101 192.168.10.0 1500 Check: MTU consistency of the subnet "192.168.56.0". Node Name IP Address Subnet MTU ---------------- ------------ ------------ ------------ ---------------- rac02 eth0 192.168.56.101 192.168.56.0 1500 rac01 eth0 192.168.56.100 192.168.56.0 1500 rac01 eth0 192.168.56.27 192.168.56.0 1500 rac01 eth0 192.168.56.25 192.168.56.0 1500 rac02 eth0 192.168.56.24 192.168.56.0 1500 rac02 eth0 192.168.56.150 192.168.56.0 1500 rac02 eth0 192.168.56.28 192.168.56.0 1500 rac03 eth0 192.168.56.109 192.168.56.0 1500 rac03 eth0 192.168.56.29 192.168.56.0 1500 rac03 eth0 192.168.56.30 192.168.56.0 1500 Source Destination Connected? ------------------------------ ------------------------------ ---------------- rac01[eth0:192.168.56.100] rac02[eth0:192.168.56.101] yes rac01[eth0:192.168.56.100] rac01[eth0:192.168.56.27] yes rac01[eth0:192.168.56.100] rac01[eth0:192.168.56.25] yes rac01[eth0:192.168.56.100] rac02[eth0:192.168.56.24] yes rac01[eth0:192.168.56.100] rac02[eth0:192.168.56.150] yes rac01[eth0:192.168.56.100] rac02[eth0:192.168.56.28] yes rac01[eth0:192.168.56.100] rac03[eth0:192.168.56.109] yes rac01[eth0:192.168.56.100] rac03[eth0:192.168.56.29] yes rac01[eth0:192.168.56.100] rac03[eth0:192.168.56.30] yes rac02[eth0:192.168.56.101] rac01[eth0:192.168.56.27] yes rac02[eth0:192.168.56.101] rac01[eth0:192.168.56.25] yes rac02[eth0:192.168.56.101] rac02[eth0:192.168.56.24] yes rac02[eth0:192.168.56.101] rac02[eth0:192.168.56.150] yes rac02[eth0:192.168.56.101] rac02[eth0:192.168.56.28] yes rac02[eth0:192.168.56.101] rac03[eth0:192.168.56.109] yes rac02[eth0:192.168.56.101] rac03[eth0:192.168.56.29] yes rac02[eth0:192.168.56.101] rac03[eth0:192.168.56.30] yes rac01[eth0:192.168.56.27] rac01[eth0:192.168.56.25] yes rac01[eth0:192.168.56.27] rac02[eth0:192.168.56.24] yes rac01[eth0:192.168.56.27] rac02[eth0:192.168.56.150] yes rac01[eth0:192.168.56.27] rac02[eth0:192.168.56.28] yes rac01[eth0:192.168.56.27] rac03[eth0:192.168.56.109] yes rac01[eth0:192.168.56.27] rac03[eth0:192.168.56.29] yes rac01[eth0:192.168.56.27] rac03[eth0:192.168.56.30] yes rac01[eth0:192.168.56.25] rac02[eth0:192.168.56.24] yes rac01[eth0:192.168.56.25] rac02[eth0:192.168.56.150] yes rac01[eth0:192.168.56.25] rac02[eth0:192.168.56.28] yes rac01[eth0:192.168.56.25] rac03[eth0:192.168.56.109] yes rac01[eth0:192.168.56.25] rac03[eth0:192.168.56.29] yes rac01[eth0:192.168.56.25] rac03[eth0:192.168.56.30] yes rac02[eth0:192.168.56.24] rac02[eth0:192.168.56.150] yes rac02[eth0:192.168.56.24] rac02[eth0:192.168.56.28] yes rac02[eth0:192.168.56.24] rac03[eth0:192.168.56.109] yes rac02[eth0:192.168.56.24] rac03[eth0:192.168.56.29] yes rac02[eth0:192.168.56.24] rac03[eth0:192.168.56.30] yes rac02[eth0:192.168.56.150] rac02[eth0:192.168.56.28] yes rac02[eth0:192.168.56.150] rac03[eth0:192.168.56.109] yes rac02[eth0:192.168.56.150] 
rac03[eth0:192.168.56.29] yes rac02[eth0:192.168.56.150] rac03[eth0:192.168.56.30] yes rac02[eth0:192.168.56.28] rac03[eth0:192.168.56.109] yes rac02[eth0:192.168.56.28] rac03[eth0:192.168.56.29] yes rac02[eth0:192.168.56.28] rac03[eth0:192.168.56.30] yes rac03[eth0:192.168.56.109] rac03[eth0:192.168.56.29] yes rac03[eth0:192.168.56.109] rac03[eth0:192.168.56.30] yes rac03[eth0:192.168.56.29] rac03[eth0:192.168.56.30] yes Source Destination Connected? ------------------------------ ------------------------------ ---------------- rac01[eth1:192.168.10.100] rac03[eth1:192.168.10.109] yes rac01[eth1:192.168.10.100] rac02[eth1:192.168.10.101] yes rac03[eth1:192.168.10.109] rac02[eth1:192.168.10.101] yes Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying subnet mask consistency for subnet "192.168.56.0" ...PASSED Verifying subnet mask consistency for subnet "192.168.10.0" ...PASSEDVerifying Node Connectivity ...PASSEDVerifying Cluster Integrity ... Node Name ------------------------------------ rac01 rac02 rac03 Verifying Cluster Integrity ...PASSEDVerifying Node Addition ... Verifying CRS Integrity ...PASSED Verifying Clusterware Version Consistency ...PASSED Verifying '/u01/app/12.2.0/grid' ...PASSEDVerifying Node Addition ...PASSEDVerifying Multicast check ...Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"Verifying Multicast check ...PASSEDVerifying Node Application Existence ...Checking existence of VIP node application (required) Node Name Required Running? Comment ------------ ------------------------ ------------------------ ---------- rac01 yes yes passed rac02 yes yes passed rac03 yes yes passed Checking existence of NETWORK node application (required) Node Name Required Running? Comment ------------ ------------------------ ------------------------ ---------- rac01 yes yes passed rac02 yes yes passed rac03 yes yes passed Checking existence of ONS node application (optional) Node Name Required Running? Comment ------------ ------------------------ ------------------------ ---------- rac01 no yes passed rac02 no yes passed rac03 no yes passed Verifying Node Application Existence ...PASSEDVerifying Single Client Access Name (SCAN) ... SCAN Name Node Running? ListenerName Port Running? ---------------- ------------ ------------ ------------ ------------ ------------ mycluster-scan.mycluster.rac.localdomain rac01 true LISTENER_SCAN1 1521 true mycluster-scan.mycluster.rac.localdomain rac03 true LISTENER_SCAN2 1521 true mycluster-scan.mycluster.rac.localdomain rac02 true LISTENER_SCAN3 1521 true Checking TCP connectivity to SCAN listeners... Node ListenerName TCP connectivity? ------------ ------------------------ ------------------------ rac03 LISTENER_SCAN1 yes rac03 LISTENER_SCAN2 yes rac03 LISTENER_SCAN3 yes Verifying DNS/NIS name service 'mycluster-scan.mycluster.rac.localdomain' ... Verifying Name Service Switch Configuration File Integrity ...PASSED SCAN Name IP Address Status Comment ------------ ------------------------ ------------------------ ---------- mycluster-scan.mycluster.rac.localdomain 192.168.56.25 passed mycluster-scan.mycluster.rac.localdomain 192.168.56.28 passed mycluster-scan.mycluster.rac.localdomain 192.168.56.29 passed Verifying DNS/NIS name service 'mycluster-scan.mycluster.rac.localdomain' ...PASSEDVerifying Single Client Access Name (SCAN) ...PASSEDVerifying User Not In Group "root": grid ... 
Node Name Status Comment ------------ ------------------------ ------------------------ rac03 passed does not exist Verifying User Not In Group "root": grid ...PASSEDVerifying Clock Synchronization ... Node Name Status ------------------------------------ ------------------------ rac03 passed Node Name State ------------------------------------ ------------------------ rac03 Active Node Name Time Offset Status ------------ ------------------------ ------------------------ rac03 0.0 passed Verifying Clock Synchronization ...PASSEDPost-check for node addition was successful. CVU operation performed: stage -post nodeaddDate: Aug 3, 2018 1:20:08 PMCVU home: /u01/app/12.2.0/grid/User: grid[grid@rac01 addnode]$ [grid@rac01 addnode]$ srvctl config gns -list rac01.CLSFRAMEmyclus SRV Target: 192.168.10.100.mycluster Protocol: tcp Port: 19940 Weight: 0 Priority: 0 Flags: 0x101rac01.CLSFRAMEmyclus TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101rac02.CLSFRAMEmyclus SRV Target: 192.168.10.101.mycluster Protocol: tcp Port: 55344 Weight: 0 Priority: 0 Flags: 0x101rac02.CLSFRAMEmyclus TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101rac03.CLSFRAMEmyclus SRV Target: 192.168.10.109.mycluster Protocol: tcp Port: 63340 Weight: 0 Priority: 0 Flags: 0x101rac03.CLSFRAMEmyclus TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101rac01.gipcdhaname SRV Target: 192.168.10.100.mycluster Protocol: tcp Port: 18735 Weight: 0 Priority: 0 Flags: 0x101rac01.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101rac02.gipcdhaname SRV Target: 192.168.10.101.mycluster Protocol: tcp Port: 48644 Weight: 0 Priority: 0 Flags: 0x101rac02.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101rac03.gipcdhaname SRV Target: 192.168.10.109.mycluster Protocol: tcp Port: 56082 Weight: 0 Priority: 0 Flags: 0x101rac03.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101gpnpd h:rac01 c:mycluster u:242a677357084fedbfb3f73a1341a84b.gpnpa1341a84b SRV Target: rac01.mycluster Protocol: tcp Port: 46798 Weight: 0 Priority: 0 Flags: 0x101gpnpd h:rac01 c:mycluster u:242a677357084fedbfb3f73a1341a84b.gpnpa1341a84b TXT agent="gpnpd", cname="mycluster", guid="242a677357084fedbfb3f73a1341a84b", host="rac01", pid="354" Flags: 0x101gpnpd h:rac02 c:mycluster u:242a677357084fedbfb3f73a1341a84b.gpnpa1341a84b SRV Target: rac02.mycluster Protocol: tcp Port: 17660 Weight: 0 Priority: 0 Flags: 0x101gpnpd h:rac02 c:mycluster u:242a677357084fedbfb3f73a1341a84b.gpnpa1341a84b TXT agent="gpnpd", cname="mycluster", guid="242a677357084fedbfb3f73a1341a84b", host="rac02", pid="3991" Flags: 0x101gpnpd h:rac03 c:mycluster u:242a677357084fedbfb3f73a1341a84b.gpnpa1341a84b SRV Target: rac03.mycluster Protocol: tcp Port: 48787 Weight: 0 Priority: 0 Flags: 0x101gpnpd h:rac03 c:mycluster u:242a677357084fedbfb3f73a1341a84b.gpnpa1341a84b TXT agent="gpnpd", cname="mycluster", guid="242a677357084fedbfb3f73a1341a84b", host="rac03", pid="27548" Flags: 0x101CSSHub1.hubCSS SRV Target: rac01.mycluster Protocol: gipc Port: 0 Weight: 0 Priority: 0 Flags: 0x101CSSHub1.hubCSS TXT HOSTQUAL="mycluster" Flags: 0x101CSSHub2.hubCSS SRV Target: rac02.mycluster Protocol: gipc Port: 0 Weight: 0 Priority: 0 Flags: 0x101CSSHub2.hubCSS TXT HOSTQUAL="mycluster" Flags: 0x101CSSHub3.hubCSS SRV Target: rac03.mycluster Protocol: gipc Port: 0 Weight: 0 Priority: 0 Flags: 0x101CSSHub3.hubCSS TXT HOSTQUAL="mycluster" Flags: 0x101rac.localdomain DLV 35805 10 18 ( 
YDYEUoQXUJhZSgp2vdMGKC83KMn/la7GE1E3ne9d+tZhGvdlcf068ShuFnECk86ifGjEAGii3FbcMSR4SJWwEQ== ) Unique Flags: 0x314
rac.localdomain DNSKEY 7 3 10 ( MIIBCgKCAQEAsSXxVdERHCL2DQzcFxeTz+GUCs/Mpugyu1vfyQmS7ag0ds2bEXWSxUaDfVI8mercMNAVz+4VfsKmEfx7XGneQ5PxXCRSxtAvJhJIw1DI/RxmDqaSdQNZJ5sdLnP/xET2KbsE2imWJNgXDOXIojxX2kWqsqIbh6LPEnuryu6znAdwodGKe5D0OI6DhLfQQmI0QQJHeTXawyQOtj97cv0ekGlCtr23ic+V3LowHxhz1OpWU0m36u0tg/Vyu8aT/bWt6OtXt0YytMHO8ccSe5SC54iCTFKKmzHTaiSpQFBm7+8z9oBP3DDg1SWdm28B0fgPIzsVeSYj2FFOuRNPppmxPQIDAQAB ) Unique Flags: 0x314
rac.localdomain NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
mycluster-scan.mycluster A 192.168.56.25 Unique Flags: 0x81
mycluster-scan.mycluster A 192.168.56.28 Unique Flags: 0x81
mycluster-scan.mycluster A 192.168.56.29 Unique Flags: 0x81
mycluster-scan1-vip.mycluster A 192.168.56.25 Unique Flags: 0x81
mycluster-scan2-vip.mycluster A 192.168.56.29 Unique Flags: 0x81
mycluster-scan3-vip.mycluster A 192.168.56.28 Unique Flags: 0x81
rac01-vip.mycluster A 192.168.56.27 Unique Flags: 0x81
rac02-vip.mycluster A 192.168.56.24 Unique Flags: 0x81
rac03-vip.mycluster A 192.168.56.30 Unique Flags: 0x81
mycluster-scan1-vip A 192.168.56.25 Unique Flags: 0x81
mycluster-scan2-vip A 192.168.56.29 Unique Flags: 0x81
mycluster-scan3-vip A 192.168.56.28 Unique Flags: 0x81
Net-X-1.oraAsm SRV Target: 192.168.10.100.mycluster Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Net-X-2.oraAsm SRV Target: 192.168.10.109.mycluster Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Net-X-3.oraAsm SRV Target: 192.168.10.109.mycluster Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Oracle-GNS A 192.168.56.150 Unique Flags: 0x315
mycluster.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 12931 Weight: 0 Priority: 0 Flags: 0x315
mycluster.Oracle-GNS TXT CLUSTER_NAME="mycluster", CLUSTER_GUID="242a677357084fedbfb3f73a1341a84b", NODE_NAME="rac03", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="rac.localdomain" Flags: 0x315
Oracle-GNS-ZM A 192.168.56.150 Unique Flags: 0x315
mycluster.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 61457 Weight: 0 Priority: 0 Flags: 0x315
rac01-vip A 192.168.56.27 Unique Flags: 0x81
rac02-vip A 192.168.56.24 Unique Flags: 0x81
rac03-vip A 192.168.56.30 Unique Flags: 0x81
[grid@rac01 addnode]$

[grid@rac01 addnode]$ srvctl config vip -node rac03
VIP exists: network number 1, hosting node rac03
VIP Name: rac03
VIP IPv4 Address: -/rac03-vip/192.168.56.30
VIP IPv6 Address:
VIP is is individually enabled on nodes:
VIP is individually disabled on nodes:

Non-authoritative answer:
Name:    rac03-vip.rac.localdomain
Address: 192.168.56.30

[grid@rac01 addnode]$ srvctl status vip -node rac03
VIP 192.168.56.30 is enabled
VIP 192.168.56.30 is running on node: rac03

[grid@rac01 addnode]$ nslookup rac03-vip 192.168.56.150
Server:    192.168.56.150
Address:   192.168.56.150#53

Name:      rac03-vip.rac.localdomain
Address:   192.168.56.30
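Because GNS also serves the SCAN for the delegated sub-domain, the same kind of lookup against the GNS VIP (192.168.56.150) is a useful cross-check after the node addition (a sketch; the SCAN name and addresses are the ones reported by cluvfy above):

[grid@rac01 addnode]$ nslookup mycluster-scan.mycluster.rac.localdomain 192.168.56.150

The answer should contain the three SCAN VIPs 192.168.56.25, 192.168.56.28 and 192.168.56.29.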
Remove a node from the cluster

Issue the deinstall -local command to remove the node from the cluster (on rac03)

[root@rac03 ~]# su - grid
[grid@rac03 ~]$ /u01/app/12.2.0/grid/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-08-12_05-01-51AM/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/12.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.2.0/grid
The following nodes are part of this cluster: rac03,rac02,rac01
Checking for sufficient temp space availability on node(s) : 'rac03'
## [END] Install check configuration ##

Traces log file: /tmp/deinstall2018-08-12_05-01-51AM/logs//crsdc_2018-08-12_05-02-23-AM.log

Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2018-08-12_05-01-51AM/logs/netdc_check2018-08-12_05-02-25-AM.log
Network Configuration check config END

Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2018-08-12_05-01-51AM/logs/asmcadc_check2018-08-12_05-02-25-AM.log

Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2018-08-12_05-01-51AM/logs/databasedc_check2018-08-12_05-02-25-AM.log
Oracle Grid Management database was found in this Grid Infrastructure home
Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.2.0/grid
The following nodes are part of this cluster: rac03,rac02,rac01
The cluster node(s) on which the Oracle home deinstallation will be performed are: rac03
Oracle Home selected for deinstall is: /u01/app/12.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Option -local will not modify any ASM configuration.
Oracle Grid Management database was found in this Grid Infrastructure home
Local configuration of Oracle Grid Management database will be removed
Do you want to continue (y - yes, n - no)?
[n]: y
A log of this session will be written to: '/tmp/deinstall2018-08-12_05-01-51AM/logs/deinstall_deconfig2018-08-12_05-02-20-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-08-12_05-01-51AM/logs/deinstall_deconfig2018-08-12_05-02-20-AM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2018-08-12_05-01-51AM/logs/databasedc_clean2018-08-12_05-03-41-AM.log
ASM de-configuration trace file location: /tmp/deinstall2018-08-12_05-01-51AM/logs/asmcadc_clean2018-08-12_05-03-41-AM.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2018-08-12_05-01-51AM/logs/netdc_clean2018-08-12_05-03-41-AM.log
Network Configuration clean config END

Run the following command as the root user or the administrator on node "rac03".
/u01/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2018-08-12_05-01-51AM/response/deinstall_OraGI12Home1.rsp"
Press Enter after you finish running the above commands

Note: we run this command from another terminal session as the root user

[root@rac03 ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2018-08-12_05-01-51AM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2018-08-12_05-01-51AM/response/deinstall_OraGI12Home1.rsp
The log of current session can be found at: /tmp/deinstall2018-08-12_05-01-51AM/logs/crsdeconfig_rac03_2018-08-12_05-05-47AM.log
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f986ce6b1b3, pid=1812, tid=140293350962944
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libhasgen12.so+0x1bd1b3] clsdhcp_parsemessageoptions+0xc1
#
# Failed to write core dump. Core dumps have been disabled.
To enable core dumping, try "ulimit -c unlimited" before starting Java again## An error report file with more information is saved as:# /u01/app/12.2.0/grid/hs_err_pid1812.log## If you would like to submit a bug report, please visit:# The crash happened outside the Java Virtual Machine in native code.# See problematic frame for where to report the bug.#/u01/app/12.2.0/grid/bin/srvctl: line 327: 1812 Aborted (core dumped) ${JRE} ${JRE_OPTIONS} -DORACLE_HOME=${ORACLE_HOME} -classpath ${CLASSPATH} ${SRVM_PROPERTY_DEFS} oracle.ops.opsctl.OPSCTLDriver "$@"2018/08/12 05:06:23 CLSRSC-180: An error occurred while executing the command '/u01/app/12.2.0/grid/bin/srvctl remove vip -i rac03 -y -f'CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'CRS-2673: Attempting to stop 'ora.crsd' on 'rac03'CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac03'CRS-2673: Attempting to stop 'ora.chad' on 'rac03'CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac03'CRS-2673: Attempting to stop 'ora.MGMT.dg' on 'rac03'CRS-2677: Stop of 'ora.OCR.dg' on 'rac03' succeededCRS-2677: Stop of 'ora.MGMT.dg' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.asm' on 'rac03'CRS-2677: Stop of 'ora.asm' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac03'CRS-2677: Stop of 'ora.chad' on 'rac03' succeededCRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac03' succeededCRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac03' has completedCRS-2677: Stop of 'ora.crsd' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.storage' on 'rac03'CRS-2673: Attempting to stop 'ora.crf' on 'rac03'CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac03'CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac03'CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac03'CRS-2677: Stop of 'ora.drivers.acfs' on 'rac03' succeededCRS-2677: Stop of 'ora.storage' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.asm' on 'rac03'CRS-2677: Stop of 'ora.crf' on 'rac03' succeededCRS-2677: Stop of 'ora.gpnpd' on 'rac03' succeededCRS-2677: Stop of 'ora.mdnsd' on 'rac03' succeededCRS-2677: Stop of 'ora.asm' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac03'CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.ctssd' on 'rac03'CRS-2673: Attempting to stop 'ora.evmd' on 'rac03'CRS-2677: Stop of 'ora.ctssd' on 'rac03' succeededCRS-2677: Stop of 'ora.evmd' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.cssd' on 'rac03'CRS-2677: Stop of 'ora.cssd' on 'rac03' succeededCRS-2673: Attempting to stop 'ora.gipcd' on 'rac03'CRS-2677: Stop of 'ora.gipcd' on 'rac03' succeededCRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completedCRS-4133: Oracle High Availability Services has been stopped.2018/08/12 05:07:08 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.2018/08/12 05:07:23 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.2018/08/12 05:07:24 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node[root@rac03 ~]#In the original grid user session hit <ENTER>######################### DECONFIG CLEAN OPERATION END ################################################ DECONFIG CLEAN OPERATION SUMMARY #######################Local configuration of Oracle Grid Management database was removed successfullyOracle Clusterware is stopped and successfully de-configured on 
node "rac03"
Oracle Clusterware is stopped and de-configured successfully.
#################################################################################
############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2018-08-12_05-01-51AM/response/deinstall_2018-08-12_05-02-20-AM.rsp
Location of logs /tmp/deinstall2018-08-12_05-01-51AM/logs/

############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/tmp/deinstall2018-08-12_05-01-51AM/logs/deinstall_deconfig2018-08-12_05-02-20-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-08-12_05-01-51AM/logs/deinstall_deconfig2018-08-12_05-02-20-AM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to rac03
Setting CLUSTER_NODES to rac03
Setting CRS_HOME to true
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-08-12_05-01-51AM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/12.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/12.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.
Run 'rm -r /etc/oraInst.loc' as root on node(s) 'rac03' at the end of the session.
Run 'rm -r /opt/ORCLfmap' as root on node(s) 'rac03' at the end of the session.
Review the permissions and contents of '/u01/app/grid' on nodes(s) 'rac03'.
If there are no Oracle home(s) associated with '/u01/app/grid', manually delete '/u01/app/grid' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#################################################################################
############# ORACLE DEINSTALL TOOL END #############
[grid@rac03 ~]$

Remove the node from the cluster

Note: we run this as root from an existing node

[root@rac01 bin]# cd /u01/app/12.2.0/grid/bin
[root@rac01 bin]# ./crsctl delete node -n rac03
CRS-4661: Node rac03 successfully deleted.

Verify the node deletion (from rac01)

[grid@rac01 addnode]$ cluvfy stage -post nodedel -n rac03 -verbose

Verifying Node Removal ...
  Verifying CRS Integrity ...PASSED
  Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...PASSED

Post-check for node removal was successful.

CVU operation performed: stage -post nodedel
Date: Aug 11, 2018 8:26:03 AM
CVU home: /u01/app/12.2.0/grid/
User: grid

[grid@rac01 addnode]$ olsnodes -a
rac01   Hub
rac02   Hub
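As a final cross-check (not part of the original listing, just a sketch), the clusterware should no longer know about the rac03 VIP and GNS should no longer resolve it:

[grid@rac01 ~]$ srvctl config vip -node rac03
[grid@rac01 ~]$ nslookup rac03-vip 192.168.56.150

The first command is expected to report that the VIP does not exist, and the GNS lookup is expected to fail now that the node has been removed.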