Upgrade Oracle Grid Infrastructure From 11g to 12c
Check Prerequisites [ALL NODES]
**The new Grid home must not be placed under one of the Oracle base directories, including the Oracle base directory of the Oracle Grid Infrastructure installation owner.
**It must not be placed in the home directory of an installation owner. These requirements apply specifically to Oracle Grid Infrastructure for a cluster installations.
ORACLE_BASE=/opt/grid/product/12.2.0
ORACLE_HOME=/opt/grid/product/12.2.0.1/grid
Cleanup /opt/grid/product
Red Hat Enterprise Linux 6.4: 2.6.32-358.el6.x86_64 or later
z08s-temp02a:/home/mdashok# uname -a
Linux z08s-temp02a.zebra.lan 2.6.32-696.3.1.el6.x86_64 #1 SMP Thu Apr 20 11:30:02 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
df /vol2/opt/grid
Filesystem               1K-blocks     Used  Available Use% Mounted on
/dev/mapper/vg101-lvol01 103077152 70583444   27251036  73% /vol2/opt/grid
df /dev/shm
Filesystem 1K-blocks      Used Available Use% Mounted on
tmpfs      529323164 172268044 357055120  33% /dev/shm
z08s-temp02a:/opt/grid/product/11.2.0.4/grid# ll -d /dev/shm
drwxrwxrwt. 2 root root 1588160 Oct 15 14:01 /dev/shm
ll /etc/oraInst.loc
-rw-r--r--. 1 grid oinstall 65 Aug 15 2014 /etc/oraInst.loc
cat /etc/oraInst.loc
inventory_loc=/opt/grid/product/oraInventory
inst_group=oinstall
/home/grid# . ./grid.env
env | grep ORA_CRS_HOME (should return nothing; ORA_CRS_HOME must not be set)
umask
0022
grep MemTotal /proc/meminfo
MemTotal:       1058646328 kB
grep SwapTotal /proc/meminfo
SwapTotal:      32767996 kB
df -h /tmp
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvol01  20G  577M   18G   4% /tmp
free
             total       used       free     shared    buffers     cached
Mem:    1058646328 1008206184   50440144  310099496   12525040  876558400
-/+ buffers/cache:  119122744  939523584
Swap:     32767996          0   32767996
uname -m
x86_64
df -h /dev/shm
Filesystem  Size  Used Avail Use% Mounted on
tmpfs       505G  165G  341G  33% /dev/shm
Confirm Transparent HugePages (THP) are Disabled ([never] must be the selected value)
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
4.14 Enabling the Name Service Cache Daemon
To allow Oracle Clusterware to better tolerate network failures with NAS devices or NFS mounts, enable the Name Service Cache Daemon (nscd).
z08s-temp02a:/home/grid# ps -fe | grep nscd | grep -v grep
nscd 25210 1 0 Jun16 ? 11:33:40 /usr/sbin/nscd
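If nscd is not running, a minimal sketch to enable it on RHEL 6 (as root):
chkconfig nscd on      # start at boot
service nscd start     # start now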
**4.15 Verifying the Disk I/O Scheduler on Linux [http://docs.oracle.com/database/122/CWLIN/setting-the-disk-io-scheduler-on-linux.htm#CWLIN-GUID-B59FCEFB-20F9-4E64-8155-7A61B38D8CDF]
For best performance for Oracle ASM, Oracle recommends that you use the Deadline I/O Scheduler.
cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
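To sweep every SCSI block device at once, a sketch (narrow the sd* glob to the devices backing your ASM disks):
for f in /sys/block/sd*/queue/scheduler; do echo "$f : $(cat $f)"; done
To change it non-persistently on one device (as root; sdb is a placeholder, and the change should be made persistent across reboots, e.g. via the elevator=deadline kernel boot parameter):
echo deadline > /sys/block/sdb/queue/scheduler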
Confirm NTP is running
When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes.
If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.
ps -fu ntp
UID   PID PPID C STIME TTY TIME     CMD
ntp 25312    1 0 Jun16 ?   00:16:37 ntpd -x -u ntp:ntp -p /var/run/ntpd.pid -g
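Whether CTSS ends up in observer or active mode can be confirmed directly after the upgrade (a quick check):
$ORACLE_HOME/bin/crsctl check ctss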
nslookup TEMP02-DB
Server:  10.x.x.248
Address: 10.x.x.248#53

Name: TEMP02-DB.zebra.lan
Address: 10.xx.x.65
Name: TEMP02-DB.zebra.lan
Address: 10.xx.x.64
Name: TEMP02-DB.zebra.lan
Address: 10.xx.x.63
**5.7 Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
Broadcast communications (ARP and UDP) must work properly across all the public and private interfaces configured for use by Oracle Grid Infrastructure. The broadcast must work across any configured VLANs as used by the public or private interfaces.
When configuring public and private network interfaces for Oracle RAC, you must enable Address Resolution Protocol (ARP). Highly Available IP (HAIP) addresses do not require ARP on the public network, but for VIP failover you need to enable ARP. Do not configure NOARP.
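A quick sketch to confirm NOARP is not set on an interface (replace eth0 with your actual public/private interface names):
ip link show eth0 | head -1
The flag list between < > must not contain NOARP.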
**5.10 Configuration Requirements for Oracle Flex Clusters
cat /etc/security/limits.conf
@oinstall soft nproc   262144
@oinstall hard nproc   262144
@oinstall soft nofile  65536
@oinstall hard nofile  65536
@oinstall soft memlock 3145728
@oinstall hard memlock 3145728
@oinstall soft stack   10240
@oinstall hard stack   32768
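To confirm the limits actually apply, log in fresh as a member of oinstall (e.g. grid) and check (a sketch; values should match the soft limits above):
ulimit -u    # max user processes     -> 262144
ulimit -n    # open files             -> 65536
ulimit -l    # max locked memory (KB) -> 3145728
ulimit -s    # stack size (KB)        -> 10240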
Set ASM diskgroup compatibility
Check values in gv$asm_diskgroup (pull the SQL output via one database instance)
srvctl start database -d `whoami`; srvctl status database -d `whoami`;
select distinct name
     , allocation_unit_size
     , compatibility
  -- , database_compatibility
     , decode(state, 'CONNECTED', 'MOUNTED', state) state
     , 'alter diskgroup '||name||' set attribute ''compatible.asm'' = ''11.2.0.2.0'';'
  from gv$asm_diskgroup
 where compatibility = '11.2.0.0.0'
 order by name;
/home/grid# . ./grid.env
sqlplus /nolog
SQL> conn / as sysasm
Connected.
<< Run + validate the SQL generated by the query above to modify all diskgroups >>
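The query emits one statement per non-compliant disk group; illustratively, for a disk group named GRID it would generate:
alter diskgroup GRID set attribute 'compatible.asm' = '11.2.0.2.0';
Re-run a quick check afterwards to validate:
select name, compatibility from gv$asm_diskgroup order by name;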
srvctl stop database -d `whoami`;
SQL> @asm_disks_GRID.sql
DISK_PATH                  HEADER_STATU DISK_TOTAL_MB DISK_USED_MB PCT_USED ALLOCATION_UNIT_SIZE V V FAILGRP
/dev/oracleasm/disks/GRID1 MEMBER               30720          436     1.42              4194304 Y Y REGULAR
/dev/oracleasm/disks/GRID2 MEMBER               30720          432     1.41              4194304 Y Y REGULAR
/dev/oracleasm/disks/GRID3 MEMBER               30720          436     1.42              4194304 Y Y REGULAR
**root
/root# whoami
root
cd /opt/grid/product
/opt/grid/product# mkdir -p 12.2.0 12.2.0.1/grid
chown -R grid:oinstall 12.2.0 12.2.0.1
ll -d 12.2.0 12.2.0.1 12.2.0.1/grid
drwxr-x---. 2 grid oinstall 4096 Oct 17 16:22 12.2.0
drwxr-x---. 3 grid oinstall 4096 Oct 17 13:24 12.2.0.1
drwxr-x---. 2 grid oinstall 4096 Oct 17 13:24 12.2.0.1/grid
**Remove the empty grid directory on nodes B/C/D only, to prevent the error noted below
z08s-temp02b/c/d
cd /opt/grid/product/12.2.0.1
rmdir grid
As grid@z08s-temp02a (the installer pushes the software to the remaining nodes)
z08s-temp02a:/home/oracle/software/12.2.0.1/grid# unzip V840012-01.zip -d /opt/grid/product/12.2.0.1/grid
z08s-temp02a:/opt/grid/product/12.2.0.1/grid# find . | wc -l
20862
runcluvfy.sh stage -pre crsinst -upgrade [-rolling]
/home/grid# whoami
grid
z08s-temp02a:/home/grid# script $HOME/rjc/runcluvfy.pre12201.log
Script started, file is /home/grid/rjc/runcluvfy.pre12201.log
z08s-temp02a:/home/grid# /opt/grid/product/12.2.0.1/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/grid/product/11.2.0.4/grid -dest_crshome /opt/grid/product/12.2.0.1/grid -dest_version 12.2.0.1 -verbose
Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following nodes:
        z08s-temp02d,z08s-temp02c,z08s-temp02b,z08s-temp02a
Checks did not pass for the following ASM disk groups:
Verifying Disk group ASM compatibility setting ...FAILED
_DB: PRVE-3175 : ASM compatibility for ASM disk group "_DB" is set to "11.2.0.0.0", which is less than the minimum supported value "11.2.0.2.0".
Solution
srvctl start database -d `whoami`; srvctl status database -d `whoami`;
select distinct name
     , allocation_unit_size
     , compatibility
  -- , database_compatibility
     , decode(state, 'CONNECTED', 'MOUNTED', state) state
     , 'alter diskgroup '||name||' set attribute ''compatible.asm'' = ''11.2.0.2.0'';'
  from gv$asm_diskgroup
 where compatibility = '11.2.0.0.0'
 order by name;
srvctl stop database -d `whoami`;
X : Pre-12.2.0.1 Backup GRID (olr/ocr/vote/spfile)
OCR (Oracle Cluster Registry) [Node-A ONLY]
**root on Node-A ONLY
/opt/grid/product/11.2.0.4/grid/bin/ocrconfig -manualbackup
**Output will reflect the master node at the time the backup was taken; confirm on the file system
z08s-temp02a 2017/10/17 13:42:39 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup_20171017_134239.ocr
z08s-temp02a 2016/11/04 15:10:02 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup_20161104_151002.ocr
z08s-temp02a 2016/09/24 19:43:30 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup_20160924_194330.ocr
z03s-TEMP02b 2016/09/24 16:19:46 /opt/grid/product/grid/cdata/TEMP02-DB/backup_20160924_161946.ocr
z08s-temp02a 2016/09/23 14:13:49 /opt/grid/product/grid/cdata/TEMP02-DB/backup_20160923_141349.ocr
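The step heading also calls out the OLR, which is node-local; a sketch to back it up manually as root on each node:
/opt/grid/product/11.2.0.4/grid/bin/ocrconfig -local -manualbackup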
Additionally, confirm regularly occurring automatic backups are available:
/home/grid# . ./grid.env
$ORACLE_HOME/bin/ocrconfig -showbackup
**Output will reflect the master node at the time the backup was taken; confirm on the file system
z08s-temp02a 2017/10/17 13:36:15 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup00.ocr
z08s-temp02a 2017/10/17 09:36:15 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup01.ocr
z08s-temp02a 2017/10/17 05:36:14 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup02.ocr
z08s-temp02a 2017/10/16 05:36:08 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/day.ocr
z08s-temp02a 2017/10/03 17:35:04 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/week.ocr
z08s-temp02a 2017/10/17 13:42:39 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup_20171017_134239.ocr
z08s-temp02a 2016/11/04 15:10:02 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup_20161104_151002.ocr
z08s-temp02a 2016/09/24 19:43:30 /opt/grid/product/11.2.0.4/grid/cdata/TEMP02-DB/backup_20160924_194330.ocr
z03s-TEMP02b 2016/09/24 16:19:46 /opt/grid/product/grid/cdata/TEMP02-DB/backup_20160924_161946.ocr
z08s-temp02a 2016/09/23 14:13:49 /opt/grid/product/grid/cdata/TEMP02-DB/backup_20160923_141349.ocr
ASM spfile [Node-A ONLY]
SQL> show parameter spfile
spfile string +GRID/TEMP02-db/asmparameterfile/registry.253.800399127
SQL> create pfile='$HOME/rjc/initASM.ora.pre12201' from spfile;
File created.
Launch Grid Setup Utility [Node-A ONLY]
Note : CRS should be UP
**grid
z08s-temp02a:/home/grid# export DISPLAY=`hostname -s`:24
z08s-temp02a:/home/grid# cd /opt/grid/product/12.2.0.1/grid
z08s-temp02a:/opt/grid/product/12.2.0.1/grid# ./gridSetup.sh
Next >>
SSH connectivity >>
Yes
Next >>
OK
Next>>
Next>>
Yes>>
**SKIP this slide; the ORACLE_BASE value shown is incorrect, and the resulting error appears here for informational purposes only
Next >>
**grid@z08s-temp02b/c/d
cd /opt/grid/product/12.2.0.1
rmdir grid
OK + Next >>
Yes + Save Response File >>
Install
**root
**Run this on A, then B, then C, then D (DO NOT RUN ON MULTIPLE NODES IN PARALLEL)
/opt/grid/product/12.2.0.1/grid/rootupgrade.sh
(say "y" to overwriting files in /usr/local/bin)
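Progress can be watched from a second window by tailing the newest rootcrs log (a sketch; the exact path is printed near the top of the script output below):
tail -f $(ls -t /opt/grid/product/12.2.0/crsdata/$(hostname -s)/crsconfig/rootcrs_*.log | head -1)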
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/grid/product/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/grid/product/12.2.0/crsdata/z08s-temp02a/crsconfig/rootcrs_z08s-temp02a_2017-10-17_04-43-20PM.log
2017/10/17 16:43:28 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/10/17 16:43:28 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/10/17 16:44:21 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/10/17 16:44:21 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/10/17 16:44:52 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/10/17 16:44:57 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/10/17 16:44:57 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/10/17 16:45:14 CLSRSC-515: Starting OCR manual backup.
2017/10/17 16:45:22 CLSRSC-516: OCR manual backup successful.
2017/10/17 16:45:49 CLSRSC-486: At this stage of upgrade, the OCR has changed. Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2017/10/17 16:45:49 CLSRSC-541: To downgrade the cluster: 1. All nodes that have been upgraded must be downgraded.
2017/10/17 16:45:49 CLSRSC-542: 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2017/10/17 16:45:49 CLSRSC-615: 3. The last node to downgrade cannot be a Leaf node.
2017/10/17 16:46:05 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/10/17 16:46:05 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/10/17 16:46:12 CLSRSC-363: User ignored prerequisites during installation
2017/10/17 16:51:45 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/10/17 16:52:03 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/10/17 16:52:17 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/10/17 16:52:24 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2017/10/17 16:52:24 CLSRSC-482: Running command: '/opt/grid/product/12.2.0.1/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /opt/grid/product/11.2.0.4/grid -oldCRSVersion 11.2.0.4.0 -firstNode true -startRolling true '
ASM configuration upgraded in local node successfully.
2017/10/17 16:52:33 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2017/10/17 16:52:48 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/10/17 16:53:17 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/10/17 16:53:19 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/10/17 16:53:20 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/10/17 16:53:39 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/10/17 16:53:40 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/10/17 16:53:53 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/10/17 16:54:08 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/10/17 16:54:58 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'z08s-temp02a'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'z08s-temp02a' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/10/17 16:55:46 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/10/17 16:56:00 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'z08s-temp02a'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'z08s-temp02a' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
(z08s-temp02 hung here for what felt like an eternity...)
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'z08s-temp02a'
CRS-2673: Attempting to stop 'ora.crsd' on 'z08s-temp02a'
CRS-2677: Stop of 'ora.crsd' on 'z08s-temp02a' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'z08s-temp02a'
CRS-2673: Attempting to stop 'ora.crf' on 'z08s-temp02a'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'z08s-temp02a'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'z08s-temp02a'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'z08s-temp02a'
CRS-2677: Stop of 'ora.drivers.acfs' on 'z08s-temp02a' succeeded
CRS-2677: Stop of 'ora.crf' on 'z08s-temp02a' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'z08s-temp02a' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'z08s-temp02a' succeeded
CRS-2677: Stop of 'ora.asm' on 'z08s-temp02a' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'z08s-temp02a'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'z08s-temp02a' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'z08s-temp02a'
CRS-2673: Attempting to stop 'ora.evmd' on 'z08s-temp02a'
CRS-2677: Stop of 'ora.ctssd' on 'z08s-temp02a' succeeded
CRS-2677: Stop of 'ora.evmd' on 'z08s-temp02a' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'z08s-temp02a'
CRS-2677: Stop of 'ora.cssd' on 'z08s-temp02a' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'z08s-temp02a'
CRS-2677: Stop of 'ora.gipcd' on 'z08s-temp02a' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'z08s-temp02a' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/10/17 17:13:41 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/10/17 17:13:41 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2017/10/17 17:13:48 CLSRSC-474: Initiating upgrade of resource types
2017/10/17 17:15:39 CLSRSC-482: Running command: 'srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first'
2017/10/17 17:15:39 CLSRSC-475: Upgrade of resource types successfully initiated.
2017/10/17 17:16:00 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/10/17 17:16:24 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
**Specific to the final node
2017/12/27 15:37:01 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2017/12/27 15:38:47 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2017/12/27 15:38:47 CLSRSC-482: Running command: '/opt/grid/product/12.2.0.1/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 12.2.0.1.0.
2017/12/27 15:40:49 CLSRSC-479: Successfully set Oracle Clusterware active version
2017/12/27 15:40:57 CLSRSC-476: Finishing upgrade of resource types
2017/12/27 15:41:42 CLSRSC-482: Running command: 'srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p last'
2017/12/27 15:41:42 CLSRSC-477: Successfully completed upgrade of resource types
2017/12/27 15:42:16 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/12/27 15:42:35 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
**Timings (rootupgrade.sh per node; this step will take a while, ~30 minutes on the first node)
z08s-temp02a : 35 minutes
z08s-temp02b : 6 minutes
z08s-temp02c : 6 minutes
z08s-temp02d : 15 minutes
z08s-temp02a:/opt/grid/product/oraInventory/logs/GridSetupActions2017-10-17_04-10-12PM# grep -i fail time2017-10-17_04-10-12PM.log
# Configure Oracle Grid Infrastructure for a Cluster failed. # 1988338 # 1508282710204
# Configure Oracle Grid Infrastructure for a Cluster failed. # 1988338 # 1508282710204
# Configure Oracle Grid Infrastructure for a Cluster failed. # 1988338 # 1508282710204
z08s-temp02a:/opt/grid/product/oraInventory/logs/GridSetupActions2017-10-17_04-10-12PM# cat gridSetupActions2017-10-17_04-10-12PM.log
OK >>
Fix /dev/oracleasm/disks/* on z08s-temp02a/b/c/d to owner grid:oinstall with mode 0660 (see the sketch below)
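A sketch of the fix, run as root on every node (device paths per this cluster's ASMLib layout):
chown grid:oinstall /dev/oracleasm/disks/*
chmod 0660 /dev/oracleasm/disks/*
Since ASMLib recreates these device nodes at boot, also verify ORACLEASM_UID/ORACLEASM_GID in /etc/sysconfig/oracleasm so the ownership survives a restart.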
Retry >>
Wait for 100%...
Close >>
Update $HOME/grid.env [ALL NODES]
/home/grid# cat grid.env
export ORACLE_BASE=/opt/grid/product/12.2.0
export ORACLE_HOME=/opt/grid/product/12.2.0.1/grid
export ORACLE_SID=+ASM1
export PATH=$PATH:$ORACLE_HOME/bin
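Re-source and spot-check on each node (a quick sketch):
/home/grid# . ./grid.env
env | grep ^ORACLE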
Update ASM init.ora
**If necessary; this was required in Project-Support but not Production-Support
SQL> select name, value from v$parameter where value like '%11.2%' order by 1;

NAME             VALUE
---------------- -----------------------------------------------------
core_dump_dest   /opt/grid/product/11.2.0.4/diag/asm/+asm/+ASM1/cdump
diagnostic_dest  /opt/grid/product/11.2.0.4
ROLLING [Node-Specific]
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM1/adump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM2/adump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM3/adump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM4/adump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM1/cdump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM2/cdump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM3/cdump
mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/+ASM4/cdump
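Equivalently, the eight mkdir calls above collapse into one loop (a sketch producing the same paths):
for sid in +ASM1 +ASM2 +ASM3 +ASM4; do
  mkdir -p /opt/grid/product/12.2.0/diag/asm/+asm/${sid}/{adump,cdump}
done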
/home/grid# sqlplus /nolog
SQL> conn / as sysasm
alter system set diagnostic_dest='/opt/grid/product/12.2.0' scope=spfile sid='*';
alter system set audit_file_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM1/adump' scope=spfile sid='+ASM1';
alter system set audit_file_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM2/adump' scope=spfile sid='+ASM2';
alter system set audit_file_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM3/adump' scope=spfile sid='+ASM3';
alter system set audit_file_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM4/adump' scope=spfile sid='+ASM4';
alter system set core_dump_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM1/cdump' scope=both sid='+ASM1';
alter system set core_dump_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM2/cdump' scope=both sid='+ASM2';
alter system set core_dump_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM3/cdump' scope=both sid='+ASM3';
alter system set core_dump_dest='/opt/grid/product/12.2.0/diag/asm/+asm/+ASM4/cdump' scope=both sid='+ASM4';
shutdown immediate;
startup;
show parameter diagnostic_dest
NAME             TYPE    VALUE
diagnostic_dest  string  /opt/grid/product/12.2.0
show parameter audit_file_dest
NAME             TYPE    VALUE
audit_file_dest  string  /opt/grid/product/12.2.0/diag/asm/+asm/+ASM1/adump
show parameter core_dump_dest
NAME             TYPE    VALUE
core_dump_dest   string  /opt/grid/product/12.2.0/diag/asm/+asm/+ASM1/cdump
/home/grid# asmcmd lsdg
(all disk groups report MOUNTED)
z08s-temp02a:/home/grid# srvctl status asm
ASM is running on z08s-temp02c,z08s-temp02b,z08s-temp02a,z08s-temp02d
Verify
z08s-temp02a:/home/grid# $ORACLE_HOME/bin/crsctl query crs activeversion;
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
z08s-temp02a:/home/grid# $ORACLE_HOME/bin/ocrcheck;
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :      16004
         Available space (kbytes) :     393564
         ID                       :  641978920
         Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
z08s-temp02a:/home/grid# $ORACLE_HOME/bin/crsctl query css votedisk;
##  STATE    File Universal Id                File Name                    Disk group
--  -----    -----------------                ---------                    ----------
 1. ONLINE   aa0a7b7e51be4f48bf3edc9fc9b1334a (/dev/oracleasm/disks/GRID1) [GRID]
 2. ONLINE   7401041c7fa24fc5bfbdb0b2ed746498 (/dev/oracleasm/disks/GRID2) [GRID]
 3. ONLINE   abcb03d4ee874f63bf13008cb68f901d (/dev/oracleasm/disks/GRID3) [GRID]
Located 3 voting disk(s).
z08s-temp02a:/home/grid# $ORACLE_HOME/bin/crsctl check cluster -all;
**************************************************************
z08s-temp02a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
z08s-temp02b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
z08s-temp02c:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
z08s-temp02d:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
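To confirm the installed software version per node as well (a quick sketch; run on each node, or pass a node name):
$ORACLE_HOME/bin/crsctl query crs softwareversion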
################################ Grid Patching #################################
Apply October 2017 PSU (#26636246)
Backup OCR (Oracle Cluster Registry) [Node-A ONLY]
**root on Node-A ONLY
/opt/grid/product/12.2.0.1/grid/bin/ocrconfig -manualbackup
z08s-temp02a:/opt/grid/product/12.2.0.1# /opt/grid/product/12.2.0.1/grid/bin/ocrconfig -manualbackup
z08s-temp02a 2017/10/17 22:02:39 +GRID:/TEMP02-DB/OCRBACKUP/backup_20171017_220239.ocr.283.957650559 0
z08c-temp02c 2017/10/17 17:46:32 +GRID:/TEMP02-DB/OCRBACKUP/backup_20171017_174632.ocr.258.957635193 0
Backup Grid Infrastructure Binaries [All Nodes]
Use the Python backup script; in a pinch, run the tar manually:
**root
/root# cd /opt/grid/product
/opt/grid/product# tar -czvpf oraInventory.20171017_pre12cR2gridPSU.tar.gz oraInventory
chown grid:oinstall oraInventory.20171017_pre12cR2gridPSU.tar.gz
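Spot-check the archive before relying on it (a quick sketch):
tar -tzf oraInventory.20171017_pre12cR2gridPSU.tar.gz | head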
Upgrade OPatch to 12.2.0.1.10 (6880880) [All Nodes]
$ORACLE_HOME/OPatch/opatch version | grep Version
OPatch Version: 12.2.0.1.6
**root (backup existing OPatch)
cd /opt/grid/product/12.2.0.1/grid
mv OPatch OPatch.122016
mkdir OPatch
chown grid:oinstall OPatch
chmod 755 OPatch
/opt/grid/product/12.2.0/patches# unzip p6880880_122010_Linux-x86-64.zip -d $ORACLE_HOME/OPatch
cd $ORACLE_HOME/OPatch
/opt/grid/product/12.2.0.1/grid/OPatch# mv OPatch/* .
rmdir OPatch
$ORACLE_HOME/OPatch/opatch version | grep Version
OPatch Version: 12.2.0.1.10
Apply Patch #26636246 [Grid Infrastructure 12.2.0.1.*] [ALL NODES]
**The opatchauto utility should not be run in parallel on the cluster nodes
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $ORACLE_BASE/patches/26636246/26737266/26710464
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $ORACLE_BASE/patches/26636246/26737266/26925644
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $ORACLE_BASE/patches/26636246/26737266/26737232
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $ORACLE_BASE/patches/26636246/26737266/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $ORACLE_BASE/patches/26636246/26737266/26928563
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
**Confirm no ASM REBALANCE is occurring (mL#2168607.1 : Prepatch is Failing with "CLSRSC-430: Failed to start rolling patch mode", if ASM Rebalance is Running)
select * from gv$asm_operation;
(no rows selected means no rebalance is in flight)
**The opatchauto utility must be executed by an operating system (OS) user with root privileges, and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is in non-shared storage. The utility should not be run in parallel on the cluster nodes.
**root
export ORACLE_BASE=/opt/grid/product/12.2.0
export ORACLE_HOME=/opt/grid/product/12.2.0.1/grid
export PATH=$PATH:$ORACLE_HOME/OPatch
which opatchauto
opatchauto apply $ORACLE_BASE/patches/26636246/26737266 -oh $ORACLE_HOME
z08s-temp02a:/root# opatchauto apply $ORACLE_BASE/patches/26636246/26737266 -oh $ORACLE_HOME
OPatchauto session is initiated at Tue Oct 17 22:31:54 2017
System initialization log file is /vol2/opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2017-10-17_10-32-54PM.log.
Session log file is /vol2/opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2017-10-17_10-33-59PM.log
The id for this session is FYXK
Executing OPatch prereq operations to verify patch applicability on home /opt/grid/product/12.2.0.1/grid
Patch applicability verified successfully on home /opt/grid/product/12.2.0.1/grid
Bringing down CRS service on home /opt/grid/product/12.2.0.1/grid
Prepatch operation log file location: /opt/grid/product/12.2.0/crsdata/z08s-temp02a/crsconfig/crspatch_z08s-temp02a_2017-10-17_10-34-57PM.log
CRS service brought down successfully on home /opt/grid/product/12.2.0.1/grid
Start applying binary patch on home /opt/grid/product/12.2.0.1/grid
Binary patch applied successfully on home /opt/grid/product/12.2.0.1/grid
Starting CRS service on home /opt/grid/product/12.2.0.1/grid
Postpatch operation log file location: /opt/grid/product/12.2.0/crsdata/z08s-temp02a/crsconfig/crspatch_z08s-temp02a_2017-10-17_10-47-12PM.log
CRS service started successfully on home /opt/grid/product/12.2.0.1/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:z08s-temp02a
CRS Home:/opt/grid/product/12.2.0.1/grid
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /vol2/opt/grid/product/12.2.0/patches/26636246/26737266/26710464
Log: /opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-10-17_22-37-19PM_1.log
Patch: /vol2/opt/grid/product/12.2.0/patches/26636246/26737266/26737232
Log: /opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-10-17_22-37-19PM_1.log
Patch: /vol2/opt/grid/product/12.2.0/patches/26636246/26737266/26839277
Log: /opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-10-17_22-37-19PM_1.log
Patch: /vol2/opt/grid/product/12.2.0/patches/26636246/26737266/26925644
Log: /opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-10-17_22-37-19PM_1.log
Patch: /vol2/opt/grid/product/12.2.0/patches/26636246/26737266/26928563
Log: /opt/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-10-17_22-37-19PM_1.log
OPatchauto session completed at Tue Oct 17 22:59:53 2017
Time taken to complete the session 28 minutes, 0 second
z08s-temp02a:/home/grid# $ORACLE_HOME/bin/crsctl query crs activeversion;
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
z08s-temp02a:/home/grid# $ORACLE_HOME/bin/crsctl check cluster -all;
**************************************************************
z08s-temp02a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
z08s-temp02b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
z08s-temp02c:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
z08s-temp02d:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
/home/grid# $ORACLE_HOME/OPatch/opatch lsinventory | grep ^Patch
Patch 26928563 : applied on Tue Oct 17 22:47:09 CDT 2017
Patch description: "TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:170711) (26928563)"
Patch 26925644 : applied on Tue Oct 17 22:47:01 CDT 2017
Patch description: "OCW RELEASE UPDATE 12.2.0.1.0(ID:171003) (26925644)"
Patch 26839277 : applied on Tue Oct 17 22:46:12 CDT 2017
Patch description: "DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"
Patch 26737232 : applied on Tue Oct 17 22:46:04 CDT 2017
Patch description: "ACFS RELEASE UPDATE 12.2.0.1.0(ID:170823) (26737232)"
Patch 26710464 : applied on Tue Oct 17 22:44:37 CDT 2017
Patch description: "Database Release Update : 12.2.0.1.171017 (26710464)"
Move the old 11.2.0 and 11.2.0.4 directories to .DELETE (remove after a burn-in period)
Configure OEM 12c ASM Target
+ASM_TEMP08-FB > Cluster ASM > Target Setup > Monitoring Configuration
Oracle home path : /opt/grid/product/12.2.0.1/grid
Test Connection (both Cluster and Instance) : The connection test was successful
/vol2/opt/oracle/product/agent/agent_inst/bin# ./emctl stop agent; ./emctl start agent; ./emctl upload agent; ./emctl status agent;
References
Oracle Clusterware (CRS/GI) - ASM - Database Version Compatibility [mL#337737.1]
Oracle Database 12c Release 2 Install and Upgrade [http://docs.oracle.com/database/122/nav/install-and-upgrade.htm]
Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2 [mL#1189783.1]
Patches to apply before upgrading Oracle GI and DB to 12.2.0.1 [mL#2180188.1]
**For Grid & RDBMS 11.2.0.4 => 12.2.0.1, Oracle recommends being on at least GI PSU #22646198 - 11.2.0.4.160419 (Apr 2016) Grid Infrastructure Patch Set Update (GI PSU)
Patch Set Update and Critical Patch Update October 2017 Availability Document [mL#2296870.1]