B How to Upgrade to Oracle Grid Infrastructure 12c Release 1

This appendix describes how to perform Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) upgrades.

Oracle Clusterware upgrades can be rolling upgrades, in which a subset of nodes are brought down and upgraded while other nodes remain active. Oracle ASM 12c Release 1 (12.1) upgrades can be rolling upgrades. If you upgrade a subset of nodes, then a software-only installation is performed on the existing cluster nodes that you do not select for upgrade.

This appendix contains the following topics:

B.1 Back Up the Oracle Software Before Upgrades

Before you make any changes to the Oracle software, Oracle recommends that you create a backup of the Oracle software and databases.
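
For example, a minimal sketch that archives the existing Grid home and takes a manual Oracle Cluster Registry (OCR) backup. The paths are assumptions for illustration: the existing Grid home is /u01/app/11.2.0/grid and /backup is a suitable destination; adjust both for your environment.

# tar -cpf /backup/grid_home_preupgrade.tar /u01/app/11.2.0/grid
# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup

Back up your databases with your standard tools, such as RMAN.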

B.2 About Oracle Grid Infrastructure and Oracle ASM Upgrade and Downgrade

You can upgrade Oracle Grid Infrastructure in any of the following ways:

  • Rolling Upgrade, which involves upgrading individual nodes without stopping Oracle Grid Infrastructure on other nodes in the cluster

  • Non-rolling Upgrade, which involves bringing down all the nodes except one. A complete cluster outage occurs while the root script stops the old Oracle Clusterware stack and starts the new Oracle Clusterware stack on the node where you initiate the upgrade. After the upgrade is completed, the new Oracle Clusterware is started on all the nodes.

Note that some services are disabled when one or more nodes are in the process of being upgraded. All upgrades are out-of-place upgrades, meaning that the software binaries are placed in a different Grid home from the Grid home used for the prior release.

You can downgrade from Oracle Grid Infrastructure 12c Release 1 (12.1) to prior releases of Oracle Grid Infrastructure. Be aware that if you downgrade to a prior release, then your cluster must conform with the configuration requirements for that prior release, and the features available for the cluster consist only of the features available for that prior release of Oracle Clusterware and Oracle ASM.

If you have an existing Oracle ASM 11g Release 1 (11.1) or 10g release instance, with Oracle ASM in a separate home, then you can either upgrade it at the time that you install Oracle Grid Infrastructure, or you can upgrade it after the installation, using Oracle ASM Configuration Assistant (ASMCA). However, be aware that a number of Oracle ASM features are disabled until you upgrade Oracle ASM, and Oracle Clusterware management of Oracle ASM does not function correctly until Oracle ASM is upgraded, because Oracle Clusterware only manages Oracle ASM when it is running in the Oracle Grid Infrastructure home. For this reason, Oracle recommends that if you do not upgrade Oracle ASM at the same time as you upgrade Oracle Clusterware, then you should upgrade Oracle ASM immediately afterward. This issue does not apply to Oracle ASM 11g Release 2 (11.2) and later, as the Oracle Grid Infrastructure home contains Oracle ASM binaries as well.

You can perform out-of-place upgrades to an Oracle ASM instance using Oracle ASM Configuration Assistant (ASMCA). In addition to running ASMCA using the graphical user interface, you can run ASMCA in non-interactive (silent) mode.
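
For example, a hedged sketch of a silent Oracle ASM upgrade invocation, assuming the new Grid home is /u01/app/12.1.0/grid; verify the supported silent-mode options in the ASMCA documentation for your release:

$ /u01/app/12.1.0/grid/bin/asmca -silent -upgradeASM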

Note:

You must complete an upgrade before attempting to use cluster backup files. You cannot use backups for a cluster that has not completed upgrade.

See Also:

Oracle Database Upgrade Guide and Oracle Automatic Storage Management Administrator's Guide for additional information about upgrading existing Oracle ASM installations

B.3 Options for Oracle Grid Infrastructure Upgrades and Downgrades

Upgrade options from Oracle Grid Infrastructure 11g to Oracle Grid Infrastructure 12c include the following:

  • Oracle Grid Infrastructure rolling upgrade, which involves upgrading individual nodes without stopping Oracle Grid Infrastructure on other nodes in the cluster

  • Oracle Grid Infrastructure non-rolling upgrade by bringing the cluster down and upgrading the complete cluster

Upgrade options from Oracle Grid Infrastructure 11g Release 2 (11.2) to Oracle Grid Infrastructure 12c include the following:

  • Oracle Grid Infrastructure rolling upgrade, with OCR and voting disks on Oracle ASM

  • Oracle Grid Infrastructure complete cluster upgrade (downtime, non-rolling), with OCR and voting disks on Oracle ASM

Upgrade options from releases before Oracle Grid Infrastructure 11g Release 2 (11.2) to Oracle Grid Infrastructure 12c include the following:

  • Oracle Grid Infrastructure rolling upgrade, with OCR and voting disks on storage other than Oracle ASM or shared file system

  • Oracle Grid Infrastructure complete cluster upgrade (downtime, non-rolling), with OCR and voting disks on storage other than Oracle ASM or shared file system

Downgrade options from Oracle Grid Infrastructure 12c to earlier releases include the following:

  • Oracle Grid Infrastructure downgrade to Oracle Grid Infrastructure 11g Release 2 (11.2)

  • Oracle Grid Infrastructure downgrade to releases before Oracle Grid Infrastructure 11g Release 2 (11.2), that is, to Oracle Grid Infrastructure 11g Release 1 (11.1) or Oracle Clusterware and Oracle ASM 10g, if storage for OCR and voting files is on storage other than Oracle ASM

B.4 Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades

Oracle recommends that you use the Cluster Verification Utility tool (CVU) to check if there are any patches required for upgrading your existing Oracle Grid Infrastructure 11g Release 2 (11.2) or Oracle RAC database 11g Release 2 (11.2) installations.

Be aware of the following restrictions and changes for upgrades to Oracle Grid Infrastructure installations, which consist of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM):

  • When you upgrade from Oracle Grid Infrastructure 11g or Oracle Clusterware and Oracle ASM 10g releases to Oracle Grid Infrastructure 12c Release 1 (12.1), you upgrade to a standard cluster configuration. You can enable Oracle Flex Cluster configuration after the upgrade.

  • If the Oracle Cluster Registry (OCR) and voting file locations for your current installation are on raw or block devices, then you must migrate them to Oracle ASM disk groups or shared file systems before upgrading to Oracle Grid Infrastructure 12c.

  • If you want to upgrade Oracle Grid Infrastructure releases before Oracle Grid Infrastructure 11g Release 2 (11.2), where the OCR and voting files are on raw or block devices, and you want to migrate these files to Oracle ASM rather than to a shared file system, then you must upgrade to Oracle Grid Infrastructure 11g Release 2 (11.2) before you upgrade to Oracle Grid Infrastructure 12c.

  • Downgrades from an Oracle Grid Infrastructure 12c Release 1 (12.1) Oracle Flex Cluster configuration to a Standard cluster configuration are not supported. All cluster configurations in releases earlier than Oracle Grid Infrastructure 12c are Standard cluster configurations. This downgrade restriction includes downgrades from an Oracle Flex Cluster to Oracle Grid Infrastructure 11g cluster, or to Oracle Clusterware and Oracle ASM 10g clusters.

  • You can downgrade to the Oracle Grid Infrastructure release you upgraded from. For example, if you upgraded from Oracle Grid Infrastructure 11g Release 2 (11.2) to Oracle Grid Infrastructure 12c Release 1 (12.1), you can only downgrade to Oracle Grid Infrastructure 11g Release 2 (11.2).

  • To change a cluster member node role to Leaf, you must have completed the upgrade on all Oracle Grid Infrastructure nodes so that the active version is Oracle Grid Infrastructure 12c Release 1 (12.1) or later.

  • To upgrade existing Oracle Clusterware installations to a standard configuration Oracle Grid Infrastructure 12c cluster, your release must be greater than or equal to Oracle Clusterware 10g Release 1 (10.1.0.5), Oracle Clusterware 10g Release 2 (10.2.0.3), Oracle Grid Infrastructure 11g Release 1 (11.1.0.6), or Oracle Grid Infrastructure 11g Release 2 (11.2).

  • To upgrade existing Oracle Grid Infrastructure installations from Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) to a later release, you must apply patch 11.2.0.2.3 (11.2.0.2 PSU 3) or later.

  • Do not delete directories in the Grid home. For example, do not delete the directory Grid_home/OPatch. If you delete the directory, then the Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "'checkdir' error: cannot create Grid_home/OPatch".

  • To upgrade existing Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 12c Release 1 (12.1), you must first verify if you need to apply any mandatory patches for upgrade to succeed. See Section B.6 for steps to check readiness.

    See Also:

    Oracle 12c Upgrade Companion (My Oracle Support Note 1462240.1):

    https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1462240.1

  • Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. You cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.

  • If the existing Oracle Clusterware home is a shared home, note that you can use a non-shared home for the Oracle Grid Infrastructure for a cluster home for Oracle Clusterware and Oracle ASM 12c Release 1 (12.1).

  • The same user that owned the earlier release Oracle Grid Infrastructure software must perform the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.

  • Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure home.

  • During a major release upgrade to Oracle Grid Infrastructure 12c Release 1 (12.1), the software in the 12c Release 1 (12.1) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the new Grid home is not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes.

    To manage databases in existing earlier release database homes during the Oracle Grid Infrastructure upgrade, use srvctl from the existing database homes.

  • You can perform upgrades on a shared Oracle Clusterware home.

  • During Oracle Clusterware installation, if there is a single instance Oracle ASM release on the local node, then it is converted to a clustered Oracle ASM 12c Release 1 (12.1) installation, and Oracle ASM runs in the Oracle Grid Infrastructure home on all nodes.

  • If a single instance (non-clustered) Oracle ASM installation is on a remote node, which is a node other than the local node (the node on which the Oracle Grid Infrastructure installation is being performed), then it will remain a single instance Oracle ASM installation. However, during installation, if you select to place the Oracle Cluster Registry (OCR) and voting files on Oracle ASM, then a clustered Oracle ASM installation is created on all nodes in the cluster, and the single instance Oracle ASM installation on the remote node will become nonfunctional.

  • After completing the force upgrade of a cluster to a release, all inaccessible nodes must be deleted from the cluster or joined to the cluster before starting the cluster upgrade to a later release.

See Also:

Oracle Database Upgrade Guide for additional information about preparing for upgrades

B.5 Preparing to Upgrade an Existing Oracle Clusterware Installation

If you have an existing Oracle Clusterware installation, then you upgrade your existing cluster by performing an out-of-place upgrade. You cannot perform an in-place upgrade.

The following sections list the steps you can perform before you upgrade Oracle Grid Infrastructure:

B.5.1 Checks to Complete Before Upgrading Oracle Clusterware

Complete the following tasks before starting an upgrade:

  1. For each node, use Cluster Verification Utility to ensure that you have completed preinstallation steps. It can generate Fixup scripts to help you to prepare servers. In addition, the installer will help you to ensure all required prerequisites are met.

    Ensure that you have information you will need during installation, including the following:

    • An Oracle base location for Oracle Clusterware.

    • An Oracle Grid Infrastructure home location that is different from your existing Oracle Clusterware location.

    • SCAN name and addresses, and other network addresses, as described in Chapter 5.

    • Privileged user operating system groups, as described in Chapter 6.

    • root user access, to run scripts as root during installation, using one of the options described in Section 8.1.1.

  2. For the installation owner running the installation, if you have environment variables set for the existing installation, then unset the environment variables $ORACLE_BASE, $ORACLE_HOME, and $ORACLE_SID, as these environment variables are used during upgrade. For example:

    $ unset ORACLE_BASE
    $ unset ORACLE_HOME
    $ unset ORACLE_SID
    
  3. If the cluster was previously forcibly upgraded, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before starting another upgrade. For example, if the cluster was forcibly upgraded from 11.2.0.3 to 12.1.0.1, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before upgrading to another release, for example, 12.1.0.2.

B.5.2 Unset Oracle Environment Variables

Unset Oracle environment variables.

If you have set ORA_CRS_HOME as an environment variable, following instructions from Oracle Support, then unset it before starting an installation or upgrade. You should never use ORA_CRS_HOME as an environment variable except under explicit direction from Oracle Support.

Check to ensure that installation owner login shell profiles (for example, .profile or .cshrc) do not have ORA_CRS_HOME set.

If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the following environment variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN; and any other environment variable set for the Oracle installation user that is connected with Oracle software homes.

Also, ensure that the $ORACLE_HOME/bin path is removed from your PATH environment variable.
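
A quick way to clear and verify these settings in a Bourne-style shell; the Grid home path is an assumption (here, /u01/app/11.2.0/grid):

$ unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN
$ env | grep -iE 'ora_crs_home|oracle_home|ora_nls10|tns_admin'   # should print nothing
$ export PATH=$(echo "$PATH" | sed 's|/u01/app/11.2.0/grid/bin:||')   # drop the old home from PATH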

B.5.3 Running the Oracle ORAchk Upgrade Readiness Assessment

You can use the ORAchk (Oracle RAC Configuration Audit Tool) Upgrade Readiness Assessment to obtain an automated upgrade-specific health check for upgrades to Oracle Grid Infrastructure 11.2.0.3, 11.2.0.4, 12.1.0.1, and 12.1.0.2. Running the ORAchk Upgrade Readiness Assessment automates many of the manual pre-upgrade and post-upgrade checks.

Oracle recommends that you download and run the latest version of ORAchk from My Oracle Support. For information about downloading, configuring, and running ORAchk configuration audit tool, refer to My Oracle Support note 1457357.1, which is available at the following URL:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1457357.1
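
For example, a typical pre-upgrade invocation from the directory where you unpacked ORAchk; flags vary between ORAchk versions, so confirm the syntax in the My Oracle Support note above:

$ ./orachk -u -o pre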

B.6 Using CVU to Validate Readiness for Oracle Clusterware Upgrades

You can use Cluster Verification Utility (CVU) to assist you with system checks in preparation for starting an upgrade. CVU runs the appropriate system checks automatically, and either prompts you to fix problems, or provides a fixup script to be run on all nodes in the cluster before proceeding with the upgrade.

This section contains the following topics:

B.6.1 About the CVU Grid Upgrade Validation Command Options

You can run upgrade validations in one of two ways:

  • Run OUI, and allow the CVU validation built into OUI to perform system checks and generate fixup scripts

  • Run the runcluvfy.sh script manually to perform system checks and generate fixup scripts

To use OUI to perform pre-install checks and generate fixup scripts, run the installation as you normally would. OUI starts CVU, and performs system checks as part of the installation process. Selecting OUI to perform these checks is particularly appropriate if you think you have completed preinstallation checks, and you want to confirm that your system configuration meets minimum requirements for installation.

To use the runcluvfy.sh command-line script for CVU, navigate to the staging area for the upgrade, where the runcluvfy.sh command is located, and run the command runcluvfy.sh stage -pre crsinst -upgrade to check the readiness of your Oracle Clusterware installation for upgrades. Running runcluvfy.sh with the -pre crsinst -upgrade options performs system checks to confirm if the cluster is in a correct state for upgrading from an existing clusterware installation.

The command uses the following syntax, where variable content is indicated by italics:

runcluvfy.sh stage -pre crsinst -upgrade [-rolling] -src_crshome src_Gridhome
-dest_crshome dest_Gridhome -dest_version dest_release
[-fixup][-method {sudo|root} [-location dir_path] [-user user_name]] [-verbose]

The options are:

  • -rolling

    Use this flag to verify readiness for rolling upgrades.

  • -src_crshome src_Gridhome

    Use this flag to indicate the location of the source Oracle Clusterware or Grid home that you are upgrading, where src_Gridhome is the path to the home that you want to upgrade.

  • -dest_crshome dest_Gridhome

    Use this flag to indicate the location of the upgrade Grid home, where dest_Gridhome is the path to the Grid home.

  • -dest_version dest_release

    Use the -dest_version flag to indicate the release number of the upgrade, including any patchset. The release number must include the five digits designating the release to the level of the platform-specific patch. For example: 12.1.0.1.0.

  • -fixup [-method {sudo|root} [-location dir_path] [-user user_name]]

    Use the -fixup flag to indicate that you want to generate instructions for any required steps you need to complete to ensure that your cluster is ready for an upgrade. The default location is the CVU work directory.

    The -fixup -method flag defines the method by which root scripts are run. The -method flag requires one of the following options:

    • sudo: Run as a user on the sudoers list.

    • root: Run as the root user.

    If you select sudo, then enter the -location flag to provide the path to Sudo on the server, and enter the -user flag to provide the user account with Sudo privileges.

  • -verbose

    Use the -verbose flag to produce detailed output of individual checks.

B.6.2 Example of Verifying System Upgrade Readiness for Grid Infrastructure

You can verify that the permissions required for installing Oracle Clusterware have been configured by running a command similar to the following:

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/12.1.0/grid \
  -dest_version 12.1.0.1 -fixup -verbose

B.7 Understanding Rolling Upgrades Using Batches

Upgrades from earlier releases require that you upgrade the entire cluster. You cannot select or de-select individual nodes for upgrade. Oracle does not support attempting to add additional nodes to a cluster during a rolling upgrade.

Oracle recommends that you leave Oracle RAC instances running when upgrading Oracle Clusterware. When you start the root script on each node, the database instances on that node are shut down, and then the rootupgrade.sh script starts the instances again. If you upgrade from Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) or later to any later release of Oracle Grid Infrastructure, then all nodes are selected for upgrade by default.

You can use root user automation to automate running the rootupgrade.sh script during the upgrade. When you use root automation, you can divide the nodes into groups, or batches, and start upgrades of these batches. Between batches, you can move services from nodes running the previous release to the upgraded nodes, so that services are not affected by the upgrade. Oracle recommends that you use root automation, and allow the rootupgrade.sh script to stop and start instances automatically. You can also continue to run root scripts manually.
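
For example, between batches you might relocate a service from an instance on a node that still runs the previous release to an instance on an upgraded node. The database, service, and instance names below are hypothetical, and the command is run from the existing database home, as noted in Section B.4:

$ srvctl relocate service -d sales -s oltp -i sales1 -t sales2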

B.8 Performing Rolling Upgrade of Oracle Grid Infrastructure

This section contains the following topics:

B.8.1 Performing a Standard Upgrade from an Earlier Release

Use the following procedure to upgrade the cluster from an earlier release:

  1. Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.

  2. On the node selection page, select all nodes.

  3. Select installation options as prompted. Oracle recommends that you configure root script automation, so that the rootupgrade.sh script can be run automatically during the upgrade.

  4. Run root scripts, either automatically or manually:

    • Running root scripts automatically

      If you have configured root script automation, then use the pause between batches to relocate services from the nodes running the previous release to the new release.

    • Running root scripts manually

      If you have not configured root script automation, then when prompted, run the rootupgrade.sh script on each node in the cluster that you want to upgrade.

      If you run root scripts manually, then run the script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.

      After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. When the script is run successfully on all the nodes except the last node, run the script on the last node.

      When upgrading from a 12.1.0.1 Oracle Flex Cluster, Oracle recommends that you run the rootupgrade.sh script on all Hub Nodes before running it on Leaf Nodes.

  5. After running the rootupgrade.sh script on the last node in the cluster, if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.2), and you left the check box labeled ASMCA checked (the default), then Oracle ASM Configuration Assistant (ASMCA) runs automatically, and the Oracle Grid Infrastructure upgrade is complete. If you cleared the check box during the interview stage of the upgrade, then ASMCA does not run automatically.

    If an earlier release of Oracle Automatic Storage Management (Oracle ASM) is installed, then the installer starts ASMCA to upgrade Oracle ASM to 12c Release 1 (12.1). You can choose to upgrade Oracle ASM at this time, or upgrade it later.

    Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. Until Oracle ASM is upgraded, Oracle Databases that use Oracle ASM cannot be created and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.

  6. Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
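
After the root scripts complete on all nodes, you can confirm that the new release is active cluster-wide:

$ crsctl query crs activeversion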

Note:

At the end of the upgrade, if you set the Oracle Cluster Registry (OCR) backup location manually to the earlier release Oracle Clusterware home (CRS home), then you must change the OCR backup location to the new Oracle Grid Infrastructure home (Grid home). If you did not set the OCR backup location manually, then the backup location is changed for you during the upgrade.

Because upgrades of Oracle Clusterware are out-of-place upgrades, the previous release Oracle Clusterware home cannot be the location of the current release OCR backups. Backups in the old Oracle Clusterware home can be deleted.

See Also:

Section A.12, "Failed or Incomplete Installations and Upgrades" for information about completing failed or incomplete upgrades

B.8.2 Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable

If some nodes become unreachable in the middle of an upgrade, then you cannot complete the upgrade, because the upgrade script (rootupgrade.sh) did not run on the unreachable nodes. Because the upgrade is incomplete, Oracle Clusterware remains in the previous release. You can confirm that the upgrade is incomplete by entering the command crsctl query crs activeversion.

To resolve this problem, run the rootupgrade.sh script with the -force flag on any of the nodes where it has already completed, as follows:

Grid_home/rootupgrade.sh -force

For example:

# /u01/app/12.1.0/grid/rootupgrade.sh -force

This command forces the upgrade to complete. Verify that the upgrade has completed by using the command crsctl query crs activeversion. The active release should be the upgrade release.

The force cluster upgrade has the following limitations:

  • All active nodes must be upgraded to the newer release.

  • All inactive nodes (accessible or inaccessible) may be either upgraded or not upgraded.

  • For inaccessible nodes, after patch set upgrades, you can delete the node from the cluster. If the node becomes accessible later, and the patch version upgrade path is supported, then you can upgrade it to the new patch version.

  • If the cluster was previously forcibly upgraded, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before starting the upgrade.

B.8.3 Upgrading Inaccessible Nodes After Forcing an Upgrade

Starting with Oracle Grid Infrastructure 12c, after you complete a force cluster upgrade, you can join inaccessible nodes to the cluster as an alternative to deleting the nodes, which was required in earlier releases. To use this option, Oracle Grid Infrastructure 12c Release 1 (12.1) software must already be installed on the nodes.

To complete the upgrade of nodes that were inaccessible or unreachable:

  1. Log in as the Grid user on the node you want to join to the cluster.

  2. Change directory to the /crs/install directory in the Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home. For example:

    $ cd /u01/12.1.0/grid/crs/install
    
  3. Run the following command, where -existingnode is the option name and upgraded_node is any node that was successfully upgraded and is currently part of the cluster:

    $ rootupgrade.sh -join -existingnode upgraded_node
    

    Note:

    The -join operation is not supported for Oracle Clusterware releases earlier than 11.2.0.1.0. In such cases, delete the node and add it to Oracle Clusterware using the addnode.sh script.

B.8.4 Changing the First Node for Install and Upgrade

If the first node becomes inaccessible, you can force another node to be the first node for installation or upgrade. During installation, if root.sh fails to complete on the first node, run the following command on another node using the -force option:

root.sh -force -first

For upgrade, run the following command:

rootupgrade.sh -force -first

B.9 Restrictions and Guidelines for Upgrading and Patching Oracle ASM

Note the following if you intend to perform either full release or software patch level rolling upgrades of Oracle ASM:

  • The active release of Oracle Clusterware must be 12c Release 1 (12.1). To determine the active release, enter the following command:

    $ crsctl query crs activeversion
    
  • You can upgrade or apply a rolling patch to a single instance Oracle ASM installation, converting it to a clustered Oracle ASM installation. However, you can only upgrade or patch an existing single instance Oracle ASM installation if you run the installation from the node on which the Oracle ASM installation resides. You cannot upgrade or patch a single instance Oracle ASM installation on a remote node.

  • You must ensure that any rebalance operations on your existing Oracle ASM installation are completed before starting the upgrade or patching process.

  • During the upgrade or rolling patch process, you alter the Oracle ASM instances to an upgrade state. You do not need to shut down database clients unless they are on Oracle ACFS. However, because this upgrade state limits Oracle ASM operations, you should complete the upgrade process soon after you begin. The following are the operations allowed when an Oracle ASM instance is in the upgrade state:

    • Disk group mounts and dismounts

    • Opening, closing, resizing, or deleting database files

    • Recovering instances

    • Queries of fixed views and packages: Users are allowed to query fixed views and run anonymous PL/SQL blocks using fixed packages, such as DBMS_DISKGROUP

See Also:

See Section B.10.1, "Upgrading Oracle ASM Using ASMCA" for steps to upgrade Oracle ASM separately using ASMCA

B.10 Performing Rolling Upgrade of Oracle ASM

After you have completed the Oracle Clusterware portion of Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade, you may need to upgrade Oracle ASM separately under the following conditions:

  • If you are upgrading from a release in which Oracle ASM was in a separate Oracle home, such as Oracle ASM 10g Release 2 (10.2) or Oracle ASM 11g Release 1 (11.1)

  • If the Oracle ASM portion of the Oracle Grid Infrastructure upgrade failed, or for some other reason Oracle ASM Configuration Assistant (ASMCA) did not run.

You can use ASMCA to complete the upgrade separately, but you should do it soon after you upgrade Oracle Clusterware, because Oracle ASM management tools such as srvctl do not work until Oracle ASM is upgraded.

Note:

ASMCA performs a rolling upgrade only if the earlier release of Oracle ASM is either 11.1.0.6 or 11.1.0.7. Otherwise, ASMCA performs a non-rolling upgrade, in which ASMCA shuts down all Oracle ASM instances on all nodes of the cluster, and then starts an Oracle ASM instance on each node from the new Oracle Grid Infrastructure home.

After you have upgraded Oracle ASM with Oracle Grid Infrastructure 12c Release 1, you can install individual patches for Oracle ASM by downloading them from My Oracle Support. See Section B.11, "Applying Patches to Oracle ASM" for more information about applying patches to Oracle ASM.

B.10.1 Upgrading Oracle ASM Using ASMCA

Complete the following tasks if you must upgrade from an Oracle ASM release where Oracle ASM was installed in a separate Oracle home, or if the Oracle ASM portion of Oracle Grid Infrastructure upgrade failed to complete:

  1. On the node where you plan to start the upgrade, set the environment variable ASMCA_ROLLING_UPGRADE to true. For example:

    $ export ASMCA_ROLLING_UPGRADE=true
    
  2. From the Oracle Grid Infrastructure 12c Release 1 (12.1) home, start ASMCA. For example:

    $ cd /u01/12.1/grid/bin
    $ ./asmca
    
  3. Select Upgrade.

    ASM Configuration Assistant upgrades Oracle ASM in succession for all nodes in the cluster.

  4. After you complete the upgrade, unset the ASMCA_ROLLING_UPGRADE environment variable. For example:
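
    $ unset ASMCA_ROLLING_UPGRADE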

See Also:

Oracle Database Upgrade Guide and Oracle Automatic Storage Management Administrator's Guide for additional information about preparing an upgrade plan for Oracle ASM, and for starting, completing, and stopping Oracle ASM upgrades

B.11 Applying Patches to Oracle ASM

After you have upgraded Oracle ASM with Oracle Grid Infrastructure 12c Release 1, you can install individual patches for Oracle ASM by downloading them from My Oracle Support.

This section contains the following topics:

B.11.1 About Individual (One-Off) Oracle ASM Patches

Individual patches are called one-off patches. An Oracle ASM one-off patch is available for a specific release of Oracle ASM. If a patch you want is available, then you can download the patch and apply it to Oracle ASM using the OPatch utility. The OPatch inventory keeps track of the patches you have installed for your release of Oracle ASM. If there is a conflict between the patches you have installed and patches you want to apply, then the OPatch utility advises you of these conflicts. See Section B.11.3, "Patching Oracle ASM to a Software Patch Level" for information about applying patches to Oracle ASM using the OPatch utility.

B.11.2 About Oracle ASM Software Patch Levels

The software patch level for Oracle Grid Infrastructure represents the set of all one-off patches applied to the Oracle Grid Infrastructure software release, including Oracle ASM. The release is the release number, in the format of major, minor, and patch set release number. For example, with the release number 12.1.0.1, the major release is 12, the minor release is 1, and 0.1 is the patch set release number. With one-off patches, the major and minor release remains the same, though the patch levels change each time you apply or roll back an interim patch.

As with standard upgrades to Oracle Grid Infrastructure, at any given point in time for normal operation of the cluster, all the nodes in the cluster must have the same software release and patch level. Because one-off patches can be applied as rolling upgrades, all possible patch levels on a particular software release are compatible with each other.
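
To display the patch level in effect on a node, you can query Oracle Clusterware; this command is available starting with Oracle Grid Infrastructure 12c Release 1 (12.1):

$ crsctl query crs softwarepatch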

B.11.3 Patching Oracle ASM to a Software Patch Level

Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), a new cluster state called "Rolling Patch" is available. This mode is similar to the existing "Rolling Upgrade" mode in terms of the Oracle ASM operations allowed in this quiesce state. To patch Oracle ASM to a software patch level, complete the following steps:

  1. Download patches you want to apply from My Oracle Support:

    https://support.oracle.com

    Select the Patches and Updates tab to locate the patch.

    Oracle recommends that you select Recommended Patch Advisor, and enter the product group, release, and platform for your software. My Oracle Support provides you with a list of the most recent patch set updates (PSUs) and critical patch updates (CPUs).

    Place the patches in an accessible directory, such as /tmp.

  2. Change directory to the OPatch directory in the Grid home. For example:

    $ cd /u01/app/12.1.0/grid/OPatch
    
  3. Review the patch documentation for the patch you want to apply, and complete all required steps before starting the patch upgrade.

  4. Follow the instructions in the patch documentation to apply the patch.
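
The exact apply command is specified in each patch readme. Oracle Grid Infrastructure patch set updates of this era are typically applied as root with the opatch auto mechanism; a hedged sketch, with a hypothetical patch staging directory /tmp/patch_dir and the example Grid home used above:

# /u01/app/12.1.0/grid/OPatch/opatch auto /tmp/patch_dir -oh /u01/app/12.1.0/grid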

B.12 Updating Oracle Enterprise Manager Cloud Control Target Parameters

Because Oracle Grid Infrastructure 12c Release 1 (12.1) is an out-of-place upgrade of the Oracle Clusterware home in a new location (the Oracle Grid Infrastructure for a cluster home, or Grid home), the path for the CRS_HOME parameter in some parameter files must be changed. If you do not change the parameter, then you encounter errors such as "cluster target broken" on Oracle Enterprise Manager Cloud Control.

To resolve the issue, upgrade the Enterprise Manager Cloud Control target, and then update the Enterprise Manager Agent Base Directory on each cluster member node running an agent, as described in the following sections:

B.12.1 Updating the Enterprise Manager Cloud Control Target After Upgrades

  1. Log in to Enterprise Manager Cloud Control.

  2. Navigate to the Targets menu, and then to the Cluster page.

  3. Click a cluster target that was upgraded.

  4. Click Cluster, then Target Setup, and then Monitoring Configuration from the menu.

  5. Update the value for Oracle Home with the new Grid home path.

  6. Save the updates.

B.12.2 Updating the Enterprise Manager Agent Base Directory After Upgrades

  1. Navigate to the bin directory in the Management Agent home.

    The Agent Base directory is a directory where the Management Agent home is created. The Management Agent home is in the path Agent_Base_Directory/core/EMAgent_Version. For example, if the Agent Base directory is /u01/app/emagent, then the Management Agent home is created as /u01/app/emagent/core/12.1.0.1.0.

  2. In the /u01/app/emagent/core/12.1.0.1.0/bin directory, open the file emctl with a text editor.

  3. Locate the parameter CRS_HOME, and update the parameter to the new Grid home path.

  4. Repeat steps 1-3 on each node of the cluster with an Enterprise Manager agent.
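
For example, to locate the CRS_HOME parameter before editing it, using the example paths from step 1:

$ cd /u01/app/emagent/core/12.1.0.1.0/bin
$ grep -n "CRS_HOME" emctl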

B.13 Unlocking the Existing Oracle Clusterware Installation

After upgrade from previous releases, if you want to deinstall the previous release Oracle Grid Infrastructure Grid home, then you must first change the permission and ownership of the previous release Grid home. Complete this task using the following procedure:

Log in as root, and change the permission and ownership of the previous release Grid home using the following command syntax, where oldGH is the previous release Grid home, swowner is the Oracle Grid Infrastructure installation owner, and oldGHParent is the parent directory of the previous release Grid home:


# chmod -R 755 oldGH
# chown -R swowner oldGH
# chown swowner oldGHParent

For example:

# chmod -R 755 /u01/app/11.2.0/grid
# chown -R grid /u01/app/11.2.0/grid
# chown grid /u01/app/11.2.0

After you change the permissions and ownership of the previous release Grid home, log in as the Oracle Grid Infrastructure Installation owner (grid, in the preceding example), and use the Oracle Grid Infrastructure 12c deinstallation tool to remove the previous release Grid home (oldGH).

B.14 Checking Cluster Health Monitor Repository Size After Upgrading

If you are upgrading to Oracle Grid Infrastructure from a prior release that used IPD/OS, then review the Cluster Health Monitor (CHM) repository size. Oracle recommends that you review your CHM repository needs, and enlarge the repository size if you want to maintain a larger CHM repository.

Note:

Your previous IPD/OS repository is deleted when you install Oracle Grid Infrastructure and run the root.sh script on each node.

Cluster Health Monitor is not available with IBM: Linux on System z configurations.

By default, the CHM repository size is a minimum of either 1 GB or 3600 seconds (one hour) of collected data, regardless of the size of the cluster.

To enlarge the CHM repository, use the following command syntax, where retention_time is the CHM repository retention time, in seconds:

oclumon manage -repos changeretentiontime retention_time

The value for retention_time must be more than 3600 (one hour) and less than 259200 (three days). If you enlarge the CHM repository size, then you must ensure that there is local space available for the repository size you select on each node of the cluster. If there is not sufficient space available, then you can move the repository to shared storage.

For example, to set the repository size to four hours:

$ oclumon manage -repos changeretentiontime 14400
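
To confirm the current retention setting, you can query the repository; for example (option names can vary slightly between releases):

$ oclumon manage -get repsize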

B.15 Downgrading Oracle Clusterware After an Upgrade

After a successful or a failed upgrade to Oracle Clusterware 12c Release 1 (12.1), you can restore Oracle Clusterware to the previous release. This section contains the following topics:

B.15.1 About Downgrading Oracle Clusterware After an Upgrade

Downgrading Oracle Clusterware restores the Oracle Clusterware configuration to the state it was in before the Oracle Clusterware 12c Release 1 (12.1) upgrade. Any configuration changes you performed during or after the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade are removed and cannot be recovered.

In the downgrade procedures, the following variables are used:

  • first node is the first node on which the rootupgrade script completed successfully.

  • non-first nodes are all other nodes where the rootupgrade script completed successfully.

To restore Oracle Clusterware to the previous release, use the downgrade procedure for the release to which you want to downgrade.

Note:

When downgrading after a failed upgrade, if rootcrs.sh does not exist on a node, then use perl rootcrs.pl instead of rootcrs.sh.

B.15.2 Downgrading to Releases Before 11g Release 2 (11.2.0.2)

To downgrade Oracle Clusterware:

  1. If the rootupgrade script failed on a node, then downgrade the node where the upgrade failed:

    rootcrs.sh -downgrade
    
  2. On all other nodes where the rootupgrade script ran successfully, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade to stop the 12c Release 1 (12.1) resources, and shut down the Oracle Grid Infrastructure 12c Release 1 (12.1) stack.

    rootcrs.sh -downgrade
    
  3. After the rootcrs.sh -downgrade script has completed on all non-first nodes, on the first node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade -lastnode.

    For example:

    # /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade  -lastnode
    

    Note:

    With Oracle Grid Infrastructure 12c, you no longer need to provide the location of the previous release Grid home or release number.

    Run this command from a directory that has write permissions for the Oracle Grid Infrastructure installation user.

  4. On any of the cluster member nodes where the rootcrs script has run successfully:

    a. Log in as the Oracle Grid Infrastructure installation owner.

    b. Use the following command to start the installer, where /u01/app/12.1.0/grid is the location of the new (upgraded) Grid home:

    $ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList \
      -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid
    

    Add the flag -cfs if the Grid home is a shared home.

  5. On any of the cluster member nodes where the rootupgrade.sh script has run successfully:

    a. Log in as the Oracle Grid Infrastructure installation owner (grid).

    b. Use the following command to start the installer, where the path you provide for the flag ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware installation.

      For example:

      $ cd /u01/app/12.1.0/grid/oui/bin
      $ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList \
          -silent CRS=true ORACLE_HOME=/u01/app/crs
      
    c. For downgrades to 11.1 and earlier releases

      If you are downgrading to Oracle Clusterware 11g Release 1 (11.1) or an earlier release, then you must run root.sh manually from the earlier release Oracle Clusterware home to complete the downgrade after you complete step b.

      OUI prompts you to run root.sh manually from the earlier release Oracle Clusterware installation home in sequence on each member node of the cluster to complete the downgrade. After you complete this task, downgrade is completed.

      Running root.sh from the earlier release Oracle Clusterware installation home restarts the Oracle Clusterware stack, starts up all the resources previously registered with Oracle Clusterware in the earlier release, and configures the old initialization scripts to run the earlier release Oracle Clusterware stack.

      After completing the downgrade, update the entry for the Oracle ASM instance in the oratab file (/etc/oratab or /var/opt/oracle/oratab) on every node in the cluster as follows:

      +ASM<instance#>:<RAC-ASM home>:N
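
      For example, with a hypothetical Oracle ASM instance +ASM1 and an earlier release Oracle ASM home of /u01/app/asm/11.1.0, the entry is:

      +ASM1:/u01/app/asm/11.1.0:N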

B.15.3 Downgrading to 11g Release 2 (11.2.0.2) or a Later Release

Follow these steps to downgrade Oracle Grid Infrastructure:

  1. On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade to stop the 12c Release 1 (12.1) resources, and shut down the Oracle Grid Infrastructure 12c Release 1 (12.1) stack.

    # /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade
    
  2. After the rootcrs.sh -downgrade script has completed on all remote nodes, on the local node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade -lastnode

    For example:

    # /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade  -lastnode
    

    Note:

    Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), you no longer need to provide the location of the earlier release Grid home or earlier release number.

    Run this command from a directory that has write permissions for the Oracle Grid Infrastructure installation user.

  3. On any of the cluster member nodes where the rootupgrade.sh script has run successfully:

    a. Log in as the Oracle Grid Infrastructure installation owner.

    b. Use the following command to start the installer, where /u01/app/12.1.0/grid is the location of the new (upgraded) Grid home:

    $ cd /u01/app/12.1.0/grid/oui/bin
    $ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList \
      -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid
    

    Add the flag -cfs if the Grid home is a shared home.

  4. On any of the cluster member nodes where the rootupgrade script has run successfully:

    a. Log in as the Oracle Grid Infrastructure installation owner.

    b. Use the following command to start the installer, where the path you provide for the flag ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware installation.

      For example:

      $ cd /u01/app/12.1.0/grid/oui/bin
      $ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs
      
    c. For downgrades to 11.2.0.2

      If you are downgrading to Oracle Clusterware 11g Release 2 (11.2.0.2), then you must start the Oracle Clusterware stack manually after you complete step b.

      On each node, start Oracle Clusterware from the earlier release Oracle Clusterware home using the command crsctl start crs. For example, where the earlier release home is /u01/app/11.2.0/grid, use the following command on each node:

      # /u01/app/11.2.0/grid/bin/crsctl start crs

  5. For downgrades to 12.1.0.1

    If you are downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1.0.1), then run the following commands to configure the Grid Management Database:

    a. Start the 12.1.0.1 Oracle Clusterware stack on all nodes.

    b. On any node, remove the MGMTDB resource as follows:

      12101_Grid_home/bin/srvctl remove mgmtdb
      
    c. Run DBCA in silent mode from the 12.1.0.1 Oracle home and create the Management Database as follows:

      12101_Grid_home/bin/dbca -silent -createDatabase \
        -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb \
        -storageType ASM -diskGroupName ASM_DG_NAME \
        -datafileJarLocation 12101_Grid_home/assistants/dbca/templates \
        -characterset AL32UTF8 -autoGeneratePasswords
      
    d. Configure the Management Database by running the Configuration Assistant from the location 12101_Grid_home/bin/mgmtca.