Oracle Middleware 12c In-Place Upgrade Procedure
Project: TSOE - CCOE
Classification: Internal
Document History
Table of Contents
1 INTRODUCTION
2 IBM DC ORACLE MW IN-PLACE UPGRADE PROCEDURE
2.1 Weblogic
2.3 BPEL
3 AZURE ORACLE MW IN-PLACE UPGRADE PROCEDURE
3.1 Weblogic
3.3 BPEL
1 Introduction
EOL (End Of Life) 2021 is a continuation of the previous M&I (Maintenance and Improvement)
program launched back in 2018 to remove software technical debt from Maersk's IBM- and
Azure-managed estate. A further objective is to align existing systems and applications
with the TSOE Engineering standards embodied in the Reference Architectures, Blueprints
and Deployment patterns of the various software stacks that TSOE Engineering and
Operations support.
Objectives:
Part of EOL 2021 is to remediate the Oracle Fusion Middleware products; the guideline is
to target the latest stable 12c release, 12.2.1.4, which has Extended Support (EoS) until
August 2025.
The Oracle Fusion Middleware products include: WebLogic, OHS, SOA Suite, BPEL, OSB and ESS.
Note: at the time of writing this document, some Oracle Fusion Middleware products, but
not all, have been released on the new 14c version. However, 14.1.1 is only its first GA
release and has no Extended EoS date; that date will only be announced for the Terminal
(Long Term Support) release, and only at that point will the TSOE and Platforms teams be
tasked with a new 12c-to-14c migration plan.
TSOE used to perform upgrades to a new version by doing new builds, i.e. new VMs carrying
the target MW version, also known as a green-field upgrade. Experience in the previous
M&I program has shown that this practice led to drastic delivery delays, affecting
project milestones, due to various factors:
-> IBM Forecast Capacity: a 3-month lead time to allocate the capacity required to spin up new VMs
-> Ops onboarding
-> Application migration from the existing version to the new version
The utility and steps have been incorporated, where possible, into our MW Puppet
framework, which is used by TSOE Engineering to build new MW solutions and by MW Ops for
MW configuration management.
The purpose of this document is to list the steps required to perform the in-place
upgrade, automated via the Puppet framework plus manual steps where required, on the IBM
and Azure infrastructure.
2 IBM DC ORACLE MW IN-PLACE UPGRADE PROCEDURE
Each Oracle Fusion Middleware product has its own section; the SOA products tend to be
more complex as they involve the database.
2.1 Weblogic
cd /u01/puppet/environments/<proj>_<env>
git config --global http.proxy http://10.0.11.1:9400
git remote set-url origin https://github.com/Maersk-Global/puppet_automation_ibm.git
git pull    # you will be asked for your GitHub username and password
Note: don't supply the -x prefix if the domain name doesn't have one!
2.1.2 Upgrade Pre-checks
Check that the domain folder size does not exceed 2GB (3GB max), as a larger domain can
be problematic when taking a backup of the domain; clean up any unwanted files before
the upgrade.
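The size check above can be sketched as follows. This is a minimal illustration: DOMAIN_DIR below is a throwaway demo directory so the snippet runs anywhere; on a real node, point it at the actual domain folder under /u01/oracle/config/domains.

```shell
# Sketch of the pre-backup size check (DOMAIN_DIR is a stand-in demo directory).
DOMAIN_DIR=$(mktemp -d)
LIMIT_MB=2048                                   # the 2GB guideline above
SIZE_MB=$(du -sm "$DOMAIN_DIR" | awk '{print $1}')
if [ "$SIZE_MB" -gt "$LIMIT_MB" ]; then
  echo "WARN: domain is ${SIZE_MB}MB; clean up old logs before backup/upgrade"
else
  echo "OK: domain is ${SIZE_MB}MB"
fi
```

Old server logs and temporary files are the usual candidates for cleanup before the backup is taken.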
wls_jdk_version:
wls_jdk_full_version:
jdk_package:
crypto_extension_file:
opatch_instances:
  '27342434':
    ....
<realm>
  <sec:authentication-provider xmlns:sspi="http://sspi.security.apmoller.net"
                               xsi:type="sspi:msl-authenticatorType">
    <sec:name>MSLAuthenticator</sec:name>
    <sec:control-flag>SUFFICIENT</sec:control-flag>
  </sec:authentication-provider>
  ...
  <sec:name>USI</sec:name>
  <sec:delegate-m-bean-authorization>false</sec:delegate-m-bean-authorization>
</realm>
<default-realm>myrealm</default-realm>
Note: for an HA environment with a WebLogic cluster set up, the servers must be stopped
one at a time to avoid full downtime. Always start with the VM hosting the admin server;
once the upgrade is done on that VM, move on to the other VMs hosting the managed servers.
b. Copy the script /u01/puppet/environments/<proj>_<env>/bootstrap/ibm/wls_upgrade_ibm.sh to /tmp:
cd /tmp
cp /u01/puppet/environments/${PROJECT}_${ENVIRONMENT}/bootstrap/ibm/wls_upgrade_ibm.sh /tmp
chmod u+x wls_upgrade_ibm.sh
This will upgrade the node hosting the admin server, and any managed server(s)
co-located with the admin server as well, but only the admin server comes up
automatically when the upgrade completes; you need to start those managed servers
manually yourself (i.e. with the startms command).
After the upgrade is completed on the admin node, and before moving to the managed nodes
(HA setup), the new config.xml of the 12214 domain should be updated by adding back the
USI entry removed in the pre-checks section and resetting default-realm to USI, followed
by an admin server restart; then move on to the managed nodes one by one.
Note: check the managed server name running on the VM you are upgrading (ms2, ms3 or
msxxx) and adapt the above command accordingly.
2.1.4 Start WebLogic Admin Server, Managed Server & Node Manager (as oracle user)
The node manager and admin server will be started automatically by the upgrade; you only
need to start the managed server:
startms    # start the managed server
After the upgrade is completed (step 2.1.4), the new config.xml of the 12214 domain
should be updated by adding back the USI entry removed in the pre-checks section and
resetting default-realm to USI, followed by a full domain restart, node by node.
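The default-realm part of that edit can be sketched as below. This is an illustration only: a temp file stands in for the real <domain>/config/config.xml, and the USI provider block itself is re-added by hand as shown in the pre-checks section; always keep a backup before hand-editing.

```shell
# Sketch: back up config.xml, then reset default-realm from myrealm to USI.
CFG=$(mktemp)                                        # stand-in for config.xml
echo '<default-realm>myrealm</default-realm>' > "$CFG"
cp "$CFG" "$CFG.pre-usi.bak"                         # keep a backup first
sed -i 's#<default-realm>myrealm#<default-realm>USI#' "$CFG"
grep '<default-realm>' "$CFG"                        # sanity check the edit
```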
During the upgrade, the Puppet framework resets all permissions under /u01 and /u02 to
the oracle user, which means application artefact ownership changes from devops to
oracle; this will cause the CD pipeline to fail if it tries to redeploy the same
existing artefacts.
It is therefore required to change ownership of these artefacts from oracle back to the
devops user as a post-upgrade task, before kicking off the deployment, e.g.:
cd /u01/oracle/application/config/
chown -R devops:oinstall EDDI2*
For new artefacts pushed by the CD pipeline there should be no issue, as the pipeline
will place them as the devops user.
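A quick ownership check along these lines can confirm whether the chown is needed. The snippet uses a throwaway directory so it runs anywhere; on a real node, point APP_CFG at /u01/oracle/application/config and run the chown as root.

```shell
# Sketch of a post-upgrade ownership check (demo directory as a stand-in).
APP_CFG=$(mktemp -d)
touch "$APP_CFG/EDDI2_demo.properties"              # stand-in artefact
OWNER=$(stat -c '%U' "$APP_CFG/EDDI2_demo.properties")
echo "artefact owner: $OWNER"
# if the owner is oracle rather than devops, re-own before deploying:
#   chown -R devops:oinstall /u01/oracle/application/config/EDDI2*
```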
2.2 OSB
The new OFMW + OSB 12214 binaries installation can be executed online, i.e. without
affecting the existing installation under /u01/oracle/product/fmw1221, as 12214 will be
installed in a dedicated path, /u01/oracle/product/fmw12214, thus avoiding any downtime
of the application during the installation.
cd /u01/puppet/environments/<proj>_<env>
git config --global http.proxy http://10.0.11.1:9400
git remote set-url origin https://github.com/Maersk-Global/puppet_automation.git
git pull    # you will be asked for your GitHub username and password
d. Install on the managed nodes
Note: the installation can be executed in parallel on all nodes of the domain.
Check that the domain folder size does not exceed 2GB (3GB max), as a larger domain can
be problematic when taking a backup of the domain; clean up any unwanted files before
the upgrade.
wls_jdk_version:
wls_jdk_full_version:
jdk_package:
crypto_extension_file:
opatch_instances:
  '27342434':
    ....
<realm>
  <sec:authentication-provider xmlns:sspi="http://sspi.security.apmoller.net"
                               xsi:type="sspi:msl-authenticatorType">
    <sec:name>MSLAuthenticator</sec:name>
    <sec:control-flag>SUFFICIENT</sec:control-flag>
  </sec:authentication-provider>
  ...
  <sec:name>USI</sec:name>
  <sec:delegate-m-bean-authorization>false</sec:delegate-m-bean-authorization>
</realm>
<default-realm>myrealm</default-realm>
<source-path>/u01/oracle/product/fmw1221/osb/lib/apps/domainsingletonmarker.ear</source-path>
<deployment-order>31</deployment-order>
<security-dd-model>DDOnly</security-dd-model>
<staging-mode>nostage</staging-mode>
<plan-staging-mode xsi:nil="true"></plan-staging-mode>
<cache-in-app-directory>false</cache-in-app-directory>
</app-deployment>
a. Prerequisites:
-> PuTTY (configured with X11 forwarding)
-> Xming installed
-> a sysdba user on the OSB PDBs
-> the oracle user password, to ssh to the VM hosting the admin server
-> a guaranteed restore point captured online by DB Ops on the OSB PDBs
before the upgrade kicks in (step d)
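For reference, the guaranteed restore point requested from DB Ops can be taken online with standard Oracle syntax along these lines (the restore point name here is illustrative):

```sql
-- run by DB Ops as SYSDBA against the OSB PDB, before step d
CREATE RESTORE POINT before_osb_12214_upgrade GUARANTEE FLASHBACK DATABASE;
-- verify it was created:
SELECT name, guarantee_flashback_database FROM v$restore_point;
```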
b. Upgrade Readiness check:
cd /u01/oracle/product/fmw12214/oracle_common/upgrade/bin/
./ua -readiness
Enter the sysdba user and password when prompted.
c. Stop all domain processes
d. Schema upgrade (GUI)
cd /u01/oracle/product/fmw12214/oracle_common/upgrade/bin/
./ua
e. Domain Reconfiguration (as root):
cd /tmp
cp /u01/puppet/environments/${PROJECT}_${ENVIRONMENT}/bootstrap/ibm/osb_upgrade_ibm.sh .
chmod u+x osb_upgrade_ibm.sh
./osb_upgrade_ibm.sh -r admin -p <project> -x <prefix> -e <env> -f osb -v 12214
e.g: ./osb_upgrade_ibm.sh -r admin -p esb -x 1 -e int2 -f osb -v 12214
cd /u01/oracle/product/fmw12214/oracle_common/upgrade/bin/
./ua
g. If the domain is USI-secured
cd /tmp
cp /u01/puppet/environments/${PROJECT}_${ENVIRONMENT}/bootstrap/ibm/osb_upgrade_ibm.sh .
chmod u+x osb_upgrade_ibm.sh
./osb_upgrade_ibm.sh -r managed -p <project> -x <prefix> -e <env> -f osb -v 12214
Post-upgrade (as oracle):
-> start the node manager: startnm
-> ensure the startup scripts below point to the correct managed server name
running on the node you are upgrading:
/u02/oracle/scripts/stopms.py
/u02/oracle/scripts/startms.py
The first start-up of the managed server will fail, but it will create the server
folder; the folder will be missing the startup properties. Copy those from the backup
domain:
/u02/oracle/backups/<backedup_domain>/servers/<ms_name>/data/nodemanager
to:
/u01/oracle/config/domains/<domain_folder>/servers/<ms_name>/data/nodemanager
then restart the node manager and start the managed server again.
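The copy-back can be sketched as follows, with throwaway directories standing in for the real backup and domain paths quoted above (adjust MS to the managed server name on the node):

```shell
# Sketch of restoring the node manager startup properties from the backup domain.
BACKUP=$(mktemp -d)    # stands in for /u02/oracle/backups/<backedup_domain>
DOMAIN=$(mktemp -d)    # stands in for /u01/oracle/config/domains/<domain_folder>
MS=ms1                 # managed server name on this node
mkdir -p "$BACKUP/servers/$MS/data/nodemanager" "$DOMAIN/servers/$MS/data/nodemanager"
echo 'demo=true' > "$BACKUP/servers/$MS/data/nodemanager/startup.properties"
cp "$BACKUP/servers/$MS/data/nodemanager/"* "$DOMAIN/servers/$MS/data/nodemanager/"
ls "$DOMAIN/servers/$MS/data/nodemanager"
```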
2.3 BPEL
3 AZURE ORACLE MW IN-PLACE UPGRADE PROCEDURE
3.1 Weblogic
cd /u01/puppet/environments/<proj>_<env>
git config --global http.proxy http://10.0.11.1:9400
git remote set-url origin https://github.com/Maersk-Global/puppet_automation.git
git pull    # you will be asked for your GitHub username and password
Note: don't supply the -x prefix if the domain name doesn't have one!
Ensure that there is only the domain folder under:
/u01/oracle/config/domains (admin node)
/u02/oracle/config/domains (managed node)
Any other folders must be moved to /u02/oracle/backup, or removed if deemed unnecessary.
Check that the domain folder size does not exceed 2GB (3GB max), as a larger domain can
be problematic when taking a backup of the domain; clean up any unwanted files before
the upgrade.
wls_jdk_version:
wls_jdk_full_version:
jdk_package:
crypto_extension_file:
opatch_instances:
  '27342434':
    ....
<realm>
  <sec:authentication-provider xmlns:sspi="http://sspi.security.apmoller.net"
                               xsi:type="sspi:msl-authenticatorType">
    <sec:name>MSLAuthenticator</sec:name>
    <sec:control-flag>SUFFICIENT</sec:control-flag>
  </sec:authentication-provider>
  ...
  <sec:name>USI</sec:name>
  <sec:delegate-m-bean-authorization>false</sec:delegate-m-bean-authorization>
</realm>
<default-realm>myrealm</default-realm>
Note: for an HA environment with a WebLogic cluster set up, the servers must be stopped
one at a time to avoid full downtime. Always start with the VM hosting the admin server;
once the upgrade is done on that VM, move on to the other VMs hosting the managed servers.
f. Copy the script /u01/puppet/environments/<proj>_<env>/bootstrap/azure/wls_upgrade_azure.sh to /tmp:
cd /tmp
cp /u01/puppet/environments/${PROJECT}_${ENVIRONMENT}/bootstrap/azure/wls_upgrade_azure.sh /tmp
chmod u+x wls_upgrade_azure.sh
This will upgrade the node hosting the admin server, and any managed server(s)
co-located with the admin server as well, but only the admin server comes up
automatically when the upgrade completes; you need to start those managed servers
manually yourself (i.e. with the startms command).
After the upgrade is completed on the admin node, and before moving to the managed nodes
(HA setup), the new config.xml of the 12214 domain should be updated by adding back the
USI entry removed in the pre-checks section and resetting default-realm to USI, followed
by an admin server restart; then move on to the managed nodes one by one.
in a dedicated VM; so this is not applicable to standalone domains, only HA ones.
Note: check the managed server name running on the VM you are upgrading (ms2, ms3 or
msxxx) and adapt the above command accordingly.
3.1.4 Start WebLogic Admin Server, Managed Server & Node Manager (as oracle user)
The node manager and admin server will be started automatically by the upgrade; you only
need to start the managed server:
startms    # start the managed server
During the upgrade, the Puppet framework resets all permissions under /u01 and /u02 to
the oracle user, which means application artefact ownership changes from devops to
oracle; this will cause the CD pipeline to fail if it tries to redeploy the same
existing artefacts.
It is therefore required to change ownership of these artefacts from oracle back to the
devops user as a post-upgrade task, before kicking off the deployment, e.g.:
cd /u01/oracle/application/config/
chown -R devops:oinstall EDDI2*
For new artefacts pushed by the CD pipeline there should be no issue, as the pipeline
will place them as the devops user.
-> All node manager, admin server and managed server processes are shut down.
-> Copy the backup from /u02/oracle/backups/<domain_name>_$date:
   - admin node: to /u01/oracle/config/domains/<domain_name>
   - managed node: to /u02/oracle/config/domains/<domain_name>
-> Update /home/oracle/.bash_profile to point to fmw1221 instead of fmw12214, then
source the file.
-> Simply restart your domain by starting first the node manager(s), then the admin
server and the managed server(s).
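The .bash_profile switch-back can be sketched as below; a temp file stands in for the real /home/oracle/.bash_profile so the snippet is safe to run anywhere.

```shell
# Sketch: point the oracle user's profile back at the old 1221 home.
PROFILE=$(mktemp)                                    # stand-in for .bash_profile
echo 'export MW_HOME=/u01/oracle/product/fmw12214' > "$PROFILE"
sed -i 's/fmw12214/fmw1221/g' "$PROFILE"             # revert to the 1221 home
cat "$PROFILE"
# then: source "$PROFILE" and restart node manager, admin and managed servers
```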
3.2 OSB
The new OFMW + OSB 12214 binaries installation can be executed online, i.e. without
affecting the existing installation under /u01/oracle/product/fmw1221, as 12214 will be
installed in a dedicated path, /u01/oracle/product/fmw12214, thus avoiding any downtime
of the application during the installation.
cd /u01/puppet/environments/<proj>_<env>
git config --global http.proxy http://10.0.11.1:9400
git remote set-url origin https://github.com/Maersk-Global/puppet_automation.git
git pull    # you will be asked for your GitHub username and password
Ensure that there is only the domain folder under:
/u01/oracle/config/domains (admin node)
/u02/oracle/config/domains (managed node)
Any other folders must be moved to /u02/oracle/backup, or removed if deemed unnecessary.
Check that the domain folder size does not exceed 2GB (3GB max), as a larger domain can
be problematic when taking a backup of the domain; clean up any unwanted files before
the upgrade.
wls_jdk_version:
wls_jdk_full_version:
jdk_package:
crypto_extension_file:
opatch_instances:
  '27342434':
    ....
<realm>
  <sec:authentication-provider xmlns:sspi="http://sspi.security.apmoller.net"
                               xsi:type="sspi:msl-authenticatorType">
    <sec:name>MSLAuthenticator</sec:name>
    <sec:control-flag>SUFFICIENT</sec:control-flag>
  </sec:authentication-provider>
  ...
  <sec:name>USI</sec:name>
  <sec:delegate-m-bean-authorization>false</sec:delegate-m-bean-authorization>
</realm>
<default-realm>myrealm</default-realm>
3.2.3 Domain Upgrade Process (perform with the oracle user)
a. Prerequisites:
-> PuTTY (configured with X11 forwarding)
-> Xming installed
-> a sysdba user on the OSB PDBs
-> the oracle user password, to ssh to the VM hosting the admin server
-> a guaranteed restore point captured online by DB Ops on the OSB PDBs
before the upgrade kicks in (step d)
b. Upgrade Readiness check:
cd /u01/oracle/product/fmw12214/oracle_common/upgrade/bin/
./ua -readiness
Enter the sysdba user and password when prompted.
c. Stop all domain processes

d. Schema upgrade (GUI)
cd /u01/oracle/product/fmw12214/oracle_common/upgrade/bin/
./ua
e. Domain Reconfiguration (as root):
cd /tmp
cp /u01/puppet/environments/${PROJECT}_${ENVIRONMENT}/bootstrap/azure/osb_upgrade_azure.sh .
chmod u+x osb_upgrade_azure.sh
./osb_upgrade_azure.sh -r admin -p <project> -x <prefix> -e <env> -f osb -v 12214
e.g: ./osb_upgrade_azure.sh -r admin -p esb -x 1 -e int2 -f osb -v 12214
cd /u01/oracle/product/fmw12214/oracle_common/upgrade/bin/
./ua
g. If the domain is USI-secured
cd /tmp
cp /u01/puppet/environments/${PROJECT}_${ENVIRONMENT}/bootstrap/azure/osb_upgrade_azure.sh .
chmod u+x osb_upgrade_azure.sh
./osb_upgrade_azure.sh -r managed -p <project> -x <prefix> -e <env> -f osb -v 12214
The first start-up of the managed server will fail, but it will create the server
folder; the folder will be missing the startup properties. Copy those from the backup
domain:
/u02/oracle/backups/<backedup_domain>/servers/<ms_name>/data/nodemanager
to:
/u01/oracle/config/domains/<domain_folder>/servers/<ms_name>/data/nodemanager
then restart the node manager and start the managed server again.
3.3 BPEL