
1Z0-997-21: Oracle Cloud Infrastructure (OCI) Architect Professional 2021
Study Guide with Practice Questions & Labs
First Edition
www.ipspecialist.net

Document Control

Proposal Name : OCI Architect Professional 2021


Document Edition : First Edition
Document Volume : Volume 2
Document Release Date : 10th May 2022
Reference : 1Z0-997-21

Copyright © 2022 IPSpecialist LTD.


Registered in England and Wales
Company Registration No: 10883539
Registration Office at: Office 32, 19-21 Crawford Street, London W1H 1PJ,
United Kingdom
www.ipspecialist.net

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from IPSpecialist LTD, except for the inclusion of brief quotations in a review.

Feedback:
If you have any comments regarding the quality of this book, or suggestions on how we can improve it to better suit your needs, you can contact us by email at info@ipspecialist.net.
Please make sure to include the book's title and ISBN in your message.
About IPSpecialist
IPSPECIALIST LTD. IS COMMITTED TO EXCELLENCE AND
DEDICATED TO YOUR SUCCESS.

Our philosophy is to treat our customers like family. We want you to


succeed, and we are willing to do everything possible to help you make it
happen. We have the proof to back up our claims. We strive to accelerate
billions of careers with great courses, accessibility, and affordability. We
believe that continuous learning and knowledge evolution are the most
important things to keep re-skilling and up-skilling the world.
Planning and creating a specific goal is where IPSpecialist helps. We can
create a career track that suits your visions as well as develop the
competencies you need to become a professional Network Engineer. Based
on the career track you choose, we can also assist you with the execution and
evaluation of your proficiency level, as they are customized to fit your
specific goals.
We help you STAND OUT from the crowd through our detailed IP training
content packages.

Course Features:
- Self-Paced Learning
Learn at your own pace and in your own time
- Covers Complete Exam Blueprint
Prepare for the exam with confidence
- Case Study Based Learning
Relate the content to real-life scenarios
- Subscriptions that Suit You
Get more and pay less with IPS subscriptions
- Career Advisory Services
Let the industry experts plan your career journey
- Virtual Labs to Test Your Skills
With IPS vRacks, you can evaluate your exam preparation
- Practice Questions
Practice questions to measure your preparation standards
- On-Request Digital Certification
On-request digital certification from IPSpecialist LTD.

About the Authors:


This book has been compiled with the help of multiple professional engineers
who specialize in different fields, e.g., Networking, Security, Cloud, Big
Data, IoT, etc. Each engineer develops content in his/her own specialized
field, which is then compiled to form a comprehensive certification guide.
About the Technical Reviewers:

Nouman Ahmed Khan


AWS-Architect, CCDE, CCIEX5 (R&S, SP, Security, DC, Wireless), CISSP,
CISA, CISM, Nouman Ahmed Khan is a Solution Architect working with a
major telecommunication provider in Qatar. He works with enterprises,
mega-projects, and service providers to help them select the best-fit
technology solutions. He also works as a consultant to understand customer
business processes and helps select an appropriate technology strategy to
support business goals. He has more than fourteen years of experience
working in Pakistan/Middle-East & the UK. He holds a Bachelor of
Engineering Degree from NED University, Pakistan, and an M.Sc. in
Computer Networks from the UK.

Abubakar Saeed
Abubakar Saeed has more than twenty-five years of experience managing, consulting on, designing, and implementing large-scale technology projects, with extensive experience heading ISP operations, solutions integration, product development, pre-sales, and solution design. Emphasizing adherence to project timelines and delivering to customer expectations, he always leads projects in the right direction with his innovative ideas and excellent management skills.

Dr. Fahad Abdali


Dr. Fahad Abdali is a seasoned leader with extensive experience managing
and growing software development teams in high-growth start-ups. He is a
business entrepreneur with more than 18 years of experience in management
and marketing. He holds a Bachelor's Degree from NED University of
Engineering and Technology and a Doctor of Philosophy (Ph.D.) from the
University of Karachi.

Mehwish Jawed
Mehwish Jawed is working as a Senior Research Analyst. She holds Master's and Bachelor's degrees in Telecommunication Engineering from NED University of Engineering and Technology. She worked under the supervision of an HEC-approved supervisor and has more than three published papers, including both conference and journal papers. She has great knowledge of TWDM Passive Optical Networks (PON). She has also worked as a Project Engineer and Robotics Trainer at a private institute and has research skills in the field of communication networks. She has both technical knowledge and sound industry awareness, which she utilizes effectively when needed. She also has expertise in cloud platforms such as AWS, GCP, Oracle, and Microsoft Azure.

Ayesha Shaikh
Ayesha Shaikh is a professional technical content writer. She holds a
Bachelor’s Degree in Computer Engineering from Sir Syed University of
Engineering & Technology. She has hands-on experience on SDN (Software
Defined Network), Java, .NET development, machine learning, PHP,
Artificial Intelligence, Python, and other programming and development
platforms, and Database Management Systems like SQL, Oracle, and so on.
She is an excellent research analyst and is capable of performing all her tasks
in a fast and efficient way.
Free Resources:
For Free Resources, please visit our website and register to access your desired resources, or contact us at: helpdesk@ipspecialist.net

Career Report: This report is a step-by-step guide for a novice who wants to
develop his/her career in the field of computer networks. It answers the
following queries:

What are the current scenarios and future prospects?
Is this industry moving towards saturation, or are new opportunities knocking at the door?
What will the monetary benefits be?
Why get certified?
How to plan, and when will I complete the certifications if I start today?
Is there any career track that I can follow to accomplish the specialization
level?
Furthermore, this guide provides a comprehensive career path towards being
a specialist in networking and highlights the tracks needed to obtain
certification.
IPS Personalized Technical Support for Customers: Good customer
service means helping customers efficiently, in a friendly manner. It is
essential to be able to handle issues for customers and do your best to ensure
they are satisfied. Providing good service is one of the most important things
that can set our business apart from the others of its kind.
Excellent customer service will result in attracting more customers and attaining maximum customer retention.
IPS offers personalized TECH support to its customers to provide better
value for money. If you have any queries related to technology and labs, you
can simply ask our technical team for assistance via Live Chat or Email.

Our Products

Study Guides
IPSpecialist Study Guides are the ideal guides to developing the hands-on
skills necessary to pass the exam. Our Study Guides cover the official exam
blueprint and explain the technology with real-life case study-based labs. The
content covered in each Study Guide consists of individually focused
technology topics presented in an easy-to-follow, goal-oriented, step-by-step
approach. Every scenario features detailed breakdowns and thorough
verifications to help you completely understand the task and associated
technology.
We use mind maps extensively in our Study Guides to explain the technology visually. Our Study Guides have become a widely used tool to learn and remember information effectively.

vRacks
Our highly scalable and innovative virtualized lab platforms let you practice the IPSpecialist Study Guide at your own time and place, at your convenience.

Exam Cram
Our Exam Cram notes are a concise bundle of condensed notes covering the complete exam blueprint. They are an ideal and handy resource to help you remember the most important technology concepts related to the certification exam.

Practice Questions
IPSpecialist's Practice Questions are designed from a certification exam perspective. The collection of questions from our Study Guides is prepared with the exam blueprint in mind, covering not only important but also necessary topics. It is an ideal document for practicing and revising for your certification.
Content at a glance

Chapter 07: Oracle Autonomous Database


Chapter 08: Design for Hybrid Cloud Architecture
Chapter 09: Migrate On-Premises Workloads to OCI
Chapter 10: Design for Security and Compliance
Chapter 11: Real-World Architecture
Answers
Acronyms
References
About Our Products

Table of Contents
Chapter 07: Oracle Autonomous Database
Introduction
OCI Database Services
Advantages
VM DB Systems
Bare Metal DB System
Oracle RAC
Exadata DB System
MySQL Database Service DB System
Autonomous Database
Autonomous Database on Shared Exadata Infrastructure
Autonomous Database on Dedicated Exadata Infrastructure
Autonomous Database Administration
Automated Administration
Provision of an Autonomous Database
OCI Policies for Autonomous Dedicated
Service Lifecycle
Lab 7-01: Create an Autonomous Database
Introduction
Problem
Solution
Start and Stop Autonomous Database
Connecting to Autonomous Database
Autonomous Database Credentials
Wallet Management and Expiration
Predefined Services
Fully Elastic
Fully Managed
Connectivity Options
Events and Alarms
Events
Notification Service
Alarms
ADB Backups and Recovery
Securing Autonomous Database
Monitoring Autonomous Database
Performance Hub
Scaling
Auto Scaling
Move ADB to Another Compartment
Prerequisites
ADB Cloning
Full Clone
Metadata Clone
Oracle Data Pump Export from Oracle Database
Refreshable Clone
Managing Users
Create Users
Oracle Data Guard
Oracle Data Guard Configuration
Securing the Database System
MySQL Database System
MySQL Database Service
Ease of Use
Security
Fully Managed
In-Memory, Query Processing Engine
HeatWave Architecture
NoSQL on OCI
Configurable ACID
Extreme Availability Through Fault Containment Zones
MR Tables with Cross-Region Service
Security
Easy Online Elastic Expansion and Contraction
HTTP Access
Lab 7-02: Create an Autonomous Data Warehouse
Introduction
Problem
Solution
Mind Map
Practice Questions
Chapter 08: Design for Hybrid Cloud Architecture
Introduction
Software-Defined Data Center
OCVS Overview
VMware Software
Oracle Cloud Infrastructure
OCVS Product Overview
vSphere: The Hypervisor
vSAN: Software-Defined Storage
NSX-T: Software-Defined Networking and Security
HCX: Hybrid Cloud Extension
Use Cases, Key Benefits, and Values
Use Cases
Key Benefits
Core Aspect and Values
SDDC Deployment
SDDC Provisioning Flow
SDDC VLANs
Deploying a Highly Available SDDC
Design Hybrid Cloud
Connecting Between On-premises and OCVS
HCX Components
HCX Layer 2 Extension – Configuration
Mobility Optimized Networking (MON)
Migration with HCX
Use Cases for Hybrid Clouds
OCVS Network Topology
NSX-T Architecture Component
NSX-T Routing and Bridging
N-VDS – The Logical Switch
OCVS Network Architecture
Access to Microsoft Azure
Partnership Benefits
Common Use Cases
OCI-Azure Interconnect Setup
Lab 8-01: Access to Microsoft Azure
Introduction
Problem
Solution
Introduction to IPv6 with Oracle
Overview
Use Cases
Benefits
IPv4 and IPv6
IPv6 Addressing Model
IPv6 Plan in Cloud
Mind Map
Practice Questions
Chapter 09: Migrate On-Premises Workloads to OCI
Introduction
Planning Data Migration to OCI
Applications
Database
Regulatory Compliance
Storage
Networking
Business Criticality
Deployment Environment Type
Disaster Recovery
Offline and Online Migration
Offline Transport – Data Transfer Service
Data Transfer Appliance
Data Transfer Disk
Data Transfer Appliance Specifications
How is Data Secure in Transit?
CLI for Appliance Transfer
How Data Transfer Works
Transporting VMs, Data, and Files to Oracle Cloud
Online Transport – Storage Gateway
Overview
Storage Gateway Service
Database Migration – Methods and Best Practices
Database Migrations
Core Use Cases
Differentiated Use Cases
Migration Types
OCI Database Migration – Use Cases
Oracle Solutions to Migrate Database to Oracle Cloud
Tools for All Steps of the Migration Process
Migration Steps – Direct Online Migration
Migration Steps – Indirect Offline Migration
Start Migration – Export Initial Load
Pricing
Migrating to Autonomous Database
Migration Options and Considerations
Migration to Autonomous Database
Migration Methods
Loading and Import Options to ADB
ADB APIs for Object Store Access
Autonomous Database Packages
Using Oracle Object Store Staging
ADB Statistics and Hints for Data Being Loaded
ADW: Managing DML Performance and Compression
Database Migration Service
Database Migration Terminology
Migrating Using Data Pump
Export an Existing Oracle Database to Import into ADB
Import Data Using Oracle Data Pump
Access Log Files for Data Pump Import
Zero Downtime Migration
Introduction
Architecture
Database Support and Supported Configuration
Benefits
ZDM - Enhancement
Migration from AWS RDS to Oracle ADB
Migration from Solaris & AIX based Source Database
Direct Data Transfer Support for Physical Migration
Existing RMAN Backup usage as Migration Source
ZDM - Methodologies
Mind Map
Practice Questions
Chapter 10: Design for Security and Compliance
Introduction
IAM – Federation
General Concept
User Group Mapping
User Types
Understanding Sign-in Options
Oracle Identity Cloud Service IDCS
IAM Service
When to Use OCI IAM and IDCS
Lab 10-01: Federation
Introduction
Problem
Solution
Web Application Firewall
Introduction
What is meant by WAF?
WAF Concepts
Benefits
Features
Use Cases
Challenges and Whitelisting Capabilities
WAF Architecture
WAF Points of Presence (PoPs)
Shared Responsibility Model for WAF
Lab 10-02: Working with WAF Policy
Introduction
Problem
Solution
Mind Map
Practice Questions
Chapter 11: Real-World Architecture
Introduction
OCI Architecture Overview
OCI Architecture
Introduction
Components of OCI Architecture
OCI Best Practices
Hub-Spoke Architecture
Introduction
Architecture
HPC Architecture
Introduction
Architecture
Components
Considerations
Mind Map
Practice Questions
Answers
Chapter 07: Oracle Autonomous Database
Chapter 08: Design for Hybrid Cloud Architecture
Chapter 09: Migrate On-Premises Workloads to OCI
Chapter 10: Design for Security and Compliance
Chapter 11: Real-World Architecture
Acronyms
References
About Our Products
About Oracle Certifications
The Oracle Certification Program certifies applicants on Oracle product and
technology skills and knowledge.
Depending on the degree of certification, credentials are awarded based on a
combination of completing tests, training, and performance-based
assignments. Oracle certifications are measurable indicators of experience
and skill that, according to Oracle, can help a candidate stand out among
employers.

Certification Exam Level


Oracle Certified Junior Associate (OCJA), Oracle Certified Associate (OCA),
Oracle Certified Professional (OCP), Oracle Certified Master (OCM), Oracle
Certified Expert (OCE), and Oracle Certified Specialist (OCS) are the six
levels of Oracle Certification credentials.
These credentials are divided into nine technology pillars, as well as product families and product groupings.

Junior Associate – is a beginner-level certification aimed at students in secondary schools, community colleges, and four-year colleges and universities, as well as professors who teach basic Java and computer science classes.

Associate - is the first step on the path to becoming an Oracle Certified Professional. The OCA credential assures that a candidate has a good foundation in core skills for maintaining Oracle products.

Professional - expands on the OCA's essential abilities. The Oracle Certified Professional displays a high degree of knowledge and skills in a specific area of Oracle technology. The OCP credential is frequently used by IT managers to assess the qualifications of staff and job seekers.
Master - acknowledges the highest level of demonstrated abilities,
knowledge, and capabilities. OCMs are capable of answering the toughest
queries and resolving the most difficult challenges. The Oracle Certified
Master credential verifies a candidate's skills by requiring them to pass a
series of performance-based tests. The certification usually improves on the
OCA's foundational skills and the OCP's more advanced skills.

Expert - recognizes competency in certain, niche-oriented technologies, architectures, or domains. These credentials are not tied to the traditional OCA, OCP, or OCM hierarchy, but they frequently build on abilities that have been demonstrated as an OCA or OCP. The Expert program encompasses a wide range of competencies, from fundamental abilities to advanced technology expertise.

Specialist - credentials are primarily implementation-oriented certificates aimed at current Oracle partners' personnel, though they are open to all candidates, partner or not. These certifications are based on extremely specific products or skill sets, and they serve as a reliable indicator of a candidate's level of experience in a given field.

OCI Architect Professional Certifications


The Oracle Cloud Infrastructure 2021 Architect Professional certification is the next level of OCI Architect certification for people who have already acquired an OCI Architect Associate certification. To be eligible for this certification, you must have already completed the Associate level. An Oracle Cloud Infrastructure 2021 Certified Architect Professional has demonstrated the skills and knowledge needed to plan, design, deploy, and maintain systems on OCI. The skills verified by this certification include planning and designing solutions; implementing and operating solutions; designing, implementing, and running databases; designing a hybrid cloud architecture; migrating on-premises workloads to OCI; and designing for security and compliance. It is recommended that you have current training and field experience. You will find the following in this study guide:
Covers the complete OCI Architect Professional blueprint
Summarized content
Case Study based approach
Practice Questions
100% pass guarantee
Mind maps

Exam Questions: Case study, short answer, repeated answer, MCQs
Number of Questions: 50-70
Time to Complete: 120 minutes
Exam Fee: $245

This exam measures your ability to accomplish the following technical tasks:
Plan and design solutions in Oracle Cloud Infrastructure (OCI)
Implement and operate solutions in OCI
Design, implement and operate databases in OCI
Design for hybrid cloud architecture
Migrate on-premises workloads to OCI
Design for Security and Compliance
Recommended Knowledge
Plan and design solutions to meet business and technical requirements
Create architecture patterns including N-tier applications,
microservices, and serverless architectures
Design scalable and elastic solutions for high availability and disaster
recovery
Implement solutions to meet business and technical requirements
Operate and troubleshoot solutions on OCI
Conduct monitoring, observability, and alerting in OCI
Manage infrastructure using OCI CLI, APIs and SDKs
Evaluate and implement databases
Operate and troubleshoot databases
Design and implement hybrid network architectures to meet high
availability, bandwidth and latency requirements
Evaluate multi-cloud solution architectures
Design strategy for migrating on-premises workloads to OCI
Implement and troubleshoot database migrations
Design, implement and operate solutions for security and governance
Design, implement and operate solutions to meet compliance
requirements

All the required information is included in this course:
Domain 1: Plan and design solutions in Oracle Cloud Infrastructure (OCI)
Domain 2: Implement and operate solutions in OCI
Domain 3: Design, implement, and operate databases in OCI
Domain 4: Design for hybrid cloud architecture
Domain 5: Migrate on-premises workloads to OCI
Domain 6: Design for Security and Compliance
Chapter 07: Oracle Autonomous Database

Introduction
In this chapter, you will learn about the benefits, features, and capabilities
provided by Oracle Database.
Oracle database offerings include in-memory, NoSQL, and MySQL
databases. It also includes cost-optimized and high-performance versions of
Oracle Database, the world's premier convergent, multi-model database
management system. Customers may simplify relational database
environments and minimize management tasks using Oracle Autonomous
Database, which is accessible on-premises via Oracle Cloud@Customer or in
the Oracle Cloud Infrastructure.

OCI Database Services


The Database service provides self-managed and co-managed Oracle Database cloud solutions. Autonomous databases are fully managed, pre-configured systems that can handle transaction processing or data warehouse workloads. Bare metal, virtual machine, and Exadata DB systems that you may modify with the resources and settings that match your needs are examples of co-managed solutions.
You can swiftly set up an Autonomous Database or a co-managed database system. Oracle owns and oversees the infrastructure, although you have full access to the database's capabilities and operations.
Advantages
Reduce Operational Cost Up to 90% - Machine learning-driven automation
can help you save money on monitoring, securing, and maintaining your
Oracle databases. The database is provisioned, scaled and tuned, protected
and patched, and repaired without the need for user intervention.
Guard Against Data Breaches – Oracle database security solutions for
encryption, key management, data masking, privileged user access controls,
activity monitoring, and auditing let you assess, detect, and avoid data
security threats. It also helps reduce the risk of a data breach while also
making compliance easier and faster.
Single Database for Data Types – Oracle's converged database frees
application developers from complex transformations and redundant data.
Deploy Where You Need - Oracle Database can be installed in your datacenter, public cloud, or private cloud, as needed. This gives you the option of deploying in your datacenter when data residency and latency are critical, or in the cloud when you need scalability and the most comprehensive set of capabilities.
License Types and Bring Your Own License (BYOL) Availability
Oracle Cloud Infrastructure supports a licensing model with two license
types. The cost of the cloud service with included license also comprises a
license for the Database service. Oracle Database customers can use their
existing licenses with Oracle Cloud Infrastructure using Bring Your Own
License (BYOL). It is worth noting that Oracle Database customers are still
responsible for adhering to the license restrictions that apply to their BYOLs,
as specified in their program order.
VM DB Systems
Single-node DB systems on bare metal or virtual servers, as well as 2-node
RAC DB systems on virtual machines, are available through Oracle Cloud
Infrastructure. A specific fast-provisioning single-node virtual machine
solution is available if you need to provision a database system for
development or testing.
Bare Metal DB System
A single bare metal server running Oracle Linux 7 with locally-attached
NVMe storage makes up a bare metal DB system. If the node fails, you can
simply launch another system and restore the databases from current backups.
When you set up a bare metal DB system, you choose a single Oracle Database edition for all of the databases on that system. The chosen edition cannot be changed later. Each database system can have multiple Database Homes, each with its own set of features. There can only be one database in each Database Home, and it must be the same version as the Database Home.
Oracle RAC
A cluster is a collection of interconnected computers or servers that seem as
though they are one server to end-users and applications. Oracle Real
Application Clusters (Oracle RAC) allows you to cluster an Oracle database.
Oracle RAC relies on Oracle Clusterware to connect several servers and
make them work as a single system.
Oracle Clusterware is an Oracle Database-integrated portable cluster
management tool, which is also necessary for Oracle RAC to work. Oracle
Clusterware allows non-cluster Oracle databases as well as Oracle RAC
databases to utilize Oracle's high-availability infrastructure.
Exadata DB System
Oracle's Exadata is a database machine that provides customers with
optimized enterprise-level databases and associated workloads capabilities.
Exadata is a composite database server machine that combines Oracle Database software with hardware originally developed by Sun Microsystems.

EXAM TIP: Oracle recommends using the new resource model to provision new Exadata Cloud Service instances. After a period in which both resource models are supported, the DB system resource model will be retired for Exadata instances.
DB Systems Backup/Restore
In every Oracle database setup, backing up your database system is essential. Backups can be kept in Oracle Cloud Infrastructure Object Storage or in local storage on the DB system. As described below, each backup destination has advantages, disadvantages, and requirements to consider.
Object Storage
Object Storage stores the backup in the Oracle Cloud
Infrastructure Object Storage
Durability: High
Availability: High
Back-up and Recovery Rate: Medium
Advantages: High durability, performance, and availability
Figure 7-01: Object Storage Concept

Local Storage
Backups are stored in the Fast Recovery Area of the DB System locally
Durability: Low
Availability: Medium
Back-up and Recovery Rate: High
Advantages: Optimized backup and fast point-in-time recovery
Disadvantages: Backup is unavailable when the DB System becomes unavailable
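The trade-offs between the two backup destinations above can be sketched as a small decision helper. The ratings simply restate the lists above; the dictionary and function names are illustrative and are not part of any OCI SDK or API.

```python
# Illustrative sketch of the backup-destination trade-offs described above.
# The ratings mirror the text; the helper and its name are hypothetical.

DESTINATIONS = {
    "object_storage": {"durability": "high", "availability": "high",
                       "backup_recovery_rate": "medium"},
    "local_storage":  {"durability": "low", "availability": "medium",
                       "backup_recovery_rate": "high"},
}

def pick_destination(need_high_durability: bool) -> str:
    """Prefer Object Storage when durability matters; otherwise the
    locally attached Fast Recovery Area gives the fastest restore."""
    if need_high_durability:
        return "object_storage"
    return "local_storage"

print(pick_destination(True))   # object_storage
```

In practice, many deployments use both: local backups for fast point-in-time recovery, plus Object Storage backups for durability.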
DB System DR
Database DR can be implemented using Oracle Active Data Guard and
Oracle GoldenGate.
Active Data Guard is a simple and cost-effective solution for Oracle
Database data security and availability. While replication is running, it
maintains an exact physical clone of the production copy at a remote
and open read-only site.
Oracle GoldenGate is a multi-master replication, hub and spoke
deployment, and data transformation product. GoldenGate offers
customers a variety of choices for dealing with a wide range of
replication needs, including heterogeneous hardware platforms.

Figure 7-02: DB System DR
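The choice between the two DR products just described can be paraphrased as a tiny chooser function. The function and its criteria only restate the text; it is not an Oracle tool or API.

```python
# Illustrative chooser between the two DR products described above.
# The function name and inputs are hypothetical.

def dr_product(multi_master: bool, heterogeneous: bool) -> str:
    """GoldenGate covers multi-master, hub-and-spoke, and heterogeneous
    replication; Active Data Guard maintains a physical standby copy of
    an Oracle database."""
    if multi_master or heterogeneous:
        return "Oracle GoldenGate"
    return "Oracle Active Data Guard"

print(dr_product(False, False))  # Oracle Active Data Guard
```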

EXAM TIP: An Oracle Data Guard setup on Oracle Cloud Infrastructure is limited to one standby database for each primary database.
DB System HA
The web, application, and database tier are all deployed in an availability
domain defined as this architecture's primary (active) environment. A
redundant topology is installed in another availability domain in the same
region as a standby (non-active) environment.
DB System HA and DR (Single AD Region)
This design distributes the Oracle Enterprise Performance Management
applications and databases over two fault domains in a single availability
domain.
The topology's application instances are all active, and the application's
redundant instances are hosted on separate fault domains. As a result, the
instances do not share physical hardware. Within the availability domain, the
architecture shown in Figure 7-03 ensures high availability. The instances in
the other fault domains are unaffected by a hardware failure or maintenance
event that impacts one fault domain. If one instance fails, traffic is routed to
the availability domain's remaining instances, which continue to process the
request.

Figure 7-03: DB System HA and DR (Single AD Region)
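The fault-domain behavior described above can be modeled as a toy simulation: redundant instances sit in separate fault domains, and a failure in one fault domain leaves the others unaffected. The class and function names below are invented purely for illustration.

```python
# Toy model of routing across fault domains in one availability domain.
# Names (FaultDomain, route) are illustrative only, not OCI constructs.

class FaultDomain:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

def route(request: str, fault_domains) -> str:
    """Send the request to any healthy fault domain; a hardware failure
    or maintenance event in one FD does not affect the others."""
    for fd in fault_domains:
        if fd.healthy:
            return fd.name
    raise RuntimeError("no healthy fault domain in this availability domain")

fds = [FaultDomain("FAULT-DOMAIN-1"), FaultDomain("FAULT-DOMAIN-2")]
fds[0].healthy = False          # hardware failure or maintenance event
print(route("GET /app", fds))   # FAULT-DOMAIN-2
```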

DB System HA and DR (Multiple AD Region)


The web tier, application tier, and database tier are all deployed in a single region designated as the architecture's primary (active) site. In a remote region, a redundant topology is implemented as a standby (non-active) site.
The primary and standby topologies are symmetric in the architecture,
providing the same computation and storage capacity. You can recover the
application in a distant region if the primary topology is unavailable for any
reason. HTTPS requests from external users, such as Hyperion Financial
Reporting Web Studio users, are routed through a global DNS resolver to the
internet gateway associated with the VCN in the presently active region.
Figure 7-04: DB System HA and DR (Multiple AD Region)

| | VM DB System | BM DB System | Exadata DB System | Autonomous – Shared | Autonomous – Dedicated |
|---|---|---|---|---|---|
| Management | Customer | Customer | Customer | Oracle | Oracle |
| Updates | Customer initiated | Customer initiated | Customer initiated | Automatic | Customer policy control |
| Scaling | Storage (CPU cores cannot be changed) | CPU (storage cannot be changed) | CPU within Exa, across Exa racks | Both CPU and storage | Both CPU and storage |
| Backups | Customer initiated | Customer initiated | Customer initiated | Automatic | Automatic |
| Storage | Block Storage | Local NVMe disks | Local disks and NVMe flash cards | Local disks and NVMe flash cards | Local disks and NVMe flash cards |
| RAC (2-node) | Available | Not Available | Available | Not Available | Not Available |
| Data Guard | Available | Available | Available* | Available | Available |

Table 7-01: OCI Database Services
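A few rows of Table 7-01 can be encoded as a small lookup structure for quick comparison when studying. The dictionary below just restates a subset of the table; the helper function and all names are illustrative, not OCI artifacts.

```python
# A subset of Table 7-01 restated as a lookup; values come from the table.

SERVICES = {
    "vm":            {"management": "Customer", "backups": "Customer initiated",
                      "rac": True,  "data_guard": True},
    "bare_metal":    {"management": "Customer", "backups": "Customer initiated",
                      "rac": False, "data_guard": True},
    "exadata":       {"management": "Customer", "backups": "Customer initiated",
                      "rac": True,  "data_guard": True},
    "adb_shared":    {"management": "Oracle",   "backups": "Automatic",
                      "rac": False, "data_guard": True},
    "adb_dedicated": {"management": "Oracle",   "backups": "Automatic",
                      "rac": False, "data_guard": True},
}

def fully_managed(service: str) -> bool:
    """Per the table, Oracle manages only the Autonomous offerings."""
    return SERVICES[service]["management"] == "Oracle"

print([s for s in SERVICES if fully_managed(s)])  # ['adb_shared', 'adb_dedicated']
```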
EXAM TIP: Oracle database offerings include in-memory, NoSQL, and
MySQL databases, as well as cost-optimized and high-performance
versions of Oracle Database, the world's premier convergent, multi-model
database management system.
MySQL Database Service DB System
The MySQL instance is housed in a logical container called a DB System. It has a user interface that allows you to manage operations like provisioning, backup, restore, monitoring, etc. It also has a read/write endpoint to connect to the MySQL server using normal protocols.
The components of MySQL Database Service DB System are as follows:
A compute instance (specified by the related shape's resources)
Oracle Linux, which is a Linux-based operating system
MySQL Server Enterprise Edition is a commercial version of MySQL
Server. Only the most recent version of MySQL 8.0 is supported. The
MySQL instance gets upgraded to the most recent maintenance release
automatically (8.0.24 to 8.0.25, for example)
The DB System that is connected to a subnet of the Virtual Cloud
Network using a Virtual Network Interface Card
Block storage attached to the DB System. For all block storage,
MySQL Database Service employs the Higher Performance option.
Autonomous Database
The Autonomous Database from Oracle Cloud Infrastructure is a fully
managed, pre-configured database environment with four workload types:
Autonomous Transaction Processing, Autonomous Data Warehouse, Oracle
APEX Application Development, and Autonomous JSON Database. You do not
have to configure or manage any hardware or install any software. After
provisioning, you can scale the number of CPU cores or the database storage
capacity at any time without affecting availability or performance.
Autonomous Database creates the database and also handles the following
maintenance tasks:
Backing up the database
Patching the database
Upgrading the database
Tuning the database
The Autonomous Database is built on the Oracle database. Therefore, the
applications and tools that support the Oracle database also support the
Autonomous database. These tools and applications connect to the
Autonomous database using standard SQL*Net connections, and they can run
either in your data center or in the public cloud.
For example, Oracle Analytics Cloud and other Oracle Cloud services are
pre-configured for the Autonomous Data Warehouse. You have connectivity
to the Oracle database with SQL*Net, JDBC, and ODBC.
The Oracle Cloud service integrates easily with Oracle Analytics Cloud, Oracle
GoldenGate, the Oracle Cloud Marketplace, and the Oracle Integration service.
Deployment Options
Oracle is the only public cloud that supports bare metal and virtual machines
using the same APIs, hardware, firmware, software stack, and network
infrastructure.
There are three deployment options for the Autonomous Database:
Autonomous – Shared: You provision and manage only the
Autonomous database, and Oracle handles the infrastructure it runs
on. It supports both the Autonomous Transaction Processing (ATP)
and Autonomous Data Warehouse (ADW) workload types
Autonomous – Dedicated: You can configure your environment much
like the one you may currently have in your datacentre, with
exclusive use of Exadata hardware. Like the shared option, it supports
both transaction processing and data warehouse workloads. Thus, it
provides flexibility and allows you to have the Oracle database and
Autonomous Database Cloud Service wherever you want and need it
Cloud@Customer Infrastructure: With Oracle Cloud@Customer, you
have the Oracle database cloud service running in your own
datacentre. You may want this option if you have data sovereignty,
regulatory, or network latency requirements
Available Workload Types
The workload types supported by Autonomous Database are as follows:
Autonomous Data Warehouse - Designed for decision-making and data-
warehousing workloads. It provides quick queries across enormous amounts
of data.
Autonomous JSON Database - Designed for the creation of JSON-centric
applications. Document APIs and native JSON storage are available to
developers.
Autonomous Transaction Processing - Designed to handle transactional
workloads. For short-running queries and transactions, it provides high
concurrency.
Oracle APEX Application Development - Designed for application
developers who need a transaction processing database for Oracle APEX
application development. It helps to create and deploy low-code applications,
including databases.
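As a quick illustration, the four workload types above correspond to short workload codes used when provisioning through OCI tooling. The sketch below uses the codes DW, OLTP, AJD, and APEX, which are assumptions based on common OCI usage rather than values stated in this guide; verify them against current OCI documentation:

```python
# Hedged sketch: mapping Autonomous Database workload types to the
# short workload codes used by OCI provisioning tooling. The code
# strings are assumptions; check the OCI documentation before use.
WORKLOAD_TYPES = {
    "DW":   "Autonomous Data Warehouse - analytics and data warehousing",
    "OLTP": "Autonomous Transaction Processing - transactional workloads",
    "AJD":  "Autonomous JSON Database - JSON-centric applications",
    "APEX": "Oracle APEX Application Development - low-code applications",
}

def describe(db_workload: str) -> str:
    """Return a human-readable description for a workload code."""
    try:
        return WORKLOAD_TYPES[db_workload]
    except KeyError:
        raise ValueError(f"unknown workload type: {db_workload}")

print(describe("DW"))
```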
What does Oracle Autonomous Database do to help customers?
The biggest benefit is the massive decrease in time spent managing
databases. The Autonomous database eliminates the complexity of operating
and securing an Oracle database while giving customers the highest levels of
performance, scalability, and availability.

EXAM TIP: Compared with a conventional database, an Autonomous
database performs all the tasks more efficiently without human intervention.
It runs SQL with no access to the operating system or a container database.
Key Features
There are many features of an Autonomous database that help store all your
business data and facilitate any workload as your business grows. Some
important key features include:
Oracle DB Management – Oracle simplifies end-to-end management
of the database.
Scalable – Fully elastic scaling of both compute and storage.
Self-Scaling – This database provides the capability to do scaling
independently to fit your database workload with no downtime
Autoscaling – It has autoscaling capability. This feature allows your
database to automatically use more CPU and IO resources when the
workload requires it.
Support Existing Applications – The Autonomous database supports
existing applications that can run both on the Cloud environment and in
your current on-premises environment.
Optimize Query Performance – The Autonomous database can
optimize the query performance with pre-configured resource profiles
for different types of users.
Built-in tools – The web-based notebook tools are designed not only
for sharing SQL-based data but also for building interactive documents
Database migration utility – With the database migration utility, you
can easily migrate from other databases, such as MySQL, Amazon
Redshift, or SQL Server
Fully Managed – Oracle Autonomous Database is a fully managed
database. Oracle automates end-to-end management of the Autonomous
database, from provisioning a new database to self-scaling. It can grow
or shrink the resources that you need, whether compute or storage
Backup Recovery – Backups are automated. If needed, you can restore
and recover the database from the backups already stored in the
Oracle cloud
Automated Tuning
The Autonomous database does not require any tuning and is designed as a
load-and-go service. You simply start the service, define the details, load the
data, provide user access, and then run queries against the database. There is
no need to consider parallelism, partitioning, indexing, or compression
details; the service automatically configures the database for high-performance
queries.
Oracle also provides utilities to easily load data into the database, such as
Oracle Data Pump and SQL*Loader. With the Enterprise Manager Database
Migration Workbench, you can easily move all the resources from your
current environment to the Oracle Cloud Database.
Elasticity
The Autonomous database is a fully elastic service that provides two scaling
features – scaling and self-scaling. With this database service, you can start
small: simply specify the number of CPU cores and the amount of storage in
terabytes (TB) when you first create the database, and resize it later as your
requirements change.
At any time, you can scale the CPU core count and storage capacity up or
down. When you make resource changes for your Autonomous Database, the
resources grow or shrink as needed, without any downtime or service
interruption. This is considered one of the most powerful features of the
database.
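Scaling can also be driven programmatically. The minimal sketch below shows the idea; the input validation is illustrative, and the commented-out OCI SDK call (client, model, and field names assumed from the OCI Python SDK) requires real credentials and a real database OCID:

```python
# Hedged sketch: scaling an Autonomous Database programmatically.
# The validation rules below are illustrative; the actual OCI SDK call
# is shown in comments because it needs OCI credentials and an OCID.
def validate_scale_request(cpu_core_count: int, storage_tb: int) -> None:
    """Sanity-check a scale request before sending it to the API."""
    if cpu_core_count < 1:
        raise ValueError("cpu_core_count must be at least 1")
    if storage_tb < 1:
        raise ValueError("storage is specified in whole terabytes, minimum 1")

validate_scale_request(4, 2)  # raises on invalid input, returns None otherwise

# Uncomment after `pip install oci` and configuring ~/.oci/config:
# import oci
# client = oci.database.DatabaseClient(oci.config.from_file())
# client.update_autonomous_database(
#     "<autonomous-database-ocid>",   # placeholder OCID
#     oci.database.models.UpdateAutonomousDatabaseDetails(
#         cpu_core_count=4, data_storage_size_in_tbs=2))
```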

EXAM TIP: The most powerful feature of an Autonomous database is
automated tuning, which requires you to do only three things:
Define tables
Load data
Run query
Autonomous Optimization
Autonomous Transaction Processing (ATP)
Oracle Autonomous Transaction Processing (ATP) is a cloud database
service that makes it simple to run and secure high-performance databases.
The service automates provisioning, configuring, tuning, scaling, patching,
encrypting, and repairing of databases. All of Oracle's advanced database
features, such as Real Application Clusters (RAC), multitenancy,
partitioning, in-memory, advanced security, and advanced compression, are
included in the service. The service is designed to handle a wide range of
applications, from simple web apps to huge and complex applications that are
crucial to a company's operations.
Autonomous Data Warehouse (ADW)
Oracle Autonomous Data Warehouse is a cloud data warehouse service that
takes care of all the complexities of operating a data warehouse, securing
data, and developing data-driven applications. It automates data warehouse
provisioning, configuration, security, tuning, scaling, and backup. It includes
tools for self-service data loading, data transformations, business models,
and automatic insights, as well as built-in converged database capabilities
that make it easier to query multiple data types and perform machine
learning analysis.

EXAM TIP: Both ATP and ADW share the Autonomous Database
platform. The difference is how the service is optimized within the
database.
Autonomous Data Warehouse:
Data is stored in columnar format
Data summaries are used for query optimization for analytics; ADW
automatically parallelizes query execution to access a large volume of data
in a short amount of time
Achieves an optimal execution plan by gathering statistics as part of all
bulk load activities
Autonomous Transaction Processing:
Data is stored in row format
Automatically detects any missing indexes to help process the data most
efficiently
Data is added using more traditional insert or update statements
Table 7-02: ADW vs. ATP

EXAM TIP: The provisioning process of an Autonomous Database is very
easy and fast. There is no need to specify:
Tablespace
File system
Operating System
Initialization parameters
Autonomous Database on Shared Exadata Infrastructure
With this option, you provision and manage only the Autonomous Database,
while Oracle deploys and manages the Exadata infrastructure. Both
transaction processing and data warehouse workloads can be provisioned on
shared Exadata infrastructure.
On shared Exadata infrastructure, Autonomous Database uses per-second
billing: OCPU and storage usage is metered in seconds. The minimum usage
period for OCPU resources is one minute.
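The effect of per-second metering with a one-minute minimum can be sketched with a small calculation. The hourly rate used here is a made-up placeholder, not an actual OCI price:

```python
# Hedged sketch of per-second OCPU billing with a one-minute minimum,
# as used by Autonomous Database on shared Exadata infrastructure.
# The hourly rate passed in is a placeholder, not a real OCI price.
MIN_BILLABLE_SECONDS = 60  # minimum usage period for OCPU resources

def ocpu_cost(seconds_used: int, ocpus: int, rate_per_ocpu_hour: float) -> float:
    """Return the cost for one continuous usage period."""
    billable = max(seconds_used, MIN_BILLABLE_SECONDS)
    return ocpus * rate_per_ocpu_hour * billable / 3600

# A 30-second burst on 2 OCPUs is billed as a full minute:
print(round(ocpu_cost(30, 2, 1.0), 4))   # billed for 60 s -> 0.0333
print(round(ocpu_cost(600, 2, 1.0), 4))  # 10 minutes -> 0.3333
```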
Autonomous Database on Dedicated Exadata Infrastructure
Autonomous databases can run on either dedicated or shared Exadata
infrastructure. You can optimize your database for transaction processing or
data warehouse workloads, and you can use Autonomous Data Guard to
create a standby database for disaster recovery.
Autonomous Database on dedicated Exadata infrastructure allows users to set
up a private database cloud within the Oracle public cloud. As a result, it is
an excellent platform for consolidating multiple databases, regardless of
workload type or size, and for providing database as a service within an
organization. Dedicated infrastructure ensures complete isolation from other
tenants and allows you to tailor operational policies, such as software update
schedules, availability, and density, to your specific needs.

Figure 7-05: OCI DB System Options

Benefits
Dedicated infrastructure provides complete isolation from other
tenants.
It provides an opportunity to customize the operational policies, such as
the software update schedule, availability, and density, so that you can
match your business requirements.
Autonomous Dedicated Workloads Isolation
Once the dedicated Exadata infrastructure is provisioned, the administrator
can then partition the system into multiple levels:
Databases
Container Databases (CDB)
VM Cluster
Secure isolation zone within the public cloud
Separated hardware (Exadata infrastructure)
One can have the desired number of VM clusters and container databases.
Each container database can have its own update strategy, backup retention,
availability, and density. By default, only one container database is
necessary, and all user-created databases are provisioned within that
container, inheriting its update strategy and backup retention. The network
path is through a VCN, and the subnet is defined by the Exadata
infrastructure hosting the database. By default, this subnet is private, with no
public internet access to the databases. This ensures that only your company
can access the Exadata infrastructure and databases.
Autonomous Management Model
With Oracle Autonomous Database Dedicated management model,
customers are only responsible for their data, schemas, and encryption keys.
Oracle automatically manages the database hypervisor, operating system, and
hardware. This allows customers to focus on what is important to them and
allows Oracle to own any issue, whether that is patching the database or
hardware.
Figure 7-06: Autonomous Management Model

Dedicated Network Architecture


The network path to an Autonomous Database Dedicated is through a Virtual
Cloud Network (VCN) and a subnet defined in that dedicated infrastructure
that is hosting that database.
Autonomous Database Dedicated can also take advantage of network
services provided by OCI, including subnet, VCN peering, connection to the
on-premises database through the IP secure VPN, and FastConnect dedicated
corporate network connections. You can also take advantage of the Oracle
Microsoft partnership that enables customers to connect their OCI resources
and Microsoft Azure resources through a dedicated private connection.
However, a move to the public cloud is not possible for some customers,
perhaps due to industry regulation, performance concerns, or integration
with legacy on-premises applications.
Regulations
Regulations or policies require data to be local.
Latency
Applications require the performance of local LAN.
Integration
Databases tightly coupled with on-premises applications
Risk
Concerns about multiple tenants sharing the cloud.
Note: For these types of customers, Exadata Cloud@Customer should meet
their requirements for strict data sovereignty and security by delivering
high-performance Exadata Cloud Services capabilities in their datacenter
behind their own firewall.
Autonomous Database: Cloud@Customer Benefits
Autonomous Database on Exadata Cloud@Customer provides the same
service as Autonomous Database Dedicated in the public cloud. Therefore,
you would get the same simplicity, agility, performance, and elasticity.
Similarly, it also provides a very fast and simple transition to an autonomous
cloud because you can easily migrate on-premises databases to Exadata
Cloud@Customer. Once a database is migrated, any existing applications can
simply reconnect to the new database and run without needing application
changes. Data never leaves your data center, making this a safe way to adopt
a cloud model.
Figure 7-07: Cloud@Customer Benefits

Lightweight Local Cloud Control Plane Server


Each Cloud@Customer rack includes two local control plane servers to
manage the communication to or from the public cloud. The local control
plane acts on behalf of requests from the public cloud, keeping
communications secure and consolidated. Platform control plane commands
are sent to the Exadata Cloud@Customer system through a dedicated
WebSocket secure tunnel. Oracle cloud operations staff use the same tunnel
to monitor the autonomous database on Exadata Cloud@Customer for
maintenance and troubleshooting.
The two control plane servers installed in the Exadata Cloud@Customer
rack host the secure tunnel endpoint and act as a gateway for access to the
infrastructure. They also host the components that orchestrate the cloud
automation and aggregate and route telemetry messages from the Exadata
Cloud@Customer platform to the Oracle Support Services infrastructure.
Moreover, they host images for server patching.
Figure 7-08: ADB-ExaC@C Gen 2 Network Connectivity

Simple Connectivity to the Datacenter Network


The Exadata database servers are connected to the customer-managed
switches via either 10 Gigabit or 25 Gigabit Ethernet.
Customers have access to the customer virtual machine via a pair of layer 2
network connections, which are implemented as Virtual Network Interface
Cards (VNICs) on tagged VLANs.
The physical network connections are implemented in an active-standby
configuration for high availability. Autonomous Database on Exadata
Cloud@Customer provides the best of both worlds (physical and cloud): you
get the patching, backup, scaling, and management automation of a cloud
service, without the data leaving the customer's data center.
Figure 7-09: Simple Connectivity to Database Network

ADB on ExaC@C: Resilience to Disrupted OCI Connectivity


With Autonomous Database on Exadata Cloud@Customer, you can perform
all of the automation you would expect from a cloud service, including
patching, backing up, scaling, and management, without the data or the
application having to leave the customer’s datacenter.
In the event that the Autonomous Database on Exadata Cloud@Customer
loses network connectivity to the OCI control plane, the ADB continues to
be available for applications and operations such as backups, and the loss of
connectivity does not impact auto-scaling. However, management and
monitoring of the Autonomous Database via the OCI console and APIs, as
well as access by Oracle Cloud operations, are not available until network
connectivity is restored.
The capabilities suspended in the case of a lost network connection include
infrastructure management (manual OCPU scaling, VM cluster creation),
management via OCI interfaces (web UI, OCI CLI, REST API/SDK,
Terraform), and Oracle Cloud operations access to perform maintenance
activities (patching).
ADB on ExaC@C: Database Backup Options
All Autonomous Databases are encrypted, and all data is encrypted at rest.
Data is automatically encrypted as it is written to storage. This encryption is
transparent to authorized users and applications because the database
automatically decrypts the data when it is read from storage.
There are several options for backing up the Autonomous Database
Cloud@Customer, including using a Zero Data Loss Recovery Appliance
(ZDLRA). You can back it up to locally mounted NFS storage or to the
Oracle public cloud.

Figure 7-10: ADB-ExaC@C Database Backup Options

Workflow and Functionality


In the typical workflow, the fleet administrator provisions the Exadata
infrastructure by specifying its size, availability domain, and region within
the Oracle Cloud. Once the hardware has been provisioned, the fleet
administrator partitions the system by provisioning VM clusters and
container databases. Then, developers, DBAs, and users provision databases
within those container databases using self-service tools.
Billing is based on the size of the Exadata infrastructure that is provisioned,
whether it is a quarter rack, half rack, or full rack. It also depends on the
number of CPUs that are being consumed.
It is also possible for a customer to use existing Oracle database licenses with
this service to reduce the cost.
Physical Characteristics and Constraints
Autonomous Database Dedicated supports the following Exadata
infrastructure models and shapes:
X7
X8
X8M
Currently, you can create a maximum of 12 VM clusters on an Autonomous
Database Dedicated infrastructure. Oracle also suggests limiting the number
of databases you provision to meet your preferred Service Level Agreement
(SLA).
Note:
To meet the High Availability (HA) SLA, Oracle recommends a maximum
of 100 databases.
To meet the extremely high availability SLA, Oracle recommends a
maximum of 25 databases.
Getting started with Autonomous Database on dedicated infrastructure
involves the following steps:
You need to increase your service limit to include the Exadata
infrastructure
Then, you need to create the fleet and DBS service roles.
You also need to create a necessary network model, VM cluster, and
container database for your organization.
After that, you need to provide access to the end-users who want to
create and use that Autonomous database.
General Selection Considerations
Autonomous Database requires a subscription to the Exadata
infrastructure for a minimum of 48 hours. Once subscribed, you can
test out your ideas and then terminate the subscription with no
ongoing cost.
While subscribed, you can control where you place the resources to
perhaps manage latency-sensitive applications.
You can also control patching schedules and software versions;
therefore, you can be sure that you are testing exactly what you need to.
You can migrate a database to Autonomous Database via export/import
capabilities through the object store, or through Data Pump and
GoldenGate.
As with any Autonomous database, once it is provisioned, you have
full access to both auto-scaling and all cloning capabilities.
Autonomous DB Feature Comparison
Feature: ADB-Dedicated | ADB-Shared
Autonomous resources: Autonomous Database, Autonomous Container Database, Autonomous VM Cluster, Autonomous Exadata Infrastructure | Autonomous Database
Private single-tenant Exadata infrastructure: Yes | No
Availability domain placement choice: Yes | No
Separate development, test, production life cycle: Yes | No
Maintenance scheduling for planned and unplanned (one-off critical) updates: Yes | No
Skip updates during critical business periods: Yes | No
Instant update (patch now): Yes | No
Configurable backup retention: Yes | No
Application continuity: Transparent Application Continuity | Application Continuity
Customer-managed TDE keys: OCI Vault and Oracle Key Vault | No
Data Guard replication mode: Sync (Max Availability) and Async (Max Performance) | Async
Data Guard active standby: Yes | No
SQL create/drop tablespace: Yes | No
Java in database: Yes | No
Predefined service connections: TCP and TCPS | TCPS
Table 7-03: ADB Feature Comparison
Monitoring Dedicated Infrastructure
While Autonomous Database automates most of the repetitive tasks that
DBAs perform, the application DBA will still want to monitor and diagnose
the database to maintain the highest performance and the greatest security
possible.
There are several tools at the application DBA's disposal, including
Enterprise Manager, Performance Hub, and the OCI console.
Availability
For Autonomous Dedicated, all the database operations are exposed through
the console UI and available through REST API calls, including:
Provisioning and stop/start lifecycle operations for dedicated database types
Unscheduled on-demand backups and restores
Scaling of CPU, storage, or other resources
Downloading connection information, including the wallet for encryption
Scheduling updates for the Exadata infrastructure, VM cluster, or container
database
Performance monitoring, scripting, and schema design are available through
SQL Developer:
Performance Hub, along with OCI Metrics and Notifications, in the
native Oracle Cloud Console
You can also monitor the database using existing Enterprise Manager
Grid Control deployments

Autonomous Database Administration


This section covers the administrative tasks that are automated for you with
an Autonomous database. Various administrative tasks can be performed on
an Autonomous database, such as provisioning the database, connecting to it,
and scaling resources. You will also learn how to clone the database and how
to take advantage of a new feature that writes Data Pump dump files directly
into object storage.
Automated Administration
Whether you are running Autonomous Transaction Processing or
Autonomous Data Warehouse, the Autonomous database automates database
backups. It applies patches to the database, including the latest security
patches and the latest database features. Oracle does all of this without any
downtime and upgrades the database to make sure it is running on the best
and latest version. Regardless of the type of workload, Oracle tunes the
database for you.
Full Database Lifecycle Automation
One of the Autonomous database features is that it manages everything for
you. The database will maintain the components of the infrastructure and the
database it runs on.
It ensures all the latest versions of patches are applied to keep the database
secure with the latest security patches and current with all the latest features
of the Autonomous database.
As your business grows, your database needs grow and transaction volumes
increase. The Autonomous database ensures that it runs at peak performance,
tuning the operating system and database operations to facilitate any
workload that you may have.
Complete Database Automation
Oracle Machine Learning Notebooks with the ADW help data scientists and
developers increase their productivity and reduce their learning curve using
familiar open-source Apache Zeppelin notebook technology. This allows
data scientists to gain insights into the company's data and make better
business decisions much quicker.
The notebook supports SQL, PL/SQL, and Python, providing developers with
language options when developing their modules. The AutoML user
interface is an Oracle Machine Learning interface that provides a no-code
automated machine learning capability. It allows business users to create and
deploy machine learning models with just a few clicks, without requiring an
extensive data science background.
With an Autonomous database, Oracle provides a database system that is
self-driving, self-tuning, self-repairing, and self-securing. It scales
automatically when you need more resources as your workloads increase,
and tunes itself as database activity requires. For example, it automatically
creates indexes when processing queries to ensure data is returned in the
most efficient and optimal manner.
The Autonomous database is a fault-tolerant, scale-out cluster that works
transparently for both OLTP and analytic workloads. This gives it the unique
capability of running mission-critical workloads.

Provision of an Autonomous Database


To create an Autonomous database (dedicated), you need to set up a Virtual
Cloud Network (VCN) first. After that, you create the shared infrastructure;
if you wish to create a dedicated infrastructure, you need to provision
Exadata infrastructure. Then, you create a container database in that
infrastructure and create the autonomous database.
The architecture shown in Figure 7-11 shows a public-facing Flask web
server connected to an autonomous database with a private endpoint
provisioned in Oracle Cloud Infrastructure. In this architecture, there are two
subnets in the VCN: one public and one private. The Autonomous Database
is provisioned in the private subnet for security reasons, while the Flask web
server sits in the public subnet. Flask is a micro web application framework
that contains a set of tools and libraries that make it easy to create a web
server. An Internet Gateway is used to communicate with the internet.
Figure 7-11: Example – Provision of an Autonomous DB

OCI Policies for Autonomous Dedicated


Dedicated – Roles
A successful private cloud is set up and managed using clean role separation
between the fleet administration group, the developers, and the DBA group.
The fleet administration group establishes governance constraints, including
budgeting, capacity, compliance, and SLAs, according to the business
structure. The physical resources are also logically grouped to align with this
business structure, and groups of users are given self-service access to the
resources within these groups.

Figure 7-12: Dedicated Roles

Dedicated – Fleet Administrators


Fleet administrators allocate budget by department and are responsible for
creating, monitoring, and managing the Autonomous Exadata infrastructure,
the Autonomous Exadata VM clusters, and the Autonomous container
databases. Fleet administrators must have an Oracle Cloud account or user to
perform these duties, with permission to manage these resources and to use
the network resources that must be specified when creating them.
Figure 7-13: Fleet Administrators

Dedicated – DBAs
Database Administrators (DBAs) create, monitor, and manage Autonomous
Databases. DBAs need to have an Oracle Cloud account or be Oracle Cloud
users, and those accounts need the necessary permissions to create and
access:
Autonomous Database
Autonomous Backups
Autonomous Container Database
While creating an Autonomous database, administrators define and gain
access to an admin user account inside the database. Through this account,
they get the necessary permissions to create and control database users.
Dedicated – Developers and Users
Database users and developers who write applications using or accessing an
autonomous database do not need Oracle Cloud accounts. They will be given
the network connectivity and authorization information they need to access
those databases by the database administrators.

Figure 7-14: Developers and Users

Service Lifecycle
You can manage the lifecycle of an Autonomous dedicated service through
the Cloud UI, Command-Line Interface, REST APIs, or through one of the
several language SDKs. The lifecycle operations that you can manage
include:
Capacity planning and setup
Provisioning and partitioning of Exadata Infrastructure
The provisioning and management of databases
The scaling of CPU storage and other resources
The scheduling of updates for the infrastructure, the VMs, and the databases
Monitoring through event notifications
Figure 7-15: Service Lifecycle

OCI allows fine-grained control over resources through the application of
policies to groups. These policies apply to any member of the group.
Dedicated Private Cloud Fleet and DB Admin IAM setup
For Oracle Autonomous Database on dedicated infrastructure, the resource
types are:
autonomous-exadata-infrastructure
autonomous-container-database
autonomous-database
autonomous-backups
The policy statement can be:
allow group <group> to <verb> <resources> in compartment
<compartment>
where:
GROUP is a set of users with the same privileges
COMPARTMENT is an operating context for a specific set of service
resources, accessible only to GROUPs that are explicitly granted
access
POLICY is used to bind privileges for a GROUP to a specific set of
resources in a COMPARTMENT
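The policy pattern above can be generated programmatically. A minimal sketch follows; the group, verb, and compartment names used in the example are illustrative placeholders:

```python
# Hedged sketch: building OCI IAM policy statements for the dedicated
# Autonomous Database resource types listed above. Group and
# compartment names are illustrative placeholders.
RESOURCE_TYPES = (
    "autonomous-exadata-infrastructure",
    "autonomous-container-database",
    "autonomous-database",
    "autonomous-backups",
)

def policy_statement(group: str, verb: str, resource: str, compartment: str) -> str:
    """Render one 'allow group ...' statement; reject unknown resource types."""
    if resource not in RESOURCE_TYPES:
        raise ValueError(f"unknown resource type: {resource}")
    return f"allow group {group} to {verb} {resource} in compartment {compartment}"

# Example: give the fleet administrators full control of the infrastructure.
print(policy_statement("FleetAdmins", "manage",
                       "autonomous-exadata-infrastructure", "Production"))
# allow group FleetAdmins to manage autonomous-exadata-infrastructure in compartment Production
```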

Lab 7-01: Create an Autonomous Database


Introduction
With a multi-model converged database and machine learning-based
automation for full lifecycle management, you can save up to 90% on
operational costs. Oracle Autonomous Database is a workload-optimized
cloud solution for transaction processing and data warehousing that runs
natively on Oracle Cloud Infrastructure.
This Oracle database provides self-driving, self-securing, and self-repairing
capabilities to secure and authorize data access.
Problem
An organization wants to shift all its on-premises resources to the Oracle
Cloud Platform. It wants to use the best database system that can automate
data protection and security as well as prevent unauthorized access. Which
Oracle service can be used for this requirement?
Solution
In OCI, Autonomous Database System provides self-driving and self-
securing capabilities.
Step 1: Navigate to Autonomous Database
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.
2. The Oracle Cloud Infrastructure Console dashboard will appear.
3. To launch any free resource, go to the navigation menu.
4. Go to Oracle Database.
5. Click on Autonomous Database.
Step 2: Configure Autonomous Database
6. Verify your Home Region and Compartment.
7. Click on Create Autonomous Database.

8. Now, provide the basic information for the Autonomous Database.


9. Select your compartment.
10. Write a unique display name.
11. Write a Database name.
12. Select Data Warehouse as a workload type.
13. Select Shared Infrastructure as deployment type.
14. Enable Always Free Autonomous database.
15. Select database version.
16. Set OCPU count.
17. Create username and password for Administrator Credential.
18. Click on Create Autonomous Database.
19. The database will be created within a few moments.
20. Verify the Autonomous Database Information.
21. Go to the Resources and click on Key History.
22. Check the Oracle manage key.

23. Choose the Allow secure access from everywhere access type.
24. Choose license type.
25. Click on Create Autonomous Database to deploy the database with
these settings.
Note: The provisioning process is straightforward and creates a
self-contained, self-securing, self-managing, and scalable
Autonomous Database.
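For teams that script provisioning, roughly the same configuration can be expressed with the OCI CLI. This is a sketch, not a verified command line: the flags follow the `oci db autonomous-database create` command, and the compartment OCID and password are placeholders you must supply.

```shell
oci db autonomous-database create \
  --compartment-id ocid1.compartment.oc1..example \
  --db-name mydw \
  --display-name "My DW" \
  --db-workload DW \
  --cpu-core-count 1 \
  --data-storage-size-in-tbs 1 \
  --admin-password '<admin-password>' \
  --is-free-tier true
```

The `--db-workload DW` flag corresponds to selecting Data Warehouse as the workload type in the console.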
Step 3: Connect to Autonomous Database Using SQL Developer
26. Click on the Service Console option from the overview page.
27. From the left-hand side given menu, click on Administration.
28. Click on Download Client Credential (Wallet).
29. Define your Password and confirm it.
30. Click on Download.
31. The wallet will download. Save the downloaded file to a specified
location.
32. After that, open SQL developer and click on a new connection.
33. Specify connection name.
34. Write the same username and password that you used for the
creation.
35. Select Cloud Wallet as Connection Type.
36. Click on Browse to locate the file.
37. Select the file from the downloaded section and click on Open.
38. Select Service name and click on Test.
39. After testing, save the connection.
40. Click on Connect.
41. For confirmation, write username and password and click on OK.
42. In SQL Developer, run the following query with the recently created
connection:
select /* low */ c_city,c_region,count(*)
from ssb.customer c_low
group by c_city,c_region
order by count(*);
43. Note the response from the output.
44. Now, create another connection.
45. Enter the same detail with a different name.
46. Test the connection and save it.

47. Confirm the connection using username and password.


48. Click on OK.

49. After creation of a new connection, run the following query in SQL
Developer:
select /* high */ c_city,c_region,count(*)
from ssb.customer c_high
group by c_city,c_region
order by count(*);

50. Observe the response from the output.


Step 4: Load the Local File into Autonomous Database
51. Now, select the connection and right-click to select Import Data.
52. Select your file and click on Open.
53. Write table name.
54. Browse and select the file.
55. Leave the remaining options as default.
56. Click on Next.
57. Choose the Columns according to your need.
58. Click on Next.

59. Set the Target Table Columns and click on Next.


60. Now, select Finish to complete the loading.

61. Click on OK for the successful import of data.


62. After loading, run the following query in SQL developer and check
the output:
select * from <file_name>;
63. Observe the output.

Step 5: Upload Data Files To Object Store


64. Go to the navigation menu, and click on “Object Storage &
Archive Storage.”
Note: In this type of storage, you can create a bucket to store large
amounts of data.
65. Select your Home Region and Compartment.
66. Click on Create Bucket.
67. Write your bucket name.
68. Choose Default Storage tier: Standard
69. Enable Auto-Tiering.
70. Use the encryption managed key offered by Oracle.
71. Click on Create.
72. After creation, verify your configuration details.
73. Click on Upload.
74. Write a name for your file.
75. Select Standard as Storage tier.
76. Select the file from the path.
77. Click on Upload.
78. Click on Close.
79. Verify the uploaded file.
Step 7: Create an Auth Token
80. Go to the Profile and click on User settings from the given options
present at the top of the OCI console.
81. The user’s identity page will appear.
82. Scroll down and click on Auth Tokens from the options given
inside the Resources.
83. Click on Generate Token.
84. Write a Description for this token.
85. Click on Generate Token.
86. Click on Close.

Note: Save the generated token for future use.


87. Verify the created token.
Step 8: Create Object-Store Credentials
88. Create another connection using the admin user.
89. Test and save the connection.
90. Click on Connect.
91. After that, run the following PL/SQL in SQL Developer, supplying
your OCI user name and the Auth Token from the previous step as the
password:
begin
  DBMS_CLOUD.create_credential(
    credential_name => 'OBJ_STORE_CRED',
    username => '',
    password => '');
end;
/
92. Wait for the completion.
Step 9: Copy Data from Object Store to Autonomous Transaction
Processing Database Tables
93. Go to the File and select Open.
94. Select the file of your choice and click on Open.
95. Select the connection and click on OK.
96. Run the following query in SQL Developer, using find and replace
to substitute your region name, bucket name, Object Storage
namespace, and tenant name:
begin
  dbms_cloud.copy_data(
    table_name => 'CHANNELS',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list =>
'https://swiftobjectstorage.<region>.oraclecloud.com/v1/<namespace>/<bucket>/chan_v3.dat',
    format => json_object('ignoremissingcolumns' value 'true',
                          'removequotes' value 'true')
  );
end;
/
97. Observe the output.
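The substitution in the previous step is mechanical, so it can help to see the URI assembled programmatically. A minimal sketch in Python, assuming only the Swift-style endpoint format shown in the query above (the sample region, namespace, and bucket values are illustrative):

```python
def swift_uri(region: str, namespace: str, bucket: str, object_name: str) -> str:
    """Build the Swift-style Object Storage URI that DBMS_CLOUD.COPY_DATA expects.

    The endpoint format is the one shown in the lab's query; region,
    namespace, and bucket are the values you would find-and-replace.
    """
    return (f"https://swiftobjectstorage.{region}.oraclecloud.com"
            f"/v1/{namespace}/{bucket}/{object_name}")


print(swift_uri("us-ashburn-1", "mytenancy", "mybucket", "chan_v3.dat"))
```

Running the helper with your own values produces the exact string to paste into file_uri_list.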
Step 10: Clone Autonomous Database
98. Navigate to the Autonomous Database and open the recently
created Autonomous Database.
99. Go to the overview page and expand the More Actions option.
100. Click on Create Clone.
101. Choose Full clone type.
102. Choose Clone from database instance as the Clone source.
103. Select your Home region.
104. Select your Compartment.
105. Write Display name and Database name.
106. Select the Always Free option for configuring the Database.
107. Choose the database version.
108. Select OCPU count.
109. Choose Storage in TB.
110. Write Username and Password.
111. Select Secure Access Type.
112. Choose your License Type.
113. Click on Autonomous Database Clone.
114. Verify the configured details.
Step 11: Restore the Autonomous Database
115. From the overview page, expand the More Actions option.
116. Click on Restore.
117. Enter Timestamp.
118. Click on Restore.
Start and Stop Autonomous Database
The Autonomous Database allows you to start your instance rapidly and stop
your instance on-demand to conserve resources and pause billing.
Note: While the instance is stopped, you are not charged for any CPU
cycles; however, you will still incur your monthly billing for
storage.
In addition to allowing you to start and stop your instance on-demand, it is
possible to scale your database instance on demand. This can be done very
easily by using the Database Cloud console.

EXAM TIP: Oracle provides you the ability to start and stop your
database with a few clicks very effectively. You can also invoke REST
services to perform automated operations like start and stop ADB.
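As an illustration of scripting these operations (the database OCID is a placeholder), the OCI CLI exposes start and stop as:

```shell
oci db autonomous-database stop  --autonomous-database-id <adb_ocid>
oci db autonomous-database start --autonomous-database-id <adb_ocid>
```

The same operations are available through the REST API and SDKs, which is what makes scheduled start/stop automation possible.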

Connecting to Autonomous Database


Oracle Autonomous Database always uses encryption to protect data at rest
and in transit. All data stored in the Oracle cloud and network communication
with the Oracle cloud is encrypted by default, and encryption cannot be
turned off.
By default, Oracle Autonomous Database creates and manages all the master
encryption keys used to protect your data, storing them in a secure PKCS12
keystore on the same Exadata system where the database resides.
If your company's security policies require it, Oracle Autonomous Database
can instead use keys that you create and manage, so that you control key
generation and rotation. An Autonomous Database automatically uses
customer-managed keys when the Autonomous Container Database in which it is
created is configured to use them; users who create and manage Autonomous
Databases therefore do not have to configure their databases to use
customer-managed keys.

Figure 7-16: Connecting Autonomous Database


Autonomous Database Credentials
The Autonomous Database only accepts secure connections, so you will need
to download the wallet credentials file before you can set up the connection.
The wallet is downloaded from the Autonomous Database Service
Console or from the DB Connection present on the instance details
page.
To access the ADB Service Console, find your database on the table
listing ADB instances and click the database name.
The wallet can also be downloaded through API calls.
Wallet Management and Expiration
When downloading Autonomous Database credentials, you can choose to
download either the instance wallet or the regional wallet.

Figure 7-17: Wallet Management and Expiration

There are several ways to connect to the Autonomous Database:
1. Connect to the Autonomous Database over the public internet from your
computer using JDBC or ODBC. Traffic uses TCP/IP with SSL and enters
the VCN, where your ACL or security list defines how the traffic is
routed: it can go into the public subnet, reaching a compute or
database instance, or it can go into the private subnet.
2. For example, you can connect ATP or ADW using the NAT or Service
gateway from the server that is running in a private subnet in the OCI
(same tenancy).
3. You can also connect from a database or compute instance by using two
different gateways: an internet gateway is used for the public subnet,
whereas a NAT gateway is used for the private subnet.

Figure 7-18: Connecting to ADB

Types of Wallet
Autonomous Database on shared infrastructure provides either:
Instance wallet
Regional wallet
Instance Wallet – The instance wallet contains only the credentials and keys
for the individual Autonomous Database being provisioned.
Regional Wallet – The regional wallet contains the credentials and keys for
all the Autonomous Databases in a specified region. A regional wallet should
only be used by database administrators.
In the case of an Autonomous Database dedicated, the wallet file only
contains the credentials and keys for a single Autonomous Database. There is
no regional file.
Predefined Services
The database comes with additional predefined service names. You should make
sure that users connecting to the database receive an appropriate share of
resources; to assist with this, predefined services are provided for
connecting to the Autonomous Database.
Predefined Database Service Names
TPURGENT - It is the highest priority application connection service
designed for time-critical transaction processing operations. This connection
service does support manual parallelism.
TP – It is an application connection for transaction processing operations.
This particular connection service does not run with parallelism.
High database service – It provides the highest level of resources for each
SQL statement, resulting in high performance; because of that, it supports
fewer concurrent SQL statements. Any SQL statement in this service can use
all of the database's CPU and I/O resources. The number of concurrent SQL
statements that can run in this service is three, and this number is
independent of the number of CPUs in the database.
Medium database service – This provides a lower level of resources per SQL
statement than high, potentially resulting in lower performance, but it
supports more concurrent SQL statements. The number of concurrent SQL
statements that can run in this service depends on the number of CPUs in
the database and scales with it.
Low database service – It provides the lowest level of resources for each SQL
statement but supports the most concurrent SQL statements. The number of
concurrent SQL statements that can run in this service is twice the number
of CPUs in the database.
Benefits of Predefined Services
Predefined services minimize application impact. With 16 OCPUs, you
would get the following number of concurrent queries before queuing would
kick in:
3 concurrent queries → High
20 concurrent queries → Medium
32 concurrent queries → Low
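The queuing thresholds quoted above can be expressed as a small helper. This is a sketch based only on the numbers in this section: HIGH is fixed at 3, LOW is twice the OCPU count, and the MEDIUM scaling factor is inferred from the 16-OCPU example (20 queries, i.e. a factor of 1.25, which is an assumption, not a documented formula):

```python
def concurrency_limits(ocpus: int) -> dict:
    """Concurrent-query limits before queuing, per the figures in this section."""
    return {
        "high": 3,                    # fixed, independent of OCPU count
        "medium": int(ocpus * 1.25),  # inferred from 20 queries at 16 OCPUs
        "low": ocpus * 2,             # twice the OCPU count
    }


print(concurrency_limits(16))  # {'high': 3, 'medium': 20, 'low': 32}
```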
Maintenance proactively drains services: TP services have a five-minute
drain, and batch services have a one-hour drain. Applications connect to a
predefined database service to control relative priority, SQL parallelism,
and the maximum number of concurrently executing users.
For example, most TP applications connect to the TP service, and most batch
applications connect to the LOW service.
Fully Elastic
The Autonomous Database is a fully elastic service, with complete elastic
service for both manual scaling and autoscaling.
When you first start working with the Autonomous Database, you need to define
the number of OCPUs and the amount of storage. On shared infrastructure,
Oracle provides one OCPU and one TB of storage by default. At any time, you
can scale the CPU and storage capacity up or down.
Now, when you make any resource changes for your Autonomous database,
the database resources will automatically shrink or grow without any
downtime or service interruption.
This capability is very helpful, especially when you need to keep your
business running and have the database scale to your workload needs.
Due to the scalability feature, you are not constrained by fixed building
blocks or predefined shapes. You can scale on-demand because there are
times you may need to have it to be immediately reflected.
Fully Managed
Oracle automatically provides end-to-end management of the Autonomous
Database: provisioning a new database, growing and shrinking compute and
storage resources, and applying patches, ensuring that you are secure and
running the latest version.
You want to manage the number of resources you allow your users to have
when processing transactions on the database. You can do this from the
database administration, where you can set your resource management rules
and then define what is required to connect.
Figure 7-19: Set Resources

Connectivity Options
You can connect to the Oracle Autonomous Database through SQL*Net, JDBC
(thick), ODBC (which leverages the Oracle Call Interface), or the JDBC
(thin) driver.
JDBC thin connections use the 12.1 and 12.2 thin drivers and a Java
keystore, defined in the JKS connection property.
JDBC and ODBC use Oracle Call Interface calls, and tools like SQL*Net
and Data Pump use it to communicate with the database. All connections use
SSL for encryption, and no unsecured connections are allowed to the
Autonomous Database. This is why clients require a security credentials
wallet to connect.
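As an illustration of wallet-based connectivity (the service alias and wallet path are placeholders), recent JDBC thin drivers accept the wallet directory directly in the connection URL via the TNS_ADMIN parameter:

```
jdbc:oracle:thin:@mydb_high?TNS_ADMIN=/opt/oracle/wallet/mydb
```

Here mydb_high is one of the predefined service aliases found in the downloaded wallet's tnsnames.ora, and the TNS_ADMIN path points at the unzipped wallet files.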
From the console, navigate to Identity → select the Users panel → select
Add Public Key
There are two connectivity options to establish a connection to the
Autonomous Database. One option is through the public internet directly, and
the other is using Oracle’s FastConnect service with public peering. The
second option provides private connections from on-premises networks.
Oracle Cloud offers the Oracle Cloud Infrastructure (OCI) Service Gateway,
which offers private access to Oracle services deployed in the Oracle service
network. This allows for additional levels of privacy and obfuscation for
customers who require complete network isolation and private security.

Figure 7-20: Connectivity Options

Public Internet
To review the options for establishing connectivity to an Autonomous
Database, consider the example shown in Figure 7-21. In this example, a
connection is established over the public internet between a customer's
on-premises network and the Oracle data center using SSL encryption. To
access the Autonomous Database from behind a firewall, the firewall must
permit the use of the port specified in the database connection when
connecting to the servers. The default port number for Autonomous Data
Warehouse is 1522.
Figure 7-21: Connectivity via Public Internet

FastConnect with Public Peering


Autonomous Database connectivity using FastConnect with public peering is
shown in Figure 7-22. In this case, a connection is established between a
customer's on-premises network and the Oracle data center through the
FastConnect with public peering service, using SSL encryption.

Figure 7-22: FastConnect with Public Peering

Note: The Oracle Autonomous Database connectivity options eliminate the
complexity of operating and securing high-performance databases.

Events and Alarms


Events
Oracle Cloud Infrastructure Events allows you to automate tasks based on
resource state changes across your tenancy. With Events, your development
teams can respond automatically when a resource's state changes.
Here are some examples of how Events can be used:
When a database backup is finished, send a notification to a DevOps
team.
When files are uploaded to an Object Storage bucket, convert them
from one format to another.
Oracle Cloud Infrastructure services emit events, which are structured
messages that signal resource changes. An event can be a Create, Read,
Update, or Delete (CRUD) operation, a resource lifecycle state change, or a
system event affecting a resource. For example, an event can be emitted when
a backup completes or fails, or when a file in an Object Storage bucket is
added, changed, or deleted. Events comply with the CloudEvents
industry-standard format, hosted by the Cloud Native Computing Foundation
(CNCF). This standard enables interoperability between different cloud
providers and between on-premises systems and cloud providers.
Services emit events for resources or data; Object Storage, for example,
emits events for buckets and objects. Services emit different kinds of
events for different resource types: create, update, and delete events, for
instance, are available for buckets and objects. The kinds of changes to a
resource that produce events are referred to as event types.
Creating rules is how you work with events; rules comprise a filter that you
define to designate events generated by your tenancy's resources. The filter
can be customized:
Filters can be set to match only selected events or all occurrences.
Filters can be defined based on how resources are tagged or on
specific values in event attributes.
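For instance, a rule's filter is a JSON condition. The sketch below (the event type and bucket name are illustrative) matches only object-creation events in one Object Storage bucket:

```json
{
  "eventType": ["com.oraclecloud.objectstorage.createobject"],
  "data": {
    "additionalDetails": { "bucketName": "my-uploads" }
  }
}
```

Events whose type and attributes match this condition trigger the rule's actions, such as publishing to a Notifications topic.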
Notification Service
The OCI notification service broadcasts messages to distributed components
through a publish-subscribe pattern, delivering secure, highly reliable, low
latency, and durable messages for applications hosted within the OCI tenancy
and external applications. It enables you to set up communication channels
for publishing messages using topics. When a message is published to a
topic, it is sent to all of the topic's subscriptions. If a subscriber's
endpoint does not acknowledge receipt of the message, the notification
service retries delivery; this can occur when the endpoint is temporarily
unavailable, such as when the mail server for an email address is down.
There are also other use cases for leveraging notifications, such as
directly publishing a message by way of the API or configuring alarms in
the Monitoring service. In this case, the OCI notification service notifies
you when certain Event rules are triggered in the Events service.
To set up the notification action, you must create a topic with one or more
subscriptions. You can use the OCI console, the OCI CLI, REST APIs, or
SDKs to create topics and configure subscriptions. Once the topic is created,
you will be required to configure at least one subscription. You can create a
subscription that will help for:
Email – Email sends an email to one or more addresses each time the
event is triggered.
Functions – The Functions option runs the specified function each time
the event is triggered. Alternatively, you can use the Function rule
action directly.
REST API – If you have a particular REST web service that you want
to receive that Event message on, whether it is hosted in the OCI or
hosted externally, you can configure the HTTP option providing the
custom URL for that service.
PagerDuty – A PagerDuty subscription creates a PagerDuty incident
by default when the topic receives the Event message.
Slack – A slack subscription is available to be configured to send the
event to the specified Slack channel.
SMS – The SMS subscription option cannot be directly used for
receiving Event messages.
Once you have created the topic and subscription, you can edit the Event
rule's action. After selecting Notification as the Action Type, you
select the:
1. OCI Notification compartment
2. Appropriate topic
Alarms
The alarm feature of the Monitoring service works in tandem with the
notification service to notify you when metrics breach the triggers
specified by your alarms.
A simple illustration of the alarm feature is shown in Figure 7-23. The
illustration starts with a resource emitting metric data into the
Monitoring service. If a particular threshold is breached, the alarm is
triggered. When an alarm is triggered, it sends an alarm message to a
configured topic. The topic follows the pub/sub model of the notification
service, which then sends the message to all of the topic's subscriptions,
including email, a serverless function call, Slack, SMS, etc., as shown in
the figure:

Figure 7-23: Alarms Configuration

There are three different states for alarms:


Firing state – when the alarm has been triggered
Reset state – when the metric falls back below the trigger and the alarm resets
Suspended state – when the alarm is temporarily suspended
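An alarm's trigger is written in the Monitoring service's query language (MQL). A hedged example follows; the metric name, interval, and threshold are illustrative, not taken from the source:

```
CpuUtilization[1m].mean() > 80
```

When the one-minute mean of the metric stays above the threshold for the alarm's configured duration, the alarm enters the firing state and publishes a message to its topic.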

ADB Backups and Recovery


The Autonomous Database automatically backs up your database for you. The
retention period is 60 days; you can restore and recover your database to
any point in time within this retention period. The Oracle Autonomous
Database automatically provides weekly full backups and daily incremental
backups.
Manual backups of the database are not necessary. However, you can take a
manual backup using the OCI console before any change you wish to make to
the data in the database, for example, before an extract, transform, and
load (ETL) run. Manual backups are placed into Object Storage. When you
initiate a point-in-time recovery, Oracle Cloud decides which backups to
use to recover most quickly.
In addition to automatic backups, the Autonomous Database also allows you
to take manual backups using your own OCI Object Storage. You need to
define object storage credentials and provide your own OCI Object Storage
tenancy URL, and you will also need to create a bucket to hold the backups.
There are some rules for taking manual backups. The manual backup
configuration is a one-time task; once it is configured, you can trigger a
manual backup at any time. When creating the Object Storage bucket for
manual backups, the bucket name must follow the convention
backup_<database-name>, where <database-name> is the database name, not the
display name. In addition, the bucket name must be all lowercase.
Once you have configured your object storage bucket to meet these rules, you
should set a database property “default_backup_bucket.” This points to the
object storage URL, and it uses the swift protocol.
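As a sketch (the region, namespace, and database name are placeholders to fill in), setting the property might look like:

```sql
ALTER DATABASE PROPERTY SET
  default_backup_bucket = 'https://swiftobjectstorage.<region>.oraclecloud.com/v1/<namespace>/backup_<database-name>';
```

The URL follows the Swift endpoint format and must point at the bucket named according to the convention above.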
After creating your mapping to the object storage location, you should create
a database credential inside your database.
You can create a credential for your Oracle Cloud Infrastructure Object
Storage account using:
DBMS_CLOUD.CREATE_CREDENTIAL:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => '<YOUR ADB USER NAME>',
    password => 'password');
END;
/
After creating the credential, you can set the database property
default_credential to the credential you created in the previous step. For
example:
ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DEF_CRED_NAME';

Securing Autonomous Database


Security is paramount to Oracle, and you should be aware of what measures
are taken to secure the Autonomous database. All the data is stored in an
encrypted format in the Oracle database. Only authenticated users and
applications can access the data when they connect to the database.
Certificate-based authentication uses an encrypted key that is stored in the
wallet. Therefore, Oracle makes sure that only those that have access to the
database can and should access the data.
The wallet contains a collection of files that include the key and other
information required to connect to your database. A wallet exists on both
the client, where the applications run, and the server, the Autonomous
Database; the key on the client must match the key on the server for the
connection to succeed.
The database client must also connect using SSL/TLS (TLS being the more
current transport layer security protocol). TLS encrypts data being
exchanged between the server and the client. This ensures that there is no
unauthorized access to the database and that communication between the
client and server is fully encrypted; the connection cannot be intercepted
or altered. You can also restrict access to the Autonomous Database to
specific IP addresses using an Access Control List (ACL), which blocks all
IP addresses that are not listed from accessing the database.

Monitoring Autonomous Database


The Autonomous Database is self-tuning, along with all its other
self-managing features. As an administrator, you must still be aware of the
activity on your database, both in real time and over a period of time. The
monitoring feature gives you a simple way to monitor SQL activity,
including the database CPU, the currently running statements and processes,
and the connection service used to run each SQL statement. The monitoring
feature also shows the total number of queued statements, along with system
and SQL statistics, both in real time and over a range of time.
Performance Hub
Performance Hub allows you to view all the performance data available for a
specific period of time. With this feature, you can analyze and tune the
performance of a selected database, whether it is an Oracle Cloud
Infrastructure database or a regular database that is not running in the
Oracle cloud.
With this tool, you can view real-time and historical performance data. An
Oracle database administrator can use this tool to perform SQL monitoring,
view workloads, or inspect blocking sessions.

Scaling
The Autonomous Database allows you to scale both OCPUs and storage up and
down, independently of each other. You can scale your OCPUs up and back
down without touching your storage, and do the same with storage.
In addition, you can set up auto-scaling. The database will automatically
scale when it detects the need, up to three times the base number of OCPUs
that you have allocated or provisioned for the Autonomous Database.
Auto Scaling
The Autonomous Database continuously and autonomously monitors the
overall system performance. It is able to adapt to meet the requirements of
your business workloads. Suppose the workload requires additional CPU
resources to perhaps meet a business goal. In that case, the Autonomous
Database is able to dynamically adapt and increase the number of CPUs to
meet that requirement.
The Autonomous Database is able to scale the number of CPUs that are
available to you up to three times your pre-defined baseline. When you create
an Autonomous Database instance, auto-scaling is enabled by default.
However, you can explicitly disable it, either when provisioning or later
through the Oracle Cloud Infrastructure console or the ADB Service Console.
The database can consume up to three times more CPU and IO resources with
auto-scaling enabled than the number of OCPUs currently displayed in the
Scale-Up/Down box. When auto-scaling is enabled, the database will utilize
additional CPU and IO resources if your workload necessitates them without
requiring any direct action.

EXAM TIP: Auto-scaling does not influence the parallelism or concurrency
level of the predefined service that you use to connect to the Autonomous
Database.
Billing
From a billing perspective, you are only charged for the base level of
OCPUs that you have provisioned at any given time. However, when
auto-scaling kicks in, billing is based on the average number of OCPUs
consumed over the period of time.
Billing Scenario # 1
The Autonomous Database is active with 4 OCPUs, auto-scaling is enabled,
and the workload is light. Over the span of an hour, CPU utilization stays
fairly low. Therefore, the average OCPU consumption over that hour is
reported, and billed, as four OCPUs.
Figure 7-24: Billing Scenario # 1

Billing Scenario # 2
The Autonomous Database is active with 4 OCPUs, auto-scaling is enabled,
and this time the workload is heavier. Over the span of an hour, the
database consumes additional OCPUs to meet the business requirements. In
this case, after 30 minutes the active number of OCPUs becomes 8; the
number of OCPUs decreases again when the heavier workload is no longer
present.
Note: Scaling up to three times the number of OCPUs does not exactly
equal three times; it could be based on the required workloads.
In this case, the average OCPU consumption over the period of an hour was
only 6 OCPUs.

Figure 7-25(a): Billing Scenario # 2


Figure 7-25(b): Billing Scenario # 2
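The averaging in the two scenarios can be checked with a few lines of Python; this is only arithmetic over the numbers quoted above, not an official billing formula:

```python
def billed_ocpus(usage):
    """Average OCPU consumption over an hour.

    `usage` is a list of (ocpus, minutes) segments covering the period.
    """
    total_minutes = sum(minutes for _, minutes in usage)
    weighted = sum(ocpus * minutes for ocpus, minutes in usage)
    return weighted / total_minutes


print(billed_ocpus([(4, 60)]))           # Scenario 1: flat 4 OCPUs -> 4.0
print(billed_ocpus([(4, 30), (8, 30)]))  # Scenario 2: bursts to 8 -> 6.0
```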

Move ADB to Another Compartment


You can move Autonomous Database resources or shared resources from one
compartment to another compartment to meet your business requirements.
There are many reasons to move ADB resources from one compartment to
another: there may be a different subnet you would like the Autonomous
Database to use, or business applications within or accessible from the
other compartment that you want the database to work with.
To do this, you simply move the Autonomous Database from one compartment to
another; the automatic backups associated with the database are moved along
with it.
Prerequisites
You must have an appropriate privilege in the compartment to move
the autonomous database from the source compartment to the target
compartment
Once the Autonomous Database is moved to the target compartment, that
compartment's inherited policies apply immediately and affect access to the
resource and its dependent resources through the console

ADB Cloning
With all the powerful features of the Autonomous database, the database
provides cloning where you can choose to clone either a full database or the
database metadata. This clone can be taken from a live-running autonomous
database or a backup.
In a full clone, you will create a new database with the source data and
the metadata
The metadata clone will create a new database with the source database
metadata but without the data, which makes it ideal for creating a
template of the source database
There is another option available called “refreshable clone.” This option is a
read-only clone that stays connected to the actual source.

EXAM TIP: A full clone contains both the data and the metadata, whereas a
refreshable clone also contains the data but is read-only; it keeps current
with the changes made on the original (master) database.
Full Clone
When creating a full clone, the minimum storage you can specify is the
source database actual used space rounded to the next TB.
You can clone the Autonomous Database only within the same tenancy and the
same region as the source database. The optimizer statistics are copied
from the source to the clone database during the provisioning of either a
full or a metadata clone. This way, the clone's tables already carry the
optimizer statistics, so when you load data into them, queries behave the
same way they would against the source, because the table statistics
already exist.
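The "rounded to the next TB" sizing rule for a full clone can be sketched as a small helper. This is an illustrative sketch only; the actual minimum is computed by the service:

```python
import math

def min_clone_storage_tb(source_used_gb: float) -> int:
    """Minimum storage (in TB) for a full clone: the source database's
    actual used space rounded up to the next whole terabyte.
    Illustrative only -- the service computes this for you."""
    used_tb = source_used_gb / 1024  # treat 1 TB as 1024 GB
    return max(1, math.ceil(used_tb))

# A source database using 2,300 GB needs at least 3 TB for a full clone.
print(min_clone_storage_tb(2300))   # -> 3
print(min_clone_storage_tb(512))    # -> 1
```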
Metadata Clone
Metadata clone creates a new database with all of the source database's
schema metadata but no data.
Note: The Autonomous Database can be configured to use private
endpoints by cloning from an existing Autonomous Database that has a
public endpoint.
Oracle Data Pump Export from Oracle Database
Oracle Data Pump can now perform exports from your Autonomous Database,
create the dump files, and write those dump files directly into Oracle
Cloud Object Storage.
Because Data Pump supports exporting into Object Storage, you can use it
to migrate data between your Autonomous Database and the services you
manage.

Figure 7-26: Oracle Data Pump Export from Oracle DB

Refreshable Clone
The key point for the refreshable clone is that keeping it in sync with
the source database is handled by the administrator, who is responsible
for performing the sync operation. The operation requires only a few
clicks and runs as an automated process from the database console.
It is also very important to note that the refreshable clone can trail the
source Autonomous Database for up to 7 days. After that time, if the clone
has not been refreshed or kept in sync with the source database, it
becomes a standalone, read-only copy of the original source database.
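The seven-day trailing window can be sketched as a simple check. This is a hypothetical helper for illustration, not a service API:

```python
from datetime import datetime, timedelta

REFRESH_WINDOW = timedelta(days=7)  # the clone may trail the source up to 7 days

def clone_status(last_refresh: datetime, now: datetime) -> str:
    """If a refreshable clone has not been refreshed within 7 days of the
    source, it becomes a standalone, read-only copy (illustrative only)."""
    if now - last_refresh <= REFRESH_WINDOW:
        return "refreshable"          # still inside the sync window
    return "standalone read-only"     # window missed; disconnected from source

now = datetime(2022, 5, 10)
print(clone_status(datetime(2022, 5, 6), now))   # -> refreshable
print(clone_status(datetime(2022, 5, 1), now))   # -> standalone read-only
```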

EXAM TIP: The cloning feature is used to create a point-in-time copy of
your Autonomous Database instance for testing, development, or analytics.

Managing Users
When you provision an Autonomous Database for the first time, the admin
user is already pre-created for you, and you would have provided that
password when you created the database. After that, you can create additional
users either as end-users who have access to the database or your application
to use the database.
Creating and managing users is a very easy task with the Autonomous
Database. Oracle simplifies it by pre-creating a Data Warehouse Role
(DWROLE) for developers and data warehouse users. The role is very easy
to use; each user simply needs a password that adheres to Oracle's
password complexity rules (valid for 365 days). DWROLE includes the basic
privileges that a developer, data warehouse user, analyst, and so on
requires to use the database without issue.
Create Users
Provisioning users is straightforward; there is no need to specify the
default tablespace, temporary tablespaces, and so on. The creation process
requires a username, identified by a password, and then a grant of
DWROLE to that user.
This can all be done with a regular client utility. However, if you do not
have a client utility to connect to the database, it can be done from the
Database Actions console or the SQL Developer Web screen. An example
script to create a user is shown below:
SQL> create user ocitest identified by P#ssw0rd12##;
User OCITEST created.
SQL> Grant dwrole to ocitest;
Grant succeeded.
Once the database is provisioned, you may need to change the admin user's
password, which can be done through regular client tools. Alternatively,
it can also be done through the console.

Oracle Data Guard


Oracle Data Guard is a suite of services that allows production Oracle
databases to survive calamities and data corruptions by creating, maintaining,
managing, and monitoring one or more standby databases. These standby
databases are kept as copies of the production database by the Oracle Data
Guard. Then, if the production database is unavailable due to a scheduled or
unplanned outage, Oracle Data Guard can convert any standby database to
the production role, reducing the outage's downtime.
Traditional backup, restore, and clustering approaches can be combined with
Oracle Data Guard to give a high level of data security and availability. Other
Oracle technologies, such as Oracle Streams and Oracle GoldenGate,
leverage Oracle Data Guard transport services for efficient and reliable redo
transmission from a source database to one or more remote destinations.
Administrators can improve production database performance by shifting
resource-intensive backup and reporting tasks to standby systems using
Oracle Data Guard.
Oracle Data Guard Configuration
One primary database and up to thirty destinations can be included in an
Oracle Data Guard configuration.
Oracle Net connects the members of an Oracle Data Guard setup, which may
be geographically distant. As long as the members of an Oracle Data Guard
configuration can communicate with one another, there are no constraints on
where they can be situated. You can, for example, have a standby database in
the same data center as the primary database and two additional standby
databases in different data centers.
Primary Database
One production database, often known as the primary database, serves as the
primary role in an Oracle Data Guard architecture.
The primary database is the one that the majority of your apps use.
A single-instance Oracle database or an Oracle Real Application Clusters
(Oracle RAC) database can be used as the primary database.
Standby Database
A transactionally consistent duplicate of the primary database is referred to as
a standby database.
You can generate up to thirty standby databases from a backup copy of the
primary database and use them in an Oracle Data Guard configuration. Oracle
Data Guard keeps each standby database up-to-date by sending redo data
from the primary database and then applying it to the standby database.
A standby database can be a single-instance Oracle database or an Oracle
RAC database like the primary database.
Physical Standby Database
Provides a physically identical replica of the primary database, including
database structures on disk that are block-for-block identical to the primary
database. The database schema is the same, including indexes. Redo Apply,
which recovers the redo data received from the primary database and applies
it to the physical standby database, keeps the physical standby database
synced with the primary database.
A physical standby database can be utilized for business objectives other than
disaster recovery on a limited basis.
Logical Standby Database
Although the physical arrangement and structure of the data may be different,
it has the same logical information as the production database. SQL Apply
converts the data in the redo received from the primary database into SQL
statements and then executes the SQL statements on the logical standby
database. It keeps the logical standby database synchronized with the primary
database.
In addition to disaster recovery, a logical standby database can be utilized for
various business applications. This allows users to access a logical standby
database at any time for queries and reporting.
Automatic and Manual Failover Options
Automatic Failover Scenario
If the primary Autonomous Database becomes completely unavailable, the
switchover button will turn into a failover button
Automatic failover is triggered automatically; there is no user action
required
No user action is required, but automatic failover is allowed to succeed
only when no data loss occurs
Recovery Time Objective (RTO) is two minutes; Recovery Point Objective
(RPO) is 0 minutes
Manual Failover Scenario
When automatic failover is unsuccessful, the switchover button becomes a
failover button, and the user can trigger a manual failover should they
wish to do so
The system automatically recovers as much data as possible, minimizing any
potential data loss; however, you may experience a few seconds or minutes
of data loss
You should only perform a manual failover in a true disaster scenario,
accepting that a few minutes of potential data loss could occur, to ensure
that your database is back online as soon as possible
Recovery Time Objective (RTO) is two minutes; Recovery Point Objective
(RPO) is five minutes
Table 7-04: Automatic and Manual Failover Options

Key Capabilities
Disaster recovery - Oracle Data Guard automates the management of one or
more synchronized copies of a live database, ensuring that no data is lost in
the event of a primary database outage.
In-memory database replication - In-memory redo replication assures that
duplicated data blocks are isolated from underlying corruption such as
disk corruption and provides automatic complete validation.
Protection flexibility - Data Guard offers three alternative protection options
for data replication, allowing you to strike a balance between data loss
prevention and performance.
Real-time query and DML offload - Real-time query and data manipulation
language leverages the standby database for queries, reports, and occasional
updates without affecting the primary database.

Securing the Database System


Various security features need to be applied to the database system. Some of
the security capabilities are summarized in table 7-05:
Instance security isolation: Bare metal DB system
Network security and access control: VCN, security lists, VCN public and
private subnets, route tables, service gateway
Secure and highly-available connectivity: DRGs, VPN, and FastConnect for a
dedicated high-throughput connection
User authentication and authorization: IAM tenancy, compartments, security
policies, API signing key, SSH key
Data encryption: DBaaS TDE, local storage, object storage encryption at
rest
End-to-end TLS: LBaaS with TLS 1.2, customer-provided certificates
Auditing: OCI API audit logs
Table 7-05: Security Capabilities

MySQL Database System


This section will provide the benefits of the MySQL database service, the
only MySQL public cloud service that is 100% developed, managed, and
supported by the Oracle MySQL team. This includes the MySQL Enterprise
Edition Advanced Features and Tools.
MySQL Database Service
MySQL database cloud service in the Oracle cloud infrastructure is a fully
managed, OCI native service. It is built on MySQL Enterprise edition and is
100% developed, managed, and supported by the Oracle MySQL team. There
are three major categories of the MySQL database service:
Easy
The service's features make it easy to automate database administrative
tasks. Day-to-day tasks become simple for the database administrator
because they are fully automated, from instance provisioning through to
delivering the newest features.
Secure
In the secure category, Oracle provides data protection and advanced
security. MySQL database service protects your data against external
attacks and internal malicious users with many advanced security features.
Enterprise Ready
In this category, MySQL database service is the only public cloud service
built on MySQL Enterprise Edition. Advanced features are available, such
as auditing, thread pooling, and data masking, along with management tools
to help you with monitoring and development. You also get 24/7 support at
no additional cost from the team that actually built and developed MySQL.
MySQL database service is also enterprise-ready because it works well with
Oracle products, such as Oracle Data Integrator and Oracle Analytics
Cloud. You can also use Docker or the Kubernetes service for DevOps
operations.
MySQL database service is 100% compatible with on-premises MySQL,
making it easy to migrate the applications to the cloud.
Ease of Use
Today, database administrators are overwhelmed with performing the same
administration tasks across the many databases they are responsible for.
This does not leave them much time to focus on innovation or to address
the demands of their business. The ease-of-use features of the MySQL
database service automate all of these time-consuming tasks, so database
administrators can improve productivity and focus on higher-value work.
With this feature, your developers can quickly get all the latest features
of the MySQL database service directly from the MySQL team.
Security
In the security category, the MySQL database service utilizes encryption
to keep the data private. The MySQL database service uses block volumes
for all data storage by default, which makes it resistant to failures.
MySQL database service protects the data against external attacks and
internal malicious users with a range of advanced security features.
MySQL database service runs on Generation 2 Oracle Cloud Infrastructure
(OCI Gen2). OCI Gen2 is the second generation Infrastructure as a Service
(IaaS) offering, architected on security first design principles. It is designed
based on the principles that ensure the security of your data running in the
environment. OCI Gen2 provides maximum isolation and protection: Oracle
cannot view the customer's data, and customers cannot access the
Oracle-controlled cloud control computers. The OCI Gen2 architecture also
gives you superior performance on the compute objects, and you can run
Oracle software, third-party software, or other open-source software
without modification.

Figure 7-27: Security

Note: Oracle realizes that the data is the organization’s most valuable asset.
The use of advanced security features can also help you meet the industry,
customer, and company’s regulatory compliance requirements. For
example, the General Data Protection Regulation (GDPR) addresses the
transfer of personal data, the Payment Card Industry (PCI) standard
mandates that credit card companies help ensure credit card transaction
security, and a federal law, the Health Insurance Portability and
Accountability Act (HIPAA), requires standards to protect sensitive
patient health information from being disclosed without the patient's
consent or knowledge.
Fully Managed
MySQL database service is a fully managed database service.
User Responsibility
The user's responsibility in the MySQL database service includes logical
schema modeling (design, objects, columns, data structure, query design,
and optimization). The user is also responsible for defining data access
and retention policies, such as the backup retention policy that defines
where backups should reside and for how long they should be archived.
Oracle Responsibility
The MySQL team is responsible for providing automation for the OS
installation, database and operating system patching (including security
patches), and performing backup and recovery. It also monitors and logs
the information about everything running in the environment, provides
security with the advanced options available in MySQL Enterprise Edition,
and protects the data center hosting the cloud service.
An example is shown in Figure 7-28: a user has a tenancy and manages the
policies to access the service and create the cloud resources, such as the
compute instance. You can find the group of users who share a common
access with specific privileges to the software instance. The MySQL team has
internal tenancies access that is not visible to the users. This is where the
actual database resides. Users can access the MySQL database service with
the web console Command Line Interface (CLI) or SDK to interact with the
internal control plane. This addresses the MySQL instance and lifecycles,
such as provisioning a new MySQL service instance or protecting the
database by backing it up. MySQL database instances are grouped in the
database system; therefore, an endpoint will show your tenancy when you
create a database system. When you use MySQL protocol to connect, you
will connect to the endpoint.
Figure 7-28: ADB – Fully Managed Service

In-Memory, Query Processing Engine


MySQL is a very popular open-source database that is used to store
enterprise data. The service is optimized for OLTP; however, it can also
perform analytics processing (OLAP).
To run the analytics applications, MySQL database service is used with
HeatWave. MySQL HeatWave is designed to run your analytics on the data
you store in the Oracle MySQL database. It is built on an innovative in-
memory architected analytic engine for scalability and performance and
optimized to run on OCI.
HeatWave Architecture
HeatWave is a distributed, scalable, share-nothing in-memory columnar
query processing engine designed to fast execute analytic queries. It enables
you to add a MySQL cluster to your MySQL database system.
A HeatWave cluster consists of a MySQL database system node and two or
more HeatWave nodes. The MySQL database system node includes a plug-in
that is responsible for cluster management, loading data into the HeatWave
cluster, query scheduling, and returning query results to the MySQL DB
System. HeatWave nodes store data in memory and process analytics queries;
each HeatWave node consists of an instance of HeatWave. The number of
HeatWave nodes required depends on the size and amount of your data and
the amount of compression achieved when loading the data into the
HeatWave cluster. The cluster can support up to 24 nodes.
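As a rough illustration of the sizing idea (the real estimate comes from the service; both the per-node memory figure and the compression ratio below are made-up assumptions):

```python
import math

MAX_NODES = 24             # a HeatWave cluster supports up to 24 nodes
NODE_MEMORY_GB = 512       # assumed per-node in-memory capacity (hypothetical)

def estimate_nodes(data_gb: float, compression_ratio: float = 2.0) -> int:
    """Estimate HeatWave nodes needed: the compressed data must fit in
    cluster memory. Capacity and compression figures are assumptions."""
    in_memory_gb = data_gb / compression_ratio
    nodes = max(2, math.ceil(in_memory_gb / NODE_MEMORY_GB))
    if nodes > MAX_NODES:
        raise ValueError("data set too large for a single HeatWave cluster")
    return nodes

# 4,000 GB at an assumed 2x compression needs ~2,000 GB in memory.
print(estimate_nodes(4000))  # -> 4
```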
Figure 7-29: HeatWave Architecture

NoSQL on OCI
This section describes how database transactions are processed in the
NoSQL database service for consistency and durability, how to achieve high
availability by implementing fault containment zones, how to use
multi-region tables to collaborate on similar data across regions, the
security features available, and how to allocate storage in a NoSQL
environment.
Configurable ACID
The database transactions in NoSQL are often described in terms of ACID
properties. ACID stands for Atomicity, Consistency, Isolation, and
Durability. The ACID principles ensure database transactions are processed
reliably.
Atomicity
Either all or none of the tasks in a transaction are completed. There is no such
thing as a partial transaction. For example, if a transaction begins updating
100 rows but fails after 20 updates, the database will roll back the
modifications to these 20 rows.
Consistency
This refers to maintaining the data integrity constraints. A consistent
transaction does not violate the integrity constraints that the database
rules place on the data.
Isolation
Isolation is considered serializable, meaning that each transaction is in a
distinct order without any transaction occurring in tandem. Any reads and
writes performed on the database will not be impacted by the other reads or
writes of separate transactions that are occurring on the same database.
Therefore, no transaction will affect others.
Durability
It ensures that changes made to the database are successfully committed and
will survive permanently, even in the case of system failure. Therefore, the
written data should never be lost.
When you consider the principles of ACID, you will determine how you
would configure the NoSQL database.
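Atomicity, for example, can be demonstrated with any transactional store. A minimal sketch using SQLite (standing in here for the NoSQL engine) shows a multi-row update that fails partway through rolling back completely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 100)])
conn.commit()

try:
    with conn:  # one transaction: both steps commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# Atomicity: the partial debit was rolled back, so balances are unchanged.
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # -> 100
```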
Extreme Availability Through Fault Containment Zones
A way to achieve high availability or extreme availability is by implementing
or addressing the fault containment zones. A zone is a physical location that
supports high-capacity network connectivity between the storage nodes. Each
zone has the same level of physical separation from other zones, such as its
power, communication, connection, etc. When configuring your store, it is
strongly recommended that you configure your store across multiple zones.
Having multiple zones provides fault isolation and increases data
availability if a single zone encounters a failure.
MR Tables with Cross-Region Service
Oracle NoSQL Database provides a multi-region architecture that enables
you to create tables in multiple key-value store clusters and maintain
consistent data across these clusters.
Suppose you wish to collaborate on similar data across regions. In that
case, you need to create tables that span multiple regions and keep them
updated with the inputs of all the participating regions: a multi-region
table (MR table), which is backed by a cross-region service. It is a
global logical table that is stored and maintained in different regions, a
read-and-write-anywhere table that lives in multiple regions. All
multi-region tables defined in those regions are synchronized
using a NoSQL stream. Each region must run a cross-region service that can
pull the data from the subscribed tables in the remote regions.
Security
There are many features available to protect your data.
AES 256
This feature protects your data at rest as well as in motion. Data at rest
is the data stored in the cloud environment; data in motion is protected
while it is being transmitted or processed on the network.
Roles, privilege, and groups
Oracle grants access to resources such as your tables and namespaces in
NoSQL through roles and privileges.
Configurable password rules
You can configure your password rules containing a specific number of
characters, uppercase, lowercase, minimum length, and maximum length.
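A configurable rule set of this kind can be sketched as a simple validator. The specific rules below are illustrative examples, not Oracle's defaults:

```python
import re

def check_password(pw: str, min_len: int = 8, max_len: int = 30) -> bool:
    """Validate a password against configurable rules: length bounds plus
    at least one uppercase and one lowercase letter (example rules only)."""
    return (
        min_len <= len(pw) <= max_len
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"[a-z]", pw) is not None
    )

print(check_password("Str0ngPass"))  # -> True
print(check_password("short"))       # -> False
```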
Pluggable Authentication
You can take help and advantage of pluggable authentication, including
Active Directory or Kerberos.
Auditing
The ability to know what is happening can be done through auditing. The
auditing feature will provide you with many tools and resources to easily
view the NoSQL environment's activities.
Easy Online Elastic Expansion and Contraction
As your business grows, more storage is required. Oracle provides you with
an easy way to expand, extend, and scale your storage. You can plan and
deploy more zones and then create a pool of your storage. After that, you will
add capacity to the new topology.
For example, you can clone the topology or redistribute the topology and then
deploy the new topology.
Figure 7-30: Expansion and Contraction

HTTP Access
To provide an extra layer of security that controls who can have access
and who cannot, Oracle gives you a way to have a single open port. An
example is shown in Figure 7-31. The HTTPS port on the proxy machine is
used by the proxy to accept secure connections from HTTPS requests. A
single port is left open by the firewall rules to allow connections
between the proxy and the driver. As shown in the figure, multiple types
of drivers are available, including the Java, Python, and Node.js drivers.

Figure 7-31: HTTP Access

Lab 7-02: Create an Autonomous Data Warehouse


Introduction
Oracle Autonomous Data Warehouse is a cloud data warehouse service that
takes care of all the complexities of running a data warehouse, including
data security and data-driven application development. It automates data
warehouse provisioning, configuration, securing, tuning, scaling, and
backup. It comes with tools for self-service data loading, data
transformations, business models, and automatic insights, plus built-in
converged database capabilities that make it easier to query multiple data
types and do machine learning analysis.
Problem
An organization is working on Oracle Cloud Infrastructure (OCI) and wants
to use such a service that eliminates the requirement of manual administrative
tasks, including backup, configuration, and patching. How can the
organization do that?
Solution
By using Oracle Autonomous Data Warehouse (ADW) service, the
organization can implement a solution to avoid manual administrative tasks.
Step 1: Navigate to Autonomous Data Warehouse
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.
2. The Oracle Cloud Infrastructure Console dashboard will appear.
3. To launch any free resource, go to the navigation menu.
4. Go to Oracle Databases.
5. Click on Autonomous Data Warehouse under Autonomous Database.
Step 2: Configure ADW
6. Verify your Home Region and compartment.
7. Click on Create Autonomous Database.
8. Provide the basic information, including the Compartment, Display
name, and Database name.
9. Scroll down. Inside Create Autonomous Database, select Data
Warehouse workload type.
10. Select Shared Infrastructure as deployment type.
11. Configure the database with the Always Free option.
12. Select the latest database version.
13. Create Administrator credentials.
14. Scroll down; select License Included license type.
15. Provide your contact email.
16. Click on Create Autonomous Database.
17. The database will take some time for deployment.
18. After deployment of the database, verify the database information.
19. In between, you will receive an email.
Step 3: Connect to and Access ADW using SQL
20. Navigate to the recently created Autonomous Data Warehouse.
21. From the main page, click on Tools.
22. Select the Database Actions tool and click on Open Database
Actions.
23. Log in with the same database credential you had created during the
configuration of Autonomous Data Warehouse.
24. After sign-in, click on SQL.
Note: It will open the worksheet in which you can enter details to fetch data
from your ADW.
25. Now, go to the navigation menu and click on Database Users.

26. Under the Administration section of the current user, click on Edit to
Enable REST.
27. Click on SQL to open the worksheet and create a table with rows and
columns.
28. Execute the SQL statements and verify the result from Logs.
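The console steps above correspond to a single create-database API call. A sketch of the request payload follows; the field names are believed to match the OCI REST API for creating an Autonomous Database, and the OCID and password are placeholders, so treat this as an illustration rather than a definitive reference:

```python
def build_adw_request(compartment_id: str, db_name: str, admin_password: str) -> dict:
    """Build an example payload for creating an Always Free Autonomous Data
    Warehouse. Field names are assumed from the OCI REST API; values are
    placeholders for illustration."""
    return {
        "compartmentId": compartment_id,
        "dbName": db_name,
        "displayName": db_name,
        "dbWorkload": "DW",            # Data Warehouse workload type (step 9)
        "isFreeTier": True,            # Always Free configuration (step 11)
        "cpuCoreCount": 1,
        "dataStorageSizeInTBs": 1,
        "adminPassword": admin_password,
        "licenseModel": "LICENSE_INCLUDED",  # step 14
    }

payload = build_adw_request("ocid1.compartment.oc1..example", "ADWDEMO", "P#ssw0rd12##")
print(payload["dbWorkload"])  # -> DW
```

The same payload shape could be submitted through the OCI SDK or CLI instead of the console; credentials and a real compartment OCID would be required.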

Mind Map
Figure 7-32: Mind Map

Practice Questions
1. Which of the following database options is used in HeatWave?
A. Autonomous Data Warehouse
B. Oracle RAC
C. NoSQL Database
D. MySQL Database System
2. Which of the following services eliminates the step of manual
administrative tasks?
A. Autonomous Data Warehouse
B. VM DB System
C. Oracle RAC
D. Oracle Database
3. Which of the following is used to describe the database transaction in the
MySQL database?
A. Automatic, Control, Independent, Dedicated
B. Alternative, Constant, Irreversible, Dual
C. Atomicity, Consistency, Isolation, Durability
D. None of the Above
4. Which of the following services is fully managed and provides a pre-
configured database environment?
A. Oracle RAC
B. Autonomous Database
C. VM DB System
D. Bare Metal DB system
5. The Oracle Database allows you to reduce operational costs by up to _________.
A. 90%
B. 80%
C. 84%
D. 76%
6. Which of the following provisions can be used to ensure high availability
when using NoSQL in OCI?
A. Nearest Region
B. Fault containment zone
C. Availability Domain
D. None of the above
7. Which of the following database services is used to store enterprise data?
A. Bare Metal DB
B. VM DB
C. NoSQL
D. MySQL
8. How many deployment options are there for Autonomous databases?
A. Four
B. Five
C. Three
D. Two
9. Which of the following feature of the Oracle database makes compliance
easier and faster?
A. Security
B. Cost
C. Flexibility
D. Scalability
10. Which of the following provides a way of separating transaction
processes on the same database?
A. Consistency
B. Durability
C. Automation
D. Isolation
11. Which of the following database can be utilized for business
objectives other than disaster recovery?
A. Primary Database
B. Logical Standby Database
C. Physical Standby Database
D. All of the Above
12. Which of the following allows the database customers to use their
existing license in OCI?
A. BYOL
B. BYOSL
C. PL
D. None of the above
13. The databases created by Autonomous database allow you to handle
____________.
A. Backup
B. Patching
C. Upgrade
D. Tuning
E. All of the Above
14. Which of the following database services provides customers with
optimized capabilities for enterprise-level databases and their associated
workloads?
A. VM DB System
B. Exadata DB System
C. Oracle RAC
D. Bare Metal DB System
15. Which of the following is necessary to use the dedicated fleet
administrator policy?
A. Physical Database
B. Azure User
C. OCI User
D. Events and Alarms Setup
Chapter 08: Design for Hybrid Cloud Architecture

Introduction
This chapter focuses on designing hybrid cloud architecture using the Oracle
Cloud VMware Solution (OCVS). OCVS is an integrated solution developed
from a partnership between Oracle and VMware. It enables you to run a
VMware software-defined data center natively hosted under Oracle Cloud
Infrastructure (OCI). Oracle and VMware have partnered to develop the
solution and provide technical support at different tiers.
There is also a wide range of capabilities, such as native integration
with Oracle cloud services, covering use cases like databases and
applications deployed on top of the VMware SDDC to take advantage of its
benefits.
Software-Defined Data Center
Primarily, three building blocks form a physical data center.
Compute – includes server
Network – used for switching, routing security, etc.
Storage – used for storing data
In most cases, the compute servers will be running a hypervisor. This
means that you can run several virtual machines, overcoming the
limitations of a physical server.
SDDC is a concept that extends this virtualization to all of the resources
in a data center, whether storage arrays or networks. Everything is fully
software-defined, creating an abstraction layer of resources that forms a
platform of multiple virtual data centers delivered as a service.
To better control resource allocation and consumption, you can share the
SDDC between application workloads. There is no longer a one-to-one
dependency on a physical resource. This grouping therefore allows you to
over-subscribe, which means you can commit a resource several times over.
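Over-subscription can be illustrated with a simple ratio: the virtual resources allocated across workloads divided by the physical resources backing them (the numbers below are invented for illustration):

```python
def oversubscription_ratio(allocated_vcpus: int, physical_cores: int) -> float:
    """vCPU-to-physical-core over-subscription ratio; a value above 1.0
    means the virtual allocation exceeds physical capacity (toy figures)."""
    return allocated_vcpus / physical_cores

# 16 VMs with 4 vCPUs each on a 32-core host: 2x over-subscribed.
print(oversubscription_ratio(16 * 4, 32))  # -> 2.0
```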
SDDC capitalizes on agility, elasticity, and scalability. One of the top
advantages of being software-defined is automation. For example, it could be
automating some of the key functions like creating a compute resource or
operational management, including monitoring the usage of a resource or
taking appropriate action with adding or deleting resources, all in an
automated way.
SDDC provides a high degree of flexibility because the workloads operate
independently; you can deliver an SDDC on a flexible mix of private and
hybrid clouds like OCVS.
Since there is no interdependency, the environment is portable and can
seamlessly integrate new applications, which makes SDDC a modernized
platform.
VMware Cloud Foundation (VCF) is an industry-leading product from VMware
which incorporates compute, storage, and networking while delivering a
highly-reliable, scalable SDDC platform.
OCVS Overview
Oracle Cloud VMware Solution comprises the core components of VMware
Cloud Foundation: vSphere, NSX, and vSAN. With this integration, you can
achieve many features, such as optimizing east-west traffic, load
balancing your workloads, or storage services like RAID protection,
deduplication, and compression.
VMware Software
All the software products together provide a proven, certified
architecture. The product stack includes vSphere Enterprise Plus. The
versions are interoperable between products, and you can choose to
deploy the latest 7.0 Update 2, or the 6.5 or 6.7 versions
NSX-T Enterprise Plus version 3.1.2 and vSAN are also part of the
deployment process. vSAN is not a separate appliance, and therefore the
vSAN version is tied to the version of vSphere deployed
HCX is a key product that brings the service into an actual hybrid cloud
model. There are two license editions for HCX, Advanced and Enterprise
HCX Advanced is a free edition included when enabling the service
HCX Enterprise is an upgrade
Oracle Cloud Infrastructure
The bare metal servers used are DenseIO2.52 shapes: high-performance
compute configurations. You need to choose a minimum of three nodes for
production purposes. This cluster gives you 156 OCPUs, approximately 2 TB
of memory, and 153 TB of NVMe SSD capacity for vSAN datastores. You
always have the option to add more nodes to the cluster.
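The cluster totals above can be derived from the per-node figures. The sketch below is illustrative only, assuming the published DenseIO2.52 per-host specifications of 52 OCPUs, 768 GB of memory, and 51.2 TB of raw NVMe:

```python
# Illustrative sketch: derive OCVS cluster totals from assumed per-node
# DenseIO2.52 specs (52 OCPUs, 768 GB RAM, 51.2 TB raw NVMe per host).

NODE_OCPUS = 52        # OCPUs per bare metal host (assumed spec)
NODE_MEMORY_GB = 768   # memory per host, in GB (assumed spec)
NODE_NVME_TB = 51.2    # raw NVMe capacity per host, in TB (assumed spec)

def cluster_capacity(nodes: int) -> dict:
    """Aggregate raw capacity for an ESXi cluster of `nodes` hosts."""
    if not 3 <= nodes <= 64:
        raise ValueError("OCVS production clusters range from 3 to 64 hosts")
    return {
        "ocpus": nodes * NODE_OCPUS,
        "memory_gb": nodes * NODE_MEMORY_GB,
        "nvme_tb": round(nodes * NODE_NVME_TB, 1),
    }

print(cluster_capacity(3))
# -> {'ocpus': 156, 'memory_gb': 2304, 'nvme_tb': 153.6}
```

Three nodes yield the 156 OCPUs and roughly 2 TB of memory quoted above; adding nodes scales each figure linearly.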

OCVS Product Overview


vSphere: The Hypervisor
vSphere is a distributed software system with features enabled by Hypervisor
ESXi and a management server, vCenter, working together.
vSphere abstracts a virtual machine from the hardware by presenting a
complete x86 platform to the virtual machine guest operating system.

Figure 8-01: vSphere

vSphere Cluster
A vSphere cluster is a group of ESXi nodes that partitions and aggregates the
compute resources in a distributed manner.
For example, the distributed virtual switch is logically created using all the
network adapters and uplinks from the ESXi host to maintain a consistent
network configuration.
The VMs deployed in a cluster share CPU, memory, data store, and network
resources. However, at the same time, vSphere has some intelligent resource
management techniques to reclaim the resources and provide the in-demand
VMs.
There are two primary features of the vSphere cluster.
High Availability (HA) – vSphere provides high availability for virtual
machines within the cluster. If a host within the cluster fails, the VM
residing on that host is restarted on another host in the same cluster.
Distributed Resource Scheduler (DRS) – vSphere DRS is a
distributed resource scheduling mechanism that spreads the virtual
machine workloads across vSphere hosts and monitors the available
resources.
You can set VMs to live-migrate manually or automatically to other hosts
with less resource consumption, based on the automation level.
vMotion - vMotion is the live migration of a running virtual machine from
one physical server to another without downtime. The virtual machine retains
its network identity and connections.
Storage vMotion – With storage vMotion, you can migrate a virtual machine
and disk files from one datastore to another while the virtual machine is
running.
In Oracle Cloud VMware Solutions, the minimum number of hosts required
is 3, and the maximum is 64 for all your production purposes.
vSphere 7.0 Update 2 and newer versions introduce a new feature
called the vSphere Cluster Service (vCLS). The vCLS feature is enabled by
default and runs on all vSphere clusters. vCLS ensures that if vCenter
becomes unavailable, cluster services like DRS and HA remain available
to maintain the resources and health of the workloads running in those
clusters.
vCLS uses agent virtual machines to maintain the cluster services' health. The
vCLS VMs are created when you provision the SDDC stack.
Three vCLS VMs are deployed and required to run on each vSphere
cluster. vSphere DRS in a DRS-enabled cluster depends on the
availability of at least one vCLS VM. Unlike your application VMs, you
should treat vCLS VMs as system VMs. You must not perform any
operations on these VMs unless it is explicitly listed as a supported operation.
vSAN: Software-Defined Storage
vSAN is the hyper-converged storage part of the solution. The term hyper-
converged means having high-performance NVMe or “all-flash” based drives
attached directly to the bare metal compute, which becomes the primary
storage for your VMs.
With having a software-defined storage approach, Oracle can pool these
direct-attached devices across the vSphere cluster to create a
distributed/shared datastore for the VMs.
vSAN stores VMs as objects, and vSAN is the object store for those objects
and their components.
Disk Groups
vSAN uses a construct called disk groups and manages the devices into two
different tiers: the capacity and the cache.

Figure 8-02: vSAN

Capacity tier – The capacity tier is used as the persistent storage for the
VMs and also serves read requests.
Cache tier – The cache tier in Figure 8-02 has “all-flash” drives and is
dedicated to write buffering.
The write buffer absorbs the highest rate of write operations directly in the
cache tier. The data is then destaged to the capacity tier in an efficient
stream.
EXAM TIP: The two-tier (capacity and cache) design gives great
performance to the VMs while ensuring that the device can have data
written in the most efficient way possible.
vSAN Fault Domain
vSAN implements a concept of fault domain. It is different from the OCI
fault domain. A vSAN fault domain is about grouping multiple hosts into a
logical boundary domain. It makes sure that at least two replica copies of the
storage objects are distributed across the domains.
Storage Policies
vSAN storage policies are used to determine the high availability of
individual VMs. You can configure different policies to determine the
number of host and device failures that a VM can tolerate. Failures to
Tolerate (FTT) equal to 1 means that you can accommodate one node failure
within the cluster, where the VMs can sustain and still be functional.
FTM stands for Failure Tolerance Method; when set to RAID-1, it
always maintains full mirrored replicas of an object.
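To make the FTT arithmetic concrete, here is a minimal sketch (not VMware tooling) of RAID-1 capacity consumption, where FTT = n means each object keeps n + 1 full replicas:

```python
# Hedged sketch of vSAN RAID-1 (mirroring) capacity math: with
# Failures to Tolerate (FTT) = n, each object keeps n + 1 full replicas,
# so raw capacity consumed is usable capacity times (FTT + 1).

def raw_consumed_gb(vm_usable_gb: float, ftt: int) -> float:
    """Raw datastore capacity a VM consumes under RAID-1 with a given FTT."""
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    replicas = ftt + 1            # FTT=1 -> 2 mirrored copies
    return float(vm_usable_gb * replicas)

print(raw_consumed_gb(100, ftt=1))   # -> 200.0
print(raw_consumed_gb(100, ftt=2))   # -> 300.0
```

A 100 GB VM with FTT=1 therefore consumes 200 GB of raw vSAN capacity, which is worth factoring into cluster sizing.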
vSAN Witness Node
A witness node is a dedicated host used for monitoring the availability of an
object. When you have at least two replicas of an object and a network
partition occurs, the same data object of an application could become active
in both vSAN fault domains, which can be disastrous for the application.
Therefore, a vSAN witness node is configured to avoid this split-brain
condition.
A witness node is not meant for deploying VMs; it stores only metadata,
acting exclusively as a tiebreaker that decides which components are valid
and determines whether an actual failure has occurred.
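The tiebreaker role can be illustrated with simple quorum logic. This is an illustrative model, not vSAN's actual implementation: with two data replicas plus one witness component, an object stays accessible only while more than half its component votes are reachable, so the witness vote prevents a split-brain between fault domains.

```python
# Illustrative quorum sketch (not vSAN's real code): an object with two
# replicas and one witness has three votes; it remains accessible only
# when strictly more than 50% of those votes are reachable.

def object_accessible(components_up: int, components_total: int = 3) -> bool:
    """Quorum check: strictly more than half the votes must be available."""
    return components_up * 2 > components_total

# One fault domain (one replica) lost: 2 of 3 votes remain -> accessible.
print(object_accessible(2))   # -> True
# A partition isolates a single replica: 1 of 3 votes -> not accessible.
print(object_accessible(1))   # -> False
```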
NSX-T: Software-Defined Networking and Security
NSX-T is the software-defined networking and security product part of
OCVS. It has the following features.
Heterogeneous
NSX-T is heterogeneous, which means NSX-T can be deployed not only for
vSphere but also for multi-cloud environments. It can extend functionality to
multiple hypervisors, bare metal servers, containers, and cloud-native
application frameworks.
Security
Some of the standard security services include a firewall on the edge
appliance, load balancing of your workload VMs, distributed logical
routing and switching, NAT for external inbound and outbound access, and
VPN tunnels for connecting between environments.
Automation
There are different REST APIs with JSON support for scripting operational
tasks. It is also compatible with Terraform and OpenStack Heat orchestration
for provisioning purposes.
With all these capabilities and a software-defined approach, NSX-T will feel
very familiar to users of OCI’s Virtual Cloud Network (VCN).
Components of NSX-T
NSX-T works by implementing three integrated planes.
Management
Control
Data
You can implement these three planes as processes, modules, and agents
residing on three different types of nodes.
Manager
Controller
Transport node
NSX Manager – This node hosts the API services. It also provides a
graphical user interface and REST APIs for creating, configuring, and
monitoring the NSX-T Data Center components.
NSX Controller – This node hosts the central control plane cluster service.
NSX Transport Node – The transport nodes are responsible for performing
stateless forwarding of packets based on the tables populated by the control
plane.
Transport Zone
A transport zone is a container that defines the potential reach of transport
nodes. These nodes are classified into the host and edge nodes.
Host Transport Node – The host transport nodes are ESXi hosts
participating within the zone.
Edge Transport Node – These nodes run the control plane daemons with
forwarding engines and implement the NSX-T data plane.
Gateways
There are primarily two gateways that you configure for your virtual machine
communication.
Tier 0 – The tier-0 gateway processes the traffic between the logical and
physical network, also known as North/South traffic.
Tier 1 – The tier-1 gateway is for the East/West traffic: the traffic between
VMs within the same cloud infrastructure.
To enable access between VMs and the outside world, you can configure an
internal and external Border Gateway Protocol (BGP) connection between a
tier-0 gateway and a router in the physical infrastructure.

EXAM TIP: When configuring BGP, you must configure a local and
remote Autonomous System (AS) number for your tier-0 gateway.
The Open Shortest Path First (OSPF) is an interior gateway protocol that you
can configure on the tier-0 gateway; it operates within a single autonomous
system.
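When supplying the local and remote AS numbers for a tier-0 BGP peering, it helps to know which ranges are private. The helper below is a hypothetical sketch (not part of NSX-T or OCI tooling) that validates an ASN and flags the private ranges defined in RFC 6996:

```python
# Hedged helper: check that a BGP Autonomous System number is valid and
# classify it against the private ranges from RFC 6996
# (64512-65534 for 16-bit ASNs, 4200000000-4294967294 for 32-bit ASNs).

def classify_asn(asn: int) -> str:
    """Return 'private' or 'public' for a valid ASN, else raise ValueError."""
    if not 0 < asn < 2**32:
        raise ValueError(f"{asn} is not a valid ASN")
    if 64512 <= asn <= 65534 or 4200000000 <= asn <= 4294967294:
        return "private"
    return "public"

print(classify_asn(65001))   # -> private (common choice for a lab tier-0)
print(classify_asn(3356))    # -> public
```

A private ASN for the tier-0 side avoids clashing with globally assigned numbers on the on-premises router.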
Segments
Segments are defined as virtual layer 2 domains. There are two types of
segments in NSX-T.
VLAN-backed Segments – VLAN-backed segment is a layer 2 broadcast
domain implemented as a traditional VLAN in the physical infrastructure.
The traffic between the two VMs on two different hosts but attached to the
same VLAN-backed segment is carried over a VLAN between the two hosts.
Overlay-backed Segments – In this segment, the traffic between two VMs
on two different hosts but attached to the same overlay segment has their
layer two traffic carried by a tunnel between the host.
Geneve Encapsulation
Geneve is a network encapsulation protocol. It works by creating a layer 2
logical network encapsulated in UDP packets. It provides the overlay
capability by creating an isolated multitenant broadcast domain across the
data center fabrics.
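To make the overlay idea concrete, the sketch below packs and unpacks the fixed 8-byte Geneve base header from RFC 8926: the VM's layer 2 frame travels inside a UDP packet (destination port 6081) whose header carries a 24-bit Virtual Network Identifier (VNI) that isolates each tenant broadcast domain. This is an illustrative parser, not NSX-T code:

```python
import struct

# Minimal sketch of a Geneve base header (RFC 8926). Layout:
#   byte 0: version (2 bits) + option length (6 bits)
#   byte 1: O/C flags + reserved bits
#   bytes 2-3: protocol type (0x6558 = encapsulated Ethernet frame)
#   bytes 4-7: 24-bit VNI followed by 8 reserved bits

GENEVE_UDP_PORT = 6081   # well-known Geneve UDP destination port
ETH_BRIDGING = 0x6558    # protocol type for a bridged Ethernet payload

def pack_geneve(vni: int) -> bytes:
    """Build the fixed 8-byte Geneve header for a given VNI (no options)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    ver_optlen = 0             # version 0, no variable-length options
    flags = 0                  # O and C bits clear
    vni_rsvd = vni << 8        # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!BBHI", ver_optlen, flags, ETH_BRIDGING, vni_rsvd)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI from a packed Geneve base header."""
    return struct.unpack("!BBHI", header)[3] >> 8

hdr = pack_geneve(5001)
print(len(hdr), unpack_vni(hdr))   # -> 8 5001
```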
HCX: Hybrid Cloud Extension
Hybrid Cloud Extension is an application mobility platform that can simplify
the migration and rebalancing of application workloads and help you
achieve business continuity between on-premises and Oracle Cloud
VMware Solution.
HCX Advanced
HCX advanced edition can be enabled as a part of OCVS deployment, and it
has a wide range of features.
Network extension with hybrid connect is the top feature of HCX. It
allows layer two networks such as VLAN in your data center to extend
to the OCVS environment
Cross-cloud connectivity: you can do site pairing and create a secure
channel between the environments
WAN optimization is used to optimize your network traffic with
deduplication, compression, and line conditioning
If you run a legacy vSphere version, you can use HCX to migrate your
workloads to a newer vSphere version
Cloud-to-cloud migration. There are different migration types: online,
live, offline, etc.
HCX also supports disaster recovery feature
HCX Enterprise
HCX Enterprise is an upgrade option with additional features. Some of the
features are:
Migration from the non-vSphere-based environment to vSphere
Large-scale bulked migration is supported through this edition
You can extend the disaster recovery feature with VMware Site
Recovery Manager (SRM) product, which will help you orchestrate the
DR workflows
Traffic engineering allows you to optimize the resiliency of your
network paths and use them more efficiently
Mobility groups are about structuring your migration waves based on
your applications’ functionalities and networks, without service
disruption
Mobility Optimized Network (MON) ensures the traffic between
environments uses optimal paths while the flow remains symmetric

Use Cases, Key Benefits, and Values


Use Cases
Disaster Recovery (DR)
Disaster Recovery is one of the key use cases of Oracle Cloud VMware
Solution (OCVS). It is sometimes referred to as business continuity. There
are multiple ways a customer would look at the DR strategy, either by adding
a new DR site or replacing an existing one.
Cloud is often the easiest and most economical option for net new DR sites as
it does not require any upfront capital expenditure.
With a Disaster Recovery site, real-time application live migration is less
likely to be considered. Instead, creating backups of existing virtual
machines and placing them in the cloud might be the most efficient approach.
If you run a vSphere environment on-premises, you can continue using the
same tools to achieve DR. For example, you can use tools like VMware Site
Recovery Manager (SRM) to schedule snapshot copies out to the DR site.
With SRM, you can also control some aspects of planned and unplanned
failovers at the primary site. The speed and quantity of data transferred in this
scenario are significantly less than in a full-migration scenario. Therefore,
customers may choose to use FastConnect or IPSec VPN to connect between
the production environment and the DR site. This connectivity makes
workloads mobile and portable between both sites.
Datacenter Expansion and Hybrid Deployment
One of the challenges in this approach is the lead time to prepare, or other
restrictions in place, to secure additional fixed assets. In most cases, a
customer might hit a scenario of running out of data center capacity, leading
to extending their data center. That is when you decide to eventually move
into the cloud.
Capacity expansion is a long-term deployment that links on-premises and
Oracle Cloud, which enables the customer to gain on-demand capacity. Once
the environments are interconnected, you can manage both environments
from a single pane of glass. It would help customers like virtual desktop
providers, who could potentially burst their capacity needs and spin up their
environment in a matter of time.
Datacenter Expansion with Migration
The data center migration scenario is often precipitated by a customer’s need
to retire an on-premises data center. It will also require interconnecting the
on-premises and the Oracle cloud environment until the migration is done.
There are different ways you can achieve this migration. You could simply
do a lift and shift, which means there is no need to refactor the VMs because
the source and destination are vSphere, the same platform. FastConnect is
required to facilitate the connectivity and movement of data, given that the
quantity of data is in several TB. The customer will evaluate the migration
requirement to determine how and when to move the resources to the cloud.
Once all the virtual machines and data have been migrated to the cloud, the
FastConnect circuit at the customer on-premises can be decommissioned. In
this way, you can eliminate the source data center and complete the
infrastructure refresh quickly. You can choose live migration for business-
critical workloads; for less-critical workloads, cold migration or backup and
restore options are also considered.
Key Benefits
There are some key benefits of Oracle Cloud VMware Solutions.
Dedicated with full control.
You get full administrative control to do self-service, and it also includes root
access with full control over your environment. After provisioning OCVS,
the customers own the root credentials, which gives them full operational
control of the environment, including upgrading and patching the VMware
software.
Keep the same VMware tool.
OCVS gives a very familiar experience compared to VMware on-premises
environments. You can leverage the tool using your existing skill sets, such
as vSphere, vSAN, NSX, or any third-party ISV solution.
Seamless Migration
Since the underlying infrastructure platform is similar and uses a single
VMware specification, you can migrate applications as-is and avoid
refactoring or re-architecting them.
Access to Oracle Cloud Service
By bringing your workloads’ footprint into Oracle Cloud, you gain access
to Oracle’s broad portfolio of cloud services, such as Autonomous Database,
Database as a Service (DBaaS), and Exadata.
Global Availability
OCVS is available in every OCI public cloud region. Therefore, you can
choose to deploy on any commercial OCI region.
Core Aspect and Values
OCVS is a pay-as-you-go model, and there is a subscription period. The
minimum subscription is one month.
There are no separate charges for the infrastructure and the VMware software
licenses; both are billed through a single SKU. The licensing policy of
Oracle software remains the same, and it is subject to the current server or
hardware partitioning policy; this means Oracle software is not included in
the OCVS SKU, and any additional software like the Oracle Database should
be licensed separately.
VMware core components like vSphere, NSX, vSAN, and HCX are licensed
as the single host SKU.
Value Proposition
There are some core values and value propositions of Oracle Cloud VMware
Solution.
Value Proposition
Manageability – Customer-managed/controlled
Support – Integrated Oracle + VMware support
Security – Customer-managed security without any Oracle access
Updates, patches, and upgrades – Customers decide when to upgrade
Deployment – Deploy in customer VCN
Table 8-01: Value Proposition

SDDC Deployment
Oracle, in collaboration with VMware, created a VMware-certified Software-
Defined Data Center (SDDC) implementation for usage within Oracle Cloud
Infrastructure. In this SDDC implementation, dubbed the Oracle Cloud
VMware Solution, Oracle Cloud Infrastructure hosts a highly available
VMware SDDC. It also enables you to migrate all of your on-premises
VMware SDDC workloads to Oracle Cloud VMware Solution smoothly.
SDDC Provisioning Flow
You need to know a few prerequisites before you start the SDDC
provisioning.
Compartment
Virtual Cloud Network
The first step is to provide an SDDC name for your software-defined data
center and choose the compartment in which you want to provision the
SDDC. You have the option to enable or disable HCX. HCX Advanced can
be selected as a part of the provisioning at no additional cost; HCX
Enterprise can be selected at an additional cost. After that, you choose the
VMware software version (7.0 Update 2, 6.7, or 6.5).
Then, you choose the pricing interval, which is the billing commitment for
your usage. You can choose hourly, monthly, or yearly based on your
consumption. You need to confirm the pricing interval you selected in the
previous step. The next step is to select the number of ESXi hosts required in
your SDDC.
To access your ESXi host, you need to upload the SSH public keys. Your
SDDC environment is deployed in an availability domain. The next step is to
select the VCN you want to provision and which network you want to create
all the subnets for your management and workloads. In the next step, you
have an option to create new VLANs or use existing VLANs for your
management and other functions. After that, enter a CIDR block for your
VLANs. It is an IP address space for your VLAN functions. Then, enter a
CIDR block for your workloads. The final step is to review and submit. It
will take approximately an hour and a half or two hours to complete the
SDDC provisioning.
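The CIDR entries in the flow above are easy to get wrong. As a hedged pre-flight check (not Oracle tooling, and the subnet ranges below are made-up examples rather than recommended values), you can verify that every planned VLAN or workload CIDR sits inside the VCN and that none of them overlap:

```python
import ipaddress

# Illustrative pre-flight validation of an SDDC CIDR plan: every subnet
# must be contained in the VCN CIDR, and no two subnets may overlap.

def validate_sddc_cidrs(vcn_cidr, subnet_cidrs):
    """Return a list of problems found (an empty list means the plan is clean)."""
    vcn = ipaddress.ip_network(vcn_cidr)
    nets = [ipaddress.ip_network(c) for c in subnet_cidrs]
    problems = []
    for net in nets:
        if not net.subnet_of(vcn):
            problems.append(f"{net} is outside the VCN {vcn}")
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                problems.append(f"{a} overlaps {b}")
    return problems

print(validate_sddc_cidrs("10.0.0.0/16",
                          ["10.0.1.0/24", "10.0.2.0/24"]))   # -> []
print(validate_sddc_cidrs("10.0.0.0/16",
                          ["10.0.1.0/24", "10.0.1.128/25"]))
# -> ['10.0.1.0/24 overlaps 10.0.1.128/25']
```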

Figure 8-03: SDDC Provisioning Flow

SDDC VLANs
A VLAN is an object within the VCN. VMware uses these VLANs to
segregate traffic for different purposes. There are some functions used for
SDDC VLANs.
NSX Edge Uplink 1 and 2
The NSX Edge Uplink VLAN is used for north-south communication in and
out of the SDDC. It is also used for communication with native Oracle Cloud
services and the internet.
NSX Edge VTEP
The NSX Edge VTEP VLAN sends GENEVE encapsulated traffic between
the NSX-T Edges and the ESXi host.
NSX VTEP
The NSX VTEP VLAN is used for the NSX-T overlay network; GENEVE
encapsulated traffic that will flow east-west between the ESXi host.
vSAN
The vSAN VLAN is dedicated to vSAN traffic.
vSphere
The vSphere VLAN is where all your management VMs, like vCenter, NSX-
T Manager, HCX Manager, and all the NSX-T Edges, live.
HCX
The HCX VLAN is used for HCX traffic.
Replication Net
The Replication Net VLAN is used for the replication traffic initiated from
HCX.
Each VLAN is assigned with a VLAN ID, and these VLANs are local to the
VCN. If you configure the same VLAN ID on a different VCN, it is still
considered a different VLAN domain.
These VLANs partition your VCN into layer 2 broadcast domains.
Each VLAN has a routing table and network security group associated with
it.
VMware SDDC uses this VLAN to segregate traffic.
Note: If you are using an existing VLAN, update your routing table and
network security groups to allow traffic to flow between the components.
Deploying a Highly Available SDDC
Follow these steps to deploy a highly available SDDC.
Compute
The first step is to determine the total number of compute nodes you
require. The compute represents the CPU and memory. You can
determine the total number of ESXi nodes required in your architecture
based on your on-premises footprint
OCVS provides a 3 to 64-node ESXi cluster. The high availability of
these compute nodes is provided through OCI Fault Domain
architecture
When you provision these ESXi nodes, these are placed in different
fault domains and provide the first level of high availability to your
compute nodes
vSphere cluster HA is also enabled to protect VMs running within the
cluster
Virtual Cloud Network
A VCN represents a traditional network with all firewalls and
gateways, and that is your underlay network for your SDDC
NSX-T is an overlay network with a few components installed. NSX
manager and controller is an appliance that you deploy
NSX Edges are deployed from the NSX manager for routing and
switching purposes
You can factor in a vSphere Cluster HA function for the NSX
component’s high availability
NSX networking depends on VCN. VCN provides a highly scalable
architecture, and it provides a greater extension for your SDDC to
scale-out
The SDDC bare metal server is backed by two 25 Gbps network
connections and supports up to 52 virtual NICs, which means 26 per
physical NIC, giving high network bandwidth to your virtual
machines
Storage
VMware vSAN provides an inbuilt enterprise-class performance
datastore, reliability, and availability
It uses an all-flash solution attached directly to the NVMe-backed bare
metal instance
vSAN implements storage fault domains. They ensure multiple replica
copies of storage objects are distributed across the domains
You can also use vSAN storage policies to determine the high
availability of individual VMs

Design Hybrid Cloud


Connecting Between On-premises and OCVS
OCVS is a hybrid cloud that enables connecting the on-premises to the
SDDC environment created under the Oracle Cloud Infrastructure. There are
few configuration requirements to achieve a hybrid cloud.
FastConnect
A FastConnect is a dedicated high-bandwidth network link of 1 or 10 Gbps
providing a high Quality of Service (QoS). It is a connection with lower
latency and provides 99.95% availability.
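The 99.95% figure translates directly into an allowed-downtime budget. A quick back-of-the-envelope calculation, assuming a 30-day month:

```python
# Convert an availability percentage into allowed downtime in minutes
# per 30-day month: total minutes times the unavailable fraction.

def downtime_minutes_per_month(availability_pct: float, days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability percentage."""
    total_minutes = days * 24 * 60
    return round(total_minutes * (1 - availability_pct / 100), 1)

print(downtime_minutes_per_month(99.95))   # -> 21.6
```

So a 99.95% link may be unavailable for roughly 21.6 minutes in a 30-day month.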
VMware HCX (Hybrid Cloud Extension)
After provisioning FastConnect, the following configuration deploys the HCX
components. HCX is a set of appliances that provide secure connectivity to
support use cases like migration and disaster recovery.
HCX Components
There are primarily two components of HCX to start with.
HCX Manager
HCX Connector
The HCX Manager and HCX Connector together build the service mesh. A
service mesh is built using a set of deployed appliances, which creates an
effective service configuration to be used by the source and destination.
The HCX Interconnect is a mandatory appliance, while HCX WAN
Optimization, Network Extension, and all other appliances are optional.
HCX Manager
There is a one-to-one mapping: one HCX Manager is required per SDDC.
SDDC provisioning deploys the HCX Manager appliance, which is licensed
along with the other management VMs.
The OCVS environment where the HCX Manager is deployed acts as the
target for site pairing. Site pairing establishes the connection between the
environments for management, authentication, and running HCX features.

EXAM TIP: The service mesh appliances on the HCX Manager side are
considered the tunnel receivers.
HCX Connector
The connector is always the source for site pairing. It is paired with the HCX
Manager, which means it cannot pair with another connector.
The HCX Connector is licensed based on the OCVS SDDC deployment. In
this case, the connector's service mesh appliances are the tunnel initiators.
These two integrated components build the service mesh with features like
migration with vMotion, disaster recovery, bulk migration, OS-assisted
migration, traffic engineering, etc.
HCX Layer 2 Extension – Configuration
Prerequisites
There are some prerequisites to make HCX work.
The on-premises environment must be configured with a distributed virtual
switch
You can establish a dedicated private network using Oracle cloud
FastConnect
The required routing configuration and security rules for VCN and on-
premises should be configured to allow communication
Preparation
As part of the preparation, the first step is to download and deploy the HCX
Connector appliance on-premises. The connector is downloaded from the
HCX Manager interface and then deployed on-premises, after which the
product is activated using the key provided by the HCX Manager.
Site Pairing
Once the product is activated, the next step is to configure the site pairing by
entering the IP address and other authentication details. The site pairing
establishes the connection for management, authentication, and orchestration
for HCX services across both environments.
Compute Profile
A compute profile needs to be created, a container of the compute, storage,
and network settings that are used to deploy the HCX appliances.
Network Profile
A network profile is part of the compute profile. It is an abstraction of
network properties like a distributed port group, an NSX logical switch, or
any of the other layer 3 properties of that network.
Service Mesh
After that, the service mesh is created, which is the HCX service
configuration used by both on-premises and OCVS.
Configure L2 Extension
The last step is to select the on-premises VLAN to extend, including the
other properties like the gateway, the destination’s first-hop router, etc.
With the L2 configuration, your VLANs are now extended from on-premises
to OCVS. After the migration of VMs, the network settings of those VMs
need to be changed to the extended VLAN.
Mobility Optimized Networking (MON)
Mobility Optimized Networking (MON) is an HCX Enterprise feature.
This feature allows you to route the traffic of a migrated virtual
machine within OCVS without a network trombone
A network trombone occurs when all the workloads on an extended
network at the destination are routed through the on-premises router
gateway
MON ensures that the traffic remains symmetric and uses an optimal
path to reach its destination
With MON, an NSX-T tier-1 gateway is connected to perform the local
routing task. You can configure all this optimized routing in an
automated way
MON allows policy-based routing. The policy route defines the traffic
that needs to be routed through the source gateway compared to the
traffic routed through the cloud gateway
The MON policy is evaluated when the destination network for a
traffic flow is not within the SDDC NSX-T tier-1 router. It is evaluated
so that if the destination IP is matched and configured as allow in the
policy, the packet is forwarded to the on-premises gateway using the
HCX Network Extension appliance. If the destination IP is not matched,
or is configured as deny in the policy, the packet is forwarded to the
SDDC tier-0 gateway to be routed.
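The policy decision described above can be sketched as a small routing function. This is an illustrative model of the logic, not NSX-T's actual implementation, and the prefix list is a made-up example:

```python
import ipaddress

# Illustrative sketch of the MON policy decision: if the destination IP
# matches an allowed policy route, hand the packet to the on-premises
# gateway over the HCX Network Extension; otherwise route it via the
# SDDC tier-0 gateway.

def next_hop(dst_ip, policy_allow):
    """Pick the egress gateway for a flow leaving the tier-1 router."""
    dst = ipaddress.ip_address(dst_ip)
    for cidr in policy_allow:
        if dst in ipaddress.ip_network(cidr):
            return "on-premises gateway (via HCX Network Extension)"
    return "SDDC tier-0 gateway"

policy = ["192.168.0.0/16"]              # hypothetical on-premises prefixes
print(next_hop("192.168.10.5", policy))  # matches -> on-premises gateway
print(next_hop("8.8.8.8", policy))       # no match -> SDDC tier-0 gateway
```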
Migration with HCX
There are different ways to migrate virtual machines to Oracle Cloud
VMware Solution using HCX.
HCX Advanced Edition
HCX Advanced Edition gives a few migration types.
Cold Migration – A cold migration is done with VMs powered off, and data
transfer is done using the Network File Copy (NFC) protocol. The cold
migration means there is a downtime for the VM.
HCX vMotion – vMotion transfer captures the virtual machine’s active
memory, execution state, IP address, and MAC address and transfers to the
destination. It is also referred to as the live migration feature of VMware;
therefore, there is no downtime for the VM.
HCX Bulk – It uses the Host-Based Replication (HBR) method. It is done by
replicating several live virtual machines to the target site. The replication
process makes an initial full sync of the VM to the target site. Then, the delta
changed blocks are replicated. Once the delta replication is complete, a
switchover is triggered, with minimal downtime for the running VM.
Therefore, it is called the warm migration type.
HCX Enterprise provides additional migration features such as OS Assisted
Replication. This is done through an agent deployed on the guest operating
system of the VM. It has minimal downtime. Therefore, it is a warm
migration type.
Replication-assisted vMotion is designed for massive migrations, and it is an
enhanced version of the HCX bulk migration. It works with a combination of
HBR and vMotion working together. In this migration type, the difference is
that the last switchover is done through vMotion, and there is no impact on
the running VM. Therefore, it is considered a live migration.
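The migration types above can be summarized as a small lookup of downtime behavior and required HCX edition. The type names and structure below are just an illustration of the text, not an HCX API:

```python
# Hedged summary of the HCX migration types described above: downtime
# behavior and the HCX edition each type requires (per the text).

MIGRATION_TYPES = {
    "cold":     {"downtime": "full (VM powered off)", "edition": "Advanced"},
    "vmotion":  {"downtime": "none (live)",           "edition": "Advanced"},
    "bulk":     {"downtime": "minimal (warm)",        "edition": "Advanced"},
    "os_assisted_replication":
                {"downtime": "minimal (warm)",        "edition": "Enterprise"},
    "replication_assisted_vmotion":
                {"downtime": "none (live)",           "edition": "Enterprise"},
}

def needs_enterprise(migration_type: str) -> bool:
    """True when the migration type requires the HCX Enterprise upgrade."""
    return MIGRATION_TYPES[migration_type]["edition"] == "Enterprise"

print(needs_enterprise("bulk"))                          # -> False
print(needs_enterprise("replication_assisted_vmotion"))  # -> True
```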
Use Cases for Hybrid Clouds
In a variety of cases, a hybrid cloud is the best option. The following use
examples demonstrate how you can use hybrid cloud computing effectively.
Disaster Recovery
You can fine-tune private and public disaster recovery with hybrid cloud
solutions to match an organization's specific needs. It results in a more
straightforward approach that conserves local storage space and bandwidth
while streamlining the backup process—assuring an efficient and speedy
recovery of locally stored private data. It ensures continuity while
maximizing the efficiency that only a hybrid architecture can provide.
Workload Migration
A hybrid cloud solution can be a temporary setup that allows for permanent
cloud migration, and a business cloud migration could take months in some
circumstances. Using a hybrid cloud to transition allows for a staged shift
with simple and safe rollback while maintaining the flexibility that minimizes
or eliminates downtime.
Development Lifecycle
During the development lifecycle, resource requirements change. The testing
phase will require certain resources that will not be required during beta or
even launch. These resources can scale according to the needs of each phase
in a hybrid cloud environment. It provides flexibility throughout the life cycle
without requiring hardware or configuration changes.
Legacy Applications
While you can transfer many tools, applications, and resources to the cloud,
some still require on-premises resources. Hybrid cloud computing supports
these scenarios, which has the benefit of allowing an organization to
transition to the cloud at its own pace.

OCVS Network Topology


NSX-T Architecture Component
In OCVS, the NSX-T overlay manages the traffic flow between the VMs and
between the VMs and the other resources in the solution.
NSX-T works by implementing three separate integrated planes. These are:
Management
Control
Data
Management Plane – The management plane provides an entry point to the
system through API and graphical user interface. It is responsible for
maintaining the user configuration, handling user queries, and performing
operational tasks on the management, control, and data planes.
The NSX-T manager appliance implements the management plane for the
NSX-T ecosystem.
Control Plane – The control plane computes the runtime state of the system
based on the configuration provided by the management plane. It is also
responsible for disseminating topology information reported by the data plane
elements and pushing the stateless configuration to the forwarding engine.
NSX-T splits the control plane into two different parts.
Central Control Plane (CCP) – The CCP nodes are implemented as a
cluster of virtual machines, and this form factor provides both
redundancy and scalability of resources. The CCP is logically separated
from all data plane traffic, which means any failure in the control plane
does not affect the existing data plane operation
Local Control Plane (LCP) – The LCP runs on the transport nodes. It
is adjacent to the data plane it controls and is connected to the CCP. The
transport nodes are the hosts that run the local control plane daemons
and the forwarding engines implemented by the NSX data plane. The
LCP is responsible for programming the forwarding entries and firewall
rules of the data plane. NSX Manager and NSX Controller are bundled
together in a virtual machine called the NSX Manager appliance
Data Plane – The data plane performs stateless forwarding and transmission
of packets based on the tables populated by the control plane. It also reports
the topology information to the control plane and maintains the packet-level
statistics. The transport nodes play a major role in the data plane.
There are two main types of transport nodes.
Host Transport Nodes – These nodes are ESXi hosts prepared and
configured for NSX-T, and they provide network services to the virtual
machines running on those hosts
Edge Nodes – The edge nodes are service appliances representing a
pool of capacity dedicated to running a centralized network service that
cannot be distributed to the ESXi host
Figure 8-04: NSX-T Architecture

NSX-T Routing and Bridging


There are two types of communication flows.
North-South Traffic
East-West Traffic
Tier-0 Gateway
The Tier-0 gateway is used for routing north-south traffic, meaning the
traffic between the workload VMs and the upstream network, whether that
is a physical upstream or an external network. An edge node, installed on
the host as a VM, is required for using the Tier-0 gateway; in other words,
the Tier-0 gateway is a function that lives inside the NSX-T edge node.
Tier-1 Gateway
The Tier-1 gateway provides a distributed router function and runs locally
inside the ESXi host as a VMkernel module. VM traffic between two
different subnets on the same host does not have to leave the ESXi host, so
the gateway provides east-west traffic routing without hairpinning the
flow.
The second function of the Tier-1 gateway is the service router, which is
responsible for stateful services and is instantiated as a service on the
NSX-T edge node.
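As a rough illustration of this distinction, the sketch below (a simplified model, not NSX-T code; the subnet values are hypothetical) classifies a flow as east-west when both endpoints fall inside the overlay address space, and as north-south otherwise:

```python
import ipaddress

# Hypothetical overlay segments served by the Tier-1 gateway.
OVERLAY_SEGMENTS = [
    ipaddress.ip_network("192.168.10.0/24"),
    ipaddress.ip_network("192.168.20.0/24"),
]

def in_overlay(ip):
    """Return True if the address belongs to any overlay segment."""
    addr = ipaddress.ip_address(ip)
    return any(addr in seg for seg in OVERLAY_SEGMENTS)

def classify_flow(src, dst):
    """East-west traffic stays inside the overlay (Tier-1, distributed
    routing); traffic entering or leaving it is north-south (Tier-0)."""
    if in_overlay(src) and in_overlay(dst):
        return "east-west (Tier-1)"
    return "north-south (Tier-0)"

print(classify_flow("192.168.10.5", "192.168.20.7"))  # east-west (Tier-1)
print(classify_flow("192.168.10.5", "8.8.8.8"))       # north-south (Tier-0)
```

In the real platform this decision is made by the routing tables of the Tier-1 and Tier-0 gateways, not by address inspection; the sketch only captures the traffic categories.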
NSX-T Bridge (VLAN – overlay)
The Tier-0 gateway can support communication between VLAN-backed
workloads and overlay-backed workloads; however, there are scenarios
where Layer 2 connectivity is required between the VMs and physical
devices. For such functionality, NSX-T introduces the NSX-T Bridge, a
service that can be instantiated on an edge node to connect an NSX-T
logical segment with a traditional VLAN at Layer 2.
N-VDS – The Logical Switch
N-VDS stands for NSX-T Managed Virtual Distributed Switch. It is a logical
switch to which your workload VMs are connected. The N-VDS performs
the heavy lifting of getting packets in and out of the vSphere environment.
With the NSX-T N-VDS, you instantiate a Layer 2 broadcast domain
within the hypervisor (ESXi host), which means NSX-T is deployed and
enabled at the level of each host.
Communication between the ESXi hosts is done through IP tunnels using
virtual tunnel endpoints (vTEPs) on every host. The vTEP is responsible for
encapsulating and routing the packet to a destination vTEP for further
processing.
The NSX-T data plane keeps track of which VM runs on which host, along
with its MAC address and the corresponding tunnel endpoint. An example
is shown in Figure 8-05.
In this example, the two racks host two running VMs: VM1 is on ESXi
host 1, and VM2 is on ESXi host 3. The data plane tracks which VM is
running on which host, with the corresponding MAC and vTEP addresses.
Now, what happens when a VM moves to another host through vMotion?
The VM1-to-VM2 traffic is encapsulated through the vTEP tunnel of the
overlay architecture, and the data plane updates its tables to match the new
vTEP tunnel endpoint location and routes through that IP tunnel. The
traffic between the racks never has to leave the top-of-rack switches; it is
routed through the IP tunnel created between the hosts. With this overlay
architecture, the underlay subnets can still be different while the VMs
retain connectivity.
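The bookkeeping described above can be pictured as a simple lookup table (a conceptual sketch only; the MAC and vTEP addresses below are made up): the data plane maps each VM's MAC address to the vTEP of its current host, and a vMotion event simply rewrites that entry.

```python
# MAC address -> vTEP IP of the host currently running that VM
# (all values are hypothetical).
mac_to_vtep = {
    "00:50:56:aa:aa:01": "10.0.1.11",  # VM1 on ESXi host 1
    "00:50:56:aa:aa:02": "10.0.1.13",  # VM2 on ESXi host 3
}

def forward(dst_mac):
    """Encapsulate toward the vTEP that currently hosts the destination MAC."""
    return mac_to_vtep[dst_mac]

def vmotion(mac, new_vtep):
    """After a live migration, update the mapping so traffic follows the VM."""
    mac_to_vtep[mac] = new_vtep

print(forward("00:50:56:aa:aa:02"))        # 10.0.1.13
vmotion("00:50:56:aa:aa:02", "10.0.1.12")  # VM2 moves to host 2
print(forward("00:50:56:aa:aa:02"))        # 10.0.1.12
```

The actual NSX-T data plane learns and distributes these mappings automatically; the point of the sketch is only that the underlay route follows the table, not the VM's physical location.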

Figure 8-05: Example

OCVS Network Architecture


OCVS is a hybrid cloud infrastructure. Therefore, the first environment to
consider is the customer's on-premises environment, where a set of
workloads is connected to a few different networks. Dynamic routing and
firewalls are configured, and end users connect to the applications
through the internet or VPN.
The OCVS deployment starts in an OCI region, with prerequisites like a
VCN and some public and private subnets. It must have security lists
acting as a perimeter firewall and routes through the internet gateway to
the external network.
The internet gateway allows inbound connections to be terminated within the
public subnets or through the NSX edge uplink VLAN to the SDDC
workloads.
The three-node cluster is deployed using this architecture's dedicated OCI
provisioning network. SDDC deployment creates the management VMs,
such as vCenter, NSX Manager, and HCX Manager, connected to the
vSphere management network. The HCX fleet appliances will be deployed
later, during the activation stage, and will also be connected to the vSphere
network. These management components are connected to OCI VLAN-
backed port groups. They are reachable externally through virtual IPs
assigned to the VLANs and connected through VCN routing without
traversing the NSX-T network fabric. During SDDC provisioning, the edge
nodes are also deployed; vSphere uses anti-affinity rules to keep these
nodes separated on each host. Some other entities exist inside the OCVS
SDDC.
A solid line with a circular end means the connection is terminated on
an ESXi host VMkernel port
A dotted network connection means the network is supported by a
guest VM port group of vSphere
The vMotion network is used for only vMotion traffic for both
management and workload VMs
HCX VLAN will be created if you choose to enable HCX during the
provisioning process of SDDC
The vSAN VLAN is used for the storage of data traffic
The provisioning VLAN is used for virtual machine cold migration,
cloning, and snapshot migration purposes
The replication network VLAN is used for the vSphere replication
engine; this is applicable only if you have chosen vSphere version 7.0
Update 2 or a later SDDC version
The NSX vTEP is used for the data plane traffic between the ESXi host
through the tunnel endpoints, and the NSX edge vTEP is used for the
data plane traffic between the ESXi host and the NSX edge
The edge node deployed on the ESXi host uses the edge vTEP, which
means the VM uses a network connection supported by the guest VM
port group of vSphere
The NSX edge uplink one communicates between the VMware SDDC
and OCI. NSX edge uplink two is reserved for future use, which you
could use for deploying an external-facing application on the VMware
SDDC that you have created
These VLANs have corresponding network security groups attached to the
device interfaces, similar to the NSX-T distributed firewall. A FastConnect
link is established between the customer's on-premises network and the
OCVS region. With this connectivity, you can enable the HCX interconnect
service, create a secure tunnel, and configure the fleet appliances, such as
the interconnect, WAN optimization, and Layer 2 extension appliances. The
workloads are connected to NSX-T overlay segments. The NSX-T Tier-0
gateway is used for your north-south traffic, and the Tier-1 gateway is used
for your east-west traffic. The source address is NATed through the Tier-0
gateway to the edge uplink VLAN and to a public IP address through the
VCN internet gateway for internet egress. The NSX edge node uses separate
tunnel endpoint interfaces from its parent host. The edge uplink VLAN that
terminates on the Tier-0 router is also presented to the edge node VMs.
VCN routing is automatic for destinations where OCI has direct
connectivity; in all other cases, you should update the route list. The
regional OCI services are reached by deploying a service gateway into the
SDDC's parent virtual cloud network.

Figure 8-06: OCVS Network Architecture

Access to Microsoft Azure


About two years ago, Oracle Cloud Infrastructure signed a partnership with
Microsoft Azure that links the two cloud providers. However, you still have
to construct the connectivity from your tenancy to the Azure account.
Normally, a virtual circuit connects the on-premises data center to OCI via
a partner provider such as Equinix, Megaport, or AT&T; Microsoft Azure is
another such partner. When you select the Microsoft option, you link the
OCI resources to the Azure account.
This approach's primary goal is to let you migrate or run critical enterprise
workloads on both cloud service providers and across them, linking them
and using them seamlessly as if they were one cloud provider.
You might have some applications on Azure that you want to keep there.
However, you want to leverage the Oracle Cloud Autonomous Database on
OCI.
Partnership Benefits
When two cloud service providers become partners, they get the following
benefits.
Innovation
You can run enterprise-grade, multi-cloud applications between the Oracle
Cloud Infrastructure and Microsoft Azure.
Choice of Service
You keep your choice of service, running both cloud-native applications
and typical data center applications in the cloud.
Leverage existing Investments
You can leverage existing investments on either one of those two cloud
providers. It will help you migrate on-premises applications and databases to
the cloud without re-architecting technology.
Other Benefits
When you connect Oracle Cloud and Microsoft Azure, you receive the
following advantages:
A private intercloud connection that avoids using the internet
Through redundant 10-Gbps physical connections, high availability and
dependability are ensured
The cross-cloud network performance is predictable
There is no requirement for an intermediary network provider because
the configuration is done once
Common Use Cases
Full-stack Oracle or custom apps on Oracle Database on OCI and full-
stack apps on Azure that interoperate and share data
Oracle Apps (PSFT, JDE) on Azure using Oracle Database on OCI
Custom .NET applications on Azure using Oracle Database on OCI
Custom Cloud Native apps on Azure using Oracle Autonomous DB
Applications/Oracle Database in OCI, Azure Data Lake for analytics,
and Cognitive Services for AI
SQL Azure, SQL Server, SQL DW on Azure and Oracle Analytics
Cloud, and Data Science service on OCI

Figure 8-07: Use Cases

OCI-Azure Interconnect Setup


Architecture
Oracle and Microsoft have partnered to allow you to create a secure, direct
link between Oracle Cloud and Microsoft Azure.
The networking for workloads spreads between Oracle Cloud and Microsoft
Azure is depicted in the diagram below:
Figure 8-08: OCI-Azure Interconnect Setup

Use Oracle Cloud Infrastructure FastConnect or IPSec VPN for secure access
from your data center to Oracle Cloud. Use ExpressRoute or VPN for private
traffic from your data center to Microsoft Azure.
Set up a link between a FastConnect circuit in Oracle Cloud and an
ExpressRoute circuit in Microsoft Azure for cross-cloud networking between
Oracle Cloud and Microsoft Azure.
The following figure depicts further specifics of Oracle Cloud and Microsoft
Azure cross-cloud networking with a sample workload. In this example, the
workload in Microsoft Azure is a custom application with a public-facing
load balancer. The custom application uses a private database in Oracle
Cloud.
Figure 8-09: OCI-Azure Interconnect Architecture

Use Oracle Cloud Infrastructure FastConnect or IPSec VPN for
administrative access to the Oracle Cloud database from your data
center. Use ExpressRoute or VPN for private access from your data
center to the workload in Microsoft Azure
Set up a link between a FastConnect circuit in Oracle Cloud and an
ExpressRoute circuit in Microsoft Azure for cross-cloud networking
between Oracle Cloud and Microsoft Azure
The communication between the clouds is secure and segregated. The
cross-cloud connection does not allow traffic from networks other than
Oracle Cloud and Microsoft Azure to access either cloud. Traffic from
your data center, for example, cannot reach Oracle Cloud via Microsoft
Azure
The FastConnect virtual circuit terminates at an Oracle Cloud dynamic
routing gateway (DRG) connected to a virtual cloud network (VCN)
The ExpressRoute connection in Microsoft Azure terminates at a
virtual network gateway (VNG) connected to a virtual network (VNet)
The traffic between the application and the database is routed through
Microsoft Azure's VNG to Oracle Cloud's DRG. Traffic is routed
through the DRG to the VNG in the opposite direction. The traffic
never exits the private network in either direction
The required steps (flow) for the OCI-Azure interconnect setup are
summarized below.

Figure 8-10: OCI-Azure Interconnect Flow

Lab 8-01: Access to Microsoft Azure


Introduction
Oracle and Microsoft have teamed up to deliver Oracle Cloud and Microsoft
Azure with low-latency, private connectivity. This relationship provides you
with a cross-cloud experience that is highly optimized, safe, and unified.
Utilize the best of Oracle Cloud and Microsoft Azure services and any
current Oracle and Microsoft technology investments.
Problem
An organization has been running within Oracle Cloud Infrastructure for a
few years. Management has decided to maintain the on-premises setup and
another existing cloud setup, and needs a low-latency, private connection
between workloads distributed across the two cloud service providers,
while also extending the on-premises data center to both clouds. How can
this be achieved?
Solution
In OCI, the organization can create the interconnect setup to achieve the
requirements.
Step 1: Navigate to VM Instance
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.

2. The Oracle Cloud Infrastructure Console dashboard will appear.


3. To launch any free resource, go to the Navigation menu.
4. Go to Compute and click on Instances.

Step 2: Configuration of the VM Instance


5. The default VM instances page will appear. Verify your Home Region
and Compartment.
6. Click on Create Instance.

7. Now, enter a unique name for the instance.


8. Choose your compartment.
9. Select the default Availability Domain (AD).
10. Choose the image and shape according to your requirement.
11. Select the default virtual cloud network created by Oracle.
12. Select Assign a public IPv4 address option.
13. Leave the remaining options as default and click on Create.
14. After the VM instance deployment, you can verify the configured
information.
Step 3: Configure Dynamic Routing Gateway
15. Go to the Navigation menu and open Dynamic Routing Gateway.
16. Choose the same compartment and home region.
17. Click on Create Dynamic Routing Gateway.

18. Write a unique name.


19. Choose your compartment.
20. Click on Create Dynamic Routing Gateway.

21. After provisioning, verify the configured details.


22. Go to the Networking section from the Navigation menu and click
on Virtual Cloud Network.

23. Click on the created VCN.


24. Verify the entered details.
Step 4: Add Route Rule
25. Scroll down the page, navigate Route Tables, and open the default
route table.
26. Click on Add Route Rules.
27. Select DRG as Target Type.
28. Enter Destination CIDR Block.
29. Click on Add Route Rules.
30. After that, scroll down the VCN page and create Dynamic Routing
Gateway Attachments.
31. Click on the attachment.
Step 5: Create Azure Resource Group
32. Log in to the Azure Portal by using your credentials.
33. From the homepage, click on Resource groups.
34. You will see the list of created resource groups. Click on + Create to
create a new resource group.
35. Enter the Basics details; inside the Project details, choose your
Subscription.
36. Write a unique name for the resource group.
37. Inside the Resource details, choose the nearest location.
38. Click on Review + create.

39. Once the validation is passed, click on Create.


40. After creation, you can verify the configured details from its overview
page.
Step 6: Create Virtual Network Gateway
41. Go to the Home page and click on Create a resource.
42. Search and open Virtual Network Gateway.
43. Click on Create.
44. Choose your Subscription.
45. Inside the instance details, write a unique name.
46. Select your nearest region.
47. Choose ExpressRoute as a Gateway type.
48. Select Standard SKU.
49. Click on a Create virtual network.
50. For a virtual network, write a unique name.
51. Select the recently created resource group.
52. Select the Address range according to your requirement.
53. Similarly, create and write a subnet range as well.
54. Click on OK.
55. Add the recently created VNet in VCN configuration.
56. Add Gateway subnet address range.
57. For the Public IP address, create a new address with Basic SKU.
58. Click on Review + create.
59. Once the validation is passed, click on Create.
60. Verify the details from the overview page.
Step 7: Create Virtual Machine
61. Go to the Home page and click on Virtual Machine.
62. Click on + Virtual machine.

63. Enter the Basics and choose your Subscription.


64. Select the same resource group.
65. Inside the Instance details, write a unique name for VM.
66. Select the same Region.
67. Choose the Standard security type.
68. Select Linux image.
69. Select Size according to your choice.
70. For the Administrator account, choose the Password option.
71. Write a unique Username and Password.
72. Choose Allow selected ports for Public inbound ports.
73. Select SSH (22) inbound port.
74. Click on Review + create.
75. Write your name and email address for verification.
76. Once the validation is passed, click on Create.
77. Verify the configured details from the Overview page.
78. Now connect the virtual machine using the SSH option.
79. Now, sign in to VM using the same username and password for the
administrator account.

80. You will observe the following output.


81. Now, ping the private IP address of the VM to see the response.
Step 8: Configure ExpressRoute
82. Again, go to the home page and click on Create a resource.
83. Search and open ExpressRoute.
84. Click on Create.
85. Select subscription.
86. Select instance location and resource group.
87. Click on Next : Configuration >
88. Select Provider as a Port type.
89. Create a new circuit type.
90. Select Oracle Cloud FastConnect provider.
91. Select a peering location.
92. Select bandwidth.
93. Leave the remaining options as default.
94. Click on Next: Tags >
95. Add tags.
96. Click on Review + create.
97. Once the validation is passed, click on Create.
98. Verify the configuration details from the Overview page.
99. Copy Service key for future use.
Step 9: Configure FastConnect
100. Open OCI Console and open the FastConnect service from the
Navigation menu.
101. Choose the same Home region and compartment.
102. Click on Create FastConnect.

103. Select FastConnect Partner as a Connection type.


104. Select Microsoft Azure: ExpressRoute as a Partner.
105. Click on Next.

106. For configuration details, enter a name.


107. Choose your compartment.
108. Select Private Virtual Circuit Type.
109. Add your recently created DRG.
110. Select Provisioned Bandwidth value.
111. Paste the service key in the Partner Service Key option.
112. Set BGP IP addresses.
113. Click on Create.
114. The provisioning process will take a few minutes.
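The same private virtual circuit can also be created from the OCI CLI instead of the console. The sketch below is illustrative only; the OCIDs, bandwidth shape, BGP addresses, and the service key placeholder are all values you would replace with your own (the partner service key is the one copied from the Azure ExpressRoute overview page):

```shell
# Create a private virtual circuit toward Azure ExpressRoute
# (all OCIDs and addresses below are placeholders).
oci network virtual-circuit create \
  --compartment-id ocid1.compartment.oc1..example \
  --type PRIVATE \
  --gateway-id ocid1.drg.oc1..example \
  --bandwidth-shape-name "1 Gbps" \
  --provider-service-id ocid1.providerservice.oc1..example \
  --provider-service-key-name "<Azure ExpressRoute service key>" \
  --cross-connect-mappings '[{"customerBgpPeeringIp": "10.99.0.202/30",
                              "oracleBgpPeeringIp": "10.99.0.201/30"}]' \
  --display-name "oci-azure-interconnect"
```

This is a configuration sketch, not a runnable script; it requires an authenticated OCI CLI session and real resource identifiers in your tenancy.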
Step 10: Create Connection in Azure
115. Go to the Azure Portal and click on Create a resource.
116. Search and open Connection.
117. Click on Create.
118. Select your subscription.
119. Choose the same resource group.
120. Choose ExpressRoute as a connection type.
121. Scroll down and choose a name and nearest location. Click on Next:
Setting >
122. In the Settings pane, choose the recently created virtual network
gateway.
123. Select the recently created ExpressRoute circuit.
124. Click on Review + create.
125. You will observe the IPv4 BGP State as Up.
126. Go to the Azure VM and connect via SSH.
127. Run the ping command and observe the output similar to the
following.
128. Go to the Azure VM and copy the public IP address from the
Overview page.
129. Now connect it via PuTTY.
130. Write <username>@<public IP address> inside the Host name.
131. Choose SSH 22 port.
132. Click on Open.
133. Now connect the session using username and password.
134. Go to the OCI VM instance and copy its private IP address.
135. Run the ping command with the private IP of compute instance.
136. A similar output will appear as was seen when pinging the private IP
address of the Azure VM.

Introduction to IPv6 with Oracle


With IPv6 support in OCI, you can publish Oracle-allocated IPv6 addresses
on the internet for public connectivity, or use them only for private
connectivity within and between your Virtual Cloud Networks (VCNs) or
on-premises networks (no NAT required). Create and deploy apps that can
communicate over VPN or FastConnect from IPv6 endpoints to IPv6-
enabled compute instances and resources linked to on-premises networks.
Your IPv6 customers can also connect to a virtual IP address for web load
balancing and be routed to IPv4 web application instances. Customers can
now make their applications available to IPv6 end-users over the internet.
Overview
All commercial and government regions support IPv6 addressing
You can choose whether or not to enable IPv6 during VCN creation, or
you can enable IPv6 on existing IPv4-only VCNs. You can also select
which subnets in an IPv6-capable VCN are IPv6 enabled
A /56 IPv6 CIDR block is used by IPv6-enabled VCNs. Oracle assigns
the VCN a /56 globally routable IPv6 CIDR block for internet
connectivity. All subnets are in the /64 range. By setting the
"public/private" subnet level flag, you can allow or disallow internet
communication to a subnet
You can also specify whether an IPv6-enabled subnet's VNICs have
IPv6 addresses (up to 32 maximum per VNIC)
Only the dynamic routing gateway (DRG), the local peering gateway
(LPG), and the internet gateway support IPv6 traffic
Between your VCN and the internet, and between your VCN and your
on-premises network, both inbound and outbound IPv6 connections are
enabled. It is also possible to communicate between resources within
your VCN or between VCNs
IPv6 traffic (intra- and inter-VCN) is supported between resources
within a region. See Routing for IPv6 Traffic and Internet
Communication for more information
FastConnect and Site-to-Site VPN support IPv6 communication
between your VCN and on-premises network. For IPv6, you must use
FastConnect or Site-to-Site VPN
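The /56 and /64 figures above can be checked with a few lines of Python using the standard library (the prefix value below is a made-up example; Oracle assigns the actual block):

```python
import ipaddress

# A hypothetical /56 IPv6 block, as Oracle might assign to a VCN.
vcn = ipaddress.ip_network("2603:c020:4000:a100::/56")

# Carving the VCN block into /64 subnets yields 2**(64-56) = 256 of them.
subnets = list(vcn.subnets(new_prefix=64))
print(len(subnets))      # 256
print(subnets[0])        # 2603:c020:4000:a100::/64

# Each /64 subnet holds 2**64 addresses; abundance is the point of IPv6.
print(subnets[0].num_addresses == 2**64)  # True
```

So a single VCN's /56 allocation provides 256 subnets, each with far more addresses than any subnet will ever use.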
Use Cases
The following scenarios are possible with this feature:
IPv6 endpoints are required for communication in mobile applications
or websites.
The Internet of Things (IoT) necessitates a greater number of IP
addresses than ever before. IoT devices can connect with IPv6
endpoints in the cloud via IPv6
IPv6 connectivity on-premises aids in onboarding IPv6 apps from the
on-premises network to the cloud
Benefits
More Efficient Routing - Routing is more efficient and hierarchical with
IPv6 since it minimizes the routing tables. Fragmentation is handled by the
source device, not the router, in IPv6 networks, employing a protocol for
determining the path's maximum transmission unit.
More Efficient Packet Processing - Unlike IPv4, IPv6 does not have an IP-
level checksum, which means the checksum does not need to be regenerated
at each router hop.
Directed Data Flow - IPv6 provides multicast rather than broadcast for
directed data flows. Multicast allows traffic-intensive packet flows to be
transmitted simultaneously to numerous destinations, conserving network
bandwidth.
Simplified Network Configuration - When linked to other IPv6 devices,
IPv6 devices can auto-configure themselves. You can automate IP address
assignment and device numbering configuration chores.
Security - IPv6 has IPSec security embedded, providing confidentiality,
authentication, and data integrity.
IPv4 and IPv6
The following are the most significant differences between IPv4 and IPv6:
IPv6 expands routing and addressing capabilities by increasing the IP
address size from 32 to 128 bits, allowing for additional layers of
addressing hierarchy, a larger number of addressable nodes, and easier
address auto-configuration
The addition of a scope field to multicast addresses improves the
scalability of multicast routing
An anycast address is a new form of address that has been defined. It
defines groups of nodes to which a packet addressed to an anycast
address is sent. Anycast addresses are used in the IPv6 source route to
allow nodes to control their communication path
Simplified header format - Some IPv4 header fields have been removed
or made optional. The IPv6 header is only twice the size of the IPv4
header, even though IPv6 addresses are four times longer than IPv4
addresses. Despite the increased size of the addresses, this decreases
the common-case processing cost of packet handling and keeps the
bandwidth cost of the IPv6 header as low as possible
Improved option support - Changes to the way IP header options are
encoded allow for more efficient forwarding, fewer restrictions on the
length of choices, and more flexibility in the future for adding new
options
Quality-of-service capabilities - A new feature enables the marking of
packets belonging to specific traffic flows. The sender desires special
treatment, such as non-default quality or real-time service
Authentication and personal capabilities - IPv6 involves defining
extensions that offer authentication, data integrity, and confidentiality
support
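The first difference, the jump from 32-bit to 128-bit addresses, is easy to quantify with Python's standard library (the two addresses below are from the reserved documentation ranges):

```python
import ipaddress

v4 = ipaddress.ip_address("203.0.113.10")   # documentation-range IPv4
v6 = ipaddress.ip_address("2001:db8::10")   # documentation-range IPv6

# Address sizes: 32 bits versus 128 bits.
print(v4.max_prefixlen, v6.max_prefixlen)   # 32 128

# IPv6 notation can be written compressed or fully expanded.
print(v6.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0010
```

The factor between the two address spaces is 2**96, which is why the hierarchy and auto-configuration features described above become practical: address scarcity is no longer a design constraint.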
IPv6 Addressing Model
The IPv6 addressing model defines a scope for each address. A scope is a
topological area within which the IPv6 address can be used as a unique
identifier for an interface or set of interfaces. The scopes can be:
Global
Site-local
Link-local
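Python's ipaddress module can report the scope of a given address, which illustrates these categories (the addresses below come from the documentation, deprecated site-local, and link-local ranges; note that site-local addressing has since been deprecated in the IPv6 standards):

```python
import ipaddress

addrs = {
    "global-format": ipaddress.ip_address("2001:db8::1"),  # documentation prefix
    "site-local":    ipaddress.ip_address("fec0::1"),      # deprecated fec0::/10 range
    "link-local":    ipaddress.ip_address("fe80::1"),      # valid only on the local link
}

print(addrs["link-local"].is_link_local)   # True
print(addrs["site-local"].is_site_local)   # True
print(addrs["global-format"].is_link_local)  # False
```

A link-local address is only meaningful on its own segment, while a globally scoped address, such as those Oracle allocates to an IPv6-enabled VCN, is unique across the internet.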
IPv6 Plan in Cloud
When planning IPv6 in the cloud, a key consideration is that you do not
need to plan around NAT: IPv6 addresses are abundant.
In OCI, when you create a subnet in the VCN, you can select if you want to
get or provide the resources in that subnet IPv6 capabilities.

EXAM TIP: You will always get public-facing IP addresses when you
get IPv6 on a virtual cloud network.
Having a dual-stack IPv4/IPv6 will enable you to continue offering IPv4
services even if the resource in your virtual cloud network is leveraging IPv6.
Example
An example of Oracle Cloud IPv6 support is shown in Figure 8-11. This
configuration is for east-west traffic, meaning no communication goes
through the internet. The VCN consists of an IPv4 CIDR and a /56 IPv6
prefix. IPv6 support is also enabled at the subnet level on only two of the
subnets; enabling IPv6 per subnet is optional. You then place your
resources in the subnets. In subnet C, the load balancer has an IPv6
address, but the backend virtual machines do not; the load balancer
balances traffic across both virtual machines in subnet C. Whenever you
try to reach the resource, you reach the load balancer through IPv6, while
the load balancer communicates with those backends through IPv4. The
virtual machine in subnet A communicates with the load balancer through
IPv6. With this configuration, you can also peer another virtual cloud
network using the local peering gateway and communicate with the
resources in the other virtual network through IPv4 or IPv6. In the given
example, the traffic uses IPv6 because that is the destination in the route
table.

Figure 8-11: Example

In the north-south configuration, you can enable IPv6 on your resources and
communicate to the open internet from the internet gateway to any IPv6
supporting service out there in the world outside of Oracle Cloud
Infrastructure.

Mind Map
Figure 8-12: Mind Map

Practice Questions
1. How many building blocks of a physical data center are there?
A. Three
B. Four
C. Two
D. Five
2. Which of the following is not required when using IPv6 addresses
allocated by Oracle?
A. VPN
B. FastConnect
C. NAT
D. Virtual Network
3. Which of the following is based on the core components of VMware
Cloud Foundation?
A. HCX
B. OCVS
C. SDDC
D. None of the above
4. Which of the following is a unique identifier for the IPv6 addressing
model interfaces?
A. CIDR
B. IP Address
C. NAT
D. Scope
5. Which of the following is distributed software with features enabled by
the ESXi hypervisor?
A. HCX
B. vSphere
C. vSAN
D. NSX-T
6. Which of the following is a component of NSX host API services?
A. Controller
B. Transport
C. Manager
D. None of the above
7. What is the main reason Oracle implements access to other Cloud Service
Providers?
A. Latency
B. Multi-region
C. Cost
D. Connectivity
E. All of the Above
8. How many integrated planes exist in NSX-T architecture?
A. Two
B. Three
C. Four
D. Six
9. Which of the following NSX-T integrated planes has a central and a local
part?
A. Data
B. Management
C. Control
D. None of the above
10. Which of the following is instantiated on an edge node for connection?
A. NSX-T Bridge
B. N-VDS
C. HCX
D. SDDC
11. How many components of HCX are there?
A. Four
B. Two
C. Three
D. Five
12. Which of the following is implemented by OCI in collaboration with
VMware?
A. VLANs
B. NSX-T
C. HCX
D. SDDC
13. Which of the following prerequisites are used to start SDDC
provisioning?
A. Database
B. Compartment
C. Policy
D. OKE
E. Virtual Cloud Network
14. Which of the following is an HCX enterprise feature?
A. MON
B. SDDC
C. DR
D. HA
15. Which of the following is a storage part of the solution?
A. vSphere
B. NSX
C. vSAN
D. SDDC
16. Which of the following is a dedicated host used for monitoring the
availability of an object?
A. Disk Group
B. Fault Domain
C. Storage Policies
D. Witness Nodes
17. Which of the following is a mobility platform?
A. vSphere
B. HCX
C. vSAN
D. SDDC
18. Which of the following migration types is called live migration?
A. Cold Migration
B. Development Lifetime
C. Hot Migration
D. Bulk
E. vMotion
Chapter 09: Migrate On-Premises Workloads to OCI

Introduction
This chapter focuses on the migration process of on-premises workloads to
the Oracle Cloud Infrastructure (OCI).
Moving Oracle applications such as E-Business Suite, JD Edwards,
PeopleSoft, Siebel, and Hyperion to OCI reduces TCO, enhances agility, and
boosts productivity. Oracle Cloud's migration, provisioning, and
administration tools and Cloud Lift services enable speedy deployment
while preserving important customizations and integrations. Oracle's cloud
infrastructure and databases improve the performance and security of your
applications. The best part is that you can put your money back into
innovation.

Planning Data Migration to OCI


Before moving data to OCI, you need to evaluate your system requirements
to determine the best strategy.
Applications
You need to know how the applications are deployed, along with their
licensing restrictions, hardware requirements, and dependencies.
Database
It helps to understand what the database requirements are. Do you need
to use compute and install from scratch, or are there platform options
available? You should also know about the data storage and licenses
involved.
Regulatory Compliance
This is another point of attention. HIPAA, PCI, FedRAMP, and other
regimes have different requirements. You should understand the data
migration process and ensure that data stored in OCI remains compliant.
Storage
You should plan for the type of storage you require regarding size,
performance, and durability.
Networking
You should decide whether to migrate the data over the public internet or a
private connection. Checking the security requirements of the data you
need to transfer, and how frequently it needs to be synced, can help you
make the best choice.
A gradual approach can be employed when moving your data to OCI.
After using the assessment as a guide, the next step is to create a
detailed multi-phase cloud migration plan, with each phase focusing on the
migration of specific subsets of related resources. It is also a good time to
consider upgrading resources like databases and business applications and
purchasing any add-ons required for license portability to the cloud.
Business Criticality
One strategy is to migrate non-critical data first and then move to more
important business-critical data.
Deployment Environment Type
For example, low-risk environments like development and testing typically
go first, followed by user acceptance testing and integration, and finally the
high-risk production environment.
Disaster Recovery
One low-risk migration strategy involves creating a complete disaster
recovery environment in OCI.
Organizations then switch to using the OCI deployment as production and
keep the on-premises environment for disaster recovery.
Offline and Online Migration
When the amount of data is too large or the available bandwidth is too low,
you can use the data transport service for secure offline data ingest without
the cost of upgrading network links. When your bandwidth is high enough to
support ingest over the WAN, you can use Storage Gateway for secure online
data ingest.
The data transport service and Storage Gateway can be combined: perform
the initial ingest of files offline, with continued read-write access into the
object storage bucket afterward.
Figure 9-01: Offline and Online Migration

Offline Transport – Data Transfer Service


When the data is large and the bandwidth is low, you can use offline data
migration. There are two options.
Data Transfer Appliance
Each data transfer appliance enables organizations to migrate up to 150
terabytes of data. After creating a transfer job, the appliance can be requested
via the Oracle Cloud Infrastructure console. The appliance is then configured
and connected to the on-premises network. Migration teams mount the
appliance's NFS volumes and copy the data onto it. After copying the data,
they ship the appliance back to Oracle and monitor the data transfer status.
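Since each appliance holds at most 150 TB, the number of appliances a dataset requires follows from a simple ceiling division. A minimal illustrative sketch (the capacity constant comes from the text; the function name is hypothetical):

```python
import math

APPLIANCE_CAPACITY_TB = 150  # usable capacity per data transfer appliance (from the text)

def appliances_needed(dataset_tb: float) -> int:
    """Return how many transfer appliances a dataset of `dataset_tb` TB requires."""
    return math.ceil(dataset_tb / APPLIANCE_CAPACITY_TB)

print(appliances_needed(100))  # 1 -- a single appliance covers up to 150 TB
print(appliances_needed(500))  # 4 -- 500 / 150 rounds up to 4 appliances
```

For example, a 500 TB dataset would need four appliances, shipped and ingested as separate transfer jobs.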
Data Transfer Disk
Data Transfer Disk is another offline data transfer solution. This service
allows customers to use their own SATA or USB drives and send up to 10
drives and 100 terabytes of data per transfer package to the Oracle data
transfer site. Site operators then upload the files into the organization's
designated object storage bucket. Users are free to move the uploaded data
into other Oracle Cloud Infrastructure (OCI) services as needed.
In both services, customer data is protected in transit by encryption at rest on
the device. The devices are wiped clean to NIST 800-88 standards after the
data is ingested into the customer's account.
Data Transfer Appliance Specifications
The following defines the data transfer appliance specifications.
Appliance Capacity: 150 TB
Management Interface: NFS v3.0, v4.0, v4.1
Appliance Weight: 38 pounds (17.24 kg); 64 pounds (29.03 kg) with shipping case
Form Factor: 2U device; can stand alone on a desk or on a standard rack shelf
Data Security: AES-256 encryption
Appliance Security: Tamper-resistant and tamper-evident enclosure with physical and digital controls; only network, power, and serial ports are exposed; the appliance is securely wiped after each use
Network Connectivity: 10 GbE RJ45; 10 GbE SFP+
Power: 554 W
Shipping Case Dimensions: 11 x 25 x 28 inches (27.94 x 63.5 x 71.12 cm)
Table 9-01: Data Transfer Appliance Specifications

How is Data Secure in Transit?


Whether using the appliance or the disk service, customer data is encrypted at
rest with AES-256 encryption. The encryption password is stored securely in
OCI, separate from the transfer appliance or disk, and the OCI bucket is itself
encrypted at rest. With the appliance, the enclosure exposes only power,
networking, and serial ports. No other ports are exposed or accessible to
customers, and there are no external-facing screws or access points; the seam
around the enclosure is welded shut and sealed. The transit case shipped to
the customer carries a tamper-evident security tie, with another for the return
shipment.
CLI for Appliance Transfer
Since 2019, appliance-based data transfer has used the OCI CLI for all
command-line-based tasks. You use the OCI CLI's DTS functionality to
prepare the appliance for your data and for shipment to Oracle; disks,
however, are prepared with the Data Transfer Utility. To install and use the
CLI, you must have the following:
OCI users and groups with the required IAM policies
A key pair used for signing API requests, with the public key uploaded
to Oracle; to use the CLI without a key pair, you can use token-based
authentication
A host Linux machine running Python version 2.7.5 or 3.5 or later; if
you use the CLI installer and do not have Python on your machine, the
installer installs Python automatically
The OCI CLI also runs on Windows, but the DTS functionality performs
some operations that are only possible on Linux.
How Data Transfer Works
The following steps are used to understand the working of the data transfer
process.
1. Using the console, create a transfer job.
2. Request an appliance or supply your own drives, depending on the amount
of data you need to transfer.
3. Connect the appliance and set up its IP; if you are using disks, connect
them accordingly.
4. Prepare the appliance or drives with encryption and copy your data.
5. Lock the device and generate the manifest of files on the file system.
6. Ship the appliance or the drives as a transfer package to Oracle for ingest
into your designated object storage bucket.
After that, you can monitor the progress from the console or the CLI. On
completion, the appliance or disks are erased, and disks are returned to the
customer.
Transporting VMs, Data, and Files to Oracle Cloud
The entire dataset transported to OCI may include data housed in VMs,
storage, backups, and databases. The transfer may be carried out online or
offline depending on the amount of data moved, available bandwidth,
downtime tolerance, and cost.
The following table shows approximately how long it takes to migrate
datasets online and offline, accounting for dataset size and available
bandwidth.
Approximate Data Upload Time
Dataset Size | 10 Mbps | 100 Mbps | 1 Gbps (FastConnect) | 10 Gbps (FastConnect) | Data Transfer Service
10 TB | 92 days | 9 days | 22 hrs | 2 hrs | 1 week
100 TB | 1018 days | 101 days | 10 days | 24 hrs | 1 week
500 TB | 5092 days | 509 days | 50 days | 5 days | 1 week
1 PB | 10185 days | 1018 days | 101 days | 10 days | 2 weeks
Table 9-02: Migrate Datasets Online and Offline
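The online figures in the table follow from simple bandwidth arithmetic: dataset size in bits divided by the line rate. A minimal sketch reproducing the first row, assuming decimal units and a fully utilized link (real transfers are slower due to protocol overhead, which is why published tables may differ slightly):

```python
def upload_days(dataset_tb: float, bandwidth_mbps: float) -> float:
    """Approximate days to upload `dataset_tb` terabytes over a `bandwidth_mbps` link."""
    bits = dataset_tb * 1e12 * 8              # dataset size in bits (decimal TB)
    seconds = bits / (bandwidth_mbps * 1e6)   # line rate in bits per second
    return seconds / 86400                    # seconds per day

print(round(upload_days(10, 10), 1))     # 10 TB at 10 Mbps -> 92.6 (about 92 days)
print(round(upload_days(10, 10000), 2))  # 10 TB at 10 Gbps -> 0.09 (about 2 hours)
```

When the computed time runs into weeks or months, the offline Data Transfer Service column (about one week regardless of bandwidth) becomes the better option.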

Online Transport – Storage Gateway


Overview
Organizations can migrate datasets over the public internet or set up private
connectivity between on-premises data centers and Oracle Cloud
Infrastructure. If you choose online migration, it is essential to consider
bandwidth and security when transporting data, VMs, and files over the wire.
Data should always be encrypted at rest and in transit.
VPN over Internet
Use a virtual private network between the source environment and OCI to
ensure security; an IPSec VPN is the best option in this case.
OCI FastConnect provides an easy way to create a dedicated private
connection between your data center and Oracle Cloud Infrastructure.
FastConnect
FastConnect provides higher bandwidth options and a more reliable and
consistent networking experience than internet-based connections.
FastConnect port speeds are available in 1, 10, and 100 Gbps. The
FastConnect ports are charged per hour, and there is no extra charge for the
traffic.
Storage Gateway
Once a secure connection has been established, organizations can use OCI
Storage Gateway to securely create copies of on-premises files and place
them into Oracle Cloud Object Storage without modifying their applications.
Storage Gateway Service
Storage Gateway is a virtual appliance installed as a Linux Docker instance
on one or more hosts in your on-premises datacenter. Storage Gateway
exposes an NFS mount point that can be mounted by any host with an NFS
version 4 client. The Storage Gateway mount point maps to an object storage
bucket. Any file written to a Storage Gateway file system is written as an
object with the same name in the associated object storage bucket, and the
system stores the associated file attributes as object metadata. You can access
the objects using native APIs, SDKs, third-party tools, the HDFS connector,
and the OCI CLI and console.
Enterprise applications typically work with files in nested directories.
Storage Gateway flattens the directory hierarchy into object name prefixes,
because object storage buckets and the objects within them exist in a flat
namespace. You can use the refresh operation in Storage Gateway to pick up
any data added or modified directly in object storage.
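The flattening described above amounts to turning a nested file path into a single object name whose "/"-separated prefixes stand in for directories. A minimal illustrative sketch of that mapping (the function is hypothetical, not Storage Gateway's actual code):

```python
from pathlib import PurePosixPath

def object_name(mount_root: str, file_path: str) -> str:
    """Map a file under the Storage Gateway mount point to a flat object name."""
    rel = PurePosixPath(file_path).relative_to(mount_root)
    # The result is one flat name; "reports/2022/" is a prefix, not a real directory.
    return str(rel)

print(object_name("/mnt/sgw", "/mnt/sgw/reports/2022/q1.csv"))
# -> reports/2022/q1.csv
```

Console tools can then group objects by prefix to present a directory-like view, even though the bucket itself has no hierarchy.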

Figure 9-02: Storage Gateway Service

Use Cases
The Storage Gateway has two main use cases:
Hybrid cloud: on-premises applications actively use cloud storage
content
In a hybrid scenario, on-premises files are in constant sync with the cloud:
Use cloud storage and archive as a low-cost, high-durability data tier
Create a permanent data archive in the cloud
Extend the on-premises datacenter to the cloud with limitless backend
storage
Enhance disaster recovery and business continuity using remote cloud
resources
One-time data migration or periodic transfer
The Storage Gateway is a low-cost option to keep data synchronized. It can
supplement solutions that cannot write directly to OCI, and it can also be part
of your disaster recovery strategy. In the migration scenario, it can:
Move data to the cloud for app migration or adjacent analysis
Copy data to the cloud as it is written
Move existing bulk data in one pass
Exam tip: In the migration scenario, Storage Gateway acts as a tool that
makes it easier to transfer data dynamically and continuously.

Database Migration – Methods and Best Practices


Database Migrations
Oracle's migration methods aim to provide optimum performance at the least
cost. The core use case is to get various on-premises and third-party cloud
databases into the cloud. Oracle supports both offline and online migrations.
The OCI Database Migration service is based on Zero Downtime Migration
and uses GoldenGate and Data Pump as the core technologies underneath.
When selecting a migration method for moving your database to the cloud,
take into consideration:
Database Version
Database Size
High Availability (HA)
Note: The Database Migration service is free for six months per
migration.
Core Use Cases
Core use cases include machine-assisted migrations of Oracle Databases
such as data marts and data warehouses.
Differentiated Use Cases
The Database Migration service provides the following differentiated
benefits:
Simplifies the underlying technologies and resources
Logical offline and online migrations
Allows schema/metadata migrations
Zero Downtime Migration, GoldenGate, and Data Pump capabilities
based on enterprise-strength Oracle tools
Migration Types
There are different migration methods available.
Offline Migration
Offline migration moves a one-time snapshot of your source database into
the target database. The main problem with this type of migration is that it is
only a one-time snapshot: anything that changes after the snapshot will not
be moved. Therefore, you stop using the database and take it offline as you
start your migration. This causes downtime, which is not acceptable in many
cases.
Online Migration
In online migration, you take a one-time snapshot and, at the same time, start
replication. Anything that changes in your source database is continuously
replicated to the target. Your application can therefore stay online, and you
only incur downtime at the moment of the cutover.
Physical Migration
Physical migration is a block-wise copy of your source database that
preserves its physical structure. The typical tools for this are RMAN and
Data Guard. It is very efficient; on the downside, however, it is not flexible.
You cannot filter or transform the content of your database, and you cannot
change the database version.
Logical Migration
In a logical migration, you interpret the contents of the database and copy
it logically, which gives you more flexibility. You can even change the
database version, and you can transform and filter the content as well. The
tools used for logical migration include Data Pump and GoldenGate.
Direct Connection
With a direct connection, the service can access the source database directly.
You have to establish networking to the source database through VPN or
FastConnect, but the service can then reach the database ports directly; no
firewall hides the database.
Indirect Connection
With an indirect connection, the database is not accessible by the service,
either because you have not set up networking through FastConnect or VPN,
or because the database is behind a firewall where the database ports are not
directly addressable. In that case, you install an agent in the source
environment that has direct access to the database ports, and the service
always communicates with that agent.
OCI Database Migration – Use Cases
In the first release, the service supports Oracle Database sources from
version 11 to 19. The source database must run on Linux. This requirement
exists because the service uses SSH to execute commands directly on the
source database host.

Figure 9-03: OCI Database Migration - Use Cases

Targets in the first release are the Autonomous Database (ADB) and
Autonomous Data Warehouse (ADW), both shared and dedicated.
The agent also requires a Linux operating system. In general, there are
several use cases: migrating from on-premises, a third-party cloud, or Oracle
legacy clouds such as Oracle Cloud Classic, or migrating within OCI, with or
without a direct connection.
With indirect connections behind a firewall, mostly offline migration is
supported; with a direct connection, both online and offline migration can be
used.
Oracle Solutions to Migrate Database to Oracle Cloud
For OCI data migration, DMS focuses on ease of use. Because DMS is based
on Zero Downtime Migration (ZDM), you can choose to use either one.
With ZDM, you get more fine-grained control and more options to choose
from. Right now, ZDM also supports non-autonomous targets and migration
to Exadata Cloud@Customer (ExaCC): you can install ZDM on-premises
and perform an on-premises-to-ExaCC migration, which is not possible with
OCI Database Migration.
The other tools have their own capabilities as well. There are three levels:
DMS is the most abstracted, built on ZDM, and ZDM itself is based on the
database tools.
Tools for All Steps of the Migration Process
You use the Migration Advisor to make a decision and the Cloud
Premigration Advisor Tool (CPAT) for planning, for example, to check
whether your data is supported for migration.
To move databases, you use the OCI Database Migration service. To move
your applications and VMs to the cloud, you use OCI Application Migration.
The GoldenGate Veridata tool validates data by comparing source and target
databases.
Figure 9-04: Tools for Migrations Process

Migration Steps – Direct Online Migration


The architectural view is shown in Figure 9-05. DMS runs in its own service
tenancy, which contains the DMS control plane and a data plane; within the
data plane, ZDM operates as the tooling. Your customer tenancy holds your
source database, which might be on-premises or in a third-party cloud (if you
are using a shared ADB, the target runs in its own tenancy). For the initial
load from the source database, you have two choices: load through SQL*Net
and database links, or copy the data into the object store and then from the
object store into the target database.
Note: The object store route is recommended because it is much faster and
enables parallel export from the database. The SQL*Net route does not
require SSH into the source database but is much slower. The user has a
choice between the two.
For ongoing replication, GoldenGate is used. You install a marketplace
image for GoldenGate, and the service manages a Private Endpoint so that
the service tenancy can communicate with the private IPs in the customer
tenancy.
Figure 9-05: Direct Online Migration

Migration steps:
1. Configure all prerequisites:
Set up VPN or FastConnect for access to the source DB
Provision the target DB
Provision the OGG VM
Configure the source and target DBs for replication
2. Create the Migration in DMS.
3. Evaluate the Migration.
4. Start the Migration.
Export the source DB to the target DB using Data Pump over a DB link
Create and start OGG replication from the source DB to the target DB,
starting with all changes after the initial load
5. Complete the Migration.

Migration Steps – Indirect Offline Migration


Compared to direct online migration, indirect offline migration is a little
different. A DMS agent is co-located with the source database, and the only
migration mechanism supported is Data Pump through the object store. All
communication between the agent and the service happens either by having
the agent call the public APIs of the control plane or by sending messages
back and forth through an OSS stream. The user sets up an OSS stream, and
the agent and the control plane then use that stream for all of the agent's
messaging.
By doing this, you do not have to expose the database's address space and
port.

Figure 9-06: Indirect Offline Migration

Migration steps:
1. Configure all prerequisites:
Provision the target DB
Create the OSS stream
Create the object store bucket
2. Download and install the DMS Agent on site.
3. Configure connectivity from the Agent to the DMS service and the OSS
stream.
4. Create the Migration in DMS.
5. Evaluate the Migration.
6. Start the Migration.
Export the source DB using Data Pump
Import into the target DB using Data Pump
Step 1: Open Database Migration in the OCI console.
In the OCI console, the Migration service appears in the main console menu
under the Database Migration section. There are two main objects:
Migrations and Registered Databases.
A Registered Database contains the connection information for a source or
target database. You need to create a registered database for both the source
and the target. If your source database uses the CDB architecture, you must
register both the CDB and the PDB.
A Migration is the definition of how to migrate one database to another; it
refers to your registered databases for the source and target information.
Figure 9-07: Data Migration in OCI Console

Step 2: Register Source and Target Databases


The first step is to register your source and target databases by creating
registered database objects for them.
Figure 9-08(a): Register Source and Target Databases
Figure 9-08(b): Register Source and Target Databases

Step 3: Create Migration


After that, you will create a migration object; a multi-step wizard lets you set
up all the migration details, such as picking the registered databases. Once
you have created your migration object, you can validate it.
Figure 9-09: Create Migration

Step 4: Validate Migration


There are quite a few requirements for settings in the database. During
validation, the service checks connectivity and the validity of all your
entered data. Creating a registered database does not connect to the database,
so any typos are not caught at that point; validation will catch them.
Validation also checks that all the necessary permissions and users have been
created in the database. If validation or migration fails, the logs will help you
determine what went wrong.
If you are working with ZDM, the log is identical to the tool output; at that
point, you are diving down into the ZDM layer. Once validation has passed,
you are allowed to start the migration.
Note: In real-life scenarios, you will probably create the migration, validate
it, and then wait for the right time to run it.
As with validation, there is an object called a job where you can see the
progress of your migration through its different phases.
Open the OCI Console and log in. Navigate to Database Migration, and then
validate the migration under Migration.

Figure 9-10: Validate Migration

Start Migration – Export Initial Load


As you start the migration, the process begins by re-running the validation. It
then goes through the preparation step. If you chose the object store route, it
starts by exporting into a local directory in the source environment.
Figure 9-11: Export Initial Load

The next step uploads the database dump into the object store; the dump files
are then imported into the target database. Once that is done, replication
continues in the "Monitor Replication Lag" phase. This phase watches for
the lag to fall under a certain threshold, which can be defined as part of the
migration; passing the threshold means the backlog of transactions has been
worked down to the point where you are ready to do the cutover. If you are
migrating a large database with a significant transaction volume, the export
and import steps take time, and during that time a backlog of transactions
builds up that itself takes time to work down. The Monitor Replication Lag
phase keeps running until the threshold is met, after which replication
continues until you resume. When you are ready to finish the replication and
perform the cutover, you continue from there.
Figure 9-12: Replicate Transactions until the user resumes

Once you continue from there, you move to another phase called switchover.
At that point, the user deactivates the source applications and waits for the
switchover to complete; the switchover phase applies any leftover
transactions. After it completes, the user can activate the target application.
Figure 9-13: Switchover

After going through all the steps, you will see the migration marked as
succeeded.
Pricing
Freely available
DMS is free for all common use cases. The pricing includes the service
itself: the service's environment and the infrastructure the service runs on.
There is a GoldenGate marketplace license specific to migration, and it is
free for the first six months.
Cost required
During the free six-month trial, you are not charged the licensing fee for
GoldenGate. However, you are charged for all resources that run in your
tenancy and their dependencies: the compute that the GoldenGate
marketplace image runs on, the object store bucket, the Streaming service if
you are using an agent, and all the networking. Setting up FastConnect or
VPN is not free. You have to pay for all those resources separately,
including the source and target databases.
Exceptions
The pricing is free; however, to prevent abuse, limits are set, and once a limit
is passed, billing kicks in. The limits are designed so that all common use
cases can be handled smoothly without billing.
For example, there is a 183-day (six-month) limit on using a migration. If
you create a migration, you can run it for up to six months after the creation
date; if you run it after six months, you will be billed, and this applies only to
running migrations. The migration object itself is not billed, so a migration
object that sits in your tenancy and is not running creates no cost.
If you need to go beyond six months, you can clone the migration object into
a new migration object and run the new one; once you clone it, the clock
starts from zero again. There is one other exception: if a migration runs with
no data transfer for 60 days, billing also starts.
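The two billing triggers just described (running past 183 days after creation, or running 60 days with no data transfer) can be expressed as a simple predicate. A minimal illustrative sketch (the function and constants mirror the text; this is not part of the DMS API):

```python
from datetime import date, timedelta

FREE_WINDOW_DAYS = 183  # free for six months after the migration object is created
IDLE_LIMIT_DAYS = 60    # running with no data transfer this long also triggers billing

def is_billed(created: date, today: date, running: bool, days_without_transfer: int) -> bool:
    """Return True if a DMS migration would incur charges under the stated limits."""
    if not running:
        return False  # a migration object that is not running creates no cost
    past_free_window = today > created + timedelta(days=FREE_WINDOW_DAYS)
    idle_too_long = days_without_transfer >= IDLE_LIMIT_DAYS
    return past_free_window or idle_too_long

print(is_billed(date(2022, 1, 1), date(2022, 3, 1), True, 0))   # False: inside free window
print(is_billed(date(2022, 1, 1), date(2022, 9, 1), True, 0))   # True: past 183 days
print(is_billed(date(2022, 1, 1), date(2022, 3, 1), True, 60))  # True: 60 idle days
```

Cloning the migration object corresponds to resetting `created` to today, which restarts the free window.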

Migrating to Autonomous Database


Migration Options and Considerations
Oracle Autonomous Database – Loading Architecture
As shown in Figure 9-14, the architectural diagram provides a high-level
view of the different ways to load and maintain data in the Autonomous
Database. Data can be loaded directly through a client connected to the
Autonomous Database using tools such as SQL Developer. Alternatively,
applications can connect to a middle tier running in the OCI layer, which in
turn connects to the backend Autonomous Database.
Another approach is using client tools such as SQL*Loader to load text files,
CSV files, JSON files, and so on directly into the Autonomous Database.
You can also load these different file types into Oracle's object storage and
then load the data from there into the Autonomous Database. Data can
likewise be staged in other object storage facilities such as AWS S3 and
Azure Blob Storage; as shown in the figure, these can feed the Autonomous
Database directly or through object storage.

Figure 9-14: Loading Architecture

Migration to Autonomous Database


When migrating to the Autonomous Database, it is vital to understand that a
physical database cannot be migrated to an autonomous database as-is
because:
The database has to be converted to a pluggable database to begin with,
upgraded if necessary, and encrypted
Any changes made to the Oracle system catalog or to Oracle-shipped
stored procedures or views must be found and reverted before
migrating to the Autonomous Database
All uses of Container Database (CDB) admin privileges have to be
removed
All legacy features that are not supported must be removed; for
example, legacy LOBs are not compatible with the Autonomous
Database
Oracle Data Pump can be used to move data into a new Autonomous
Database that has already been provisioned.
You can use Data Pump (expdp/impdp) to export Oracle Database
version 10.1 and higher
It eliminates legacy formats, upgrades the version, and assists with
encrypting the data, removing admin privileges, and so on
GoldenGate replication can be used to keep the database online during
the process
Migration Methods
When considering the extended set of methods used for migration and
loading, it is crucial to segregate the methods by functionality and limitations
of use against the autonomous database. You can consider how large the data
set is to be imported, the input file format, third-party support, and object
storage support.
Data Loading Tool | Recommended for Large Data Sets | Input Formats Supported | Third-Party DB Support | Object Store Support
SQL Developer Data Import Wizard | Yes | Flat files | - | Yes
SQL Developer Migration Workbench | Yes | Third-party DBs | Yes | -
Oracle Data Pump | Yes | Oracle DBs | - | Yes
SQL*Loader | - | Flat files | - | -
Database Link | - | Oracle DBs | Yes | -
DBMS_CLOUD Package | Yes | Flat files | - | Yes
External Tables | Yes | Flat files | - | Yes
Data Sync | Yes | DBs, flat files | Yes | Yes
Other ETL Tools/Scripts | Yes | DBs, flat files | Yes | Some
Table 9-03: Migration Methods

Loading and Import Options to ADB


You can choose among different ways to load data into your Autonomous
Database based on the volume of your data, the time you have available to
load it, and whether changes continue to occur on the source system. Using
Oracle database tools and third-party integration tools, you can load data:
From your local environment (such as a local client computer)
From files stored in a cloud-based object store
Option | Loading & Import
Loading data (or accessing it directly) in ADB from a cloud-based object store (DBMS_CLOUD) - PREFERRED METHOD | From Oracle Object Store: data from any application or data source exported to text (.csv or .json), or output from third-party data integration tools; flexibility to access data stored on object storage from other clouds (AWS S3 and Azure Blob Storage)
Loading using SQL*Loader | Loading data files (Oracle data) from a local file on a local client machine
Importing using Oracle Data Pump | Using Data Pump export from an existing Oracle Database (version 12.2.0.1 and earlier)
Importing using the Import Wizard from the desktop tool SQL Developer | Using SQL Developer Import Wizard from a local desktop (CSV, Excel, TXT)
Table 9-04: Loading and Import Options

ADB APIs for Object Store Access


The PL/SQL package DBMS_CLOUD is a wrapper over Oracle's external
table functionality. It simplifies creating external tables on an object store,
and it can authenticate to the cloud and perform queries against external
tables or other files residing in object storage.
Autonomous Database Packages
The ADB includes two different packages to help load and move data into an
autonomous database.
DBMS_CLOUD
DBMS_CLOUD is the main package and includes the database routines for
working with cloud resources:
It includes support for object storage, file access (moving log files,
output files, error logs), and access management (setting up credentials
to authenticate to a cloud tenancy)
It can be used to send REST API requests
It supports access to third-party clouds via the Swift protocol
It supports AVRO, ORC, Parquet, JSON, and other data formats
DBMS_CLOUD_ADMIN
On the Autonomous Database shared cloud, there is a package called
DBMS_CLOUD_ADMIN.
It is not available in the Autonomous dedicated environment
This package provides administrative routines for configuring a
database
Specifically, it can be used to configure:
Database links
Application continuity
Tablespace quotas
Note: The DBMS_CLOUD_ADMIN package is not present in the
Autonomous Database dedicated environment; there, you configure
database links and tablespace quotas directly, and application continuity is
configured by default.
The Oracle object store is directly integrated with the Autonomous Database
and is the best option for staging data that the Autonomous Database will
consume.
Any file type can be stored in the object store, including SQL*Loader files,
Excel, JSON, Parquet, and Data Pump dump files. Flat files stored in the
object store can also be used as Oracle external tables, so they can be queried
directly from the database as part of a normal DML operation without even
being loaded into the database.
Using Oracle Object Store Staging
The object store is separate from the storage allocated to the Autonomous
Database for regular data, such as tables and indexes. That storage is part of
the Exadata system the Autonomous Database runs on and is automatically
allocated and managed for you; users do not have direct access to it.
Benefits
There are many benefits to using object storage. For example, it is possible
to query data in place in object storage, or to push data from an on-premises
database into object storage. Most importantly, object storage is less
expensive than regular storage and is fully encrypted, secure, and resilient.

Figure 9-15: Oracle Object Store Staging

ADB Statistics and Hints for Data Being Loaded


Statistics are gathered automatically during direct-path load operations for
both the Autonomous Data Warehouse and the Autonomous Transaction
Processing database. Manual statistics can be gathered if needed; in the case
of ATP, statistics are also gathered automatically as the data changes.
Optimizer and parallel hints are ignored by default but can be enabled
explicitly.
Note: Some database options and features are locked.
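For example, hint processing can be re-enabled at the session level. The two parameters below are the Autonomous Database switches that control this behavior; this is a sketch to be run in a session connected to the ADB.

```sql
-- Optimizer hints and PARALLEL hints are ignored by default on ADB.
-- Re-enable them for the current session:
ALTER SESSION SET OPTIMIZER_IGNORE_HINTS = FALSE;
ALTER SESSION SET OPTIMIZER_IGNORE_PARALLEL_HINTS = FALSE;
```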

ADW: Managing DML Performance and Compression


By default, the ADW uses Hybrid Columnar Compression for all of its tables.
Oracle recommends using bulk operations like direct-path loads and
CREATE TABLE AS SELECT statements to benefit from this compression.
However, compression can be avoided with the NOCOMPRESS keyword of
CREATE TABLE.
The compression level can be changed at any time with the ALTER TABLE
MOVE statement.
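A brief sketch of both options, using a hypothetical SALES_STAGING table:

```sql
-- Create a table that opts out of the default Hybrid Columnar Compression:
CREATE TABLE sales_staging (
  sale_id NUMBER,
  amount  NUMBER
) NOCOMPRESS;

-- Later, change the compression level of the existing table in place:
ALTER TABLE sales_staging MOVE COMPRESS FOR QUERY HIGH;
```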
Database Migration Service
Oracle Cloud Infrastructure (OCI) Database Migration is a fully managed
solution that allows you to migrate databases to OCI in a high-performing,
self-service manner.
Database Migration is a managed cloud service that runs independently of
your tenancy and resources. The service communicates with your resources
using Private Endpoints (PEs) and runs as a multitenant service under a
Database Migration service tenancy. Database Migration is in charge of
managing PEs.
The following features are included in Database Migration:
Data migration from on-premises and cloud Oracle databases into Oracle
Cloud Infrastructure manually installed (co-managed) databases,
Autonomous Data Warehouse, or Autonomous Transaction Processing
services
Simple offline migration or logical migration at the enterprise level
with little downtime
Powered by the Zero Downtime Migration engine and based on the
industry-leading Oracle GoldenGate replication
Supports Oracle Database 11g Release 2 (11.2.0.4) and later database
releases. Compliant with Oracle Maximum Availability Architecture
(MAA)
The transition from initial load to streaming replication is seamless
Change data is captured from the source database and replicated in the
target database
Database migrations can be performed and managed at a fleet level
using the Job subsystem
You can halt and continue your migration process if necessary, which
is useful if you need to adhere to a maintenance window, for example
Rather than waiting for a running migration job to finish, you can
terminate it immediately
Database Migration Terminology
Working with Database Migration necessitates knowledge of the following
topics.
Migration
A single migration operation is represented by this class, which provides the
migration's specifications. The specifications include the source and target
databases, whether or not to copy bulk data and/or record ongoing changes.
Migration Job
A migration job represents a currently running or completed migration. A
migration job is a snapshot of the migration with real-time data. This data is
used to audit logs and look into failures. When you start a migration, a
migration task is generated implicitly.
Validation Job
Validation Job verifies that the prerequisites and connection for the source
and target databases, Oracle GoldenGate instances, and Oracle Data Pump
are correct. When you assess the migration, a validation job is produced.
Registered Database
To include all schemas inside a database that must be migrated, a data asset
can have one or more connections. In APIs, the registered database is
sometimes referred to as a Connection. The database metadata and
connection details are stored in this object, representing a database instance.
Agent
The agent contains the information needed to link Oracle Cloud Infrastructure
to a source database that is not directly accessible on OCI, such as a database
in a different region or tenancy, an on-premises database, or a cloud database
that was manually deployed.
Schema
Database objects such as tables, views, stored procedures, and so on are
organized using database organizational concepts.
Migrating Using Data Pump
Data Pump Import to ADB
There are many good reasons to use Oracle’s Data Pump for your database
migration.
Oracle’s Data Pump utility is platform-independent.
It offers fast bulk data and metadata movement between the Oracle and
autonomous databases.
It allows you to import data from Data Pump files that could be residing on
the OCI object storage, AWS S3, and Azure Blob Storage in addition to your
local storage system.
You can save your data to your OCI object store and use Data Pump to load
the data directly into ADB for the fastest way to load data into the ADB.
On Oracle Autonomous Dedicated, you can input data pump files from OCI
object storage.
Data Pump moves data in and out of the database using four mechanisms,
listed here in order of decreasing speed: data file copying, direct path,
external tables, and network link import.
The fastest approach is therefore to copy the data files, load them to your
object storage, and have Data Pump import them into your autonomous
database.
It is also possible to bypass the object storage entirely by using a
direct-path, external table, or network link import.
Export an Existing Oracle Database to Import into ADB
To migrate an existing Oracle database, export it with Data Pump and then
import it into the autonomous database. Oracle recommends using schema
mode when migrating to the autonomous database.
Schema mode is also useful when you separate schemas and map them to
pluggable databases or individual shared autonomous databases. It is also
recommended that you set the PARALLEL parameter to at least the number of
CPUs in your autonomous database, so that parallelism can be used for
faster migration.
The following are the recommended set of parameters to provide fast and
easier migration.
expdp sh@orcl \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
  data_options=group_partition_table_data \
  parallel=16 \
  schemas=sh \
  dumpfile=export%u.dmp
Import Data Using Oracle Data Pump
Oracle recommends using the latest Oracle Data Pump version when
importing data from Data Pump files, as the most recent version will always
contain enhancements and fixes for a better experience. You first need to
store your Cloud Object Storage credentials using the
DBMS_CLOUD.CREATE_CREDENTIAL procedure, as shown in the following
example.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adwc_user@oracle.com',
    password => 'auth-token-string');
END;
/
Run Data Pump Import with the dumpfile parameter set to the list of file
URLs on your Cloud Object Storage and the credential parameter set to the
name of the credential you created in your autonomous database.
impdp admin@ADW_high \
  directory=data_pump_dir \
  credential=def_cred_name \
  dumpfile=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/adwc/adwc_user/export%u.dmp \
  parallel=16 \
  transform=segment_attributes:n \
  transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link
For the best import performance, use the HIGH database service for your
import connection and set the PARALLEL parameter to the number of
OCPUs in your autonomous database, as shown in the example.
If you are using Data Pump version 12.2 or older, the credential parameter
is not supported. With an older version of Data Pump Import, you instead
need to define a default credential property for the Autonomous Database
and use the default_credential keyword in the dumpfile parameter.
If you run multiple Data Pump imports, you can set the same credential as
the default credential for the ADB. For example:
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';
Access Log Files for Data Pump Import
The log files for Data Pump Import operations are stored in the directory
DATA_PUMP_DIR, the directory you specify with the Data Pump directory
parameter.
To access the log file, you need to move the log file to your Cloud Object
Storage using the procedure called DBMS_CLOUD.PUT_OBJECT. The
following example moves the file import.log to your Cloud Object Storage:
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/adwc_user/import.log',
    directory_name => 'DATA_PUMP_DIR',
    file_name => 'import.log');
END;
/

Zero Downtime Migration


Introduction
Depending on the migration scenario, Zero Downtime Migration (ZDM) is
Oracle's top option for a streamlined and automated migration experience,
with zero to negligible downtime for the production system. ZDM enables
you to migrate your Oracle Databases directly and easily to and from any
Oracle-owned infrastructure, such as Exadata Database Machine On-
Premises, Exadata Cloud at Customer (ExaC@C), and Oracle Cloud
Infrastructure. Oracle ZDM supports a wide range of Oracle Database
versions and, as the name implies, assures that the migration has little to no
impact on the production database.
ZDM adheres to Oracle's Maximum Availability Architecture (MAA)
principles and incorporates tools like GoldenGate and Data Guard to ensure
High Availability and an online migration strategy, together with
technologies such as Recovery Manager, Data Pump, and Database Links.
Architecture
The Oracle Maximum Availability Architecture (MAA) recommends Oracle
Zero Downtime Migration (ZDM) to transfer Oracle Databases to the Oracle
Cloud. The fundamental nature of ZDM makes migration as simple as
possible while ensuring the least amount of impact on production workloads.
On-premises, Oracle on AWS RDS, Oracle Public Cloud Gen 1, or Oracle
Cloud Infrastructure can all be used to move the source database. Database
Cloud Service on Oracle Cloud Infrastructure (OCI) Virtual Machine,
Exadata Cloud Service, Exadata Cloud at Customer, or Autonomous
Database can all be used to deploy the target database.
Oracle ZDM supports physical and logical migration workflows. ZDM
automates the whole migration process, significantly minimizing the risk of
human error. ZDM uses Oracle Database-integrated high availability (HA)
technologies, including Oracle Data Guard and GoldenGate and all MAA
best practices, to ensure that production environments are never unavailable
for a long time.
Database Support and Supported Configuration
Oracle ZDM is compatible with Oracle Database 11.2.0.4, 12.1.0.2, 12.2.0.1,
18c, 19c, and 21c. The source and target databases must be at the same
database release for ZDM's physical migration method to succeed. The
Logical Migration workflow now allows cross-version migration, allowing
for an in-flight update when migrating to the Oracle Cloud, starting with
ZDM 21.1.
Oracle ZDM is compatible with Oracle Databases running on Linux, AIX,
and Solaris. Databases that can be used as sources include:
Single-instance databases
Single-node RAC databases or multi-node RAC databases
Oracle ZDM supports Oracle Enterprise and Standard Edition as a source
database. Using the logical offline methodology, ZDM can only migrate
databases hosted on AIX or Solaris operating systems.
Oracle ZDM allows you to use a non-CDB or a CDB with one or more
Pluggable Databases (PDBs) as the source database. Oracle ZDM 21.1 adds
the ability to convert non-CDB databases to Pluggable Databases on the fly,
allowing for total conversion and more flexibility in the migration workflow.
Benefits
Following are some benefits of Zero Downtime Migration (ZDM):
Simple and Efficient
The Oracle ZDM automated methodology makes moving your Oracle
Database to the Oracle Cloud a breeze. Oracle ZDM ensures an error-free and
effective migration to Oracle Cloud or Oracle Engineered systems on-
premises in general by reducing the need for manual settings and operations.
Highly Available
Oracle ZDM complies with the Oracle Maximum Availability Architecture. Its
tight integration with Oracle Database technologies such as Oracle Data
Guard and Oracle GoldenGate guarantees that your migration is completed
with zero downtime and no impact on production.
Flexible
Depending on your requirements and business demands, you can migrate
your Oracle Database to the Oracle Cloud and the Exadata Database Machine
on-premises directly from various source databases.
Validation
Oracle ZDM does thorough checks before and after migration, enabling you
to pause and resume your migration processes if necessary. It also provides
an evaluation mode to catch any problems before they happen.
Cost-Efficient
The Oracle ZDM engine is provided at no additional cost.
ZDM - Enhancement
The improvements in the latest version of Oracle ZDM include:
New Source DB
New Source Platforms
Physical Direct Data Transfer
Existing RMAN Backup Reuse
Migration Enhancements
Migration Advisor Enhancement

Figure 9-16: ZDM - Enhancements

Migration from AWS RDS to Oracle ADB


Oracle ZDM 21.2 introduces Oracle Autonomous Database migration from
an Oracle Source Database on AWS RDS. Customers can use ZDM's logical
offline migration methodology to take advantage of this new feature.
Customers can use either Amazon's Simple Storage Service (S3 Buckets) or a
direct connection via Database Links (DBLINK) as a backup location instead
of the standard Object Storage.
Migration from Solaris & AIX based Source Database
Source databases based on Solaris and AIX now have cross-platform
migration support. Customers can use this capability to migrate to Oracle
Autonomous Database and co-managed cloud Oracle Database targets while
using the logical offline methodology.
Direct Data Transfer Support for Physical Migration
Customers can now use ZDM's direct data transfer for physical migration to
migrate their On-Premises Databases to the cloud. Users can now avoid the
intermediate store for backups on the selected backup site, making migrations
more efficient. Active Database Duplication for 11.2 Oracle Databases and
Restore from Service for 12+ Oracle Databases are the two Direct Data
Transfer techniques offered by ZDM.
Existing RMAN Backup usage as Migration Source
Creating a level 0 and level 1 backup of the source database usually is part of
ZDM's physical migration operation. Starting with ZDM 21.2, you can use
incremental level=0 to use an existing level 0 backup instead of having ZDM
produce a full L0 backup. This feature is compatible with all backup devices.
ZDM - Methodologies
Physical Online Migration
The migration process has many stages. The working process of ZDM
Physical Online Migration is summarized in the following steps.
1. Download the ZDM binaries in a separate server on Oracle Linux 7.
You must install and configure the binaries and prefill the template file.
2. After that, provide all the relevant information to ZDM before starting
the migration.
3. This information includes source database information, target database
information, backup location, connectivity, etc.
4. Once the prerequisites have been fulfilled, the migration process can
start by issuing the Migrate Database command.
5. ZDM will connect to the primary database on-premises and to the target
database cloud service in the Oracle Cloud.
6. ZDM will perform all the relevant checks on both source and target.
7. After that, ZDM will start the database migration process.
8. ZDM will connect the source database to a chosen backup location.
Suppose a customer migrates to Oracle Database Cloud Service Virtual
Machines, Bare Metal, or Exadata Cloud Service. In that case, the
backup location will be the OCI Object Storage in the customer’s
tenancy. If a customer is migrating to Exadata Cloud@Customer or
Exadata on-premises, the backup location can be NFS storage or Oracle
Zero Data Loss recovery appliance. ZDM will then start transferring
database backups from the source to the chosen backup location.
9. Upon completion of the transfer of the backup files, ZDM will leverage
the files to instantiate a standby database in the target Oracle Cloud
Database Service.
10. Once the standby database has been instantiated in the cloud,
ZDM will establish SQLNet connectivity between the primary on-
premises and the standby database in the Oracle cloud.
11. Once connectivity has been established, ZDM will synchronize
the primary and standby databases.
12. Once the migration process is over, ZDM will finalize by
cleaning up the established connectivity and resources created for the
migration.
Logical Online Migration
A basic workflow first instantiates the target database using Data Pump.
After that, source and target are kept synchronized using real-time data
replication with GoldenGate. A logical online migration workflow contains
extra components compared with physical online migration: the ZDM server
runs independently, the source database is on-premises, and the target
database runs on Oracle Cloud. The logical migration requires an Oracle
GoldenGate Hub, which runs two GoldenGate microservices deployments
from the Oracle Cloud Marketplace acting as the Extract and the Replicat.
ZDM logical migration workflow can be summarized in the following simple
steps.
1. Download the ZDM binaries on a separate server, an Oracle Linux 7.
2. You must install and configure the binaries, fill the template file and
provide all the relevant information to ZDM before starting the
migration.
3. The information includes source database information, target database
information, backup location, connectivity, etc.
4. Once the prerequisites have been fulfilled, the migration process can
start by issuing the Migrate Database command.
5. ZDM will connect to the source database to verify that the source
database fulfills all the migration prerequisites.
6. ZDM will then connect to the target database. A placeholder target
database in the cloud needs to be configured beforehand.
7. ZDM will perform the required validations at the target database before
proceeding with the rest of the migration process.
8. ZDM then connects the source database to the backup location of
choice.
9. Afterward, ZDM configures a specified Oracle GoldenGate
deployment from the Oracle Cloud marketplace on a designated OCI
compute that will act as a GoldenGate Hub. It is done via REST API
calls for Oracle GoldenGate microservices.
10. ZDM will start capturing transactions from the source database,
generating the trail.
11. ZDM starts a Data Pump export job from the source database.
12. ZDM will monitor the Data Pump export job, report progress
and finalize the process by uploading the dump files from the job to the
user-specified backup location.
13. After that, ZDM will start a data pump import job to import
into the target database. If the target is in an Autonomous Database, the
import job directly imports the dump files into the target database. In
the case of the different targets, ZDM will first copy the dump files into
a local file system in the target, and then the Data Pump import job will
import the dump files into the target database.
14. Once the Data Pump instantiation of the target database has
been completed, ZDM will configure a specified Oracle GoldenGate
deployment from the Oracle Marketplace on the GoldenGate Hub. This
deployment will act as the Replicat and immediately start applying
changes to the target database using the trail files produced by the
Extract configured earlier. Then, ZDM will monitor GoldenGate
replication by calling REST APIs for the Oracle GoldenGate
microservices and will finish the synchronization upon completion.
15. After that, ZDM will switch over the client to the target
database in the cloud.
16. Finally, ZDM will proceed to finalize the migration.
Installation and Technical Requirements
Oracle ZDM is a downloadable engine that drives the migration of
on-premises Oracle databases into the Oracle Cloud. The ZDM binaries can
be downloaded from ZDM's product page. The installation requirements are
listed below.
Upon download, ZDM must be installed on a box with Oracle Linux 7.
This box can be on-premises on bare metal or VM or in the cloud
It does not require much computing power or specific configurations
besides certain packages, and at least 100 GB of the file system free
storage due to ZDM migration workflow generated logs
Connectivity
The following are the requirements for connectivity between the ZDM
service node and the source and target database.
ZDM service nodes require password-less SSH key-based access or root
access, along with access to at least one RAC node
Customized connectivity access is feasible through a custom adapter
Any hostname to the source or target database host, if specified, should
be resolvable on the ZDM host
Also, the following are the port requirement for connectivity between the
ZDM service node and the source and target database.
Port 22 – Source and Target require access to port 22 for
authentication based operations (SSH)
Port 443 – Source and Target require access to port 443 to access the
object store
Port 1521 – Source and Target require port 1521 for SQL *Net
connectivity
Source DB
The requirements for source database configuration are as follows.
It must be running in “Archive log” mode
In version 12.2 and above, TDE must be enabled, or the TDE wallet
must be configured
If the source database is the Oracle RAC database, the SNAPSHOT
_CONTROLFILE must be pointed to a shared location across all the
instances
Port 1521 should be open for SQLNet access
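Some of these prerequisites can be verified on the source database before starting ZDM. The queries below are a sketch against standard dynamic performance views; run them as a privileged user on the source.

```sql
-- Confirm the source database runs in archive log mode (should return ARCHIVELOG):
SELECT log_mode FROM v$database;

-- For 12.2 and above, confirm the TDE wallet is configured:
SELECT wrl_type, status FROM v$encryption_wallet;
```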
Target DB
The requirements for target database configuration are as follows.
A placeholder database using the OCI console must be created in open
mode
Cloud and on-premises databases must be at the same patch level
The TDE wallet status must be set to open, and the wallet type should
be set to Auto-Login
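The wallet state on the placeholder target can be checked with a query like the following sketch; for ZDM the STATUS column should show OPEN and WALLET_TYPE should show AUTOLOGIN.

```sql
-- Verify the TDE wallet on the target placeholder database is open
-- and uses an auto-login wallet:
SELECT wrl_type, status, wallet_type FROM v$encryption_wallet;
```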

Mind Map

Figure 9-17: Mind Map


Practice Questions
1. Which of the following allows to migrate up to 150 terabytes of data?
A. Storage Gateway
B. Data Transfer Disk
C. Data Transfer Appliance
D. Virtual Cloud Network
2. Which of the following services can be used as an offline data transfer
solution?
A. Data Transfer Disk
B. Compute
C. IAM
D. VCN
3. Which of the following follows Oracle MAA principles and practices?
A. Autonomous Database
B. Zero Downtime Migration
C. Data Pump
D. None of the Above
4. How many migration methods are offered by ZDM?
A. Two
B. Three
C. Four
D. Five
5. Which of the following is used to create copies of the on-prem file for
migration purposes?
A. VCN
B. Compute
C. Storage Gateway
D. File Storage
6. Which of the following is necessary when doing migrations?
A. Database Version
B. Database Size
C. High Availability
D. All of the Above
7. How many migration types are available in Oracle?
A. Seven
B. Six
C. Five
D. Four
8. Which of the following migration types allow you to do one-time
snapshots and replications?
A. Online Migration
B. Direct Migration
C. Offline Migration
D. Logical Migration
9. Which of the following automates the migration process?
A. Virtual Cloud Network
B. Offline Migration
C. Data Pump
D. ZDM
10. Which of the following migrates data in Oracle?
A. Gateway
B. Hybrid network
C. GoldenGate
D. None of the Above
11. DMS is free for ___________.
A. Two months
B. Six months
C. 30 days
D. Three days
12. Which of the following communicates the resource in DMS?
A. Public Endpoints
B. Private Endpoints
C. Logical Server
D. Load Balancer
13. Which of the following performs verification activities?
A. Migration Job
B. Schema
C. Agent
D. Validation Job
14. During migration, users deactivate all the source application for
______________ process.
A. Export
B. Switchover
C. Import
D. Database deployment
15. Which of the following links Oracle Cloud Infrastructure to a source
database present in other regions?
A. Agent
B. Schema
C. Registered Database
D. Migration job
Chapter 10: Design for Security and Compliance

Introduction
This chapter focuses on the design parameters necessary for security and
compliance in the Oracle cloud. The topics include:
IAM – Federation
Sign-in Options
Web Application Firewall

IAM – Federation
Federated users select which identity provider they want to use for sign-in
and are then redirected to that identity provider's sign-in experience for
authentication. After inputting their login and password, the Identity Provider
(IdP) authenticates and redirects them back to the Oracle Cloud Infrastructure
Console.
General Concept
The identity provider supplies identifying credentials and authentication for
users. OCI can be protected with any IdP that supports the Security Assertion
Markup Language (SAML) 2.0 protocol. The service (application or website)
calls upon the IdP to authenticate users. A federation trust is the
relationship that an administrator configures between an IdP and a service
provider; the OCI Console or API is used to set up this relationship. Once
you build the relationship, that particular IdP is federated to
OCI.
To see how this works, consider a user who wants to log in to OCI, where
OCI has set up a federation trust with an identity provider such as Oracle
IDCS, Okta, Microsoft Azure Active Directory, or Active Directory
Federation Services. When the user tries to log in to the OCI Console, OCI
redirects the request to the IdP for authentication because of the federation
trust. The IdP authenticates the user and returns an authentication assertion
(a SAML security assertion). The user then logs in to OCI using that
assertion, completing the federated sign-in.

Figure 10-01: General concept

User Group Mapping


This topic defines the mapping between an IdP group and an OCI group,
which is mainly used for authorization. Figure 10-02 shows that the
Enterprise Identity Provider contains users A, B, C, and so on, belonging
to the groups defined in the figure. When you federate the users with OCI,
a user appears in OCI as "enterpriseidentityprovider/userA" for user A,
"enterpriseidentityprovider/userB" for user B, and so on, belonging to a
specific OCI group with policies in various compartments or the whole
tenancy.
Figure 10-02: User Group Mapping

Note: The process of mapping groups is primarily used for authorization
purposes.
Users Type
There are three kinds of users available.
Federated User
A federated user is someone who signs in to use the OCI console by way of a
federated IdP. These users are created and managed by an admin in the IdP
and use SSO sign-in to the console.
The federated users are granted access to OCI based on their membership in
groups mapped to the OCI groups.

EXAM TIP: Federated users cannot have other OCI credentials, like API
keys, authentication tokens, or passwords.

Local User
A local user is a non-federated user: someone who signs in to the OCI
console with a login and password created in OCI by the OCI admin.
Provisioned/Synchronized User
OCI supports the System for Cross-domain Identity Management (SCIM), an
open standard that enables automated user provisioning across identity
systems (IdPs).
For Oracle IDCS and Okta using SCIM, federated users can be provisioned
into OCI. Also, this allows you to assign credentials (unique OCID) to the
users in OCI.
Note: When you delete a federated user in IdP, delete the synchronized user
in OCI.

Understanding Sign-in Options


The Oracle sign-up process creates your users in two different identity
systems.
1. Oracle Identity Cloud Service (IDCS)
2. Oracle Cloud Infrastructure’s own native identity system called Identity
and Access Management (IAM) service.
Oracle Identity Cloud Service IDCS
This SSO option allows you to sign in to OCI and then move to other
Oracle Cloud services without re-authenticating. This user has administrator
access to all Oracle Cloud services bundled with your account.
IAM Service
IAM is a native OCI identity system; you are granted administrator privileges
in OCI to get started right away with all OCI services.
When to Use OCI IAM and IDCS
When using all the OCI services, like Compute, Autonomous Database, and
Storage, you should be using a local source (IAM service) only.
However, several platform services currently use IDCS as core identity
systems. Some examples of platform services include Java Cloud Service,
Integration Cloud Service, Identity Cloud Service, etc.
OCI Console Single Sign-On
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.
2. Choose the Single Sign-On (SSO) option.
3. Select Identity Provider and click on Continue.
4. Write your OCI credentials and click on Sign In.
5. The Oracle Cloud Infrastructure Console dashboard will appear.
6. To verify your identity, go to the Profile and verify the details.
OCI Console Direct Sign In
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.
2. Select the Direct Sign-In option and enter your OCI credentials.
3. Click on Sign In.
4. The Oracle Cloud Infrastructure Console dashboard will appear.
Lab 10-01: Federation
Introduction
Federated users select which identity provider they want to use for sign-in
and are then redirected to that identity provider's sign-in experience for
authentication. After inputting their login and password, the Identity Provider
(IdP) authenticates and redirects them back to the Oracle Cloud Infrastructure
Console.
Problem
An organization has multiple enterprise applications, including Microsoft
Azure and Oracle Cloud. To use both Cloud Service Providers (CSPs),
credentials are required. The organization's security head wants to create a
global way to log in to the cloud service provider instead of creating separate
login credentials for each CSP. How would it be possible?
Solution
With the feature of Federation service, the security head can easily create
federated trust between Azure and Oracle and use single login credentials to
use both clouds.
Step 1: Get the Metadata File
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.

2. The Oracle Cloud Infrastructure Console dashboard will appear.


3. Go to Identity & Services and click on Federation.
4. Verify your Home Region and Compartment.
5. Click on Download this document.
6. Download the file and save it in any desired folder.
Step 2: Create OCI Console Enterprise Application
7. After that, log in to Microsoft Azure.
8. From the Home page, click on Azure Active Directory.
9. Explore the details available on the Overview page.

10. From the left side given menu, click on Enterprise Applications
under Manage.
11. Check the available Enterprise applications and click on + New
application.
12. Search and click on Oracle Cloud Infrastructure Console.
13. Check the details and click on Create.
14. Check the properties.
15. Then, start configuration from the Getting Started section.
16. Click on point 2, Set up single sign on.
17. Inside the single sign-on option, click on SAML.
18. For basic SAML configuration, upload the metadata file recently
taken from the OCI Console.
19. Browse the metadata file and click on Add.
20. Check the SAML configuration details.
21. Scroll down the page and write a URL for Sign on URL.
22. Click on Save.
23. After that, check the configurations.
24. Scroll down and click on the Edit option for Attributes and Claims.
25. Click on the existing Claim name.
26. Select Persistent for Name identifier format.
27. Leave the remaining options as default and click on Save.
28. After that, click on +Add a group claims.
29. Select the Security Groups option.
30. Set Group ID as Source attribute.
31. Enable Customize under Advanced options.
32. Write group name and namespace.
33. Leave the remaining options as default.
34. Click on Save.
35. Verify the recently created claim.
36. Inside the SAML Signing Certificate section, download the
Federation Metadata XML.
37. Save the downloaded file in any desired folder.
Step 3: Add User/Group to this Application
38. On your application page, select Users and groups from the left
side given menu.
39. Click on +Add user/group.
40. Select your user/group from the list and click on Select.
41. Check the selected user and click on Assign.
42. Click on your recently added user.
43. Check the details of the added user.

44. Scroll down and copy the Object ID of that user.


Note: Save this object ID somewhere because you will need this ID in the
impending steps.
Step 4: Add Identity Provider in OCI
45. Return to the OCI console again and navigate to the Federation
service.
46. Click on Add Identity Provider.
47. Write a unique name and description for the identity provider.
48. Choose the SAML type.
49. Browse and upload the Federation Metadata file that was recently
downloaded from Azure.
50. Scroll down the page, leaving the remaining options default, and
click on Continue.
51. Inside the Add Identity Provider dialog box, paste the Azure user
object ID under Identity Provider groups.
52. Click on Add Provider.
53. Verify the identity provider.
Step 5: Check the Federation Trust between Azure and Oracle
54. Log out from the OCI console. Again, go to the OCI console sign-in
page.
55. Choose the SSO option.
56. This time, you will see an additional option in the list of Identity
Providers.
57. Select the recently created IdP.
58. Click on Continue.
59. Enter your Azure credentials instead of your OCI credentials.
60. You will directly navigate to the OCI console.
61. Check the User Profile. You should see IdP of Azure.
62. You can now do and create any resource in this OCI.

63. All options are available in the same manner as was available from
your profile IdP.
Web Application Firewall
Introduction
The Oracle Cloud Infrastructure Web Application Firewall (WAF) is a global
security solution protecting applications from dangerous and undesirable
internet traffic. It is cloud-based and PCI compliant. WAF may secure any
internet-facing endpoint by enforcing the same set of rules across all of a
customer's apps.
You can develop and maintain rules for internet threats, including Cross-Site
Scripting (XSS), SQL Injection, and other OWASP-defined vulnerabilities
using WAF. Unwanted bots can be mitigated, while desired bots can be
admitted tactically. Access rules can be set up to restrict access based on
location or the request's signature.
The Oracle Cloud Infrastructure WAF is a regional and edge enforcement
service connected to an enforcement point, such as a load balancer or a web
application domain name, and it guards applications against malicious and
undesired internet traffic.
What is meant by WAF?
The Oracle Cloud Infrastructure Web Application Firewall (WAF) is a
global security solution protecting applications from dangerous and
undesirable internet traffic. It is cloud-based and PCI compliant
WAF stands for Web Application Firewall, a device, server-side
plugin, or filter that applies a set of rules to HTTP/S traffic
WAF can detect and guard against attack streams hitting a web
application by intercepting HTTP/S traffic and passing it through a
series of filters and rules
The rules cover common attacks (Cross-Site Scripting (XSS), SQL
Injection) and can also filter specific source IPs or malicious bots
Typical WAF responses include allowing the request to proceed, audit
logging the request, or blocking the request by returning an error page
The OCI Web Application Firewall is a global security service that is
cloud-based and PCI-compliant
WAF Concepts
The necessary concepts associated with the WAF include:
WAF Policy
The whole configuration of your WAF service, including origin management,
protection rule settings, and bot detection features, is referred to as WAF
policies.
Origin
It is the origin host server for your web application. To set up protection rules
or other elements in your WAF policy, you must first establish an origin.
Protection Rules
Protection rules can be configured to allow, block, or log network requests
that meet the rule's specified criteria. The WAF tracks traffic to your web
application over time and recommends new rules to apply.
Bot Management
The WAF service has numerous features that allow you to detect bot traffic
and either ban or allow it to access your web apps. JavaScript Challenge,
CAPTCHA Challenge, and GoodBot Whitelists are some of the bot control
tools.
Access Control
Request and response controls are included in access control.
Actions
Actions are items that represent one or more of the following:
Allow: An action that skips all remaining rules in the current module if a
matching rule is found.
Check: An action that does not halt the current module's rule execution.
Instead, it creates a log message that records the outcome of rule execution.
Return HTTP response: This action returns a specific HTTP response.
Condition
As a condition, each rule accepts a JMESPath expression. WAF rules are
triggered by HTTP requests or HTTP responses (depending on the type of
rule).
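The interplay of conditions and actions described above can be sketched in plain Python. The snippet below is an illustration only: simple predicate functions stand in for JMESPath expressions (evaluating real JMESPath requires a third-party library), and the rules, request fields, and values are invented for the example.

```python
# Simplified model of WAF rule evaluation: each rule pairs a condition
# (a predicate over the request dict, standing in for a JMESPath
# expression) with an action name. Illustration only, not the OCI
# implementation.

RULES = [
    # "allow": stop evaluating further rules in this module.
    {"condition": lambda req: req["path"].startswith("/health"),
     "action": "allow"},
    # "check": log a message but keep evaluating.
    {"condition": lambda req: req["country"] == "XX",
     "action": "check"},
    # "return_response": answer with a fixed HTTP response (block).
    {"condition": lambda req: "<script>" in req["query"],
     "action": "return_response", "status": 403},
]

def evaluate(request: dict):
    log = []
    for rule in RULES:
        if not rule["condition"](request):
            continue
        if rule["action"] == "allow":
            return 200, log          # skip all remaining rules
        if rule["action"] == "check":
            log.append("suspicious")  # record the match, keep evaluating
            continue
        if rule["action"] == "return_response":
            return rule["status"], log
    return 200, log                  # default: pass the request through

status, log = evaluate({"path": "/", "country": "XX",
                        "query": "<script>alert(1)</script>"})
print(status, log)  # 403 ['suspicious']
```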
Firewall
A logical relationship between a WAF policy and an enforcement point, such
as a load balancer, is the Firewall resource.
Network Address List
WAF policies use network address lists to store specific public IP addresses,
CIDR IP ranges, and private IP addresses.
Rate Limiting
Rate limiting inspects HTTP connection attributes and limits the number of
requests for a specific key.
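As a rough illustration of the idea (not OCI's actual algorithm), a fixed-window counter keyed by client IP limits how many requests each key may make per window; the limit and window size below are arbitrary.

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds for each key
    (e.g., a client IP). Generic sketch, not the OCI WAF implementation."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        # key -> [window_start, request_count]
        self.counters = defaultdict(lambda: [0.0, 0])

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters[key]
        if now - start >= self.window:          # window expired: reset
            self.counters[key] = [now, 1]
            return True
        if count < self.limit:                  # still under the limit
            self.counters[key][1] = count + 1
            return True
        return False                            # over the limit: block

limiter = FixedWindowRateLimiter(limit=3, window=60.0)
results = [limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```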
Request Control
Inspection of HTTP request characteristics and return of a defined HTTP
response are both possible with request control.
Request Protection Rules
Request protection rules provide for the detection of harmful material in
HTTP requests and the delivery of a predefined HTTP response.
Benefits
The Oracle Cloud Infrastructure WAF protects your web application or API
from fraudulent requests. It also gives you a better understanding of where
your traffic is coming from, and Layer 7 DDoS attacks are prevented,
assuring increased uptime. It also filters the traffic at layer 7.
To identify and restrict harmful and/or suspicious bot activity from scraping
your website for competitive data, the bot management system employs
detection techniques such as IP rate limiting, CAPTCHA, device
fingerprinting, and human interaction challenges. Simultaneously, the WAF
can allow legitimate bot traffic from Google, Facebook, and other sources to
access your web apps as intended.
Features
Integrated Threat Intelligence
Adopt a layered (edge and in-region) defensive security strategy with a web
application firewall that gathers threat intelligence from various sources,
including WebRoot BrightCloud®, and more than 250 predefined OWASP,
application, and compliance-specific rules.
Extensive Policy Control
Access controls based on the geolocation data, whitelisted and blacklisted IP
addresses, HTTP URLs, and HTTP headers protect applications deployed in
Oracle Cloud Infrastructure, on-premises, and in multi-cloud settings.
Flexible Enforcement
Gain the ability to implement WAF protection on internal and external load
balancers closest to OCI applications and at the OCI edge closest to end-
users. Flexible Enforcement protects application infrastructure and workloads
in any environment: OCI, on-premises, multicloud, and everywhere in
between.
Use Cases
OCI WAF is a global security solution protecting applications from
dangerous and unwanted internet traffic. It is cloud-based and PCI compliant.
• Defend any internet-connected endpoint from cyberattacks and
criminals
• Prevent SQL injection and cross-site scripting (XSS)
• Bot management — block malicious bots in real-time
• Protection against Layer 7 Distributed-Denial-of-Service (DDoS)
attacks using threat intelligence gathered from many sources, including
Webroot BrightCloud
OCI WAF Rulesets
To protect against the most prevalent web vulnerabilities, OCI WAF employs
the OWASP ModSecurity Core Rule Set. The open-source community
manages and maintains these guidelines.
OCI WAF is pre-configured to protect against the top ten Internet risks, as
determined by the OWASP Top 10. These include:
• A1 – Injections (SQL, LDAP, OS, etc.)
• A2 – Broken Authentication and Session Management
• A3 – Cross-site Scripting (XSS)
• A4 – Insecure Direct Object References
• A6 – Sensitive Data Exposure
• A7 – Missing Function-Level Access Control
Note: Each type of vulnerability ruleset is shown within the OCI console,
with granular controls for each specific rule.
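A real ruleset such as the OWASP ModSecurity Core Rule Set contains hundreds of carefully tuned rules; the fragment below is a deliberately naive sketch of how pattern-based protection rules classify input, and its toy signatures are far too coarse for production use.

```python
import re

# Toy signatures illustrating two OWASP Top 10 categories. Real CRS
# rules are far more elaborate and handle evasion techniques.
SIGNATURES = {
    "A1-SQL-Injection": re.compile(r"('|\b)(union|select|drop)\b", re.I),
    "A3-XSS": re.compile(r"<\s*script\b", re.I),
}

def inspect(param_value: str) -> list:
    """Return the names of all rulesets the input triggers."""
    return [name for name, rx in SIGNATURES.items() if rx.search(param_value)]

print(inspect("1' UNION SELECT password FROM users"))  # ['A1-SQL-Injection']
print(inspect("<script>alert(1)</script>"))            # ['A3-XSS']
print(inspect("plain product search"))                 # []
```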
Challenges and Whitelisting Capabilities
JavaScript Challenge
It gives you a quick and easy way to stop a substantial percentage of bot
attacks. In response to an HTTP request, every client (attacker and real user
alike) receives a piece of JavaScript that tells the browser to take a specific
action. Bots, often not equipped with JavaScript, will fail or be blocked,
while legitimate browsers pass the challenge without the user's knowledge.
CAPTCHA Challenge
CAPTCHA protection can be used to restrict access to a URL that humans
should only visit. You can also personalize the CAPTCHA challenge remarks
for each URL.
Whitelisting
You can manage which IP addresses show on the IP whitelist by using
Whitelisting.

WAF Architecture
The architecture of the WAF in OCI is shown in Figure 10-03. Your
application is deployed inside your tenancy, within a VCN, and the
architecture must contain a load balancer in front of the deployed
applications. When you have applications deployed in this architecture, you
should use WAF to protect access to them, applying its rules and capabilities
to safeguard against attacks or threats that could bring your applications
down. When deployed, WAF acts as a region-based service; after configuring
it, you specify the load balancer it should point to.
The configuration is then pushed to the WAF edge nodes, locations spread
around the globe where the WAF policies are deployed and enforced. Every
request arriving from the internet passes through a WAF edge node, which
verifies each connection and the traffic sent to your applications. Oracle
ensures that all policies are applied before granting access to the web
application.
Figure 10-03: WAF Architecture

WAF Point of Presence (PoPs)


In real time, Oracle Cloud Infrastructure WAF uses a sophisticated DNS
data-driven algorithm to select the optimum global Point of Presence (PoP)
to serve a specific user. As a result, users are routed around global network
issues and potential latency while receiving the best available uptime and
service levels.

Figure 10-04: WAF PoP

Shared Responsibility Model for WAF


The following table provides the summarized view of the shared
responsibility model for WAF.
Responsibility | Oracle | Customer
Configure WAF on-boarding dependencies (DNS, ingress rules, network) | No | Yes
On-board/configure the WAF policy for the web application | No | Yes
Construct new rules based on new vulnerabilities and mitigations | Yes | No
Review and accept new recommended rules | No | Yes
Keep WAF infrastructure patched and up-to-date | Yes | No
Monitor data-plane logs for abnormal, undesired behavior | Yes | Yes
Monitor for Distributed Denial of Service (DDoS) attacks | Yes | No
Provide High Availability (HA) for the WAF | Yes | No
Tune the WAF's access rules and bot management strategies for your traffic | No | Yes
Table 10-01: Shared Responsibility Model

Lab 10-02: Working with WAF Policy


Introduction
The Oracle Cloud Infrastructure Web Application Firewall (WAF) is a global
security solution protecting applications from dangerous and undesirable
internet traffic. It is cloud-based and PCI compliant. WAF may secure any
internet-facing endpoint by enforcing the same set of rules across all of a
customer's apps.
You can develop and maintain rules for internet threats, including Cross-Site
Scripting (XSS), SQL Injection, and other OWASP-defined vulnerabilities
using WAF. Unwanted bots can be mitigated, while desired bots can be
admitted tactically. Access rules can be set up to restrict access based on
location or the request's signature.
Problem
An organization has deployed the Virtual Cloud Network through OCI
Console and wants to add some protection rules to ensure the security of the
data and applications running within this network. How would it be possible?
Solution
In OCI, the organization can make the network secure and protect from
malicious attacks using a WAF policy.
Step 1: Create a VCN
1. Go to https://www.oracle.com/cloud/sign-in.html and log in to Oracle
Cloud Infrastructure Console with your Cloud Account Name and
credentials.
2. The Oracle Cloud Infrastructure Console dashboard will appear.
3. Go to Networking and click on Virtual Cloud Networks.

4. Verify your Home Region and Compartment.


5. Click on Create VCN.
6. Write a name for your VCN.
7. Select your compartment.
8. Specify IPV4 CIDR BLOCK.
9. Scroll down and enable DNS RESOLUTION.
10. Write a DNS LABEL.
11. Click on Create VCN.
12. After the deployment, verify the VCN Information.
Step 2: Create a Subnet
13. Scroll down and click on Create Subnet to create a subnet in this
VCN.
14. Give a unique name for this subnet.
15. Choose your compartment.
16. Select the Regional subnet type.
17. Select Public access for this subnet.
18. Enable DNS RESOLUTION.
19. Set DNS LABEL.
20. Select Default security list.
21. Click on Create Subnet.
22. After the creation, click on this subnet and verify the configuration
details.
Step 3: Create a DNS Zone
23. Go to the Navigation menu and open Zones.
24. Write a unique name for the zone.
25. Select your compartment.
26. Select Primary zone type.
27. Leave the remaining options as default.
28. Click on Create.
29. After creating your DNS zone, verify all the information.
30. Scroll down and click on your zone.
31. Explore all the available records.
Step 4: Create a WAF Policy
32. Go to the Navigation menu and click on Web Application
Firewall under Identity & Security.
33. First, verify your Home Region and Compartment.
34. Click on Create WAF Policy.
35. Write a unique name for your policy.
36. Set a primary domain.

37. Give the Origin name and URI.


38. Add TAG NAMESPACE and click on Create WAF Policy.

39. The deployment process will take a few seconds.


40. After the creation, verify the policy information.
41. Scroll down and click on Access Control from the menu on the left
side.
42. Click on Add Access Rule to add a new rule.

43. Write a unique name for the rule.


44. Under Conditions, select the country and your Home Region.
45. Select Block actions.
46. Select Show Error Page for the block action.
47. Click on Add Access Rule.
48. Now, update your recently created WAF policy. It will take 10 to
15 minutes.
49. After updating the WAF policy, verify the access rule by
navigating to the CNAME Target.
Mind Map
Figure 10-05: Mind Map

Practice Questions
1. In the Shared Security model, you are responsible for __________.
A. Workloads
B. Configuration of Resources
C. Software
D. Hardware
E. Datacenter Facilities
2. Which of the following services manages user access, authentication, and
policies?
A. Bastion
B. Networking
C. Vault
D. IAM
3. How many user types are available?
A. Two
B. Four
C. Three
D. None
4. Which of the following security services protects applications from
dangerous and undesirable internet traffic?
A. WAF
B. Security Advisor
C. Cloud Guard
D. Bastion
5. Which of the following security services restricts and limits access to
target resources without public endpoints?
A. WAF
B. Bastion
C. Vault
D. None of the Above
6. Which of the following WAF components defines the whole
configuration of your WAF service?
A. Origin
B. Condition
C. Policy
D. Protection Rules
7. Which of the following allows you to choose your own sign-in option?
A. IAM
B. IDCS
C. WAF
D. Federation
8. Which of the following services uses a DNS data-driven algorithm to
select the optimum global point of presence?
A. WAF
B. IAM
C. Security Zone
D. None of the Above
9. Which of the following can be used to restrict access to a URL?
A. JavaScript
B. CAPTCHA
C. Whitelisting
D. None of the Above
10. Which of the following uses JavaScript challenge as its control tool?
A. Protection Rule
B. Policy
C. Origin
D. Bot Management
11. How many OCI sign-in options are available?
A. Only one
B. Three
C. Two
D. More than five
12. Which of the following user types is granted access to OCI based on
membership mapped to OCI groups?
A. Federated User
B. Local User
C. Synchronized User
D. All of the Above
13. Which of the following services is responsible for managing access
to cloud resources?
A. Server
B. Interface
C. Channel
D. IAM
14. WAF is used to develop and maintain rules for internet threats,
including Cross-Site Scripting (XSS), SQL Injection, and other OWASP-
defined vulnerabilities.
A. False
B. True
15. How many action items are provided by WAF?
A. Only one
B. Maximum two
C. At least three
D. None
Chapter 11: Real-World Architecture

Introduction
This chapter focuses on the real-world architectures provided by Oracle. It
includes:
General Architecture
Hub-Spoke Architecture
HPC Architecture
Security Architecture

OCI Architecture Overview


The Oracle Architecture Centre is a resource library that allows developers
and IT professionals to optimize and personalize their cloud, hybrid, and on-
premises deployments. The OCI Architecture Centre offers a wealth of
resources for seasoned Oracle users and those just starting on their cloud
adventure, including reference architectures, quick-start instructions, etc.
Oracle’s library of materials lets you design and implement your projects
faster, easier, and more efficiently.
Figure 11-01: OCI Global Footprint

There are some building blocks on the top of the OCI global footprint. At the
bottom side, there exist core primitives, including compute, storage, and
networking.
The compute services are core Virtual Machine (VM), bare metals servers,
containers, a managed Kubernetes service, and VMware services. These
services are primarily for performing calculations, executing logic, and
running applications.
Cloud storage includes disks attached to virtual machines, file storage, object
storage, archive storage, and data migration service.
OCI offers a complete range of storage services for you to store, access,
govern, and analyze structured and unstructured data.
The networking feature lets you set up software-defined private networks in
the Oracle cloud. OCI provides the broadest and deepest networking services
with the highest capability, most security features, and highest performance.
There are multiple database services available, both Oracle and open-source.
Oracle is the only cloud that runs Autonomous Database with multiple
capabilities, including OLTP, OLAP, and JSON. You can run databases on
virtual machines, bare metal servers, or Exadata in the cloud. You can also
run open-source databases, such as MySQL and NoSQL, in the Oracle Cloud
Infrastructure.
With Data and AI Services, there is a managed Apache Spark service called
Dataflow, a managed service for tracking data artifacts across OCI called
Data Catalog, and a managed service for data ingestion and ETL called Data
Integration.
There is also a managed data science platform for machine learning models
and training in Oracle. Also, there is a managed Apache Kafka service for
event streaming use cases.
The Governance and Administration services include security identity,
observability, and management.
Some unique features, like compartments, make it operationally easier to
manage large and complex environments. Security is integrated into every
aspect of OCI, whether automatic detection and remediation, what is typically
referred to as Cloud Security Posture Management, robust network
protection, or encryption by default.
There is an integrated observability and management platform with logging,
logging analytics, and Application Performance Monitoring (APM).
Many developer services are available that have the managed low code
service called APEX, several other developer services, and a managed
Terraform service called Resource Manager.
For analytics, a managed analytics service called Analytics Cloud integrates
with various third-party solutions.
The application services include a managed serverless offering called
Functions, an API Gateway, and an Events service to help you create
microservices and event-driven architectures.
You have a comprehensive connected SaaS suite across your business,
finance, Human Resources (HR), supply chain, manufacturing, advertising,
sales, customer service, and marketing, all running an OCI.
Figure 11-02: Services in Global Region

OCI Architecture
Introduction
Oracle Cloud Infrastructure (OCI) is a secure public cloud infrastructure
designed for enterprise-critical applications. Oracle changed the virtualization
stack to decrease the risk of hypervisor-based attacks and to enhance tenant
isolation. Consequently, the next-generation public cloud infrastructure
design outperforms first-generation cloud infrastructure designs in terms of
security. This architecture has been applied in every data center and area.
OCI is a full-fledged IaaS platform. It offers the services required to develop
and execute applications in a highly secure, hosted environment with
excellent performance and availability. Customers can run the Compute and
Database services on either bare metal instances (customer-dedicated
physical servers) or Virtual Machine (VM) instances (isolated computing
environments on top of bare metal hardware). Because bare metal and VM
instances use the same server hardware, firmware, underlying software, and
networking infrastructure, the OCI safeguards are built into those levels.
Core Concept of OCI Architecture
OCI supports the following cloud concepts:
High Availability – Cloud resources are available at all times and have no
single point of failure.
Disaster Recovery – Allows for speedy recovery or service continuance in
the event of any type of downtime.
Fault Tolerance – Keeps downtime to a minimum.
Scalability – Allows resources to be scaled up or down (vertical scaling), in
and out (horizontal scaling).
Elasticity – It is the ability to scale resources such as virtual machines and
storage rapidly.
Pricing – Capital Expenditure (CAPEX) is money spent on fixed assets such
as physical infrastructure, while Operational Expenditure (OPEX) is money
spent on utilities and power.
Component of OCI Architecture
The five primary components of the OCI Architecture are as follows:
OCI Regions
OCI region is a localized geographic area comprising one or more
Availability Domain (AD).

Figure 11-03: OCI Region

Depending on where you are from, you may have access to a local region or a
region close to you. Check out the image below to see a list of the regions
that are now available:

Figure 11-04: OCI Regions

OCI has decided to launch additional regions in new geographies with a
single AD to increase its global reach quickly.

Availability Domain (AD)


Availability domains are fault-tolerant data centers located within the regions
but connected by a low latency, high bandwidth network.
The availability domains are physically isolated, have their own networks,
and are unlikely to fail at the same time. ADs within a region are connected
via a low-latency network.
Figure 11-05(a): Availability Domain

Availability domains are isolated from each other, fault-tolerant, and very
unlikely to fail simultaneously. Because availability domains do not share
physical infrastructure such as power, cooling, or the internal network, a
failure that impacts one availability domain is unlikely to impact the
availability of the others.
As shown in Figure 11-05(b), there are three availability domains. One AD
has an outage that is not available. However, the other two ADs are still up
and running.

Figure 11-05(b): Multi-AD OCI Region

Fault Domain (FD)


A fault domain is a grouping of hardware and infrastructure within an
availability domain. There are three fault domains in every availability
domain. Fault domains enable you to distribute your instances so that they do
not share physical hardware within a single availability domain.
Consequently, an unexpected hardware failure or planned hardware
maintenance in one fault domain does not affect instances in the other fault
domains. When launching a new instance, you can optionally specify the
fault domain; otherwise, the system chooses one for you.
Figure 11-06: Fault Domain

You can leverage fault domains for your services. In any region, resources in
at most one fault domain are actively changed at any point in time, which
means that availability problems caused by change procedures are isolated at
the fault domain level. You can specify which fault domain you want to use
and control the placement of your compute and database instances at
instance launch time.

Figure 11-07: FD in Multi-AD OCI Region
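The placement idea can be sketched as a round-robin assignment across the three fault domains. The fault domain names follow OCI's naming convention, while the placement helper itself is purely illustrative.

```python
from itertools import cycle
from collections import Counter

# The three fault domains available in every OCI availability domain.
FAULT_DOMAINS = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]

def place(instances):
    """Assign instances round-robin so no fault domain is overloaded.
    At launch you could pass each assignment as the instance's fault
    domain; otherwise OCI picks one for you."""
    fds = cycle(FAULT_DOMAINS)
    return {name: fd for name, fd in zip(instances, fds)}

placement = place([f"app-{i}" for i in range(6)])
print(Counter(placement.values()))  # each fault domain receives 2 instances
```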

How do you choose a region?


The first key criteria for choosing a region is picking the closest region to
your users for the lowest latency and highest performance.
The second criteria include data residency and compliance requirements.
Many countries have strict data residency requirements, and you have to
comply with them. Therefore, you choose a region based on the compliance
requirements.
The third vital criterion is service availability. The cloud services are made
available based on regional demand, regulatory compliance reasons, and
resource availability.

EXAM TIP: You should have a region based on three key criteria.
1. Location
2. Data Residency and Compliance
3. Service Availability
Avoid Single Point of Failure
Consider an example in which you have a region and availability domain.
One AD has three FDs, as shown in Figure 11-08. When you create an
application, you create a software-defined virtual network. The architecture
consists of an application tier database tier. Both application and database
tiers are replicated across FDs. It gives you an extra layer of redundancy.
When something happens to an FD, your application is still up and running.
Similarly, you could replicate the same design in another AD. You could
have two copies of your application running and two copies of your database
running. Also, you could use various technologies like Oracle Data Guard to
make sure that your primary and standby –data is kept in sync.
It is how you can design these types of architecture to avoid single points of
failure.

Figure 11-08: Avoid Single Point of Failure

High Availability Design


Three essential factors must be addressed when planning a high-availability
architecture: redundancy, monitoring, and failover:
Redundancy refers to the ability of multiple components to perform the
same task. Because a redundant component can take over the work of a
failing component, the problem of a single point of failure is avoided
Monitoring entails determining whether or not a component is
functioning properly
Failover refers to the process by which a secondary component takes
over when the original component fails
Figure 11-09(a): Failure Protection with Fault Domain

Figure 11-09(b): Failure Protection with Multi-AD Region

OCI best Practices


Security and compliance, reliability and resilience, performance and cost
optimization, and operational efficiency are the four business goals of the
OCI best practices framework.
Security and Compliance
Oracle's security-first approach protects your most valuable data in the cloud
and on-premises. Oracle has decades of experience securing data and
applications; Oracle Cloud Infrastructure helps our customers build
confidence and preserve their most valuable data by providing a more secure
cloud. The domain of OCI security and compliance include:
Data Protection
User Authentication
Access Control
Resource Isolation
Computer Security
Network Security
Database Security
Reliability and Resilience
Build, implement, and maintain a fault-tolerant cloud infrastructure that you
can promptly recover in the event of an outage. The provided features
include:
Scaling
Data Backup
Fault-tolerant Network Architecture
Service Limits and Quotas
Performance and Cost Optimization
It ensures that your cloud resources are used effectively and at the lowest
possible cost.
Storage Strategy
Cost Tracking Management
Compute Sizing
Network Monitoring and Tuning
Operational Efficiency
It enables you to manage resources and administer your cloud topology
efficiently.
Support
Workload Monitoring
OS Management
Deployment Strategy

Hub-Spoke Architecture
Introduction
The core component of a hub-and-spoke network, also known as a star
network, is connected to several networks around it. The overall architecture
resembles a wheel, with many spokes connecting a central hub to spots along
the wheel's periphery. Setting up this topology in a standard on-premises data
center can be costly. However, there is no additional expense in the cloud.
Use Cases
For the following popular use cases, you can utilize the hub-and-spoke design
to create unique and powerful networking solutions in the cloud:
Creating a development and production environment that is separate
Isolating the workloads of various clients, such as an ISV's subscribers
Separating environments to meet PCI and HIPAA compliance needs
Using a central network provides shared IT services such as a log
server, DNS, and file sharing
Architecture
This reference architecture, shown in Figure 11-10, consists of an Oracle
Cloud Infrastructure region with a hub VCN and two spoke VCNs. Each
spoke VCN is connected to the hub VCN using a pair of Local Peering
Gateways (LPGs).
Every subnet has a routing table with rules for routing traffic to destinations
outside the VCN. Security lists are utilized to regulate network traffic to and
from each subnet. A few sample subnets and virtual machines are shown in
the architecture.
The hub VCN includes an internet gateway for network traffic to and from
the public internet, as well as a Dynamic Routing Gateway (DRG) for private
connectivity with your on-premises network, which you may set up using
Oracle Cloud Infrastructure FastConnect, IPSec VPN, or both.
Figure 11-10: Hub-Spoke Architecture

Components
The following elements make up the architecture:
On-premises network
This network refers to your company's local network. It is one of the
topology's spokes.
Region
An Oracle Cloud Infrastructure region is a defined geographic area that
includes one or more available domains and data centers. Regions are
autonomous from one another, and great distances can divide them (across
countries or even continents).
Virtual Cloud Network
In an Oracle Cloud Infrastructure area, a VCN is a configurable, software-
defined network that you create. VCNs allow you complete control over your
network environment, just like traditional data center networks. A VCN can
have numerous non-overlapping CIDR blocks that can be changed after the
VCN has been created. A VCN can be divided into subnets, each assigned to
a region or an availability domain. Each subnet comprises a continuous range
of addresses that do not overlap with any of the other VCN subnets. You can
change a subnet's size after its creation. A subnet can be either public or
private.
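The requirement that subnet ranges not overlap can be checked with Python's standard ipaddress module; this is a generic illustration, and the CIDR blocks below are examples only.

```python
import ipaddress

def validate_subnets(vcn_cidr, subnet_cidrs):
    """True if every subnet fits inside the VCN CIDR and no two subnets
    overlap, mirroring the VCN rule that subnet ranges must not overlap."""
    vcn = ipaddress.ip_network(vcn_cidr)
    nets = [ipaddress.ip_network(c) for c in subnet_cidrs]
    if any(not net.subnet_of(vcn) for net in nets):
        return False  # a subnet falls outside the VCN's address space
    # Check every pair of subnets for an address-range overlap.
    return not any(a.overlaps(b)
                   for i, a in enumerate(nets) for b in nets[i + 1:])

print(validate_subnets("10.0.0.0/16", ["10.0.1.0/24", "10.0.2.0/24"]))    # True
print(validate_subnets("10.0.0.0/16", ["10.0.1.0/24", "10.0.1.128/25"]))  # False
```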
Security List
You can build security rules for each subnet that specify the source,
destination, and kind of traffic allowed in and out.
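A simplified model of how such ingress rules are evaluated can be sketched in Python. The rule set below is hypothetical (HTTPS from anywhere, SSH only from inside a 10.0.0.0/16 VCN) and is a teaching sketch, not the actual security list implementation:

```python
import ipaddress

# Hypothetical, simplified model of ingress rules: each rule allows a
# protocol and destination port range from a source CIDR.
RULES = [
    {"source": "0.0.0.0/0",   "protocol": "tcp", "ports": (443, 443)},
    {"source": "10.0.0.0/16", "protocol": "tcp", "ports": (22, 22)},
]

def is_allowed(src_ip, protocol, port, rules=RULES):
    """Return True if any rule permits this source/protocol/port."""
    src = ipaddress.ip_address(src_ip)
    for rule in rules:
        lo, hi = rule["ports"]
        if (src in ipaddress.ip_network(rule["source"])
                and rule["protocol"] == protocol
                and lo <= port <= hi):
            return True
    return False  # no matching rule: traffic is dropped

print(is_allowed("203.0.113.9", "tcp", 443))  # True  (HTTPS from anywhere)
print(is_allowed("203.0.113.9", "tcp", 22))   # False (SSH only from the VCN)
```

Real security lists also distinguish stateful from stateless rules and carry separate ingress and egress lists; this sketch shows only the basic match logic.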
Route Tables
Virtual route tables contain rules for routing traffic from subnets to destinations outside a VCN, usually through gateways.
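When multiple rules match a destination, OCI uses the most specific (longest-prefix) route. A minimal Python sketch of that lookup, using a hypothetical hub-subnet route table, could look like this:

```python
import ipaddress

# Hypothetical route table for a hub subnet: the default route sends
# traffic to the internet gateway; on-premises traffic goes to the DRG.
ROUTE_TABLE = [
    {"destination": "0.0.0.0/0",     "target": "Internet Gateway"},
    {"destination": "172.16.0.0/12", "target": "DRG (on-premises)"},
]

def lookup(dest_ip, routes=ROUTE_TABLE):
    """Return the target of the most specific (longest-prefix) matching rule."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes
               if dest in ipaddress.ip_network(r["destination"])]
    if not matches:
        return None  # no route: traffic is dropped
    best = max(matches,
               key=lambda r: ipaddress.ip_network(r["destination"]).prefixlen)
    return best["target"]

print(lookup("172.16.5.1"))    # DRG (on-premises)
print(lookup("198.51.100.7"))  # Internet Gateway
```

Note that 172.16.5.1 matches both rules, but the /12 rule wins over the /0 default route because it is more specific.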
Dynamic Routing Gateway (DRG)
The DRG is a virtual router that connects a VCN to a network outside the region, such as a VCN in another Oracle Cloud Infrastructure region, an on-premises network, or a network hosted by another cloud provider.
Bastion Host
The bastion host is a compute instance that acts as a secure and controlled
entry point into the topology from the outside. In most cases, the bastion host
is set up in a Demilitarized Zone (DMZ). It allows you to safeguard sensitive
data by storing it in private networks not accessible from outside the cloud.
You can monitor and audit the topology because it has a single, well-known
entry point. As a result, you can avoid revealing the topology's more sensitive
components without jeopardizing access to them.
Local Peering Gateway (LPG)
Using an LPG, you can peer one VCN with another VCN in the same region.
Peering refers to the use of private IP addresses by VCNs to communicate
without going over the internet or your on-premises network.
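One prerequisite for peering is that the CIDR blocks of the two VCNs must not overlap. A quick pre-check for this, written as an illustrative Python helper (not an OCI API call), might be:

```python
import ipaddress

def can_peer(vcn_a_cidrs, vcn_b_cidrs):
    """Local peering requires that the two VCNs' CIDR blocks do not overlap."""
    for a in vcn_a_cidrs:
        for b in vcn_b_cidrs:
            if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                return False
    return True

print(can_peer(["10.0.0.0/16"], ["10.1.0.0/16"]))  # True
print(can_peer(["10.0.0.0/16"], ["10.0.8.0/24"]))  # False (overlap)
```

This is why the hub and each spoke in Figure 11-10 are planned with distinct, non-overlapping address ranges.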
VPN Connect
Site-to-site IPSec VPN connectivity between your on-premises network and
VCNs in Oracle Cloud Infrastructure is provided via VPN Connect. Before
packets are transported from the source to the destination, the IPSec protocol
suite encrypts them and decrypts them when they arrive.
FastConnect
Oracle Cloud Infrastructure FastConnect makes it easy to set up a dedicated, private connection between your data center and Oracle Cloud Infrastructure. Compared to internet-based connections, FastConnect offers higher-bandwidth options and a more reliable networking experience.
Considerations
Consider the following criteria while designing a cloud hub-and-spoke
network topology:
Cost
The compute instances and FastConnect (port hours and provider charges) are the only components of this design that incur costs. The other components are free of charge.
Security
To safeguard the topology, use proper security techniques.
Scalability
Consider your tenancy's service restrictions for VCNs and subnets. Request
an increase in the limits if different networks are required.
Performance
The number of VCNs in a region has no bearing on performance. Consider latency when peering VCNs in different regions. The connection's throughput is an additional factor when spokes are linked via VPN Connect or FastConnect.
Redundancy and Availability
The remaining components, except the instances, have no redundancy
requirements.

EXAM TIP: VPN Connect and FastConnect are redundant components.


Use numerous connections, ideally from different providers, for added
redundancy.

HPC Architecture
Introduction
HPC on Oracle Cloud Infrastructure (OCI) provides powerful, cost-effective computing capabilities for solving complex mathematical and scientific problems across various industries. OCI's bare metal servers, combined with Oracle's cluster networking, enable ultra-low-latency Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) v2, with latency under two microseconds over clusters of tens of thousands of cores.
Architecture
Oracle Cloud Infrastructure's Cluster Networking solution enables HPC
instances to communicate via a high-bandwidth, low-latency network. For
cluster networking, Oracle employs the RDMA over Converged Ethernet (RoCEv2) protocol. Each node in the cluster is a bare-metal computer that is
physically near the others. The latency of RDMA networking between nodes
is less than two microseconds, which is equivalent to on-premises HPC
clusters.
Cluster networks are intended for parallel computing workloads that are
extremely demanding, such as the following:
Simulations of computational fluid dynamics for automotive and
aerospace models
Crash simulations
Risk analysis and financial modeling
Simulations in biomedicine
Space exploration trajectory analysis and design
Workloads using artificial intelligence and big data
The reference architecture, shown in Figure 11-11, deploys a head node that runs the scheduler; you can also use it as a bastion server for cluster access.
Depending on your needs, you can establish a visualization node using a GPU virtual machine (VM) or a bare-metal system. We recommend placing the visualization node in the public subnet. HPC workloads frequently require visualization tools for pre- or post-processing, monitoring, or analyzing the output of simulations. Oracle Cloud Marketplace allows you to deploy an NVIDIA GRID-enabled workstation.
This architecture is implemented in a Virtual Cloud Network (VCN) with both public and private subnets. The client network can access the head node and visualization node only through IPSec VPN, Oracle Cloud Infrastructure FastConnect, or the public internet.
The architecture uses a region with a single availability domain and regional subnets. You can use the same design in a region with multiple availability domains. Regardless of the number of availability domains, we recommend using regional subnets for your setup.
You can deploy these cluster networks from Oracle Cloud Marketplace or set them up manually. In either case, we recommend starting with the baseline reference design and adjusting it to fit your needs.

Figure 11-11: HPC Architecture

Components
The following elements make up the architecture:
Region
An Oracle Cloud Infrastructure region is a defined geographic area that contains one or more availability domains and data centers. Regions are independent of one another, and great distances can separate them (across countries or even continents).
Availability Domains
Within a region, availability domains are standalone, independent data centers. The physical resources in each availability domain are isolated from those in other availability domains, which provides fault tolerance. Availability domains do not share infrastructure such as power, cooling, or the internal availability domain network. As a result, a failure in one availability domain is unlikely to affect the region's other availability domains.
Fault Domains
A fault domain is a collection of hardware and infrastructure within an availability domain. Each availability domain contains three fault domains, each with its own power and hardware. When resources are distributed across multiple fault domains, your applications can tolerate physical server failure, system maintenance, and power failures within a fault domain.
VCN and Subnets
In an Oracle Cloud Infrastructure region, a VCN is a configurable, software-defined network that you create. Like traditional data center networks, VCNs give you complete control over your network environment. A VCN can have multiple non-overlapping CIDR blocks that can be changed after the VCN has been created. A VCN can be divided into subnets, each assigned to a region or an availability domain. Each subnet comprises a contiguous range of addresses that does not overlap with any other subnet in the VCN. A subnet's size can be changed after it has been created. A subnet can be either public or private.
Bastion Host
The bastion host is a compute instance that acts as a secure and controlled
entry point into the topology from the outside. In most cases, the bastion host
is set up in a demilitarized zone (DMZ). It allows you to safeguard sensitive
data by storing it in private networks not accessible from outside the cloud.
You can monitor and audit the topology because it has a single, well-known
entry point. As a result, you can avoid revealing the topology's more sensitive
components without jeopardizing access to them.
HPC cluster node
These compute nodes form RDMA-enabled clusters (a 100-Gbps RoCE v2 isolated network) and are provisioned and de-provisioned by the head node. They process the data in file storage and then return the results.
Visualization node
A 2D or 3D application is usually installed on the visualization node to visually display and analyze data produced by the HPC cluster nodes.
Security List
You can build security rules for each subnet that specify the source,
destination, and kind of traffic allowed in and out.
Considerations
Consider these implementation alternatives when building High-Performance
Computing (HPC) on Oracle Cloud Infrastructure.
Performance
Choose the right compute shape with the right bandwidth to get the best
results.
Availability
Based on your deployment requirements and region, consider choosing a high-availability solution. Options include using multiple availability domains in a region and using fault domains.
Cost
A bare-metal GPU instance delivers the necessary GPU power at a higher price. Examine your requirements to determine the best compute shape.
Monitoring and Alerts
Set up CPU and memory use monitoring and notifications for your nodes so
you can scale the shape up or down as needed.
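The scale-up/scale-down decision described above can be sketched as simple threshold logic. The threshold values and the function name below are purely illustrative assumptions, not Oracle recommendations:

```python
# Hypothetical thresholds for deciding when to resize a node's shape.
CPU_HIGH, MEM_HIGH = 85.0, 90.0  # sustained-utilization alarm levels (%)
CPU_LOW = 20.0                   # candidate for scaling down

def scaling_advice(cpu_pct, mem_pct):
    """Map observed CPU/memory utilization to a shape-change recommendation."""
    if cpu_pct >= CPU_HIGH or mem_pct >= MEM_HIGH:
        return "scale up"
    if cpu_pct <= CPU_LOW and mem_pct < MEM_HIGH:
        return "consider scaling down"
    return "no action"

print(scaling_advice(92.0, 40.0))  # scale up
print(scaling_advice(10.0, 30.0))  # consider scaling down
```

In practice, you would wire such thresholds into OCI Monitoring alarms rather than evaluating them yourself; the sketch only shows the decision logic.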

Mind Map
Figure 11-12: Mind Map

Practice Questions
1. Oracle Cloud Infrastructure (OCI) is a _______ public cloud infrastructure.
A. Open
B. Safe
C. Secure
D. None of the above
2. A region can be made up of one or more ___________.
A. Fault Domain
B. Availability Domains
C. Location
D. All of the above
3. A fault domain is a collection of _____________ and equipment.
A. Hardware
B. Software
C. Both
D. None
4. Which of the following is used for creating a development and production
environment?
A. VCN
B. Hub-Spoke Architecture
C. HPC Architecture
D. None of the Above
5. Fault Domain is a grouping of _____________ and infrastructure within
the AD.
A. Imaginary
B. Real
C. Physical
D. Hardware
6. Which of the following allows developers and IT professionals to do their
deployments?
A. Oracle Architecture Centre
B. Availability Domain
C. VCN
D. Fault Domain
7. A low-latency, high-bandwidth network connects all of the
______________ in a region.
A. FD
B. High Availability domains
C. Availability domains
D. None of above
8. If you want to add High Availability inside the region, you might want to
introduce _________ instance with Oracle Data Guard to another AD.
A. Standby
B. Another
C. Parallel
D. Imaginary
9. Compartments are ___________ collections of related resources.
A. Real
B. Imaginary
C. Physical
D. Logical
10. Which of the following architecture allows traffic to flow from an on-
premises network to the Hub, communicating with a VCN?
A. Hub-spoke
B. Security
C. HPC
D. Hub-hub
11. Each resource can belong to only ___________ compartment.
A. One
B. Two
C. Three
D. Four
12. Resources can be deleted or added to the compartment.
A. Increase or Decrease
B. Deleted or Added
C. Vanish
D. None of the Above
13. Which of the following is used to resolve complex mathematical and
scientific issues?
A. Hub-Spoke Architecture
B. Security
C. HPC Architecture
D. None of the above
14. How many main components of OCI architecture are there?
A. One
B. Five
C. Three
D. Two
15. Which of the following URL is used to navigate to the OCI console?
A. https://cloud.oracle.com
B. https://oracle.cloud.com
C. https://cloud.oracle.net
D. https://cloud.doc.oracle.com
Answers
Chapter 07: Oracle Autonomous Database
1. Answer: D
Explanation: HeatWave is a distributed, scalable, share-nothing in-memory
columnar and query processing engine designed for the fast execution of
analytic queries. It enables you to add a MySQL cluster to your MySQL
database system.
A HeatWave cluster consists of a MySQL database system node and two or more HeatWave nodes. The MySQL database system node includes a plug-in that is responsible for cluster management, loading data into the HeatWave cluster, query scheduling, and returning query results to the MySQL database system. The HeatWave nodes store data in memory and process analytics queries; each HeatWave node consists of an instance of HeatWave.
2. Answer: A
Explanation: Oracle Autonomous Data Warehouse is a cloud data warehouse
service that takes care of all the difficulties of running a data warehouse, dw
cloud, data warehouse center, data security, and data-driven application
development. It automates data warehouse provisioning, configuration,
security, tweaking, scaling, and backup. It comes with tools for self-service
data loading, data transformations, business models, automatic insights, and
built-in converged database capabilities, which make it easier to query
numerous data types and do machine learning research.
3. Answer: C
Explanation: The database transaction in NoSQL is often described in terms of ACID properties. ACID stands for Atomicity, Consistency, Isolation, and Durability. ACID principles ensure database transactions are processed reliably.
4. Answer: B
Explanation: The Autonomous Database from Oracle Cloud Infrastructure
is a fully managed, preconfigured database environment with four workload
types: Autonomous Transaction Processing, Autonomous Data Warehouse,
Oracle APEX Application Development, and Autonomous JSON Database.
You would not have to manage or configure any hardware, and you will not
have to install any software. You can grow the number of CPU cores or
database storage capacity at any moment after provisioning without affecting
availability or performance.
5. Answer: A
Explanation: Oracle Autonomous Database uses machine learning-driven automation, which can save you up to 90% of the cost of monitoring, securing, and maintaining your Oracle databases. The database is provisioned, scaled, tuned, protected, patched, and repaired without the need for user intervention.
6. Answer: B
Explanation: One way to achieve high availability or extreme availability is by implementing fault containment zones. A zone is a physical location that supports high-capacity network connectivity between the storage nodes. Each zone has the same level of physical separation from other zones, such as its own power and communication connections. When configuring your store, it is strongly recommended that you configure it across multiple zones. Having multiple zones provides fault isolation and increases data availability if a single zone encounters a failure.
7. Answer: D
Explanation: MySQL database service is a very popular open-source
service that is used to store enterprise data. This service is optimized for
OLTP; however, it can perform analytics processing (OLAP).
8. Answer: C
Explanation: There are three deployment options available for Oracle Autonomous Database:
Autonomous – Shared: You provision and manage only the Autonomous
database, and Oracle will handle the infrastructure it runs on. It is
supported both for Autonomous Transaction Processing (ATP) database
and Autonomous Data Warehouse (ADW).
Autonomous – Dedicated: You can configure your environment very
similar to the manner that you may currently have in the datacenter. You
have exclusive use of Exadata hardware. As shared, it supports both the
transaction processing and the data warehouse. This feature provides you
flexibility and allows you to have the Oracle database and Autonomous
Database Cloud Service wherever you want and need it.
Cloud@Customer Infrastructure – Oracle provides Cloud@Customer;
you have the Oracle database cloud service running in your datacentre.
You may want that if you require data sovereignty, data regulatory, and
network latency.
9. Answer: A
Explanation: Oracle database security solutions for encryption, key
management, data masking, privileged user access controls, activity
monitoring, and auditing let you assess, detect, and avoid data security
threats. They reduce the risk of a data breach while also making compliance
easier and faster.
10. Answer: D
Explanation: Isolation is considered serializable, meaning that transactions execute as if in a distinct serial order, with no transactions occurring in tandem. Any reads and writes performed on the database will not be impacted by the reads or writes of separate transactions occurring on the same database. Therefore, no transaction will affect the others.
11. Answer: C
Explanation: A physical standby database provides a physically identical replica of the primary database, including database structures on disk that are block-for-block identical to the
primary database. The database schema is the same, including indexes. Redo
Apply, which recovers the redo data received from the primary database and
applies it to the physical standby database, keeps the physical standby
database synced with the primary database.
On a limited basis, a physical standby database can be utilized for business objectives other than disaster recovery.
12. Answer: A
Explanation: Oracle Database customers can use their existing licenses with
Oracle Cloud Infrastructure using Bring Your Own License (BYOL). It is
worth noting that Oracle Database customers are still responsible for
adhering to the license restrictions that apply to their BYOLs, as specified in
their program order.
13. Answer: E
Explanation: The database is created by Autonomous Database, which also
handles the following maintenance tasks:
Backing up the database
Patching the database
Upgrading the database
Tuning the database
14. Answer: B
Explanation: Oracle's Exadata is a database machine that provides
customers with optimized capabilities for enterprise-level databases and their
associated workloads. Exadata is a Sun Microsystems-developed composite
database server machine that employs Oracle database software and Sun
Microsystems-developed hardware server equipment.
15. Answer: C
Explanation: Fleet administrators allocate budget by department and are
responsible for the creation, monitoring, and management of the
Autonomous Exadata infrastructure, the Autonomous Exadata VM clusters,
and the Autonomous container databases. The fleet administrators must have
an Oracle account or user to perform these duties. The user has permission to
manage these resources and be permitted to use network resources that need
to be specified when you create these other resources.
Chapter 08: Design for Hybrid Cloud Architecture
1. Answer: A
Explanation: Primarily, three building blocks form a physical data center.
Compute – includes server
Network – used for switching, routing security, etc.
Storage – used for storing data
2. Answer: C
Explanation: You can publish IPv6 addresses allocated by Oracle on the
internet for public connection or utilize them only for private connectivity
within and between your Virtual Cloud Networks (VCNs) or on-premises
networks with IPv6 support in OCI (No NAT required). Create and deploy
apps that can communicate through VPN or FastConnect from IPv6
endpoints to IPv6-enabled compute instances and resources linked to on-
premises networks. Your IPv6 customers can also connect to a virtual IP
address for web load balancing and be routed to IPv4 web application
instances. Customers can now make their applications available to IPv6
end-users over the internet.
3. Answer: B
Explanation: Oracle Cloud VMware Solution is based on some of the core
components of VMware Cloud Foundation, vSphere, NSX, and vSAN.
With this integration, you can achieve a wide range of features, like
optimizing east-west traffic, load balancing your workloads, or storage
services like rate protection, deduplication, compression, etc.
4. Answer: D
Explanation: IPv6 addressing model has specific scope in which the device
is defined. A scope is a topological area within the IPv6 address. It can be
used as a unique identifier for the interface or set of interfaces. The scopes
can be:
Global
Site-local
Link-local
5. Answer: B
Explanation: vSphere is a distributed software system with features
enabled by Hypervisor ESXi and a management server, vCenter, working
together.
vSphere abstracts virtual machines from the hardware by presenting a complete x86 platform to the virtual machine guest operating system.
6. Answer: C
Explanation: NSX Manager – This node hosts the API services. It also provides a graphical user interface and REST APIs for creating, configuring, and monitoring NSX-T data center components.
7. Answer: A and D
Explanation: Oracle and Microsoft have teamed up to deliver Oracle Cloud
and Microsoft Azure with low-latency, private connectivity. This
relationship provides you with a cross-cloud experience that is highly
optimized, safe, and unified.
8. Answer: B
Explanation: In OCVS, the NSX-T overlay manages the traffic flow between the VMs, and between the VMs and the other resources in the solution.
NSX-T works by implementing three separate integrated planes.
Management
Control
Data
9. Answer: C
Explanation: Control Plane – The control plane computes the runtime
state of the system based on the configuration provided by the management
plane. It is also responsible for disseminating topology information reported
by the data plane elements and pushing the stateless configuration to the
forwarding engine.
NSX-T splits the control plane into two different parts.
Central Control Plane (CCP) – The CCP nodes are implemented as a
cluster of virtual machines, and this form factor provides both
redundancy and scalability of resources. The CCP is logically separated
from all data plane traffic, which means any failure in the control plane
does not affect the existing data plane operation
Local Control Plane (LCP) – The LCP runs on the transport nodes. It is
adjacent to the data plane controlled and connected to the CCP. The
transport nodes are the host that runs the local control plane daemons and
the forwarding engines implemented by the NSX data plane. The LCP is
responsible for programming the forwarding entries and viable rules of
the data plane. NSX Manager and NSX Controller are bundled together
in a virtual machine called the NSX manager appliance
10. Answer: A
Explanation: Tier 0 gateway can support the communication between the
VLAN-backed workload and the back overlay VLAN; however, there could
be scenarios where layer two connectivity is required between the VMs and
the physical devices. For such functionality, NSX-T introduces the NSX-T
Bridge, a service that can be instantiated on edge to connect an NSX-T
logical segment with a traditional VLAN and layer2.
11. Answer: B
Explanation: There are primarily two components of HCX that you start.
HCX Manager
HCX Connector
The HCX manager and HCX connector together build the service mesh. A
service mesh is built using a set of appliances deployed, which creates an
effective service configuration to be used by the source and destination.
HCX Interconnect is a mandatory appliance, while HCX WAN Optimization, Network Extension, and all other appliances are optional.
12. Answer: D
Explanation: Oracle created a VMware-certified Software-Defined Data
Center (SDDC) implementation for usage within Oracle Cloud
Infrastructure in collaboration with VMware. Oracle Cloud Infrastructure
hosts a highly available VMware SDDC in this SDDC installation, dubbed
the Oracle Cloud VMware Solution. It also enables you to migrate all of
your on-premises VMware SDDC workloads to Oracle Cloud VMware
Solution smoothly.
13. Answer: B and E
Explanation: You need to know a few prerequisites before starting the
SDDC provisioning.
Compartment
Virtual Cloud Network
14. Answer: A
Explanation: Mobility Optimized Networking (MON) is an HCX Enterprise feature.
This feature allows you to route the traffic of a migrated virtual machine within OCVS without a network trombone
A network trombone occurs when all the workloads on an extended
network at the destination are routed through the on-premises router
gateway
MON ensures that the traffic remains symmetric and uses an optimal path
to reach its destination
15. Answer: C
Explanation: vSAN is the hyper-converged storage part of the solution.
The term hyper-converged means having a high-performance NVMe or
“all-flash” based drives attached directly to the bare metal compute and
becomes the primary storage for your VMs.
With having a software-defined storage approach, Oracle can pool these
direct-attached devices across the vSphere cluster to create a
distributed/shared datastore for the VMs.
16. Answer: D
Explanation: A witness node is a dedicated host used to monitor an object's availability. When an object has at least two replicas, a real failure could otherwise cause the data object to be active in both vSAN fault domains at once, which can be disastrous for any application. Therefore, to avoid split-brain conditions, a vSAN witness node is configured.
17. Answer: B
Explanation: Hybrid Cloud Extension is an application mobility platform
that can simplify the migration of application workloads with rebalancing
and help you achieve business continuity between an on-premises and
Oracle Cloud VMware Solution.
18. Answer: E
Explanation: vMotion captures the virtual machine's active memory, its execution state, and its IP and MAC addresses, and transfers them to the destination. It is also referred to as VMware's live migration feature; therefore, there is no downtime for the VM.
Chapter 09: Migrate On-Premises Workloads to OCI
1. Answer: C
Explanation: Each data transfer appliance enables organizations to migrate
up to 150 terabytes of data. The appliance should be configured and
connected to the on-premises network. After creating a transfer job, the
appliance can be requested via the Oracle Cloud Infrastructure Management
console.
2. Answer: A
Explanation: Data Transfer Disk is an offline data transfer solution. This
service allows the customer to use their own SATA or USB drive and send
up to 10 drives and 100 terabytes of data per transfer package to the Oracle
data transfer site. Then, site operators upload the files into the organization’s
designated object storage bucket. Users are free to move the uploaded data
into other Oracle Cloud Infrastructure (OCI) services as needed.
3. Answer: B
Explanation: ZDM adheres to Oracle's Maximum Availability Architecture
(MAA) principles and includes tools like GoldenGate and Data Guard to
assure High Availability, as well as an online migration strategy that makes
use of technologies like Recovery Manager, Data Pump, and Database Links.
4. Answer: A
Explanation: Zero Downtime Migration service provides two ways to
migrate data from source to target.
Physical online migration
Logical online migration
5. Answer: C
Explanation: Once a secure connection has been established, the
organizations can use OCI Storage Gateway to securely create copies of on-
premises files and place them into the Oracle Cloud Object Storage without
modifying the applications.
6. Answer: D
Explanation: Migration methods provide optimum performance in the
Oracle with the least cost. Oracle supports both offline and online
migrations. The core use cases are to get various on-premises and third-party
cloud databases into the cloud.
Additionally, Oracle is based on Zero Downtime Migration and uses
GoldenGate and Data Pump as the core technologies underneath. When
selecting a migration method for moving your database to the cloud, take the
following into consideration:
Database Version
Database Size
High Availability (HA)
7. Answer: B
Explanation: Oracle provides six different data sources from the source to
the target database.
Offline Migration
Online Migration
Physical Migration
Logical Migration
Direct Migration
Indirect Migration
8. Answer: A
Explanation: In online migration, you do a one-time snapshot, and at the
same time, you will also start replications. Anything that changes to your
source databases will be continuously replicated with the target. Therefore,
your application can stay online, and at that moment, you are doing the
cutover when you have downtime.
9. Answer: D
Explanation: Zero Downtime Migration automates the whole migration
process, significantly minimizing the risk of human error. ZDM uses Oracle
Database-integrated High Availability (HA) technologies.
10. Answer: C
Explanation: Oracle GoldenGate is a piece of software that allows you to
replicate, filter, and alter data between databases.
11. Answer: B
Explanation: DMS is free for all common use cases; the pricing includes the service itself, the service's environment, and the infrastructure that the service runs on. There is a GoldenGate Marketplace license specific to migration, and it is free for the first six months.
12. Answer: A
Explanation: Database Migration is a managed cloud service that runs
independently of your tenancy and resources. The service communicates
with your resources using Private Endpoints (PEs) and runs as a multitenant
service under a Database Migration service tenancy. Database Migration is
in charge of managing PEs.
13. Answer: D
Explanation: The validation job verifies that the prerequisites and connections for the source and target databases, Oracle GoldenGate instances, and Oracle Data Pump are correct. A validation job is produced when you evaluate the migration.
14. Answer: B
Explanation: Once you continue from there, you will go to another phase
called switchover. At that point, the user deactivates the source applications.
The user waits for the switchover to complete, and the switchover phase will
apply any leftover transactions. After it is completed, a user can activate the
target application.
15. Answer: A
Explanation: Agent contains the information needed to link Oracle Cloud
Infrastructure to a source database that is not directly accessible on OCI,
such as a database in a different region or tenancy, an on-premises database,
or a cloud database that was manually deployed.
Chapter 10: Design for Security and Compliance
1. Answer: A and B
Explanation: Oracle is responsible for the security of the underlying cloud
infrastructure (such as datacenter facilities, hardware, and software systems),
while you are responsible for securing your workloads and configuring your
services (such as compute, network, storage, and database) securely in a
shared, multi-tenant compute environment.
2. Answer: D
Explanation: In Oracle, Identity and Access Management (IAM) service
can:
Manage user access and policies
Manage Multi-Factor Authentication (MFA)
Single sign-on to identity providers
3. Answer: C
Explanation: There are three user types available. These are:
Federated User
Local User
Provisioned/Synchronized User
4. Answer: A
Explanation: The Oracle Cloud Infrastructure Web Application Firewall
(WAF) is a global security solution protecting applications from dangerous
and undesirable internet traffic. It is cloud-based and PCI compliant. WAF
may secure any internet-facing endpoint by enforcing the same set of rules
across all of a customer's apps.
5. Answer: B
Explanation: Bastions allows authorized users to connect to target resources
using Secure Shell (SSH) sessions from defined IP addresses. Users can
communicate with the target resource using any software or protocol that
SSH supports once connected. To connect to a Windows host, for example,
you can use the Remote Desktop Protocol (RDP), or to connect to a
database, you can utilize Oracle Net Services.
6. Answer: C
Explanation: The whole configuration of your WAF service, including
origin management, protection rule settings, and bot detection features, is
referred to as WAF policies.
7. Answer: D
Explanation: Federated users select which identity provider they want to
use for sign-in and are then redirected to that identity provider's sign-in
experience for authentication. After inputting their login and password, the
Identity Provider (IdP) authenticates them and redirects them back to the
Oracle Cloud Infrastructure Console.
8. Answer: A
Explanation: In real time, Oracle Cloud Infrastructure WAF uses a sophisticated DNS data-driven algorithm to select the optimum global Point of Presence (PoP) to serve a specific user. As a result, users are routed around global network difficulties and potential latency while receiving the best uptime and service levels available.
9. Answer: B
Explanation: CAPTCHA protection can be used to restrict access to a URL
that only humans should visit. You can also customize the CAPTCHA
challenge text for each URL.
10. Answer: D
Explanation: The WAF service has numerous features that allow you to
detect bot traffic and either block or allow it to access your web apps.
JavaScript Challenge, CAPTCHA Challenge, and GoodBot whitelists are
some of the bot control tools.
11. Answer: C
Explanation: The Oracle sign-up process creates your users in two different
identity systems.
Oracle Identity Cloud Service (IDCS)
Oracle Cloud Infrastructure’s own native identity system called Identity
and Access Management (IAM) service
12. Answer: A
Explanation: A federated user is someone who signs in to use the OCI
console through a federated IdP. These users are created and managed by an
admin in the IdP and use SSO to sign in to the console.
The federated users are granted access to OCI based on their membership in
groups mapped to the OCI groups.
13. Answer: D
Explanation: Identity and Access Management (IAM) for Oracle Cloud
Infrastructure allows you to manage who has access to your cloud resources.
You can control which users have access to specific resources and what
kind of access they have.
14. Answer: B
Explanation: You can develop and maintain rules for internet threats,
including Cross-Site Scripting (XSS), SQL Injection, and other OWASP-
defined vulnerabilities using WAF. Unwanted bots can be mitigated, while
desired bots can be admitted selectively. Access rules can be set up to restrict
access based on location or the request's signature.
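A location-based access rule of the kind described here can be sketched as JSON; the field names follow the legacy WAAS access-rule model, and the rule name and country value are hypothetical:

```
{
  "name": "block-example-country",
  "criteria": [
    { "condition": "GEO_IS", "value": "XX" }
  ],
  "action": "BLOCK"
}
```

The criteria list is matched against each request, and the action (such as ALLOW, DETECT, or BLOCK) is applied when all criteria match.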
15. Answer: C
Explanation: Actions are items that represent one or more of the following:
Allow: An action that skips all remaining rules in the current module if a
matching rule is found
Check: An action that does not halt the current module's rule execution.
Instead, it creates a log message that records the outcome of rule
execution
Return HTTP response: This action returns a specific HTTP response
Chapter 11: Real-World Architecture
1. Answer: C
Explanation: Oracle Cloud Infrastructure (OCI) is a secure public cloud
infrastructure designed for mission-critical workloads.
2. Answer: C
Explanation: An Availability Domain (AD) within a region comprises one or
more data centers. A region is made up of three availability domains.
3. Answer: A
Explanation: A fault domain is a collection of hardware and systems within
an availability domain.
4. Answer: B
Explanation: In a hub-and-spoke network, also known as a star network, a
central component is connected to several networks around it. The overall
architecture resembles a wheel, with many spokes connecting a central hub
to points along the wheel's rim. This type of architecture can be used to
create separate development and production environments. Setting up this
topology in a traditional on-premises data center can be costly; in the
cloud, however, there is no additional expense.
5. Answer: D
Explanation: A fault domain is a collection of hardware and systems within
an availability domain. There are three fault domains in each availability
domain. Within a single availability domain, fault domains allow you to
distribute your instances such that they are not all on the same physical
hardware.
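The distribution idea can be sketched as a simple round-robin assignment. The fault domain names below follow OCI's FAULT-DOMAIN-n naming convention; the function itself is illustrative, not an OCI API:

```python
# Illustrative only: spread instances round-robin across the three
# fault domains of an availability domain, so that no single piece of
# physical hardware hosts all of them.
FAULT_DOMAINS = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]

def assign_fault_domains(instance_names):
    """Map each instance name to a fault domain, round-robin."""
    return {name: FAULT_DOMAINS[i % len(FAULT_DOMAINS)]
            for i, name in enumerate(instance_names)}

placement = assign_fault_domains(["web-1", "web-2", "web-3", "web-4"])
# web-1 and web-4 share FAULT-DOMAIN-1; web-2 and web-3 land in the others.
```

When launching instances, the chosen fault domain is passed as a placement option, so a four-node web tier ends up with at most two nodes per fault domain.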
6. Answer: A
Explanation: The Oracle Architecture Center is a resource library that
allows developers and IT professionals to optimize and personalize their
cloud, hybrid, and on-premises deployments.
7. Answer: C
Explanation: Availability Domains are isolated, fault-tolerant Oracle
data centers that house cloud resources, including instances, volumes, and
subnets. There are multiple Availability Domains in a region.
8. Answer: A
Explanation: To add high availability within the region, you might
introduce a standby database in another availability domain using Oracle
Data Guard.
9. Answer: D
Explanation: Compartments enable you to organize your resources and to
delegate cost controls and administrative access. These logical containers
hold your resources; they are not bound to a specific data center and can
span regions.
10. Answer: A
Explanation: Security lists are used to control network traffic to and
from each subnet. Every subnet has a route table with rules for routing
traffic to destinations outside the VCN.
The hub VCN includes an internet gateway for network traffic to and from
the public internet and a dynamic routing gateway (DRG) for private
connectivity with your on-premises network, which you may set up using
Oracle Cloud Infrastructure FastConnect, IPSec VPN, or both.
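The route rules described here can be sketched as JSON in the shape used by OCI route tables; the OCIDs below are placeholders, and the exact destinations depend on your topology:

```
[
  {
    "destination": "0.0.0.0/0",
    "destinationType": "CIDR_BLOCK",
    "networkEntityId": "ocid1.internetgateway.oc1..<placeholder>"
  },
  {
    "destination": "10.0.0.0/8",
    "destinationType": "CIDR_BLOCK",
    "networkEntityId": "ocid1.drg.oc1..<placeholder>"
  }
]
```

The first rule sends internet-bound traffic to the internet gateway, while the second sends on-premises-bound traffic to the DRG.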
11. Answer: A
Explanation: A resource belongs to exactly one compartment.
12. Answer: B
Explanation: Resources can be added to and removed from a compartment.
13. Answer: C
Explanation: HPC on Oracle Cloud Infrastructure (OCI) provides powerful,
cost-effective computing capabilities for solving complex mathematical
and scientific problems across various industries. OCI's bare metal
servers, combined with Oracle's cluster networking, enable ultra-low-latency
Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) v2
(under two microseconds of latency across clusters of tens of thousands of
cores).
14. Answer: B
Explanation: The architecture of OCI has several main components that are
essential for anyone getting started with it:
Regions
Availability Domains
Fault Domains
High Availability Design
Compartments
15. Answer: A
Explanation: To navigate to the Oracle Cloud Infrastructure Console, you
should use https://cloud.oracle.com.
Acronyms

AAA Authentication, Authorization, and Auditing


ACID Atomicity, Consistency, Isolation, Durability
ACL Access Control List
AD Availability Domain
ADB Autonomous Database
ADW Autonomous Data Warehouse
AES Advanced Encryption Standard
AMQP Advanced Message Queuing Protocol
API Application Programming Interface
APM Application Performance Monitoring
ASN Autonomous System Number
ATP Autonomous Transaction Processing
AWS Amazon Web Service
BICC Business Intelligence Cloud Connector
BYOIP Bring Your Own IP
BYOL Bring Your Own License
BYOSL Bring Your Own Software and License
CD Continuous Deployment
CI Continuous Integration
CIDR Classless Inter-Domain Routing
CIS Center for Internet Security
CLI Command Line Interface
CMEK Customer-Managed Encryption Key
CNCF Cloud Native Computing Foundation
CPAT Cloud Premigration Advisor Tool
CPU Central Processing Unit
CSP Cloud Service Provider
CRUD Create, Read, Update, and Delete
CVE Common Vulnerabilities and Exposures
DBaaS Database as a Service
DB Database
DiD Defense in Depth
DML Data Manipulation Language
DRS Distributed Resource Scheduler
ECDSA Elliptic Curve Digital Signature Algorithm
ExaC@C Exadata Cloud@Customer
FaaS Function as a Service
FD Fault Domain
FDKs Function Development Kits
FIPS Federal Information Processing Standards
GDPR General Data Protection Regulation
GPU Graphics Processing Unit
HIPAA Health Insurance Portability and Accountability Act
HPA Horizontal Pod Autoscaling
HPC High Performance Computing
HTTP HyperText Transfer Protocol
HTTPS HyperText Transfer Protocol Secure
IAM Identity and Access Management
I/O Input/Output
IoT Internet of Things
IP Internet Protocol
MAA Maximum Availability Architecture
MFA Multi-Factor Authentication
MON Mobility Optimized Networking
MQL Monitoring Query Language
MQTT Message Queuing Telemetry Transport
MR Multi-Region
NoSQL Not Only SQL
NVMe Non-Volatile Memory express
OCA Oracle Cloud Agent
OCI Oracle Cloud Infrastructure
OCI Open Container Initiative
OCI Gen2 Oracle Cloud Infrastructure Generation 2
OCIR Oracle Cloud Infrastructure Registry
OKE Oracle Kubernetes Engine
OLAP Online Analytical Processing
OLTP Online Transaction Processing
ONS Oracle Notification Service
OS Operating System
OSS OCI Streaming Service
PaaS Platform as a Service
PCI Payment Card Industry
PL Processor License
PoP Point of Presence
RDP Remote Desktop Protocol
RDMA Remote Direct Memory Access
RPC Remote Procedure Call
SaaS Software as a Service
SASL Simple Authentication and Security Layer
SDDC Software-Defined Data Center
SDK Software Development Kit
SIEM Security Information and Event Management
SLA Service Level Agreement
SQL Structured Query Language
SIM Security Incident Management
SSH Secure Shell
SSL Secure Socket Layer
TB Terabyte
TCP Transmission Control Protocol
vCLS vSphere Cluster Service
VCN Virtual Cloud Network
VM Virtual Machine
VPA Vertical Pod Autoscaling
vSAN virtual Storage Area Network
UI User Interface
UX User Experience
URL Uniform Resource Locator
XSS Cross-site Scripting
ZDM Zero Downtime Migration
References
https://www.oracle.com/cloud/architecture-center/
https://www.oracle.com/cloud/data-regions/
https://docs.oracle.com/en/solutions/cis-oci-benchmark/index.html#GUID-
4572A461-E54D-41E8-89E8-9576B8EBA7D8
https://docs.oracle.com/en/solutions/deploy-hpc-on-oci/index.html#GUID-
EAE35729-1B0B-469C-A70A-7CAF3A0B20A8
https://github.com/oracle-quickstart/oci-arch-hub-spoke
https://education.oracle.com/products/trackp_OCICAP2021OPN
https://krrai77.medium.com/building-blocks-of-oracle-cloud-infrastructure-
oci-753b07599e7b
https://docs.oracle.com/en/solutions/oci-best-practices-
resilience/index.html#GUID-AD19E46D-0C1C-4FC4-92FE-
7DFE7B00FAF6
https://docs.oracle.com/en/solutions/deploy-hpc-on-oci/index.html
https://blogs.oracle.com/cloud-infrastructure/post/oracle-builds-on-cloud-
momentum-with-five-new-regions-worldwide
https://www.oracle.com/cloud/hpc/
https://www.oracle.com/a/ocom/docs/oci-operations-associate-certification-
2020-study-guide.pdf
http://oracle-blogs-test.compendiumblog.com/oci-architecture-center%3A-a-
single-view-for-technical-content-about-oracle-cloud
https://database-heartbeat.com/2020/12/31/high-availability-disaster-
recovery-in-oracle-cloud-infrastructure/
https://docs.oracle.com/en/solutions/design-ha/index.html#GUID-
76ECDDB4-4CB1-4D93-9A6D-A8B620F72369
https://www.thatfinnishguy.blog/2021/02/22/oci-high-availability-designs-
with-availability-domains/
https://k21academy.com/oracle-compute-cloud-services-iaas/oracle-cloud-
infrastructure-availability-domains-fault-domains/
https://docs.oracle.com/en-
us/iaas/Content/Identity/Tasks/managingregions.htm
https://k21academy.com/1z0-1072/oracle-cloud-infrastructure-new-region-
added/
https://avinetworks.com/glossary/high-availability/
https://blogs.oracle.com/cloud-infrastructure/post/best-practices-for-
compartments
https://docs.oracle.com/en-
us/iaas/Content/General/Concepts/regions.htm#:~:text=About%20Regions%20and%20Av
https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm
https://docs.oracle.com/en-us/iaas/tools/oci-
cli/2.9.1/oci_cli_docs/cmdref/iam/availability-domain.html
https://docs.oracle.com/en-us/iaas/Content/GSG/Concepts/console.htm
https://www.ateam-oracle.com/post/oracle-cloud-infrastructure-
compartments#:~:text=By%20default%2C%20any%20OCI%20tenancy,of%20the%20def
https://docs.oracle.com/en-
us/iaas/Content/Identity/Tasks/managingcompartments.htm
https://blogs.oracle.com/developers/post/introduction-to-the-key-concepts-of-
oracle-cloud-infrastructure
https://docs.oracle.com/en-
us/iaas/Content/Bastion/Concepts/bastionoverview.htm#:~:text=Bastions%20are%20logic
https://docs.oracle.com/en-us/iaas/Content/Bastion/home.htm
https://docs.oracle.com/en/solutions/use-bastion-service/index.html#GUID-
1B455658-0988-42C4-A52F-4757A3201232
https://docs.oracle.com/en/cloud/paas/content-cloud/administer/understand-
your-deployment-architecture-options.html#GUID-661F896D-0D25-4BD6-
A9C8-DC4CA787F43E
https://www.oracle.com/security/cloud-security/bastion/
https://learning.oracle.com/public_content/ohr/DL/AA_Vault.pdf
https://docs.oracle.com/en-
us/iaas/Content/Bastion/Tasks/managingbastions.htm
https://docs.oracle.com/en-us/iaas/cloud-guard/using/part-start.htm
https://www.oracle.com/a/ocom/docs/oracle-cloud-infrastructure-waf-data-
sheet.pdf
https://blogs.oracle.com/cloud-infrastructure/post/simplify-secure-access-
with-oci-bastion-service
https://www.zdnet.com/article/oracle-cloud-guard-maximum-security-zones-
now-generally-available/
https://www.oracle.com/a/ocom/docs/bastion-hosts.pdf
https://www.oracle.com/security/cloud-security/security-zones/
https://www.oracle.com/pk/security/cloud-security/security-zones/faq/
https://docs.oracle.com/en-us/iaas/Content/WAF/Concepts/overview.htm
https://docs.oracle.com/en-
us/iaas/Content/KeyManagement/Concepts/vaultstarthere.htm
https://www.ericsson.com/en/cloud-native?
gclid=CjwKCAjw49qKBhAoEiwAHQVTo7VWgPmkb-
AROiTPIn2IVsbNzhYU7Jl6009PwsvFz_uADqyDJwKvVhoCfUYQAvD_BwE&gclsrc=a
https://docs.oracle.com/en-us/iaas/pl-sql-sdk/doc/vaults-package.html
https://docs.oracle.com/en-
us/iaas/Content/Security/Concepts/security_overview.htm
https://docs.oracle.com/en-
us/iaas/Content/Logging/Concepts/loggingoverview.htm
https://docs.oracle.com/en-us/iaas/Content/KeyManagement/home.htm
https://docs.oracle.com/en-
us/iaas/Content/KeyManagement/Tasks/managingvaults.htm
https://docs.oracle.com/en-
us/iaas/Content/KeyManagement/Reference/developing-with-vault.htm
https://www.oracle.com/security/cloud-security/web-application-firewall/
https://www.oracle.com/a/ocom/docs/security/oci-web-application-
firewall.pdf
https://www.oracle.com/security/cloud-security/web-application-firewall/faq/
https://docs.oracle.com/en-us/iaas/Content/WAF/Concepts/gettingstarted.htm
https://docs.oracle.com/en-us/iaas/Content/WAF/Concepts/landing.htm
https://docs.oracle.com/en-
us/iaas/Content/KeyManagement/Concepts/keyoverview.htm
https://docs.oracle.com/en-
us/iaas/Content/Monitoring/Concepts/monitoringoverview.htm
https://docs.oracle.com/en-
us/iaas/scanning/using/overview.htm#:~:text=Oracle%20Vulnerability%20Scanning%20S
https://docs.oracle.com/en-us/iaas/security-zone/using/security-zones.htm
https://blogs.oracle.com/pcoe/post/introducing-oracle-cloud-infrastructure-
vulnerability-scanning-service
https://www.oracle.com/security/cloud-security/vulnerability-scanning-
service/
https://docs.oracle.com/en-us/iaas/scanning/home.htm
https://docs.oracle.com/en-us/iaas/Content/SecurityAdvisor/home.htm
https://docs.oracle.com/en-us/iaas/cloud-guard/using/index.htm
https://docs.oracle.com/en-us/iaas/application-performance-
monitoring/doc/application-performance-monitoring.html
https://learn.oracle.com/ols/home/oracle-cloud-infrastructure-learning-
subscription/35644#filtersGroup1=.f1778%2C.f2954&filtersGroup2=&filtersGroup3=&fi
https://learn.oracle.com/ols/course/developing-cloud-native-applications-on-
oci/35644/98824/141811
https://www.oracle.com/devops/logging/
https://docs.oracle.com/en-us/iaas/application-performance-
monitoring/index.html
https://docs.oracle.com/en-us/iaas/operations-
insights/index.html#:~:text=Operations%20Insights%20provides%20360%2Ddegree,issue
https://www.oracle.com/cloud/cloud-native/what-is-cloud-native/
https://www.functionize.com/blog/5-ways-cloud-native-application-testing-
is-different-from-testing-on-premises-software/
https://www.oracle.com/manageability/application-performance-monitoring/
https://docs.oracle.com/en-us/iaas/operations-insights/doc/operations-
insights.html
https://www.oracle.com/manageability/operations-insights/
https://docs.oracle.com/en-us/iaas/operations-insights/doc/get-started-
operations-insights.html
https://docs.oracle.com/en-us/iaas/logging-analytics/index.html
https://www.oracle.com/manageability/logging-analytics/
https://docs.oracle.com/en/cloud/paas/logging-
analytics/logqs/#before_you_begin
https://www.coursera.org/lecture/oracle-cloud-infrastructure-architect-
professional/planning-your-data-migration-to-oci-WlgLI
https://www.infosys.com/industries/communication-
services/documents/oracle-data-migration-comparative-study.pdf
https://www.oracle.com/database/technologies/cloud-migration.html
https://www.oracle.com/pk/cloud/migrate-applications-to-oracle-cloud/
https://www.oracle.com/a/tech/docs/oracle-zdm-technical-brief.pdf
https://www.oracle.com/cloud/migrate-data-and-databases-to-oci/
https://www.oracle.com/assets/database-migration-service-2392549.pdf
https://docs.oracle.com/en/cloud/paas/database-migration/dmsus/getting-
started-oracle-cloud-infrastructure-database-migration.html
https://docs.oracle.com/en-us/iaas/database-migration/doc/overview-oracle-
cloud-infrastructure-database-migration.html
https://www.oracle.com/technical-resources/articles/cloud/migrate-db-to-
cloud-with-datapump.html
https://docs.oracle.com/en-
us/iaas/Content/ResourceManager/Concepts/resourcemanager.htm
https://www.oracle.com/devops/resource-manager/
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/cloud/vmware-
ocvs-network-layout-reference-architecture-for-vsphere.pdf
https://docs.oracle.com/en-us/iaas/Content/Network/Concepts/ipv6.htm
https://docs.oracle.com/cd/E19455-01/806-0916/6ja8539bc/index.html
https://docs.oracle.com/en/solutions/learn-azure-oci-
interconnect/index.html#GUID-50C5E868-7E8B-4C67-A226-
9B0C1559DECA
https://docs.oracle.com/en/solutions/learn-azure-oci-
interconnect/index.html#GUID-FBE38C70-A4CF-40C5-A37A-
121241D21199
https://www.oracle.com/technetwork/database/enterprise-
edition/oracledatabaseipv6sod-4007245.pdf
https://fr.coursera.org/lecture/oracle-cloud-infrastructure-architect-
professional/introduction-to-ipv6-with-oracle-nWBvi
https://docs.oracle.com/en-
us/iaas/Content/ResourceManager/Concepts/resource-manager-and-
terraform.htm
https://www.oracle.com/cloud/multicloud/hybrid-cloud/what-is-hybrid-cloud/
https://www.vmware.com/products/hcx.html
https://blogs.oracle.com/cloud-infrastructure/post/ipv6-on-oracle-cloud-
infrastructure
https://docs.oracle.com/cd/E18752_01/html/816-4554/ipv6-overview-7.html
https://docs.oracle.com/en/solutions/migrate-vmware-workloads-
oraclecloud/configure-oracle-cloud-vmware-solution-hcx-components.html
https://www.terraform.io/downloads.html
https://docs.oracle.com/en-us/iaas/releasenotes/changes/31b14208-edc4-
48c2-bd47-0c4956900960/
https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/tf-provider/01-
summary.htm
https://blogs.oracle.com/cloud-infrastructure/post/deploy-vmware-hcx-
connector-in-your-on-premises-vmware-environment-and-establish-a-site-
pairing-with-oracle-cloud-vmware-solution
https://learn.oracle.com/ols/course/design-for-hybrid-cloud-
architecture/35644/86768/85029
https://learning.oracle.com/public_content/ohr/DL/AA_ADB.pdf
https://docs.oracle.com/en-
us/iaas/Content/Database/Concepts/adboverview.htm#deploymenttypes
https://www.oracle.com/database/technologies/datawarehouse-bigdata/adb-
faqs.html#PandL-BOOKMARK
https://docs.oracle.com/en-
us/iaas/Content/Monitoring/Concepts/monitoringoverview.htm
https://www.oracle.com/autonomous-database/autonomous-data-warehouse/
https://www.oracle.com/mysql/
https://docs.oracle.com/cd/E11882_01/server.112/e10575/tdpsg_intro.htm
https://docs.oracle.com/en-us/iaas/Content/Database/Tasks/adbcreating.htm
https://docs.oracle.com/cd/B19306_01/server.102/b14239/concepts.htm#g1049956
https://docs.oracle.com/en-
us/iaas/Content/Database/Tasks/usingdataguard.htm
https://www.oracle.com/autonomous-database/autonomous-transaction-
processing/#:~:text=Oracle%20Autonomous%20Transaction%20Processing%20is,encryp
https://docs.oracle.com/en-us/iaas/mysql-database/doc/db-systems.html
https://docs.oracle.com/en-us/iaas/mysql-database/doc/overview-mysql-
database-service.html
https://www.oracle.com/database/dataguard/
About Our Products
Other products from IPSpecialist LTD regarding CSP technology
are:

AWS Certified Cloud Practitioner Study guide

AWS Certified SysOps Admin - Associate Study guide

AWS Certified Solution Architect - Associate Study guide

AWS Certified Developer Associate Study guide

AWS Certified Advanced Networking – Specialty Study guide

AWS Certified Security – Specialty Study guide

AWS Certified Big Data – Specialty Study guide

AWS Certified Database – Specialty Study guide

AWS Certified Machine Learning – Specialty Study guide

Microsoft Certified: Azure Fundamentals

Microsoft Certified: Azure Administrator


Microsoft Certified: Azure Solution Architect

Microsoft Certified: Azure DevOps Engineer

Microsoft Certified: Azure Developer Associate

Microsoft Certified: Azure Security Engineer

Microsoft Certified: Azure Data Fundamentals

Microsoft Certified: Azure AI Fundamentals

Microsoft Certified: Azure Database Administrator Associate

Google Certified: Associate Cloud Engineer

Google Certified: Professional Cloud Developer


Microsoft Certified: Azure Data Engineer Associate

Microsoft Certified: Azure Data Scientist

Ansible Certified: Advanced Automation

Oracle Certified: OCI Foundations Associate

Oracle Certified: OCI Developer Associate

Oracle Certified: OCI Architect Associate

Oracle Certified: OCI Operations Associate

Kubernetes Certified: Application Developer

Other Network & Security related products from IPSpecialist LTD


are:
CCNA Routing & Switching Study Guide
CCNA Security Second Edition Study Guide
CCNA Service Provider Study Guide
CCDA Study Guide
CCDP Study Guide
CCNP Route Study Guide
CCNP Switch Study Guide
CCNP Troubleshoot Study Guide
CCNP Security SENSS Study Guide
CCNP Security SIMOS Study Guide
CCNP Security SITCS Study Guide
CCNP Security SISAS Study Guide
CompTIA Network+ Study Guide
Certified Blockchain Expert (CBEv2) Study Guide
EC-Council CEH v10 Second Edition Study Guide
Certified Blockchain Expert v2 Study Guide
