
Migrating Oracle databases to Amazon RDS for PostgreSQL with DMS Schema Conversion

This walkthrough gets you started with heterogeneous database migration from Oracle to Amazon RDS
for PostgreSQL. To automate the migration, we use AWS DMS Schema Conversion. This service helps
assess the complexity of your migration and converts source Oracle database schemas and code objects
to a format compatible with PostgreSQL. You then apply the converted code to your target database.
This introductory exercise shows how you can use DMS Schema Conversion for this migration.

At a high level, this migration includes the following steps:

 Use the AWS Management Console to do the following:

o Create a VPC in the Amazon VPC console.

o Create IAM roles in the IAM console.

o Create an Amazon S3 bucket in the Amazon S3 console.

o Create your target Amazon RDS for PostgreSQL database in the Amazon RDS console.

o Store database credentials in AWS Secrets Manager.

 Use the AWS DMS console to do the following:

o Create an instance profile for your migration project.

o Create data providers for your source and target databases.

o Create a migration project.

 Use DMS Schema Conversion to do the following:

o Assess the migration complexity and review the migration action items.

o Convert your source database.

o Apply the converted code to your target database.
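The DMS console steps above can also be scripted with the AWS CLI. The sketch below only prints the commands it would run; the subcommand and parameter names are assumptions based on the DMS API actions (CreateInstanceProfile, CreateDataProvider, CreateMigrationProject), and all resource names are placeholders. Verify the exact syntax with `aws dms help` before running anything.

```shell
# Dry-run sketch: print (do not execute) hypothetical AWS CLI calls that
# mirror the DMS console steps. All names below are placeholders.
PROFILE_NAME="sc-instance-profile"
SOURCE_PROVIDER="oracle-source"
TARGET_PROVIDER="postgresql-target"

CMD_PROFILE="aws dms create-instance-profile --instance-profile-name $PROFILE_NAME"
CMD_SOURCE="aws dms create-data-provider --data-provider-name $SOURCE_PROVIDER --engine oracle"
CMD_TARGET="aws dms create-data-provider --data-provider-name $TARGET_PROVIDER --engine postgres"
CMD_PROJECT="aws dms create-migration-project --migration-project-name oracle-to-postgresql"

# Review the printed commands, fill in connection settings, then run them.
printf '%s\n' "$CMD_PROFILE" "$CMD_SOURCE" "$CMD_TARGET" "$CMD_PROJECT"
```

Each data provider still needs connection settings (host, port, secret ARN) before the project can run; those are omitted here because they depend on your environment.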

This walkthrough takes approximately three hours to complete. Make sure that you delete resources at
the end of this walkthrough to avoid additional charges.

Topics

 Migration overview

 Prerequisites for migrating Oracle databases to Amazon RDS for PostgreSQL with DMS Schema
Conversion

 Step-by-step Oracle databases to Amazon RDS for PostgreSQL with DMS Schema Conversion migration
walkthrough

 Next steps for migrating from Oracle databases to Amazon RDS for PostgreSQL with DMS Schema
Conversion

Migration overview
 This section provides high-level guidance for customers looking to migrate from Oracle to
PostgreSQL using DMS Schema Conversion.
 DMS Schema Conversion automatically converts your source Oracle database schemas and
most of the database code objects to a format compatible with PostgreSQL. This conversion
includes tables, views, stored procedures, functions, data types, synonyms, and so on. Any
objects that DMS Schema Conversion can’t convert automatically are clearly marked. To
complete the migration, you can convert these objects manually.
 At a high level, DMS Schema Conversion operates with the following three components:
instance profiles, data providers, and migration projects.

o An instance profile specifies network and security settings.

o A data provider stores database connection credentials.

o A migration project contains data providers, an instance profile, and migration rules.

AWS DMS uses the data providers and the instance profile to design a process that converts
database schemas and code objects.
 [Diagram: the DMS Schema Conversion process]

For more information, see:

https://docs.aws.amazon.com/dms/latest/sbs/chap-rdsoracle2postgresql.steps.convertschema.html
https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql-full-load-pd_dump.html

PostgreSQL pg_dump and pg_restore utility

pg_dump and pg_restore are native PostgreSQL client utilities that ship with the database
installation. pg_dump produces an archive (or a set of SQL statements) that pg_restore can use to
reproduce the original database object definitions and table data.

The pg_dump and pg_restore utilities are suitable for the following use cases:

 Your database size is less than 100 GB.

 You plan to migrate database metadata as well as table data.

 You have a relatively large number of tables to migrate.

The pg_dump and pg_restore utilities may not be suitable for the following use cases:

 Your database size is greater than 100 GB.

 You want to avoid downtime.

Example

At a high level, you can use the following steps to migrate the dms_sample database.

1. Export data to one or more dump files.

2. Create a target database.

3. Import the dump file or files.

4. (Optional) Migrate database roles and users.

Export Data

You can use the following command to create dump files for your source database.

pg_dump -h <hostname> -p 5432 -U <username> -Fc -b -v -f <dumpfilelocation.sql> -d <database_name>

-h: the hostname of the source server that hosts the database you want to migrate.

-U: the name of a user on the source server.

-Fc: sets the output to a custom-format archive suitable for input into pg_restore.

-b: includes large objects in the dump.

-v: enables verbose mode.

-f: the dump file path.
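As a variant of the command above, the sketch below builds a parallel, directory-format dump; the host, user, and paths are placeholders. Parallel dumps require the directory output format (-Fd), and -j sets the number of parallel jobs.

```shell
# Sketch: parallel, directory-format dump (all names are placeholders).
# -Fd writes a directory archive, which is required for parallel dumps;
# -j 4 runs four dump jobs at once, which can shorten the export for
# databases with many large tables.
SRC_HOST="source.example.com"
DB_NAME="dms_sample"
DUMP_DIR="/tmp/${DB_NAME}_dump"

DUMP_CMD="pg_dump -h $SRC_HOST -p 5432 -U dms_user -Fd -j 4 -b -v -f $DUMP_DIR -d $DB_NAME"
echo "$DUMP_CMD"
```

A directory archive produced this way is restored with pg_restore, which can also use -j to parallelize the import.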

Create a Database on Your Target Instance

First, log in to your target database server.

psql -h <hostname> -p 5432 -U <username> -d <database_name>

-h: the hostname of the target server.

-U: the name of a user on the target server.

-d: the name of a database that already exists on the target.

Then, use the following command to create a database.

create database migrated_database;
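The same database can also be created non-interactively with psql's -c option, which is convenient in scripts. The host and user names below are placeholders.

```shell
# Sketch: create the target database in one non-interactive call.
# -c runs a single SQL command and exits; connect to an existing
# database (here, the default "postgres") to issue CREATE DATABASE.
TGT_HOST="target.example.com"
CREATE_CMD="psql -h $TGT_HOST -p 5432 -U dms_user -d postgres -c \"create database migrated_database;\""
echo "$CREATE_CMD"
```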

Import Dump Files

You can use the following command to import the dump file into your Amazon RDS instance.

pg_restore -v -h <hostname> -U <username> -d <database_name> -j 2 <dumpfilelocation.sql>

-h: the hostname of the target server.

-U: the name of a user on the target server.

-d: the name of the database that you created in the previous step.

-j: the number of parallel restore jobs.

<dumpfilelocation.sql>: the dump file that pg_dump created.
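When restoring into Amazon RDS, keep in mind that the master user is not a PostgreSQL superuser, so ownership and privilege statements in the dump can fail if they reference roles that don't exist on the target. A common workaround, sketched below with placeholder names, is to skip them with --no-owner and --no-privileges.

```shell
# Sketch: restore into RDS while skipping OWNER/GRANT statements that
# may reference roles absent on the target. -j 2 keeps the two parallel
# restore jobs from the command above. All names are placeholders.
RDS_HOST="mydb.example.us-east-1.rds.amazonaws.com"
RESTORE_CMD="pg_restore -v -h $RDS_HOST -U dms_user -d migrated_database -j 2 --no-owner --no-privileges /tmp/dumpfilelocation.sql"
echo "$RESTORE_CMD"
```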

Migrate Database Roles and Users

To export database objects such as roles and users, you can use the pg_dumpall utility.

To generate a script for users and roles, run the following command on the source database.

pg_dumpall -U <username> -h <hostname> -f <dumpfilelocation.sql> --no-role-passwords -g

-h: the hostname of the source server.

-U: the name of a user on the source server.

-f: the dump file path.

-g: dumps only global objects (roles and tablespaces), not databases.

To restore users and roles, run the following command on your target database.

psql -h <hostname> -U <username> -f <dumpfilelocation.sql>

-h: the hostname of the target server.

-U: the name of a user on the target server.

-f: the dump file path.

The pg_dump and pg_restore export and import operations take time to complete. The duration
depends on the following parameters:

 The size of your source database.

 The number of jobs.

 The resources provisioned for the instance that you use to run pg_dump and pg_restore.

For more information, see:

https://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/chap-oracle-aurora-pg.hadr.datapump.html
