Oracle DBA Interview Question & Ans

The document provides a comprehensive list of Oracle DBA interview questions and answers covering various topics such as data storage, database concepts, user connectivity, and database management. Key concepts discussed include the differences between instance and database, the role of the optimizer, and the importance of base tables and data dictionary views. Additionally, it addresses practical scenarios like database installation, user management, and troubleshooting connectivity issues.

Uploaded by Nagesh Giri
Copyright © All Rights Reserved

Oracle DBA interview questions

Explain about data and how do you store data?


 Data is any value that we store for future reference. There are different tools we can use to
store data; the simplest data storage tools are Notepad, MS Excel, MS Access, etc.

How is data storage different from data representation?

 Data storage can be done using any data tool and defines how data is kept. Data
representation, on the other hand, is how data is displayed to a user. For example, we can store
data in the form of a table but represent it in the form of charts!

As a DBA, explain what is a database? (Imp)


 A database is software that allows applications to store and retrieve data quickly. It allows
companies to create a three-tier system where the first tier is users, the second is the application,
and the third tier is the database.

Can you differentiate Instance and Database?

 In Oracle, the instance and the database are two separate components that work together. The
instance resides in RAM and is the way users speak to the database. The user data resides inside
data files, which reside on hard disks.
 Users connect to the database instance, and ultimately the instance speaks with the database.
 An instance can also be defined as the combination of memory structures and background processes.
How does the LRU algorithm impact the database instance?
 RAM works on the LRU (least recently used) algorithm. As the database instance resides in RAM, it
has to follow the same rules as RAM. Hence, the Oracle instance also follows the LRU algorithm.

What is database client?

 A database client is a small piece of software which must be installed on the application server so
that the application can connect to the database server. For any client to connect to an Oracle
database, the Oracle client must be installed.
Explain how user connectivity happens in the database.
 All new user connections land on the listener. The listener hands the incoming connection over to
the database side, where a server process is created and PGA memory is allocated for it. The user
credentials are then verified against the base tables.
There are 100 users connected to database and listener goes down. What will happen?

 Nothing will happen to the existing users. The problem will be only with new database connections.
The listener only comes into the picture when a new database connection is made.

What are base tables?

 Base tables are binary tables inside the database which contain encrypted data. They are also
called metadata because they store data about the other data inside the database. Base tables are
copied to the data dictionary cache under the instance and flushed out as needed. Base tables are
also known as dictionary tables. Any direct modification to base tables can corrupt the database.
 Only Oracle background processes can modify these tables.
 Base tables reside in the SYSTEM tablespace.

Explain about optimizer?

 When we travel from point A to point B, we tend to take the shortest route even though we have
multiple options. In the same way, the optimizer generates the different plans, or ways, in which a
SQL statement can be executed. These plans are known as execution plans. The optimizer then
chooses the best plan based on CPU cost and resources. The job of the optimizer is to generate
execution plans and choose the best one.

What is PGA or Program Global Area? How is it different from SGA?

 Any work inside the server is done through background processes. These processes need some
memory to store basic information. On the database server, one server process is created for
every user connection. These server processes also take some memory. This memory is known as
the PGA.
 SGA is the System Global Area. Anything placed in the SGA is shared with all the users.

What do you understand by In-Memory sort?

 All small data sorting and filtering happens in the PGA. This is known as an in-memory sort.
 If the data set is large, sorting is done in the temp tablespace.
Why does LGWR write before DBWR? (imp)
 In general, recording a transaction is more important than writing its data blocks. Say we have
redo entries on disk and a power failure suddenly happens before the dirty blocks are written to
disk. Oracle can still recover the transactions by reading the redo logs on disk. This is why redo
logs are made permanent first and the dirty blocks are written to disk afterwards.
Can we have multiple DBWR processes in a database?
 In 11g, we can have between 1 and 36 DBWR processes.
 In 12c, 1 to 100.
 In 19c, the limit is similar to 12c (100 max).
Explain about SCN and checkpoint number.

 An SCN is a unique number assigned to the set of redo entries generated by a transaction. It
identifies that all these redo entries are part of one transaction.
 A checkpoint is a database event which synchronizes the database blocks in memory with the
data files on disk. It has two main purposes: to establish data consistency and to enable faster
database recovery.
I would like to perform 100 database installations. How will you do it?
 For such a large number of installations, we can go with silent-mode installation using a response file.
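A minimal sketch of the idea (the key names below are illustrative; in practice you copy the response-file template shipped in the installer's response/ directory and edit it for your environment):

```shell
# Build a small (hypothetical) response-file excerpt; real files have many more keys.
cat > /tmp/db_install.rsp <<'EOF'
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
oracle.install.db.InstallEdition=EE
EOF

# The same file then drives every installation non-interactively, e.g.:
#   ./runInstaller -silent -responseFile /tmp/db_install.rsp
grep '^ORACLE_HOME=' /tmp/db_install.rsp
```

The same response file can be pushed to all 100 servers, which is what makes silent mode practical at scale.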

What are oinstall and dba groups? Why we assign these groups to oracle user?

 The oinstall group provides Oracle software installation permissions to all users in the group.
 The dba group provides Oracle administration permissions to all users in the group.
Is it compulsory to give the group names as oinstall and dba? Or can we give any other name?
 We can give any name, but those are the Oracle standards.
What are kernel parameters and why set them?
 Kernel parameters are system-level settings that control how the OS manages hardware and
software resources.
 They define, for example, how much memory Oracle can allocate from physical memory.
 These parameters are found in the file /etc/sysctl.conf.
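For illustration, a few entries commonly tuned for Oracle (the values are placeholders; take the real ones from the installation guide for your platform and memory size). The sketch writes to a temp file so it does not touch the live system:

```shell
# Hypothetical /etc/sysctl.conf excerpt with Oracle-related kernel parameters.
cat > /tmp/sysctl_oracle.conf <<'EOF'
fs.file-max = 6815744
kernel.shmmax = 4398046511104
kernel.shmall = 1073741824
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
EOF

# After editing the real /etc/sysctl.conf, apply the settings with: sysctl -p
grep '^kernel\.shm' /tmp/sysctl_oracle.conf
```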

What is Oracle inventory?

 It is the location that records which Oracle products are installed on a server.

Why to run orainstRoot.sh and root.sh scripts at the end of installation?


 orainstRoot.sh changes the permissions on the oraInventory directory, and
 root.sh creates the oratab file.

Explain oracle installation pre-requisite steps.

 Create the oinstall and dba groups.
 Modify kernel parameters.
 Check disk space for the installation.
 Create the oracle user and give it permissions on the installation location.

I am not able to connect sqlplus utility. What could be the issue?

 The environment variables are not set properly, OR
 .bash_profile was not executed after making changes to it.


Can we set environment variables in a different file apart from .bash_profile?

 Yes, we can do that, but we still need to set the ORACLE_HOME and PATH variables in .bash_profile.
I made changes to .bash_profile but the variables are still not set.
 When you make changes to .bash_profile, you must execute it at least once using
# . .bash_profile
How can I check that environment variables are set properly?
 Using env | grep ORA.
 Using the echo command, e.g. echo $ORACLE_HOME.
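Both checks in one sketch (the ORACLE_HOME and ORACLE_SID values are example settings, not from any real server):

```shell
# Simulate what .bash_profile would export (example values).
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH

# Check 1: list all ORA-related environment variables.
env | grep ORA

# Check 2: print one variable directly.
echo "$ORACLE_HOME"
```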
What is the difference between the /etc/oratab file and the ps -ef | grep pmon output?
 ps -ef | grep pmon lists the database instances currently running on a server.
 The /etc/oratab file lists all the databases created on a server, whether they are running or shut
down.
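To make the oratab format concrete, here is a sketch using a sample file (the SIDs and paths are invented; the real file is /etc/oratab, and the third field tells dbstart whether to start the instance at boot):

```shell
# Sample oratab entries in the real format: SID:ORACLE_HOME:startup_flag
cat > /tmp/oratab.sample <<'EOF'
orcl:/u01/app/oracle/product/19.0.0/dbhome_1:Y
testdb:/u01/app/oracle/product/19.0.0/dbhome_1:N
EOF

# All databases created on the server, running or not:
cut -d: -f1 /tmp/oratab.sample
```

By contrast, `ps -ef | grep pmon` would list only the instances whose pmon process is currently alive.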
How many databases can I create? (Imp)
 Unlimited, as many as the CPU and memory can support.
How many databases are there in your environment?
 Say any value between 150 and 200, and tell them 40 are prod and the rest are dev, test and QA.
Which is the biggest database in your environment?
 Say any value between 700 GB and 1.5 TB. (2 TB)
How many servers are there in your environment?
 You can say 70 to 80 servers with Linux, AIX and Windows flavors. (48 servers)

Can we change database block size after creation?


 No, we cannot change block size.
What are the main steps after creating a database manually?
 Execute catalog.sql and catproc.sql.
 Update the /etc/oratab file.

Application team requested you to delete a database. What will you do?
 Such requests must be checked with the database architect. If we still have to do it, we can
stop the listener for a week, then shut down the database and take a cold backup.
 If the application team does not come back after a month, we can drop the database.
Base tables are in encrypted format, how can you check data from it?
 There are data dictionary views and dynamic performance views created on top of the base tables.
We can query these views as they present the data in human-readable format.
How are data dictionary views and dynamic performance views created?
 The catalog.sql and catproc.sql scripts create the necessary views and procedures.
When are base tables created? What will happen if we do not run catalog.sql and catproc.sql?
 Base tables are created when you create a database.
 If you do not run those scripts, you will not be able to query any data dictionary view or dynamic
performance view.
Why do we not run catalog.sql and catproc.sql when we create a database using DBCA?
 DBCA runs those scripts internally.
 We must run those scripts only when we create a database manually.

I would like to know current user details in database. How to find this information?

 You can query V$SESSION view to see the user connection details.

What is the difference between pfile and spfile?

 Pfile is a human-readable file and spfile is a binary file. We can start the database instance with
either of them, but first preference is given to the spfile.
 Both reside under the $ORACLE_HOME/dbs location and are used to allocate instance memory.

I only have spfile, how can I create pfile? (imp)


 You can create it with the CREATE PFILE FROM SPFILE; command.

Why we need temp tablespace when there is In-Memory sort in PGA?

 When the data set is large, the database needs more space. It uses the temp tablespace in such
cases to perform the sorting.

How can you find data files related to a tablespace?

 We can query DBA_DATA_FILES to check this information.


What do you understand by log switch?


 LGWR switching from one redo log group to another is known as a log switch.
Can we create different sizes of redo members inside one group?
 No, all the members inside a group must have the same size.
What is the FAST_START_MTTR_TARGET parameter?
 This parameter specifies the target time (in seconds) for instance recovery, which in turn drives
checkpoint frequency. (In modern Oracle versions the default value is 0, which means self-tuned.)

What happens in mount stage?


 In the mount stage, only the control file is read, but its contents are not physically validated.
 Oracle does not check the physical existence of the data files.

What happens in the open stage?

 The contents of the control file are validated against the physical files. Oracle physically checks
the data files and redo log files on disk. The data file headers and redo log files are matched with
the SCN number in the control file. Once validation is done, the database is opened.

I have 4 multiplexed copies of control files under /u01. Do you suggest to keep more copies or 4 are
enough?
 We must have a minimum of 2 multiplexed copies, but they must be on different physical disks. You
have kept all 4 files under /u01; if we lose the /u01 mount point, we lose all the multiplexed copies.
I lost the control file under /u01 but I have a multiplexed copy in /u02. How do you recover the database?
 We can simply copy the control file from /u02 using the cp command and make a copy under /u01 with
the same name as the lost control file. Once we have both control files, we can start the database.
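At the OS level the recovery is just a copy under the lost file's original name. A sketch with temp directories standing in for the /u01 and /u02 mount points (file names are illustrative):

```shell
# Stand-ins for the two mount points.
mkdir -p /tmp/u01/oradata /tmp/u02/oradata

# Only the /u02 multiplexed copy survived the loss.
echo "control-file-contents" > /tmp/u02/oradata/control02.ctl

# Recreate the lost copy under /u01 with its original name; with both
# copies back in place the database can be started normally.
cp /tmp/u02/oradata/control02.ctl /tmp/u01/oradata/control01.ctl

ls /tmp/u01/oradata /tmp/u02/oradata
```

(The database should be down while the copy is made, since all control file copies must be identical at startup.)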

What is scope parameter? (Imp)

 Any parameter that you modify while the database is up and running can take three scope values:
SPFILE, MEMORY and BOTH.
 The SPFILE scope applies the change from the next restart, the MEMORY scope applies it immediately
but it is lost after a restart, and BOTH applies it immediately and persists it.
 Ex: ALTER SYSTEM SET SGA_TARGET=2G SCOPE=MEMORY;

What happens if archive log destination is full?

 The database will hang. We must have archive log backup scripts that back up the archive logs and
then delete them to release space.
What is the difference between redo logs and archive logs? (Imp)
 Redo logs are overwritten by LGWR in cyclic order.
 Archive logs are backups, or copies, of the redo logs in a separate location.

How to resize the redolog files?

 Not possible; we can create a new group with a bigger size and drop the existing one.
If the archive destination is full, what will you do?
 We will first try to take a backup of the archives if possible. If not, we will move some archives to
another location, OR we can even change the archive destination inside the database to a location
which has more space.
How do you monitor tablespaces in your environment?
 We have tablespace utilization scripts scheduled on each server. The script triggers an email
whenever a tablespace's utilization crosses 80%. Depending on the alert and the space on the server,
we add space to tablespaces.
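The heart of such a script is just a percentage check against the threshold. A minimal sketch (in a real script the used/total figures come from queries against DBA_DATA_FILES and DBA_FREE_SPACE, and the alert goes out via mailx; the function name and numbers here are illustrative):

```shell
THRESHOLD=80

check_tablespace() {   # args: name used_mb total_mb
  name=$1; used=$2; total=$3
  pct=$(( used * 100 / total ))
  if [ "$pct" -gt "$THRESHOLD" ]; then
    echo "ALERT: $name is ${pct}% full"   # real script: mailx the DBA team here
  else
    echo "OK: $name is ${pct}% full"
  fi
}

check_tablespace USERS  850 1000   # prints: ALERT: USERS is 85% full
check_tablespace SYSTEM 400 1000   # prints: OK: SYSTEM is 40% full
```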
Can we take SYSTEM and SYSAUX tablespace offline? (Imp)
 We can take SYSAUX offline but not SYSTEM.
 SYSTEM stores critical metadata required for basic database operations.
 SYSAUX contains components like AWR (Automatic Workload Repository) and Enterprise Manager data.
 Oracle allows SYSAUX to be taken offline for maintenance or recovery purposes.
A query is executing and temp tablespace is full. You added 20GB but again temp is full. What will you
do next?
 We need to check the query and, if possible, tune it. We can also speak with the application
team, allocate more temp space, and reclaim the space once their activity is done.
How can you identify which data file was modified today?
 We can check the data file timestamps at the OS level.
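A sketch of the check, using throwaway files in /tmp in place of real data files (in practice you would run the same ls/find against the oradata directory; touch -d needs GNU coreutils):

```shell
# Throwaway "data files" with different modification times.
mkdir -p /tmp/oradata
touch -d '2 days ago' /tmp/oradata/users01.dbf
touch /tmp/oradata/system01.dbf        # modified "today"

# Newest first: today's file shows at the top.
ls -t /tmp/oradata

# Or restrict to files modified within the last 24 hours:
find /tmp/oradata -name '*.dbf' -mtime -1
```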

You are trying to add data file to tablespace but getting error. What could be the issue?
(Imp)
 The control file has a MAXDATAFILES parameter. If this number is exceeded, you cannot add more
data files.

Explain OMF? Have you worked on OMF?

 Oracle Managed Files (OMF) enables us to create tablespaces without providing file names and
locations. The drawback is the naming convention of the generated files.
 No, my environment does not use the OMF feature.

How to check database SCN?

 select current_scn from v$database;


What will happen if we do not specify user default tablespace?
 The database default tablespace will become the user's default tablespace.
After creating a new user, what privileges do you assign?
 We must grant the CONNECT role so that the user can connect to the database. We must also grant
other privileges as per the environment.
Do you suggest granting the RESOURCE role to a user? (Imp)
 No, this will grant unlimited quota on the user's default tablespace.
How do you create a new user and grant permission in a single command?
 SQL> grant create session to Giri identified by Giri123;
What happens to the objects if we change the default tablespace?
 Nothing will happen; existing objects stay where they are and things continue normally.
How do you implement password policies?
 Using profiles.

What are SNIPED sessions in the database?

 Sessions which exceed their idle time are marked as SNIPED. The Oracle-level session is cleaned up
when the user exceeds the idle time, but the OS-level process still exists. This is an overhead on the server.
IDLE_TIME is set to 15 min but even after 20 min the user session is not getting disconnected. What is the
issue?
 The RESOURCE_LIMIT parameter is not set to TRUE.

How to check which users are granted sysdba role?

 We can query the V$PWFILE_USERS view.


What you will check first before downloading Oracle client?
 The operating system of the server where we are installing the Oracle client, as we need to
download the software for the client server's OS, not the database server's OS.
What all can you do with Oracle client software?
 We can only use the Oracle client software to connect to a database server. We cannot create a
local database using the Oracle client.

Can a single listener handle multiple database services? (Imp)


 Yes, but the problem is that if the listener is down, new connections to all of those databases will
not be accepted.
 It is suggested to have a different listener for each database. If we bring down the listener for one
database, the other listeners are not affected.
Can we configure a listener on an Oracle client?
 No, a listener must be configured only on the database server, which accepts incoming connections.

Tnsping is not working, everything is fine on listener.ora and tnsnames.ora file. What
could be the issue? (imp)
 The default port 1521 (or whichever port you specified) is not open between the servers.
 We can ask the network team to enable those ports.
I would like to select data from a table which is in a different database. How can I do it?
 You can create a DB LINK and query the remote table on the remote database.

How to check listener status at OS level? (Imp)

 ps -ef | grep tns.


What troubleshooting steps you will follow when user complaints about connectivity issue?
 Check the listener and the TNS entries.
 Perform tnsping from the client machine.
 Based on the above results, we troubleshoot accordingly.
There are 20 databases created from one single Oracle Home. How many listeners will you configure?
 One is enough, but it is recommended to have separate listeners for each database.

Multitenant DB

Q. What are the major changes in architecture for 12c?


From 12c onwards, the instance can be shared by multiple databases.
These multiple databases are self-contained and pluggable from one container database to another.
This is a very useful methodology for database consolidation.
In short, a single SGA and one set of background processes are shared by multiple databases;
the databases can be created on the fly, dropped, or detached from one server and attached to another.
Q. What is a pluggable database (PDB) in Multitenant Architecture? imp
Pluggable Databases (PDBs) are new in Oracle Database 12c Release 1 (12.1). You can have many pluggable
databases inside a single Oracle container database. Pluggable databases are always part of a
Container Database (CDB), but a PDB looks like a normal standalone database to the outside world.
Q. Does In-memory require a separate license cost?
Yes, In-memory does require a separate license cost.
Q. Why would I consider using the Multitenant option? Imp
A: It offers a modern architecture that enables organizations to consolidate and manage multiple
databases more efficiently.
1. Reduces hardware cost, improves resource utilization.
2. Centralized management for tasks like backup, patching and upgrades.
3. Reduces administration tasks.
 Limitations:
1. Requires additional licensing.
2. Increases the complexity of understanding and managing the CDB/PDB architecture.

Q. What are the common concepts of multitenant database?


A multitenant database consists of:
 CDB, the container database, which is similar to a standalone database. Its root is called CDB$ROOT.
 PDB$SEED, a template database used to create databases within the CDB.
 PDB<n>, the individual or application databases.
 The data dictionary between these databases is shared via internal links called object links and data
links.
 Users in the CDB and PDBs are different; there are common users (whose names start with C##) and local
users.
 When the CDB starts up, the PDBs are in mount state; you must open them explicitly.

Q. Will CDB’s become the future?


Yes, Oracle has already deprecated standalone (non-CDB) database development.
No further improvements will be released for standalone databases.

Q. How many PDBs can you create?


12cR1 –> 252 PDBs
12cR2 –> 4096 PDBs

Q. Can multiple CDBs run on the same server?


Yes!

Q. Can multiple CDBs run out of the same ORACLE_HOME installation?


Yes, you can invoke DBCA and create new CDBs out of the same ORACLE_HOME.

What do we have in ORACLE_HOME? IMP


ORACLE_HOME contains the critical components of an Oracle database installation: all the files and
subdirectories required to run, manage, and maintain the Oracle database and related software.
1. Binaries and executables: sqlplus, tnsping, dbca, lsnrctl, rman.
2. Configuration files: sqlnet.ora, listener.ora, tnsnames.ora.
3. Libraries: located under lib/ (.so files on Linux).
4. Administrative tools: dbshut, dbstart, emctl.
5. Log and trace files.
6. Oracle Universal Installer files.
7. Sample schemas and scripts.

Q. What are the methods to create Multitenant Database?


 DBCA method (DB CONFIGURATION ASSISTANT)
 DBCA silent method
 Manual method using CREATE DATABASE statement

Q. What is the limit of container databases (CDBs) on a server?


As many as supported by server CPU, RAM and Hard disk space.

Q. How do I know if my database is Multitenant or not?


You can use the queries below to identify a CDB database:
SELECT NAME, OPEN_MODE, CDB FROM V$DATABASE;
SHOW CON_ID;
SHOW CON_NAME;

Q. How to distinguish you are in CDB or PDB?


Once logged in, SHOW CON_NAME or SHOW CON_ID will show you which container you are in.

Q. What are the different ways you can create a PDB?


 Copying from PDB$SEED
 Copying from another PDB
 Copying from a remote PDB
 Converting a Non-CDB into PDB
 Unplugging and plugging in a PDB
Q. Can I have one PDB at release 1, and a second PDB at release 2?
No, one instance, one version for all PDBs.

Q. What Pluggable databases do we have in this container database ?


You can check this by querying v$containers:
SELECT NAME, OPEN_MODE FROM V$CONTAINERS;
SHOW pdbs;

Q. How do you switch from one container to another container inside SQL*PLUS?
ALTER SESSION SET CONTAINER=pdb1;
Q. How about the data files, SYSTEM, SYSAUX, undo, redo etc.? Are they created when you create a PDB?
IMP
 Data files are separate for the CDB and for each PDB.
 There is only one set of undo files and redo files across the whole container.
 From 12cR2 onwards we can create local undo for each PDB.
 Temp files can be created in each database, or one can be shared across all databases.
 The SGA is shared across all databases.
 Background processes are shared across all databases; no additional background processes are
created per PDB.

Q. Is the alert log the same for all pdbs in a cdb, or are they different?
Yes, one CDB, one alert log.

Q. How can I connect to a PDB directly from SQL* PLUS?


You can use Oracle easy connect method to connect a PDB directly.
CONNECT username/password@host[:port][/service_name][:server][/instance_name]
OR
sqlplus user/password@//localhost/pdb2

Q. How do I switch to main container Database?


ALTER SESSION SET CONTAINER = CDB$ROOT;

Memory structure:

Q. What is data block?


A: A data block is the smallest unit of storage in an Oracle database, typically 8 KB in size. It holds the rows
of a table and is used for efficient data retrieval and storage.
Block – holds table rows and index entries.
Extent – a set of contiguous blocks that stores data for a table or index.
Segment – represents a specific database object.
Tablespace – a logical storage unit containing segments; it consists of physical data files.
File – the physical file on disk.
(B-E-S-T-F)
Q. What is the difference between SGA and PGA? Why these are important?
A: The System Global Area (SGA) and Program Global Area (PGA) are both memory
structures in Oracle Database, but they serve different purposes and are critical
for ensuring the database operates efficiently.
1. SGA (System Global Area)
• Definition: The SGA is a shared memory area that stores data and control
information for the entire database instance. It is shared by all users connected to
the database.
• Key Components of SGA:
• Database Buffer Cache:
Caches data blocks read from disk to reduce I/O operations.
• Shared Pool:
Stores SQL statements, execution plans, and metadata to enable SQL reuse.
• Redo Log Buffer:
Temporarily holds redo entries before writing them to redo log files.
• Large Pool:
Optional area used for specific operations like backups or large queries.
• Java Pool:
Used for Java-related objects and execution in the database.
• Streams Pool:
Allocated for Oracle Streams (if enabled).
• Purpose:
• Enhances performance by reducing disk I/O.
• Enables efficient resource sharing among multiple database users.
2. PGA (Program Global Area)
• Definition:
The PGA is a private memory area specific to each database session or process. It is
not shared between sessions.
• Key Components of PGA:
• Session Memory:
Stores session-specific variables and data.
• Sort Area:
Used for sorting operations when executing queries.
• Hash Area:
Memory used for hash join operations in SQL queries.
• Private SQL Area:
Contains information like bind variable values and query execution state.
• Purpose:
Supports session-specific operations like sorting, hashing, and processing.
Handles private data for individual users or processes.

Major Differences Between SGA and PGA


ASPECT            SGA                                         PGA
Scope             Shared by all sessions in the database      Private to a single session or process
Purpose           Manages shared resources for the database   Manages session-specific operations
Management        Managed by the database instance            Managed per server process (PMON cleans up)
Configuration     Controlled by parameters like SGA_TARGET    Controlled by PGA_AGGREGATE_TARGET
Size              Usually larger, due to shared components    Smaller, specific to user sessions
Example of usage  SQL execution plans, cached data blocks     Sorting, joins, and temporary data

Why Are These Important?

SGA Importance

1. Optimizes Performance:
• Reduces I/O operations by caching frequently accessed data.
• Enables efficient query execution with cached SQL and metadata.
2. Supports Multiple Users:
• Facilitates sharing of critical resources like the buffer cache and shared
pool.
3. Controls Memory Usage:
• Proper sizing of the SGA avoids contention and excessive disk usage.
PGA Importance
1. Session Efficiency:
• Improves performance for operations like sorting and joining.
• Minimizes disk I/O for session-specific tasks.
2. Scalability:
• Properly allocated PGA ensures each session gets adequate
resources.
3. Parallel Query Performance:
• Operations like parallel joins and hash aggregations rely on sufficient PGA
memory.
How to Configure SGA and PGA
ALTER SYSTEM SET SGA_TARGET=2G;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=1G;
SELECT * FROM V$SGAINFO;
SELECT * FROM V$PGASTAT;

Q. As you said, if SGA and background process are shared, is there any performance impact?

A: Sharing the SGA and background processes among multiple PDBs can potentially impact performance.
The impact depends on the following factors:
1. SGA sharing: the SGA is shared among all PDBs within the same CDB.
2. Background processes like DBWn, LGWR and SMON are also shared.
3. Potential performance impact when multiple PDBs are active:
a) SGA contention: the buffer cache and shared pool may face contention if the workload from
multiple PDBs is high (high I/O workloads drive buffer cache contention).
b) Background process contention: processes like LGWR or DBWR may struggle if multiple PDBs
perform high levels of writes simultaneously.
4. Higher CPU and I/O load: more PDBs means more sessions, which increases CPU usage and
I/O requests, potentially leading to bottlenecks if the infrastructure is not scaled
appropriately.
5. Query performance degradation.
6. Disk and network bottlenecks.

How to mitigate it:


1. Resource management: set CPU_COUNT inside each PDB to allocate CPU resources fairly, and define
memory limits per PDB with SGA_TARGET, e.g. after switching to the PDB:
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET CPU_COUNT=2;
ALTER SYSTEM SET SGA_TARGET=2G;
2. Sizing the SGA: tune the SGA_TARGET and PGA_AGGREGATE_TARGET parameters; use AMM (Automatic
Memory Management) if necessary.
3. Adjust background processes: for I/O-intensive applications, ensure enough database writer
processes (DBWn) are configured.
ALTER SYSTEM SET DB_WRITER_PROCESSES=4 SCOPE=SPFILE;
4. Monitor and tune: use AWR (Automatic Workload Repository) and ADDM (Automatic Database
Diagnostic Monitor) to identify bottlenecks.
5. Choose a RAC architecture to distribute load across multiple nodes.
6. Upgrade hardware to provide more CPU, memory and I/O bandwidth.

Q. How do I start up a Pluggable database?


From CDB$ROOT container
ALTER PLUGGABLE DATABASE PDB1 OPEN;

Q. How about creating a user?


Normally you would use CREATE USER username IDENTIFIED BY password; however, in a CDB this no longer
works the same way.
If you want to create a common user across the container, the name has to start with C##.
If you want to create a local user for a particular PDB, you can create the user without C##.
CREATE USER C##GIRI IDENTIFIED BY GIRI123;

A common user exists in all PDBs.

Common users and local users do not automatically have privileges to access other PDBs. If you want a
user to have access, you have to grant the privileges explicitly:
GRANT CREATE SESSION TO C##GIRI CONTAINER=ALL;
Limitations on common users:
A common user cannot own local objects in PDBs.

Q. Which parameters are modifiable at PDB level?


select NAME, ISPDB_MODIFIABLE from V$PARAMETER;
Q. What is the difference between Container ID Zero and One?
The following table describes the values of the CON_ID column in container data objects:
0 = the data pertains to the entire CDB
1 = the data pertains to the root (CDB$ROOT)
2 = the data pertains to the seed (PDB$SEED)
3–254 = the data pertains to a PDB; each PDB has its own container ID

Q. Are there any background processes ex, PMON, SMON etc associated with PDBs? (imp)
No. There is one set of background processes shared by the root and all PDBs.

Q. Are there separate control file required for each PDB?


No. There is a single redo log and a single control file for an entire CDB.

Q. Can I monitor SGA usage on a PDB by PDB basis?


SQL> alter session set container=CDB$ROOT;
SQL> select POOL, NAME, BYTES from V$SGASTAT where CON_ID = '&con_id';
SQL> select CON_ID, POOL, sum(bytes) from v$sgastat group by CON_ID, POOL order by CON_ID, POOL;

Q. Do I need separate SYSTEM and SYSAUX tablespaces for each of my PDB?


There is a separate SYSTEM and SYSAUX tablespace for the root and for each PDB.

Q. Where is user data stored in CDB?


In a CDB, most user data is in the PDBs. The root contains no user data or minimal user data.

Q. How can I create a pluggable database ?


1. Connect to the CDB as sys: sqlplus sys as sysdba
2. Create the pluggable database:
--------------------------------------------------------------------------------
CREATE PLUGGABLE DATABASE PDB_GIRI
ADMIN USER giri IDENTIFIED BY giri123
ROLES = (dba)
DEFAULT TABLESPACE users
DATAFILE '/u01/app/oracle/oradata/CDB1/PDB_GIRI/users01.dbf' SIZE 500M AUTOEXTEND ON
FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/CDB1/',
'/u01/app/oracle/oradata/CDB1/PDB_GIRI/');

• PDB Name: PDB_GIRI


• Admin User: giri with password giri123
• Tablespace: users
• Datafile: Points to a location where the PDB’s data file will be created.
• File Name Convert: Converts the file path to the PDB-specific directory.

Step 3: Open the PDB


After the PDB is created, you need to open it.
ALTER PLUGGABLE DATABASE PDB_GIRI OPEN;
Step 4: Create the User in the PDB
Now, you can create the user giri in the PDB_GIRI.
Example: Creating the User giri in PDB_GIRI

ALTER SESSION SET CONTAINER = PDB_GIRI;

CREATE USER giri IDENTIFIED BY giri123


DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp;

GRANT CONNECT, RESOURCE TO giri;

• User giri is created with password giri123.


• Default Tablespace: users
• Temporary Tablespace: temp
• Privileges: CONNECT and RESOURCE roles are granted.

Step 5: Verify the User in the PDB


SHOW USER;

Q. How to drop a PDB irrevocably?


sql> drop pluggable database pdb_giri including datafiles;

3+ Years of DBA Experience Interview Questions


Below are senior Oracle DBA interview questions for candidates presenting 3+ years of DBA
experience.

1. Explain about password file, what is stored inside password file and its use?
A:
In Oracle Database, a password file is a special file used to authenticate administrative users (also
known as privileged users) who connect to the database remotely with SYSDBA, SYSOPER, or other
administrative privileges. It is primarily used in cases where authentication through the operating
system is not possible.
What is a Password File?
• A password file is an external binary file that stores the credentials of database
users with administrative privileges.
• It is located outside the database and allows Oracle to authenticate users
attempting to connect to the database with administrative rights.
Contents of a Password File
The password file stores:
1. Usernames and Password Hashes: for users granted administrative privileges.
2. Administrative Privileges: which privileges (SYSDBA, SYSOPER, SYSBACKUP, and so on) each listed user holds.
Location of Password File

• Whether and how the password file is used is controlled by the
REMOTE_LOGIN_PASSWORDFILE initialization parameter; the file itself resides in a
default OS-specific location.
• For a single-instance database:
On Linux/UNIX: $ORACLE_HOME/dbs/orapw<SID>

On Windows: %ORACLE_HOME%\database\PWD<SID>.ora
• For RAC (Real Application Clusters):
Password files are stored in shared storage.

Password File Authentication:


• When an administrative user attempts to connect to the database remotely using
sqlplus or similar tools:

sqlplus sys/<password>@<db_service> as sysdba

• Oracle checks the username and password against the password file.
• If credentials are valid and the user has the requested privilege (e.g., SYSDBA), the
connection is granted.

Uses of a Password File:

1. Remote Administrative Authentication:


• Enables database administrators to connect remotely with SYSDBA or other
administrative privileges.
2. Oracle Data Guard:
• Synchronizes passwords for SYS or other administrative users across primary and
standby databases.
3. Startup and Shutdown Operations:
• Required when starting up or shutting down a database remotely, especially if OS
authentication is unavailable.
4. RMAN (Recovery Manager) Operations:
• Essential for RMAN operations performed remotely with administrative privileges.
5. Support for Non-OS Authentication:
• Allows administrative users to bypass OS authentication, which might not be
available in some environments.

Types of Privileges Stored in a Password File

1. SYSDBA:
Full administrative privileges, including starting, stopping, and recovering the database.
2. SYSOPER:
Limited administrative privileges, mainly for starting and stopping the database.
3. SYSBACKUP:
Privilege for performing backup and recovery tasks using RMAN.
4. SYSDG:
Privilege for managing Oracle Data Guard.
5. SYSKM:
Privilege for managing Transparent Data Encryption (TDE) keys.
6. SYSRAC:
Privilege for managing Oracle RAC.

How to Create and Manage a Password File:

1. Create a Password File:


Use the orapwd utility:

orapwd file=$ORACLE_HOME/dbs/orapw<SID> password=<password> entries=<number_of_users>

• file: Path to the password file.


• password: Password for the SYS user.
• entries: Maximum number of users that can be added.

2. Add Users to the Password File:

GRANT SYSDBA TO <username>;

3. View Users in the Password File:

SELECT * FROM V$PWFILE_USERS;

4. Modify Passwords:
Use the ALTER USER command to update the password of users in the password file:

ALTER USER SYS IDENTIFIED BY new_password;

Initialization Parameter for Password File

REMOTE_LOGIN_PASSWORDFILE accepts three values: NONE (the password file is ignored), EXCLUSIVE (default; tied to one database, passwords can be changed), and SHARED (one file usable by several databases; entries are read-only).
It is a static parameter, so changes require SCOPE=SPFILE and an instance restart:
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;

2. What is force=y option while generating password file?


The FORCE=Y option in the orapwd utility is used when generating or recreating an Oracle
password file. It allows you to overwrite an existing password file if one already exists.
When to Use FORCE=Y?
1. Recreating a Password File: when the existing file is lost, corrupted, or must be rebuilt with different settings.
2. Updating the SYS Password:
If you need to change the password for the SYS user and want to recreate the password file to
reflect the new password.
How to Use FORCE=Y
The command to generate a password file with the FORCE=Y option is:
orapwd file=$ORACLE_HOME/dbs/orapw<SID> password=<sys_password>
entries=<number_of_users> force=Y

• file: Specifies the path to the password file.


• password: The password for the SYS user.
• entries: Number of administrative users allowed.
• force=Y: Forces the utility to overwrite an existing password file.
• If a password file named orapwORCL already exists, this command will overwrite it.

Impact of Using FORCE=Y

1. Existing Password File is Overwritten: all entries in the old file are lost and the administrative privileges must be re-granted.
2. Risk of Downtime: if the new file is created with the wrong credentials, remote administrative connections fail until it is corrected.
3. Synchronization Required in RAC/Data Guard: the recreated file must be copied to (or shared with) all other nodes and standby databases.
Precautions When Using FORCE=Y
• Back up the existing password file before using FORCE=Y, in case you need to restore it.
• Use FORCE=Y only when necessary, such as in cases of corruption, password
change, or reconfiguration.

3. What are the contents of control file and which parameter defines the controlfile
retention? IMP
A: The control file in Oracle Database is a crucial binary file that records the physical structure and
state of the database. It contains metadata essential for database operations, recovery, and
consistency. Without a valid control file, the database cannot be opened or operate.
Contents of a Control File:
The control file contains the following critical information:
1. Database Information: database name, DBID, and creation timestamp.
2. Physical File Structure: names and paths of datafiles and redo log files.
3. Backup and Recovery Information: RMAN backup records and SCNs.
4. Checkpoint Information: the current checkpoint SCN.
5. Archived Log Information: history of archived redo logs.
6. Redo Threads:
• Details of redo threads for each instance in RAC.
7. Tablespace Information:
• Details about tablespaces and their states.
8. Log History:
• Metadata about redo log history for instance recovery.
9. Flashback Information:
• Information about the flashback log files (if enabled).
10. Database Incarnation Information:
• Tracks database incarnations (relevant for RMAN recovery).

Control File Retention Parameter


CONTROL_FILE_RECORD_KEEP_TIME
• Definition:
Specifies the minimum number of days for which Oracle retains reusable records (e.g., RMAN
backups and archived log information) in the control file.
• Default Value: 7 days.
• Range: 0 to any positive integer. Setting it to 0 causes the records to be overwritten
as soon as they become eligible for reuse.
• Purpose:
• Ensures that metadata needed for recovery is not prematurely overwritten.
• Helps Oracle manage space in the control file.

How to Set CONTROL_FILE_RECORD_KEEP_TIME:


You can set this parameter in the database parameter file (spfile or pfile) or dynamically using an
ALTER SYSTEM command:

Dynamic Setting:
ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=14 SCOPE=BOTH;
• This sets the retention period to 14 days and applies it to both the current and
future instances.

Impact of CONTROL_FILE_RECORD_KEEP_TIME
• Longer Retention Period: Ensures more recovery metadata is available for RMAN or
manual recovery.
• Requires larger control files to accommodate the additional records.
• Shorter Retention Period: Saves space in the control file but increases the risk of
losing metadata needed for recovery operations.

Best Practices:
1. Use RMAN:
Use RMAN catalog for long-term backup metadata storage
2. Monitor Control File Size:
3. Synchronize with Backup Policy:

Checking Control File Contents and Retention:

• Query Control File Metadata:


SELECT * FROM V$CONTROLFILE_RECORD_SECTION;
• Check Retention Parameter:
SHOW PARAMETER CONTROL_FILE_RECORD_KEEP_TIME;

4. Difference between checkpoint and SCN number? IMP


A: A Checkpoint is a database event where all modified data in memory (dirty buffers) is written
to datafiles, ensuring data consistency. It marks a point in time for recovery purposes.
A System Change Number (SCN) is a unique, ever-incrementing number that represents a specific
point in time in the database’s lifecycle, used to track and order changes.
Key difference:
 Checkpoint ensures data is flushed to disk;
 SCN tracks logical order of transactions.
 Checkpoint is a physical event; SCN is a logical marker.
 Both are critical for crash recovery, with SCN determining recovery scope.

5. Differentiate between data file header and data block header? What it contains?
A: The datafile header and data block header are both essential metadata components in an
Oracle database, but they serve different purposes and contain distinct information:
1. Datafile Header:
The datafile header is located at the beginning of each datafile and stores metadata about the
entire file, such as the file number, the tablespace it belongs to, the creation SCN, and the last checkpoint SCN.
2. Data Block Header
The data block header is located at the start of each Oracle data block and stores metadata
specific to that block, such as the block address (DBA), block type, SCN of the most recent change, and the interested transaction list (ITL) entries.
Key Difference
• Scope: Datafile header is for the entire file; data block header is for individual
blocks within the file.
• Purpose: Datafile header manages file-level metadata; data block header manages
block-specific metadata.

6. Give the command to take controlfile trace backup?


A: To take a trace backup of the control file in Oracle, use the following SQL command:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Explanation:
1. Purpose:
This command generates a trace file containing the SQL statements to recreate the control file. It is
useful for disaster recovery or migrating databases.
2. Trace File Location: the trace is written to the diagnostic trace directory (USER_DUMP_DEST).
3. Named Trace File:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS 'filename';

This writes the trace directly to the specified location.

4. Note:
The trace file is not a physical backup; it provides a script to recreate the control file.
For a physical backup, use RMAN.

7. Can I have redo log groups with different size?


A: Yes, you can have redo log groups of different sizes in Oracle Database. However, this is not
recommended as it may lead to uneven performance and potential issues during recovery.

Why It’s Possible?


Oracle allows redo log groups with varying sizes to provide flexibility for specific use cases, such as:
• Accommodating disk space limitations.
• Gradual resizing of redo logs (by adding new logs and dropping old ones).
Why It’s Not Recommended?
• Groups with larger sizes take longer to fill
• Uneven redo log group sizes may result in inconsistencies in log file rotation and
increased contention.
• During recovery, Oracle expects consistent redo log sizes for smooth processing.
Best Practice
• Maintain uniform sizes for all redo log groups to ensure smooth log switching,
balanced archiving, and consistent performance.

How to Resize Redo Log Groups Properly?


To resize redo logs, follow these steps:
1. Add new redo log groups with the desired size:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/path/to/logfile4.log') SIZE 100M;
2. Drop the old smaller-sized redo log groups:
ALTER DATABASE DROP LOGFILE GROUP 1;
3. Ensure all groups are of the same size after resizing.

8. Differentiate between redo log groups and redo log members?


A:
Key Differences:
Feature | Redo Log Group | Redo Log Member
Definition | A logical collection of members | A physical redo log file
Role | Manages the redo log rotation cycle | Stores actual redo data
Redundancy | Achieved by adding members | An individual file; no redundancy alone
Number | Limited by the number of groups | Multiple members possible per group
Configuration | Minimum: 2 groups | Minimum: 1 member per group

9. How do you rename redo log file – online or offline? Give the command?
A: Renaming a redo log file in Oracle can be done either online (when the database is running) or
offline (when the database is shut down). However, renaming redo log files online is more
common and avoids downtime.

Steps to Rename Redo Log File Online

While the database is open you cannot rename a log file in place (ALTER DATABASE RENAME FILE for redo logs requires the MOUNT state). The safe online method is to add a member at the new location and drop the old one:
1. Check Redo Log Status:
SELECT GROUP#, MEMBER, STATUS FROM V$LOGFILE;
SELECT GROUP#, STATUS FROM V$LOG;
• The group must be INACTIVE before dropping its old member.
• If the group is CURRENT, perform a log switch:
ALTER SYSTEM SWITCH LOGFILE;
2. Add a Member at the New Location:
ALTER DATABASE ADD LOGFILE MEMBER '/new_path/redo01.log' TO GROUP <group_number>;
3. Drop the Old Member (once the group is INACTIVE):
ALTER DATABASE DROP LOGFILE MEMBER '/old_path/redo01.log';
4. Remove the Old File at the OS Level:
rm /old_path/redo01.log

Steps to Rename Redo Log File Offline

If the database is not running, you can rename redo log files while the database is in the MOUNT
state.
1. Shutdown the Database:
SHUTDOWN IMMEDIATE;
2. Rename the Redo Log File at the OS Level:
mv /old_path/redo01.log /new_path/redo01.log
3. Mount the Database:
STARTUP MOUNT;
4. Update the Control File:
ALTER DATABASE RENAME FILE '/old_path/redo01.log' TO '/new_path/redo01.log';
5. Open the Database:
ALTER DATABASE OPEN;

Key Notes

• You cannot rename a current redo log file while the database is online. Perform a
log switch to make it inactive.
• Always back up the database before performing structural changes to redo log files.
• If using multiplexed redo log files, update all members of the group.

10.Which parameter defines the archive log naming format?


A: The parameter that defines the archive log naming format in Oracle is the
LOG_ARCHIVE_FORMAT parameter.
This parameter specifies the format for naming archived redo log files. By default, it uses a
placeholder-based format that includes information such as the log sequence number and thread
ID.

LOG_ARCHIVE_FORMAT = '%t_%s_%r.arc'

• %t = Thread number (for multi-instance configurations like RAC).


• %s = Log sequence number.
• %r = Resetlogs ID (used in case of database reset).
• .arc = File extension.

Setting LOG_ARCHIVE_FORMAT
You can set or modify this parameter in the spfile or pfile.
For example:

ALTER SYSTEM SET LOG_ARCHIVE_FORMAT = '%t_%s_%r.arc' SCOPE=BOTH;

Alternatively, you can define it in the init.ora or spfile configuration file.
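To make the placeholders concrete, here is a small illustrative sketch (Python, outside Oracle) that expands a LOG_ARCHIVE_FORMAT-style pattern the way the archiver names files; the thread, sequence, and resetlogs values are invented for the example.

```python
# Illustrative only: mimic how Oracle expands LOG_ARCHIVE_FORMAT placeholders.
# %t = thread#, %s = sequence#, %r = resetlogs ID (values below are invented).
def archive_log_name(fmt: str, thread: int, sequence: int, resetlogs_id: int) -> str:
    return (fmt.replace('%t', str(thread))
               .replace('%s', str(sequence))
               .replace('%r', str(resetlogs_id)))

name = archive_log_name('%t_%s_%r.arc', thread=1, sequence=2045, resetlogs_id=1122334455)
print(name)  # 1_2045_1122334455.arc
```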

11.Can I change the archive log destination while the database is running?
A: Yes, you can change the archive log destination while the database is running. Oracle allows you
to modify the archive log destination dynamically using the ALTER SYSTEM command without
restarting the database.

Steps to Change the Archive Log Destination:


1. Check the Current Archive Log Destinations:
SHOW PARAMETER LOG_ARCHIVE_DEST;
2. Change the Archive Log Destination:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/new/path/to/archivelogs' SCOPE=BOTH;
3. Verify the Change:
SHOW PARAMETER LOG_ARCHIVE_DEST;
4. Switch the Log to Test:
Perform a log switch to ensure that the archive logs are written to the new location:

ALTER SYSTEM SWITCH LOGFILE;


5. Check Archive Logs:
ARCHIVE LOG LIST;
Key Points:
• SCOPE Option: Use SCOPE=BOTH to update the parameter in both the memory and
the spfile.
Example with Multiple Destinations:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db' SCOPE=BOTH;

12.I lost redo log file and have no multiplexed copy or archive log. How can I recover
the database? (IMP)
A: Losing a redo log file without a multiplexed copy or an archived version is a critical situation.
Recovery is possible in some scenarios but depends on the state of the database and the
availability of other files. Here’s how you can handle it:

Scenario 1: Lost Non-Current (Inactive) Redo Log File


Steps:
1. Check the Status of Redo Log Groups:
SELECT GROUP#, STATUS FROM V$LOG;

• CURRENT: Active redo log in use.


• ACTIVE: Required for recovery.
• INACTIVE: No longer needed for recovery.

2. Clear the Affected Log Group:


ALTER DATABASE CLEAR LOGFILE GROUP <group_number>;
3. Add a New Redo Log File:
After clearing the log group, add a new redo log file to replace the lost one:

ALTER DATABASE ADD LOGFILE GROUP <group_number> ('/new/path/redo01.log') SIZE 100M;


4. Verify the Changes:
SELECT GROUP#, MEMBER, STATUS FROM V$LOGFILE;

Scenario 2: Lost Current or Active Redo Log File

If the lost redo log file belongs to the current or active redo log group, you cannot recover
committed transactions up to the point of failure. However, you can attempt incomplete recovery.

Steps:

1. Shutdown the Database:


SHUTDOWN ABORT;
2. Start the Database in Mount Mode:
STARTUP MOUNT;
3. Restore and Recover the Database:
Perform an incomplete recovery up to the last available SCN or timestamp before the redo log file
loss:
RECOVER DATABASE UNTIL CANCEL;
When prompted, type CANCEL to stop recovery when no more logs are available.
4. Open the Database with Resetlogs:
After incomplete recovery, open the database in resetlogs mode:
ALTER DATABASE OPEN RESETLOGS;

13.What is different about v$datafile and dba_data_files? (Imp)


A: The views V$DATAFILE and DBA_DATA_FILES in Oracle Database provide information about data
files, but they serve different purposes and contain slightly different types of information.
V$DATAFILE (Dynamic Performance View):
• Purpose: Displays the current state of data files in the database, based on
information from the control file.
• Data Source: It is a dynamic performance view (in-memory) and provides real-time
information.
DBA_DATA_FILES (Static Data Dictionary View):
• Purpose: Displays detailed information about the data files associated with
tablespaces in the database.
• Data Source: It is a static data dictionary view and reflects metadata stored in the
data dictionary.
Key Differences:

Feature | V$DATAFILE | DBA_DATA_FILES
Type of View | Dynamic performance view (real-time) | Static data dictionary view
Data Source | Control file and in-memory structures | Data dictionary tables
Scope | Focuses on physical and runtime information | Focuses on tablespace-level metadata
Status Column | Indicates runtime state (ONLINE, OFFLINE) | Indicates availability (AVAILABLE, INVALID)
File Path | Stored in NAME | Stored in FILE_NAME
Update Frequency | Reflects real-time changes | Reflects changes only after DDL operations

When to Use Each View:


1. V$DATAFILE:
• To check the current status of data files (e.g., online/offline).
• To monitor file-level activity in real time.
• Useful for troubleshooting runtime issues related to data files.
2. DBA_DATA_FILES:
• To get metadata about data files, such as size, tablespace, and AUTOEXTEND
settings.
• To plan space management for tablespaces.
• For static configuration details.
14.What are the contents of oracle inventory file and in which format does it exists?
imp
A: The Oracle Inventory file (oraInventory) contains metadata about Oracle software installed on
a system. It tracks products, patches, and Oracle homes.

Contents:
1. Installed Products: Details of all installed Oracle software.
2. Oracle Home Locations: Paths to all Oracle homes.
3. Component Details: Versions, names, and patch IDs of components.
4. Patch History: Record of patches applied.
5. Configuration Data: OS and system-specific details.
Format:
• Stored in XML format in the ContentsXML subdirectory under the oraInventory
directory.
• Key file: inventory.xml.
$ cat inventory.xml

15.You are designing a database for a client. Explain below:


1. How would you recommend the storage of datafiles, control files, redo log files etc?
A: Storage of Datafiles, Control Files, and Redo Log Files
• Datafiles: Place on high-performance storage, preferably with RAID 10 for
reliability and performance. Separate files based on tablespaces (e.g., SYSTEM, USERS).
• Control Files: Multiplex across different disks to prevent single points of
failure.
• Redo Log Files: Use at least two members per group and multiplex them on
separate disks for data recovery.
2. How will you calculate the initial size of the database?
A: Calculating Initial Database Size
• Estimate based on:
- Table sizes: Multiply estimated rows by row size.
- Indexes: Approx. 20-30% of table size.
- Temporary space: Based on expected query complexity.
- Redo Logs: Size depends on transaction volume.
Example: A 50 GB database might need 60 GB storage for growth.
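The sizing rule above can be sketched as a quick calculation; the ratios and headroom factor below are illustrative assumptions, not Oracle-prescribed values:

```python
# Hedged sketch of the sizing rule above: table data + ~25% for indexes,
# plus temp and redo allowances, plus growth headroom. All figures assumed.
def estimate_db_size_gb(table_data_gb: float,
                        index_ratio: float = 0.25,   # indexes ~20-30% of table size
                        temp_gb: float = 2.0,        # depends on query complexity
                        redo_gb: float = 2.0,        # depends on transaction volume
                        growth_factor: float = 1.2): # headroom for growth
    base = table_data_gb * (1 + index_ratio) + temp_gb + redo_gb
    return round(base * growth_factor, 1)

print(estimate_db_size_gb(40.0))  # ~64.8 GB for 40 GB of raw table data
```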
3. What backup strategy you would recommend?
A: Backup Strategy
• Enable ARCHIVELOG mode.
• Use RMAN for daily incremental backups and weekly full backups.
• Back up critical files (datafiles, control files, redo logs, SPFILE).
• Store backups both on-site and off-site.

4. On which locations you will set alerts?


A: Locations for Alerts
• Tablespace usage thresholds.
• High redo generation rates.
• Unavailable control or datafiles.
• Long-running queries or locks.
Configure alerts in Oracle Enterprise Manager (OEM) or scripts using DBMS_ALERT.

16.How many OS blocks does a DB block contain?


A: The number of Operating System (OS) blocks contained in a Database Block is determined by
the relationship between the database block size and the OS block size.
Formula:
Number of OS blocks per DB block = DB block size / OS block size
Example:
• If the Database Block Size is 8 KB and the OS Block Size is 4 KB:
8 KB / 4 KB = 2
So, 1 DB block contains 2 OS blocks.
Common Sizes:
• Database Block Sizes: Typically 2 KB, 4 KB, 8 KB, 16 KB, or 32 KB.
• OS Block Sizes: Usually 512 bytes, 2 KB, or 4 KB.
Why This Matters:
• Misalignment (e.g., DB block size not being a multiple of OS block size) can lead to
inefficient I/O operations.
• Ensure block sizes are optimized for the workload during database creation. Use
DB_BLOCK_SIZE to configure database block size.
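The ratio above reduces to a one-line calculation; a minimal sketch:

```python
# Number of OS blocks per database block = DB block size / OS block size.
# Raises an error on misalignment, which the text above warns against.
def os_blocks_per_db_block(db_block_bytes: int, os_block_bytes: int) -> int:
    if db_block_bytes % os_block_bytes != 0:
        raise ValueError("DB block size should be a multiple of the OS block size")
    return db_block_bytes // os_block_bytes

print(os_blocks_per_db_block(8192, 4096))  # 8 KB DB block / 4 KB OS block = 2
```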

17.Why data blocks are always 80% used and not 100%?
A: Data blocks in Oracle are typically designed to be 80% full rather than 100% to maintain efficient
database performance and allow for updates to existing rows.
This behavior is controlled by the PCTFREE parameter.

Reasons for 80% Utilization:


1. Space for Updates:
• When rows are updated, their size might increase (e.g., columns with VARCHAR or
LOB types).
• Reserving free space prevents row migration or chaining, which occurs when
updated rows no longer fit in the original block, leading to performance issues.
2. Efficient Block Management:
• Keeping some free space reduces contention for block-level resources, such as
latches, during updates and inserts.

18.How many types of segments are there in Oracle? How many types of objects are
there in Oracle and how are they stored in the segments?
A: Types of Segments in Oracle
Oracle Database uses segments to store database objects. There are four main types of segments:
1. Data Segments:
• Store table or cluster data.
• Each table or cluster has a single data segment.
2. Index Segments:
• Store index data for tables.
• Each index has its own segment.
3. Undo Segments:
• Store undo information for rollback and recovery operations.
• Managed by Oracle automatically in Undo tablespaces.
4. Temporary Segments:
• Used for intermediate operations like sorting or temporary tables.
• Exist only during the query’s execution and are deallocated afterward.

Types of Objects in Oracle


Oracle objects include:
• Tables
• Indexes
• Partitions
• Views
• Synonyms
• Sequences
• Stored Procedures and Functions

Non-physical objects like Views, Synonyms, and Sequences do not occupy segments as they are
logical objects.

19.How will you calculate the best data block size for a new database and propose it
to the client?
A: To calculate the best data block size, consider the workload:
• OLTP (Online Transaction Processing) systems: Use 8 KB for frequent small
transactions.
• DSS/Analytics systems: Use 16 KB or 32 KB for large queries.
Analyze average row size to determine how many rows fit per block without excessive overhead.
Match the database block size to the OS block size (e.g., 4 KB or 8 KB) for efficient I/O.
Estimate database size and growth to avoid excessive block management.
Test performance using workload simulations.
Propose block size based on workload type: 8 KB for OLTP or 16–32 KB for DSS, ensuring it aligns
with performance and scalability needs.
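To compare candidate block sizes, a rough rows-per-block estimate helps; the block overhead figure below is an assumption for illustration, not an exact Oracle value:

```python
# Rough rows-per-block estimate used when weighing block sizes.
# Usable space = (block size - overhead) reduced by PCTFREE; overhead assumed.
def rows_per_block(block_bytes: int, avg_row_bytes: int,
                   pctfree: int = 10, overhead_bytes: int = 100) -> int:
    usable = (block_bytes - overhead_bytes) * (100 - pctfree) // 100
    return usable // avg_row_bytes

print(rows_per_block(8192, 200))   # 8 KB block, 200-byte rows
print(rows_per_block(32768, 200))  # 32 KB block fits roughly 4x more rows
```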

20.What is undo retention policy? How do you estimate the undo retention policy?
A: The undo retention policy in Oracle defines how long undo data is retained in the undo
tablespace to support read consistency, flashback queries, and rollbacks. It is controlled by the
UNDO_RETENTION parameter.
To estimate the retention policy:
1. Determine the longest-running query or flashback requirement.
2. Measure the undo generation rate using the V$UNDOSTAT view.
3. Use the formula:
Undo tablespace Size = Undo generation rate × Retention period
Set retention based on workload needs (e.g., 900 seconds for OLTP, longer for analytics).
Use AUTO undo management for dynamic tuning.
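The formula above can be turned into a quick sizing sketch; the undo rate and headroom below are assumed example values, and the real rate should be measured from V$UNDOSTAT:

```python
# Undo tablespace size = undo generation rate x retention period, plus headroom.
# The 1 MB/s rate and 20% headroom are assumptions for illustration.
def undo_tablespace_bytes(undo_bytes_per_sec: float, retention_sec: int,
                          headroom: float = 1.2) -> int:
    return int(round(undo_bytes_per_sec * retention_sec * headroom))

rate = 1 * 1024 * 1024                    # assume 1 MB/s of undo generation
size = undo_tablespace_bytes(rate, 900)   # 900 s = 15 min retention
print(size // (1024 * 1024), "MB")        # -> 1080 MB
```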

21.Explain what happens during instance recovery?


A: In Oracle Database 19c, instance recovery occurs automatically after an instance failure when the
database restarts. Its purpose is to ensure the database’s consistency by applying changes that
were committed but not yet written to the data files and by rolling back uncommitted transactions.
Here’s an overview of the process:
1. Redo Log Application (Rolling Forward):
• The database applies changes from the redo logs to the data files. These changes
represent committed transactions that were not yet saved in the data files at the time of the
failure.
2. Undo Application (Rolling Back):
• The database identifies uncommitted transactions from the undo segments and
rolls them back to ensure transactional consistency.
3. Checkpoint Update:
• Once the recovery completes, the system updates the checkpoint to reflect the
most recent consistent state.

Key Mechanisms:
• Redo Logs: Oracle uses redo logs to store all changes made to the database. During
recovery, redo logs are critical for rolling forward committed transactions.
• Undo Segments: Undo segments are used to track uncommitted changes and
facilitate rolling back transactions.
Outcomes:

• Ensures the database is brought to a consistent state.


• Allows the system to resume normal operations with no manual intervention
needed.

Instance recovery is typically fast because Oracle performs it in memory using efficient
algorithms and minimizes downtime.

22.Difference between Roll-back and roll-forward?


A: In the context of Oracle databases, rollback and roll forward are two distinct processes used
during recovery and transaction management:
Rollback:
• Purpose: Undo uncommitted changes made by a transaction.
• When It Happens:
• During instance recovery: To reverse incomplete transactions.
• During user-initiated commands: When a user explicitly issues a ROLLBACK
statement.
• How It Works:
• Oracle uses undo segments to restore data to its original state before the
transaction began.
• Outcome: Ensures that uncommitted changes do not persist, maintaining data
consistency.

Roll Forward:
• Purpose: Apply committed changes from redo logs to the database files.
• When It Happens:
• During instance recovery: To recover committed changes that were not written to
data files due to an instance failure.
• During media recovery: When restoring from a backup.
• How It Works:
• Oracle applies redo records stored in redo logs to data files to bring them up to
date.
• Outcome: Ensures committed changes are not lost.

Key Differences:
Aspect Rollback Roll Forward
Primary Purpose Undo uncommitted changes Apply committed changes
Data Source Undo segments Redo logs
Use Case Transaction abort, recovery Recovery after failure
End Result Original state of data is restored Database is brought up to date
Both processes are essential for maintaining consistency and ensuring reliable database
operations.

23.What is a temporary table and how it is different from normal table?


A: A temporary table in Oracle is a type of table that stores data specific to a session or
transaction. Its data is not visible to other sessions and is automatically cleared when the session
or transaction ends.
Differences Between Temporary and Normal Tables:
Aspect | Temporary Table | Normal Table
Persistence of Data | Data persists only for the duration of a session or transaction | Data persists until explicitly deleted
Visibility | Data is private to the session; other sessions cannot see it | Data is visible to all users with appropriate access
Storage | Data is stored temporarily in temporary segments | Data is stored permanently in regular tablespaces
Automatic Cleanup | Data is cleaned up automatically at the end of the session/transaction | Data remains until deleted manually or programmatically
Use Cases | Suitable for intermediate results, staging, and sorting | Used for storing persistent application data

Key Features of Temporary Tables in Oracle


1. Defined Syntax:
• Created using CREATE GLOBAL TEMPORARY TABLE.
• E.g.,
-------------------------------------------------------------------
CREATE GLOBAL TEMPORARY TABLE temp_table (
id NUMBER,
name VARCHAR2(50)
) ON COMMIT PRESERVE ROWS;

24.Why we need to use temporary table?


A: Temporary tables are useful in Oracle and other databases for handling transient data efficiently,
especially during complex operations that do not require long-term storage. Here's why they are
needed:
1. A temporary table's data is private to the session (or transaction) and accessible only by the session that populates it.
2. This ensures data privacy and prevents conflicts between sessions.
3. It reduces I/O overhead, because changes to temporary tables generate little or no redo.
4. Useful for efficient sorting and aggregation of intermediate results.
5. Helps developers test logic with temporary data without affecting production data.
25.What is materialized view and how it is different from view and normal table? imp
A: Materialized view

A materialized view is a database object that stores the results of a query physically on disk,
unlike a regular view, which is a virtual table that retrieves data dynamically during execution.

Differences:
Aspect | Materialized View | View | Normal Table
Storage | Stores query results physically | No storage; retrieves data dynamically | Stores data permanently
Refresh | Requires manual or scheduled refresh | Always up-to-date with the base tables | No refresh needed; direct updates apply
Performance | Improves query performance for complex queries | Slower for large queries | Faster for direct reads
Use Case | Aggregates, joins, or frequently accessed data | Simplifies query complexity | Stores persistent, transactional data

26.How do you query row ID column in a table? What exactly is row ID?
A: What is a Row ID?
The Row ID is a unique identifier for each row in an Oracle database table. It represents the
physical location of a row in the database and includes the following components:
• Object Number: Identifies the table or cluster.
• File Number: Identifies the data file.
• Block Number: Indicates the block containing the row.
• Row Number: Indicates the position of the row within the block.

The Row ID is useful for locating rows quickly during internal database operations.
How to Query the Row ID Column?
SELECT ROWID, column1, column2
FROM table_name;
Use Cases of Row ID:
1. Debugging: Identify specific rows for troubleshooting.
2. Optimizing Queries: Row IDs can be used in direct access methods for faster
retrieval.
3. Row Uniqueness: Useful in applications requiring unique row identification.
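An extended ROWID is an 18-character base-64 string laid out as 6 characters of data object number, 3 of relative file number, 6 of block number, and 3 of row number. A small sketch decoding those fields (the sample ROWID is invented):

```python
# Decode the four fields of an Oracle extended ROWID (OOOOOOFFFBBBBBBRRR).
# Oracle's base-64 alphabet for ROWIDs: A-Z, a-z, 0-9, +, /
_ALPHABET = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
             'abcdefghijklmnopqrstuvwxyz'
             '0123456789+/')

def _decode(chars: str) -> int:
    value = 0
    for c in chars:
        value = value * 64 + _ALPHABET.index(c)
    return value

def decode_rowid(rowid: str) -> dict:
    return {
        'object': _decode(rowid[0:6]),   # data object number
        'file':   _decode(rowid[6:9]),   # relative file number
        'block':  _decode(rowid[9:15]),  # block number within the file
        'row':    _decode(rowid[15:18]), # row number within the block
    }

print(decode_rowid('AAAR3sAAEAAAACHAAA'))  # invented example ROWID
```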

27.How do you give partial access to a user on a table?


A: User partial access
To grant partial access to a user on a table in Oracle, you can use the GRANT statement with
specific permissions, or create views and assign permissions on the views. Here’s how you can
achieve this:
1. Column-level read access:
Note that Oracle allows column-level grants only for INSERT, UPDATE, and REFERENCES; SELECT cannot be granted on specific columns. To give read access to selected columns, create a view and grant SELECT on it:
CREATE VIEW emp_limited AS SELECT column1, column2 FROM schema_name.table123;
GRANT SELECT ON emp_limited TO GIRI;
2. Give INSERT permission on specific columns:
GRANT INSERT (column1, column2) ON schema_name.table123 TO GIRI;
schema_name: the user who owns the table, e.g. HR in HR.employees.

28.What are the table partitioning strategies used in your environment? (Imp)
A: Table partitioning divides a large table into smaller, more manageable pieces,
improving query performance and manageability.
Each partition can be managed and accessed independently while still appearing as a single table
to users.
Strategies used for partitioning: range (historical data split by time), list, hash, and composite.
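As an illustration, a range-partitioned table splitting historical data by quarter might look like this (the table and partition names are hypothetical, not from a real environment):

```sql
-- Hypothetical range-partitioned sales table, split by quarter
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2024_q1 VALUES LESS THAN (TO_DATE('2024-04-01','YYYY-MM-DD')),
  PARTITION p_2024_q2 VALUES LESS THAN (TO_DATE('2024-07-01','YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- Queries that filter on sale_date are pruned to the matching partitions:
SELECT SUM(amount) FROM sales
WHERE  sale_date >= TO_DATE('2024-04-01','YYYY-MM-DD')
  AND  sale_date <  TO_DATE('2024-07-01','YYYY-MM-DD');
```

Partition pruning like this is the main reason range partitioning suits time-based historical data.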

29.Do you recommend partitioned tables in Data Ware House or OLTP databases?
A: Partitioned tables are generally recommended for Data Warehouse (DW) environments rather
than OLTP (Online Transaction Processing) systems. In DW systems, partitioning improves query
performance by enabling efficient data pruning, particularly for large datasets involving historical
data. Partitioning by range (e.g., date) or hash can speed up scans, aggregation, and reporting.
In contrast, OLTP systems, which handle frequent small transactions, benefit less from
partitioning. The overhead of maintaining partitions and managing fine-grained indexing may
outweigh the performance benefits in OLTP, where data retrieval is typically more transactional
and smaller in scale.

30.I want to install Oracle software on 300 servers at a time. How will you do it?
A: To install Oracle software across 300 servers at once, particularly for Oracle products such as
Oracle Database, Oracle WebLogic Server, Oracle E-Business Suite, or Oracle Fusion Middleware,
an automated approach is crucial to ensure consistent deployment and reduce manual
intervention.
1. Preparation of Installation Package
Before beginning, ensure that the Oracle software packages are readily available. For Oracle
applications, prepare response files for silent installations, which automate the
setup by specifying configuration parameters like Oracle Home, installation directories, ports, and
database names.

2. Automation Tools for Deployment


Ansible, Puppet, or Chef are ideal automation frameworks to handle this scale. These tools allow
for remote execution of tasks, such as running scripts to install Oracle products across multiple
servers at the same time.
• Ansible Playbooks: For Oracle Database, you can create an Ansible playbook that
installs the Oracle software and uses the response file for silent installation. This playbook can also
be used for installing Oracle WebLogic Server or Fusion Middleware, configuring them in parallel
across all servers.
• Puppet and Chef: Both tools use their respective manifests (Puppet) or recipes
(Chef) to automate the installation process, ensuring the deployment is consistent across all
servers. Puppet’s module for Oracle or Chef’s custom recipes can be used for installation.
3. Parallel Execution:
To handle 300 servers, it’s crucial to execute tasks in parallel. Both Ansible and Puppet support
parallel execution, allowing you to manage multiple servers simultaneously. In Ansible, you can
specify the number of parallel tasks to run. This ensures that installations occur concurrently,
speeding up the deployment process.
4. Monitoring and Logging:
Centralized logging is vital for troubleshooting and tracking the installation process. Oracle’s
Enterprise Manager or open-source tools like Nagios can be used to monitor the status of Oracle
products such as Oracle E-Business Suite or Oracle Database during installation. These tools help
track errors, warnings, and other log outputs.
• Oracle Enterprise Manager can also manage ongoing tasks like patches, upgrades,
and database monitoring post-installation.
• For logging, use centralized log servers (e.g., ELK Stack or Splunk) to aggregate logs
and quickly detect any issues across all servers.
5. Post-Installation Configuration
Once the installation is complete, you may need to configure Oracle products. For instance:
• Oracle WebLogic Server or Fusion Middleware configurations might require setting
up clusters or configuring the environment based on the product.
• Oracle E-Business Suite will need application-level configuration after the base
installation, such as configuring instance managers and database connections.
6. Oracle Cloud Infrastructure
If deploying on Oracle Cloud Infrastructure (OCI), you can use Oracle Cloud Automation tools like
Terraform to automate provisioning and configuration across multiple instances. OCI also provides
pre-built Oracle VM templates for quick deployment of Oracle products.
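The automation described in steps 2-3 can be sketched as a minimal Ansible playbook; the host group, staging paths, batch size, and response file name below are assumptions for illustration, not part of any specific environment:

```yaml
# Minimal Ansible playbook sketch for parallel silent Oracle installs.
# Host group, staging paths, and response file name are assumptions.
- hosts: oracle_servers
  become: true
  serial: 50                      # install on 50 servers per batch
  tasks:
    - name: Stage Oracle installer and response file
      copy:
        src: /staging/oracle_db/
        dest: /u01/stage/

    - name: Run silent installation with the response file
      command: >
        /u01/stage/runInstaller -silent
        -responseFile /u01/stage/db_install.rsp
        -waitforcompletion
```

The serial keyword controls how many hosts run concurrently, which is how the parallel execution in step 3 is throttled to what the network and staging server can handle.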

31. My client recommends putting undo and temp on auto-extend ON. What do you have to say
about it?
A: Enabling auto-extend for undo and temp tablespaces can be useful in certain environments, but
it should be approached with caution.
1. Undo Tablespace: Auto-extend helps prevent ORA-30036 (unable to extend
segment in undo tablespace) errors, especially in systems with unpredictable transaction volumes. However, setting an
unlimited auto-extend may lead to unmonitored space growth, potentially filling up disk storage
and impacting system performance. It's better to set a reasonable MAXSIZE limit, allowing
alerting mechanisms to monitor space usage.
2. Temporary Tablespace: Auto-extend is often recommended to absorb temporary
space demand from large sorts or queries and to avoid ORA-01652 (unable to extend temp segment). However, constant space extension can mask runaway queries and
complicate performance tuning. Proper monitoring and regular cleanup should accompany its
use.
In both cases, careful monitoring is key. Ensure sufficient disk space and performance
considerations are made.
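A bounded auto-extend configuration, as recommended above, can be set like this (the file paths and sizes are illustrative, not from a real system):

```sql
-- Cap growth with MAXSIZE instead of leaving auto-extend unlimited
ALTER DATABASE DATAFILE '/u01/oradata/PROD/undotbs01.dbf'
  AUTOEXTEND ON NEXT 512M MAXSIZE 32G;

ALTER DATABASE TEMPFILE '/u01/oradata/PROD/temp01.dbf'
  AUTOEXTEND ON NEXT 512M MAXSIZE 16G;
```

With a MAXSIZE in place, space alerts fire before the filesystem fills, which is the safeguard the answer above recommends.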

32.Explain about tablespace utilization process in your environment?


A: In our environment, tablespace utilization is closely monitored to ensure optimal database
performance. The following process is used:
1. Regular Monitoring: We use tools like Oracle Enterprise Manager and custom
scripts to track tablespace usage, checking for high usage or potential fragmentation.
2. Threshold Alerts: Alerts are configured for tablespace usage at 80-90% capacity,
allowing proactive intervention before critical thresholds are reached.
3. Space Management: When tablespaces approach their limits, we add datafiles or
resize existing ones to prevent space errors such as ORA-01653 (permanent tablespaces) and ORA-01652 (temp segments). For undo or temporary tablespaces, auto-
extend is monitored carefully to avoid uncontrolled growth.
4. Maintenance: Routine tablespace clean-up and data file compaction are performed,
ensuring efficient space utilization.
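A typical monitoring query for step 1 might look like the following sketch against the data dictionary (the column formatting is illustrative):

```sql
-- Per-tablespace usage from the data dictionary
SELECT df.tablespace_name,
       ROUND(df.total_mb)                        AS total_mb,
       ROUND(df.total_mb - NVL(fs.free_mb, 0))   AS used_mb,
       ROUND((df.total_mb - NVL(fs.free_mb, 0))
             / df.total_mb * 100, 1)             AS pct_used
FROM  (SELECT tablespace_name, SUM(bytes)/1048576 AS total_mb
       FROM dba_data_files GROUP BY tablespace_name) df
LEFT JOIN
      (SELECT tablespace_name, SUM(bytes)/1048576 AS free_mb
       FROM dba_free_space GROUP BY tablespace_name) fs
  ON df.tablespace_name = fs.tablespace_name
ORDER BY pct_used DESC;
```

Scheduling a query like this and alerting when pct_used crosses 80-90% implements the threshold alerts described in step 2.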

33.Client is asking for sysdba access. What is the command to give access?
A:
To grant SYSDBA access to a user in Oracle, you can execute the following SQL command as an
Oracle SYS user:

GRANT SYSDBA TO GIRI;

After running this command, the user will have SYSDBA privileges, allowing them to perform
administrative tasks such as database startup, shutdown, and access to all objects within the
database.
Important: Be cautious while granting SYSDBA privileges as it provides full administrative access
to the database.

Backup Channels:
34.My database size is 25 TB. How many channels will you allocate in RMAN
command?
A: When performing backups on a large 25 TB database using RMAN, the number of channels
allocated depends on the system’s hardware capabilities, disk I/O performance, and the specific
backup method (e.g., full, incremental). A good starting point is to allocate multiple channels to
speed up the process and improve throughput. Here’s how I would approach it:

1. Default Allocation: RMAN typically uses a default of one channel for each backup
device. However, for large databases like 25 TB, you can increase this number to improve
performance.
2. Consider Disk I/O and Network Throughput: Based on the environment’s disk I/O
capabilities and network throughput, allocate around 4-8 channels for each device (disk or tape) in
a typical configuration. For a 25 TB database, it might be reasonable to use 8-16 channels for disk
backups, depending on available resources.
3. Resource Allocation: Ensure that you have sufficient resources (CPU, memory, disk
throughput) to handle the increased load. Monitor the backup process and adjust the number of
channels as needed.
4. Testing and Tuning: Start with a smaller number of channels (e.g., 4) and
progressively increase, testing the performance impact. RMAN provides a CONFIGURE CHANNEL
command to adjust the settings.

For example, to set disk parallelism to 8 (the format path is illustrative; note that parallelism is set on the device type, not via a MAXCHANNELS clause):

CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/path/to/backup/%U';

This approach balances performance and resource usage. Always monitor the backup performance
and adjust channels accordingly.

35.Differentiate between automatic channels and manual channels.


A: In Oracle RMAN, Automatic Channels are pre-configured by the DBA using a CONFIGURE
CHANNEL command, making them reusable for all backup and restore operations without
redefining them each time. They ensure consistency and save time by standardizing settings like
device type, location, or parallelism.
Manual Channels, on the other hand, are defined explicitly within an RMAN command block using
the ALLOCATE CHANNEL command. They are temporary, applying only to the specific operation
where they are configured.
Automatic channels are ideal for routine tasks, while manual channels provide flexibility for one-
off or customized operations. Both help manage I/O efficiently.
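The two styles side by side (backup paths are illustrative):

```
# Automatic channels: persistent settings reused by every backup
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/auto_%U';

BACKUP DATABASE;   # uses the configured automatic channels

# Manual channels: scoped to this RUN block only
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/backup/c1_%U';
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT '/backup/c2_%U';
  BACKUP DATABASE;
}
```

Once the RUN block ends, the manual channels are released and the persistent CONFIGURE settings apply again.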

RMAN
36.What is backup optimization on in RMAN? Do you recommend it to be enabled?
(IMP)
A: Backup Optimization in RMAN prevents redundant backups of files that haven’t changed since
the last backup. When enabled (CONFIGURE BACKUP OPTIMIZATION ON), RMAN skips backing up
files already present in the backup media if:
• The file exists in the same recovery window.
• No changes were made to the file since the last backup.
• It matches a pre-existing backup in a configured backup location.
Recommendation:
• Enable it for redundancy and recovery window-based backups, especially for large
databases, to save storage and reduce backup time.
• Disable it when ensuring a full backup set is critical, such as before major database
changes or migrations.

It depends on your backup strategy and business requirements.

37.I want to configure tape backups. How will you configure tape with RMAN?
A: To configure tape backups with RMAN, follow these steps:

1. Install and Configure Media Management Library (MML):


• Ensure the tape device is connected and install a compatible MML software (e.g.,
Oracle Secure Backup, Veritas NetBackup).
• Verify the MML by allocating a test SBT channel (the library path is site-specific):
----------------------------------------------
RUN { ALLOCATE CHANNEL t1 DEVICE TYPE SBT
      PARMS 'SBT_LIBRARY=/path/to/libobk.so';
      RELEASE CHANNEL t1; }
-----------------------------------------------
2. Set the SBT_TAPE Channel:
• Configure RMAN to use the SBT_TAPE device type:
----------------------------------------------------------------
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
-----------------------------------------------------------------
3. Specify Tape Settings (Optional):
• Configure parallelism for multiple tape drives:
-----------------------------------------------------------------------
CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 4;
-----------------------------------------------------------------------------------------
• Define default retention or pool settings if MML supports them.
4. Test the Setup:

• Run a test backup to tape:


-----------------------------------------------------------------
BACKUP DEVICE TYPE SBT_TAPE DATABASE;
------------------------------------------------------------------
5. Monitor Backups:

• Use MML logs and RMAN logs to verify successful tape operations.

38.Difference between expired backups and obsolete backups?


A: Expired Backups and Obsolete Backups in RMAN are different concepts related to the status of
backups:
Expired Backups

• Definition: A backup is marked expired when RMAN cannot find the physical backup
file during a CROSSCHECK operation.
• Cause: The file was manually deleted, moved, or became inaccessible.
• Identification:
----------------------------------------------------
RMAN> LIST EXPIRED BACKUP;
---------------------------------------------------------
• Action: Either remove the backup entry with DELETE EXPIRED BACKUP or
restore/move the missing file.

Obsolete Backups:
• Definition: A backup is marked obsolete when it is no longer needed to satisfy the
retention policy (e.g., recovery window or redundancy).
• Cause: Defined by policies, not physical file status.
• Identification:
---------------------------------------------
RMAN> REPORT OBSOLETE;
----------------------------------------------
• Action: Use DELETE OBSOLETE to clean up obsolete backups.

Key Difference

• Expired: The file is missing or inaccessible.


• Obsolete: The file exists but is no longer needed based on retention policies.

39.What is database incarnation? What happens when database goes into new
incarnation?
A: Database Incarnation in Oracle refers to a version of the database with a specific set of system
change numbers (SCN) and a unique incarnation ID. The incarnation tracks the history of the
database as it evolves over time, especially in scenarios involving database recovery, resetlogs, or
incomplete recovery.

When a Database Goes into a New Incarnation:

1. Creation of New Incarnation ID:


When you perform a RESETLOGS after a point-in-time recovery or incomplete recovery, Oracle
creates a new incarnation for the database. This new incarnation starts with a new SCN and a new
incarnation ID.
2. Impact of RESETLOGS:
After a RESETLOGS operation, the database’s control file is updated, and the current incarnation
starts fresh, while the previous incarnations remain in the recovery catalog or control file history.
3. Recovery Considerations:
When recovering a database, RMAN will use the incarnation to determine which backups and
archived logs belong to the correct incarnation of the database. If you’re recovering a database to
a point before a RESETLOGS, you'll need to reset the database to the correct incarnation using the following
commands:
--------------------------------------------------------------------------------------
RMAN> SET DBID <DBID>;
RMAN> LIST INCARNATION;
RMAN> RESET DATABASE TO INCARNATION <INCARNATION_KEY>;
-----------------------------------------------------------------------------------
Example Scenario:
• After a RESETLOGS during point-in-time recovery, the database enters a new
incarnation. The prior incarnations are still available for reference, but any new backups taken
after the reset will belong to the new incarnation.

40.How do I sync recovery catalog with my target database?


A: To synchronize the recovery catalog with your target database, use the RMAN
RESYNC CATALOG command, which updates the catalog from the target database's control file.
RMAN also performs an automatic resync during most backup and maintenance operations.
Steps to Sync Recovery Catalog:

1. Connect to the Target Database and Recovery Catalog:

RMAN> CONNECT TARGET target_db;


RMAN> CONNECT CATALOG catalog_user/catalog_password@catalog_db;

2. Resynchronize the Catalog:


Use the RESYNC CATALOG command to synchronize the recovery catalog with the target database:

RMAN> RESYNC CATALOG;
This command updates the recovery catalog with the latest information about the target
database's backups, archived logs, and other related metadata.

3. Verify Synchronization:
You can verify the synchronization by checking the backup status:

RMAN> LIST BACKUP;


The RESYNC CATALOG command ensures the catalog reflects any changes or updates in the target
database's backup state, especially after performing backup, restore, or recovery operations.

41.Tell me the process of recovery catalog creation with commands?


A: To create a recovery catalog for RMAN, follow these steps:
1. Create the Catalog Owner (on the catalog database; this assumes a dedicated
tablespace, e.g. rman_cat, already exists):
sqlplus sys/password@catalog_db as sysdba
CREATE USER catalog_user IDENTIFIED BY catalog_password
  DEFAULT TABLESPACE rman_cat QUOTA UNLIMITED ON rman_cat;
GRANT RECOVERY_CATALOG_OWNER TO catalog_user;
2. Connect to the Catalog:
RMAN> CONNECT CATALOG catalog_user/catalog_password@catalog_db;
3. Create the Recovery Catalog:
RMAN> CREATE CATALOG;
4. Register the Target Database:
RMAN> CONNECT TARGET target_db;
RMAN> REGISTER DATABASE;
5. Verify Catalog:
RMAN> LIST BACKUP;
This process creates the recovery catalog and registers the target database for backup
management.

42.Recovery catalog server is down. How will handle the failed backups for an
environment with 1000 databases?
A: If the recovery catalog server is down and backups are failing for a large environment, you can
handle the situation by:
1. Use of Control File for Backup Metadata:
RMAN will store backup metadata in the control file of each database during the downtime.
Ensure control file autobackup is enabled.
2. Reschedule Backup:
Once the catalog server is back online, use RMAN’s RESYNC command to synchronize all databases
with the catalog:

RMAN> CONNECT CATALOG catalog_user/catalog_password;


RMAN> RESYNC CATALOG;

3. Restore Catalog:
If required, restore the catalog from backup to ensure consistency across all databases.
4. Monitor Backups:
Continue monitoring backups and verify that metadata is synchronized once the catalog is back.

43.When will you recommend cumulative over differential backups?


A: I would recommend using cumulative backups over differential backups in the following
scenarios:
1. Faster Recovery: Cumulative backups include all changes since the last full backup,
reducing the number of backups needed during recovery. This can speed up recovery times,
especially in large databases.
2. Lower Management Overhead: Cumulative backups are easier to manage because
you only need to restore the last full backup and the latest cumulative backup, reducing
complexity.
3. Stable Change Rates: When database changes are consistent, cumulative backups
provide a more efficient solution by capturing all changes, unlike differential backups, which may
grow in size over time.


44.What is fractured block?


A: A fractured block is a data block whose head and tail were captured at different points in time —
for example, when the operating system copies a block while DBWR is in the middle of writing it
during a user-managed hot backup. The copy is internally inconsistent. Oracle protects against this:
in BEGIN BACKUP mode, the first change to each block logs a full block image to the redo stream,
so recovery can replace a fractured copy with a consistent version. RMAN avoids the problem
entirely by re-reading a block until it obtains a consistent copy.

45.What happens when you put DB in Begin backup mode?


A: BEGIN BACKUP mode (ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP)
is used for user-managed hot backups, not RMAN — RMAN does not require it. When enabled:
1. Checkpoint SCN Frozen: The checkpoint SCN in the datafile headers is frozen so
recovery knows where to begin applying redo. The datafiles themselves remain fully
writable, and DML continues normally while the OS-level copy runs.
2. Extra Redo Generation: The first change to each block after BEGIN BACKUP writes the
full block image to the redo logs, protecting against fractured blocks copied mid-write.
3. Consistent Recovery: Because all changes are captured in redo, the copied datafiles can
be made consistent during recovery even though they were copied while in use.
To end the backup, use ALTER DATABASE END BACKUP (or END BACKUP per tablespace).

46.What is database health check? How do you perform health check?


A: A database health check is a proactive assessment of a database’s performance, availability, and
integrity to ensure optimal functioning. It identifies potential issues, performance bottlenecks, and
configuration problems.
Steps to Perform a Health Check:

1. Instance Status: Verify database up/down status and check alert logs for errors.
2. Performance: Monitor CPU, memory, and I/O usage using tools like AWR, ADDM, or
OEM.
3. Datafile and Tablespace Usage: Check for sufficient free space.
4. Backup Validation: Ensure backups are recent and restorable.
5. Redo and Undo Logs: Monitor log usage and archiving.
6. Security: Check user privileges and audit settings.

Regular health checks prevent downtime and improve efficiency.
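A couple of the checks from steps 1 and 4 can be scripted as a quick sketch:

```sql
-- Step 1: instance and database status
SELECT instance_name, status, database_status FROM v$instance;

-- Step 4: most recent completed RMAN backup
SELECT MAX(end_time) AS last_completed_backup
FROM   v$rman_backup_job_details
WHERE  status = 'COMPLETED';
```

Wrapping queries like these in a scheduled script with alerting turns the manual checklist into a repeatable daily health check.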

47.How do you calculate the archivelog backup frequency and schedule in crontab?
A: To calculate archivelog backup frequency:
1. Monitor Archive Log Generation: Check archive log size and generation rate using
queries like:

SELECT COUNT(*) FROM V$ARCHIVED_LOG WHERE FIRST_TIME > SYSDATE - 1;

2. Assess Space Availability: Ensure sufficient disk space and plan backups before it
fills up.
3. Schedule in Crontab: Use RMAN scripts and cron to automate backups. For
example:
• Create an RMAN script (arch_backup.rman):

RUN { BACKUP ARCHIVELOG ALL DELETE INPUT; }

• Add to crontab (backup every 2 hours):


----------------------------------------------------------------------------------------------------------
0 */2 * * * rman target / cmdfile=arch_backup.rman log=arch_backup.log
--------------------------------------------------------------------------------------------------------------

Adjust frequency based on log generation and retention policies.
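To pick a sensible frequency, it helps to profile the hourly generation volume first; a sketch (add a DEST_ID filter if logs are written to multiple destinations):

```sql
-- Archive log count and volume per hour over the last 24 hours
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24')    AS hour,
       COUNT(*)                                  AS logs,
       ROUND(SUM(blocks * block_size)/1048576)   AS mb
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;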

48.Explain RMAN restore / recover cloning process with steps?


A: RMAN Restore/Recover Cloning Process:
1. Prepare Environment:
• Set up the target and clone database environments, ensuring the clone has the
same version and patch level.
2. Restore Control File:
• On the clone server, use RMAN to restore the control file from the backup:
RMAN> RESTORE CONTROLFILE FROM '/path/to/backup/controlfile.bkp';
3. Restore Datafiles:
• Restore the datafiles from the backup to the clone environment.
RMAN> RESTORE DATABASE;
4. Recover Database:
• Apply any missing archived logs to synchronize the clone database.
RMAN> RECOVER DATABASE;
5. Rename the Database (if needed):
• If renaming, update the database name and parameters.
6. Open the Database:
• Open the database with RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;
7. Post-Cloning Steps:
• Update configuration files, reset passwords, and perform any post-cloning tasks like
statistics gathering.
This process creates an exact clone of the original database for testing or other purposes.

49.How do you rename a database post cloning?


A: To rename a database after cloning, the supported method is the DBNEWID utility (nid), which
updates the database name (and optionally the DBID) in the control files and datafile headers:
1. Mount the Database
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
2. Run the DBNEWID Utility (from the OS prompt)
nid TARGET=sys/password DBNAME=new_db_name
(Add SETNAME=YES to change only the name and keep the existing DBID.)
3. Update the Initialization Parameter
After nid shuts the instance down, set the new name in the spfile:
STARTUP MOUNT;
ALTER SYSTEM SET DB_NAME='new_db_name' SCOPE=SPFILE;
4. Recreate the Password File
orapwd file=$ORACLE_HOME/dbs/orapwnew_db_name password=secret
5. Open the Database
If the DBID was changed, the database must be opened with RESETLOGS:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN RESETLOGS;
6. Post-Rename Tasks
Update listener.ora/tnsnames.ora entries and recompile invalid objects:
@$ORACLE_HOME/rdbms/admin/utlrp.sql

This process renames the cloned database while keeping its contents consistent and intact.

50.Explain RMAN Duplicate and difference between RMAN Duplicate cloning for new
database and cloning for physical standby? (INQ)
A: RMAN Duplicate creates a copy of the target database, which can be used for various purposes
like testing or creating a standby.
Difference between RMAN Duplicate for New Database and Physical Standby:
New Database:
• Used to create an independent copy of the target database.
• Involves full database restoration and recovery.
• Requires unique names, DBID, and file locations.
Physical Standby:
• Clones the target to create a standby database.
• Syncs with the primary through Redo Apply (MRP) after cloning.
• The database remains in a read-only state until activated.

Both use similar RMAN commands, but the physical standby is continuously synchronized with the
primary.
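The two variants in command form (connection setup omitted; the database name is illustrative):

```
# Independent copy with a new DBID
RMAN> DUPLICATE TARGET DATABASE TO newdb FROM ACTIVE DATABASE;

# Physical standby: keeps the primary's DBID so Redo Apply can run
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER;
```

The FOR STANDBY clause is what preserves the primary's DBID, which is required for the standby to receive and apply the primary's redo.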

51.I started the cloning and it failed in between. What should I do now? (IMP)
A: If the cloning process fails midway, follow these steps:
1. Check Logs: Review the RMAN or database logs for specific error messages to
identify the cause.
2. Restore Backup: If the cloning process used backups, restore the database from the
last successful backup.
3. Resolve Issues: Fix the underlying issue (e.g., disk space, connectivity, or
permissions).
4. Cleanup: Remove any partial files or incomplete clones from the target system to
avoid conflicts.
5. Resume Cloning: Restart the cloning process using the appropriate RMAN
commands, ensuring all prerequisites are met.
6. Verify: After successful cloning, verify the database status, data integrity, and
configurations.
Always test the cloning process in a non-production environment first to minimize issues.

52.Difference between MRP and LSP? (IMP)


A: MRP (Managed Recovery Process) and LSP (Logical Standby Process) are both Oracle Data
Guard apply processes, but they serve different standby types:
1. MRP (Managed Recovery Process):
• Runs on physical standby databases (Redo Apply).
• Applies redo received from the primary block-for-block, keeping the standby an exact
physical copy of the primary.
• Can apply in real time (with standby redo logs) or from archived logs.
2. LSP (Logical Standby Process):
• Runs on logical standby databases (SQL Apply).
• Mines the received redo into equivalent SQL statements and executes them on the
standby.
• The logical standby remains open read-write for reporting, and its physical structure
may differ from the primary.

Key Difference: MRP performs Redo Apply on a physical standby, whereas LSP performs SQL Apply
on a logical standby.

53.There is a GAP of 1000 archives in my standby. How will you resolve it?
A: To resolve a gap of 1000 archive logs in your standby, follow these steps:
1. Identify the Missing Archives:
• On the standby, check the gap:

SQL> SELECT MIN(SEQUENCE#), MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DEST_ID=1;


• Compare this with the archive log sequence on the primary.

2. Check Archive Log Availability on Primary:


• Verify the missing logs exist on the primary using:

SQL> SELECT * FROM V$ARCHIVED_LOG WHERE SEQUENCE# BETWEEN <start_seq> AND <end_seq>;

3. Manually Transfer Missing Logs:


• If the logs are available on the primary, manually transfer them to the standby’s
archive destination. You can use scp or rsync:
--------------------------------------------------------------------------------------------------------
scp /primary/archive_log_dir/* [standby_host]:/standby/archive_log_dir/
----------------------------------------------------------------------------------------------------------

4. Register and Apply Missing Logs on Standby:

• On the standby, register each copied log so the control file knows about it, then restart
recovery:

SQL> ALTER DATABASE REGISTER LOGFILE '/standby/archive_log_dir/<log_name>';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

5. Verify Synchronization:
• Ensure the gap is resolved by querying v$managed_standby and checking the
applied logs.

If the missing logs are no longer available, consider performing a point-in-time recovery (PITR)
or an RMAN restore from backup to synchronize.
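When the archives are truly gone, an SCN-based incremental backup from the primary can roll the standby forward without a full rebuild; a sketch (the SCN value and paths are illustrative):

```
# On the primary: back up all changes since the standby's current SCN
# (get the SCN from V$DATABASE.CURRENT_SCN on the standby)
RMAN> BACKUP INCREMENTAL FROM SCN 123456 DATABASE FORMAT '/backup/stby_%U';

# On the standby: catalog the copied pieces and recover without redo
RMAN> CATALOG START WITH '/backup/stby';
RMAN> RECOVER DATABASE NOREDO;
```

After the NOREDO recovery, restarting managed recovery resumes normal redo apply from the new SCN.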

54.I want my standby to run 4 hours behind the primary server. How can I achieve it?
A: To make the standby apply redo 4 hours behind the primary, use the Data Guard DELAY
attribute. DELAY is specified in minutes (4 hours = 240) and postpones redo application only —
logs are still shipped to the standby immediately, so data protection is not reduced.

1. Set DELAY on the Primary's Archive Destination:
-------------------------------------------------------------------------------------------------------------------------------
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby ASYNC DELAY=240
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby';
-----------------------------------------------------------------------------------------------------------------------------------
2. Start Managed Recovery on the Standby:
The delay set at the destination is honored automatically:
--------------------------------------------------------------------------------------------------------------------------------
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
--------------------------------------------------------------------------------------------------------------------------------
Alternatively, specify the delay explicitly on the standby side:
--------------------------------------------------------------------------------------------------------------------------------
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 240 DISCONNECT;
---------------------------------------------------------------------------------------------------------------------------------
Important: do not enable real-time apply (USING CURRENT LOGFILE) here, because real-time
apply ignores the DELAY attribute. Note also that LOG_ARCHIVE_LAG_TARGET does not delay the
standby — it only forces periodic log switches on the primary.

3. Monitor and Verify:

• Use v$archive_dest on the primary and v$managed_standby / v$archived_log on the
standby to confirm that logs arrive immediately but are applied roughly 4 hours later.

This configuration ensures the standby database applies redo 4 hours behind the primary server.

55.How to improve the performance of MRP process?


A: To improve the performance of the Managed Recovery Process (MRP) on a standby database:
1. Optimize Resources: Allocate sufficient CPU, memory, and fast storage for redo log
operations.
2. Use Direct I/O: Enable Direct I/O for faster writes to archive logs.
3. Multiple Standby Redo Logs: Configure multiple redo log groups to minimize
contention and improve recovery speed.
4. Network Optimization: Ensure dedicated network bandwidth for log shipping
between primary and standby.
5. Monitor MRP: Regularly check v$managed_standby and v$archive_dest for
performance issues.
6. Appropriate Log File Sizing: Size online and standby redo logs appropriately; overly small logs cause frequent switches, while standby redo logs with real-time apply let redo be applied as it arrives.
These steps enhance MRP performance and reduce lag.

56.The client does not want to spend on active data guard license. What will you
recommend?
A: If the client does not want to spend on an Active Data Guard license, consider the following
alternatives:
1. Physical Standby with Managed Recovery:
• Use a Physical Standby database with Managed Recovery Mode (MRP) for data
redundancy without real-time read-write capabilities.
• This setup can provide disaster recovery with lower costs compared to Active Data
Guard.
2. Data Guard with Delayed Apply:
• Implement a delayed apply mode on a physical standby, where there’s a lag (e.g., 4
hours) in applying logs, reducing potential data corruption risks but still providing near real-time
replication.
3. Oracle GoldenGate:
• If replication of data across databases is required, Oracle GoldenGate offers real-time
data integration and replication. Note, however, that GoldenGate is separately licensed and
typically costs more than Active Data Guard, so it fits only when its replication features are
needed anyway.
4. Regular Backup and Recovery Strategy:
• Implement a robust backup and recovery strategy using RMAN for disaster
recovery, avoiding the need for synchronous data replication.

These solutions reduce costs while ensuring data availability and recovery.

57.What happens when you convert physical standby to snapshot standby?


A: Converting a physical standby to a snapshot standby opens the standby database in
read-write mode for reporting, testing, or development without affecting the primary. The
standby continues to receive redo from the primary, but does not apply it while in snapshot
mode.

During the conversion:

1. Redo apply (MRP) stops; redo from the primary is still received and archived.

2. An implicit guaranteed restore point is created, preserving the standby's state before
any read-write changes.
3. When converted back to a physical standby, all local changes are discarded by flashing
back to that restore point, and the accumulated redo is then applied to resynchronize with the
primary.

This provides flexibility, but the standby's apply lag grows for as long as it remains in snapshot mode.

58.In which conditions you will recommend Oracle Data Guard to client?
A: I would recommend Oracle Data Guard to a client in the following conditions:

1. High Availability (HA): When the client requires a disaster recovery solution with
minimal downtime, ensuring continuous database availability.
2. Data Protection: For protecting against data loss with real-time or near-real-time
replication of changes to a standby database.
3. Offload Reporting: When the client needs to offload reporting and read-only
queries to a standby database without impacting the primary system.
4. Geographical Redundancy: To provide geographically distributed standby databases
for disaster recovery and improved data resilience.

These conditions help maintain business continuity and improve overall system reliability.

59.How will you verify that the standby is in sync with primary?
A: To verify that the standby is in sync with the primary:

• Check Log Shipping Status:


On primary: SHOW PARAMETER LOG_ARCHIVE_DEST to confirm log shipping
configuration.
On standby: SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST_STATUS; to verify
log application status.
• Check Redo Apply Status:
On standby: SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY
SEQUENCE# DESC; to verify all logs are applied.
• Check Managed Recovery Process (MRP) Status:
On standby: SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY; to ensure MRP is
running and logs are being applied.
• Check for Archive Gap:
On standby: SELECT * FROM V$ARCHIVE_GAP; to ensure no archive log gaps exist.
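
The checks above can be consolidated into a quick sync report. This is a sketch assuming SYSDBA SQL*Plus sessions on each side:

```sql
-- On the primary: last archived log sequence per thread
SELECT thread#, MAX(sequence#) AS last_archived
FROM   v$archived_log
GROUP  BY thread#;

-- On the standby: last applied log sequence per thread
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;

-- The standby is in sync when last_applied equals (or is one behind) last_archived.
```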

60.What is guaranteed restore point? How it is different from restore point?


A: A Guaranteed Restore Point is a restore point that forces the database to retain the flashback
logs needed to flash back to that exact SCN. It works even if flashback database logging is
otherwise disabled, is never aged out of the control file, and persists until it is explicitly
dropped, at the cost of space in the fast recovery area.
A normal Restore Point is simply a user-defined name for an SCN, marking a point in time for
potential recovery. It does not force retention of flashback logs and can be aged out of the
control file, so flashing back to it is not guaranteed.
Key Difference: A Guaranteed Restore Point guarantees the ability to flash back to it, while a
regular Restore Point does not guarantee that the required logs are still available.
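
As a sketch of the syntax (the restore point names are illustrative; a guaranteed restore point needs ARCHIVELOG mode and a fast recovery area):

```sql
-- Create a guaranteed restore point; works even if flashback logging is off:
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- A normal restore point, by contrast, is just a named SCN marker:
CREATE RESTORE POINT quick_marker;

-- List restore points and see which are guaranteed:
SELECT name, guarantee_flashback_database, scn, time
FROM   v$restore_point;

-- Drop it when no longer needed (guaranteed restore points hold FRA space):
DROP RESTORE POINT before_upgrade;
```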

61.Explain the difference between flashback database and database PITR?


A: Flashback Database:
• Rewinds the entire database to a specific point in time or SCN.
• Uses flashback logs (plus redo) to back out changes, without restoring datafiles.
• Faster recovery for recent mistakes or user errors.
• Requires flashback logging (or a guaranteed restore point) to be enabled in advance.
Database PITR (Point-in-Time Recovery):
• Recovers the database to a specific time or SCN.
• Typically used when the database is corrupted or data is lost.
• Involves restoring backups and applying archived redo logs.
• More time-consuming than Flashback Database.
Difference: Flashback Database is quicker and simpler for recent errors, while PITR involves
restoring backups and is useful for more severe recovery situations.
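
A minimal sketch of each approach, assuming SYSDBA access and that flashback logging or backups are in place; the timestamp is a placeholder:

```sql
-- Flashback Database (fast, no datafile restore):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP
  TO_TIMESTAMP('2024-01-15 09:00:00', 'YYYY-MM-DD HH24:MI:SS');
ALTER DATABASE OPEN RESETLOGS;

-- Database PITR with RMAN (restores datafiles, then applies redo):
-- RMAN> run {
--   set until time "to_date('2024-01-15 09:00:00','YYYY-MM-DD HH24:MI:SS')";
--   restore database;
--   recover database;
-- }
-- RMAN> alter database open resetlogs;
```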

62.Explain about how upgrades are done in your environment?


A: Upgrades in my environment typically follow a structured process:

1. Pre-Upgrade Activities:
• Review the upgrade documentation for compatibility and requirements.
• Perform health checks on the database and applications.
• Backup the database and configuration files.
• Upgrade testing in a non-production environment.
2. Upgrade Process:
• Install the new Oracle version or Oracle E-Business Suite version.
• Run the Database Pre-Upgrade Information Tool to check prerequisites.
• Apply the required patches and perform database schema upgrades.
• Run DBUA (Database Upgrade Assistant) for database upgrade or use manual
upgrade methods for EBS.
3. Post-Upgrade Activities:
• Verify application functionality.
• Check database performance and logs for issues.
• Reconfigure backups and perform test restores.
• Update monitoring and alerting systems.

The process is carefully planned to minimize downtime and ensure a smooth transition.

63.What are the pre-upgrade steps you follow?


A: Pre-Upgrade Activities:
• Review the upgrade documentation for compatibility and requirements.
• Perform health checks on the database and applications.
• Backup the database and configuration files.
• Upgrade testing in a non-production environment.

64.Explain the process of upgrading from 11g to 12c version?


A: Upgrading from Oracle 11g to 12c involves the following steps:

1. Pre-Upgrade Preparation:
• Check compatibility of hardware, software, and applications with 12c.
• Backup the 11g database.
• Review Oracle 12c documentation for new features and changes.
• Run the Database Pre-Upgrade Information Tool.
2. Install Oracle 12c:
• Install Oracle 12c software on the target system.
• Configure Oracle 12c environment.
3. Database Upgrade:
• Run DBUA (Database Upgrade Assistant) to upgrade the database, or use manual
upgrade with catctl.pl.
• Apply required patches after the upgrade.
4. Post-Upgrade:
• Verify the database functionality.
• Recompile invalid objects.
• Check for performance issues.
• Update backups and perform a test restore.

The process ensures a seamless upgrade with minimal downtime.

65.What are the post upgrade OR post patching steps in oracle?


A: Post-upgrade steps in Oracle typically include:
1. Verify Database Functionality:
• Check that all applications and services are running correctly.
• Run basic database checks (e.g., SELECT * FROM v$version to verify the new
version).
2. Recompile Invalid Objects:
• Run utlrp.sql to recompile invalid PL/SQL objects.
3. Update Database Configuration:
• Review and update initialization parameters for the new version.
• Adjust any configuration changes required for performance improvements.
4. Run Health Checks:
• Perform a database health check using tools such as ORAchk/AHF or Oracle
Enterprise Manager.
5. Backup the Upgraded Database:
• Take a full backup of the upgraded database for recovery purposes.
6. Test Backup and Recovery:
• Perform a test restore to verify backup integrity and recovery procedures.
7. Monitor Performance:
• Use AWR, ASH, and other performance tools to monitor and address any issues.
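
Steps 1-2 above can be sketched as follows, assuming a SYSDBA SQL*Plus session:

```sql
-- Confirm the new version:
SELECT banner FROM v$version;

-- Count invalid objects before and after recompilation:
SELECT owner, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner;

-- Recompile invalid objects (script shipped with the database):
@?/rdbms/admin/utlrp.sql
```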

Typical Interview Questions for people projecting 5+ years of experience:


====================================================================================

• Application team reports a query is running slow. What will you do?


A: To address a slow-running query, I would follow these steps:

1. Check Execution Plan:


Use EXPLAIN PLAN or DBMS_XPLAN.DISPLAY to analyze the query execution plan and identify
inefficiencies like full table scans or missing indexes.
2. Analyze Resource Usage: Check for CPU or I/O bottlenecks using v$session,
v$sysstat, or AWR reports.
3. Check Indexes: Ensure proper indexes are in place, or rebuild/reorganize existing
indexes if needed.
4. Optimize SQL: Rewrite the query to improve performance (e.g., using joins or
subqueries).
5. Consider Caching: Use result caching if applicable to reduce query time.
6. Check Database Configuration: Ensure database parameters are optimized for
performance.
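
Step 1 can be sketched like this; the query against the emp table and the 'sql_id' value are purely illustrative:

```sql
-- Generate and display the execution plan for a suspect statement:
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- For a statement that already ran, pull the actual plan from the cursor cache
-- ('sql_id' is a placeholder for the real SQL_ID from v$sql):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('sql_id', NULL, 'ALLSTATS LAST'));
```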

• Select statement is taking longer than usual. What could be the issue?
A: A slow SELECT statement could be caused by several issues:

1. Inefficient Query Execution Plan: It might be using full table scans instead of
indexes. Analyzing the execution plan with EXPLAIN PLAN can help identify this.
2. Lack of Indexes: Missing or fragmented indexes could slow down query
performance.
3. High Data Volume: Large tables or complex joins may result in longer processing
times.
4. Database Resource Bottlenecks: CPU, memory, or I/O issues could impact query
performance. Check resource usage with v$session or AWR reports.
5. Locking/Concurrency: Other sessions may be locking the table or causing delays.
Use v$lock to diagnose.

• What is the frequency of index rebuilding in your environment?


A: In my environment, indexes are rebuilt when fragmentation exceeds roughly 30%, or when queries
degrade noticeably due to inefficient index access. We check index health with ANALYZE INDEX ...
VALIDATE STRUCTURE (reviewing the INDEX_STATS view) and keep optimizer statistics current with
DBMS_STATS. Rebuilds happen during maintenance windows, usually quarterly or after major data changes.

• Differentiate between the DELETE and TRUNCATE commands?


A: DELETE removes rows from a table (optionally filtered by a WHERE clause), fully logs the
operation in undo/redo, can be rolled back, and fires row-level DML triggers, so it is slower for
large datasets. TRUNCATE is DDL: it removes all rows with minimal logging by resetting the
segment's high-water mark, cannot be rolled back, does not fire DML triggers, fails if an enabled
foreign key references the table, and is much faster for large datasets.
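
A small sketch of the rollback difference, using a throwaway table:

```sql
CREATE TABLE t_demo (id NUMBER);
INSERT INTO t_demo VALUES (1);
COMMIT;

DELETE FROM t_demo;              -- DML: generates undo
ROLLBACK;                        -- the row comes back
SELECT COUNT(*) FROM t_demo;     -- 1

TRUNCATE TABLE t_demo;           -- DDL: implicit commit, resets the high-water mark
ROLLBACK;                        -- has no effect
SELECT COUNT(*) FROM t_demo;     -- 0

DROP TABLE t_demo;
```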
• Explain stats gathering and how it impacts database performance? (Imp)
A: Statistics Gathering for Schema involves collecting data on table sizes, indexes, and data
distribution to help the optimizer create efficient execution plans. Here’s how it impacts database
performance:

• Improved Query Execution: Accurate stats allow the optimizer to choose the best
plan.
• Reduced Resource Usage: Helps avoid inefficient joins, sorts, or full table scans.
• Timely Updates: Regular gathering keeps stats in sync with data growth or changes.
• Automatic vs. Manual: Can use DBMS_STATS.GATHER_SCHEMA_STATS for precise
control or rely on automated tasks.
• Impacts Complex Queries: Beneficial for joins and partitioned tables.

Always gather stats during low-usage periods to minimize performance impact.
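
A sketch of manual gathering with DBMS_STATS; the HR schema name is illustrative:

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'HR',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,    -- gather index stats too
    degree           => 4);      -- parallel workers
END;
/

-- Verify when stats were last gathered:
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'HR';
```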

• How many DB writer processes can you configure in the database? Tell me the command for increasing the number of DB writer processes.
A: You can configure up to 36 DB Writer (DBWn) processes in Oracle Database (later releases allow
more), depending on system needs and workload. The default is 1 DBWR process, but multiple DB
writers can be configured for large databases or high I/O workloads.

Command to Set DBWR Processes (this is a static parameter, so it takes effect after a restart):

ALTER SYSTEM SET DB_WRITER_PROCESSES = n SCOPE=SPFILE; -- Replace 'n' with the desired number (1 to 36)

Key Points:
• Increasing DBWR processes improves performance in I/O-intensive environments.
• Set based on hardware and database load.
• Monitor I/O bottlenecks using tools like AWR or ADDM to decide the optimal value.

• What is the most challenging task for you as a DBA?


A: Most Challenging Tasks for a DBA:

• Ensuring database availability during critical operations.


• Managing unexpected issues like hardware failures or data corruption.
• Resolving performance bottlenecks and slow-running queries.
• Performing upgrades and patches with minimal downtime.
• Balancing multiple priorities under tight deadlines.
• Ensuring data security and compliance with regulatory standards.
• Handling disaster recovery efficiently during emergencies.
• Explain about cost in SQL and how the optimizer defines cost?
A: Cost in SQL represents the estimated resource usage (CPU, I/O, memory) for executing a query.
The optimizer defines cost using statistics like table size, data distribution, and index availability.

Key factors include:

• Row Estimates: Number of rows to scan or join.


• Access Paths: Cost of full table scans vs. index scans.
• Join Methods: Nested loops, hash joins, etc.
• I/O Operations: Disk reads/writes required.

The optimizer aims to minimize cost by selecting the most efficient execution plan, balancing
resources and performance. Accurate statistics improve cost estimation and query optimization.

• How do you set up Data Guard in 8i?


• What are the instance level, RMAN level and other changes in 9i and 10g?

• Junior DBA brought down the listener, how do you react?


A: Steps to Address a Listener Shutdown by a Junior DBA:
• Assess the Situation: Confirm the listener status using lsnrctl status.
• Minimize Impact: Inform affected teams about the issue to manage downtime
expectations.
• Restart Listener: Use lsnrctl start to bring it back online immediately.
• Analyze the Cause: Investigate logs to determine why the listener was stopped.
• Educate the Junior DBA: Explain the importance of the listener and its impact on
connectivity.
• Prevent Recurrence: Restrict unnecessary privileges and implement change control
processes.
• Document the Incident: Record the event for future learning and process
improvements.

General Interview Questions


======================================================================================
These are the questions with which interviews generally starts

• Tell us about yourself: your background, experience, and what you have worked on.
A: Completed B.Tech in Mechanical Engineering.
• Joined TCS and completed 3 years of experience.
• Worked extensively with Oracle DB 19c and EBS 12.2.
• Involved in system administration, database optimization, patching, and
troubleshooting.
• Managed 5 teams, responsible for:
• Creating and managing shift rosters to ensure coverage and balanced workload.
• Worked on associates’ onboarding processes:
• Ensured smooth integration of new team members.
• Developed strong problem-solving and team management skills.
• Focused on improving operational efficiency and team performance.

• What do you like doing the most?


A: What I like doing the most is diving into complex problems where a solution is not immediately
apparent.
I enjoy researching, experimenting, and working hard to find answers, which helps me expand my
knowledge and problem-solving skills.
Meeting deadlines is another important aspect I focus on, as it ensures accountability and allows
me to stay organized and efficient.
I also take great pride in motivating my team members to stay focused on their work, encouraging
them to stay on track and deliver quality results. By fostering a collaborative environment, I make
sure that everyone is aligned with the team’s objectives.
Additionally, I always aim to support my peers when they face challenging tasks. Whether it’s
offering advice or assisting with troubleshooting, I believe that teamwork and mutual support are
key to overcoming difficulties. These actions not only help in completing the task at hand but also
contribute to personal growth and a positive team dynamic.
Top 50 Oracle Apps DBA Interview Questions

Oracle APPS DBA


1. How to find the Database version?
We have a SQL query: SELECT * FROM v$version;
It provides details about the Oracle database version, including the release number.
In SQL*Plus you can also run: SELECT version FROM v$instance;
This returns the version of the database, e.g. 19.0.0.0

What is the top command?


This is a real-time system monitoring tool available on Linux.
It provides a dynamic, real-time view of system resource utilisation, including CPU, memory, and
running processes.

What are the different modes you can run your adpatch? (IMP)
A: adpatch can run in several modes: interactive (the default, prompting for inputs),
non-interactive (driven by a defaults file with interactive=no), test mode (apply=no, which
reports what the patch would do without executing it), and pre-install mode (preinstall=y, used
during upgrades).
As an example scenario, assume 3 application nodes, a non-RAC DB server, and a patch available
only in American English with no other languages installed. Steps for patching (EBS 12.1) would be:
• Shut down the application on all 3 nodes by logging into each node separately.
• From adadmin, put the application into maintenance mode.
• Take a count of invalid objects by logging into SQL*Plus as the APPS user.
• Use adpatch to apply the patch to the application.
• Re-check the count of invalid objects in the database and compare with the pre-patch count.
• From adadmin, disable maintenance mode.
• Start the application on all 3 nodes.

These steps ensure that the patch is applied cleanly and the system returns to a consistent, supported state.

How to run auto-config in test mode?


A: To run AutoConfig in test mode in Oracle E-Business Suite, use the Check Config utility:

$AD_TOP/bin/adchkcfg.sh contextfile=<CONTEXT_FILE>

This performs the AutoConfig actions in report-only mode and generates an HTML report of the files
and profile values that would change, without modifying the environment. It allows you to verify
the configuration and catch errors before running adautocfg.sh for real.

9. What is the top command?


top is an operating system command; it displays running processes sorted by resource consumption,
so the processes using the most CPU and memory appear at the top.

20. TYPES of file systems in oracle EBS 12.2 ? (IMP)


A: Oracle EBS 12.2 has three file systems: the RUN file system, the PATCH file system, and the non-editioned file system.
RUN file system (fs1 or fs2): the file system users are currently connected to; it is in an online state.
PATCH file system (the other of fs1/fs2): where online patches are applied while users keep working on the RUN file system.
Non-editioned file system (fs_ne): stores files that are not patched in place, such as log files,
data import/export files, and report output, which enables smooth patching operations.
21. What inputs do you need to apply a patch other than driver name and etc?
Apps and system passwords
40. What are the latest ORA errors you have encountered?
Usually we get ORA errors such as ORA-01653/ORA-01654 (unable to extend a table or index by
so-and-so size in a tablespace). We then check the tablespace for free space; if none is
available, we resize an existing datafile or add one more datafile.

41. Which table you will query to check the tablespace space issues? IMP
The BYTES column in DBA_FREE_SPACE and DBA_DATA_FILES.
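
For example, a per-tablespace free-space report joining the two views:

```sql
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes)/1024/1024) AS total_mb,
       ROUND(NVL(f.free_mb, 0))      AS free_mb
FROM   dba_data_files d
LEFT   JOIN (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
             FROM   dba_free_space
             GROUP  BY tablespace_name) f
       ON f.tablespace_name = d.tablespace_name
GROUP  BY d.tablespace_name, f.free_mb
ORDER  BY free_mb;
```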

42. Which table you will query to check the temp tablespace space issues?
dba_temp_files

43. What is temp tablespace? And what is the size of temp tablespace in you are instances?
Temp tablespace is used by so many application programs for sorting and other stuff. Its size is between 3
to 10 GB.
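
Actual temp usage is best read from V$TEMP_SPACE_HEADER (DBA_TEMP_FILES only shows file sizes); a sketch:

```sql
-- Used vs free space in each temp tablespace:
SELECT tablespace_name,
       ROUND(SUM(bytes_used)/1024/1024) AS used_mb,
       ROUND(SUM(bytes_free)/1024/1024) AS free_mb
FROM   v$temp_space_header
GROUP  BY tablespace_name;

-- Sessions currently consuming temp space:
SELECT s.sid, s.username, u.tablespace, u.blocks
FROM   v$sort_usage u
JOIN   v$session s ON s.saddr = u.session_addr;
```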

6.What are the key differences between the DBA_OBJECTS, DBA_OBJECTS_AE, and
AD_OBJECTS tables?
A: Here are the key differences between DBA_OBJECTS, DBA_OBJECTS_AE, and AD_OBJECTS in Oracle:

1. DBA_OBJECTS:
• Provides metadata about all objects in the database, such as tables, views, indexes, and
triggers.
• Shows objects visible to the DBA, including their status and owner.
2. DBA_OBJECTS_AE:
• An editioned version of DBA_OBJECTS, used in environments supporting Edition-Based
Redefinition (EBR).
• Includes additional editioning-related columns for managing database objects in EBR.
3. AD_OBJECTS:
• Specific to Oracle E-Business Suite.
• Tracks customizations and development objects within the EBS environment.

Each serves a distinct purpose based on the context and environment.

10.What are the main technological difference between R12.2 and R12.1? (Imp)
A: The main technological differences between R12.2 and R12.1 are:

• Online Patching (R12.2): Introduces Edition-Based Redefinition (EBR), allowing patching


with minimal downtime.
• Faster and Simplified Setup: Enhanced AutoConfig and Rapid Install processes.
• Enhanced Security: Integration of Oracle Identity Management and Security Console.
• Improved User Interface: R12.2 refines the Oracle Application Framework (OAF) pages for better
UI customization.
• Support for New Technologies: R12.2 includes better integration with cloud services and
mobile capabilities.
• Application Tier Change: WebLogic Server was introduced in R12.2, replacing Oracle Containers
for Java (OC4J) used in R12.1.

18.What are the steps to clone from a single node to a multi-node


=====================================================================================

Daily used RAC commands


Essential Oracle RAC Commands for Daily Use
1. Checking Cluster Status
- Cluster status overview:
$crsctl check cluster -all
-Check the status of cluster resources (e.g., database, listeners, services):
$crsctl stat res -t

2. Node and Instance Management


- Check the status of cluster nodes:
$olsnodes -n
- View Oracle RAC instances and their statuses:
SQL>select instance_name, status from gv$instance;

3.Database and Service Control Commands


- Start or stop the entire RAC database
$srvctl start database -d <db_name>
$srvctl stop database -d <db_name>

- Start or stop an instance on a specific node:


$srvctl start instance -d <db_name> -i <instance_name>
$srvctl stop instance -d <db_name> -i <instance_name>
- Check the status of a RAC service:
$srvctl status service -d <db_name> -s <service_name>

4. Cluster Resource Management


- Start or stop specific cluster resources (e.g., database, ASM, listeners):
$crsctl start resource <resource_name>
$crsctl stop resource <resource_name>

- Enable or disable a cluster resource:


$crsctl modify resource <resource_name> -attr "ENABLED=1"
$crsctl modify resource <resource_name> -attr "ENABLED=0"

5.ASM (Automatic Storage Management) Commands


- Check ASM instance status across nodes:
SQL>select instance_name, status from gv$asm_instance;
- List ASM disk groups and usage:
SQL>select name, state, total_mb, free_mb from v$asm_diskgroup;

6.Diagnosing Issues and Logs


- Check alert logs for all instances in the RAC environment:
tail -f /u01/app/oracle/diag/rdbms/<db_name>/<instance_name>/trace/alert_<instance_name>.log

- View CRS (Cluster Ready Services) logs for troubleshooting:


tail -f /u01/app/grid/diag/crs/<hostname>/crs/trace/crsd.log
- Get diagnostic information for all cluster resources:

crsctl stat res -t -v

7. Backup and Maintenance


- Verify backup status (RMAN):
rman target / <<EOF
LIST BACKUP SUMMARY;
EOF
- Check Data Guard configuration in RAC environment (if applicable):
SQL>select * from v$dataguard_config;

8. Environment and Network


- View SCAN listeners and services:
srvctl config scan_listener

9. Stopping and Restarting the Cluster


- Shut down and restart the clusterware stack on a single node:
crsctl stop crs
crsctl start crs

These commands are commonly used for monitoring, managing, and troubleshooting Oracle RAC
environments in daily DBA operations.
Topic wise Interview questions generated by ChatGPT
=================================================================================
5. Describe how you would move an Oracle E-Business Suite instance from one server to
another, and what issues you might encounter.
A:

Steps to Move an Oracle EBS Instance:


1. Pre-Move Preparation:
• Verify both source and target servers meet hardware, OS, and software requirements.
• Ensure identical Oracle EBS and database versions on both servers.
2. Backup the Source Environment:
• Take a full backup of the application tier and database tier.
3. Copy Files to the Target Server:
• Copy application and database files using tools like rsync or scp.
4. Reconfigure the Environment:
• Update context files using perl adclonectx.pl to reflect the target server configuration.
• Run AutoConfig to regenerate configuration files.
5. Reconfigure the Database:
• Update tnsnames.ora and listener configurations for the new server.
6. Start Services:
• Start database and application services.
• Verify the environment functionality.

Potential Issues and Resolutions:


1. File Permission Issues:
• Ensure correct ownership and permissions for all copied files.
2. Configuration Errors:
• Double-check context files and database listener configurations.
3. Invalid Hostnames:
• Update hosts file or DNS settings if the target server hostname differs.
4. Network Issues:
• Verify connectivity between tiers (application and database).
5. Environment-Specific Customizations:
• Migrate and validate all custom scripts and configurations.

Performance Tuning:

EBS Configuration & Troubleshooting:


15. A new custom module is causing issues after deployment in Oracle EBS. How do you
troubleshoot and fix this?
To troubleshoot and fix issues caused by a new custom module in Oracle EBS:
1. Review Log Files:
• Check Apache logs and WebLogic logs for errors during module deployment or execution.
• Review the concurrent request logs for any errors related to the custom module.
2. Check for Missing or Incorrect Dependencies:
• Ensure that the custom module has all required dependencies (e.g., packages, procedures,
libraries) properly deployed.
3. Verify Custom Code:
• Inspect the custom module’s code for issues like incorrect SQL, PL/SQL, or forms
configuration.
• Ensure that any new database objects (tables, triggers, etc.) are properly created and
indexed.
4. Check Profile Options:
• Ensure any required profile options specific to the custom module are correctly set.
5. Test in Development:
• If the issue is complex, replicate the issue in a development or test environment to debug
further.
6. Rollback or Patch:
• If necessary, roll back the deployment and re-deploy after addressing the issues, or apply
any necessary patches.

These steps will help identify and resolve issues caused by the custom module in Oracle EBS.

What is profile option?


A profile option in Oracle E-Business Suite (EBS) is a configuration setting that controls the behavior and
functionality of the application for a specific user, responsibility, or organization.
Profile options allow you to customize the behavior of the system without modifying the underlying code,
enabling flexibility in how Oracle EBS operates.

Key Points:
• Scope: Profile options can be set at various levels—site, application, responsibility, and user
—to provide granularity in their application.
• Purpose: They control system-wide settings like currency, session timeouts, and security
preferences, as well as more specific configurations for particular modules or tasks.
• Common Examples:
• ICX: SESSION TIMEOUT: Specifies the session timeout duration.
• AP: DEFAULT INVOICE PAYMENT TERMS: Sets the default payment terms for Accounts
Payable.
• FND: USER ID: Defines the user’s identifier in the system.
Usage:
• Configuration: Profile options can be set using the System Administrator responsibility
through the Profile Options form.
• Customization: They are often used to control features, change system behavior, or adapt
EBS to meet specific business needs without altering code.
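
Profile values can also be read with the FND_PROFILE API; a sketch assuming an APPS-schema connection (ICX_SESSION_TIMEOUT is the internal name behind the "ICX: Session Timeout" option, and the join below is simplified):

```sql
-- Current value of a profile option for the session context:
SELECT fnd_profile.value('ICX_SESSION_TIMEOUT') AS session_timeout
FROM   dual;

-- Where a profile option is set (site/application/responsibility/user levels):
SELECT p.profile_option_name, v.level_id, v.profile_option_value
FROM   fnd_profile_options p
JOIN   fnd_profile_option_values v
       ON v.profile_option_id = p.profile_option_id
WHERE  p.profile_option_name = 'ICX_SESSION_TIMEOUT';
```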

General Troubleshooting & Maintenance: Database Management:

3. Your database is experiencing high disk space usage. How would you
investigate and resolve this issue?
A: To investigate and resolve high disk space usage in an Oracle database, follow these steps:
1. Check Tablespaces: Use DBA_FREE_SPACE and DBA_DATA_FILES views to check for
tablespaces that are consuming excessive space or have little free space.

SELECT tablespace_name, SUM(bytes)/1024/1024 AS MB


FROM dba_data_files
GROUP BY tablespace_name;

2. Identify Large Segments: Query the DBA_SEGMENTS view to identify large tables, indexes,
or other segments consuming space.

SELECT segment_name, segment_type, bytes/1024/1024 AS MB


FROM dba_segments
ORDER BY bytes DESC;

3. Check for Fragmentation: Analyze table fragmentation, and consider reorganizing or


rebuilding fragmented tables and indexes using ALTER commands or tools like DBMS_REDEFINITION or
DBMS_SPACE.
4. Review Archivelogs: If archivelog mode is enabled, check whether old archived logs are
consuming disk space. Crosscheck and clean them up in RMAN:
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> delete archivelog until time 'SYSDATE-7'; -- only after confirming they are backed up and applied

5. Move Data to Larger Storage: If needed, move large objects (LOBs) or partitions to other
tablespaces with more available space.
6. Add or Resize Datafiles: If tablespace growth is unavoidable, resize an existing datafile or add a new one:

ALTER DATABASE DATAFILE '/path/to/datafile.dbf' RESIZE 500M;
ALTER TABLESPACE users ADD DATAFILE '/path/to/datafile02.dbf' SIZE 500M;

7. Monitor Usage Regularly: Set up regular monitoring for disk usage to avoid future issues.

2. During the cloning process, the system is stuck at “Running AutoConfig”. How would
you troubleshoot this?
1. Check Log Files:
• Review the clone log files: $APPL_TOP/admin/<sid>/logs and
$ORACLE_HOME/appsutil/log.
• Look for any errors or timeouts related to AutoConfig.
2. Check Resource Usage:
• Verify CPU, memory, and disk usage. If resources are low, consider increasing resources or
freeing up space.
3. Validate Environment Variables:
• Ensure all environment variables ($ORACLE_HOME, $APPL_TOP, $PATH, etc.) are correctly
set.
4. Check for Locking Issues:
• Run ps -ef | grep config to check for any stuck or long-running processes related to
AutoConfig.
5. Check Database and Application Services:
• Ensure the database is up and accessible.
• Check if the application services are running (adstrtal.sh).
6. Re-run AutoConfig Manually:
• Run AutoConfig manually to check for errors:
• $ORACLE_HOME/appsutil/bin/adautocfg.sh (database tier) or $AD_TOP/bin/adautocfg.sh (application tier)
7. Review Configuration Files:
• Verify the appsutil configuration files (e.g., adbldxml, applprod.txt) for any discrepancies.
8. Check Worker Status with adctrl:
• If the clone is stuck during a worker-driven step, use adctrl to review worker status and restart failed workers.

Backup and Recovery:


1. A major failure has occurred on the database. You need to restore it from backup.
Describe the steps you would take to perform the recovery.
A: 1. Identify the Failure:
• Diagnose the failure (instance crash, corruption, etc.) and determine the time of failure.
2. Check Backup:
• Verify the latest backup (full and incremental, if any) is available and intact.
3. Prepare Environment:
• Set environment variables for the Oracle database (ORACLE_HOME, ORACLE_SID).
4. Shutdown Database:
• Shutdown the database if it’s running: shutdown abort (if necessary).
5. Restore Backup:
• Restore the full database backup with RMAN: start rman target /, then RMAN> restore database;
6. Recover the Database:
• Apply archived logs and incremental backups with RMAN: RMAN> recover database;
• For point-in-time recovery, set the target time before restoring:
• RMAN> run { set until time "to_date('YYYY-MM-DD HH24:MI:SS','YYYY-MM-DD HH24:MI:SS')"; restore database; recover database; }
7. Open the Database:
• Open the database with resetlogs: alter database open resetlogs;
8. Validate the Recovery:
• Check the database for consistency and integrity (DBMS_REPAIR, DBMS_UTILITY) and
review the alert logs.
9. Backup After Recovery:
• Take a fresh full backup once the recovery is successful.

2. You notice that the archive logs are not being applied to the standby database. How
would you troubleshoot this?
A: 1. Check Archive Log Shipping:
• Ensure archive logs are being generated on the primary database by verifying archive log
list and alert.log.
2. Verify Log Transport Services:
• Check the Data Guard configuration: show parameter log_archive_dest_2 and verify if the
destination is correct.
3. Verify Network Connectivity:
• Ensure the primary and standby databases can communicate over the network (check
firewall, DNS, etc.).
4. Check Standby Log Apply Services:
• Verify that the managed recovery process (MRP) is running on the standby:
• select process, status from v$managed_standby;
• If MRP is not running, start it with:
• alter database recover managed standby database using current logfile disconnect from
session;
5. Check Archive Log Status on Standby:
• On the standby, verify the archive log application with:
• select sequence#, applied from v$archived_log order by sequence# desc;
• Ensure the logs are not missing or corrupted.
6. Check for Errors:
• Review the alert.log on both primary and standby databases for any errors related to
archive log transport or apply.
7. Resync Archive Logs (if necessary):
• If logs are missing or corrupted, manually copy them from primary to standby using scp or
similar tools and register them:
• alter database register logfile '/path/to/archived_log.arc';
8. Verify Data Guard Configuration:
• Ensure the Data Guard configuration is valid using dgmgrl or show parameter standby and
resolve any configuration issues.

WebLogic Server and Middleware:

1. How would you configure WebLogic Server for high availability in an Oracle EBS
environment?
A: To configure WebLogic Server for high availability (HA) in an Oracle EBS environment:
1. Cluster Creation:
• Create a WebLogic Server cluster using the WebLogic Administration Console.
• Ensure the cluster has at least two managed servers to distribute the load.
2. Deploy EBS Application:
• Deploy Oracle E-Business Suite applications (like Oracle Applications or Forms) on all the
managed servers in the cluster.
3. Configure Node Manager:
• Set up Node Manager for automatic server restarts in case of failure.
• Configure Node Manager in config.xml and ensure it is running on all servers.
4. Load Balancer Configuration:
• Configure an external load balancer (like Oracle Traffic Director) to distribute HTTP
requests across the WebLogic Server cluster nodes.
5. Session Persistence:
• Enable session persistence for WebLogic to maintain session state in case of server failover.
• Configure “replicated” or “database” session persistence under WebLogic’s “Cluster”
settings.
6. Database Connections:
• Ensure all managed servers in the cluster point to the same Oracle database for
consistency.
7. Health Monitoring:
• Configure WebLogic Server’s health monitoring and alerting to detect server failures and
trigger failover.
8. Testing:
• Test failover and load balancing to ensure proper high availability functionality.
2. If an EBS application server is showing intermittent crashes, how would you
investigate and resolve the issue?
A:
To investigate and resolve intermittent crashes on an EBS application server:
1. Check Logs:
• Review the apache, opmn, and WebLogic logs for error messages and stack traces related
to the crash.
• Check the concurrent processing logs under $APPLCSF/$APPLLOG and the application tier logs under $INST_TOP.
2. Resource Utilization:
• Monitor CPU, memory, and disk usage to ensure the server is not resource-starved. Use
tools like top, vmstat, or sar.
3. Application Server Configuration:
• Verify WebLogic and Oracle HTTP Server (OHS) configurations to ensure they are properly
tuned for resource management (e.g., heap size, thread pool settings).
4. Check for Patches:
• Ensure that the latest Oracle EBS patches and security updates are applied.
5. Database Connectivity:
• Verify database connections and investigate any timeout issues or database-related errors
that may cause crashes.
6. Intermittent Issues:
• Check if there is any pattern to the crashes (e.g., time of day, specific actions).
7. Review External Integrations:
• Investigate if third-party integrations or customizations are causing instability.
8. Test with Minimal Configuration:
• Reduce the configuration to the minimum required services and test for stability.
9. Restart Services:
• If necessary, restart application services (opmn, WebLogic, etc.) and test the system
behavior.
10. Raise SR:

• If the issue persists, raise an Oracle Service Request (SR) for further assistance.
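A hedged sketch of the first-pass checks from steps 1 and 2, assuming typical R12.2 log locations (adjust $INST_TOP, $EBS_DOMAIN_HOME, and the managed server name for your instance):

```shell
# Scan recent application tier logs for errors (paths are typical, not universal)
grep -iE "ora-|error|exception" $INST_TOP/logs/*.log 2>/dev/null | tail -50

# Look for stack traces in a WebLogic managed server log (server name is an example)
tail -200 $EBS_DOMAIN_HOME/servers/oacore_server1/logs/oacore_server1.log

# Snapshot resource usage to rule out resource starvation
top -b -n 1 | head -20
vmstat 5 3
sar -u 1 5
```

Correlating the timestamps of log errors with spikes in the resource snapshots often reveals whether the crashes are load-driven or code-driven.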

Upgrades:

1. You need to upgrade Oracle EBS from version 12.1 to 12.2. What steps would you follow?
A: To upgrade Oracle EBS from version 12.1 to 12.2, follow these precise steps:
1. Pre-Upgrade Preparation:
• Review the Oracle EBS 12.2 upgrade documentation.
• Ensure system meets hardware, OS, and database prerequisites for 12.2.
• Backup the existing system (database, application tier, and configurations).
• Apply all recommended patches to the current 12.1 instance.
2. Run Pre-Upgrade Checks:
• Use the adutconf.sql script to check for any existing issues.
• Run pre-upgrade and upgrade readiness checks using the provided Oracle scripts.
3. Install Oracle EBS 12.2 Environment:
• Set up a new 12.2 environment (use Rapid Install for new installation).
• Apply the latest patches for EBS 12.2 on the new environment.
4. Clone the Existing 12.1 Instance:
• Clone the 12.1 instance to the new 12.2 environment using adcfgclone.pl.
5. Upgrade the Database:
• Upgrade the Oracle database to a version certified for EBS 12.2.
• Run the database upgrade with catctl.pl (or the dbupgrade wrapper), followed by the post-upgrade fixup scripts.
6. Run the EBS Upgrade:
• Run the adgrants.sql script to grant appropriate privileges.
• Apply the 12.2 merged upgrade driver and the latest AD/TXK patches to upgrade the application tier.
7. Post-Upgrade Steps:
• Perform functional and regression testing to ensure everything is working as expected.
• Recompile any custom code, reports, or forms.
• Ensure database backups are taken after upgrade.
• Monitor logs for any issues.
8. Verify and Finalize:
• Perform a final validation of the system and services.
• Clean up obsolete files and logs.

2. What steps would you take to prepare for a smooth Oracle database upgrade to 19c?
A: To prepare for a smooth Oracle database upgrade from 12c to 19c:
1. Review Oracle Documentation: Study the Oracle 19c Release Notes and Database Upgrade Guide
for specific 12c to 19c upgrade requirements.
2. Check Compatibility: Confirm that your current 12c version is eligible for a direct upgrade to 19c
by referring to the Oracle Compatibility Matrix.
3. Pre-Upgrade Check: Use preupgrade.jar to identify potential issues and deprecated features, and
to recommend actions.
4. Backup Database: Perform a full database backup including data files, control files, and archive
logs.
5. Ensure System Requirements: Verify that your hardware, OS, and Oracle software patches are
compatible with 19c.
6. Apply Latest Patches: Apply the latest patches for Oracle 12c before starting the upgrade process.
7. Check Database Parameters: Review and adjust database initialization parameters as needed,
especially those that are changed or deprecated in 19c.
8. Test Upgrade in Non-Production: Clone the database and test the upgrade process in a non-
production environment to ensure compatibility with applications.
9. Plan for Deprecated Features: Address any deprecated or removed features in 19c that may affect
your current environment.
10. Optimize Database Performance: Perform tasks such as gathering optimizer statistics and cleaning
up obsolete data before the upgrade.
11. Plan Downtime: Schedule the upgrade during a maintenance window with minimal user impact.
12. Upgrade Method: Use DBUA (Database Upgrade Assistant) or perform a manual upgrade based on
your preference.
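The pre-upgrade check and its generated fixups can be run roughly as follows. This is a sketch; the 19c home path and SID are placeholders for your environment:

```shell
# Run the 19c pre-upgrade check against the running 12c database
# (paths are placeholders -- adjust ORACLE_HOME locations for your system)
$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar \
    TERMINAL TEXT

# Review the report, then run the generated fixup script in SQL*Plus
sqlplus / as sysdba @/u01/app/oracle/cfgtoollogs/<SID>/preupgrade/preupgrade_fixups.sql
```

The report flags deprecated parameters, missing statistics, and other items from the checklist above, which keeps surprises out of the maintenance window.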

These practical questions test not just knowledge but also hands-on experience in resolving
common challenges that Oracle Apps DBAs face in daily operations.

----------------------------------------------------------------------------------------------------------------------------------
---------
Here are more questions that might be asked in an Oracle Apps DBA interview:

Database Management & Troubleshooting:

1. How would you monitor and manage Oracle database growth in EBS?
A: Monitoring and Managing Oracle Database Growth in EBS:
• Use DBA views like DBA_TABLESPACE_USAGE_METRICS and DBA_SEGMENTS to
monitor space usage.
• Set alerts for tablespace thresholds.
• Implement partitioning for large tables and archiving for historical data.
• Regularly perform database housekeeping tasks such as purging old data or
archiving.
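A minimal monitoring sketch for the first two points, using the DBA_TABLESPACE_USAGE_METRICS view mentioned above (the 85% threshold is an example, not a fixed rule):

```shell
# Report tablespaces above 85% used
sqlplus -s / as sysdba <<'EOF'
SET PAGESIZE 100 LINESIZE 120
SELECT tablespace_name,
       ROUND(used_percent, 2) AS pct_used
FROM   dba_tablespace_usage_metrics
WHERE  used_percent > 85
ORDER  BY used_percent DESC;
EOF
```

Wrapping this in a cron job that mails the output gives a simple threshold alert without Enterprise Manager.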

3. What steps would you follow to resolve a corruption issue in Oracle database files?
A: Steps to Resolve Corruption in Oracle Database Files:
• Check alert logs for errors.
• Use RMAN to validate and restore corrupted files.
• If corrupt data is identified, use DBMS_REPAIR or flashback.
• In extreme cases, restore from the most recent backup.
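The RMAN validation step can be sketched as follows; after VALIDATE, any corrupt blocks found are recorded in V$DATABASE_BLOCK_CORRUPTION:

```shell
# Validate all datafiles for physical and logical corruption
rman target / <<'EOF'
VALIDATE CHECK LOGICAL DATABASE;
EOF

# Inspect what VALIDATE found
sqlplus -s / as sysdba <<'EOF'
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
EOF
```

Individual corrupt blocks listed there can then be repaired with RMAN block media recovery (RECOVER CORRUPTION LIST) rather than restoring whole files.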

4. How do you handle database storage optimization in Oracle EBS environments?


A: Database Storage Optimization in Oracle EBS:
• Implement tablespace management to ensure efficient space allocation.
• Regularly reclaim unused space with shrink operations.
• Use data compression and partitioning to optimize large tables.
• Monitor and tune index usage and consider index rebuild to save space.

5. Can you describe the process of migrating an Oracle database from one server
to another?
A: Migrating an Oracle Database from One Server to Another:
• Perform a full database export using Data Pump (expdp), or take an RMAN backup for restore on the target.
• Set up a new Oracle home and configure init.ora and tnsnames.ora on the new
server.
• Use RMAN to back up and restore the database to the new server.
• After migration, validate database integrity and configure necessary network
settings.

Cloning & EBS Maintenance:

2. What is the purpose of AutoConfig in Oracle EBS, and how do you use it?
A: Purpose of AutoConfig in Oracle EBS:
 AutoConfig is used to manage and configure Oracle EBS application and database tier files.
It centralizes configuration information in .xml files and regenerates configuration files
when needed.
 Usage: Update the context (.xml) files, then run adautocfg.sh on the application tier and on the
database tier to apply changes. Use adtmplreport.sh to locate the template for a given configuration file.

4. How do you handle customizations in Oracle EBS during a patching or upgrade


process?

A: Handling Customizations During Patching/Upgrades:


• Identify customizations using tools like Customization Analyzer.
• Back up custom objects and scripts.
• Use customization layers (e.g., personalization, extensions) to minimize conflicts.
• Reapply and test customizations post-patch/upgrade to ensure functionality.

5. How would you perform cloning in an Oracle RAC (Real Application Clusters)
environment?
A: Performing Cloning in an Oracle RAC Environment:
• Source Node: Back up the database using RMAN.
• Target Node: Set up the cluster environment and install Oracle binaries.
• Use Rapid Clone to copy application tier files and restore the database to the target.
• Reconfigure RAC components (e.g., services, VIPs) and run autoconfig on both tiers.
• Verify cloned environment functionality.

WebLogic and Middleware:

1. What is your experience configuring WebLogic Server in an Oracle EBS environment?


A: Experience Configuring WebLogic Server in Oracle EBS:
 Configured WebLogic for Oracle EBS R12.2.x, including deploying managed servers for Forms, OAF,
and Web Services.
 Used Autoconfig to manage WebLogic domain settings.
 Configured Node Manager, tuned JVM parameters, and enabled monitoring with tools like
Enterprise Manager.

3. What are the key differences between WebLogic 12c and earlier versions in terms of
performance and configuration?
A: Key Differences Between WebLogic 12c and Earlier Versions:
 Performance: Enhanced multitenancy, improved JDBC performance, and better integration with
Oracle DB 12c features.
 Configuration: Simplified deployment with WebLogic Deployment Tool (WDT) and support for
REST APIs.
 Support: Better compatibility with cloud and newer standards like Java EE 7.

4. How do you troubleshoot issues related to WebLogic, such as server crashes or


performance degradation?
A: Troubleshooting WebLogic Issues:
 Server Crashes: Check server logs (nohup.out, .log) and JVM heap usage. Configure thread dumps
for analysis.
 Performance Degradation: Analyze datasource connections, thread states, and tune JVM GC
settings. Use WLST scripts and monitoring tools to identify bottlenecks.
 Restart affected managed servers and coordinate with DBA and network teams if necessary.

Performance Tuning:

1. How do you analyze performance using AWR and ASH reports in Oracle?
A: Analyzing Performance Using AWR and ASH Reports:
• Use AWR to review top SQLs, wait events, and resource usage.
• Focus on queries with high elapsed times and tune execution plans.
• Use ASH to analyze active sessions during peak load for bottlenecks.
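Both reports are generated with standard scripts shipped in the Oracle home; each prompts for the snapshot range or time window to analyze:

```shell
# Generate an AWR report (prompts for begin/end snapshot IDs)
sqlplus / as sysdba @?/rdbms/admin/awrrpt.sql

# Generate an ASH report (prompts for the time window)
sqlplus / as sysdba @?/rdbms/admin/ashrpt.sql
```

The "?" in the path is SQL*Plus shorthand for $ORACLE_HOME, so the scripts resolve correctly on any installation.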

2. Explain how you would perform Oracle database tuning in an EBS environment for
better user experience.
A: Database Tuning in Oracle EBS:
• Optimize SQL queries using execution plans and gather statistics.
• Adjust SGA/PGA and redo log sizes based on workload.
• Implement partitioning for large tables and balance I/O distribution.
• Use AWR/ADDM recommendations for continuous improvement.
3. What steps do you take when you notice high CPU utilization in Oracle EBS? How
would you identify and resolve the issue?
A: Addressing High CPU Utilization in Oracle EBS:
• Identify CPU-intensive sessions using top or AWR.
• Analyze SQLs with high execution time and optimize plans.
• Kill problematic sessions if needed and redistribute workload.
• Scale resources or tune instance parameters for prevention.

4. How do you identify and resolve performance bottlenecks in WebLogic Server?


A: Resolving Performance Bottlenecks in WebLogic Server:
• Monitor JVM heap usage and tune garbage collection settings.
• Check thread pools for stuck or overloaded threads.
• Optimize connection pool and deployment configurations.
• Use load balancing and scale horizontally/vertically for performance.

EBS Advanced Configurations:

1. How do you configure and troubleshoot Single Sign-On (SSO) with Oracle EBS?
A: Configuring and Troubleshooting SSO with Oracle EBS:
• Configure SSO Profile Options in EBS and integrate with OAM.
• Synchronize users via OID or LDAP.
• Verify SSO URL redirections and authentication flows.
• Troubleshoot using logs (oam.log, Apache logs) and validate certificate configurations.

2. What are the key considerations when configuring Oracle Internet Directory (OID) and Oracle Access
Manager (OAM) in EBS?
A: Key Considerations for OID and OAM in EBS:
• Ensure OID schema synchronization with EBS users.
• Configure SSL/TLS for secure communication.
• Validate authentication policies and session management in OAM.
• Test high availability for both OID and OAM.

3. How do you integrate Oracle EBS with a Load Balancer for scalability and high availability?
A: Integrating Oracle EBS with a Load Balancer:
• Configure the load balancer for sticky sessions and SSL offloading.
• Define VIPs for EBS tiers and ensure redundancy.
• Update EBS context files with load balancer details.
• Test failover and performance under load scenarios.

Upgrades & Patch Management:

2. How do you plan and execute rolling patching in a multi-node EBS environment?
A: Planning and Executing Rolling Patching in Multi-Node EBS:
• Plan: Identify patch compatibility with rolling mode and schedule downtime if needed.
• Execute: Patch one node at a time while keeping others operational to minimize downtime.
• Use tools like adop in online patching mode (R12.2.x).
• Validate the environment after applying patches on all nodes.

3. Explain the differences between a major upgrade and a patch in Oracle EBS.
A: Differences Between a Major Upgrade and a Patch in Oracle EBS:
• Major Upgrade: Upgrades the entire EBS version (e.g., R12.1 to R12.2), includes new
features, and often requires significant downtime.
• Patch: Fixes specific bugs, adds minor enhancements, or updates compliance without
upgrading the overall version

Backup & Recovery:

1. What are the backup strategies you follow for Oracle EBS and databases?
A: Backup Strategies for Oracle EBS and Databases:
• Full Backups: Perform regular RMAN backups for the database.
• Incremental Backups: Schedule daily backups of changed data to save space.
• Apps Tier Backup: Use file system backups for application tier files, including configuration
files.
• Enable Archive Logs for point-in-time recovery.
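The full/incremental strategy above can be sketched with two RMAN jobs; this assumes ARCHIVELOG mode is enabled and a retention policy is already configured:

```shell
# Weekly level-0 (full) backup plus archived logs
rman target / <<'EOF'
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG DELETE INPUT;
EOF

# Daily level-1 incremental backup
rman target / <<'EOF'
BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG DELETE INPUT;
EOF
```

DELETE INPUT removes archived logs after they are backed up, which keeps the archive destination from filling between runs.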

2. How do you restore the database and applications in case of a failure?


A: Restoring Database and Applications After Failure:
• Use RMAN to restore the database to the latest backup.
• Recover to the desired point-in-time using archive logs.
• Restore application tier files from the backup and rerun AutoConfig.
• Validate connectivity and functionality post-restore.

3. What is the procedure to recover Oracle EBS after a disaster?


A: Recovering Oracle EBS After a Disaster:
• Activate the DR site using replicated backups for both tiers.
• Synchronize database with Data Guard or RMAN.
• Restore application tier and reconfigure context files for the new environment.
• Test the system thoroughly before making it operational.

Oracle Database and Apps DBA Skills:

1. Can you explain the key components of the Oracle E-Business Suite architecture?
A: Key Components of Oracle E-Business Suite Architecture:
• Application Tier: WebLogic Server, Forms, Reports, and OA Framework.
• Database Tier: Oracle Database for data storage.
• Concurrent Processing: Handles background jobs and reports.
• Middle Tier: Interfaces between the application and database, including web servers and
load balancers.
• File System: Stores configurations, logs, and files for EBS.

2. How do you handle patching in Oracle EBS? Walk us through the steps.
A: Handling Patching in Oracle EBS:
• Prepare: Backup databases and applications, verify patch prerequisites.
• Download: Get the patch from Oracle Support.
• Apply: Use ADOP for R12.2 or Rapid Install for older versions to apply patches in
online/offline mode.
• Post-patch: Perform necessary configurations and testing.

3. What is your experience with Oracle Multitenant and Pluggable Databases?


A: Experience with Oracle Multitenant and Pluggable Databases:
• Multitenant: Used in Oracle 12c and later, provides a container database (CDB) and
multiple pluggable databases (PDBs).
• PDBs: Provides isolation, easier patching, and resource sharing.
• Experience with creating, plugging, and unplugging PDBs, as well as managing CDB/PDB
architecture for EBS.

4. How do you troubleshoot performance issues in Oracle SQL/PLSQL?


A: Troubleshooting Performance Issues in Oracle SQL/PLSQL:
• Check execution plans using EXPLAIN PLAN for slow queries.
• Identify bottlenecks like high I/O or locking using AWR, ASH, and v$views.
• Optimize queries by creating appropriate indexes and adjusting database parameters.
• Use SQL Trace and TKPROF to analyze and resolve long-running SQL.

5. Explain the difference between cold and hot backups in Oracle?


A: Difference Between Cold and Hot Backups in Oracle:
• Cold Backup: The database is offline. It ensures a consistent backup but requires
downtime.
• Hot Backup: The database remains online, performed in ARCHIVELOG mode for consistent
backups, allowing minimal downtime.

6. What are the steps you follow to clone an Oracle E-Business Suite instance?
A: Steps to Clone an Oracle E-Business Suite Instance:
• Prepare: Take full backups of both the database and applications.
• Clone DB: Use RMAN to create a duplicate database.
• Clone Apps Tier: Use Rapid Clone to clone the application tier and update context files.
• Post-cloning: Run AutoConfig and verify the instance after cloning.

7. How do you manage database upgrades in Oracle EBS?


A: Managing Database Upgrades in Oracle EBS:
• Backup: Take full backups of database and applications.
• Pre-upgrade checks: Verify prerequisites and compatibility.
• Upgrade: Use Oracle’s Database Upgrade Assistant (DBUA) or manual upgrade scripts for
patching the database.
• Post-upgrade: Run necessary configuration tasks, recompile invalid objects, and test the
upgrade.

Performance Tuning:

3. Can you explain how to perform EBS performance tuning, particularly with database and middleware
components?
A: EBS Performance Tuning with Database and Middleware:
• Database Tuning: Optimize SQL queries, indexes, and partitioning. Adjust SGA/PGA and
monitor undo tablespaces for efficient data handling.
• Middleware Tuning: Tune WebLogic and Forms Server JVM settings, optimize thread pools,
and configure load balancing for better resource distribution.
• Caching and Concurrency: Improve performance through better cache management and
parallel processing configuration.

Thread pool:

A thread pool is a collection of pre-created, reusable threads that handle multiple tasks concurrently. It
improves performance by reducing the overhead of creating and destroying threads for each task,
managing task execution efficiently by reusing available threads.

A thread is the smallest unit of execution in a process. It consists of an execution context, such as a
program counter and stack, and shares memory and resources with other threads in the same process.
Threads enable concurrent execution, improving performance and responsiveness in multi-tasking
environments
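The concept can be demonstrated with ordinary shell tools: `xargs -P` maintains a fixed pool of at most N concurrent worker processes and reuses slots as tasks finish. This is a rough analogy to a thread pool, not WebLogic's actual implementation:

```shell
# Run 8 independent tasks through a pool of at most 3 concurrent workers.
# A worker slot is reused as soon as its current task finishes.
seq 1 8 | xargs -P 3 -I{} echo "task {} handled"
```

In WebLogic itself, the equivalent knobs are the self-tuning thread pool and work manager constraints (min/max threads).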

General Troubleshooting:

1. What steps would you take to recover an Oracle EBS instance from a crash or failure?
A: Recovering an Oracle EBS Instance from a Crash or Failure:
• Restore Database: Use RMAN to restore the database from the last backup or perform
point-in-time recovery if needed.
• Restore Application Tier: Restore application files from backups and run AutoConfig to
reconfigure.
• Check Logs: Review EBS and database logs to identify and resolve the cause of the crash.
• Test: Validate the application and database after recovery to ensure everything is
functioning correctly.

——————————————————————————-

Here’s a list of potential interview questions for an Oracle EBS 12.2.9 Apps DBA role based on real-world
scenarios:

General EBS Architecture:


1. Explain the three-tier architecture of Oracle EBS?
A: Three-Tier Architecture of Oracle EBS:
• Database Tier: The core component that stores all Oracle EBS data (e.g., Oracle Database).
It processes all database queries and handles business logic.
• Application Tier: Contains the application server, Oracle Forms, Reports, and the
concurrent processing server. It acts as an intermediary between the database and the user interface.
• Client Tier: The user interface, usually accessed through a web browser or Oracle Forms
client. It allows end-users to interact with Oracle EBS applications.

2. What are the key features introduced in Oracle EBS 12.2.x compared to earlier versions?
A: Key Features Introduced in Oracle EBS 12.2.x:
• Online Patching (ADOP): Introduces the ability to apply patches while the system is online,
reducing downtime.
• Enhanced WebLogic Integration: Improved integration with WebLogic Server for scalability
and high availability.
• Advanced Security Features: Enhanced security with improved authentication methods
(e.g., SSO, OAM, OID integration).
• Mobile Applications: Expanded support for mobile access and mobile-friendly interfaces.
• Improved User Interface: Modernized and more responsive web interfaces for a better
user experience.

3. What is Online Patching (ADOP) in Oracle EBS 12.2, and how does it work?
A: Online Patching (ADOP) in Oracle EBS 12.2:

ADOP (AD Online Patching) allows applying patches to Oracle EBS 12.2 systems without
taking the application offline, minimizing downtime. It uses a dual file system approach, where one is the
run file system (active) and the other is the patch file system (inactive).
ADOP Patch Phases with Example Patch:
1. Prepare:
• Synchronize the run and patch file systems and start a new patching cycle.
adop phase=prepare
2. Apply:
• Apply the patch to the patch file system.
adop phase=apply patches=12345678
3. Finalize:
• Complete any remaining actions (e.g., compiling invalid objects) so the system is ready for
cutover.
adop phase=finalize
4. Cutover:
• Switch users to the patched file system (the patch file system becomes the new run file
system); this involves a brief restart of services.
adop phase=cutover
5. Cleanup:
• Remove obsolete objects left over from the patching cycle.
adop phase=cleanup
6. Abort (if necessary):
• Before cutover, an in-progress patching cycle can be abandoned.
adop phase=abort

Explanation:
• Prepare: Starts the patching cycle and synchronizes the file systems.
• Apply: Applies the patch to the patch file system.
• Finalize: Completes patching tasks and makes the patch ready for cutover.
• Cutover: Switches from the old run file system to the newly patched one.
• Cleanup: Drops obsolete code and seed data after cutover.
• Abort: If issues arise before cutover, the cycle can be aborted and the run file system remains
unchanged.
These commands manage the patching lifecycle in a largely non-disruptive manner.

Cloning and Backup:


7. Explain the steps involved in cloning an EBS 12.2 instance?
A: Steps Involved in Cloning an EBS 12.2 Instance:
• Prepare Source Environment: Ensure the source EBS environment is stable and that no
patches or configuration changes are in progress.
• Run Pre-Cloning Script: Execute adpreclone.pl on both the database and application
tiers of the source system.
• Copy Files: Copy the source environment’s files (database, application files) to the target
system using rsync or scp.
• Run Post-Cloning Scripts: Execute adcfgclone.pl dbTier on the database tier and
adcfgclone.pl appsTier on the application tier of the target system.
• Reconfigure: Run AutoConfig on both tiers to reconfigure the target instance.
• Verify: Test the cloned instance by starting the services and ensuring everything is
functioning correctly.

8. What are the critical post-cloning steps you perform?


A: Critical Post-Cloning Steps:
• Run AutoConfig on both the database and application tiers to apply configuration changes.
• Check and Update Configuration Files: Verify $APPL_TOP, $COMMON_TOP, and
$ORACLE_HOME configurations.
• Verify Services: Ensure that all services (WebLogic, Apache, etc.) are up and running.
• Recreate Oracle Listener if necessary on the target system.
• Validate Database Connectivity: Ensure that the cloned instance can connect to the
database.

9. How do you manage backups for Oracle EBS? Which tools or strategies do you use?
A: Managing Backups for Oracle EBS:
• Database Backup: Use RMAN (Recovery Manager) for regular full and incremental backups
of the Oracle database.
• Application Tier Backup: Use file-level backups (e.g., rsync or tar) for application files,
configuration files, and logs.
• Tools: Oracle Data Guard for disaster recovery, RMAN for database-level backup, and file
system tools (tar, rsync) for application tier data.
• Backup Strategy: Implement regular full backups weekly and incremental backups daily
for both database and application tiers, with off-site storage for disaster recovery.

Patching and Upgrades:

12. How do you perform a version upgrade (e.g., from 12.2.8 to 12.2.9)? (not a full upgrade, but a
release update applied through patching)
A: Performing a Version Upgrade (e.g., from 12.2.8 to 12.2.9):
• Pre-Upgrade Steps:
• Backup both the application and database tiers.
• Verify prerequisites, including the required AD/TXK codelevels for the target release.
• Download the 12.2.9 release update pack from Oracle Support.
• Apply the Update:
• Use adop online patching to apply the release update pack (adop phase=apply
patches=<patch number>, followed by finalize, cutover, and cleanup).
• Post-Upgrade Steps:
• Run AutoConfig on both the database and application tiers if required.
• Validate the update by running tests to confirm the new version is functioning correctly.
• Check the logs for errors and resolve any issues encountered during the update.

Database Administration:

14. How do you handle concurrent manager tuning?


A: Handling Concurrent Manager Tuning:
• Adjust Manager Parameters: Tune parameters like Concurrent Processes, Queue Size, and
Priority in the Concurrent Manager settings.
• Diagnose Long-Running Requests: Use Concurrent Request Manager logs to identify slow
or blocked requests.
• Optimize Workload Distribution: Balance the workload across multiple servers by
configuring Load Balancing and Queue Assignment.
• Monitor and Adjust Timeouts: Increase or decrease timeouts and manage the concurrent
request’s retry parameters.

15. How do you troubleshoot locking issues in Oracle EBS?


A: Troubleshooting Locking Issues in Oracle EBS:
• Identify Locked Sessions: Use SQL queries on v$session and v$lock views to identify
sessions holding locks.
• Check Blocking Sessions: Run queries to identify blocking sessions and investigate the root
cause of the blockage.
• Kill Blocked Sessions: Use the ALTER SYSTEM KILL SESSION command to terminate sessions
causing deadlocks or long waits.
• Resolve Application Issues: Investigate and resolve application issues (like long-running
transactions) that lead to locking problems.
• Database Monitoring Tools: Use AWR and ASH reports to spot performance bottlenecks
and locking problems.
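The lock investigation can start from a single query on V$SESSION, which links each waiter to its blocker (column names are standard; output formatting is a sketch):

```shell
# Identify blocked sessions and the sessions blocking them
sqlplus -s / as sysdba <<'EOF'
SET PAGESIZE 100 LINESIZE 120
SELECT blocking_session AS blocker_sid,
       sid              AS waiter_sid,
       seconds_in_wait,
       event
FROM   v$session
WHERE  blocking_session IS NOT NULL;
EOF
```

Once a blocker is confirmed as safe to terminate, it can be killed with ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE, using the serial# from V$SESSION.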

Security:
Troubleshooting and Monitoring:
23. What are your steps for troubleshooting forms or OAF page issues?
A: Troubleshooting Forms or OAF Page Issues:
• Logs: Examine Forms logs and OAF logs for errors or stack traces.
• Debugging: Enable debugging in the form or page to track functionality and pinpoint
issues.
• Configuration: Verify if there are any configuration issues with personalization or access
controls.

Miscellaneous:

26. What is the ETCC (EBS Technology Codelevel Checker) utility, and when is it used?
A: ETCC (EBS Technology Codelevel Checker) Utility:
• ETCC verifies that the database tier and the application (middle) tier have all required
bugfix patches before applying EBS patches or performing upgrades.
• It checks system readiness and identifies missing patches or incorrect codelevels that
could hinder the upgrade or patching process.
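ETCC is run as two scripts, one per tier; a brief sketch (the download source is My Oracle Support, and the scripts should always be refreshed to the latest version before use):

```shell
# Database tier check -- run as the database OS owner in the script's directory
sh checkDBpatch.sh

# Application (middle) tier check -- run as the application OS owner
sh checkMTpatch.sh
```

Each script prints the missing bugfixes it finds, which should be applied before continuing with the planned patching or upgrade.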

Oracle Apps DBA role:

Basic Questions:

2. What is the difference between a table and a view? When would you use a view?
A: Table vs. View:
• Table: A database object that stores data physically.
• View: A virtual table based on the result of a query. It doesn’t store data but displays it
from underlying tables.
• Use: Use a view when you need to simplify complex queries, aggregate data, or provide
restricted access to certain data.
4. What is an index? Explain the different types of indexes in Oracle.
A: Index:
• Index: A database object that improves the speed of data retrieval operations.
• Types:
• B-tree Index: The default; used for equality and range queries.
• Bitmap Index: Suited to columns with low cardinality.
• Function-Based Index: Built on expressions rather than plain columns.
• Reverse Key Index: Reverses the key bytes to spread inserts across leaf blocks.
• (Oracle has no “clustered index” in the SQL Server sense; the closest equivalent is an
index-organized table, which stores the table data in index order.)
5. What are materialized views, and how do they differ from regular views? When would you use them
in Oracle Apps?
A: Materialized Views:
• Materialized View: Stores query results physically and can be refreshed periodically.
• Difference: Unlike regular views, materialized views store data, making them more efficient
for complex aggregations.
• Use: Use in Oracle Apps for performance optimization where frequent queries retrieve
large datasets.

6. What are triggers, and how are they implemented in Oracle? Can they be used in Oracle Apps for
customization?
A: Triggers:
• Triggers: A set of SQL statements automatically executed when certain events occur.
• Implementation: Triggers are defined on tables and views, and can be used for data
validation, auditing, or enforcing business rules.
• Customization: In Oracle Apps, triggers can be used for custom business logic.

7. Explain the types of constraints available in Oracle (e.g., primary key, foreign key). How do they help
maintain data integrity?
A: Constraints:
• Primary Key: Uniquely identifies a row in a table.
• Foreign Key: Ensures referential integrity between tables.
• Unique: Ensures all values in a column are unique.
• Check: Enforces domain integrity.
• Not Null: Ensures a column cannot have NULL values.

8. What is a sequence in Oracle, and where is it commonly used in Oracle Apps?


A: Sequence:
• Sequence: A database object that generates a sequence of unique numbers.
• Use: Commonly used to generate unique primary keys for records in Oracle Apps.
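A minimal example of creating and using a sequence; the sequence name is a placeholder following the custom "XX" naming convention:

```shell
sqlplus -s / as sysdba <<'EOF'
-- Hypothetical sequence for a custom table's primary key
CREATE SEQUENCE xx_demo_s START WITH 1 INCREMENT BY 1 CACHE 20;

-- Typical use: supply a unique key value
SELECT xx_demo_s.NEXTVAL FROM dual;
EOF
```

The CACHE clause pre-allocates numbers in memory, reducing contention when many sessions insert concurrently.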

9. What is a partitioned table? How does it improve performance in large databases?


A: Partitioned Table:
• Partitioned Table: Divides large tables into smaller, more manageable pieces, known as
partitions.
• Performance Improvement: Improves query performance by reducing the amount of data
scanned during queries.
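A sketch of range partitioning by date; the table and partition names are hypothetical, chosen only to show how queries filtered on order_date touch a single partition:

```shell
sqlplus -s / as sysdba <<'EOF'
CREATE TABLE xx_orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
EOF
```

A query such as WHERE order_date >= DATE '2024-03-01' is pruned to the p2024 partition, so the other partitions are never read.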

10. How do you perform partition management in Oracle? Can partitioning impact Oracle Apps
performance?
A: Partition Management:
• Management: Involves creating, dropping, or merging partitions.
• Impact: Proper partitioning improves query performance and load balancing but may
require extra management effort in Oracle Apps.
12. What are object views, and how do they differ from regular views?
A: Object Views:
• Object Views: A view that provides access to object types in an object-oriented way.
• Difference: Unlike regular views, object views can represent an object type with methods
and attributes.
13. Explain the concept of inheritance in Oracle Objects. Is it used in Oracle Apps?
A: Inheritance in Oracle Objects:
• Inheritance: Allows one object type to inherit the attributes and methods of another (a
subtype created UNDER a supertype).
• Use in Oracle Apps: Rarely used in standard EBS; it is mainly relevant when modeling
custom object types in the database.
14. How do you manage invalid database objects? What steps do you take to troubleshoot and
recompile them?
A: Managing Invalid Database Objects:
• Steps: Use utlrp.sql to recompile invalid objects, check for dependency issues, and resolve
any errors in the object code.
• Troubleshoot: Use DBMS_UTILITY.compile_schema for recompiling individual schemas.
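Both approaches can be sketched as follows; the schema name is an example:

```shell
# Recompile all invalid objects database-wide with the standard utility script
sqlplus / as sysdba @?/rdbms/admin/utlrp.sql

# Or recompile one schema, then list anything still invalid
sqlplus -s / as sysdba <<'EOF'
EXEC DBMS_UTILITY.compile_schema(schema => 'APPS', compile_all => FALSE);
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
EOF
```

Objects that remain invalid after recompilation usually point to a genuine dependency error, which SHOW ERRORS or DBA_ERRORS will expose.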

15. How would you check the status of database objects? Which Oracle dictionary views help in
monitoring them?
A: Check Status of Database Objects:
• Use views like DBA_OBJECTS and USER_OBJECTS to monitor the status (valid or invalid) of
database objects.
• Views: ALL_OBJECTS, DBA_OBJECTS.

16. How do you move objects between schemas or databases in Oracle Apps?
A: Moving Objects Between Schemas/Databases:
• Moving Objects: Use tools like Data Pump (expdp/impdp) or Database Links for cross-
database migration.
• Between Schemas: Use DBMS_METADATA to extract and move objects.

17. How do you monitor and manage indexes in large Oracle Apps environments?
A: Monitor and Manage Indexes:
• Monitor: Use DBA_INDEXES and V$OBJECT_USAGE to track index usage.
• Manage: Rebuild or drop unused indexes using ALTER INDEX or DROP INDEX.

18. What steps would you take to optimize a materialized view for better performance in an EBS
environment?
A: Optimizing Materialized Views:
• Optimization: Refresh materialized views during off-peak hours, use incremental refresh,
and partition materialized views for better performance.
• EBS Use: Optimize materialized views to reduce query time and improve performance in
reporting or data analysis jobs.
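A minimal sketch of scheduling these refreshes with DBMS_MVIEW (the materialized view name is hypothetical):

```sql
-- Fast (incremental) refresh of a single materialized view
EXEC DBMS_MVIEW.REFRESH('APPS.MY_MV', method => 'F');

-- Complete refresh, typically scheduled during off-peak hours
EXEC DBMS_MVIEW.REFRESH('APPS.MY_MV', method => 'C');
```

A fast refresh requires a materialized view log on the base table; when none exists, only a complete refresh is possible.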
19. What is the role of statistics in maintaining database object performance? How do you gather and
analyze statistics for objects?
A: Role of Statistics in Database Object Performance:
• Role: Statistics help the Oracle optimizer choose the best execution plan, affecting query
performance.
• Gathering: Use the DBMS_STATS.GATHER_TABLE_STATS or
DBMS_STATS.GATHER_SCHEMA_STATS procedures to collect statistics.
• Analyze: Review statistics using DBA_TAB_STATS_HISTORY, DBA_TAB_COLUMNS, and
DBMS_STATS views. Regularly gather stats after major data changes to ensure efficient query plans.
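The gathering calls above look like this in practice; the owner and table names are placeholders.

```sql
-- Gather statistics for one table, cascading to its indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPS', tabname => 'MY_TABLE', cascade => TRUE);

-- Gather only stale statistics across an entire schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPS', options => 'GATHER STALE');
```

The GATHER STALE option limits work to tables whose data has changed enough to invalidate their existing statistics.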
20. How would you handle fragmentation in Oracle tables and indexes? What tools or commands do
you use?
A: Handling Fragmentation in Oracle Tables and Indexes:
• Tables: Use ALTER TABLE MOVE to defragment tables. Rebuild partitions if applicable.
• Indexes: Use ALTER INDEX REBUILD to rebuild fragmented indexes.
• Tools/Commands: Use DBMS_SPACE to analyze free space and assess fragmentation. Note that DBMS_REPAIR is aimed at detecting and skipping corrupt blocks, not at fragmentation.
21. You find that a key table in Oracle Apps has a high number of chained rows. How do you resolve this
issue?
A: Resolving Chained Rows in a Key Table:
• Cause: Chained rows typically occur when rows are too large for a single data block.
• Resolution: Use ALTER TABLE MOVE to move the table to a new segment, eliminating
chained rows. Rebuild indexes on the affected table afterward to improve performance.
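A hedged sketch of diagnosing and fixing chained rows (table and index names are hypothetical):

```sql
-- Refresh CHAIN_CNT, then inspect it
ANALYZE TABLE apps.my_table COMPUTE STATISTICS;
SELECT chain_cnt FROM dba_tables WHERE owner = 'APPS' AND table_name = 'MY_TABLE';

-- Rebuild the segment, then its indexes, to eliminate migrated/chained rows
ALTER TABLE apps.my_table MOVE;
ALTER INDEX apps.my_table_pk REBUILD;
```

ALTER TABLE MOVE invalidates the table's indexes, which is why the rebuild step afterward is mandatory.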
23. What steps do you follow for schema upgrades in Oracle EBS? How do you handle dependent
objects?
A: Steps for Schema Upgrades in Oracle EBS:
• Steps:
1. Backup: Take a full backup of the database and EBS environment.
2. Run AD Administration: Use AD Administration utilities to apply schema updates and
patches.
3. Database Upgrade: Run the database upgrade utilities (catctl.pl/dbupgrade) and post-upgrade scripts for the schema upgrade.
4. Post-Upgrade: Run AutoConfig, recompile invalid objects, and verify the application tier.
• Handling Dependent Objects: Ensure dependent objects like triggers, stored procedures,
and indexes are recompiled post-upgrade. Use utlrp.sql to recompile invalid objects.
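The post-upgrade recompilation step can be verified as follows, run as SYSDBA:

```sql
-- Count invalid objects before and after recompilation
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';

-- Recompile everything that is invalid
@?/rdbms/admin/utlrp.sql
```

Running the count before and after utlrp.sql confirms whether dependent objects were successfully recompiled or need manual attention.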
===================================================================================
1. Rsync and SCP?


• Rsync: A file transfer utility that syncs directories and files between systems, supporting
incremental transfers, compression, and file deletion on the target system. Commonly used for backups
and migration.
• SCP: Securely copies files from one system to another over SSH but does not support
incremental transfers. Suitable for smaller, quick transfers.
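Typical invocations of both tools, with illustrative paths and hostnames (not values from a real environment):

```shell
# Incremental sync, preserving attributes, with compression and
# deletion of files removed from the source
rsync -avz --delete /u01/app/oracle/source/ user@target:/u01/app/oracle/dest/

# One-off copy of a single file over SSH
scp export.dmp user@target:/u01/stage/
```

The trailing slash on the rsync source directory copies its contents rather than the directory itself, a common point of confusion.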

2. Difference between listener.ora and tnsnames.ora?


• listener.ora: Located on the database server, it configures the listener service that accepts
client connection requests. Specifies hostname, port, and SID.
• tnsnames.ora: Resides on the client side. It maps service names to connection details like
host, port, and protocol for database connectivity.

3. Ping vs Tnsping?
• Ping: Tests network connectivity to a server or host by sending ICMP packets. Does not
verify database connectivity.
• Tnsping: Checks Oracle Net connectivity by validating a TNS entry in the tnsnames.ora file.
It ensures the listener is accessible for the database.
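A minimal tnsnames.ora entry that tnsping would resolve; the host and service name are examples only.

```
PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PRODDB))
  )
```

With this entry in place, tnsping PRODDB reports the resolved address and the round-trip time if the listener is reachable; it does not verify that the database itself is open.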

4. Commands: top, vmstat, sar


• top: Displays real-time system performance, including CPU, memory, and running
processes. Useful for spotting resource-hogging processes.
• vmstat: Reports CPU utilization, memory usage, and I/O stats at specific intervals. Helps
analyze performance trends.
• sar: Collects historical performance data, such as CPU and memory usage. Ideal for post-
mortem analysis.

5. Different types of DBA file objects?


• Datafiles: Store all database data, including user and system data.
• Control files: Maintain metadata about the database structure, such as tablespace
locations.
• Redo log files: Record changes to the database for recovery purposes. Differences lie in
their role within the DB.

6. Services starting with the application server?

When an application server starts, services such as OACore (Java-based pages), Forms (Oracle Forms),
Web (Apache server), and Concurrent Processing (handles batch jobs) are initiated. These services are
essential for processing user requests and running concurrent programs.

7. Services stopping first in application shutdown?


During shutdown, services are stopped in reverse order of their dependencies. Typically, Concurrent
Manager is stopped first to prevent background jobs from running, followed by Forms and Web services to
ensure clean termination of user sessions.

8. Database states: MOUNT vs NOMOUNT


• NOMOUNT: Only the instance starts, and initialization files are read. Used for database
creation or recovery.
• MOUNT: Control files are read, and database structure is validated, but datafiles are not yet
opened. Used for recovery operations.
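The stages above correspond to this startup sequence, issued from SQL*Plus as SYSDBA:

```sql
STARTUP NOMOUNT;        -- instance only: used for CREATE DATABASE or control file re-creation
ALTER DATABASE MOUNT;   -- control files read: used for recovery and ARCHIVELOG changes
ALTER DATABASE OPEN;    -- datafiles and redo opened: normal user access begins
```

STARTUP with no arguments performs all three stages in one step; stopping at NOMOUNT or MOUNT is what enables the maintenance operations each state exists for.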

9. OAM, OIM, OID, OAS, SODF


• OAM: Manages authentication and single sign-on (SSO).
• OIM: Focuses on identity and role management.
• OID: An LDAP-based directory for storing user credentials.
• OAS: Oracle Application Server for web applications.
• SODF: Refers to SOA or Service-Oriented Architecture, facilitating application integration.

10. What is an API?

An Application Programming Interface provides a standardized way for applications to communicate. In Oracle Apps, APIs allow for programmatic data manipulation, such as creating users or updating records, without direct database interaction.

11. What is OACORE?

OACORE manages Java-based services in Oracle EBS. It handles OAF (Oracle Application Framework)
pages, login functionalities, and concurrent request submissions. It runs within the WebLogic Server and is
critical for rendering web pages.

12. Database 12c vs 19c?


• 12c: Introduced multitenancy with Container and Pluggable Databases (CDB/PDB),
optimizing resource sharing.
• 19c: Builds on 12c with features like Automatic Indexing, JSON support, and better
stability for enterprise-grade deployments.

13. Logical vs Physical Standby?


• Logical Standby: Allows queries and DML by converting redo logs into SQL. Supports
business reports.
• Physical Standby: Maintains a bit-for-bit replica of the primary database. Mainly used for
disaster recovery.

14. Standalone DB vs Clustered DB?


• Standalone: A single database instance without failover support.
• Clustered (RAC): Multiple database instances share the same storage, providing high
availability and load balancing.

15. Cutover vs Switchover?


• Cutover: Marks the final step in an online patching cycle, switching users to the patched
edition.
• Switchover: A planned role reversal between primary and standby databases, maintaining
both databases in sync.

16. What is purging?

Purging removes obsolete or redundant data from the database to free up storage and improve
performance. In Oracle EBS, purging helps manage log files, concurrent request data, and temporary
tables.

17. Types of listeners?


• Default Listener: Handles connections to a single database.
• SCAN Listener: Introduced in RAC for dynamic load balancing. SCAN simplifies client
connection by providing a single hostname.

18. Tables created during patching/cloning?


• Patching: Tables like AD_PATCH_DRIVERS and AD_CONTROL track patch progress.
• Cloning: Tables like FND_NODES store information about application nodes in the cloned
environment.
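Applied patches can be confirmed from the AD tables; this query is a sketch against the AD_APPLIED_PATCHES view found in EBS environments.

```sql
-- Recently applied patches recorded by AD utilities
SELECT patch_name, creation_date
  FROM ad_applied_patches
 ORDER BY creation_date DESC;
```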

19. Different TOPs in EBS?


• APPL_TOP: Holds application-specific files.
• DB_TOP: Contains database-related files.
• COMMON_TOP: Shared utilities and scripts, used across multiple products.

20. What is TDE (Transparent Data Encryption)?

TDE secures data at rest by encrypting tablespaces and columns. The encryption and decryption are
transparent to applications, ensuring data security without modifying application logic.
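A hedged sketch of enabling tablespace-level TDE; the keystore password, disk group, and tablespace name are illustrative.

```sql
-- Open the keystore so encryption keys are available
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "keystore_pwd";

-- Create a tablespace whose datafiles are encrypted at rest
CREATE TABLESPACE secure_data
  DATAFILE '+DATA' SIZE 500M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

Applications read and write the tablespace exactly as an unencrypted one; decryption happens in the database layer.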

21. Data Guard concept?

Data Guard ensures high availability and disaster recovery by synchronizing a primary database with
standby databases. It supports automatic failover, switchover, and both logical and physical standby
configurations.

22. GoldenGate concept?


Oracle GoldenGate enables real-time data replication across heterogeneous environments. It is widely
used for data migration, disaster recovery, and reporting solutions, ensuring high availability for business-
critical applications.

How do you set up the Grid Infrastructure, Database, and Application Server? What is the installation sequence for the environment?

Setting up Oracle Grid Infrastructure (GI), Database (DB), and Application Server for an Oracle E-Business
Suite environment involves several steps. Below is a detailed explanation of the sequence, commands,
and process:

1. Setup Oracle Grid Infrastructure (Clusterware + ASM)

Pre-requisites:
• Ensure servers meet hardware and software requirements.
• Configure shared storage for ASM (via SAN/NAS or other shared storage solutions).
• Setup private/public/virtual IPs for RAC.

Steps:
1. Install Oracle Grid Infrastructure:
• Run the installer (runInstaller) from the Grid home.

./runInstaller

• Choose Configure Grid Infrastructure for a Cluster.


• Provide Cluster Name, SCAN name, and SCAN ports.
• Select shared disks for ASM setup.
• Create the ASM disk group:
• During installation, provide disks for ASM using ASMLib or udev.

2. Configure ASM:
• ASM setup includes the creation of DATA and FRA (Fast Recovery Area) disk groups.
• Use asmca for disk group management.

asmca

3. Verify Installation:
• Check cluster and ASM status:

crsctl check cluster


srvctl status asm
2. Install and Configure Oracle Database

Pre-requisites:
• Ensure GI is running and ASM disk groups are available.
• Install required OS packages.

Steps:
1. Install Oracle Database Software:
• Run the installer from the database software home:

./runInstaller

• Choose Install database software only.


• Select RAC if installing in a clustered environment.

2. Create Database using DBCA:


• Use the Database Configuration Assistant (dbca) to create the database:

dbca

• Select ASM for storage and choose DATA and FRA disk groups.
• Specify the database name (SID) and configure listener details.

3. Verify Database Installation:


• Check database and listener status:

srvctl status database -d <dbname>


lsnrctl status

4. Post-Installation Tasks:
• Configure the database initialization parameters.
• Set up backups using RMAN.
• Apply required patches using opatch.
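The RMAN backup setup mentioned above can be sketched as follows (a minimal baseline, not a complete backup strategy):

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
```

Control file autobackup ensures the control file and spfile are captured after every backup and structural change, which simplifies disaster recovery.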

3. Install and Configure Oracle Application Server (EBS Application Tier)

Pre-requisites:
• Database should be running.
• Verify required ports are free.
• Install JDK as required by Oracle Application Server.
Steps:
1. Install Application Server:
• Use Oracle EBS Rapid Install to install the application tier:

./rapidwiz

• Specify:
• Database details for connectivity.
• Node configurations (single or multi-node).

2. Start Application Services:


• Use adstrtal.sh to start services:

$ADMIN_SCRIPTS_HOME/adstrtal.sh

• Verify service status:

$ADMIN_SCRIPTS_HOME/adopmnctl.sh status

• To stop all services when required, use $ADMIN_SCRIPTS_HOME/adstpall.sh.

3. Configure WebLogic Server:


• Install WebLogic Server:

java -jar wls1036_generic.jar

• Configure domains for Oracle EBS using config.sh.

4. Post-Installation Steps:
• Apply application patches using adpatch.
• Configure Concurrent Manager and Workflow services.

4. Sequence of Setup
1. Install and configure Grid Infrastructure.
2. Install Oracle Database and create required databases.
3. Install and configure the Application Server (EBS).

Important Commands
1. Grid Infrastructure:
• Start/Stop GI:

crsctl start cluster


crsctl stop cluster

• ASM Disk Groups:


asmcmd lsdg
2. Database:
• Listener Control:
lsnrctl start
• Database Startup:

srvctl start database -d <dbname>

3. Application Server:
• Start/Stop Services:

$ADMIN_SCRIPTS_HOME/adstrtal.sh
$ADMIN_SCRIPTS_HOME/adstpall.sh

Validation
1. Verify all components are running:
• Grid Infrastructure:

crsctl check cluster

• Database:
sqlplus / as sysdba
SQL> SELECT instance_name, status FROM gv$instance;

• Application Server:
• Access the EBS login page to verify.

By following these steps and commands, you can successfully set up a Grid, Database, and Application
Server environment for Oracle E-Business Suite.

Proper configuration of WebLogic Server with EBS: setting up the Admin Server and Managed Servers
Setting up WebLogic Server with Oracle EBS (Including Admin Server and Managed Servers)

Oracle EBS R12.2 integrates Oracle WebLogic Server for managing the application tier. Below is the
detailed process for setting up WebLogic Server as part of the EBS environment.

Pre-requisites
1. Verify the following software versions are compatible with Oracle EBS:
• Oracle WebLogic Server version.
• Java Development Kit (JDK) version.
2. Ensure you have the required ports available.
3. Install and configure the database before setting up the application server.

Steps for WebLogic Setup with EBS

1. Install WebLogic Server


1. Download WebLogic Software:
• Obtain the WebLogic installer (e.g., wls1036_generic.jar).
2. Install WebLogic Server:
• Run the installer with the appropriate JDK:

java -jar wls1036_generic.jar

• Choose a Middleware Home directory (e.g., /u01/app/oracle/middleware).

3. Post-Installation:
• Verify the installation:

$MIDDLEWARE_HOME/wlserver/common/bin/wlst.sh

2. Configure WebLogic Server for EBS


1. Set Up the WebLogic Domain for EBS:
• Use the config.sh script located in the WebLogic home directory:

$MIDDLEWARE_HOME/wlserver/common/bin/config.sh

• Steps during configuration:


• Create a new domain for EBS (e.g.,
/u01/app/oracle/middleware/user_projects/domains/EBS_domain).
• Configure an Admin Server and Managed Servers:
• Admin Server: Used for managing the WebLogic domain.
• Managed Servers: Used for hosting EBS applications such as oacore, oafm, forms, etc.

2. Admin Server Setup:


• Assign a name (e.g., AdminServer) and a port (default: 7001).
• Enable SSL if required (port: 7002).
• Create the user credentials for the WebLogic Admin Console.
3. Configure Managed Servers:
• Add the following Managed Servers with appropriate ports:
• oacore_server1 (default: 7201)
• forms_server1 (default: 7202)
• oafm_server1 (default: 7203)
4. Configure Node Manager:
• Enable the Node Manager for managing WebLogic instances:

$MIDDLEWARE_HOME/wlserver/common/bin/startNodeManager.sh

3. Deploy EBS Applications on WebLogic


1. Use Oracle EBS Rapid Install:
• Run the rapidwiz installer to deploy EBS applications on WebLogic.
• Provide database connectivity details and configure the application tier.
2. Verify Deployment:
• Check that the WebLogic domain contains the required applications (e.g., oacore, oafm,
forms).

4. Start and Stop WebLogic Components


1. Start Admin Server:
• Navigate to the domain directory and start the Admin Server:

cd $DOMAIN_HOME/bin
./startWebLogic.sh

2. Start Managed Servers:


• Use the WebLogic Admin Console (URL: http://<hostname>:7001/console) or start
manually:

cd $DOMAIN_HOME/bin
./startManagedWebLogic.sh oacore_server1 http://<hostname>:7001
./startManagedWebLogic.sh forms_server1 http://<hostname>:7001
./startManagedWebLogic.sh oafm_server1 http://<hostname>:7001

3. Stop Servers:
• Stop Managed Servers first:

./stopManagedWebLogic.sh oacore_server1 http://<hostname>:7001

• Then stop the Admin Server:

./stopWebLogic.sh
4. Using Node Manager:
• Use Node Manager to manage start/stop operations for all WebLogic servers.

5. Configure WebLogic for High Availability


• Cluster Setup:
• Group Managed Servers into a cluster for load balancing and failover.
• Add oacore_server1, forms_server1, and oafm_server1 to the cluster.
• Configure Load Balancer:
• Integrate a load balancer like F5 or Apache with WebLogic.
• Enable WebLogic Diagnostic Framework (WLDF):
• Use WLDF to monitor the health of WebLogic servers.

6. Post-Installation Validation
1. Access WebLogic Admin Console:
• URL: http://<hostname>:7001/console
• Check that all Managed Servers are running.
2. Verify EBS Login:
• Access the Oracle EBS homepage and verify functionality.
3. Logs for Troubleshooting:
• Admin Server log: $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log
• Managed Server logs: $DOMAIN_HOME/servers/<server_name>/logs/<server_name>.log

Commands Summary
1. Start/Stop Admin Server:

./startWebLogic.sh
./stopWebLogic.sh

2. Start/Stop Managed Servers:

./startManagedWebLogic.sh <server_name> http://<hostname>:7001


./stopManagedWebLogic.sh <server_name> http://<hostname>:7001

3. Node Manager:

./startNodeManager.sh
./stop

DR Setup:
Setting Up Disaster Recovery (DR) for Oracle E-Business Suite

Disaster Recovery (DR) ensures business continuity by providing a standby environment for Oracle EBS in
case the primary environment fails. The setup typically involves Data Guard for the database and file
system synchronization for the application tier.

High-Level Steps for Setting Up DR


1. Primary Site Preparation
• Configure the primary database for Data Guard.
• Identify application tier components for replication.
2. Standby Site Setup
• Configure the standby database.
• Synchronize the application tier files.
3. Network Configuration
• Ensure proper connectivity between primary and standby.
4. Monitoring and Testing
• Regularly validate synchronization.
• Conduct failover/switchover testing.

Detailed Steps

1. Database Disaster Recovery with Oracle Data Guard


Primary Database Configuration
1. Enable Force Logging:
ALTER DATABASE FORCE LOGGING;
2. Check Archive Log Mode:

SELECT LOG_MODE FROM V$DATABASE;

• If not in ARCHIVELOG mode, enable it:

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

3. Create Standby Redo Logs:


• Add one more redo log group than the primary:

ALTER DATABASE ADD STANDBY LOGFILE ('/path_to_file/redo1.log') SIZE 50M;

4. Configure tnsnames.ora and listener.ora:


• Add entries for the standby database.
5. Create Initialization Parameter File (pfile):
• Add the following:

DB_UNIQUE_NAME=PRIMARY_DB
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY_DB,STANDBY_DB)'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY_DB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STANDBY_DB'

• Restart the primary database using the updated parameters.

Standby Database Configuration


1. Duplicate Primary Database to Standby:
• Use RMAN to clone the primary:

rman TARGET sys/password@PRIMARY_DB AUXILIARY sys/password@STANDBY_DB


RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER;

2. Start the Standby Database in Managed Recovery Mode:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM
SESSION;

Validate Data Guard Configuration


1. Verify Configuration:

SELECT * FROM V$DATAGUARD_CONFIG;

2. Check Log Shipping:

SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST;

2. Application Tier Disaster Recovery

File System Synchronization


1. Synchronize Application Files Using rsync:
• Perform a full sync of the application tier:

rsync -avz /u01/app/oracle/production/ user@standby_server:/u01/app/oracle/standby/


• Use cron jobs to perform incremental syncs:

rsync -avz --delete /u01/app/oracle/production/ user@standby_server:/u01/app/oracle/standby/

2. Shared File System Approach (Optional):


• Use NFS or ASM to share application files between sites.

Update Configuration Files


• Update the apps_context.xml file to reflect the standby environment’s hostname and ports.

3. Application and Database Integration


1. Update Context File for DR:
• Modify context_file parameters for the standby environment:

<s_dbhost>standby_hostname</s_dbhost>
<s_dbport>1521</s_dbport>
<s_dbname>STANDBY_DB</s_dbname>

2. Register the DR Instance with Autoconfig:

perl $AD_TOP/bin/adconfig.pl contextfile=/path/to/context_file.xml appspass=<apps_password>

4. Testing Failover and Switchover

Database Failover
1. Force Standby Database to Become Primary:

ALTER DATABASE ACTIVATE STANDBY DATABASE;

2. Reconfigure the Application Tier:


• Point the application tier to the new primary database.

Database Switchover
1. Initiate Switchover on the Primary:

ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

2. Activate the New Primary:

ALTER DATABASE OPEN;


Validate Application Connectivity
• Access Oracle EBS to ensure that the application tier communicates correctly with the new
database.

5. WebLogic Disaster Recovery


1. Install WebLogic on Standby Server:
• Perform the same WebLogic installation and domain setup as the primary environment.
2. Sync WebLogic Configuration Files:

rsync -avz /path/to/weblogic/domain/ user@standby_server:/path/to/weblogic/domain/

3. Start Admin and Managed Servers:


• Use Node Manager to start the WebLogic servers in the standby environment.

Commands Summary:
• Database Sync:

rman TARGET sys/password@PRIMARY_DB AUXILIARY sys/password@STANDBY_DB


RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER;

• File System Sync:

rsync -avz /u01/app/oracle/production/ user@standby_server:/u01/app/oracle/standby/

• Start Managed Recovery:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM
SESSION;

• Failover:

ALTER DATABASE ACTIVATE STANDBY DATABASE;

Best Practices
1. Automate backups and file synchronization.
2. Regularly validate the DR environment with failover testing.
3. Use monitoring tools to track replication health and system performance.

This process ensures a robust DR setup for Oracle EBS with minimal downtime during failovers.
