Oracle DBA Interview Questions & Answers
Data storage can be done using any data tool. Data storage defines how data is stored. On the
other hand, data representation is how data is displayed to a user. For example, we can store
data in the form of a table but represent it in the form of charts.
In Oracle, the instance and the database are two separate components that work together. The instance
resides in RAM and is the way users speak to the database. The user data resides inside data files,
which reside on hard disks.
Users connect to the database instance, and ultimately the instance speaks with the database.
An instance can also be defined as the combination of memory structures and background processes.
How LRU algorithm impacts database instance?
RAM works on the LRU (Least Recently Used) algorithm. As the database instance resides in RAM, it has
to follow the same rules as RAM. Hence, the Oracle instance also follows the LRU algorithm.
A database client is a small piece of software which must be installed on the application server so that the
application can connect to the database server. For any client to connect to an Oracle database, the Oracle
client must be installed.
Explain how user connectivity happens in database.
All new user connections land on the listener. The listener hands over the incoming connection to
PMON. The user credentials are then verified against the base tables. If the details are correct, a server
process is created on the server side and PGA memory is allocated for it.
There are 100 users connected to database and listener goes down. What will happen?
Nothing will happen to the existing users. The problem will only be with new database connections.
The listener only comes into the picture when there is a new database connection.
Base tables are binary tables inside the database that contain encrypted data. They are also called
metadata because they store data about other data inside the database. Base tables are cached in
the data dictionary area of the instance and flushed out when no longer needed. Base tables are also
known as dictionary tables. Any direct modification to base tables can corrupt the database.
Only Oracle background processes can modify these tables.
Base tables reside in the SYSTEM tablespace.
When we travel from point A to point B, we tend to take the shortest route even though we have
multiple options. In the same way, the optimizer generates different plans, or ways in which a SQL
statement can be executed. These plans are known as execution plans. The optimizer then chooses the
best plan based on CPU cost and resources. The job of the optimizer is to generate execution plans
and choose the best one.
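For example, the chosen plan can be inspected with EXPLAIN PLAN and DBMS_XPLAN (a minimal sketch; the employees table and the predicate are illustrative only):
EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE department_id = 10;   -- illustrative table and column
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);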
Any work inside the server is done through background processes. These processes need some
memory to store basic information. On the database server, one server process is created for
every user connection. These server processes also take some memory. This memory is known as the
PGA.
SGA is the System Global Area (also called the shared global area). Anything placed in the SGA is shared with all the users.
All small data filtering and sorting happens in the PGA. This is known as an in-memory sort.
If the data is big, sorting is done in the temp tablespace.
Why LGWR writes before DBWR writes? (imp)
In general, transaction recording is more important than transaction execution. For example, if we
have redo entries on disk and suddenly a power failure happens (while the dirty blocks are not yet
written to disk), Oracle can still recover the transactions by reading the redo logs on disk. This is
why the redo logs are made permanent first and the dirty blocks are written to disk afterwards.
Can we have multiple DBWR processes in database?
We can have between 1 and 36 DBWR processes in 11g.
In 12c, 1 to 100 DBWR processes.
In 19c, the limit is similar to 12c: 100 maximum.
Explain about SCN and checkpoint number.
SCN (System Change Number) is a unique number assigned to the set of redo entries generated. It identifies
that all of those redo entries belong to the same transaction.
A checkpoint is a database event which synchronizes the database blocks in memory with the
datafiles on disk. It has two main purposes: to establish data consistency and to enable faster
database recovery.
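For example, the current SCN and the last checkpoint SCN can be checked with queries like the following (a sketch; the exact values depend on your database):
SELECT CURRENT_SCN FROM V$DATABASE;
SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;
ALTER SYSTEM CHECKPOINT;   -- forces a checkpoint manually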
I would like to perform 100 database installations. How will you do it?
For such big number of installations, we can go with silent mode installation using response file.
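A minimal sketch of a silent installation, assuming the software is staged under /u01/stage and a prepared response file db_install.rsp (both names are illustrative):
# run the installer in silent mode with a response file
./runInstaller -silent -responseFile /u01/stage/db_install.rsp
# databases can then also be created silently with DBCA
dbca -silent -responseFile /u01/stage/dbca.rsp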
What are oinstall and dba groups? Why we assign these groups to oracle user?
oinstall group provides oracle software installation permissions to all users in the group.
dba group provides oracle administration permissions to all users in the group.
Is it compulsory that we need to give group names as oinstall and dba? Or can we give any other name?
We can give any name, but those are oracle standards.
What are kernel parameters and why to set them?
Kernel parameters are system-level settings that control how the OS manages hardware and
software resources.
They define, among other things, how much physical memory can be allocated to the Oracle database.
These parameters are found in the file /etc/sysctl.conf.
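A typical /etc/sysctl.conf fragment looks like the following (the values are illustrative and depend on RAM and the Oracle release; always use the values from the Oracle installation guide for your version):
# example kernel settings for an Oracle database server (illustrative values)
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
# apply the changes without a reboot
sysctl -p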
It is a location that provides information about the Oracle products installed on a server.
Yes we can do that but still we need to set ORACLE_HOME and PATH variables in .bash_profile.
I made changes to .bash_profile but the variables are still not set.
When you make changes to .bash_profile, you must source it at least once using
$ . ~/.bash_profile (or log out and log back in).
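A minimal .bash_profile fragment, assuming a 19c home under /u01/app/oracle and an SID of orcl (both are illustrative):
# Oracle environment variables (paths and SID are examples)
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
# source the file so the current shell picks up the changes
. ~/.bash_profile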
How can I check environment variables are set properly?
Using env | grep ORA.
Using echo command like echo $ORACLE_HOME.
What is the difference between /etc/oratab file and ps -ef|grep pmon output?
ps -ef | grep pmon gives output of running database instances on a server.
The /etc/oratab file lists all the databases created on a server, whether they are currently running or
shut down.
How many databases can I create? (Imp)
There is no fixed limit; you can create as many as the server's CPU and memory can support.
How many databases are there in your environment?
Say any value between 150 to 200 and tell them 40 are prod and rest all are dev, test and QA.
Which is the biggest database in your environment?
Say any value between 700 GB and 1.5 TB. (2 TB)
How many servers are there in your environment?
You can say 70 to 80 servers with Linux, AIX and windows flavors. (48 servers)
Application team requested you to delete a database. What will you do?
These kinds of requests must be checked with the database architect. If we still have to do it, we can
stop the listener for one week, then shut down the database and take a cold backup of the DB.
If the application team does not come back after one month, we can drop the database.
Base tables are in encrypted format, how can you check data from it?
There are Data dictionary views and dynamic performance views created on base tables. We can
query these views as they have data in human readable format.
How data dictionary views and dynamic performance views are created?
The catalog.sql and catproc.sql scripts create the necessary views and procedures.
When are base tables created? what will happen if we do not run catalog.sql and catproc.sql
Base tables are created when you create a database.
If you do not run those scripts, we will not be able to query any data dictionary view or dynamic
performance view.
Why we do not run catalog.sql and catproc.sql when we create database using DBCA?
DBCA will run those scripts internally.
We must run those scripts only when we create database manually.
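When creating a database manually, the scripts are run as SYSDBA from the standard location under ORACLE_HOME, for example:
sqlplus / as sysdba
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
-- '?' expands to ORACLE_HOME inside SQL*Plus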
I would like to know current user details in database. How to find this information?
You can query V$SESSION view to see the user connection details.
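For example (a sketch that lists only real user sessions):
SELECT sid, serial#, username, status, osuser, machine, program
FROM   v$session
WHERE  username IS NOT NULL;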
A pfile is a human-readable file and an spfile is a binary file. We can start the database instance with either of
the files, but first preference is given to the spfile.
Both reside under $ORACLE_HOME/dbs location and are used to allocate instance memory.
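For example, you can check which file the instance started with and convert between the two (standard commands):
SHOW PARAMETER spfile;       -- an empty value means the instance was started with a pfile
CREATE SPFILE FROM PFILE;    -- build an spfile from the pfile under $ORACLE_HOME/dbs
CREATE PFILE FROM SPFILE;    -- build a readable pfile from the spfile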
When the data is big, database needs more space. It uses temp tablespace in such cases to
perform sorting.
The contents of the control file are validated against the physical files. Oracle physically checks the data files
and redo log files on disk. The data file headers and redo log files are matched with the SCN
recorded in the control file. Once validation is done, the database is opened.
I have 4 multiplexed copies of control files under /u01. Do you suggest to keep more copies or 4 are
enough?
We must have a minimum of 2 multiplexed copies, but they must be on different physical disks. You
have kept all 4 files under /u01; if we lose the /u01 mount point, we lose all multiplexed copies.
I lost control file under /u01 but I have multiplexed copy in /u02. How do you recover database?
We can simply copy control file from /u02 using cp command and make a copy under /u01 with
same name as lost control file. Once we have both control files, we can start the database.
All the parameters that you modify while database is up and running can take three scope values:
spfile, memory, both.
The SPFILE scope applies the change from the next restart, the MEMORY scope applies the change immediately
but it is lost after a restart, and BOTH applies the change immediately and also persists it in the spfile.
Ex: ALTER SYSTEM SET SGA_TARGET=2G SCOPE=MEMORY;
The database will hang. We must have archive log backup scripts to take archive log backup and
delete to release space.
What is the difference between redo logs and archive logs? (Imp)
Redo logs are overwritten by LGWR in cyclic order.
Archive logs are backup or copy of redo logs in a separate location.
Not possible, we can create a new group with big size and drop the existing one.
If archive destination is full, what will you do?
We will first try to take backup of archives if possible. If not, we will move some archives to
another location OR we can even change the archive destination inside database to a location
which has more space.
How do you monitor tablespaces in your environment?
We have tablespace utilization scripts scheduled on each server. The script triggers email
whenever a tablespace utilization crosses above 80%. Depending on the alert and space on server,
we add space to tablespaces.
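The alerting script typically wraps a query like the following (a sketch; the 80% threshold matches the alert level mentioned above):
SELECT tablespace_name,
       ROUND(used_percent, 2) AS pct_used
FROM   dba_tablespace_usage_metrics
WHERE  used_percent > 80
ORDER  BY used_percent DESC;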
Can we take SYSTEM and SYSAUX tablespace offline? (Imp)
We can take SYSAUX offline but not SYSTEM.
SYSTEM stores critical metadata required for basic database operations.
SYSAUX contains components such as AWR (Automatic Workload Repository) and Enterprise Manager data.
Oracle allows SYSAUX to be taken offline for maintenance or recovery purposes.
A query is executing and temp tablespace is full. You added 20GB but again temp is full. What will you
do next?
We need to check the query and if possible tune the query. We can even speak with application
team and allocate more temp space and reclaim space once their activity is done.
How can you identify which data file is modified today?
We can check the data file timestamp at OS level.
You are trying to add data file to tablespace but getting error. What could be the issue?
(Imp)
Control file has MAXDATAFILES parameter. If this number is exceeded, you cannot add more data
files.
Oracle Managed Files (OMF) enables us to create tablespaces without providing file names and locations.
The drawback is the naming convention of the generated files.
No, my environment does not use OMF feature.
Sessions which exceed the idle time are marked as SNIPED. The Oracle-level session is cleared
when the user exceeds the idle time, but the OS-level process still exists. This is an overhead on the server.
IDLE_TIME is set to 15 min but even after 20 min, user session is not getting disconnected. What is the
issue?
RESOURCE_LIMIT parameter is not set to TRUE.
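For example (a sketch; the profile name app_users and the user scott are illustrative):
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE SCOPE=BOTH;
CREATE PROFILE app_users LIMIT IDLE_TIME 15;   -- minutes; profile name is an example
ALTER USER scott PROFILE app_users;            -- user name is an example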
Tnsping is not working, everything is fine on listener.ora and tnsnames.ora file. What
could be the issue? (imp)
The default port 1521 (or whichever port you specified) is not open on the servers.
We can ask the network team to open those ports.
I would like to select data from a table which is in a different database. How can I do it?
You can create DB LINK and query from the remote table on remote database.
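For example (a sketch; the link name, credentials, TNS alias, and table are all illustrative):
CREATE DATABASE LINK remote_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'REMOTE_TNS';                   -- TNS alias of the remote database
SELECT * FROM employees@remote_link;    -- query the remote table through the link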
Multitenant DB
Q. How do you switch from one container to another container inside SQL*PLUS?
ALTER SESSION SET CONTAINER=pdb1;
Q. How about the datafiles (system, sysaux, undo, redo, etc.) — are they created when you create a PDB?
IMP
Datafiles are separate for the CDB and for each PDB.
There is only one set of undo files and redo files across the container.
From 12cR2 onwards we can create local undo for each PDB.
Temp files can be created in each PDB, or one temp tablespace can be shared across all databases.
The SGA is shared across all databases.
Background processes are shared across all databases; no additional background processes are defined per PDB.
Q. Is the alert log the same for all pdbs in a cdb, or are they different?
Yes, one CDB, one alert log.
Memory structure:
SGA Importance
1. Optimizes Performance:
• Reduces I/O operations by caching frequently accessed data.
• Enables efficient query execution with cached SQL and metadata.
2. Supports Multiple Users:
• Facilitates sharing of critical resources like the buffer cache and shared
pool.
3. Controls Memory Usage:
• Proper sizing of the SGA avoids contention and excessive disk usage.
PGA Importance
1. Session Efficiency:
• Improves performance for operations like sorting and joining.
• Minimizes disk I/O for session-specific tasks.
2. Scalability:
• Properly allocated PGA ensures each session gets adequate
resources.
3. Parallel Query Performance:
• Operations like parallel joins and hash aggregations rely on sufficient PGA
memory.
How to Configure SGA and PGA
ALTER SYSTEM SET SGA_TARGET=2G;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=1G;
SELECT * FROM V$SGAINFO;
SELECT * FROM V$PGASTAT;
Q. As you said, if SGA and background process are shared, is there any performance impact?
A: Sharing the SGA and background processes among multiple PDBs can potentially impact performance. This impact
depends on the following factors:
1. SGA sharing: the SGA is shared among all PDBs within the same CDB.
2. Background processes such as DBWn, LGWR, and SMON are also shared.
3. Potential performance impact when multiple PDBs are active:
a) SGA contention: the buffer cache and shared pool may face contention if the workload from
multiple PDBs is high; high I/O workloads increase buffer cache contention.
b) Background process contention: processes such as LGWR or DBWR may struggle if multiple PDBs
perform a high volume of writes simultaneously.
4. Higher CPU and I/O load: more PDBs mean more sessions, which increases CPU usage and
I/O requests, potentially leading to bottlenecks if the infrastructure is not scaled
appropriately.
5. Query performance may degrade.
6. Disk and network bottlenecks may appear.
Q. Are there any background processes ex, PMON, SMON etc associated with PDBs? (imp)
No. There is one set of background processes shared by the root and all PDBs.
1. Explain about password file, what is stored inside password file and its use?
A:
In Oracle Database, a password file is a special file used to authenticate administrative users (also
known as privileged users) who connect to the database remotely with SYSDBA, SYSOPER, or other
administrative privileges. It is primarily used in cases where authentication through the operating
system is not possible.
What is a Password File?
• A password file is an external binary file that stores the credentials of database
users with administrative privileges.
• It is located outside the database and allows Oracle to authenticate users
attempting to connect to the database with administrative rights.
Contents of a Password File
The password file stores:
1. Username and Password Hash:
Location of Password File
On Linux/UNIX: $ORACLE_HOME/dbs/orapw<SID>
On Windows: %ORACLE_HOME%\database\PWD<SID>.ora
• For RAC (Real Application Clusters):
Password files are stored in shared storage.
• Oracle checks the username and password against the password file.
• If credentials are valid and the user has the requested privilege (e.g., SYSDBA), the
connection is granted.
1. SYSDBA:
Full administrative privileges, including starting, stopping, and recovering the database.
2. SYSOPER:
Limited administrative privileges, mainly for starting and stopping the database.
3. SYSBACKUP:
Privilege for performing backup and recovery tasks using RMAN.
4. SYSDG:
Privilege for managing Oracle Data Guard.
5. SYSKM:
Privilege for managing Transparent Data Encryption (TDE) keys.
6. SYSRAC:
Privilege for managing Oracle RAC.
4. Modify Passwords:
Use the ALTER USER command to update the password of users in the password file:
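For example (a sketch; the file name follows the usual $ORACLE_HOME/dbs/orapw<SID> convention and the password is illustrative):
orapwd file=$ORACLE_HOME/dbs/orapwORCL force=y       -- create or recreate the password file
ALTER USER sys IDENTIFIED BY "NewStrongPassword1";   -- updates the password file entry
SELECT * FROM v$pwfile_users;                        -- lists users present in the password file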
3. What are the contents of control file and which parameter defines the controlfile
retention? IMP
A: The control file in Oracle Database is a crucial binary file that records the physical structure and
state of the database. It contains metadata essential for database operations, recovery, and
consistency. Without a valid control file, the database cannot be opened or operated.
Contents of a Control File:
The control file contains the following critical information:
1. Database Information: DBID, Timestamps
2. Physical File Structure: Paths to datafiles, redologfiles
3. Backup and Recovery Information: RMAN and SCN
4. Checkpoint Information: checkpoint SCN
5. Archived Log Information:
6. Redo Threads:
• Details of redo threads for each instance in RAC.
7. Tablespace Information:
• Details about tablespaces and their states.
8. Log History:
• Metadata about redo log history for instance recovery.
9. Flashback Information:
• Information about the flashback log files (if enabled).
10. Database Incarnation Information:
• Tracks database incarnations (relevant for RMAN recovery).
Dynamic Setting:
ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=14 SCOPE=BOTH;
• This sets the retention period to 14 days and applies it to both the current and
future instances.
Impact of CONTROL_FILE_RECORD_KEEP_TIME
• Longer Retention Period: Ensures more recovery metadata is available for RMAN or
manual recovery.
• Requires larger control files to accommodate the additional records.
• Shorter Retention Period: Saves space in the control file but increases the risk of
losing metadata needed for recovery operations.
Best Practices:
1. Use RMAN:
Use RMAN catalog for long-term backup metadata storage
2. Monitor Control File Size:
3. Synchronize with Backup Policy:
5. Differentiate between data file header and data block header? What it contains?
A: The datafile header and data block header are both essential metadata components in an
Oracle database, but they serve different purposes and contain distinct information:
1. Datafile Header:
The datafile header is located at the beginning of each datafile and stores metadata about the
entire file.
2. Data Block Header
The data block header is located at the start of each Oracle data block and stores metadata
specific to that block.
Key Difference
• Scope: Datafile header is for the entire file; data block header is for individual
blocks within the file.
• Purpose: Datafile header manages file-level metadata; data block header manages
block-specific metadata.
Explanation (this refers to ALTER DATABASE BACKUP CONTROLFILE TO TRACE):
1. Purpose:
This command generates a trace file containing the SQL statements needed to recreate the control file. It is
useful for disaster recovery or for migrating databases.
2. Trace File Location: USER_DUMP_DEST
9. How do you rename redo log file – online or offline? Give the command?
A: Renaming a redo log file in Oracle can be done either online (when the database is running) or
offline (when the database is shut down). However, renaming redo log files online is more
common and avoids downtime.
Relocating a redo log file while the database is open is normally done by adding a new member at the new
location and dropping the old member; the group whose member is dropped must be INACTIVE. Here's how to do it:
1. Check Redo Log Status:
Use the following queries to check the redo log groups and members:
SELECT GROUP#, STATUS FROM V$LOG;
SELECT GROUP#, MEMBER FROM V$LOGFILE;
• The group should be INACTIVE before you touch its members.
• If the group is CURRENT, perform a log switch:
ALTER SYSTEM SWITCH LOGFILE;
2. Add a New Member at the New Location:
ALTER DATABASE ADD LOGFILE MEMBER '/new_path/redo01.log' TO GROUP <group_number>;
3. Drop the Old Member:
ALTER DATABASE DROP LOGFILE MEMBER '/old_path/redo01.log';
4. Remove the Old File at the OS Level (once you are sure it is no longer needed):
rm /old_path/redo01.log
The control file is updated automatically by the ADD/DROP commands, so no separate RENAME FILE step is needed in this online method.
If the database is not running, you can rename redo log files while the database is in the MOUNT
state.
1. Shutdown the Database:
SHUTDOWN IMMEDIATE;
2. Rename the Redo Log File at the OS Level:
mv /old_path/redo01.log /new_path/redo01.log
3. Mount the Database:
STARTUP MOUNT;
4. Update the Control File:
ALTER DATABASE RENAME FILE '/old_path/redo01.log' TO '/new_path/redo01.log';
5. Open the Database:
ALTER DATABASE OPEN;
Key Notes
• You cannot rename a current redo log file while the database is online. Perform a
log switch to make it inactive.
• Always back up the database before performing structural changes to redo log files.
• If using multiplexed redo log files, update all members of the group.
LOG_ARCHIVE_FORMAT = '%t_%s_%r.arc'
Setting LOG_ARCHIVE_FORMAT
You can set or modify this parameter in the spfile or pfile.
For example:
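For example (LOG_ARCHIVE_FORMAT is a static parameter, so it must be set in the spfile and takes effect after a restart):
ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
-- restart the instance for the new format to take effect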
11.Can I change the archive log destination while the database is running?
A: Yes, you can change the archive log destination while the database is running. Oracle allows you
to modify the archive log destination dynamically using the ALTER SYSTEM command without
restarting the database.
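For example (a sketch; the destination path is illustrative):
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u02/archivelogs' SCOPE=BOTH;
ARCHIVE LOG LIST;   -- confirm the new destination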
12.I lost redo log file and have no multiplexed copy or archive log. How can I recover
the database? (IMP)
A: Losing a redo log file without a multiplexed copy or an archived version is a critical situation.
Recovery is possible in some scenarios but depends on the state of the database and the
availability of other files. Here’s how you can handle it:
If the lost redo log file belongs to the current or active redo log group, you cannot recover
committed transactions up to the point of failure. However, you can attempt incomplete recovery.
Steps:
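The steps are not listed above; a minimal hedged sketch of a cancel-based incomplete recovery, assuming the datafiles have first been restored from a valid backup:
STARTUP MOUNT;
RECOVER DATABASE UNTIL CANCEL;     -- apply whatever redo/archives are available, then type CANCEL
ALTER DATABASE OPEN RESETLOGS;     -- opens the database and creates a new incarnation
-- transactions in the lost redo log are unrecoverable; take a fresh full backup immediately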
Contents:
1. Installed Products: Details of all installed Oracle software.
2. Oracle Home Locations: Paths to all Oracle homes.
3. Component Details: Versions, names, and patch IDs of components.
4. Patch History: Record of patches applied.
5. Configuration Data: OS and system-specific details.
Format:
• Stored in XML format in the ContentsXML subdirectory under the oraInventory
directory.
• Key file: inventory.xml.
$ cat inventory.xml
17.Why data blocks are always 80% used and not 100%?
A: Data blocks in Oracle are typically designed to be 80% full rather than 100% to maintain efficient
database performance and allow for updates to existing rows.
This behavior is controlled by the PCTFREE parameter.
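For example (a sketch; the table name is illustrative), reserving 20% of each block for future row growth:
CREATE TABLE orders_demo (
  id     NUMBER,
  status VARCHAR2(20)
) PCTFREE 20;   -- 20% of each block is kept free for updates to existing rows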
18.How many types of segments are there in Oracle? How many types of objects are
there in Oracle and how are they stored in the segments?
A: Types of Segments in Oracle
Oracle Database uses segments to store database objects. There are four main types of segments:
1. Data Segments:
• Store table or cluster data.
• Each table or cluster has a single data segment.
2. Index Segments:
• Store index data for tables.
• Each index has its own segment.
3. Undo Segments:
• Store undo information for rollback and recovery operations.
• Managed by Oracle automatically in Undo tablespaces.
4. Temporary Segments:
• Used for intermediate operations like sorting or temporary tables.
• Exist only during the query’s execution and are deallocated afterward.
Non-physical objects like Views, Synonyms, and Sequences do not occupy segments as they are
logical objects.
19.How will you calculate the best data block size for a new database and propose it
to the client?
A: To calculate the best data block size, consider the workload:
• OLTP (Online Transaction Processing) systems: Use 8 KB for frequent small
transactions.
• DSS/Analytics systems: Use 16 KB or 32 KB for large queries.
Analyze average row size to determine how many rows fit per block without excessive overhead.
Match the database block size to the OS block size (e.g., 4 KB or 8 KB) for efficient I/O.
Estimate database size and growth to avoid excessive block management.
Test performance using workload simulations.
Propose block size based on workload type: 8 KB for OLTP or 16–32 KB for DSS, ensuring it aligns
with performance and scalability needs.
20.What is undo retention policy? How do you estimate the undo retention policy?
A: The undo retention policy in Oracle defines how long undo data is retained in the undo
tablespace to support read consistency, flashback queries, and rollbacks. It is controlled by the
UNDO_RETENTION parameter.
To estimate the retention policy:
1. Determine the longest-running query or flashback requirement.
2. Measure the undo generation rate using the V$UNDOSTAT view.
3. Use the formula:
Undo tablespace Size = Undo generation rate × Retention period
Set retention based on workload needs (e.g., 900 seconds for OLTP, longer for analytics).
Use AUTO undo management for dynamic tuning.
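For example (a sketch; 900 seconds is the value mentioned above):
SHOW PARAMETER undo_retention;
ALTER SYSTEM SET UNDO_RETENTION = 900 SCOPE=BOTH;
SELECT MAX(maxquerylen) AS longest_query_sec,
       MAX(undoblks)    AS peak_undo_blocks
FROM   v$undostat;      -- used to estimate the required undo size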
Key Mechanisms:
• Redo Logs: Oracle uses redo logs to store all changes made to the database. During
recovery, redo logs are critical for rolling forward committed transactions.
• Undo Segments: Undo segments are used to track uncommitted changes and
facilitate rolling back transactions.
Outcomes:
Instance recovery is typically fast because Oracle performs it in memory using efficient
algorithms and minimizes downtime.
Roll Forward:
• Purpose: Apply committed changes from redo logs to the database files.
• When It Happens:
• During instance recovery: To recover committed changes that were not written to
data files due to an instance failure.
• During media recovery: When restoring from a backup.
• How It Works:
• Oracle applies redo records stored in redo logs to data files to bring them up to
date.
• Outcome: Ensures committed changes are not lost.
Key Differences:
Aspect | Rollback | Roll Forward
Primary Purpose | Undo uncommitted changes | Apply committed changes
Data Source | Undo segments | Redo logs
Use Case | Transaction abort, recovery | Recovery after failure
End Result | Original state of data is restored | Database is brought up to date
Both processes are essential for maintaining consistency and ensuring reliable database
operations.
Aspect | Temporary Table | Permanent Table
Visibility | Data is private to the session; other sessions cannot see it. | Data is visible to all users with appropriate access.
Automatic Cleanup | Automatically cleans up data at the end of a session/transaction. | Data remains until deleted manually or programmatically.
Use Cases | Suitable for intermediate results, staging, and sorting. | Used for storing persistent application data.
A materialized view is a database object that stores the results of a query physically on disk,
unlike a regular view, which is a virtual table that retrieves data dynamically during execution.
Differences:
Aspect | Materialized View | View | Normal Table
Storage | Stores query results physically | No storage; retrieves data dynamically | Stores data permanently
Refresh | Requires manual or scheduled refresh | Always up-to-date with the base tables | No refresh needed; direct updates apply
Performance | Improves query performance for complex queries | Slower for large queries | Faster for direct reads
Use Case | Aggregates, joins, or frequently accessed data | Simplifies query complexity | Stores persistent, transactional data
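For example (a sketch; the sales table and column names are illustrative):
CREATE MATERIALIZED VIEW sales_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT product_id, SUM(amount) AS total_amount
  FROM   sales
  GROUP  BY product_id;
EXEC DBMS_MVIEW.REFRESH('SALES_MV');   -- manual refresh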
26.How do you query row ID column in a table? What exactly is row ID?
A: What is a Row ID?
The Row ID is a unique identifier for each row in an Oracle database table. It represents the
physical location of a row in the database and includes the following components:
• Object Number: Identifies the table or cluster.
• File Number: Identifies the data file.
• Block Number: Indicates the block containing the row.
• Row Number: Indicates the position of the row within the block.
The Row ID is useful for locating rows quickly during internal database operations.
How to Query the Row ID Column?
SELECT ROWID, column1, column2
FROM table_name;
Use Cases of Row ID:
1. Debugging: Identify specific rows for troubleshooting.
2. Optimizing Queries: Row IDs can be used in direct access methods for faster
retrieval.
3. Row Uniqueness: Useful in applications requiring unique row identification.
28.What are the table partitioning strategies used in your environment? (Imp)
A: Table partitioning divides a large table into smaller, more manageable pieces,
improving query performance and manageability.
Each partition can be managed and accessed independently while the table still appears as a single table
to users.
Strategies used for partitioning: range (historical data split by time), list, hash, and composite.
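For example, a range-partitioned table for historical data (a sketch; names and boundaries are illustrative):
CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2023 VALUES LESS THAN (TO_DATE('01-01-2024','DD-MM-YYYY')),
  PARTITION p2024 VALUES LESS THAN (TO_DATE('01-01-2025','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);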
29.Do you recommend partitioned tables in Data Ware House or OLTP databases?
A: Partitioned tables are generally recommended for Data Warehouse (DW) environments rather
than OLTP (Online Transaction Processing) systems. In DW systems, partitioning improves query
performance by enabling efficient data pruning, particularly for large datasets involving historical
data. Partitioning by range (e.g., date) or hash can speed up scans, aggregation, and reporting.
In contrast, OLTP systems, which handle frequent small transactions, benefit less from
partitioning. The overhead of maintaining partitions and managing fine-grained indexing may
outweigh the performance benefits in OLTP, where data retrieval is typically more transactional
and smaller in scale.
30.I want to install Oracle software on 300 servers at a time. How will you do it?
A: To install Oracle software across 300 servers at once, particularly for Oracle products such as
Oracle Database, Oracle WebLogic Server, Oracle E-Business Suite, or Oracle Fusion Middleware,
an automated approach is crucial to ensure consistent deployment and reduce manual
intervention.
1. Preparation of Installation Package
Before beginning, ensure that the Oracle software packages are readily available. For Oracle
applications, make sure to have response files for silent installations, which will automate the
setup by specifying configuration parameters like Oracle Home, installation directories, ports, and
database names.
31. My client recommends putting undo and temp on AUTOEXTEND ON. What do you have to say
about it?
A: Enabling auto-extend for undo and temp tablespaces can be useful in certain environments, but
it should be approached with caution.
1. Undo Tablespace: Auto-extend helps prevent ORA-1652 (unable to extend
segment) errors, especially in systems with unpredictable transaction volumes. However, setting an
unlimited auto-extend may lead to unmonitored space growth, potentially filling up disk storage
and impacting system performance. It’s better to set a reasonable limit for growth, allowing
alerting mechanisms to monitor space usage.
2. Temporary Tablespace: Auto-extend is often recommended to handle temporary
space for large sorts or queries. However, constant space extension can lead to fragmentation and
complicate performance tuning. Proper monitoring and regular cleanup should accompany its
use.
In both cases, careful monitoring is key. Ensure sufficient disk space and performance
considerations are made.
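For example, capping growth with MAXSIZE so the files cannot silently fill the mount point (a sketch; paths and sizes are illustrative):
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' AUTOEXTEND ON NEXT 512M MAXSIZE 32G;
ALTER DATABASE TEMPFILE '/u01/oradata/ORCL/temp01.dbf' AUTOEXTEND ON NEXT 512M MAXSIZE 32G;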
33.Client is asking for sysdba access. What is the command to give access?
A:
To grant SYSDBA access to a user in Oracle, you can execute the following SQL command as an
Oracle SYS user:
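The command itself is not shown above; a minimal sketch (the user name app_dba_user is illustrative):
GRANT SYSDBA TO app_dba_user;
SELECT username, sysdba FROM v$pwfile_users;   -- confirm the user now has the SYSDBA privilege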
After running this command, the user will have SYSDBA privileges, allowing them to perform
administrative tasks such as database startup, shutdown, and access to all objects within the
database.
Important: Be cautious while granting SYSDBA privileges as it provides full administrative access
to the database.
Backup Channels:
34.My database size is 25 TB. How many channels will you allocate in RMAN
command?
A: When performing backups on a large 25 TB database using RMAN, the number of channels
allocated depends on the system’s hardware capabilities, disk I/O performance, and the specific
backup method (e.g., full, incremental). A good starting point is to allocate multiple channels to
speed up the process and improve throughput. Here’s how I would approach it:
1. Default Allocation: RMAN typically uses a default of one channel for each backup
device. However, for large databases like 25 TB, you can increase this number to improve
performance.
2. Consider Disk I/O and Network Throughput: Based on the environment’s disk I/O
capabilities and network throughput, allocate around 4-8 channels for each device (disk or tape) in
a typical configuration. For a 25 TB database, it might be reasonable to use 8-16 channels for disk
backups, depending on available resources.
3. Resource Allocation: Ensure that you have sufficient resources (CPU, memory, disk
throughput) to handle the increased load. Monitor the backup process and adjust the number of
channels as needed.
4. Testing and Tuning: Start with a smaller number of channels (e.g., 4) and
progressively increase, testing the performance impact. RMAN provides a CONFIGURE CHANNEL
command to adjust the settings.
For example:
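A sketch using the CONFIGURE commands mentioned above (8 channels as a starting point; adjust after testing):
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 64G;   -- piece size is illustrative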
This approach balances performance and resource usage. Always monitor the backup performance
and adjust channels accordingly.
RMAN
36.What is backup optimization on in RMAN? Do you recommend it to be enabled?
(IMP)
A: Backup Optimization in RMAN prevents redundant backups of files that haven’t changed since
the last backup. When enabled (CONFIGURE BACKUP OPTIMIZATION ON), RMAN skips backing up
files already present in the backup media if:
• The file exists in the same recovery window.
• No changes were made to the file since the last backup.
• It matches a pre-existing backup in a configured backup location.
Recommendation:
• Enable it for redundancy and recovery window-based backups, especially for large
databases, to save storage and reduce backup time.
• Disable it when ensuring a full backup set is critical, such as before major database
changes or migrations.
37.I want to configure tape backups. How will you configure tape with RMAN?
A: To configure tape backups with RMAN, follow these steps:
• Use MML logs and RMAN logs to verify successful tape operations.
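A minimal sketch of the channel configuration once the vendor's media management library (MML) is installed; the library path and PARMS values are vendor-specific placeholders:
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
CONFIGURE CHANNEL DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/mml/libobk.so, ENV=(NB_ORA_CLIENT=dbhost01)';   -- placeholder values
CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 4;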
• Definition: A backup is marked expired when RMAN cannot find the physical backup
file during a CROSSCHECK operation.
• Cause: The file was manually deleted, moved, or became inaccessible.
• Identification:
----------------------------------------------------
RMAN> LIST EXPIRED BACKUP;
---------------------------------------------------------
• Action: Either remove the backup entry with DELETE EXPIRED BACKUP or
restore/move the missing file.
Obsolete Backups:
• Definition: A backup is marked obsolete when it is no longer needed to satisfy the
retention policy (e.g., recovery window or redundancy).
• Cause: Defined by policies, not physical file status.
• Identification:
---------------------------------------------
RMAN> REPORT OBSOLETE;
----------------------------------------------
• Action: Use DELETE OBSOLETE to clean up obsolete backups.
Key Difference: an expired backup means the physical backup piece is missing from disk or tape, while an obsolete backup still exists but is no longer needed to satisfy the retention policy.
39.What is database incarnation? What happens when database goes into new
incarnation?
A: Database Incarnation in Oracle refers to a version of the database with a specific set of system
change numbers (SCN) and a unique incarnation ID. The incarnation tracks the history of the
database as it evolves over time, especially in scenarios involving database recovery, resetlogs, or
incomplete recovery.
RMAN> RESYNC CATALOG;
This command updates the recovery catalog with the latest information about the target
database’s backups, archived logs, and other related metadata.
3. Verify Synchronization:
You can verify the synchronization by checking the backup status:
42.Recovery catalog server is down. How will handle the failed backups for an
environment with 1000 databases?
A: If the recovery catalog server is down and backups are failing for a large environment, you can
handle the situation by:
1. Use of Control File for Backup Metadata:
RMAN will store backup metadata in the control file of each database during the downtime.
Ensure control file autobackup is enabled.
2. Reschedule Backup:
Once the catalog server is back online, use RMAN’s RESYNC command to synchronize all databases
with the catalog:
3. Restore Catalog:
If required, restore the catalog from backup to ensure consistency across all databases.
4. Monitor Backups:
Continue monitoring backups and verify that metadata is synchronized once the catalog is back.
This sequence applies the archived redo logs to the database and opens it for normal operations.
1. Instance Status: Verify database up/down status and check alert logs for errors.
2. Performance: Monitor CPU, memory, and I/O usage using tools like AWR, ADDM, or
OEM.
3. Datafile and Tablespace Usage: Check for sufficient free space.
4. Backup Validation: Ensure backups are recent and restorable.
5. Redo and Undo Logs: Monitor log usage and archiving.
6. Security: Check user privileges and audit settings.
47.How do you calculate the archivelog backup frequency and schedule in crontab?
A: To calculate archivelog backup frequency:
1. Monitor Archive Log Generation: Check archive log size and generation rate using
queries like:
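A sketch of such a query against v$archived_log (last 7 days, grouped by day):
SELECT TRUNC(completion_time) AS day,
       COUNT(*)                                          AS logs_generated,
       ROUND(SUM(blocks * block_size)/1024/1024/1024, 2) AS size_gb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 7
GROUP  BY TRUNC(completion_time)
ORDER  BY 1;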
2. Assess Space Availability: Ensure sufficient disk space and plan backups before it
fills up.
3. Schedule in Crontab: Use RMAN scripts and cron to automate backups. For
example:
• Create an RMAN script (arch_backup.rman):
SHUTDOWN IMMEDIATE;
STARTUP;
This process ensures the cloned database is properly renamed while maintaining consistency and
functionality.
50.Explain RMAN Duplicate and difference between RMAN Duplicate cloning for new
database and cloning for physical standby? (INQ)
A: RMAN Duplicate creates a copy of the target database, which can be used for various purposes
like testing or creating a standby.
Difference between RMAN Duplicate for New Database and Physical Standby:
New Database:
• Used to create an independent copy of the target database.
• Involves full database restoration and recovery.
• Requires unique names, DBID, and file locations.
Physical Standby:
• Clones the target to create a standby database.
• Syncs with the primary through Redo Apply (MRP) after cloning.
• The database remains in a read-only state until activated.
Both use similar RMAN commands, but the physical standby is continuously synchronized with the
primary.
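A minimal sketch of both forms, assuming RMAN is already connected to the target and auxiliary instances and the auxiliary is started in NOMOUNT (the name clonedb is illustrative):
-- cloning an independent new database
DUPLICATE TARGET DATABASE TO clonedb FROM ACTIVE DATABASE NOFILENAMECHECK;
-- cloning a physical standby
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;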
51.I started the cloning and it failed in between. What should I do now? (IMP)
A: If the cloning process fails, follow these steps:
1. Check Logs: Review the RMAN or database logs for specific error messages to
identify the cause.
2. Restore Backup: If the cloning process used backups, restore the database from the
last successful backup.
3. Resolve Issues: Fix the underlying issue (e.g., disk space, connectivity, or
permissions).
4. Cleanup: Remove any partial files or incomplete clones from the target system to
avoid conflicts.
5. Resume Cloning: Restart the cloning process using the appropriate RMAN
commands, ensuring all prerequisites are met.
6. Verify: After successful cloning, verify the database status, data integrity, and
configurations.
Always test the cloning process in a non-production environment first to minimize issues.
Key Difference: MRP automates recovery and log application on a standby, whereas LSP involves
manual log shipping.
53.There is a GAP of 1000 archives in my standby. How will you resolve it?
A: To resolve a gap of 1000 archive logs in your standby, follow these steps:
1. Identify the Missing Archives:
• On the standby, check the gap:
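A sketch of the gap check itself (the query referenced above):
SELECT * FROM v$archive_gap;
SELECT MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES';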
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
5. Verify Synchronization:
• Ensure the gap is resolved by querying v$managed_standby and checking the
applied logs.
If the missing logs are no longer available, consider performing a point-in-time recovery (PITR)
or an RMAN restore from backup to synchronize.
54.I want my standby to run 4 hours behind the primary server. How can I achieve it?
A: To configure a standby database to run 4 hours behind the primary, you can use the DELAY attribute of the
LOG_ARCHIVE_DEST_n parameter for the standby destination (or the DelayMins broker property). Here's how to do it:
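A minimal sketch on the primary, assuming the standby destination is LOG_ARCHIVE_DEST_2 with the TNS alias standby_tns (DELAY is specified in minutes, so 4 hours = 240; the names are illustrative):
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_tns DELAY=240 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db' SCOPE=BOTH;
-- on the standby, start managed recovery without real-time apply, otherwise the delay is ignored
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;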
This configuration will ensure the standby database lags by 4 hours behind the primary server.
56.The client does not want to spend on active data guard license. What will you
recommend?
A: If the client does not want to spend on an Active Data Guard license, consider the following
alternatives:
1. Physical Standby with Managed Recovery:
• Use a Physical Standby database with Managed Recovery Mode (MRP) for data
redundancy without real-time read-write capabilities.
• This setup can provide disaster recovery with lower costs compared to Active Data
Guard.
2. Data Guard with Delayed Apply:
• Implement a delayed apply mode on a physical standby, where there’s a lag (e.g., 4
hours) in applying logs, reducing potential data corruption risks but still providing near real-time
replication.
3. Oracle GoldenGate:
• If replication of data across databases is required, use Oracle GoldenGate, which
offers real-time data integration and replication without needing the Data Guard license.
4. Regular Backup and Recovery Strategy:
• Implement a robust backup and recovery strategy using RMAN for disaster
recovery, avoiding the need for synchronous data replication.
These solutions reduce costs while ensuring data availability and recovery.
This process provides flexibility but can impact sync between primary and standby.
58.In which conditions you will recommend Oracle Data Guard to client?
A: I would recommend Oracle Data Guard to a client in the following conditions:
1. High Availability (HA): When the client requires a disaster recovery solution with
minimal downtime, ensuring continuous database availability.
2. Data Protection: For protecting against data loss with real-time or near-real-time
replication of changes to a standby database.
3. Offload Reporting: When the client needs to offload reporting and read-only
queries to a standby database without impacting the primary system.
4. Geographical Redundancy: To provide geographically distributed standby databases
for disaster recovery and improved data resilience.
These conditions help maintain business continuity and improve overall system reliability.
59.How will you verify that the standby is in sync with primary?
A: To verify that the standby is in sync with the primary:
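The checks themselves are not listed above; a sketch (run the sequence queries on primary and standby and compare):
SELECT MAX(sequence#) AS last_archived FROM v$archived_log;                      -- on the primary
SELECT MAX(sequence#) AS last_applied  FROM v$archived_log WHERE applied='YES';  -- on the standby
SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag','apply lag');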
1. Pre-Upgrade Activities:
• Review the upgrade documentation for compatibility and requirements.
• Perform health checks on the database and applications.
• Backup the database and configuration files.
• Upgrade testing in a non-production environment.
2. Upgrade Process:
• Install the new Oracle version or Oracle E-Business Suite version.
• Run the Database Pre-Upgrade Information Tool to check prerequisites.
• Apply the required patches and perform database schema upgrades.
• Run DBUA (Database Upgrade Assistant) for database upgrade or use manual
upgrade methods for EBS.
3. Post-Upgrade Activities:
• Verify application functionality.
• Check database performance and logs for issues.
• Reconfigure backups and perform test restores.
• Update monitoring and alerting systems.
The process is carefully planned to minimize downtime and ensure a smooth transition.
1. Pre-Upgrade Preparation:
• Check compatibility of hardware, software, and applications with 12c.
• Backup the 11g database.
• Review Oracle 12c documentation for new features and changes.
• Run the Database Pre-Upgrade Information Tool.
2. Install Oracle 12c:
• Install Oracle 12c software on the target system.
• Configure Oracle 12c environment.
3. Database Upgrade:
• Run DBUA (Database Upgrade Assistant) to upgrade the database, or use manual
upgrade with catctl.pl.
• Apply required patches after the upgrade.
4. Post-Upgrade:
• Verify the database functionality.
• Recompile invalid objects.
• Check for performance issues.
• Update backups and perform a test restore.
Select statement is taking longer than usual. What could be the issue?
A: A slow SELECT statement could be caused by several issues:
1. Inefficient Query Execution Plan: It might be using full table scans instead of
indexes. Analyzing the execution plan with EXPLAIN PLAN can help identify this.
2. Lack of Indexes: Missing or fragmented indexes could slow down query
performance.
3. High Data Volume: Large tables or complex joins may result in longer processing
times.
4. Database Resource Bottlenecks: CPU, memory, or I/O issues could impact query
performance. Check resource usage with v$session or AWR reports.
5. Locking/Concurrency: Other sessions may be locking the table or causing delays.
Use v$lock to diagnose.
• Improved Query Execution: Accurate stats allow the optimizer to choose the best
plan.
• Reduced Resource Usage: Helps avoid inefficient joins, sorts, or full table scans.
• Timely Updates: Regular gathering keeps stats in sync with data growth or changes.
• Automatic vs. Manual: Can use DBMS_STATS.GATHER_SCHEMA_STATS for precise
control or rely on automated tasks.
• Impacts Complex Queries: Beneficial for joins and partitioned tables.
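For example (a sketch; the schema and table names are illustrative):
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_SCHEMA', tabname => 'ORDERS', cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('APP_SCHEMA');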
How many DB writer process can you configure in database? Tell me the command for increasing
the DB writer process.
A: You can configure up to 36 DB Writer (DBWR) processes in Oracle Database, depending on
system needs and workload. The default is 1 DBWR process, but multiple DBWRs can be
configured for large databases or high I/O workloads.
ALTER SYSTEM SET DB_WRITER_PROCESSES = n SCOPE=SPFILE; -- static parameter: replace 'n' with the desired number (1 to 36) and restart the instance
Key Points:
• Increasing DBWR processes improves performance in I/O-intensive environments.
• Set based on hardware and database load.
• Monitor I/O bottlenecks using tools like AWR or ADDM to decide the optimal value.
The optimizer aims to minimize cost by selecting the most efficient execution plan, balancing
resources and performance. Accurate statistics improve cost estimation and query optimization.
Tell about yourself, your background, some experience, and what you have worked
on?
A: Completed B.Tech in Mechanical Engineering.
• Joined TCS and completed 3 years of experience.
• Worked extensively with Oracle DB 19c and EBS 12.2.
• Involved in system administration, database optimization, patching, and
troubleshooting.
• Managed 5 teams, responsible for:
• Creating and managing shift rosters to ensure coverage and balanced workload.
• Worked on associates’ onboarding processes:
• Ensured smooth integration of new team members.
• Developed strong problem-solving and team management skills.
• Focused on improving operational efficiency and team performance.
What are the different modes you can run your adpatch? (IMP)
A: So let us assume that we have 3 application nodes and Non RAC DB server and also the patch is available only in
American English and there are no other languages installed on the application.
Steps for patching (EBS 12.1) would be
Shut down the application on all the 3 nodes by logging into each node separately.
From adadmin put the application into maintenance mode
Take the count of invalid objects by logging in to SQL*Plus as the apps user.
Use adpatch to apply patches to the application.
Again check the count of invalid objects in the database and compare it with the pre-patch invalid count.
From adadmin disable the maintenance mode
Start the application on all the 3 nodes
These phases ensure that the system is properly configured and aligned with Oracle E-Business Suite requirements.
$AD_TOP/bin/adautocfg.sh test
This command will perform the configuration tasks without making any changes to the environment. It
allows you to verify the configuration without applying the changes, ensuring there are no errors before
running it in the actual mode.
41. Which table you will query to check the tablespace space issues? IMP
The BYTES column in DBA_FREE_SPACE and DBA_DATA_FILES.
42. Which table you will query to check the temp tablespace space issues?
dba_temp_files
43. What is temp tablespace? And what is the size of the temp tablespace in your instances?
Temp tablespace is used by many application programs for sorting and other intermediate operations. Its size is between 3
and 10 GB.
6.What are the key differences between the DBA_OBJECTS, DBA_OBJECTS_AE, and
AD_OBJECTS tables?
A: Here are the key differences between DBA_OBJECTS, DBA_OBJECTS_AE, and AD_OBJECTS in Oracle:
1. DBA_OBJECTS:
• Provides metadata about all objects in the database, such as tables, views, indexes, and
triggers.
• Shows objects visible to the DBA, including their status and owner.
2. DBA_OBJECTS_AE:
• An editioned version of DBA_OBJECTS, used in environments supporting Edition-Based
Redefinition (EBR).
• Includes additional editioning-related columns for managing database objects in EBR.
3. AD_OBJECTS:
• Specific to Oracle E-Business Suite.
• Tracks customizations and development objects within the EBS environment.
10.What are the main technological difference between R12.2 and R12.1? (Imp)
A: The main technological differences between R12.2 and R12.1 are:
These commands are commonly used for monitoring, managing, and troubleshooting Oracle RAC
environments in daily DBA operations.
Topic wise Interview questions generated by ChatGPT
=================================================================================
5. Describe how you would move an Oracle E-Business Suite instance from one server to
another, and what issues you might encounter.
A:
Performance Tuning:
These steps will help identify and resolve issues caused by the custom module in Oracle EBS.
Key Points:
• Scope: Profile options can be set at various levels—site, application, responsibility, and user
—to provide granularity in their application.
• Purpose: They control system-wide settings like currency, session timeouts, and security
preferences, as well as more specific configurations for particular modules or tasks.
• Common Examples:
• ICX: SESSION TIMEOUT: Specifies the session timeout duration.
• AP: DEFAULT INVOICE PAYMENT TERMS: Sets the default payment terms for Accounts
Payable.
• FND: USER ID: Defines the user’s identifier in the system.
Usage:
• Configuration: Profile options can be set using the System Administrator responsibility
through the Profile Options form.
• Customization: They are often used to control features, change system behavior, or adapt
EBS to meet specific business needs without altering code.
3. Your database is experiencing high disk space usage. How would you
investigate and resolve this issue?
A: To investigate and resolve high disk space usage in an Oracle database, follow these steps:
1. Check Tablespaces: Use DBA_FREE_SPACE and DBA_DATA_FILES views to check for
tablespaces that are consuming excessive space or have little free space.
2. Identify Large Segments: Query the DBA_SEGMENTS view to identify large tables, indexes,
or other segments consuming space.
5. Move Data to Larger Storage: If needed, move large objects (LOBs) or partitions to other
tablespaces with more available space.
6. Add Datafiles: If tablespace growth is unavoidable, add new datafiles (see the sketch after this list):
9. Monitor Usage Regularly: Set up regular monitoring for disk usage to avoid future issues.
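A sketch for step 6 above (the tablespace name, path, and sizes are illustrative):
ALTER TABLESPACE users ADD DATAFILE '/u02/oradata/ORCL/users02.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32G;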
2. During the cloning process, the system is stuck at “Running AutoConfig”. How would
you troubleshoot this?
1. Check Log Files:
• Review the clone log files: $APPL_TOP/admin/<sid>/logs and
$ORACLE_HOME/appsutil/log.
• Look for any errors or timeouts related to AutoConfig.
2. Check Resource Usage:
• Verify CPU, memory, and disk usage. If resources are low, consider increasing resources or
freeing up space.
3. Validate Environment Variables:
• Ensure all environment variables ($ORACLE_HOME, $APPL_TOP, $PATH, etc.) are correctly
set.
4. Check for Locking Issues:
• Run ps -ef | grep config to check for any stuck or long-running processes related to
AutoConfig.
5. Check Database and Application Services:
• Ensure the database is up and accessible.
• Check if the application services are running (adstrtal.sh).
6. Re-run AutoConfig Manually:
• Run AutoConfig manually to check for errors:
• perl $ORACLE_HOME/appsutil/clone/bin/adautocfg.pl
7. Review Configuration Files:
• Verify the appsutil configuration files (e.g., adbldxml, applprod.txt) for any discrepancies.
8. Run adctrl for Debugging:
• If AutoConfig is stuck during a specific process, use adctrl to identify the issue step by step.
2. You notice that the archive logs are not being applied to the standby database. How
would you troubleshoot this?
A: 1. Check Archive Log Shipping:
• Ensure archive logs are being generated on the primary database by verifying archive log
list and alert.log.
2. Verify Log Transport Services:
• Check the Data Guard configuration: show parameter log_archive_dest_2 and verify if the
destination is correct.
3. Verify Network Connectivity:
• Ensure the primary and standby databases can communicate over the network (check
firewall, DNS, etc.).
4. Check Standby Log Apply Services:
• Verify that the managed recovery process (MRP) is running on the standby:
• select process, status from v$managed_standby;
• If MRP is not running, start it with:
• alter database recover managed standby database using current logfile disconnect from
session;
5. Check Archive Log Status on Standby:
• On the standby, verify the archive log application with:
• select sequence#, applied from v$archived_log where destination = 'STANDBY';
• Ensure the logs are not missing or corrupted.
6. Check for Errors:
• Review the alert.log on both primary and standby databases for any errors related to
archive log transport or apply.
7. Resync Archive Logs (if necessary):
• If logs are missing or corrupted, manually copy them from primary to standby using scp or
similar tools and register them:
• ALTER DATABASE REGISTER LOGFILE '<path_to_copied_archive_log>';
8. Verify Data Guard Configuration:
• Ensure the Data Guard configuration is valid using dgmgrl or show parameter standby and
resolve any configuration issues.
1. How would you configure WebLogic Server for high availability in an Oracle EBS
environment?
A: To configure WebLogic Server for high availability (HA) in an Oracle EBS environment:
1. Cluster Creation:
• Create a WebLogic Server cluster using the WebLogic Administration Console.
• Ensure the cluster has at least two managed servers to distribute the load.
2. Deploy EBS Application:
• Deploy Oracle E-Business Suite applications (like Oracle Applications or Forms) on all the
managed servers in the cluster.
3. Configure Node Manager:
• Set up Node Manager for automatic server restarts in case of failure.
• Configure Node Manager in config.xml and ensure it is running on all servers.
4. Load Balancer Configuration:
• Configure an external load balancer (like Oracle Traffic Director) to distribute HTTP
requests across the WebLogic Server cluster nodes.
5. Session Persistence:
• Enable session persistence for WebLogic to maintain session state in case of server failover.
• Configure “replicated” or “database” session persistence under WebLogic’s “Cluster”
settings.
6. Database Connections:
• Ensure all managed servers in the cluster point to the same Oracle database for
consistency.
7. Health Monitoring:
• Configure WebLogic Server’s health monitoring and alerting to detect server failures and
trigger failover.
8. Testing:
• Test failover and load balancing to ensure proper high availability functionality.
2. If an EBS application server is showing intermittent crashes, how would you
investigate and resolve the issue?
A:
To investigate and resolve intermittent crashes on an EBS application server:
1. Check Logs:
• Review the apache, opmn, and WebLogic logs for error messages and stack traces related
to the crash.
• Check for appltop and dms logs in $APPL_LOG and $INST_TOP.
2. Resource Utilization:
• Monitor CPU, memory, and disk usage to ensure the server is not resource-starved. Use
tools like top, vmstat, or sar.
3. Application Server Configuration:
• Verify WebLogic and Oracle HTTP Server (OHS) configurations to ensure they are properly
tuned for resource management (e.g., heap size, thread pool settings).
4. Check for Patches:
• Ensure that the latest Oracle EBS patches and security updates are applied.
5. Database Connectivity:
• Verify database connections and investigate any timeout issues or database-related errors
that may cause crashes.
6. Intermittent Issues:
• Check if there is any pattern to the crashes (e.g., time of day, specific actions).
7. Review External Integrations:
• Investigate if third-party integrations or customizations are causing instability.
8. Test with Minimal Configuration:
• Reduce the configuration to the minimum required services and test for stability.
9. Restart Services:
• If necessary, restart application services (opmn, WebLogic, etc.) and test the system
behavior.
10. Raise SR:
• If the issue persists, raise an Oracle Service Request (SR) for further assistance.
Upgrades:
1. You need to upgrade Oracle EBS from version 12.1 to 12.2. What steps would you follow?
A: To upgrade Oracle EBS from version 12.1 to 12.2, follow these precise steps:
1. Pre-Upgrade Preparation:
• Review the Oracle EBS 12.2 upgrade documentation.
• Ensure system meets hardware, OS, and database prerequisites for 12.2.
• Backup the existing system (database, application tier, and configurations).
• Apply all recommended patches to the current 12.1 instance.
2. Run Pre-Upgrade Checks:
• Use the adutconf.sql script to check for any existing issues.
• Run pre-upgrade and upgrade readiness checks using the provided Oracle scripts.
3. Install the Oracle EBS 12.2 File System:
• Use Rapid Install (rapidwiz) to lay down the 12.2 dual file systems and technology stack.
• Apply the latest startCD, AD, and TXK updates required for the 12.2 upgrade.
4. Clone the Existing 12.1 Instance (if upgrading a copy rather than the source system):
• Clone the 12.1 instance to the upgrade environment using adpreclone.pl and adcfgclone.pl.
5. Upgrade the Database:
• Upgrade the Oracle database to a version certified for EBS 12.2 (using DBUA or the manual
upgrade scripts such as catctl.pl/dbupgrade).
• Run adgrants.sql to grant the privileges required by the APPS schema.
6. Run the EBS Upgrade:
• Apply the 12.2.0 upgrade driver with adpatch in downtime mode, following the upgrade guide.
• Enable Online Patching (run the enablement readiness reports and the enablement patch).
• Apply the latest 12.2.x release update pack (RUP) and technology patches using adop.
7. Post-Upgrade Steps:
• Perform functional and regression testing to ensure everything is working as expected.
• Recompile any custom code, reports, or forms.
• Ensure database backups are taken after upgrade.
• Monitor logs for any issues.
8. Verify and Finalize:
• Perform a final validation of the system and services.
• Clean up obsolete files and logs.
2. What steps would you take to prepare for a smooth Oracle database upgrade to 19c?
A: To prepare for a smooth Oracle database upgrade from 12c to 19c:
1. Review Oracle Documentation: Study the Oracle 19c Release Notes and Database Upgrade Guide
for specific 12c to 19c upgrade requirements.
2. Check Compatibility: Confirm that your current 12c version is eligible for a direct upgrade to 19c
by referring to the Oracle Compatibility Matrix.
3. Pre-Upgrade Check: Use preupgrade.jar to identify potential issues and deprecated features and to
recommend actions (see the example after this list).
4. Backup Database: Perform a full database backup including data files, control files, and archive
logs.
5. Ensure System Requirements: Verify that your hardware, OS, and Oracle software patches are
compatible with 19c.
6. Apply Latest Patches: Apply the latest patches for Oracle 12c before starting the upgrade process.
7. Check Database Parameters: Review and adjust database initialization parameters as needed,
especially those that are changed or deprecated in 19c.
8. Test Upgrade in Non-Production: Clone the database and test the upgrade process in a non-
production environment to ensure compatibility with applications.
9. Plan for Deprecated Features: Address any deprecated or removed features in 19c that may affect
your current environment.
10. Optimize Database Performance: Perform tasks such as gathering optimizer statistics and cleaning
up obsolete data before the upgrade.
11. Plan Downtime: Schedule the upgrade during a maintenance window with minimal user impact.
12. Upgrade Method: Use DBUA (Database Upgrade Assistant) or perform a manual upgrade based on
your preference.
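For example, a minimal pre-upgrade check run might look like this (the 19c home path is a placeholder;
the preupgrade.jar shipped with the 19c software is run against the live 12c database):
# Run from the 12c environment, pointing at the preupgrade.jar from the new 19c home
$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT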
These practical questions test not just knowledge but also hands-on experience in resolving
common challenges that Oracle Apps DBAs face in daily operations.
----------------------------------------------------------------------------------------------------------------------------------
Here are more questions that might be asked in an Oracle Apps DBA interview:
1. How would you monitor and manage Oracle database growth in EBS?
A: Monitoring and Managing Oracle Database Growth in EBS:
• Use DBA views like DBA_TABLESPACE_USAGE_METRICS and DBA_SEGMENTS to
monitor space usage.
• Set alerts for tablespace thresholds.
• Implement partitioning for large tables and archiving for historical data.
• Regularly perform database housekeeping tasks such as purging old data or
archiving.
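For example, a quick check of tablespace usage against a hypothetical 85% threshold:
sqlplus / as sysdba
-- Tablespaces above an illustrative 85% usage threshold
SELECT tablespace_name, ROUND(used_percent, 2) AS pct_used
FROM dba_tablespace_usage_metrics
WHERE used_percent > 85
ORDER BY used_percent DESC;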
3. What steps would you follow to resolve a corruption issue in Oracle database files?
A: Steps to Resolve Corruption in Oracle Database Files:
• Check alert logs for errors.
• Use RMAN to validate and restore corrupted files.
• If corrupt data is identified, use DBMS_REPAIR or flashback.
• In extreme cases, restore from the most recent backup.
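A minimal RMAN sketch for detecting and repairing block corruption (assumes the database is in
ARCHIVELOG mode and backups exist):
rman target /
RMAN> VALIDATE DATABASE;              -- populates V$DATABASE_BLOCK_CORRUPTION with any corrupt blocks
RMAN> RECOVER CORRUPTION LIST;        -- repairs every block listed in that view from backups/redo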
5. Can you describe the process of migrating an Oracle database from one server
to another?
A: Migrating an Oracle Database from One Server to Another:
• Perform a full database export using expdp (Data Pump), or take an RMAN backup (optionally
registered in a recovery catalog).
• Set up a new Oracle home and configure init.ora and tnsnames.ora on the new
server.
• Use RMAN to back up and restore the database to the new server.
• After migration, validate database integrity and configure necessary network
settings.
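A hedged sketch of the Data Pump option (the directory object, credentials, and file names are
placeholders):
# On the source server: full export
expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp LOGFILE=full_exp.log FULL=Y
# Copy the dump files to the target server, then import
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp LOGFILE=full_imp.log FULL=Y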
2. What is the purpose of AutoConfig in Oracle EBS, and how do you use it?
A: Purpose of AutoConfig in Oracle EBS:
AutoConfig is used to manage and configure Oracle EBS application and database tier files.
It centralizes configuration information in .xml files and regenerates configuration files
when needed.
Usage: Update the context (.xml) file, then run AutoConfig on the application tier and on the database
tier to regenerate and apply the configuration files. Use adtmplreport.sh to identify the template behind
a generated configuration file.
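A minimal sketch of running AutoConfig (paths vary by release; <CONTEXT_NAME> is a placeholder):
# Application tier (run after sourcing the EBS environment file)
$ADMIN_SCRIPTS_HOME/adautocfg.sh
# Database tier
$ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh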
5. How would you perform cloning in an Oracle RAC (Real Application Clusters)
environment?
A: Performing Cloning in an Oracle RAC Environment:
• Source Node: Back up the database using RMAN.
• Target Node: Set up the cluster environment and install Oracle binaries.
• Use Rapid Clone to copy application tier files and restore the database to the target.
• Reconfigure RAC components (e.g., services, VIPs) and run autoconfig on both tiers.
• Verify cloned environment functionality.
3. What are the key differences between WebLogic 12c and earlier versions in terms of
performance and configuration?
A: Key Differences Between WebLogic 12c and Earlier Versions:
Performance: Enhanced multitenancy, improved JDBC performance, and better integration with
Oracle DB 12c features.
Configuration: Simplified deployment with WebLogic Deployment Tool (WDT) and support for
REST APIs.
Support: Better compatibility with cloud and newer standards like Java EE 7.
Performance Tuning:
1. How do you analyze performance using AWR and ASH reports in Oracle?
A: Analyzing Performance Using AWR and ASH Reports:
• Use AWR to review top SQLs, wait events, and resource usage.
• Focus on queries with high elapsed times and tune execution plans.
• Use ASH to analyze active sessions during peak load for bottlenecks.
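For example, the standard report scripts can be run from SQL*Plus (snapshot IDs and time windows are
prompted for interactively):
sqlplus / as sysdba
@?/rdbms/admin/awrrpt.sql    -- AWR report for a chosen snapshot interval
@?/rdbms/admin/ashrpt.sql    -- ASH report for a specific time window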
2. Explain how you would perform Oracle database tuning in an EBS environment for
better user experience.
A: Database Tuning in Oracle EBS:
• Optimize SQL queries using execution plans and gather statistics.
• Adjust SGA/PGA and redo log sizes based on workload.
• Implement partitioning for large tables and balance I/O distribution.
• Use AWR/ADDM recommendations for continuous improvement.
3. What steps do you take when you notice high CPU utilization in Oracle EBS? How
would you identify and resolve the issue?
A: Addressing High CPU Utilization in Oracle EBS:
• Identify CPU-intensive sessions using top or AWR.
• Analyze SQLs with high execution time and optimize plans.
• Kill problematic sessions if needed and redistribute workload.
• Scale resources or tune instance parameters for prevention.
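A sketch of finding the top CPU-consuming SQL in the shared pool (FETCH FIRST assumes 12c or later; on
older releases use a ROWNUM filter instead):
sqlplus / as sysdba
SELECT sql_id, ROUND(cpu_time/1e6, 1) AS cpu_seconds, executions
FROM v$sqlstats
ORDER BY cpu_time DESC
FETCH FIRST 10 ROWS ONLY;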
1. How do you configure and troubleshoot Single Sign-On (SSO) with Oracle EBS?
A: Configuring and Troubleshooting SSO with Oracle EBS:
• Configure SSO Profile Options in EBS and integrate with OAM.
• Synchronize users via OID or LDAP.
• Verify SSO URL redirections and authentication flows.
• Troubleshoot using logs (oam.log, Apache logs) and validate certificate configurations.
2. What are the key considerations when configuring Oracle Internet Directory (OID) and Oracle Access
Manager (OAM) in EBS?
A: Key Considerations for OID and OAM in EBS:
• Ensure OID schema synchronization with EBS users.
• Configure SSL/TLS for secure communication.
• Validate authentication policies and session management in OAM.
• Test high availability for both OID and OAM.
3. How do you integrate Oracle EBS with a Load Balancer for scalability and high availability?
A: Integrating Oracle EBS with a Load Balancer:
• Configure the load balancer for sticky sessions and SSL offloading.
• Define VIPs for EBS tiers and ensure redundancy.
• Update EBS context files with load balancer details.
• Test failover and performance under load scenarios.
2. How do you plan and execute rolling patching in a multi-node EBS environment?
A: Planning and Executing Rolling Patching in Multi-Node EBS:
• Plan: Identify patch compatibility with rolling mode and schedule downtime if needed.
• Execute: Patch one node at a time while keeping others operational to minimize downtime.
• Use tools like adop in online patching mode (R12.2.x).
• Validate the environment after applying patches on all nodes.
3. Explain the differences between a major upgrade and a patch in Oracle EBS.
A: Differences Between a Major Upgrade and a Patch in Oracle EBS:
• Major Upgrade: Upgrades the entire EBS version (e.g., R12.1 to R12.2), includes new
features, and often requires significant downtime.
• Patch: Fixes specific bugs, adds minor enhancements, or updates compliance without
upgrading the overall version.
1. What are the backup strategies you follow for Oracle EBS and databases?
A: Backup Strategies for Oracle EBS and Databases:
• Full Backups: Perform regular RMAN backups for the database.
• Incremental Backups: Schedule daily backups of changed data to save space.
• Apps Tier Backup: Use file system backups for application tier files, including configuration
files.
• Enable Archive Logs for point-in-time recovery.
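A minimal RMAN sketch of this strategy (retention and schedules are illustrative):
rman target /
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;   -- weekly full (level 0) backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;   -- daily incremental backup
RMAN> DELETE NOPROMPT OBSOLETE;                              -- enforce the configured retention policy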
1. Can you explain the key components of the Oracle E-Business Suite architecture?
A: Key Components of Oracle E-Business Suite Architecture:
• Application Tier: WebLogic Server, Forms, Reports, and OA Framework.
• Database Tier: Oracle Database for data storage.
• Concurrent Processing: Handles background jobs and reports.
• Middle Tier: Interfaces between the application and database, including web servers and
load balancers.
• File System: Stores configurations, logs, and files for EBS.
2. How do you handle patching in Oracle EBS? Walk us through the steps.
A: Handling Patching in Oracle EBS:
• Prepare: Backup databases and applications, verify patch prerequisites.
• Download: Get the patch from Oracle Support.
• Apply: Use ADOP for R12.2 or Rapid Install for older versions to apply patches in
online/offline mode.
• Post-patch: Perform necessary configurations and testing.
6. What are the steps you follow to clone an Oracle E-Business Suite instance?
A: Steps to Clone an Oracle E-Business Suite Instance:
• Prepare: Take full backups of both the database and applications.
• Clone DB: Use RMAN to create a duplicate database.
• Clone Apps Tier: Use Rapid Clone to clone the application tier and update context files.
• Post-cloning: Run AutoConfig and verify the instance after cloning.
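A hedged outline of the Rapid Clone commands (apps tier shown; the DB tier uses the equivalents under
$ORACLE_HOME/appsutil):
# On the source application tier: stage the clone
perl $ADMIN_SCRIPTS_HOME/adpreclone.pl appsTier
# Copy the file system to the target, then configure it there
perl $COMMON_TOP/clone/bin/adcfgclone.pl appsTier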
Performance Tuning:
3. Can you explain how to perform EBS performance tuning, particularly with database and middleware
components?
A: EBS Performance Tuning with Database and Middleware:
• Database Tuning: Optimize SQL queries, indexes, and partitioning. Adjust SGA/PGA and
monitor undo tablespaces for efficient data handling.
• Middleware Tuning: Tune WebLogic and Forms Server JVM settings, optimize thread pools,
and configure load balancing for better resource distribution.
• Caching and Concurrency: Improve performance through better cache management and
parallel processing configuration.
Thread pool:
A thread pool is a collection of pre-created, reusable threads that handle multiple tasks concurrently. It
improves performance by reducing the overhead of creating and destroying threads for each task,
managing task execution efficiently by reusing available threads.
A thread is the smallest unit of execution in a process. It consists of an execution context, such as a
program counter and stack, and shares memory and resources with other threads in the same process.
Threads enable concurrent execution, improving performance and responsiveness in multi-tasking
environments.
EBS Advanced Configurations:
General Troubleshooting:
1. What steps would you take to recover an Oracle EBS instance from a crash or failure?
A: Recovering an Oracle EBS Instance from a Crash or Failure:
• Restore Database: Use RMAN to restore the database from the last backup or perform
point-in-time recovery if needed.
• Restore Application Tier: Restore application files from backups and run AutoConfig to
reconfigure.
• Check Logs: Review EBS and database logs to identify and resolve the cause of the crash.
• Test: Validate the application and database after recovery to ensure everything is
functioning correctly.
——————————————————————————-
Here’s a list of potential interview questions for an Oracle EBS 12.2.9 Apps DBA role based on real-world
scenarios:
2. What are the key features introduced in Oracle EBS 12.2.x compared to earlier versions?
A: Key Features Introduced in Oracle EBS 12.2.x:
• Online Patching (ADOP): Introduces the ability to apply patches while the system is online,
reducing downtime.
• Enhanced WebLogic Integration: Improved integration with WebLogic Server for scalability
and high availability.
• Advanced Security Features: Enhanced security with improved authentication methods
(e.g., SSO, OAM, OID integration).
• Mobile Applications: Expanded support for mobile access and mobile-friendly interfaces.
• Improved User Interface: Modernized and more responsive web interfaces for a better
user experience.
3. What is Online Patching (ADOP) in Oracle EBS 12.2, and how does it work?
A: Online Patching (ADOP) in Oracle EBS 12.2:
ADOP (AD Online Patching) allows patches to be applied to Oracle EBS 12.2 systems without taking the
application offline, minimizing downtime. It uses a dual file system approach, where one file system is
the run edition (active) and the other is the patch edition (inactive); the two are swapped at cutover.
ADOP Patch Phases with an Example Patch:
1. Prepare:
• Synchronize the file systems and prepare the patch edition for patching.
adop phase=prepare
2. Apply:
• Apply the patch to the patch file system (the patch number is passed only to the apply phase).
adop phase=apply patches=12345678
3. Finalize:
• Perform the final steps (e.g., compiling invalid objects) so the system is ready for cutover.
adop phase=finalize
4. Cutover to the Patched File System:
• Switch over so the patch file system becomes the new run file system; services are briefly
restarted during this phase.
adop phase=cutover
5. Cleanup:
• Remove obsolete editions and objects left over from the patching cycle.
adop phase=cleanup
6. Abort (if necessary):
• If issues are found before cutover, abandon the patching cycle and stay on the current run edition.
adop phase=abort
Explanation:
• Prepare: Prepares and synchronizes the patch file system.
• Apply: Applies the patch to the patch file system.
• Finalize: Completes patching tasks and makes the patch ready for cutover.
• Cutover: Switches from the old run file system to the newly patched file system.
• Cleanup: Drops obsolete objects after a successful cycle.
• Abort: If any issues arise before cutover, the cycle can be aborted and the environment reverts to
the current run edition.
ADOP (Online Patching):
These commands help manage the patching lifecycle in a non-disruptive manner.
9. How do you manage backups for Oracle EBS? Which tools or strategies do you use?
A: Managing Backups for Oracle EBS:
• Database Backup: Use RMAN (Recovery Manager) for regular full and incremental backups
of the Oracle database.
• Application Tier Backup: Use file-level backups (e.g., rsync or tar) for application files,
configuration files, and logs.
• Tools: Oracle Data Guard for disaster recovery, RMAN for database-level backup, and standard
OS/file-system backup utilities for the application tier files.
• Backup Strategy: Implement regular full backups weekly and incremental backups daily
for both database and application tiers, with off-site storage for disaster recovery.
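A simple illustration of an application tier file system backup (the paths are placeholders; services are
normally stopped or quiesced first):
tar -czf /backup/ebs_apps_fs_$(date +%Y%m%d).tar.gz /u01/app/EBS/fs1 /u01/app/EBS/fs2 /u01/app/EBS/fs_ne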
12. How do you perform a version upgrade (e.g., from 12.2.8 to 12.2.9)? (In 12.2.x this is not a full
upgrade but essentially a patching exercise.)
A: Performing a Version Upgrade (e.g., from 12.2.8 to 12.2.9):
• Pre-Upgrade Steps:
• Backup both the application and database tiers.
• Ensure the current environment is stable and meets the prerequisites for the target release
(run ETCC to confirm the required technology patches are in place).
• Download the 12.2.9 release update pack (RUP) and its prerequisite patches from Oracle Support.
• Apply the Upgrade Patch:
• Apply the RUP using an adop online patching cycle (prepare, apply, finalize, cutover, cleanup).
• Post-Upgrade Steps:
• Run AutoConfig on both the database and application tiers.
• Validate the upgrade by running tests to confirm the new version is functioning correctly.
• Check the logs for errors and resolve any issues encountered during the upgrade.
Database Administration:
Security:
Troubleshooting and Monitoring:
23. What are your steps for troubleshooting forms or OAF page issues?
A: Troubleshooting Forms or OAF Page Issues:
• Logs: Examine Forms logs and OAF logs for errors or stack traces.
• Debugging: Enable debugging in the form or page to track functionality and pinpoint
issues.
• Configuration: Verify if there are any configuration issues with personalization or access
controls.
Miscellaneous:
26. What is the ETCC (EBS Technology Codelevel Checker) utility, and when is it used?
A: ETCC (EBS Technology Codelevel Checker) Utility:
• ETCC verifies that the database tier and the application (middle) tier have all required technology
patches in place before you apply EBS patches or perform upgrades.
• It checks system readiness and identifies missing bugfixes or incorrect code levels that could
hinder the upgrade or patching process.
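The checker is typically run once per tier, as sketched below (the scripts are executed from the directory
where the ETCC patch was unzipped):
# Database tier
./checkDBpatch.sh
# Application (middle) tier
./checkMTpatch.sh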
Basic Questions:
2. What is the difference between a table and a view? When would you use a view?
A: Table vs. View:
• Table: A database object that stores data physically.
• View: A virtual table based on the result of a query. It doesn’t store data but displays it
from underlying tables.
• Use: Use a view when you need to simplify complex queries, aggregate data, or provide
restricted access to certain data.
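A small illustration with a hypothetical EMPLOYEES table (a view exposing only non-sensitive columns):
CREATE VIEW emp_public AS
  SELECT employee_id, first_name, last_name, department_id
  FROM employees;        -- the data stays in EMPLOYEES; the view stores no rows of its own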
4. What is an index? Explain the different types of indexes in Oracle.
A: Index:
• Index: A database object that improves the speed of data retrieval operations.
• Types:
• B-tree Index: Default, used for equality and range queries.
• Bitmap Index: Used for columns with low cardinality.
• Function-Based Index: Built on expressions.
• Others: Reverse key indexes, and index-organized tables (IOTs), which store table data in index
order (Oracle has no separate "clustered index" type).
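Illustrative DDL for the common index types (table and column names are hypothetical):
CREATE INDEX emp_name_ix ON employees (last_name);              -- B-tree (default)
CREATE BITMAP INDEX emp_status_bix ON employees (status);       -- low-cardinality column
CREATE INDEX emp_uname_ix ON employees (UPPER(last_name));      -- function-based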
5. What are materialized views, and how do they differ from regular views? When would you use them
in Oracle Apps?
A: Materialized Views:
• Materialized View: Stores query results physically and can be refreshed periodically.
• Difference: Unlike regular views, materialized views store data, making them more efficient
for complex aggregations.
• Use: Use in Oracle Apps for performance optimization where frequent queries retrieve
large datasets.
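A small sketch with a hypothetical SALES table (a fast/incremental refresh would additionally require a
materialized view log on the base table):
CREATE MATERIALIZED VIEW sales_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  AS SELECT product_id, SUM(amount) AS total_amount
     FROM sales
     GROUP BY product_id;
-- Refresh on demand, e.g. from a scheduler job ('C' = complete refresh)
EXEC DBMS_MVIEW.REFRESH('SALES_MV', 'C');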
6. What are triggers, and how are they implemented in Oracle? Can they be used in Oracle Apps for
customization?
A: Triggers:
• Triggers: A set of SQL statements automatically executed when certain events occur.
• Implementation: Triggers are defined on tables and views, and can be used for data
validation, auditing, or enforcing business rules.
• Customization: In Oracle Apps, triggers can be used for custom business logic.
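A minimal auditing example using hypothetical tables:
CREATE OR REPLACE TRIGGER emp_salary_audit_trg
  AFTER UPDATE OF salary ON employees
  FOR EACH ROW
BEGIN
  -- record the old and new salary whenever it changes
  INSERT INTO emp_salary_audit (employee_id, old_salary, new_salary, changed_on)
  VALUES (:OLD.employee_id, :OLD.salary, :NEW.salary, SYSDATE);
END;
/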
7. Explain the types of constraints available in Oracle (e.g., primary key, foreign key). How do they help
maintain data integrity?
A: Constraints:
• Primary Key: Uniquely identifies a row in a table.
• Foreign Key: Ensures referential integrity between tables.
• Unique: Ensures all values in a column are unique.
• Check: Enforces domain integrity.
• Not Null: Ensures a column cannot have NULL values.
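A compact example showing the constraint types on a hypothetical ORDERS table (assumes a CUSTOMERS
table already exists):
CREATE TABLE orders (
  order_id    NUMBER        PRIMARY KEY,                                  -- uniquely identifies each row
  customer_id NUMBER        NOT NULL REFERENCES customers (customer_id),  -- foreign key, not null
  order_ref   VARCHAR2(30)  UNIQUE,                                       -- no duplicate references
  status      VARCHAR2(10)  CHECK (status IN ('NEW','SHIPPED','CLOSED'))  -- domain integrity
);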
10. How do you perform partition management in Oracle? Can partitioning impact Oracle Apps
performance?
A: Partition Management:
• Management: Involves creating, dropping, or merging partitions.
• Impact: Proper partitioning improves query performance and load balancing but may
require extra management effort in Oracle Apps.
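Illustrative partition maintenance on a hypothetical range-partitioned SALES table:
-- Add a partition for the next period (bound must be above the current highest)
ALTER TABLE sales ADD PARTITION sales_2025 VALUES LESS THAN (DATE '2026-01-01');
-- Drop an old partition while keeping global indexes usable
ALTER TABLE sales DROP PARTITION sales_2019 UPDATE GLOBAL INDEXES;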
12. What are object views, and how do they differ from regular views?
A: Object Views:
• Object Views: A view that provides access to object types in an object-oriented way.
• Difference: Unlike regular views, object views can represent an object type with methods
and attributes.
13. Explain the concept of inheritance in Oracle Objects. Is it used in Oracle Apps?
A: Inheritance in Oracle Objects:
• Inheritance: Allows one object type to inherit the attributes and methods of another.
• Use in Oracle Apps: Object inheritance lets one type extend and specialize another, but it is rarely
used in Oracle Apps, which is built mainly on relational tables and PL/SQL.
14. How do you manage invalid database objects? What steps do you take to troubleshoot and
recompile them?
A: Managing Invalid Database Objects:
• Steps: Use utlrp.sql to recompile invalid objects, check for dependency issues, and resolve
any errors in the object code.
• Troubleshoot: Use DBMS_UTILITY.compile_schema for recompiling individual schemas.
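For example:
sqlplus / as sysdba
@?/rdbms/admin/utlrp.sql                                                   -- recompile all invalid objects
EXEC DBMS_UTILITY.compile_schema(schema => 'APPS', compile_all => FALSE);  -- recompile one schema only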
15. How would you check the status of database objects? Which Oracle dictionary views help in
monitoring them?
A: Check Status of Database Objects:
• Use views like DBA_OBJECTS and USER_OBJECTS to monitor the status (valid or invalid) of
database objects.
• Views: ALL_OBJECTS, DBA_OBJECTS.
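For example, a summary of invalid objects by owner and type:
SELECT owner, object_type, COUNT(*) AS invalid_count
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner, object_type
ORDER BY invalid_count DESC;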
16. How do you move objects between schemas or databases in Oracle Apps?
A: Moving Objects Between Schemas/Databases:
• Moving Objects: Use tools like Data Pump (expdp/impdp) or Database Links for cross-
database migration.
• Between Schemas: Use DBMS_METADATA to extract and move objects.
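A hedged Data Pump sketch for moving a single schema (the schema, directory, and file names are
placeholders):
expdp system SCHEMAS=xxcust DIRECTORY=DATA_PUMP_DIR DUMPFILE=xxcust.dmp LOGFILE=xxcust_exp.log
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=xxcust.dmp REMAP_SCHEMA=xxcust:xxcust_new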
17. How do you monitor and manage indexes in large Oracle Apps environments?
A: Monitor and Manage Indexes:
• Monitor: Use DBA_INDEXES and V$OBJECT_USAGE to track index usage.
• Manage: Rebuild or drop unused indexes using ALTER INDEX or DROP INDEX.
18. What steps would you take to optimize a materialized view for better performance in an EBS
environment?
A: Optimizing Materialized Views:
• Optimization: Refresh materialized views during off-peak hours, use incremental refresh,
and partition materialized views for better performance.
• EBS Use: Optimize materialized views to reduce query time and improve performance in
reporting or data analysis jobs.
19. What is the role of statistics in maintaining database object performance? How do you gather and
analyze statistics for objects?
A: Role of Statistics in Database Object Performance:
• Role: Statistics help the Oracle optimizer choose the best execution plan, affecting query
performance.
• Gathering: Use the DBMS_STATS.GATHER_TABLE_STATS or
DBMS_STATS.GATHER_SCHEMA_STATS procedures to collect statistics.
• Analyze: Review statistics using DBA_TAB_STATS_HISTORY, DBA_TAB_COLUMNS, and
DBMS_STATS views. Regularly gather stats after major data changes to ensure efficient query plans.
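For example (the table name is a placeholder; in EBS the supported route is normally FND_STATS or the
Gather Schema Statistics concurrent program, but the underlying calls look like this):
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPS', tabname => 'XX_CUSTOM_TABLE', cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPS', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);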
20. How would you handle fragmentation in Oracle tables and indexes? What tools or commands do
you use?
A: Handling Fragmentation in Oracle Tables and Indexes:
• Tables: Use ALTER TABLE MOVE to defragment tables. Rebuild partitions if applicable.
• Indexes: Use ALTER INDEX REBUILD to rebuild fragmented indexes.
• Tools/Commands: Use DBMS_SPACE to analyze free space and segment fragmentation, and
ALTER TABLE ... SHRINK SPACE (in ASSM tablespaces) or MOVE/REBUILD to reclaim space.
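A small sketch (schema and object names are placeholders; a table MOVE leaves its indexes UNUSABLE
until they are rebuilt):
ALTER TABLE apps.xx_custom_table MOVE;          -- rebuilds the table segment, removing fragmentation
ALTER INDEX apps.xx_custom_ix REBUILD ONLINE;   -- rebuild dependent indexes after the move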
21. You find that a key table in Oracle Apps has a high number of chained rows. How do you resolve this
issue?
A: Resolving Chained Rows in a Key Table:
• Cause: Chained rows typically occur when rows are too large for a single data block.
• Resolution: Use ALTER TABLE MOVE to move the table to a new segment, eliminating
chained rows. Rebuild indexes on the affected table afterward to improve performance.
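An illustrative check before the fix (requires the CHAINED_ROWS table created by utlchain.sql; object
names are placeholders):
@?/rdbms/admin/utlchain.sql
ANALYZE TABLE apps.xx_custom_table LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows;              -- how many chained/migrated rows exist
ALTER TABLE apps.xx_custom_table MOVE;          -- then rebuild the table's indexes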
23. What steps do you follow for schema upgrades in Oracle EBS? How do you handle dependent
objects?
A: Steps for Schema Upgrades in Oracle EBS:
• Steps:
1. Backup: Take a full backup of the database and EBS environment.
2. Run AD Administration: Use AD Administration utilities to apply schema updates and
patches.
3. Database Upgrade: If a database upgrade is part of the change, run the standard upgrade scripts
(e.g., catctl.pl/dbupgrade) against the database.
4. Post-Upgrade: Run AutoConfig, recompile invalid objects, and verify the application tier.
• Handling Dependent Objects: Ensure dependent objects like triggers, stored procedures,
and indexes are recompiled post-upgrade. Use utlrp.sql to recompile invalid objects.
===================================================================================
3. Ping vs Tnsping?
• Ping: Tests network connectivity to a server or host by sending ICMP packets. Does not
verify database connectivity.
• Tnsping: Checks Oracle Net connectivity by validating a TNS entry in the tnsnames.ora file.
It ensures the listener is accessible for the database.
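For example (the host name and TNS alias are placeholders):
ping dbhost01.example.com       # checks basic network reachability only
tnsping PRODDB                  # resolves the PRODDB alias and checks that the listener responds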
When an application server starts, services such as OACore (Java-based pages), Forms (Oracle Forms),
Web (Apache server), and Concurrent Processing (handles batch jobs) are initiated. These services are
essential for processing user requests and running concurrent programs.
OACORE manages Java-based services in Oracle EBS. It handles OAF (Oracle Application Framework)
pages, login functionalities, and concurrent request submissions. It runs within the WebLogic Server and is
critical for rendering web pages.
Purging removes obsolete or redundant data from the database to free up storage and improve
performance. In Oracle EBS, purging helps manage log files, concurrent request data, and temporary
tables.
TDE secures data at rest by encrypting tablespaces and columns. The encryption and decryption are
transparent to applications, ensuring data security without modifying application logic.
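A minimal illustration of an encrypted tablespace (assumes a keystore/wallet is already configured and
open; the disk group name is a placeholder):
CREATE TABLESPACE secure_data
  DATAFILE '+DATA' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);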
Data Guard ensures high availability and disaster recovery by synchronizing a primary database with
standby databases. It supports automatic failover, switchover, and both logical and physical standby
configurations.
How do you set up Grid Infrastructure (GRID), the Database, and the Application Server? How do you
install them, and what is the sequence for setting up the environment?
Setting up Oracle Grid Infrastructure (GI), Database (DB), and Application Server for an Oracle E-Business
Suite environment involves several steps. Below is a detailed explanation of the sequence, commands,
and process:
Pre-requisites:
• Ensure servers meet hardware and software requirements.
• Configure shared storage for ASM (via SAN/NAS or other shared storage solutions).
• Setup private/public/virtual IPs for RAC.
Steps:
1. Install Oracle Grid Infrastructure:
• Run the installer (runInstaller) from the Grid home.
./runInstaller
2. Configure ASM:
• ASM setup includes the creation of DATA and FRA (Fast Recovery Area) disk groups.
• Use asmca for disk group management.
asmca
3. Verify Installation:
• Check cluster and ASM status:
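For example, assuming the Grid environment is sourced:
crsctl check cluster -all     # cluster stack status on all nodes
crsctl stat res -t            # status of all cluster resources
asmcmd lsdg                   # ASM disk group state and free space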
Pre-requisites:
• Ensure GI is running and ASM disk groups are available.
• Install required OS packages.
Steps:
1. Install Oracle Database Software:
• Run the installer from the database software home:
./runInstaller
2. Create the Database with DBCA:
dbca
• Select ASM for storage and choose DATA and FRA disk groups.
• Specify the database name (SID) and configure listener details.
4. Post-Installation Tasks:
• Configure the database initialization parameters.
• Set up backups using RMAN.
• Apply required patches using opatch.
Pre-requisites:
• Database should be running.
• Verify required ports are free.
• Install JDK as required by Oracle Application Server.
Steps:
1. Install Application Server:
• Use Oracle EBS Rapid Install to install the application tier:
./rapidwiz
• Specify:
• Database details for connectivity.
• Node configurations (single or multi-node).
2. Start the Application Services:
$ADMIN_SCRIPTS_HOME/adstrtal.sh
• Verify the services (e.g., check the EBS login page and the Concurrent Managers). To stop all
services when required:
$ADMIN_SCRIPTS_HOME/adstpall.sh
4. Post-Installation Steps:
• Apply application patches using adpatch.
• Configure Concurrent Manager and Workflow services.
4. Sequence of Setup
1. Install and configure Grid Infrastructure.
2. Install Oracle Database and create required databases.
3. Install and configure the Application Server (EBS).
Important Commands
1. Grid Infrastructure:
• Start/Stop GI:
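Typical commands (crsctl is normally run as root; the database itself, not listed above, is started and
stopped with srvctl; <db_unique_name> is a placeholder):
crsctl stop crs
crsctl start crs
srvctl start database -d <db_unique_name>
srvctl stop database -d <db_unique_name>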
3. Application Server:
• Start/Stop Services:
$ADMIN_SCRIPTS_HOME/adstrtal.sh
$ADMIN_SCRIPTS_HOME/adstpall.sh
Validation
1. Verify all components are running:
• Grid Infrastructure:
• Database:
sqlplus / as sysdba
SQL> SELECT instance_name, status FROM gv$instance;
• Application Server:
• Access the EBS login page to verify.
By following these steps and commands, you can successfully set up a Grid, Database, and Application
Server environment for Oracle E-Business Suite.
Proper configuration of WebLogic Server with EBS: setting up the Admin Server and Managed
Servers
Setting up WebLogic Server with Oracle EBS (Including Admin Server and Managed Servers)
Oracle EBS R12.2 integrates Oracle WebLogic Server for managing the application tier. Below is the
detailed process for setting up WebLogic Server as part of the EBS environment.
Pre-requisites
1. Verify the following software versions are compatible with Oracle EBS:
• Oracle WebLogic Server version.
• Java Development Kit (JDK) version.
2. Ensure you have the required ports available.
3. Install and configure the database before setting up the application server.
3. Post-Installation:
• Verify the installation with the WebLogic Scripting Tool (WLST):
$MIDDLEWARE_HOME/wlserver/common/bin/wlst.sh
• Run the Configuration Wizard if a domain must be created or extended manually:
$MIDDLEWARE_HOME/wlserver/common/bin/config.sh
• Start Node Manager:
$MIDDLEWARE_HOME/wlserver/common/bin/startNodeManager.sh
• Start the Admin Server:
cd $DOMAIN_HOME/bin
./startWebLogic.sh
• Start the Managed Servers (oacore, forms, oafm), pointing each to the Admin Server URL:
cd $DOMAIN_HOME/bin
./startManagedWebLogic.sh oacore_server1 http://<hostname>:7001
./startManagedWebLogic.sh forms_server1 http://<hostname>:7001
./startManagedWebLogic.sh oafm_server1 http://<hostname>:7001
3. Stop Servers:
• Stop the Managed Servers first, then the Admin Server:
./stopManagedWebLogic.sh oacore_server1 http://<hostname>:7001
./stopWebLogic.sh
4. Using Node Manager:
• Use Node Manager to manage start/stop operations for all WebLogic servers.
6. Post-Installation Validation
1. Access WebLogic Admin Console:
• URL: http://<hostname>:7001/console
• Check that all Managed Servers are running.
2. Verify EBS Login:
• Access the Oracle EBS homepage and verify functionality.
3. Logs for Troubleshooting:
• Admin Server log: $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log
• Managed Server logs: $DOMAIN_HOME/servers/<server_name>/logs/<server_name>.log
Commands Summary
1. Start/Stop Admin Server:
./startWebLogic.sh
./stopWebLogic.sh
2. Start/Stop Managed Servers:
./startManagedWebLogic.sh <server_name> http://<hostname>:7001
./stopManagedWebLogic.sh <server_name> http://<hostname>:7001
3. Node Manager:
./startNodeManager.sh
./stopNodeManager.sh
DR Setup:
Setting Up Disaster Recovery (DR) for Oracle E-Business Suite
Disaster Recovery (DR) ensures business continuity by providing a standby environment for Oracle EBS in
case the primary environment fails. The setup typically involves Data Guard for the database and file
system synchronization for the application tier.
Detailed Steps
1. Enable archive log mode on the primary database:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
2. Configure Data Guard initialization parameters on the primary:
DB_UNIQUE_NAME=PRIMARY_DB
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY_DB,STANDBY_DB)'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY_DB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STANDBY_DB'
3. Create the standby database (typically with RMAN DUPLICATE TARGET DATABASE FOR STANDBY) and
start managed recovery on the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM
SESSION;
4. Update the application tier context file so the DR application tier points to the standby database:
<s_dbhost>standby_hostname</s_dbhost>
<s_dbport>1521</s_dbport>
<s_dbname>STANDBY_DB</s_dbname>
Database Failover
1. Force Standby Database to Become Primary:
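A hedged sketch of the failover commands, either through the Data Guard broker or manually in SQL*Plus
on the standby (database names are placeholders):
# Using the Data Guard broker
dgmgrl sys@STANDBY_DB
DGMGRL> FAILOVER TO 'STANDBY_DB';
# Or manually on the standby (no broker)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;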
Database Switchover
1. Initiate Switchover on the Primary:
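A hedged sketch of a switchover, broker-managed or manual (database names are placeholders; both
databases are restarted in their new roles afterward):
# Using the Data Guard broker
DGMGRL> SWITCHOVER TO 'STANDBY_DB';
# Or manually: first on the primary, then on the standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;   -- on the primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;            -- on the standby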
Commands Summary:
• Database Sync:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM
SESSION;
• Failover:
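For example, using the Data Guard broker (the standby name is a placeholder):
DGMGRL> FAILOVER TO 'STANDBY_DB';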
Best Practices
1. Automate backups and file synchronization.
2. Regularly validate the DR environment with failover testing.
3. Use monitoring tools to track replication health and system performance.
This process ensures a robust DR setup for Oracle EBS with minimal downtime during failovers.