Q03. How can we improve the performance of Data Pump export/import jobs?
Ans: Check the STREAMS_POOL_SIZE parameter. If STREAMS_POOL_SIZE is 0 you will most likely hit memory-related errors; set it to a minimum value of 96M.
• Use CLUSTER=N : In a RAC environment it can improve the speed of Data Pump API based operations.
• Set PARALLEL_FORCE_LOCAL to a value of TRUE since PARALLEL_FORCE_LOCAL could have a wider scope of effect
than just Data Pump API based operations.
• EXCLUDE=STATISTICS: excluding the generation and export of statistics at export time will shorten the time needed
to perform any export operation. The DBMS_STATS.GATHER_DATABASE_STATS procedure would then be used at
the target database once the import operation was completed.
• Use PARALLEL : If there is more than one CPU available, the environment is not already CPU, disk I/O or memory bound, and multiple dump files are going to be used (ideally on different spindles) in the DUMPFILE
parameter, then parallelism has the greatest potential of being used to positive effect, performance wise. A sample command combining these recommendations is sketched below.
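For illustration only, a sketch that combines these recommendations; the schema SCOTT, the directory object DP_DIR, the file names and the values shown are placeholders, not taken from the original text:
SQL> show parameter streams_pool_size
SQL> alter system set streams_pool_size=96M scope=both;
$ expdp system/password schemas=SCOTT directory=DP_DIR dumpfile=scott_%U.dmp logfile=scott_exp.log parallel=4 exclude=statistics cluster=N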
Q04. How to check whether the Data Pump export/import jobs are running or not?
When the export or import job is running, press CTRL+C to get to the respective Data Pump client prompt, OR start another Data Pump client session, attach to the running job using the ATTACH clause, and then issue the
STATUS command:
Export> status
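For example (a sketch; the job name myfullJob and the credentials are placeholders), you can look up the running job in DBA_DATAPUMP_JOBS and attach to it from a second session:
SQL> select owner_name, job_name, state from dba_datapump_jobs;
$ expdp system/password attach=myfullJob
Export> status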
Q8: I exported a metadata/DDL-only dumpfile from production, but during import on the test machine it is consuming a huge amount of space, and we probably don't have that much disk space available. What could be the reason that DDL alone is consuming so much space?
Ans: Below is a snippet of the DDL of one table extracted from prod. As you can see, during table creation Oracle always allocates the initial extent bytes shown below.
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
As you can see above, Oracle allocates about 1280 MB (INITIAL 1342177280 bytes) for this one table up front, even though its row count is zero.
To avoid this, set the DEFERRED_SEGMENT_CREATION parameter to TRUE so that the segment is not allocated until the first row is inserted.
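A quick check and change, as a sketch (requires the ALTER SYSTEM privilege):
SQL> show parameter deferred_segment_creation
SQL> alter system set deferred_segment_creation=TRUE scope=both;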
Q9: If you don't have sufficient disk space on the database server, how will you take the export? OR, how can you export without a dumpfile?
Ans: You can use a network link / DB link for the export. You can use NETWORK_LINK by following these simple steps (a sketch follows the steps):
• Create a TNS entry for the remote database in your tnsnames.ora file
• Test the entry with tnsping
• Create a database link to the remote database
• Specify the database link as NETWORK_LINK in your expdp or impdp syntax
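A minimal sketch of these steps; the TNS alias PRODDB, the link name src_link, the schema and the credentials are placeholders. With impdp and NETWORK_LINK the rows are pulled straight over the link, so no dumpfile is written at all:
$ tnsping PRODDB
SQL> create database link src_link connect to system identified by password using 'PRODDB';
$ impdp system/password network_link=src_link schemas=SCOTT logfile=scott_imp.log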
Q09: Tell me some of the parameters you have used during export?
Ans: Commonly used expdp parameters include DIRECTORY, DUMPFILE, LOGFILE, SCHEMAS/TABLES/FULL, PARALLEL, EXCLUDE, QUERY, COMPRESSION, JOB_NAME, NETWORK_LINK and VERSION.
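For illustration, a typical command using several of these parameters (all names and values below are placeholders):
$ expdp system/password directory=DP_DIR dumpfile=hr_%U.dmp logfile=hr_exp.log schemas=HR parallel=2 exclude=statistics compression=all job_name=hr_exp_job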
Q10: You are getting an undo tablespace error during import, how will you avoid it?
Ans: We can use the COMMIT=Y option (note that COMMIT=Y is a parameter of the original imp utility, not of impdp).
Q11: Can we import an 11g dumpfile into a 10g database using Data Pump?
Ans: Yes, we can import from 11g into 10g by using the VERSION option at export time.
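A sketch (the schema and file names are placeholders); VERSION is set on the 11g export so that the resulting dumpfile can be read by a 10.2 impdp:
$ expdp system/password schemas=SCOTT directory=DP_DIR dumpfile=scott_10g.dmp version=10.2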
Q12. During impdp with the FULL=Y clause, do we need to create the users before the import?
Ans: No. The users will be created automatically.
Q13. When do we need a consistent export (e.g. using FLASHBACK_SCN / FLASHBACK_TIME)?
Ans: When you need a consistent backup, for example when you do the initial load using Data Pump for GoldenGate.
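A sketch of a consistent export, assuming you take the current SCN from the source database (the schema, file names and SCN value are placeholders):
SQL> select current_scn from v$database;
$ expdp system/password schemas=SCOTT directory=DP_DIR dumpfile=scott_cons.dmp flashback_scn=1234567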
Q18. Why is expdp faster than exp (or why is Data Pump faster than conventional export/import)?
Ans:
• Data Pump works in block mode, whereas exp works in byte mode.
• Data Pump can run multiple worker processes in parallel.
• Data Pump uses the direct path API.
Q25. How to suspend and resume export jobs (attaching and re-attaching to the jobs)?
Ans: Start the export with a job name, e.g. JOB_NAME=myfullJob, and press CTRL+C to enter interactive mode. You will then get the Export> prompt, where you can type interactive commands (a worked sequence follows):
expdp \"/ as sysdba\" attach=job_name
Export> status
Export> stop_job
Export> start_job
Export> kill_job
Export> continue_job
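A worked sequence as a sketch (the directory, file names, credentials and JOB_NAME are placeholders):
$ expdp system/password full=y directory=DP_DIR dumpfile=full_%U.dmp job_name=myfullJob
(press CTRL+C to enter interactive mode, then suspend the job)
Export> stop_job=immediate
(later, re-attach from a new session and resume)
$ expdp system/password attach=myfullJob
Export> start_job
Export> continue_job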
Q27. You are observing an undo tablespace error during import, how will you avoid it?
Ans: We can use the COMMIT=Y option.
Q29. Is it possible to export two tablespaces at a time and import them into a single tablespace?
Ans: Yes. You can export both tablespaces in a single job, and during import you can map them into one target tablespace with REMAP_TABLESPACE.
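A sketch using REMAP_TABLESPACE (the tablespace, directory and file names are placeholders):
$ expdp system/password tablespaces=TS1,TS2 directory=DP_DIR dumpfile=ts.dmp
$ impdp system/password directory=DP_DIR dumpfile=ts.dmp remap_tablespace=TS1:TS3 remap_tablespace=TS2:TS3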
Q33. Which processes are involved in a Data Pump job?
Ans:
• Client process: this process is initiated by the client utility (expdp/impdp) and makes calls to the Data Pump API. Once the Data Pump job has been initiated, this process is no longer necessary for the job.
• Shadow process: when the client logs in to the database, a foreground (shadow) process is created. It services the client's Data Pump API requests, creates the master table and creates the Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.
• Master control process (MCP): the MCP controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the job into various metadata and data load or unload tasks and hands them over to the worker processes.
• Worker processes: the MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.
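To observe these processes for a running job you can, for example, join DBA_DATAPUMP_SESSIONS to V$SESSION (a sketch; requires the appropriate privileges):
SQL> select d.job_name, d.session_type, s.sid, s.serial#, s.program
     from dba_datapump_sessions d, v$session s
     where d.saddr = s.saddr;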
Q34. What is the use of the TABLE_EXISTS_ACTION parameter in Data Pump?
Ans: TABLE_EXISTS_ACTION tells the import what to do when a table being imported already exists in the database: skip it, append to it, truncate it, or replace it. This parameter can take the following values.
SKIP is default value. If SKIP is used then table replacing is not done.
APPEND loads rows from the export files and leaves target existing rows unchanged.
TRUNCATE deletes existing rows in target table and then loads rows from the export.
REPLACE drops the existing table in the target and then creates and loads it from the export.
For example, suppose you run the import command shown at the end of this answer. If you don't use the TABLE_EXISTS_ACTION parameter, or use TABLE_EXISTS_ACTION=SKIP, then Oracle will not replace tables that already
exist in the target schema, and the old tables and their data remain untouched.
If you use TABLE_EXISTS_ACTION=APPEND, then Oracle will load rows from the export files and leave the existing rows in the target tables unchanged. Quite possibly a lot of data will be duplicated.
If you use TABLE_EXISTS_ACTION=TRUNCATE, then Oracle will delete (truncate) the existing rows in the target table and then load the rows from the export.
If you use TABLE_EXISTS_ACTION=REPLACE, then Oracle will drop the existing table in the target and then re-create and load it from the export.
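The sample import command referred to above could look like this (the schema, directory and file names are placeholders):
$ impdp system/password directory=DP_DIR dumpfile=hr.dmp logfile=hr_imp.log schemas=HR table_exists_action=append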