
Q1. How to export only the DDL/metadata of a table?

Ans: You can use the CONTENT=METADATA_ONLY parameter during export.
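For example, a minimal sketch (the directory, schema, and file names here are placeholders, not from the original answer):

expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=emp_ddl.dmp LOGFILE=emp_ddl.log TABLES=scott.emp CONTENT=METADATA_ONLY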

Q2: Which memory area is used by the Data Pump process?

Ans: The streams pool (STREAMS_POOL_SIZE). If STREAMS_POOL_SIZE is 0, you will probably get a memory-related error. Check this parameter and set it to a minimum of 96M.

show parameter STREAMS_POOL_SIZE

NAME                TYPE         VALUE
------------------- ------------ ------
streams_pool_size   big integer  96M
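To raise the value online, something like the following can be used (a sketch; pick a size appropriate for your system):

ALTER SYSTEM SET STREAMS_POOL_SIZE=96M SCOPE=BOTH;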

Q3: How to improve datapump performance so that export/import happens faster?


Ans: Allocate enough streams pool memory. The query below gives the recommended setting for this parameter.
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='|| (max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');
Then run the generated statement:
ALTER SYSTEM SET STREAMS_POOL_SIZE=XXXX SCOPE=SPFILE;

• Use CLUSTER=N : In a RAC environment it can improve the speed of Data Pump API based operations.
• Set PARALLEL_FORCE_LOCAL to TRUE; keep in mind that PARALLEL_FORCE_LOCAL has a wider scope of effect than just Data Pump API based operations.
• EXCLUDE=STATISTICS: excluding the generation and export of statistics at export time will shorten the time needed
to perform any export operation. The DBMS_STATS.GATHER_DATABASE_STATS procedure would then be used at
the target database once the import operation was completed.
• Use PARALLEL : If there is more than one CPU available and the environment is not already CPU bound or disk I/O
bound or memory bound and multiple dump files are going be used (ideally on different spindles) in the DUMPFILE
parameter, then parallelism has the greatest potential of being used to positive effect, performance wise.
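A sketch of an export parameter file that combines these suggestions (the directory, dump-file names, and schema are placeholders; %U lets Data Pump create one file per parallel worker):

directory=DATA_PUMP_DIR
dumpfile=exp_scott_%U.dmp
logfile=exp_scott.log
schemas=SCOTT
parallel=4
cluster=N
exclude=STATISTICS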

Q04. How to check whether Data Pump import jobs are running or not?

1. Using the Data Pump client (expdp & impdp) STATUS command:

While the export or import job is running, press Ctrl+C to get to the Data Pump client prompt, or start another Data Pump client session, attach to the running job using the ATTACH clause, and then issue the STATUS command:
Export> status

2. Querying the V$SESSION_LONGOPS & V$SESSION views


3. Querying the V$SESSION_LONGOPS & V$DATAPUMP_JOB views
4. Querying the DBA_RESUMABLE view
5. Querying the V$SESSION_WAIT view
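For example, a minimal sketch of the kind of monitoring queries meant here:

-- list Data Pump jobs and their current state
select owner_name, job_name, operation, job_mode, state
from dba_datapump_jobs;

-- progress of the long-running part of the job
select sid, serial#, opname, sofar, totalwork,
       round(sofar/totalwork*100, 2) pct_done
from v$session_longops
where totalwork > 0
  and (opname like '%EXPORT%' or opname like '%IMPORT%');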

Q5: How to stop/start/kill datapump jobs?


• Ans: expdp \"/ as sysdba\" attach=job_name
• export> status
• export> stop_job
• export> start_job
• export> kill_job
• You can also kill the job's sessions with the ALTER SYSTEM KILL SESSION command; the SID and SERIAL# can be taken from the STATUS output above.
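A sketch of finding the job's sessions and killing one of them (the SID and SERIAL# values are placeholders):

-- sessions attached to Data Pump jobs
select s.sid, s.serial#, d.job_name
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;

-- kill a specific session (substitute the real SID and SERIAL#)
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;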
Q6: How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
• Ans: To take a consistent export backup you can use the below method:
• SQL:
• select to_char(current_scn) from v$database;
• Expdp parfile content:
• —————————
• directory=OH_EXP_DIR
• dumpfile=exporahow_4apr_<yyyymmdd>.dmp
• logfile=exporahow_4apr_<yyyymmdd>.log
• schemas=ORAHOW
• flashback_scn=<<current_scn>>
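A usage sketch, assuming the parameter file above is saved as exp_orahow.par (the file name is a placeholder):

expdp \"/ as sysdba\" parfile=exp_orahow.par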

Q7: How to drop constraints before import?


Ans:
-- generate DROP CONSTRAINT statements for all non-check constraints owned by SCHEMA_NAME
set feedback off
set heading off
set pagesize 0
spool /oradba/orahow/drop_constraints.sql
select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'
from dba_constraints where owner = 'SCHEMA_NAME'
and not (constraint_type = 'C')
order by table_name, constraint_name;
spool off
exit;
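The spooled file can then be reviewed and executed in SQL*Plus before the import, for example:

@/oradba/orahow/drop_constraints.sql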

Q8: I exported a metadata/DDL-only dumpfile from production, but during import on the test machine it consumes a huge amount of space and we may not have that much disk space available. What could be the reason that DDL alone consumes so much space?
Ans: Below is a DDL snippet of one table extracted from production. As you can see, during table creation Oracle allocates the INITIAL extent specified in the storage clause.

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

As you can see above, Oracle allocates the full INITIAL extent for this table (1342177280 bytes, about 1.25 GB) even if the row count is zero.
To avoid this, set the DEFERRED_SEGMENT_CREATION parameter to TRUE so that the segment is allocated only when the first row is inserted.
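A minimal sketch of enabling it (assumes an 11.2 or later database):

ALTER SYSTEM SET DEFERRED_SEGMENT_CREATION=TRUE SCOPE=BOTH;

Alternatively, the storage attributes can be stripped at import time with the impdp option TRANSFORM=SEGMENT_ATTRIBUTES:N.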
Q9: If you don't have sufficient disk space on the database server, how will you take the export? Or, how to export without a dumpfile?
Ans: You can use a network link / DB link for the export. You can use NETWORK_LINK by following these simple steps:
1. Create a TNS entry for the remote database in your tnsnames.ora file.
2. Test it with tnsping sid.
3. Create a database link to the remote database.
4. Specify the database link as NETWORK_LINK in your expdp or impdp command.
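A sketch of these steps (the link name, TNS alias, credentials, and schema are placeholders):

-- on the importing database, pointing at the source database
CREATE DATABASE LINK source_link CONNECT TO system IDENTIFIED BY password USING 'SOURCE_TNS';

-- pull the data directly over the link; no dumpfile is written
impdp system DIRECTORY=DATA_PUMP_DIR NETWORK_LINK=source_link SCHEMAS=SCOTT LOGFILE=net_imp.log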

Q09: Tell me some of the parameters you have used during export?
Ans:

CONTENT Specifies data to unload where the valid keywords are: (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SCHEMAS List of schemas to export (login schema).
TABLES Identifies a list of tables to export – one schema only.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.

Q10: You are getting an undo tablespace error during import; how will you avoid it?
Ans: We can use the COMMIT=Y option of the conventional imp utility so that it commits after each array insert instead of once per table.
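A sketch with the conventional imp utility (the file names are placeholders):

imp system FILE=exp_orahow.dmp LOG=imp_orahow.log FULL=Y COMMIT=Y BUFFER=1048576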

Q11: Can we import an 11g dumpfile into a 10g database using Data Pump?
Ans: Yes, we can import from 11g to 10g provided the export is taken with the VERSION option set to the target release (for example VERSION=10.2).
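A minimal sketch of such an export, assuming the target database is 10.2 (schema and file names are placeholders):

expdp system SCHEMAS=SCOTT DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott_v102.dmp LOGFILE=scott_v102.log VERSION=10.2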

Q12. During impdp with the FULL=Y clause, do we need to create users before the import?
No, the users will be created automatically.

Q13. In which cases do we need to use the FLASHBACK_TIME or FLASHBACK_SCN parameter?

When you need a consistent export, for example when you do the initial load using Data Pump for GoldenGate.
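A sketch using FLASHBACK_TIME (11.2-style syntax; the schema and file names are placeholders):

expdp system SCHEMAS=SCOTT DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott_consistent.dmp FLASHBACK_TIME=SYSTIMESTAMP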

Q14. What is the use of the DIRECT=Y option in exp?


Setting DIRECT=Y extracts data by reading it directly, bypassing the SQL command-processing layer (the evaluating buffer), so it should be faster. The default value is N.

Q15. How to improve imp performance?


1. Place the file to be imported on a separate disk from the datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It's advisable to drop indexes before importing to speed up the import process, or set INDEXES=N and build the indexes later, after the import. Indexes can easily be recreated after the data has been successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during the import.

Q16. What is the use of the INDEXFILE option in imp?


It writes the DDL of the objects in the dumpfile into the specified file instead of executing it, so the statements (mainly CREATE INDEX) can be reviewed, edited, and run later.
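A usage sketch (the file names are placeholders):

imp system FILE=exp_orahow.dmp FULL=Y INDEXFILE=create_indexes.sql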
Q17. What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump is server-centric (dump files are created on the server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.

Q18. Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.

Q19. How to improve expdp performance?


Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.

Q20. How to improve impdp performance?


Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.

Q21. What is the order of importing objects in impdp?


Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views

Q22. How to import only metadata?


CONTENT=METADATA_ONLY
Q23. How to import into different user/tablespace/datafile/table?
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
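A sketch of remapping a schema and a tablespace during import (all names are placeholders):

impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott.dmp LOGFILE=scott_imp.log REMAP_SCHEMA=SCOTT:SCOTT_TEST REMAP_TABLESPACE=USERS:USERS_TEST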

Q24. What is COMPRESSION parameter in expdp?


The COMPRESSION parameter enables the user to specify which data to compress before writing the export data to a dump file. By default, all metadata is compressed before it is written out to an export dump file. You can disable compression by specifying a value of NONE for the COMPRESSION parameter, as shown here:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=NONE
The COMPRESSION parameter can take any of the following four values:
ALL: Enables compression for the entire operation.
DATA_ONLY: Specifies that all data should be written to the dump file in a compressed format.
METADATA_ONLY: Specifies that all metadata be written to the dump file in a compressed format. This is the default value.
NONE: Disables compression for the entire export operation.

Q25. How to suspend and resume export jobs (attaching and re-attaching to the jobs)?
Name the job with the JOB_NAME parameter, e.g. JOB_NAME=myfullJob.

Press Ctrl+C to enter interactive mode; you will get the Export> prompt, where you can type interactive commands. You can also attach to the running job from another session:
expdp \"/ as sysdba\" attach=job_name
export> status
export> stop_job
export> start_job
export> kill_job
export> continue_job

Q26. How to get the estimated size of the dumpfile before exporting?


expdp estimate_only=y estimate=blocks (or estimate=statistics)

Q27. You are observing an undo tablespace error during import; how will you avoid it?
We can use the COMMIT=Y option of conventional imp, as in Q10.

Q28. What is the meaning of table_exists_action=truncate in impdp?


TRUNCATE deletes the existing rows and then loads rows from the source; that is, it truncates the existing table's rows, leaves the table definition in place, and replaces the rows with those from the dumpfile being imported.

Q29. Is it possible to export two tablespaces at a time and import them into a single tablespace?
You can export them together in a single job, but you cannot import them into one tablespace in a single step.

Q30. What will you do when an export is running slow?


We need to skip exporting indexes (INDEXES=N) and use the BUFFER and DIRECT parameters.

Q31. Can we import an 11g dumpfile into a 10g database using Data Pump?


Yes, we can import from 11g to 10g using the VERSION option (specified at export time), as in Q11.

Q32. Which are the common IMP/EXP problems?


ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or drop/truncate the table and re-import it if a table refresh is needed.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors,
but be careful as you might end up with duplicate rows.
ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter
CONSISTENT=NO (However this option could create possible referential problems, because the tables are not
exported from one snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with
an appropriate BUFFER parameter) while importing.

Q33. What are the processes involved in expdp/impdp at the back end?

Client process: This process is initiated by the client utility (expdp/impdp) and makes calls to the Data Pump API. Once the Data Pump job has started, this process is no longer required for the job to continue.
Shadow process: When the client logs in to the database, a foreground (shadow) process is created. It services the client's Data Pump API requests, creates the master table, and creates the Advanced Queues used for communication. Once the client process ends, the shadow process also goes away.

Master control process (MCP): The MCP controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the job into various metadata and data load or unload units and hands them over to the worker processes.
Worker processes: The MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.
Q34. What is the use of the TABLE_EXISTS_ACTION parameter in Data Pump?

TABLE_EXISTS_ACTION tells Oracle what to do when the import encounters a table that already exists in the target database: replace the existing table, append to it, truncate it, or skip it. This parameter can take the following values.

TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE]

SKIP is the default value. If SKIP is used, the existing table is left untouched and is not replaced.
APPEND loads rows from the export files and leaves target existing rows unchanged.
TRUNCATE deletes existing rows in target table and then loads rows from the export.
REPLACE drops the existing table in the target and then creates and loads it from the export.

For example, if you run the following import command without the TABLE_EXISTS_ACTION parameter, or with TABLE_EXISTS_ACTION=SKIP, Oracle will not replace tables that already exist in the target schema; the old tables and their data remain as they were.

New table data = old data and old metadata

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP LOGFILE=HR.log

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP LOGFILE=HR.log table_exists_action=skip

If you use the TABLE_EXISTS_ACTION=APPEND option, Oracle loads rows from the export files and leaves the existing rows in the target unchanged. A lot of data may end up duplicated.

New table data = old data + export data

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP LOGFILE=HR.log table_exists_action=APPEND

If you use TABLE_EXISTS_ACTION=TRUNCATE, Oracle truncates the existing rows in the target table and then loads rows from the export.

New table data = export data only

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP LOGFILE=HR.log table_exists_action=TRUNCATE

If you use TABLE_EXISTS_ACTION=REPLACE, Oracle drops the existing table in the target and then re-creates and loads it from the export.

New table data = export data and export metadata

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP LOGFILE=HR.log table_exists_action=REPLACE
