Oracle Performance Tuning
Oracle performance tuning is an important discipline. It is not just the DBA's responsibility; it is also the responsibility of Oracle developers. Performance tuning should start before design and should be tested continuously. We need a good knowledge of how the Oracle software works, because many problems occur simply because we are not aware of Oracle's inner workings.
In this section, we present Oracle performance tuning articles that will help you solve problems quickly.
Tuning Tools
select lpad(' ', 2*level) || operation operations, options, object_name
from plan_table
where statement_id = 'x'
connect by prior id = parent_id and statement_id = 'x'
start with id = 1 and statement_id = 'x'
order by id;
or
SQL> set linesize 132
SQL> SELECT * FROM TABLE(dbms_xplan.display);
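Both queries above assume a plan has already been generated into PLAN_TABLE. A minimal end-to-end sketch (the emp table and the statement id 'x' are illustrative; dbms_xplan.display with no arguments shows the most recently explained statement):

```sql
-- Generate a plan for a statement, tagged with a statement id
EXPLAIN PLAN SET STATEMENT_ID = 'x' FOR
  SELECT * FROM emp WHERE empno = 7839;

-- Display the plan with DBMS_XPLAN
SELECT * FROM TABLE(dbms_xplan.display);
```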
How to Read?
-Read innermost out, top/down
-Join operations always require two sets. The order you read the sets is top down, so the first
set is the driving set and the second is the probed set. In the case of a nested loop, the first set
is the outer loop. In the case of a hash join, the first set is used to build the hash table.
-One join is performed at a time, so you only need to consider two sets and their join operation
at any one time.
What to look for ?
-Look for TABLE ACCESS (FULL)
-Costly if table is big
-Costly if performed many times
-Look for INDEX (RANGE SCAN)
-Expected for non-unique indexes
-Suspicious for unique indexes
Some info about joins
Nested Loops - Good for on-line screens and reports
-Read all rows from table 1
-Then access table 2 once for each row returned from table 1
-Fastest if:
the number of rows returned from table 1 is small
access to table 2 is inexpensive, meaning either a UNIQUE lookup or a SMALL range scan
Merge Join
-Better than nested loops for joining a large number of rows
SQL Trace
Advantages
Provides execution path
Provides row counts
Can tell timings for all events associated with SQL.
Can tell what values the SQL was run with
Disadvantages
The trace file may easily reach its size limit because of all the information Oracle must write to it, in which case only partial information is available in the trace file.
Formatting Your SQL Trace File with TKPROF
Trace Files are unformatted dumps of data
TKPROF is tool used to format trace files for reading
Syntax
tkprof {tracefile} {outputfile}
[explain={user/passwd} ] [table= ]
[print= ] [insert= ] [sys= ] [sort= ]
TKPROF Usage Best Practices
The following is the recommended usage:
tkprof {trace file} {output file} explain={usernm/passwd} print=? sort=prsela,exeela,fchela
Use explain= to include the execution plan in the report
Use print= to report only on the first ? statements
Use sort= to sort the statements with the longest elapsed times first (works with timed_statistics=true)
Some more commands related to tracing
1. To trace any SID from outside the session:
sys.dbms_system.set_ev(sid, serial#, event, level, name)
Example: SQL> execute sys.dbms_system.set_ev(8, 219, 10046, 12, '');
2. Gathering stats for any object in APPS:
exec apps.fnd_stats.GATHER_TABLE_STATS('APPLSYS','FND_CONCURRENT_REQUESTS',100,4);
3. Using oradebug
oradebug setospid <ospid>
oradebug event 10046 trace name context forever, level 4
oradebug event 10046 trace name context off
oradebug close_trace
4. Using tkprof
This prints only 10 SQL statements:
tkprof <tracefile>.trc elaps.prf sys=no explain=apps/<password> sort=(prsela,exeela,fchela) print=10
This prints all the SQL statements:
tkprof <tracefile>.trc elaps.prf sys=no explain=apps/apps sort=prsela,exeela,fchela
Autotrace Utility
Autotrace is a handy tool provided by Oracle for getting the explain plan and execution statistics.
You need to know the query and its bind variables, if any; with autotrace access, we can get all the useful information about its execution.
Autotrace Utility installation
1. cd [ORACLE_HOME]/rdbms/admin
2. Log into SQL*Plus as SYSTEM
3. Run @utlxplan
4. Run CREATE PUBLIC SYNONYM PLAN_TABLE FOR PLAN_TABLE;
5. Run GRANT ALL ON PLAN_TABLE TO PUBLIC;
6. Log in to SQL*Plus as SYS or as SYSDBA
7. Run @plustrce
8. Run GRANT PLUSTRACE TO PUBLIC;
Autotrace options
SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows only the optimizer execution
path.
SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows only the SQL statement
execution statistics.
SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and
the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY: This is like SET AUTOTRACE ON, but it suppresses the printing of the user's query output, if any. This is very useful for queries returning many rows, where we do not want to scroll through all the output.
SET AUTOTRACE TRACEONLY STATISTICS: This is like SET AUTOTRACE TRACEONLY, but it shows the statistics only and suppresses the explain plan output.
SET AUTOTRACE TRACEONLY EXPLAIN: This is like SET AUTOTRACE TRACEONLY, but it shows the explain plan only and suppresses the statistics. For a SELECT, this does not execute the statement; it just parses it and shows the plan. INSERT/UPDATE statements are executed and then the explain plan is shown.
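A quick sketch of using the utility once it is installed (the emp table is illustrative):

```sql
-- Show the execution plan without running the query
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM emp WHERE deptno = 10;

-- Turn the report off again
SET AUTOTRACE OFF
```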
Understanding Autotrace Output
Autotrace shows two things
a) Explain plan: The explain plan shows exactly how the query will be executed in the database. It shows the cost, cardinality, and bytes for each step in the plan.
b) Statistics: Many statistics are shown. Some of them are:
i) Recursive calls: the number of SQL statements executed in order to execute the user's statement
ii) DB block gets: the number of blocks read from the buffer cache in current mode
iii) Consistent gets: the number of blocks read from the buffer cache in consistent-read mode
iv) Redo size: the redo generated by the SQL
v) Physical reads: the number of disk reads
Let us take a look at how to turn on SQL trace and the 10046 event in the Oracle database, and at the trcsess and tkprof utilities.
Normal trace
alter session set sql_trace = true;   -- turn trace on
alter session set sql_trace = false;  -- turn trace off
Full-level trace with wait event and bind tracing
alter session set events '10046 trace name context forever, level 12';
To turn tracing off
alter session set events '10046 trace name context off';
Same as Normal trace
exec DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
exec DBMS_SESSION.set_sql_trace(sql_trace => FALSE);
If you want to trace another running session, first choose the 10046 trace level:
Level 2  - Regular trace. Provides the execution path and row counts; smallest trace file.
Level 4  - The same as level 2, but with bind variable values.
Level 8  - Regular trace plus timings of the database operations the SQL waited on in order to complete, for example disk access timings (wait events).
Level 12 - Regular trace with both the wait and bind information. Contains the most complete information and will produce the largest trace file.
There are other ways to do the tracing as well. Here are some of them:
1) ORADEBUG
This requires logging in as SYSDBA.
oradebug setospid 1111    -- debug the session with the specified OS process id
oradebug setorapid 1111   -- debug the session with the specified Oracle process id
oradebug event 10046 trace name context forever, level 4
oradebug event 10046 trace name context off   -- disables the trace
oradebug close_trace      -- closes the trace file
oradebug tracefile_name   -- shows the name of the trace file
2) With Oracle 10g the SQL tracing options have been extended using the DBMS_MONITOR package. (Note that waits defaults to TRUE in this package, so waits=>false must be passed explicitly for the lower levels.)
EXECUTE dbms_monitor.session_trace_enable(waits=>false);
which is similar to
ALTER SESSION SET EVENTS '10046 trace name context forever, level 2';
EXECUTE dbms_monitor.session_trace_enable(waits=>false, binds=>true);
which is similar to
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';
EXECUTE dbms_monitor.session_trace_enable(waits=>true);
which is similar to
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
EXECUTE dbms_monitor.session_trace_enable(sid, serial#, waits=>false);
which is similar to
execute dbms_system.set_ev(sid, serial#, 10046, 2, '');
EXECUTE dbms_monitor.session_trace_enable(sid, serial#, waits=>false, binds=>true);
which is similar to
execute dbms_system.set_ev(sid, serial#, 10046, 4, '');
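Tracing enabled through DBMS_MONITOR is turned off with the matching disable call; a short sketch (the SID and serial# values are illustrative):

```sql
-- Disable tracing for the current session
EXECUTE dbms_monitor.session_trace_disable;

-- Disable tracing for another session (sid 127, serial# 29 are examples)
EXECUTE dbms_monitor.session_trace_disable(session_id => 127, serial_num => 29);
```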
Here is an excerpt from a raw 10046 (level 12) trace file:
CLOSE #5:c=0,e=5,dep=1,type=3,tim=8866891138503
=====================
PARSING IN CURSOR #4 len=230 dep=1 uid=173 oct=3 lid=173 tim=8866891138634 hv=3575592451 ad='3aeea3da0' sqlid='55dc767ajydh3'
SELECT PROFILE_OPTION_VALUE FROM FND_PROFILE_OPTION_VALUES WHERE
PROFILE_OPTION_ID = :B4 AND APPLICATION_ID = :B3 AND LEVEL_ID = 10003 AND
LEVEL_VALUE = :B2 AND LEVEL_VALUE_APPLICATION_ID = :B1 AND
PROFILE_OPTION_VALUE IS NOT NULL
END OF STMT
BINDS #4:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=96 off=0
kxsbbbfp=ffffffff7d677b68 bln=22 avl=03 flg=05
value=1204
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=ffffffff7d677b80 bln=22 avl=02 flg=01
value=800
Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d677b98 bln=22 avl=04 flg=01
value=50334
Bind#3
Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d673ba8 bln=22 avl=04 flg=01
value=10001
Bind#3
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=72
kxsbbbfp=ffffffff7d673bc0 bln=22 avl=01 flg=01
value=0
EXEC #5:c=0,e=377,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891139624
FETCH #5:c=0,e=26,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=1,plh=2802907561,tim=8866891139692
CLOSE #5:c=0,e=7,dep=1,type=3,tim=8866891139739
=====================
PARSING IN CURSOR #4 len=356 dep=1 uid=173 oct=3 lid=173 tim=8866891139952 hv=2468783182 ad='4c70e4398' sqlid='0wmwsjy9kd92f'
SELECT PROFILE_OPTION_ID, APPLICATION_ID, SITE_ENABLED_FLAG ,
APP_ENABLED_FLAG , RESP_ENABLED_FLAG , USER_ENABLED_FLAG,
ORG_ENABLED_FLAG , SERVER_ENABLED_FLAG, SERVERRESP_ENABLED_FLAG,
HIERARCHY_TYPE, USER_CHANGEABLE_FLAG FROM FND_PROFILE_OPTIONS WHERE
PROFILE_OPTION_NAME = :B1 AND START_DATE_ACTIVE <= SYSDATE AND
NVL(END_DATE_ACTIVE, SYSDATE) >= SYSDATE
END OF STMT
BINDS #4:
Bind#0
oacdty=01 mxl=128(80) mxlc=00 mal=00 scl=00 pre=00
Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d673ba8 bln=22 avl=04 flg=01
value=50334
Bind#3
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=72
kxsbbbfp=ffffffff7d673bc0 bln=22 avl=01 flg=01
value=0
EXEC #5:c=0,e=325,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891140599
FETCH #5:c=0,e=19,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891140659
CLOSE #5:c=0,e=1,dep=1,type=3,tim=8866891140710
=====================
PARSING IN CURSOR #4 len=191 dep=1 uid=173 oct=3 lid=173 tim=8866891140843 hv=303338305 ad='3bedf0e48' sqlid='7qs7fx89194u1'
SELECT PROFILE_OPTION_VALUE FROM FND_PROFILE_OPTION_VALUES WHERE PROFILE_OPTION_ID = :B4 AND APPLICATION_ID = :B3 AND LEVEL_ID = :B2 AND LEVEL_VALUE = :B1 AND PROFILE_OPTION_VALUE IS NOT NULL
END OF STMT
BINDS #4:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=96 off=0
tkprof utility
The trace files obtained from the above methods are in raw form; they can be converted into a more readable format using the tkprof (Transient Kernel PROFile) utility.
tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
[print= ] [insert= ] [sys= ] [sort= ]
explain=user/password   Connect to Oracle and issue EXPLAIN PLAN for each statement.
table=schema.tablename  Use schema.tablename with the explain= option.
print=integer           List only the first integer SQL statements.
aggregate=yes|no        Whether to combine the statistics of identical SQL statements.
insert=filename         List SQL statements and data inside INSERT statements.
sys=no                  Exclude SQL statements issued as user SYS (recursive SQL).
waits=yes|no            Record a summary for any wait events found in the trace file.
sort=                   Set of zero or more of the following sort options:
prsqry   number of buffers for consistent read during parse
prscu    number of buffers for current read during parse
prsmis   number of misses in library cache during parse
execnt   number of times execute was called
execpu   cpu time spent executing
exeela   elapsed time executing
exedsk   number of disk reads during execute
exeqry   number of buffers for consistent read during execute
execu    number of buffers for current read during execute
exerow   number of rows processed during execute
fchcpu   cpu time spent fetching
fchrow   number of rows fetched
********************************************************************************
SQL ID: 6w821sggrtysx
Plan Hash: 2325776775
SELECT FUNCTION_NAME
FROM FND_USER_DESKTOP_OBJECTS
WHERE USER_ID = :b1 AND APPLICATION_ID = :b2 AND RESPONSIBILITY_ID = :b3 AND TYPE = 'FUNCTION' AND ROWNUM <= 10 ORDER BY SEQUENCE
call     count    cpu  elapsed  disk  query  current  rows
------- ------ ------ -------- ----- ------ -------- -----
[Parse / Execute / Fetch / total figures were lost in extraction; all cpu and elapsed values shown were 0.00]
Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      1   SORT (ORDER BY)
      1    COUNT (STOPKEY)
      1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF FND_USER_DESKTOP_OBJECTS (TABLE)
      1      INDEX   MODE: ANALYZED (RANGE SCAN) OF FND_USER_DESKTOP_OBJECTS_N1 (INDEX)
Elapsed times include waiting on following events:
Event waited on                         Times Waited  Max. Wait  Total Waited
--------------------------------------  ------------  ---------  ------------
SQL*Net message to client                          5       0.00          0.00
********************************************************************************
SQL ID: 276ut2y7ywqux
Plan Hash: 3856112528
select object_name, icon_name
from
fnd_desktop_objects
call     count    cpu  elapsed  disk  query  current  rows
------- ------ ------ -------- ----- ------ -------- -----
Parse        1   0.00     0.00     0      0        0     0
Execute      1   0.00     0.00     0      0        0     0
Fetch        3   0.00     0.00     0      6        0    47
------- ------ ------ -------- ----- ------ -------- -----
total        5   0.00     0.00     0      6        0    47
Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
     47   TABLE ACCESS   MODE: ANALYZED (FULL) OF FND_DESKTOP_OBJECTS (TABLE)
Elapsed times include waiting on following events:
Event waited on                         Times Waited  Max. Wait  Total Waited
--------------------------------------  ------------  ---------  ------------
(event name lost in extraction)                    4       0.00          0.00
********************************************************************************
trcsess utlity
When using shared server sessions, many processes are involved. The trace pertaining to the
user session is scattered across different trace files belonging to different processes. This
makes it difficult to get a complete picture of the life cycle of a session.
The trcsess utility consolidates trace output from selected trace files based on several criteria:
trcsess [output=output_file_name]
[session=session_id]
[clientid=client_id]
[service=service_name]
[action=action_name]
[module=module_name]
[trace_files]
trcsess output=main.trc service=TEST *.trc
After the consolidated trace file has been generated, you can run tkprof on it.
Performance Terms explained
V$SYSSTAT
Use the following query (or similar) to get the information from this dictionary view:
select name, value from v$sysstat
where name in ('consistent gets', 'db block gets', 'physical reads');
Which is better in terms of performance: logical I/O or physical I/O?
A physical I/O is not good for query performance. Whenever a physical I/O takes place, Oracle tries to read the data block from disk, which is slow. The goal, therefore, is to avoid physical I/O as far as possible.
A logical I/O is considered better for performance (compared to physical I/O) because the read happens from memory, the data block having been pre-fetched from disk, so Oracle does not need to go to disk to fetch blocks for your query results. But it is important to note that an excess of logical reads is also not good or recommended, for two main reasons:
1) a logical read might have resulted in a physical read to fetch the data block into the buffer cache;
2) every time a block is read from the cache, a lock or latch is acquired on the cache, so higher logical reads mean a higher chance of buffer cache contention.
So our goal should be to perform the fewest logical I/Os for our queries in order to improve their performance.
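As a rough check of the logical-to-physical read balance, the classic buffer cache hit ratio can be computed from the same V$SYSSTAT counters:

```sql
-- Fraction of block gets satisfied from the buffer cache rather than disk
SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_cache_hit_ratio
FROM v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE cur.name = 'db block gets'
AND con.name = 'consistent gets'
AND phy.name = 'physical reads';
```

Treat the ratio as a rough indicator only; a high ratio can coexist with the excessive-logical-read problems described above.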
Physical write IO requests - Number of write requests for application activity (mainly buffer cache and direct load operations) which wrote one or more database blocks per request.
Physical write total IO requests - Number of write requests which wrote one or more database blocks from all instance activity including application activity, backup and recovery, and other utilities.
physical read total multi block requests - Number of large read requests which read multiple database blocks for all instance activity.
physical write total multi block requests - Number of large write requests which write multiple database blocks for all instance activity.
physical read total bytes - Total bytes read in requests which read one or more database blocks for all instance activity including application, backup and recovery, and other utilities.
physical write total bytes - Total bytes written in requests which wrote one or more database blocks for all instance activity including application, backup and recovery, and other utilities.
To calculate small reads and writes:
Small Reads = Total Reads - Large Reads
Small Writes = Total Writes - Large Writes
These metrics, taken at two points in time, can be used to calculate the IOPS for small reads, small writes, large reads, large writes, total bytes per second, etc.
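As a sketch, the small (single-block) read count can be derived from the cumulative V$SYSSTAT counters; sampling this value twice and dividing the delta by the elapsed seconds gives small-read IOPS:

```sql
-- Small reads = total read requests - multi-block read requests
SELECT (SELECT value FROM v$sysstat
        WHERE name = 'physical read total IO requests')
     - (SELECT value FROM v$sysstat
        WHERE name = 'physical read total multi block requests')
       AS small_read_requests
FROM dual;
```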
The following queries report the top SQL from Active Session History over the last hour, for background and foreground sessions respectively:
SELECT sql_id, COUNT(*), ROUND(COUNT(*)/SUM(COUNT(*)) OVER(), 2) PCTLOAD
FROM gv$active_session_history
WHERE sample_time > SYSDATE - 1/24
AND session_type = 'BACKGROUND'
GROUP BY sql_id
ORDER BY COUNT(*) DESC;

SELECT sql_id, COUNT(*), ROUND(COUNT(*)/SUM(COUNT(*)) OVER(), 2) PCTLOAD
FROM gv$active_session_history
WHERE sample_time > SYSDATE - 1/24
AND session_type = 'FOREGROUND'
GROUP BY sql_id
ORDER BY COUNT(*) DESC;
select a.sample_time, a.session_id, a.sql_id, o.object_name
from v$active_session_history a,
     all_objects o
where a.event like 'enq: TX%'
and o.object_id (+)= a.CURRENT_OBJ#
and a.sample_time > sysdate - 40/(60*24)
order by a.sample_time
/
If this is a major wait event, it means the control file needs to be moved to a faster disk location.
We must try to allocate the redo logs to high performance disks (e.g. solid state disks). We should also try to reduce the load on LGWR by reducing commits in the application.
A manual hot backup can also stress the system by generating a lot of redo, so avoid running it during peak time.
Sometimes LGWR is starving for CPU resource. If the server is very busy, then LGWR can
starve for CPU too. This will lead to slower response from LGWR, increasing log file sync
waits. After all, these system calls and I/O calls must use CPU. In this case, log file sync is a
secondary symptom and resolving root cause for high CPU usage will reduce log file sync
waits.
Due to memory starvation issues, LGWR can also be paged out. This can lead to slower
response from LGWR too.
db file parallel read
The process has issued multiple I/O requests in parallel to read blocks from data files, and is waiting for all of them to complete. (Despite the name, you will not see this wait event during parallel query or parallel DML. In those cases wait events with PX in their names occur instead.)
db file parallel write
The process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks
from the buffer cache to disk, and is waiting for all requests to complete.
direct path read, direct path write
The process has issued asynchronous I/O requests that bypass the buffer cache, and is waiting
for them to complete. These wait events typically involve sort segments.
SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA.
Make sure the optimizer stats are up to date and the query is using the right driving table. Check whether the columns of a composite index can be rearranged to match the ORDER BY clause and avoid the sort entirely.
Make sure an appropriate value of PGA_AGGREGATE_TARGET is set. If possible, use UNION ALL instead of UNION.
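For example, when the two branches cannot produce duplicates (or duplicates are acceptable), UNION ALL avoids the duplicate-eliminating sort that UNION performs (the table names here are illustrative):

```sql
-- UNION sorts and de-duplicates the combined result set
SELECT order_id FROM open_orders
UNION
SELECT order_id FROM closed_orders;

-- UNION ALL simply concatenates the two results, skipping the sort
SELECT order_id FROM open_orders
UNION ALL
SELECT order_id FROM closed_orders;
```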
Shared pool latch
The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool. Contention for the shared pool and library cache latches is mainly due to intense hard parsing. A hard parse applies to new cursors and to cursors that have been aged out and must be re-parsed.
The cost of parsing a new SQL statement is expensive both in terms of CPU requirements and
the number of times the library cache and shared pool latches may need to be acquired and
released.
Eliminating literal SQL is also useful for reducing contention on the shared pool latch.
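A sketch of the difference (the orders table is illustrative): each distinct literal produces a new statement in the shared pool and a hard parse, while the bind-variable form is parsed once and shared:

```sql
-- Literal SQL: each value is a distinct statement in the shared pool
SELECT * FROM orders WHERE order_id = 101;
SELECT * FROM orders WHERE order_id = 102;

-- Bind variable: one shared cursor, soft parses after the first execution
VARIABLE order_id NUMBER
EXEC :order_id := 101;
SELECT * FROM orders WHERE order_id = :order_id;
```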
Nested Loops
-For each row in the first row source access all the rows from the second row source.
-Best for OLTP type transactions
-It will be fastest if the number of rows returned from the first table is small
-The optimizer first determines the driving table and designates it as the outer loop. This is the driving row source; it produces a set of rows for driving the join condition. The row source can be a table accessed using an index scan or a full table scan. The rows can also be produced by any other operation; for example, the output of another nested loop join can be used as a row source.
-The optimizer designates the other table as the inner loop. This is iterated over for every row returned from the outer loop. It is an access operation on a table and ideally should be an index scan.
-Operation performed by INNER table is repeated for every row returned in OUTER table
-If the optimizer is choosing to use some other join method, you can use the USE_NL(A B) hint,
where A and B are the aliases of the tables being joined.
Plan
SELECT STATEMENT
NESTED LOOPS
TABLE ACCESS FULL OF OE_ORDER_LINES_ALL
TABLE ACCESS BY INDEX ROWID OE_LINES_ALL
INDEX RANGE SCAN OE_LINES_N1
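As a hedged sketch of the hint in context (the E-Business Suite style table and column names are illustrative):

```sql
-- Force a nested loop join between the two aliased tables
SELECT /*+ USE_NL(h l) */ *
FROM oe_order_headers_all h, oe_order_lines_all l
WHERE l.header_id = h.header_id;
```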
Nested Loop outer join
-Similar to nested loop
-Rows returned even if inner loop does not have any rows meeting the criteria
-Unlike the regular nested loop, which might be driven from either of the tables, this is a one-way join: a = b(+) will always go to a before b. This may result in a more expensive plan (possibly non-NL).
-The (+) always goes on the deficient (optional) side.
Sort Merge Join
-Rows are produced from first table and are then sorted
-Rows are produced from second table and sorted by the same sort key as first table
-Tables A and B are NOT accessed concurrently
-Sorted rows from both sides are then merged together (joined)
-There is no concept of a driving table, so the order cannot affect the outcome
-Faster than Nested loops if:
rows produced from table 1 are large
access to table 2 requires a large range scan for each row in table 1
-To instruct the optimizer to use a sort merge join, apply the USE_MERGE hint. You might also need hints to force an access path.
-There are situations where it is better to override the optimizer with the USE_MERGE hint.
Execution Plan
MERGE JOIN
SORT JOIN
TABLE ACCESS FULL OF OE_ORDER_LINES_ALL
SORT JOIN
TABLE ACCESS FULL OF OE_HEADERS
Hash Join
-Smallest table is passed over and a hashing algorithm is applied to each row to create a hash
table in memory.
-The second table is passed over and the same hashing algorithm is applied to check for matches (i.e. joins)
-Faster than a sort merge join:
the sort operation required by a sort merge join can be expensive if the tables are large and not in any matching order
-Apply the USE_HASH hint to instruct the optimizer to use a hash join when joining two tables together.
-Ensure that hash_area_size is large enough to hold the smaller table in memory. Otherwise, Oracle must write to the TEMP tablespace, slowing down the hash join.
Plan
SELECT STATEMENT
HASH JOIN
TABLE ACCESS FULL OE_ORDER_LINES_ALL
TABLE ACCESS FULL OE_HEADERS
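A hedged sketch of the hint, using the table names from the plan above (the join column is illustrative):

```sql
-- Force a hash join; the optimizer builds the hash table on the smaller input
SELECT /*+ USE_HASH(h l) */ *
FROM oe_headers h, oe_order_lines_all l
WHERE l.header_id = h.header_id;
```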
Cartesian Join
-Generally expensive, as the result is the Cartesian product of the two tables
-Can result from one or more of the tables not having any join condition to any other table in the statement
-Can occur even with a join.
Plan
SELECT STATEMENT
SORT UNIQUE
MERGE JOIN CARTESIAN
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL
INDEX RANGE SCAN OE_ORDER_LINES_ALL_N1
TABLE ACCESS BY INDEX ROWID OE_HEADERS
INDEX RANGE SCAN OE_HEADERS_N1
SORT JOIN
INDEX FAST FULL SCAN OE_HEADERS_N1
Statistics
FOR k IN ObjList.FIRST..ObjList.LAST
LOOP
  dbms_output.put_line(ObjList(k).ownname || '.' || ObjList(k).ObjName || ' ' || ObjList(k).ObjType || ' ' || ObjList(k).partname);
END LOOP;
END;
/
The SQL below can also be used to find inserts, updates, and deletes:
select u.TIMESTAMP,
       t.last_analyzed,
       u.table_name,
       u.inserts,
       u.updates,
       u.deletes,
       d.num_rows,
       decode(num_rows, 0, 'Table Stats indicate No Rows',
              nvl(TO_CHAR(((u.inserts + u.deletes + u.updates) / d.num_rows) * 100, '999.99'),
                  'Null Value in USER_TAB_MODIFICATIONS')
       ) percent
from user_tables t, USER_TAB_MODIFICATIONS u, dba_tables d
where u.table_name = t.table_name
and d.table_name = t.table_name
and d.owner = '&Owner'
and (u.inserts > 3000 or u.updates > 3000 or u.deletes > 3000)
order by t.last_analyzed
/
Some other Important takeaways
Prior to Oracle 11g, the staleness threshold is hard coded at 10%. This means that an object is considered stale if the number of rows inserted, updated, or deleted since the last statistics gathering time is more than 10% of the number of rows. There is no way to modify this value prior to Oracle 11g.
Starting with Oracle 11g, the staleness threshold can be set using the STALE_PERCENT statistics preference. This can be set globally using DBMS_STATS.SET_GLOBAL_PREFS or at the table level using DBMS_STATS.SET_TABLE_PREFS.
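A sketch of setting the preference (the SCOTT.EMP table and the 15% value are illustrative):

```sql
-- Table-level: consider SCOTT.EMP stale after 15% of rows change (11g+)
EXEC dbms_stats.set_table_prefs('SCOTT', 'EMP', 'STALE_PERCENT', '15');

-- Or set the global default for all objects
EXEC dbms_stats.set_global_prefs('STALE_PERCENT', '15');
```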
Now, once we find the locked objects, we can use the commands below to unlock them:
exec dbms_stats.unlock_schema_stats('schema_owner');
exec dbms_stats.unlock_table_stats('table_owner', 'table_name');
General Performance topics
/*+ USE_NL(alias1 alias2) */ asks the optimizer to use a nested loop join for the two tables specified.
Avoid unnecessary hints and use them with care.
c bitmap index
d bitwise index
Solution (c)
6) which of the following factors would not make a column a good candidate for a b-tree index
a the data in the column has low cardinality
b the column is frequently used in sql statement where clauses
c most queries on the table return only a small portion of all rows
d none of the above
Solution (a)
7) the process of preparing a statement for execution is called
a caching
b hashing
c parsing
d none of the above
Solution (c)
8) finding a statement already cached in the library cache is referred to as
a cache hit
b cache miss
c cache match
d cache parse
Solution (a)
9) to determine if a new statement is a match for an existing statement already in the shared pool, oracle compares each statement's
a result set
b security
c execution plan
d hashed value
Solution (d)
10) in order for two statements to result in a cache hit, which of the following must be true
a the statements must use the same case, either upper, lower, or mixed
b the statements must be issued against the same table
c the statements must be on the same number of lines
d all of the above must be true
Solution (d)
11) which dynamic data dictionary view contains information about the library cache hit ratio
a v$rowcache
b v$librarycache
c v$dictionarycache
d all of the above
Solution (b)
12) according to oracle, what should the data dictionary hit ratio be for a well tuned oltp system
a more than 85 percent
b less than 85 percent
c between 50 and 60 percent
d none of the above
Solution (a)