Re: "direct path write" wait event
From: <shweta.kaparwan_at_googlemail.com>
Date: Sat, 31 Jan 2009 09:48:16 -0800 (PST)
Message-ID: <9a55fea8-5056-4812-921d-88399b5c8a6f_at_l33g2000pri.googlegroups.com>
On Jan 31, 1:31 pm, Charles Hooper <hooperc2..._at_yahoo.com> wrote:
> On Jan 31, 5:10 am, shweta.kapar..._at_googlemail.com wrote:
>
> >               Snap Id    Snap Time          Sessions Curs/Sess Comment
> >             --------- ------------------ -------- --------- -------------------
> > Begin Snap:        58 31-Dec-08 09:18:02  #######        .0
> >   End Snap:        59 31-Dec-08 09:31:19  #######        .0
> >    Elapsed:               13.28 (mins)
>
> > Cache Sizes (end)
> > ~~~~~~~~~~~~~~~~~
> >                Buffer Cache:       512M   Std Block Size:         8K
> >            Shared Pool Size:       752M       Log Buffer:       512K
>
> > Load Profile
> > ~~~~~~~~~~~~
> >                                       Per Second    Per Transaction
> >                                  ---------------    ---------------
> >            Redo size:                  16,819.64           5,054.77
> >        Logical reads:                  28,533.04           8,574.97
> >        Block changes:                      37.31              11.21
> >       Physical reads:                   2,291.61             688.69
> >      Physical writes:                   4,898.05           1,472.00
> >           User calls:                     173.51              52.14
> >               Parses:                      63.73              19.15
> >          Hard parses:                       0.61               0.18
> >                Sorts:                       1.61               0.48
> >               Logons:                       0.06               0.02
> >             Executes:                      64.97              19.52
> >         Transactions:                       3.33
>
> > Top 5 Timed Events
> > ~~~~~~~~~~~~~~~~~~
> >                                                                  % Total
> > Event                                            Waits  Time (s) Ela Time
> > -------------------------------------------- --------- --------- --------
> > direct path write                               978,716     2,475    41.56
> > direct path read                                645,975     2,002    33.63
> > CPU time                                                    1,222    20.52
> > control file sequential read                     25,617        49      .82
> > db file scattered read                            6,728        47      .78
>
> (snip)
> > No free space in temp.
>
> > also we have
>
> > NAME                                 TYPE        VALUE
> > ------------------------------------ ----------- -------------------
> > sort_area_size                       integer     524288
>
> > NAME                                 TYPE        VALUE
> > ------------------------------------ ----------- ------------------------------
> > pga_aggregate_target                 big integer 209715200
>
> > Kindly help - how can we reduce the direct path write waits?
>
> > Regards
>
> > Shweta
>
> First of all, thanks for posting most of the necessary information and
> showing the steps you performed to examine the problem. You did not
> state the Oracle version, but it appears from the other information
> you provided to be a version of 9i (pga_aggregate_target present, Log
> Buffer not affected by granule size). I believe that Oracle 10g
> separates out the writes to the temp tablespace into a separate wait
> event, while 9i and below group several possible causes into the
> direct path write waits. My notes for direct path write waits (from
> the days of Oracle 8i):
> "DIRECT PATH WRITE: When a process is writing buffers directly from
> PGA, instead of DBWR writing them from the buffer cache, the process
> waits on the direct path wait event for the write call to complete.
> Causes: a sort operation being written to disk, parallel DML
> operations, direct path INSERT, parallel create table as SELECT, some
> LOB operations. In the wait events: P1: file number, P2: first data
> block address (file & block), P3: block cnt"
>
> For direct path read (these notes appear to be updated for 9i):
> "DIRECT PATH READ: A single or multiblock read into the PGA that
> bypasses the SGA. Causes of 'direct read': sorts that are too large
> to fit in memory and some of the sort data is written out directly to
> disk - this data is later read back in using direct reads. Also, the
> server process is processing buffers faster than the I/O system can
> return the buffers - this can indicate an overloaded I/O system.
> EXAMINE: The file_id shows if the reads are for an object in the TEMP
> tablespace (sorts to disk), or full table scans by parallel slaves.
> For sorts to disk, query the V$TEMPSEG_USAGE view to find the SQL
> statement that is generating the sort. Query the statistics from
> V$SESSTAT to determine the size of the sort. If WORKAREA_SIZE_POLICY
> is MANUAL, consider increasing the SORT_AREA_SIZE for the system. If
> WORKAREA_SIZE_POLICY is AUTO, investigate whether to increase
> PGA_AGGREGATE_TARGET.
> P1: file number P2: first data block address (file & block) P3: block
> cnt"
>
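To make the TEMP-segment check above concrete, here is a sketch of the two queries it describes, against the 9i views V$TEMPSEG_USAGE, V$SESSION, V$SQL, V$SESSTAT and V$STATNAME. The 8192-byte block size and the SID of 123 are placeholders; substitute your own values.

```sql
-- Which sessions currently hold TEMP segments (sorts to disk),
-- and which SQL statement each of those sessions is running.
SELECT u.username, u.tablespace, u.segtype,
       u.blocks * 8192 AS bytes_used,   -- assumes an 8K block size
       s.sql_text
FROM   v$tempseg_usage u,
       v$session se,
       v$sql s
WHERE  u.session_addr   = se.saddr
AND    se.sql_address   = s.address
AND    se.sql_hash_value = s.hash_value;

-- Sort statistics for a given session (SID 123 is a placeholder),
-- to gauge how often and how large the sorts to disk are.
SELECT sn.name, st.value
FROM   v$sesstat st, v$statname sn
WHERE  st.statistic# = sn.statistic#
AND    st.sid = 123
AND    sn.name IN ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');
```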
> Suggestions:
> * Determine if it is necessary to retrieve 42 columns from the table;
> each column adds overhead that is carried into the sorting operation.
> * The query you identified is executing with a FIRST_ROWS optimizer
> goal. Have you tried adding an /*+ ALL_ROWS */ hint just after the
> "SELECT" keyword in the SQL statement to see if the CODE index is
> still used?
> * From another session, repeatedly query V$SESSION_WAIT while the
> query is executing, so that you may see the changing values of P1
> (file number), P2 (first data block address), and P3 (block count)
> for this wait event, and determine whether the wait event is related
> to the temp tablespace.
> * Start a 10046 trace at level 8 or 12 for the session, execute the
> query of interest, execute a simple query, such as SELECT SYSDATE FROM
> DUAL; and then disable the 10046 trace. Manually examine the trace
> file to see the P1, P2, and P3 values for the various wait events.
> * If you execute the query twice, does it execute just as slowly the
> second time?
> * Check V$SYSSTAT and V$SESSTAT to see if the sorts (disk) statistic
> is increasing.
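The V$SESSION_WAIT polling suggested above might look like this (SID 123 is a placeholder; note that in 9i the P1 file number reported for a tempfile may be offset by the value of the DB_FILES parameter, so compare against V$TEMPFILE with that in mind):

```sql
-- Re-run this several times while the slow query is executing and
-- watch how P1/P2/P3 change for the direct path waits.
SELECT event, p1 AS file#, p2 AS block#, p3 AS block_cnt,
       wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  sid = 123;

-- Cross-check the file number against the tempfiles:
SELECT file#, name FROM v$tempfile;
```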
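For the 10046 trace suggestion, the session-level commands are along these lines (level 8 captures wait events, level 12 adds bind values; the trace file is written to the directory given by the user_dump_dest parameter):

```sql
-- Enable extended SQL trace for the current session.
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... execute the query of interest here, then a trivial statement
-- so that the final row source statistics are flushed to the trace:
SELECT SYSDATE FROM DUAL;

-- Disable the trace, then examine the P1/P2/P3 values in the file.
ALTER SESSION SET events '10046 trace name context off';
```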
>
> Charles Hooper
> IT Manager/Oracle DBA
> K&M Machine-Fabricating, Inc.
Many thanks, Charles, for the valuable information you provided. I missed giving the Oracle version earlier, sorry for that - it is 9.2.0.6.
I will try the suggestions you provided and post the results here.
Regards
Shweta
Received on Sat Jan 31 2009 - 11:48:16 CST