Re: High db file sequential reads during imp?

From: Hemant K Chitale <hkchital_at_singnet.com.sg>
Date: Fri, 09 May 2008 23:12:43 +0800
Message-Id: <200805091513.m49FDdm6027350@smtp18.singnet.com.sg>

  1. If importing into an existing table with a pre-created index, the import has to update that index for each row inserted. Ideally the index should be dropped before the import and either created as part of the import or created manually afterwards (see the sketch just after this list).
  2. When inserting into a heap table, Oracle doesn't much care which block a row goes into. Indexes, however, are ordered structures, so Oracle has to be particular about which leaf block a new value goes into. It therefore has to identify the correct leaf block (walking down from the root through the branches) before it can update that leaf block. That activity of "identifying and getting to the correct block" is what shows up as 'db file sequential read' waits. Things get worse when leaf block splits occur. To complicate matters further, Oracle now has a great many "dirty" buffers to write -- if DBWR cannot keep up, you will also see 'free buffer waits' (a quick way to check the session's waits is sketched below).
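
A minimal sketch of the drop-and-rebuild approach from point 1. The table name BIG_TABLE, the index name BIG_TABLE_IX, the column and the scott/tiger credentials are made up for illustration; the parameters shown (FILE, TABLES, IGNORE, INDEXES) are those of the classic imp utility:

   SQL> DROP INDEX big_table_ix;

   $ imp scott/tiger file=exp.dmp tables=BIG_TABLE ignore=y indexes=n

   SQL> CREATE INDEX big_table_ix ON big_table (object_name)
          NOLOGGING PARALLEL 4;
   SQL> ALTER INDEX big_table_ix LOGGING;
   SQL> ALTER INDEX big_table_ix NOPARALLEL;

INDEXES=N stops imp from re-creating the index from the dump file during the load, so you can build it yourself in one pass afterwards; NOLOGGING/PARALLEL are optional, but they make that one-off build much cheaper than maintaining the index row by row.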

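To confirm that this is where the import session's time is going, you can watch its wait events while the job runs. A rough sketch, assuming you have already identified the session's SID (the :imp_sid bind below is just a placeholder):

   SELECT event, total_waits, time_waited
   FROM   v$session_event
   WHERE  sid = :imp_sid
   AND    event IN ('db file sequential read', 'free buffer waits');
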
See this note by Jonathan Lewis :

http://www.jlcomp.demon.co.uk/faq/slowdown.html

At 03:22 PM Friday, cam wrote:
>Hello all,
>
>Relative newcomer to performance analysis. I was surprised, upon
>investigating a very slow import job, to find very high 'db file
>sequential read' waits. This activity was mostly associated with an
>index that was being updated while around 11 million rows were
>inserted into a normal heap table. I'm clearly missing something
>obvious here, but why would imp be reading (and not writing?) so
>heavily from this index?
>
>Regards,
>cam

Hemant K Chitale
http://hemantoracledba.blogspot.com

"A 'No' uttered from the deepest conviction is better than a 'Yes' merely uttered to please, or worse, to avoid trouble." Mohandas Gandhi Quotes
: http://www.brainyquote.com/quotes/authors/m/mohandas_gandhi.html

--
http://www.freelists.org/webpage/oracle-l
Received on Fri May 09 2008 - 10:12:43 CDT
