Re: huge "db file sequential read" of composite index during DML on large table

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Fri, 13 Jan 2012 16:19:28 -0000
Message-ID: <48910D309AE846CAB83CF69CCD4D2B98_at_Primary>


What's the execution path for the delete? Index range scan or tablescan?
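If you haven't already, you can check the plan for the delete with something like the following - note that the table and column names here are my guesses from your description, not your real ones:

```sql
-- Hypothetical names: "cdr" for the table, "call_datetime" for the datetime column.
EXPLAIN PLAN FOR
  DELETE FROM cdr
   WHERE call_datetime >= DATE '2012-01-12'
     AND call_datetime <  DATE '2012-01-13';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```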

Given your description it's likely that rows for the same datetime arrive at the same time, so they end up in a small number of table blocks, and clearly they will end up in a small number of consecutive blocks of the datetime index. However, in any couple of minutes a single subscriber probably won't make several calls, and even over the course of 24 hours many subscribers will make only a few calls each - so the calls for a given day are likely to be scattered very finely across the whole of the composite index that starts with subscriber_number.
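You should be able to see this scattering in the data dictionary: the clustering_factor of the composite index should be close to the number of rows in the table, while the datetime index's should be close to the number of table blocks. A sketch (table name assumed, as before):

```sql
-- Compare clustering_factor against num_rows for each index:
-- close to num_rows  => rows badly scattered relative to this index order
-- close to the table's block count => rows well clustered
SELECT index_name, clustering_factor, leaf_blocks, num_rows
  FROM user_indexes
 WHERE table_name = 'CDR';   -- assumed table name
```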

That's why your comment:
> Is it that, while deleting, it finds enough rows in 1 single block of
> the table for which it has to hit 30-34 blocks of the composite index?
is essentially correct.
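If you want to put a number on it, you could count how many distinct subscribers share each table block for the day you're deleting - something along these lines (names assumed again, and strictly you'd combine the block number with the relative file number if the table spans multiple files):

```sql
-- Each distinct subscriber in a table block is a potentially separate
-- location in the composite index, which is where your 30-34 single
-- block reads per table block would come from.
SELECT AVG(subs_per_block), MAX(subs_per_block)
  FROM (
        SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS blk,
               COUNT(DISTINCT subscriber_number)    AS subs_per_block
          FROM cdr        -- assumed table name
         WHERE call_datetime >= DATE '2012-01-12'
           AND call_datetime <  DATE '2012-01-13'
         GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)
       );
```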

Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543

----- Original Message -----
From: "Purav Chovatia" <puravc_at_gmail.com>
To: <oracle-l_at_freelists.org>
Sent: Friday, January 13, 2012 3:13 PM
Subject: huge "db file sequential read" of composite index during DML on large table

Hi,

We have a table containing 320 million rows, approx. 10m rows for each day. The size of the table is approx. 37GB.
There are 2 indexes on the table. One is an index on the column containing datetime values, and the other is a composite index on 3 columns: subscriber_number (1m distinct values), flag (2 distinct values), datetime (the same column the first index is on).

When we delete a day's worth of data from the table, i.e. 10m rows, based on the datetime column, the explain plan shows it using the datetime index. The delete takes very long to complete. Inserting 10m rows for a day also takes very long. I know the table is huge and the data is also huge, but what perplexes me is that it waits almost all the time on db file sequential reads for both operations. On enabling 10046 trace, what I see is a single block read of the single-column index, a single block read of the table, and then 30-34 single block reads of the composite index. Why?

Is it that, while deleting, it finds enough rows in 1 single block of the table for which it has to hit 30-34 blocks of the composite index?

Thanks.

--
http://www.freelists.org/webpage/oracle-l






Received on Fri Jan 13 2012 - 10:19:28 CST
