huge "db file sequential read" of composite index during DML on large table

From: Purav Chovatia <puravc_at_gmail.com>
Date: Fri, 13 Jan 2012 20:43:24 +0530
Message-ID: <CADrzpjESk_0mJHD_pk=3XFzU9vtHwuOrDebhfY4VbpCiZcWNcw_at_mail.gmail.com>



Hi,

We have a table containing 320 million rows, approx. 10 million rows per day; the table is approx. 37 GB.
There are two indexes on the table. One is a single-column index on the datetime column; the other is a composite index on 3 columns: subscriber_number (~1 million distinct values), flag (2 distinct values) and datetime (the same column the single-column index is on).
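
For reference, the structure is roughly as follows (table and column
names here are illustrative, not our actual schema):

    -- Illustrative DDL; our real table has more payload columns.
    CREATE TABLE usage_records (
      subscriber_number  NUMBER,       -- ~1 million distinct values
      flag               NUMBER(1),    -- 2 distinct values
      event_datetime     DATE,         -- ~10 million rows per day
      payload            VARCHAR2(100)
    );

    CREATE INDEX ix_usage_dt   ON usage_records (event_datetime);
    CREATE INDEX ix_usage_comp ON usage_records (subscriber_number, flag, event_datetime);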

When we delete a day's worth of data from the table, i.e. 10 million rows based on the datetime column, the explain plan shows the delete using the index on the datetime column, yet the delete takes very long to complete. Inserting 10 million rows for a day also takes very long. I know the table is huge and the data volume is large, but what perplexes me is that both operations spend almost all their time waiting on db file sequential reads. With 10046 trace enabled, the pattern I see is: one single-block read of the single-column index, one single-block read of the table, and then 30-34 single-block reads of the composite index. Why?
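
The delete is essentially of this form, and we traced it as below
(again, object names illustrative; level 8 adds wait events to the
trace file):

    -- Turn on extended SQL trace with wait events for this session.
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

    -- Delete one day's worth of rows; the plan uses the datetime index.
    DELETE FROM usage_records
     WHERE event_datetime >= DATE '2012-01-01'
       AND event_datetime <  DATE '2012-01-02';

    ALTER SESSION SET EVENTS '10046 trace name context off';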

Is it that, while deleting, a single block of the table contains enough rows whose entries in the composite index are scattered across 30-34 different index blocks?
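
If that is the case, the composite index's clustering_factor relative
to the table should reflect it. A quick check I intend to run against
the standard dictionary views (index names as in the sketch above):

    -- A high clustering_factor (approaching num_rows) means entries
    -- for rows in one table block are spread over many index blocks.
    SELECT index_name, leaf_blocks, clustering_factor, num_rows
      FROM user_indexes
     WHERE table_name = 'USAGE_RECORDS';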

Thanks.

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Jan 13 2012 - 09:13:24 CST
