RE: db file sequential read wait event and Insert performance

From: Mark W. Farnham <>
Date: Thu, 28 Feb 2008 15:48:39 -0500
Message-ID: <00eb01c87a4b$4d47b4e0$>


I'm not sure exactly what you mean by having one batch that nonetheless inserts into 15 partitions on one node and 15 partitions on the other node. Possibly this means you have a supervisor reading the batch and directing the inserts to the appropriate one of 30 children. However you're doing that, as a diagnostic (and possibly part of the solution) I suggest configuring your batch so you do all the inserts on one node and its fifteen assigned insert partitions first, and then the other. You've already diagnosed that 25 minutes of the time is spent reading and coordinating insert-driven changes to the indexes.

You've noted that most of the read wait is on the two 3-column indexes, but you didn't mention whether they are local indexes. If they are global indexes, then even though you've separated the table partition activity by node, you'll still have to coordinate the parts of each tree that don't correspond directly to the partitioning. If such global index coordination is gating your throughput, you'll probably see the gc_ stats drop and your overall throughput increase even though you're serializing the batch into two pieces.
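One quick way to settle the local-vs-global question is to check the data dictionary. A sketch (owner and table names here are placeholders, not from the original thread):

```sql
-- For each index on the table, report whether it is locally
-- partitioned, globally partitioned, or not partitioned at all.
-- 'APP_OWNER' and 'YOUR_TABLE' are placeholder names.
SELECT i.index_name,
       NVL(p.locality, 'NOT PARTITIONED') AS locality
FROM   dba_indexes i
       LEFT JOIN dba_part_indexes p
              ON  p.owner      = i.owner
              AND p.index_name = i.index_name
WHERE  i.owner      = 'APP_OWNER'
  AND  i.table_name = 'YOUR_TABLE';
```

If the two 3-column indexes come back GLOBAL or NOT PARTITIONED, the cross-node coordination described above is in play; LOCAL indexes would follow the table-partition split you've already arranged.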

If you have any control over the order of the batch, you might try ordering it by the 3-column index that is dominantly used for subsequent read queries (or, if neither dominates on read, by the one with the longer average key length). That would at least tend to keep cached the parts of the tree you'll need to modify to do the index maintenance for one of the indexes.
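If it's not obvious which index has the longer average key length, a rough estimate can be had from the dictionary statistics. A sketch, assuming an 8 KB block size and placeholder names:

```sql
-- Rough bytes-per-entry estimate for each index: total leaf-block
-- space divided by entry count. Assumes an 8192-byte block size;
-- 'YOUR_TABLE' is a placeholder. Requires current statistics.
SELECT index_name,
       leaf_blocks,
       num_rows,
       ROUND(leaf_blocks * 8192 / NULLIF(num_rows, 0), 1)
         AS est_bytes_per_entry
FROM   dba_indexes
WHERE  table_name = 'YOUR_TABLE'
ORDER  BY est_bytes_per_entry DESC;
```

This overstates true key length (it includes block overhead and free space) but is fine for comparing the two 3-column indexes against each other.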

I'd also be curious what the timing was for completely serializing the 30 partitions. That might tease out some additional thoughts about how to make this as fast as possible.  

Often when partitioning is by a date column the new data (inserts) are skewed toward the younger partitions - is that the case for you?  

Good luck.

From: [] On Behalf Of Harvinder Singh
Sent: Thursday, February 28, 2008 2:30 PM
To: Finn Jorgensen
Subject: RE: db file sequential read wait event and Insert performance  

The indexes are used by other parts of the application, so we can't drop them for this test. Are there any options I can try to reduce these waits?

(Anyhow, if we drop these indexes, inserting the 12M rows takes only 5 min with 250M rows already in the table.)


We are doing benchmark testing of the application, currently focused on an area where we insert about 12M rows into 1 table in 1 batch, committing every 50K rows. The inserts are done by the application using OCI calls. When we started and the tables were empty, the batch took about 6:30 min; now, with the tables holding 200M rows, the batch takes about 30 min (5 times longer).

Following is the configuration

Oracle <>, a 2-node RAC cluster on Linux AS. Each node has 8 CPUs and 32 GB RAM; db_buffers is 12 GB and shared_pool is 3 GB on both machines.

We are using ASM (1 MB default stripe size), ASSM, and local extent management.

The table has 22 columns and 4 indexes: a 2-column primary key, two 3-column indexes, and one single-column index. The table is range partitioned on a date column and currently has 30 partitions, and the inserts go to all 30 partitions. We have already done all the filtering at the application layer so that there is very little inter-node communication: each RAC node inserts into its own 15 partitions.

I checked the wait events: db file sequential read now consumes about 70% of total time, and most of that time is spent on the two 3-column indexes of the table. What options can we try, or what sequence of steps should we follow, to reduce these waits and improve insert performance?
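For attributing the reads to specific segments, one approach is the segment-level statistics view. A sketch (the index names are placeholders for the two 3-column indexes; values accumulate from instance startup, so sample before and after a batch run):

```sql
-- Physical reads per segment, highest first, to confirm which
-- index segments are behind the db file sequential read waits.
-- 'IDX_3COL_A' and 'IDX_3COL_B' are placeholder index names.
SELECT owner,
       object_name,
       subobject_name,   -- partition name, where applicable
       object_type,
       value AS physical_reads
FROM   v$segment_statistics
WHERE  statistic_name = 'physical reads'
  AND  object_name IN ('IDX_3COL_A', 'IDX_3COL_B')
ORDER  BY value DESC;
```

On RAC you'd run this (or query gv$segment_statistics) on each node to see whether the reads are balanced across the two instances.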



-- Received on Thu Feb 28 2008 - 14:48:39 CST
