RE: Import speed falls off at 2 million rows
Date: Thu, 12 Mar 2009 13:31:11 -0400
Extended trace? That should take out the guessing. You might even scan down the raw trace to the section where it starts to trickle, and compare the new (or exacerbated) waits there with system resource consumption from something like sar and iostat. Whatever becomes different at that point should match up with what you are now waiting for, and it MAY be clearer than looking at an aggregated profile: if you aggregate 2 million rows' worth of trace together with the small slice you're willing to capture after it begins to crawl, the root cause of the change will be masked by the accumulation of small, tolerable waits from the beginning of the file. ... and it sounds like you can't wait for the trickle to finish.
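To make that before/after comparison concrete, here is a minimal sketch of totaling wait time by event on each side of the point where the trickle begins. It assumes the standard WAIT line format a level-8 (or 12) 10046 trace emits; the trace fragment and the split point below are made up for illustration:

```python
import re
from collections import defaultdict

# Matches the WAIT lines an extended (10046) trace emits, e.g.:
#   WAIT #3: nam='db file sequential read' ela= 12045 file#=7 block#=...
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela=\s*(\d+)")

def wait_profile(lines, split_line):
    """Sum elapsed wait time (microseconds) per event, separately for
    the trace before and after line number `split_line` -- the point
    where the import starts to trickle."""
    before, after = defaultdict(int), defaultdict(int)
    for lineno, line in enumerate(lines, start=1):
        m = WAIT_RE.search(line)
        if m:
            bucket = before if lineno < split_line else after
            bucket[m.group(1)] += int(m.group(2))
    return dict(before), dict(after)

if __name__ == "__main__":
    # Synthetic fragment standing in for a real ora_*.trc file.
    trace = [
        "WAIT #1: nam='log file sync' ela= 100 ...",
        "WAIT #1: nam='db file sequential read' ela= 200 ...",
        "WAIT #1: nam='db file sequential read' ela= 90000 ...",
        "WAIT #1: nam='free buffer waits' ela= 500000 ...",
    ]
    before, after = wait_profile(trace, split_line=3)
    print(before)  # waits while the import was still fast
    print(after)   # waits once it began to crawl
```

Whatever event dominates the "after" bucket but not the "before" bucket is the thing to line up against the sar/iostat output from the same window.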
Cary et al. talk about this as properly scoping the trace, though in your case it is part of a single statement, not a whole user transaction. Still, the principle is the same. Turning on the trace only once it starts to trickle should accomplish the same thing, if that is something you can do.
Guess 1: the burst of data from your import has overrun the write cache of your array. That is to say, if you compare the size of your write cache to the volume of data you're pushing at it, the cache absorbs everything up through roughly the first 2 million rows; after that, writes have to wait for the physical disks.
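A back-of-the-envelope check on Guess 1: divide the array's write cache size by the per-row write volume and see whether the answer lands near 2 million. All the numbers below are hypothetical; substitute your own cache size and row width (and remember that index, undo, and redo writes inflate the bytes written per row):

```python
def rows_to_fill_cache(cache_bytes, avg_write_bytes_per_row):
    """Roughly how many rows the array's write cache can absorb
    before writes start waiting on the spindles."""
    return cache_bytes // avg_write_bytes_per_row

# Hypothetical figures: a 4 GB write cache and ~2 KB written per row
# (wide rows with two XMLType columns, plus index/undo traffic).
cache = 4 * 1024**3
per_row = 2 * 1024
print(rows_to_fill_cache(cache, per_row))  # 2097152 -- about 2 million
```

If your real numbers come out anywhere in that neighborhood, Guess 1 gets a lot more interesting.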
Guess 2: whatever indexes you're maintaining have reached enough levels, with frequent enough leaf- and branch-block splits, that the overhead of inserting into the indexes jumps in significance at that point.
Guess 3: a combination of 1 and 2, plus other side effects of crossing the boundary from a cache that is faster than your CPU-memory cycles can feed it to spinning rust that is much slower than your CPU-memory cycles can feed it.
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of David Cheyne
Sent: Thursday, March 12, 2009 10:48 AM
Subject: Import speed falls off at 2 million rows
I'm trying to import a table with about 8 million rows into Oracle 184.108.40.206 EE on Solaris 10.
The import starts as you would expect but seems to hit a wall at about 2 million rows, and the rows seem to trickle in after that point. I've tried exporting the data again, expanding the datafiles beforehand, and increasing the redo log size. The only unusual thing about this table is that it contains two columns of an XML type which are populated. Archiving is switched off and no users or developers are logged in.
Odd spacing and typos courtesy of my iPhone
http://www.freelists.org/webpage/oracle-l Received on Thu Mar 12 2009 - 12:31:11 CDT