RE: Import speed falls off at 2million rows
Date: Thu, 12 Mar 2009 13:43:59 -0400
I will chime in here to say that I have more than once experienced slowing imports *when indexes already existed on the tables*, for the likely reasons described by Mark in Guess 2. If you're importing with indexes in place, I'd guess that this is the most likely explanation. Drop them, import, and re-create after import.
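For what it's worth, a minimal sketch of that drop/re-create cycle (the table and index names BIG_TAB / BIG_TAB_IX1 are placeholders, not from the original post):

```sql
-- Capture the existing definition before dropping (10g+):
SELECT DBMS_METADATA.GET_DDL('INDEX', 'BIG_TAB_IX1') FROM dual;

DROP INDEX big_tab_ix1;

-- ... run the import here ...

-- Re-create afterwards; NOLOGGING and PARALLEL can speed the rebuild,
-- at the cost of the index creation not being recoverable from redo.
CREATE INDEX big_tab_ix1 ON big_tab (col1)
  NOLOGGING PARALLEL 4;
```

A one-pass index build over the fully loaded table is generally far cheaper than millions of one-row index maintenance operations during the import.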
Prime Services Databases Americas
One Madison Avenue
New York, NY 10010
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Mark W. Farnham
Sent: Thursday, March 12, 2009 1:31 PM
To: david.cheyne_at_gmail.com; 'oracle-l'
Subject: RE: Import speed falls off at 2million rows
Extended trace? That might tend to remove the guessing. You might even drill
down in the raw trace to the section where it starts to only trickle,
and compare the new (or exacerbated) waits to system resource consumption
and waits from something like sar and iostat. What becomes different at that
point should match up with what you are now waiting for, and it MAY be clearer than observing the aggregation with a tool. Of course, if you aggregate 2 million rows' worth of trace together with the short stretch after it begins to crawl, then
the root cause of the change will be masked by the accumulation of small, tolerable waits at the beginning of the file. ... and it sounds like you can't wait for the trickle to finish.
Cary, et al., talk about this as properly scoping the trace, though in this
case the scope is part of a single statement rather than a whole user transaction. Still,
the principle is the same. Turning on the trace only when it starts to only
trickle should accomplish the same thing, if that is something you can do.
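Turning the trace on only at the trickle might look something like this (the SID, serial#, and username are placeholders to look up in v$session; DBMS_MONITOR is the 10g-and-later interface for tracing another session):

```sql
-- Find the import session first:
SELECT sid, serial#, program FROM v$session WHERE username = 'IMPORT_USER';

-- waits => TRUE gives the extended (10046 level 8) wait data:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(sid => 123, serial_num => 456, -
       waits => TRUE, binds => FALSE);

-- ... let it run for a few minutes of trickle ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(sid => 123, serial_num => 456);
```

The resulting trace file then contains only the slow phase, so the dominant waits are not diluted by the fast first 2 million rows.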
Guess 1: the data size burst of your import has overtaken the write cache of
your array. That is to say, if you compare the size of your write cache to
the size of the data you're pushing at it, the cache absorbs what you're pushing at
it up through the first 2 million rows; after that you're writing at disk speed.
Guess 2: Whatever indexes you're using reach enough levels and enough
frequent leaf and branch block splits that the overhead of inserting into the
indexes jumps in significance at that point.
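If Guess 2 is in play, a couple of quick checks (the owner and table names are placeholders):

```sql
-- BLEVEL shows index depth; compare before and during the slow phase.
SELECT index_name, blevel, leaf_blocks
  FROM dba_indexes
 WHERE table_owner = 'SCOTT' AND table_name = 'BIG_TAB';

-- A sharp rise in these statistics during the trickle points at
-- block-split overhead.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('leaf node splits', 'branch node splits');
```

Note that dba_indexes reflects the last statistics gathering, so a fresh GATHER or a look at v$sysstat deltas is more telling mid-run.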
Guess 3: A combination of 1 and 2, plus other side effects of crossing the
boundary from cache speeds that exceed what cpu-memory cycles can
process to spinning rust that is much slower than what cpu-memory
cycles can process.
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of David Cheyne
Sent: Thursday, March 12, 2009 10:48 AM
Subject: Import speed falls off at 2million rows
I'm trying to import a table with about 8 million rows into Oracle 220.127.116.11 EE on Solaris 10.
The import starts as you would expect but seems to hit a wall at about 2 million rows, and the rows trickle in after that point. I've tried exporting the data again, expanding the datafiles beforehand, and increasing the redo log size. The only unusual thing about this table is that it contains 2 columns of an XML type which will be populated. Archiving is switched off, and no users or developers are logged in.
Odd spacing and typos courtesy of my iPhone
http://www.freelists.org/webpage/oracle-l
Received on Thu Mar 12 2009 - 12:43:59 CDT