RE: Import speed falls off at 2 million rows

From: Baumgartel, Paul <paul.baumgartel_at_credit-suisse.com>
Date: Thu, 12 Mar 2009 13:43:59 -0400
Message-ID: <21469B88E0EA11498818517F210335310C03C939_at_EPRI17P32001A.csfb.cs-group.com>



I will chime in here to say that I have more than once experienced slowing imports *when indexes already existed on the tables*, for the likely reasons described by Mark in Guess 2. If you're importing with indexes in place, I'd guess that this is the most likely explanation. Drop them, import, and re-create after import.
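For what it's worth, a rough sketch of that drop/import/re-create sequence (the schema, table, index, and column names here are hypothetical placeholders for your own objects, and options like NOLOGGING/PARALLEL are suggestions, not requirements):

```sql
-- 1. Capture the existing index definitions before dropping them,
--    e.g. via DBMS_METADATA (available in 9i):
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner)
FROM   dba_indexes
WHERE  owner = 'MY_SCHEMA' AND table_name = 'MY_TABLE';

-- 2. Drop the indexes, then run the import with no indexes in place.
DROP INDEX my_schema.my_table_ix1;

-- 3. Re-create afterwards; NOLOGGING and PARALLEL can speed up the
--    rebuild considerably (remember to back up afterwards if you
--    rely on NOLOGGING).
CREATE INDEX my_schema.my_table_ix1
  ON my_schema.my_table (some_col)
  NOLOGGING PARALLEL 4;
```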

Paul Baumgartel
CREDIT SUISSE
Information Technology
Prime Services Databases Americas
One Madison Avenue
New York, NY 10010
USA
Phone 212.538.1143
paul.baumgartel_at_credit-suisse.com
www.credit-suisse.com

-----Original Message-----

From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Mark W. Farnham
Sent: Thursday, March 12, 2009 1:31 PM
To: david.cheyne_at_gmail.com; 'oracle-l'
Subject: RE: Import speed falls off at 2 million rows

Extended trace? That might tend to remove the guessing. You might even scan
down in the raw trace to the section where the import starts to trickle,
and compare the new (or exacerbated) waits against system resource
consumption and waits from something like sar and iostat. Whatever becomes
different at that point should match up with what you are now waiting for,
and it MAY be clearer than observing the aggregation with a tool. Of
course, if you aggregate 2 million rows' worth of trace together with
however much of the slow phase you're willing to wait for, the root cause
of the change will be masked by the accumulation of small, tolerable waits
at the beginning of the file... and it sounds like you can't wait for the
trickle to finish.

Cary et al. talk about this as properly scoping the trace, though in your
case it is part of a single statement, not a whole user transaction. Still,
the principle is the same. Turning on the trace only when the import starts
to trickle would accomplish the same scoping, if that is something you can
do.
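The scoped-trace idea above can be sketched roughly as follows. The SID and serial# values are hypothetical placeholders (look them up in V$SESSION first), and the availability of DBMS_SYSTEM and the required privileges vary by site:

```sql
-- In the importing session itself, once the trickle begins:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- Or, for another session (e.g. the imp process), via DBMS_SYSTEM;
-- arguments are sid, serial#, event, level, name (placeholders here):
EXEC SYS.DBMS_SYSTEM.SET_EV(123, 456, 10046, 12, '');

-- Turn it off once enough of the slow phase has been captured:
ALTER SESSION SET EVENTS '10046 trace name context off';
```

Level 12 captures both wait events and bind values; level 8 (waits only) may suffice and produces a smaller trace file.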

Guess 1: the data-size burst of your import has overtaken the write cache
of your array. That is to say, if you compare the size of your write cache
to the size of the data you're pushing at it, the cache absorbs everything
you push through the first 2 million rows; after that, writes have to wait
for physical I/O.

Guess 2: whatever indexes you're using have reached enough levels, with
frequent enough leaf and branch block splits, that the overhead of
inserting into the indexes jumps in significance at that point.
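Guess 2 can be checked while the import crawls; a rough sketch, with hypothetical schema/table names and SID (substitute your own from V$SESSION):

```sql
-- Index height and size: a rising BLEVEL suggests the tree is
-- deepening as rows pour in (statistics must be reasonably fresh).
SELECT index_name, blevel, leaf_blocks
FROM   dba_indexes
WHERE  owner = 'MY_SCHEMA' AND table_name = 'MY_TABLE';

-- Split activity for the importing session: the "leaf node splits"
-- and "branch node splits" statistics show it directly.
SELECT n.name, s.value
FROM   v$sesstat  s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  s.sid = 123                  -- hypothetical SID of the imp session
AND    n.name LIKE '%node splits%';
```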

Guess 3: a combination of 1 and 2, plus other side effects of crossing the
boundary from cache speeds (which exceed what the CPU-memory cycles can
process) to spinning rust (which is much slower than what the CPU-memory
cycles can process).

-----Original Message-----

From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of David Cheyne
Sent: Thursday, March 12, 2009 10:48 AM
To: oracle-l
Subject: Import speed falls off at 2 million rows

Hi list!

I'm trying to import a table with about 8 million rows into Oracle 9.2.0.6 EE on Solaris 10.

The import starts as you would expect, but seems to hit a wall at about 2 million rows; the rows only trickle in after that point. I've tried exporting the data again, expanding the datafiles beforehand, and increasing the redo log size. The only unusual thing about this table is that it contains two columns of an XML type, which will be populated. Archiving is switched off and no users or developers are logged in.

Any ideas?

David Cheyne
B.A.(hons)

Oracle DBA

Odd spacing and typos courtesy of my iPhone
--

http://www.freelists.org/webpage/oracle-l






--

Received on Thu Mar 12 2009 - 12:43:59 CDT
