Re: Slooow Import

From: David Platt <plattd_at_gov.on.ca>
Date: 1996/04/18
Message-ID: <4l5mor$gte_at_govonca3.gov.on.ca>#1/1


jmjm_at_hogpf.ho.att.com (-J.MIKLEWICZ) wrote:

>I am trying to import a table with > 200,000 rows. The import takes about
>2 hours. I am using the "commit = y" option to import because without it
>I run out of rollback. The progress of import with commit = y is
>CONSIDERABLY slower. I also noticed that none of my rollback segments
>were extending past their optimum setting of 200K so I tried increasing
>the buffer size.
 

>The manual says that commit = y specifies that import should commit
>after each array insert. The buffer option states that the BUFFER
>(buffer size) parameter determines the number of rows in the array
>inserted by import. It also says that the default buffer size is
>documented in my platform dependent doc, but I've been unable to locate
>it in the doc. I've tried setting buffer to 2000000 and 4000000 with no
>increase in speed or rollback usage. Are these numbers so high that
>import ignores them, maybe?
 

>It seems to me that based on the doc increasing the buffer size should
>decrease the number of commits performed and should therefore speed
>things up at least a little. Am I on the wrong track here? If so what's
>the right track, other than adding more rollback?
 

>Thanks,
 

>--

If you have the disk space, you can try creating a tablespace with a single rollback segment large enough to handle your entire import, and take your other rollback segments offline so the import is forced onto the big one. After your import is finished, blow away the new rollback segment and its tablespace.
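A rough sketch of what that looks like in SQL*Plus. All names, sizes, and file paths here are assumptions; adjust them for your own database:

```sql
-- One big rollback segment in its own tablespace, just for the import:
CREATE TABLESPACE rbs_big
    DATAFILE '/u02/oradata/rbs_big01.dbf' SIZE 500M;

CREATE ROLLBACK SEGMENT rbs_import
    TABLESPACE rbs_big
    STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 2 MAXEXTENTS 50);

ALTER ROLLBACK SEGMENT rbs_import ONLINE;

-- Take the regular rollback segments offline so the import
-- has to use the big one (rbs01/rbs02 are placeholder names):
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
ALTER ROLLBACK SEGMENT rbs02 OFFLINE;

-- ... run the import ...

-- Afterwards, reverse everything:
ALTER ROLLBACK SEGMENT rbs_import OFFLINE;
DROP ROLLBACK SEGMENT rbs_import;
DROP TABLESPACE rbs_big INCLUDING CONTENTS;
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
ALTER ROLLBACK SEGMENT rbs02 ONLINE;
```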

The buffer option usually speeds this process up for me.
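To see why BUFFER matters with COMMIT=Y, remember that import commits once per array insert, and the array holds roughly BUFFER / average-row-length rows. A back-of-the-envelope calculation (row sizes here are made-up numbers, not from the original post):

```python
def commits_needed(total_rows, buffer_bytes, avg_row_bytes):
    """Approximate number of array inserts (= commits with COMMIT=Y)."""
    rows_per_array = max(1, buffer_bytes // avg_row_bytes)
    # One commit per full or partial array (ceiling division):
    return -(-total_rows // rows_per_array)

# 200,000 rows at ~100 bytes each:
print(commits_needed(200_000, 4_000_000, 100))  # BUFFER=4000000 -> 5 commits
print(commits_needed(200_000, 64_000, 100))     # small buffer   -> 313 commits
```

So a bigger BUFFER really should mean far fewer commits; if the speed and rollback usage don't change at all, the parameter probably isn't taking effect as set.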

The import with the indexfile= option will only generate the SQL statements required to recreate your objects; it will not actually import anything. Recreating the indexes from that script after doing an import with INDEXES=N should also speed things up.
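As a command-line sketch of that two-pass approach (the dump file name and scott/tiger login are placeholders):

```
rem Pass 1: write the CREATE statements to a script; imports no data.
imp scott/tiger file=expdat.dmp full=y indexfile=make_indexes.sql

rem Pass 2: load the rows with no index maintenance.
imp scott/tiger file=expdat.dmp full=y indexes=n commit=y buffer=4000000

rem Then build the indexes in one pass from the generated script
rem (edit it first to remove the REM-commented lines import writes out).
sqlplus scott/tiger @make_indexes.sql
```

Maintaining indexes row-by-row during the load is much slower than building them once at the end, which is where most of the savings come from.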
