Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Why import slow

Re: Why import slow

From: Tanel Poder <tanel_at_@peldik.com>
Date: Sun, 16 Feb 2003 18:46:49 +0200
Message-ID: <3e4fbfdf$1_1@news.estpak.ee>


Hello!

There can be several hundred thousand SQL scripts that have to be run during an Oracle Applications upgrade. Each of these (mostly small) scripts ends with a commit; the scripts also contain a lot of DDL, yet the changes are still explicitly committed at the end, probably because all the scripts are generated from the same template.

So, a lot of commits.

I've usually set _wait_for_sync=false and the log buffer to 40-50 MB during the numerous test upgrades, and even on some production upgrades where downtime had to be kept to an absolute minimum. Of course, the parameter is removed before the new system goes live.
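For reference, the settings described above would look roughly like this in the instance's init.ora (the values are the ones mentioned in this post, not general recommendations; _wait_for_sync is an undocumented parameter, so verify its behavior on your own version first):

```
# init.ora fragment for the duration of the upgrade ONLY --
# remove before the system goes live (instance crash = restore from backup)
_wait_for_sync = false      # commits no longer wait for LGWR to flush redo
log_buffer     = 50331648   # ~48 MB redo log buffer, as suggested above
```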

Of course, there's a risk: if your instance crashes, you have probably corrupted your database; if the instance stays up, no problem. If a session crashes, no problem either. However, don't try it if you don't understand the consequences this parameter can introduce. *Never set it on a running production instance!*

So, if you are just populating a fresh new database with this import (not doing a regular data-transfer task on production), you could try it and see whether you get any improvement. You should; I got about a 50% performance improvement with those heavily committing scripts.

So, if you want to keep commit=y and still achieve high import speed, study _wait_for_sync.
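A typical invocation keeping commit=y might look like the following (the password, dump file name, and schema names are placeholders for your own environment; the large buffer reduces the number of array inserts and hence commits):

```shell
# Hypothetical import run -- file, log, fromuser/touser are placeholders.
imp system/manager file=appsdata.dmp log=appsdata_imp.log \
    fromuser=APPS touser=APPS commit=y buffer=10485760
```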

And last, it's unsupported, since it bypasses Oracle's super-foolproof durability mechanisms ;)

Tanel.
More on those issues in my presentation "Keep Downtime Short on 11i Migration" @ IOUG Live! 2003
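The LogMiner check suggested in the quoted message below could be sketched roughly like this in SQL*Plus (the redo log file name is a placeholder; DICT_FROM_ONLINE_CATALOG requires 9i or later):

```sql
-- Sketch only: register one redo log and count COMMIT records per session
-- to see how often the import actually committed.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/oradata/redo01.log',  -- placeholder
                          Options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT session#, COUNT(*) AS commits
FROM   v$logmnr_contents
WHERE  operation = 'COMMIT'
GROUP  BY session#;

EXECUTE DBMS_LOGMNR.END_LOGMNR
```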

"koert54" <nospam_at_nospam.com> wrote in message news:b8BY9.6880$Jd.960_at_afrodite.telenet-ops.be...
> From the manual - read the last line:
>
>

> http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76955/ch02.htm#40480
> COMMIT
> Default: N
>
> Specifies whether Import should commit after each array insert. By default,
> Import commits only after loading each table, and Import performs a rollback
> when an error occurs, before continuing with the next object.
>
> If a table has nested table columns or attributes, the contents of the
> nested tables are imported as separate tables. Therefore, the contents of
> the nested tables are always committed in a transaction distinct from the
> transaction used to commit the outer table.
>
> If COMMIT=N and a table is partitioned, each partition and subpartition in
> the Export file is imported in a separate transaction.
>
> Specifying COMMIT=Y prevents rollback segments from growing inordinately
> large and improves the performance of large imports. Specifying COMMIT=Y is
> advisable if the table has a uniqueness constraint. If the import is
> restarted, any rows that have already been imported are rejected with a
> nonfatal error.
>
> If a table does not have a uniqueness constraint, Import could produce
> duplicate rows when you reimport the data.
>
> ****For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, DATE, or
> type columns, array inserts are not done. If COMMIT=Y, Import commits these
> tables after each row. ****
>
>
>
> -> I remember reading on Metalink that it had to be a bug in the
> documentation - nobody believed DATE had this kind of effect on the
> import process. However, it's still there in the 9iR2 docs. I've never
> experienced slow imports with date columns - I did, however, have serious
> performance problems importing LONGs and LOBs when I specified commit=Y.
>
> The only way to be certain is to use LogMiner and check the logfiles - but
> I believe the docs for now ...
>
>
>
> "Sybrand Bakker" <gooiditweg_at_nospam.demon.nl> wrote in message
> news:77m53v4to8sju1hcdl63tamk6ie86gu19c_at_4ax.com...
> > On Sat, 25 Jan 2003 15:32:11 GMT, "koert54" <nospam_at_nospam.com> wrote:
> >
> > >If I'm not mistaken - import performs a commit after each record if a
> > >table contains a long column, even if you specify a large buffer ...
> >
> >
> > Just curious: do you have any pertinent evidence? I'm not trying to
> > contradict you, but to my knowledge commit=n is a big no-no.
> >
> > Regards
> >
> >
> > Sybrand Bakker, Senior Oracle DBA
> >
> > To reply remove -verwijderdit from my e-mail address
>
>
Received on Sun Feb 16 2003 - 10:46:49 CST