Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: What is the best way to copy 2G table data between two databases

From: Burt Peltier <burttemp1ReMoVeThIs_at_bellsouth.net>
Date: Tue, 17 Feb 2004 22:59:43 -0600
Message-ID: <ohCYb.22618$kR3.16934@bignews4.bellsouth.net>


Assuming huge really is huge.

Use constraints=y at export time when you do the export of indexes (#2).

Also, use constraints=n when doing (#3).
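For the split between steps #2 and #3 quoted below, the export commands might look something like this. This is only a sketch from memory of 8i-era exp: the account, schema name (SCOTT) and file names are placeholders, and you should check the parameter list against your own release.

```shell
# Step #2: definitions for indexes and (per the advice above) constraints,
# no row data.  Username/password, owner and file names are placeholders.
exp system/manager owner=SCOTT rows=n indexes=y constraints=y file=indexes.dmp

# Step #3: row data without indexes or constraints, so the import is not
# slowed down maintaining them row by row.
exp system/manager owner=SCOTT rows=y indexes=n constraints=n \
    direct=y compress=n file=data.dmp
```

At import time the mirror image applies: load data.dmp first, then indexes.dmp.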

One more thing to speed things up: turn off archivelog mode during the imports, especially when importing the indexes and constraints.
- When in noarchivelog mode, I saw the import of indexes perform the same as passing the NOLOGGING parameter on an individual CREATE INDEX statement.
- Something about the database being in noarchivelog mode changes the default from logging to nologging for indexes .... at least it worked that way in 8i.
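You can check which mode the target is in before starting the import. A sketch, assuming a DBA account (the system/manager credentials are placeholders; the v$database view and its LOG_MODE column exist in 8i and later):

```shell
# Report the current log mode; expect ARCHIVELOG or NOARCHIVELOG.
sqlplus -s system/manager <<'EOF'
SELECT log_mode FROM v$database;
EOF
```

Switching to noarchivelog requires a bounce through mount state (shutdown immediate; startup mount; alter database noarchivelog; alter database open), so plan it for a window when the target is not otherwise in use, and switch back afterwards.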

Not sure I would use direct=y. This is debatable, but I like to lean toward the safe side, even though direct=y definitely speeds things up.

I have used the "disk-less" method before too and it is slick.
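The "disk-less" idea can be sketched with generic tools: a named pipe lets gzip compress the export stream on the fly, so the full-size dump never lands on disk. In the sketch below printf stands in for exp (which would be pointed at the pipe with file=exp_pipe), and the compressed stream would normally be sent over the network to the target rather than to a local file:

```shell
#!/bin/sh
# Sketch of the diskless transfer: the writer ("exp") feeds a named
# pipe, and gzip compresses the stream as it flows through.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkfifo exp_pipe

# Reader side: compress whatever comes through the pipe.
gzip -c < exp_pipe > dump.dmp.gz &

# Writer side: in real use this would be
#   exp system/manager ... file=exp_pipe
printf 'fake export stream\n' > exp_pipe
wait

# Target side: imp would read from a pipe fed by gunzip -c.
restored=$(gunzip -c dump.dmp.gz)
echo "$restored"
```

On the target you would mirror this: gunzip -c writes into a second named pipe, and imp reads it with file= pointing at that pipe.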

-- 
"Rick Denoire" <100.17706_at_germanynet.de> wrote in message
news:3d4530t8e53eap7omko42nnprs5h6hsu3h_at_4ax.com...

> "Burt Peltier" <burttemp1ReMoVeThIs_at_bellsouth.net> wrote:
>
>
> >I use exp/imp a LOT and have used the technique I listed below MANY times.
>
> You seem to be very experienced with exp/imp. Please comment on the
> following method; this is not exactly about the question asked
> originally, but refers to a huge export of different schemas:
>
> 1) Do a structure export (rows=n) and import the dump into the target
> DB.
> 2) Do an export of indexes.
>
> 3) Start the export (direct=y compress=n) without indexes, schema-wise in
> parallel, directing the output into a pipe, from which the dumps would
> be compressed and sent to the target DB on the fly (compression factor
> about 30 in my case). The target DB would load the dumps in parallel
> too.
> 4) Import indexes.
>
> The whole process is diskless.
>
> I am actually preparing to do just that.
>
> >- Note1: If you do not have fast local disk, then this would not work
> >anywhere near as well as I have seen.
>
> No disk issue with my method.
>
> Bye
> Rick Denoire
Received on Tue Feb 17 2004 - 22:59:43 CST

