Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Migrate 200 Million of rows

From: Ryan Gaffuri <rgaffuri_at_cox.net>
Date: 25 Jul 2003 09:14:36 -0700
Message-ID: <1efdad5b.0307250814.4926a220@posting.google.com>


vslabs_at_onwe.co.za (Billy Verreynne) wrote in message news:<1a75df45.0307242338.3173ad9a_at_posting.google.com>...
> nuno.tavares_at_ams.com (Nuno) wrote in
>
> > I would like to migrate 200 million records. Furthermore, my new table
> > will have more fields than the old one. The new table will be in a
> > different database, therefore a link will be used as well.
> <snipped>
>
> A slightly different take (and exactly what I'm doing at the moment
> - also moving several 50+ million row partitions from one db to another).
>
> The criteria:
> I want to move compressed (zipped) data across the network. Network
> lag is a big problem (it can take 24 hours to move 10 GB across).
> Worse, the other db is not even on the same segment. And there's
> _nothing_ I can do about the network issues (besides bitching which I
> do often ;-).
>
> Problem:
> I can not unload on the source platform. Insufficient file system
> space for an export (or a custom CSV unload). Even a compressed one.
>
> Solution:
> NFS mount the destination platform file system on the source system.
> On the source system do a:
> # mkfifo xpipe
> # compress <xpipe >/dest_system_nfs_mount/part1.dmp.Z&
> # exp file=xpipe parfile=part1.par
>
> The trick is that instead of pushing the exported data across in
> uncompressed format, the compressor runs locally on the source system.
> It then writes the compressed export data to the NFS mount. Sure,
> there is some overhead in using NFS, but that is insignificant against
> the fact that I'm moving compressed data across the (very slow)
> network from the source platform to destination platform.
>
> If NFS is not available, FTP can also be used instead. Have the compressor
> writing into pipe ypipe. Start the export also in the background. Now
> start up FTP and connect to the destination system and FTP the
> contents of the compressed pipe:
> # ftp dest
> .. enter userid & password
> cd /pub/uploads
> type image
> put ypipe part1.dmp.Z
> ...
> bye
>
>
> The only problem with using FTP is that you increase the risk of a
> process failing (2 pipes + 3 processes versus 1 pipe + 2 processes).
> If a process fails, a pipe will break and that will take the whole
> thing down. Kind of a NOLOGGING/UNRECOVERABLE op.
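The quoted named-pipe trick can be exercised without Oracle by letting any command stand in for exp as the producer. A minimal sketch — gzip, the sample data, and the file names here are illustrative stand-ins, not necessarily what the poster used:

```shell
#!/bin/sh
# Named-pipe compression pattern: a producer writes uncompressed data
# into a FIFO while a compressor reads from it in the background and
# writes the compressed stream to the destination path (in the real
# recipe, a path on the NFS-mounted destination file system).
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Sample "export" data; in the real case exp generates this.
seq 1 10000 > source.dat

# 1. Make the pipe.
mkfifo xpipe

# 2. Start the compressor in the background, reading from the pipe.
#    (The recipe uses compress; gzip shown here as a stand-in.)
gzip < xpipe > part1.dmp.gz &

# 3. Run the producer, writing into the pipe.
#    The real command would be: exp file=xpipe parfile=part1.par
cat source.dat > xpipe
wait

# Verify the round trip: decompress and compare with the original.
gunzip -c part1.dmp.gz | cmp - source.dat && echo "round trip OK"
```

The key property is that no uncompressed copy of the export ever lands on disk: only the compressed stream crosses the (slow) network path.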

We push 20-80 GB of data at a time, but have better bandwidth than you. We found that zip only works on files up to a certain size, and that tar or compress is too slow.

What do you use to do your compression?

BTW, you may get network improvements if it's possible to 'packetize' your file, i.e. break it up into a bunch of smaller files and reassemble them at the end. Yes, it's the same total bandwidth, but it will pass through the routers a lot faster.
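The packetize idea can be sketched with stock Unix tools: split the compressed dump into fixed-size chunks, transfer the chunks (possibly several FTP sessions in parallel), and reassemble with cat. The chunk size and file names below are illustrative:

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the compressed export dump.
seq 1 200000 | gzip > part1.dmp.gz

# Break the dump into fixed-size chunks named part1.chunk.aa,
# part1.chunk.ab, ... (64 KB here; pick whatever suits the network).
split -b 65536 part1.dmp.gz part1.chunk.

# ... transfer the chunks to the destination, e.g. in parallel ...

# On the far side: reassemble in suffix order (the glob sorts
# lexicographically, which matches split's aa, ab, ... suffixes)
# and verify against the original. In practice, compare a checksum
# (cksum/md5) computed on each side instead of cmp.
cat part1.chunk.* > rebuilt.dmp.gz
cmp part1.dmp.gz rebuilt.dmp.gz && echo "reassembly OK"
```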

Don't know if it's practical if you're sending executable files or .dbfs. Just thought I'd point it out.

Received on Fri Jul 25 2003 - 11:14:36 CDT

