c.d.o.server -> Re: Migrate 200 Million of rows
rgaffuri_at_cox.net (Ryan Gaffuri) wrote:
> we push 20-80 GBs of data at a time but have better bandwidth than
> you. We found that Zip only works on files up to a certain point and
> TAR or Compress is too slow.
Strange...
> What do you use to do your compression?
compress/uncompress. As I understand it, compress works on a fixed buffer/block size at a time. So the pipeline works something like this:
- export writes bytes into the pipe
- compress reads a block of x bytes from the pipe
- the block is compressed
- the compressed block is written to the NFS mount
Thus file size plays no role in the effectiveness of the compression.
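A minimal sketch of that pipe scheme, with gzip standing in for compress so it runs on any modern system; the pipe and file names are placeholders, not actual paths from the original setup:

```shell
# Create a named pipe; the exporter writes into it, the compressor
# reads from it, so the full uncompressed file never hits disk.
rm -f /tmp/exp_pipe
mkfifo /tmp/exp_pipe
gzip -c < /tmp/exp_pipe > /tmp/expdat.dmp.gz &   # reader: compress block by block
printf 'rows rows rows' > /tmp/exp_pipe          # writer: stand-in for exp
wait                                             # wait for gzip to drain the pipe
gzip -dc /tmp/expdat.dmp.gz                      # verify the round trip
```

With a real export you would point exp's FILE parameter at the pipe; the compressor only ever sees a stream of blocks, which is why the total file size is irrelevant to it.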
BTW, compress uses an adaptive Lempel-Ziv coding scheme (LZW), a relative of the Lempel-Ziv variant used by WinZip/pkzip.
> btw, you may get network improvements if it's possible to 'packetize'
> your file, i.e. break it up into a bunch of smaller files and reassemble
> at the end. Yes, it's the same total bandwidth, but it will pass through
> the routers a lot faster.
Does not make sense. At the IP packet level, there is no concept of a file. Actual IP packet size, however, does play a critical role. So whether you transmit many small files serially or a single large file simply does not matter.
Pushing a file through in parallel can be faster if there is sufficient bandwidth (check the FTP RFC for the REST command).
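A rough sketch of the parallel-chunk idea: split the file, push each piece as its own transfer, and reassemble on the far side. Here a plain cp into a local directory stands in for the actual FTP/scp step, and every path is invented for the demo:

```shell
# Stage a stand-in "export file" to be chunked and shipped.
mkdir -p /tmp/send /tmp/recv
printf 'a big export file' > /tmp/send/bigfile.dmp
cd /tmp/send
split -b 5 bigfile.dmp chunk_            # tiny 5-byte chunks, just for the demo
for f in chunk_*; do
    cp "$f" /tmp/recv/ &                 # one transfer per chunk, in parallel
done
wait                                     # block until every transfer completes
# Receiver reassembles; split's suffixes (aa, ab, ...) sort correctly.
cat /tmp/recv/chunk_* > /tmp/recv/bigfile.dmp
cmp /tmp/send/bigfile.dmp /tmp/recv/bigfile.dmp && echo OK
```

Reassembly relies on split's lexicographically ordered suffixes, so a plain glob restores the original byte order. Note this only buys speed when the link has spare bandwidth per stream; it does nothing for the routers themselves, per the point above.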
> don't know if it's practical if you're sending executable files or .dbfs.
Data is data is data, whether it comes from an executable, a binary datafile, or a text file. At the IP level, that does not play any role.
--
Billy

Received on Mon Jul 28 2003 - 01:15:08 CDT