
Re: Unloading more than 2Gb

From: <graemel_at_enertel.nl>
Date: 1997/09/10
Message-ID: <873884321.30622@dejanews.com>

One way to avoid writing too much code is to use a named pipe: spool your SQL*Plus output into the pipe, read the results back out of it with compress or gzip, and send the compressed output to a file.

Here's what you do:

In UNIX, create a named pipe, e.g.:

mknod sqlplus.pyp p

In your SQL*Plus script, have the spool command set to the following:

spool sqlplus.pyp

Start the script up. It should then start spooling output into the pipe and will seem to "hang". You could run this in the background.
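
For reference, a minimal spool script might look something like the following sketch (the SET options are only illustrative, and the EMP table and its columns are placeholders for your own):

set heading off
set feedback off
set pagesize 0
set linesize 200
set trimspool on
spool sqlplus.pyp
select empno || ',' || ename || ',' || sal from emp;
spool off
exit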

To start emptying the pipe, you need to read from it in another UNIX session by running the following (or a similar) command:

compress < sqlplus.pyp > output.dat.Z &

This takes the output of your SQL*Plus script (i.e., the pipe), passes it through compress (or any other compression tool you may have, such as gzip) and redirects the result to a file named output.dat.Z.

To find out whether you have enough disk space before running the above, i.e., to check that the compression ratio is good enough, you could first pipe the compressed output into the word count utility "wc", as in:

compress < sqlplus.pyp | wc -c

This prints the number of bytes that output.dat.Z would occupy. Note that this trial run drains the pipe, so you will have to run the SQL*Plus script again for the real unload.

To get data out of the compressed file, you merely do the reverse:

uncompress < output.dat.Z > sqlplus.pyp &

Then, with something like SQL*Loader, you can read from the named pipe and do whatever you need to with the data.
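
As a rough sketch, a control file for that could point its INFILE straight at the pipe (the control file name, target table, columns and the scott/tiger login below are just placeholders):

load data
infile 'sqlplus.pyp'
into table emp_copy
fields terminated by ','
(empno, ename, sal)

and the load itself would be started with something like:

sqlldr userid=scott/tiger control=load.ctl

SQL*Loader simply treats the pipe as an ordinary input file and reads until the uncompress process closes its end.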

Of course, this only works if your data compresses well and there is enough disk space. If at all possible, the best solution is not to create an output file at all: simply pipe the query output straight into the SQL*Loader process.
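
A rough outline of that, assuming the spool script shown earlier is saved as unload.sql and the same placeholder control file is used, would be:

mknod sqlplus.pyp p
sqlplus -s scott/tiger @unload.sql &
sqlldr userid=scott/tiger control=load.ctl

No compression is needed here because nothing ever lands on disk; the rows go straight from the query into the loader.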

If you have to load the data onto a different machine, the best way is to create two pipes, one on each machine: spool the SQL*Plus output into the pipe on the source node, read from the pipe on the target node with SQL*Loader, and kick off a remote copy command ("rcp") from the source pipe to the target pipe!
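
Roughly, with host names, logins and paths as placeholders (and noting that how rcp handles the pipes can vary between systems, so test it on a small table first):

On the target node:
mknod sqlplus.pyp p
sqlldr userid=scott/tiger control=load.ctl &

On the source node:
mknod sqlplus.pyp p
sqlplus -s scott/tiger @unload.sql &
rcp sqlplus.pyp target_host:sqlplus.pyp

The control file on the target node must, of course, point its INFILE at the target pipe.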

By now you've probably realised that you can apply the same method to a concurrent export and import: export the table into the source pipe, import the table from the target pipe, and perform a remote copy between them.

The above assumes you have network connectivity.
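
A sketch of that variant (logins, table name and host name again being placeholders):

On the target node:
mknod imp.pyp p
imp scott/tiger file=imp.pyp full=y &

On the source node:
mknod exp.pyp p
exp scott/tiger file=exp.pyp tables=emp &
rcp exp.pyp target_host:imp.pyp

The export stream then flows through the two pipes straight into import, without an intermediate dump file on either side.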

Hoping to have been of service,

Graeme

