Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Performance of UTL_FILE package

From: David T. Bath <david.bath_at_nre.vic.gov.au>
Date: Thu, 25 Jun 1998 10:35:10 +0000
Message-ID: <359227DE.1BC61760@nre.vic.gov.au>


Niklas Iveslatt wrote:
>
> Hi,
>
> I was wondering if anyone has had any experience in using a stored procedure
> to select a massive amount of data and dump it to a file using
> the UTL_FILE package....
>
> I would appreciate any input,
>
> Niklas

UTL_FILE has severe limitations (at least on UNIX boxes and Oracle 7): it is constrained to the directories listed in "init.ora", and it will not follow subdirectories unless each one is explicitly listed.
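For reference, that init.ora restriction looks something like the following — the paths are only illustrative, and note that listing a directory does not cover its subdirectories:

```
# init.ora (illustrative paths): UTL_FILE may only open files in
# directories named here; each subdirectory needs its own entry.
utl_file_dir = /u01/extract
utl_file_dir = /u01/extract/daily
```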

I'd use an external procedure (e.g. DBD::Oracle/perl, or Pro*C if pushed) that writes to a pipe with compression on the other end, if what you are doing is something like making the data available as an archive. This saves oodles of disk. You could do something similar with a named pipe in the UTL_FILE target directories, with slurpers kicking in from time to time to "move" the data to where you really want it, but this is more complex than the DBD::Oracle/perl solution.
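A minimal sketch of the named-pipe variant, assuming the writer can open the pipe as an ordinary file (the file names and `seq` writer here are stand-ins I made up, not from the original setup — in practice the writer would be the stored procedure calling UTL_FILE against a pipe sitting in a listed directory):

```shell
#!/bin/sh
# Named-pipe slurper sketch: the writer sees an ordinary file name,
# while gzip on the other end compresses the data as it arrives.
set -e
dir=$(mktemp -d)
mkfifo "$dir/extract.pipe"

# Slurper: drains the pipe through gzip into the real destination.
gzip -c < "$dir/extract.pipe" > "$dir/extract.dat.gz" &

# Writer: stands in for the PL/SQL procedure writing line by line.
seq 1 1000 > "$dir/extract.pipe"

wait
gunzip -c "$dir/extract.dat.gz" | tail -1   # prints: 1000
```

The point of the trick is that no uncompressed copy ever lands on disk; the pipe only ever holds a buffer's worth of data.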

I was involved in shifting Australian Yellow Pages data (i.e. the phone book, not the NIS-) around, and using gzip on a pipe gave me better than 60:1 compression. When long-hauling such data, it was damn useful to write to a compressing pipe, then to the transmission program, then through decompression straight into the database. Total disk involved: about 8 disk blocks!
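The long-haul pipeline above can be sketched as follows — `seq` stands in for the extract program and `cat` for the transmission step (in the original these were the Yellow Pages dump and the network transfer; the 60:1 ratio was a property of that data, so don't expect it from this toy input):

```shell
#!/bin/sh
# extract | gzip | transport | gunzip | load, with no intermediate file:
# only pipe buffers sit between the source and the destination.
raw_bytes=$(seq 1 100000 | wc -c)
gz_bytes=$(seq 1 100000 | gzip -c | wc -c)
echo "compressed $((raw_bytes)) bytes down to $((gz_bytes))"

# Round trip: the data arrives intact on the far side.
lines=$(seq 1 100000 | gzip -c | cat | gunzip -c | wc -l)
echo "$((lines)) lines received"
```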
--
David T. Bath (from home) david.bath_at_nre.vic.gov.au
Phone: +613 9500 0894 Mobile: 015 824 171 (not always on)
Office: Global Consulting Ph: +613 9347 7511 Fax: +613 9347 0182
#include <std_disclaim.h>

Received on Thu Jun 25 1998 - 05:35:10 CDT
