Spool performance [message #122910] Thu, 09 June 2005 03:11
steel_dot
Messages: 4
Registered: June 2005
Junior Member
I am spooling the contents of a couple of tables to file.
The user wants a file that contains one line for each row in the table, with the fields delimited by a specific character.

The tables are pretty big (8 million to 32 million rows).
The spooling is taking over 5 hours.

Exporting the table to file takes considerably less time, but that is not what the user wants.

The export file size is similar to the spool file size, so I expected the performance to be similar.

Is there anything I can do to improve performance?

Cheers
Re: Spool performance [message #122932 is a reply to message #122910] Thu, 09 June 2005 05:05
Frank
Messages: 7901
Registered: March 2000
Senior Member
Ask the user what he wants with a 32 million row file. Chances are he wants to read it into Excel and create reports from it. That could be done better in Oracle.

Spooling output means: sqlplus sends a request to the server, the server responds to sqlplus, sqlplus shows the output on screen (= very, very slow), sqlplus writes the output to file, then back to step 1 for the next batch of records.

Things that might help:
- (if on Windows) minimize sqlplus. This prevents sqlplus from refreshing the screen and speeds up throughput
- set termout off. Same effect as option 1, only stronger
- Create a procedure that uses utl_file. This takes the client-server traffic out of the picture entirely (a rough sketch follows)
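
Something along these lines for the utl_file option; just a rough sketch, where 'MYDIR', the file name, and the column names are placeholders (on older versions 'MYDIR' has to be a path listed in UTL_FILE_DIR, on newer ones it can be a directory object):

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- the fourth parameter raises max_linesize to 32767; the default (about 1K) may be too short for these rows
  f := UTL_FILE.FOPEN('MYDIR', 'table_x.dat', 'w', 32767);
  FOR r IN (SELECT field1, field2, field3 FROM table_x) LOOP
    -- one delimited line per row, written on the server side
    UTL_FILE.PUT_LINE(f, r.field1 || '|' || r.field2 || '|' || r.field3);
  END LOOP;
  UTL_FILE.FCLOSE(f);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(f) THEN
      UTL_FILE.FCLOSE(f);
    END IF;
    RAISE;
END;
/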

hth
Re: Spool performance [message #122933 is a reply to message #122910] Thu, 09 June 2005 05:06
girish.rohini
Messages: 744
Registered: April 2005
Location: Delhi (India)
Senior Member
Please post the query that you are using for this spool.

Also, when you are spooling, the data gets written to disk, so you need to check your disk I/O as well while the spool operation is running. If possible, write the spool file to a separate disk from the one holding the Oracle software and datafiles.
Re: Spool performance [message #123105 is a reply to message #122932] Thu, 09 June 2005 19:09
steel_dot
Messages: 4
Registered: June 2005
Junior Member
Thanks for the quick response.

See my response to the reply below; I have included how the spool is being done. I am not getting any output on screen, it is being written only to file.

I'm not familiar with utl_file; I will do some research on it.

Thanks for the help.
Re: Spool performance [message #123108 is a reply to message #122933] Thu, 09 June 2005 19:18
steel_dot
Messages: 4
Registered: June 2005
Junior Member
Thanks for the response.

Below is roughly how the query is being run from a Unix shell script.
Note that no output appears on screen.
Also, I tried playing with arraysize (SET ARRAYSIZE 10, 50, 100 and 1000) and it made no difference on a test dump of 250,000 lines. I thought I had seen on the net that this would help, but it didn't.

sqlplus -s <<fim > /dev/null
$USERPW

WHENEVER SQLERROR EXIT 1;
WHENEVER OSERROR EXIT 1;
SET NEWPAGE NONE;
SET PAGESIZE 0;
SET LINESIZE 5000;
SET ECHO OFF;
SET SHOWMODE OFF;
SET FEEDBACK OFF;
SET TERMOUT OFF;
SET VERIFY OFF;
SET TRIMSPOOL ON;

SPOOL $SPOOL_FILE;

SELECT
field1 || '|' ||
field2 || '|' ||
field3 || '|' ||
....
fieldN || '|'
FROM
TABLE_X;

SPOOL OFF;

EXIT;

fim



I'm not sure what to do with this one. The performance is a bit worrying for me.

Thanks again for the time!

Re: Spool performance [message #123109 is a reply to message #122910] Thu, 09 June 2005 19:23
DMcG
Messages: 51
Registered: May 2005
Location: Auckland, New Zealand
Member
You'll probably be quicker writing a small chunk of PL/SQL that reads the data with a cursor and writes the output to a file using the UTL_FILE package, as suggested.

I really would question what the user intends to do with the data. There aren't many desktop apps that can handle that number of rows; Excel is limited to 65,536 rows. It's possible they're going to import it into Access, though.

Dougie McGibbon
Re: Spool performance [message #123224 is a reply to message #122910] Fri, 10 June 2005 10:52
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
I'd agree with the other folks who implied that this is a bad idea in the first place. I've run into this situation before myself, where users think they want a data dump of the entire database. Get the requirement changed. Find out the query they really want and execute that for them instead. Just think of how long it takes you to generate the data file, then imagine how long it will take them to load that file into Excel and do something with it.

And in general, when working with large data sets in sqlplus, keep your arraysize at a higher value, although the exact value will depend on your system. Test with 100, 500, and 1000 at least (a quick test sketch is below). And trimspool on and termout off, but you've got those.
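
For example, something like this as a quick comparison; the file names, columns, and row limit are just placeholders for your own:

set timing on

set arraysize 100
spool test_100.lst
select field1, field2, field3 from table_x where rownum <= 250000;
spool off

set arraysize 1000
spool test_1000.lst
select field1, field2, field3 from table_x where rownum <= 250000;
spool off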

Ooooh... and don't use concatenation to put your field separator in there. Look into the sqlplus "colsep" option:


MYDBA@ORCL > show colsep
colsep " "
MYDBA@ORCL > select rownum, rownum, rownum from all_objects where rownum <= 10;

    ROWNUM     ROWNUM     ROWNUM
---------- ---------- ----------
         1          1          1
         2          2          2
         3          3          3
         4          4          4
         5          5          5
         6          6          6
         7          7          7
         8          8          8
         9          9          9
        10         10         10

MYDBA@ORCL > set colsep |
MYDBA@ORCL > show colsep
colsep "|"
MYDBA@ORCL > select rownum, rownum, rownum from all_objects where rownum <= 10;

    ROWNUM|    ROWNUM|    ROWNUM
----------|----------|----------
         1|         1|         1
         2|         2|         2
         3|         3|         3
         4|         4|         4
         5|         5|         5
         6|         6|         6
         7|         7|         7
         8|         8|         8
         9|         9|         9
        10|        10|        10

10 rows selected.

Re: Spool performance [message #123248 is a reply to message #123224] Fri, 10 June 2005 15:09
girish.rohini
Messages: 744
Registered: April 2005
Location: Delhi (India)
Senior Member
I agree with smartin in totality. Get the requirement changed. Try to make it more specific.

Also, as for using the utl_file package, the example below should help:

CONNECT SCOTT/TIGER

DECLARE
  fHandler UTL_FILE.FILE_TYPE;
BEGIN
  -- 'MYDIR' must be a directory object (or a path listed in UTL_FILE_DIR)
  fHandler := UTL_FILE.FOPEN('MYDIR', 'myfile', 'w');
  UTL_FILE.PUTF(fHandler, 'Look ma, I''m writing to a file!!!\n');
  UTL_FILE.FCLOSE(fHandler);
EXCEPTION
  WHEN utl_file.invalid_path THEN
    raise_application_error(-20000,
      'ERROR: Invalid path. Create directory or set UTL_FILE_DIR.');
END;
/

**Note
This has been copied from orafaq.com itself.
http://www.orafaq.com/faq/plsql
Re: Spool performance [message #123265 is a reply to message #122910] Sat, 11 June 2005 00:05
Frank
Messages: 7901
Registered: March 2000
Senior Member
If you go for the utl_file solution, make sure you use bulk collect fetches.
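
A rough sketch of that, building on the earlier examples; the directory, file name, columns, and the LIMIT value are placeholders to be tuned for your system:

DECLARE
  TYPE t_lines IS TABLE OF VARCHAR2(4000);
  l_lines t_lines;
  f       UTL_FILE.FILE_TYPE;
  CURSOR c IS
    SELECT field1 || '|' || field2 || '|' || field3 FROM table_x;
BEGIN
  f := UTL_FILE.FOPEN('MYDIR', 'table_x.dat', 'w', 32767);
  OPEN c;
  LOOP
    -- fetch 1000 rows at a time to reduce the SQL-to-PL/SQL context switches
    FETCH c BULK COLLECT INTO l_lines LIMIT 1000;
    FOR i IN 1 .. l_lines.COUNT LOOP
      UTL_FILE.PUT_LINE(f, l_lines(i));
    END LOOP;
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
  UTL_FILE.FCLOSE(f);
END;
/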

hth
Re: Spool performance [message #123798 is a reply to message #123265] Tue, 14 June 2005 18:56
steel_dot
Messages: 4
Registered: June 2005
Junior Member
Hi All,

I sent a response yesterday, but I can't see it, so I am trying again.

Thanks a lot for all the help provided; I appreciate it.

I ended up using the suggested utl_file option, and it was about 5x faster than the spool, which is quite a big improvement. Thanks again for the help.

Cheers