Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Fast table unload using pro*c array fetch? Do you know where it is?

From: Yass Khogaly <ykhogaly_at_us.oracle.com>
Date: Sun, 18 Jul 1999 19:20:33 -0700
Message-ID: <7mtugv$mal$1@inet16.us.oracle.com>


You could try this SQL script!

I hope it helps you achieve your goal.

Regards

Yass Khogaly
Senior Principal Consultant
Core Technology Services
Oracle Consulting - Oracle Corporation



Disclaimer:

This script is provided for educational purposes only. It is NOT supported by
Oracle World Wide Technical Support. The script has been tested and appears to work as intended. However, you should always test any script before relying on it.



Abstract:

Generates a SQL*Plus script to unload a table to a file, and a SQL*Loader script to reload the same data. The intent is to create a faster alternative to export/import. Initial testing indicates that the unload takes about 20% longer than EXP, but that the reload takes only 45% as long as IMP. Under Oracle7 release 7.1, the ability to parallelize direct loads should provide much faster load times, perhaps 10% of the time taken by IMP.



Performance Considerations

Performance is very specific to the distribution of the table data. Much poorer performance has been seen in cases that make inefficient use of the fixed record format of the unload file: padding the unload file with leading/trailing zeros or trailing blanks can yield files 4X the size of an export .dmp file and unload times 4X those of export. (Even in these extreme test situations, the load time was between 80% and 90% of IMP's.)
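The inflation described above is easy to see in a small sketch. The following Python fragment (not part of the original script; column widths and sample data are invented for illustration) compares a fixed-width record, padded to full column width the way SQL*Plus COLUMN formats do, against a delimited record carrying only the actual data:

```python
# Illustrative sketch: why a fixed-width unload file can be several times
# larger than a minimal delimited file when columns are mostly empty.

def fixed_width_record(values, widths):
    """Pad each value to its full declared column width."""
    return "".join(str(v).ljust(w) for v, w in zip(values, widths))

def delimited_record(values):
    """Write only the actual data, separated by commas."""
    return ",".join(str(v) for v in values)

row = ["SMITH", "CLERK", 800]
widths = [30, 30, 10]          # declared column widths, mostly unused here

fixed = fixed_width_record(row, widths)
slim = delimited_record(row)
print(len(fixed), len(slim))   # → 70 15: the fixed record is ~4.7X larger
```

The same effect, applied to every row of a large table, is what produces unload files 4X the size of the corresponding .dmp file.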

This unload/reload utility has other advantages besides speed. The unload can easily select a subset of the original table (for statistical sampling, or for retrieving the rows for a particular department or business date, for instance), whereas EXP/IMP deals with entire tables. Additionally, if desired, the unload can sort its output to speed index builds and/or to improve cache hits on the reloaded data by loading frequently used rows contiguously; this may provide an additional benefit in some reorganization efforts. Finally, a GROUP BY could be appended to the unload query to facilitate the development of summary tables.

By editing the generated unload2.sql and SQL*Loader .CTL scripts, one could additionally remove columns or modify them with SQL (or PL/SQL functions in release 7.1) during the extract. Just be sure to adjust the COLUMN lengths and PAGESIZE in unload2.sql, and the input field lengths in the .CTL file, to reflect any such changes.

This utility can also unload data from a view, which is not possible via EXP. This facility may be used to perform subsetting: selection (specific rows), projection (specific columns), joins, GROUP BY summaries, or function application, without having to edit this script or its generated output scripts.



Limitations:

This utility does not handle the LONG, RAW, LONG RAW, ROWID, or MLSLABEL datatypes.

This utility has been tested in a Solaris 2.3 environment but is expected to be fully portable to any ASCII platform. Unlike EXP/IMP, however, it will not automatically perform translations when the unload file is moved from ASCII to EBCDIC or vice versa. Since all data is written in external formats, file transfer utilities that perform such conversions should work. As an alternative, one could edit the SQL*Loader .CTL script to specify the alternative encoding scheme.
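Because the unload file is plain text, any character-set-aware transfer can do the conversion. As a quick sketch (not from the original post), Python's codecs can translate between ASCII and an EBCDIC code page; 'cp500' is one common EBCDIC page, and the right choice depends on the target host:

```python
# Hedged sketch: converting external-format text between ASCII and EBCDIC.
# 'cp500' (IBM EBCDIC International) is an assumption; use your host's page.

ascii_line = "SMITH 800"
ebcdic_bytes = ascii_line.encode("cp500")   # ASCII text -> EBCDIC bytes
round_trip = ebcdic_bytes.decode("cp500")   # and back again, losslessly

print(ebcdic_bytes != ascii_line.encode("ascii"))  # byte values differ
print(round_trip == ascii_line)                    # content is preserved
```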

If a numeric column is encountered that has no defined precision or scale, this script will use default values (prompted for); this poses three risks:

  1. you may overspecify the precision and thereby waste space in the unload file;
  2. you may underspecify the precision and thereby get overflow indicators in the unloaded data, which may not be caught prior to loading;
  3. you may underspecify the scale and introduce truncation that will not be detected by either the unload or the load process.

For these reasons, it is strongly recommended that numeric table columns be defined with appropriate precision and scale values.
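The three risks can be made concrete with a small sketch (not part of the original script; the formatting function below merely mimics how a fixed-width slot behaves, including SQL*Plus's habit of printing # fill when a value exceeds its format):

```python
# Illustrative sketch of the three precision/scale risks, using made-up values.

def format_number(value, precision, scale):
    """Format a number into a fixed-width slot: greatest(p-s,1) integer
    digits, a decimal point when scale > 0, the scale digits, plus one
    byte for the sign. Returns '#' fill when the value does not fit."""
    width = max(precision - scale, 1) + (1 if scale else 0) + scale + 1
    text = f"{value:.{scale}f}".rjust(width)
    return text if len(text) <= width else "#" * width

print(repr(format_number(42, 10, 0)))       # overspecified: mostly padding
print(repr(format_number(1234567, 5, 0)))   # precision too small: '######'
print(repr(format_number(1.239, 5, 2)))     # scale too small: digits lost
```

The third case is the most dangerous, since the value still loads cleanly and nothing flags the lost decimal digits.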

The generated SQL*Loader script assumes that fields of blanks should be loaded as NULLs. If the table has columns for which SPACES are valid values, it will be necessary to edit the generated unload2.sql script to concatenate double quotes before and after the affected column(s), and to increase the lengths (the pagesize in unload2.sql and the individual field's length in the generated .CTL file) by two bytes.
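A minimal sketch of why the quoting trick works (invented field width; not from the original script): in a raw fixed-width record, a field of blanks and a missing value produce identical bytes, while surrounding the field with double quotes, at the cost of two extra bytes per field, keeps them distinguishable:

```python
# SPACES-vs-NULL sketch: plain fixed-width fields cannot distinguish a
# blank value from a missing one; quoted fields (two bytes wider) can.

width = 5

def plain_field(value):
    return value.ljust(width)

def quoted_field(value):
    return ('"' + value + '"').ljust(width + 2)  # +2 bytes for the quotes

print(repr(plain_field(" ")), repr(plain_field("")))    # identical records
print(repr(quoted_field(" ")), repr(quoted_field("")))  # now distinguishable
```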



Requirements:

SELECT privileges on the exported table(s)



Script:

SET ECHO off
REM NAME: TFSUNLD.SQL
REM USAGE: "@path/TFSUNLD.SQL"
REM ------------------------------------------------------------------------

--

REM REQUIREMENTS:
REM SELECT on the given table(s)
REM ------------------------------------------------------------------------

--

REM PURPOSE:
REM Generates a sql*plus script to unload a table to a file and a REM SQL*Loader script to reload the same data. Intent is to create REM a faster alternative to export/import.
REM ------------------------------------------------------------------------

---

REM DISCLAIMER:
REM This script is provided for educational purposes only. It is NOT REM supported by Oracle World Wide Technical Support. REM The script has been tested and appears to work as intended. REM You should always run new scripts on a test instance initially.
REM ------------------------------------------------------------------------

--

REM Main text of script follows:

set tab off
set heading off feedback off echo off verify off space 1 pagesize 0 linesize 120
accept owner prompt 'What schema owns the table to be unloaded? '
accept table_name prompt 'What table is to be unloaded? '
accept default_precision prompt 'What TOTAL number of digits should be reserved for numbers without defined precision? '
accept default_scale prompt 'What number of DECIMAL digits should be reserved for numbers without defined scale? '


-- Calculate the sum of all output field lengths and set the output record size
select 'SET LINESIZE '
       || (sum(decode(data_type,
                      'CHAR',data_length,
                      'VARCHAR',data_length,
                      'VARCHAR2',data_length,
                      'DATE',14,
                      'NUMBER',decode(data_precision,
                                      '',&default_precision+2,
                                      greatest(data_precision-data_scale,1)
                                        +decode(data_scale,0,0,1)+data_scale)+1,
                      'FLOAT',&default_precision+2,
                      data_length)))
  from dba_tab_columns
 where owner=upper('&&owner')
   and table_name=upper('&&table_name')
/

rpad('0',greatest(data_precision-data_scale,1),'9')
                          || decode(data_scale,0,'','.')
                          || decode(data_scale,0,'',rpad('9',data_scale,'9'))),
                 'FLOAT',rpad('0',&default_precision-&default_scale,'9')
                          || '.' || rpad('9',&default_scale,'9'),
                 'ERROR'),40)
       || ' HEADING ''X'''
  from dba_tab_columns
 where owner=upper('&&owner')
   and table_name=upper('&&table_name')
 order by column_id
/

spool off


--

-- Generate the SQL*Loader control file

--
set lines 120 pages 0
spool &&owner..&&table_name..CTL
select 'a' var1, 0 var2, 'OPTIONS(DIRECT=TRUE)'
  from dual
union
select 'b', 0, 'LOAD DATA'
  from dual
union
select 'c', 0, 'INFILE ''/tmp/&&owner..&&table_name..DAT'''
  from dual
union
select 'd', 0, 'BADFILE &&owner..&&table_name..BAD'
  from dual
union
select 'e', 0, 'DISCARDFILE &&owner..&&table_name..DSC'
  from dual
union
select 'f', 0, 'DISCARDMAX 999'
  from dual
union
select 'm', 0, 'INTO TABLE &&owner..&&table_name'
  from dual
union
select 'n', column_id,
       rpad(decode(column_id,1,'(',',')||'"'||column_name||'"',31)
       || decode(data_type,
                 'CHAR','CHAR('||data_length||')',
                 'VARCHAR','CHAR('||data_length||')',
                 'VARCHAR2','CHAR('||data_length||')',
                 'DATE','DATE(14) "YYYYMMDDHH24MISS"',
                 'NUMBER','DECIMAL EXTERNAL('
                          ||decode(data_precision,
                                   '',&default_precision+2,
                                   greatest(data_precision-data_scale,1)
                                     +decode(data_scale,0,0,1)+data_scale+1)
                          ||')',
                 'FLOAT','DECIMAL EXTERNAL('||to_char(&default_precision+2)||')',
                 'ERROR--'||data_type)
       || ' NULLIF ("'||column_name||'" = BLANKS)'
  from dba_tab_columns
 where owner = upper('&&owner')
   and table_name = upper('&&table_name')
union
select 'z', 0, ')'
  from dual
 order by 1, 2
/

spool off


--

-- Cleanup

--
clear columns
clear breaks
clear computes
undef owner
undef table_name
undef default_precision
undef default_scale
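The NUMBER field widths the script computes (both for SET LINESIZE and for the DECIMAL EXTERNAL lengths in the generated .CTL) follow one rule: greatest(p-s,1) integer digits, a decimal point when the scale is nonzero, the scale digits, and one byte for the sign. A Python restatement for clarity (this is my paraphrase of the script's arithmetic, not part of it):

```python
# Field width for a NUMBER(p,s) column, mirroring the script's decode logic.

def number_field_width(precision, scale):
    integer_digits = max(precision - scale, 1)
    decimal_point = 1 if scale else 0
    sign = 1
    return integer_digits + decimal_point + scale + sign

# NUMBER(7,2): worst case -99999.99, which is 9 characters
print(number_field_width(7, 2))   # → 9
# NUMBER(5,0): worst case -99999, which is 6 characters
print(number_field_width(5, 0))   # → 6
```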

"The Views expressed here are my own and not necessarily those of Oracle Corporation"
Brian <bmotzer_at_mediaone.mn.net> wrote in message news:wEpk3.126$o4.38498_at_typhoon3.rdc-detw.rr.com...

> Fellow DBA's,
>
> I'm currently trying to locate a pro*c program I saw on the web a while ago.
> It did array fetches to unload Oracle tables to flat files in pro*c but I
> can't seem to find it anymore. I am trying to unload huge amounts of table
> data and bring it into another database. I don't want to use/purchase
> platinum/bmc etc. as this is a one time deal. I don't want to use
> import/export because the import is so incredibly slow. If I can get a decent
> unload speed I could use sqlldr in direct path to load the data back. If you
> know of this code please email me.
>
> Thanks,
> Brian Motzer
> bmotzer_at_mediaone.net
> or
> bmotzer_at_bestbuy.com

Received on Sun Jul 18 1999 - 21:20:33 CDT

