Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Home -> Community -> Usenet -> c.d.o.server -> Re: Fast table unload using pro*c array fetch? Do you know where it is?
You could try this SQL script!
I hope it helps you achieve your goal.
Regards
Yass Khogaly
Senior Principal Consultant
Core Technology Services
Oracle Consulting - Oracle Corporation
This script is provided for educational purposes only. It is NOT supported
by Oracle World Wide Technical Support. The script has been tested and
appears to work as intended. However, you should always test any script
before relying on it.
Generates a SQL*Plus script to unload a table to a file, and a SQL*Loader
script to reload the same data. The intent is to create a faster alternative
to export/import. Initial testing indicates that the unload takes about 20%
longer than EXP, but that the reload takes only 45% as long as IMP. Under
Oracle7 r7.1, the capability of parallelizing direct loads should provide
much faster load times, reasonably 10% of the time for IMP.

Performance is very specific to the data distribution of the table data.
Much poorer performance has been seen in cases that make inefficient use of
the fixed record format of the unload file. Padding the unload file with
leading/trailing zeros or trailing blanks yields files 4X the size of an
export dmp file, and unload times 4X those of export. (Even in these extreme
test situations, the load time was between 80% and 90% of IMP.)
This unload/reload utility has some other advantages besides speed. The
unload can easily select a subset of the original table (for statistical
sampling, or for retrieving rows for a particular department or business
date, for instance), whereas EXP/IMP deals with entire tables. Additionally,
if desired, the unload can sort the output in order to speed index builds
and/or optimize cache hits on the reloaded data, by loading frequently used
rows contiguously. This may provide an additional benefit in some reorg
efforts. Finally, the unload might have a GROUP BY appended, which would
facilitate development of summary tables.
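As a sketch of the subset/sort/summary flexibility described above (the table and column names here are invented for illustration, not part of the original script), the SELECT driving the unload could take forms such as:

```sql
-- Invented example names; adjust to your own schema.
-- Subset by department, sorted so the reloaded index builds faster and
-- frequently used rows land contiguously:
SELECT *
FROM   orders
WHERE  dept_id = 42
ORDER  BY order_date;

-- GROUP BY variant that unloads a pre-aggregated summary instead:
SELECT dept_id, order_date, SUM(amount) AS day_total
FROM   orders
GROUP  BY dept_id, order_date;
```

Remember that any such change alters the output columns, so the generated COLUMN formats and LINESIZE/PAGESIZE settings must be adjusted to match.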
By editing the generated unload2.sql and SQL*Loader .CTL scripts, one could
additionally remove columns, or modify them with SQL (or PL/SQL functions in
r7.1) during the extract. Just be sure to adjust the length of the COLUMN
and the PAGESIZE in unload2.sql, and the input field in the .CTL, to reflect
whatever changes you make.
This utility can also unload data from a view, which is not possible via
EXP. This facility may be used to do subsets--selection (specific rows),
projection (specific columns), joins, GROUP BY summaries or function
application--without having to edit this script or its generated output
scripts.
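For example (the view, table and column names below are invented), one could wrap the selection, projection and join in a view and answer the unload prompts with the view's owner and name instead of the base table's:

```sql
-- Hypothetical view; the utility then unloads its rows like any table.
CREATE OR REPLACE VIEW dept10_emp_v AS
  SELECT e.empno, e.ename, d.dname, e.sal
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno
  AND    d.deptno = 10;
```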
This utility doesn't handle the LONG, RAW, LONG RAW, ROWID and MLSLABEL datatypes.
This utility has been tested in a Solaris 2.3 environment, but is expected
to be fully portable to any ASCII platform. Unlike EXP/IMP, however, it will
not automatically make translations when the unload file is moved from ASCII
to EBCDIC or vice versa. Since all data is written in external formats, one
should expect that file transfer utilities that do such conversions will
work. As an alternative, one could edit the SQL*Loader .CTL script to
specify the alternative encoding scheme.
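As one illustration of that edit (the character-set name below is an assumption for the example, not from the original), the generated control file's header could be given an explicit CHARACTERSET clause so SQL*Loader reads the transferred file correctly:

```
-- Hypothetical .CTL header fragment:
LOAD DATA
CHARACTERSET WE8EBCDIC500   -- tell SQL*Loader the data file is EBCDIC
INFILE 'unload.dat'
...
```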
If a numeric column is encountered which has no defined precision or scale,
then this script will use default values (which it prompts for); this
carries some risk, since the defaults may not match the actual data.
The generated SQL*Loader script assumes that fields of blanks should be
loaded as NULLs. If the table has columns for which SPACES are valid values,
then it will be necessary to edit the generated unload2.sql script to
concatenate double quotes before and after the affected column(s), along
with changing the length (the pagesize in unload2.sql and the individual
field's length in the generated .CTL file) by two bytes.
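A minimal sketch of that edit, assuming a hypothetical CHAR(2) column named FLAG whose legitimate value may be spaces:

```sql
-- In unload2.sql, concatenate double quotes around the column so blank
-- values are no longer bare blanks (output width grows from 2 to 4 bytes;
-- the PAGESIZE/LINESIZE totals must be adjusted accordingly):
SELECT ..., '"' || flag || '"', ...
-- and in the generated .CTL, widen the matching field by two bytes:
--   flag CHAR(4)
```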
Requirements: SELECT privileges on the exported table(s).
REM ------------------------------------------------------------------------
set tab off
set heading off feedback off echo off verify off space 1 pagesize 0 linesize 120
accept owner prompt 'What schema owns the table to be unloaded? '
accept table_name prompt 'What table is to be unloaded? '
accept default_precision prompt 'What TOTAL number of digits should be reserved for numbers without defined precision? '
accept default_scale prompt 'What number of DECIMAL digits should be reserved for numbers without defined scale? '
--Calculate the sum of all output field lengths and set the output record size
select 'SET LINESIZE '
    || (sum(decode(data_type,
           'CHAR',     data_length,
           'VARCHAR',  data_length,
           'VARCHAR2', data_length,
           'DATE',     14,
           'NUMBER',   decode(data_precision,
                          '', &default_precision+2,
                          greatest(data_precision-data_scale,1)
                            + decode(data_scale,0,0,1) + data_scale) + 1,
           'FLOAT',    &default_precision+2,
           data_length)))
       rpad('0',greatest(data_precision-data_scale,1),'9')
         || decode(data_scale,0,'','.') ||
       'FLOAT', rpad('0',&default_precision-&default_scale,'9')
         || '.' || rpad('9',&default_scale,'9'),
       'ERROR'),40) || ' HEADING ''X'''
spool off
|| decode(data_type,
     'CHAR',     'CHAR('||data_length||')',
     'VARCHAR',  'CHAR('||data_length||')',
     'VARCHAR2', 'CHAR('||data_length||')',
     'DATE',     'DATE(14) "YYYYMMDDHH24MISS"',
     'NUMBER',   'DECIMAL EXTERNAL('
                   || to_char(decode(data_precision,
                          '', &default_precision+2,
                          greatest(data_precision-data_scale,1)
                            + decode(data_scale,0,0,1) + data_scale + 1))
                   || ')',
     'FLOAT',    'DECIMAL EXTERNAL('||to_char(&default_precision+2)||')',
     'ERROR--'||data_type)
|| ' NULLIF ("'||column_name||'" = BLANKS)'
from dba_tab_columns
spool off
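Once generated, the pieces would typically be run in sequence. A sketch, assuming illustrative file names and a scott/tiger login (neither is specified above):

```
sqlplus scott/tiger @unload.sql     # prompts for owner/table, generates unload2.sql and the .CTL
sqlplus scott/tiger @unload2.sql    # spools the fixed-format data file
sqlldr scott/tiger control=emp.ctl direct=true   # direct-path reload
```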
"The Views expressed here are my own and not necessarily those of Oracle Corporation"
Brian <bmotzer_at_mediaone.mn.net> wrote in message
news:wEpk3.126$o4.38498_at_typhoon3.rdc-detw.rr.com...
> Fellow DBA's,
>
> I'm currently trying to locate a pro*c program I saw on the web a while
> ago. I am trying to unload huge amounts of table data and bring it into
> another database. I don't want to use/purchase platinum/bmc etc. as this
> is a one time deal. I don't want to use import/export because the import
> is so incredibly slow. If I can get a decent unload speed I could use
> sqlldr in direct path to load the data back. If you know of this code
> please email me.
>
> Thanks,
> Brian Motzer
> bmotzer_at_mediaone.net
> or
> bmotzer_at_bestbuy.com

Received on Sun Jul 18 1999 - 21:20:33 CDT