Matthew,
How about sending the export to a pipe and having import read
directly from the pipe? That way, you never run into a file size issue,
because you never create a dump file.
It runs faster too, because you're pipelining as well as performing only half
the I/O. Instead of: EXP (read db, write file) then IMP (read file, write db),
you'll now be doing: EXP (read db, write memory) and IMP (read memory,
write db) concurrently.
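In miniature, the pattern looks like this (a generic sketch, with printf and
cat as stand-ins for EXP and IMP; mkfifo is equivalent to "mknod ... p" on
most systems):

```shell
#!/bin/sh
# Minimal sketch of the writer/reader named-pipe pattern.
# printf stands in for EXP (the writer), cat for IMP (the reader).
PIPE=/tmp/demo_$$.pipe
mkfifo "$PIPE"                              # create the named pipe

printf 'rows flowing through\n' > "$PIPE" & # "EXP" writes in background

cat "$PIPE"                                 # "IMP" reads in foreground

wait                                        # reap the background writer
rm -f "$PIPE"                               # no dump file ever existed
```

The reader blocks until the writer opens the pipe, so starting the writer in
background and the reader in foreground lets both run concurrently.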
Sample korn-shell script attached below. You'll want to replace the
"<user>", "<pwd>", "<sourceTNS>", and "<targetTNS>" strings as necessary.
Also, the EXP and IMP commands are currently configured to do FULL=Y; you
might want to replace with "TABLES=xyz".
Hope this helps...
-Tim
- Begin sample UNIX shell script -----------------
#!/bin/ksh
#=========================================================================
# File: expimp.sh
# Date: 15jan97
# Author: Tim Gorman
# Description:
# Shell script to perform a complete database dump and rebuild using
# the EXP and IMP utilities, connected via UNIX "named pipe".
#
# Modifications:
#=========================================================================
#
# Create shell variables for important filenames...
#
_ExpPar=/tmp/expimp_exp_$$.par
_ExpLog=/tmp/expimp_exp_$$.log
_ExpOut=/tmp/expimp_exp_$$.out
_ExpErr=/tmp/expimp_exp_$$.err
_ImpPar=/tmp/expimp_imp_$$.par
_ImpLog=/tmp/expimp_imp_$$.log
_ImpOut=/tmp/expimp_imp_$$.out
_ImpErr=/tmp/expimp_imp_$$.err
_Pipe=/tmp/expimp_$$.pipe
#
# Create a protected parameter file for the EXP utility...
#
echo "commit=y" > ${_ExpPar}
chmod 600 ${_ExpPar}
echo "userid=<user>/<pwd>@<sourceTNS>" >> ${_ExpPar}
echo "file=${_Pipe}" >> ${_ExpPar}
echo "buffer=10485760" >> ${_ExpPar}
echo "compress=n" >> ${_ExpPar}
echo "log=${_ExpLog}" >> ${_ExpPar}
echo "full=y" >> ${_ExpPar}
#
# Create a protected parameter file for the IMP utility...
#
echo "commit=y" > ${_ImpPar}
chmod 600 ${_ImpPar}
echo "userid=<user>/<pwd>@<targetTNS>" >> ${_ImpPar}
echo "file=${_Pipe}" >> ${_ImpPar}
echo "buffer=10485760" >> ${_ImpPar}
echo "ignore=y" >> ${_ImpPar}
echo "log=${_ImpLog}" >> ${_ImpPar}
echo "full=y" >> ${_ImpPar}
#
# Create the UNIX "named pipe" to act as the method of
# interprocess communication...
#
mknod ${_Pipe} p
if (( $? != 0 ))
then
echo "\"mknod ${_Pipe} p\" failed; aborting..."
exit 1
fi
#
# Start the EXP utility in "background" and the IMP
# utility in "foreground". Both will complete around
# the same time...
#
exp parfile=${_ExpPar} > ${_ExpOut} 2> ${_ExpErr} &
imp parfile=${_ImpPar} > ${_ImpOut} 2> ${_ImpErr}
#
# Clean up the temporary files, as well as any files containing
# usernames and/or passwords, and then exit...
#
rm -f ${_Pipe} ${_ExpPar} ${_ImpPar}
exit 0
- End sample UNIX shell script -------------------
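One thing the script above doesn't do is verify that either utility actually
succeeded; in particular, the background EXP's exit status is discarded. A
sketch of the same pattern with status checking (printf and cat again stand in
for the real exp/imp invocations) might look like:

```shell
#!/bin/sh
# Sketch: capture both exit statuses before cleanup, fail loudly on error.
# printf/cat are placeholders for the real exp/imp commands.
PIPE=/tmp/expimp_$$.pipe
mkfifo "$PIPE"

printf 'dump stream\n' > "$PIPE" &   # stands in for: exp parfile=... &
EXP_PID=$!

cat "$PIPE" > /dev/null              # stands in for: imp parfile=...
IMP_STATUS=$?

wait "$EXP_PID"                      # collect the background writer's status
EXP_STATUS=$?

rm -f "$PIPE"
if [ "$EXP_STATUS" -ne 0 ] || [ "$IMP_STATUS" -ne 0 ]; then
    echo "export/import failed (exp=$EXP_STATUS imp=$IMP_STATUS)" >&2
    exit 1
fi
echo "both sides completed cleanly"
```

Note that "wait <pid>" returns the exit status of that specific background
job, which is what makes the EXP side's failure visible to the script.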
on 5/7/04 1:37 PM, Adams, Matthew (GE Consumer & Industrial) at
MATT.ADAMS_at_GE.COM wrote:
> I've got a table in an Oracle7.3 database (that I'm finally getting
> to upgrade and I'm trying to figure out how to get the data from
> the old DB over to the new DB on the new server.
>
> 1) table is too big to export (even sending the export to a pipe and
> thence to a compress)
> 2) table contains a long raw, so I cannot use the sqlplus COPY command
> 3) table contains a long raw, so I cannot do INSERT INTO <DBLINKED
> TABLE> SELECT * FROM...
> 4) table contains a long raw, so dumping to a flat file and
> re-inserting using sqlplus will not work.
>
> Other than writing a Pro*C program to do it myself, how can this data
> be moved?
>
> Matt
----------------------------------------------------------------
Please see the official ORACLE-L FAQ:
http://www.orafaq.com
To unsubscribe send email to: oracle-l-request_at_freelists.org
put 'unsubscribe' in the subject line.
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
Received on Fri May 07 2004 - 16:25:47 CDT