Re: export dump larger than 2Gbytes
A copy of this was sent to eddisonng_at_hotmail.com
(if that email address didn't require changing)
On Sat, 28 Aug 1999 12:09:13 +0800, you wrote:
>Hi,
>
>Can anyone help me with this question:
>
>Exporting Oracle objects produces a dump file larger than 2 GB.
>However, the O/S only supports a maximum file size of 2 GB. We are
>using a Sun workstation and have no tape drive. We thought about
>using the compress command, but if the compressed dump is also
>larger than 2 GB, what can we do?
>
>Thanks anyone!
Export to a pipe and use gzip/compress and split on the results. Here is a script that does a full export of a database, compresses it, and splits the result into 500 MB files. It then puts it all back together to test the export (showing how to re-import the data after the split).
#!/bin/csh -vx
setenv UID sys/xxxxx
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp.dmp
setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"
echo $FN
cd /nfs/atc-netapp1/expbkup
ls -l
rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p
date > expbkup.log
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log
date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &
imp userid=$UID file=$PIPE show=y full=y >>& export.test
date >> export.test
tail expbkup.log
tail export.test
ls -l
rm -f $PIPE
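The whole approach rests on the fact that split pieces concatenate back into the original compressed stream, byte for byte. A minimal POSIX-sh sketch of that round-trip (file names here are illustrative, not from the script above):

```shell
#!/bin/sh
# Demonstrate that "gzip | split" followed by "cat | gunzip"
# reproduces the original data exactly -- the same guarantee the
# export/import script above depends on.
set -e
printf 'some export data\n' > demo.dat        # stand-in for the dump stream
gzip -c demo.dat | split -b 512 - demo.gz.    # split the compressed stream
cat demo.gz.* | gunzip > demo.out             # recombine and decompress
cmp demo.dat demo.out && echo ROUNDTRIP_OK    # byte-for-byte identical
rm -f demo.dat demo.out demo.gz.*
```

The shell glob `demo.gz.*` expands in sorted order, which matches the suffix order split generates, so no explicit sort is needed for the recombine step.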
--
See http://govt.us.oracle.com/~tkyte/ for my columns 'Digging-in to Oracle8i'...
Current article is "Part I of V, Autonomous Transactions" updated June 21st
Thomas Kyte <tkyte_at_us.oracle.com>
Oracle Service Industries, Reston, VA, USA
Opinions are mine and do not necessarily reflect those of Oracle Corporation.
Received on Sat Aug 28 1999 - 09:56:52 CDT