
Re: Export Fails=Help

From: Suresh Bhat <suresh.bhat_at_mitchell-energy.com>
Date: Fri, 09 Jul 1999 14:14:16 GMT
Message-ID: <01beca25$fa2e6da0$a504fa80@mndnet>


Hi -

I DO NOT TAKE CREDIT OR BLAME FOR ANY OF THESE SCRIPTS.




If you have a 2GB limit on file size, then use the following approach, which Thomas Kyte posted here a while ago.

You could export to a tape device, or if you want to go to disk, you can use a pipe to export to 'compress' or 'split' (or compress and then split). For speed, I export to split, creating a series of 500MB files. Here is an example:



#!/bin/csh -vx

setenv UID sys/xxxxx
setenv FN exp.`date +%j_%Y`.dmp.gz
setenv PIPE /tmp/exp_tmp.dmp

echo $FN

cd /u01/atc-netapp1/expbkup
ls -l

# remove last export and recreate the named pipe
rm expbkup.log export.test exp.*.dmp.* $PIPE
mknod $PIPE p

# split reads the pipe in the background, writing a series of 500MB files
date > expbkup.log
split -b 500m $PIPE $FN. &
exp userid=$UID buffer=20000000 file=$PIPE full=y >>& expbkup.log
date >> expbkup.log

# verify the export: cat the pieces back into the pipe and run imp show=y
date > export.test
cat `echo $FN.* | sort` > $PIPE &
imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test

tail expbkup.log
tail export.test

ls -l
rm -f $PIPE


 

This script exports the full database into a series of 500MB files. It then 'reconstructs' the original file after the export is done, by cat'ing the pieces back into the pipe, and tests the integrity of the export using imp show=y.
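If the compressed dump stays under your file size limit, the same pipe trick works with 'compress' alone, as mentioned above. A minimal sketch of that variant, assuming a hypothetical dump directory and the usual placeholder DBA login:

#!/bin/sh
PIPE=/tmp/exp_pipe.dmp

# recreate the named pipe
rm -f $PIPE
mknod $PIPE p

# compress reads the pipe in the background and writes the compressed dump
compress < $PIPE > /u01/expbkup/exp_full.dmp.Z &

# exp writes the uncompressed dump into the pipe
exp userid=system/manager file=$PIPE full=y log=exp_full.log

rm -f $PIPE

To read the dump back, reverse the flow: run uncompress < /u01/expbkup/exp_full.dmp.Z > $PIPE & in the background and point imp at $PIPE.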




Here is another script, from the Oracle Magazine archive.

Exporting a Database That's More Than 2GB When Compressed

This Tip of the Week entry comes from Devarajan Sundaravaradan, a Senior Consultant for Leading Edge Systems, Inc. in Edison, New Jersey.

In HP-UX, there is a 2GB limit on file size. Many of us have reached this limit when exporting, and the most common solution is to compress the export dump through a named pipe and store the compressed file. But what if the compressed file itself passes the 2GB limit? There is a solution to this, too: chain two pipes together, so that exp writes into one pipe, compress reads it and writes into a second pipe, and split reads that and writes a series of files each under the limit.

# Create new named pipes.

mknod /dev/split_pipe p

mknod /dev/compress_pipe p    # You can use an existing named pipe
                              # instead of creating new ones.
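You can confirm the FIFOs were created correctly with ls; the first character of the mode field will be 'p' for a pipe (owner, group, and date here are just an example):

ls -l /dev/split_pipe /dev/compress_pipe
prw-r--r--   1 oracle   dba    0 Jul  9 /dev/compress_pipe
prw-r--r--   1 oracle   dba    0 Jul  9 /dev/split_pipe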
 
======================================================================
Create a shell script in a file named split_export.sh:
 

# -b1000m tells the split command to break its input into 1000MB pieces.

# As it splits, split will suffix aa, ab, ac, ad ... up to zz to the
# file name prefix specified.

# The export file name prefix is expfile.

nohup split -b1000m /dev/split_pipe /dumpdir/expfile &

nohup compress < /dev/compress_pipe > /dev/split_pipe &  

exp username/password full=y file=/dev/compress_pipe and other parameters for export.



After saving the above three commands in split_export.sh, execute the following.
 

chmod a+x split_export.sh  

nohup split_export.sh > /tmp/split_export.log 2>&1 &



After a few minutes you should see files in the export dump directory.
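For example, with the paths used above, you can watch the pieces appear and grow; every piece except the last should be exactly 1000MB:

ls -l /dumpdir/expfile*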

Create a shell script with the following commands under the file name split_import.sh.

After creating it, give the script execute permission as follows:

chmod a+x split_import.sh

# The import script assumes in this example that the above export script
# created 2 split files, called expfileaa and expfileab. The order of the
# files for the cat command is very important.

nohup cat /dumpdir/expfileaa /dumpdir/expfileab > /dev/split_pipe &

# sleep 3 seconds

sleep 3

nohup uncompress < /dev/split_pipe > /dev/compress_pipe &  

# The sleep at this point is very important, as some time is needed to
# uncompress the file and send it to the pipe.

sleep 60

imp username/password file=/dev/compress_pipe and other parameters for import.

Then execute the script:

nohup split_import.sh > /tmp/split_import.log 2>&1 &



Wait for the import to finish.
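To watch its progress, tail the nohup log; imp normally finishes with a line like "Import terminated successfully without warnings.":

tail -f /tmp/split_import.log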

Suresh Bhat

Scott Hahn <scotty_at_superior-sdc.com> wrote in article <31ah3.1632$c85.17653_at_server1.news.adelphia.net>...

> Hello-
>     My environment,
>             Oracle 7.3.4, AIX 4.2
> Problem
>     My export seems to fail as it reaches two gig. Command:
>     exp user/pass_at_inst direct=y full=y file=/d5/oracle/export.exp log=/d1/log.txt
>
> The export fails, claiming "Cannot write to export file". At that point the
> file size, per ls -s, is 2097814, which if I am not mistaken is 2 gig.
>
> Now I have all ulimits set to -1, which gives the oracle user unlimited file
> size, or so I think. Also, the /d5 logical drive was created to support
> large files.
>
> Does anyone know how to work around this?
>
> I need help from someone who knows IBM's AIX specifically to confirm these
> things. Does anyone know how to confirm from the shell that 1) the user
> oracle has unlimited file size, and 2) the /d5 logical drive supports large
> file sizes?
>
> If anyone can help me I would GREATLY appreciate it. I am pretty lost.
>
> In addition, can someone please tell me how I can export directly to tape
> if my tape drive is /dev/rmt0.
>
> You can see I am not much of a UNIX admin.
>
> Scott Hahn
> scotty_at_superior-sdc.com
Received on Fri Jul 09 1999 - 09:14:16 CDT

