
Re: 2 Gb file limit problem

From: Satish Iyer <Satish.Iyer_at_ci.seattle.wa.us>
Date: Mon, 30 Jul 2001 12:25:17 -0700
Message-ID: <F001.00359384.20010730120031@fatcity.com>

Thanks, Joe.
Yes, that is what I did as a workaround yesterday, and I had to be around for a long time on a weekend. I have most of these processes automated, and it works fine with 95% of the tables. I am doing this for about 600 tables, so this would mean going back to change the code again. I was wondering if there was any straightforward option I was missing.
 

Satish
>>> "JOE TESTA" <JTESTA_at_longaberger.com> 07/30/01 11:44AM >>>
how about this:

(avg_row_size + delimiters) * number_of_rows = total_bytes

total_bytes / 1900000000 = number_of_pieces

number_of_rows / number_of_pieces = rows_per_piece

select rows_per_piece rows at a time, spooling each piece to its own file.

then sqlldr all the pieces.
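
putting that arithmetic into SQL*Plus might look like the rough sketch below. BIG_TAB, its numeric key ID, the column list, and the 3-byte delimiter allowance are placeholders, and AVG_ROW_LEN is only an approximation of the spooled row size:

REM gather stats so USER_TABLES reports NUM_ROWS and AVG_ROW_LEN
ANALYZE TABLE big_tab ESTIMATE STATISTICS;

REM steps 1-3: total bytes, number of pieces, rows per piece
SELECT (avg_row_len + 3) * num_rows                          total_bytes,
       CEIL((avg_row_len + 3) * num_rows / 1900000000)       pieces,
       CEIL(num_rows /
            CEIL((avg_row_len + 3) * num_rows / 1900000000)) rows_per_piece
  FROM user_tables
 WHERE table_name = 'BIG_TAB';

REM step 4: spool each piece separately; MOD on the key splits the rows
REM (this example assumes the query above reported 3 pieces)
SET PAGESIZE 0 FEEDBACK OFF TERMOUT OFF LINESIZE 500 TRIMSPOOL ON
SPOOL big_tab_0.dat
SELECT id || '|' || col1 || '|' || col2
  FROM big_tab
 WHERE MOD(id, 3) = 0;
SPOOL OFF

REM step 5: load each piece
REM $ sqlldr userid=scott/tiger control=big_tab.ctl data=big_tab_0.dat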
 

joe
 

>>> Satish.Iyer_at_ci.seattle.wa.us 07/30/01 02:20PM >>>
Hi List,
 

I need to transport a few tables from one instance to another, and of course I found the sqlldr method much faster than exp/imp.

But the problem is with large tables: when I spool such tables to a flat file, it stops spooling after about 2 GB. Are there any possible solutions to get around this? I am on AIX 4.3.3 / Oracle 8.1.5.
 

My ulimits on AIX are:

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
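
(One other thing worth checking besides ulimit, sketched below with /u01 as a placeholder mount point: on AIX 4.3, a JFS filesystem created without large-file support caps every file at 2 GB regardless of what ulimit reports. The exact flag shown by lsfs is from memory, so treat it as an assumption.)

ulimit -f       # per-process file-size limit in 512-byte blocks
lsfs -q /u01    # for JFS, the query output should show bf: true (large files enabled)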
 

Thanks
 

Satish

Received on Mon Jul 30 2001 - 14:25:17 CDT
