Home -> Community -> Mailing Lists -> Oracle-L -> RE: 2 Gb file limit problem
How about using a pipe and the split command if necessary?

If your unix machine can handle files greater than 2 gig but SQL*Plus can't, then:

1) mknod pipe.lst p
2) cat pipe.lst > mybigfile &
3) sqlplus u/p
     spool pipe.lst
     @my_commands.lst
     spool off
     exit
4) rm pipe.lst
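The four steps above can be sketched as one runnable script. This uses mkfifo (the portable equivalent of mknod ... p) and seq as a stand-in for the sqlplus spool, so the filenames and data are illustrative only:

```shell
# Named-pipe trick: a background cat drains the pipe into the real
# output file while the writer (here 'seq', standing in for sqlplus's
# spool) feeds it. Filenames are examples only.
mkfifo pipe.lst                # portable form of: mknod pipe.lst p
cat pipe.lst > mybigfile &     # background reader, as in step 2
seq 1 100000 > pipe.lst        # stand-in for: sqlplus ... spool pipe.lst
wait                           # let the background cat finish
rm pipe.lst
```

Because the spool target is a pipe, SQL*Plus never sees a growing file, so its own size check never fires.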
If the unix machine also has a 2 gig limit, then have split read the pipe instead of cat:

1) mknod pipe.lst p
2) split -b 1900m pipe.lst mybigfile &
3) sqlplus u/p
     spool pipe.lst
     @my_commands.lst
     spool off
     exit
4) rm pipe.lst
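The split variant can feed the same pipe so that no single output file ever crosses the limit. A hedged sketch with tiny sizes (head -c standing in for the sqlplus spool; 100-byte pieces instead of ~1.9 GB ones):

```shell
# split reads the fifo in the background and writes fixed-size pieces
# (piece_aa, piece_ab, ...). Sizes here are tiny for demonstration; in
# practice you would use something like -b 1900m to stay under 2 Gb.
mkfifo pipe.lst
split -b 100 pipe.lst piece_ &     # background reader cuts 100-byte pieces
head -c 250 /dev/zero > pipe.lst   # stand-in for the sqlplus spool
wait
rm pipe.lst
```

250 bytes in at 100 bytes per piece yields three pieces: two full and one 50-byte remainder.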
These examples should give you enough to work with.
-----Original Message-----
From: JOE TESTA [mailto:JTESTA_at_longaberger.com]
Sent: Monday, July 30, 2001 12:52 PM
To: Multiple recipients of list ORACLE-L
Subject: Re: 2 Gb file limit problem
how about this:

(avg_row_size + delimiters) * number_of_rows = total_bytes
total_bytes / 1900000000 = number_of_pieces
number_of_rows / number_of_pieces = rows per piece

select that many rows at a time, spooling each piece to its own file, then sqlldr all the pieces.
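Joe's arithmetic worked through with made-up figures purely for illustration (a 200-byte average row plus 10 bytes of delimiters, 50 million rows — only the formula comes from the post):

```shell
# Hypothetical inputs; only the formula is from the post.
avg_row_size=200
delimiters=10
number_of_rows=50000000

total_bytes=$(( (avg_row_size + delimiters) * number_of_rows ))
# round up so no piece exceeds ~1.9e9 bytes
number_of_pieces=$(( (total_bytes + 1900000000 - 1) / 1900000000 ))
rows_per_piece=$(( number_of_rows / number_of_pieces ))

echo "$total_bytes bytes -> $number_of_pieces pieces of $rows_per_piece rows"
```

With these numbers that comes to six SELECTs of about 8.3 million rows each, every piece spooled to its own file and well under the 2 Gb ceiling.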
joe
>>> Satish.Iyer_at_ci.seattle.wa.us 07/30/01 02:20PM >>>
Hi List,

I need to transport a few tables from one instance to another, and of course found the sqlldr method much faster than exp/imp. But the problem is with large tables. When I spool such tables to a flat file, it stops spooling after about 2 Gb. Any possible solutions to get around it? I am on AIX 4.3.3 / Oracle 8.1.5.
My ulimits on AIX are:

time(seconds)   unlimited
file(blocks)    unlimited
data(kbytes)    unlimited
stack(kbytes)