Re: 2gb filesize, large disks and splitting tablespaces
I've heard the sky may fall when you run export with a pipe as the file name. But even in that case you may want to use the FILESIZE option of exp. Beyond that, search Metalink and you'll find some real-life stories about problems with >2GB files.
Generally, it's a good idea to use files smaller than 2GB. Think about it: 2GB is 2^31 bytes (about 2 x 10^9), exactly where a signed 32-bit file offset runs out. Even if the mount command says your filesystems have the largefiles option enabled, how about Oracle? If the Oracle binary, $ORACLE_HOME/bin/oracle, supports large files (run "file oracle" on Solaris), how about every utility under $ORACLE_HOME/bin? There are too many possibilities, even at the OS level (is your OS 64-bit?).
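A rough sketch of the checks above, as shell commands (the mount point and the ORACLE_HOME path are made-up examples; getconf FILESIZEBITS is the POSIX way to ask how many bits a filesystem uses for file sizes):

```shell
# 1. Does the filesystem allow >2GB files?  64 here means largefiles-capable.
getconf FILESIZEBITS /u01 2>/dev/null

# 2. Is the Oracle binary itself a large-file-capable (64-bit) executable?
#    (Path is illustrative; on Solaris, 'file' reports the binary's word size.)
file "${ORACLE_HOME:-/u01/app/oracle}/bin/oracle" 2>/dev/null

# 3. The boundary itself: 2^31 - 1 is the largest value a signed
#    32-bit byte offset can hold -- one byte short of 2GB.
echo $(( 2**31 - 1 ))   # prints 2147483647
```

The first two commands are diagnostics and their output depends on your box; the last line is just the arithmetic behind the 2GB wall.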
A side point. I think it's always a good idea to have more than one datafile in a tablespace, even if the tablespace currently doesn't need 2GB. Multiple datafiles mean less inode locking when the files are written. I think (not proven) the direct path write wait event can be alleviated if you break one monolithic datafile into at least two, particularly for the temporary tablespace, where the Oracle buffer cache is bypassed.
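For instance, spreading a temporary tablespace over two tempfiles (ideally on different disks) might look like this. The paths, sizes, and tablespace name are made up; the syntax is standard CREATE TEMPORARY TABLESPACE DDL:

```sql
-- Hypothetical example: two tempfiles instead of one monolithic file,
-- so direct path writes aren't funneled through a single inode.
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/oradata/db1/temp2_01.dbf' SIZE 1000M,
           '/u02/oradata/db1/temp2_02.dbf' SIZE 1000M
  EXTENT MANAGEMENT LOCAL;
```

The same idea applies to a regular tablespace: ALTER TABLESPACE ... ADD DATAFILE gives an existing tablespace a second file without any reorganization.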
Yong Huang
yong321_at_yahoo.com
Frank Hubeny <fhubeny_at_ntsource.com> wrote in message news:<3B1DCEB6.24072326_at_ntsource.com>...
> Encouraged by the fact that the sky had not fallen on this database, we let the
> files get even larger.
>
> The sky still did not fall.
>
> Frank Hubeny
Received on Wed Jun 06 2001 - 13:05:07 CDT