
Re: Re: datafile size

From: Mladen Gogala <gogala_at_sbcglobal.net>
Date: Tue, 02 Nov 2004 22:59:02 +0000
Message-Id: <1099436342l.26739l.0l@medo.noip.com>

On 11/02/2004 05:48:05 PM, Tanel Poder wrote:
> Hi all!

>

> Btw, in 10g with bigfile tablespaces you can have datafile sizes up to
> 128TB.

And they say that size doesn't matter? Can utilities like tar, cpio,
gzip and bzip2 operate on such monsters? I know that "rm -f" will not
have problems even with the largest file, but that's probably not
something that my boss would like to see.

>

> This means files with 2^32-1 blocks per datafile - it can be done
> because in a bigfile tablespace, the ROWID bits for relative fno are
> now used for block# as well. This effectively means that you can have
> only one datafile in a bigfile TS.

You say that in the Highlander tablespaces, as far as data files are
concerned, there can be only one? Interesting.
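A quick back-of-the-envelope check of the 128TB figure quoted above. This sketch assumes the usual smallfile ROWID split (10 bits for relative file number, 22 bits for block number), that a bigfile tablespace reuses the relative-fno bits for the block number as Tanel describes, and Oracle's 32 KB maximum block size; the bit counts are my assumptions, not stated in the thread.

```python
# Back-of-the-envelope datafile size limits (assumed bit layout).
SMALLFILE_BLOCK_BITS = 22           # block# bits in a smallfile ROWID
BIGFILE_BLOCK_BITS = 10 + 22        # rfn bits reused for block# in a bigfile TS
MAX_BLOCK_SIZE = 32 * 1024          # 32 KB, the largest Oracle block size

# Maximum datafile size = (addressable blocks) * (block size).
smallfile_max = (2**SMALLFILE_BLOCK_BITS - 1) * MAX_BLOCK_SIZE
bigfile_max = (2**BIGFILE_BLOCK_BITS - 1) * MAX_BLOCK_SIZE

print(f"smallfile datafile limit: {smallfile_max / 2**30:.0f} GB")  # ~128 GB
print(f"bigfile datafile limit:   {bigfile_max / 2**40:.0f} TB")    # ~128 TB
```

With 2^32-1 addressable blocks of 32 KB each, the limit works out to just under 2^47 bytes, i.e. the 128TB Tanel mentions.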

--
Mladen Gogala
Oracle DBA

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Nov 02 2004 - 16:55:27 CST
