Home -> Community -> Usenet -> c.d.o.server -> Re: 2 GB myth
On 2004-11-27, Noons <wizofoz2k_at_yahoo.com.au> wrote:
> "Howard J. Rogers" <hjr_at_dizwell.com> wrote in message news:<419f0ddc$0$20379$afc38c87_at_news.optusnet.com.au>...
>
>
>> Unless you were into the several terabyte total database size, I can't
>> see any particular reason to break the 2GB limit. A mere 500 such files
>> would get you to the terabyte stage, and 500 files is not that many
>> (recall that you are technically allowed 65,000-odd per database).
>>
>
> I don't want to buy into the "smaller files more of them" vs
> "larger files less of them" discussion in terms of ease
> of backup and/or recovery. IMHO, it is all very contingent on
> what each site is running as far as recovery goes.
>
> However, a word of warning:
> many versions of Unix and most Linux distros until 2.6 kernel
> did have a problem in handling too many file units open
> per process. This used to be a serious problem with databases
OTOH, all that is required to really ruin your day is ONE single tool that's not fully 64-bit. You may not always have control over this. Even your own support people might not have total control of it in every disaster recovery scenario.
Whereas a file smaller than 2G is as guaranteed to be trouble-free in this regard as anything can be.
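For what it's worth, both limits mentioned above are easy to check from a shell before committing to a datafile layout. This is just a quick sketch; the directory path is illustrative, substitute wherever your datafiles actually live:

```shell
# Per-process open-file limit -- the pre-2.6 kernel concern above.
# Each datafile held open by a server process counts against this.
ulimit -n

# Width in bits of file offsets supported on the filesystem under
# the given directory; 32 means a 2 GB ceiling, 64 means large
# files are fine at the filesystem level ("." is a stand-in for
# your datafile directory, e.g. /u01/oradata).
getconf FILESIZEBITS .
```

Of course, as noted above, a 64-bit-clean filesystem doesn't save you if one tool in the recovery chain was built without large-file support.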
[deletia]
--
 |||  The best OS in the world is ultimately useless
/ | \ if it is controlled by a Tramiel, Jobs or Gates.

Received on Wed Jun 01 2005 - 14:01:03 CDT