Re: 2 GB myth
"Howard J. Rogers" <hjr_at_dizwell.com> wrote in message news:<419f0ddc$0$20379$afc38c87_at_news.optusnet.com.au>...
> Unless you were into the several terabyte total database size, I can't
> see any particular reason to break the 2GB limit. A mere 500 such files
> would get you to the terabyte stage, and 500 files is not that many
> (recall that you are technically allowed 65,000-odd per database).
>
I don't want to buy into the "smaller files, more of them" vs "larger files, fewer of them" discussion in terms of ease of backup and/or recovery. IMHO, it is all very much contingent on how each site handles recovery.
However, a word of warning:
many versions of Unix, and most Linux distros before the 2.6 kernel, had problems handling too many open file units per process. This used to be a serious problem for databases that open one file per table/index, like Ingres. It still is: it's one of the many reasons you can't use Ingres for very large databases with lots of tables and indexes, and it's the source of many "myths" about splitting a large schema across multiple databases rather than keeping one database for everything.
Until recently, the default maximum number of open file units per process in *nix was around 100. It was raised to around 1000 somewhere in the late 90s/early 00s (noughties?). But even that may still cause problems in some odd versions of *nix. So, if you plan to have lots and lots of files (above 100) open in a database server process, make sure you check the internals doco or the system doco to find out if you need to reconfigure the kernel. Nothing to do with the database's own file limit.
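As an aside on checking that limit: the sketch below (mine, not part of the original post) uses the POSIX getrlimit()/setrlimit() calls on RLIMIT_NOFILE to report the current per-process limits and, where possible, raise the soft limit. The target of 2048 descriptors is purely illustrative; raising the hard limit still needs root or the kernel/limits reconfiguration described above.

/* fdlimit.c - report and (optionally) raise the per-process open-file limit.
 * Sketch only: the target value below is arbitrary; pick one that matches
 * the number of datafiles your server process will actually open.
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open files: soft limit = %llu, hard limit = %llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* Raising the soft limit up to the hard limit needs no privileges;
     * raising the hard limit itself does. */
    rlim_t target = 2048;               /* illustrative value only */
    if (rl.rlim_cur < target && target <= rl.rlim_max) {
        rl.rlim_cur = target;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");
        else
            printf("soft limit raised to %llu\n",
                   (unsigned long long)rl.rlim_cur);
    }
    return 0;
}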
The penalty is that, behind the scenes, *nix will start to dynamically close and re-open file units as accesses are made. Not a major problem if your I/O follows a pre-determined pattern, but definitely a huge problem if it is random in nature.
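For a rough feel of that penalty, here is a small sketch (again mine, not from the post) that compares random 8 KB reads from a file that stays open against the same reads wrapped in an open()/close() pair each time, as a crude stand-in for a process forced to recycle file units. The file name testfile.dat and the sizes are assumptions; pre-create the file (e.g. with dd) before running it.

/* reopen_penalty.c - rough illustration of the cost of closing and
 * re-opening a file for every random read, versus keeping it open.
 * Sketch only: real numbers depend entirely on your OS, filesystem
 * and cache state.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define PATH   "testfile.dat"   /* hypothetical pre-created test file */
#define BLOCK  8192
#define READS  10000
#define BLOCKS 1000             /* file assumed >= BLOCKS * BLOCK bytes */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    char buf[BLOCK];
    double t;

    /* Case 1: file stays open, random pread()s only. */
    int fd = open(PATH, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    t = now();
    for (int i = 0; i < READS; i++) {
        off_t off = (off_t)(rand() % BLOCKS) * BLOCK;
        if (pread(fd, buf, BLOCK, off) < 0) { perror("pread"); return 1; }
    }
    printf("kept open:       %.3f s\n", now() - t);
    close(fd);

    /* Case 2: open/close around every read, standing in for a process
     * that has to juggle more files than it may keep open at once. */
    t = now();
    for (int i = 0; i < READS; i++) {
        off_t off = (off_t)(rand() % BLOCKS) * BLOCK;
        fd = open(PATH, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (pread(fd, buf, BLOCK, off) < 0) { perror("pread"); return 1; }
        close(fd);
    }
    printf("reopen per read: %.3f s\n", now() - t);
    return 0;
}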
DAMHIKT...
Received on Sat Nov 27 2004 - 05:57:04 CST