
Re: file size for a large database

From: Tim Gorman <>
Date: Fri, 06 Jul 2007 11:37:58 -0600
Message-ID: <>


Consider the following scenario: you have to move datafiles from one file-system mount-point to another (the reason doesn't matter). You have a 16-CPU server (or four 4-CPU servers in a cluster -- it doesn't matter). The total volume of files to be moved is 125 Gb, let's say.

If you have a single 125 Gb datafile, then the maximum number of CPUs you can use is one. If you have twenty-five 5 Gb datafiles, then you can send twenty-five "mv" commands to the background and let the server sort things out as fast as it can. So, clearly, there is some advantage to "many smaller" over "one big" when backing up, restoring, and moving/copying files.
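As a rough sketch of that parallel move (the directories and filenames here are hypothetical, not from the post), each "mv" is backgrounded with "&" and the shell's "wait" collects them all:

```shell
# Demo: move many small datafiles in parallel. mktemp dirs stand in for
# the two file-system mount points; the .dbf files are empty placeholders.
SRC=$(mktemp -d); DST=$(mktemp -d)
for i in $(seq 1 25); do : > "$SRC/users_$i.dbf"; done   # 25 demo datafiles

for f in "$SRC"/*.dbf; do
  mv "$f" "$DST"/ &            # background each mv; they run concurrently
done
wait                           # block until every background mv has finished

ls "$DST" | wc -l              # all 25 files now sit on the new mount point
```

With one 125 Gb file the same loop degenerates to a single foreground-equivalent copy, which is exactly the point being made above.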

At the same time, there are a couple of issues with having hundreds or thousands of datafiles in a database. If you are not using RMAN for backup/recovery, for example, then putting all those datafiles into "backup mode" using ALTER TABLESPACE ... BEGIN/END BACKUP can be problematic in some situations on some versions. Staying within the old limit of 1022 datafiles per database, though obsolete since Oracle9i, also has some attractive qualities. Call me superstitious...

So, the best approach I've found is to determine a maximum datafile size and stick to it. Depending on the situation, 2G, 4G, 16G, or 64G has worked well for me in the past, with 2G and 4G being the most common (partly a throwback to old 32-bit file-size limitations). For efficiency and forecasting purposes, I'll choose a max datafile size that is a little smaller than strictly needed, so that I can build the database like "bricks" in a wall. It is difficult to plan and forecast space utilization when the "bricks" (a.k.a. datafiles) are all different sizes, and it is easy to waste space when the "bricks" are too large.

If your database is expected to grow to 2.5-3.0 Tb, I'd double that just to be cautious, making the expected size 5-6 Tb. Realize that, at these scales, little tablespaces such as SYSTEM, UNDO, USERS, and TOOLS don't even figure in. You don't want to checkpoint too many datafile headers if you can help it, so target only about 1000-2000 datafiles in that final state. That implies a maximum datafile size of anywhere from 3-10 Gb. Personally, I'd go with a max datafile size of 4G -- I just like those powers of two. Your tastes and style may vary, and of course feel free to play with the numbers as you wish....
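The arithmetic behind that pick can be checked in a few lines (a sketch using the round numbers above; the candidate sizes are just the powers of two mentioned earlier):

```shell
# How many datafiles does each candidate max size imply for a 6 Tb database?
TOTAL_GB=$((6 * 1024))                  # 6 Tb expected size, expressed in Gb
for size_gb in 2 4 8 16; do             # candidate max datafile sizes, in Gb
  echo "${size_gb}G max -> $((TOTAL_GB / size_gb)) datafiles"
done
```

Of these, only the 4G choice (1536 files) lands inside the 1000-2000 file budget: 2G overshoots it at 3072 files, while 8G and 16G leave the count well under 1000.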

Hope this helps...

Tim Gorman
consultant - Evergreen Database Technologies, Inc. P.O. Box 1802, Evergreen CO 80437-1802

website =
email   =
mobile  = +1-303-885-4526
fax     = +1-303-484-3608

wrote:
> Hi all:
> I am building a database which will grow to 2.5 or 3 Tb. I am wondering
> what is a realistic file size I should be using.
> One of my tablespaces will be about 1.5Tb. I don't want to have too many
> files, but I don't want to have the datafiles
> that are too big. I don't know however how big is too big. Does anyone have
> any guidelines or documents for that?
> Any experience with this? My filesystems are 130G so I can create fairly
> large files.
> thank you
> Gene Gurevich
> --

Received on Fri Jul 06 2007 - 12:37:58 CDT
