
Home -> Community -> Mailing Lists -> Oracle-L -> Re: Datafile size - Is bigger better?

Re: Datafile size - Is bigger better?

From: Zhu,Chao <>
Date: Fri, 17 Nov 2006 21:56:28 +0800
Message-ID: <>

A related, but not identical, problem: when you have a database with several TB of storage, in a cluster with several databases of that size, and the SAN is around, say, 30 TB, with 5 databases sharing it.

We are hitting a problem with Sun/Veritas/SAN storage: when our cluster shares such big storage, it takes a long time for the Oracle instance to fail over from one node to another. The majority of the time is spent in the disk group deport and import stage.

Is there anyone else facing the same kind of problem?

On 11/16/06, JayDBA <> wrote:
> Excellent inputs from Jared, Wolfgang, Bob, Stephen and Richard.
> Thanks for pointing out the RMAN backup details; I just had not thought
> about it, and it totally makes sense. Glad to know that you are in favor of
> 2+ GB files, as I am. As for load balancing you are right: SAN/NAS/RAID, I
> think, take pretty good care of load balancing, and conceptually (conceptually
> only) I think if it were not for the OFA we could pretty much live with
> putting all files on one single Unix mount (list - feel free to shoot me
> down on this one :-))
> My apologies for not having provided more details. Ours is a data warehouse
> environment on a 64-bit OS, and we have multiple data warehouses/marts that
> run from 100s of GB to a few TB in size. Some tables alone are more than 50
> GB in size, and larger ones can run into a couple of hundred GB. I am not
> working as a DBA in the current shop, so I don't have access to a lot of
> information that could help; however, having been a DBA for more than 8 years,
> it just doesn't seem right to me to have a 2 GB limit on file size,
> especially in a VLDB environment, and I am trying to work out for myself why
> "many small files" are not good for a VLDB. I am on the same page as you:
> set a max limit on the file size and pre-create the datafile at the
> max size - it keeps the mount points from blowing up.
> Excellent input and example by Wolfgang; thanks for reminding me of
> the fact that the datafile header only takes up about 64K of space. I now
> remember doing that in my earlier shop about 3 years back. In fact I had
> created a SQL script that generated the "alter database .. resize" for all
> the files that had space wasted due to this fact, and it freed up more than
> 125 GB of space across different databases that spanned 3 TB in size.
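The original SQL script is not shown, but the idea it describes might be sketched as follows (illustrative Python, not the original; the file names, sizes, and extent size are all assumed). Each datafile is shrunk to whole extents plus the roughly 64K locally managed file header, reclaiming the partial extent stranded at the end of round-sized files:

```python
# Illustrative sketch (assumed parameters, not the original SQL script):
# generate "alter database ... resize" statements that shrink each
# datafile to whole extents plus the ~64K locally managed file header.

EXTENT = 128 * 1024**2   # uniform extent size: 128M (assumed)
HEADER = 64 * 1024       # locally managed datafile header: ~64K

# Hypothetical file list; in practice this would come from dba_data_files.
datafiles = {
    "/u01/oradata/dw/users01.dbf": 2 * 1024**3,  # 2 GB
    "/u02/oradata/dw/users02.dbf": 2 * 1024**3,
}

def resize_target(size_bytes):
    """Largest size <= current that holds only whole extents plus the header."""
    full_extents = (size_bytes - HEADER) // EXTENT
    return full_extents * EXTENT + HEADER

for name, size in sorted(datafiles.items()):
    target = resize_target(size)
    if target < size:
        print(f"alter database datafile '{name}' resize {target};")
```

A flat 2 GB file holds only 15 full 128M extents after the header, so the sketch would shrink it by nearly a whole extent; a file created at a multiple of the extent size plus 64K needs no resize at all.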
> What can the query/DML performance impact of having many smaller files
> in a database be? One thing I can come up with is:
> 1. Possibly slower checkpoints, since CKPT has to do more work to update
> many smaller files.
> Any more?
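To put rough numbers on point 1: CKPT updates every datafile header at a checkpoint, so the file count scales that work directly. A back-of-the-envelope comparison (database size and per-file limits assumed for illustration):

```python
# Back-of-the-envelope illustration (sizes assumed): how many datafile
# headers CKPT must update per checkpoint for a 3 TB database under
# different per-file size limits.
DB_SIZE_GB = 3 * 1024  # 3 TB database, assumed for illustration

for file_size_gb in (2, 32):
    n_files = DB_SIZE_GB // file_size_gb
    print(f"{file_size_gb:>2} GB files -> {n_files} datafile headers per checkpoint")
```

At a 2 GB limit that is 1536 headers per checkpoint versus 96 at 32 GB, a 16x difference in header-update work for the same database size.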
> Regards.
> ------------------------------
> From: [mailto:
> ] On Behalf Of JayDBA
> Sent: Tuesday, November 14, 2006 5:57 PM
> To:
> Subject: Datafile size - Is bigger better?
> Can I trigger a discussion on the pros and cons of datafile max size? I
> am in a shop where datafile size is limited to 2 GB on a 64-bit platform
> (RAIDed), and I am trying to weigh the pros and cons of having a larger file
> limit.
> Pros:
> 1. Quicker recovery
> 2. Easy to load balance
> Cons:
> 1. How often do we have to recover?
> 2. Is load balancing at the database level really an option for RAIDed
> systems?
> 3. On locally managed files, we lose header space equal to the extent
> size. E.g., on a datafile with uniform extent sizing of 128M, a 2 GB file
> would waste about 6% of its space per file. This number can run into
> gigabytes on systems with 100s of 1000s of files.
> 4. CKPT having to work more, since it has to update many more smaller
> files.
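The 6% figure in con 3 checks out with quick arithmetic (a sketch using the sizes from that example): with uniform 128M extents, a 2 GB locally managed file loses roughly one extent, minus the ~64K header, to alignment.

```python
# Quick arithmetic check of con 3, using the sizes from the example:
# with uniform 128M extents, a 2 GB file loses roughly one extent
# (minus the ~64K header) to alignment at the end of the file.
EXTENT = 128 * 1024**2   # 128M uniform extent size
FILE   = 2 * 1024**3     # 2 GB datafile
HEADER = 64 * 1024       # ~64K locally managed file header

wasted = EXTENT - HEADER          # unusable tail of the file
pct = 100 * wasted / FILE
print(f"wasted per 2 GB file: {wasted} bytes (~{pct:.2f}%)")
```

That works out to just over 6% per file, which is where the gigabytes-of-waste figure for very large file counts comes from.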
> Regards

Zhu Chao

Received on Fri Nov 17 2006 - 07:56:28 CST
