Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Creating datafiles for 500GB database

From: Tanel Põder <change_to_my_first_name_at_integrid.info>
Date: Thu, 15 Apr 2004 00:48:42 +0300
Message-ID: <407db1bb$1_2@news.estpak.ee>


Hi!

> Ah well... that changes my advice then. Since your smallest unit of backup
> and recovery is the *database*, how you break that database up into data
> files becomes a matter of utter irrelevance.

One reason for having smaller datafiles in a heavy-I/O environment is the inode lock on each file, used by regular buffered file systems. Every datafile has a single inode lock, which serializes reads and writes to that file (including cached reads). When many processes do a lot of direct read/write operations (sorting, LOB operations in some cases, or parallel query, for example), the inode locks can become a point of contention.

So if you have 50x2GB files instead of 2x50GB, you'll have 50 inodes instead of 2 and the contention can be spread among them better...

It also applies to Windows (although the terminology is different there). I vaguely remember from somewhere that VMS was (is) able to have several concurrent writers to a datafile, but others will probably know better.

And I remember from Steve Adams' website that even with direct I/O the inode lock restriction can still apply...
With raw devices you won't have such a problem...

Tanel.

