Re: Few big or many small datafiles per tablespace?

From: Chuck <chuckh1958_nospam_at_gmail.com>
Date: Wed, 09 Jul 2008 15:33:07 GMT
Message-ID: <TS4dk.1129$al3.185@trnddc06>


Ronny wrote:
> We are on 10.2.0.4 on Linux SLES-10 (64-bit) and we are analyzing the
> I/O setup of an OLAP/DWH system (about 800GB in total). We have a large
> table of about 50GB in size, and it is stored in its own tablespace.
>
> Now the question comes up whether few big (say 2 x 25GB) or many small
> (say 25 x 2GB) datafiles are more beneficial. We have currently
> DB_WRITER_PROCESSES=8.
>
> Can there be only one file handle to access a datafile? Or can a
> datafile be accessed by more than one db writer process? So dbw0 could
> write at the beginning of a datafile, and dbw1 could write at the end
> at the same time.
>
> Any suggestions for a good datafile size, independent of
> DB_WRITER_PROCESSES?
>
> Thanks for your comments,
> Ronny
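
For reference, the current writer configuration and the datafile layout
can be checked from the standard dictionary views. A quick sketch:

    SELECT name, value
      FROM v$parameter
     WHERE name = 'db_writer_processes';

    SELECT file#, name, ROUND(bytes/1024/1024/1024, 1) AS size_gb
      FROM v$datafile
     ORDER BY file#;

As for the file-handle question: DBWn processes are not bound to
particular datafiles, so more than one writer can write to the same file
concurrently.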

The real question IMO is about recoverability. If you lose a datafile, how long can you afford to be down? A 50GB file takes a whole lot longer to restore and recover than a 4GB file. Will you ever need to move a file to balance I/O or space? If so, a large file poses the same problem.
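
As a rough sketch of what that recovery looks like, here is a minimal
RMAN sequence for restoring and recovering a single damaged datafile
(file number 7 is hypothetical; this assumes ARCHIVELOG mode and a
usable backup). The RESTORE step copies the entire file back, so its
duration scales directly with file size:

    # take the damaged file offline, restore it from backup,
    # apply archived redo, then bring it back online
    RMAN> SQL 'ALTER DATABASE DATAFILE 7 OFFLINE';
    RMAN> RESTORE DATAFILE 7;
    RMAN> RECOVER DATAFILE 7;
    RMAN> SQL 'ALTER DATABASE DATAFILE 7 ONLINE';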

I don't create datafiles larger than 4GB.
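
For illustration, a minimal DDL sketch of that approach (tablespace name
and file paths are made up): build the tablespace from several
fixed-size files, and grow it by adding files rather than resizing one
huge file.

    CREATE TABLESPACE fact_data
      DATAFILE '/u01/oradata/DWH/fact_data01.dbf' SIZE 4G,
               '/u02/oradata/DWH/fact_data02.dbf' SIZE 4G
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;

    -- grow by adding another 4GB file instead of resizing an existing one
    ALTER TABLESPACE fact_data
      ADD DATAFILE '/u03/oradata/DWH/fact_data03.dbf' SIZE 4G;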
