Dimensioning datafiles (smaller pieces, or medium/big ones?)

From: Ricardo Santos <saints.richard_at_gmail.com>
Date: Fri, 15 Feb 2008 15:00:10 +0000
Message-ID: <34e16fec0802150700k728cc8e8v2c475ff9326de93a@mail.gmail.com>

Hello to you all,

I would like to ask for some advice on how I should size datafiles, when I already know how much space will be occupied by the objects in the tablespace to which the datafile(s) belong.

I'm going to create a new tablespace for tables that I already know will occupy 24.6 GB, with a tendency to grow relatively fast (0.5 GB per month). These objects are going to be imported into the new system.

My question is: should I create one bigger datafile (let's say 30 GB) to contain all the tables, or should I organize things into several smaller datafiles? What would be an optimal size, if there is an answer to that question?
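Just to make the two options concrete, this is roughly what I mean (tablespace names, file paths, and sizes below are only placeholders for illustration):

```sql
-- Option A: one large datafile holding everything
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/MYDB/app_data01.dbf' SIZE 30G;

-- Option B: the same space split into several smaller datafiles,
-- e.g. 4 files of 8 GB each
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/MYDB/app_data01.dbf' SIZE 8G,
           '/u01/oradata/MYDB/app_data02.dbf' SIZE 8G,
           '/u01/oradata/MYDB/app_data03.dbf' SIZE 8G,
           '/u01/oradata/MYDB/app_data04.dbf' SIZE 8G;
```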

My preference and feeling go toward having fewer pieces to manage and handle, but I don't want to run into performance problems due to the operating system handling large files.

Here's some technical information about the environment:

Database version: 64-bit

OS: Red Hat 4 Update 6, 64-bit

Disks: internal disks with a total size of 400 GB, formatted as RAID 10 and organized in an LVM volume group, with several logical volumes.

Thanks for all your attention.

Best regards,
Ricardo Santos.

Received on Fri Feb 15 2008 - 09:00:10 CST
