
Re: Best size for datafiles

From: Frank Hubeny <fhubeny_at_ntsource.com>
Date: Wed, 17 Oct 2001 23:19:48 -0700
Message-ID: <3BCE7484.5050708@ntsource.com>


I am able to create 32 gigabyte files on Solaris 7. The idea that one cannot go beyond 2 gigabytes is obsolete and a needless restriction on tablespace design.
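
For example, a single 32 gigabyte datafile can be created in one statement. This is only a sketch: the tablespace name and file path are invented, the size is given in megabytes, and the exact options depend on your release:

    CREATE TABLESPACE big_data
      DATAFILE '/u01/oradata/PROD/big_data_01.dbf' SIZE 32000M
      EXTENT MANAGEMENT LOCAL;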

Some might claim that a particular file size outperforms another. I have listened to enough pseudo-performance results to conclude that unless such claims are backed by sound statistical analysis (means, standard deviations, hypothesis testing), I reject them out of hand as hearsay.
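
As a sketch of what such an analysis might start from: assuming a hypothetical table of timed load runs (the table and column names here are made up), even a simple query yields means and standard deviations to compare across file sizes:

    SELECT file_size_mb,
           COUNT(*)          AS runs,
           AVG(elapsed_s)    AS mean_elapsed,
           STDDEV(elapsed_s) AS stddev_elapsed
    FROM   load_timings
    GROUP  BY file_size_mb;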

Frank Hubeny

Considering that disk drives are now available at over 160 gigabytes, restricting datafiles to 2 gigabytes makes even less sense.

Niall Litchfield wrote:

>"Dusan Bolek" <pagesflames_at_usa.net> wrote in message
>news:1e8276d6.0110170402.5bc36b7f_at_posting.google.com...
>
>>willjamu_at_mindspring.com (James A. Williams) wrote in message
>>
>news:<3bcd26c9.9000852_at_news.mindspring.com>...
>
>>>I am on a Solaris platform running 8.1.7. I am creating a 0.5 TB
>>>data warehouse.
>>>
><snip>
>
>>I do not think that 1000 MB or even 500 MB is a good datafile size for
>>a 0.5 TB data warehouse. Maybe you should consider larger datafiles,
>>because 500 (1000) datafiles would be a nightmare to administer.
>>You can have 5000 MB datafiles (100 of them),
>>
>
>Does Solaris support 5 GB datafiles?
>
>
>--
>Niall Litchfield
>Oracle DBA
>Audit Commission UK
>
>
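
For reference, the datafile counts in the quoted suggestion are simple arithmetic: 0.5 TB is roughly 500,000 MB, which is about 1000 files at 500 MB each, 500 files at 1000 MB, or 100 files at 5000 MB. Adding one such file is a single statement per datafile; the tablespace name and path below are illustrative only:

    ALTER TABLESPACE dw_data
      ADD DATAFILE '/u02/oradata/DW/dw_data_02.dbf' SIZE 5000M;
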
Received on Thu Oct 18 2001 - 01:19:48 CDT
