
RE: Choosing data file size for a multi TB database?

From: Zoran Martic <zoran_martic_at_yahoo.com>
Date: Sat, 3 Sep 2005 07:46:32 -0700 (PDT)
Message-ID: <20050903144632.99337.qmail@web52613.mail.yahoo.com>


Checkpointing depends heavily on the multiple DBWR processes having a good I/O subsystem behind them, one that can sustain the required load, which is usually measured in random I/O operations per second rather than in raw throughput. It usually does not matter whether it is 50 or 1,000 files; I believe it depends more on the physical I/O layout, and on how fast the disk back end delivers I/O operations, than on whether there are 100 or 1,000 files in the database.
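To put rough numbers on that, one can check how many writers are configured and how busy checkpointing actually is. A minimal sketch in SQL*Plus (the v$sysstat statistic names are standard; what counts as "too busy" is entirely site-specific):

    -- number of database writer processes configured
    SHOW PARAMETER db_writer_processes

    -- checkpoint and write activity since instance startup
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('DBWR checkpoints',
                    'DBWR checkpoint buffers written',
                    'physical writes');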

Huge databases use various ways to protect themselves. Those with money usually have some aggressive disk-based backup strategy, which is in any case the fastest for both backup and recovery. Usually all very important databases are also covered by a good disaster-recovery standby or some kind of replication.
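As an illustration of the disk-based pattern only (the path is made up, channel and media-manager configuration omitted): back up to fast disk first, then sweep those backup sets to tape with RMAN:

    RMAN> BACKUP DATABASE FORMAT '/backup_disk/db_%U';  -- fast disk backup
    RMAN> BACKUP BACKUPSET ALL;                         -- later, copy the disk backup sets to tape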

Regards,
Zoran

> What about checkpointing against tens of thousands of data files?
> Does the more-the-merrier rule really hold there? For that reason (or
> due to a fear factor) I was under the, maybe false, impression that a
> smaller number (in the hundreds) of relatively larger data files
> (20 GB or so) might be the better choice.
>
> Another very real problem with a 10 TB database, one I can easily
> foresee but for which I do not know a proper solution, is how one
> would go about the business of regularly verifying taped backup sets.
> Have another humongous set of hardware just for that purpose? Fully
> trust the rust (i.e. examine backup logs and never try restoring),
> or...? What do people do to ensure that multi-TB monster databases
> are surely and truly safe and restorable/rebuildable?
>
>
> Branimir
>
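On the verification question: one option that does not need a second humongous server is RMAN validation, which reads the backup pieces from tape and checks the blocks without writing any datafiles. A minimal sketch (this proves the backups are readable and internally consistent, not that your full restore procedure works end to end):

    RMAN> RESTORE DATABASE VALIDATE;         -- reads the backup from tape, validates blocks, writes nothing
    RMAN> RESTORE ARCHIVELOG ALL VALIDATE;   -- same check for the archived logs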
> -----Original Message-----
> From: Tim Gorman [mailto:tim_at_evdbt.com]
> Sent: Friday, September 02, 2005 5:59 PM
> To: oracle-l_at_freelists.org
> Subject: Re: Choosing data file size for a multi TB
> database?
>
>
> Datafile sizing has the greatest regular impact on backups and
> restores. Given a large multi-processor server with 16 tape drives
> available, which would do a full backup or full restore fastest?
>
>
>
> * a 10-Tbyte database comprised of two 5-Tbyte datafiles?
>
> * a 10-Tbyte database comprised of ten 1-Tbyte datafiles?
>
> * a 10-Tbyte database comprised of two-hundred 50-Gbyte datafiles?
>
> * a 10-Tbyte database comprised of two-thousand 5-Gbyte datafiles?
>
>
>
> Be sure to consider what type of backup media you are using, how
> much concurrency you will be using, and the throughput of each
> device.
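To make the concurrency point concrete, a sketch of the RMAN knobs involved (sbt is the tape interface; the numbers are just this example's): with 16 channels, the two-file layout can never keep more than two drives busy, while the two-hundred-file layout can drive all 16.

    RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 16;  -- one channel per tape drive
    RMAN> BACKUP DATABASE FILESPERSET 1;             -- at most one datafile per backup set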
>
> There is nothing "unmanageable" about hundreds or thousands of
> datafiles; I don't know why that's cited as a concern. Oracle 8.0
> and above has a limit of 65,533 datafiles per database, but
> otherwise large numbers of files are not something to be concerned
> about. Heck, the average distribution of a Java-based application
> is comprised of 42 million directories and files and nobody ever
> worries about it...
>
>
>
>
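If the sheer file count still worries you, it is trivial to keep an eye on; the DB_FILES parameter caps the number per database:

    SQL> SELECT COUNT(*) FROM v$datafile;
    SQL> SHOW PARAMETER db_files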



Received on Sat Sep 03 2005 - 09:48:26 CDT
