
Re: Creating datafiles for 500GB database

From: Joel Garry <joel-garry_at_home.com>
Date: 13 Apr 2004 13:04:30 -0700
Message-ID: <91884734.0404131204.947134a@posting.google.com>


Mark Bole <makbo_at_pacbell.net> wrote in message news:<MaIec.36249$KL7.24480_at_newssvr29.news.prodigy.com>...
> Howard J. Rogers wrote:
>
> > On 11 Apr 2004 18:45:34 -0700, Sandy N <urockblue96_at_yahoo.com> wrote:
> >
> >> Hi,
> >>
> >> I have been entrusted with the responsibility of creating the physical
> >> structure of a data mart. The approximate size of the database would be
> >> 500GB. It will run on Windows Server. Till now I have mostly been
> >> working with 10-20 GB databases.
> >>
> >> Any indicators as to what size I should go with for the database files?
> >> I was thinking of creating 50 10GB files. Does that sound feasible?
> >> Any thoughts would be helpful.
> >>
> >> Regards
> >> Sandy
> >
> >
> > Generally, the unit of backup and recovery is the data file, so to my
> > way of thinking 10GB is too big. I prefer no bigger than 2GB. And 250
> > data files would not be outrageous for a database (though you'd want to
> > make sure you didn't let the default MAXDATAFILES control file sizing
> > parameter kick in: it's 30!).
> >
> > Regards
> > HJR
>
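(An aside on Howard's MAXDATAFILES point: if you do go with lots of files,
it's worth setting the limit explicitly at CREATE DATABASE time rather than
trusting the port default. A minimal sketch - the database name, paths and
sizes here are made up for illustration:

   CREATE DATABASE dmart
      MAXDATAFILES 200                        -- placeholder limit, size to your plan
      DATAFILE 'D:\oradata\dmart\system01.dbf' SIZE 500M
      LOGFILE GROUP 1 ('D:\oradata\dmart\redo01.log') SIZE 100M,
              GROUP 2 ('D:\oradata\dmart\redo02.log') SIZE 100M;

On the versions I've used, raising the limit afterwards means recreating the
control file, so it's cheaper to get it right up front.)
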
> I think of the unit of (physical) backup and recovery as the tablespace.
>
> Having favored the "max 2GB" datafile limit (like Howard) in the past, I
> have found that in the last few years almost every OS utility
> (compression, network copy, local copy, archive, encryption) now
> supports "largefile" functionality, so my arguments for "max 2GB" have
> fallen away... I happily embrace >5GB datafiles now!

What the file may do back to you while you are embracing it may be painful.
I've had a couple of experiences where copying to NFS corrupted files, which
is not a good thing when the files being copied are datafiles for a backup.
I'd been exporting to NFS every night, creating files of 1-10GB with no
problem; but copying over 30GB of datafiles, where one was 24GB, corrupted
an alphabetically subsequent 2GB file. I didn't even notice until later,
since the size was right. Soooo glad I didn't need to try to use it to
recover (HP-UX 11, mysterious-box NFS).
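
If I'd run DBVERIFY over the copies before trusting them, it probably would
have flagged the damage. A sketch - the file name, block size and log name
below are made up:

   $ dbv FILE=users01.dbf BLOCKSIZE=8192 LOGFILE=users01.chk

dbv reads every block and reports any that fail Oracle's structural checks,
which is exactly the kind of silent damage a size-only comparison misses.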

>
> To me, 50 10GB datafiles sounds very reasonable. Just make sure you
> test your backup strategy! (And I'm assuming that, with some kind of disk
> volume management, the physical datafile layout is not too important for
> performance.)

Testing the strategy is good; testing each backup by actually restoring it
is better (but I don't know anyone who does that - the closest I've seen is
something like Tom Kyte's practice of reading export files back, and some
automatic database propagation schemes effectively do it).
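
If RMAN is in the picture, the closest cheap approximation I know of is a
validation pass that reads the backup pieces without writing anything back:

   $ rman target /
   RMAN> RESTORE DATABASE VALIDATE;

That catches unreadable or corrupt pieces, though it still isn't the same as
proving you can do a full restore and open the result.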

>
> --Mark Bole

jg

--
@home.com is bogus.
From selling books out of the back of a Volvo to accounting scandal in
22 years:  http://www.signonsandiego.com/uniontrib/20040413/news_1b13advanced.html
Received on Tue Apr 13 2004 - 15:04:30 CDT
