Re: GB Datafiles on W2K3 server
Thank you, Joel, for your sincere answer.
Oracle 9i Release 2 Patch Set 5
This is a database serving up images stored in either BLOB or LONG RAW datatypes (yeah... yeah... I know LONG RAW isn't the way to go). The data will not change much once it is loaded; it is very much static. So we are really looking at a DSS situation.
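(For what it's worth, the usual escape from LONG RAW in 9i is a plain BLOB column; the sketch below uses made-up table and tablespace names, so treat it as illustration only.)

    -- a sketch only: table, column, and tablespace names are invented
    CREATE TABLE image_store (
      image_id    NUMBER PRIMARY KEY,
      image_name  VARCHAR2(200),
      image_data  BLOB
    )
    LOB (image_data) STORE AS (
      TABLESPACE images_lob      -- keep the LOB segment in its own tablespace
      DISABLE STORAGE IN ROW     -- large images stay out of the table blocks
    );

    -- 9i's TO_LOB() can carry an existing LONG RAW column across, e.g.:
    -- INSERT INTO image_store
    --   SELECT id, name, TO_LOB(img) FROM old_long_raw_table;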
Backups are currently managed using the Veritas Oracle Server Agent to do a complete backup once a week. This is fine for now; I don't have any downtime requirements, so if something fails, I have some time to get it back online.
My main concern is with query performance. Is it better to have one large 150 GB datafile or several smaller datafiles? The bulk of this data will reside in a few tables, fewer than 20. Currently, I am going with five 30 GB files. I get the feeling you would say to go even smaller, with more files making up the 150 GB.
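For concreteness, the layout I mean is below (paths are made up; also worth noting that with the common 8 KB block size a 9i datafile tops out around 32 GB, so 30 GB per file is already near the ceiling):

    -- a sketch of the five-file layout; file paths are invented
    CREATE TABLESPACE images
      DATAFILE 'E:\oradata\orcl\images01.dbf' SIZE 30720M,
               'E:\oradata\orcl\images02.dbf' SIZE 30720M,
               'E:\oradata\orcl\images03.dbf' SIZE 30720M,
               'E:\oradata\orcl\images04.dbf' SIZE 30720M,
               'E:\oradata\orcl\images05.dbf' SIZE 30720M
      EXTENT MANAGEMENT LOCAL;   -- locally managed extents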
Thanks for your input. I will let you all know what I find out if I get a chance to do some performance testing.
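If anyone wants to reproduce whatever I try, one obvious yardstick is the per-file I/O statistics; something like this (the column pick is just a guess at what matters):

    -- physical reads/writes per datafile since instance startup
    SELECT df.file_name,
           fs.phyrds  AS physical_reads,
           fs.phywrts AS physical_writes,
           fs.readtim AS read_time_cs  -- centiseconds, if timed_statistics is on
      FROM v$filestat fs, dba_data_files df
     WHERE df.file_id = fs.file#
     ORDER BY fs.phyrds DESC;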
Joel Garry wrote:
> There are a number of issues pertaining to this. Just off the top of my
> head:
>
> What version are you on? Older versions of Oracle had some tools that
> had 2G file limits. Some people still habitually use this as a limit.
> 10G has options to make the decision easier.
>
> What will your database do? You don't want database writers
> waiting while the file system contends for the same file. You don't
> want to contend between redo and data files. Is it a data warehouse?
> OLTP? Mixed? Just a data store feeding an app server?
>
> How will you do backups? RMAN can parallelize quite a bit if you let
> it, and if your hardware can handle it, 5 datafiles are better than 1
> (see the sketch after this quote). If you don't use a catalog, this
> can also affect your controlfile.
>
> How will you restore? More datafiles means less data is likely to be
> affected, assuming a given problem only hits one datafile. Also, some
> management can be easier if indices are split from tables (but don't
> get suckered into the old myth of better performance with that split).
>
> Are you going to be moving tablespaces between databases?
>
> Are some tables going to be stable while others are volatile? Size
> differences of objects? Partitioning? Rate of growth? Other things
> on server?
>
> Oracle seems to put things into a tablespace in different places with
> different numbers of data files. I'm not sure if this would make any
> difference given a SAME setup. It won't make any difference unless you
> are doing full table or index scans, and maybe not even then.
>
> I'd go with the higher number of datafiles. But if you should happen
> to test something, many of us would be interested in your results and
> enough information to replicate. If you are just interested in getting
> the thing working with minimal work, less is more. I'm hoping the mere
> fact that you asked the question indicates you have an interest in
> "doing it right." With the most modern setups, there isn't really much
> impact unless you do it way wrong. Some people consider fewer large
> drives (as opposed to more smaller drives) wrong.
>
> jg
> --
> @home.com is bogus.
> http://theturtles.com/media.html
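(To make the RMAN point above concrete: with more than one datafile, multiple channels can each work on different files at once. A minimal sketch, assuming disk backups; the channel count is arbitrary:)

    # RMAN run block -- each allocated channel backs up different datafiles,
    # so five files give the channels separate units of work
    RUN {
      ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
      BACKUP DATABASE;
    }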
Received on Fri Jun 17 2005 - 09:07:07 CDT