Datafile sizing has the greatest regular impact on backups and restores.
Given a large multi-processor server with 16 tape drives available, which
would do a full backup or full restore fastest?
- a 10-Tbyte database consisting of two 5-Tbyte datafiles
- a 10-Tbyte database consisting of ten 1-Tbyte datafiles
- a 10-Tbyte database consisting of two hundred 50-Gbyte datafiles
- a 10-Tbyte database consisting of two thousand 5-Gbyte datafiles
Be sure to consider what type of backup media you are using, how much
concurrency you will be using, and the throughput of each device.
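The trade-off can be sketched with a naive wall-clock model: each datafile
streams to one drive at a time, and drives run in parallel. The 100 MB/s
per-drive throughput below is an assumed figure for illustration, not a
number from this thread.

```python
# Sketch: naive full-backup time model. One datafile streams to one tape
# drive at a time; up to n_drives files stream in parallel.
# The 100 MB/s throughput is an assumption, not a measured figure.
import math

def backup_hours(db_tb, n_files, n_drives=16, mb_per_sec=100.0):
    """Hours to back up db_tb terabytes split into n_files equal datafiles."""
    file_mb = db_tb * 1024 * 1024 / n_files      # size of one datafile, MB
    file_hours = file_mb / mb_per_sec / 3600     # time to stream one file
    waves = math.ceil(n_files / n_drives)        # rounds of parallel streams
    return waves * file_hours

for n in (2, 10, 200, 2000):
    print(f"{n:5d} files: {backup_hours(10, n):5.1f} hours")
```

Under this model, two 5-Tbyte files keep 14 of the 16 drives idle (about
14.6 hours), while 200 or 2,000 files keep every drive busy and finish in
under two hours.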
There is nothing "unmanageable" about hundreds or thousands of datafiles;
I don't know why that's cited as a concern. Oracle 8.0 and above is limited
to 65,533 datafiles per database, but otherwise large numbers of files are
not something to be concerned about. Heck, the average distribution of a
Java-based application is comprised of 42 million directories and files and
nobody ever worries about it...
on 8/30/05 10:17 AM, Paul Baumgartel at paul.baumgartel_at_gmail.com wrote:
> Good advice. These are known as "bigfile" tablespaces (the conventional kind
> are now called "smallfile").
>
> On 8/30/05, Allen, Brandon <Brandon.Allen_at_oneneck.com> wrote:
>> You might want to consider "largefile" tablespaces if you're using 10g -
>> these are tablespaces that have one and only one datafile, which can be up
>> to 4,294,967,296 (roughly 4 billion, i.e. 4G) BLOCKS, which means a single
>> file can be 8 to 128 TB (terabytes) depending on your block size (2k to
>> 32k). The other nice thing about these is that you can control the files
>> with ALTER TABLESPACE commands, e.g. ALTER TABLESPACE BIG1 RESIZE 10T;
>> ALTER TABLESPACE BIG2 AUTOEXTEND ON NEXT 100G MAXSIZE 10T;
>>
>> Disclaimer: I've never actually used largefile tablespaces myself - just read
>> about them :-)
>>
>>
>> -----Original Message-----
>> From: oracle-l-bounce_at_freelists.org
>> [mailto:oracle-l-bounce_at_freelists.org]On Behalf Of Branimir Petrovic
>> Sent: Tuesday, August 30, 2005 4:33 AM
>> To: oracle-l_at_freelists.org
>> Subject: Choosing data file size for a multi TB database?
>>
>>
>> How would you approach the task of sizing data files for a project that
>> will start with a 1TB database but may relatively quickly grow to
>> stabilize at around the 10TB mark?
>>
>> Obvious options are:
>>
>> - start with many smallish files (like 2GB each), then add some thousands
>>   more as the database grows, or
>> - start with a number of largish data files (in the 10-100GB range each),
>>   then add more such files to accommodate growth.
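For scale, the file counts the two options imply at the 10TB mark can be
worked out with quick back-of-envelope arithmetic (file sizes taken from the
options above):

```python
# Datafile counts needed to hold 10 TB under each sizing option.
db_gb = 10 * 1024                 # 10 TB expressed in GB

print(db_gb // 2)                 # 2 GB files   -> 5120 datafiles
print(db_gb // 100)               # 100 GB files -> 102 datafiles
```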
>>
>> Neither of the above options looks very desirable (to me at least). The
>> first option might be a bad choice with checkpointing in mind, but the
>> second is no winner if data files ever need to be moved around. Anyway,
>> some initial choice must be made, and all I'd like at this moment is not
>> to give perilous initial advice... (admission: once the "ball" starts
>> rollin', this bastard ain't gonna be mine :))
>>
>> So from practical perspective - what would be the least troublesome choice?
>>
>> Branimir
>>
>>
>>
>> FYI I - The OS platform is the darkest secret at this point, as are the
>> hardware specs (no-one can tell; early signs of a "well communicated,
>> well managed" project are all there)
>>
>>
>> FYI II - I've never had to deal with DBs much bigger than 100GB, thus the
>> need for a "reality check"..
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
>> --
>> http://www.freelists.org/webpage/oracle-l
>
>
Received on Fri Sep 02 2005 - 17:01:25 CDT