Date: Fri, 02 Sep 2005 15:59:00 -0600
Subject: Re: Choosing data file size for a multi TB database?
From: Tim Gorman <tim@evdbt.com>
To: oracle-l@freelists.org

Datafile sizing has the greatest regular impact on backups and restores. Given a large multi-processor server with 16 tape drives available, which would do a full backup or full restore fastest?

* a 10-Tbyte database comprised of two 5-Tbyte datafiles
* a 10-Tbyte database comprised of ten 1-Tbyte datafiles
* a 10-Tbyte database comprised of two hundred 50-Gbyte datafiles
* a 10-Tbyte database comprised of two thousand 5-Gbyte datafiles

Be sure to consider what type of backup media you are using, how much concurrency you will be using, and the throughput of each device.

There is nothing "unmanageable" about hundreds or thousands of datafiles; I don't know why that's cited as a concern. Oracle 8.0 and above has a limit of roughly 65,535 datafiles per database, but otherwise large numbers of files are not something to be concerned about. Heck, the average distribution of a Java-based application is comprised of 42 million directories and files and nobody ever worries about it...
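[A rough model, added for illustration and not part of the original mail: assume each tape drive restores one whole datafile at a time at a fixed streaming rate, so elapsed time depends on how evenly the files spread across the 16 drives. The function name and the ~100 MB/s rate are assumptions made for the sketch, not figures from the thread.]

```python
import math

def full_restore_hours(db_tb=10.0, n_files=2000, drives=16, mb_per_sec=100.0):
    """Estimate wall-clock hours to restore a db_tb-terabyte database
    split into n_files equal datafiles across `drives` tape drives,
    assuming each drive handles one whole file at a time."""
    file_tb = db_tb / n_files
    hours_per_file = file_tb * 1024 * 1024 / mb_per_sec / 3600
    # Only min(n_files, drives) drives can work concurrently; with fewer
    # files than drives, the spare drives sit idle.
    waves = math.ceil(n_files / min(n_files, drives))
    return waves * hours_per_file

for n in (2, 10, 200, 2000):
    print(n, round(full_restore_hours(n_files=n), 1))
```

Under these assumptions, two 5-Tbyte files take ~14.6 hours (14 drives idle), while two hundred or two thousand smaller files finish in under 2 hours, which is the point of the question above.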
on 8/30/05 10:17 AM, Paul Baumgartel at paul.baumgartel@gmail.com wrote:

> Good advice. These are known as "bigfile" tablespaces (the conventional kind
> are now called "smallfile").
>
> On 8/30/05, Allen, Brandon <Brandon.Allen@oneneck.com> wrote:
>> You might want to consider "largefile" tablespaces if you're using 10g -
>> these are tablespaces that have one and only one datafile, which can be up
>> to 4,294,967,296 (roughly 4 billion, i.e. 4G) BLOCKS, which means a single
>> file can be 8 to 128 TB (terabytes) depending on your block size (2K to
>> 32K). The other nice thing about these is that you can control the files
>> with ALTER TABLESPACE commands, e.g. ALTER TABLESPACE BIG1 RESIZE 10T;
>> ALTER TABLESPACE BIG2 AUTOEXTEND ON NEXT 100G MAXSIZE 10T;
>>
>> Disclaimer: I've never actually used largefile tablespaces myself - just
>> read about them :-)
>>
>> -----Original Message-----
>> From: oracle-l-bounce@freelists.org
>> [mailto:oracle-l-bounce@freelists.org] On Behalf Of Branimir Petrovic
>> Sent: Tuesday, August 30, 2005 4:33 AM
>> To: oracle-l@freelists.org
>> Subject: Choosing data file size for a multi TB database?
>>
>> How would you approach the task of sizing data files for a project that
>> will start with a 1TB database but may relatively quickly grow to stabilize
>> at around the 10TB mark?
>>
>> Obvious options are:
>>
>> - start with many smallish files (like 2GB each), then add some thousands
>>   more as the database grows,
>> or
>> - start with a number of largish data files (in the 10-100GB range each),
>>   then add more such files to accommodate growth.
>>
>> Neither of the above options looks very desirable (to me at least). The
>> first option might be a bad choice with checkpointing in mind, but the
>> second option is not the winner if data files ever need to be moved around.
>> Anyway, some initial choice must be made, and all I'd like at this moment
>> is not to give perilous initial advice...
>> (admission: once the "ball" starts rollin', this bastard ain't gonna be
>> mine :))
>>
>> So from a practical perspective - what would be the least troublesome
>> choice?
>>
>> Branimir
>>
>> FYI I - The OS platform is the darkest secret at this point, as are the
>> hardware specs (no-one can tell; early signs of a "well communicated,
>> well managed" project are all there)
>>
>> FYI II - I've never had to deal with DBs much bigger than 100GB, thus the
>> need for a "reality check"..
>> --
>> http://www.freelists.org/webpage/oracle-l
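[The bigfile size arithmetic quoted above can be checked directly. This sketch is an editorial addition; the 4,294,967,296-block ceiling and the 2K-32K block-size range come from Brandon's mail, everything else is just unit conversion.]

```python
# One bigfile datafile can address at most 2^32 blocks.
MAX_BLOCKS = 4 * 2**30  # 4,294,967,296

for block_size_kb in (2, 4, 8, 16, 32):
    # blocks * bytes-per-block, expressed in terabytes (2^40 bytes)
    max_tb = MAX_BLOCKS * block_size_kb * 1024 // 2**40
    print(f"{block_size_kb}K blocks -> {max_tb} TB max file size")
```

With a 2K block size the ceiling works out to 8 TB, and with 32K blocks to 128 TB, matching the "8-to-128TB depending on your block size" figure in the thread.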

--
http://www.freelists.org/webpage/oracle-l