
RE: Choosing data file size for a multi TB database?

From: Mark W. Farnham <>
Date: Fri, 2 Sep 2005 23:23:16 -0400
Message-ID: <>

Of course this is a trick question, because you actually back up to alternating sets of disks so that you always have a backup of each physical file available (i.e., copy b still exists while you're making copy a).

Then for restore you just point at your most recent copy and apply logs (after preserving the online redo logs and perhaps making copy c, unless copy b is secure, along with the logs needed to get b current, just in case).

But seriously, if you're gated on 16 tape drives, then you want some multiple of 16 independently operating trays of drives. The number of files beyond the even multiple of 16 trays seems irrelevant to me, but maybe that is an artifact of how I lay things out. How do you get into trouble otherwise (given 16 as the minimum number for non-split distribution tape sets)? You did say a full restore, right?

(notice I did *NOT* enter into the debate about whether managing a lot of files is a hassle or not. I do observe from time to time that the folks who have very many files tend to be the ones who have trouble managing them, but we probably never hear from the folks who have no trouble managing lots of files, so that is probably a sampling error.)

Of course since you're really using raw files, you can just carve them up into convenient sizes by starting address and length to fit in 90-95% of a tape's length. (rahaha!) I still hate ribbon rust. (Except when you need it.)


  -----Original Message-----
  From: [] On Behalf Of Tim Gorman
  Sent: Friday, September 02, 2005 5:59 PM
  To:
  Subject: Re: Choosing data file size for a multi TB database?

  Datafile sizing has the greatest regular impact on backups and restores. Given a large multi-processor server with 16 tape drives available, which would do a full backup or full restore fastest?

    a. a 10-Tbyte database comprised of two 5-Tbyte datafiles?
    b. a 10-Tbyte database comprised of ten 1-Tbyte datafiles?
    c. a 10-Tbyte database comprised of two-hundred 50-Gbyte datafiles?
    d. a 10-Tbyte database comprised of two-thousand 5-Gbyte datafiles?

  Be sure to consider what type of backup media you are using, how much concurrency you will be using, and the throughput of each device.
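  Tim's quiz rewards a concrete estimate. Here is a minimal sketch (an editorial addition, not from the thread) assuming 16 drives of equal, constant throughput (100 GB/h is an arbitrary stand-in) and that each datafile is restored whole by a single drive:

  ```python
  # Hypothetical back-of-envelope model: with identical file sizes, a full
  # restore runs in rounds of up to 16 files at a time, so wall-clock time
  # is ceil(n_files / drives) rounds, each taking one file's transfer time.
  import math

  DRIVES = 16
  RATE_GB_PER_HR = 100  # assumed per-drive throughput; any constant works

  def restore_hours(n_files, file_gb):
      rounds = math.ceil(n_files / DRIVES)
      return rounds * file_gb / RATE_GB_PER_HR

  options = {
      "a: 2 x 5 TB":    (2, 5000),
      "b: 10 x 1 TB":   (10, 1000),
      "c: 200 x 50 GB": (200, 50),
      "d: 2000 x 5 GB": (2000, 5),
  }
  for label, (n, gb) in options.items():
      print(f"{label}: {restore_hours(n, gb):.2f} h")
  ```

  Under these assumptions options (c) and (d) land near the 6.25-hour ideal (10 TB / 16 drives / 100 GB/h), while option (a) leaves 14 drives idle and is gated by its single largest file.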

  There is nothing “unmanageable” about hundreds or thousands of datafiles; I don’t know why that’s cited as a concern. Oracle 8.0 and above have a limit of 65,535 datafiles per database, but otherwise large numbers of files are not something to be concerned about. Heck, the average distribution of a Java-based application is comprised of 42 million directories and files and nobody ever worries about it...

  on 8/30/05 10:17 AM, Paul Baumgartel at wrote:

    Good advice. These are known as "bigfile" tablespaces (the conventional kind are now called "smallfile").

    On 8/30/05, Allen, Brandon <> wrote:

      You might want to consider "largefile" tablespaces if you're using 10g - these are tablespaces that have one and only one datafile, which can be up to 4,294,967,296 (2^32, roughly 4 billion) blocks, which means a single file can be 8 to 128 TB depending on your block size (2k to 32k). The other nice thing about these is that you can control the files with ALTER TABLESPACE commands, e.g.:

      ALTER TABLESPACE BIG1 RESIZE 10T;
      ALTER TABLESPACE BIG2 AUTOEXTEND ON NEXT 100G MAXSIZE 10T;

      Disclaimer: I've never actually used largefile tablespaces myself - just read about them :-)
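      As a quick sanity check on the numbers Brandon quotes (an editorial sketch, not part of the original mail), the per-file ceiling follows directly from the 2^32-block limit:

      ```python
      # A bigfile tablespace's single datafile holds at most 2**32 blocks,
      # so the maximum file size scales linearly with the block size.
      MAX_BLOCKS = 2 ** 32  # 4,294,967,296 blocks

      for block_kb in (2, 4, 8, 16, 32):
          max_bytes = MAX_BLOCKS * block_kb * 1024
          print(f"{block_kb:2d} KB blocks -> {max_bytes / 1024**4:.0f} TB max file size")
      ```

      The endpoints reproduce the quoted range: 2 KB blocks give 8 TB, 32 KB blocks give 128 TB.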

      -----Original Message-----
      From: [] On Behalf Of Branimir Petrovic
      Sent: Tuesday, August 30, 2005 4:33 AM
      Subject: Choosing data file size for a multi TB database?

      How would you approach the task of sizing data files for a project that
      starts with a 1TB database but may relatively quickly grow to stabilize
      at around

      Obvious options are:

          - start with many smallish files (like 2GB each), then add some
            thousands more as the database grows,
          - start with a number of largish data files (in the 10-100GB range),
            then add more such files to accommodate growth.
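      To make the trade-off concrete (an editorial sketch, not from the original mail; the 10 TB endpoint is borrowed from Tim Gorman's quiz above, since the target size is elided here), the two options differ mainly in how many files you end up administering:

      ```python
      # Hypothetical file counts for a database growing from 1 TB toward
      # 10 TB, at a few candidate datafile sizes.
      TB = 1024  # work in GB

      def files_needed(db_tb, file_gb):
          return -(-db_tb * TB // file_gb)  # ceiling division

      for file_gb in (2, 10, 50, 100):
          print(f"{file_gb:3d} GB files: {files_needed(1, file_gb):4d} at 1 TB, "
                f"{files_needed(10, file_gb):5d} at 10 TB")
      ```

      At 2 GB per file the 10 TB endpoint means over five thousand files, while 50-100 GB files keep the count in the low hundreds - which is exactly the tension between the two options above.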

      Neither of the above options looks very desirable (to me at least). The
      first might be a bad choice with checkpointing in mind, but the second
      option is no winner if data files ever need to be moved around. Anyway,
      some choice must be made, and all I'd like at this moment is not to give
      perilous (admission: once the "ball" starts rollin', this bastard ain't
      gonna be mine :)) So from a practical perspective - what would be the
      least troublesome option?

      FYI I - OS platform is the darkest secret at this point, as is the
      hardware (no-one can tell; early signs of a "well communicated, well
      managed" project are all there).

      FYI II - I've never had to deal with DBs much bigger than 100GB, thus
      the need for a "reality check"..



Received on Fri Sep 02 2005 - 22:35:30 CDT
