RE: Lost appendix: Space Estimations for Schema Objects

From: Mercadante, Thomas F <thomas.mercadante_at_labor.state.ny.us>
Date: Wed, 16 Jun 2004 15:23:29 -0400
Message-ID: <DE8A21F8F1D0254EA4A9214D52AB2FEDAD5C3E@exchsen0a1ma>


Daniel,

I agree with your methodology if your users can answer your questions. But sometimes all the users can do is declare a record count for a database table, or a ratio of "5 of these records for every one of those records". Then we are left to come up with a methodology that makes management happy. I agree that the formulas Oracle provides are a "WAG" (Wild-Assed Guess) at best. But at least I can say "I am following the formulas provided by Oracle". It gives managers a comfort level, so that they feel confident they will not need to purchase more disk in the near future.

And I *always* add 10% to the estimates I come up with to give *me* a comfort level!
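For what it's worth, the arithmetic is simple enough to put in a little script. Here is a rough sketch in Python (the table names, record counts, average row lengths, and the overhead factor are all made up for illustration; this is just row count times average row length with a fudge factor and the 10% cushion, not the formulas from the Oracle appendix):

BLOCK_OVERHEAD = 0.15   # assumed allowance for PCTFREE and block headers
COMFORT_MARGIN = 0.10   # the "always add 10%" cushion

def estimate_bytes(row_count, avg_row_len):
    # rows * average row length, grossed up for overhead and the cushion
    return row_count * avg_row_len * (1 + BLOCK_OVERHEAD) * (1 + COMFORT_MARGIN)

# Often all the users can give you is a record count for the driving
# table and ratios for its children ("5 of these for every one of those").
orders = 2_000_000          # declared record count (illustrative)
order_lines = orders * 5    # ratio supplied by the users

for name, rows, rlen in [("ORDERS", orders, 180),
                         ("ORDER_LINES", order_lines, 90)]:
    mb = estimate_bytes(rows, rlen) / (1024 * 1024)
    print(f"{name}: ~{mb:,.0f} MB")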

Tom Mercadante
Oracle Certified Professional

-----Original Message-----
From: Daniel Fink [mailto:Daniel.Fink_at_Sun.COM]
Sent: Wednesday, June 16, 2004 3:00 PM
To: oracle-l_at_freelists.org
Subject: Re: Lost appendix: Space Estimations for Schema Objects

<jumping into the reorg debate quagmire>

Leslie,

I have one question: why is such a level of detail needed?

In the past, for each schema or database, we defined 4 sizes of objects (along the lines of autoallocate boundaries).

Small - 128k (or the value of db_block_size * db_file_multiblock_read_count). This is also our baseline multiple.
Medium - 1m
Large - 64m
Extra-Large - 256m

We then reviewed the tables/indexes, "estimated" their size, and put them in the appropriate tablespace (LMT, uniform extent size).

Me - So, how big do you *think* this table will be?
Them - Well...
Me - More than 256 megs?
Them - Oh, not that large.
Me - Less than 50 megs?
Them - Hmm.. that sounds about right.
Me - Okay, that's a Large.
Them - But that means we are not using 14 megs.
Me - That's okay. This gives us pre-allocated space if our estimate is wrong. And 14 megs is not a big deal. After all, how much memory do you have in your PDA? Kind of puts it in perspective.
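The bucketing rule itself is trivial to express. Here is a sketch in Python (tier sizes from the list above; the "smallest tier that holds the whole estimate" rule is my reading of the conversation, and the numbers are illustrative):

TIERS = [("Small", 128 * 1024),
         ("Medium", 1024 * 1024),
         ("Large", 64 * 1024 * 1024),
         ("Extra-Large", 256 * 1024 * 1024)]

def choose_tier(estimated_bytes):
    # Pick the smallest uniform-extent tier that holds the whole estimate;
    # anything bigger than 256m just takes multiple Extra-Large extents.
    for name, extent_size in TIERS:
        if estimated_bytes <= extent_size:
            return name
    return "Extra-Large"

print(choose_tier(50 * 1024 * 1024))   # the 50-meg table above -> Large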

I've used this approach very successfully in the past 5 years and find that my space wastage is less than 10% (even less on larger databases).

The first time I used this approach, it took a lot of work to get the development staff to set aside the fine-grained sizing process they were used to. Using this approach, we created 2 data warehouses of about 750g each (back in 8.0.4 time). The only space-related issue in the 9 months I was there was a load process that tried...and tried...and tried to process a bad input file and filled up the normally small error_log table. The next release tracked the number of errors on a particular file and stopped trying to load it after a threshold had been met. Almost all of the other databases supported by members of my group averaged one space-related failure per week. Of course, I also worked myself out of the contract...(Mogens, are you listening?)

Regards,
Daniel Fink

Leslie Tierstein wrote:
> Applying the formulas as documented in Oracle 8, and comparing the
> results to an actual Oracle 8i database (a data warehouse; I'm
> interested only in 8 fact tables ranging in size from 10 million to
> 500 million rows; the 9i database is for a new client, and is the
> reason for this exercise) yields the following:
>
> - Table estimates for non-partitioned tables are within 10% of the
> actual (the estimates are uniformly lower than the actuals)
> - But the table estimates don't take into account partitioned tables,
> which are farther off
> - B-Tree index estimates were modified to reflect the changed size of
> the ROWID; most are still within 10% deviation from the actual
> - We're still looking at the bitmap indexes (determining cardinality)
> and the local partitioned indexes
>
> More problematic is actually determining how many rows are going to be
> in each fact table -- both at startup and at 6 month intervals.
>
> Unfortunately, we're not going to 10g any time soon, so I can't use
> the EM facility Lex mentioned.
>
> Leslie



----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request_at_freelists.org
put 'unsubscribe' in the subject line.
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
Received on Wed Jun 16 2004 - 14:21:37 CDT
