Oracle FAQ Your Portal to the Oracle Knowledge Grid

Home -> Community -> Usenet -> c.d.o.server -> Re: Large table and buffer cache concerns

Re: Large table and buffer cache concerns

From: DA Morgan <>
Date: Wed, 11 Apr 2007 19:59:00 -0700
Message-ID: <>

The original poster wrote:
> So to recap:
> I'm uneasy with a table that holds 400 million records. I've simply
> never worked with tables of this magnitude within an OLTP system.
> 400 million 43 byte records written to 3-5 million 8K blocks...
> 3K simultaneous customers of 400K customers accessing the table 24X7
> 4 select calls a second (fetching 0 to 1000 records per customer)
> 20 insert or update calls a second (inserting a new record or updating
> a single record per customer)
> No RAC, No Partitioning.
> I suppose I could build a test case that would simulate the
> anticipated load.
> Thought there might be a few out there who've experienced similar high
> number of record tables within High Volume OLTP systems who might be
> able to alleviate my initial skepticism and concerns over the Buffer
> Cache and associated latches.
> The only time I've worked with tables containing 400 million plus
> records has been within DSS, Datamart and Datawarehousing
> environments, which had site-wide enterprise licenses and deep enough
> pockets to resolve any issues with more hardware.
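
A quick back-of-envelope check of the sizing in that recap. The 8,000 usable bytes per 8K block and the default PCTFREE of 10 are assumptions (not stated in the post), and per-row header overhead is ignored:

```python
# Back-of-envelope sizing for 400 million 43-byte rows in 8K blocks.
# ASSUMPTIONS: ~8,000 usable bytes per block after block overhead,
# and the default PCTFREE of 10; per-row header bytes are ignored.
ROWS = 400_000_000
ROW_BYTES = 43           # per the post
BLOCK_USABLE = 8_000     # assumed usable bytes in an 8K block
PCTFREE = 10             # assumed default free-space reservation

usable_after_pctfree = BLOCK_USABLE * (100 - PCTFREE) // 100
rows_per_block = usable_after_pctfree // ROW_BYTES
blocks_needed = -(-ROWS // rows_per_block)  # ceiling division

print(f"raw data: {ROWS * ROW_BYTES / 1e9:.1f} GB")
print(f"~{rows_per_block} rows/block, ~{blocks_needed:,} blocks minimum")
```

The roughly 2.4 million blocks this yields is a floor; the 3-5 million blocks cited in the recap is plausible once row headers, a higher PCTFREE, and free space within extents are accounted for.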

What you are describing above ... 24x7 and 400K customers ... is essentially a description of Amazon.

As I have taught custom Oracle classes for the folks at Amazon, I will give you the same advice they would receive.

Fix the problem or your resume.

RAC is irrelevant to the issue ... partitioning is not ... and what you have described with respect to your app servers is insane.
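
For the sake of illustration, partitioning the table by customer could look something like the following. The table and column names here are hypothetical (the post gives no schema); hash partitioning on the customer key is one reasonable choice when access is per-customer, but range or composite partitioning may fit better depending on the queries:

```sql
-- Hypothetical schema: hash-partition on the customer key so each
-- customer's rows land in a bounded set of partitions, reducing
-- contention on any single segment under 3K concurrent sessions.
CREATE TABLE customer_events (
  customer_id  NUMBER        NOT NULL,
  event_time   TIMESTAMP     NOT NULL,
  payload      VARCHAR2(35)
)
PARTITION BY HASH (customer_id)
PARTITIONS 64;
```
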

If the organization has that much data, that many customers, and the SLA they describe ... they can afford to do it right.

Daniel A. Morgan
University of Washington
(replace x with u to respond)
Puget Sound Oracle Users Group
Received on Wed Apr 11 2007 - 21:59:00 CDT
