Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Large table and buffer cache concerns

From: <devalos_at_gmail.com>
Date: 11 Apr 2007 17:54:01 -0700
Message-ID: <1176339241.890841.213760@n76g2000hsh.googlegroups.com>


So to recap:

I'm uneasy with a table that holds 400 million records. I've simply never worked with tables of this magnitude within an OLTP system.

400 million 43-byte records written to 3-5 million 8K blocks...

3K of 400K customers connected simultaneously, accessing the table 24x7

4 select calls a second (fetching 0 to 1000 records per customer)

20 insert or update calls a second (inserting a new record or updating a single record per customer)

No RAC, No Partitioning.
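The numbers above can be sanity-checked with a quick back-of-envelope calculation. This is a sketch only: the 25% allowance for PCTFREE plus block and row overhead is an assumption for illustration, not a measurement from the actual schema.

```python
# Back-of-envelope sizing for the 400M-row table described above.
rows = 400_000_000
row_bytes = 43          # quoted average record length
block_size = 8192       # 8K blocks

raw_bytes = rows * row_bytes                  # raw row data, no overhead
raw_blocks = raw_bytes // block_size          # best-case block count

# Assumed ~25% of each block lost to PCTFREE and block/row headers
# (an assumption, not a measurement).
usable_per_block = block_size * 0.75
est_blocks = round(raw_bytes / usable_per_block)

print(raw_bytes)        # 17_200_000_000 bytes (~16 GiB of raw data)
print(raw_blocks)       # 2_099_609 blocks, zero overhead
print(est_blocks)       # ~2_799_479 blocks with the assumed overhead
```

With these assumptions the table lands around 2.1-2.8 million blocks, at or just below the low end of the 3-5 million quoted above; at 8K per block the full table is roughly 16-40 GB, which frames how much of it could plausibly stay resident in the buffer cache.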

I suppose I could build a test case that would simulate the anticipated load.
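Such a test case might start from a simple workload model before wiring in a real driver. The sketch below is hypothetical: the fetch-size distribution and the insert/update split are assumptions, and the ops would be replaced by real calls through a driver such as cx_Oracle or JDBC.

```python
import random

# Hypothetical model of one second of the quoted load:
# 4 select calls plus 20 insert/update calls.
random.seed(0)  # deterministic for illustration

def one_second_of_load():
    ops = []
    for _ in range(4):    # 4 select calls a second
        # Each select fetches 0-1000 records (assumed uniform for the sketch).
        ops.append(("select", random.randint(0, 1000)))
    for _ in range(20):   # 20 insert or update calls a second
        # Assumed even insert/update mix; each touches a single record.
        ops.append((random.choice(["insert", "update"]), 1))
    return ops

ops = one_second_of_load()
print(len(ops))  # 24 calls modelled per simulated second
```

Replaying this op stream from a few hundred concurrent sessions against a test instance would give a first look at buffer cache and latch behaviour under the anticipated load.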

Thought there might be a few out there who've worked with similarly large tables in high-volume OLTP systems and who might be able to alleviate my initial skepticism and concerns over the buffer cache and associated latches.

The only time I've worked with tables containing 400 million plus records has been within DSS, data mart, and data warehousing environments, which had site-wide enterprise licenses and deep enough pockets to resolve any issues with more hardware.

Received on Wed Apr 11 2007 - 19:54:01 CDT

