
Re: Large table and buffer cache concerns

From: joel garry <joel-garry_at_home.com>
Date: 12 Apr 2007 16:31:00 -0700
Message-ID: <1176420660.662753.73550@n76g2000hsh.googlegroups.com>


On Apr 11, 5:54 pm, deva..._at_gmail.com wrote:
> So to recap:
>
> I'm uneasy with a table that holds 400 million records. I've simply
> never worked with tables of this magnitude within an OLTP system.
>
> 400 million 43 byte records written to 3-5 million 8K blocks...
>
> 3K simultaneous customers of 400K customers accessing the table 24X7
>
> 4 select calls a second (fetching 0 to 1000 records per customer)
> 20 insert or update calls a second (inserting a new record or updating
> a single record per customer)

Now, here we can see you aren't really accessing 400M records all at once. That's what you need to focus on: what sort of concurrency issues you will have. 20 single-row updates a second isn't all that much (unless something silly is going on, like all of them hitting a single header record). If for some reason they all land in the same block, you may have to play games to avoid that contention. Inserts are less likely to be an issue, unless you really do things wrong.
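
If you want to check whether that kind of single-block contention is actually biting, the standard v$ views will show it. A minimal sketch (assuming you can query the dynamic performance views; 'buffer busy waits' is the 10g event name):

   -- Cumulative buffer busy waits against data blocks since startup
   SELECT class, count, time
     FROM v$waitstat
    WHERE class = 'data block';

   -- Sessions stuck on a hot block right now (p1 = file#, p2 = block#)
   SELECT sid, event, p1 file#, p2 block#
     FROM v$session_wait
    WHERE event = 'buffer busy waits';

If the same file#/block# pair keeps showing up, that's your single-header-record problem.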

In a very gross oversimplification, to get into a ballpark: the buffers need to be sized to those select calls and the associated indexes over the length of the transactions. For example, 4 selects/sec * say 3 seconds per transaction * 1000 blocks * 8K per block (worst case, every fetched row in its own block) ~ 100MB. So even if we're off by an order of magnitude here, that's still just a gig. Extreme oversizing of the buffers can cause issues too (although not so much as in the past). Testing it and looking at what the advisors say can be very informative, but watch out for the advisors iteratively telling you to increase buffers.
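
On the advisor point, this is roughly what I mean (a sketch against v$db_cache_advice, assuming the advice is enabled and you're sizing the DEFAULT 8K pool as in your numbers above):

   -- Estimated physical reads at candidate cache sizes;
   -- size_factor = 1 is the current cache size
   SELECT size_for_estimate, size_factor, estd_physical_read_factor
     FROM v$db_cache_advice
    WHERE name = 'DEFAULT'
      AND block_size = 8192
      AND advice_status = 'ON'
    ORDER BY size_for_estimate;

Look for the knee where estd_physical_read_factor stops improving; past that point, more buffers is the iterative trap I mentioned.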

>
> No RAC, No Partitioning.
>
> I suppose I could build a test case that would simulate the
> anticipated load.
>
> Thought there might be a few out there who've worked with similarly
> large tables within high-volume OLTP systems and who might be able to
> alleviate my initial skepticism and concerns over the buffer cache
> and associated latches.
>
> The only time I've worked with tables containing 400 million plus
> records has been within DSS, data mart, and data warehousing
> environments, which had site-wide enterprise licenses and deep enough
> pockets to resolve any issues with more hardware.

I can't emphasize this enough: it's not the number of rows in the table that matters, it's how many you are dealing with over a short period of time. Jonathan Lewis' Practical Oracle8i has perhaps the clearest description of what actually goes on (not that Kyte's books are slouches, either).
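
Once your test case is running, you can put a number on "how many over a short period": check buffer gets per execution for the heaviest statements. A rough sketch against v$sql (Statspack would give you the same thing over an interval):

   -- Top 10 statements by logical I/O, with gets per execution
   SELECT *
     FROM (SELECT sql_text, executions,
                  buffer_gets / executions AS gets_per_exec
             FROM v$sql
            WHERE executions > 0
            ORDER BY buffer_gets DESC)
    WHERE rownum <= 10;

If gets_per_exec for your 0-1000 row fetches comes out in the low thousands, the ballpark above holds.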

Just don't tell me you are doing this on some little dell box running windows...

jg

--
@home.com is bogus.
"A Zulu word which is found in some dictionaries but has never really
caught on among English speakers is tokoloshe. It means "a mischievous
and lascivious hairy dwarf".  We are sure that our creative readers
can find a use for this word." - Take our word for it.
Received on Thu Apr 12 2007 - 18:31:00 CDT
