
Large table and buffer cache concerns

From: <devalos_at_gmail.com>
Date: 10 Apr 2007 17:08:20 -0700
Message-ID: <1176250100.425723.111570@y5g2000hsa.googlegroups.com>


Hello all,

I have an application developer with a rather unique request for a high-volume OLTP system. I've yet to work with tables of this magnitude before, so I'd love some second opinions.

Essentially, it's a 400 million record table of 43-byte records: 7 integer and 2 date columns, which accounts for approximately half of the overall database storage. It's an associative table along these lines:

SQL> create table c ( a int primary key, b int );

Table created.

SQL> create table q ( x int primary key, y int );

Table created.

SQL> create table j ( a references c, x references q, z int, k int, l date, m date, n int, o int, p int, primary key(a,x) );

Table created.

Table c has 400K records and table q has 1,000 records, hence the 400 million rows in table j.

The problem:

Partitioning is not available due to cost constraints (Oracle 10gR2 64-bit, Standard Edition One).
DB_BLOCK_SIZE is set to 8K.
Buffer cache is set to 10 GB.
4 dual-core sockets, so 8 cores in all.

Some quick math: 400 million records at 43 bytes each = roughly 16 GB of raw data, without indexes and not accounting for free space within a data block.
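
As a sanity check on that figure (pure arithmetic, nothing schema-specific):

select round(400000000 * 43 / 1024 / 1024 / 1024, 1) as raw_gb from dual;
-- comes out to roughly 16 GB, before indexes and block overhead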

I'm a bit squeamish about the BUFFER CACHE flush rate.

The data is routinely selected by c.a: select x,z,l,m,n,o,p from j where a = :1; expecting a result set of 1000 records, once every 5 to 30 minutes, for 3K different values of :1 at roughly the same frequency.

The data is laid down pretty much randomly over time, with a single insert or update per customer, across the 3K customers, once every 30 minutes or so.

So back to what has me concerned... The buffer cache.

If I have 3K customers each fetching 1000 records, potentially spread over 1000 distinct DB blocks, that's potentially 3 million 8K blocks being loaded into the buffer cache, out of the ~3.4 million blocks the entire 400 million record table occupies. OK, I know these are doom-and-gloom numbers; surely, fetching 3 million rows out of a 400 million record table, I'll get better density/reuse of those 8K blocks.
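
To put a rough number on it (back-of-the-envelope, assuming the whole 10 GB cache is made up of default 8K buffers): the cache holds about 1.3 million buffers, so a worst-case sweep of ~3 million distinct blocks would cycle through it more than twice.

select round(10 * 1024 * 1024 / 8) as buffers_in_10gb_cache,
       3000 * 1000 as worst_case_blocks_touched
  from dual;
-- ~1.3 million buffers vs. ~3 million blocks touched in the worst case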

Am I just being overly cautious, or do I have a legitimate concern about the rate at which I'll be moving through the available buffer cache?

Potential option 1:

Tune the DB_BLOCK_SIZE from 8K to 2K, since the average row length across all tables is also about 43 bytes (OK, this one table has certainly skewed that average).
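
If rebuilding the whole database with a 2K DB_BLOCK_SIZE is too disruptive, I assume I could instead put just this table into a non-default 2K block tablespace, something along these lines (names, sizes and paths are placeholders):

-- a 2K buffer pool has to exist before a 2K tablespace can be created
alter system set db_2k_cache_size = 2g scope=both;

-- 2K-block datafiles cap out around 8 GB each, so several files would be needed
create tablespace ts_2k
  datafile '/u01/oradata/ts_2k_01.dbf' size 4g
  blocksize 2k;

-- move the big table into it; its indexes go UNUSABLE and need a rebuild afterwards
alter table j move tablespace ts_2k;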

Potential option 2:

Shoe-horn the 1000 records per customer (400K customers) into as few 8K DB blocks as I can through the use of BLOBs. The data represented by this single 400 million record table can be viewed logically as three lists: active, repeatable and historical. Active could be condensed to about 640 bytes and stored in row, repeatable to about 6 KB and stored out of row, and historical to just under 4000 bytes and stored in row.

create table j
( a references c,
  d BLOB,
  e BLOB,
  f BLOB )
lob (d, f) store as (tablespace example enable storage in row)
lob (e) store as (tablespace example disable storage in row);

In so doing, those 1000 records would be compressed into perhaps two 8K DB blocks per customer.
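
If I went that route, the hot query would presumably collapse to a single-row fetch, with the application unpacking the three lists on its side (sketch):

select d, e, f from j where a = :1;
-- d/f = the in-row active/historical lists, e = the out-of-row repeatable list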
