
Re: Large table and buffer cache concerns

From: <hasta_l3_at_hotmail.com>
Date: 11 Apr 2007 22:11:37 -0700
Message-ID: <1176354697.733746.239590@q75g2000hsh.googlegroups.com>


On 11 Apr, 02:08, deva..._at_gmail.com wrote:
> Hello all,
>
> I have an application developer with a rather unusual request for a
> high-volume OLTP system. I've yet to work with tables of this
> magnitude before, so I'd love some second opinions.
>
> Essentially, a 400 million record table consisting of 43-byte records
> (7 int and 2 date columns), which accounts for approximately half the
> overall database storage. This is an associative table along these
> lines:
>
> SQL> create table c ( a int primary key, b int );
>
> Table created.
>
> SQL> create table q ( x int primary key, y int );
>
> Table created.
>
> SQL> create table j ( a references c, x references q, z int, k int, l
> date, m date, n int, o int, p int, primary key(a,x) );
>
> Table created.
>
> Table c has 400K records and table q has 1000 records, hence the 400
> million within table j.
>
> The problem:
>
> Partitioning is not available due to cost constraints.
> Oracle 10gR2 - 64 bit: Standard Edition One
> DB_BLOCK_SIZE is set to 8K. Buffer cache set to 10 GB.
> 4 dual-core sockets, so 8 cores in all.
>
> Some quick math: 400 million records at 43 bytes each = ~16 GB of raw
> data, without indexes and not accounting for free space within a data
> block.
>
> I'm a bit squeamish about the buffer cache flush rate.
>
> The data is routinely (once every 5 to 30 min) selected by c.a:
> select x,z,l,m,n,o,p from j where a = :1;
> expecting a result set of 1000 records, for 3K different values of :1,
> each at that same frequency.
>
> The data is laid down pretty much randomly over time, with a single
> insert or update for each of the 3K customers once every 30 min or so.
>
> So back to what has me concerned... The buffer cache.
>
> If I have 3K customers fetching 1000 records, potentially over 1000 DB
> blocks, that's potentially 3 million 8K blocks being loaded into the
> buffer cache, out of the ~3.4 million blocks for the entire
> 400 million record table. OK, I know these are doom and gloom
> numbers. Surely, loading 3 million records of a 400 million record
> table I'll get better density/reuse of these 8K blocks.
>
> Am I just being overly cautious, or do I have a legitimate concern with
> the rate at which I'll be moving through the available buffer cache?
>
> Potential option 1:
>
> Tune the DB_BLOCK_SIZE from 8K to 2K, as the average row length for all
> tables is 43 bytes as well. (OK, this one table has certainly skewed
> that average.)
>
> Potential option 2:
>
> Shoe-horn the 1000 records per customer (for 400K customers) into as
> few 8K DB blocks as I can through the use of BLOBs. The data
> represented by this single 400 million record table can be looked at
> logically as three lists: active, repeatable and historical.
> Active could be condensed to about 640 bytes and stored in row.
> Repeatable to about 6 KB and stored out of row.
> Historical to just under 4000 bytes and stored in row.
>
> Create table j (a references c, d BLOB, e BLOB, f BLOB)
> lob (d,f) STORE AS
> (TABLESPACE example
> enable storage in row)
> lob (e) STORE AS
> (TABLESPACE example
> disable storage in row)
> ;
>
> In so doing, compressing those 1000 records into perhaps two 8K DB
> blocks.

Dear,

If I get you correctly, a third option is to use clusters, with a customer cluster key.
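
For instance, an untested sketch along these lines (cluster, index and
tablespace names are just placeholders; sizing would need testing on your
data):

create cluster j_cust_cluster ( a int )
  tablespace example;

-- an index cluster needs its cluster index before rows can be inserted
create index j_cust_cluster_idx on cluster j_cust_cluster;

create table j
  ( a int references c, x int references q,
    z int, k int, l date, m date, n int, o int, p int,
    primary key (a, x) )
  cluster j_cust_cluster (a);

That way the ~1000 rows of one customer tend to end up in the same few
blocks instead of being scattered over the whole table, so a customer
fetch touches far fewer buffers.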

I would definitely build a test case. Make sure to believe your measurements, and only your measurements.
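
For the measurements, something like this in SQL*Plus (needs the plustrace
role; the customer id is just an example value) gives a quick feel for how
many blocks each variant really touches:

set autotrace traceonly statistics

variable cust number
exec :cust := 42

select x, z, l, m, n, o, p from j where a = :cust;

set autotrace off

Then compare "consistent gets" and "physical reads" between the plain
table, the BLOB variant and the cluster.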

If there are critical queries across customers, be sure to test that also...
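
For example, if reporting ever goes by the q side rather than by the
customer (just an illustration, not something from your post):

select a, z from j where x = :1;

With the cluster keyed on a, such a query has to visit blocks spread over
the whole table, so it is worth measuring as well.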

Cheers

P.S. Of course, option 2 makes me sick :-)

Received on Thu Apr 12 2007 - 00:11:37 CDT
