Re: Large table and buffer cache concerns
On Apr 11, 2:08 am, deva..._at_gmail.com wrote:
> Hello all,
>
> I've an application developer with a rather unique request for a
> high-volume OLTP system. I've yet to work with tables of this
> magnitude, so I'd love some second opinions.
>
> Essentially, a 400 million record table consisting of 43-byte records
> (7 int and 2 date columns), which accounts for approximately half the
> overall database storage. This is an associative table along these
> lines:
>
> SQL> create table c ( a int primary key, b int );
>
> Table created.
>
> SQL> create table q ( x int primary key, y int );
>
> Table created.
>
> SQL> create table j ( a references c, x references q, z int, k int, l
> date, m date, n int, o int, p int, primary key(a,x) );
>
> Table created.
>
> Table c has 400K records and table q has 1000 records, hence the 400
> million within table j.
>
> The problem:
>
> Partitioning is not available due to cost constraints.
> Oracle 10gR2 - 64 bit: Standard Edition One
> DB_BLOCK_SIZE is set to 8K.
> Buffer cache set to 10 GB.
> 4 dual-core sockets, so 8 cores in all.
>
> Some quick math: 400 million records at 43 bytes each = 16 GB of raw
> data, without indexes and not accounting for free space within a data
> block.
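> (400,000,000 x 43 bytes = 17,200,000,000 bytes, which is roughly 16
> GiB.)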
>
> I'm a bit squeamish about the BUFFER CACHE flush rate.
>
> The data is routinely (once every 5 to 30 min) selected by c.a:
> select x,z,l,m,n,o,p from j where a = :1;
> expecting a result set of 1000 records, for 3K different values of :1
> at the same frequency.
>
> The data is laid down pretty much randomly over time, with a single
> insert or update per customer, across 3K customers, once every 30 min
> or so.
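>
> One way to quantify that scatter is the clustering factor of the
> primary key index (a sketch; the index name is system-generated
> unless you named it, so adjust accordingly):
>
> select index_name, clustering_factor, num_rows
> from user_indexes
> where table_name = 'J';
>
> A clustering factor close to NUM_ROWS, rather than to the table's
> block count, would confirm that each fetch of 1000 rows really will
> touch close to 1000 distinct blocks.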
>
> So back to what has me concerned... The buffer cache.
>
> If I have 3K customers each fetching 1000 records, potentially spread
> over 1000 DB blocks, that's potentially 3 million 8K blocks being
> loaded into the buffer cache, out of the ~3.4 million blocks for the
> entire 400 million record table. Ok, I know these are doom and gloom
> numbers. Surely, loading 3 million records of a 400 million record
> table, I'll get better density/reuse of these 8K blocks.
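>
> Working the density numbers (a sketch; the default PCTFREE of 10 and
> roughly 11 bytes of per-row overhead are assumptions, check the
> actual segment):
>
> select trunc(8192 * 0.9 / (43 + 11)) as rows_per_8k_block,
>        ceil(400000000 / trunc(8192 * 0.9 / (43 + 11))) as total_blocks
> from dual;
>
> That works out to roughly 136 rows per block and ~2.9 million blocks,
> in the same ballpark as the ~3.4 million above.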
>
> Am I just being overly cautious, or do I have a legitimate concern
> with the rate at which I'll be moving through the available buffer
> cache?
>
> Potential option 1:
>
> Tune the DB_BLOCK_SIZE from 8K to 2K, as the average row length for
> all tables is 43 bytes as well (ok, this one table has certainly
> skewed that average).
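>
> A lighter-weight variant of this, without recreating the database,
> would be a 2K tablespace alongside the 8K default (a sketch; the
> cache size and datafile path are placeholders):
>
> ALTER SYSTEM SET db_2k_cache_size = 256M SCOPE=BOTH;
>
> CREATE TABLESPACE ts_2k
>   DATAFILE '/u01/oradata/ts_2k01.dbf' SIZE 20G
>   BLOCKSIZE 2K;
>
> -- indexes on j go UNUSABLE after the move and would need a rebuild
> ALTER TABLE j MOVE TABLESPACE ts_2k;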
>
> Potential option 2:
>
> Shoe-horn the 1000 records per customer (400K customers) into as few
> 8K DB blocks as I can through the use of BLOBs. The data represented
> by this single 400 million record table can be looked at logically as
> three lists: active, repeatable and historical.
> Active could be condensed to about 640 bytes and stored in row.
> Repeatable to about 6KB and stored out of row.
> Historical to just under 4000 bytes and stored in row.
>
> create table j (a references c, d BLOB, e BLOB, f BLOB)
>   lob (d,f) STORE AS
>     (TABLESPACE example
>      enable storage in row)
>   lob (e) STORE AS
>     (TABLESPACE example
>      disable storage in row);
>
> In so doing, compressing those 1000 records into perhaps two 8K DB
> blocks.
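>
> For what it's worth, the write path would look something like this
> (a sketch; the :cust_id and :packed_rec RAW binds and the packing
> format are assumptions, the application would serialize the 43-byte
> records itself):
>
> declare
>   v_lob blob;
> begin
>   -- lock the customer's row, then append one packed record to the
>   -- in-row "active" list
>   select d into v_lob from j where a = :cust_id for update;
>   dbms_lob.writeappend(v_lob, 43, :packed_rec);
>   commit;
> end;
> /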
Shouldn't you be more concerned about the table design? Table j contains a Cartesian product! Does that sound right? As far as I am concerned, you are approaching this from the wrong end.

You can fight symptoms all you want, with whatever contorted measure, like recreating the database with a smaller blocksize (this won't help a damn, as it won't reduce the amount of cache required), or converting the table to some LOB design. As the table design is completely FUBAR, THAT needs to be addressed.

Other than that, the only thing you can do is set up the recycle pool, make sure the table is in the recycle pool, and pray, over and over again, for some wisdom in your application developers, and more resources (like the partitioning option) for your database.

There are not only '50 ways to leave your lover' (Paul Simon), there are at least 50 ways to hell. This is surely one of them.
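For the record, the recycle pool setup amounts to two statements (the sizes here are placeholders, pick your own):

ALTER SYSTEM SET db_recycle_cache_size = 2G SCOPE=BOTH;

ALTER TABLE j STORAGE (BUFFER_POOL RECYCLE);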
--
Sybrand Bakker
Senior Oracle DBA

Received on Wed Apr 11 2007 - 03:52:31 CDT