Re: high "gc buffer busy"

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Wed, 12 Jan 2011 12:42:48 -0000
Message-ID: <h5SdnUKk66BYPLDQnZ2dnUVZ8s-dnZ2d_at_bt.com>



"charles" <dshproperty_at_gmail.com> wrote in message news:0448b00a-5519-478f-b3e4-893d3eebdc69_at_l7g2000vbv.googlegroups.com...
> Dear group,
>
> We are using Oracle 10g on Linux RAC with 4 nodes. We have a small
> table, no more than 300 rows, but most people will update/delete/
> insert on it during load testing, and the ASH report shows it is the
> cause of this contention.
>
> EVENT                                 CNT
> ------------------------------ ----------
> gc current retry                        4
> gc current grant 2-way                 31
> gc current request                     47
> gc cr request                          66
> gc cr multi block request              69
> gc current block busy                  79
> gc cr grant 2-way                      81
> gc current grant busy                 113
> gc current block 2-way                416
> gc cr block busy                      420
> gc cr block 2-way                     568
> gc buffer busy                       6448
>
> All the "gc buffer busy" waits point to one small table with no
> indexes or triggers, which sees roughly 2000 updates, 1800 inserts
> and 1600 selects. The table is created in an ASSM tablespace, so I
> guess freelists are not going to help here.
>
> Could somebody give me some advice?
>
> Thanks

How can you have no more than 300 rows if you have 1800 inserts and no deletes?

Before asking for a solution, make sure you are describing the problem properly.

No indexes means every select/update will have to do a full tablescan, which means you must constantly be moving all the table blocks across the interconnect. If you can't isolate the activity to just one node you need to spread the contention as much as possible.

If the table really is small and is going to stay small you could try implementing a single table hash cluster with one key per block, so two nodes will only compete for a block if they both want the same row at the same time. (With a single table hash cluster you can also avoid having to maintain an index and still get high precision access.)
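
As a rough sketch of the sort of DDL involved - all the names here are hypothetical, the key is assumed to be a positive integer column, and the SIZE figure assumes an 8KB block so that each hash key effectively gets a block to itself; you would derive the real figures from your actual row length and block size:

    create cluster small_tab_cluster ( id number )
        size 8000           -- close to one block per hash key, assuming 8KB blocks
        single table
        hashkeys 300        -- one hash key per expected row (rounded up to a prime)
        hash is id;         -- use the key column itself as the hash value

    create table small_tab (
        id       number,
        payload  varchar2(100)
    )
    cluster small_tab_cluster (id);

With that arrangement a lookup on id reads exactly one block, and there is no index to maintain and no index root or branch blocks to ship between nodes.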

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Received on Wed Jan 12 2011 - 06:42:48 CST
