Re: Performance issue after creating higher block size tablespace - a further question

From: Tim Gorman <>
Date: Mon, 01 Mar 2010 09:42:01 -0700
Message-ID: <>



The statistics in V$BUFFER_POOL_STATISTICS are cumulative since the instance started.  The fact you still see numbers for an 8K buffer pool does not mean that it is "active", it just means that it was there.  If you see the numbers for the 8K buffer pool incrementing, then that would indicate that it is still somehow "active", but you would have to watch the view over time to see that.

Buffer pools can come and go.  V$ views that are cumulative since instance startup are just that -- cumulative since instance startup.  They are not going to "clear" themselves because you're no longer using something that you were just using.
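Tim's "watch the view over time" check can be sketched like this (the 60-second interval is arbitrary, and DBMS_LOCK.SLEEP requires an EXECUTE grant on DBMS_LOCK):

```sql
-- Sketch: snapshot the 8K pool's counters, wait, then re-query.
-- If the numbers grow between runs, the pool is still actively in use.
SELECT block_size, db_block_gets, physical_reads, physical_writes
  FROM v$buffer_pool_statistics
 WHERE block_size = 8192;

EXEC dbms_lock.sleep(60);

SELECT block_size, db_block_gets, physical_reads, physical_writes
  FROM v$buffer_pool_statistics
 WHERE block_size = 8192;
```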

Hope this helps...
Tim Gorman
consultant -> Evergreen Database Technologies, Inc.
postal     => P.O. Box 630791, Highlands Ranch CO  80163-0791
website    =>
email      =>
mobile     => +1-303-885-4526
fax        => +1-303-484-3608
Lost Data? => for info about DUDE...

Saad Khan wrote:
Gurus, have a look at the following statistics:

10:38:31 SYS_at_cldprod2 > select block_size, BUFFER_BUSY_WAIT, DB_BLOCK_GETS, PHYSICAL_READS, PHYSICAL_WRITES from v$buffer_pool_statistics;

BLOCK_SIZE BUFFER_BUSY_WAIT DB_BLOCK_GETS PHYSICAL_READS PHYSICAL_WRITES
---------- ---------------- ------------- -------------- ---------------
      2048                0             0              0               0
      8192       1032171136     114987203     3.0442E+10        27422279
      4096         44045127    5118516066     9281895243       353200354
     16384                0             0              0               0

When I've removed the tablespace with the 8K block size, why is it still showing this many busy buffers and block gets for 8K? Yes, I didn't recycle the database, but how can I make it flush the 8K blocks from the SGA and give that memory to the 4K blocks?

Is it due to db_8k_cache_size being set to 64 MB?
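Most likely, yes: an explicitly set db_8k_cache_size stays carved out of the SGA whether or not any 8K tablespace still exists. Once the 8K tablespace is dropped, something like the following should release it (a sketch; Oracle will raise an error if any 8K-block tablespace is still online):

```sql
-- Release the dedicated 8K buffer cache once no 8K tablespaces remain;
-- with SGA_TARGET in effect, the freed memory returns to the auto-tuned pools.
ALTER SYSTEM SET db_8k_cache_size = 0 SCOPE = BOTH;
```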

On Mon, Mar 1, 2010 at 10:30 AM, Saad Khan <> wrote:

No, the problem didn't resolve, but since I needed to do something very urgently, I reverted the 8K change after reading all that Tom Kyte has written and what some of you suggested as well. So now it's back to 4K.

That move amounted to some reorg, but I don't think it will resolve the original performance issue. The "do something" song is still being drummed into my head.

On Fri, Feb 26, 2010 at 1:16 PM, girlgeek <> wrote:
I believe that Saad's problem got solved yesterday, but it gave me further thought.

Saad said that the application is processing one row at a time. Oracle operates in blocks. Is it possible that some of his increase in time when he raised the block size from 4K to 8K is simply the time needed to read/write the larger number of bytes for each row processed? Of course, I am assuming that each single row requires that a new block be read/written, and I am ignoring the fact that some of the blocks will already be in cache. Gurus, what are your answers to this further question?


Oliver Jost wrote:
Just my two bits on this one. If you are doing massive inserts, are you spending a lot of time allocating space? If so, your row cache may be very busy allocating more extents. You could pre-allocate some space to the segment and increase the NEXT extent size to accommodate future growth.
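A sketch of that pre-allocation (table name and sizes hypothetical; NEXT only matters for dictionary-managed or uniform-extent tablespaces):

```sql
-- Pre-allocate space so the big insert doesn't pause for
-- recursive extent allocation mid-load.
ALTER TABLE target_table ALLOCATE EXTENT (SIZE 500M);

-- Optionally enlarge NEXT for future growth (ignored under
-- locally managed AUTOALLOCATE tablespaces).
ALTER TABLE target_table STORAGE (NEXT 100M);
```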
Good luck,

*From:* on behalf of Saad Khan
*Sent:* Thu 2/25/2010 12:39 PM
*To:* Mark W. Farnham
*Subject:* Re: Performance issue after creating higher block size tablespace

No, the query plan didn't change; the cost changed only marginally.

Yes, I rebuilt the indexes after moving the tables.

This is 10g, so it gathers statistics accounting for skew. The LAST_ANALYZED column for these tables shows yesterday.

It's the joins that are killing the query, I think. I'm going to send the queries to the list.

DB_CACHE_SIZE was set to 0 because SGA_TARGET is set here, at 1.7 GB (it's a 32-bit installation).

I couldn't understand your last message. Maybe you could elaborate on it more.


On Thu, Feb 25, 2010 at 12:10 PM, Mark W. Farnham < <>> wrote:

   Did your query plan change?

   Did you also rebuild the indexes after you moved the tables?

   Gather statistics? Are they a big change from the last time you
   gathered statistics?

   Is the new tablespace in files that occupy comparable underlying
   volumes in terms of I/Os supported per unit time?

   Is your storage in some flavor of SAME, or were the tables being
   selected from formerly on independently operating units of i/o
   (especially separate from the insert target) and now you've lumped
   them all together?

   What was your db_cache_size before?

   Are you memory lean on the machine and using filesystems? Have you
   robbed the OS of file caching space by adding to the SGA size?

   This is a very critical app and I don't want to rebuild the "whole"
   thing. Moving a bunch of tables is entirely different, of course.

   Those are all bits of a partial change analysis you might do, not
   that you’ve stepped in it. If one or more of them is on target
   (measure, don’t guess) then you might have a shortcut out of your
   problem. Others might add to the list.

   Now if you had a time machine, I’d say get in it and measure
   things to evaluate what (if any) performance benefit there was to
   be expected if you could get i/o service time to zero by moving to
   8K. Then, if that idealized ceiling of possible benefit was
   significant, figure what the likely benefit was if everything
   meshed in your favor with no side effects. Then, if that still
   seemed worthwhile, plan and engineer the move so that you ruled
   out in advance negative side effects. (And I’m wondering why not
   rebuild the whole thing at 8K if the database block size was
   measured to predict an advantage.)

   So what to really do now? See where the time is going. One often
   useful bit of information is routing the output of the select to
   /dev/null and seeing how long that takes. If the lion's share of
   your time is in the select, fix that. Likewise, if you queue up
   the results of the select in a single table and just select from
   there and insert into the destination, does that reveal a
   bottleneck on the insert side?
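
   One way to split the test as described above (staging table name
   hypothetical, the actual join query elided):

   ```sql
   -- Time the read side alone: materialize the SELECT's result once.
   CREATE TABLE stage_results AS
     SELECT ...;   -- the stored procedure's join query goes here

   -- Then time the write side alone with a direct-path insert.
   INSERT /*+ APPEND */ INTO target_table
     SELECT * FROM stage_results;
   COMMIT;
   ```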

   Before you would move back, you would want to have some evidence
   that moving back would eliminate some problem. Unless of course
   the urgency now is such that just getting back where you were
   right away is more important than minimizing the amount of work to
   reach better performance. Then you could pretend you went through
   the time machine, figure out where your time is going and attack
   the problem from that standpoint.




   *From:* <> *On Behalf Of* Saad Khan
   *Sent:* Thursday, February 25, 2010 11:24 AM

   *To:* <>
   *Subject:* Performance issue after creating higher block size

   Hi gurus.

   I've a production database running Oracle 10.2.0.3 on SUSE Linux
   10. The default DB_BLOCK_SIZE for the database is 4K.

   There was a performance complaint from the users, and the
   developers asked me to look into it. They particularly complained
   about one stored procedure that was taking too much time. When I
   looked into the stored proc, I saw an insert statement into one
   particular table of somewhat more than 4 million rows, selecting
   from a bunch of other tables.

   So I created a new tablespace with an 8K block size and moved all
   the tables used in that SP into the new tablespace.
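
   For reference, a move like that is done per table, and it leaves
   the table's indexes UNUSABLE until rebuilt (names hypothetical):

   ```sql
   -- Move a table into the new 8K tablespace; the rows' ROWIDs change,
   -- so every index on the table must be rebuilt afterward.
   ALTER TABLE some_table MOVE TABLESPACE ts_8k;
   ALTER INDEX some_table_pk REBUILD;
   ```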

   And guess what: the feedback after the change was that it's taking
   almost double the time it took earlier. The AWR report shows a lot
   of user I/O activity, and the tablespace hit hardest is the new
   one. Is this due to the different block size of the new tablespace?
   Is Oracle finding it hard to manage 8K blocks inside an SGA
   designed originally for 4K?

   The db_cache_size is set to 8192 and db_8k_cache_size is also set
   to 8192.

   Is there any other step I can take? I don't want to revert back to
   4K; I think it should work.

   Any suggestions?

   Thanks in advance.

-- Received on Mon Mar 01 2010 - 10:42:01 CST
