
Re: Segment management auto clause

From: Richard Foote <richard.foote_at_bigpond.com>
Date: Thu, 15 May 2003 21:18:08 +1000
Message-ID: <yrKwa.34788$1s1.505960@newsfeeds.bigpond.com>


Hi Jonathan,

Comments in-line.

"Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote in message news:b9ua3g$bjo$1$8302bc10_at_news.demon.co.uk...
>
> Richard -
>
> A couple of comments in-line, not being
> picky, but a few thoughts on tests that
> you may or may not have done already.

You, picky? Naaahhh, never ;)

>
> I haven't seen multiple extents covered by a single bitmap
> block yet - it's always been one extent = one block for me.
> However, if you set up tests where all the extents are allocated
> in one long-running process to one table, then perhaps one
> map can be extended to cover adjacent extents. Again,
> though, I haven't seen it happen at the smaller end of the
> scale when it ought to be most important.
>

Yeah, there are a couple of scenarios where this is possible. One, as you suggest, is when a long-running process dumps a whole heap of stuff into a table; a single bitmap block will then span multiple extents. Of course, the extent size needs to be correspondingly smallish as well.

Perhaps the easiest way to see this, though, is actually at the smaller end. I was testing the effect of "ridiculously" small extents using the minimum extent size possible with ASSM (5 blocks). Create a tablespace with a 40K uniform extent size (assuming an 8K block size) and ASSM, and check out the bitmap blocks ...
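For anyone who wants to play along at home, the setup is roughly the following (the datafile path, object names, and the file#/block# in the dump are just placeholders):

CREATE TABLESPACE assm_tiny
  DATAFILE '/u01/oradata/assm_tiny01.dbf' SIZE 10M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 40K
  SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLE test_assm (id NUMBER, padding VARCHAR2(100))
  TABLESPACE assm_tiny;

-- load a decent number of rows, then see how many 5 block extents got allocated
SELECT extent_id, file_id, block_id, blocks
FROM   dba_extents
WHERE  segment_name = 'TEST_ASSM';

-- dump a first level bitmap block (take the file#/block# from the query above;
-- the numbers here are made up) and note which extents it maps
ALTER SYSTEM DUMP DATAFILE 7 BLOCK 10;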

>
> > Some first level bitmap blocks may only span 16 blocks,
> > however some first level bitmap blocks span 64 blocks or 128 blocks
> > or 256 blocks, etc
>
> If I recall correctly, you have to have fairly large segments (64MB?)
> before you get to 256 blocks - which means that many segments in an
> 'ordinary' database will be in the "1% space = bitmap blocks" range -
> and even the big segments will only be in the 0.5% space range.
> Whilst this isn't dramatic in terms of disk space, bitmap blocks tend
> to be quite warm, and therefore become a much more significant
> fraction of the buffer.

I agree that hot bitmap blocks could be an issue, but consider when they get accessed. During an FTS, they of course all need to be accessed. If, however, the frequency of the FTS and the associated number of cached bitmap blocks are an issue, then such a table would make a good candidate for the recycle pool to reduce "cache damage" (my term !!).

The other times bitmap blocks are accessed are during insertions. But generally speaking, it's only a limited number of the bitmap blocks that are ever "hot" at any point in time (as the root bitmap block ignores "full" bitmap blocks). So again, the damage is naturally reduced.

That said, if memory restrictions are an issue and the above two scenarios are prevalent in your system for many segments, then yeah, not good.
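Just to make the recycle pool suggestion above concrete, it's a one-liner (the table name is made up, and it assumes a recycle pool has actually been configured):

ALTER TABLE big_fts_table STORAGE (BUFFER_POOL RECYCLE);

-- confirm the assignment
SELECT table_name, buffer_pool
FROM   dba_tables
WHERE  table_name = 'BIG_FTS_TABLE';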

>
> Given the disparity between the tkprof time and
> the clock time on elapsed, I don't think the
> tkprof times for CPU can be trusted.

I agree. I think tkprof purchased the same "timing device" as I did once in Hong Kong many years ago ;)

>
> It might be interesting to see how much difference
> there is in latch behaviour.
>
> Given the variation in the clock time, did you repeat
> the experiment after dropping the tablepaces, but
> not the files, then recreating the tablespaces reusing
> the 'opposite' file to see how much effect the file positions
> had ?
>

No I didn't, but I'm pretty damn sure the ASSM tablespace had an unfair advantage in this regard ;) My comment that "perhaps the file management layer ain't so hot for the not_assm tablespace" was my little admission.
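For the record, the swap you describe would go something like this (names, paths and the uniform size are just placeholders):

-- drop the tablespaces but leave the datafiles on disk
DROP TABLESPACE assm_ts INCLUDING CONTENTS;
DROP TABLESPACE not_assm_ts INCLUDING CONTENTS;

-- recreate each tablespace on the file the *other* one used
CREATE TABLESPACE assm_ts
  DATAFILE '/u01/oradata/not_assm01.dbf' REUSE
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE not_assm_ts
  DATAFILE '/u01/oradata/assm01.dbf' REUSE
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT MANUAL;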

> >
> > If you notice, the number of LIOs is a little higher with ASSM but
> > due to the way FTS load data, the number of PIOs is somewhat less.
> > Then again, perhaps the file management layer ain't so hot for the
> > not_assm tablespace ...
> >
>
> The extra LIOs could be related to checking bitmap blocks
> for batches - and it is certainly interesting to see how far
> the physical I/O counts are from the BLOCKS recorded in
> the dba_tables view. In particular, does the reduction in
> PIOs have anything to do with the possibility that your buffer
> cache has got a nice warm 90+ bitmap blocks now 'wasting'
> space that could otherwise be holding more important data ?
> Next time you repeat the experiment, it would be interesting
> to check how many table blocks are in the buffer at the end
> of the test, and how many in each class.

I think you're right with your earlier points. Certainly the extra LIOs can be attributed to the bitmap blocks. However, the method or manner in which Oracle processes and subsequently deals with these bitmap blocks is important. At the end of the day, the key difference between the two is these bitmap blocks and how effectively Oracle deals with them. They "waste" space, they cause additional LIOs, they require additional processing and they consume additional memory. All negatives, I would say. But *if* these negatives are proportionally "small enough" to be balanced out by the potential benefits of reduced block contention and more effective use of storage (*especially* for widely variable row lengths and random delete patterns), then keep me on the mailing list.

It would certainly be interesting to see the blocks remaining in the cache at the end of the run, so I'll definitely add that to my to-do list. Whether such bitmap blocks prevent more important data from staying there depends on load, but I guess an argument could be made that the bitmap blocks are sufficiently important to warrant caching if it means reduced overheads for subsequent and necessary FTS. Are indexes really important? We don't want their data as such, just the data they point to ;)
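Checking what's left in the cache is easy enough, something along these lines (the table name is made up; class 1 is the ordinary data blocks, class 4 the segment header, and the higher class numbers cover the ASSM bitmap blocks):

SELECT bh.class#, COUNT(*) AS cached_blocks
FROM   v$bh bh, dba_objects o
WHERE  bh.objd = o.data_object_id
AND    o.object_name = 'TEST_ASSM'
GROUP  BY bh.class#
ORDER  BY bh.class#;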

>
> You might try running series of single row inserts on tables
> at that size, after doing some scattered deletes. Check how
> many different blocks of what type get latched - the behaviour
> I've seen on smaller tables is:
> seg header
> L2
> L1
> Table block.
>
> Which makes for 2 very hot blocks, and a handful of warm (L1)
> blocks when the going gets tough - instead of one hot freelist
> block. I'd be interested to see how this changes with size
> of table.
>

Thanks Jonathan, all good suggestions. I've decided that my current system of keeping track of experiments and test results is not coping too well !! I might need a database to store them all in. Say, there's an idea !!
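Roughly the sort of test I have in mind for that last suggestion (object names invented, predicates arbitrary):

-- scatter some deletes across an existing test table
DELETE FROM test_assm WHERE MOD(id, 10) = 0;
COMMIT;

-- then a series of single row inserts, checking which block classes get
-- visited/latched each time (segment header, L2 and L1 bitmap blocks,
-- table block)
INSERT INTO test_assm VALUES (100001, RPAD('x', 100));
COMMIT;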

I might sound very much as if I'm some sort of crusader for the rights of ASSM segments. I'm not; I'm just trying to determine what shape(s) the damn things really are and fit them into the right holes.

If you know what I mean?

Cheers

Richard
