Re: Segment management auto clause
Richard,

A couple of comments in-line - not being picky, but a few thoughts on tests that you may or may not have done already.
--
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk

The educated person is not the person who can answer the questions,
but the person who can question the answers -- T. Schick Jr

One-day tutorials: http://www.jlcomp.demon.co.uk/tutorial.html
____Denmark__May 21-23rd
____Sweden___June
____Finland__September
____Norway___September

Three-day seminar: see http://www.jlcomp.demon.co.uk/seminar.html
____UK_(Manchester)_May x 2
____Estonia___June 4th - 6th
____Australia_June 18th - 20th (Perth)
____Australia_June 23rd - 25th (t.b.a)
____USA_(CA, TX)_August

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html

"Richard Foote" <richard.foote_at_bigpond.com> wrote in message
news:zJrwa.34254$1s1.498229_at_newsfeeds.bigpond.com...

Received on Wed May 14 2003 - 15:47:53 CDT
> Secondly, there was some debate regarding each extent requiring its
> own set of bitmap blocks. This is incorrect. One bitmap block can map
> to many extents and indeed as an object grows, Oracle assigns more
> bitmaps within a bitmap block.
I haven't seen multiple extents covered by a single bitmap block yet - it's always been one extent = one block for me. However, if you set up tests where all the extents are allocated in one long-running process to one table, then perhaps one map can be extended to cover adjacent extents. Again, though, I haven't seen it happen at the smaller end of the scale when it ought to be most important.
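One way to test the one-bitmap-block-to-many-extents question directly is to locate the first block of each extent and dump it, then search the trace file for the "First Level Bitmap Block" tag. This is only a sketch - the table name ASSM_TAB is carried over from the test below, the file and block numbers are purely illustrative, and you need a suitably privileged session for the dump:

```sql
-- Find the first block of each extent of the test table
select extent_id, file_id, block_id, blocks
from   dba_extents
where  segment_name = 'ASSM_TAB'
order  by extent_id;

-- Dump the first couple of blocks of a chosen extent to the session
-- trace file, then grep the trace for "First Level Bitmap Block"
-- (file 7, block 9 are assumed values - substitute from the query above)
alter system dump datafile 7 block min 9 block max 10;
```

If two adjacent extents both start with blocks that are not L1 bitmap blocks, a single bitmap block is covering more than one extent.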
> Some first level bitmap blocks may only span 16 blocks, however some
> first level bitmap blocks span 64 blocks or 128 blocks or 256 blocks,
> etc
If I recall correctly, you have to have fairly large segments (64MB?) before you get to 256 blocks - which means that many segments in an 'ordinary' database will be in the "1% space = bitmap blocks" range - and even the big segments will only be in the 0.5% space range. Whilst this isn't dramatic in terms of disk space, bitmap blocks tend to be quite warm, and therefore become a much more significant fraction of the buffer.
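As a back-of-envelope check on those percentages: if each first-level bitmap block maps N data blocks, the metadata overhead is roughly 1/(N+1) of the segment (ignoring the much smaller number of L2/L3 blocks). A quick sketch of the arithmetic:

```sql
-- Approximate space overhead for one L1 bitmap block per N data blocks
select 16  as blks_per_l1, round(100/17,  2) as pct_overhead from dual
union all
select 64,  round(100/65,  2) from dual
union all
select 128, round(100/129, 2) from dual
union all
select 256, round(100/257, 2) from dual;
```

This gives roughly 5.9%, 1.5%, 0.8% and 0.4% - consistent with small segments sitting around the 1%+ range and big segments dropping below 0.5%.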
>
> Extra full table scan time of 17%+, well let's see. I would be really
>
I agree with your general observations. Since the blocks seem to be formatted and made available in chunks of 16, there isn't a lot of extra work for Oracle to check a bitmap block near the LHWM to see which 16-block chunks need to be hit with a multiblock read. And since Oracle doesn't seem to be too profligate about the decision to format another 16 blocks, the overhead should not be dramatic. (Except in the small table case - and as we all know, small tables do need indexes anyway ;)
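The formatted/unformatted split can be watched directly with dbms_space.space_usage, which reports ASSM segments in exactly these terms. A sketch, assuming the segment is ASSM_TAB in the current schema:

```sql
set serveroutput on
declare
  v_unf  number; v_unfb  number;
  v_fs1  number; v_fs1b  number;
  v_fs2  number; v_fs2b  number;
  v_fs3  number; v_fs3b  number;
  v_fs4  number; v_fs4b  number;
  v_full number; v_fullb number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'ASSM_TAB',
    segment_type       => 'TABLE',
    unformatted_blocks => v_unf,  unformatted_bytes => v_unfb,
    fs1_blocks => v_fs1, fs1_bytes => v_fs1b,
    fs2_blocks => v_fs2, fs2_bytes => v_fs2b,
    fs3_blocks => v_fs3, fs3_bytes => v_fs3b,
    fs4_blocks => v_fs4, fs4_bytes => v_fs4b,
    full_blocks => v_full, full_bytes => v_fullb);
  dbms_output.put_line('Unformatted: ' || v_unf);
  dbms_output.put_line('Full       : ' || v_full);
end;
/
```

Running this between batches of inserts shows how eagerly (or not) Oracle formats the next 16-block chunk.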
> SQL> insert into assm_tab select * from assm_tab;
Insert as select is a bit of a special case for ASSM - after all it's been created to handle high concurrency small inserts. You can't really be confident that the results of a test like this will be consistent with 'planned' use of ASSM.
> Commit complete.
That's one continuous, extremely clean, 2GB of ASSM - not really a typical example.
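For anyone wanting to reproduce this kind of comparison, the setup is presumably along these lines - a hedged reconstruction, since file names, sizes and the source table are assumptions, not taken from Richard's post:

```sql
-- Two locally managed tablespaces differing only in the
-- SEGMENT SPACE MANAGEMENT clause (file names/sizes assumed)
create tablespace assm_ts datafile '/u01/assm_ts.dbf' size 2200m
  extent management local autoallocate
  segment space management auto;

create tablespace not_assm_ts datafile '/u02/not_assm_ts.dbf' size 2200m
  extent management local autoallocate
  segment space management manual;

-- Identical tables, one per tablespace (source_tab is a placeholder)
create table assm_tab     tablespace assm_ts     as select * from source_tab;
create table not_assm_tab tablespace not_assm_ts as select * from source_tab;
```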
> SQL> select table_name, num_rows, blocks, empty_blocks, avg_space
> 2 from dba_tables
> 3 where table_name like '%ASSM%';
>
> TABLE_NAME                       NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE
> ------------------------------ ---------- ---------- ------------ ----------
> NOT_ASSM_TAB                      1712128      43352          679        906
> ASSM_TAB                          1712128      43505          527        922
>
> *** note, approx 0.35% more blocks allocated ***
>
>
> SQL> select segment_name, count(*) from dba_extents
> 2 where segment_name like '%ASSM%'
> 3 group by segment_name;
>
>
> SEGMENT_NAME COUNT(*)
> -------------------- ----------
> ASSM_TAB 114
> NOT_ASSM_TAB 114
>
>
> **** note each have 114 extents, used autoallocate in this example ****
> Elapsed: 00:08:53.07
>
> Statistics
> ----------------------------------------------------------
> 0 recursive calls
> 0 db block gets
> 154571 consistent gets
> 43294 physical reads
> 0 redo size
> 75588057 bytes sent via SQL*Net to client
> 1256054 bytes received via SQL*Net from client
> 114143 SQL*Net roundtrips to/from client
> 0 sorts (memory)
> 0 sorts (disk)
> 1712128 rows processed
>
> Statistics
> ----------------------------------------------------------
> 0 recursive calls
> 0 db block gets
> 154805 consistent gets
> 43203 physical reads
> 0 redo size
> 80025841 bytes sent via SQL*Net to client
> 1256054 bytes received via SQL*Net from client
> 114143 SQL*Net roundtrips to/from client
> 0 sorts (memory)
> 0 sorts (disk)
> 1712128 rows processed
>
>
> Better CPU, better elapsed ???
>
Given the disparity between the tkprof time and the clock time on elapsed, I don't think the tkprof times for CPU can be trusted. It might be interesting to see how much difference there is in latch behaviour. Given the variation in the clock time, did you repeat the experiment after dropping the tablespaces, but not the files, then recreating the tablespaces reusing the 'opposite' file to see how much effect the file positions had?
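The latch comparison could be as simple as snapshotting v$latch before and after each scan and diffing the numbers - a sketch, with the caveat that v$latch is instance-wide, so a quiet system is assumed:

```sql
-- Run before and after each test scan and compare the deltas
select name, gets, misses, sleeps
from   v$latch
where  name in ('cache buffers chains', 'cache buffers lru chain')
order  by name;
```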
> Well go figure ????
>
> If you notice, the number of LIOs is a little higher with ASSM but due
> to the way FTS load data, the number of PIOs is somewhat less. Then
> again, perhaps the file management layer ain't so hot for the not_assm
> tablespace ...
>
The extra LIOs could be related to checking bitmap blocks for batches - and it is certainly interesting to see how far the physical I/O counts are from the BLOCKS recorded in the dba_tables view. In particular, does the reduction in PIOs have anything to do with the possibility that your buffer cache has got a nice warm 90+ bitmap blocks now 'wasting' space that could otherwise be holding more important data? Next time you repeat the experiment, it would be interesting to check how many table blocks are in the buffer at the end of the test, and how many in each class.

You might try running a series of single row inserts on tables at that size, after doing some scattered deletes. Check how many different blocks of what type get latched - the behaviour I've seen on smaller tables is:

    seg header
    L2
    L1
    Table block

Which makes for 2 very hot blocks, and a handful of warm (L1) blocks when the going gets tough - instead of one hot freelist block. I'd be interested to see how this changes with size of table.
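The "how many blocks of each class are in the buffer" check can be done against v$bh. A sketch, using the ASSM_TAB name from the test (class# 1 is a data block and class# 4 the segment header; the higher class numbers cover the extent-management and bitmap blocks, though the exact mapping of class# to block type is not documented and is an assumption here):

```sql
-- Count buffered blocks of the test table, broken down by block class
select bh.class#, count(*) as buffered_blocks
from   v$bh bh
where  bh.objd = (select data_object_id
                  from   dba_objects
                  where  object_name = 'ASSM_TAB')
group  by bh.class#
order  by bh.class#;
```

Comparing the non-class-1 counts for ASSM_TAB against NOT_ASSM_TAB after identical scans would show how much buffer the bitmap blocks are actually holding on to.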
>
> Richard
>
>