Re: Tested 11g OLTP compression and found rather serious issue

From: joel garry <>
Date: Fri, 31 Jul 2009 09:59:10 -0700 (PDT)
Message-ID: <>

On Jul 31, 6:46 am, Mark D Powell <> wrote:
> On Jul 31, 5:02 am, "" <> wrote:
> > Oh, I forgot this important info, regret :
> > SQL> select * from v$version;
> > ---------------------------------------------------------------------------­-----
> > Oracle Database 11g Enterprise Edition Release - Production
> > PL/SQL Release - Production
> > CORE      Production
> > TNS for 32-bit Windows: Version - Production
> > NLSRTL Version - Production
> > - Kenneth Koenraadt
> Ken, I do not know how table compression works on 11g but when first
> introduced, if you updated a row in a compressed table Oracle stored
> the updated row back in uncompressed format.  Compression was
> suggested for tables that had little to no update activity.  Your
> results could be partially due to migrated rows if an enhancement to
> recompress the row was not added in 11g.
> Interested in any updates you come up with.
> HTH -- Mark D Powell --

Of course, I have no idea how this really works, but see metalink Note: 466362.1

"As an example on what would happen when inserting into block that is part of a compressed table, the cycle would be like :

  • The block is empty and available for inserts.
  • When you start inserting into this block, data is stored in an uncompressed format (like for uncompressed tables).
  • However, as soon as you reach the PCTFREE of that block, the data is automatically compressed, potentially reducing the space it originally occupied.
  • This allows for new uncompressed inserts to take place in the same block, until PCTFREE is reached again. At that point compression is triggered again to reduce space occupation in the block."
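To make the cycle above concrete, here is a rough sketch of the kind of test involved (all object names are made up, and this assumes an 11g schema you can play in; COMPRESS FOR ALL OPERATIONS is the 11.1 clause that enables the OLTP-style compression the note describes):

```sql
-- Hypothetical sketch: create an OLTP-compressed table, load it, and
-- compare its size against an uncompressed copy of the same data.
CREATE TABLE comp_test (
  id   NUMBER,
  pad  VARCHAR2(200)
) PCTFREE 10
  COMPRESS FOR ALL OPERATIONS;

INSERT INTO comp_test
  SELECT level, RPAD('x', 200, 'x')
  FROM   dual CONNECT BY level <= 10000;
COMMIT;

-- If the PCTFREE-triggered compression fired during the load, the
-- compressed table should occupy noticeably fewer blocks.
SELECT segment_name, blocks
FROM   user_segments
WHERE  segment_name = 'COMP_TEST';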

So I'm wondering whether there is a bug, or whether Kenneth has some combination of PCTFREE and compressible data that doesn't trigger the compression with this particular pattern of updating. Since each update adds only one byte per row, the additional length needed is only the number of rows in the block, and the wasted space below PCTFREE on the original load is less than that (assuming a row that would have pushed the original insert over PCTFREE winds up in the next block). I'm also wondering what the first update of object_id=object_id actually does.
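The one-byte-per-row scenario could be reproduced along these lines (a sketch only; it assumes a compressed table T with a VARCHAR2 column PAD already loaded, and the names are made up):

```sql
-- Hypothetical sketch: grow every row by one byte, then look for
-- migrated/chained rows via CHAIN_CNT.
UPDATE t SET pad = pad || 'y';
COMMIT;

-- ANALYZE (rather than DBMS_STATS) populates CHAIN_CNT, which counts
-- chained and migrated rows.
ANALYZE TABLE t COMPUTE STATISTICS;

SELECT chain_cnt, blocks
FROM   user_tables
WHERE  table_name = 'T';
```

If CHAIN_CNT jumps to something near the row count after a mass one-byte update, that would support the migrated-rows theory.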

Again, this is all speculation, but it suggests experiments that could be verified with tracing and block dumps - especially my assumptions, which may mean the last line of the metalink quote is misleading. If that line is not misleading, it would imply the update works completely differently from the insert, which jams as much as possible into the block under PCTFREE; any large proportion of updates would then necessarily explode in size with massive chaining. That would seem to say the docs lie about the applicability of this feature to OLTP: even though they do say it should only be used on data that isn't changing much, what a weasel line.
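For the block-dump side of those experiments, something like this would show whether rows in a given block are actually stored compressed after the updates (a sketch; the file and block numbers are placeholders you'd substitute from the first query):

```sql
-- Hypothetical: find the file/block of one row of the table under test,
-- then dump that block to the trace file for inspection.
SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS fno,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS blk
FROM   t
WHERE  ROWNUM = 1;

-- Substitute the fno/blk values returned above.
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 134;
```

The resulting trace file shows the row pieces and whether the block holds a compression symbol table, which would settle the insert-vs-update question more convincingly than segment sizes alone.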

That's probably unfair; I think what the docs should make clear is that the feature is meant for the parts of OLTP systems that have settled down and aren't being updated any more - data that in the past would have been archived out. Now you'd probably do something like move the data into compressed partitions or tablespaces, depending on options. This could turn a simple app upgrade that merely adds a column into a huge undertaking.
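The move-settled-data-into-compressed-storage approach would look roughly like this (a sketch with made-up table, partition, and index names):

```sql
-- Hypothetical: once a partition has settled down, rebuild it
-- compressed in place.
ALTER TABLE orders MOVE PARTITION p_2008
  COMPRESS FOR ALL OPERATIONS;

-- The move marks local index partitions UNUSABLE, so they need a
-- rebuild afterward.
ALTER INDEX orders_ix REBUILD PARTITION p_2008;
```

That rebuild step is part of why the "huge undertaking" concern is real: every move invalidates indexes and has to be scheduled, and DDL restrictions on compressed tables can complicate an otherwise trivial schema change.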


So, how long do you need data?
Received on Fri Jul 31 2009 - 11:59:10 CDT
