Re: Tested 11g OLTP compression and found rather serious issue

From: John Hurley <johnbhurley_at_sbcglobal.net>
Date: Fri, 31 Jul 2009 16:18:40 -0700 (PDT)
Message-ID: <497d756d-18d9-4cda-9765-d941380c6854_at_26g2000yqk.googlegroups.com>



On Jul 31, 4:50 am, "tamkat..._at_gmail.com" <tamkat..._at_gmail.com> wrote:

snip

> Hi,
>
> I ran the script below in a series of 11g OLTP compression tests. Note
> that the compression ratio is fine right after the inserts, and after
> we update the OBJECT_ID column it stays the same. But after we update
> a (mostly) NULL column (SUBOBJECT_NAME), the compressed segment
> *explodes* to a size even bigger than the uncompressed one! The latter
> update also takes much longer to complete (no surprise).
>
> Seems like a pitfall... Imagine a 100 GB compressed table in
> production and a seemingly innocent update made against it...
>
> Any ideas? (No comments on the commits inside a loop, thank you; as
> mentioned, they are there to simulate OLTP.)
>
> - Kenneth Koenraadt
>
> ***************
>
> SQL> -- Create an ASSM tablespace with small extent size for testing
> SQL> create tablespace t1 uniform size 128k;
>
> Tablespace created.
>
> SQL>
> SQL> -- Create 2 logically identical tables, one non-compressed and
> one OLTP compressed
>
> SQL> create table c1 tablespace t1 as select * from dba_objects where
> 0 = 1;
>
> Table created.
>
> SQL> create table c2 compress for all operations tablespace t1 as
> select * from dba_objects where 0 = 1;
>
> Table created.
>
> SQL>
> SQL>
> SQL> -- Simulate OLTP
> SQL> declare
>   2
>   3  cursor c is
>   4  select * from dba_objects;
>   5  i binary_integer;
>   6  j binary_integer := 0;  -- initialize so the mod(j,10) commit fires
>   7  begin
>   8
>   9  for i in 1..10 loop
>  10    for c_rec in c loop
>  11      j := j + 1;
>  12      insert into c1 values c_rec;
>  13      insert into c2 values c_rec;
>  14      if mod(j,10) = 0 then
>  15        commit;
>  16      end if;
>  17    end loop;
>  18    end loop;
>  19  end;
>  20  /
>
> PL/SQL procedure successfully completed.
>
> SQL> commit;
>
> Commit complete.
>
> SQL>
> SQL> -- Calculate compress ratio
> SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
> dba_segments a, dba_segments b
>   2  where a.segment_name = 'C1'
>   3  and b.segment_name = 'C2';
>
> COMPRESS_RATIO
> --------------
>            ,48
>
> SQL>
> SQL> -- now update every row in compressed table, non-null column
> SQL>
> SQL> update c2 set object_id = object_id;
>
> 139820 rows updated.
>
> SQL> commit;
>
> Commit complete.
>
> SQL>
> SQL> -- Calculate compress ratio again
> SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
> dba_segments a, dba_segments b
>   2  where a.segment_name = 'C1'
>   3  and b.segment_name = 'C2';
>
> COMPRESS_RATIO
> --------------
>            ,48
>
> SQL>
> SQL> -- now update every row in compressed table, null column
> SQL> update c2 set subobject_name = 'a';
>
> 139820 rows updated.
>
> SQL> commit;
>
> Commit complete.
>
> SQL>
> SQL> -- Calculate compress ratio again
> SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
> dba_segments a, dba_segments b
>   2  where a.segment_name = 'C1'
>   3  and b.segment_name = 'C2';
>
> COMPRESS_RATIO
> --------------
>           1,19
>
> SQL>
> *********************
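
One way to see where the extra space actually went after the SUBOBJECT_NAME update is to look at block-level space usage. This is only a rough sketch, assuming the C2 table from the run above and the current schema as owner (DBMS_SPACE.SPACE_USAGE needs an ASSM segment, which tablespace t1 gives you):

set serveroutput on
declare
  -- OUT parameters for dbms_space.space_usage
  l_unf  number; l_unf_b  number;   -- unformatted blocks
  l_fs1  number; l_fs1_b  number;   -- blocks 0-25% free
  l_fs2  number; l_fs2_b  number;   -- blocks 25-50% free
  l_fs3  number; l_fs3_b  number;   -- blocks 50-75% free
  l_fs4  number; l_fs4_b  number;   -- blocks 75-100% free
  l_full number; l_full_b number;   -- full blocks
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'C2',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks         => l_fs1,  fs1_bytes         => l_fs1_b,
    fs2_blocks         => l_fs2,  fs2_bytes         => l_fs2_b,
    fs3_blocks         => l_fs3,  fs3_bytes         => l_fs3_b,
    fs4_blocks         => l_fs4,  fs4_bytes         => l_fs4_b,
    full_blocks        => l_full, full_bytes        => l_full_b);
  dbms_output.put_line('full blocks         : ' || l_full);
  dbms_output.put_line('0-25%  free blocks  : ' || l_fs1);
  dbms_output.put_line('25-50% free blocks  : ' || l_fs2);
  dbms_output.put_line('50-75% free blocks  : ' || l_fs3);
  dbms_output.put_line('75-100% free blocks : ' || l_fs4);
  dbms_output.put_line('unformatted blocks  : ' || l_unf);
end;
/

Running that right after the inserts and again after each update should show whether the existing blocks simply become less packed or whether the segment grows by adding new blocks.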

You might want to volley this over to asktom ( http://asktom.oracle.com ), especially since you have a test case, and get his read on it.
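
Another quick check, again just a sketch against the same C2 table: the pattern (a NULL column updated to a value, the segment growing past the uncompressed size, and the update running slowly) looks like it could involve rows being migrated out of their compressed blocks. CHAIN_CNT after an old-fashioned ANALYZE would show that:

-- populates CHAIN_CNT (migrated/chained rows) in USER_TABLES
analyze table c2 compute statistics;

select table_name, num_rows, blocks, chain_cnt
  from user_tables
 where table_name = 'C2';

A CHAIN_CNT that is large relative to NUM_ROWS would point at row migration rather than just poorer compression.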
