Re: Tested 11g OLTP compression and found rather serious issue

From: Tamkatten_at_gmail.com <tamkatten_at_gmail.com>
Date: Sat, 8 Aug 2009 11:09:50 -0700 (PDT)
Message-ID: <c3c48bcf-640d-4351-801c-9b6f13267069_at_j9g2000vbp.googlegroups.com>


On 31 Jul., 10:50, "tamkat..._at_gmail.com" <tamkat..._at_gmail.com> wrote:
> Hi,
>
> I ran the below script in a series of 11g OLTP compression tests. Note
> that the compression rate is fine right after the inserts. And after
> we update the OBJECT_ID column, the compression rate remains the same.
> But after we update a (mostly) NULL column (SUBOBJECT_NAME), the
> compressed segment *explodes* to a size even bigger than the
> uncompressed one! The latter update also takes much longer time to
> complete (no surprise).
>
> Seems like a pitfall... Imagine a 100 GB compressed table in
> production and we make a seemingly innocent update on it...
>
> Any ideas? (No comments on the commits inside a loop, thank you, as
> mentioned below it is for simulating OLTP).
>
> - Kenneth Koenraadt
>
> ***************
>
> SQL> -- Create an ASSM tablespace with small extent size for testing
> SQL> create tablespace t1 uniform size 128k;
>
> Tablespace created.
>
> SQL>
> SQL> -- Create 2 logically identical tables, one non-compressed and
> one OLTP compressed
>
> SQL> create table c1 tablespace t1 as select * from dba_objects where
> 0 = 1;
>
> Table created.
>
> SQL> create table c2 compress for all operations tablespace t1 as
> select * from dba_objects where 0 = 1;
>
> Table created.
>
> SQL>
> SQL>
> SQL> -- Simulate OLTP
> SQL> declare
>   2
>   3  cursor c is
>   4  select * from dba_objects;
>   5  i binary_integer;
>   6  j binary_integer := 0;
>   7  begin
>   8
>   9  for i in 1..10 loop
>  10    for c_rec in c loop
>  11      j := j + 1;
>  12      insert into c1 values c_rec;
>  13      insert into c2 values c_rec;
>  14      if mod(j,10) = 0 then
>  15        commit;
>  16      end if;
>  17    end loop;
>  18    end loop;
>  19  end;
>  20  /
>
> PL/SQL procedure successfully completed.
>
> SQL> commit;
>
> Commit complete.
>
> SQL>
> SQL> -- Calculate compress ratio
> SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
> dba_segments a, dba_segments b
>   2  where a.segment_name = 'C1'
>   3  and b.segment_name = 'C2';
>
> COMPRESS_RATIO
> --------------
>            ,48
>
> SQL>
> SQL> -- now update every row in compressed table, non-null column
> SQL>
> SQL> update c2 set object_id = object_id;
>
> 139820 rows updated.
>
> SQL> commit;
>
> Commit complete.
>
> SQL>
> SQL> -- Calculate compress ratio again
> SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
> dba_segments a, dba_segments b
>   2  where a.segment_name = 'C1'
>   3  and b.segment_name = 'C2';
>
> COMPRESS_RATIO
> --------------
>            ,48
>
> SQL>
> SQL> -- now update every row in compressed table, null column
> SQL> update c2 set subobject_name = 'a';
>
> 139820 rows updated.
>
> SQL> commit;
>
> Commit complete.
>
> SQL>
> SQL> -- Calculate compress ratio again
> SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
> dba_segments a, dba_segments b
>   2  where a.segment_name = 'C1'
>   3  and b.segment_name = 'C2';
>
> COMPRESS_RATIO
> --------------
>           1,19
>
> SQL>
> *********************

Hi again,

Further findings on OLTP compression:

  1. Reducing a column's width (e.g. update mytable set col1 = substr(col1,1,trunc(length(col1)/2))) and then expanding it again also seems to make the segment explode.
  2. You have a compressed table with approx. 130,000 rows and 2 columns having exactly the same content. You then issue an update to set one of the columns equal to the other. No logical change, though a physical one. The segment explodes again... the case is below.
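For reference, finding 1 can be reproduced along these lines against the c2 table from the first post. This is an untested sketch; the substr/rpad expressions are illustrative, not the exact statements I ran:

```sql
-- Hypothetical repro of finding 1: shrink a column, then expand it again.
-- c2 is the OLTP-compressed copy of dba_objects from the first test.
update c2 set object_name = substr(object_name, 1, trunc(length(object_name)/2));
commit;
update c2 set object_name = rpad(object_name, 30, 'x');
commit;
-- Re-run the dba_segments compress-ratio query afterwards to see the growth.
```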

As for now, I would be reluctant to use OLTP compression on even rarely updated tables. Any comments/findings?
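One workaround worth testing: after heavy updates, rebuild the segment so the rows are compressed again. A sketch (note that ALTER TABLE ... MOVE rewrites the table, requires free space for the copy, and marks dependent indexes UNUSABLE; the index name below is hypothetical):

```sql
-- Rebuild the updated table so its rows are re-compressed.
alter table c2 move compress for all operations;
-- Any indexes on c2 are UNUSABLE after the move and need a rebuild, e.g.:
-- alter index c2_ix rebuild;   -- c2_ix is a placeholder name
```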


SQL> -- Create an ASSM tablespace with small extent size for testing
SQL> drop tablespace t1 including contents and datafiles;

Tablespace dropped.

SQL> create tablespace t1 uniform size 128k;

Tablespace created.

SQL>
SQL>
SQL> create table c1 (col1 varchar2(30), col2 number(10), col3 varchar2(30), col4 number(10)) tablespace t1;

Table created.

SQL>
SQL> create table c2 (col1 varchar2(30), col2 number(10), col3 varchar2(30), col4 number(10)) compress for all operations tablespace t1;

Table created.

SQL>
SQL>
SQL>
SQL>
SQL> -- Simulate OLTP
SQL> declare

  2
  3 cursor c is
  4 select object_id,object_name
  5 from dba_objects;
  6
  7 i binary_integer;
  8 j binary_integer := 0;
  9
 10 begin
 11
 12 for i in 1..10 loop
 13 for c_rec in c loop
 14      j := j + 1;
 15      insert into c1 values (c_rec.object_name, c_rec.object_id, c_rec.object_name, c_rec.object_id);
 16      insert into c2 values (c_rec.object_name, c_rec.object_id, c_rec.object_name, c_rec.object_id);
 17      if mod(j,10) = 0 then
 18        commit;
 19      end if;

 20 end loop;
 21 end loop;
 22 end;
 23 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> -- Calculate compress ratio
SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
  2 dba_segments a, dba_segments b
  3 where a.segment_name = 'C1'
  4 and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
           ,71

SQL>
SQL> commit;

Commit complete.

SQL>
SQL>
SQL> update c1 set col3 = col1;

131440 rows updated.

SQL>
SQL> update c2 set col3 = col1;

131440 rows updated.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL>
SQL> select trunc(b.bytes/a.bytes,2) as compress_ratio  from
  2 dba_segments a, dba_segments b
  3 where a.segment_name = 'C1'
  4 and b.segment_name = 'C2';

COMPRESS_RATIO
--------------
           1,2

Received on Sat Aug 08 2009 - 13:09:50 CDT
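To dig further into where the growth comes from, block-level usage of the compressed segment can be inspected with DBMS_SPACE.SPACE_USAGE, which is valid for ASSM segments like these. A sketch, run as the owner of C2:

```sql
SQL> set serveroutput on
SQL> declare
  -- FS1..FS4 count blocks by how much free space they have;
  -- FULL counts blocks with no insertable free space left.
  l_unf  number; l_unf_b  number;
  l_fs1  number; l_fs1_b  number;
  l_fs2  number; l_fs2_b  number;
  l_fs3  number; l_fs3_b  number;
  l_fs4  number; l_fs4_b  number;
  l_full number; l_full_b number;
begin
  dbms_space.space_usage(user, 'C2', 'TABLE',
    l_unf, l_unf_b, l_fs1, l_fs1_b, l_fs2, l_fs2_b,
    l_fs3, l_fs3_b, l_fs4, l_fs4_b, l_full, l_full_b);
  dbms_output.put_line('full blocks:        ' || l_full);
  dbms_output.put_line('unformatted blocks: ' || l_unf);
end;
/
```

Comparing the FULL and FS1..FS4 counts before and after the updates should show whether the extra space is mostly decompressed row migration or half-empty blocks.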
