Re: OLTP compression slow update

From: David Fitzjarrell <oratune_at_yahoo.com>
Date: Thu, 27 Dec 2012 13:40:31 -0800 (PST)
Message-ID: <1356644431.21819.YahooMailNeo_at_web121604.mail.ne1.yahoo.com>



OLTP compression operates on 'full' blocks (those where the block's fullness threshold, governed by PCTFREE, has been reached) before compression is enacted.  Updating blocks compressed for OLTP requires uncompressing the data, performing the updates, then compressing the blocks again once, as I previously mentioned, the block fills up.  All 'full' blocks are compressed (if they aren't already) in a sort of 'batch' operation, so OLTP compression results in a mixture of compressed and uncompressed blocks after DML, and it operates on any DML (inserts, updates, deletes), not just direct path inserts.  As such, updates to OLTP compressed data can be resource intensive.  I have noted that updating OLTP compressed data can take almost twice as long as updating uncompressed data (a fact already revealed in this thread).
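A minimal sketch of how one could observe this mixture of compressed and uncompressed blocks, assuming 11gR2 syntax (COMPRESS FOR OLTP) and a made-up table name; DBMS_COMPRESSION.GET_COMPRESSION_TYPE reports per row whether the row currently sits in a compressed block:

  -- hypothetical demo table, 11gR2 syntax
  CREATE TABLE demo_oltp COMPRESS FOR OLTP
  AS SELECT * FROM all_objects;

  -- updates decompress the touched blocks; they are only re-compressed
  -- once the block fills up again
  UPDATE demo_oltp SET object_name = object_name WHERE object_id < 1000;
  COMMIT;

  -- per-row compression state (per the 11.2 docs, 1 should mean not
  -- compressed and 2 compressed for OLTP)
  SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'DEMO_OLTP', rowid) AS comp_type,
         COUNT(*) AS cnt
  FROM   demo_oltp
  GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'DEMO_OLTP', rowid);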

David Fitzjarrell



From: Martin Berger <martin.a.berger_at_gmail.com>
To: Michael Dinh <mdinh235_at_gmail.com>
Cc: Raj Jamadagni <rjamya_at_gmail.com>; oracle-l <oracle-l_at_freelists.org>; kevin.jernigan_at_oracle.com
Sent: Thursday, December 27, 2012 1:49 PM
Subject: Re: OLTP compression slow update

Maybe we can sort the situation out a little bit. For me the main question is still: "Why is execution X (compressed) slower by a factor of 1.79 than execution Y (uncompressed)?" Is this still correct?
I know of two methods Oracle provides to answer this question: system/session statistics and wait events.  Does anyone know other methods to find out what's going on? Maybe Kevin or others can give us some insight into which statistics or events we can expect and measure to identify that the process is 'in compression A' or 'doing compression task B'.
If no such information is available, I can only suggest collecting/sampling stack traces.
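A minimal sketch of the statistics/wait-event approach, assuming the 11g session-level views and the heap segment compression ('HSC ...') statistic names; snapshot these before and after the UPDATE in both runs and diff the values (treat the LIKE patterns as guesses, the exact names vary by version):

  -- session statistics: capture before and after the UPDATE, then diff
  SELECT sn.name, st.value
  FROM   v$mystat st
         JOIN v$statname sn ON sn.statistic# = st.statistic#
  WHERE  sn.name LIKE 'HSC%'          -- heap segment compression counters
     OR  sn.name LIKE '%compress%'
  ORDER  BY sn.name;

  -- wait events seen by the current session
  SELECT event, total_waits, time_waited_micro
  FROM   v$session_event
  WHERE  sid = SYS_CONTEXT('USERENV', 'SID')
  ORDER  BY time_waited_micro DESC;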

Jared's and Raj's suggestions are totally valid, with the primary target of 'avoiding the slow path', but I'd like to 'understand the slow path' first and then decide how/why to act.

just my .02 €
  Martin

On Thu, Dec 27, 2012 at 7:50 PM, Michael Dinh <mdinh235_at_gmail.com> wrote:

> Thanks all for the response.
> Still testing and will share results.
>
> -Michael.
>
> On Thu, Dec 27, 2012 at 4:52 AM, rjamya <rjamya_at_gmail.com> wrote:
>
> > I saw this first in early 10g versions of plain vanilla compression (not
> > the fancy OLTP kind). We were analyzing http/smtp logs collected
> > company-wide; initially we had daily partitions. Since the data started
> > getting large quickly (we had open web at that time), I introduced
> > partition compression, and it worked like magic. When updates (due to
> > occasional re-loads, corrections, ranking etc.) to compressed partitions
> > took long, we redesigned our strategy: we kept 7 days' worth of
> > partitions uncompressed, and everything older was compressed. That worked
> > out well in the end. I guess the update issue on compressed data still
> > needs work then :)
> > Raj
>
>
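A rough sketch of the partition strategy Raj describes, with made-up table, index and partition names and basic table compression as in his 10g setup: recent partitions stay uncompressed so updates take the cheap path, and partitions that fall out of the 7-day window are compressed in place.

  -- hypothetical daily-partitioned log table (names made up)
  CREATE TABLE web_logs (
    log_date DATE,
    url      VARCHAR2(4000),
    bytes    NUMBER
  )
  PARTITION BY RANGE (log_date) (
    PARTITION p20121220 VALUES LESS THAN (DATE '2012-12-21'),
    PARTITION p20121226 VALUES LESS THAN (DATE '2012-12-27'),
    PARTITION p20121227 VALUES LESS THAN (DATE '2012-12-28')
  );

  CREATE INDEX web_logs_ix ON web_logs (log_date) LOCAL;

  -- once a partition falls out of the 7-day window, compress it in place;
  -- the local index partition goes UNUSABLE on the move and needs a rebuild
  ALTER TABLE web_logs MOVE PARTITION p20121220 COMPRESS;
  ALTER INDEX web_logs_ix REBUILD PARTITION p20121220;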

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Dec 27 2012 - 22:40:31 CET