Re: OLTP compression slow update

From: Martin Berger <martin.a.berger_at_gmail.com>
Date: Thu, 27 Dec 2012 23:09:58 +0100
Message-ID: <CALH8A93RxPJrpWy2j85Z3Vezd=4SJ7Y668eJSUcFa=R5YWyDgQ_at_mail.gmail.com>



David,
I totally agree with your description (well, I'm not sure whether the block must be uncompressed in its entirety, but that's not the issue here, I'd say). I would just like to get some information such as wait events, e.g. a "compressing block" event with p1=file, p2=block, p3=? - but as far as I understand, it is not a "wait" for Oracle as long as the server process is doing some work (from the client's point of view it is a kind of "wait"). The second thing would be for Oracle to increment some sesstat counters, but I don't know of any regarding compression.
So can we only guess, by external timing, at which statement the UPDATE led to a block decompression/compression and where it ran against an uncompressed (below PCTUSED) block?
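To illustrate what I'd hope for: a rough sketch of how one could hunt for such counters. The LIKE patterns (including the 'HSC%' heap-segment-compression prefix) are just guesses to be checked against the contents of v$statname in your version, not documented counter names:

```sql
-- Sketch only: snapshot any session statistics whose names hint at
-- compression, once before and once after the UPDATE; a counter that
-- moves would be a candidate for attributing time to compression work.
-- The name patterns below are assumptions, not documented counters.
SELECT sn.name, ss.value
  FROM v$statname sn
  JOIN v$sesstat  ss ON ss.statistic# = sn.statistic#
 WHERE ss.sid = SYS_CONTEXT('USERENV', 'SID')
   AND (LOWER(sn.name) LIKE '%compress%' OR sn.name LIKE 'HSC%')
 ORDER BY sn.name;
```

Diffing the two snapshots around the UPDATE would at least tell us whether any instrumentation moves at all.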

Martin

On Thu, Dec 27, 2012 at 10:40 PM, David Fitzjarrell <oratune_at_yahoo.com> wrote:

> OLTP compression operates on 'full' blocks (those where PCTUSED has been
> reached); only then is compression enacted. Updating blocks compressed for
> OLTP requires uncompressing the data, performing the updates, and then
> compressing the blocks again once, as I previously mentioned, PCTUSED has
> been reached. All 'full' blocks are compressed (if they aren't already) in
> a sort of 'batch' operation; thus OLTP compression results in a mixture of
> compressed and uncompressed blocks after DML (inserts, updates, deletes),
> and it operates on any DML, not just direct-path inserts. As such, updates
> to OLTP-compressed data can be resource-intensive. I have noted that
> updating OLTP-compressed data can take almost twice as long as updating
> uncompressed data (a fact already revealed in this thread).
>
> David Fitzjarrell
>
>
> *From:* Martin Berger <martin.a.berger_at_gmail.com>
> *To:* Michael Dinh <mdinh235_at_gmail.com>
> *Cc:* Raj Jamadagni <rjamya_at_gmail.com>; oracle-l <oracle-l_at_freelists.org>;
> kevin.jernigan_at_oracle.com
> *Sent:* Thursday, December 27, 2012 1:49 PM
>
> *Subject:* Re: OLTP compression slow update
>
> Maybe we can sort the situation out a little.
> For me the main question is still: "Why is execution X (compressed)
> slower by a factor of 1.79 than execution Y (uncompressed)?"
> Is this still correct?
> I know of two methods Oracle provides to answer this question:
> "system/session statistics" and "wait events". Does anyone know of other
> methods to find out what's going on?
> Maybe Kevin or others can give us some insight into which statistics or
> events we can expect and measure to identify that the process is 'in
> compression A' or 'doing compression task B'?
> If no such information is available, I can only suggest
> collecting/sampling stack traces.
>
> Jared's and Raj's suggestions are totally valid, with the primary target
> of 'avoiding the slow path', but I'd like to 'understand the slow path'
> and then decide how/why to act.
>
> just my .02 €
> Martin
>
>
> On Thu, Dec 27, 2012 at 7:50 PM, Michael Dinh <mdinh235_at_gmail.com> wrote:
>
> > Thanks all for the responses.
> > Still testing and will share results.
> >
> > -Michael.
> >
> > On Thu, Dec 27, 2012 at 4:52 AM, rjamya <rjamya_at_gmail.com> wrote:
> >
> > > I first saw this in early 10g versions with plain vanilla compression
> > > (not fancy OLTP etc.). We were analyzing http/smtp logs collected
> > > company-wide. Initially we had daily partitions. Since they started
> > > getting large quickly (we had open web access at that time), I
> > > introduced partition compression, and it worked like magic. When
> > > updates (due to occasional re-loads, corrections, ranking, etc.) to
> > > compressed partitions took too long, we redesigned our strategy: we
> > > kept 7 days' worth of partitions uncompressed, and everything older
> > > was compressed. That worked out well in the end. I guess the update
> > > issue on compressed data still needs work then :)
> > > Raj
> >
> >
>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Dec 27 2012 - 23:09:58 CET
