RE: OLTP compression slow update

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Thu, 27 Dec 2012 17:18:06 -0500
Message-ID: <010f01cde480$0ca43f90$25ecbeb0$_at_rsiz.com>



Of course, inserting a bunch of skeletal rows and then doing batches of updates to fill in most of the rest of the column values is a pecimal method that is frequently used nonetheless. If you think for a moment about the interaction of filling blocks with skeletal rows, OLTP compression, and then updating to fill in the missing column values, you may be surprised that the performance degradation is as mild as it is. (This is not what the original poster is doing; I just thought this thread needed a mention of it in addition to David's warning about how the feature works.)
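(A minimal sketch of the pattern I mean, with purely hypothetical table and column names:)

   -- step 1: insert skeletal rows carrying little more than the keys
   INSERT INTO orders (order_id, created) VALUES (:id, SYSDATE);
   COMMIT;
   -- step 2: a later batch pass fills in the remaining columns,
   -- widening every row and forcing compressed blocks to be reworked
   UPDATE orders
      SET customer_name = :name, ship_address = :addr, status = :status
    WHERE order_id = :id;
   COMMIT;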

If, in anticipation of trying to avoid excessive row migration, the DBA has adjusted pctfree and pctused, the interaction with OLTP compression is probably not helped.
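(Something like the following, where the numbers are arbitrary; note that pctused is ignored in ASSM tablespaces anyway:)

   -- leave extra room in each block so skeletal rows can grow in place
   ALTER TABLE orders PCTFREE 30 PCTUSED 40;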

(You won't yet find pecimal in the dictionary. I started using it in 1981. I hope its meaning as an antonym to optimal is obvious.)

For reasonable use cases, all the Oracle compression stuff is the "neatest stuff since sliced bread." But never try to teach a pig to sing. (You cannot succeed, and it just annoys the heck out of the pig.) If you've got an existing application that seems like it will fight with compression, don't use it.

For the broad base of reasonable cases, reading fewer physical blocks is likely to save overall elapsed time unless your system is already pretty badly CPU bound.

Using a rolling aging window as described below is an interesting way to have your cake and eat it too.
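(A rough sketch of the mechanics, assuming daily range partitions; all object names are made up:)

   -- recent partitions stay uncompressed so DML against them is cheap
   ALTER TABLE weblog MODIFY PARTITION p20121227 NOCOMPRESS;
   -- once a partition ages out of the window, compress and rebuild it
   ALTER TABLE weblog MOVE PARTITION p20121220 COMPRESS;
   -- MOVE leaves local index partitions unusable, so rebuild them
   ALTER INDEX weblog_ix REBUILD PARTITION p20121220;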

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of David Fitzjarrell
Sent: Thursday, December 27, 2012 4:41 PM
To: martin.a.berger_at_gmail.com; Michael Dinh
Cc: Raj Jamadagni; oracle-l; kevin.jernigan_at_oracle.com
Subject: Re: OLTP compression slow update

OLTP compression operates on 'full' blocks (those where PCTUSED has been reached): compression is not enacted until then. Updating blocks compressed for OLTP requires uncompressing the data, performing the updates, then compressing the blocks again once, as I previously mentioned, PCTUSED has been reached. All 'full' blocks are compressed (if they aren't already) in sort of a 'batch' operation; thus OLTP compression results in a mixture of compressed and uncompressed blocks after DML (inserts, updates, deletes), and it operates on any DML, not just direct-path inserts. As such, updates to OLTP-compressed data can be resource-intensive. I have noted that updating OLTP-compressed data can take almost twice as long as updating uncompressed data (a fact already revealed in this thread).
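(For reference, the syntax in question looks something like this, with a hypothetical table name; in 11.2 the clause is COMPRESS FOR OLTP, and 12c renames it ROW STORE COMPRESS ADVANCED:)

   CREATE TABLE orders_c
     COMPRESS FOR OLTP
     AS SELECT * FROM orders;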

David Fitzjarrell



From: Martin Berger <martin.a.berger_at_gmail.com>
To: Michael Dinh <mdinh235_at_gmail.com>
Cc: Raj Jamadagni <rjamya_at_gmail.com>; oracle-l <oracle-l_at_freelists.org>; kevin.jernigan_at_oracle.com
Sent: Thursday, December 27, 2012 1:49 PM
Subject: Re: OLTP compression slow update

Maybe we can sort the situation out a little bit. For me the main question is still: "Why is execution X (compressed) slower by a factor of 1.79 than execution Y (uncompressed)?" Is this still correct?
I know of two methods Oracle provides to answer this question: system/session statistics and wait events. Does anyone know other methods to find out what's going on? Maybe Kevin or someone else can give us some insight into which statistics or events we can expect and measure to identify that a process is 'in compression A' or 'doing compression task B'?
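(One hedged starting point, assuming 11.2, where the heap segment compression counters appear as 'HSC%' session statistics; exact names vary by version:)

   SELECT sn.name, ms.value
     FROM v$mystat ms
     JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE sn.name LIKE 'HSC%'   -- e.g. 'HSC OLTP positive compression'
      AND ms.value > 0
    ORDER BY sn.name;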
If no such information is available, I can only suggest collecting/sampling stack traces.
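(For example, as SYSDBA on the database server, something along these lines; the pid is of course hypothetical:)

   SQL> oradebug setospid 12345   -- OS pid of the session doing the update
   SQL> oradebug short_stack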

Jared's and Raj's suggestions are totally valid, with the primary target of 'avoiding the slow path', but I'd like to 'understand the slow path' and then decide how/why to act.

just my .02 €
  Martin

On Thu, Dec 27, 2012 at 7:50 PM, Michael Dinh <mdinh235_at_gmail.com> wrote:

> Thanks all for the response.
> Still testing and will share results.
>
> -Michael.
>
> On Thu, Dec 27, 2012 at 4:52 AM, rjamya <rjamya_at_gmail.com> wrote:
>
> > I saw this first in early 10g versions of plain vanilla compression
> > (not fancy OLTP etc.). We were analyzing http/smtp logs collected
> > company-wide. Initially we had daily partitions. Since the data
> > started getting large quickly (we had open web at that time), I
> > introduced partition compression, and it worked like magic. When
> > updates (due to occasional re-loads, corrections, ranking etc.) to
> > compressed partitions took long, we redesigned our strategy: we kept
> > 7 days' worth of partitions uncompressed, and everything older was
> > compressed. That worked out well in the end. I guess the update
> > issue on compressed data still needs work then :)
> > Raj
>
>



--
http://www.freelists.org/webpage/oracle-l
Received on Thu Dec 27 2012 - 23:18:06 CET
