Re: basic compression

From: ddf <>
Date: Thu, 22 Jan 2015 06:21:58 -0800 (PST)
Message-ID: <>

On Friday, January 16, 2015 at 12:01:10 PM UTC-7, Mark D Powell wrote:
> On Thursday, January 15, 2015 at 3:58:56 PM UTC-5, geos wrote:
> > initially a table was created with basic compression and loaded using
> > direct-path insert. during the table's lifetime some updates, some
> > conventional (non-direct-path) inserts, deletions etc. were performed,
> > so after a while the table is composed of compressed and uncompressed
> > blocks living side by side.
> >
> > do you know if it is possible to tell whether a particular block is
> > stored in compressed or uncompressed form when using basic compression
> > for a table? is this information available at the block level with the
> > help of some oracle system functions that operate on blocks?
> >
> > thank you
> > geos
> I do not have time to read Joel's reference at the moment, so my apologies if this is covered in the material, but I suspect you can see this in a block dump of the table/index. That is, I expect there is a flag set that will show in the dump. I do not have time to test this either, but it should only take you a few moments to find out.
> HTH -- Mark D Powell --

A block dump will show compressed data by virtue of the row lengths: compressed rows show 'abnormally' short lengths compared to what the lengths SHOULD be based on the column definitions plus overhead. Strictly speaking the data isn't compressed, it's 'de-duplicated'. It's an interesting mechanism, described by Jonathan in the link provided earlier. Suffice it to say the repeating values are 'catalogued' in a token 'table' stored in the block, and each occurrence of a given value is replaced by its 'identifier', which references that token 'table'. You need to read Jonathan's post in its entirety; do not rely on my synopsis. Jonathan also takes you through the entire process of generating and reading a binary block dump, so be sure to read and understand that as well.
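To make the idea concrete, here is a very loose sketch of that de-duplication scheme in Python. This is purely illustrative: the function names, the per-block "symbol table" structure, and the integer tokens are my own inventions and make no claim to match Oracle's actual block format; read Jonathan's post for the real layout.

```python
def deduplicate_rows(rows):
    """Replace repeated column values with small integer tokens.

    Returns (symbol_table, tokenized_rows): each distinct value is
    catalogued once in the symbol table, and every occurrence in a row
    is replaced by its index, so rows of repetitive data shrink.
    """
    symbol_table = []   # values catalogued once (per 'block' here)
    index = {}          # value -> token id
    tokenized = []
    for row in rows:
        out = []
        for value in row:
            if value not in index:
                index[value] = len(symbol_table)
                symbol_table.append(value)
            out.append(index[value])
        tokenized.append(out)
    return symbol_table, tokenized


def reconstruct(symbol_table, tokenized_rows):
    """Expand tokens back to the original row values."""
    return [[symbol_table[t] for t in row] for row in tokenized_rows]


rows = [("ACTIVE", "US", 2015),
        ("ACTIVE", "US", 2015),
        ("CLOSED", "US", 2014)]
table, packed = deduplicate_rows(rows)
# the two identical rows compress to the same short token list
assert packed == [[0, 1, 2], [0, 1, 2], [3, 1, 4]]
assert reconstruct(table, packed) == [list(r) for r in rows]
```

The 'abnormally' short row lengths you see in a block dump correspond to the tokenized rows above: the stored row is a handful of small token references rather than the full column values.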

David Fitzjarrell

Received on Thu Jan 22 2015 - 15:21:58 CET
