Re: basic compression

From: joel garry <>
Date: Thu, 22 Jan 2015 16:50:43 -0800 (PST)
Message-ID: <>

On Thursday, January 22, 2015 at 1:55:42 PM UTC-8, geos wrote:
> On 22.01.2015 15:21, ddf wrote:
> >> I do not have time to read Joel's reference at the moment, so my apologies if this is in the material, but I suspect you can see this in a block dump of the table/index. That is, I expect there is a flag set that will show in the dump. I do not have time to test this either, but it should only take you a few moments to find out.
> >> HTH -- Mark D Powell --
> >
> > A block dump will show compressed data by virtue of the row length; compressed data will show 'abnormally' short lengths (as compared to what the length SHOULD be based on the column definitions plus overhead). Actually the data isn't compressed, it's 'de-duplicated'. It's an interesting mechanism, described by Jonathan in the provided link. Suffice it to say the repeating values are 'cataloged' into a 'table' and each occurrence of a given token is replaced by its 'identifier' as referenced by the token 'table'. You need to read Jonathan's post in its entirety; do not rely on my synopsis. Jonathan also takes you through the entire process of generating and reading a binary block dump so be sure to read and understand that as well.
> > David Fitzjarrell
> thank you all for pointing me to these articles. I started reading them,
> and even though I won't be allowed to do a block dump, I see there is a
> lot of useful information. I thought that maybe there was a way to tell
> compressed/uncompressed by executing some procedure, but I also
> appreciate learning something new.
> thank you,
> geos
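
On the question of telling compressed from uncompressed by "executing some procedure": short of a block dump, the data dictionary and the DBMS_COMPRESSION package can get you part of the way. A sketch below, with the caveat that it's from memory of 11gR2+ behavior and the table name T is hypothetical, so verify against your version's documentation:

```sql
-- The declared compression attribute: what the table is SET to use,
-- not proof that the blocks currently in it are actually compressed.
SELECT table_name, compression, compress_for
  FROM user_tables
 WHERE table_name = 'T';   -- T is a hypothetical table name

-- Per-row check (11gR2 and later, if I remember the package correctly):
-- DBMS_COMPRESSION.GET_COMPRESSION_TYPE returns a code per row,
-- e.g. 1 = COMP_NOCOMPRESS. Grouping over it shows the mix of
-- compressed and uncompressed rows actually stored.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'T', rowid) AS comp_type,
       COUNT(*) AS row_count
  FROM t
 GROUP BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'T', rowid);
```

The dictionary query only tells you what future inserts/moves would do; the second query is the one that reflects what's on disk now, though it can be slow on big tables since it visits every row.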

I think a compression-aware option to the VSIZE function would be a reasonable enhancement to ask for. Not that they would do it.
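
For anyone wondering why VSIZE as it stands can't help: it reports the byte length of the value as the SQL layer sees it, after any block-level de-duplication has been undone, so every row shows the full column length regardless of storage. A trivial illustration (byte count assumes a single-byte character set):

```sql
-- VSIZE sees the expanded value, never the token reference
-- that basic compression stores in the block.
SELECT VSIZE('REPEATED_VALUE') AS bytes FROM dual;  -- 14 in a single-byte charset
```

Hence the enhancement request: an option that returned the stored size would be the only way to see the effect of de-duplication from SQL.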


Received on Fri Jan 23 2015 - 01:50:43 CET
