Re: basic compression

From: geos <geos_at_SPAMPRECZ.autograf.pl>
Date: Thu, 22 Jan 2015 22:55:33 +0100
Message-ID: <m9rrkl$esk$1_at_node1.news.atman.pl>


On 22.01.2015 15:21, ddf wrote:
>> I do not have time to read Joel's reference at the moment, so my apologies if this is covered in that material, but I suspect you can see this in a block dump of the table/index. That is, I expect there is a flag set that will show in the dump. I do not have time to test this either, but it should only take you a few moments to find out.
>> HTH -- Mark D Powell --
>
> A block dump will show compressed data by virtue of the row length; compressed data will show 'abnormally' short lengths (as compared to what the length SHOULD be based on the column definitions plus overhead). Actually the data isn't compressed, it's 'de-duplicated'. It's an interesting mechanism, described by Jonathan in the provided link. Suffice it to say the repeating values are 'cataloged' into a 'table' and each occurrence of a given token is replaced by its 'identifier' as referenced by the token 'table'. You need to read Jonathan's post in its entirety; do not rely on my synopsis. Jonathan also takes you through the entire process of generating and reading a binary block dump so be sure to read and understand that as well.
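[For reference, a minimal sketch of how such a block dump can be generated. The dbms_rowid calls and the dump syntax are standard Oracle; the table name here is a placeholder, and the file/block numbers must come from your own query. Jonathan's post covers the full procedure and how to read the output.]

```sql
-- Locate the file and block holding a sample row
-- (MY_COMPRESSED_TABLE is a placeholder name).
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
FROM   my_compressed_table
WHERE  ROWNUM = 1;

-- Dump that block to the session trace file (requires suitable privileges);
-- substitute the file/block numbers returned above.
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 171;
```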
> David Fitzjarrell

thank you all for pointing me to these articles. I have started reading them, and even though I won't be allowed to do a block dump, I can see there is a lot of useful information. I thought that maybe there was a way to tell compressed from uncompressed by executing some procedure, but I also appreciate learning something new.
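[For what it's worth, the data dictionary does expose the compression setting without a block dump. A sketch, assuming 11g-style dictionary columns; the table name is a placeholder:]

```sql
-- COMPRESSION shows ENABLED/DISABLED; COMPRESS_FOR shows the kind
-- of compression declared for the segment (e.g. BASIC).
SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name = 'MY_TABLE';
```

Note this reports what the segment is *declared* as; the DBMS_COMPRESSION package (11.2+) can report the actual compression of an individual row given its rowid.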

thank you,
geos