Re: basic compression
From: Mladen Gogala <gogala.mladen_at_gmail.com>
Date: Fri, 23 Jan 2015 01:28:14 +0000 (UTC)
Message-ID: <pan.2015.01.23.01.28.45_at_gmail.com>
On Thu, 22 Jan 2015 06:21:58 -0800, ddf wrote:
> On Friday, January 16, 2015 at 12:01:10 PM UTC-7, Mark D Powell wrote:
>> On Thursday, January 15, 2015 at 3:58:56 PM UTC-5, geos wrote:
>> > initially a table was created with basic compression and loaded
>> > using direct load insert. during table usage time some updates,
>> > some non-direct load inserts, deletions etc. were performed, so
>> > after a while the table is composed of compressed and uncompressed
>> > blocks living altogether.
>> >
>> > do you know if it is possible to tell whether a particular block is
>> > stored in compressed or uncompressed form when using basic
>> > compression for a table? is this information available at the block
>> > level with the help of some oracle system functions that operate on
>> > blocks?
>> >
>> > thank you
>> > geos
>>
>> I do not have time to read Joel's reference at the moment, so my
>> apologies if this is in the material, but I suspect you can see this
>> in the block dump of the table/index. That is, I expect there is a
>> flag set that will show in the dump. I do not have time to test this
>> either, but it should only take you a few moments to find out.
>>
>> HTH -- Mark D Powell --
>
> A block dump will show compressed data by virtue of the row length;
> compressed data will show 'abnormally' short lengths (as compared to
> what the length SHOULD be based on the column definitions plus
> overhead). Actually the data isn't compressed, it's 'de-duplicated'.
> It's an interesting mechanism, described by Jonathan in the provided
> link. Suffice it to say the repeating values are 'cataloged' into a
> 'table' and each occurrence of a given token is replaced by its
> 'identifier' as referenced by the token 'table'. You need to read
> Jonathan's post in its entirety; do not rely on my synopsis. Jonathan
> also takes you through the entire process of generating and reading a
> binary block dump so be sure to read and understand that as well.
>
>
> David Fitzjarrell
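For anyone who wants to try the block dump route, this is roughly how to locate and dump the block holding a particular row. The table name is just a placeholder; the DBMS_ROWID calls are standard:

```sql
-- Find the relative file number and block number for one row
-- (my_compressed_table is an example name).
SELECT dbms_rowid.rowid_relative_fno(rowid) AS rel_fno,
       dbms_rowid.rowid_block_number(rowid) AS block_no
  FROM my_compressed_table
 WHERE ROWNUM = 1;

-- Dump that block to the session's trace file, substituting the
-- numbers returned above.
ALTER SYSTEM DUMP DATAFILE 7 BLOCK 12345;

-- On 11g and later, the trace file location can be found with:
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
```

In the dump of a block holding basic-compressed rows you should see the abnormally short row lengths David mentions; the exact flags and token entries vary by version, so compare against a dump of an uncompressed block from the same table.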
That mechanism has been available in DB2 since version 9, which is a long time now. Oracle's de-duplication technology is very similar.
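Back on the Oracle side: if the original poster is on 11.2 or later, there is also a way to answer the question without reading a dump. DBMS_COMPRESSION.GET_COMPRESSION_TYPE reports the compression type row by row (again, the table name is a placeholder):

```sql
-- Count rows by compression type (11.2+).
SELECT dbms_compression.get_compression_type(USER, 'MY_COMPRESSED_TABLE',
                                             rowid) AS comp_type,
       COUNT(*) AS cnt
  FROM my_compressed_table
 GROUP BY dbms_compression.get_compression_type(USER, 'MY_COMPRESSED_TABLE',
                                                rowid);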
--
Mladen Gogala
The Oracle Whisperer
http://mgogala.byethost5.com
Je suis Charlie

Received on Fri Jan 23 2015 - 02:28:14 CET