Re: expdp dump file and CLOB column

From: Maxim Demenko <mdemenko_at_gmail.com>
Date: Sun, 12 Oct 2008 12:09:13 +0200
Message-ID: <48F1CCC9.1050300@gmail.com>


Mladen Gogala wrote:
> I have to import a 9GB dump file produced by expdp. My problem is space
> allocation. One of the columns is of the CLOB type and, for some reason,
> it produced a 45GB LOB segment. The obvious question is how do 45GB fit
> into a 9GB export file? Is there underlying compression? Are LOB objects
> in Data Pump dump files compressed? Logic tells me that they must be,
> but I was unable to find any documentation. Does anybody have more info?
>

What Oracle version are you speaking about? I did some very basic tests on 10.2.0.4 and couldn't see anything resembling compression. I would assume the compression ratio (no matter what compression method is used) should be highest for the most redundant data, yet in my test case the dumpfile size was the same for CLOBs filled either with dbms_random or with blanks; a rough sketch of such a test follows.
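For illustration, a minimal version of such a test might look like the following; the table names, row count, directory and dumpfile names here are made up, not taken from my actual test.

    CREATE TABLE t_random (c CLOB);
    CREATE TABLE t_blank  (c CLOB);

    BEGIN
      FOR i IN 1 .. 1000 LOOP
        -- random text vs. maximally redundant text of the same length
        INSERT INTO t_random VALUES (dbms_random.string('a', 4000));
        INSERT INTO t_blank  VALUES (rpad(' ', 4000));
      END LOOP;
      COMMIT;
    END;
    /

    -- export each table separately and compare the dumpfile sizes, e.g.
    -- expdp system/password TABLES=t_random DIRECTORY=dump_dir DUMPFILE=t_random.dmp
    -- expdp system/password TABLES=t_blank  DIRECTORY=dump_dir DUMPFILE=t_blank.dmp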
What size does sum(dbms_lob.getlength(lob_column)) return, and does it roughly correlate with the 45GB? If not, I would assume massive space overallocation (for whatever reason) during the import; comparing the logical LOB size against dba_segments, as sketched below, should show it. Other points to consider are of course the character sets of the source and target databases, and the original size of the LOB segments in the source.
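As a rough sketch (the owner, table and column names below are only placeholders), the comparison could look like this:

    -- logical size of the LOB data
    SELECT SUM(dbms_lob.getlength(lob_column))/1024/1024/1024 AS data_gb
      FROM owner.some_table;

    -- space actually allocated to the corresponding LOB segment
    SELECT SUM(s.bytes)/1024/1024/1024 AS allocated_gb
      FROM dba_segments s
      JOIN dba_lobs l
        ON l.owner = s.owner
       AND l.segment_name = s.segment_name
     WHERE l.owner       = 'OWNER'
       AND l.table_name  = 'SOME_TABLE'
       AND l.column_name = 'LOB_COLUMN';

If allocated_gb is far above data_gb, the 45GB comes from overallocation rather than from the data itself; keep in mind that getlength counts characters for a CLOB, so multi-byte storage can roughly double the byte size.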

If you are on 11g, however, and the export was done with the data compression option, I would not be surprised by that ratio, as it depends massively on your data pattern.
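For reference, an 11g export with data compression could look something like this (directory, dumpfile and table names are placeholders; COMPRESSION=ALL additionally requires the Advanced Compression option):

    expdp system/password DIRECTORY=dump_dir DUMPFILE=big_table.dmp \
          LOGFILE=big_table.log TABLES=owner.big_table COMPRESSION=ALL

With that, the table data including the LOBs is compressed inside the dumpfile, so a 9GB dumpfile expanding to a much larger LOB segment would not be surprising for redundant data.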

Best regards

Maxim

Received on Sun Oct 12 2008 - 05:09:13 CDT
