Re: expdp dump file and CLOB column

From: Mladen Gogala <>
Date: Sun, 12 Oct 2008 12:03:27 +0000 (UTC)
Message-ID: <gcsp2f$v52$>

On Sun, 12 Oct 2008 12:09:13 +0200, Maxim Demenko wrote:

> What is Oracle version you are speaking about?

> I did some very basic
> tests on and couldn't see anything similar to compression. I
> assumed the compression rate should be highest (no matter which
> compression method is used) for the most redundant data - in my test case
> the dumpfile size was the same for CLOBs filled either with dbms_random or
> with blanks. What does sum(dbms_lob.getlength(lob_column)) return? Does
> it roughly correlate with 45GB?

No, it doesn't:
  1  select sum(dbms_lob.getlength(segment_text))/1048576 MB
  2* from   news_segments partition(data0544)
SQL> /

        MB

It correlates more closely with 9GB.
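To see how much space the LOB segment actually allocates on the imported side, versus the roughly 9GB of CLOB data, a query along these lines could be run there (a sketch only; the dictionary views are standard, and the table and column names are the ones used earlier in this thread):

```sql
-- Sketch: compare the space allocated to the LOB segment with the
-- ~9GB of actual CLOB data measured above.
-- NEWS_SEGMENTS / SEGMENT_TEXT are the names from this thread.
SELECT s.segment_name,
       s.bytes / 1048576 AS allocated_mb
FROM   user_segments s
WHERE  s.segment_name IN (SELECT l.segment_name
                          FROM   user_lobs l
                          WHERE  l.table_name  = 'NEWS_SEGMENTS'
                            AND  l.column_name = 'SEGMENT_TEXT');
```

A large gap between allocated_mb and the getlength total would point to space overallocation in the LOB segment rather than to anything in the dump file itself.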

> If not, i would assume a massive space overallocation ( for whatever
> reason) during import.

Yes, that's probably it. I will attempt the classic export, without the "dp" version, and see what happens. That may be the dirty little secret of Data Pump: it wastes humongous amounts of space. I am not surprised that Oracle Corp. decided to venture into the storage business.
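The comparison described above could be sketched like this (the scott/tiger credentials and the DATA_PUMP_DIR directory object are placeholders for illustration; the partition name is the one from this thread):

```shell
# Sketch only - assumes a working ORACLE_SID and a user with export
# privileges; scott/tiger is a placeholder, not a real account here.

# Classic export of the single partition:
exp scott/tiger tables=news_segments:data0544 file=news_seg_exp.dmp

# Data Pump export of the same partition, for size comparison
# (DATA_PUMP_DIR is the default directory object in 10g and later):
expdp scott/tiger tables=news_segments:data0544 \
      dumpfile=news_seg_dp.dmp directory=DATA_PUMP_DIR

# Then compare the two dump file sizes on disk:
ls -lh news_seg_exp.dmp
```

If the classic dump file comes out close to the 9GB of actual data while the Data Pump file is much larger, that would support the overallocation theory.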

> Other points to
> consider could be, of course, the character sets on both the source and
> target databases and the original size of the lob segments in the source.

Both character sets are the same.

Received on Sun Oct 12 2008 - 07:03:27 CDT
