Re: long to clob conversion redo size and new table size

From: Grzegorz <grzegorzof_at_interia.pl>
Date: Wed, 29 Jul 2009 18:31:12 +0200
Message-ID: <h4ptgo$fu1$1_at_news.task.gda.pl>



Randolf Geist wrote:
>> I've got range partitioned table with clob and number (primary key)
>> columns and doing insert from select /*+ index(hint) */ id, to_lob(dane)
>> from long_table where pk between range1 and range2 (I've defined 24
>> ranges so 24 separate inserts with index hint on select) .
>> When I am done with the inserts a primary key will be added and a few
>> renames close the case :).

>
> Do you think that accessing 3 million rows out of 70 million rows
> (1/24th of the table) via index is faster than doing a full table
> scan? I would doubt that in most cases... What is the clustering
> factor of that index used? You might be faster with a simple FTS.
Well, it's the primary key index, and the table is insert/select only with id generated via a sequence, so the clustering factor looks promising. An FTS on a 300GB table with just two columns (id NUMBER primary key and 'dane' LONG) looks like it takes forever.
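For reference, each of the 24 slices is loaded roughly like this (a minimal sketch; new_table, long_table, the index name and the bind ranges are placeholders for my real objects):

   INSERT INTO new_table (id, dane)
   SELECT /*+ INDEX(t long_table_pk) */
          t.id, TO_LOB(t.dane)
   FROM   long_table t
   WHERE  t.id BETWEEN :range_lo AND :range_hi;

   COMMIT;

TO_LOB() only works in the SELECT part of an INSERT ... SELECT (or a CTAS), which is why the conversion has to be done this way rather than with a simple UPDATE.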
>
>> Few more questions come to my mind:
>> Subsequent inserts take longer .
>> I've observed about 10% overtime is  that normal.

>
> What do you mean by "subsequent inserts"? As part of the conversion,
> or after the conversion when using the new table instead of the old?
> Please elaborate and specify more details.
I've divided the whole conversion into 24 insert-from-select id, to_lob(long_column) statements, each restricted to a range of the primary key, and after 3 rounds each insert seems to be taking about 10% longer than the previous one. The whole idea is that I don't want to impact the current workload, hence no parallel CTAS :D .
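To check whether the later slices are also generating more redo (the other thing I'm watching besides elapsed time), I snapshot the session's 'redo size' statistic before and after each insert; a minimal sketch:

   SELECT n.name, s.value
   FROM   v$mystat s, v$statname n
   WHERE  s.statistic# = n.statistic#
   AND    n.name = 'redo size';

Running this before and after each slice and diffing the two values gives the redo generated by that statement alone.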

>

>> The table after conversion got about 300k 1M extents in a uniform-size MSSM
>> tablespace; is that OK from a performance point of view?

>
> So you mean to say the converted table segment has 300G size. What
> about the LOB segment, what's the size of the associated LOB segment?

Well, the converted table is range partitioned on the id column (primary key) into the 24 partitions mentioned above.
The unconverted table (the one with the LONG) is one big 300GB segment with 300k 1M extents.
So if the table is partitioned, the extents are spread across the partitions, hence not that bad, right?
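A quick way to confirm how the extents are spread is to count them per partition ('NEW_TABLE' is a placeholder for the converted table's name; use user_segments when connected as the owner):

   SELECT partition_name, extents, bytes/1048576 AS mb
   FROM   dba_segments
   WHERE  segment_name = 'NEW_TABLE'
   ORDER  BY partition_name;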

>
> 300k extents is definitely a lot. If you were going for an
> AUTOALLOCATE tablespace, you would end up with a much smaller number of
> extents, something like less than 5,000 I think given the allocation
> rules used by AUTOALLOCATE. Most of your extents were then 64M.
> Sometimes it's mentioned that it might even use 256M extents, but I've
> never seen them myself.

I wish I had gone for an autoallocate tablespace; it's a kind of political/free-space issue :(.
I know the trick with autoallocate and a big initial extent, so that the next extents allocated are larger (skipping the 64k ones).
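For anyone searching the archives, that trick looks roughly like this (a sketch; ts_auto, new_table and the 1G figure are hypothetical, assuming ts_auto is a locally managed AUTOALLOCATE tablespace):

   CREATE TABLE new_table (
     id   NUMBER,
     dane CLOB
   )
   TABLESPACE ts_auto
   STORAGE (INITIAL 1G);  -- a big INITIAL makes AUTOALLOCATE start with
                          -- large extents instead of the small 64k ones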

Currently I'm only concerned about the LOB chunk size (8k at the moment); if I go to 16k chunks I could lose a lot of space after conversion (I'm using in-row storage for the LOB now).
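The storage clause I'm using now is roughly this (a sketch with placeholder names; the roughly 3964-byte in-row limit is fixed by Oracle, not configurable):

   CREATE TABLE new_table (
     id   NUMBER,
     dane CLOB
   )
   LOB (dane) STORE AS (
     ENABLE STORAGE IN ROW  -- values up to ~3964 bytes stay in the row
     CHUNK 8192             -- out-of-line LOB data allocated in 8k chunks
   );

With in-row storage only the longer values go out of line, and each of those is rounded up to a whole number of chunks, which is where 16k chunks would waste space.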
Regards.
Grzegorz
