Re: Adding column and updating a huge partitioned table.

From: Mohamed Houri <mohamed.houri_at_gmail.com>
Date: Sat, 10 Jan 2015 09:49:44 +0100
Message-ID: <CAJu8R6hDq+Efbxpo8xAKu2HtvE0jx8A2z_uCVAHgsksaDJw=kw_at_mail.gmail.com>



I wrote an article explaining the DDL optimization feature introduced in 11g (and enhanced in 12c for NOT NULL columns):

http://www.oracle.com/technetwork/articles/database/ddl-optimizaton-in-odb12c-2331068.html

I wish it could be extended to deterministic functions:

create table t_ddl
as select
       rownum n1
     , trunc((rownum-1)/3) n2
     , trunc(dbms_random.value(rownum, rownum*10)) n3
     , dbms_random.string('U', 10) c1
  from dual
connect by level <= 1e4;

 SQL> alter table t_ddl add C_DDL number default 42 not null;

Table altered.

create or replace function f1_ddl
  return number
  deterministic
is
begin
  return 42;
end f1_ddl;
/

SQL> alter table t_ddl add C_DDL_2 number default f1_ddl() not null;
alter table t_ddl add C_DDL_2 number default f1_ddl() not null
*
ERROR at line 1:
ORA-04044: procedure, function, package, or type is not allowed here
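
The virtual column route suggested downthread looks like a workable alternative here, since a deterministic PL/SQL function is allowed in a virtual column expression. A rough sketch against the same demo table (index name arbitrary, not verified on every release):

SQL> alter table t_ddl add (c_ddl_2 number generated always as (f1_ddl()) virtual);

SQL> create index t_ddl_c_ddl_2_i on t_ddl (c_ddl_2);

The index on the virtual column is recorded as a function-based index, so the usual function-based index rules apply.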

Best regards

Mohamed Houri

www.hourim.wordpress.com

2015-01-09 2:08 GMT+01:00 Kenny Payton <k3nnyp_at_gmail.com>:

> My first thought was dbms_redefinition as well, but two things came to mind
> as to why I didn’t recommend it to Michael (not that it isn’t feasible).
>
> 1) It requires duplicating the entire table, which takes a bit more space,
> but if you have the space this is not a concern.
>
> 2) Although I’ve successfully done this on some much larger tables with
> quite a bit of activity, I’m no expert. I wasn’t sure how you would
> populate the new column. Would you define a function-based index on the
> new column (provided the function is deterministic, as mentioned earlier)
> and maintain it for updates until after the finish?
>
> Definitely a nice utility that I’ve been able to leverage on more than one
> occasion.
>
> Thanks,
> Kenny
>
> On Jan 8, 2015, at 4:15 PM, Andrew Kerber <andrew.kerber_at_gmail.com> wrote:
>
> I think you can do this with pretty minimal pain using dbms_redefinition.
> It is a very nice utility: basically you create the interim table, start
> the redefinition, copy dependent objects, add indexes, and finish. You
> would probably need to use the sync option a few times, but the only lock
> is a very short dictionary lock.
>
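> The flow maps roughly onto these calls; a bare-bones sketch only, with
> owner/table/column/function names made up for illustration and error
> handling omitted (the interim table BIG_TAB_INT would already exist with
> the new column defined):
>
> declare
>   l_errors pls_integer;
> begin
>   -- check the table can be redefined (raises an error if it cannot)
>   dbms_redefinition.can_redef_table('SCOTT', 'BIG_TAB',
>                                     dbms_redefinition.cons_use_pk);
>
>   -- start copying rows, computing the new column on the way across
>   dbms_redefinition.start_redef_table(
>     uname        => 'SCOTT',
>     orig_table   => 'BIG_TAB',
>     int_table    => 'BIG_TAB_INT',
>     col_mapping  => 'pk_id pk_id, src_col src_col, my_func(src_col) new_col',
>     options_flag => dbms_redefinition.cons_use_pk);
>
>   -- bring indexes, constraints, triggers and grants along
>   dbms_redefinition.copy_table_dependents(
>     uname      => 'SCOTT',
>     orig_table => 'BIG_TAB',
>     int_table  => 'BIG_TAB_INT',
>     num_errors => l_errors);
>
>   -- re-sync as often as needed while normal activity continues
>   dbms_redefinition.sync_interim_table('SCOTT', 'BIG_TAB', 'BIG_TAB_INT');
>
>   -- the swap itself needs only a short dictionary lock
>   dbms_redefinition.finish_redef_table('SCOTT', 'BIG_TAB', 'BIG_TAB_INT');
> end;
> /
>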
> On Thu, Jan 8, 2015 at 11:41 AM, Mark W. Farnham <mwf_at_rsiz.com> wrote:
>
>> apparently bounced short of the list, trying again.
>>
>>
>>
>> *From:* Mark W. Farnham [mailto:mwf_at_rsiz.com]
>> *Sent:* Thursday, January 08, 2015 12:30 PM
>> *To:* 'timur.akhmadeev_at_gmail.com'; 'napacunningham_at_gmail.com'
>> *Cc:* 'oracle-l_at_freelists org'
>> *Subject:* RE: Adding column and updating a huge partitioned table.
>>
>>
>>
>> +42 on the virtual column idea IF Timur’s IF requirements are true.
>>
>>
>>
>> Otherwise, I would suggest that certain other use characteristics *might*
>> suggest that the least amount of work between now and when the sun
>> explodes would be to create this column on the existing data as a new
>> table (possibly an IOT consisting of a relevant unique key matching your
>> partition scheme plus the function value (i.e. the new column), also
>> indexed), yoked to the existing data by a union all view when needed.
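>>
>> In rough SQL terms (names invented, untested as written), the shape of
>> that idea:
>>
>> create table big_tab_newcol (
>>   pk_id   number primary key,
>>   new_col number not null
>> ) organization index;
>>
>> insert into big_tab_newcol
>> select pk_id, my_func(src_col)
>>   from big_tab;
>>
>> -- once new partitions carry the column natively, this view becomes a
>> -- UNION ALL of the joined history and the new-style data
>> create or replace view big_tab_v as
>> select b.pk_id, b.src_col, n.new_col
>>   from big_tab b
>>   join big_tab_newcol n on (n.pk_id = b.pk_id);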
>>
>>
>>
>> As you eventually migrate to a “Scaling to Infinity” scheme (best
>> described, as far as I am concerned, by Tim Gorman), new time-based
>> partitions could be swapped in with the new function-based column in
>> place, eliminating the union all view.
>>
>>
>>
>> Notes:
>>
>> 1) Don’t ignore my “otherwise”: if Timur’s way will work for you, it is
>> probably your best solution. And yes, you can index a virtual column; it
>> ends up tagged as a function-based index, with similar performance
>> characteristics and the same rules for whether the optimizer chooses it,
>> as if the complexity of the virtual column were the complexity of a
>> function-based index on regular columns. (Based on a few relatively
>> simple tests and measurements from my paper about managing transactions
>> with disappearing indexes; no tests violated that theory and it makes
>> sense to me.)
>>
>> 2) “sun explodes” – ripped from a Baron Schwartz tweet via Cary Millsap,
>> which should become common parlance: “If a query will not complete before
>> the sun explodes, it can be described as thermodynamically infeasible.” (Of
>> course hardware and software changes might make a formerly infeasible query
>> feasible before the sun does explode. I think I’ve seen some queries where
>> we’d need a new and smaller chronon or a faster speed of light.)
>>
>> 3) I intentionally implied you’ll be moving toward a “scaling to
>> infinity” scheme. I’ll take odds on that if we meet at Collaborate in Las
>> Vegas. #C15LV
>>
>>
>>
>> mwf
>>
>>
>>
>> *From:* oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
>> *On Behalf Of* Timur Akhmadeev
>> *Sent:* Thursday, January 08, 2015 11:20 AM
>> *To:* napacunningham_at_gmail.com
>> *Cc:* oracle-l_at_freelists org
>> *Subject:* Re: Adding column and updating a huge partitioned table.
>>
>>
>>
>> If you don't need to update this column after the initial calculation,
>> and the function is (preferably) deterministic, you may try adding it as
>> a virtual column.
>>
>> On Thursday, January 8, 2015, Michael Cunningham <
>> napacunningham_at_gmail.com> wrote:
>>
>> It sounds crazy to me, but I thought I'd give it a try.
>>
>>
>>
>> I have a table that is over 1.2 TB with 64 hash partitions.
>>
>>
>>
>> If you have a great method for this I'd love to hear it. Oracle 12.1.0.1
>> enterprise on Linux 6.5 with SSD storage.
>>
>>
>>
>> I need to add a column and populate it with data from a function. It's a
>> data warehouse, so I can prevent DML during this time (one partition at a
>> time), but cannot prevent DML during the update of all 64 partitions at
>> once.
>>
>>
>>
>> I also need an index on this new column.
>>
>>
>>
>> I sure would appreciate all the input I can get on this topic because I
>> feel like I'm missing something and there must be a better way of doing
>> this.
>>
>>
>>
>> In a test environment it took me 6 hours to update the data in a single
>> partition. That comes out to around 700 hours for all 64 partitions.
>> Please help.
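>>
>> A per-partition update of this kind would be along these lines (table,
>> partition, column, and function names are placeholders; the new column is
>> assumed to have been added already as a plain nullable column):
>>
>> alter session enable parallel dml;
>>
>> update /*+ parallel(t 8) */ big_tab partition (p01) t
>>    set t.new_col = my_func(t.src_col)
>>  where t.new_col is null;
>>
>> commit;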
>>
>>
>>
>> --
>>
>> Michael Cunningham
>>
>>
>>
>> --
>> Regards
>> Timur Akhmadeev
>>
>
>
>
> --
> Andrew W. Kerber
>
> 'If at first you dont succeed, dont take up skydiving.'
>
>
>

-- 

Houri Mohamed

Oracle DBA-Developer-Performance & Tuning

Member of Oraworld-team <http://www.oraworld-team.com/>

Visit my blog <http://www.hourim.wordpress.com/>

LinkedIn profile <http://fr.linkedin.com/pub/mohamed-houri/11/329/857/>

Twitter: MohamedHouri <https://twitter.com/MohamedHouri>

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Jan 10 2015 - 09:49:44 CET
