Re: Tuning 'Alter Modify' column

From: Pap <oracle.developer35_at_gmail.com>
Date: Wed, 24 Aug 2022 02:06:46 +0530
Message-ID: <CAEjw_fjDUws07+PTXqesb+5ZeOGKr47YmQoZC_5s9uhJgcc-Ng_at_mail.gmail.com>



Also, I think that with the partition exchange method you may need to truncate the main table and modify/increase the column length to CHAR(9) before the exchange; otherwise the exchange partition operation will most likely fail because of a structure/column-length mismatch.
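
A rough sketch of that step, just to illustrate (tab1_part and p1 are
assumed names for the partitioned interim table and its single partition,
and it presumes the data has already been copied into that interim table
with the wider column):

truncate table tab1;
alter table tab1 modify col1 char(9 byte);
alter table tab1_part exchange partition p1 with table tab1
  including indexes without validation;

After the exchange tab1 holds the copied data with the char(9) column, and
the interim table (now left with the empty segment) can be dropped.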

On Wed, 24 Aug 2022, 1:13 am yudhi s, <learnerdatabase99_at_gmail.com> wrote:

> Thank you so much Jonathan. You explained it so nicely. This really helped.
>
> We will give it a try by making the index unusable or through the
> partition exchange method. It's a legacy app, so making the column
> VARCHAR2 will probably need some impact analysis.
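>
> A minimal sketch of the "unusable index" route (the index name here is
> just a placeholder, and it assumes skip_unusable_indexes = true, the
> default, so the update driven by the modify does not maintain the
> unusable index):
>
> alter index tab1_col1_idx unusable;
> alter table tab1 modify col1 char(9 byte);
> alter index tab1_col1_idx rebuild parallel 16;
> alter index tab1_col1_idx noparallel;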
>
> However, while the ALTER query was running, along with a valid index on
> the same column (the one being altered), we were monitoring the tablescan
> operation through longops. We saw that after ~80% of the table scan was
> done, i.e. around ~4hrs into the execution, the longops entry suddenly
> disappeared, and the session started showing a continuously increasing
> value of "rollback changes - undo records applied", meaning the session
> had started to roll back. We were trying to understand whether it's
> really the effect of the "Update restarts" bug which you pointed out. As
> the default session trace generated shows 'TIMEOUT', I'm wondering if we
> hit something else. Below is a snippet of the trace, a couple of lines
> above and below the 'alter' statement.
>
> user session for deadlock lock 0x4220165d0
>
> sid: 2085 ser: 65435 audsid: 836497845 user: 428/USER1
>
> flags: (0x145) USR/- flags2: (0x4009) DDLT1/-/INC
>
> flags_idl: (0x1) status: BSY/-/-/- kill: -/-/-/-
>
> pid: 323 O/S info: user: grid, term: UNKNOWN, ospid: 187203
>
> image: oracle_at_mchn06.city1.comp1.com
>
> client details:
>
> O/S info: user: XXXXX, term: , ospid: 105399
>
> machine: oxxxxx1 program: sqlplus_at_oxxxxxx1 (TNS V1-V3)
>
> application name: SQL*Plus, hash value=3669949024
>
> current SQL:
>
> Alter Table tab1 Modify COL1 CHAR(9 BYTE)
>
> 2022-08-23 08:55:37.738*:kjdglblkrdmpint(): DUMP LOCAL BLOCKER: initiate
> state dump for TIMEOUT
>
> possible owner[323.187203] on resource
> LB-0EACC583-109A9AD2-00000000-00000000
>
> 2022-08-23 08:55:37.738 :kjzddmp(): Submitting asynchronized dump request
> [1c]. summary=[ges process stack dump (kjdglblkrdm1)].
>
> *** 2022-08-23T08:55:44.108570-04:00
>
> ERROR: KGL Lock timeout. Handle=0x872c70138
>
> LibraryObjectLock: Address=0x87bf0b7d8 Handle=0x872c70138 RequestMode=S
>
> CanBeBrokenCount=9652 Incarnation=3 ExecutionCount=0
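>
> For reference, the monitoring mentioned above was done with queries along
> these lines (illustrative only, :sid being the session running the ALTER):
>
> select opname, sofar, totalwork, time_remaining
>   from v$session_longops
>  where sid = :sid and sofar < totalwork;
>
> select sn.name, ss.value
>   from v$sesstat ss, v$statname sn
>  where sn.statistic# = ss.statistic#
>    and ss.sid = :sid
>    and sn.name = 'rollback changes - undo records applied';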
>
> On Tue, Aug 23, 2022 at 4:17 PM Jonathan Lewis <jlewisoracle_at_gmail.com>
> wrote:
>
>>
>> Because you're using a char() column (rather than varchar2()) Oracle has
>> to update every row that has a value (i.e. is not null) by appending a
>> space to it, and it also has to update every index that references that
>> column, doing a "delete/insert".
>>
>> The update will be by tablescan, in a single transaction, so you will
>> acquire a HUGE undo segment. The redo generated will be one entry of about
>> 280 bytes per row updated IF THERE ARE NO INDEXES; plus you might find that
>> some rows migrate but that's a little unlikely unless the rows are all very
>> short to start with anyway.
>>
>> For every index that includes the column you're likely to generate a
>> bigger problem. Every "updated" index entry will be an insert/delete pair,
>> generating at least 450 bytes between them (plus 2 x the length of a
>> typical index entry). Moreover, because it's a single transaction and
>> Oracle doesn't cleanout, or even try to "stub" index entries until the
>> transaction has committed (and even then some other session has to do the
>> cleanout) you will basically double the size of the index and probably
>> generate more redo on index leaf block splits - which generate about 18KB
>> of redo per block split if you're using 8KB blocks.
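>>
>> For scale, a rough back-of-envelope using those figures for a ~100M row
>> table (illustrative only, assuming a typical index entry of ~20 bytes):
>>
>> select round(100000000 * 280 / power(1024,3))          as table_redo_gb,
>>        round(100000000 * (450 + 2*20) / power(1024,3)) as per_index_redo_gb
>>   from dual;
>>
>> -- roughly 26GB of redo for the table update and 46GB per affected index,
>> -- before counting the ~18KB of redo for each index leaf block split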
>>
>> A further time-waster is that Oracle has a funny "restart" mechanism the
>> first time the undo segment it's using needs to add an extent (see:
>> https://jonathanlewis.wordpress.com/2019/09/10/update-restarts/ and its
>> comments). At the scale you're operating, your modify is likely to work
>> through a few million rows, then roll back and start again.
>>
>> Given the huge amount of redo and the catastrophic impact on the indexes,
>> the options are:
>> a) try dropping any affected indexes before you do the modify - this MAY
>> help.
>> or
>> b) use dbms_redefinition to make the change if you need it to be online
>> (a rough sketch of the call sequence is given below)
>> or
>> c) Create an empty single-partition partitioned table matching the
>> definition. Insert /*+ append */ from old to new. Create local indexes
>> matching your current indexes on the partitioned table. Exchange partition
>> with table. This means you only copy the table once (and minimise the
>> redo) and don't lose privileges etc. - but there are some little details
>> you would want to investigate and adjust around the edges of this strategy.
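>>
>> For option (b), the call sequence would be roughly as follows. This is a
>> sketch only: it assumes an interim table TAB1_INT has already been created
>> with COL1 defined as CHAR(9 BYTE), and that TAB1 has a usable primary key.
>>
>> declare
>>   l_errors pls_integer;
>> begin
>>   dbms_redefinition.can_redef_table(
>>     uname => user, tname => 'TAB1',
>>     options_flag => dbms_redefinition.cons_use_pk);
>>
>>   dbms_redefinition.start_redef_table(
>>     uname => user, orig_table => 'TAB1', int_table => 'TAB1_INT',
>>     options_flag => dbms_redefinition.cons_use_pk);
>>
>>   -- copies indexes, triggers, constraints and grants to the interim table
>>   dbms_redefinition.copy_table_dependents(
>>     uname => user, orig_table => 'TAB1', int_table => 'TAB1_INT',
>>     copy_indexes => dbms_redefinition.cons_orig_params,
>>     num_errors => l_errors);
>>
>>   -- brief exclusive lock while the two tables swap identities
>>   dbms_redefinition.finish_redef_table(
>>     uname => user, orig_table => 'TAB1', int_table => 'TAB1_INT');
>> end;
>> /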
>>
>> Regards
>> Jonathan Lewis
>>
>>
>>
>>
>>
>>
>> On Tue, 23 Aug 2022 at 10:35, yudhi s <learnerdatabase99_at_gmail.com>
>> wrote:
>>
>>> Hi All, We have got an ALTER table modify column statement in which we
>>> are increasing the length of a column from char(8 byte) to char(9 byte)
>>> as below. The table holds ~100 million rows. We are seeing in longops
>>> that it is expected to take 4-5hrs, and the message showing up is 'Table
>>> Scan'. I think one way is to move the data to another table, then
>>> truncate this table, modify the column, and then move the data back
>>> using INSERT APPEND. But we were trying to understand if there exists
>>> any other lower-impact and faster approach, like making it happen in
>>> parallel threads etc. I tried setting the table's degree to parallel 16,
>>> but the ALTER is still running in serial.
>>>
>>> alter table tab1 modify COL1 CHAR(9 BYTE);
>>>
>>> Regards
>>> Yudhi
>>>
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Aug 23 2022 - 22:36:46 CEST
