Re: Tuning "Alter Modify" column

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Wed, 24 Aug 2022 09:05:25 +0100
Message-ID: <CAGtsp8=JW9wywBEifs1kDCvSB5Q4+jWdqY3H2ZrRgkmaNWL=8A_at_mail.gmail.com>



Pap,
That's a very good point, and one that I'd forgotten (one of the "detail round the edges").
If OP wants to use the partition exchange method for the data, then I'd protect the data until after the exchange, i.e.

Create and populate partitioned copy with char(9), index as needed
Create empty partitioned copy with char(8)
Exchange table with empty copy's partition, excluding indexes -- to protect original data
Change (now empty) table to char(9)
Check indexes on (empty) table are usable
Exchange (empty) table and char(9) partition, including indexes.
When all is checked drop the char(8) copy.
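
To make the sequence concrete, here's a minimal SQL sketch, not a tested script: it uses the tab1/col1 names from the thread, invented names for the two partitioned copies (tab1_copy9, tab1_copy8), a single dummy range partition, and elides all the other columns.

-- partitioned copy with the new char(9) column, populated and indexed
create table tab1_copy9 (col1 char(9 byte) /* ...other columns... */)
  partition by range (col1) (partition p_all values less than (maxvalue));

insert /*+ append */ into tab1_copy9 select * from tab1;
commit;
-- create local indexes on tab1_copy9 matching the indexes on tab1

-- empty partitioned copy that still matches the current char(8) definition
create table tab1_copy8 (col1 char(8 byte) /* ...other columns... */)
  partition by range (col1) (partition p_all values less than (maxvalue));

-- 1) park the original data, excluding indexes, so it stays untouched
alter table tab1_copy8 exchange partition p_all with table tab1 excluding indexes;

-- 2) tab1 is now empty, so the modify is effectively instant
alter table tab1 modify col1 char(9 byte);
-- check that the indexes on the (empty) tab1 are still usable

-- 3) bring the char(9) data (and its indexes) back into tab1
alter table tab1_copy9 exchange partition p_all with table tab1 including indexes;

-- only when everything has been verified:
-- drop table tab1_copy8 purge;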

Since the application will have to be stopped while this is going on, it's worth remembering that all this is trying to avoid is the risk of losing some important privileges. So if it seems too messy, the alternative of simply creating a copy table (with the changed char(9) column), indexing and renaming it, then recreating the privileges may seem easier.

In both cases, think about what you want the pctfree to be when creating the copy of the data: most of the old data is (presumably) never going to change and could be created at pctfree 0, but the newer data might need some free space to grow.
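
Again, only a sketch of the copy-and-rename route, with an invented name (tab1_new) and pctfree 0 on the assumption that the bulk of the data is static; privileges, constraints and any triggers would have to be recreated by hand.

-- empty copy (inherits the column list), switched to char(9) while empty
create table tab1_new pctfree 0 as
  select * from tab1 where 1 = 0;

alter table tab1_new modify col1 char(9 byte);

-- single bulk copy; direct path keeps the undo to a minimum
insert /*+ append */ into tab1_new select * from tab1;
commit;

-- recreate the indexes on tab1_new to match tab1, then:
-- rename tab1 to tab1_old;
-- rename tab1_new to tab1;
-- re-grant the privileges that existed on the original table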

Regards
Jonathan Lewis

On Tue, 23 Aug 2022 at 21:36, Pap <oracle.developer35_at_gmail.com> wrote:

> Also, I think that in the partition exchange method you may need to truncate
> the main table and modify/increase the column length to char(9) before the
> exchange, otherwise the exchange partition operation will most likely fail
> because of a structure/column length mismatch.
>
> On Wed, 24 Aug 2022, 1:13 am yudhi s, <learnerdatabase99_at_gmail.com> wrote:
>
>> Thank you so much Jonathan. You explained it so nicely. This really
>> helped.
>>
>> We will give it a try by making the index unusable or through the
>> partition exchange method. It's a legacy app, so making it varchar2 will
>> possibly need some impact analysis.
>>
>> However, while the Alter query was running along with a valid index on
>> the same column (the one being altered), we were monitoring the tablescan
>> operation through longops. We saw that after ~80% of the table scan was done,
>> i.e. around ~4hrs into the execution, the longops entry suddenly disappeared and
>> the session started showing a continuously increasing value of "rollback changes
>> - undo records applied", which means the session had started to roll back. But
>> we were trying to understand whether it's really the effect of the "update
>> restarts" behaviour which you pointed out, because the default session trace
>> generated shows 'TIMEOUT', so I'm wondering if we hit something else. Below is a
>> snippet of the trace, a couple of lines above and below the 'alter' statement.
>>
>> user session for deadlock lock 0x4220165d0
>>
>> sid: 2085 ser: 65435 audsid: 836497845 user: 428/USER1
>>
>> flags: (0x145) USR/- flags2: (0x4009) DDLT1/-/INC
>>
>> flags_idl: (0x1) status: BSY/-/-/- kill: -/-/-/-
>>
>> pid: 323 O/S info: user: grid, term: UNKNOWN, ospid: 187203
>>
>> image: oracle_at_mchn06.city1.comp1.com
>>
>> client details:
>>
>> O/S info: user: XXXXX, term: , ospid: 105399
>>
>> machine: oxxxxx1 program: sqlplus_at_oxxxxxx1 (TNS V1-V3)
>>
>> application name: SQL*Plus, hash value=3669949024
>>
>> current SQL:
>>
>> Alter Table tab1 Modify COL1 CHAR(9 BYTE)
>>
>> 2022-08-23 08:55:37.738*:kjdglblkrdmpint(): DUMP LOCAL BLOCKER: initiate
>> state dump for TIMEOUT
>>
>> possible owner[323.187203] on resource
>> LB-0EACC583-109A9AD2-00000000-00000000
>>
>> 2022-08-23 08:55:37.738 :kjzddmp(): Submitting asynchronized dump request
>> [1c]. summary=[ges process stack dump (kjdglblkrdm1)].
>>
>> *** 2022-08-23T08:55:44.108570-04:00
>>
>> ERROR: KGL Lock timeout. Handle=0x872c70138
>>
>> LibraryObjectLock: Address=0x87bf0b7d8 Handle=0x872c70138 RequestMode=S
>>
>> CanBeBrokenCount=9652 Incarnation=3 ExecutionCount=0
>>
>> On Tue, Aug 23, 2022 at 4:17 PM Jonathan Lewis <jlewisoracle_at_gmail.com>
>> wrote:
>>
>>>
>>> Because you're using a char() column (rather than varchar2()) Oracle has
>>> to update every row that has a value (i.e. is not null) by appending a
>>> space to it, and it also has to update every index that references that
>>> column, doing a "delete/insert".
>>>
>>> The update will be by tablescan, in a single transaction, so you will
>>> acquire a HUGE undo segment. The redo generated will be one entry of about
>>> 280 bytes per row updated IF THERE ARE NO INDEXES; plus you might find that
>>> some rows migrate but that's a little unlikely unless the rows are all very
>>> short to start with anyway.
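>>>
>>> Just to put a rough number on that: for the ~100M rows mentioned in the
>>> original post, a back-of-envelope estimate (assuming no indexes) is:
>>>
>>> -- ~100M rows x ~280 bytes of redo per updated row, expressed in GB
>>> select round(100e6 * 280 / power(1024, 3), 1) as approx_redo_gb from dual;
>>> -- roughly 26 GB of redo for the table alone, before any index maintenance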
>>>
>>> For every index that includes the column you're likely to generate a
>>> bigger problem. Every "updated" index entry will be an insert/delete pair,
>>> generating at least 450 bytes between them (plus 2 x the length of a
>>> typical index entry). Moreover, because it's a single transaction and
>>> Oracle doesn't cleanout, or even try to "stub" index entries until the
>>> transaction has committed (and even then some other session has to do the
>>> cleanout) you will basically double the size of the index and probably
>>> generate more redo on index leaf block splits - which generate about 18KB
>>> of redo per block split if you're using 8KB blocks.
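>>>
>>> If you want to see those numbers for yourself before committing to a plan,
>>> one rough approach, assuming you can run the modify against a small test copy
>>> of the table (test_copy here is an invented name), is to compare the session
>>> statistics before and after:
>>>
>>> alter table test_copy modify col1 char(9 byte);
>>>
>>> select sn.name, ms.value
>>> from   v$mystat ms, v$statname sn
>>> where  sn.statistic# = ms.statistic#
>>> and    sn.name in ('redo size', 'undo change vector size',
>>>                    'leaf node splits', 'leaf node 90-10 splits');
>>> -- run the query before and after the alter and look at the deltas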
>>>
>>> A further time-waster is that Oracle has a funny "restart" mechanism the
>>> first time the undo segment it's using needs to add an extent (see:
>>> https://jonathanlewis.wordpress.com/2019/09/10/update-restarts/ and its
>>> comments). At the scale you're operating at, your modify is likely to work for
>>> a few million rows, then roll back and start again.
>>>
>>> Given the huge amount of redo and the catastrophic impact on the indexes,
>>> the options are:
>>> a) try dropping any affected indexes before you do the modify - this MAY
>>> help.
>>> or
>>> b) use dbms_redefinition to make the change if you need it to be online
>>> (see the sketch after this list)
>>> or
>>> c) Create an empty single-partition partitioned table matching the
>>> definition. Insert /*+ append */ from old to new. Create local indexes
>>> matching your current indexes on the partitioned table. Exchange partition
>>> with table. This means you only copy the table once (and minimise the
>>> redo) and don't lose privileges etc. - but there are some little details you
>>> would want to investigate and adjust around the edges of this strategy.
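>>>
>>> For option (b), a bare-bones dbms_redefinition sketch, assuming the owning
>>> schema is USER1 and using an invented interim table name (TAB1_INT); this is
>>> only an outline of the call sequence, not a tested script:
>>>
>>> create table tab1_int as select * from tab1 where 1 = 0;
>>> alter table tab1_int modify col1 char(9 byte);
>>>
>>> declare
>>>   l_errors pls_integer;
>>> begin
>>>   -- check the table can be redefined by rowid (use cons_use_pk if it has a PK)
>>>   dbms_redefinition.can_redef_table('USER1', 'TAB1', dbms_redefinition.cons_use_rowid);
>>>
>>>   dbms_redefinition.start_redef_table(
>>>       uname        => 'USER1',
>>>       orig_table   => 'TAB1',
>>>       int_table    => 'TAB1_INT',
>>>       options_flag => dbms_redefinition.cons_use_rowid);
>>>
>>>   -- copy indexes, constraints, triggers and grants to the interim table
>>>   dbms_redefinition.copy_table_dependents(
>>>       uname        => 'USER1',
>>>       orig_table   => 'TAB1',
>>>       int_table    => 'TAB1_INT',
>>>       copy_indexes => dbms_redefinition.cons_orig_params,
>>>       num_errors   => l_errors);
>>>
>>>   dbms_redefinition.finish_redef_table('USER1', 'TAB1', 'TAB1_INT');
>>> end;
>>> /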
>>>
>>> Regards
>>> Jonathan Lewis
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, 23 Aug 2022 at 10:35, yudhi s <learnerdatabase99_at_gmail.com>
>>> wrote:
>>>
>>>> Hi All, we have got an ALTER TABLE MODIFY column statement in which we are
>>>> increasing the length of a column from char(8 byte) to char(9 byte) as
>>>> below. The table holds ~100 million rows. We are seeing in longops that it's
>>>> estimated to take 4-5hrs, and the message showing up is 'Table Scan'.
>>>> I think one way is to move the data to another table, then truncate this
>>>> table, modify the column, and then move the data back using INSERT
>>>> APPEND. But we were trying to understand if there exists any other
>>>> lower-impact and faster approach, like making it happen in parallel threads etc. I
>>>> tried setting the table's degree to parallel 16, but the ALTER is still
>>>> running in serial.
>>>>
>>>> alter table tab1 modify COL1 CHAR(9 BYTE);
>>>>
>>>> Regards
>>>> Yudhi
>>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Aug 24 2022 - 10:05:25 CEST
