Re: Big Update/DML

From: Sanjay Mishra
Date: Wed, 26 Aug 2020 22:05:33 +0000 (UTC)
Message-ID: <1307388517.6738335.1598479533308_at_mail.yahoo.com>

Jon
Thanks for the explanation. The environment is currently refreshed with expdp/impdp, and the table uses Advanced Compression because of its size: it is 4-5 TB uncompressed and 790 GB compressed. The table receives only inserts afterwards, no updates; this is a one-time update to multiple tables required by an application upgrade. PCTFREE settings are in place, and the extra space from row migration is acceptable, since the main goal is to reduce the elapsed time.

A simple parallel UPDATE takes 20+ hours with 100 parallel processes, a CPU_COUNT of 50 per node, and a 100 GB buffer cache. CTAS (selecting the column twice) was also tried, but it showed no improvement and took almost the same time.

Yes, partitioning is another good option that could help reduce the update time, but it would need full testing, and the environment has to move to 19c before 12.2 support ends in November, so there is not enough time to redo the performance tests.

What is the best way to improve CTAS performance, or is there another bulk-operation approach for the UPDATE/INSERT?

Tx
Sanjay
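For reference, a parallel CTAS of this kind is usually driven by session-level parallel DDL plus a PARALLEL clause on both the new table and the source query. A minimal sketch, with hypothetical names (big_tab, col_a, col_b, new_col) and an assumed degree of 32:

```sql
-- Enable parallel DDL for the session (parallel query is on by default).
ALTER SESSION ENABLE PARALLEL DDL;

CREATE TABLE big_tab_new
  PARALLEL 32
  NOLOGGING                       -- minimal redo; take a backup afterwards
  ROW STORE COMPRESS ADVANCED     -- keep Advanced Compression on the copy
AS
SELECT /*+ PARALLEL(t, 32) */
       t.col_a,
       t.col_b,
       t.col_a || '-' || t.col_b AS new_col   -- the derived column, populated once
FROM   big_tab t;
```

NOLOGGING plus direct-path parallel load is typically where most of the CTAS time saving comes from; the table and index must be rebuilt or backed up afterwards because the load is not recoverable from redo.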

    On Wednesday, August 26, 2020, 11:00:52 AM EDT, Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:    

Is that 3-4 billion rows each, or in total?

I would be a little suspicious of an update that populates a new column with a value derived from existing columns. What options might you have for declaring a virtual column instead, which you could index if needed?

Be extremely cautious about calculating space requirements. If you're updating every row of old data, you might find that you cause a significant fraction of the rows in each block to migrate, and there's a peculiarity of bulk row migration that can effectively "waste" 25% of the space in every block that becomes the target of a migrated row.
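The virtual-column alternative avoids touching any existing rows at all, since the value is computed at query time. A sketch with hypothetical names (big_tab, col_a, col_b, new_col) and an assumed expression:

```sql
-- No rows are rewritten: the column is metadata plus an expression.
ALTER TABLE big_tab
  ADD (new_col VARCHAR2(61)
       GENERATED ALWAYS AS (col_a || '-' || col_b) VIRTUAL);

-- Virtual columns can be indexed like ordinary columns if needed.
CREATE INDEX big_tab_newcol_ix ON big_tab (new_col);
```

Adding the virtual column is effectively instant regardless of table size, which sidesteps both the 20-hour update and the row-migration risk entirely.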

This effect can be MUCH worse when the table is compressed (even for OLTP), since the update has to decompress the row before updating it and then only "re-compresses" intermittently as the block becomes full. The CPU cost can be horrendous, and you still have the migration problem if the added column means the original rows no longer fit in the block.

If it is necessary to add the column, you may want to review what "alter table move online" can do in the latest versions (in case you can make it add the column as you move), or review the options for dbms_redefinition - perhaps running several redefinitions concurrently rather than trying a parallel update against any single table.

Regards
Jonathan Lewis
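For the record, the dbms_redefinition route mentioned above can populate the new column through the column mapping while the table stays online. A hedged sketch, with hypothetical schema/table/column names (SCOTT, BIG_TAB, col_a, col_b, new_col):

```sql
-- 12.2+: an online move can also rebuild/recompress without blocking DML.
ALTER TABLE big_tab MOVE ONLINE ROW STORE COMPRESS ADVANCED PARALLEL 32;

-- Online redefinition: interim table with the extra column, populated via mapping.
CREATE TABLE big_tab_int (
  col_a   VARCHAR2(30),
  col_b   VARCHAR2(30),
  new_col VARCHAR2(61)
) ROW STORE COMPRESS ADVANCED;

BEGIN
  DBMS_REDEFINITION.can_redef_table('SCOTT', 'BIG_TAB');
  DBMS_REDEFINITION.start_redef_table(
    uname       => 'SCOTT',
    orig_table  => 'BIG_TAB',
    int_table   => 'BIG_TAB_INT',
    col_mapping => 'col_a col_a, col_b col_b, ' ||
                   'col_a || ''-'' || col_b new_col');
  DBMS_REDEFINITION.finish_redef_table('SCOTT', 'BIG_TAB', 'BIG_TAB_INT');
END;
/
```

Running one redefinition per table concurrently, as suggested, spreads the work across tables instead of relying on parallel DML within a single one.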

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Aug 27 2020 - 00:05:33 CEST
