Re: Big Update/DML

From: Sanjay Mishra <"Sanjay>
Date: Thu, 27 Aug 2020 00:43:24 +0000 (UTC)
Message-ID: <2067790113.7736.1598489004999_at_mail.yahoo.com>



 Sayan
Update statement is:

    update snows.stamp_detail set stamp_process_calc = processed_calc_amt;

Tried to use:
1. Parallel DML with 100 --> Taking 20+ hrs
2. CTAS was tried using half a billion as well as 1 billion rows with parallel 50, 75, 100 - Almost the same result
3. CTAS with nologging, same as step 2, but still not much improvement

We have 5-10 such big tables, so running each one in this kind of time-frame needs a long downtime.

Tx
Sanjay
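Roughly what was tried for options 1 and 3 (a sketch only - the table alias, hint placement and degree are assumptions based on the description above):

    -- option 1: parallel DML
    alter session enable parallel dml;

    update /*+ parallel(d 100) */ snows.stamp_detail d
       set stamp_process_calc = processed_calc_amt;

    commit;

    -- option 3: CTAS with nologging; in the real run the select list would
    -- replace stamp_process_calc with processed_calc_amt instead of using s.*
    create table snows.stamp_detail_new parallel 100 nologging
    as
    select /*+ parallel(s 100) */ s.*
      from snows.stamp_detail s;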

    On Wednesday, August 26, 2020, 08:23:18 PM EDT, Sayan Malakshinov <xt.and.r_at_gmail.com> wrote:  

 Hi Sanjay,
It would be better if you could provide more details about your update. The exact update statement would be helpful. Is this column nullable or not null?

On Thu, Aug 27, 2020 at 1:12 AM Sanjay Mishra <dmarc-noreply_at_freelists.org> wrote:

 Andy
Yes, that looks like an option if we are doing the work online - it takes more time but does not require downtime. In our case multiple DDLs are being run against the existing environment due to an application upgrade, so all the work has to be done within downtime. So the challenge is to reduce the time of the DML operations on big tables containing a few billion rows.

Tx
Sanjay

    On Wednesday, August 26, 2020, 11:20:55 AM EDT, Andy Sayer <andysayer_at_gmail.com> wrote:  

 It does sound like a virtual column could be the ideal solution. But if the data needs to be physically stored or cannot be calculated deterministically at any point in time, then Connor has a great demo of using dbms_redefinition to create a new table online with a function to map the new column. There's obviously some overhead from context switching, but it may be far better than some of the obstacles you might be facing at the moment: https://connor-mcdonald.com/2016/11/16/performing-a-large-correlated-update/ (and you might be able to help it with pragma udf in the right circumstances). Obviously, how helpful this is depends on where the work is currently going and how online this needs to be.
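A rough sketch of that redefinition applied to the table in this thread - the function name, the calculation, and the use of a primary-key based redefinition are assumptions, and the real col_mapping would have to list every column that should be carried across:

    -- interim table with the same shape as the original
    create table snows.stamp_detail_int as
    select * from snows.stamp_detail where 1 = 0;

    -- placeholder function for the calculation; pragma udf reduces the
    -- SQL-to-PL/SQL context-switch cost
    create or replace function snows.calc_stamp(p_amt number) return number is
      pragma udf;
    begin
      return p_amt;   -- the real calculation goes here
    end;
    /

    begin
      dbms_redefinition.start_redef_table(
        uname        => 'SNOWS',
        orig_table   => 'STAMP_DETAIL',
        int_table    => 'STAMP_DETAIL_INT',
        col_mapping  => 'processed_calc_amt processed_calc_amt, ' ||
                        'snows.calc_stamp(processed_calc_amt) stamp_process_calc',
        options_flag => dbms_redefinition.cons_use_pk);
      -- copy_table_dependents / sync_interim_table calls omitted for brevity
      dbms_redefinition.finish_redef_table('SNOWS', 'STAMP_DETAIL', 'STAMP_DETAIL_INT');
    end;
    /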

Thanks,
Andrew
On Wed, 26 Aug 2020 at 16:00, Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

Is that 3-4 billion rows each, or total?

I would be a little suspicious of an update which populates a new column with a value derived from existing columns. What options might you have for declaring a virtual column instead - which you could index if needed?

Be extremely cautious about calculating space requirements - if you're updating every row of old data you might find that you're causing a significant fraction of the rows in each block to migrate, and there's a peculiarity of bulk row migration that can effectively "waste" 25% of the space in every block that becomes the target of a migrated row.
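A sketch of that virtual-column alternative, using the column names from the update statement earlier in the thread and assuming the new column has not already been added as a real column (the real expression would be whatever calculation currently produces processed_calc_amt):

    alter table snows.stamp_detail
      add (stamp_process_calc generated always as (processed_calc_amt) virtual);

    -- a virtual column can be indexed like any other column if needed
    create index stamp_detail_spc_ix on snows.stamp_detail (stamp_process_calc);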

These effects can be MUCH worse when the table is compressed (even for OLTP), since the update has to decompress the row before updating and then only "re-compresses" intermittently as the block becomes full. The CPU cost can be horrendous, and you still have the problem of migration if the addition means the original rows can no longer fit in the block.

If it is necessary to add the column you may want to review what "alter table move online" can do in the latest versions (in case you can make it add the column as you move), or review the options for dbms_redefinition - maybe running several redefinitions concurrently rather than trying to do any parallel update to any single table.

Regards
Jonathan Lewis
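A sketch of the online move mentioned above (available from 12.2 onward); whether it can also take care of the new column in the same pass is the part to check for the version in use:

    -- rebuilds the table online, repacking (and recompressing) the rows
    -- without blocking concurrent DML
    alter table snows.stamp_detail move online;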

--

Best regards,
Sayan Malakshinov
Oracle performance tuning engineer
Oracle ACE Associate
http://orasql.org
--

http://www.freelists.org/webpage/oracle-l

Received on Thu Aug 27 2020 - 02:43:24 CEST
