Big Update/DML

From: Sanjay Mishra
Date: Wed, 26 Aug 2020 03:28:48 +0000 (UTC)
Message-ID: <625546540.6286223.1598412528817_at_mail.yahoo.com>



Hi Everyone
I am working on 19c and need to do a one-time update on multiple tables containing 3-4 billion records each. Some tables are compressed for OLTP and some are uncompressed. The tables have multiple columns, but I am updating only one newly added column, populating it with data from another column in the same table. The environment is Exadata with a 60G buffer cache and CPU_COUNT of 30. Updates with parallel DML enabled are taking several hours to a day per table and are generating high UNDO.
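For context, the single-statement pattern described above looks roughly like the sketch below; the table and column names (big_table, new_col, src_col) and the degree of parallelism are hypothetical placeholders:

```sql
-- Sketch of a single parallel DML update, assuming hypothetical
-- names big_table, new_col, src_col. Parallel DML must be enabled
-- at the session level before the UPDATE, or the statement runs serially.
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(t, 30) */ big_table t
   SET t.new_col = t.src_col;

COMMIT;
```

This runs as one transaction, which is why UNDO usage grows with table size.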

  • Can dropping the indexes reduce the elapsed time, even though the updated column has no relation to any indexed column?
  • Will a compressed table help in this scenario versus an uncompressed table? One table with COMPRESS FOR OLTP is around 800G, and a similar table without compression is 4 TB. I am trying to see whether compression can reduce I/O or buffer cache usage, from both the table and index perspective.
  • Can adding more SGA or CPU help by allocating more parallel threads and so reduce the elapsed time?

I was checking and found that dbms_parallel_execute could be a good solution. Has anyone used it for a big update, and can you share sample code to try?
TIA
Sanjay
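For reference, a minimal DBMS_PARALLEL_EXECUTE sketch for this kind of update might look like the following; the task name, owner, table and column names, chunk size, and parallel level are all assumptions to adapt:

```sql
-- Sketch: chunk the update by ROWID so each chunk is a small,
-- independently committed transaction (keeps per-transaction UNDO low).
-- BIG_TABLE, new_col, src_col and the sizing values are hypothetical.
DECLARE
  l_task VARCHAR2(30) := 'UPD_NEW_COL';
  l_sql  VARCHAR2(1000);
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task);

  -- Split the table into rowid ranges of roughly 10000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => l_task,
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- Each job slave runs this statement against its own rowid range;
  -- :start_id and :end_id are bound by DBMS_PARALLEL_EXECUTE.
  l_sql := 'UPDATE big_table t
               SET t.new_col = t.src_col
             WHERE t.rowid BETWEEN :start_id AND :end_id';

  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => l_task,
    sql_stmt       => l_sql,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 16);

  -- Drop the task only if all chunks finished; otherwise leave it
  -- in place so failed chunks can be inspected and resumed.
  IF DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task)
       = DBMS_PARALLEL_EXECUTE.FINISHED THEN
    DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
  END IF;
END;
/
```

Failed chunks can be retried with DBMS_PARALLEL_EXECUTE.RESUME_TASK, and chunk status is visible in USER_PARALLEL_EXECUTE_CHUNKS.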

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Aug 26 2020 - 05:28:48 CEST
