RE: Big Update/DML

From: Reen, Elizabeth
Date: Fri, 28 Aug 2020 14:45:56 +0000
Message-ID: <b76fa506076d4199828acf254e91f90d_at_imcnam.ssmb.com>



                Many years ago my ETL team was trying to speed up all the loads by setting the degree of parallelism to twice what it was.  One job went from 20 minutes to 2 hours.  After many hours of testing, I asked them to humor me and set it to no parallelism.  If the job ran more than 20 minutes, they could kill it.  The job took 2 minutes.  Some things do not do well with multiple threads.  Inserts can be one of them, as all of the processes are trying to get to the same end bucket.  If you partition the table, you can set up the inserts to work on each partition.  This reduces the locking substantially.  Check your parallel jobs: do they all finish reasonably close to each other, or is one process taking a lot longer?
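A minimal sketch of the partition-per-session load pattern Liz describes, using a hypothetical range-partitioned staging table (the tables, columns, and partition names are illustrative, not from this thread):

  -- hypothetical target, partitioned so each load session has its own segment
  create table sales_stage (
    sale_id   number,
    sale_date date,
    amount    number
  )
  partition by range (sale_date) (
    partition p2019 values less than (date '2020-01-01'),
    partition p2020 values less than (date '2021-01-01')
  );

  -- each session appends into its own partition, so the sessions are not
  -- all contending for the same blocks at the end of one segment
  insert /*+ append */ into sales_stage partition (p2019)
  select sale_id, sale_date, amount
  from   sales_source
  where  sale_date < date '2020-01-01';
  commit;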


Liz

From: [External] oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> On Behalf Of [External] Lothar Flatz
Sent: Friday, August 28, 2020 3:19 AM
To: Sanjay Mishra; oracle-l_at_freelists.org; Clay.Jackson_at_quest.com
Subject: Re: Big Update/DML

Sanjay,

50% of the total time is on the insert and it is CPU. Therefore compression would be a handy suspect. I would expect OLTP compression to be less efficient in a bulk load compared to e.g. "query low". I never tested that, though. Of course the best test is uncompressed on your 1 billion row sample. Make sure that you do not run into other waits, e.g. "HV contention".
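As a quick check on the "other waits" point, something like the following sketch shows what the PX slaves are actually spending their time on (requires the Diagnostics Pack licence for ASH; &sqlid is a placeholder for the SQL_ID of the insert):

  select event, session_state, count(*)
  from   v$active_session_history
  where  sql_id = '&sqlid'
  group  by event, session_state
  order  by count(*) desc;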

The CPU_COUNT parameter setting may not mean much. First of all, it almost certainly counts threads and not cores, and in a heavily CPU-bound task the extra threads won't help a lot, so you can almost divide your CPU count by 2. Second, the parameter really only defines where Resource Manager will cap you; what matters is the cores in the server that are backing CPU_COUNT, and quite likely you share those cores with other instances. What is the CPU utilization on your server when you run your statement? Bottom line: you might not have as many cores as you think you have. I would also test with a much lower DOP and see what happens.
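A sketch of how to sanity-check the threads-versus-cores point and the actual host load (the statistic names are standard v$osstat entries; the DOP to retest with is only an example):

  select stat_name, value
  from   v$osstat
  where  stat_name in ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS', 'LOAD');

  select name, value
  from   v$parameter
  where  name = 'cpu_count';

  -- then rerun the CTAS/insert with a deliberately modest DOP, e.g.
  -- /*+ parallel(16) */, and compare elapsed time and CPU utilisation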

Regards

Lothar

On 27.08.2020 at 20:21, Sanjay Mishra wrote:

Lothar

Yes, this was around 2 hrs for 1 billion rows, and the tables have 15-23 billion rows. There is no LOB object in any of the tables where the UPDATE has to be processed. Let me work on converting the one provided in the test environment. It is around 800 GB with Advanced Compression, and without compression it looks like it will be 3-4 TB. But I can try to test it.

Regarding the other part: it is Exadata 7, with CPU_COUNT tested at 30-40, but no difference so far.

Tx
Sanjay

On Thursday, August 27, 2020, 01:16:24 PM EDT, Lothar Flatz <l.flatz_at_bluewin.ch> wrote:

Hi,

Runtime is 1:45 hours, not 20??
But 1:45 still seems too long.

  1. Insert is direct parallel
  2. Work distribution is about even
  3. Statement scales on CPU. So that is OK. My gut feeling is that the CPUs are not delivering. I could be wrong, of course, because I am missing many details (e.g. how many LOBs are in that table).
  4. How many cores do you have that can work on this task?
  5. What kind of cores (old SPARCs)?
  6. What is your compression? I would run the CTAS without compression. Even if you want the result compressed, you might gain insight.
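A sketch of the comparison in point 6, assuming the table name from Sanjay's update statement and an arbitrary DOP; the ROWNUM predicate is only there to keep the test sample bounded:

  -- baseline: sample with no compression
  create table stamp_detail_nocomp nologging parallel 16
  as select /*+ parallel(16) */ *
     from   snows.stamp_detail
     where  rownum <= 1000000000;

  -- same sample with HCC "query low" (Exadata) for comparison
  create table stamp_detail_qlow compress for query low nologging parallel 16
  as select /*+ parallel(16) */ *
     from   snows.stamp_detail
     where  rownum <= 1000000000;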

Regards

Lothar

On 27.08.2020 at 18:45, Sanjay Mishra wrote:

Clay

Thanks for the update. Regarding the table: all of the update/CTAS work is tried only after the table has been refreshed with EXPDP/IMPDP from production, in order to test the timeline. I shared the SQL Monitor report in the last email, and it is attached here again.

Sanjay
On Thursday, August 27, 2020, 12:09:19 PM EDT, Clay Jackson (cjackson) <clay.jackson_at_quest.com> wrote:

What Lothar said – I’d look at the plan for the CTAS to be sure the optimizer isn’t doing something “unusual”, AND, consider the possibility that the table is already horribly “row chained” so that each read is actually reading several “random” (or worse) blocks. More data is clearly key to understanding.

Clay Jackson

From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> On Behalf Of Lothar Flatz
Sent: Thursday, August 27, 2020 3:41 AM
To: oracle-l_at_freelists.org
Subject: Re: Big Update/DML

Hi,

With regard to the CTAS, it is very hard to believe it takes that long. I am pretty sure that there is something wrong. A SQL Monitor report would be extremely helpful.

Regards

Lothar

On 27.08.2020 at 02:43, Sanjay Mishra (Redacted sender smishra_97 for DMARC) wrote:

Sayan

Update statement is

Update snows.stamp_detail set stamp_process_calc = processed_calc_amt;

Tried to use

  1. Parallel DML with DOP 100 --> taking 20+ hrs (a minimal sketch of this form is shown after this list)
  2. CTAS tried with half a billion as well as 1 billion rows at parallel 50, 75, and 100 - almost the same result
  3. CTAS with nologging, same as step 2, but still not much improvement
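For reference, a minimal sketch of the parallel DML form of item 1 above; the DOP is just the value quoted, and parallel DML has to be enabled for the session first:

  alter session enable parallel dml;

  update /*+ parallel(d, 100) */ snows.stamp_detail d
  set    stamp_process_calc = processed_calc_amt;

  commit;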

We have 5-10 such big tables, so running each one in this kind of time frame needs a long downtime.

Tx

Sanjay

On Wednesday, August 26, 2020, 08:23:18 PM EDT, Sayan Malakshinov <xt.and.r_at_gmail.com> wrote:

Hi Sanjay,

It would be better if you provide more details about your update. The exact update statement would be helpful. Is this column nullable or NOT NULL?

On Thu, Aug 27, 2020 at 1:12 AM Sanjay Mishra <dmarc-noreply_at_freelists.org> wrote:

Andy

Yes, that looks like an option if we can do the work online; even though it takes more time, it would not require downtime. In our case, however, multiple DDLs are being run against the existing environment due to an application upgrade, so all the work has to be done with downtime. The challenge is therefore to reduce the time of the DML operations on big tables containing a few billion rows.

Tx

Sanjay

On Wednesday, August 26, 2020, 11:20:55 AM EDT, Andy Sayer <andysayer_at_gmail.com> wrote:

It does sound like a virtual column could be the ideal solution. But if data needs to be physically stored or cannot be calculated deterministically at any point in time then Connor has a great demo of using dbms_redefinition to create a new table online with a function to map the new column. There’s obviously some overhead with context switching but it may be far better than some of the obstacles you might be facing at the moment:

https://connor-mcdonald.com/2016/11/16/performing-a-large-correlated-update/ (and you might be able to help it with pragma udf in the right circumstances).

Obviously, how helpful this is depends where the work is currently going and how online this needs to be.
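A condensed sketch of that dbms_redefinition approach, using the table and column names from Sanjay's update statement; the interim table name is made up, the column mapping is truncated to the interesting entry (in reality every column to be carried over must be listed, and the expression could be a PL/SQL function), and error handling is omitted:

  -- interim table with the same shape as the original
  create table snows.stamp_detail_int as
  select * from snows.stamp_detail where 1 = 0;

  declare
    l_errors pls_integer;
  begin
    -- assumes a usable primary key; otherwise pass dbms_redefinition.cons_use_rowid
    dbms_redefinition.can_redef_table('SNOWS', 'STAMP_DETAIL');

    dbms_redefinition.start_redef_table(
      uname       => 'SNOWS',
      orig_table  => 'STAMP_DETAIL',
      int_table   => 'STAMP_DETAIL_INT',
      -- the last entry fills the new column from an expression (or a function)
      col_mapping => 'processed_calc_amt processed_calc_amt, ' ||
                     'processed_calc_amt stamp_process_calc');

    dbms_redefinition.copy_table_dependents(
      uname        => 'SNOWS',
      orig_table   => 'STAMP_DETAIL',
      int_table    => 'STAMP_DETAIL_INT',
      copy_indexes => dbms_redefinition.cons_orig_params,
      num_errors   => l_errors);

    dbms_redefinition.finish_redef_table('SNOWS', 'STAMP_DETAIL', 'STAMP_DETAIL_INT');
  end;
  /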

Thanks,

Andrew

On Wed, 26 Aug 2020 at 16:00, Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

Is that 3-4 billion rows each, or total ?

I would be a little suspicious of an update which populates a new column with a value derived from existing columns. What options might you have for declaring a virtual column instead - one which you could index if needed?
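A sketch of that virtual-column alternative, pretending the physical column had not been added yet and that the value really is just derived from the existing processed_calc_amt column (both assumptions, based only on the update statement quoted earlier in the thread):

  alter table snows.stamp_detail
    add (stamp_process_calc generated always as (processed_calc_amt) virtual);

  -- a virtual column can be indexed like any other (it becomes a function-based index)
  create index snows.stamp_detail_spc_ix
    on snows.stamp_detail (stamp_process_calc)
    parallel 16 online;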

Be extremely cautious about calculating space requirements - if you're updating every row of old data, you might find that you're causing a significant fraction of the rows in each block to migrate, and there's a peculiarity of bulk row migration that can effectively "waste" 25% of the space in every block that becomes the target of a migrated row.

This effect can be MUCH worse when the table is compressed (even for OLTP), since the update has to decompress the row before updating and then only "re-compresses" intermittently as the block becomes full. The CPU cost can be horrendous, and you still have the problem of migration if the addition means the original rows can no longer fit in the block.

If it is necessary to add the column, you may want to review what "alter table move online" can do in the latest versions (in case you can make it add the column as you move), or review the options for dbms_redefinition - maybe running several redefinitions concurrently rather than trying to do any parallel update on any single table.
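For reference, a minimal, hedged sketch of the 12.2+ online move mentioned above; it rebuilds the table and maintains the indexes, but note the caveat that whether it can also help populate a new column is exactly the thing to verify in the docs:

  -- 12.2+ syntax: online table move, indexes maintained automatically;
  -- storage/compression attributes can be changed as part of the move
  alter table snows.stamp_detail move online;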

Regards

Jonathan Lewis

--

Best regards,
Sayan Malakshinov

Oracle performance tuning engineer

Oracle ACE Associate
http://orasql.org

--

http://www.freelists.org/webpage/oracle-l