RE: Insert query slowness

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Wed, 2 Nov 2022 18:56:35 -0400
Message-ID: <14ac01d8ef0e$5c0ac560$14205020$_at_rsiz.com>



Does Informatica have a flat file unload option? (including defining a report with either fixed-width column output or “not in your data” column separators and an infinite page length with one row per tuple).

IF it does, it seems like you might be better off unloading the data to a flat file and loading it with one of Oracle’s bulk tools.  
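
If that route is open, a minimal SQL*Loader control file for a pipe-delimited unload might look something like the sketch below (file, table and column names are made up purely for illustration; run it with sqlldr, passing control=stage_1.ctl, and optionally direct=true for a direct-path load):

  -- stage_1.ctl : illustrative only, column list must match the unload layout
  LOAD DATA
  INFILE 'unload_1.dat'
  APPEND
  INTO TABLE stage_1
  FIELDS TERMINATED BY '|' TRAILING NULLCOLS
  ( key1
  , key2
  , col_a
  , col_b
  , load_dt DATE "YYYY-MM-DD HH24:MI:SS"
  )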

IF your table is “too wide” you might even be better off creating n reports (each starting with a unique key combination), loading into n Oracle tables, and finally inserting into the real destination from the n Oracle tables joined on the unique key with the columns in the right order.
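
To picture that final step, a rough sketch with made-up names (three staging tables stage_1 .. stage_3 sharing the unique key key1, key2):

  INSERT /*+ APPEND */ INTO real_destination (key1, key2, col_a, col_b, col_c, col_d)
  SELECT s1.key1, s1.key2, s1.col_a, s1.col_b, s2.col_c, s3.col_d
  FROM   stage_1 s1
  JOIN   stage_2 s2 ON s2.key1 = s1.key1 AND s2.key2 = s1.key2
  JOIN   stage_3 s3 ON s3.key1 = s1.key1 AND s3.key2 = s1.key2;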

IF the n tables are temporary tables you might save on undo and redo duplication and only pay for it on the final insert into the real destination.  
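
For example, if the staging tables were global temporary tables (and, from 12c onwards, TEMP_UNDO_ENABLED = TRUE keeps their undo in the temporary tablespace, which is where most of the undo/redo saving comes from), each might be created along these lines - names and datatypes invented for the sketch:

  CREATE GLOBAL TEMPORARY TABLE stage_1 (
    key1   NUMBER,
    key2   NUMBER,
    col_a  VARCHAR2(100),
    col_b  VARCHAR2(100)
  )
  ON COMMIT PRESERVE ROWS;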

IF you’re doing a batch at a time, you can add the unique index on each of the n tables at the end of the interim loads and before the final insert.  
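
Something like the following, per staging table, once its interim load is done (made-up names again; note that if the staging tables are true global temporary tables the index has to exist before any session starts loading them, since DDL on a temporary table that is in use fails with ORA-14452):

  CREATE UNIQUE INDEX stage_1_uk ON stage_1 (key1, key2);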

These thoughts are askew from the very sound thoughts and suggestions that preceded my remark.  

Good luck.  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Wednesday, November 02, 2022 5:23 AM
To: Oracle L
Subject: Re: Insert query slowness

A couple of thoughts:

  1. Andy's comment about the supplemental logging bug 32992004 may be the most significant thing - it's possible that when supplemental logging is enabled your array processing effectively turns into single-row processing when it reaches the database. The nature of the data also makes a difference to what supplemental logging will have to do - you may be generating a lot more undo and redo because of it, and that might be slowing you down. Supplemental logging is one of the features that disables "private redo / in-memory undo", so if you're currently committing every 60 or 70 rows then you may have been working completely (or mostly) in private in 11g, and will have switched to competing for the public redo thread in 19c. (There's a quick query after this list for checking which supplemental logging options are currently enabled.)
  2. You've mentioned SQL*Net more data to client - that could only happen if the database were sending more data back to Informatica than could fit into a single SQL*Net packet - but if you're just doing a basic INSERT VALUES() there shouldn't be any significant volume of data returned to Informatica, so has there been a change in the Informatica configuration that makes it add a RETURNING clause to the insert? You did say, though, that the extra time was CPU time, so this wait might be irrelevant; as Lothar said, checking SDU_SIZE settings may help anyway (there's a small SDU configuration sketch after this list).
  3. Since you've said the extra time is CPU, that means Oracle must be doing some "extra" work with your insert. Do you have any reports of session activity stats (v$sesstat) for the 11g system that you can compare with the 19c stats? If (for example) Informatica processed a job lot of 100,000 rows in both versions, could you find ANY statistic name that had changed by approximately 100,000 (or a multiple of 100,000) between the two versions? (A simple query for taking those snapshots follows after this list.)
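
For point 1, a quick way to see what supplemental logging is enabled at the database level (table-level log groups would show up in dba_log_groups) is something like:

  SELECT supplemental_log_data_min,
         supplemental_log_data_pk,
         supplemental_log_data_ui,
         supplemental_log_data_all
  FROM   v$database;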
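
For point 2, the SDU can be set client-side in sqlnet.ora and/or per connect descriptor in tnsnames.ora (it's negotiated, so the server side matters too); a sketch only, with the value and the alias/host names made up for illustration:

  # sqlnet.ora
  DEFAULT_SDU_SIZE = 65535

  # tnsnames.ora
  PROD19 =
    (DESCRIPTION =
      (SDU = 65535)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
      (CONNECT_DATA = (SERVICE_NAME = prod19))
    )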
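
For point 3, a simple approach is to snapshot the loading session's statistics before and after a fixed-size batch on each version and diff the two snapshots; :loader_sid here is just a placeholder for the session you're watching:

  SELECT sn.name, st.value
  FROM   v$sesstat   st
  JOIN   v$statname  sn ON sn.statistic# = st.statistic#
  WHERE  st.sid   = :loader_sid
  AND    st.value > 0
  ORDER BY sn.name;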

Regards

Jonathan Lewis    

P.S. I didn't see a message from PAP coming through - what did it say?        

On Wed, 2 Nov 2022 at 06:46, yudhi s <learnerdatabase99_at_gmail.com> wrote:

Thank you Lothar and Andy. The details PAP posted exactly match our issue. When checking v$session at runtime of the insert query, we also see the event logged as 'SQL*Net more data to client'. Could this point to some network config issue/change that happened as part of the grid patch, given that nothing has changed from the application side? And in that case, is changing the SDU size an option?

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 02 2022 - 23:56:35 CET
