Re: Insert query slowness

From: Andy Sayer <andysayer_at_gmail.com>
Date: Tue, 1 Nov 2022 18:27:24 -0700
Message-ID: <CACj1VR5G9zbLBU2WOa+wgotcRobWk4R4hU6ixwj7nnvQJux9-w_at_mail.gmail.com>



I’ve never used Informatica, but I’ve heard about all sorts of bad behaviour on the forums.

How much time do you need to make up?

A batch size of 60 to 70 seems a bit small. How many executions does this turn out to be? How frequently is it committing? Is it starting a new session per execution? Can you DIY-parallelize this using multiple threads in Informatica?

Has anything changed with the batch sizes? Are you checking from AWR?
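For reference, one way to pull executions, rows per execution and CPU time per snapshot out of AWR; a sketch only, assuming you know the statement's SQL_ID (the :sql_id bind is a placeholder, and AWR access requires the Diagnostics Pack licence):

```sql
-- Executions, rows and CPU per AWR snapshot for one statement.
-- :sql_id is a placeholder for the insert's SQL_ID.
select sn.begin_interval_time,
       st.executions_delta                        execs,
       st.rows_processed_delta                    rows_processed,
       round(st.rows_processed_delta
             / nullif(st.executions_delta, 0), 1) rows_per_exec,
       round(st.cpu_time_delta / 1e6, 2)          cpu_secs
from   dba_hist_sqlstat  st
join   dba_hist_snapshot sn
       on  sn.snap_id         = st.snap_id
       and sn.dbid            = st.dbid
       and sn.instance_number = st.instance_number
where  st.sql_id = :sql_id
order  by sn.begin_interval_time;
```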

Increasing the batch size should make it more efficient per execution: instead of 60 rows per 10 CPU seconds (~167 ms/row) you might be able to do 120 rows in 11 CPU seconds (~92 ms/row).

OFE will mean nothing unless the time is lost during parsing and you have a baseline which isn't able to be used. Checking the Note section of your plan should point this out; if that is the problem, then you just need to evolve your baseline.
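A sketch of both checks; the :sql_id and :sql_handle binds are placeholders, not values from this thread:

```sql
-- The Note section at the bottom of the plan output will say whether
-- a SQL plan baseline was used (or failed to reproduce).
select * from table(
  dbms_xplan.display_cursor(sql_id => :sql_id, format => 'TYPICAL'));

-- If the baseline can't be reproduced, evolving it is the usual fix.
select dbms_spm.evolve_sql_plan_baseline(sql_handle => :sql_handle)
from   dual;
```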

I wouldn’t go down the unusable index path unless I knew a lot about the usage of the table. Rebuilding the index is going to get slower and slower as the table gets larger. It has its place, but there are a lot of things that could go terribly wrong.
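For completeness, the pattern looks like this (the index name is made up; note that DML only skips an unusable index when skip_unusable_indexes is true, and a unique index can never be skipped):

```sql
-- Stop index maintenance before the bulk load (hypothetical index name).
alter index my_tab_idx unusable;

-- ... run the bulk load here ...

-- Rebuild afterwards: this scans the table, so it gets slower
-- as the table grows.
alter index my_tab_idx rebuild;
```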

Thinking about the change in DB version, the CPU time increase does seem like it could be a redo thing. Perhaps compare the redo generated between your two versions; this bug comes to mind: *Bug 32992004 - supplemental logging kicking in even if PDB is not enabled for supplemental logging.*
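One way to compare: sample the session-level redo statistic immediately before and after a test batch insert, on both versions, and diff the two values. A sketch:

```sql
-- Redo generated so far by the current session; run this, do the
-- test insert, run it again, and subtract the two values.
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name = 'redo size';
```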

Another boring possibility is that you have more indexes that need updating (or indexes that coincidentally got taller). I would expect additional buffer gets per row in that case; can you see that?
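Buffer gets per row before vs after the patch falls straight out of the same AWR view; again a sketch with :sql_id as a placeholder:

```sql
-- Buffer gets per row processed, per AWR snapshot.
select sn.begin_interval_time,
       round(st.buffer_gets_delta
             / nullif(st.rows_processed_delta, 0), 1) gets_per_row
from   dba_hist_sqlstat  st
join   dba_hist_snapshot sn
       on  sn.snap_id         = st.snap_id
       and sn.dbid            = st.dbid
       and sn.instance_number = st.instance_number
where  st.sql_id = :sql_id
order  by sn.begin_interval_time;
```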

Thanks,
Andy

On Tue, Nov 1, 2022 at 12:35 PM, yudhi s <learnerdatabase99_at_gmail.com> wrote:

> Trying to see what can be done to improve this batch insert performance,
> apart from making the index unusable and loading the table. Is there any
> other option? Currently we have OFE 11.2; will making it 19.0 help in any
> way?
>
>
> On Mon, 31 Oct, 2022, 12:05 pm yudhi s, <learnerdatabase99_at_gmail.com>
> wrote:
>
>> Thank you Andy. Actually the inserts are currently happening in batches
>> from the ETL tool Informatica, with a batch size of 60 to 70. But as I
>> mentioned, each of those batches used to take <5 ms and is now taking
>> double the time, and dba_hist_sqlstat shows it's mostly the CPU time
>> component that has increased for each of those batches. Do you mean that
>> by increasing the batch size further we may see benefits? But yes, the app
>> development team states they don't have control over the batch size;
>> Informatica does its own.
>>
>> On Sun, 30 Oct, 2022, 10:24 pm Andy Sayer, <andysayer_at_gmail.com> wrote:
>>
>>> You will potentially make a lot of that time back by converting to using
>>> array binds for your large inserts. Once you do that, make sure you compare
>>> rows processed to cpu time rather than executions to cpu time. Your
>>> legitimate single value processes won’t change but they’re only taking
>>> 10ms, which isn’t going to be noticeable.
>>>
>>> Thanks,
>>> Andy
>>>
>>> On Sun, Oct 30, 2022 at 2:50 AM, yudhi s <learnerdatabase99_at_gmail.com>
>>> wrote:
>>>
>>>> Thank you so much Dominic and Mladen.
>>>>
>>>> We raised a case with Oracle. It appears that after the 19.15 DB patch
>>>> and the grid patch, some additional functions get executed behind the
>>>> scenes (stack below), and thus overall CPU time becomes higher for the
>>>> data load queries.
>>>>
>>>> There also appears to be a hardware issue with one cell server in this
>>>> full rack system. It is also an X3 machine with image 19.2.9.0 in
>>>> write-through flash cache mode, so we may have to change it to
>>>> write-back to help write query/DML performance.
>>>>
>>>> 0x14cfb24d<-ksdxcb()+2254<-sspuser()+213<-__sighandler()<-write()+16
>>>> <-sdbgrfuwf_write_file()+59<-sdbgrfwf_write_file()+66<-dbgtfdFileWrite()+714
>>>> <-dbgtfdFileAccessCbk()+848<-dbgtfPutStr()+576<-dbktPri()+144<-ksdwrf()+640
>>>> <-ktsp_dl_auto_align()+1382<-ktspfsrch_array()+3301<-ktspscan_bmb()+3004
>>>> <-ktspgsp_main()+10606<-kdzh_setup_blkarray()+455<-kdzhailseb_dml()+1904
>>>> <-kdzhcl_dml()+206<-kdt_hcc_flush()+258<-kdtFlushBuf()+20786
>>>> <-qerltcFlushKdtBuffer()+452<-qerltcBufRow()+87
>>>> <-qerltcKdtBufferedInsRowCBK()+283<-qerltcLoadSta
>>>>
>>>> On Sun, 30 Oct, 2022, 3:31 am Mladen Gogala, <gogala.mladen_at_gmail.com>
>>>> wrote:
>>>>
>>>>> On 10/28/22 18:34, yudhi s wrote:
>>>>>
>>>>> Hello Experts. It's Oracle version 19.15, but OFE is still set to
>>>>> 11.2.0.4. Since our infra team applied the April 2022 database and grid
>>>>> patches (exact details below), almost all of the conventional batch
>>>>> insert queries are running slower (e.g. ~5 ms/execution before the
>>>>> patch vs ~10 ms/execution now). The main time component for these
>>>>> queries, as shown in dba_hist_sqlstat, is "CPU time", which has
>>>>> increased. However, when checking the wait event pattern from
>>>>> dba_hist_system_event, we see that the response time for the 'cell
>>>>> single block physical read' wait event has increased from <1 ms before
>>>>> the patch to ~10 ms+ after it. We are also seeing a new wait event,
>>>>> "cell single block physical read: flash cache", which was not there
>>>>> before the patch. Has anyone encountered such an issue? We are seeing
>>>>> this symptom across multiple databases in the same cluster. There is
>>>>> nothing special in the insert queries; they look like "insert into
>>>>> tab1(c1,c2,....).. values(c1,c2,....);"
>>>>>
>>>>> 33815596 - OCW RELEASE UPDATE 19.15.0.0.0
>>>>> 33806152 - Database Release Update : 19.15.0.0.220419
>>>>> 33803476 - Grid patch
>>>>>
>>>>> Regards
>>>>>
>>>>> Yudhi
>>>>>
>>>>> Hi Yudhi,
>>>>>
>>>>> If the duration of 'cell single block physical read' has gone up 10
>>>>> times on your Exadata machine after upgrade, you should open a case with
>>>>> Oracle Support. Did you only patch Oracle RDBMS or did you also apply
>>>>> Exadata firmware patches? You can trace the particular insert that got
>>>>> slower by executing something like: ALTER SYSTEM SET EVENTS
>>>>> 'sql_trace[sql: <SQL_ID>] level=12';. That should produce a trace file
>>>>> for every process executing the given SQL_ID. You can combine those
>>>>> trace files with trcsess and analyze them with orasrp
>>>>> (http://oracledba.ru/orasrp/). That should give you an idea of what
>>>>> exactly your inserts are waiting for.
>>>>>
>>>>> However, to reiterate, what you described warrants opening a case with
>>>>> Oracle Support. Out of curiosity, what version of Exadata do you have? Do
>>>>> you have one of those new monsters with Mellanox (Nvidia) 100Gb/sec
>>>>> adapters, RDMA and Optane cache? I wonder what effect Intel killing
>>>>> off Optane will have on the Exadata platform. The "persistent cache"
>>>>> was actually Intel Optane. Of course, those last few sentences were
>>>>> just me thinking aloud; they don't have much to do with your problem.
>>>>>
>>>>> Regards
>>>>>
>>>>> --
>>>>> Mladen Gogala
>>>>> Database Consultant
>>>>> Tel: (347) 321-1217
>>>>> https://dbwhisperer.wordpress.com
>>>>>
>>>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 02 2022 - 02:27:24 CET
