Re: Insert query slowness

From: Lothar Flatz <l.flatz_at_bluewin.ch>
Date: Wed, 2 Nov 2022 08:36:42 +0100
Message-ID: <fc40f287-d657-e3b8-facc-73bb121d7d9b_at_bluewin.ch>



Maybe it is the SDU size. Judging by the batch size, it seems to me you are rather limited by the database driver Informatica uses. I assume it does use a driver, probably JDBC or ODBC. These drivers can limit your batch size in the way you describe, and it is usually easy to change the driver's configuration.
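For reference, the SDU can be raised globally in sqlnet.ora or per alias in the connect descriptor. The sketch below is illustrative only (host, port and service name are placeholders); note that the negotiated SDU is the smaller of the client-side and server-side values, so both sides must be raised:

```
# sqlnet.ora (set on both client and server side)
DEFAULT_SDU_SIZE = 65535

# or per alias in tnsnames.ora
ORCL =
  (DESCRIPTION =
    (SDU = 65535)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orclpdb1))
  )
```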

Thanks

Am 02.11.2022 um 07:45 schrieb yudhi s:
> Thank you Lothar and Andy. The details Pap posted match our issue
> exactly. When checking v$session during the insert query's runtime,
> we also see the event logged as 'SQL*Net more data to client'.
> Could this point to some network config issue/change that happened
> as part of the grid patch, since nothing has changed on the
> application side? And in that case, is changing the SDU size an
> option?
>
> On Wed, 2 Nov, 2022, 11:55 am Lothar Flatz, <l.flatz_at_bluewin.ch> wrote:
>
>     You should increase the batch size; 500 is the rule of thumb, but
>     I would test higher numbers still. It should be possible to
>     increase the batch size. I never configured Informatica myself,
>     but I heard it is difficult to configure. Also check your
>     database driver, as it might limit your batch size.
>     You could use the APPEND_VALUES hint, either in Informatica or
>     via a SQL patch.
>     There should be a direct database mode for Informatica, but it
>     comes with an extra charge.
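As a sketch of the two ways to attach the hint (table, columns and patch name below are made up; DBMS_SQLDIAG.CREATE_SQL_PATCH is a documented API from 12.2 onward):

```sql
-- Option 1: hint placed directly in the statement, if the tool allows it
INSERT /*+ APPEND_VALUES */ INTO tab1 (c1, c2) VALUES (:1, :2);

-- Option 2: attach the hint from outside via a SQL patch,
-- when the statement text cannot be changed
DECLARE
  patch_name VARCHAR2(128);
BEGIN
  patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
                  sql_id    => '&sql_id',          -- SQL_ID of the insert
                  hint_text => 'APPEND_VALUES',
                  name      => 'append_values_patch');
END;
/
```

Keep in mind that APPEND_VALUES triggers direct-path loading, which takes a segment-level lock and writes above the high-water mark, so test the side effects before using it in production.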
>
>     Is there any trigger on one of the tables? It might help to have
>     a trace.
>
> Thanks
>
> Lothar
>
> Am 01.11.2022 um 20:35 schrieb yudhi s:
>> Trying to see what all can be done to improve this batch insert
>> performance, apart from making the index unusable and loading
>> the table. Is there any other option? Currently we have OFE set
>> to 11.2; will raising it to 19.0 help in any way?
>>
>> On Mon, 31 Oct, 2022, 12:05 pm yudhi s,
>> <learnerdatabase99_at_gmail.com> wrote:
>>
>>     Thank you Andy. Actually the inserts are already happening in
>>     batches from the ETL tool Informatica, with a batch size of
>>     60 to 70. But as I mentioned, each of those batches used to
>>     take <5 ms and is now taking double that time, and
>>     dba_hist_sqlstat shows it's mostly the CPU time component
>>     that has increased for each of those batches. Do you mean
>>     that by increasing the batch size further we may see
>>     benefits? The app development team states they don't control
>>     the batch size; Informatica manages it on its own.
>>
>> On Sun, 30 Oct, 2022, 10:24 pm Andy Sayer,
>> <andysayer_at_gmail.com> wrote:
>>
>>         You will potentially make back a lot of that time by
>>         converting your large inserts to array binds. Once you do
>>         that, make sure you compare rows processed to CPU time
>>         rather than executions to CPU time. Your legitimate
>>         single-value processes won't change, but they're only
>>         taking 10 ms, which isn't going to be noticeable.
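To show what array binding looks like from client code, here is a minimal runnable sketch. It uses the stdlib sqlite3 driver purely so it runs anywhere; against Oracle the call has the same shape via python-oracledb's cursor.executemany. Table and column names are made up:

```python
import sqlite3

# 1000 rows to load; with array binds the driver sends each batch
# in one round trip instead of one round trip per row.
rows = [(i, f"name{i}") for i in range(1000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (c1 INTEGER, c2 TEXT)")

# Array-bound insert in batches of 500 (the rule-of-thumb size above)
batch_size = 500
for start in range(0, len(rows), batch_size):
    conn.executemany(
        "INSERT INTO tab1 (c1, c2) VALUES (?, ?)",
        rows[start:start + batch_size],
    )
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM tab1").fetchone()[0])  # prints 1000
```

The point Andy makes follows from this shape: one execution now carries 500 rows, so per-execution CPU time goes up while per-row CPU time goes down, and rows processed is the fair denominator.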
>>
>> Thanks,
>> Andy
>>
>> On Sun, Oct 30, 2022 at 2:50 AM, yudhi s
>> <learnerdatabase99_at_gmail.com> wrote:
>>
>>             Thank you so much Dominic and Mladen.
>>
>>             We raised a case with Oracle. It appears that after
>>             the 19.15 DB patch and the grid patch, some
>>             additional functions get executed behind the scenes
>>             (see the stack below), and thus the overall CPU time
>>             becomes higher for the data load queries.
>>
>>             There also appears to be a hardware issue with one
>>             cell server in this full rack system. It's also an X3
>>             machine with image 19.2.9.0 running in write-through
>>             flash cache mode, so we may have to change it to
>>             write-back to help the write queries/DML performance.
>>
>>             0x14cfb24d<-ksdxcb()+2254<-sspuser()+213
>>             <-__sighandler()<-write()+16<-sdbgrfuwf_write_file()+59
>>             <-sdbgrfwf_write_file()+66<-dbgtfdFileWrite()+714
>>             <-dbgtfdFileAccessCbk()+848<-dbgtfPutStr()+576
>>             <-dbktPri()+144<-ksdwrf()+640<-ktsp_dl_auto_align()+1382
>>             <-ktspfsrch_array()+3301<-ktspscan_bmb()+3004
>>             <-ktspgsp_main()+10606<-kdzh_setup_blkarray()+455
>>             <-kdzhailseb_dml()+1904<-kdzhcl_dml()+206
>>             <-kdt_hcc_flush()+258<-kdtFlushBuf()+20786
>>             <-qerltcFlushKdtBuffer()+452<-qerltcBufRow()+87
>>             <-qerltcKdtBufferedInsRowCBK()+283<-qerltcLoadSta
>>
>> On Sun, 30 Oct, 2022, 3:31 am Mladen Gogala,
>> <gogala.mladen_at_gmail.com> wrote:
>>
>> On 10/28/22 18:34, yudhi s wrote:
>>>
>>>                 Hello Experts. It's version 19.15 of Oracle, but
>>>                 OFE is still set to 11.2.0.4. After our infra
>>>                 team applied the April 2022 database and grid
>>>                 patches (exact details below), almost all of the
>>>                 conventional batch insert queries are running
>>>                 slower (e.g. ~5 ms/execution before the patch vs
>>>                 ~10 ms/execution now). The main time component
>>>                 for these queries is shown as "CPU time" in
>>>                 dba_hist_sqlstat, which has increased. However,
>>>                 when checking the wait event pattern in
>>>                 dba_hist_system_event, we see that the response
>>>                 time for the 'cell single block physical read'
>>>                 wait event has increased from <1 ms before the
>>>                 patch to ~10 ms+ now. We are also seeing a new
>>>                 wait event, "cell single block physical read:
>>>                 flash cache", which was not there before the
>>>                 patch. Has anyone encountered such an issue? We
>>>                 are seeing this symptom across multiple databases
>>>                 in the same cluster. There is nothing much in the
>>>                 insert queries; they look like
>>>                 "Insert into tab1(c1,c2,....)..
>>>                 values(c1,c2,....); "
>>>
>>>                 33815596 - OCW RELEASE UPDATE 19.15.0.0.0 (33815596)
>>>                 33806152 - Database Release Update : 19.15.0.0.220419 (33806152)
>>>                 33803476 - Grid patch
>>>
>>> Regards
>>>
>>> Yudhi
>>>
>> Hi Yudhi,
>>
>>             If the duration of 'cell single block physical read'
>>             has gone up 10 times on your Exadata machine after
>>             the upgrade, you should open a case with Oracle
>>             Support. Did you only patch the Oracle RDBMS or did
>>             you also apply the Exadata firmware patch? You can
>>             trace the particular insert that got slower by
>>             executing something like
>>             ALTER SYSTEM SET EVENTS 'sql_trace[sql: <SQL_ID>] level=12';
>>             That should produce a trace file for every process
>>             executing the given SQL_ID. You can combine those
>>             trace files with trcsess and analyze them with orasrp
>>             (http://oracledba.ru/orasrp/). That should give you
>>             an idea of what exactly your inserts are waiting for.
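A sketch of the combine step; the trace directory, file names and service name below are placeholders, and tkprof is shown only as a familiar cross-check alongside orasrp:

```shell
# Combine the per-process trace files for one service into a single file
cd /u01/app/oracle/diag/rdbms/mydb/MYDB1/trace
trcsess output=combined.trc service=myservice *.trc

# Classic summary, sorted by elapsed execute/fetch time
tkprof combined.trc combined.txt sort=exeela,fchela
```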
>>
>>             However, to reiterate, what you described warrants
>>             opening a case with Oracle Support. Out of curiosity,
>>             what version of Exadata do you have? Do you have one
>>             of those new monsters with Mellanox (Nvidia)
>>             100 Gb/sec adapters, RDMA and Optane cache? I wonder
>>             what effect Intel killing off Optane will have on the
>>             Exadata platform; the "persistent cache" was actually
>>             Intel Optane. Of course, those last few sentences were
>>             just me thinking aloud and don't have much to do with
>>             your problem.
>>
>> Regards
>>
>> --
>> Mladen Gogala
>> Database Consultant
>> Tel: (347) 321-1217
>> https://dbwhisperer.wordpress.com
>>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 02 2022 - 08:36:42 CET
