Re: Huge plain inserts response time bad with db file sequential read wait event.

From: Mladen Gogala <gogala.mladen_at_gmail.com>
Date: Fri, 21 Dec 2018 02:36:35 -0500
Message-ID: <6f003b70-9db6-7f10-fac8-10bcb66530ab_at_gmail.com>


On 12/20/2018 12:48 PM, Rakesh Ra wrote:
>
>
> Can someone guide as to,
>
> a) Why is INSERT spending more time on db file sequential reads for the
> PK index?

Because the insert has to check whether the primary key value being inserted already exists. To do that, Oracle must probe the primary key index, which is why you see db file sequential reads. If the table is large, the PK index is likely to be large as well, and you will need a lot of single-block reads. The root block and a couple of branch blocks will probably be cached, but sooner or later you will hit the situation where both branch blocks and leaf blocks have to be read from disk. There are various ways to remedy that: one is enabling parallel DML, another is using COMPRESS FOR ALL OPERATIONS, a third is hybrid columnar compression, if your storage supports it. You may even try an IOT. It's all done using the proven trial and error method.
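
For illustration, a rough sketch of the first and last of those options (table and column names are made up, adjust to your own schema):

    -- enable parallel DML for the loading session
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(t 4) */ INTO big_table t
    SELECT * FROM staging_table;
    COMMIT;

    -- or rebuild the table as an IOT, so the table and the PK index are
    -- one and the same structure and only one segment is maintained
    CREATE TABLE big_table_iot (
        id   NUMBER PRIMARY KEY,
        data VARCHAR2(100)
    ) ORGANIZATION INDEX;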

> b) Why is CPU costing showing as off in the execution plan for OGG
> queries only?

Because you haven't collected system statistics. CPU costing is off when the optimizer doesn't have access to system statistics.
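
Something along these lines should take care of it (a rough example; the 60 minute interval is just a placeholder, pick a window representative of your workload):

    -- gather workload system statistics over a 60 minute window
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 60);

    -- or, if there is no representative workload to sample, use the
    -- noworkload variant
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');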

>
> Regards,
> RRA

-- 
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
--
http://www.freelists.org/webpage/oracle-l
Received on Fri Dec 21 2018 - 08:36:35 CET
