Max time for all sql's in my process being consumed by db file sequential read [message #288935] Wed, 19 December 2007 01:38
mithun.karthikeyan
Messages: 5
Registered: December 2007
Location: Hyderabad
Junior Member
Hi,

I am facing a performance issue with one of my processes. The process takes about 4 minutes to complete; ideally it shouldn't take more than a minute.

So I generated the SQL trace and tkprof output for the process, and found that for almost all the SQLs the largest amount of time is consumed by db file sequential read. Of the total 4 minutes the process takes to complete, almost 3 minutes are spent on db file sequential read, and for almost every SQL in the trace, db file sequential read accounts for 95% of the elapsed time.

Another noticeable thing is that db file sequential read is higher when data is being fetched from three tables in particular. These are very large tables, each with 5-6 million rows. I understand that the db file sequential read time could come from the I/O required to read through the indexes of these huge tables.

However, I would like help/suggestions on how to reduce this db file sequential read time. Is there anything I can do to improve the performance?
Re: Max time for all sql's in my process being consumed by db file sequential read [message #288942 is a reply to message #288935] Wed, 19 December 2007 01:56
Michel Cadot
Messages: 64103
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
There is no general answer to your question.
Buy faster disks
Read less = optimize the statement
...

Regards
Michel
Re: Max time for all sql's in my process being consumed by db file sequential read [message #289025 is a reply to message #288935] Wed, 19 December 2007 05:16
mithun.karthikeyan
Messages: 5
Registered: December 2007
Location: Hyderabad
Junior Member
Thanks for the effort

But the question is: how do you identify that it is a disk upgrade that is required, and not further optimization of the statement?

Please don't just tell me to do a disk upgrade once optimizing the statement is no longer possible; that's obvious.
What I want help with is whether there are any pointers I can verify that would lead me to conclude a disk upgrade is required.
Is there some statistic or other data I can check?

Thanks
Mithun

[Updated on: Wed, 19 December 2007 05:16]


Re: Max time for all sql's in my process being consumed by db file sequential read [message #289049 is a reply to message #289025] Wed, 19 December 2007 06:09
Michel Cadot
Messages: 64103
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
As I said, many possible reasons can exist.
You get an answer at the level of your question:
General question -> general answer
Precise question -> precise answer

Also read How to Identify Performance Problem and Bottleneck .
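One concrete statistic to start with is the average latency of a single-block read. A sketch, assuming you have access to the v$ views (an average well above roughly 10 ms points toward the I/O subsystem; a low average combined with a huge wait count points toward the SQL simply reading too many blocks):

```sql
-- Sketch: average single-block read latency since instance startup.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000 / total_waits, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'db file sequential read';
```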

Regards
Michel
Re: Max time for all sql's in my process being consumed by db file sequential read [message #289683 is a reply to message #288935] Tue, 25 December 2007 06:02
michael_bialik
Messages: 611
Registered: July 2006
Senior Member
Post your TKPROF and index(es) definitions.

Michael
Re: Max time for all sql's in my process being consumed by db file sequential read [message #290024 is a reply to message #288935] Thu, 27 December 2007 03:16
halfydba
Messages: 20
Registered: November 2007
Location: Australia
Junior Member
Check out SQLTXPLAIN in Metalink Note 215187.1. It is very useful in assisting with the tuning of SQL queries. So get that output and upload it.

In the end, I agree with Michel. Your question is too general,
like asking, "How long is a piece of string?"



Re: Max time for all sql's in my process being consumed by db file sequential read [message #290274 is a reply to message #288935] Fri, 28 December 2007 04:36
mithun.karthikeyan
Messages: 5
Registered: December 2007
Location: Hyderabad
Junior Member
Hi,
Thanks for the reply

I took a tkprof report for my process, and that is how I found that db file sequential read was the one causing the issue.

The most time-consuming SQL is:


SELECT 'Y'
  FROM TIME_ATTRIBUTES
 WHERE TIME_ATTRIBUTE_ID IN (SELECT TIME_ATTRIBUTE_ID
                               FROM TIME_ATTRIBUTE_USAGES
                              WHERE TIME_BUILDING_BLOCK_ID = :B3
                                AND TIME_BUILDING_BLOCK_OVN = :B2)
   AND ATTRIBUTE_CATEGORY = 'REASON'
   AND ATTRIBUTE7 = DECODE(:B1, NULL, ATTRIBUTE7, :B1)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 5420 3.31 3.37 0 0 0 0
Fetch 5420 4.71 79.54 7053 124992 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10841 8.02 82.92 7053 124992 0 0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 7053 0.25 75.78
latch: cache buffers chains 6 0.00 0.00
********************************************************************************



I am also attaching the SQLTXPLAIN report for this SQL statement.



Re: Max time for all sql's in my process being consumed by db file sequential read [message #290275 is a reply to message #288935] Fri, 28 December 2007 04:38
mithun.karthikeyan
Messages: 5
Registered: December 2007
Location: Hyderabad
Junior Member
Hi,

I have also modified the above query, joining the two tables instead of using the IN clause.

The SQLtxplain report for that statement is attached.
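The rewrite described above, with a join replacing the IN subquery, might look something like this (a sketch only, using just the tables and bind variables from the original query; the attached SQLTXPLAIN report is the authoritative version):

```sql
-- Sketch: the IN-clause query rewritten as a join
SELECT 'Y'
  FROM TIME_ATTRIBUTES       ta,
       TIME_ATTRIBUTE_USAGES tau
 WHERE tau.TIME_BUILDING_BLOCK_ID  = :B3
   AND tau.TIME_BUILDING_BLOCK_OVN = :B2
   AND ta.TIME_ATTRIBUTE_ID        = tau.TIME_ATTRIBUTE_ID
   AND ta.ATTRIBUTE_CATEGORY       = 'REASON'
   AND ta.ATTRIBUTE7 = DECODE(:B1, NULL, ta.ATTRIBUTE7, :B1)
```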

Thanks
Myth
Re: Max time for all sql's in my process being consumed by db file sequential read [message #290278 is a reply to message #288935] Fri, 28 December 2007 04:42
mithun.karthikeyan
Messages: 5
Registered: December 2007
Location: Hyderabad
Junior Member
However,
this is just one statement in my process experiencing this high percentage of db file sequential read time.

Almost all the SQLs show a huge percentage of db file sequential read.

For example, the next most costly statement has a db file sequential read time of 61 seconds out of a total elapsed time of just 71 seconds.

The third most expensive SQL statement has a total elapsed time of 22 seconds, of which db file sequential read accounts for 18 seconds.



Re: Max time for all sql's in my process being consumed by db file sequential read [message #290434 is a reply to message #290278] Fri, 28 December 2007 19:44
halfydba
Messages: 20
Registered: November 2007
Location: Australia
Junior Member
Start looking through the SQLTXPLAIN output. There is a lot of useful information in there: you can see where parts of your query are taking the most time, and get a brief overview of your data. This should give you a very good place to start. Try to understand what each section is, and do some reading.
Also, you have quite a number of rows there, and the indexes you are using have quite high cardinality. Stats have not been gathered for a long time, and your data has increased ~5% for the two tables you are trying to join.
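Re-gathering the stale statistics could be sketched roughly as follows (the schema is a placeholder; CASCADE => TRUE also refreshes the table's index statistics):

```sql
-- Sketch: re-gather optimizer statistics on one of the stale tables
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,               -- or the owning schema
    tabname => 'TIME_ATTRIBUTES',
    cascade => TRUE);              -- include the table's indexes
END;
/
```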

P.S. I know you are not doing a sort operation in this query, but your sort area is very low. PGA_AGGREGATE_TARGET is quite useful for large sorts. Test letting Oracle manage this. It will consume a lot more memory, but your sorts will be a lot faster, especially with that much data.
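Switching to automatic PGA memory management, as suggested, amounts to something like the following (the target size is illustrative only; size it for your own system):

```sql
-- Sketch: let Oracle manage sort/hash work areas automatically
ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 500M SCOPE = BOTH;
```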