Performance improvement required [message #634969] Wed, 18 March 2015 08:02
saurabh.sha
Messages: 15
Registered: March 2015
Location: india
Junior Member
Dear members,

I have a SQL query which is taking a long time to run (1.5-2 hrs). I am looking for ways to improve it. I am attaching the query and the explain plan for it; kindly share your views/ideas on the same.

Regards
Saurabh
  • Attachment: Query.txt
    (Size: 22.14KB, Downloaded 249 times)
Re: Performance improvement required [message #634970 is a reply to message #634969] Wed, 18 March 2015 08:03
saurabh.sha
Explain plan attached.
Re: Performance improvement required [message #635057 is a reply to message #634970] Thu, 19 March 2015 09:30
Lalit Kumar B
Messages: 3123
Registered: May 2013
Location: World Wide on the Web
Senior Member
Welcome to the forum!

Please read and follow the OraFAQ Forum Guide and How to use [code] tags, to enable us to help you.

Is it possible for you to add the query, explain plan, and other details as text enclosed within code tags? Also, if the text is too long, consider using spoiler tags.

[Updated on: Thu, 19 March 2015 09:32]

Re: Performance improvement required [message #635064 is a reply to message #634969] Thu, 19 March 2015 11:54
John Watson
Messages: 7265
Registered: January 2010
Location: Global Village
Senior Member
Is it possible that your developers are a bit lazy?
For example, I see a SORT UNIQUE which is forcing materialization of the v_shipment_order view, and does not appear to remove any rows. This usually means that a developer has included a DISTINCT clause which is not necessary. Perhaps there are no duplicates, and if there are, perhaps they could be handled better.
Also, all those outer joins. They will restrict the optimizer's options terribly. Are they really necessary? Don't just say "yes", think it through and justify why they are there.
Re: Performance improvement required [message #635065 is a reply to message #635064] Thu, 19 March 2015 12:00
BlackSwan
Messages: 25858
Registered: January 2009
Location: SoCal
Senior Member
My interpretation of the EXPLAIN PLAN is that the query includes at least one remote object.
Does this query include @REMOTE access?
Re: Performance improvement required [message #635093 is a reply to message #635065] Fri, 20 March 2015 03:17
saurabh.sha
Thanks for the inputs. Yes, we are using remote objects (some of the tables are on a different server, so we are using synonyms).
The query does require the DISTINCT clause, and all the left outer joins are also required.
Re: Performance improvement required [message #635123 is a reply to message #635093] Sat, 21 March 2015 03:55
John Watson
saurabh.sha wrote on Fri, 20 March 2015 08:17
Thanks for the inputs. Yes, we are using remote objects (some of the tables are on a different server, so we are using synonyms).
The query does require the DISTINCT clause, and all the left outer joins are also required.


Well, I would consider your statement above carefully. How many rows is that DISTINCT removing? None? And why do you need all those outer joins that generate about 2.4m rows, if you end up discarding all but 43782 of them? You need to understand your data, and to understand your queries.
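A quick way to answer John's question is to count the rows with and without the DISTINCT. As a minimal sketch (assuming v_shipment_order is the view named above, and selecting its full row; substitute the actual select list):

```sql
-- Compare total vs. distinct row counts to see how many rows,
-- if any, the DISTINCT is actually removing.
SELECT (SELECT COUNT(*) FROM v_shipment_order)  AS total_rows,
       (SELECT COUNT(*)
        FROM (SELECT DISTINCT * FROM v_shipment_order)) AS distinct_rows
FROM   dual;
```

If the two counts match, the DISTINCT is removing nothing and is only forcing an expensive SORT UNIQUE.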
Re: Performance improvement required [message #635131 is a reply to message #634969] Sun, 22 March 2015 07:17
michael_bialik
Messages: 619
Registered: July 2006
Senior Member
Can you post TKPROF report?
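To produce such a report, one common approach is to trace the session with event 10046 and then format the trace file. A sketch (the tracefile identifier and output names are placeholders):

```sql
-- Enable extended SQL trace for the current session (level 8 includes waits).
ALTER SESSION SET tracefile_identifier = 'slow_query';
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the problem query here ...

ALTER SESSION SET events '10046 trace name context off';

-- Then, on the database server, format the trace file with TKPROF, e.g.:
--   tkprof <trace_file>.trc report.txt sys=no sort=exeela
```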

[Updated on: Sun, 22 March 2015 07:17]

Re: Performance improvement required [message #635140 is a reply to message #635131] Sun, 22 March 2015 10:41
Kevin Meade
Messages: 2102
Registered: December 1999
Location: Connecticut USA
Senior Member
In cases like this, where there is a large and complicated query involved, there are two strategies that can be readily used to figure out what to do. Usually a combination of both works best. These techniques are:

1. Cardinality Based Analysis
2. Query Decomposition and Reconstruction


For #1, Cardinality Based Analysis: I have attached some documents that will give you a basic introduction to what this is. Additionally, I have included the Amazon link to my book, from which the related documents come, in case you wish to get into more detail.

For #2, Query Decomposition and Reconstruction: this is the process of breaking down a query into independently observable parts which you can manipulate to learn more about where time is going in your original query, and thus how to fix performance issues in it. For example:

1. you have smartly used the WITH clause here to incrementally build your query.  This is very good.  And it will also make it easier for you to do the process of decomposition and reconstruction.  I note that there are several independent sub-queries in this large query.  You should use create table commands to materialize each of these sub-queries.  This will allow you to see how long the sub-queries take on their own which may lead you to one of them taking way too long and thus possibly being your problem, which you can then fix.  And they will allow you to compare row counts from each query plan step to see that they match expectations.
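
The materialization step can be sketched like this (the table and column names are placeholders for the actual WITH-clause subqueries in the posted query):

```sql
-- Suppose the original query begins:
--   WITH sub1 AS (SELECT ... FROM t1 JOIN t2 ...), sub2 AS (...) ...
-- Materialize each subquery in turn as a real table:
CREATE TABLE tmp_sub1 AS
SELECT t1.col_a, t2.col_b              -- same select list as the WITH block
FROM   t1 JOIN t2 ON t1.id = t2.id;    -- same joins/filters as the WITH block

-- Time the CREATE TABLE, then check the row count against
-- the cardinality estimate in the query plan:
SELECT COUNT(*) FROM tmp_sub1;
```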

2. the query makes use of a remote query or object.  You should materialize that object or query locally and then run tests and do the decomposition and reconstruction process using this local copy, in order to learn how much time is related to working with this remote object.  This will give you an idea of how much the remote-ness of the query is contributing to overall runtime and thus, if part of the problem, will lead you in the right direction for solutions.
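
For the remote piece, a minimal sketch (remote_tab and remote_db are placeholder names for the actual synonym/table and database link):

```sql
-- Pull the remote table across the database link once, so subsequent
-- test runs are purely local.
CREATE TABLE remote_tab_local AS
SELECT * FROM remote_tab@remote_db;

-- Gather stats on the copy so the CBO costs it realistically.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'REMOTE_TAB_LOCAL')
```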

3. once you have all the independent parts, you can materialize the main query using the tables you created for the sub-queries.  This will tell you how long the joining etc. of the main query is taking.  This you can do one table at a time, as is described in one of the attachments.  Again this will show you where your time is going and thus where there may be a potential fix.

4. I note that you have some user-defined functions as well.  It may be that these are part of your problem.  Since it appears these only show up in the main query, these can easily be evaluated by simply removing them from the query to see if it runs a lot faster.

5. also, there is a SCALAR SUB-QUERY in your query.  This too may be inefficient, which you should check. If it is actually being executed (it may not be (see notes below)) then it could be a cause of performance pain.


As to the query execution plan, aside from the remote object/query, it looks very good to me. As long as the cardinalities noted in the plan steps are accurate (if you are indeed manipulating only 3 million rows everywhere), then I would expect this query to take only a few minutes on any decent hardware platform. I would note that the query is clearly a WAREHOUSE STYLE query and as such is correctly employing HASH JOIN and FULL TABLE SCAN. Additionally, a review of the PLAN_TABLE_OUTPUT and PREDICATE INFORMATION sections you provided shows that none of the hash joins is suffering from the main hash join failure scenario (join across a partial key). This means none of the joins is generating a large intermediary join result that contains lots of unneeded rows (which usually happens as a result of a non-equi join (like BETWEEN), which the hash join cannot handle). So in short the joins look good.

Thus one thing you need to do is validate the cardinalities of your query plan to know that these table scan steps and join steps are producing the number of rows they say they are producing. Try GATHER_PLAN_STATISTICS for this. Indeed you might want to start here since it may only take you 2 hours to get the answers you need using this hint.
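
The validation step might look like this (the second statement must be run in the same session, immediately after the hinted query):

```sql
-- 1. Re-run the statement with the hint added, e.g.:
--      SELECT /*+ GATHER_PLAN_STATISTICS */ ... <rest of the query>
-- 2. Then compare estimated (E-Rows) with actual (A-Rows)
--    cardinalities for each plan step:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Any step where A-Rows differs from E-Rows by an order of magnitude or more is a prime suspect.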

Lastly I note that the query contains something like 30 table references, yet the query plan you provided only shows 8 or so.  So either the query plan you have given us is not the plan for this query, or we see in this query plan the magic of Oracle's optimization which I call DUNSEL JOIN REMOVAL and which Oracle refers to as JOIN ELIMINATION.  The CBO has determined (likely due to the outer joins in this query) that most of the joins it is doing are not needed, because they neither change the number of rows in the result set nor provide access to data needed by the result set, and thus the CBO has removed them from your query.  You should validate that the plan you have shown us is in fact the plan that was actually used for execution.  If Oracle is using this query plan, it has already saved you possibly hours of runtime because of this optimization (what a smart database we have).


Cardinality based tuning, query decomposition and reconstruction, and tuning hash joins, are all described in my book on SQL Tuning, which is where the attachments below come from, along with many other related techniques and topics.

Using the steps I have outlined above, it will likely take you 10 to 20 hours to solve your performance issue, unless you get lucky (which you might) in which case it may take much less time.

Good luck, Kevin.

[Updated on: Mon, 23 March 2015 09:28]

Re: Performance improvement required [message #635194 is a reply to message #634969] Tue, 24 March 2015 05:19
saurabh.sha
Thanks a lot Kevin and John for providing the inputs. We are working on the decomposition; so far we have found that most of the time is taken by the remote part of the query. After copying the tables to the local server, the query runs really fast. We are also now analysing the data to see whether all the outer joins are really required.

I will keep the thread updated on the progress.

Thanks again for your detailed steps and suggestions.
Re: Performance improvement required [message #642483 is a reply to message #635194] Fri, 11 September 2015 09:12
BlackSwan
>i will keep all updated on the thread regarding the progress,
but months later & you have NOT done as stated above