Re: Estimations are as good as it can be, what else can be done here?

From: yudhi s <learnerdatabase99_at_gmail.com>
Date: Thu, 9 Nov 2023 00:13:32 +0530
Message-ID: <CAEzWdqeEKQffiUeTbrU2QPfTbi0GFAUd-OczEzDgOyiyzOrPjw_at_mail.gmail.com>



Thank you Jonathan.

The first SQL Monitor report is from the actual query, which is triggered from the application's UI screen; that, I believe, is why FIRST N ROWS optimization mode shows for the query. That query is on a view (VW_PPTG) with a single filter predicate and an ORDER BY on ~12 columns.

The second SQL Monitor report I posted is from executing the query manually, with the VW_PPTG view definition expanded inline in the query (which is why the second query looks bigger, joining ~10 tables). I did it this way expecting some difference, but I don't see much change in the overall response time.

So you are correct about the 329 rows in the first query's SQL Monitor report: it stops early because of the first-rows optimization when executed from the UI.

The optimizer_features_enable('11.2.0.4') hint is in the view definition, and I kept it exactly the same when I ran the query manually. I also found that removing it makes little difference to the execution time. Since the hint is inside the view definition, I believe the first query execution must have run with it as well.
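For reference, the hint sits inside the view's defining query, in roughly this shape (the tables and column list are placeholders, not the real definition):

    create or replace view vw_pptg as
    select /*+ optimizer_features_enable('11.2.0.4') */
           t1.col1, t2.col2
    from   t1 join t2 on t2.t1_id = t1.id;

so I assume any query against VW_PPTG picks the hint up when the view is merged into the outer query.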

So I believe there is no data-duplication issue.

Considering it really has to read those 2 million rows and sort them, is it possible to get the response time for those first N rows under 10 seconds at all?
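For reference, the shape I was thinking of testing next (a sketch only; the filter and sort column names are placeholders, not the real view columns) pushes the row limit into the statement itself, so the optimizer costs explicitly for the first rows:

    select *
    from   vw_pptg
    where  filter_col = :b1              -- the single filter predicate
    order  by sort_col1, sort_col2       -- really ~12 sort columns
    fetch  first 200 rows only;          -- 12c+ row-limiting clause

If an index existed on (filter_col, sort_col1, sort_col2, ...), the optimizer might be able to walk it in order and stop after ~200 rows instead of sorting the full 2 million, though with a DISTINCT over 56 columns that early stop may not be achievable.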

I have yet to fully understand the blog post below which you highlighted. I will see if I can use something similar to tweak the query in my case and get some benefit.

https://jonathanlewis.wordpress.com/2010/05/18/double-trouble/

On Wed, Nov 8, 2023 at 5:34 PM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

> Looking through the SQL Monitor output, it seems that you really do want
> 2M rows generated and sorted, and the optimizer has done a couple of clever
> things to make that happen as efficiently as possible.
>
> A couple of discrepancies, though -
> 1 The first plan reports (FIRST N ROWS), the second reports (ALL ROWS) in
> the Status.
> 2 The first plan reports 329 rows returned, the second reports 2M. Is this
> because of "stopping early", or because of the choice of data?
> 3 If the latter, why would there be so many duplicates when the query was
> originally run? ***
> 4 The second query shows a comment (was it once a hint)
> optimizer_features_enable('11.2.0.4'), but the plan shows scalar subquery
> unnesting, which I think is a version 12 feature. Do you have some other
> optimizer_features_enable set for the production query that might be
> blocking a helpful transformation?
>
>
> Given point 3 it's worth asking if there's any scope for finding out which
> table introduces the duplicates and replacing the reference to that
> table name with an inline view of the form (select distinct column_list
> from tableX where ...). The optimizer calls this "distinct placement" and
> can do it automatically, but possibly it doesn't see the opportunity here,
> or maybe it's just not possible.
>
> If the return of 329 rows (after reduction from 2M) is normal, while the
> return of 2M rows is a very special (possibly unique) case, then you may be
> able to find a way of using a "two-pass" approach to the problem. You call
> for a distinct on 56 columns, and then order by 12 of them. If most of
> those 44 "extra" columns come from a single table you may be able to find a
> way of creating an inline view that does a select distinct on a much smaller
> number of columns (including the rowid from the one "large column list"
> table) and then does a second join to that table with a much reduced set of
> rows.
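> A sketch of that shape (all names invented for illustration; "big" stands
> for the one table that supplies most of the 44 extra columns):
>
>     select v.sort_col1, v.sort_col2, big.col1, big.col2    -- etc.
>     from   (select distinct t1.sort_col1, t1.sort_col2,
>                    big.rowid as big_rid
>             from   t1 join big on big.t1_id = t1.id
>            ) v
>     join   big on big.rowid = v.big_rid
>     order  by v.sort_col1, v.sort_col2;
>
> The first pass does the distinct and sort on a narrow row, and only the
> surviving rows revisit "big" by rowid to pick up the wide column list.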
>
> There's a trivial example of the type of thing at: double trouble |
> Oracle Scratchpad (wordpress.com)
> <https://jonathanlewis.wordpress.com/2010/05/18/double-trouble/>
> Your distinct, though, means it might not be possible to use this pattern
> effectively.
>
>
> Regards
> Jonathan Lewis
>
>
> On Wed, 8 Nov 2023 at 04:15, yudhi s <learnerdatabase99_at_gmail.com> wrote:
>
>> Hello Listers,
>>
>> It's Oracle version 19c.
>> We have a search query submitted from the UI; the expected response time
>> is <10 seconds to display the first ~200 rows on the screen, but it takes
>> ~50 seconds. The SQL Monitor report shows the cardinality estimates are
>> as good as they can be. The query joins ~10 tables. I wanted to
>> understand what else can be done (query modification, design change,
>> etc.) to make this query finish with a <10 second response time.
>>
>> The time seems evenly distributed across the plan, except that sorting
>> the final result set before publishing consumes ~20% of the execution
>> time; we need the data sorted in that order before it shows up on the
>> screen. So I am not sure what else can be done here to improve this.
>>
>> The main query is based on top of a view, but I also executed it
>> manually by expanding the internal query used to create the view. I am
>> not seeing much of a difference.
>>
>> Below I have published the query and its SQL Monitor report.
>>
>> https://gist.github.com/databasetech0073/046b198e977d200c70778a973c223ed9
>>
>> Regards
>> Yudhi
>>
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 08 2023 - 19:43:32 CET
