Re: Estimations are as good as it can be, what else can be done here?

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Wed, 8 Nov 2023 12:03:21 +0000
Message-ID: <CAGtsp8kyWORLRdf14pg5bmJx4zzJLyyi0DSzK4i9rzyrSe73bg_at_mail.gmail.com>



Looking through the SQL Monitor output, it seems that you really do want 2M rows generated and sorted, and the optimizer has done a couple of clever things to make that happen as efficiently as possible.

A couple of discrepancies, though -
1. The first plan reports (FIRST N ROWS) in the Status, the second reports (ALL ROWS).
2. The first plan reports 329 rows returned, the second reports 2M. Is this because of "stopping early", or because of the choice of data?
3. If the latter, why would there be so many duplicates when the query was originally run?
4. The second query shows a comment (was it once a hint?) optimizer_features_enable('11.2.0.4'), but the plan shows scalar subquery unnesting, which I think is a version 12 feature. Do you have some other optimizer_features_enable set for the production query that might be blocking a helpful transformation?

Given point 3, it's worth asking if there's any scope for finding out which table introduces the duplicates and replacing the reference to that table-name with an inline view of the form (select distinct column_list from tableX where ...). The optimizer calls this "distinct placement" and can do it automatically, but possibly it doesn't see the opportunity here, or maybe it's just not possible.
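By way of illustration only (the table and column names below are invented, not taken from your query), the manual version of "distinct placement" would look something like this:

```sql
-- Hypothetical: suppose t2 is the table introducing the duplicates.
-- Original shape:
--   select distinct t1.col1, t1.col2, t2.col3
--   from   t1 join t2 on t2.id = t1.id
--   where  t2.status = 'A';
--
-- Manual "distinct placement": collapse t2's duplicates before the join,
-- so the join (and the final distinct) handle far fewer rows.
select distinct t1.col1, t1.col2, v2.col3
from   t1
join   (select distinct id, col3
        from   t2
        where  status = 'A') v2
on     v2.id = t1.id;
```

This is only legal when the inline view's distinct doesn't change the result of the outer query, which is why the optimizer may decline to do it automatically.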

If the return of 329 rows (after reduction from 2M) is the normal case, while the return of 2M rows is a very special (possibly unique) case, then you may be able to find a way of using a "two-pass" approach to the problem. You call for a distinct on 56 columns, and then order by 12 of them. If most of those 44 "extra" columns come from a single table you may be able to find a way of creating an inline view that does a select distinct on a much smaller number of columns (including the rowid from the one "large column list" table) and then does a second join back to that table with a much reduced set of rows.
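As a sketch of the two-pass idea (again with invented names; the real query joins ~10 tables and selects 56 columns):

```sql
-- Pass 1: do the distinct on a narrow set of driving columns plus the
-- rowid of the "wide" table, so the sort works on short rows.
-- Pass 2: join back to the wide table by rowid for the remaining columns.
select v.key_col1, v.key_col2,
       b.wide_col1, b.wide_col2      -- ... the other "extra" columns
from   (select distinct
               t1.key_col1, t1.key_col2,
               big_t.rowid as b_rid
        from   t1
        join   big_t on big_t.id = t1.id
        where  t1.some_filter = 'X') v
join   big_t b
on     b.rowid = v.b_rid
order  by v.key_col1, v.key_col2;
```

Note the caveat: a distinct on (narrow columns + rowid) is not automatically equivalent to a distinct on all 56 columns, so you'd have to check that the rewrite still returns the right result for your data.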

There's a trivial example of the type of thing at:
https://jonathanlewis.wordpress.com/2010/05/18/double-trouble/
Your distinct, though, means it might not be possible to use this pattern effectively.

Regards
Jonathan Lewis

On Wed, 8 Nov 2023 at 04:15, yudhi s <learnerdatabase99_at_gmail.com> wrote:

> Hello Listers,
>
> It's version 19C of oracle.
> We have a search query which is getting submitted from the UI and the
> expected response time is <10 seconds to display the first ~200 odd rows on
> the screen, but it takes ~50 seconds for that. The sql monitor shows the
> cardinality estimation is as good as it can be. The query has ~10 tables
> used in the join conditions. I wanted to understand what else can be
> done(like query modification, design change etc) to make this query finish
> in <10 seconds response time.
>
> The time seems equally distributed in the plan, just that the sorting of
> the final result set before publishing is consuming ~20% of the execution
> time, but we need those data sets to be sorted in that order before showing
> up on the screen. So not sure what else can be done here, to improve this?
>
> The main query is based on the top of a view, but I even executed it
> manually by exposing the internal query which is used for creating the
> view. Not seeing much of a difference.
>
> Below I have published the query and its sql monitor.
>
> https://gist.github.com/databasetech0073/046b198e977d200c70778a973c223ed9
>
> Regards
> Yudhi
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 08 2023 - 13:03:21 CET
