Re: Performance issue on query doing smart scan

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Tue, 2 Feb 2021 17:47:58 +0000
Message-ID: <CAGtsp8nkUS7Bz2iYQWzLj-ahXOa-_pSX1Ld4eaxrT_AMWnTexg_at_mail.gmail.com>



Whether or not to talk to Oracle about the double decompression is something you'll have to decide for yourself. You could show them the figures I've highlighted (and then give them the entire set of non-zero figures as a follow-up) and ask whether double decompression is the correct interpretation, and whether there are any known bugs and related patches. It's possible they'll just tell you to make your SQL more efficient and add indexes to do less work.
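(If it helps when putting the case together, something like the following will list the full set of non-zero session statistics for a test run - the 'cell%' / 'EHCC%' filter is just my assumption about which figures you'd want to show, widen or drop it as needed:)

    select sn.name, ms.value
    from   v$statname sn, v$mystat ms
    where  ms.statistic# = sn.statistic#
    and    ms.value != 0
    and    (sn.name like 'cell%' or sn.name like 'EHCC%')   -- assumed filter
    order by sn.name;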

As far as building the index is concerned, an interesting thought is this:

You can create the index as unusable, and this will take virtually no time at all.
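As a sketch - the index name, columns and partitioning option are placeholders you'd have to adjust to match your table:

    create index big_txn_idx on big_txn_table(txn_date, account_id)
        local          -- one index partition per table partition
        unusable       -- no index data is built, so this is effectively instant
    ;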

Then you can start running 'alter index XXX rebuild partition YYY' from the oldest end upwards. (I assume that your millions of inserts/updates per day are likely to be at the recent end of the data).
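One way to drive the rebuilds from the oldest end is to generate the statements from the dictionary - the parallel/online options here are just suggestions, not requirements:

    select 'alter index big_txn_idx rebuild partition ' || partition_name ||
           ' parallel 4 online;'
    from   user_ind_partitions
    where  index_name = 'BIG_TXN_IDX'
    and    status = 'UNUSABLE'
    order by partition_position     -- assumes the oldest data is in the lowest positions
    ;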

Oracle 11.2.0.4 is capable of using "TABLE EXPANSION" to split the query into what is effectively a UNION ALL query against a view with a name like VW_TE_nnn, using index range scans for the partitions that have usable indexes and tablescans for the partitions where the index is still unusable. Here's a link to an article by Maria Colgan about the transformation:
https://blogs.oracle.com/optimizer/optimizer-transformations:-table-expansion
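Conceptually (with invented column names and dates, purely for illustration) the optimizer ends up treating your query as something like the following, where the first branch can use index range scans and the second branch falls back to partition tablescans:

    select t.txn_id, t.amount
    from   big_txn_table t
    where  t.account_id = :acct
    and    t.txn_date >= date '2020-08-01'   -- partitions whose index partitions are usable
    union all
    select t.txn_id, t.amount
    from   big_txn_table t
    where  t.account_id = :acct
    and    t.txn_date <  date '2020-08-01'   -- partitions where the index is still unusable
    ;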

Since you have 180 partitions, and since the plan will (should) split into two distinct pieces, you can examine the SQL*Monitor report to see how well the indexed branch performs relative to the tablescan branch, and you may get some idea of how efficient the whole thing will be by the time you've built the first 10 or 12 partitions.
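For reference, the text version of the report can be pulled with something like this (substitute the sql_id your query actually reports):

    set long 1000000 longchunksize 1000000 pagesize 0
    select dbms_sqltune.report_sql_monitor(
               sql_id       => '&sql_id',   -- your query's sql_id
               type         => 'TEXT',      -- or 'ACTIVE' for the HTML version
               report_level => 'ALL'
           )
    from   dual;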

Regards
Jonathan Lewis

On Tue, 2 Feb 2021 at 15:02, Lok P <loknath.73_at_gmail.com> wrote:

> Also Jonathan, apart from having the new index created(which will really
> need a lot of tests/agreement because this transaction table is 7TB in size
> and is exposed to heavy batch inserts happening each day 24/7,
> ~250-300million rows each day) to help cater this query,


--
http://www.freelists.org/webpage/oracle-l
Received on Tue Feb 02 2021 - 18:47:58 CET
