Re: Difference in temp table insert performance

From: Lok P <loknath.73_at_gmail.com>
Date: Sun, 9 Jan 2022 01:11:59 +0530
Message-ID: <CAKna9VZo38OL=qq1QHeAe0r8TqoUa5ukh5d2xabw8cm4Sm5Wbg_at_mail.gmail.com>



Also, you may want to check the 'direct path read temp' / 'direct path write temp' response times to see whether the I/O is being served from flash or not.

And along with that, write-back flash cache is the minimum requirement for this to happen, so see the note below too.

Exadata Write-Back Flash Cache - FAQ (Doc ID 1500257.1)
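
A minimal sketch of checking those average temp I/O latencies (the view and event names are standard; the sub-millisecond rule of thumb is only an illustration, not an official threshold):

```sql
-- Average wait time for the temp I/O events; consistently
-- sub-millisecond averages usually suggest the reads/writes
-- are being served from flash rather than spinning disk.
select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_ms
from   v$system_event
where  event in ('direct path read temp', 'direct path write temp');
```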

On Sun, Jan 9, 2022 at 12:19 AM Pap <oracle.developer35_at_gmail.com> wrote:

> Thank you Jonathan.
>
> I assume you are referring to the doc below, which states that temp
> reads/writes can be served from flash as of Exadata storage software
> version 12.2.1.1.0. We already have a 19+ Exadata image version, so
> hopefully we are already benefiting from it. The doc does not say
> anything about the database version, so I believe the feature does not
> depend on it (we are on Oracle 11.2.0.4 as of now), and we should be
> good from that perspective.
>
> In the case below I did see all large reads, 128KB+ (66GB / 336K read
> requests). The doc states that temp writes are cached only when they
> won't impact the high-priority OLTP workload, so maybe we are not
> getting the full flash cache benefit because this query runs during a
> busy window. This application reads and writes global temporary tables
> heavily, so, as mentioned in the doc, a separate disk group carved from
> flash disks to hold the temp files (our temp tablespace is ~1TB) may be
> something we need.
>
> We will also explore whether we can combine the indexes to reduce their
> number from three, or make them thinner by removing some columns from
> the two composite indexes.
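>
> To compare the column lists of the three indexes, something like this
> could be used (the table name is a placeholder for our GTT):
>
> ```sql
> -- List each index's columns in position order to spot overlapping
> -- leading columns that might allow indexes to be combined or thinned.
> select index_name,
>        listagg(column_name, ', ')
>          within group (order by column_position) as cols
> from   dba_ind_columns
> where  table_name = 'MY_GTT'   -- placeholder table name
> group  by index_name;
> ```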
>
> ==================================================================================================================================================================
> | Id | Operation      | Name | Rows    | Cost | Time      | Start  | Execs | Rows     | Read | Read  | Write | Write | Cell    | Mem   | Activity | Activity Detail               |
> |    |                |      | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Reqs  | Bytes | Offload | (Max) | (%)      | (# samples)                   |
> ==================================================================================================================================================================
> |  1 | LOAD AS SELECT |      |         |      |      8275 |    +21 |     1 |        1 | 336K |  66GB |  666K | 146GB |         |  543K |    84.93 | Cpu (4325)                    |
> |    |                |      |         |      |           |        |       |          |      |       |       |       |         |       |          | direct path read temp (1277)  |
> |    |                |      |         |      |           |        |       |          |      |       |       |       |         |       |          | direct path write temp (1266) |
> ==================================================================================================================================================================
>
> Can Reads And Writes to TEMP Use Exadata Flash Cache When In Writeback
> Mode (Doc ID 2468671.1)
>
>
>
> On Sat, Jan 8, 2022 at 11:30 PM Jonathan Lewis <jlewisoracle_at_gmail.com>
> wrote:
>
>>
>> If you want to find out what your version of 19c does in these
>> circumstances you need to do the test yourself. (Otherwise you could pay me
>> to do the test for you, of course)
>>
>> The differences in the effects of append (or not) for the GTT may not
>> be terribly important as you upgrade; but I think there may be some
>> possible benefit from enhancements in the use of flash cache when you
>> upgrade: you may not be able to reduce the number of writes and reads
>> to temp, but you may be able to configure 19c to make them faster.
>> (Left as an exercise because I can't do the necessary testing.)
>>
>> If the problem is the temp read/write for sorting (enable events 10032
>> and 10033 to check before running the procedure) then you may be able
>> to eliminate some of the I/O by forcing larger memory allocations - but
>> at 700M rows to index you may still be unable to avoid temp I/O. (The
>> 10032 will also tell you whether you do 3 x 700M-row sorts or a single
>> 2100M-row sort; the 10033 will tell you about "sort runs" being written
>> to disk, and "merges" from disk, which would be reads.)
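>>
>> For reference, a sketch of enabling those trace events at session
>> level before the run (standard event syntax; the levels shown are
>> illustrative):
>>
>> ```sql
>> -- 10032: sort statistics; 10033: sort run (temp I/O) details
>> alter session set events '10032 trace name context forever, level 10';
>> alter session set events '10033 trace name context forever, level 10';
>> -- ... run the procedure, then switch the events off again:
>> alter session set events '10032 trace name context off';
>> alter session set events '10033 trace name context off';
>> ```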
>>
>> You could always work on options for reducing the number of indexes -
>> it's possible that you don't need all three.
>>
>> As far as the TSFE object id being reported for the operation which
>> should be the load as select, I wouldn't spend too much time worrying
>> about it. There are other cases where the object id, or various other
>> columns from v$session (which is often a source for
>> v$active_session_history), are not updated, or are not updated exactly
>> when you might expect, so that attempts to produce reports give
>> misleading results.
>>
>>
>> Regards
>> Jonathan Lewis
>>
>>
>> On Sat, 8 Jan 2022 at 16:51, Pap <oracle.developer35_at_gmail.com> wrote:
>>
>>> Thank you so much Jonathan. As always, great detail.
>>>
>>> You are correct. The INSERT /*+ APPEND */ is used here for the data
>>> load into the global temporary table, and the GTT is created as 'on
>>> commit preserve rows'. The insert query I posted is the first-time
>>> population, i.e. the table is loaded starting from empty.
>>>
>>> We are currently on Oracle 11.2.0.4 with Exadata X8 and are planning
>>> to move to 19c soon, so I am curious about the points you stated, i.e.
>>> how the append hint behaviour varies with version, and what effect the
>>> presence or absence of data in the GTT has on an 'on commit preserve
>>> rows' table load. Can you please explain this a bit more?
>>>
>>> And as you rightly pointed out, the storage intelligence of Exadata is
>>> not being applied here because the object is a global temporary table;
>>> a real table would have benefited more. But we currently have a design
>>> constraint: the same data load SQL process runs from multiple sessions
>>> for multiple customers at the same time. So, thinking of the possible
>>> options at hand, is the only way to make the data load faster a
>>> parallel hint or dropping indexes from the global temp table?
>>>
>>> And one more point, Jonathan: I was still seeing in
>>> dba_hist_active_sess_history that, if I group the samples by
>>> sql_plan_line_id, event and current_obj#, the top sample count appears
>>> at sql_plan_line_id = 1, and it points to the object TSFE, which is
>>> not the global temporary table being loaded but the one being full
>>> scanned at plan_line_id = 8.
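>>>
>>> The grouping described above, as a sketch (the sql_id value is a
>>> placeholder to be filled in):
>>>
>>> ```sql
>>> -- Top ASH sample counts per plan line / event / object.
>>> select sql_plan_line_id, event, current_obj#, count(*) as samples
>>> from   dba_hist_active_sess_history
>>> where  sql_id = '&sql_id'   -- placeholder
>>> group  by sql_plan_line_id, event, current_obj#
>>> order  by samples desc;
>>> ```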
>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Jan 08 2022 - 20:41:59 CET
