Re: Difference in temp table insert performance

From: Lok P <loknath.73_at_gmail.com>
Date: Sun, 9 Jan 2022 13:06:22 +0530
Message-ID: <CAKna9VYNnBi=RVM2e3f3ox2ub0drxQn1oN_q+bQmcgPKykct-w_at_mail.gmail.com>



I know the overall working of 'writeback flash cache'. But I am also seeing these two points in the doc for the first time, so I can't confirm whether Oracle version 11.2.0.4.1 (specifically the '.1') is the minimum requirement for writeback, or whether the redundancy of the data disk group (NORMAL vs HIGH) really matters for it. Others may comment on this.
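
If it helps, a rough sketch of the checks (the cell-side attribute name is from memory, so please verify before relying on it):

-- DB software version (11.2.0.4.0 vs 11.2.0.4.1 or higher)
select banner from v$version;

-- data disk group redundancy (TYPE = EXTERN / NORMAL / HIGH)
select name, type from v$asm_diskgroup;

-- and on the storage cells (OS level, not SQL), from memory:
--   cellcli -e list cell attributes name,flashCacheMode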

On Sun, Jan 9, 2022 at 1:27 AM Pap <oracle.developer35_at_gmail.com> wrote:

> Thank You Lok.
>
> I just had a quick look into the writeback flash cache doc. It says the
> DB home should be "11.2.0.4.1 or higher" (from v$version in our case it's
> 11.2.0.4.0), and I am also surprised to see another criterion for writeback
> to work is that the data disk group must have HIGH redundancy; in our case,
> as I see from v$asm_diskgroup, it's set to 'NORMAL' redundancy, and the
> infra DBA says it was done that way to save space.
>
> So I am not sure, but does that mean we must not be getting the benefit of
> the writeback feature (and with it this temp read/write from flash) even
> though we are on X8 with a 19+ image version and have 'write back' enabled?
> Or are there any AWR performance statistics I can look into to confirm this
> behavior?
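>
> What I was planning to check (the statistic names below are from memory, so
> please correct me if they are wrong) is whether the 'optimized' request
> counts move at all while the load runs:
>
> select name, value
> from   v$sysstat
> where  name in ('cell flash cache read hits',
>                 'physical read requests optimized',
>                 'physical write requests optimized');
>
> If the optimized write requests stay near zero during the load, then
> presumably writeback is not helping us here.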
>
> On Sun, Jan 9, 2022 at 1:12 AM Lok P <loknath.73_at_gmail.com> wrote:
>
>> Also, you may check the direct path read/write temp response times to see
>> whether they are coming from flash or not.
>>
>> Along with that, writeback is the minimum requirement for this to happen,
>> so see the note below too.
>>
>> Exadata Write-Back Flash Cache - FAQ (Doc ID 1500257.1)
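>>
>> Something like this (just a sketch) against the event histogram should give
>> an idea; if most of the waits fall into the 1 ms and lower buckets they are
>> very likely being served from flash:
>>
>> select event, wait_time_milli, wait_count
>> from   gv$event_histogram
>> where  event in ('direct path read temp', 'direct path write temp')
>> order  by event, wait_time_milli;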
>>
>> On Sun, Jan 9, 2022 at 12:19 AM Pap <oracle.developer35_at_gmail.com> wrote:
>>
>>> Thank you Jonathan.
>>>
>>> I hope you are talking about the doc below, which states that temp
>>> read/write from flash is available from Exadata version 12.2.1.1.0. We
>>> already have a 19+ Exadata image version, so hopefully we are already
>>> benefiting from it. It does not state anything about the Oracle software
>>> version, so I believe it does not depend on that (we are on Oracle 11.2.0.4
>>> as of now), and we should be good from that perspective.
>>>
>>> In the case below I did see all big reads of 128KB+ (~66GB over ~336K read
>>> requests, i.e. roughly 200KB per request). The doc also states this will
>>> only happen provided the writes won't impact the high-priority OLTP
>>> workload, so maybe in our case we are not getting the full flash cache
>>> benefit because this query runs during a busy window. And since this
>>> application reads from and writes to global temporary tables heavily, a
>>> separate disk group carved from flash disks to hold the temp files (our
>>> temp tablespace is ~1TB), as mentioned in the doc, may be something we
>>> will need.
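>>>
>>> For reference, a rough sketch of how I sized this (the second query assumes
>>> an 8K block size):
>>>
>>> select tablespace_name, round(sum(bytes)/power(1024,3)) size_gb
>>> from   dba_temp_files
>>> group  by tablespace_name;
>>>
>>> -- current temp usage by the running loads, assuming 8K blocks
>>> select tablespace, round(sum(blocks)*8192/power(1024,3)) used_gb
>>> from   gv$sort_usage
>>> group  by tablespace;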
>>>
>>> We will also explore whether we can combine the indexes to reduce the
>>> count from three, or make the indexes thinner by removing some columns
>>> from the two composite indexes.
>>>
>>> ===================================================================================================================================================================================
>>> | Id | Operation      | Name | Rows    | Cost | Time      | Start  | Execs | Rows     | Read | Read  | Write | Write | Cell    | Mem   | Activity | Activity Detail               |
>>> |    |                |      | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Reqs  | Bytes | Offload | (Max) | (%)      | (# samples)                   |
>>> ===================================================================================================================================================================================
>>> |  1 | LOAD AS SELECT |      |         |      |      8275 |    +21 |     1 |        1 | 336K |  66GB |  666K | 146GB |         |  543K |    84.93 | Cpu (4325)                    |
>>> |    |                |      |         |      |           |        |       |          |      |       |       |       |         |       |          | direct path read temp (1277)  |
>>> |    |                |      |         |      |           |        |       |          |      |       |       |       |         |       |          | direct path write temp (1266) |
>>> ===================================================================================================================================================================================
>>>
>>>
>>> Can Reads And Writes to TEMP Use Exadata Flash Cache When In Writeback
>>> Mode (Doc ID 2468671.1)
>>>
>>>
>>>
>>> On Sat, Jan 8, 2022 at 11:30 PM Jonathan Lewis <jlewisoracle_at_gmail.com>
>>> wrote:
>>>
>>>>
>>>> If you want to find out what your version of 19c does in these
>>>> circumstances you need to do the test yourself. (Otherwise you could pay me
>>>> to do the test for you, of course)
>>>>
>>>> The differences in the effects of append (or not) for the GTT may not be
>>>> terribly important as you change version; but I think there may be some
>>>> possible benefit from enhancements in the use of flash cache when you
>>>> upgrade: you may not be able to reduce the number of writes and reads to
>>>> temp, but you may be able to configure 19c to make them faster. (Left as an
>>>> exercise because I can't do the necessary testing.)
>>>>
>>>> If the problem is the temp read/write for sorting (enable events 10032
>>>> and 10033 to check before running the procedure) then you may be able to
>>>> eliminate some of the I/O by forcing larger memory allocations - but at
>>>> 700M rows to index you may still be unable to avoid temp I/O. (The 10032
>>>> trace will also tell you whether you do 3 x 700M row sorts or a single
>>>> 2100M row sort; the 10033 will tell you about "sort runs" being written to
>>>> disc, and "merges" from disk, which would be reads.)
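>>>>
>>>> A rough sketch of the syntax, set in the session that runs the procedure
>>>> (then read the resulting trace file for the sort runs and merges):
>>>>
>>>> alter session set events '10032 trace name context forever, level 10';
>>>> alter session set events '10033 trace name context forever, level 10';
>>>> -- run the procedure here, then switch the events off:
>>>> alter session set events '10032 trace name context off';
>>>> alter session set events '10033 trace name context off';
>>>>
>>>> and, if you want to experiment with forcing a larger memory allocation:
>>>>
>>>> alter session set workarea_size_policy = manual;
>>>> alter session set sort_area_size = 2147483647;   -- roughly 2GB per sort workarea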
>>>>
>>>> You could always work on options for reducing the number of indexes -
>>>> it's possible that you don't need all three.
>>>>
>>>> As far as the TSFE object id being reported for the operation which
>>>> should be the load as select, I wouldn't spend too much time worrying about
>>>> it. There are other cases where the object id, or various other columns
>>>> from v$session (which is often a source for v$active_session_history), are
>>>> not updated, or are not updated exactly when you might expect, so that
>>>> attempts to produce reports give misleading results.
>>>>
>>>>
>>>> Regards
>>>> Jonathan Lewis
>>>>
>>>>
>>>> On Sat, 8 Jan 2022 at 16:51, Pap <oracle.developer35_at_gmail.com> wrote:
>>>>
>>>>> Thank you so much Jonathan. As always, great detail.
>>>>>
>>>>> You are correct. INSERT /*+ APPEND */ is used here for the data load
>>>>> into the global temporary table, and the GTT is created as 'on commit
>>>>> preserve rows'. The insert query I posted is the first-time population,
>>>>> which means the table is loaded from an empty state.
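>>>>>
>>>>> For clarity, the pattern is roughly like this (table and column names
>>>>> below are just placeholders, not our real objects):
>>>>>
>>>>> create global temporary table gtt_stage
>>>>> ( cust_id number,
>>>>>   txn_dt  date,
>>>>>   amount  number
>>>>> ) on commit preserve rows;
>>>>>
>>>>> insert /*+ append */ into gtt_stage
>>>>> select cust_id, txn_dt, amount
>>>>> from   source_txn;   -- placeholder for the real source query
>>>>> commit;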
>>>>>
>>>>> We are currently on Oracle 11.2.0.4 with Exadata X8 and are planning to
>>>>> move to 19c soon, so I am curious about the points you stated, i.e. how
>>>>> the append hint behaviour varies across versions and how the presence or
>>>>> absence of data in the GTT affects an 'on commit preserve rows' table
>>>>> load. Can you please explain this a bit more?
>>>>>
>>>>> And as you rightly pointed out, Exadata's storage intelligence is not
>>>>> being applied here because the table is a global temporary table; a real
>>>>> (permanent) table would have benefited more from it. But I think we
>>>>> currently have some design constraints, e.g. the same data load SQL
>>>>> process runs from multiple sessions for multiple customers at the same
>>>>> time. So, thinking of the possible options at hand, are the only ways to
>>>>> make the data load faster here a parallel hint or dropping indexes from
>>>>> the global temp table?
>>>>>
>>>>> And also, Jonathan, one more point: in dba_hist_active_sess_history, if
>>>>> I group the samples by sql_plan_line_id, event and current_obj#, the top
>>>>> sample count appears on sql_plan_line_id 1, and it points to the object
>>>>> TSFE, which is not the global temporary table being loaded but the table
>>>>> that is being full scanned at plan_line_id 8.
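>>>>>
>>>>> Roughly the grouping I used (the sql_id and the time window here are just
>>>>> placeholders):
>>>>>
>>>>> select sql_plan_line_id, event, current_obj#, count(*) samples
>>>>> from   dba_hist_active_sess_history
>>>>> where  sql_id = '&sql_id'
>>>>> and    sample_time > sysdate - 1
>>>>> group  by sql_plan_line_id, event, current_obj#
>>>>> order  by samples desc;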
>>>>>
>>>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Jan 09 2022 - 08:36:22 CET
