Re: Difference in temp table insert performance

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Sat, 8 Jan 2022 10:20:06 +0000
Message-ID: <CAGtsp8kj4mdo9j5yYEn+dwFi-EKpoq0V=azx+sqbmczsOtRWXw_at_mail.gmail.com>



A couple of background points:

  1. The FILTER operation is fine - in this case it's just Oracle checking that the two bind variable dates are in the right order. It's an example of a "conditional plan": if the dates were in the wrong order the rest of the plan would not be executed, so we can dismiss that. (A minimal illustration follows this list.)
  2. In theory the activity reported in the SQL Monitor plan is just a summary of the contents of v$active_session_history; even so, it was worth checking that this was the case here, since those reads and writes are a little surprising.
  3. It's good to have eliminated the possibility of some odd LOB behaviour - when considering anomalies it's good to narrow the field by eliminating the easy stuff first.
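
As a minimal illustration of point 1 (the table name and query below are invented, not taken from the posted SQL): for a statement such as

  select count(*)
  from   t1
  where  dt between to_date(:b2,'DD-MON-YYYY') and to_date(:b1,'DD-MON-YYYY');

the optimizer can place a FILTER operation above the rest of the plan with the predicate TO_DATE(:B1,'DD-MON-YYYY') >= TO_DATE(:B2,'DD-MON-YYYY') - exactly the predicate reported at operation 1 of the posted plan - and if the binds arrive in the wrong order that predicate is false at run time and the operations below the FILTER never start.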

I was being a little hopeful when I suggested a reporting error - that the access to TEMP might have been reported in the wrong place - but it was worth a check. The clue that it was irrelevant comes from other columns of the Monitor plan. For each hash join, a starting point for the memory/temp space usage of the operation is the volume of data (bytes rather than rows) from its first child (the build) row source. In your case:

Operation 3 had mem(max) of 21M - Operation 4 reported 270K rows (actual) and the plan from display_cursor() suggested about 30 bytes per row: so those figures look okay.
Operation 6 had mem(max) of 2M - Operation 7 reported 2,430 rows (actual) and the plan from display_cursor() suggested about 130 bytes per row, so again the figures look okay.
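
As a rough cross-check on that arithmetic: 270,000 rows at about 30 bytes each is roughly 8 MB of build data, which (allowing for the overhead of the in-memory hash table) is consistent with a mem(max) of 21M; and 2,430 rows at about 130 bytes is roughly 316 KB, comfortably inside the 2M reported. So neither hash join looks like the source of the temp activity.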

So that brings us back to the question of why the reads and writes. You're doing an insert into a global temporary table - you haven't told us how the insert operates, but it's reported as LOAD AS SELECT, so I believe it must be an insert with /*+ append */. The consequences may vary with version, and presumably you have the table declared as ON COMMIT PRESERVE ROWS, so they may also vary depending on whether the GTT is empty or has already had some data inserted.

If the table is empty then Oracle will have to sort the data needed for the three indexes in order to build them. In some cases this may mean a SINGLE sort combining all three sets of index data into one set (tagged by object_id); in others it may result in the table being scanned three times to extract and sort the data for each index in turn. This is probably where all the temp I/O is coming from; and (for your version of Oracle and Exadata) access to data in TEMP is much slower than access to data in "real" tables because of the limited scope for compression, storage indexes, and all the other tricks that don't apply.
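
As a quick sketch of the difference (my_gtt and my_source are placeholder names, not taken from your code):

  insert /*+ append */ into my_gtt select * from my_source;   -- reported as LOAD AS SELECT (direct path)
  insert into my_gtt select * from my_source;                  -- reported as LOAD TABLE CONVENTIONAL

With the direct-path version the indexes are not maintained row by row as the data arrives; the index data is sorted and applied after the segment has been loaded, which is where the sorting - and hence the temp reads and writes - can appear. The plan for the insert itself (e.g. from dbms_xplan.display_cursor for the insert's sql_id) will tell you which load mechanism you are getting.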

Regards
Jonathan Lewis

On Sat, 8 Jan 2022 at 05:35, Pap <oracle.developer35_at_gmail.com> wrote:

> Thank you, Lok.
>
> I tried to fetch information from dba_hist_active_sess_history, and I see
> that the topmost activity has been logged against plan_line_id 1 only;
> however, the topmost object_id/current_obj# points to the table/partitions
> of object TSFE, which is actually the full table scan at plan_line_id 8. A
> few of the samples are also logged against a current_obj# pointing to the
> indexes on the loaded global temporary table.
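>
> A query along these lines (the sql_id is a placeholder; this is a sketch of
> the check, not the exact script I ran) gives that breakdown:
>
> select ash.sql_plan_line_id, ash.current_obj#, o.object_name, ash.event, count(*)
> from   dba_hist_active_sess_history ash
> left join dba_objects o on o.object_id = ash.current_obj#
> where  ash.sql_id = '&sql_id'
> group by ash.sql_plan_line_id, ash.current_obj#, o.object_name, ash.event
> order by count(*) desc;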
>
> A few thoughts come to mind.
> So, as Jonathan said, is this just an issue with where the activity is
> logged, in both ASH and the sql monitor? Are these 'direct path read/write
> temp' samples really coming from the HASH joins that are in the plan? Does
> that mean the data load part, i.e. sql_plan_line_id 1, is not the
> bottleneck here, so we need to look at other parts of the query to improve
> the performance? And should we then try a parallel hint, given that there
> is not much else in the plan and everything is running as full table scans?
>
> Another thing I noticed: this insert query is wrapped within alter session
> statements, i.e. "_serial_direct_read" is set to ALWAYS before it runs and
> back to AUTO after the query ends. I think that may be because of some bugs
> with smart scan not happening for DML in version 11.2. I also see that the
> full scans are forced through a sql profile. But I hope these won't cause
> such weird behaviour.
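>
> (Roughly, the wrapper looks like this - the insert statement itself is just
> a placeholder here:)
>
> alter session set "_serial_direct_read" = ALWAYS;
> insert into my_gtt select * from my_source;
> alter session set "_serial_direct_read" = AUTO;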
>
>
>
>
> On Sat, Jan 8, 2022 at 10:47 AM Lok P <loknath.73_at_gmail.com> wrote:
>
>> Can you verify the topmost activity in
>> ASH/dba_hist_active_sess_history logged against the same execution that
>> you captured in the sql monitor?
>>
>> Is it against the same plan_line_id 1, with current_obj# pointing to any
>> of the three indexes on the loaded global temporary table, and with the
>> events 'direct path read temp'/'direct path write temp'?
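>>
>> Something like this (sql_id and sql_exec_id are placeholders) should show it:
>>
>> select sql_plan_line_id, event, count(*)
>> from   dba_hist_active_sess_history
>> where  sql_id = '&sql_id'
>> and    sql_exec_id = &sql_exec_id
>> group by sql_plan_line_id, event
>> order by count(*) desc;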
>>
>> On Sat, Jan 8, 2022 at 10:39 AM Pap <oracle.developer35_at_gmail.com> wrote:
>>
>>> Thank You Jonathan.
>>>
>>> I misstated something in my initial post and want to correct it. It is
>>> actually the load into the global temporary table with ~27 columns and 3
>>> indexes (the first sql monitor) that is taking the time, with the 'direct
>>> path read/write temp' samples. The load into the other global temporary
>>> table, with ~41 columns and no indexes (the second sql monitor I posted in
>>> my earlier thread), is not taking time.
>>>
>>> But I am still trying to understand whether all that overhead is just
>>> because of the indexes. Those three indexes (two of them composite) are all
>>> b-tree indexes, on the following columns and data types:
>>>
>>> (OTYP VARCHAR2(2 BYTE), OID NUMBER(38))
>>> (OTYP VARCHAR2(2 BYTE), EID NUMBER(38))
>>> (FID NUMBER(38))
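>>>
>>> (In DDL terms that is roughly the following - the index and table names are
>>> made up for illustration:)
>>>
>>> create index gtt_ix1 on my_gtt (otyp, oid);
>>> create index gtt_ix2 on my_gtt (otyp, eid);
>>> create index gtt_ix3 on my_gtt (fid);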
>>>
>>> As I had not captured the display_cursor plan while it was running, I
>>> ran the SELECT part of it manually in production to see the column
>>> projection information. I verified that the projected columns from the
>>> select list have no LOB data types, nor any concatenation-type functions.
>>> The select columns look simple, with no functions applied to them. And
>>> the filter at step 1 appears to be a simple date filter only.
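>>>
>>> (The plan, predicate and projection sections below came from dbms_xplan -
>>> roughly the following, run straight after the manual execution of the SELECT.)
>>>
>>> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL +ALIAS +PROJECTION'));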
>>>
>>> SELECT TSFE.FID, TSFE.CB, TSFE.PPCID, TSFE.CRT, TSFE.DT_CR, TSFE.DT_MOD,
>>> TSFE.MBY,TSFE.AMT,TSFE.PART_DATE,TSFE.SCD,TSFE.OID,TSFE.OTYP,
>>> TSFE.EID,TSFE.ETYP,TSFE.BST, TSFE.ART, TSFE.FCID,
>>> TSFE.RFLG,TSFE.SID, TSFE.PBS_ID, TSFE.PBST,
>>> TSFE.PBS_DT,TSFE.TAMT,TSFE.FCNT, TSSF.RTYP
>>> FROM TTNI TTNI,
>>> TSFE TSFE,
>>> TSSF TSSF
>>> WHERE TSFE.PART_DATE BETWEEN TSSF.PART_DATE1 AND TSSF.PART_DATE1 + 1
>>> AND TSFE.EID = TTNI.NID
>>> AND TSFE.ETYP = TTNI.NE
>>> AND TSFE.SID = TSSF.SID
>>> AND TSFE.BST = :B3
>>> AND TSFE.PART_DATE BETWEEN to_date(:B2,'DD-MON-YYYY') AND to_date(:B1,'DD-MON-YYYY')
>>> AND TSSF.PART_DATE2 BETWEEN to_date(:B2,'DD-MON-YYYY') AND to_date(:B1,'DD-MON-YYYY')
>>> AND TSSF.PART_DATE2 BETWEEN TTNI.STIM AND TTNI.ETIM;
>>>
>>>
>>>
>>> ---------------------------------------------------------------------------------------------------------
>>> | Id  | Operation                       | Name | Rows  | Bytes | Cost (%CPU)| Time      | Pstart| Pstop |
>>> ---------------------------------------------------------------------------------------------------------
>>> |   0 | SELECT STATEMENT                |      |       |       |    48M(100)|           |       |       |
>>> |*  1 |  FILTER                         |      |       |       |            |           |       |       |
>>> |*  2 |   HASH JOIN                     |      |     1 |   205 |    48M  (4)| 162:19:14 |       |       |
>>> |   3 |    PARTITION RANGE ITERATOR     |      |    63 |  1953 |   225   (5)|  00:00:03 |  KEY  |  KEY  |
>>> |*  4 |     TABLE ACCESS STORAGE FULL   | TSSF |    63 |  1953 |   225   (5)|  00:00:03 |  KEY  |  KEY  |
>>> |*  5 |    HASH JOIN                    |      |  1860 |  316K |    48M  (4)| 162:19:11 |       |       |
>>> |*  6 |     TABLE ACCESS STORAGE FULL   | TTNI |    20 |   680 |    19   (0)|  00:00:01 |       |       |
>>> |   7 |     PARTITION RANGE ITERATOR    |      |  102M |   13G |    48M  (4)| 162:19:07 |  KEY  |  KEY  |
>>> |*  8 |      TABLE ACCESS STORAGE FULL  | TSFE |  102M |   13G |    48M  (4)| 162:19:07 |  KEY  |  KEY  |
>>> ---------------------------------------------------------------------------------------------------------
>>>
>>> Query Block Name / Object Alias (identified by operation id):
>>> -------------------------------------------------------------
>>>
>>> 1 - SEL$1
>>> 4 - SEL$1 / TSSF_at_SEL$1
>>> 6 - SEL$1 / TTNI_at_SEL$1
>>> 8 - SEL$1 / TSFE_at_SEL$1
>>>
>>> Predicate Information (identified by operation id):
>>> ---------------------------------------------------
>>> 1 - filter(TO_DATE(:B1,'DD-MON-YYYY')>=TO_DATE(:B2,'DD-MON-YYYY'))
>>> 2 - access("TSFE"."SID"="TSSF"."SID")
>>> filter(("TSFE"."PART_DATE">="TSSF"."PART_DATE1" AND
>>> "TSFE"."PART_DATE"<=INTERNAL_FUNCTION("TSSF"."PART_DATE1")+1
>>> AND "TSSF"."PART_DATE2">="TTNI"."STIM" AND
>>> "TSSF"."PART_DATE2"<="TTNI"."ETIM"))
>>> 4 - storage(("TSSF"."PART_DATE2"<=TO_DATE(:B1,'DD-MON-YYYY') AND
>>> "TSSF"."PART_DATE1"<=TO_DATE(:B1,'DD-MON-YYYY') AND
>>> INTERNAL_FUNCTION("TSSF"."PART_DATE1")+1>=TO_DATE(:B2,'DD-MON-YYYY') AND
>>> "TSSF"."PART_DATE2">=TO_DATE(:B2,'DD-MON-YYYY')))
>>> filter(("TSSF"."PART_DATE2"<=TO_DATE(:B1,'DD-MON-YYYY') AND
>>> "TSSF"."PART_DATE1"<=TO_DATE(:B1,'DD-MON-YYYY') AND
>>> INTERNAL_FUNCTION("TSSF"."PART_DATE1")+1>=TO_DATE(:B2,'DD-MON-YYYY') AND
>>> "TSSF"."PART_DATE2">=TO_DATE(:B2,'DD-MON-YYYY')))
>>> 5 - access("TSFE"."EID"="TTNI"."NID" AND "TSFE"."ETYP"="TTNI"."NE")
>>> 6 - storage(("TTNI"."ETIM">=TO_DATE(:B2,'DD-MON-YYYY') AND
>>> "TTNI"."STIM"<=TO_DATE(:B1,'DD-MON-YYYY')))
>>> filter(("TTNI"."ETIM">=TO_DATE(:B2,'DD-MON-YYYY') AND
>>> "TTNI"."STIM"<=TO_DATE(:B1,'DD-MON-YYYY')))
>>> 8 - storage(("TSFE"."PART_DATE">=TO_DATE(:B2,'DD-MON-YYYY') AND
>>> "TSFE"."PART_DATE"<=TO_DATE(:B1,'DD-MON-YYYY') AND "TSFE"."BST"=:B3))
>>> filter(("TSFE"."PART_DATE">=TO_DATE(:B2,'DD-MON-YYYY') AND
>>> "TSFE"."PART_DATE"<=TO_DATE(:B1,'DD-MON-YYYY') AND "TSFE"."BST"=:B3))
>>>
>>> Column Projection Information (identified by operation id):
>>> -----------------------------------------------------------
>>> 1 - "TSFE"."SID"[VARCHAR2,40], "TSSF"."RTYP"[VARCHAR2,1],
>>> "TSFE"."EID"[NUMBER,22],
>>> "TSFE"."ETYP"[VARCHAR2,2], "TSFE"."FID"[NUMBER,22],
>>> "TSFE"."CB"[VARCHAR2,50],
>>> "TSFE"."PPCID"[NUMBER,22], "TSFE"."CRT"[NUMBER,22],
>>> "TSFE"."DT_CR"[DATE,7],
>>> "TSFE"."DT_MOD"[DATE,7], "TSFE"."MBY"[VARCHAR2,50],
>>> "TSFE"."AMT"[NUMBER,22], "TSFE"."PART_DATE"[DATE,7],
>>> "TSFE"."SCD"[VARCHAR2,5], "TSFE"."OID"[NUMBER,22],
>>> "TSFE"."OTYP"[VARCHAR2,2],
>>> "TSFE"."FCNT"[NUMBER,22], "TSFE"."TAMT"[NUMBER,22],
>>> "TSFE"."BST"[VARCHAR2,2],
>>> "TSFE"."ART"[NUMBER,22], "TSFE"."FCID"[NUMBER,22],
>>> "TSFE"."RFLG"[VARCHAR2,1],
>>> "TSFE"."PBS_DT"[DATE,7], "TSFE"."PBS_ID"[NUMBER,22],
>>> "TSFE"."PBST"[VARCHAR2,2]
>>> 2 - (#keys=1) "TSFE"."SID"[VARCHAR2,40], "TSSF"."RTYP"[VARCHAR2,1],
>>> "TSFE"."EID"[NUMBER,22],
>>> "TSFE"."ETYP"[VARCHAR2,2], "TSFE"."FID"[NUMBER,22],
>>> "TSFE"."CB"[VARCHAR2,50],
>>> "TSFE"."PPCID"[NUMBER,22], "TSFE"."CRT"[NUMBER,22],
>>> "TSFE"."DT_CR"[DATE,7],
>>> "TSFE"."DT_MOD"[DATE,7], "TSFE"."MBY"[VARCHAR2,50],
>>> "TSFE"."AMT"[NUMBER,22], "TSFE"."PART_DATE"[DATE,7],
>>> "TSFE"."SCD"[VARCHAR2,5], "TSFE"."OID"[NUMBER,22],
>>> "TSFE"."OTYP"[VARCHAR2,2],
>>> "TSFE"."FCNT"[NUMBER,22], "TSFE"."TAMT"[NUMBER,22],
>>> "TSFE"."BST"[VARCHAR2,2],
>>> "TSFE"."ART"[NUMBER,22], "TSFE"."FCID"[NUMBER,22],
>>> "TSFE"."RFLG"[VARCHAR2,1],
>>> "TSFE"."PBS_DT"[DATE,7], "TSFE"."PBS_ID"[NUMBER,22],
>>> "TSFE"."PBST"[VARCHAR2,2]
>>> 3 - "TSSF"."SID"[VARCHAR2,40], "TSSF"."PART_DATE1"[DATE,7],
>>> "TSSF"."RTYP"[VARCHAR2,1], "TSSF"."PART_DATE2"[DATE,7]
>>> 4 - "TSSF"."SID"[VARCHAR2,40], "TSSF"."PART_DATE1"[DATE,7],
>>> "TSSF"."RTYP"[VARCHAR2,1], "TSSF"."PART_DATE2"[DATE,7]
>>> 5 - (#keys=2) "TSFE"."EID"[NUMBER,22], "TSFE"."ETYP"[VARCHAR2,2],
>>> "TTNI"."STIM"[DATE,7],
>>> "TTNI"."ETIM"[DATE,7], "TSFE"."FID"[NUMBER,22],
>>> "TSFE"."CB"[VARCHAR2,50],
>>> "TSFE"."PPCID"[NUMBER,22], "TSFE"."CRT"[NUMBER,22],
>>> "TSFE"."DT_CR"[DATE,7],
>>> "TSFE"."DT_MOD"[DATE,7], "TSFE"."MBY"[VARCHAR2,50],
>>> "TSFE"."AMT"[NUMBER,22], "TSFE"."PART_DATE"[DATE,7],
>>> "TSFE"."SCD"[VARCHAR2,5], "TSFE"."OID"[NUMBER,22],
>>> "TSFE"."OTYP"[VARCHAR2,2],
>>> "TSFE"."FCNT"[NUMBER,22], "TSFE"."TAMT"[NUMBER,22],
>>> "TSFE"."BST"[VARCHAR2,2],
>>> "TSFE"."ART"[NUMBER,22], "TSFE"."FCID"[NUMBER,22],
>>> "TSFE"."RFLG"[VARCHAR2,1],
>>> "TSFE"."SID"[VARCHAR2,40], "TSFE"."PBS_ID"[NUMBER,22],
>>> "TSFE"."PBST"[VARCHAR2,2],
>>> "TSFE"."PBS_DT"[DATE,7]
>>> 6 - "TTNI"."NE"[VARCHAR2,2], "TTNI"."NID"[NUMBER,22],
>>> "TTNI"."STIM"[DATE,7],
>>> "TTNI"."ETIM"[DATE,7]
>>> 7 - "TSFE"."FID"[NUMBER,22], "TSFE"."CB"[VARCHAR2,50],
>>> "TSFE"."PPCID"[NUMBER,22],
>>> "TSFE"."CRT"[NUMBER,22], "TSFE"."DT_CR"[DATE,7],
>>> "TSFE"."DT_MOD"[DATE,7], "TSFE"."MBY"[VARCHAR2,50],
>>> "TSFE"."AMT"[NUMBER,22], "TSFE"."PART_DATE"[DATE,7],
>>> "TSFE"."SCD"[VARCHAR2,5], "TSFE"."OID"[NUMBER,22],
>>> "TSFE"."OTYP"[VARCHAR2,2], "TSFE"."EID"[NUMBER,22],
>>> "TSFE"."ETYP"[VARCHAR2,2],
>>> "TSFE"."BST"[VARCHAR2,2], "TSFE"."ART"[NUMBER,22],
>>> "TSFE"."FCID"[NUMBER,22],
>>> "TSFE"."RFLG"[VARCHAR2,1], "TSFE"."SID"[VARCHAR2,40],
>>> "TSFE"."PBS_ID"[NUMBER,22],
>>> "TSFE"."PBST"[VARCHAR2,2], "TSFE"."PBS_DT"[DATE,7],
>>> "TSFE"."TAMT"[NUMBER,22],
>>> "TSFE"."FCNT"[NUMBER,22]
>>> 8 - "TSFE"."FID"[NUMBER,22], "TSFE"."CB"[VARCHAR2,50],
>>> "TSFE"."PPCID"[NUMBER,22],
>>> "TSFE"."CRT"[NUMBER,22], "TSFE"."DT_CR"[DATE,7],
>>> "TSFE"."DT_MOD"[DATE,7], "TSFE"."MBY"[VARCHAR2,50],
>>> "TSFE"."AMT"[NUMBER,22], "TSFE"."PART_DATE"[DATE,7],
>>> "TSFE"."SCD"[VARCHAR2,5], "TSFE"."OID"[NUMBER,22],
>>> "TSFE"."OTYP"[VARCHAR2,2], "TSFE"."EID"[NUMBER,22],
>>> "TSFE"."ETYP"[VARCHAR2,2],
>>> "TSFE"."BST"[VARCHAR2,2], "TSFE"."ART"[NUMBER,22],
>>> "TSFE"."FCID"[NUMBER,22],
>>> "TSFE"."RFLG"[VARCHAR2,1], "TSFE"."SID"[VARCHAR2,40],
>>> "TSFE"."PBS_ID"[NUMBER,22],
>>> "TSFE"."PBST"[VARCHAR2,2], "TSFE"."PBS_DT"[DATE,7],
>>> "TSFE"."TAMT"[NUMBER,22],
>>> "TSFE"."FCNT"[NUMBER,22]
>>>
>>> On Sat, Jan 8, 2022 at 2:48 AM Jonathan Lewis <jlewisoracle_at_gmail.com>
>>> wrote:
>>>
>>>>
>>>> The large number of combined reads and writes to temp in a LOAD
>>>> operation suggests two possibilities (though I've not seen the pattern
>>>> before):
>>>>
>>>> a) maybe the statistics are actually being reported in the wrong place
>>>> in the plan for some reason
>>>> b) possibly the code is doing something with LOB columns - perhaps, if it
>>>> is calling a function in the select list that concatenates a number of
>>>> values to construct a LOB value, this is one of the side effects (see the
>>>> sketch below).
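>>>>
>>>> A hypothetical example of the kind of thing I mean in (b) - the function
>>>> and table names are invented:
>>>>
>>>> create or replace function concat_cols(p1 varchar2, p2 varchar2) return clob is
>>>> begin
>>>>   return to_clob(p1) || p2;   -- builds a temporary LOB for every row
>>>> end;
>>>> /
>>>>
>>>> insert into my_gtt select concat_cols(c1, c2) from my_source;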
>>>>
>>>> Look at the SQL, the column definitions, and the projection information
>>>> - that might give you some clues.
>>>>
>>>> BTW - what is the FILTER operation doing?
>>>>
>>>> Regards
>>>> Jonathan Lewis
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, 7 Jan 2022 at 17:15, Pap <oracle.developer35_at_gmail.com> wrote:
>>>>
>>>>> Hello Listers. It's version 11.2.0.4 of Oracle, and it is planned to move
>>>>> to 19c soon.
>>>>>
>>>>> For one of the third-party applications, we see that a direct path insert
>>>>> into a global temporary table is taking significant time. Below are the
>>>>> sql monitor reports from two of the queries; both of them load data into
>>>>> global temporary tables, inserting ~500 million rows in the first case and
>>>>> ~700 million rows in the second. But what we see is that even though the
>>>>> global temporary table in the first case holds no indexes, it is still
>>>>> showing ~84% of the activity in the data load step (plan_line_id 1), and
>>>>> the activity section in the sql monitor shows significant time samples
>>>>> for 'direct path read temp' and 'direct path write temp'.
>>>>>
>>>>> In the second case it is inserting ~747 million rows but is not spending
>>>>> that much time in the data load part, i.e. plan_line_id 1, and I am also
>>>>> not seeing those 'direct path read temp' and 'direct path write temp'
>>>>> samples there - even though this global temporary table has 3 indexes on it.
>>>>>
>>>>> So we wanted to understand what must be causing this, and whether we could
>>>>> make the data load into the global temporary table faster in the first
>>>>> case. In the second case I understand it is the HASH join where we spend
>>>>> a lot of time as it spills to temp, and we may not have much option there,
>>>>> but we were expecting that at least the data load should not take this
>>>>> amount of time.
>>>>>
>>>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Jan 08 2022 - 11:20:06 CET
