Re: reads by KTSJ
Date: Wed, 13 May 2020 19:28:11 -0400
Message-ID: <CAMHX9JK=61dx3t0d=WwL1zX-3zA26VnwfKp-sMr7bdXE6VyWgQ_at_mail.gmail.com>
Based on the stats descriptions, I'd say there's some asynchronous ASSM
bitmap block repair "offloading" mechanism: when a foreground process
finds a block that the ASSM bitmap reports as "free enough", but in fact
isn't, then someone needs to go and update that ASSM bitmap block (and
generate redo, etc.) - and instead of the foreground session doing the
fix-up itself, the task gets handed off to the KTSJ space management slaves.
See, one of the counters has "fg" in it instead of "bg":
ASSM *fg*: submit segment fix task
This explains why you had all those single block reads against regular data
blocks - the space management slaves had to read them into the buffer cache
to look inside each data block and compare its real space usage against
whatever the ASSM bitmaps said.
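If you want to double-check that it's really the Wnnn slaves doing those
reads, ASH makes it easy. A sketch - this needs the Diagnostics Pack license,
and the program/event filters below are assumptions to adjust for your
platform:

SQL> SELECT program, event, COUNT(*)
     FROM   v$active_session_history
     WHERE  sample_time > SYSDATE - 1/24        -- last hour
     AND    program LIKE '%(W0%'                -- Wnnn space management slaves
     AND    event = 'db file sequential read'   -- single block reads
     GROUP  BY program, event
     ORDER  BY COUNT(*) DESC;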
Btw, there's an X$KTSP_REPAIR_LIST table in the SGA (protected by an "auto
segment repair latch") that you can query to see if your table/partition's
data object ID is listed there. Check the alert log and the SMCO/Wnnn trace
files too, to see if there's any extra detail.
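For example, something like this as SYS - a sketch only, as I haven't
verified the X$KTSP_REPAIR_LIST column names, so describe the table first;
the OBJD join column below is a hypothetical name:

SQL> DESC x$ktsp_repair_list

SQL> SELECT o.owner, o.object_name, o.subobject_name, r.*
     FROM   x$ktsp_repair_list r, dba_objects o
     WHERE  o.data_object_id = r.objd;  -- OBJD is an assumed column, verify with DESC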
Here are the "assm repair" related statistics from your output again, with
the interesting counters highlighted:

NAME                                    VALUE
--------------------------------------- --------------------
ASSM bg: segment fix monitor                 789
ASSM fg: submit segment fix task               0
ASSM bg:mark segment for fix                   0
*ASSM bg:create segment fix task             321*
*ASSM bg:slave fix one segment               169*
ASSM bg:slave fix state                  8281891
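If you don't have the @sys script handy, the same counters come straight
from v$sysstat:

SQL> SELECT name, value
     FROM   v$sysstat
     WHERE  name LIKE 'ASSM%fix%';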
Apparently there are some "assm repair" related parameters too:

SQL> _at_pd assm%repair

Show all parameters and session values from x$ksppi/x$ksppcv...

NAME                             VALUE        DESCRIPTION
-------------------------------- ------------ --------------------------------------------------
_assm_segment_repair_fg          0            ASSM Segment repair: fg
_assm_segment_repair_bg          TRUE         ASSM Segment repair: bg enable
_assm_segment_repair_maxblks     4294967294   ASSM Segment repair: Max blocks per slave
_assm_segment_repair_timelimit   60           ASSM Segment repair: Max time in seconds per slave
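(@pd just joins the hidden parameter X$ tables named in the output above;
if you don't have the script, a rough equivalent, runnable as SYS, is:

SQL> SELECT p.ksppinm name, v.ksppstvl value, p.ksppdesc description
     FROM   x$ksppi p, x$ksppcv v
     WHERE  p.indx = v.indx
     AND    p.ksppinm LIKE '%assm%repair%';
)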
*Update: As I finished writing all the above, I decided to search MOS for
"assm repair" and found a very relevant-looking bugfix:*
*Bug 30265523 - blocks are not marked as free in assm after delete - 12.2
and later (Doc ID 30265523.8)*
*The fix for 30265523 is first included in:*
- 19.7.0.0.200414 (Apr 2020) Database Release Update (DB RU) <https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&sourceId=30265523.8&id=245840.1#FIXED_19.7.0.0.DBRU:200414>
- 18.10.0.0.200414 (Apr 2020) Database Release Update (DB RU) <https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&sourceId=30265523.8&id=245840.1#FIXED_18.10.0.0.DBRU:200414>
- 12.2.0.1.200414 (Apr 2020) Database Release Update (DB RU) <https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&sourceId=30265523.8&id=245840.1#FIXED_12.2.0.1.DBRU:200414>
- 12.2.0.1.200414 (Apr 2020) Bundle Patch for Windows Platforms <https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&sourceId=30265523.8&id=245840.1#FIXED_WIN:C201P:200414>
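To check whether one of these RUs (and therefore the fix) is already in
place in your database, dba_registry_sqlpatch shows what's been applied,
e.g. (column names as of 12.2+, adjust on older versions):

SQL> SELECT patch_id, patch_type, status, description
     FROM   dba_registry_sqlpatch
     ORDER  BY action_time;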
There's a way to disable the fix apparently (read the full details from the note):
This fix can be disabled by setting the new event 60029 at level 1024...
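Presumably that's the usual event syntax, so something along these lines -
but verify the exact syntax and scope against the note before trying this
on a real system:

-- session level (an assumption based on the note's wording):
SQL> ALTER SESSION SET EVENTS '60029 trace name context forever, level 1024';

-- or instance-wide via the spfile, taking effect after a restart:
SQL> ALTER SYSTEM SET EVENT = '60029 trace name context forever, level 1024' SCOPE=SPFILE;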
--
Tanel
https://tanelpoder.com

On Wed, May 13, 2020 at 6:15 PM Noveljic Nenad <nenad.noveljic_at_vontobel.com> wrote:

> Indeed!
>
> SQL> _at_init
> SQL> _at_sys ASSM%fix
>
> NAME                                    VALUE
> --------------------------------------- --------------------
> ASSM bg: segment fix monitor                 789
> ASSM fg: submit segment fix task               0
> ASSM bg:mark segment for fix                   0
> ASSM bg:create segment fix task              321
> ASSM bg:slave fix one segment                169
> ASSM bg:slave fix state                  8281891
>
> Do you ever run out of good ideas?
>
> Yes, that's actually my main concern - that the KTSJ slaves are thrashing
> the file system.
>
> I've already used the hidden parameter when, e.g., many slaves were
> started to extend a LOB segment, or several slaves tried to do the
> flashback archive maintenance for the same table, which ended in a
> deadlock. The high default value doesn't make any sense, especially on a
> consolidated system. I won't change the parameter for now, though, so
> that I get alerted in the case of excessive free buffer waits - it's a
> test system anyway.
>
> Best regards,
>
> Nenad
>
> https://nenadnoveljic.com/blog

--
http://www.freelists.org/webpage/oracle-l

Received on Thu May 14 2020 - 01:28:11 CEST