Re: Memory Issue

From: Pap <oracle.developer35_at_gmail.com>
Date: Sun, 10 Jan 2021 01:35:46 +0530
Message-ID: <CAEjw_fiNpdnq4bboy34-G9SZVQS8jF6iGmtBuxq75CVV17ML2Q_at_mail.gmail.com>



  I was able to capture a few details while the same issue was occurring in real time. There are ~30 different sessions waiting on "library cache: mutex X", and almost all of them are executing the same recursive query. When I checked the P1 column of v$session_wait for those ~30 sessions, they all pointed to the same P1 value (i.e. idn). Looking that value up in v$db_object_cache (hash_value = <P1>), it resolves to the same object for all ~30 sessions, and that object is a public synonym. Results from v$db_object_cache are below.

I tried to see the details by doing an F4 on the synonym name, but I get messages like "synonyms for this object type are not supported", and I cannot describe the underlying table with F4 either. I am not sure how this synonym can cause such issues or how it is related to the recursive query against obj$, but I do see that last_ddl_time in dba_objects for this public synonym is in the recent past, although created shows a date a few years back.
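
For reference, the lookups were roughly of this form (a sketch against the standard views; the bind placeholders stand in for the masked values):

select sid, event, p1 as idn
from   v$session_wait
where  event = 'library cache: mutex X';

select name, namespace, type, locks, pins, locked_total, pinned_total
from   v$db_object_cache
where  hash_value = :p1;               -- the idn captured above

select owner, object_name, object_type, created, last_ddl_time
from   dba_objects
where  object_name = :synonym_name     -- placeholder for the masked synonym
and    object_type = 'SYNONYM';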

All of the memory consumption is under the memory type "session pga memory", and v$process_memory shows all of the top consumers in the Other category only, with very little under SQL and PL/SQL.
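
(A sketch of that per-category breakdown, joining v$process_memory to the sessions through v$process; the wait filter just narrows it to the affected sessions:)

select pm.category,
       round(sum(pm.allocated)/1024/1024) alloc_mb,
       round(sum(pm.used)/1024/1024)      used_mb
from   v$process_memory pm
       join v$process p on p.pid   = pm.pid
       join v$session s on s.paddr = p.addr
where  s.event = 'library cache: mutex X'   -- optional filter
group  by pm.category
order  by alloc_mb desc;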

NAME   NAMESPACE        HASH_VALUE  TYPE     LOCKS  PINS  LOCKED_TOTAL  PINNED_TOTAL
XXXX   TABLE/PROCEDURE  XXXXX       SYNONYM     33    21    1442853779    1442853771

On Thu, Jan 7, 2021 at 10:46 PM Pap <oracle.developer35_at_gmail.com> wrote:

> Can this memory choke be caused by an "os thread startup" event from a
> Streams replication session?
>
> Something was not adding up: the average active sessions per ~15-minute
> interval for the recursive query was showing up as <5 throughout this
> memory issue, which lasted ~17 hours. That seems quite low. I fetched the
> data from dba_hist_active_sess_history (something along the lines of the
> sketch below). I also tried executing those recursive queries manually
> with a few binds in a loop 100 times, and they completed within seconds.
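>
> (Sketch of that kind of AAS calculation, not necessarily the exact query
> used; the sql_id and time window are placeholders, and the *10/900
> reflects the 10-second sampling of dba_hist_active_sess_history over a
> 900-second interval:)
>
> select trunc(sample_time, 'HH24')
>        + floor(to_char(sample_time, 'MI') / 15) / 96   as interval_start,
>        round(count(*) * 10 / 900, 1)                   as approx_aas
> from   dba_hist_active_sess_history
> where  sql_id = :recursive_sql_id
> and    sample_time between :start_ts and :end_ts
> group  by trunc(sample_time, 'HH24')
>        + floor(to_char(sample_time, 'MI') / 15) / 96
> order  by 1;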
>
> Then we found something interesting in OEM Top Activity just before the
> issue started. There was a single SYS session in the database with module
> "Streams" and program "oracle_at_PRODDB1 (QMNC)". This session was not
> showing up as a top consumer and never appeared as a blocking session for
> any of the recursive queries that were consuming all that memory, but it
> almost seems like this is the one that made the recursive queries suffer
> from "library cache: mutex X".
>
> That same Streams session was active throughout the ~17-hour window, and
> aggregating its wait events from dba_hist_active_sess_history gives the
> breakdown below. I am trying to interpret these numbers, but I cannot
> quite see how this one session could make those recursive queries crawl.
> Can the wait events of this Streams session cause library cache
> contention across the board and thereby impact the parsing of these
> recursive queries? The session logged a null sql_id and only ~5MB of
> pga_allocated in dba_hist_active_sess_history.
>
>
> *Stats from that one Streams session for the ~17 hours it was active:*
>
> EVENT                                                    SESSION_STATE  COUNT(*)
>                                                          ON CPU            24894
> os thread startup                                        WAITING           15273
> Streams AQ: qmn coordinator waiting for slave to start   WAITING             969
> enq: PV - syncstart                                      WAITING             218
> latch: shared pool                                       WAITING             125
> process diagnostic dump                                  WAITING              78
> enq: PR - contention                                     WAITING              14
> cleanup of aborted process                               WAITING               6
> latch free                                               WAITING               1
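>
> (The aggregation above is of this general form, a sketch; the sid,
> serial# and time window are placeholders:)
>
> select event, session_state, count(*)
> from   dba_hist_active_sess_history
> where  session_id      = :streams_sid
> and    session_serial# = :streams_serial
> and    sample_time between :start_ts and :end_ts
> group  by event, session_state
> order  by count(*) desc;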
>
>
> On Wed, Dec 23, 2020 at 8:29 PM Sayan Malakshinov <xt.and.r_at_gmail.com>
> wrote:
>
>> I've got your exported ASH. Btw, it would be much easier if you used
>> pastebin or gist.github to post such data and just provided links to the
>> files.
>> To be honest, the only interesting thing I see in your ASH data is "V8
>> Describe Any" in "Top level call name", which should be OCIDescribeAny().
>> Also, only 716 of the 3146 rows were in "WAITING"; the other 2430 samples
>> (~80%) were "ON CPU". So I would start by snapping the session call
>> stack, at least 30-40 "oradebug short_stack" dumps.
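>>
>> (For example, something like this from a second SYSDBA session, with the
>> target's OS pid as a placeholder:)
>>
>>   SQL> oradebug setospid 12345
>>   SQL> oradebug short_stack
>>   (repeat short_stack 30-40 times, a few seconds apart)
>>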
>> Of those 716 rows, 715 were "library cache: mutex X" and one was "latch:
>> row cache objects".
>> And 132 rows had blocking_session_status = VALID, with blocking
>> sid=808, serial#=37715.
>>
>>
>> --
>> Best regards,
>> Sayan Malakshinov
>> Oracle performance tuning engineer
>> Oracle ACE Associate
>> http://orasql.org
>>
>

--
http://www.freelists.org/webpage/oracle-l