Re: How to trace memory resize ops

From: Ingrid Voigt <>
Date: Tue, 10 Apr 2018 16:46:50 +0200
Message-ID: <trinity-3f9f18cf-9b42-49fd-a7f7-0447daa13675-1523371609937_at_3c-app-gmx-bs79>

There have been two more occurrences of the shared pool growth / buffer pool shrinking.
The trace files contained the same SQL:
  SELECT tbl1.root_taskid, tbl1.taskid, COUNT (tbl2.logid)
    FROM tbl1, tbl2
   WHERE     tbl1.taskid = tbl2.ref_taskid
         AND (tbl2.entrytyp IN (:1, ... :6))
         AND (   tbl1.root_taskid IN (:7, ... :110)
              OR tbl1.taskid IN (:111, ... :214))
         AND >=
                 (SELECT TRUNC (MIN (tbl3.createddate))
                    FROM tbl3
                   WHERE tbl3.root_taskid IN (:215, ... :318)
                      OR tbl3.taskid IN (:319, ... :422))
GROUP BY tbl1.root_taskid, tbl1.taskid
This SQL, the call stack and the error fit MOS note 2127533.1 perfectly:
ORA-600 [xplSetRws:1] From Explain Plan Or Select With a Huge In-List
The remedy is to rewrite the SQL; the problem is now with the application
developers.
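For reference, a common shape for such a rewrite (a sketch only: the table and column names mirror the anonymised ones above, and the global temporary table is an assumption, not part of the original schema; the date subquery is omitted for brevity) is to load the long id lists into a global temporary table once per transaction and join to it, instead of binding hundreds of IN-list values:

```sql
-- Hypothetical GTT to hold the id lists; created once at install time,
-- not per execution.
CREATE GLOBAL TEMPORARY TABLE task_filter (
  taskid NUMBER
) ON COMMIT DELETE ROWS;

-- The application array-inserts its 100+ ids into task_filter, then runs:
SELECT tbl1.root_taskid, tbl1.taskid, COUNT (tbl2.logid)
  FROM tbl1, tbl2
 WHERE tbl1.taskid = tbl2.ref_taskid
   AND tbl2.entrytyp IN (:1, :2, :3, :4, :5, :6)
   AND (   tbl1.root_taskid IN (SELECT taskid FROM task_filter)
        OR tbl1.taskid      IN (SELECT taskid FROM task_filter))
 GROUP BY tbl1.root_taskid, tbl1.taskid;
```

This keeps the statement text (and hence the bind count) constant regardless of how many ids the application supplies.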
(This would have been much easier to find if I hadn't overlooked ORA-00600
in the alert log. Aaarghh.)
Thanks again
Ingrid Voigt
Sent: Monday, 26 March 2018, 17:09
From: "Ingrid Voigt" <>
Cc: "" <>
Subject: Re: How to trace memory resize ops
Thank you, Stefan and Jonathan. I have found something to dig deeper into. The
suspicious SQL comes from the application and, at first glance, looks as if we
might be able to change it.
No caching, no.
If something interesting turns up I will be back with results.
Ingrid Voigt
Sent: Monday, 26 March 2018, 14:44
From: "Jonathan Lewis" <>
To: "" <>
Subject: Re: How to trace memory resize ops


Stefan's given you an option for tracing.
If you want a quick check of the most obvious guess before waiting for the next occurrence, run the AWR reports for the intervals just before, during and after the last occurrence and check: the instance activity stats for large numbers of "parse count (hard)"; "SQL ordered by Version Count" for statements with large numbers of versions; and "SQL ordered by Sharable Memory" for large memory consumers.
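The hard-parse check above can also be pulled straight from the AWR tables rather than the formatted reports (a sketch, assuming the Diagnostics Pack licence that AWR use requires):

```sql
-- Delta of "parse count (hard)" between consecutive AWR snapshots,
-- to spot the interval where hard parsing spiked.
SELECT s.snap_id,
       s.end_interval_time,
       st.value - LAG (st.value) OVER (ORDER BY s.snap_id) AS hard_parse_delta
  FROM dba_hist_sysstat  st
  JOIN dba_hist_snapshot s
    ON  s.snap_id         = st.snap_id
    AND s.dbid            = st.dbid
    AND s.instance_number = st.instance_number
 WHERE st.stat_name = 'parse count (hard)'
 ORDER BY s.snap_id;
```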

Basically the guess is that either something fires off a very large number of similar SQL statements with different literals (hence the check for hard parses), or the same SQL statement with bind variables hits a bug that causes large numbers of re-optimizations (which also increase hard parses), leaving the statement with lots of versions and a large total memory requirement.
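Between occurrences, the second symptom can also be watched live in the shared pool (a sketch; the thresholds of 20 child cursors and 10MB are arbitrary starting points, not recommendations):

```sql
-- Statements currently holding many child cursors or a lot of
-- sharable memory in the library cache.
SELECT sql_id,
       COUNT (*)                       AS child_cursors,
       SUM (sharable_mem) / 1048576    AS sharable_mb
  FROM v$sql
 GROUP BY sql_id
HAVING COUNT (*) > 20
    OR SUM (sharable_mem) > 10 * 1048576
 ORDER BY sharable_mb DESC;
```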

If you're lucky it will be the literal SQL problem, which is often very easy to fix; if you're unlucky it will be some obscure Oracle bug causing the second problem. Either way you might find some early clues in a quick examination of the AWR reports.

Jonathan Lewis

From: <> on behalf of Ingrid Voigt <>
Sent: 26 March 2018 11:43
Subject: How to trace memory resize ops

Oracle EE on Windows 64 Bit

We've been having problems with memory resize operations shrinking the buffer cache
below specified limits:

Specified: 16GB db_cache_size, 3GB shared_pool_size, 20GB sga_max_size.
Resized to: approx. 3.5GB db_cache_size, 16GB shared_pool_size.

This occurs about 2-3 times per month, usually after hours, severely degrades database
performance, and ends only with a bounce of the database.

According to MOS 1269139.1 the resizes are expected behavior.

Is there some way to trace / find out what exactly causes this?
(SQL, background process, other details of the root cause)
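(For the record, the resize operations themselves are recorded in v$sga_resize_ops; this shows what was grown or shrunk and when, though not which SQL triggered it. A sketch:)

```sql
-- Recent automatic SGA resize operations, most recent first:
-- which component grew or shrank, by how much, and when.
SELECT start_time,
       end_time,
       component,
       oper_type,
       oper_mode,
       initial_size / 1048576 AS initial_mb,
       final_size   / 1048576 AS final_mb,
       status
  FROM v$sga_resize_ops
 ORDER BY start_time DESC;
```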

My customer is not willing to set _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=false
and risk ORA-04031. If that changes, could one see something with a trace of

Thanks and best regards
Ingrid Voigt

-- Received on Tue Apr 10 2018 - 16:46:50 CEST
