RE: How to trace memory resize ops

From: Mark W. Farnham <>
Date: Mon, 26 Mar 2018 09:37:46 -0400
Message-ID: <018f01d3c507$a0706600$e1513200$>

First parse time would also be a clue if there are many happening in a wave
(which as per JL seems quite likely).

Is any system-level result cache turned on?


-----Original Message-----
From: [] On Behalf Of Jonathan Lewis
Sent: Monday, March 26, 2018 8:45 AM
Subject: Re: How to trace memory resize ops


Stefan's given you an option for tracing. If you want a quick check of the most obvious guess before waiting for the next occurrence, run up the AWR reports for the intervals just before, during and after the last occurrence and check the instance activity stats for large numbers of "parse count (hard)", the "SQL ordered by Version Count" section for statements with large numbers of versions, and "SQL ordered by Sharable Memory" for large memory consumers.
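[Ed.: the AWR checks above can also be approximated against the live instance. A minimal sketch, assuming access to v$sysstat and v$sqlarea; statistic and column names are the standard ones, and the ROWNUM wrapper keeps it portable to older releases:]

```sql
-- Cumulative hard-parse count; take two snapshots a few minutes apart
-- and compare the delta.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'parse count (hard)';

-- Statements with many child cursors and their sharable memory,
-- roughly what "SQL ordered by Version Count" shows in AWR.
SELECT *
FROM  (SELECT sql_id,
              version_count,
              sharable_mem,
              SUBSTR(sql_text, 1, 60) AS sql_text_head
       FROM   v$sqlarea
       ORDER  BY version_count DESC)
WHERE  ROWNUM <= 20;
```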

Basically the guess is that something fires in a very large number of similar SQL statements with different literals (that's the check for hard parses), or the same SQL statement with bind variables hits a bug which causes large numbers of re-optimizations (which also increase hard parses), leaving the statement with lots of versions and a large total memory requirement.
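[Ed.: the first case - many statements identical except for their literals - can be spotted by grouping on force_matching_signature. A hedged sketch, assuming a release where v$sql exposes that column (10g onwards); the threshold of 100 is arbitrary:]

```sql
-- Statements that would share a cursor if literals were replaced
-- by binds collapse to one force_matching_signature; a large count
-- per signature indicates literal-SQL hard-parse waves.
SELECT force_matching_signature,
       COUNT(*)    AS stmt_count,
       MIN(sql_id) AS sample_sql_id
FROM   v$sql
WHERE  force_matching_signature <> 0
GROUP  BY force_matching_signature
HAVING COUNT(*) > 100
ORDER  BY stmt_count DESC;
```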

If you're lucky it will be the literal SQL problem, which is often very easy to fix; if you're unlucky it will be some obscure Oracle bug causing the second problem. Either way you might find some early clues in a quick examination of the AWR reports.

Jonathan Lewis

From: <> on behalf of Ingrid Voigt <>
Sent: 26 March 2018 11:43
Subject: How to trace memory resize ops

Oracle EE on Windows 64 Bit

We've been having problems with memory resize operations shrinking the buffer cache below specified limits:

Specified: 16GB db_cache_size, 3GB shared_pool_size, 20GB sga_max_size.
Changes to: approx. 3.5GB db_cache_size, 16GB shared_pool_size.

This occurs about 2-3 times per month, usually after hours, severely decreases database performance and ends with a bounce of the database.

According to MOS 1269139.1 the resizes are expected behavior.

Is there some way to trace / find out what exactly causes this?
(SQL, background process, other details of the root cause)
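[Ed.: the resize operations themselves are recorded by the instance in v$sga_resize_ops, which at least shows which components grew and shrank, when, and in which mode. A sketch, assuming the standard columns of that view:]

```sql
-- Recent automatic SGA resize operations, newest first, sizes in MB.
SELECT component,
       oper_type,                         -- GROW / SHRINK
       oper_mode,                         -- e.g. IMMEDIATE / DEFERRED
       initial_size / 1024 / 1024 AS init_mb,
       final_size   / 1024 / 1024 AS final_mb,
       start_time,
       status
FROM   v$sga_resize_ops
ORDER  BY start_time DESC;
```

This does not identify the triggering SQL, but correlating the start_time of the first shrink with ASH or AWR data narrows down the candidates.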

My customer is not willing to set _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=false and risk ORA-04031. If that changes, could one see something with a trace of ORA-04031?

Thanks and best regards
Ingrid Voigt


Received on Mon Mar 26 2018 - 15:37:46 CEST
