Re: SGA_TARGET substantially less than SGA_MAX_SIZE ?

From: Hans Forbrich <fuzzy.graybeard_at_gmail.com>
Date: Mon, 11 Aug 2014 20:51:56 -0600
Message-ID: <53E9814C.70903_at_gmail.com>



Haven't looked at this since 10gR1, but if memory serves [sic]:

As usual, it depends.

If the server is supposed to handle several instances, and there are significantly different workloads *inside each instance* over a long time that must be accommodated without 'down time', but which allow the bulk of the memory to be shifted between instances for short durations, then why not?

So if you had a machine with 64GB and 4 instances, where month-end processing on each one runs best at 48GB but normal transaction rates are best served at 12GB each, then the proposal should be valid. For each instance's month-end, you could jack its target up to 48GB and dial the others back to 4GB each for the couple of hours, then switch back.
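A minimal sketch of that month-end shuffle, assuming instance names and sizes from the example above (hypothetical; the exact commands depend on version and platform). SGA_MAX_SIZE is fixed at startup, while SGA_TARGET can be resized online:

```sql
-- Hypothetical month-end shuffle on a 64GB box with four instances.
-- Each instance starts with SGA_MAX_SIZE=48G (static) and SGA_TARGET=12G.

-- On the instance running its month-end batch:
ALTER SYSTEM SET sga_target = 48G SCOPE = MEMORY;

-- On each of the other three instances, for the duration:
ALTER SYSTEM SET sga_target = 4G SCOPE = MEMORY;

-- Afterwards, return every instance to its normal 12G:
ALTER SYSTEM SET sga_target = 12G SCOPE = MEMORY;
```

Whether the shrunken instances actually hand memory back to the OS depends on the platform's shared-memory mechanism (e.g. DISM on Solaris), which is exactly the caveat that makes this scheme suspect on some systems.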

(Of course, that much variation in SGA demand is suspect; gut feeling says the swing would more likely be in PGA, but that's a different analysis.)

This is all assuming that your OS actually supports DISM and 'returning unused memory from SGA to OS' - which is not a given.

The con is that the fixed SGA component would be relatively huge and effectively wasted space, since it contains the pointers to the *potential* SGA chunks.

In most environments I would expect such a variance to raise eyebrows - but that does not necessarily mean it is wrong. Just something to seriously justify.

/Hans

On 11/08/2014 5:05 PM, kyle Hailey wrote:
>
> Is there any reason to run SGA_TARGET substantially less than
> SGA_MAX_SIZE?
>
> For example is there any reason to run MAX=48GB, TARGET=12GB instead
> of just running MAX at say 10% over TARGET to give some wiggle room?
>
> Another way to put it: what are the pros and cons of running
> SGA_TARGET substantially less than SGA_MAX_SIZE ?
>
> Thanks
> Kyle
>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Aug 12 2014 - 04:51:56 CEST