Re: Memory Sizing Advice

From: <>
Date: Fri, 9 May 2008 05:49:23 -0700 (PDT)
Message-ID: <>

Comments embedded.
On May 8, 11:35 pm, Pat <> wrote:
> On May 8, 9:00 pm, "Ana C. Dent" <> wrote:
> > Pat <> wrote in news:12a7f1d9-6dce-4ba5-9d41-
> >
> > When your only tool is a hammer, all problems are viewed as nails.
> > > The classic solution to this is:
> > > add more memory
> > What you are attempting to do is convert Physical I/O to Logical I/O.
> > A smarter solution is to add an index to reduce I/O by orders of magnitude.

Possibly. It's also possible that this is a highly transactional configuration, performing large volumes of data changes. An index won't help there.

> The problem here isn't excessive table scans or an absence of indexes.
> The working set of indexes simply don't fit in cache all that well.
> I've got multiple indexes > 1 G in size and a half dozen or so >
> 500M.
> So, while I appreciate the tutorial on the importance of indexes as a
> component of an efficient data retrieval strategy, I find it a bit odd
> that you're acting as though cache memory isn't an analogous
> component.

It isn't, really. If you're running a system that modifies large volumes of data, the cached data blocks may be invalidated by the insert/update/delete activity and must be refreshed to ensure reliable, accurate result sets are returned. Increasing the cache size won't help when blocks are continually marked as modified, and thus refreshed, by transactional activity.

> This is the database back end for an enterprise application, it's not
> a data warehouse application. It tends to aggressively chew over the
> same working set (the aforementioned 10-12G of memory) querying it in
> all sorts of unpredictable, end-user defined, ways.

No, it 'chews over' the application data which, in turn, 'churns' the existing cache because the data blocks have been modified since the last query used them. No amount of memory will stop that behaviour.

> If I knew a set of
> additional indexes I could add that would reduce my working set, I'd
> have already added them.

I don't believe the size of your 'working set' is the 'problem'; the issue is querying constantly changing data, which is brought into the cache again and again because the blocks, essentially, undergo continuous change during the business day.

> At this point, the only solution I can see
> here is to bump up the SGA so that my (existing) index and data blocks
> fit in memory.

I can't believe that will do much good, except to improve the bottom line of your memory vendor. You should install the PLUSTRACE role, then use autotrace on some of these queries during the day to report useful query statistics, such as redo generated. Yes, a query can generate redo, and undo, due to delayed block cleanout, and that phenomenon will increase your physical reads.

Increasing the SGA, and the resulting buffer cache, can help some, but Oracle restricts block cleanout for a transaction of any size to 10% of the total buffer cache blocks; any transaction modifying a block count in excess of that 10% threshold relegates the cleanout of the remaining affected blocks to the next operation which touches them, even if that operation is a (relatively) simple select statement. To eliminate this entirely you would need to allocate a buffer cache equal to 10 times the size of the largest transaction you could possibly execute in your database, and I doubt you have the budget, or a machine, that can provide that much memory. Throwing memory at this 'problem' is, in my opinion, not the solution.
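As a minimal sketch of the autotrace approach described above (the table name and predicate are placeholders; the PLUSTRACE role is created by a DBA from the standard script shipped with SQL*Plus):

-- Run once as SYS to create the PLUSTRACE role, then grant it:
-- @?/sqlplus/admin/plustrce.sql
-- GRANT plustrace TO app_user;

-- Then, as the application user, during the business day:
SET AUTOTRACE ON STATISTICS

-- "your_table" is a placeholder for one of the heavily queried tables
SELECT COUNT(*)
  FROM your_table
 WHERE last_updated > SYSDATE - 1/24;

SET AUTOTRACE OFF

A non-zero "redo size" in the statistics output for a plain SELECT is the signature of delayed block cleanout: the query is cleaning out, and re-writing, blocks left behind by an earlier large transaction.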

You need to find the source of this buffer cache block churning, and I expect it's due to high transactional activity. Yes, you can install the maximum amount of RAM your machine can support, and you can allocate 80% of that to your database, but that will only do so much to keep data in memory. Once that data changes, the cached values are no longer valid and require a refresh, which involves physical I/O. Unless you stop all transactional activity you can't guarantee that the data you loaded into cache at 9:15 this morning will still be there at 9:37 that same morning.
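One hedged way to locate the source of that churn is to rank segments by block-change activity; this sketch assumes access to V$SEGMENT_STATISTICS (available in 9i and later) and uses the classic ROWNUM-on-an-inline-view idiom for a top-N query:

-- Which objects are churning the buffer cache the hardest?
SELECT *
  FROM (SELECT owner, object_name, object_type,
               value AS block_changes
          FROM v$segment_statistics
         WHERE statistic_name = 'db block changes'
         ORDER BY value DESC)
 WHERE ROWNUM <= 10;

Segments at the top of this list are the ones whose blocks are being modified, and therefore invalidated in cache, most often.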

In this case bigger isn't always better.

David Fitzjarrell

Received on Fri May 09 2008 - 07:49:23 CDT
