Re: Memory Sizing Advice
Date: Fri, 09 May 2008 04:48:22 GMT
Pat <pat.casey_at_service-now.com> wrote in news:e71181dd-9753-4709-a063-ef2fc5254d26_at_a70g2000hsh.googlegroups.com:
> On May 8, 9:00 pm, "Ana C. Dent" <anaced..._at_hotmail.com> wrote:
>> Pat <pat.ca..._at_service-now.com> wrote in
>> news:12a7f1d9-6dce-4ba5-9d41-73c18ab0d..._at_y21g2000hsf.googlegroups.com:
>>
>> When your only tool is a hammer, all problems are viewed as nails.
>>
>> > The classic solution to this is:
>> > add more memory
>>
>> What you are attempting to do is convert Physical I/O to Logical I/O.
>>
>> A smarter solution is to add an index to reduce I/O by orders of
>> magnitude.
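[Editor's note: the "orders of magnitude" claim in the quoted reply is easy to see with back-of-the-envelope block counts. The figures below (rows per block, B-tree fan-out, table size) are illustrative assumptions, not numbers from this thread.]

```python
import math

# Rough block-I/O comparison: full table scan vs. single-row index lookup.
# All figures are illustrative assumptions, not measurements from the thread.

def full_scan_blocks(table_rows, rows_per_block):
    """Blocks read by a full table scan."""
    return math.ceil(table_rows / rows_per_block)

def index_lookup_blocks(table_rows, branch_fanout):
    """Blocks read for one indexed lookup: the B-tree root-to-leaf path
    plus one table block for the row itself."""
    height = max(1, math.ceil(math.log(table_rows, branch_fanout)))
    return height + 1

rows = 10_000_000                       # assumed 10M-row table
print(full_scan_blocks(rows, 100))      # -> 100000 blocks touched
print(index_lookup_blocks(rows, 200))   # -> 5 blocks touched
```

Hence "orders of magnitude": roughly a hundred thousand block visits for the scan versus a handful for the indexed lookup.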
> The problem here isn't excessive table scans or an absence of indexes.
> The working set of indexes simply doesn't fit in cache all that well.
> I've got multiple indexes > 1 G in size and a half dozen or so
> So, while I appreciate the tutorial on the importance of indexes as a
> component of an efficient data retrieval strategy, I find it a bit odd
> that you're acting as though cache memory isn't an analogous component.
> This is the database back end for an enterprise application, it's not
> a data warehouse application. It tends to aggressively chew over the
> same working set (the aforementioned 10-12G of memory) querying it in
> all sorts of unpredictable, end-user-defined ways. If I knew a set of
> additional indexes I could add that would reduce my working set, I'd
> have already added them. At this point, the only solution I can see
> here is to bump up the SGA so that my (existing) index and data blocks
> fit in memory.
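[Editor's note: Pat's position reduces to simple arithmetic. A minimal sketch follows; only the 10-12 GB working-set figure comes from the post, while the ~20% headroom factor is an illustrative assumption.]

```python
# Back-of-the-envelope buffer cache target for the working set described
# above. Only the 10-12 GB working-set figure comes from the post; the
# overhead factor is an illustrative assumption.

def suggested_cache_gb(working_set_gb, overhead=1.2):
    """Rough db_cache_size target that keeps the hot set resident."""
    return working_set_gb * overhead

for ws in (10, 12):
    print(f"{ws} GB working set -> ~{suggested_cache_gb(ws):.1f} GB cache")
```

If the existing SGA is well below that figure, no amount of additional indexing will stop the hot blocks from being aged out and re-read from disk.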
You have all the answers.
BCHR is a measure of performance.
DB size relates to performance.
More memory results in better response time.
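[Editor's note: for readers following along, the BCHR being lampooned here is conventionally derived from v$sysstat counters as 1 - physical_reads / (db_block_gets + consistent_gets). A minimal sketch of why the ratio alone proves little; the counter values are invented for illustration.]

```python
def bchr(physical_reads, db_block_gets, consistent_gets):
    """Classic buffer cache hit ratio: the share of block gets served
    from memory rather than disk."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# A plan regression that burns extra logical I/O on the same data can
# *raise* the ratio while response time gets worse -- a high BCHR on a
# slow system. The ratio measures a symptom, not performance.
print(bchr(physical_reads=1_000, db_block_gets=9_000, consistent_gets=91_000))  # -> 0.99
```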
Have A Nice Day!

Received on Thu May 08 2008 - 23:48:22 CDT