Re: Memory Sizing Advice
Date: Fri, 9 May 2008 06:54:44 -0700 (PDT)
On May 9, 8:05 am, "Arne Ortlinghaus" <Arne.Ortlingh..._at_acs.it> wrote:
> We have a multi-purpose database of about 500 GB on a SAN storage system
> with about 100 concurrent users. Although we use many indexes, it
> helped very much to have 20 GB of RAM for the database. The users can
> tell directly from the response time of the standard input windows
> whether data is being loaded from disk or is already in main memory: if
> it is in memory, the problematic queries take 0.5 to 3 seconds; if it
> must be loaded, they can take more than 60 seconds when other users are
> also requiring data from disk. Unfortunately, the 64-bit Windows
> operating system does not always seem to make the best use of the
> additional memory: we see many page faults in the processes. But
> nevertheless I would say: after having a multiprocessor CPU, the most
> important part is the quantity of main memory.
> Arne Ortlinghaus
> ACS Data Systems
> "Pat" <pat.ca..._at_service-now.com> schrieb im Newsbeitragnews:e71181dd-9753-4709-a063-ef2fc5254d26_at_a70g2000hsh.googlegroups.com...
> > On May 8, 9:00 pm, "Ana C. Dent" <anaced..._at_hotmail.com> wrote:
> >> Pat <pat.ca..._at_service-now.com> wrote in news:12a7f1d9-6dce-4ba5-9d41-
> >> 73c18ab0d..._at_y21g2000hsf.googlegroups.com:
> >> When your only tool is a hammer, all problems are viewed as nails.
> >> > The classic solution to this is:
> >> > add more memory
> >> What you are attempting to do is convert Physical I/O to Logical I/O.
> >> A smarter solution is to add an index to reduce I/O by orders of
> >> magnitude.
> > The problem here isn't excessive table scans or an absence of indexes.
> > The working set of indexes simply don't fit in cache all that well.
> > I've got multiple indexes > 1 G in size and a half dozen or so >
> > 500M.
> > So, while I appreciate the tutorial on the importance of indexes as a
> > component of an efficient data retrieval strategy, I find it a bit odd
> > that you're acting as though cache memory isn't an analogous
> > component.
> > This is the database back end for an enterprise application, it's not
> > a data warehouse application. It tends to aggressively chew over the
> > same working set (the aforementioned 10-12G of memory) querying it in
> > all sorts of unpredictable, end-user defined, ways. If I knew a set of
> > additional indexes I could add that would reduce my working set, I'd
> > have already added them. At this point, the only solution I can see
> > here is to bump up the SGA so that my (existing) index and data blocks
> > fit in memory.
As said in another post, the most important part is NOT memory or the number of CPUs available; it's the I/O architecture. Granted, Windows is not the ideal 'operating system' (and I do use that term loosely) when it comes to memory 'management', but that isn't your main problem; the issue is the bottleneck created by the limited capacity of your I/O subsystem and its associated architecture. What I said earlier still applies: bulking up the cache in hopes you'll get more hits when your data is constantly changing is akin to 'tilting at windmills'. You're not addressing the underlying problem; you're attempting to mask the symptoms, which rarely, if ever, works.
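The arithmetic behind both positions is easy to sketch. Here is a toy LRU cache simulation in Python (Oracle's actual buffer-cache replacement is more sophisticated, and the block counts are made-up, scaled-down numbers): under a uniform random access pattern, the hit rate tracks cache size as a fraction of the working set, so unless the cache approaches the full working set, a large share of reads still hits the disks and I/O capacity remains the bound.

```python
# Toy LRU buffer-cache simulation (NOT Oracle's real algorithm):
# shows how the cache hit rate scales with cache size relative to
# the working set under uniform random access.
import random
from collections import OrderedDict

def hit_rate(working_set_blocks, cache_blocks, accesses=100_000, seed=42):
    """Return the fraction of logical reads served from cache
    (i.e. reads that incur no physical I/O)."""
    rng = random.Random(seed)
    cache = OrderedDict()  # block id -> None, ordered by recency of use
    hits = 0
    for _ in range(accesses):
        block = rng.randrange(working_set_blocks)
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = None
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / accesses

# Hypothetical numbers: a working set of 1500 blocks with caches of
# 25%, 50%, 75%, and 100% of that size. With uniform random access the
# hit rate sits near cache/working-set, so a cache half the working set
# still sends roughly half the reads to disk.
for frac in (0.25, 0.5, 0.75, 1.0):
    ws = 1500
    print(f"cache = {frac:.0%} of working set -> "
          f"hit rate {hit_rate(ws, int(ws * frac)):.2f}")
```

The point cuts both ways: growing the cache does raise the hit rate, but until it covers the working set, the residual misses all land on the same I/O subsystem, which is why its capacity and layout still govern the worst-case response times.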
Of course, when it's all said and done it's your (meaning the company's) money and time and effort; spend it or waste it as you see fit. And, personally, I think goosing your RAM allotment to expand your buffer cache is wasting money which will provide no real return.
David Fitzjarrell Received on Fri May 09 2008 - 08:54:44 CDT