
Re: Responsiveness of Server at high CPU load

From: Rick Denoire <100.17706_at_germanynet.de>
Date: Thu, 18 Dec 2003 22:43:52 +0100
Message-ID: <kb64uv0c61r21v31sn366jalq21d22rhlk@4ax.com>


Sybrand Bakker <gooiditweg_at_sybrandb.nospam.demon.nl> wrote:

>The Oracle guideline is that the SGA shouldn't consume more than about
>one third of physical RAM. In your case it consumes almost all of it.
>Your system is simply starving, and is faulting and swapping like
>hell.

Excuse me, but defining the suitable amount of memory as a percentage of total memory is meaningless. One third of 512 MB does not compare to one third of 4 GB. As long as enough memory is left for the OS and other processes, nothing is starving. The remaining 512 MB on my box is more than enough for the rest.
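To make the point concrete, here is a minimal back-of-the-envelope sketch in Python. The one-third rule and my box's 4 GB / 512 MB figures come from this thread; the helper function itself is just illustrative:

    # What matters is the absolute headroom left for the OS and other
    # processes, not the SGA's percentage of total RAM.

    def headroom_mb(total_mb, sga_mb):
        """Memory left for the OS and non-Oracle processes, in MB."""
        return total_mb - sga_mb

    # Same "one third" percentage, very different absolute leftovers:
    print(f"{headroom_mb(512, 512 / 3):.0f} MB free on a 512 MB box")   # ~341 MB
    print(f"{headroom_mb(4096, 4096 / 3):.0f} MB free on a 4 GB box")   # ~2731 MB

    # And the flip side: an SGA near 3.5 GB on a 4 GB box still leaves
    # about 512 MB, which I argue is plenty for the OS and the rest.
    print(f"{headroom_mb(4096, 3584):.0f} MB free with a 3.5 GB SGA")   # 512 MB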

>Apart from not being familiar with the guideline (which has been
>posted many, many times here), what on earth made you think the
>configuration you have is sensible? In your situation there is a
>severe lack of memory and I guess even the O/S is usually paged out!

We must be reading different sources of information. I have seen the 80% recommendation so often that I can't even tell where I actually read it. In any case, your interpretation does not fit the facts. The databases can work perfectly for months, and users are even impressed by how quickly work goes. Then at times something goes plainly wrong, and that is a different situation entirely, not just a little bit slower. It is black and white.

>Also, if you already need 3.5 G for 4 test instances, where you and
>your 'developers' routinely must have solved performance problems by
>cranking up init.ora parameters, how do you think this is EVER going
>to perform in production (i.e. NOT throwing everything you have right
>out of the window and starting all over again)?

I tend to support your opinion here, at least on technical grounds. I deliberately oversized things because experience has forced me to be very defensive. You see, from time to time heavy trouble happens and things go completely wrong: work can't be done and the bosses come around talking about a crisis. In most cases this is due to very bad developers' work, like trying to fill a table with many millions of rows in one shot. The TEMP tablespace alone is set to... guess... 24 GB in total size, almost as much as the total amount of data. Why? Because I have seen developers fill up that space. Don't ask me how. By setting parameters and sizes to ridiculously large values, I make sure that I can't be blamed. Otherwise, the admin is the culprit (and I am the OS admin too).

In this particular case, you might be surprised to learn that the buffer cache of the instance involved in the problem is large enough to hold 90% of ALL data in the database! There is practically no real physical disk access. When full table scans (FTS) are done, their blocks don't live long enough in the buffer cache, so they are fetched from... the disk cache in the RAID array (the EMC RAID has a built-in monitoring utility that shows this).
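For the skeptical, here is a rough sketch of the arithmetic behind that claim. The block size, buffer count, and data volume below are stand-in numbers for illustration, not my real settings; only the roughly 90% conclusion is mine:

    # Rough check: what fraction of the database fits in the buffer cache?
    # NOTE: the figures below are illustrative stand-ins, not real settings.

    db_block_size = 8192          # bytes per block (assumed 8 KB blocks)
    db_block_buffers = 400_000    # number of block buffers (hypothetical)
    total_data_gb = 3.4           # total data volume in GB (hypothetical)

    cache_gb = db_block_size * db_block_buffers / 1024**3
    print(f"buffer cache: {cache_gb:.2f} GB")                      # ~3.05 GB
    print(f"fraction of data cached: {cache_gb / total_data_gb:.0%}")  # ~90%

    # With numbers in this range the cache holds ~90% of all data, so
    # ordinary reads rarely touch the disks; only FTS blocks, which age
    # out of the cache quickly, fall through to the RAID's own cache.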

In short, you are wrong.

Bye
Rick Denoire
