Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Performance problem Linux vs. Solaris

From: Mladen Gogala <mgogala_at_adelphia.net>
Date: Mon, 27 May 2002 17:58:56 -0400
Message-ID: <acua71$sai97$1@ID-82084.news.dfncis.de>


On Mon, 27 May 2002 03:29:33 -0400, MH wrote:

> Hi,
>
> I have a problem with Oracle 8.1.7. I installed it on a two-processor
> Intel Linux machine. The database I use is not optimized at all, meaning
> I did not spread any datafiles across disks and I have only one tablespace.
>
> The application working on that database produces around 2500
> transactions per minute on that machine. When I install the same database
> on a Sun Fire V280, the transaction rate goes down to 400 transactions per
> minute. The Sun Fire also has two processors, and a lot more real memory
> (4 gigabytes). When I look at the cache hit ratios, there's no hint of
> contention; all ratios are above 90%. In addition, no wait events
> appear in the system.
>
> I think that there is perhaps a problem with Solaris 2.7 and Oracle that
> I don't know about. Does anybody know of a certain parameter to adjust in
> Oracle or Solaris, or can anybody give me an idea of what I can examine
> to solve the problem?

Well, you are faced with a tuning problem. You have no paging, no excessive I/O rates, and no system contention in any shape or form. That means the culprit lies within the database realm. You say that there are no significant wait events and that the hit ratios are high. Cary Millsap (formerly a demigod at Oracle Corp., now of Hotsos Inc.) has recently published a paper (see http://www.hotsos.com ) named "Why the high hit ratio is NOT OK", in which he stresses that memory access is far from free: while a gazillion db block gets satisfied from the SGA will certainly complete faster than a gazillion physical reads, the way to make a really significant improvement is to reduce the number of db block gets in the first place.

So what does that leave us with? You have to tune your SQL. First, monitor the CPU and see how much of it is being used. If CPU usage is high, go to v$sqlarea and look for the most CPU-intensive SQL statements. If CPU consumption is low, look at physical reads and rows processed instead. Once you locate the most expensive SQL statements (usually, the top 5 will account for over 90% of the time spent), tune them to do their work more efficiently. That's a classical tuning problem, the kind that helped me buy my car back when I was a consultant.
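As a rough sketch, the v$sqlarea lookup described above might look like the following. Note that on 8.1.7, V$SQLAREA has no CPU_TIME column (that arrived in a later release), so BUFFER_GETS (logical reads) serves as an approximate proxy for CPU cost; the choice of ordering column and the ROWNUM cutoff are illustrative, not the only way to do it.

```sql
-- Sketch only: top 5 shared-pool statements by logical reads,
-- a rough proxy for CPU cost on 8i (no CPU_TIME column there).
-- Order by DISK_READS instead when CPU usage is low and I/O is suspect.
SELECT sql_text, executions, buffer_gets, disk_reads, rows_processed
  FROM (SELECT sql_text, executions, buffer_gets, disk_reads, rows_processed
          FROM v$sqlarea
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 5;
```

You need SELECT access to the V$ views for this (e.g. via SELECT_CATALOG_ROLE), and remember that v$sqlarea reflects only what is currently cached in the shared pool, so run it while the workload is active.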

-- 
Mladen Gogala
Za Feral Spremni!
Received on Mon May 27 2002 - 16:58:56 CDT

