 


Re: Profiling Oracle Applications

From: Bass Chorng <bchorng_at_yahoo.com>
Date: 29 Nov 2004 10:01:30 -0800
Message-ID: <bd9a9a76.0411291001.5f31f2df@posting.google.com>


bobde6_at_hotmail.com (Bob) wrote in message news:<9cae39d7.0411260416.1189a842_at_posting.google.com>...
> Hi
>
> I have been set the task of consolidating a number of applications
> onto one server, all of which use an Oracle database. I need to profile the
> applications so that the appropriate resources (CPU/memory) can be
> assigned and re-assigned when appropriate to suit all the
> applications' needs (e.g. OLTP / DSS).
>
> My current plan is to first establish an understanding of what each
> system does, its usage, and its number of users. Then, secondly, take
> hourly Statspack snapshots for a full week to begin to develop an
> understanding of the load on each system (looking at calls /
> statements per second).
>
> What I am asking is: is there any advice one can give (previous
> experience), or different techniques to use to perform this task?
>
> Thanks
>
> Bob

I would keep Statspack running all the time - initially at a 5-minute interval for a week to set your baseline, then drop down to 15 minutes. You should do the same with OS statistics; a simple vmstat can cover most of your needs there.
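If you want to automate the snapshots, something along these lines works. This is only a sketch, assuming the PERFSTAT schema is already installed; the spauto.sql script that ships with Statspack does essentially the same thing at an hourly interval.

    -- Run as PERFSTAT. Schedules statspack.snap every 15 minutes via DBMS_JOB.
    -- 15/1440 is 15 minutes expressed as a fraction of a day.
    variable jobno number;
    begin
      dbms_job.submit( :jobno,
                       'statspack.snap;',
                       sysdate,
                       'sysdate + 15/1440' );
      commit;
    end;
    /

On the OS side, a plain "vmstat 60" redirected to a file on each host is usually enough to start with.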

For workload analysis, I have found the most useful metrics within Oracle to be buffer gets and commit rate ( statistic# 4 in v$sysstat ). The former helps you analyze queries, the latter writes. Both can be translated into rough CPU utilization.
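To pull the raw counters, something like this is enough (just a sketch; 'session logical reads' and 'user commits' are the usual statistic names on 8i/9i, but check v$statname on your release). Sample it twice, some seconds apart, and difference the values to get per-second rates:

    -- Cumulative counters since instance startup; diff two samples for rates.
    select name, value
      from v$sysstat
     where name in ( 'session logical reads',   -- buffer gets
                     'db block gets',
                     'consistent gets',
                     'user commits' );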

Each buffer get roughly equates to 50-75 microseconds of CPU time, but that is just user time; I usually add another 25% for system time. So for a particular SQL statement ( or workload ), you can find the peak-hour number of buffer gets per second, multiply by 75 microseconds, and add 25%. That is the total number of CPU seconds needed for that workload in one wall-clock second.

Then divide that by the number of CPUs ( the total available CPU seconds per second ) and you get the percentage of CPU utilization attributable to that workload.
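As a worked example with made-up numbers: say a workload peaks at 40,000 buffer gets per second on an 8-CPU box.

    -- Hypothetical figures, just to show the arithmetic.
    select round( 40000        -- peak buffer gets per second
                  * 75e-6      -- ~75 microseconds of user CPU per get
                  * 1.25       -- plus 25% for system time
                  / 8          -- 8 CPUs available
                  * 100, 1 ) as pct_cpu
      from dual;
    -- => roughly 46.9% of the machine for that workload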

For writes it is a bit more complicated, because the biggest CPU consumer is not the session itself but the log writer. Log writer CPU consumption depends on redo payload and commit rate, so you will probably have to test to determine your own numbers. On my system, a typical redo size of a few hundred bytes per commit uses about 20% of an 8-CPU (900 MHz) box at about 450 TPS. That is with Quick I/O; if you go through the file system buffer cache it will be more expensive.

So for writes, I usually look at executions/sec, commits/sec and redo size per commit to size a workload.
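The same two-sample approach covers the write side (again just a sketch; 'redo size', 'user commits' and 'execute count' are standard v$sysstat names, but verify on your version):

    -- Diff two samples for commits/sec, executions/sec and redo bytes/sec;
    -- redo size per commit ~= delta('redo size') / delta('user commits').
    select name, value
      from v$sysstat
     where name in ( 'user commits', 'execute count', 'redo size' );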

