
Re: Client performance.

From: Bob Fazio <rfazio_at_home.com.nospam>
Date: 2000/06/22
Message-ID: <PUg45.14496$A%3.167470@news1.rdc2.pa.home.com>

First of all, I have never had good response using ODBC with Oracle; the default settings are really poor. Use OO4O (Oracle Objects for OLE) instead. With no code changes at all, I can expect a 20-400% improvement over ODBC.

Oracle has close to 200 parameters that can be adjusted to control how much of the system is used by the database, and for what purpose. The good side of this is that you have the ability to decide exactly where to give memory and resources to enhance how the database performs. The problem with these 200 parameters is that you can just as easily kill performance. And for some unknown reason, Oracle still insists on shipping the database with a default configuration that performs about as poorly as it possibly can.

What you will find is that even if you go to the effort of tuning Oracle as tightly as you can, you will most likely still see better results from SQL/Server in a development environment where you have only a few users. On the other hand, in your production environment, where you may have 100+ users accessing the data, Oracle will still be returning results in approximately the same amount of time, whereas SQL/Server will most likely have brought the system to its knees long ago.
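
As an aside on those parameters: if you want to see which ones have been explicitly changed from the shipped defaults on a given instance, a quick query against the standard v$parameter view (a rough sketch; run it from an account that can see the v$ views) will list them:

    -- parameters that have been explicitly set, i.e. not left at the default
    SELECT name, value
      FROM v$parameter
     WHERE isdefault = 'FALSE'
     ORDER BY name;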

The reason for this is that Oracle implements a very complex series of locks and latches to control access to data. SQL/Server uses a less complex mechanism, which is faster in a low-connection-count scenario but falls apart in larger ones.

My recommendation would be that if you will only ever have one user with one application, then stick with SQL/Server.

If you wish to try to tune Oracle, start with a bstat/estat report and evaluate the results.
http://24.8.218.197/tools/tuning/bstat_estat.xls
You want to examine the system under a steady, repeatable load for at least 1/2 hour, after the system has already been running with that same load for at least 1/2 hour beforehand (a sketch of running the scripts follows the steps below):

Start a 1 hour test load.
After 30 min., run utlbstat.
After another 30 min., run utlestat.
Stop the 1 hour test.
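
In case it helps, here is roughly what the utlbstat/utlestat part looks like from SQL*Plus (or Server Manager on older releases). The DBA login is a placeholder; the scripts live under $ORACLE_HOME/rdbms/admin, and utlestat writes its output to report.txt in the current directory:

    $ sqlplus internal
    SQL> @?/rdbms/admin/utlbstat     -- take the "begin" statistics snapshot
    -- ... leave the test load running for the 30 minute window ...
    SQL> @?/rdbms/admin/utlestat     -- take the "end" snapshot and write report.txt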

This should give you a pretty good idea of how the system is performing. Key things: if you are not using Java, limit the memory you are giving to it. Use raw devices if you can. Avoid RAID 5 on the database if you can. Make sure that you supply adequate memory to db_block_buffers; a quick sanity check is sketched below. Don't set your block size too high (2-4K is usually best, 8K if you do a lot of DSS-type operations).
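
For the db_block_buffers point, the old rule-of-thumb check is the buffer cache hit ratio out of v$sysstat. It is only a rough indicator, but a very low ratio under your normal load usually means the buffer cache is starved:

    -- rough buffer cache hit ratio; closer to 1 means most reads are
    -- satisfied from memory rather than from disk
    SELECT 1 - (phy.value / (cur.value + con.value)) AS cache_hit_ratio
      FROM v$sysstat phy, v$sysstat cur, v$sysstat con
     WHERE phy.name = 'physical reads'
       AND cur.name = 'db block gets'
       AND con.name = 'consistent gets';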

--
Robert Fazio, Oracle DBA
rfazio_at_home.com
remove nospam from reply address
http://24.8.218.197/
"Marcus Hampel" <marcus.hampel_at_myview.de> wrote in message
news:3950A64B.9E5E0672_at_myview.de...

>
>
> Luggy wrote:
> >
> > I totally agree with Michael. The first stage in investigating poor
> > application performance should always be to see what it is doing. It is
> > not sufficient to say that SQL Server and Oracle are running comparable
> > statements, as Oracle's optimizer will be completely different.
>
> Ok, read my first article: the client application causes the CPU load,
> but only when it is working with the Oracle server. With SQL-Server as the
> server, the client produces only a small part of the total CPU load.
>
> What does the client do? Does the client do the query optimization for
> Oracle? I believe not, but I don't know.
>
> > You may well find that the addition of an index or two will vastly
> > improve the Oracle performance. Try SQL tracing the client session
> > (alter session set sql_trace = true) and then TKPROF the resulting
> > trace file.
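
For anyone following along, the basic sequence for that looks roughly like this; the trace file name and the explain login are placeholders, and the raw trace file ends up in the directory pointed to by user_dump_dest:

    -- in the client session, before running the slow part of the application
    ALTER SESSION SET sql_trace = TRUE;

    -- ... run the workload ...

    ALTER SESSION SET sql_trace = FALSE;

    -- then format the raw trace file from the OS prompt, e.g.:
    --   tkprof ora_12345.trc tkprof_report.txt explain=scott/tiger sort=fchela
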
> Ok, for the unbelievers, the result of tkprof:
>
>
> ********************************************************************************
> OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
>
> call     count       cpu    elapsed       disk      query    current       rows
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> Parse      155      0.23       0.31          0          0          0          0
> Execute    155      0.03       0.07          0          0          0          0
> Fetch      675      0.23       0.23          0       5117          0        968
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> total      985      0.49       0.61          0       5117          0        968
>
> Misses in library cache during parse: 0
>
>
> OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
>
> call     count       cpu    elapsed       disk      query    current       rows
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> Parse        2      0.02       0.02          0          0          0          0
> Execute      2      0.00       0.00          0          0          0          0
> Fetch       17      0.00       0.00          0         53          0         16
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> total       21      0.02       0.02          0         53          0         16
>
> Misses in library cache during parse: 0
>
> 155 user SQL statements in session.
>   2 internal SQL statements in session.
> 157 SQL statements in session.
>
> ********************************************************************************
>
> >
> > (However, I'm a little surprised if you're getting no disk activity).
> It's always the second try, so all the data is cached. The application
> should fly. But the application takes 4 s to complete its work (with full
> CPU load!), while the Oracle server needs only 0.6 s to compute the results.
> With SQL-Server as the database, the application completes the work in 0.8 s.
>
> >
> > Dave.
>
> Thanks,
> Marcus
Received on Thu Jun 22 2000 - 00:00:00 CDT
