
Re: A kind of... benchmark?

From: Billy Verreynne <vslabs_at_onwe.co.za>
Date: 7 Aug 2003 23:20:43 -0700
Message-ID: <1a75df45.0308072220.4062d14a@posting.google.com>


Rick Denoire <100.17706_at_germanynet.de> wrote:

> I really don't understand why you guys get that philosophical.

'cause there ain't much use for this from a practical viewpoint? ;-)

> As I
> explained, my only concern was to gain OBJECTIVE, REPRODUCIBLE,
> ARTIFICIALLY consistent metrics about how a number of typical
> operations in a database perform.

Yeah, I understand. But IMO those are meaningless. They don't show performance. They show differences. Which is not a bad thing btw - provided you know what those differences mean.

What you are basically asking for is a method to define: when is a SQL statement or database slow?

The real world answer - when the user/developer/operator/manager bitches that it is too slow. When he expects to wait 5 seconds for the data to be returned and it takes a minute. Or when he runs a job and it takes 10 hours and he wants it to take maybe 3 to 4 hours.

> To put it simply, let's assume that I want to assess the effect of
> using locally managed tablespaces instead of dictionary managed. Why
> the hell would it be senseless to run such a benchmark in databases
> with corresponding tablespaces and compare the results?

No, it would not be senseless. I agree with you. I do this a lot myself. If you keep everything constant (hardware, operating system, disks etc.), then you can run the kind of db-environment-only "benchmark" you are suggesting. Simplest example - a slow SQL query. How do you tune it? One way is by trying (forcing) different execution plans and seeing & measuring what happens.
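For example (table and index names made up, obviously - timings and plans via SQL*Plus):

  SET TIMING ON
  SET AUTOTRACE TRACEONLY

  -- let the optimizer choose
  SELECT * FROM big_tab WHERE col1 BETWEEN 100 AND 200;

  -- force a full table scan
  SELECT /*+ FULL(t) */ * FROM big_tab t
  WHERE  col1 BETWEEN 100 AND 200;

  -- force the index
  SELECT /*+ INDEX(t big_tab_ix1) */ * FROM big_tab t
  WHERE  col1 BETWEEN 100 AND 200;

Same query, three plans, three sets of numbers to compare - on that data, on that box.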

However - all that these runs will show is which one is "better" under those specific circumstances. Take that exact same slow SQL query. Change the indexing. And the slow SQL could be just as fast as it is supposed to be.

This type of "benchmarking" is an integral part of performance tuning.

> Even simpler. If we purchased a new server - something totally
> different (OS, hard disks, CPU) and we were asked by the managers (who
> understand NOTHING about databases) how much faster this system is, I
> could tell them based on the benchmark. Then, they would know if
> spending the money was the right decision.

That is exactly what I have a problem with. What if the db design is crap? Many tables still in 2nd normal form? Index space usage of a single table exceeding the total data space of that same table?
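That last one you can eyeball straight from the dictionary. A rough sketch (SCOTT is a made-up schema name):

  SELECT i.table_name,
         ts.bytes        AS table_bytes,
         SUM(iseg.bytes) AS index_bytes
  FROM   dba_indexes  i,
         dba_segments iseg,
         dba_segments ts
  WHERE  i.owner           = 'SCOTT'   -- made-up schema
  AND    iseg.owner        = i.owner
  AND    iseg.segment_name = i.index_name
  AND    iseg.segment_type = 'INDEX'
  AND    ts.owner          = i.table_owner
  AND    ts.segment_name   = i.table_name
  AND    ts.segment_type   = 'TABLE'
  GROUP  BY i.table_name, ts.bytes
  HAVING SUM(iseg.bytes) > ts.bytes;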

What if the machine the db is sitting on is a 4 CPU box and only 1 CPU is effectively used? Will the purchase of a 16 CPU monster make things suddenly go faster? Or what about when it is connected to a 1TB EMC array with 2 SCSI channels and not 4 or more fibre channels?

What if (and this one I have seen more than once) the new, more powerful machine causes a severe disk i/o bottleneck on the existing storage device? (It generates more i/o calls per second, effectively overloading the disk i/o subsystem, and as a result gets data off/on the disks slower than a _slower_ machine would.)
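You can spot that from the wait interface. Something like this (just a sketch) shows whether disk i/o waits dominate:

  SELECT event, total_waits, time_waited
  FROM   v$system_event
  WHERE  event LIKE 'db file%'
  ORDER  BY time_waited DESC;

If 'db file sequential read' and 'db file scattered read' sit at the top with fat times, the disks are the problem - not the CPUs.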

Any manager/management that spends money on a machine without basing that decision on cold and hard facts is an idiot. And the reason they are idiots is often that their technical staff (and the vendor's salesmen) mislead them.. with benchmarks and crap that are meaningless.

Before spending $250,000 on a new superfast server, it could make a lot more sense to add RAM to the current machine, add two more CPUs and swap the SCSI channels for fibre.. for $100,000. Or to get three smaller servers for $300,000. (a capacity versus speed thing)

Benchmarking's primary purpose is a pissing contest. Which machine is the fastest. That is not what the business wants. They want the best machine(s) for the job. Where "best" is qualified in terms of cost, performance, maintenance, capacity, scalability, service level agreements and so on.

> The simplest case. Run the benchmark. Reorganize the database. Rerun
> the benchmark. Kill a myth or stick at it.
>
> Run the benchmark. Setup parallelism. Rerun the benchmark. Got it?

It is not that simple. Let me re-use your example. Parallelism in terms of PQ (parallel query) behaves very differently depending on HOW it is applied, and to WHAT it is applied. (e.g. a 500k row table versus a 500 million row table)
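E.g. something as simple as this (made-up table name, and the degree pulled out of thin air):

  SELECT /*+ PARALLEL(s, 8) */ region, SUM(amount)
  FROM   sales_fact s
  GROUP  BY region;

Run that against the 500k row table and the PQ startup overhead can make it _slower_ than a plain serial scan. Run it against the 500 million row table and it can fly.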

There is no single best method in Oracle to do ABC. One method may give excellent performance one month, and give very poor performance the next.. for the sole reason that the data is now different.

Take an index range scan. The criteria for the BETWEEN clause scan 10% of the index. Performance is fine. The next set of bind variables for the BETWEEN criteria causes a 90% index scan.. in which case a full table scan would have been faster. And using PQ with that, even faster.
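Sketched (made-up names again):

  VARIABLE lo NUMBER
  VARIABLE hi NUMBER

  -- ~10% of the index: the range scan is fine
  EXECUTE :lo := 100
  EXECUTE :hi := 200
  SELECT * FROM big_tab WHERE col1 BETWEEN :lo AND :hi;

  -- ~90% of the index: a (parallel) full table scan would beat it
  EXECUTE :lo := 100
  EXECUTE :hi := 9000
  SELECT * FROM big_tab WHERE col1 BETWEEN :lo AND :hi;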

Same SQL. Different criteria. Vastly different performance. How can you pigeonhole that into a generic database benchmark?

> What I asked was, do
> you know a set of SQL statements compiled specially to probe
> performance of typical database operations? Nothing more than that.

Use OEM/Statspack/v$ tables for that. I am simply saying that there is no standard set of SQLs that can be used to "benchmark" a real world database.

There are however numerous SQLs that can be used to _identify_ performance bottlenecks in a database. And there are numerous ways to address such performance problems.
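E.g. a crude first stab at the heavy hitters (the threshold is made up - adjust for your system):

  SELECT sql_text, executions, buffer_gets, disk_reads
  FROM   v$sqlarea
  WHERE  buffer_gets > 1000000   -- made-up threshold
  ORDER  BY buffer_gets DESC;

That tells you where the database is spending its time. What to do about it is the actual tuning part.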

--
Billy
Received on Fri Aug 08 2003 - 01:20:43 CDT
