Re: Oracle threatens to sue Standish over an article

From: Michael Friedman <mfriedma@us.oracle.com>
Date: Mon, 26 Jul 1993 01:12:33 GMT
Message-ID: <1993Jul26.011233.25750@oracle.us.oracle.com>


I don't speak for Oracle and this is just me speaking for myself...

In article <CAML7E.6uD@feanor.xel.com> shaw@feanor.xel.com (Greg Shaw) writes:
>Graeme Sargent (graeme@pyra.co.uk) wrote:
>: In <CAAKsE.MDJ@feanor.xel.com> shaw@feanor.xel.com (Greg Shaw) writes:
 

>: >Hal Berenson (berenson@nova.enet.dec.com) wrote:
 

>: >: In article <1993Jul3.021408.10121@qiclab.scn.rain.com>, tcox@qiclab.scn.rain.com (Thomas Cox) writes...
>: >[ bunch o' stuff deleted ]
 

>: >: So the problem with a "no benchmark special" rule is that determining
>: >: what is or isn't a benchmark special is very subjective. How would a
>: >: vendor prove that a feature wasn't a benchmark special? Would they have
>: >: to produce a customer who would testify they are using the feature?
>: >: Would one be enough? Two? Ten? Should the TPC provide the exact
>: >: program to be run, thus forcing the use of standard SQL syntax and
>: >: preventing the use of vendor-specific features? Would this really
>: >: provide a useful comparison, since customers DO use many non-standard
>: >: features, or would it be just another useless data point?
 

>: >In a word, YES.
 

>: Is that YES it's a useful comparison, or YES it's a useless data point???
>: I've now read your post three times, and I'm still not sure!
 

>YES it's a useless data point. I'm sorry for any ambiguity.

I disagree. See below...

>: > When evaluating DB systems, the TPC benchmark is used
>: >*OFTEN* to compare A vs. B. Period. 'Special' features that influence a
>: >benchmark (aka proprietary features) are of *NO* use, because you're not
>: >talking about implementing everything in a vendor-specific situation.
>: >You're talking about apples vs. apples, DB vendor vs. DB vendor.
 

>: Which means that a vendor specific feature that you would choose to use
>: (or that is automatic, for that matter) *should* be measured. Fast/
>: Group Commits were vendor specific, it's just that they're now specific
>: to just about every vendor. The chances are that *your* particular
>: database now goes faster because of that particular "benchmark special".
 

>The question being, in your first sentence 'would' vs. 'could'. When I'm
>selecting a DB vendor, I want to see what happens in the BASE system. I
>can always turn on some vendor-specific feature later for system performance
>testing. But, that will come later. In the beginning implementation
>phases, I want to know how the database performs on a dataset without
>having to specially set up the database -- e.g. put the data in and away it
>goes.

Well, I guess that's a point of view, but it is by no means a common one. People who have a serious need for speed do not just "put the data in and away it goes". They devote serious effort to database tuning and they use vendor-specific features that are designed to speed up applications. The purpose of the TPC benchmarks is to see how fast a particular vendor's product can run a particular kind of transaction processing system, assuming that you do everything possible to tune it and optimize it. Prohibiting vendor-specific features seems unreasonable, given this objective.

>If something is automatic, great. I won't see it, I won't know about it --
>I'll assume it's part of the base system, and it *WILL* influence my
>decision on which DB vendor to go with.

Why does it matter whether or not you see it and know about it?

>: >Because Oracle uses a 'feature' that is not something that would be useful
>: >in a REAL WORLD situation (aka any real situation) to 'up' their TPC
>: >benchmarks disqualifies them, in my opinion.
 

>: Just because it's not useful in a particular real world situation does
>: *NOT* mean that it is not useful in any real world situation.
 

>Again, when I am doing comparisons of DB vendors, I look at their TPC
>numbers. They're supposed to reflect, in some meaningful way, how the
>database will perform on 'normal' databases.

Actually, that's not correct. TPC numbers are supposed to reflect how the database will perform on a specific type of transaction processing. Many people have the misconception that they are general speed numbers; they are not.
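To be concrete about what that transaction type is: TPC-A is essentially the old debit-credit profile -- a few single-row updates plus a history insert, committed as one unit. Here is a minimal PL/SQL sketch of that shape; the table and column names are illustrative, not the official TPC schema definitions.

  DECLARE
    -- Illustrative inputs; the benchmark driver supplies these per transaction.
    v_acct  NUMBER := 4242;
    v_tlr   NUMBER := 17;
    v_brn   NUMBER := 3;
    v_delta NUMBER := 100;
    v_bal   NUMBER;
  BEGIN
    UPDATE account SET balance = balance + v_delta WHERE account_id = v_acct;
    SELECT balance INTO v_bal FROM account WHERE account_id = v_acct;
    UPDATE teller  SET balance = balance + v_delta WHERE teller_id  = v_tlr;
    UPDATE branch  SET balance = balance + v_delta WHERE branch_id  = v_brn;
    INSERT INTO history (account_id, teller_id, branch_id, amount, tstamp)
    VALUES (v_acct, v_tlr, v_brn, v_delta, SYSDATE);
    COMMIT;
  END;
  /

That short, update-heavy loop is the entire workload the TPC-A number describes; it says nothing about how a product behaves on queries, reports, or anything else.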

>Transactions without database locks are *NOT* part of a 'normal' database. On a
>single-user database, perhaps, but that is not what the TPC is measuring --
>you won't get 300 transactions per second from a single user. (Well, if
>you do, you type *MUCH* *MUCH* faster than I do! ;-)

For rather obvious reasons I'm not going to comment on the specifics of Oracle's dispute with Standish. That doesn't mean I agree with you.

>: > When TPC can do a real world test
>: >(hopefully varied), I expect it to accurately reflect what I can do with
>: >*MY* database.
 

>: But I want it to reflect *MY* database, not *YOUR* database ... and
>: there's the rub!
 

>No, you misunderstand my context. I'm talking specifically about 'discrete
>transactions'. More than that, I don't care. I'm saying that 'discrete
>transactions', as used by a DB vendor (specifically oracle) to measure TPC
>benchmarks is an invalid setup.

Why? It gets the right results on the TPC benchmark. I think your real objection is that you don't think discrete transactions will work with your application. Well, then your application is obviously not very much like the TPC benchmark. In that case, what makes you think that TPC results without discrete transactions are any more relevant to your needs than TPC results with discrete transactions?

>: > 'Discrete transactions' are not what I can do with my
>: >database ... and expect the database to be intact at the end of the day.
 

>: Why wouldn't you expect it to be intact at the end of the day? If it
>: can do more than 60 million TPC-A transactions in a day and remain
>: intact, why wouldn't it for you? It sounds as if you've misunderstood
>: Discrete Transaction. It passes the ACID test, so where's the beef?
 

>As I understand it (and correct me if I am wrong), discrete transactions
>require that each transaction is unique in the database. e.g. you are
>assured that there is not an update occurring on one record in the database
>by two transactions. This means that database locking (record and/or
>table) isn't necessary. This is what I contend is not an accurate test for
>a 'real world' database system. Generally, when going hundreds of
>transactions per second, that is doing updates AS WELL AS additions.

I don't understand it, so I won't comment.

>: >: The TPC does have a very strict definition of what the benchmark can and
>: >: can not do. That definition exists at a fairly high level (essentially a
>: >: detailed functional specification) so that any software system on any
>: >: hardware platform can run the benchmark. So, it is possible using TPC
>: >: benchmarks to compare a TPF2 system with the application implemented in
>: >: IBM 370 assembly language accessing a private file structure with
>: >: application implemented recovery mechanisms to a Compaq PC running SCO
>: >: UNIX and ORACLE with the application written in C calling PL/SQL
>: >: procedures. You can also use TPC benchmarks to evaluate the
>: >: suitability of OODB products for TP applications, which you couldn't do
>: >: if the TPC provided an ISO SQL program as the only valid benchmark
>: >: program.
 

>: >Again, real world vs. fantasy world.
 

>: I'm confused again, you mean these are your fantasies???!
 

>No, I feel that the flexibility in the system leaves gaping holes where DB
>vendors can drive through with outrageous claims of performance. My
>example is the TPC test using 'discrete transactions'. See above arguments
>as to why I consider 'discrete transactions' an invalid database testing
>usage parameter.

What this really suggests is that we need a new benchmark that more closely corresponds to other applications that have become more common over the past ten or fifteen years. For example, an Order Entry application or a General Ledger, or perhaps a fully integrated accounting system, with data entry screens, queries, updates, and reports.

>: > with one qualifier: If the TPC-C benchmark (and TPC-X
>: >where X>C) *ACCURATELY* reflects what goes on on a typical DB machine, then
>: >what you say has relevance. If it does not, however, it means that
>: >databases are being optimized for functions that are not common.
 

>: It doesn't mean that at all! Even if you were prescient enough to be
>: able to design a benchmark which tested all the functions in common
>: usage, you would find that the number of typical DB machines that it
>: *ACCURATELY* reflected was < 1!
 

>: A typical DB machine actually uses only a subset of the common
>: functions, plus a subset of the less common functions. The problem is
>: that each one (ie each schema design/application, not each occurrence/
>: installation) uses a *different* pair of subsets.
 

>I think that your second paragraph depicts what I was trying to say. I
>just feel that for database selection, you must look at those functions
>that are common to all database systems. Unusual features that may or may
>not be of use to you should be extraneous features when looking at
>benchmark tests.

This seems unreasonable. For example, let's say that Oracle found some way of massively optimizing a specific class of transaction if the programmer notifies the DB that those transactions fit the requirements. Should this feature be banned from the benchmark? After all, if your application is similar to the benchmark then you will be able to make use of this feature. If your application is not similar to the benchmark then the benchmark's results are probably irrelevant regardless.
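One concrete form of this kind of opt-in is Oracle7's documented discrete transaction call: the application asserts, per transaction, that the restrictions are met. A minimal sketch, reusing the illustrative account table from the earlier example:

  DECLARE
    v_acct  NUMBER := 4242;
    v_delta NUMBER := 100;
  BEGIN
    -- The application asserts that this short transaction satisfies the
    -- discrete-transaction restrictions, so the server can take the
    -- streamlined path.  (The instance must also be started with
    -- DISCRETE_TRANSACTIONS_ENABLED = TRUE.)
    DBMS_TRANSACTION.BEGIN_DISCRETE_TRANSACTION;
    UPDATE account SET balance = balance + v_delta WHERE account_id = v_acct;
    COMMIT;
  END;
  /

Code that never makes the call runs as ordinary transactions, so an application whose transactions can't honestly make that assertion simply doesn't use the feature.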
