Re: Oracle threatens to sue Standish over an article

From: Renato Ghica <ghica_at_fig.citib.com>
Date: Tue, 27 Jul 1993 16:08:40 GMT
Message-ID: <CAtzIH.HIE_at_fig.citib.com>


In article <1993Jul26.122258.4838_at_pyra.co.uk>, graeme_at_pyra.co.uk (Graeme Sargent) writes:
|> In <CAML7E.6uD_at_feanor.xel.com> shaw_at_feanor.xel.com (Greg Shaw) writes:
|>
|> >Graeme Sargent (graeme_at_pyra.co.uk) wrote:
|> >: In <CAAKsE.MDJ_at_feanor.xel.com> shaw_at_feanor.xel.com (Greg Shaw) writes:
|>
|> [stuff deleted]
|>
|> >: > When evaluating DB systems, the TPC benchmark is used
|> >: >*OFTEN* to compare A vs. B. Period. 'Special' features that influence a
|> >: >benchmard (aka propietary features) are of *NO* use, because you're not
|> >: >talking about inplementing everyting in a vendor-specific situation.
|> >: >You're talking about apples vs. apples, DB vendor vs. DB vendor.
 

|> >: Which means that a vendor specific feature that you would choose to use
|> >: (or that is automatic, for that matter) *should* be measured. Fast/
|> >: Group Commits were vendor specific, it's just that they're now specific
|> >: to just about every vendor. The chances are that *your* particular
|> >: database now goes faster because of that particular "benchmark special".
 

|> >The question being, in your first sentence 'would' vs. 'could'. When I'm
|> >selecting a DB vendor, I want to see what happens in the BASE system. I
|>
|> Why?
|>

Because extra things cost money.

|> >can always turn some vendor-specific feature later for system performance
|> >testing. But, that will come later. In the beginning implementation
|>
|> Why?
|>

Because extra things cost money.

|> >phases, I want to know how the database performs on a dataset without
|> >having to specially setup the database -- e.g. put the data in and away it
|> >goes.
|>
|> Is this just laziness? Or do you have some reason for not thinking
|> before implementing?

All other things being equal, the form the data is in should not matter for performance comparisons. If you want to see how fast a car goes, it does not matter whether you drive it with four people in the car or by yourself (as long as you always do it the same way).

|>
|> >If something is automatic, great. I won't see it, I won't know about it --
|> >I'll assume it's part of the base system, and it *WILL* influence my
|> >decision on which DB vendor to go with.
|>
|> And the fact that it defeats your argument it seems does not bother you!
|> Which I guess gives us a value judgement on the argument.

Non sequitur.

|>
|> It is simply not true to say that "because you're not talking about
|> inplementing(sic) everyting(sic) in a vendor-specific situation". You
|> are *always* implementing in a vendor-specific situation, whether you
|> like it or not, unless you re-invent the entire DBMS wheel for yourself.
|>
|> >: >Because Oracle uses a 'feature' that is not something that would be useful
|> >: >in a REAL WORLD situation (aka any real situation) to 'up' their TPC
|> >: >benchmarks disqualifies them, in my opinion.
|> >: Just because it's not useful in a particular real world situation does
|> >: *NOT* mean that it is not useful in any real world situation.
 

|> >Again, when I am doing comparisons of DB vendors, I look at their TPC
|> >numbers. They're supposed to reflect, in some meaningful way, how the
|> >database will perform on 'normal' databases.
|>
|> No they're not. Each TPC benchmark is designed to reflect a very
|> specific type of database/application.
|>
|> >Transactions without database locks is *NOT* part of a 'normal' database. On a
|>
|> Granted, but I don't see the relevance to the discussion.

Well, it's like HONDA saying the Acura goes from 0-60 mph in 3 seconds (as long as there's no car frame, only 0.25 gallons of gas, no seats, and very light wheels). It's misleading.

|>
|> >single-user database, perhaps, but that is not what the TPC is measuring --
|> >you won't get 300 transactions per second from a single user. (Well, if
|> >you do, you type *MUCH* *MUCH* faster than I do! ;-)
 

|> >: > When TPC can do a real world test
|> >: >(hopefully varied), I expect it to accurately reflect what I can do with
|> >: >*MY* database.
|> >: But I want it to reflect *MY* database, not *YOUR* database ... and
|> >: there's the rub!
 

|> >No, you misunderstand my context. I'm talking specifically about 'discrete
|> >transactions'. More than that, I don't care. I'm saying tha 'discrete
|> >transactions', as used by a DB vendor (specifically oracle) to measure TPC
|> >benchmarks is an invalid setup.
|>
|> Without, apparently, any technical (or other) justification for that
|> statement.

It's a gut feeling. You know that running transactions with no locks is bad, right? Even without a formal proof, right?
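
Here's the kind of thing I mean (a back-of-the-envelope sketch; the table and column names are made up, not anything from the TPC spec). Two users read the same row and both write it back with no lock held in between:

   -- User 1                              -- User 2
   SELECT balance FROM account            SELECT balance FROM account
    WHERE acct_id = 42;                    WHERE acct_id = 42;
   -- both see balance = 100              -- both see balance = 100
   UPDATE account SET balance = 110       UPDATE account SET balance = 70
    WHERE acct_id = 42;                    WHERE acct_id = 42;
   COMMIT;                                COMMIT;

One deposit of 10 and one withdrawal of 30 should leave 80 in the account. With no lock held between the read and the write, whoever commits last wins, and the balance ends up 70 or 110. That's the classic lost update, and it's exactly what locking is there to prevent.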

|>
|> >: > 'Discrete transactions' are not what I can do with my
|> >: >database ... and expect the database to be intact at the end of the day.
 

|> >: Why wouldn't you expect it to be intact at the end of the day? If it
|> >: can do more than 60 million TPC-A transactions in a day and remain
|> >: intact, why wouldn't it for you? It sounds as if you've misunderstood
|> >: Discrete Transaction. It passes the ACID test, so where's the beef?
 

|> >As I understand it (and correct me if I am wrong), discrete transactions
|> >require that each transaction is unique in the database. e.g. you are
|> >assured that there is not an update occuring on one record in the database
|> >by two transactions. This means that database locking (record and/or
|>
|> You are wrong. TPC-A gives no such assurance.

Well, if there are no locks and no guarantee about what happens when two updates hit the same data, then the benchmark is USELESS. Is that a semi-formal proof?

|>
|> >table) isn't necessary. This is what I contend is not an accurate test for
|> >a 'real world' database system. Generally, when going hundreds of
|> >transactions per second, that is doing updates AS WELL AS additions.
|> >: A typical DB machine actually uses only a subset of the common
|> >: functions, plus a subset of the less common functions. The problem is
|> >: that each one (ie each schema design/application, not each occurrence/
|> >: installation) uses a *different* pair of subsets.
 

|> >I think that your second paragraph depicts what I was trying to say. I
|> >just feel that for database selection, you must look at those functions
|> >that are common to all database systems. Unusual features that may or may
|> >not be of use to you should be extraneous features when looking at
|> >benchmark tests.
|>
|> But that's a crazy way to go about selection! The unusual feature which
|> transforms *your* application is exactly what you *should* be looking
|> for!
|>
|> >: >I do feel that any TPC benchmark will test some functionality of a DB
|> >: >system. I contend that the system must be demonstrably close to what
|> >: >happens in the real world to be usable and/or believable.
 

|> >: The closer it gets to what happens in any particular piece of the real
|> >: world, the more specialised it becomes, and, as a direct result, the
|> >: less useful it becomes (except to the inhabitant of that particular
|> >: real-world location). Whether it is believable or not is an
|> >: administrative issue, and has nothing to do with functionality.
 

|> >My point above being that there are similar functions for every database in
|> >existence. To compare those databases for performance, there must be some
|>
|> That's rubbish. What use is a measure of update performance if you're
|> implementing a read-only database? Or vice-versa?
|>
|> There are *NO* functions which are common to every database in
|> existence.
|>

Wrong. Every relational database has a common set of functions. They may be implemented in various ways, but unless you're an alien, you're going to use SELECT, UPDATE, etc., etc., and not something like:

(transmogrify (date/2 * rand(56)) from 1.2.ddertqz as wished by the emperor our lord and protector, order by the dark side of the moon). ;-)
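And for what it's worth, the TPC-A transaction itself is nothing more exotic than those common functions. Roughly (my own sketch; the schema and host-variable names are mine, not the official spec):

   BEGIN TRANSACTION;
   UPDATE account SET balance = balance + :delta WHERE acct_id   = :aid;
   SELECT balance FROM account                   WHERE acct_id   = :aid;
   UPDATE teller  SET balance = balance + :delta WHERE teller_id = :tid;
   UPDATE branch  SET balance = balance + :delta WHERE branch_id = :bid;
   INSERT INTO history (acct_id, teller_id, branch_id, delta, tstamp)
       VALUES (:aid, :tid, :bid, :delta, CURRENT_TIMESTAMP);
   COMMIT;

Plain SELECT, UPDATE, INSERT. Any relational vendor can run it, which is the whole point of using it to compare A vs. B.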

|> >common ground to test with. Discrete transactions, as explained to me, do
|> >not qualify as common, let alone 'common ground'.
|>
|> But maybe it was the explanation that was at fault!
|>
|> >There is a point, as you say, that the benchmark is no longer believable.
|> >That point is when the benchmark criteria has become too specific to a
|> >particular use to be usable as a generic benchmark figure. But, there must
|> >be some common usage of the database by a benchmark so that those benchmark
|> >figures may be used to compare database A vs. database B. The use of
|> >unusual features found only in that database by the benchmark (specifically,
|> >discrete transactions), I feel, invalidates the comparison of database A
|> >vs. database B.
|>
|> It doesn't invalidate it, it just encourages you to focus on how
|> applicable the benchmark is to your situation, which you should have
|> been doing anyway.
|>
|> Why single out discrete transaction for this censure? The whole point
|> of TPC has always been that the benchmark is tuned to extract maximum
|> performance. The basis of your argument seems to have been that you
|> believe that discrete transaction breaks the ACID rules, whereas this is
|> quite simply not true, ACIDity is *required* for a TPC result.
|>
|> graeme

Well, maybe what we need is for all the vendors to run the benchmarks like ORACLE does, and then sit and wonder why our application tables get updated 10 times slower than the demo!

-rg

needless to say, I speak for myself.....

-rg

:-)

-- 

"die,die, damned thread!"
"delete *this"
"I'm 90% done."