c.d.o.server -> Re: Larry Ellison comments on Microsoft's benchmark
Thanks, Oracle!
SQL Server professionals owe Oracle a huge debt of gratitude for two
reasons. First, if Oracle hadn't been caught red-handed stealing
garbage (yes, literally) some of you might have gone on denying the
powerful undercurrent of business competition that has driven a good
part of the Microsoft antitrust trial. (You can read more about
Oracle's foray into waste management at
http://www.msnbc.com/news/426438.asp?0nm=N21D.) Second, and more
importantly, if not for Oracle's (and other vendors') actions in a
recent Transaction Processing Performance Council (TPC) vote, Microsoft
might have shipped a potentially crippled version of a powerful new SQL
Server feature code-named Coyote.
Coyote, or distributed partitioned views, lets customers "scale out" their servers and build what Microsoft calls a federated database. What's a federated database? Think loosely coupled database servers linked across multiple SMP boxes. Distributed partitioned views are updateable views that exist on different servers combined through the UNION keyword.
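To make that concrete, here is a minimal sketch of what a distributed partitioned view might look like (the server, database, table, and column names are all hypothetical, purely for illustration): each member server holds one horizontal slice of the table, a CHECK constraint on the partitioning column declares which key range that slice owns, and a view on each server stitches the slices together with UNION ALL.

```sql
-- Hypothetical example: a Customers table federated across two linked servers.
-- On Server1, the member table holds CustomerID 1 through 49999:
CREATE TABLE Customers_1 (
    CustomerID   INT PRIMARY KEY
                 CHECK (CustomerID BETWEEN 1 AND 49999),
    CustomerName NVARCHAR(100)
);

-- On Server2, the member table holds CustomerID 50000 through 99999:
CREATE TABLE Customers_2 (
    CustomerID   INT PRIMARY KEY
                 CHECK (CustomerID BETWEEN 50000 AND 99999),
    CustomerName NVARCHAR(100)
);

-- On each server, the federated view combines the member tables:
CREATE VIEW Customers AS
    SELECT * FROM Server1.SalesDB.dbo.Customers_1
    UNION ALL
    SELECT * FROM Server2.SalesDB.dbo.Customers_2;
```

The CHECK constraints are what make the view more than a convenience: the engine can use them to route a query or modification against the view to only the member server whose key range is involved, which is what lets the federation "scale out."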
By now, you've probably heard that Microsoft recently withdrew the record-breaking TPC-C numbers SQL Server 2000 posted last February, giving up its claim to "The World's Fastest Database." But don't hang your head yet. The TPC-C scores were invalidated because the beta version of distributed partitioned views that Microsoft used in the audited benchmark didn't support the ability to update primary keys. A strict literal reading of Section 1.6.3 of the TPC-C specification would indeed invalidate Microsoft's results. But interestingly enough, both Oracle and Tandem have published "clustered" benchmarks over the past few years--none of which complied with a strict literal reading of Section 1.6.3. These benchmarks didn't support the ability to update primary keys either. (You can download the full TPC-C specification from http://www.tpc.org/cspec.html.)
Whether you consider the invalidation of Microsoft's TPC-C results fair or not, the result is great news for SQL Server users. Microsoft didn't plan to offer primary key updates through a distributed partitioned view in SQL Server 2000's initial release. But in the flush of embarrassment about the TPC-C results recall, Microsoft has updated SQL Server 2000's code base to fully support primary key updates through a distributed partitioned view. To be honest, distributed partitioned view support without the ability to update a primary key would have been very limiting for real-world applications. So thanks, Oracle! We wouldn't have this nifty feature without your kind support!
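For a sense of what that fix enables, here is a sketch of the kind of statement the beta build couldn't handle (the view and column names are hypothetical, borrowed from a typical federated-view setup). When the partitioning column is the primary key, updating it through the view can move the row from one member server's key range to another's:

```sql
-- Hypothetical: CustomerID is both the primary key and the partitioning
-- column of the federated Customers view. Changing it through the view
-- relocates the row to whichever member server owns the new key range.
UPDATE Customers
SET    CustomerID = 50001
WHERE  CustomerID = 42;
```

Without primary key update support, an application would have to simulate this with an explicit DELETE on one member table and an INSERT on another, inside its own transaction, which is exactly the kind of plumbing a federated database is supposed to hide.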
Does Microsoft plan to republish new TPC-C numbers now that SQL Server 2000 is in full compliance with the TPC-C spec? Absolutely. And the code change shouldn't significantly affect Microsoft's ability to post high TPC-C numbers. Technically, no one can reference the tpmC score associated with a TPC-C benchmark until the numbers have been officially audited. So I can't say that Microsoft has already rerun the test and demonstrated that the new and improved distributed partitioned view support shows essentially the same performance. Instead, I'll just say that Microsoft has run a test that is awfully similar to TPC-C and has had the numbers audited and verified by people who look a lot like TPC-C auditors. Microsoft is working aggressively to publish new numbers, but $4 million systems with 96 CPUs don't grow on trees. Although Microsoft is staying mum on this topic right now, I suspect we won't see the new numbers for at least a month.
Next week, I'll cover the impact of IBM's newest TPC-C scores and what they mean for SQL Server and the future of database technology. The story has a happy ending if you're a committed SQL Server fan.
Brian Moran
SQL Server Magazine UPDATE News Editor
Wed, 12 Jul 2000 06:16:33 GMT Ivana Humpalot <ivana_humpalot_at_nospam.com> wrote:
> "Brad" <Brad_at_SeeSigIfThere.com> wrote:
>> What I want to know is how the system can still be reliable if one or
>> more servers are down. If the data is inaccessible then how can any
>> query be reliable? I can understand if there is some striping going on,
>> but even then if two machines go down all of the data is not accessible.
>> How can the database as a whole be worth hitting if only one of twelve
>> servers is up (as Ivana claimed).
-- http://www.cooper.com.hk
Received on Mon Jul 17 2000 - 00:00:00 CDT