Subject: Re: Larry Ellison comments on Microsoft's benchmark
Richard,
I wanted to speak with you at the Windows DNA 2000 Readiness Conference in Denver (02/28 - 03/03). You should listen to Lon Fisher's (Microsoft) presentations on performance optimizations with an ear toward the implementation of the benchmarks (PC Week / Doculabs, TPC-C).
If it weren't for the fact that I am swamped with other obligations I would explain some of the issues at length (including code samples and commentary).
Even the NSTL benchmark results used in so many of the conference presentations are questionable. The numbers suggest either outright incompetence on the part of NSTL staff or a work of fiction. In case you are not familiar with it, the NSTL report, titled "Scalability and Performance Testing of DNA Application Designed with Microsoft Visual Studio", is available at:
http://www.nstl.com/html/ecommerce_scalability.html
As I have already put together some comments on the NSTL results, I will include them.
1. The claim of "near-linear" scaling is itself the reason to question these results: "near-linear" almost never translates into better per-server numbers, yet that is what the chart shows. The chart compares concurrent users by number of servers:
   1 Svr = 1,300 cu
   2 Svr = 2,800 cu  (+1,500 cu increment, or 2.15 x base)
   3 Svr = 4,000 cu  (+1,200 cu increment, or 3.08 x base)
   4 Svr = 6,000 cu  (+2,000 cu increment, or 4.61 x base)
If I saw a trend like this in one of my benchmarks, it would raise serious questions about my environment. I would retest a number of times; if the pattern remained inconsistent, I would determine the underlying cause, correct the problem(s), and run all tests again. *I am not saying that the trend isn't possible, just that I have never observed this type of trend in scaling any solution.* This indicates to me that the testing process is out of control.
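For what it's worth, here is the arithmetic behind those increment and ratio figures as a small Python sketch. It is only my own sanity check of the published totals, not anything taken from the NSTL report:

# Sanity check of the NSTL concurrent-user figures (my own calculation).
# With near-linear scaling the server-to-server increment should stay
# roughly constant; here it dips at 3 servers and then jumps at 4.
base = 1_300                        # concurrent users on 1 server
totals = {1: 1_300, 2: 2_800, 3: 4_000, 4: 6_000}

prev = 0
for servers, users in sorted(totals.items()):
    increment = users - prev        # users added by the latest server
    ratio = users / base            # multiple of the 1-server result
    per_server = users / servers    # average load carried per server
    print(f"{servers} Svr: {users:5d} cu  +{increment:5d}  "
          f"{ratio:.2f} x base  {per_server:6.0f} cu/server")
    prev = users

Run against the published totals, the server-to-server increments come out as +1,500, +1,200 and +2,000 - exactly the uneven pattern described above.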
2. Figures 8/10 and 9/11 show similar trends that lead me to believe the solution has run out of steam at 3 servers. The response time takes a serious upturn at approximately 7,000 users in both cases; were the 4th server to have a beneficial impact on system throughput, the trend should be more in line with the difference between Figures 4/6 and 6/8.
Even the difference between the 2- and 3-server tests indicates an underlying bottleneck, as the increase in throughput is much smaller than that from 1 to 2 servers. Without statistics from the database server and network it is impossible to state the cause with certainty, but F&M Stocks does use MTS (distributed) transactions, which impose a higher degree of serialization and are the most likely cause.
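To show why even a modest serialized fraction per transaction (which is roughly what distributed transaction coordination adds) would produce exactly this flattening, here is a rough Amdahl's-law sketch. The 15% serial fraction is an assumed figure for illustration only, not a measurement from the NSTL tests or from F&M Stocks:

# Amdahl's-law illustration (assumed numbers, not NSTL measurements):
# if a fixed fraction of every transaction is serialized - e.g. by the
# distributed transaction coordinator - extra servers stop paying off.
def speedup(servers: int, serial_fraction: float) -> float:
    # Only the parallel portion of the work benefits from more servers.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / servers)

base_users = 1_300            # the measured 1-server result
serial_fraction = 0.15        # assumed serialized share per transaction

for servers in (1, 2, 3, 4):
    s = speedup(servers, serial_fraction)
    print(f"{servers} Svr: ~{base_users * s:,.0f} cu  ({s:.2f} x base)")

With those assumptions the model gives roughly 1,300, 2,260, 3,000 and 3,590 concurrent users: each additional server buys less than the one before, which is the shape I would expect from a bottlenecked solution, and nothing like the jump NSTL reports at 4 servers.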
I also realize that the NSTL document is qualified as "Preliminary Testing Results", and that the final results may differ. Note that I have requested a final version but have had no response.
I know that Microsoft *can* deliver decent technology. The problem is that management / marketing is on a mission to sell technologies that your developers are not using in their benchmarks.
This presents a problem for ISVs - the customer demands a particular implementation (based on Microsoft literature) even when an alternative solution should be delivered. When real-world results do not live up to the expectations set by marketing hype, the customer looks to the ISV as the source of the problem.
My reason for attending the conference was that I have found the limits of MTS / COM+ in a production environment (at a Fortune 100 company) using Oracle as the back-end, and I wanted to learn how the benchmarks were implemented. As I expected, the top results did not use MTS (distributed transactions) - though most customers and industry analysts seem to believe otherwise.
BTW, I noticed the alphabet soup in your signature. The only thing missing is "Microsoft employee". I realize you might be posting from home, but not everyone reading this thread will recognize you.
--
Michael D. Long
http://extremedna.homestead.com

"Richard Waymire" <rwaymir_at_ibm.net> wrote in message
news:ueZi6Kl3$GA.282_at_cppssbbsa04...
> The data is partitioned across each node for key tables. If a node fails,
> any queries against the distributed partitioned view will fail (but NEVER
> return incorrect results). Hence the recommendation to run each node in an
> MSCS failover cluster.
>
> Is shared-nothing clustering good for general systems? Ask just about every
> VERY large system in a cluster (Tandem, DB2, etc.).
>
> For an objective opinion on such matters, please read some relevant material
> such as "In Search of Clusters" by Pfister from IBM Corp. You might also
> look up some slides, etc. from Doctor Jim Gray
> (http://research.microsoft.com/~gray/). Before you dismiss the site because
> it's on Microsoft's web page, look at his credentials (including the Turing
> award).
>
> --
> Richard Waymire, MCT, MCSE+I, MCSD, MCDBA
> "Alexander Penev" <webmaster_at_penev.com> wrote in message
> news:395554AC.9D413341_at_penev.com...
> > What do you mean? Is the data partitioned along the 12 nodes or not? Will
> > the whole system fail if one of the nodes fails? Are these issues good for
> > a general purpose system or not? That's what Ellison says and I think it's
> > just true. If you think it's not, please explain to us why. I would not
> > read hundreds of lines of C++ code without knowing what I'm looking for...
> >
> > "Michael D. Long" wrote:
> >
> > > And if you can read C++, you'll find some other goodies...
> > >
> > > --
> > > Michael D. Long
> > > http://extremedna.homestead.com
> > >
> > > "Alexander Penev" <webmaster_at_penev.com> wrote in message
> > > news:39527E0C.E614B483_at_penev.com...
> > > > Hi Steve,
> > > > It's true that every company tries to blame the competitor's product
> > > > and to push theirs, but THESE STATEMENTS of L. Ellison ARE JUST
> > > > TRUE!!!! You can see it yourself:
> > > > http://www.tpc.org/results/FDR/Tpcc/compaq.8500.96p.00021702.fdr.pdf
> > > >
> > > > Just see the source code for creating the databases...
> > > >
> > > > Steve Jorgensen wrote:
> > > >
> > > > > All companies try to lie with statistics while being technically
> > > > > accurate. That's why you have to read every company's benchmarks,
> > > > > their competitors' benchmarks, and everyone's critiques of everyone
> > > > > else's benchmarks.
> > > > >
> > > > > Ivana Humpalot wrote in message ...
> > > > > >X-No-Archive: yes
> > > > > >
> > > > > >
> > > > > >In the Analyst Q&A following Oracle's 4th Quarter Earnings Report,
> > > > > >Larry Ellison made some very interesting remarks about Microsoft's
> > > > > >recent SQL Server 2000 benchmark.
> > > > > >
> > > > > >If Ellison's comments are true then Microsoft is basically
> > > > > >defrauding their customers with their benchmark.
> > > > > >
> > > > > >I have included below the transcript of his comments.
> > > > > >
> > > > > >Is Larry Ellison lying or is Microsoft really defrauding their
> > > > > >customers with their benchmark?
> > > > > >
> > > > > >You can listen to the audio here:
> > > > > > http://www.nasdaq.com/reference/broadcast_oracle.htm
> > > > > >
> > > > > >Near the 1 hour mark, an analyst from Paine Webber asked a question
> > > > > >about Microsoft SQL Server 2000. The following is Larry Ellison's
> > > > > >response:
> > > > > >
> > > > > > In terms of microsoft.. we have no concerns at all. They still
> > > > > > can't scale. They have this benchmark that they got out which
> > > > > > works only in the laboratory.
> > > > > >
> > > > > > The only problem with microsoft's benchmark is that it has a
> > > > > > 3-hour mean time of failure. What they have done is to chop up
> > > > > > the database into 10 separate little databases, and if any one
> > > > > > of those databases fails it brings down the entire system, or
> > > > > > worse yet gives wrong results.
> > > > > >
> > > > > > So it is a completely bogus benchmark.
> > > > > >
> > > > > > I mean, it meets the letter of the benchmark rules, however by
> > > > > > their own statistics in terms of availability they have a very
> > > > > > very short mean time of failure.
> > > > > >
> > > > > > No one seriously will ever use this kind of system.
> > > > > >
> > > > > > They have 10 separate computers each with 10% of the database.
> > > > > > If you want an 11th computer you have to unload the entire
> > > > > > database from the 10 computers and then put 9.1% of the database
> > > > > > on the 11 computers. If one of the computers fails you lose 10%
> > > > > > of the database. And that means when you use your query.. you
> > > > > > don't get the right answer back.
> > > > > >
> > > > > > If you use 10 separate systems.. if you believe Microsoft's
> > > > > > statistics on failure rates.. one failure every 30 days, you are
> > > > > > going to get a major system outage or wrong results every 3 days.
> > > > > >
> > > > > > It is a preposterous benchmark.
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > >
> >
>
>