Re: Linux beats NT in TPC testing, running Oracle8

From: Christopher Browne <cbbrowne_at_news.hex.net>
Date: Thu, 13 May 1999 02:55:04 GMT
Message-ID: <car_2.5725$qQ4.122906_at_news2.giganews.com>


On Wed, 12 May 1999 13:52:10 -0500, nik <ndsimpso_at_ingr.com> wrote:
>r.e.ballard_at_usa.net wrote in message <7hatok$79i$1_at_nnrp1.deja.com>...
>>But at the same time, a 6k TPM benchmark result on a single processor
>>Linux engine is not even publishable.
>
>Because the rules of the benchmark require you to factor in support costs,
>and that kind of support can't be bought for a LINUX-based system today.
>This is a problem with the benchmark requirements, which are set by the TPC
>council, the majority of whose members come from the database companies and
>UNIX vendors.

And whether that's 100% fair or not, the RDBMS and TP products that may be "free for evaluation purposes" are certainly *not* free for commercial deployment. It ain't 100% fair, but it doesn't thereby move automatically to being 0% fair...

>>Even if a P2/400 could only
>>crank 8k, that would still leave the cost of dual systems (the
>>alternative to a single system with a support contract) under $16k,
>>giving a benchmark of $2/TPM.
>
>Adding a second 400MHz processor to one of the servers I sell would add
>roughly $300 to the price of the system, hardly likely to impact price
>performance noticeably. Also I very much doubt that a single 400MHz could
>come close to 8K transactions on TPC-C, and there is no way that the
>hardware would come in under $16K. The disk space required for a 5.3K
>result was 75x4GB drives; an 8K result would require more disk space
>(simply because the database size scales with performance). Let's assume,
>for the sake of argument, that it's 1.5x the space and we are using 9GB
>drives: you still need ~50x9GB drives at say $450 each, that's $22.5K just
>for the drives, and we haven't included disk housings, controllers, etc.

And you haven't added in the costs of licenses for the RDBMS/TPM products. There are no products available for free that will do the job at this time.
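
For what it's worth, the arithmetic in the thread is easy to lay out end to end. A quick back-of-the-envelope sketch (the tpmC figure, drive price, and 1.5x scaling factor are all the posters' assumptions, not measurements):

    # TPC-C price/performance and disk sizing, using only the numbers
    # quoted in this thread; treat every figure as an assumption.
    tpm         = 8000             # hypothesized tpmC for the P2/400 setup
    system_cost = 16000            # claimed cost of the dual-system setup
    print("claimed price/perf: $%.2f/tpmC" % (system_cost / float(tpm)))

    base_gb   = 75 * 4             # the 5.3K result used 75 x 4GB drives
    needed_gb = base_gb * 1.5      # assume 1.5x the space for an 8K result
    drives    = needed_gb / 9.0    # repacked onto 9GB drives
    print("drives: %d, drive cost: $%d" % (drives, drives * 450))

That works out to 50 drives and $22,500, matching the figures above, and that's before enclosures, controllers, and (as noted) the RDBMS/TPM licenses.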

>>> Other than your opinion, what actual real data
>>> like SPECWeb results do you
>>> have to back this claim up?
>>
>>I had some in the archives. I'll look them up.
>
>I wait with baited breath.

Troll bait? Or otherwise? :-)

[The right word was probably the homophone "bated" breath, which seems most commonly used in Shakespearean literature... This obscure bit parallels the all-too-common tendency of people who have never seen a House of Commons in session to signal agreement by writing "Here! Here!" when what is actually wanted is the homophone "Hear! Hear!"]

>>> > Availability - NT Availability has improved
>>> > from 95% in 1996 to nearly 99.7% in 1999.
>>> > Linux has gone from 99.98% to 99.998%; this means
>>> > Linux is down for about 5 minutes every 3 months.
>>>
>>> Show me a single company anywhere in the world
>>> that will guarantee 99.998%
>>> uptime for a LINUX server. This is a completely bogus claim.
>>
>>Neither is a guarantee. Both are simply the observations obtained
>>from organizations with numerous servers. The NT configuration was
>>based on an insurance company using and keeping records on over 2200
>>servers. The Linux numbers came from a number of sources including
>>Dejanews and some ISPs.
>
>But without knowing the complexity of the applications and the loads
>on the servers, this data is meaningless. For example, a LINUX box
>doing simple IP routing on a T1 link is a very different kettle of fish
>to an NT box handling a large database with hundreds of
>transactions/sec.

It's much like comparing the reliability of the FreeBSD box running ftp.cdrom.com, where it's basically got a bunch of ftpd processes running, to that of www.slashdot.org, where requests head through various layers, from HTTP server to middleware to RDBMS and back. They have different kinds of "robustness challenges."
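
Incidentally, converting those quoted availability percentages into wall-clock downtime is simple arithmetic; a small sketch (the percentages are the ones claimed upthread, not measurements of mine):

    # Downtime per quarter implied by a given availability percentage.
    quarter_minutes = 90 * 24 * 60       # ~129,600 minutes in 3 months
    for label, avail in [("NT (claimed)", 99.7),
                         ("Linux (claimed)", 99.998)]:
        down = quarter_minutes * (1 - avail / 100.0)
        print("%-16s %7.1f minutes down per quarter" % (label, down))

Note that 99.998% actually works out to about 2.6 minutes per quarter, so the "5 minutes every 3 months" gloss quoted above is, if anything, pessimistic; 99.7% works out to roughly six and a half hours per quarter.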

>>It's not hard to put together a newsgroup server on Linux that runs on a
>>486/100.
>
>I'd like to see you run a full feed (35K+ groups, 2GB & 1.5-2 million
>articles inbound/day) on a 486 with any OS. If you seriously think that's
>possible then you've obviously never tried to run a full newsfeed. The
>machine in question took over from a Digital Alpha server running DEC UNIX
>about two years ago, at which time the feed was roughly half what it is
>today, and the DEC box was on its knees.

I suggest that you bounce an email to <sdenny_at_hex.net>. He used to do something less dissimilar to this than you'd think.
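
For a sense of scale, the sustained rates implied by the quoted feed numbers are easy to derive (a rough sketch; the input figures are the poster's, and I've taken the midpoint of the article range):

    # Sustained rates implied by a "full feed" of 35K+ groups,
    # 2GB & 1.5-2 million articles inbound/day.
    articles_per_day = 1.75e6            # midpoint of 1.5-2 million
    gb_per_day       = 2.0
    secs_per_day     = 24 * 3600.0
    print("articles/sec: %.1f" % (articles_per_day / secs_per_day))
    print("KB/sec:       %.1f" % (gb_per_day * 1024 * 1024 / secs_per_day))
    print("KB/article:   %.2f" % (gb_per_day * 1024 * 1024 / articles_per_day))

Twenty-odd articles a second around the clock is mostly a disk-seek and metadata problem (history lookups, active file updates), which is why raw bandwidth, about 24KB/sec here, badly understates the load on a small machine.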

>> Are you also serving web pages? (how many pages/second). Are
>>you also serving ASP pages? (how many rows/second). Are you taking
>>updates? (how many inserts/second).
>
>Newsservers, whether UNIX or NT, are usually dedicated to the task; this
>machine is no exception.

news.hex.net (which is no more) was such; furthermore, it was mainly devoted to servicing users' requests. There was a separate box [that was *really* quite non-powerful] that collected news batches from the satellite link.

News is a good example of an application that tends to need dedicated servers.

>>Actually, Linux followed the UNIX lead and focused on SP and MPP
>>technologies such as Beowulf, ACE/TAO, and CORBA.

>You are moving the goalposts. If you want to argue that LINUX
>currently offers some very attractive loosely coupled cluster
>capabilities, I wouldn't argue. But we both know that that has no
>bearing on the claim that LINUX SMP scales well on a 16 CPU SMP box.

True. And hopefully no one will consider it an "attack" if I comment that ACE/TAO has not yet been packaged up in a trivially-installable form for Linux.

The deployment of CORBA-based applications on Linux is, at present, in roughly the embryonic state that GNOME was in nearly a year ago. Some pieces are available, but only a minimal number of applications have reached the level of maturity that would allow them to be pre-configured using a package manager.

As far as I can tell, the only CORBA-based application that 'self-installs' is the ORBit support in GNOME, and at present that seems only to be used to support the panel "applets." Again, that shouldn't be considered an "attack" on either GNOME or ORBit; step one in "world domination" is to get some small things working, and then to let the maturing codebase allow more ambitious results later.

>> Actually,
>>Linux 2.0 did NOT scale well in SMP environments due to its single
>>process table and single spinlock. More advanced SMP configurations
>>required the 2.1 kernel and special recompiles. Even 2.0 requires
>>tuning, since the number of spinlocks needs to be adjusted based on the
>>number of processors (having 8 spinlocks and tables on a monoprocessor
>>system would be wasteful - having 2 spinlocks on a 16 processor system
>>would be messy as well). Solaris, AIX, and HP-UX have all gone SP
>>and/or MPP.
>
>So, no defence of the claim that LINUX scales well on 16 CPU SMP boxes. To
>say that Solaris, AIX and HP-UX have gone MPP is disingenuous; all of them
>have demonstrated capabilities to scale well on 16 CPU SMP. LINUX has yet
>to demonstrate scalability on 2-way SMP, which puts it behind NT in this
>respect, since NT has demonstrated excellent scalability on 4-way and
>quite usable scalability on 8-way.

... And this probably does not do justice to Linux, as 2.0 is quite old code by now, and represented the "proof of concept" that Linux could *stably* run SMP.

A lot of SMP enhancements went into 2.1, and 2.2 incorporates the "best of SMP" that Linux has to offer.

*Directly* to your point, it hasn't been formally benchmarked, and hence there is *no* defence presently available.

The *direct* response is something I expect to see forthcoming in the coming months. There are now significant vendors interested in selling their servers to run Linux, and it makes sense for companies like Compaq, Dell, VA Research, and Penguin Computing to run benchmarks on SMP systems and report on the results. That wasn't true 6 months ago; it is now; putting together suitable configurations and reporting on the results does take some time.
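
The single-spinlock point quoted above also has a tidy quantitative face: if some fraction of the kernel's work serializes on one lock, Amdahl's law caps the speedup no matter how many CPUs you add. A toy sketch (the serial fractions are made-up illustrations, not measurements of any actual kernel):

    # Amdahl's law: speedup on n CPUs when a fraction s of the work
    # serializes (e.g. behind a single kernel-wide spinlock).
    def speedup(n, s):
        return 1.0 / (s + (1.0 - s) / n)

    for s in (0.30, 0.05):               # illustrative serial fractions
        print("serial fraction %.0f%%:" % (s * 100))
        for n in (2, 4, 8, 16):
            print("  %2d CPUs -> %5.2fx" % (n, speedup(n, s)))

With 30% of the work serialized, 16 CPUs buy you less than a 3x speedup; cut the serial fraction to 5% and the same 16 CPUs get you about 9x. That is the whole argument for the finer-grained locking work that went into 2.1/2.2.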

>>There are third party cluster packages (from HP), and there
>>are MP packages such as MQ, but these all boil down to the
>>per-processor capabilities of the base machines. Since SMP machines
>>are substantially more expensive than single processor machines, and
>>since there's almost no limit to the number of processors in SP/MPP
>>systems, it's no big surprise that Linux will scale more cheaply than
>>NT.
>
>Depends on the application, if the application lends itself to a Beowulf
>like approach, say rendering images, then yes LINUX rocks, but trying
>building a large scale database using Beowulf technologies.

"building a large scale database using Beowulf technologies" does what?

I presume you forgot to end this sentence with something like: "isn't an effective approach at this time."

To which I would say "I agree."

-- 
"I've discovered that P=NP, but the proof is too long to fit within the
confines of this signature..."
cbbrowne_at_hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>