Re: Linux betas NT in TPC testing, running Oracle8

From: <r.e.ballard_at_usa.net>
Date: Wed, 12 May 1999 03:49:40 GMT
Message-ID: <7hatok$79i$1_at_nnrp1.deja.com>


In article <V8#ifV9m#GA.181_at_pet.hiwaay.net>,   "nik" <ndsimpso_at_ingr.com> wrote:
>
> r.e.ballard_at_usa.net wrote in message <7guvu2$bi9$1_at_nnrp1.deja.com>...
> >In article <m1emkwpwy5.fsf_at_inconnu.isu.edu>,
> > Craig Kelley <ink_at_inconnu.isu.edu> wrote:
> >> r.e.ballard_at_usa.net writes:
> >> [snip]
> >>
> >> > It isn't a true "conspiracy", but the folks who audit these
results
> >> > cannot accept the Linux terms as legitimate results. If one were
to
> >> > consider only the raw hardware costs - which could be
competitively
> >> > obtained for $50k-$60k and the software costs which are in the
$2k
> >> > range, and the mainenance contracts available from companies like
> >> > Flagship ($6,000-$20,000 for 5 years) this is typical of Linux
> >> > "bargain basement" environments, a total of $80k. If Linux were
> >> > able to crank out 8,000 TPC/M (plausable when you compare the SCO
> >> > numbers), then Linux would still be in the $10/TPC range. Just
> >> > looking at one of the "low-end" NT machines, it's easy to see how
> >> > Linux could generate some rediculously low $/TPC numbers. NT
> >> > generates $30/TPC with it's bottom of the line systems.
>
> NT systems are in low 20s for 4-way XEON, no one bothers to benchmark
> the 2-way PIII systems in TPC/C because they run out of memory before
> they run out of other horsepower, this would apply to LINUX as well.

It really depends on what you are trying to accomplish. If you want to get a 7000 TPM benchmark, you need a big bazooka. If you want 2000 TPM and get a low $/TPM rate, you can use a smaller box.

> >> And in this day-and-age, benchmarks are becoming worthless. If they
> >> were so important, nobody would even use Windows NT, Microsoft's SQL
> >> Server or MySQL. People want *reasonable* solutions to their
> >> problems, both in terms of performance and price. Linux only needs
> >> to meet this requirement in order to satisfy the majority.
> >
> Thousands of companies use SQL Server for database applications, so a
> benchmark using SQL Server is very definitely of interest to customers
> purchasing database systems. A 4-way XEON server running NT and SQL
> Server is pushing 25K TPM under the TPC/C benchmark; this would easily
> meet the database requirements of hundreds of everyday applications.

But at the same time, a 6k TPM benchmark result on a single-processor Linux engine is not even publishable. Even if a P2/400 could only crank out 8k TPM, a pair of such systems (the alternative to a single system with a support contract) would still come in under $16k, giving a benchmark of $2/TPM.

Until actual verified legitimate numbers are published, Linux $/TPM are up for grabs. You can get some pretty cheap hard drives, some pretty cheap transaction monitors, and some pretty cheap LAN cards and create a pretty incredible system for very little money. Back in the days when you needed 10 drives to get a 20 gig database, it was a challenge. Today, with 16gig drives, RAID in software, and cheap DIMM memory, it's not that hard to come up with some respectable numbers.
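The price/performance arithmetic above is easy to check. Here's a minimal sketch in Python using this thread's own figures - an ~$80k "bargain basement" Linux configuration and an ~$16k pair of P2/400 boxes, both assumed to sustain 8,000 TPM. These are the discussion's estimates, not audited TPC results:

```python
# Back-of-envelope $/TPM figures using this thread's own estimates
# (NOT audited TPC results - the TPM numbers are assumptions).

def dollars_per_tpm(total_cost, tpm):
    """Price/performance: total system cost divided by transactions per minute."""
    return total_cost / tpm

# ~$80k "bargain basement" Linux config, assumed 8,000 TPM:
print(dollars_per_tpm(80_000, 8_000))   # -> 10.0  ($10/TPM)

# A pair of cheap P2/400 boxes, under $16k total, same assumed 8,000 TPM:
print(dollars_per_tpm(16_000, 8_000))   # -> 2.0   ($2/TPM)
```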

> >It's rather interesting that when you
> >compare Linux to NT using nearly any
> >metric other than training expense in
> >the first 60 days, Linux comes up a
> >clear winner. This goes beyond simple benchmarks as well.
> >
> > Web Server Benchmarks - Linux carries as
> > much as 8 times the capacity
> > of Windows NT. At very low levels, NT gives
> > slightly better response times, but Linux/Apache
> > response time is nearly flat or linear while NT
> > deteriorates at an exponential rate.
> >
>
> Other than your opinion, what actual real data, like SPECWeb results,
> do you have to back this claim up?

I had some in the archives. I'll look them up.

To be fair, this 8x performance gain was on a single-processor machine with 128 meg and 2 hard drives. NT was thrashing its brains out while Linux purred.

A benchmark run in Germany actually showed NT crashing under numerous conditions. Caldera used to have a link to it. I found a number of benchmarks via Infoseek.

One that gave the best results, but was totally bogus, gave MySQL on Linux 20x the throughput of Oracle on NT. This of course ignored the fact that MySQL didn't have two-phase commit, rollback, or transaction logging - totally bogus, but amusing.

> > Availability - NT Availability has improved
> > from 95% in 1996 to nearly 99.7% in 1999.
> > Linux has gone from 99.98% to 99.998% this means
> > Linux is down for about 5 minutes every 3 months.
>
> Show me a single company anywhere in the world that will guarantee
> 99.998% uptime for a LINUX server. This is a completely bogus claim.

Neither figure is a guarantee. Both are simply observations obtained from organizations with numerous servers. The NT number was based on an insurance company keeping records on over 2,200 servers. The Linux numbers came from a number of sources, including Dejanews and some ISPs.

> > NT is down 5 minutes per week. In
> >response to Linux stability, Commercial UNIX systems
> >are targeting 5 minutes/year.
>
> Hmm, seems COMPAQ, HP and UNISYS would all disagree
> with you about the uptime of NT, they all have packages
> which guarantee (and will pay you money
> if they fail to meet the guarantee) uptime
> on NT configurations well in
> excess of what you are quoting.

My understanding was that they could guarantee certain clusters in certain configurations up to 99.97%. One week equals 10,080 minutes (7*24*60), and 99.97% of 10,080 is about 10,077 - leaving a difference of 3 minutes, which is about the time it takes to reboot an NT server. With an approved ("kosher") SMP configuration, RAID drives, and a gig of RAM, you can get a much better rating. Furthermore, scheduled outages are not included in that downtime calculation, and each machine is to be rebooted at least once a week (I believe the contracts call for daily reboots).
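The uptime-to-downtime conversion is simple arithmetic. Here is a small sketch of it in Python; the percentages are the ones quoted in this thread:

```python
# Convert an uptime percentage into allowed downtime over a period.

def downtime_minutes(uptime_pct, period_minutes):
    """Minutes of allowed downtime for a given uptime percentage."""
    return period_minutes * (100.0 - uptime_pct) / 100.0

WEEK = 7 * 24 * 60       # 10,080 minutes per week
QUARTER = 13 * WEEK      # roughly 3 months

# The 99.97% cluster guarantee works out to about 3 minutes per week:
print(round(downtime_minutes(99.97, WEEK), 1))      # -> 3.0

# A 99.998% figure works out to under 3 minutes per quarter:
print(round(downtime_minutes(99.998, QUARTER), 1))  # -> 2.6
```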

This is significantly different from Linux systems that run online housekeeping without down-time.

Of course, HP, Sun, and IBM also offer 99.999% uptime guarantees with their UNIX systems (HP_UX, Solaris, and AIX respectively). Flagship offers quality-of-service contracts, and many ISPs that host servers on Linux offer similar quality-of-service contracts on those systems.

> There is not
> a single vendor who would make
> the same claim (and the same money back guarantee) for LINUX.

Again, the closest I have seen is quality-of-service guarantees provided by ISPs for Linux-hosted servers.

> > Total Cost of Ownership (TCO). Due partly to the lower cost of
> > Linux, but mostly due to the high reliability and "self sustaining"
> > maintenance scripting, Linux has shown itself to be as much as 1/10th
> > the cost of NT Servers. In the workstation environment, Linux has
> > shown itself, after the first 60 days, to be 1/5 the cost of NT. Most
> > NT TCO studies are limited to 90 days against commercial UNIX systems
> > such as Solaris, AIX, or HP_UX.
> >
> >
> Again, point to real data as opposed to
> LINUX advocate data for TCO studies

Actually, a good reference is the pricing sheets of most ISPs. I use 9Net Avenue as the provider for my private web site. They quote a Linux system at $100/month; the quote on a minimal NT system is nearly $300/month. The Linux system requires fewer resources as a base system and requires less personal attention. They offer similar $100/month contracts on FreeBSD systems.

> of LINUX, I suspect you'll be hard pressed to find any.

Real dollars and cents quotes from people willing to provide service are a pretty good source of TCO numbers for comparable quality of service at comparable cost.
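Those monthly rates translate directly into dollars over a contract term. A quick sketch - the $100 and $300/month figures are the quotes mentioned above, while the three-year term is just an assumption for illustration:

```python
# Hosting cost over a multi-year term, from the quoted monthly rates.

def hosting_cost(monthly_rate, years):
    """Total cost of a hosting contract at a flat monthly rate."""
    return monthly_rate * 12 * years

linux_3yr = hosting_cost(100, 3)   # Linux plan    -> $3,600
nt_3yr = hosting_cost(300, 3)      # minimal NT    -> $10,800
print(nt_3yr - linux_3yr)          # -> 7200 (the three-year difference)
```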

> > Total Benefit of Ownership (TBO). Again,
> > due to the lower cost, high
> > reliability, and low management costs, Linux has been able to
> > pack "more bang per buck" into it's system. A reliable NT
> > configuration requires separate router, firewall, naming service,
> > web server, and database machines. Linux configurations
> > can easily fit all of these functions into a single
> > machine and still run reliably.
>
> ROTFLMAO.
(Translation - Roll on the floor, laughing my a** off).

Let's see - which part is so funny? That, in order to have a secure, reliable, high-performance system, you need a separate firewall (a UNIX system), a separate router (a Cisco/UNIX system), and separate machines for DNS, IIS, and SQL Server? You could throw all of this onto a single NT machine, but performance rapidly deteriorates because the context switching forces it to flush the cache so often. I defer to statements made in Gartner reports, which claim that it takes 5 NT machines to do the work of one Linux or UNIX machine.

Perhaps you dispute my claim that all of these functions can be provided on a single machine. Many users of @Home, the cable-modem system now owned by AT&T, use Linux machines because they share a link with all of the other Windows 95 users on the block.

> > Scalability. NT has a "scalability wall" of about
> > 100 concurrent users per machine.
>
> Funny, I'm posting this from an NT USENET server which takes a 38K
> group feed and regularly supports 250-300 simultaneous users, and that
> is a fairly slimly configured (by today's standards) Pentium Pro
> machine.

Sounds like you're due for a CAL (Client Access License) audit - can't wait to see you get that $15,000 bill from Billy. If your company is public, maybe he'll take it in equity. What's 1% of your company worth - about $1000?

It's not hard to put together a newsgroup server on Linux that runs on a 486/100. Are you also serving web pages? (how many pages/second). Are you also serving ASP pages? (how many rows/second). Are you taking updates? (how many inserts/second).

> > Linux has shown itself to be extremely scalable. Linux can run
> > effectively on an 80386 machine with as little as 8 meg and a 20
> > meg hard drive (using network support). It can be scaled up to
> > Alpha, UltraSparc, or PPC G3 chips with a gigabyte of RAM and both
> > RAID in software and RAID in hardware, including multiple SCSI and
> > network cards (ethernet or ATM).
> >
> > Linux also supports SMP systems of up to 16 processors and can
> > run number
>
> If you are going to say that LINUX scales to 16 processors,
> you are going to have tpo point us at some benchmarks that
> demonstrate this, I think you'll
> find that hard to do. By the same metric (i.e what the kernel can
> theoretcially support) NT can handle 32 processors.

Actually, Linux followed the UNIX lead and focused on SP and MPP technologies such as Beowulf, ACE/TAO, and CORBA. Linux 2.0 did NOT scale well in SMP environments due to its single process table and single spinlock. More advanced SMP configurations required the 2.1 kernel and special recompiles. Even 2.0 requires tuning, since the number of spinlocks needs to be adjusted based on the number of processors (having 8 spinlocks and tables on a monoprocessor system would be wasteful; having 2 spinlocks on a 16-processor system would be messy as well).

Solaris, AIX, and HP_UX have all gone SP and/or MPP. DEC (now Compaq) has always used clustering for scalability. Microsoft offers WolfPack, but the standby system is passive. There are third-party cluster packages (from HP), and there are MP packages such as MQ, but these all boil down to the per-processor capabilities of the base machines.

Since SMP machines are substantially more expensive than single-processor machines, and since there's almost no limit to the number of processors in SP/MPP systems, it's no big surprise that Linux will scale more cheaply than NT. UNIX systems manufacturers have integrated MPP and SP into their own top-line systems such as the Enterprise 10000 (MPP) and the SP2 (SP).

> --
> Nik Simpson

--
Rex Ballard - Open Source Advocate, Internet Architect, MIS Director
http://www.open4success.com


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---