Re: Linux betas NT in TPC testing, running Oracle8

From: <r.e.ballard_at_usa.net>
Date: Thu, 13 May 1999 16:22:35 GMT
Message-ID: <7heu86$eq6$1_at_nnrp1.deja.com>


In article <7hbfne$7q8$1_at_fafnir.cf.ac.uk>,   Phillip.Fayers_at_astro.cf.ac.uk wrote:
> In article <7hatok$79i$1_at_nnrp1.deja.com>, r.e.ballard_at_usa.net writes:
> >In article <V8#ifV9m#GA.181_at_pet.hiwaay.net>,
> > "nik" <ndsimpso_at_ingr.com> wrote:
> >>
> >> r.e.ballard_at_usa.net wrote in message <7guvu2$bi9$1_at_nnrp1.deja.com>...
> >> >In article <m1emkwpwy5.fsf_at_inconnu.isu.edu>,
> >> > Craig Kelley <ink_at_inconnu.isu.edu> wrote:
> >> >> r.e.ballard_at_usa.net writes:
> ...

>

> >> >It's rather interesting that when you
> >> >Compare Linux to NT using nearly any
> >> >metric other than training expense in
> >> >the first 60 days, Linux comes up a
> >> >clear winner. This goes beyond simple benchmarks as well.
>

> Who is doing the comparison?
>

> >> > Web Server Benchmarks - Linux carries as
> >> > much as 8 times the capacity
> >> > of Windows NT. At very low levels, NT gives
> >> > slightly better response times, but Linux/Apache
> >> > response time is nearly flat or linear while NT
> >> > deteriorates at an exponential rate.
>
> >> Other than you're opinion, what actual real data
> >> like SPECWeb results do you
> >> have to back this claim up?
>
> >I had some in the archives. I'll look them up.

I went back and found a few - the www.silkwood.com archives, for one.

There have also been several tests between Apache/Linux and IIS/NT.

> Try looking at:
>
> http://www.zdnet.com/pcweek/stories/news/0,4153,401970,00.html

>

> This is a ZDNet comparison of NT, Netware,
> Linux and Solaris on as near
> as possible the same hardware.

But not necessarily under the same operating conditions. If the default configurations are used, Linux would be running CGI (a fork and exec for every request) while NT handles requests in threads. A comparable test would use Apache modules, which run inside already-forked worker processes and never have to exec.
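
For anyone who wants that distinction spelled out, here is a rough C sketch of the per-request cost being described. The function names and the CGI script path are my own invention for illustration; this is not Apache or IIS source.

/* Per-request cost of CGI (fork + exec) versus an in-process handler.
 * run_cgi() and run_module() are illustrative names, not Apache APIs. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* CGI path: every single request pays for a fork() AND an exec(). */
static void run_cgi(const char *script)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl(script, script, (char *)NULL);   /* load a new program image */
        _exit(127);                            /* exec failed */
    }
    waitpid(pid, NULL, 0);                     /* reap the CGI process */
}

/* Module path: the handler is already loaded inside the (pre-forked)
 * worker process, so a request is just a function call. */
static void run_module(void)
{
    puts("Content-Type: text/plain\r\n\r\nhello from an in-process handler");
}

int main(void)
{
    run_cgi("/usr/lib/cgi-bin/hello.cgi");     /* hypothetical script path */
    run_module();
    return 0;
}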

> In the WebBench tests Linux came in
> last, by a reasonable margin.

Again, that was Linux CGI vs NT ISAPI. Either run Linux CGI against NT CGI, or Linux (Apache) modules against NT ISAPI.

I'd like to point out that a UNIX/Linux programmer would usually run a very "thin" module that connects to a separate server process, either via a UNIX domain socket or via some other IPC mechanism.
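
A minimal sketch of that "thin module" pattern, assuming a long-lived back-end already listening on a UNIX domain socket. The socket path and the function name are made up for illustration:

/* Thin front-end handler: relay the request to a persistent back-end
 * over a UNIX domain socket instead of doing the work in-process. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int forward_to_backend(const char *request, char *reply, size_t replylen)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/app-backend.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    write(fd, request, strlen(request));        /* hand the work off */
    ssize_t n = read(fd, reply, replylen - 1);  /* wait for the answer */
    reply[n > 0 ? n : 0] = '\0';
    close(fd);
    return 0;
}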

> In NetBench (file serving via samba for
> Solaris and Linux) Linux and Solaris were roughly the same, last behind
> NT and Novell.

The key here is that NT caches the WINS lookups and SMB buffers, while the default for SAMBA is to force fresh lookups (which is better for large networks, where NFS back-ends could require frequent synchronization).

Remember that SAMBA can export file systems that are shared by hundreds of users and by processes other than SAMBA itself. As a result, SAMBA is typically more paranoid about keeping file-system integrity. In a NetBench run, SAMBA would be the only program touching those files, but SAMBA's default settings assume that anything could change at any time.

The NT File server assumes that the server process is the only process, and that the only files being shared are those on the local hard drives.
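
To make that concrete, here is the sort of smb.conf share a benchmarker might set up to tell SAMBA to stop being paranoid. The option names are real Samba 2.x parameters, but the share and the values are only an illustration of the trade-off, not the configuration ZDNet used:

[bench]
   path = /export/bench
   read only = no
   # trust that no other process touches these files during the run
   oplocks = yes
   strict locking = no
   # let the OS cache writes instead of forcing them to disk
   strict sync = no
   sync always = no
   # use the larger "raw" SMB reads and writes
   read raw = yes
   write raw = yes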

> The bad Solaris score was due to an extremely slow
> file rename time,

This would make sense. Solaris doesn't do write-behind, which means that updates to directories, inodes, and allocation tables must be completed in real time. A cached update takes about 200 microseconds, while the physical update takes about 75 milliseconds. In the real world, Sun systems are usually set up with RAID drives that carry as much as 16 megabytes of cache; if the power fails, the RAID drive flushes its cache immediately.

NT caches writes within the operating system. If a power failure brings NT down, the file system may become corrupted.

Linux strikes a compromise: it caches writes, but flushes them frequently.
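
The cost difference being described can be sketched with ordinary POSIX calls: a buffered write returns as soon as the data is in the OS cache, while O_SYNC (or an explicit fsync) does not return until the disk, or the RAID controller's cache, has accepted it. The file names below are made up:

/* Write-behind versus synchronous writes, in miniature. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    memset(buf, 'x', sizeof(buf));

    /* Write-behind: cheap per call; the data sits in the page cache
     * and is lost if the machine dies before the kernel flushes it. */
    int cached = open("/tmp/cached.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(cached, buf, sizeof(buf));
    close(cached);                      /* kernel writes it out later */

    /* Synchronous: each write blocks until the update is on stable
     * storage, which is where the ~75ms-per-update figure comes from. */
    int synced = open("/tmp/synced.dat",
                      O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    write(synced, buf, sizeof(buf));    /* returns only when the I/O is done */
    fsync(synced);                      /* also flush the metadata */
    close(synced);
    return 0;
}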

> NT and Solaris were both about 4 times faster than
> Linux on file reads.

This may be due to the default settings that discourage read-ahead caching. In the early days, Linux was often used to bridge NFS, Netware IPX, and SMB filesystems and export them as a single SMB "drive". As a result, Linux must assume that a file could be modified or removed at any time.

This bridging feature isn't even available under NT.
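
For illustration of the read-ahead point above: this is the kind of hint an application can give the kernel when it knows it will scan a file sequentially. posix_fadvise() is a later POSIX call, not something SAMBA or NT were using in 1999; it only shows the idea of enabling aggressive read-ahead when it is safe to assume the file won't change underneath you.

/* Ask the kernel to prefetch ahead of a sequential scan. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

int read_sequentially(const char *path)
{
    char buf[65536];
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    /* Declare the access pattern so the kernel can read ahead of us. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    while (read(fd, buf, sizeof(buf)) > 0)
        ;   /* consume the file; most reads hit already-prefetched pages */

    close(fd);
    return 0;
}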

> >To be fair, this 8x performance gain was on a single processor machine
> >with 128 meg and 2 hard drives. NT was thrashing its brains out
> >while Linux purred.

>

> You can always find a benchmark which will show your
> particular platform in a good light.

I think that was the original point of this thread.

There are very few benchmarks that have the rigor of the TPC benchmarks, but when "informal" TPC benchmarks for Linux are announced, the publisher is threatened with a lawsuit.

Linux has been around, and has had SQL databases, for nearly 6 years now. In that time, nearly 10 TPC-B, TPC-C, and TPC-D style benchmarks have been run and then silenced. To this day, there are absolutely no publishable results.

Linux and UNIX are very similar, and Linux actually benchmarks very well compared to other systems. Linux currently does NOT do well when using BOTH SMP (4 or more processors) AND hardware RAID. Essentially, disk writes eventually force the processors into a queue. Since the Linux kernel isn't allowed to "assume" that a write was successful, it will keep the buffer until the successful write is confirmed.

Linux/UNIX and NT are radically different programming environments. There are things one must do to get good performance out of NT that would be wasteful and inefficient on a Linux/UNIX system. Furthermore, Linux/UNIX programmers assume that their applications must be scalable and therefore do more integrity checking than NT programmers - who often assume that their process is the only one on the box.

Assuming that Linux even comes close to the TPC numbers produced by SCO, Solaris 7, or other UNIX systems, and still provides the "no royalties" solution, it's likely that Linux will not set speed records, but WILL set "Bang for the Buck" records.

Microsoft has found out that Linux has a weak spot (SMP with RAID) and wants to play that particular scenario for all it's worth. Meanwhile, the Linux community has focused on more scalable technologies such as SP, MPP, Beowulf, and CORBA.

Microsoft is still trying to weave MTS, DCOM/COM+, and MSMQ into a coherent (homogeneous) system, and still has to add load balancing, asynchronous processing, redundancy, and fault tolerance.

> --
> Phillip Fayers, SunAdmin/Support/Programming/Postmaster/Webmaster(TM)
> Dept of Physics & Astronomy, University of Wales, College of Cardiff.
> P.Fayers_at_astro.cf.ac.uk Attribute these comments to me, not UWCC.

--
Rex Ballard - Open Source Advocate, Internet Architect, MIS Director
http://www.open4success.com


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---