Re: Personal Oracle vs MS Sql Server

From: Andrew Gideon <ag22121_at_tagsys.com>
Date: 17 Apr 1999 15:16:59 GMT
Message-ID: <7fa8lb$ta7@dixie.tagsys.com>


>From: "Jon Smirl" <jonsmirl_at_mediaone.com>
>Date: Fri, 16 Apr 1999 21:49:07 -0400
>
>Andrew Gideon <ag22121_at_tagsys.com> wrote in message
>news:7f7r7q$k7t_at_dixie.tagsys.com...
>
>>I've worked with quite a few multigigabyte databases. While one
>>*could* load enough RAM to cache this, that isn't often the
>>case.
>
>Why don't you try it? I'll never understand why people put up with
>complicated disk thrashing problems when two or three thousand dollars worth
>of RAM would avoid the problem. Fixing thrashing is hard; making it go away
>is easy. It may cost $20K of programmer time to fix it; of course that's
>salary money and not a budget item.
>

I didn't mean "one or two" gig. I meant "several hundred". Of course, this doesn't really address your question. A better answer, although not a perfect one, is that we operate under the assumption that, at some point, there'll be more data than memory. So we build the system to work well under those circumstances.

That's not a perfect answer because it doesn't address "why not buy more memory". But it does address why we build software as if we cannot.
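
To make "as if we cannot" concrete, here's roughly the style I mean
(the file name and record layout are invented): the program streams
its input, so memory use stays flat no matter how large the file
grows.

    #!/usr/bin/perl -w
    use strict;

    # Tally an amount per customer from an arbitrarily large
    # tab-delimited file.  Only the current line plus the (small)
    # totals hash ever live in memory, so the file can exceed RAM.
    my %totals;
    open(IN, "< orders.dat") or die "orders.dat: $!";
    while (<IN>) {
        chomp;
        my ($customer, $amount) = split /\t/;
        $totals{$customer} += $amount;
    }
    close(IN);

    foreach my $customer (sort keys %totals) {
        print "$customer\t$totals{$customer}\n";
    }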

It is one of my pet complaints about products in the MS world that they just assume "enough memory". I've an old 486 happily running Linux in a small amount of memory, still doing useful things. That's a museum piece in the MS universe.

It isn't just "use virtual memory", either. As you pointed out, that can cause a lot of thrashing (if that concept isn't redundant {8^). But software can be built to use memory in patterns that preserve locality. That makes paging cheaper, increases the benefit of memory caches, and has other possible benefits as well (especially when you enter the multiprocessor universe).
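
A tiny illustration, with invented sizes (pick one that exceeds RAM):
a sequential walk over a big contiguous buffer faults each page in
once and uses it fully, while random probes into the same buffer pay
the paging cost again on nearly every touch.

    #!/usr/bin/perl -w
    use strict;

    my $size = 512 * 1024 * 1024;        # invented: larger than RAM
    my $big  = "\0" x $size;             # one contiguous buffer

    # Sequential: walk the buffer a page at a time; each page is
    # faulted in once, used, and can then be evicted for good.
    my $sum = 0;
    for (my $i = 0; $i < $size; $i += 4096) {
        $sum += ord substr($big, $i, 1);
    }

    # Random: the same number of touches, but almost every one can
    # land on a cold page, so the fault cost recurs per access.
    srand(42);
    for (my $i = 0; $i < $size / 4096; $i++) {
        $sum += ord substr($big, int rand($size), 1);
    }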

With respect to budgets, hardware is usually "cheaper". It is normally a depreciated item (or its cost is spread some other way, such as leasing). Salaries are pure expense.

But "doing it right" scales better.

>> >On small to medium scale systems I've benchmarked Apache/Perl/Sybase
>> >at 4-10 times the speed of IIS/ASP/MSSQL on the same hardware. Just
>> >for laughs I measured Apache/Perl/Sybase on Win95 and it was twice
>> >the speed of NT. Search in the mod_perl mailing list archives for
>> >some recent benchmark activity that other people have been doing.
>> >
>> You do realize that this is an unfair comparison, in NT's favor? At
>> least, I read this to mean "Perl-implemented CGI programs". A better
>> comparison to ASP is something like mod-perl (or anything else that
>> avoids fork()/exec() costs).
>>
>My tests were done with mod_perl. I believe it was a reasonably fair
>comparison - not like the recent MS one. MS sent a team in to tune a
>quad-xeon system and then benchmarked it against Redhat 5.2. Redhat's
>invitation consisted of an email to tech support. Needless to say MS won the
>'benchmark'.
>
>http://www.zdnet.com/pcweek/stories/news/0,4153,1014383,00.html
>

Yes, I've read about this "comparison". It is yet another example of MS doing what it does so successfully: marketing.

Because of a couple of recent hardware problems on NT machines in our office, I've been dealing with them more than usual. I've also been dealing with a company that supports these things.

It astounds me what NT support "costs", and I'm at a loss to understand how it can be viewed as "cheaper" or "easier" than anything I've seen. But marketing solves all engineering ills.

Anyway, I couldn't tell from your wording whether you'd used mod_perl or CGI. I'm actually sorry that you did it right, in that, had you done it wrong, the "done properly" difference would have been even more dramatic.

[...]
>
>I tested mod_perl pages which included a database access. MS ASP performance
>is terrible compared to mod_perl.
>

Did you preserve the DB connections across requests? Does ASP?
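
(On the mod_perl side, the usual answer is Apache::DBI, which caches
DBI handles per child process. Roughly, with the server name, login,
and table all invented:)

    # In the server's startup.pl, loaded before anything else
    # touches DBI:
    use Apache::DBI;   # transparently caches DBI->connect() handles

    # In any Apache::Registry script or handler afterwards, this
    # hands back the child's cached connection instead of opening
    # a fresh one on every request:
    use DBI;
    my $dbh = DBI->connect('dbi:Sybase:server=SYB', 'user', 'pass',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare('select count(*) from orders');
    $sth->execute;
    my ($count) = $sth->fetchrow_array;
    $sth->finish;
    # No $dbh->disconnect: Apache::DBI makes it a no-op so the
    # handle survives to serve the next request.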

>Future benchmarks should test static and dynamic content independently.
>Almost all current web servers have no problem serving up static data as
>fast as your bandwidth can take it. The real problem is with the dynamic
>pages.
>

Is that true even for an intranet server on a fast (i.e. 100+ Mbit) LAN? Since I don't care much about static-only web serving (it is growing less common with each passing HTTP request), I've never considered this, but you've made me a bit curious about it.
