Oracle FAQ: Your Portal to the Oracle Knowledge Grid

Home -> Community -> Usenet -> c.d.o.server -> Re: 10g vs. MS SQL Server

Re: 10g vs. MS SQL Server

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Wed, 12 May 2004 11:47:30 +1000
Message-ID: <40a18227$0$20347$afc38c87@news.optusnet.com.au>


Anna C. Dent wrote:

> http://www.progstrat.com/research/gems/040401rdbmscmcs.pdf

Clearly, their comparisons on the installation front didn't involve having to create a user account, set kernel parameters, create the installation directory structure, integrate database and listener startup into the /etc/rc.d script hierarchy, and other such wonders of the Linux world! No wonder the Oracle installation was 54% less complex than SQL Server's... they missed all the good bits out!

Never mind that I would never assess a database on its ease of installation.

I was also slightly alarmed to read "SQL Server ... is manageable as far as backup and recovery is concerned, but every operation needs to be manually configured". Stone me... a database that requires someone who knows what they are doing in terms of backup and recovery. Whatever next!

On page 18, they claimed to use only bundled software, yet a note admits they used the Oracle Diagnostics and Tuning Pack (which costs extra).

Their 'create database' time savings over SQL Server appear largely to have arisen by virtue of them selecting the 'General Database' template in dbca and keeping all the defaults. Hardly a real-world test.

They allowed their new user to be granted the 'Connect' role by default (big no-no). If they'd not done that and actually thought about what privileges were needed to achieve certain tasks, I rather think their timings might have changed. Quite a good example, too, I thought, of how some of Oracle's defaults are actually bad news, not time savers.
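To make the point concrete, here's a sketch (my own, not from the report) of what 'actually thinking about privileges' looks like; the user name, password, and tablespace are invented for illustration. In 10g Release 1 the Connect role still bundled extras such as CREATE DATABASE LINK and CREATE VIEW, so granting only what the test tasks require is both safer and more honest as a timing exercise:

```sql
-- Hypothetical test user for illustration; grant specific privileges
-- rather than the catch-all CONNECT role.
CREATE USER report_tester IDENTIFIED BY changeme
  DEFAULT TABLESPACE users
  QUOTA 10M ON users;

GRANT CREATE SESSION TO report_tester;  -- permission to log in
GRANT CREATE TABLE   TO report_tester;  -- just what the test needs, no more
```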

They allow the system to define their backup schedule, and hence record a time of 0 seconds. Strange thing is, I'd rather like a say in when and how frequently my valuable data gets backed up.

They compare flashback to before drop with a full-blown SQL Server incomplete recovery, which seems to me a little unfair. (They might, for example, have tested for 'user performs deranged DML' instead of 'user drops table', and tested how long it would take the average DBA to get the date format right for the flashback command! They did go on to do an erroneous-transaction comparison, but proceeded to cheat a little by using the GUI to do the flashback, having previously said the GUI and the CLI were 50-50 in terms of functionality.)
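For the record, the two scenarios they could have separated look like this; a sketch with an invented table name and timestamp, since the report gives neither. The point-in-time flavour is exactly where the date format bites (and, as an aside, it requires row movement to be enabled on the table first):

```sql
-- 'User drops table': restore from the recycle bin; no timestamp needed.
FLASHBACK TABLE orders TO BEFORE DROP;

-- 'User performs deranged DML': wind the table back to a point in time.
-- Getting the format mask right is the part that costs the average DBA time.
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders
  TO TIMESTAMP TO_TIMESTAMP('2004-05-11 09:30:00', 'YYYY-MM-DD HH24:MI:SS');
```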

And I note their tuning tests were not of the 'this report takes 10 minutes currently. Make it run in 5' type, but were simply a test of how easy it was to run the tuning wizard on each database.

And finally, I note the report was dated April 1st.

It's actually quite a good read overall, so thank you for the link. It shows that anything can be conjured from statistics if you select the right ones mindlessly enough. All the report finally proves, I think, is that creating and managing a crappy, badly-designed, select-all-defaults Oracle database is quicker than doing the same thing in SQL Server. I wouldn't want to bet my career or house on that translating into well-designed and -managed databases.

Regards from someone who rather likes Oracle, actually!
HJR

Received on Tue May 11 2004 - 20:47:30 CDT

