
Re: Database or store to handle 30 Mb/sec and 40,000 inserts/sec

From: Tony Rogerson <>
Date: Tue, 21 Feb 2006 08:06:05 -0000
Message-ID: <dtehkq$cbp$1$>

You still haven't answered Serge's question about backing up your claims; here are just some of your quotes from this thread:

"inferior implementation"
"Writers block readers and readers block writers in SQLServer. There is no getting around this fundamental issue and because of it SQL Server will always be fundamentally a completely inferior product"

"Well, that isn't an application, so, sure, deadlocks can happen."

"My biggest issue with this is that SQLServer has just implemented this. I have no idea how well it works, and I have no trust that it actually is a solid implementation."

This is what I've taken issue with; let's go back to your quotes and ask you to back up your statements with technical fact:

"inferior implementation"

What parts of the design are inferior, and why? Point to some benchmarks, or at the very least post the SQL you used to test. Note that Daniel has not done this when asked.

Don't post hearsay or gut feeling (probably based on no knowledge).

"Writers block readers and readers block writers in SQLServer. There is no getting around this fundamental issue and because of it SQL Server will always be fundamentally a completely inferior product"

This is completely untrue, which is why I took issue. I am not saying SQL Server is better than Oracle or the other way round; SQL Server has an implementation that is up to date with current thinking and technology, not one based on what the current thinking was 20 years ago.
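The versioning idea behind that claim can be sketched in a few lines. This is a toy illustration of multi-version reads in general, not how either product is actually implemented: writers publish new versions, and readers pick a committed version without ever waiting on a writer.

```python
import threading

class VersionedValue:
    """Toy multi-version store: writers publish new versions;
    readers see a committed version without blocking on writers."""

    def __init__(self, value):
        self._versions = [(0, value)]        # list of (commit_number, value)
        self._commit_no = 0
        self._write_lock = threading.Lock()  # writers serialize among themselves only

    def read(self, as_of=None):
        # Readers take no lock a writer could block on; they just pick
        # the newest version at or before their snapshot point.
        snapshot = self._commit_no if as_of is None else as_of
        for commit_no, value in reversed(self._versions):
            if commit_no <= snapshot:
                return value

    def write(self, value):
        with self._write_lock:               # only writer-vs-writer conflicts
            self._commit_no += 1
            self._versions.append((self._commit_no, value))
            return self._commit_no

store = VersionedValue("v0")
snap = store.write("v1")       # commit 1
store.write("v2")              # commit 2
print(store.read())            # latest committed -> "v2"
print(store.read(as_of=snap))  # reader pinned at commit 1 still sees "v1"
```

The point of the sketch is only that "writers block readers" is a property of a lock-based implementation, not of the isolation semantics themselves.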

Please show me where it is stated that SQL Server 2005 blocks readers and that there is no getting round this fundamental issue.

"Well, that isn't an application, so, sure, deadlocks can happen."

With the deadlocks, again, we danced around a bit there until you finally admitted they can happen in Oracle. I guess Oracle should never be used with anything that might send ad hoc queries against the database; say, a reporting application, or an application with a dynamic search front end...
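The deadlock both sides concede is the classic one: two transactions take the same two locks in opposite orders. Outside any particular engine, the hazard and the standard fix (always acquire locks in one global order) look like this; a sketch with plain threads, not database code:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, done):
    # A deadlock-prone version would take `first` then `second` in whatever
    # order the caller passed. The fix: sort the locks into one global
    # order (here by id) so no two threads hold them in opposite orders.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            done.append(threading.current_thread().name)

done = []
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, done), name="t1")
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, done), name="t2")  # opposite order
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # both threads complete: ['t1', 't2']
```

Ad hoc queries are exactly where a consistent acquisition order cannot be guaranteed by the application, which is why any lock-based engine can deadlock under them.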

"My biggest issue with this is that SQLServer has just implemented this. I have no idea how well it works, and I have no trust that it actually is a solid implementation. "

The beta programme ran around three years (alpha to final beta), and hundreds of thousands of developers, DBAs and others installed and used the beta; probably a lot more, in fact: SQL Server 2005 Express alone had around 40,000 downloads in one month towards the end of last year. A number of significant companies also went live with 2005 at RTM, one being a major retail bank here in the UK, Nationwide Building Society.

Please give me references that back up your 'big issue', aside from Daniel, who hasn't backed up his claim either by posting the DDL and SQL he was using (if he ever did).

As to isolation, SQL Server implements all the transaction isolation levels required by the ANSI standard. I keep asking: does Oracle? Nobody has answered, except with some ranting dissing the ANSI SQL standard (why am I not surprised there; you guys like to keep it complicated so you can't easily move to another platform)!
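For reference, the four ANSI SQL-92 isolation levels are defined purely by which read anomalies each permits, independent of any vendor's locking or versioning implementation. A small lookup (facts from the standard, no product specifics) makes the ladder explicit:

```python
# ANSI SQL-92 isolation levels and the read anomalies each level permits.
PERMITS = {
    "READ UNCOMMITTED": {"dirty read", "non-repeatable read", "phantom read"},
    "READ COMMITTED":   {"non-repeatable read", "phantom read"},
    "REPEATABLE READ":  {"phantom read"},
    "SERIALIZABLE":     set(),  # no anomalies permitted
}

def allows(level, anomaly):
    """True if the given ANSI isolation level permits the given anomaly."""
    return anomaly in PERMITS[level]

print(allows("READ UNCOMMITTED", "dirty read"))  # True
print(allows("READ COMMITTED", "dirty read"))    # False
print(allows("SERIALIZABLE", "phantom read"))    # False
```

Each step down the ladder trades stricter guarantees for more concurrency, which is the whole reason the standard defines more than one level.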

Just because you have been sucked into the Oracle isolation religion, a religion started around 20 or 30 years ago, doesn't mean we should follow. I've been developing for 19 years now; I started out on the mainframe with DB2 for about 5 years. There are plenty of reasons to use each isolation level, but you will always disagree because you follow doctrine and probably don't have any real RDBMS experience aside from Oracle! SQL Server and DB2 people, I'd suggest, implement what the business requires rather than what the Oracle engine wants.

An example of READ UNCOMMITTED would be estimating the cardinality of a column in a million-row table: you want a value between 0.00 and 1.00 indicating uniqueness, where 1 means fully unique; this is the kind of figure an optimiser uses when working out a query plan. You are not after a definitive answer from your data set, just a rough idea of the cardinality.
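The underlying point, that a statistical question tolerates an approximate answer, is easy to demonstrate. (Estimating a distinct-count ratio from a sample has its own biases, so this sketch illustrates the general idea with a simpler statistic: the fraction of rows matching a condition, estimated from a small sample instead of scanning the whole table.)

```python
import random

random.seed(42)

# A million "rows"; roughly 10% are flagged (say, returned orders).
rows = [random.random() < 0.10 for _ in range(1_000_000)]

# Exact answer: full scan of the table.
exact = sum(rows) / len(rows)

# Approximate answer: look at a small sample instead -- the same spirit
# as accepting a dirty read for a statistic that need not be precise.
sample = random.sample(rows, 10_000)
estimate = sum(sample) / len(sample)

print(round(exact, 3), round(estimate, 3))  # both come out near 0.1
```

The sample-based figure is off by a fraction of a percent, which is plenty for an optimiser-style estimate where only the rough magnitude matters.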

Another example, depending on how you implement the queue, is the statistics of a call centre, where the business requirement is just an indication of how many callers are waiting. It doesn't need to be exact, because it's for the wallboard in the call centre, a motivator for the staff to answer calls quicker.
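This wallboard pattern is common enough that Python's own standard library embraces it: `queue.Queue.qsize()` is documented to return an *approximate* count, precisely because producers and consumers may be mid-operation when you ask, and that is all a motivational display needs:

```python
import queue
import threading
import time

calls = queue.Queue()

def caller(n):
    # Simulate callers joining the queue over time.
    for i in range(n):
        calls.put(f"call-{i}")
        time.sleep(0.001)

t = threading.Thread(target=caller, args=(100,))
t.start()

# Wallboard poll while callers are still arriving: qsize() is documented
# as approximate, which is fine for "roughly how many are waiting?"
time.sleep(0.02)
print("waiting (approx):", calls.qsize())

t.join()
print("final:", calls.qsize())  # all 100 enqueued, none answered yet
```

The mid-run reading may be slightly stale by the time it is displayed; the business requirement here tolerates that by design.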

I'm not holding my breath: because you follow doctrine you won't agree with the above, but that's your religion rather than sound technical reasoning based around the business requirements.

Tony Rogerson
SQL Server MVP - free video tutorials

"Galen Boyer" <> wrote in message

> On Mon, 20 Feb 2006, wrote:
>>> On 19 Feb 2006, wrote:
>>>
>>>> When would you ever want to read uncommitted records?
>>
>> Uncommitted read is just fine for anything statistical.
>> When mining a DSS or ODS system there is no need to get exact data.
>
> Okay,
> This doesn't answer what was really my question, which is, when would
> someone want to read uncommitted records _in a transactional
> environment_. I guess the question should have been more exact, but I
> thought the thread was quite explicitly already in that context.
>
>> Whether someone returned a pair of shoes or not is irrelevant for
>> trend analysis.
>> Does Oracle support query sampling? If so, there you go...
>>
>> I find it highly amusing how posters justify isolation levels
>> based on locking behavior.
>> Isolation is semantics, locking is implementation.
>> There are quite viable solutions for READ COMMITTED isolation
>> level which have exactly the same concurrency behavior as
>> Oracle's implementation of Snapshot Isolation.
>> Declaring them worse or inadequate merely by virtue of not being
>> the same is pretty intolerant.
>>
>> I know a bit about Oracle's implementation of snapshot isolation.
>> Apparently there are posters here who believe they can compare
>> it to what MS has delivered. None of them, so far, has justified
>> their claims on lack of scalability (beyond "it's new", "it can't
>> be trusted").
>
> True, and Tony can't justify any of his thoughts other than to say MS
> has caught up with Oracle. We are both working from expertise compiled
> mostly from our own side of the argument. But my arguments have never
> tried to state the inner workings of SQLServer, because I would never
> present myself as an expert at that level; I do know that SQLServer
> has always had the issue that writers block readers and readers block
> writers, while Oracle has never had this problem. This is a fundamental
> issue, and it is still not clear that SQLServer has solved it.
> Why is this? Because one has to ask for this new isolation level that
> Tony is touting as "having caught up with Oracle". If it did catch
> MS up to Oracle, then how come it is not the default? When would anyone
> want writers to block readers or readers to block writers? (Another
> fundamental question I submit to Tony, which applies in any environment,
> transactional or not.)
>
>> Care to cough up some hard facts? Given that SQL Server 2000 is 6
>> years old and any Oracle product that age has been called
>> "neolithic" by some posters in this group, it is much more
>> interesting to compare the here and now than the history of any
>> vendor's perceived shortcomings.
>>
>> So why is SQL Server 2005's implementation of Snapshot isolation bad?
>
> The fact that SQL Server even has to offer this as an optional
> implementation is what shows it to be bad in the first place. This
> should be the default, and no other implementation should be allowed.
> Here is a snippet from an opening paragraph:
>
>   SQL Server 2005 introduces a new "snapshot" isolation level that is
>   intended to enhance concurrency for online transaction processing (OLTP)
>   applications. In prior versions of SQL Server, concurrency was based
>   solely on locking, which can cause blocking and deadlocking problems for
>   some applications. Snapshot isolation depends on enhancements to row
>   versioning and is intended to improve performance by avoiding
>   reader-writer blocking scenarios.
> Sounds almost like what Oracle does right out of the box, and the
> correct behaviour of only showing committed rows. Okay, great! So they
> caught up with Oracle? Having read the basics of the implementation, it
> almost seems like they tried to use tempdb in the same manner that
> Oracle uses rollback segments; XSN is analogous to SCN. Well, how come
> one has to set an isolation level to get this? Why isn't it the default
> that everything else is built on?
> Again, when would one ever want to see uncommitted rows in a
> transactional environment? If the answer is never, then this
> implementation should be the default; but since it isn't, there must be
> some time when a user would want to see uncommitted records in the
> middle of a transaction, or maybe there is some other reason, having to
> do with performance, or scalability, or usability, or ...
> --
> Galen Boyer
Received on Tue Feb 21 2006 - 02:06:05 CST
