
Re: Database or store to handle 30 Mb/sec and 40,000 inserts/sec

From: Galen Boyer <galen_boyer_at_yahoo.com>
Date: 20 Feb 2006 19:00:29 -0600
Message-ID: <uirr91hl4.fsf@rcn.com>


On Mon, 20 Feb 2006, srielau_at_ca.ibm.com wrote:
> On 19 Feb 2006, galen_boyer_at_yahoo.com wrote:
>
>> When would you ever want to read uncommitted records?
> Uncommitted read is just fine for anything statistical.
> When mining a DSS or ODS system there is no need to get exact data.

Okay,

This doesn't answer my real question, which is: when would someone want to read uncommitted records _in a transactional environment_? I should have phrased the question more precisely, but I thought the thread was already quite explicitly in that context.
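
To make the question concrete, here is a sketch (with a hypothetical accounts table) of what reading uncommitted records looks like in SQL Server when a session asks for it:

     -- Session 1: change a row but do not commit yet
     BEGIN TRANSACTION;
     UPDATE accounts SET balance = balance - 100 WHERE id = 1;
     -- ... no COMMIT or ROLLBACK yet ...

     -- Session 2: explicitly allow dirty reads
     SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
     SELECT balance FROM accounts WHERE id = 1;
     -- sees the reduced balance even though Session 1 may still roll back

If Session 1 rolls back, Session 2 has made a decision based on data that never logically existed. I can't see when you would want that in a transactional system.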

> Whether someone returned a pair of shoes or not is irrelevant for
> trend analysis.
> Does Oracle support query sampling? If so, there you go...
>
> I find it highly amusing how posters justify isolation levels
> based on locking behavior.
> Isolation is semantics, locking is implementation.
> There are quite viable solutions for READ COMMITTED isolation
> level which have exactly the same concurrency behavior as
> Oracle's implementation of Snapshot Isolation.
> Declaring them worse or inadequate merely by virtue of not being
> the same is pretty intolerant.
>
> I know a bit about Oracle's implementation of snapshot isolation.
> Apparently there are posters here who believe they can compare
> it to what MS has delivered. None of them, so far, has justified
> their claims of a lack of scalability (beyond "it's new, it
> can't be trusted").

True, and Tony can't justify his position either, beyond saying that MS has caught up with Oracle. We are both arguing from expertise built up mostly on our own side of the debate. But my arguments have never tried to describe the inner workings of SQLServer, because I would never present myself as that level of an expert on it. What I do know is that SQLServer has always had the problem that writers block readers and readers block writers, while Oracle has never had that problem. This is a fundamental issue, and it is still not clear that SQLServer has solved it. Why do I say that? Because one has to explicitly ask for this new isolation level that Tony is touting as "having caught up with Oracle". If it did bring MS up to Oracle's level, how come it is not the default? When would anyone want writers to block readers or readers to block writers? (Another fundamental question I submit to Tony, and it applies to any environment, transactional or not.)
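
Here is a sketch of that fundamental difference, again with a hypothetical accounts table and nothing configured beyond the defaults on either product:

     -- Session 1: update a row and leave the transaction open
     -- (BEGIN TRANSACTION is SQLServer syntax; in Oracle the UPDATE alone
     -- starts the transaction)
     BEGIN TRANSACTION;
     UPDATE accounts SET balance = balance - 100 WHERE id = 1;

     -- Session 2 on SQLServer (default locking READ COMMITTED): this SELECT
     -- blocks on Session 1's row lock until Session 1 commits or rolls back
     SELECT balance FROM accounts WHERE id = 1;

     -- Session 2 on Oracle (default READ COMMITTED): the same SELECT does not
     -- block; it returns the last committed balance, rebuilt from undo
     SELECT balance FROM accounts WHERE id = 1;

One product has the writer blocking the reader out of the box; the other does not.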

> Care to cough up some hard facts? Given that SQL Server 2000 is 6
> years old and any Oracle product that age has been called
> "neolithic" by some posters in this group, it is much more
> interesting to compare the here and now than the history of any
> vendor's perceived shortcomings.
>
> So why is SQL Server 2005's implementation of Snapshot isolation bad?

The fact that SQL Server even has to offer this as an optional implementation is what shows it to be bad in the first place. This should be the default, and no other implementation should be allowed.

Here is a snippet from an opening paragraph:

     SQL Server 2005 introduces a new "snapshot" isolation level that is
     intended to enhance concurrency for online transaction processing (OLTP)
     applications. In prior versions of SQL Server, concurrency was based
     solely on locking, which can cause blocking and deadlocking problems for
     some applications. Snapshot isolation depends on enhancements to row
     versioning and is intended to improve performance by avoiding
     reader-writer blocking scenarios.

That sounds almost like what Oracle does right out of the box, and like the correct behavior of showing only committed rows. Okay, great! So they caught up with Oracle? Having read the basics of the implementation, it seems they use tempdb much the same way Oracle uses rollback segments, and the XSN is analogous to the SCN. Well, how come one has to set an isolation level to get this? Why isn't it the default that everything else is built on?
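
For anyone who has not looked at it, the opt-in in SQL Server 2005 looks roughly like this (database name hypothetical); none of it exists in Oracle, because this is simply how Oracle reads work:

     -- Enable snapshot isolation for the database (it is off by default)
     ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

     -- Each session that wants it must then ask for it explicitly
     SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

     -- A separate option makes plain READ COMMITTED use row versioning
     -- instead of shared locks
     ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;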

Again, when would one ever want to see uncommitted rows in a transactional environment? If the answer is never, then this implementation should be the default. But since it isn't, there must be some time when a user would want to see uncommitted records in the middle of a transaction, or maybe there is some other reason, having to do with performance, or scalability, or usability, or ...

-- 
Galen Boyer
Received on Mon Feb 20 2006 - 19:00:29 CST
