Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Avoiding any locks in SQL Servers - read and understand....itsmagic.

From: Guido Stepken <stepken_at_little-idiot.de>
Date: Tue, 19 Aug 2003 00:28:52 +0200
Message-ID: <bhrk1o$cnh$06$1@news.t-online.com>


Brian Peasland wrote:

MVTO: Each transaction T has timestamp TS(T).

Item X has versions X1, ..., Xk.

read_TS(Xi) = largest TS among the transactions that have read Xi

write_TS(Xi) = TS of the transaction that wrote Xi

T writes X: a new version Xk+1 is created, with read_TS(Xk+1) = write_TS(Xk+1) = TS(T)

Rules for MVTO

Transaction T tries to write item X:

        Find i such that
        write_TS(Xi) = max{ write_TS(Xm) : write_TS(Xm) <= TS(T) }

        If read_TS(Xi) > TS(T)
        then abort T; else create a new version Xj
           with read_TS(Xj) := write_TS(Xj) := TS(T)

Abort case: some transaction younger than T (i.e., with a larger timestamp) has already read the version that T would overwrite.
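The rules above can be sketched in a few lines of Python. This is a minimal illustration, not how any particular database implements it: the `Version`, `read`, and `write` names are hypothetical, and a real engine would keep versions on disk and hand out timestamps from a global counter.

```python
# Sketch of the MVTO read and write rules (hypothetical structures).
from dataclasses import dataclass

@dataclass
class Version:
    value: object
    write_ts: int   # TS of the transaction that created this version
    read_ts: int    # largest TS among transactions that have read it

class MVTOAbort(Exception):
    pass

def read(versions, ts):
    """T with timestamp ts reads the version with the largest
    write_ts <= ts, and bumps that version's read_ts."""
    v = max((v for v in versions if v.write_ts <= ts),
            key=lambda v: v.write_ts)
    v.read_ts = max(v.read_ts, ts)
    return v.value

def write(versions, ts, value):
    """T with timestamp ts writes: find Xi with the largest
    write_ts <= ts; abort if a younger transaction already read Xi,
    otherwise install a new version with read_ts = write_ts = ts."""
    xi = max((v for v in versions if v.write_ts <= ts),
             key=lambda v: v.write_ts)
    if xi.read_ts > ts:
        raise MVTOAbort("a younger transaction has read this version")
    versions.append(Version(value, write_ts=ts, read_ts=ts))
```

For example, if transaction 5 reads an item and transaction 3 then tries to write it, transaction 3 is aborted, because the younger reader (TS 5) already saw the version that 3 would overwrite.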

As you can see, a transaction is also aborted if a younger transaction has already read the version it would overwrite. Don't forget: every data item carries several versions, each stamped with the timestamp of the transaction that created it.
You see, it's more complex than you think, but it works without locks and without hurting integrity. Don't misunderstand my expression "time shift": all queries, inserts and updates run at the same time, each working with its own snapshot of the data, its own versions. But after some time the transaction manager looks at the timestamps of all changed data, compares them with the timestamps of the corresponding transactions, and decides which data finally gets written into the database and which versions can be dropped because they have been overwritten. This cleanup of versions and timestamps is the part that is time-shifted: whenever a transaction has finished, the transaction manager cleans up and writes the final data into the database file.
To your question: if data becomes "out of sync", the transaction is simply aborted. It makes no sense to update a dataset that has changed three times in the meantime, after the data was read. On the other hand, lost updates cannot occur, because the transaction manager has control over the versions of all datasets.
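The time-shifted cleanup can also be sketched briefly. The idea: once the oldest still-active transaction has timestamp `oldest_ts`, every version that was already superseded before `oldest_ts` can never be read again and may be dropped. The representation (a list of `(write_ts, value)` pairs) and the `cleanup` name are illustrative assumptions; real engines differ in when and how they reclaim versions.

```python
# Hypothetical sketch: garbage-collect versions of one item that no
# active transaction can still see. versions is a list of
# (write_ts, value) pairs; oldest_ts is the timestamp of the oldest
# transaction still running.
def cleanup(versions, oldest_ts):
    versions = sorted(versions)  # order by write_ts
    # Index of the newest version still visible to the oldest
    # active transaction; everything older than it is dead.
    keep_from = max(i for i, (wts, _) in enumerate(versions)
                    if wts <= oldest_ts)
    return versions[keep_from:]
```

So with versions written at timestamps 0, 3 and 7, and the oldest active transaction at timestamp 5, the version from timestamp 0 can be dropped: any reader at TS >= 5 sees the version from 3 or 7.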

regards, Guido Stepken

> Please educate me then. Without locks, and use "time shifting", then a
> user gets the data that has been changed most recently. What happens if
> the user starts his transaction before this data was changed and then
> accesses this data after another user changes it? Which time-shifted
> variation does the user see? The one when the user started their
> transaction or the one that was changed sometime after their transaction
> started. If it is the latter, then this can lead to the user using data
> that is "out of sync" with the start of their transaction. If it is the
> former, then how does the system keep its overhead low while maintaining
> all of these time versioned records all in a highly-concurrent
> environment?
>
> Thanks,
> Brian
>
Received on Mon Aug 18 2003 - 17:28:52 CDT

