
Newsgroups: comp.databases.theory

Re: Concurrency in an RDB

From: Sampo Syreeni <>
Date: Sat, 16 Dec 2006 03:25:53 +0200
Message-ID: <>

On 2006-12-15, David wrote:

> Note that OT is not a locking protocol. It allows for multiple users
> with no locking at all.

So what do you mean when you say "locking"? Delay? That isn't necessarily present. Exclusion? Neither. Any precaution at all? Well, sure, but then timestamping is a sort of precaution too, as is restricting the structure of the permissible write transactions. Conversely, OT takes action after the fact, which the so-called locking protocols supposedly don't, yet all of the optimistic, non-locking concurrency protocols eventually do the same. I've already pointed out that the protocols relying on open transactions, semantic locking and higher level compensation do so as well; only sometimes they don't.

I'd say the partition into locking and non-locking protocols is hazy at best.
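To make the haziness concrete, here's a minimal sketch of my own (not anything David posted): a version-checked "optimistic" write. It never delays a writer, yet it still excludes one of two conflicting writers; it merely does so after the fact rather than up front. Note the irony that the check itself is guarded by a mutex, which is an assumption of this sketch, not a claim about any particular system.

```python
# Illustrative sketch only: an "optimistic" cell that rejects stale
# writes after the fact instead of delaying writers up front.
import threading

class VersionedCell:
    def __init__(self, value):
        self.value, self.version = value, 0
        self._guard = threading.Lock()  # protects only the version check

    def read(self):
        with self._guard:
            return self.value, self.version

    def try_write(self, new_value, expected_version):
        with self._guard:
            if self.version != expected_version:
                return False          # conflict: caller must re-read and retry
            self.value = new_value
            self.version += 1
            return True

cell = VersionedCell("a")
val, ver = cell.read()
assert cell.try_write("b", ver)       # first writer succeeds
assert not cell.try_write("c", ver)   # stale version: excluded after the fact
```

No writer ever waits, so is this "locking"? By the delay criterion, no; by the exclusion criterion, yes, eventually. That is the partition I find hazy.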

> OT imposes strong restrictions on the integrity constraints. It is not
> permissible to nullify an operation once it has been generated.

What do you mean by this?

> This limits the applications for which OT is suitable. E.g. don't use
> OT for reserving seats on an aeroplane.

In a really small aeroplane company it is possible that seats are reserved by editing a shared text file. It is claimed that OT is suitable for shared editing of text files. Hence, it must be suitable for seat reservation under at least some conditions. Can you elaborate on what the conditions are, precisely?
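To show why I'm asking, here's a toy sketch of the failure mode I'd expect (entirely my own construction, with a naive offset-shifting transform and made-up site identifiers): two clients concurrently grab the same seat by inserting a line into the shared file. The transform makes both replicas converge on the same text, so at the OT level there is no conflict at all, yet the application invariant (one passenger per seat) is violated.

```python
# Illustrative sketch: a naive insert/insert transform over a shared
# seat-list text file. Ops are (offset, text, site_id); ties break by
# site id. The text converges, but the reservation invariant does not.

def transform_insert(op_a, op_b):
    """Adjust op_a as if the concurrent op_b had been applied first."""
    off_a, text_a, site_a = op_a
    off_b, text_b, site_b = op_b
    if off_a > off_b or (off_a == off_b and site_a > site_b):
        return (off_a + len(text_b), text_a, site_a)
    return op_a

def apply_op(doc, op):
    off, text, _site = op
    return doc[:off] + text + doc[off:]

doc = "12A: free\n"
alice = (0, "12A: Alice\n", 1)   # both clients claim seat 12A concurrently
bob   = (0, "12A: Bob\n", 2)

# Replica 1 applies alice first, then bob transformed against alice.
r1 = apply_op(apply_op(doc, alice), transform_insert(bob, alice))
# Replica 2 applies bob first, then alice transformed against bob.
r2 = apply_op(apply_op(doc, bob), transform_insert(alice, bob))

assert r1 == r2                  # replicas converge on identical text...
assert r1.count("12A:") == 3     # ...with seat 12A now listed three times
```

So "suitable for shared editing of text files" clearly isn't sufficient on its own; I'd like to know which extra conditions on the data or the operations rule this out.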

> However, collaborative, decentralised management of a company's
> geological data would be reasonable.

I don't really understand this either. The only fleshed-out OT protocol I've seen thus far concerns text files, and it rejects/annuls any pair of transactions touching the same file offset and symbol within a network round-trip time. A typical geological dataset would be composed of far more numerous data points, true, so per-update write contention would be lower. But normalizing for size, I would imagine that the dataset would be even more rigid and unforgiving towards this sort of compensation, because it does not possess the global symmetry that a text file (an element of a string monoid) does. Instead its rigidity, born of structural/semantic asymmetry, would probably give rise to more edit conflicts, and hence more annulled transactions per granule of time, than for a long string.

At least this is what happens for your average, structured document under the normal timestamped replication protocols. I've never seen any way around the problem other than to exploit higher level semantics.
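As a hedged illustration of what I mean (again my own sketch, with invented field names, not anyone's actual replication scheme): under naive field-wise last-writer-wins timestamping, two concurrent edits to the same field of a structured record cannot be merged positionally the way concurrent inserts into a string can, so one edit is silently discarded.

```python
# Illustrative sketch: field-wise last-writer-wins merge over records of
# the form {field: (timestamp, value)}. Concurrent edits to the same
# field leave no room for positional merging: the earlier write is lost.

def lww_merge(a, b):
    merged = {}
    for field in a.keys() | b.keys():
        candidates = [rec[field] for rec in (a, b) if field in rec]
        merged[field] = max(candidates)        # highest timestamp wins
    return merged

base  = {"porosity": (0, 0.12), "depth_m": (0, 830)}
site1 = dict(base, porosity=(1, 0.15))         # analyst 1 updates porosity
site2 = dict(base, porosity=(2, 0.09))         # analyst 2 does too, later

merged = lww_merge(site1, site2)
assert merged["porosity"] == (2, 0.09)         # analyst 1's edit vanishes
assert merged["depth_m"] == (0, 830)           # untouched field survives
```

Getting anything better than this requires knowing what a "porosity update" means, i.e. exploiting higher level semantics.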

Sampo Syreeni, aka decoy -, tel:+358-50-5756111
student/math+cs/helsinki university,
openpgp: 050985C2/025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
Received on Fri Dec 15 2006 - 19:25:53 CST
