Re: Concurrency in an RDB
Date: 23 Dec 2006 17:58:32 -0800
Message-ID: <1166925512.486987.10450_at_i12g2000cwa.googlegroups.com>
On Dec 23, 5:08 pm, "David" <davi..._at_iinet.net.au> wrote:
> Marshall wrote:
>
> > I'm not sure if this is a trivial rephrasing of what you just
> > said or an actual disagreement, but no, I would not agree
> > that they "shouldn't" be enforced on every update.
> > However I would agree that there may be practical limitations
> > on so doing, such as the amount of time necessary to check
> > the constraint. Any constraint that can practically be checked
> > on every update should be.
>
> You suggest that performance is the only issue at stake. In examples
> like the above, a verification failure often points to an error in
> *previously* committed changes. Software development is a good example
> of a "non-monotonic" process. Sometimes you need to commit a change
> that will temporarily break the integrity of the system.
>
> Now you could argue that the user should be forced to make all the
> changes necessary for the DB to atomically change from one valid state
> to the next. However in some domains that could lead to long running
> transactions that take hours, days or even months to complete.
How is that not exactly and precisely a performance argument?
> > > I have the impression (please correct me if I'm wrong) that your
> > > assumption that a DB should always be in a valid state is coloured by
> > > the (relational) problems that you have expertise in.
>
> > Certainly this is always true for everyone. However I am having
> > a hard time seeing the value of your approach given how much
> > less it lets us count on the dbms.
> Is it really a problem? A workflow can easily force a user to run the
> verification as part of the process of using the DB within a system.
>
> As an example, good software companies have a release engineering
> workflow that ensures that the release has passed various unit tests,
> regression tests, etc., before it can be released. It goes without saying
> that it must compile successfully.
In my experience, constraints hold if and only if they are centrally
enforced. Those constraints that everyone knows of and which are
supposed to be enforced in application code are a distant memory,
victims of broken windows syndrome.
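To illustrate the difference (a minimal sketch with a made-up schema,
using SQLite for brevity): a constraint declared in the schema is
enforced by the DBMS on every update, regardless of which code path
issues it, whereas an application-level check only holds as long as
every programmer remembers to write it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)  -- centrally enforced
    )
""")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100)")

# Any code path that would violate the constraint is rejected by the
# DBMS itself; no application-side discipline is required.
try:
    conn.execute("UPDATE account SET balance = -50 WHERE id = 1")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True

print(enforced)  # True: the invalid state never reaches the database
```

The point is not SQLite specifically but the locus of enforcement: the
check lives with the data, so it cannot quietly rot the way a check
scattered across application code can.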
> > I'm also unclear on how much
> > I have to give up in the way of integrity enforcement. I'm having
> > a hard time building a mental model for that. Your intent only to
> > speak at a high level somewhat exacerbates this difficulty.
>
> > Hmm, I just had an interesting idea. Perhaps the issues your
> > idea raises could be dealt with as a "quality of service" issue.
> > Where one needs strict durability, one could so specify externally
> > to the application.
>
> > This is a bit tricky because of the question of guarantees of
> > desirable properties. One area I'm interested in is
> > static analysis, and that's entirely dependent on building
> > theorems from known properties of a system. Weakening
> > those properties might render some analysis techniques
> > unsound.
>
> Examples of that would be relevant to this discussion.
How ironic that now you are asking me to be specific.
Hrm. Well, my understanding of the limits of your proposal
remains sketchy. However, if I understand you correctly, when
transaction A and transaction B each issue an update in an
incompatible way, the system picks one and discards the other,
and the code that issued the discarded update is not notified
of this in a timely manner. Is that correct? Can we characterize
precisely what kinds of constraints are still possible, and which
are lost? I didn't see that clarified; I apologize if I missed it.
(I recall the phrase "complex constraints" but I don't recall
a specific definition of what that was.)
Marshall

Received on Sun Dec 24 2006 - 02:58:32 CET
