Re: Efficient way of global concurrency control/serializability in federated databases??
Date: Mon, 09 Oct 2006 02:47:18 GMT
Message-ID: <WKiWg.110381$1T2.19739_at_pd7urf2no>
anonym wrote:
> Hi,
>
> I am preparing an analysis report and need some help.
> Currently, what is the most efficient / most used way of ensuring
> global concurrency control / serializability in federated
> databases/multidatabases ??
>
> Thank you.
>
Let me stick my neck out on this one, based on what was possible ten
years ago, but I doubt anything brilliant has come along since then,
except maybe for pathological cases.

Regardless of whether you wanted to apply update a to db A and update b
to db B, C et cetera, or whether update a was to be applied to dbs A
and B, a so-called two-phase commit protocol was needed. I.e., anything
other than that would risk a being applied and b not being applied, or
vice versa, which would require that manual intervention always be
available. One db is the coordinator; it sends two messages to all the
other dbs. The first is the update along with the question: do you
promise to commit this if I send a second message? If all the
respondent dbs answer yes, the coordinator sends each of them a second
message telling them to go ahead. If the coordinator fails in the
meantime, its own undo log will tell it on the next startup to tell the
other dbs to roll back their part of the update. As far as I know, this
kind of scheme always involved a fair amount of message traffic.
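
To make the message flow concrete, here is a rough Python sketch of
that exchange. The class and method names and the in-memory lists are
just my own invention for illustration; a real federation would send
these messages over the network and write each step to a durable log
before answering, which this toy version leaves out.

class Participant:
    def __init__(self, name):
        self.name = name
        self.pending = None      # update held while waiting for phase two
        self.committed = []      # updates that have been made permanent

    def prepare(self, update):
        # Phase one: receive the update and promise to commit on request.
        self.pending = update
        return True              # vote "yes"; a real db could vote "no"

    def commit(self):
        # Phase two: make the pending update permanent.
        self.committed.append(self.pending)
        self.pending = None

    def rollback(self):
        # Phase two (failure path): forget the pending update.
        self.pending = None


class Coordinator:
    def __init__(self, participants):
        self.participants = participants

    def run(self, update):
        # Phase one: ask every participant to promise.
        votes = [p.prepare(update) for p in self.participants]
        if all(votes):
            # Phase two: everyone promised, so tell them all to go ahead.
            for p in self.participants:
                p.commit()
            return "committed"
        # Someone refused (or failed to answer), so undo everywhere.
        for p in self.participants:
            p.rollback()
        return "rolled back"


if __name__ == "__main__":
    dbs = [Participant("A"), Participant("B"), Participant("C")]
    print(Coordinator(dbs).run("debit account 42 by 100"))

The only point is that nothing becomes permanent anywhere until every
site has first promised it can go ahead, which is exactly where the
extra round of messages comes from.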
A couple of times I suggested application-specific commits, e.g., where
there was some easily stored value, say an account balance, that must
be the same at two different locations; otherwise one or the other is
wrong and cannot be depended on. This was a little better traffic-wise.
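
In the same spirit, here is a hedged sketch of that kind of
application-specific check, written against SQLite with a made-up
accounts table; the idea is only to compare the one value the
application cares about and flag a mismatch for repair, rather than
coordinating every single update.

import sqlite3

def balances_agree(db_a_path, db_b_path, account_id):
    """Return True if both databases report the same balance."""
    balances = []
    for path in (db_a_path, db_b_path):
        conn = sqlite3.connect(path)
        # "accounts" and "balance" are illustrative names only.
        row = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        conn.close()
        balances.append(row[0] if row else None)
    return balances[0] == balances[1]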
It really seems like a law of nature to me that two or more dbs cannot
appear as one without quite a lot of handshaking.
magoo