Re: efficient compare
Date: Sat, 22 Apr 2006 13:54:23 GMT
Message-ID: <jAq2g.64138$VV4.1205107@ursa-nb00s0.nbnet.nb.ca>
Andersen wrote:
>> As an example of the importance of physical structure, where do you
>> intend to evaluate this result? At the computer containing A, c(A), at
>> the computer containing B, c(B), or at some other computer c(C)?
>
> I really want to do unification between A and B. I.e., after A and B
> exchange some information, they should each have (A union B) locally.
> The only computers in the world are A and B, there is a network between
> them, and we want to minimize traffic. Local computations on A and B
> take almost zero time (we'll exclude solutions that use some fancy,
> extremely costly fractal compression or anything of that sort).
>
>> As another example, what is the maximum size of a single datagram
>> passed over the network and what is the size of the representation of
>> the tuples? I can think of an optimization that would improve
>> performance if two or more tuples fit in a single datagram.
>
> MTU=1500?
In the general case, knowing the MTU alone is insufficient; one must also know the size of the representation of the tuples. In the replication case, however, this becomes unimportant. Use the logs.
>> If you are trying to minimize traffic between the computers, then
>> presumably the cost of any sorts or index creation on the computers
>> matters less. But then again, one would still have to weigh the
>> expected savings against the costs.
>
> Sorry, my assumption was that index creation and things of that sort are
> definitely allowed (I assumed the solution would involve something like
> that).
It won't matter in the replication case. Use the logs.
>> Do you envision this as part of a distributed database? Some sort of
>> replication architecture? To perform a merge-purge of mailing lists?
>> Simply to reconcile two similar but independent databases?
>
> Some kind of replication architecture.
Then use the logs as I already suggested.
>> Since the size of the intersection will be relatively small and
>> because the dbmses will have to reconcile updates temporally, it
>> probably makes sense to just share log files from the previous
>> checkpoint forward. Compressing the log files would be your primary
>> efficiency opportunity.
>
> I want to be able to use the same algorithm to sync nodes A and B where
> one of them has not been around at all, or has a very outdated database.
> But since the sync is done periodically, most of the time the computers
> should be in sync. So another assumption is that if A = B, then the
> algorithm should be very efficient.
If nothing has changed in either A or B, the logs from the last sync forward will be empty. In this case A=B, and the algorithm will be very efficient.
I suppose, as an optimization, one could first send over the smaller log and then send back an edited log of just the changes required. Suppose c(A) sends a request to sync to c(B) and includes the size of the logs for A. c(B) then compares that figure with the size of the logs for B. If the logs of B are smaller, c(B) responds to the request with the logs of B. If the logs of B are larger, c(B) requests the logs from A.
At this point, either c(A) or c(B) has all of the information. Suppose c(B) has all of the information. It detects all of the changes required. It applies the changes required to B and sends a log of the changes required for A back to c(A). c(A) then applies those changes and the sync is complete.
If A and B had identical changes since the last sync, c(B) will detect that no changes are required and send c(A) an empty log of changes.
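To illustrate the exchange, here is a minimal sketch in Python. The (key, value) record format, the merge_logs() helper, and the in-process "transport" are assumptions of mine for illustration, not any dbms's actual replication API:

    # Sketch of the size-comparison sync described above. A log is a
    # list of (key, value) change records since the last checkpoint.

    def merge_logs(have, received):
        # Changes present in `received` that `have` is missing.
        seen = set(have)
        return [entry for entry in received if entry not in seen]

    def sync(log_a, log_b):
        # c(A) sends a sync request to c(B) carrying len(log_a).
        if len(log_b) <= len(log_a):
            # B's log is smaller: c(B) answers with its whole log, so
            # c(A) holds all the information and performs the merge.
            apply_at_a = merge_logs(log_a, log_b)
            apply_at_b = merge_logs(log_b, log_a)  # shipped back to c(B)
        else:
            # A's log is smaller: c(B) requests it, does the merge, and
            # sends back an edited log of just the changes A requires.
            apply_at_b = merge_logs(log_b, log_a)
            apply_at_a = merge_logs(log_a, log_b)  # shipped back to c(A)
        return apply_at_a, apply_at_b

    # Identical changes since the last sync: both edited logs are empty.
    print(sync([("x", 1)], [("x", 1)]))            # ([], [])
    # B missed an update that A logged.
    print(sync([("x", 1), ("y", 2)], [("x", 1)]))  # ([], [('y', 2)])

The size comparison only decides which log crosses the network whole; whichever side receives it ends up with all of the information and sends back nothing more than the edited log.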
>> Pardon me for observing that it sounds like a question or essay topic
>> for a course of some sort. People working for a dbms vendor
>> implementing some sort of distributed database or replication feature
>> would tend to keep abreast of the state of the art using much better
>> sources than usenet.
>
> I find asking on usenet much better, as it would maybe take me years to
> gain the experience that a particular person has in a field. I have
> Ullman/Garcia-Molina's database book, but it would take me a while
> to dig through (I did take database courses many years ago in my
> undergrad education).
None of what I wrote above required years as an implementer of dbmses. I simply reasoned it out on my own. The only item of consideration missing from your original question was the almost certain existence of the files that log all changes to the dbms.
I suspect the state of the art lies in merging the two log files to determine the necessary changes, and in the potential to abbreviate both log files before the process begins. If the same value changes twice in a dbms, one might be able to get away with tossing out the first change and keeping only the last change. Triggered procedures, of course, will complicate this.
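To make the abbreviation concrete, here is a toy sketch in Python. The (key, value) record format is again an assumption of mine, and as noted this is only safe when no triggered procedure depends on the intermediate values:

    # Abbreviate a change log: if the same key changes more than once,
    # toss out everything but the last change. Entries are (key, value)
    # pairs in commit order -- a made-up format for illustration only.

    def abbreviate(log):
        last_index = {key: i for i, (key, _) in enumerate(log)}
        return [entry for i, entry in enumerate(log)
                if last_index[entry[0]] == i]

    print(abbreviate([("x", 1), ("y", 2), ("x", 3)]))
    # [('y', 2), ('x', 3)] -- the first change to x was tossed out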