Re: Canonical DB
Date: Fri, 23 Jun 2006 00:05:42 +0200
Message-ID: <449b13ca$0$31652$e4fe514c_at_news.xs4all.nl>
Dmitry A. Kazakov wrote:
> mAsterdam wrote:
>>Robert Martin wrote:
>
>>>Things like tree searches, graph walks, etc.
>>
>>No, tree searches and graph walks are things
>>you *need* to do (and specify) when all you have
>>is navigational structures. They are part of their cost.
>>
>>Now where is the benefit - what are you computing:
>>Which (or which types of) computations are easier
>>with a navigational structure?
>
> That's easy.
> The benefit is in reducing the
> cardinality of the problem space.
> Tree, graph and a table, too, can serve as sparse descriptions of
> some large problem space. Consider a route map.
A navigational problem space.
I prefer simple beginnings, but so be it.
It just means we have to be extremely careful not to confuse
navigation in our universe of discourse (UOD)
with possibly navigational aspects of our model.
So, ok. Reluctantly.
> The original space is 3D continuum.
> You cannot handle it as-is in any finite system.
So, we have to model. Ok.
> A graph here is just a sparse description of something
> which otherwise might be impossible to compute.
http://en.wikipedia.org/wiki/Graph_theory says "a graph is a set of
objects called points or vertices connected by links called lines or
edges." Yet another pair: nodes and links.
> Talking about cases where usage of the "original" relational structure is
> possible, but computationally inefficient, consider distance mappings.
> Technically each ordered structure induces some distance.
Yes.
> For example, a table with ordered keys induces one
> between rows d(k1,k2)=k2-k1.
> A tree structure does so as well; this is the so-called tree distance.
> This induced distance is obviously not the problem space distance.
Indeed. To the problem space this tree-distance is meaningless. It may say something about a computational distance for a (navigational) computation, but nothing at all about the UOD.
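To make the distinction concrete, here is a minimal sketch (Python, with made-up sample rows - nothing from the thread) of how the key-induced distance d(k1,k2)=k2-k1 can disagree with the problem-space distance:

```python
# Hypothetical rows keyed by an ordered surrogate key,
# each describing a point on a route map (x, y in km).
rows = {
    1: (0.0, 0.0),
    2: (50.0, 0.0),
    3: (1.0, 1.0),
}

def key_distance(k1, k2):
    # Distance induced by the ordered key: d(k1, k2) = k2 - k1.
    return abs(k2 - k1)

def space_distance(k1, k2):
    # Actual (Euclidean) problem-space distance between the rows.
    (x1, y1), (x2, y2) = rows[k1], rows[k2]
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

# Keys 1 and 2 are adjacent as keys but 50 km apart in the problem
# space; keys 1 and 3 are farther apart as keys but only ~1.4 km apart.
print(key_distance(1, 2), space_distance(1, 2))  # 1 50.0
print(key_distance(1, 3), space_distance(1, 3))
```

So whichever structure we pick, its induced distance tells us about the structure, not the UOD.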
> Which might have an immense computational impact.
> Example: kd-trees vs. relational tables.
> [Euclidean (any 3D) distance cannot be mapped to *any* linear order. It is
> a hard fact. So there is no key which could work. This was the thing which
> killed Prolog in machine learning (nobody would even try RA.)]
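The claim that no linear key works can be shown with a tiny sketch (Python, invented sample points): order the points by any single coordinate, and the neighbour in key order need not be the nearest point in the plane.

```python
import math

# Assumed sample points in the plane (illustrative only).
points = [(0.0, 0.0), (0.1, 9.0), (0.2, 0.1), (5.0, 5.0)]

# Use the x coordinate as a candidate linear "key".
by_x = sorted(points)
target = (0.0, 0.0)
i = by_x.index(target)

# Neighbour in key order vs. the true nearest neighbour.
key_neighbour = by_x[i + 1]
true_nearest = min((p for p in points if p != target),
                   key=lambda p: math.dist(target, p))

print(key_neighbour)   # (0.1, 9.0) - adjacent key, far away
print(true_nearest)    # (0.2, 0.1) - actually nearest
```

The same failure occurs for any other single key you might choose; that is what makes multi-dimensional structures like kd-trees attractive here.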
> Another advantage of graphs and other "non-linear" structures is that you
> might do things which would otherwise be illegal. For example,
> self-referential structures. Self-containment is illegal, but a thing which
> behaves as if it contained itself is possible through indirection. There
> are many cases where you have to use such models, because other models
> would be complex/inadequate/illegal.
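A tiny sketch of the indirection point (Python, illustrative names): a value cannot literally contain itself, but a node may refer to itself through an identifier, e.g. a self-loop in a graph stored as an adjacency table.

```python
# Adjacency table: each node name maps to the names it points to.
# "a" refers to itself via its identifier - indirection, not
# literal self-containment.
edges = {
    "a": ["a", "b"],   # self-loop plus an edge to "b"
    "b": [],
}

def successors(node):
    # Follow the indirection: look the node up by name.
    return edges[node]

print(successors("a"))  # ['a', 'b'] - the self-reference is legal
```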
> If you want a generalized description: these are the problems where
> local descriptions are still well-behaved,
> linear, but global ones are a nightmare. A graph is locally almost as good
> as a table.
A kd-tree is a graph, so again we have sets of nodes
and links to maintain. However,
http://en.wikipedia.org/wiki/Kd-tree:
"Balancing a kd-tree requires care. Because kd-trees are sorted in
multiple dimensions, the tree rotation technique cannot be used to
balance them — this may break the invariant."
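For reference, a minimal kd-tree construction (median split, alternating axes) in Python - a sketch only, with invented sample points - makes the invariant visible: each level sorts on a different axis (depth mod k), so a rotation that preserves the order on one axis can break it on the other.

```python
def build_kdtree(points, depth=0):
    """Median-split kd-tree over 2D points; returns nested tuples
    (point, left_subtree, right_subtree)."""
    if not points:
        return None
    axis = depth % 2                          # alternate x / y per level
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid],
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(tree[0])  # root splits on x; here the median point is (7, 2)
```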
Am I understanding correctly?