Re: Objects and Relations

From: David BL <davidbl_at_iinet.net.au>
Date: 1 Feb 2007 09:55:09 -0800
Message-ID: <1170352508.150084.40090_at_k78g2000cwa.googlegroups.com>


On Feb 1, 4:16 pm, "Marshall" <marshall.spi..._at_gmail.com> wrote:
> On Jan 31, 6:18 am, "David BL" <davi..._at_iinet.net.au> wrote:
>
> > On Jan 31, 4:10 pm, "Marshall" <marshall.spi..._at_gmail.com> wrote:
>
> > > As an aside, please note that TC is not a showstopper for a relational
> > > language. If we are speaking of a language that is equivalent in
> > > computational power to first order logic, (such as basic SQL) then
> > > we are speaking of a language that is not Turing complete.
>
> > Does it make sense for a language to be equivalent in computational
> > power to FOL when FOL is so rich as to be undecidable?
>
> I don't quite follow this question. As someone who is used to general
> purpose programming languages, FOL occasionally feels weak to
> me.

Check out

http://en.wikipedia.org/wiki/First_order_logic

FOL allows you to express statements like Fermat's last theorem, which wasn't proven until 1994. Isn't FOL too rich to be regarded as a computational model?
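
For example, if we take the naturals with an exponentiation symbol in the signature (just my shorthand to keep it short), the whole theorem is the single sentence

    for all n, x, y, z :
        (n > 2 and x > 0 and y > 0 and z > 0)  implies  x^n + y^n != z^n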

RA seems much simpler than FOL. I think of it as "arithmetic" with (extensional) relations.

It seems to me that the join operator doesn't give you a group, because given relations b and c there is in general no uniquely defined relation a satisfying a |X| b = c. This makes RA more like arithmetic than algebra (as a high school student would understand those terms).
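
To make that concrete, here is a rough sketch in Python (relations as lists of dicts; the attribute names are just made up) showing two different relations a1 and a2 that join with the same b to give the same result:

    def natural_join(r, s):
        # Natural join: combine pairs of tuples that agree on every shared attribute.
        out = []
        for t in r:
            for u in s:
                shared = set(t) & set(u)
                if all(t[a] == u[a] for a in shared):
                    out.append({**t, **u})
        return out

    b  = [{"y": 1}]
    a1 = [{"x": 10, "y": 1}]
    a2 = [{"x": 10, "y": 1}, {"x": 20, "y": 2}]   # extra tuple, unmatched by b

    natural_join(a1, b)   # [{'x': 10, 'y': 1}]
    natural_join(a2, b)   # [{'x': 10, 'y': 1}]  -- the same c

So you can always evaluate a |X| b "forwards", but given c and b you can't in general solve for a.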

> > I thought RA was a strict subset of FOL - ie Horn clauses without
> > recursion and negation.
>
> I am unclear; however I was under the impression that Codd had
> shown that RA and FOL were of equivalent computational power.

This is discussed in

http://en.wikipedia.org/wiki/Relational_algebra

> > BTW is support for aggregation technically a part of RA or not?
>
> Mumble. I have no idea what the conventional view is, however
> we lately have heard from Vadim a demonstration that aggregation
> is the inverse operation of join, so these days I'd have to say yes.

I'm curious to know how that relates to my comment above.

[snip]

>
> > > > The objection is this: I find it difficult to imagine a system being
> > > > able to turn those select statements into fast efficient
> > > > implementations on strings represented as arrays of characters.
> > > > Presumably there is an integrity constraint that shows that a string
> > > > has characters at all index positions from the start to the end of the
> > > > string. Somehow the system sees this integrity constraint, realises
> > > > that strings can be implemented as arrays and then is able to
> > > > translate RA expressions into efficient algorithms. This all seems
> > > > rather speculative to me.
>
> > > > Can you answer this question for me? In the following
>
> > > > select min(index) from S where c = ch and index >= fromIndex
>
> > > > How does the system realise that it can avoid finding all characters
> > > > satisfying the where clause, and then finding the minimum index? ie
> > > > instead it should merely scan the array for the left most character
> > > > matching ch starting at fromIndex.
>
> > > Well, I can't put the pieces together in a nice package but I can
> > > enumerate them. In short, the system has to understand order
> > > theory.
>
> > > I don't think it is the integrity constraint that matters so much as
> > > that there is a total order on int, the type of index. Furthermore
> > > the system understands the relationship between the comparators
> > > (<, >=, etc.) and this order, and it understands the relationship
> > > between < and min(). Knowing this it can know that it doesn't
> > > have to examine any values for index that are <= fromIndex,
> > > and it knows once it has found a single matching value, it doesn't
> > > need to examine any values larger than that. Couple this with
> > > the fact that if the system is using a naive array implementation
> > > of the string, it will also know that the index values are laid
> > > out *in order* in memory. (The fact that these values aren't
> > > reified is irrelevant; they are still ordered. We can consider
> > > the (index, char) tuple as being sequential, with the index
> > > taking up 0 bytes and the char sizeof(char) bytes.) Oh, and
> > > that the optimizer will know how to take advantage of joining
> > > on an attribute that is in order in memory. IIUC these are all
> > > things that optimizers are capable of doing in today's products.
>
> > When we build all sorts of crazy, complicated RA expressions to
> > perform string processing, can we really expect an optimizer to
> > generate decent code? Are you saying all the theory currently exists?
>
> Again I have to specifically disclaim any significant knowledge of
> query optimizers; it is just not my field. However as I understand
> it (which is weakly) yes, the theory does currently exist, for a lot
> of it anyway.
>
> In general, though, someone who knows exactly what they
> are doing, and who is building a special-purpose engine, given
> a tool such as C++, will be able to build something that is
> better *for that special purpose* than a general-purpose
> solution will likely be. That difference may only be O(c)
> though.
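
For the min(index) example above, the special-purpose version we're talking about is just a left-to-right scan; roughly (a sketch in Python, assuming the string is stored as a plain zero-based array of characters):

    def index_of(s, ch, from_index):
        # Hand-coded equivalent of
        #   select min(index) from S where c = ch and index >= fromIndex
        # Scan from fromIndex and stop at the first match, which is
        # necessarily the minimum matching index.
        for i in range(max(from_index, 0), len(s)):
            if s[i] == ch:
                return i
        return None   # no matching character

    index_of("hello world", "o", 5)   # returns 7

The question is whether an optimizer can reliably get from the declarative form to something that direct.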

The halting problem suggests that the space of all algorithms is too much for a compiler to cope with. However, I wonder whether that remains true when we restrict ourselves to useful algorithms. I can imagine a compiler being smarter than your average programmer, and that may be enough. Who knows how clever compilers will be in 100 years' time? We may be like carrots by comparison.

A particularly interesting question for me is whether computers can (in principle) be conscious.

[snip]
