Re: Objects and Relations

From: Marshall <marshall.spight_at_gmail.com>
Date: 30 Jan 2007 23:10:44 -0800
Message-ID: <1170227444.817200.121720_at_h3g2000cwc.googlegroups.com>


On Jan 30, 6:32 am, "David BL" <davi..._at_iinet.net.au> wrote:
> On Jan 30, 6:33 pm, "Marshall" <marshall.spi..._at_gmail.com> wrote:
>

Okay. I'm going to try to recreate my earlier post. It won't be as good, but I hope to get the basic sense across, along with some actual references to stuff.

> > > Consider the following relation
>
> > > charsInString(StringId, Index, Char) :- StringId has Char at
> > > Index
>
> > This isn't right as is, though; you're putting every string in
> > a single table. Coincidental cohesion; grouping things based
> > on a superficial type commonality. Are you going to put all
> > your ints in a single table also? I would expect not.
>
> I'm glad to see that you don't like the idea of putting all characters
> of all strings in a single table.
>
> I believe a relational database should treat strings as an ADT to be
> used for domains for attributes.

Yes, I'm aware of that perspective, and some people I really respect subscribe to it. However, I don't: if you encapsulate a string as an ADT, then you lose the ability to use the RA on the characters, which I believe I've already demonstrated has some utility.

I also have an argument that encapsulation itself is merely a crutch to make up for the fact that a language lacks declarative integrity constraints.

> > Instead, let's imagine we're in a nested relational system,
> > and strings are a relation type {index:int, c:char} I will use
> > the name S to denote the string at hand; sort of the analog
> > of "this" in an OO language.
>
> Well, I was assuming a flat relational system. If you are going to
> allow nested relations then I agree that things are rather
> different. In fact I believe this is a bit of a cop out. Please
> correct me if I'm wrong.

Not 100% sure I understand, but it sounds like you're saying nested relations are a cop-out. If so, I disagree. Again, this is a theory group, so we do not have to restrict ourselves to discussing existing implementations or SQL or whatever. As a logical model, nested relations have a long history of study, and they do solve some problems. My personal model includes nested relations.
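To make the nested option concrete, here is a minimal Haskell sketch of a string-as-relation used as an attribute value inside an outer relation. The names and the employees example are mine, purely illustrative:

    import qualified Data.Set as Set

    -- A "string" modeled as a relation {index :: Int, c :: Char}.
    type StringRel = Set.Set (Int, Char)

    -- An outer relation whose name attribute is itself a relation.
    employees :: Set.Set (Int, StringRel)
    employees = Set.fromList
      [ (1, Set.fromList (zip [0..] "Ada"))
      , (2, Set.fromList (zip [0..] "Earl")) ]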

> At the outermost nesting level you have
> chosen to deal with strings opaquely - as a domain for attributes. In
> other words at that level of abstraction you have chosen not to deal
> with strings relationally.

Yes; however, I consider this a non-issue. The advantage of recursive structures is that one can deal with one level at a time, rather than having to address them all at once.

> You don't seem to appreciate the fact that the onus of proof often
> depends on the nature of the claim rather than on who is making it. It
> would obviously be difficult to construct a formal proof that there
> don't exist elegant and useful string processing algorithms using
> RA, whereas it is comparatively easy to show examples if they in fact
> exist. All I said is that I can't think of any and perhaps there
> aren't any. I make no apologies for having rather less experience in
> RM than you do. If you look carefully at my post you will find that I
> never even claim that there are no such examples. I only raise the
> question.

I don't believe I have made any claims as to where the onus of proof lies.

> I note that this is the second time you have expected proofs of non-
> existence of examples (cf our discussions about fine grained mutative
> operations in a previous thread). Do you agree that this is often
> impossible?

I don't believe that I have expected proofs. Let me change my earlier phrase "prove a point" to "make a point" so that we avoid any potentially-ambiguous use of the word "prove."

Some of what we're discussing are design questions and they are not subject to proof. (In fact, surprisingly little *is* subject to proof.)

> > > Won't you almost always
> > > join/select for the simple purpose of looking up a character in a
> > > given string at a given index position?
>
> > You say that like it's a bad thing. Join is the primary tool
> > of the RA, so yes, we will join. And select. We'll use the
> > algebra, just like an OO programmer would use loops
> > and method calls.
>
> Of course. What I mean is that the access pattern on the relation is
> almost always the same, and no different to the functionality provided
> by the ADT.
>
> This contrasts with relations like Employee which join on different
> attributes at different times, revealing the power of RA for ad hoc
> query.

Joining on single-attribute relations is a practical and useful thing to do; it doesn't indicate any problem, any more than a function that takes only a single argument indicates a problem.

Join of two one-attribute relations on their single attributes is set intersection, which I hope you will agree is a meaningful and useful operator.
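As a sketch of that last point (in Haskell, with a one-attribute relation modeled as a plain set; the modeling choice and the name joinSingle are mine):

    import qualified Data.Set as Set

    -- The natural join of two one-attribute relations on their single
    -- attribute keeps exactly the values present in both: intersection.
    joinSingle :: Ord a => Set.Set a -> Set.Set a -> Set.Set a
    joinSingle = Set.intersection

    -- joinSingle (Set.fromList [1,2,3]) (Set.fromList [2,3,4])
    --   == Set.fromList [2,3]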

> > indexOf(int ch, int fromIndex)
>
> > The source to java.lang.String can be downloaded as part of the JDK;
> > I will refer to it here. (But not excerpt it, much as I would like to
> > for
> > comparison, because it is copyrighted.)
>
> > Here is the relational implementation in pseudo-SQL:
>
> > select min(index) from S where c = ch and index >= fromIndex
>
> > The body of the Java implementation is 37 lines long, and
> > contains 8 if statements and 2 for loops, and can return
> > from any one of five places.
>
> I shall look at this Java code when I get the time. It seems overly
> complicated.
>
> In my C++ code I use a StringRange class that represents an aliased
> range of characters using a pair of pointers, allowing me to write
>
> while(s && *s != ch) ++s;
>
> to scan through characters in range s not equal to ch.

I am aware of the StringRange technique but not familiar with it. It is not immediately clear to me what the above code does; in a C idiom (as opposed to C++) it would appear merely to advance the (uninitialized) s pointer. Even if it's a complete solution, you omit the code for the StringRange class, which has to be accounted for to do a fair comparison with my code.
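For concreteness, here is an executable rendering of my min-select above, in Haskell, with the string modeled as a list of (index, char) pairs; a sketch under my own naming, nothing more:

    -- select min(index) from S where c = ch and index >= fromIndex
    indexOfRel :: Char -> Int -> [(Int, Char)] -> Maybe Int
    indexOfRel ch fromIndex s =
      safeMin [ i | (i, c) <- s, c == ch, i >= fromIndex ]
      where
        safeMin [] = Nothing
        safeMin xs = Just (minimum xs)

    -- indexOfRel 'l' 0 (zip [0..] "hello") == Just 2
    -- indexOfRel 'l' 3 (zip [0..] "hello") == Just 3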

> > select min(S1.index) from S S1 where
> > (select min(S2.c = str.c) from S S2, str
> > where S2.index-S1.index = str.index) and
> > S1.index >= fromIndex
[corrected from original]
> > "Select the smallest S.index where
> > every character in S' offset by S.index equals
> > every character in str,
> > that is greater than or equal to fromIndex"
>
> > The body of the java implementation calls an internal helper function
> > that is 33 lines, containing 6 ifs, 2 for loops, 1 while loop, and
> > that
> > can return from any one of 4 different places.
>
> Thinking off the top of my head, in C++ I would write something based
> along the lines of
>
> while(s && str.size() <= s.size() && s.prefix(str.size()) != str) ++s;

Again, I note that you're not accounting for the size() and prefix() methods; you have to include them in the total count. My code didn't use any function abstraction; what you see is the whole deal. Otherwise we both just wrap our implementations in a function and call it a day. My implementation used only min(), subtraction, and comparison.
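Here is the same substring find written out as executable Haskell, so the comparison is apples to apples; the relation is modeled as a Map from index to char, and the names are my own:

    import qualified Data.Map as Map

    -- select min(S1.index): the smallest i >= fromIndex such that
    -- every (j, c) in str matches S at index i + j.
    indexOfStrRel :: String -> String -> Int -> Maybe Int
    indexOfStrRel s str fromIndex =
      let sRel   = Map.fromList (zip [0..] s)   -- S as {index, c}
          strRel = zip [0..] str                -- str as {index, c}
          matchesAt i = all (\(j, c) -> Map.lookup (i + j) sRel == Just c) strRel
      in case [ i | i <- [fromIndex .. length s - 1], matchesAt i ] of
           []      -> Nothing
           (h : _) -> Just h   -- candidates ascend, so the head is the min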

> > > If you want to deal with strings declaratively (not procedurally) then
> > > use a functional programming or logic programming language. RM seems
> > > a poor fit.
>
> > Does it still seem so after reading the above?
>
> Yes!

Okay. What would you find to be a convincing demonstration? If nothing else I claim my substring find is quite declarative; as much so as any logic program. (In fact, one could make an argument that mine *is* a logic program.)

It is in general nearly impossible to separate one's familiarity with a particular computational model from how "intuitive" code in that model appears. Again, Haskell on first contact appeared ridiculously opaque to me, but now I find it generally very readable.

The Stroustrup quote:

http://www.research.att.com/~bs/bs_faq.html#compare

The only way to get around the familiarity issue is with hard metrics of expressiveness, and these are few and far between. The one that stands out is Felleisen's "On the Expressive Power of Programming Languages." I have it on my to-do list to master Felleisen expressiveness and use it to compare a relational language with other models.

Until we do that, or something like it, pretty much all we're doing when comparing different computational models is expressing how familiar their idioms are to us.

> I found your post very informative and interesting and shows me that
> RA is more powerful than I imagined - when combined with aggregation.
> I was expecting that substring testing would require transitive
> closure.

As an aside, please note that TC is not a showstopper for a relational language. If we are speaking of a language that is equivalent in computational power to first-order logic (such as basic SQL), then we are speaking of a language that is not Turing complete. However, if we imagine a general-purpose programming language that was nonetheless relational through and through, we would of course be discussing a Turing-complete language, and it would be able to express TC.
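To show what I mean, here is transitive closure written as a fixpoint of join-then-union, in Haskell; a minimal sketch, with names of my own choosing:

    import qualified Data.Set as Set

    -- Iterate (r join r) union r until nothing new appears. This is
    -- exactly the step a Turing-complete relational language can express.
    tc :: Ord a => Set.Set (a, a) -> Set.Set (a, a)
    tc r
      | r' == r   = r
      | otherwise = tc r'
      where
        r'   = Set.union r step
        step = Set.fromList [ (a, c) | (a, b)  <- Set.toList r
                                     , (b', c) <- Set.toList r
                                     , b == b' ]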

I note that recursive functions are Turing complete, that functions are a kind of relation, and that we can use join on any kind of relation; therefore we can employ join with functions. Being able to do so expands the computational power of join to the "goes to 11" of Turing completeness, at the cost of losing the strong normalization property that basic RA has, which, as a very smart guy once said to me, "is overrated anyway."
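A quick sketch of "join with a function," treating the function as a relation with one tuple per argument (illustrative names, nothing more):

    -- Joining a finite relation with a function on the argument
    -- attribute just extends each tuple with the function's result.
    joinFn :: [(a, b)] -> (b -> c) -> [(a, b, c)]
    joinFn rel f = [ (x, y, f y) | (x, y) <- rel ]

    -- joinFn [("a", 2), ("b", 3)] (\n -> n * n)
    --   == [("a",2,4), ("b",3,9)]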

> I admit the RM approach is a little foreign to me.
> However, I have a remaining objection, and perhaps this could be
> overthrown as well.
>
> The objection is this: I find it difficult to imagine a system being
> able to turn those select statements into fast efficient
> implementations on strings represented as arrays of characters.
> Presumably there is an integrity constraint that shows that a string
> has characters at all index positions from the start to the end of the
> string. Somehow the system sees this integrity constraint, realises
> that strings can be implemented as arrays and then is able to
> translate RA expressions into efficient algorithms. This all seems
> rather speculative to me.
>
> Can you answer this question for me? In the following
>
> select min(index) from S where c = ch and index >= fromIndex
>
> How does the system realise that it can avoid finding all characters
> satisfying the where clause, and then finding the minimum index? ie
> instead it should merely scan the array for the left most character
> matching ch starting at fromIndex.

Well, I can't put the pieces together in a nice package but I can enumerate them. In short, the system has to understand order theory.

I don't think it is the integrity constraint that matters so much as that there is a total order on int, the type of index. Furthermore, the system understands the relationship between the comparators (<, >=, etc.) and this order, and it understands the relationship between < and min(). Knowing this, it knows that it doesn't have to examine any values for index that are < fromIndex, and that once it has found a single matching value, it doesn't need to examine any values larger than that.

Couple this with the fact that if the system is using a naive array implementation of the string, it will also know that the index values are laid out *in order* in memory. (The fact that these values aren't reified is irrelevant; they are still ordered. We can consider each (index, char) tuple as being laid out sequentially, with the index taking up 0 bytes and the char sizeof(char) bytes.)

Oh, and the optimizer will know how to take advantage of joining on an attribute that is in order in memory. IIUC these are all things that optimizers in today's products are capable of doing.
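The plan that all of that reasoning licenses looks like this (a Haskell sketch of mine; laziness supplies the early exit that the optimizer would compile in):

    import Data.Maybe (listToMaybe)

    -- Scan indices in ascending order from fromIndex; stop at the
    -- first match. No min() is ever materialized over the whole set.
    indexOfScan :: Char -> Int -> String -> Maybe Int
    indexOfScan ch fromIndex s =
      listToMaybe [ i | (i, c) <- drop fromIndex (zip [0..] s), c == ch ]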

The thing that I used to think was harder was figuring out early termination of aggregates. That is, if you are aggregating multiplication across a set of integers, you can stop doing any actual multiplication if you hit a zero. I found the key to this in some recently published Fortress documents; you can stop any time you hit a fixed point of the folded function. (So the system has to know fixed points.)
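For instance (a sketch of mine, not Fortress's code): folding (*) over integers, where 0 is an absorbing fixed point of the fold step:

    -- Once the accumulator hits 0, acc * x == 0 for every x, so the
    -- rest of the fold can be skipped.
    productEarly :: [Integer] -> Integer
    productEarly = go 1
      where
        go 0   _        = 0            -- fixed point reached: stop
        go acc []       = acc
        go acc (x : xs) = go (acc * x) xs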

Actually the Fortress documents are quite interesting in how they handle both catamorphisms and implicit parallelization. I note that these are two things that are *quite* difficult to optimize in C++ because of its Mr. Pouty Pants memory model, where you pretty much have to assume everything is aliased to everything else unless you have an affidavit from the Postmaster General. I also note that these kinds of optimizations are a cakewalk for a relational language, and that the relative importance of implicit parallelization vs. standard C-style optimizations is strongly increasing over time with multiple CPU cores. Finally, I note that I am really starting to overuse the phrase "I note."
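The shape that makes the parallelization cheap is just an associative fold over a monoid; here is the sequential rendering in Haskell (a sketch; a real implicitly parallel system would partition the input across cores):

    import Data.Monoid (Sum (..))

    -- Because (+) is associative, the combine step can be regrouped
    -- and therefore split across cores without changing the result.
    total :: [Int] -> Int
    total = getSum . foldMap Sum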

Upon rereading that Fortress paragraph, it appears my coherence level is dropping faster than a frat boy's pants at Mardi Gras, so I'd better sign off now. Sorry for any incoherence.

Marshall