Re: predicate, constraints, header, relvar, and relation

From: Marshall <marshall.spight_at_gmail.com>
Date: 20 Apr 2007 22:42:13 -0700
Message-ID: <1177134133.653898.272430_at_y80g2000hsf.googlegroups.com>


On Apr 20, 6:50 pm, Bob Badour <bbad..._at_pei.sympatico.ca> wrote:
> Marshall wrote:
> > On Apr 20, 5:26 pm, Bob Badour <bbad..._at_pei.sympatico.ca> wrote:
>
> >>I could see someone wanting to overload
> >>the constraint concept to achieve this goal, but I think that would be a
> >>mistake.
>
> > Argh. Why, dammit?!
>
> > I was planning on doing exactly that.
>
> I think the property concept works better than the constraint concept.
> Saying: "Here is a value, and it has the following properties" is subtly
> different from "Here is a value, and it must always obey the following
> constraints." How could a value ever disobey them?
>
> In part, it might just be a matter of choice of syntax.
>
> Erm, or maybe not. I suppose it makes sense to say: "Check that the
> above literal has the following properties." Especially for large
> complex literals.

Okay.

For a while I've been thinking about the interaction between type systems and constraint systems. There is a trend in programming language theory towards ever-more-expressive types. It occurs to me that a garden variety type system is just a limited, compile-time-only constraint system. Constraint systems are highly expressive; they may even be Turing complete. That expressiveness is not so terribly hard to achieve, because constraints are checked at runtime. If a language had a type system that was equally expressive, there would be certain downsides. Type checking might not halt, for example.
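
To make the contrast concrete, here is a rough sketch in Haskell, with made-up names, just to illustrate. The same "non-empty string" constraint appears once as a runtime check and once as a type with a smart constructor:

    -- 1. As a runtime constraint: arbitrarily expressive (any computable
    -- predicate will do), but only verified when the code actually runs.
    requireNonEmpty :: String -> String
    requireNonEmpty "" = error "constraint violated: empty string"
    requireNonEmpty s  = s

    -- 2. As a static type with a smart constructor: the compiler tracks
    -- the fact, but cannot itself prove arbitrary predicates, so a
    -- guard still sits at the boundary where the value is built.
    newtype NonEmpty = NonEmpty String

    mkNonEmpty :: String -> Maybe NonEmpty
    mkNonEmpty "" = Nothing
    mkNonEmpty s  = Just (NonEmpty s)

    firstChar :: NonEmpty -> Char
    firstChar (NonEmpty s) = head s  -- safe: the type records the fact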

There is a further difference between types and constraints: the manner of quantification. When a line of code with checks executes and the checks pass, that is logically equivalent to an *existentially quantified* proof of the checks. In other words, if the line succeeds even once, then we know there exists at least one set of runtime values that satisfies the checks. On the other hand, if the checks are static, then passing the compile time checks is a *universally quantified* proof of the checks. That is, the checks will succeed for *every possible* set of runtime values referenced in that code. Only in the limit, after checking against every possible set of runtime values, do the runtime checks become as strong as the compile time checks.
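
In Haskell terms, again just as a sketch:

    -- A runtime check that passes is one witness: an existential fact.
    safeRecip :: Double -> Double
    safeRecip 0 = error "must be nonzero"  -- checked on each execution
    safeRecip n = 1 / n
    -- safeRecip 4 succeeding shows only that *some* input passes.

    -- A static signature is a universal fact: it is verified for every
    -- value that could ever flow through the parameter.
    twice :: Double -> Double
    twice n = n * 2
    -- twice "four" is rejected at compile time, before any run at all.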

So this brings us to the interesting idea that perhaps we ought to have a single, highly expressive system for both runtime and compile time checking, and we ought to put the decision of how much checking happens when *in the hands of the programmer.*
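
Haskell's smart-constructor idiom already gestures at this. A sketch, with hypothetical names: the predicate is written once, as ordinary runtime code, and its success is recorded in a type the static checker can then rely on:

    newtype NonEmpty = NonEmpty String         -- as in the earlier sketch

    firstChar :: NonEmpty -> Char
    firstChar (NonEmpty s) = head s            -- as in the earlier sketch

    -- One predicate, written once. At runtime it is an ordinary test;
    -- its success is recorded in the type, so later uses are checked
    -- statically, with no second test.
    nonEmpty :: String -> Maybe NonEmpty
    nonEmpty "" = Nothing
    nonEmpty s  = Just (NonEmpty s)

    greet :: String -> String
    greet s = case nonEmpty s of
      Just ne -> [firstChar ne]  -- accepted: the proof is in hand here
      Nothing -> "?"             -- here, firstChar s would not typecheck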

For a long time I was considering that the difference between a compile time and a runtime check would be reflected syntactically. Later I reversed that idea. It occurs to me that the question of *when* to do the checks could be considered a form of (logical?) independence. The key then would be to ensure that the semantics of a program do not change when a given check is moved from runtime to compile time or vice versa. The only difference would be whether the check failed early or failed late; in no case would it be possible for a program to return a different answer depending on when a check happens.
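
A small sketch of what that independence looks like (hypothetical names again):

    newtype Even = Even Int

    asEven :: Int -> Even
    asEven n | even n    = Even n            -- the late (runtime) check
             | otherwise = error "not even"  -- fails late

    halve :: Even -> Int
    halve (Even n) = n `div` 2  -- same answer however the check is timed

    -- Early: a violation the checker can see is rejected before any run:
    --   halve 3   -- compile time error: an Int is not an Even
    -- Late: the same predicate, deferred until a value is available:
    runIt :: Int -> Int
    runIt userInput = halve (asEven userInput)  -- fails at runtime iff odd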

As near as I can tell, all that is necessary for this is to ensure that the attribute *names* of a relation are always known statically. Not too much of a burden.
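
A sketch of why that is enough: with the heading known statically, every projection and attribute reference can be checked at compile time, while the tuples themselves stay dynamic.

    -- A heading as a record type: the attribute *names* are fixed at
    -- compile time, while the tuples themselves are runtime data.
    data Employee = Employee { name :: String, dept :: String, salary :: Int }

    employees :: [Employee]
    employees =
      [ Employee { name = "Ava", dept = "QA",  salary = 50000 }
      , Employee { name = "Ben", dept = "Dev", salary = 60000 }
      ]

    -- Projection over statically known attribute names is fully checkable:
    names :: [String]
    names = map name employees       -- accepted
    -- oops = map nmae employees     -- misspelled attribute: rejected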

Marshall

Received on Sat Apr 21 2007 - 07:42:13 CEST
