# Re: algebra equations for Reference and FD constraints

Date: Mon, 05 Jan 2009 08:12:03 -0800

Message-ID: <Yhq8l.24152$Ac2.4567_at_newsfe01.iad>

Walter Mitty wrote:

...

> What's not clear to me is whether or not the concept of "assignment" is
> inherently an imperative concept, and not reducible to purely declarative
> expression. I lean towards the view that assignment is inherently
> imperative.

In case you haven't seen it, here's a link to a widely regarded, maybe even classic, textbook:

http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html#%_toc_%_sec_3.1.3

See "The Costs of Introducing Assignment". The authors identify the 'substitution model' as a property of functional languages. Algebras typically involve the substitution model (it's what many of us were taught in grade school, if I've got it right), so I think the A-algebra, together with an equality operator, is among what they might call the functional languages.

I'd also say most people start with a schema that constrains the possible assignment variables/pointers (e.g., by declaring the allowable relvar or table names in advance) and never consider the algebraic implications. This puts the systems they make on logical quicksand. For example, there are two kinds of constraints for any relation, implicit and explicit. If the algebra, or an equivalent, is ignored, certain implicit constraints are ignored too, such as the projection identities (e.g., Heath's theorem) and heading constraints; in other words, logical independence is lost.

In most application languages, you can't map or interpret a single pointer to a set of algebraic symbols, but a single algebraic expression can be substituted for elements in the set of all possible language variables/pointers. So I'd say that when designing a schema it is logically safer to express the design constraints algebraically (with shorthands, of course), then pick among the algebraic symbols the ones one prefers to interpret as variables/pointers. In a functional language one needn't bother with this choice: the language will choose variables that satisfy the equations, and we might not even need to know the definitions, e.g., the 'names', of all of them. In this sense, I'd say functional languages are at a 'higher' level of abstraction, and so too the A-algebra, even though I'd guess many programmers would imagine it is somehow a 'lower' level of detail.
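To make the projection-identity point concrete, here is a minimal Python sketch of Heath's theorem: when a functional dependency A -> B holds in a relation over {A, B, C}, the relation equals the natural join of its projections on {A, B} and {A, C}. The helper names (`project`, `njoin`) and the sample relation are my own illustration, not anything from the thread; relations are modeled as sets of frozensets of (attribute, value) pairs.

```python
def project(r, attrs):
    """Project relation r onto the given attribute names."""
    return {frozenset((a, v) for a, v in t if a in attrs) for t in r}

def njoin(r, s):
    """Natural join: merge tuples that agree on all common attributes."""
    out = set()
    for t in r:
        for u in s:
            td, ud = dict(t), dict(u)
            if all(td[a] == ud[a] for a in td.keys() & ud.keys()):
                out.add(frozenset({**td, **ud}.items()))
    return out

# Sample relation in which the FD city -> country holds.
R = {
    frozenset({"city": "Paris", "country": "FR", "person": "Ann"}.items()),
    frozenset({"city": "Paris", "country": "FR", "person": "Bob"}.items()),
    frozenset({"city": "Oslo", "country": "NO", "person": "Cam"}.items()),
}

# Heath's theorem: R = R{city,country} JOIN R{city,person}.
lhs = R
rhs = njoin(project(R, {"city", "country"}),
            project(R, {"city", "person"}))
assert lhs == rhs
```

The identity is implicit in the algebra: nobody declared it as a constraint on R, yet any system that ignores it (say, by decomposing and recomposing tables ad hoc) can silently break it when the FD fails to hold.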

Beyond predictability, an algebra or calculus starting point may not matter to the average developer, but it is crucial in the design of a DBMS implementation, which must nearly optimize for practical execution times, e.g., deciding whether an index can check a key constraint. Without that starting point, one is only guessing whether optimizations are logically sound and consistent.

Received on Mon Jan 05 2009 - 17:12:03 CET
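As a small sketch of the index-versus-key-constraint example: a key constraint is a logical statement (no two tuples agree on the key attributes), and a unique index is one physical structure an implementation may prove sufficient to enforce it, checking each insert in roughly constant time rather than rescanning the relation. The class below is my own hypothetical illustration, not any real DBMS's API.

```python
class UniqueIndex:
    """Toy unique hash index enforcing a key constraint on insert."""

    def __init__(self, key):
        self.key = key          # tuple of key attribute names
        self.entries = {}       # key value -> row

    def insert(self, row):
        k = tuple(row[a] for a in self.key)
        if k in self.entries:
            # The logical key constraint would be violated.
            raise ValueError(f"key violation on {self.key}: {k}")
        self.entries[k] = row

idx = UniqueIndex(("id",))
idx.insert({"id": 1, "name": "a"})
idx.insert({"id": 2, "name": "b"})
try:
    idx.insert({"id": 1, "name": "c"})   # duplicate key
except ValueError:
    print("rejected duplicate")
```

Whether the optimizer may substitute this O(1) check for the declared constraint is exactly the kind of equivalence that has to be justified from the algebra; without that starting point it is a guess.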