Re: foundations of relational theory?

From: Ross Ferris <ross_at_stamina.com.au>
Date: 9 Nov 2003 01:01:42 -0800
Message-ID: <26f6cd63.0311090101.59157350_at_posting.google.com>


"Marshall Spight" <mspight_at_dnai.com> wrote in message news:<eUkqb.87007$275.257141_at_attbi_s53>...  

> By this you mean an ad-hoc api for updating the db, right?

Actually, this can be a web service - or simply hidden as part of a higher-level tool, which would be the more normal delivery vehicle for this functionality.

> What does that mean? That data access is mediated through
> procedurally written code? As I've argued before, this is
> distinctly worse than declarative integrity constraints.
>

I agree that it is far better to have such simple constraints declared as an integral component of the database environment, and I'm glad that the multi-valued database model allows this to be done without procedural code, using an appropriate tool.
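To make the contrast concrete, here is a minimal sketch of what "declarative" means here, using SQLite via Python purely for illustration (the table and column names are invented, not from any system discussed in this thread). The constraints live in the data definition, and the engine rejects bad rows with no procedural code anywhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Constraints declared once, in the schema itself
conn.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE invoice (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        amount      REAL CHECK (amount > 0)
    )
""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO invoice VALUES (1, 1, 99.0)")   # accepted

# Both of these violate declared constraints; the engine itself refuses them
try:
    conn.execute("INSERT INTO invoice VALUES (2, 42, 10.0)")  # no customer 42
except sqlite3.IntegrityError as e:
    print("rejected:", e)
try:
    conn.execute("INSERT INTO invoice VALUES (3, 1, -5.0)")   # negative amount
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The point either side of this debate can agree on: once declared, the rules hold for every insert path, whether it comes through an API, a tool, or ad-hoc SQL.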

> But they do have to learn your API, right? How is that
> any better?

As I said, with an appropriate tool, the actual API never has to be visible to the user. This can be "better" in the sense that not only can the data definition contain constraints, but also rules for table joins.

For an end user, this means they don't need any prior knowledge of the underlying database schema to cruise through what would be related tables in SQL.

In an mv environment, these could be implemented as separate tables (Files in "my" language), OR simply exist as embedded multi-value datasets ... as the architect of an application I get to choose the most appropriate physical implementation, given an understanding of the "problem" at hand.
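For readers unfamiliar with the embedded option: in Pick-style mv databases a record is conventionally a single delimited string, with attribute marks (char 254), value marks (char 253) and subvalue marks (char 252) separating the nested levels. A hedged sketch in Python - the record content here is invented for illustration, not any product's API:

```python
# Pick-style delimiters: attribute mark, value mark, subvalue mark
AM, VM, SVM = chr(254), chr(253), chr(252)

# One invented customer record: attribute 1 is the name, attribute 2 is a
# multi-valued list of invoice ids, the second of which carries subvalues.
record = "Acme" + AM + "inv001" + VM + "inv002" + SVM + "line1" + SVM + "line2"

attributes = record.split(AM)
invoices = [v.split(SVM) for v in attributes[1].split(VM)]

print(attributes[0])   # Acme
print(invoices)        # [['inv001'], ['inv002', 'line1', 'line2']]
```

In SQL-land the invoices would normally be a separate table joined on a key; here the "join" is already materialised inside the record, which is the design choice being argued over in this thread.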

I'm also aware that such tools probably exist in SQL-land, avoiding the need to intimately "know" and "grok" the underlying physical structure - though the abundance of DBAs would seem to indicate that such facilities are rare.

> Except it's never fully general. You have to manually
> create each kind of insert, update, delete, and query.

Actually, this IS NOT the case .... with an appropriate tool these are automatically taken care of for you, maintaining full referential integrity. There is the added "bonus" that additional fields can be added to a "live" database in real time, through the use of an active dictionary definition, without the need to re-organize the database or re-compile programs.

> You have to consider ahead of time what kinds of
> bulk operations you're going to support. For each
> and every operation, you have to manually write
> procedural code to enable it. Any ones that you
> don't think of ahead of time, the application
> can't do.

As noted above, this is simply not the case when an appropriate tool is used in conjunction with the mv database.

> Whereas if you exposed SQL, you'd have
> full generality no matter what, at no effort.

Well, as noted above, this is not the case. Indeed, with SQL you would still need to WRITE SQL statements in order to perform basic operations like INSERT or DELETE.

Just as there are development tools that allow this to be hidden from a developer working with an SQL database, the same is also true of the multi-valued world.

When you get down to it, the reality (pun intended for those on CDP) of the situation is that current mv databases DO support SQL, so a developer has the choice to use SQL if it is appropriate - or to use more "native" tools if that offers a more appropriate solution.

However, whereas native tools exist that allow you to nest multi-values to over 100 levels, if you wanted to take "advantage" of the SQL engines you would need to limit yourself to no more than 2 levels. That leaves you nearly as constrained as with a "traditional" SQL implementation, although you can still eliminate a redundant table for each join.

I suppose one of the attractions of the mv data model is that you DO have such choices available. You may choose to have a fully constrained database (which I believe is the ONLY way that development should take place, FWIW, in ANY environment), or at the other end of the spectrum use the database to store "unstructured" (in an external DDL sense) information, with interpretation rules embedded in the application.
>
> I don't see any advantages; in fact, it seems a lot worse.

Perhaps you may have changed your mind? The reality is that you don't HAVE to be worse off (it depends a lot on tool choice - I have my own barrow to push with Visage), and indeed you may end up significantly ahead of the game.

In the end, a database is like any tool - used by a craftsman with intimate knowledge it can exceed popular expectations of its potential, yet in the hands of an apprentice it can cause disaster. Luckily there are modern tools available for most databases that enable an apprentice to achieve a similar result to a craftsman.

>
>
> Marshall

Regards,

Ross Ferris

Received on Sun Nov 09 2003 - 10:01:42 CET
