Re: A searchable data structure for representing attributes?

From: Nuno Souto <nsouto_at_optushome.com.au.nospam>
Date: Fri, 01 Mar 2002 13:44:35 GMT
Message-ID: <3c7f80f1.626089_at_news-vip.optusnet.com.au>


> NS> they are supposed to keep the integrity of?
>
>Let's think of a situation when we need to add a new
>domain to our relational DB with some very odd rules
>for its possible values.
>

Yes. What makes that inherently more efficient away from the data than it is near it? That is the whole point, isn't it? Not the nature of the task, but its location and efficiency.

We can sit here citing examples ad infinitum, but there will never be a proper proof of concept, based on sound theory, that handling data integrity away from its storage is better. That proof just doesn't exist.

>E.g. ISO container numbers (which are familiar to me)
>should have a check digit at the end.
>
>And please don't say anything about triggers.

Non sequitur. Why not? Have you had a bad experience with a horrible trigger implementation?

In the absence of proper domain support - something that the relational model required but was never implemented - triggers are one of the easiest ways of providing precisely this type of functionality.
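The quoted domain rule is concrete enough to sketch. None of the code below appears in the thread; it is an illustration (in Python, with function names of my own invention) of the ISO 6346 check-digit calculation, i.e. the kind of logic a trigger or stored function could apply right at the storage layer:

```python
from string import ascii_uppercase, digits

# Character values per ISO 6346: digits map to themselves; letters start
# at A=10 and count upward, skipping multiples of 11 (11, 22, 33),
# so B=12, L=23, V=34, ..., Z=38.
_CHAR_VALUES = {}
_v = 10
for _c in ascii_uppercase:
    if _v % 11 == 0:
        _v += 1            # skip 11, 22 and 33
    _CHAR_VALUES[_c] = _v
    _v += 1
_CHAR_VALUES.update({d: int(d) for d in digits})


def iso6346_check_digit(unit: str) -> int:
    """Check digit for the first 10 characters of a container number
    (4-letter owner/category code + 6-digit serial), per ISO 6346."""
    if len(unit) != 10:
        raise ValueError("expected owner code + serial: 10 characters")
    # Each character value is weighted by 2**position, left to right.
    total = sum(_CHAR_VALUES[c] * 2**i for i, c in enumerate(unit))
    return total % 11 % 10  # a raw remainder of 10 is mapped to 0


def is_valid_container_number(number: str) -> bool:
    """Validate a full 11-character container number, e.g. 'CSQU3054383'."""
    return (len(number) == 11
            and number[10].isdigit()
            and iso6346_check_digit(number[:10]) == int(number[10]))


print(is_valid_container_number("CSQU3054383"))  # -> True
```

A trigger enforcing this would simply reject any insert or update whose container number fails the validation. Note that the computation is identical wherever it runs; the argument in this thread is about where it should run, not about whether one camp's languages can express it.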

And at least in the case I'm most familiar with, they are extremely efficient. So, why not use them? Because they are not "theoretically" correct? Or because they violate some new-fangled OO data rule?

What is there in OO that inherently allows domain integrity checking in a more efficient way than in a database? Absolutely nothing.

The thing we have to keep in perspective here is this:

1- OO techniques are relatively new. They have no mathematical basis whatsoever, and they are completely unproven in terms of efficiently handling very large volumes of critical data. Sure, I've heard of the "in-memory OO database" exercises. Let's leave jokes aside for the moment.

2- Databases (hierarchical, relational or even "OO") are/were based on sound mathematical theory. They were developed over many, many years. They are part of a much wider field that goes by the name of data management. Nothing new here. Most db data handling problems have been solved by now, and dbs have been optimized to a level never seen before.

The knowledge base to handle any task that needs data management, and the tools to do so, are right here and now with dbs. Why throw them out? Because the industry needs a new "paradigm" so that punters can be fleeced once again?

Unlike a lot of people around IT at the moment, I'm from the school that says: if it ain't broken, don't fix it. This tends to work, and work well. And cheaply. I have yet to see any of these new theories meet those two fundamental criteria of this day and age.

Cheers
Nuno Souto
nsouto_at_optushome.com.au.nospam
