Re: Multiple specification of constraints

From: ben brugman <ben_at_niethier.nl>
Date: Mon, 1 Mar 2004 10:38:38 +0100
Message-ID: <4043049f$0$269$4d4ebb8e_at_read.news.nl.uu.net>


"mAsterdam" <mAsterdam_at_vrijdag.org> wrote in message news:40404af4$0$564$e4fe514c_at_news.xs4all.nl...
> ben brugman wrote:
>
> > Constraints should be centralised.
>
> This is too general. I look at it this way: There are more sets of
> constraints, each with a different purpose.
Yes, I do agree here.
The sets do not have to be completely disjoint.

And most discussions in this newsgroup only concern constraints which apply to the database. (By 'the database' I mean the set of data which is still present after the machine is shut down. Well, the constraints themselves are not 'present' in that sense, but the data still conforms to that set of constraints.)

Then there is the centralising of the constraints. I do not think the implementation can be totally centralised, but a nice situation would be one where the model with all its constraints is centralised somewhere in some form (documentation or the like).
This set of constraints has to be implemented, and the implementation can be redundant (in the code and in the database).

Example: there is an implementation of 'child' and 'parent' in the database, and the database holds the constraint that the 'parent' must exist whenever a child exists. Often the application is built in such a way that, even if the constraint is dropped from the database, the application can only create data in accordance with the constraint. Often the application cannot violate the constraint as long as it is the only application.
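
A minimal sketch of that example, using SQLite (through Python) as a stand-in for the RDBMS; the table and column names are of course invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite only enforces foreign keys when asked

conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER NOT NULL REFERENCES parent(id)
    )
""")

conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (id, parent_id) VALUES (10, 1)")   # fine, parent 1 exists

try:
    # An application bug (or a new application that never knew the rule)
    # tries to create a child whose parent does not exist.
    conn.execute("INSERT INTO child (id, parent_id) VALUES (11, 99)")
except sqlite3.IntegrityError as err:
    print("rejected by the database:", err)

A well-behaved application never reaches the except branch; the point is that the database catches the one that is not.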

My point:
All constraints should be included in the model 'documentation'. (This is what I consider centralised.)
Implementation of constraints should be done as close as possible to the data, preferably in the RDBMS.
(The implementation can be centralised, but I think this is not realistic for all situations).
Server and application code often implements part of the constraints that are also implemented in the database. (The risk here is that different pieces of software hold different implementations of the 'same' constraint.)
Then there are constraints which are only implemented outside the RDBMS. The risks here are different implementations of the 'same' constraint, no central enforcement (by the RDBMS) of the constraint, and the possibility that over time the constraint is changed (switched on or off, or made to differ). This brings the risk that even though a constraint is implemented, there is data not in accordance with that constraint (illustrated in the sketch below).
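
A hypothetical illustration of such drift, again in Python with SQLite; the 'salary must be positive' rule and the table are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, salary INTEGER CHECK (salary > 0))"
)

def application_check(salary):
    # The application's own copy of the rule; someone later relaxed it to >= 0,
    # so it now accepts rows that the database will reject.
    return salary >= 0

for salary in (1200, 0):
    if not application_check(salary):
        print("rejected by the application:", salary)
        continue
    try:
        conn.execute("INSERT INTO employee (salary) VALUES (?)", (salary,))
        print("accepted:", salary)
    except sqlite3.IntegrityError:
        print("passed the application check, rejected by the database:", salary)

Here the two implementations of the 'same' constraint have quietly diverged; without the CHECK in the database the second row would simply have gone in.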

(
Some object modellers believe that all constraints should be implemented in 'their' object code. The implementation should be centralised per object. The advantage is that there is a centralised constraint, and that there is then the freedom to store the object's data, independently of the constraint, in any data repository which can hold it.

In concept I do agree with those views, but from past experience I totally disagree with them in practice. Why? Environments change more than databases do. Constraints are more difficult to enforce this way. And there is no hard link between the constraint itself and the existing data.

When the constraint is implemented in the database itself, the link is hard: to use the data you have to go through that specific RDBMS, which can therefore enforce the constraints. There are no databases which can be accessed by RDBMSes from different vendors.)
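
A small (and again hypothetical) sketch of the difference: when the constraint lives only in object code, nothing ties it to the data itself, and anything that touches the data directly simply bypasses it:

parents = {1}
children = {}   # child_id -> parent_id

class Child:
    def __init__(self, child_id, parent_id):
        if parent_id not in parents:          # the object-level constraint
            raise ValueError("parent must exist")
        children[child_id] = parent_id

Child(10, 1)        # goes through the class, so the constraint is checked
children[11] = 99   # a direct write to the data; the constraint is silently bypassed

With the constraint held by the RDBMS, as in the first sketch, the second kind of write is simply not possible.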

> The constraints at the
> database serve to protect the integrity of the managed set of data.
> The constraints at the user-interface on the other hand serve to assist
> the user in providing the data he needs to provide in order to achieve
> his goal.
>
> So here we have two sets of constraints. Let's call them 'D' and 'U'.
> You can look at a set of constraints as defining all possible
> combinations of a set of data, i.e. as a type.
>
> Is U defining a subset of D? A superset?
> The intersection of the sets defined is the relevant set for the
> feared constraint-redundancy.

Even if a constraint is only of type 'D' or only of type 'U', it can still be implemented more than once. So I do not think the problem lies only in the intersection. For example, the 'U' set can be implemented more than once if several applications work with the same data.

>
> Now when there are lots of data and - relatively - just a few
> constraints ("Large databanks"), I won't worry about that redundancy.
> Bank ID's provide a good example for this.

The risk of constraint redundancy is not that a constraint is implemented more than once; the risk is that there can be different implementations of the same constraint. (For example, a new application using the same data but not implementing the constraint at all.) (I do not know what you mean by the redundancy of Bank ID's.)

Thanks for your participation,
ben brugman

>
> Just my 2 Eurocents.
Received on Mon Mar 01 2004 - 10:38:38 CET
