Re: Why is database integrity so impopular ?

From: David BL <davidbl_at_iinet.net.au>
Date: Tue, 14 Oct 2008 01:24:23 -0700 (PDT)
Message-ID: <4e5bbde3-4344-465d-a9fa-e000e9f8d9fe_at_26g2000hsk.googlegroups.com>


On Oct 14, 3:25 am, paul c <toledobythe..._at_oohay.ac> wrote:

> Suggest that a DEFAULT value could also be an integrity mechanism,
> especially for updates to projection views, for example, a "Year"
> attribute's default might be changed every January 1. This might help
> isolate apps from the db 'rules', if you will.

So the DBMS constrains the year of newly inserted tuples, but not the year of existing tuples in the database.
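
To make that concrete, here is a small sketch using Python's sqlite3 module (the table and attribute names are my own invention, not anything from upthread): the DEFAULT only supplies a value for tuples as they are inserted, whereas the CHECK is a predicate every tuple must satisfy.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE budget_item (
        item TEXT NOT NULL,
        year INTEGER DEFAULT 2008,           -- supplied only at INSERT time
        CHECK (year BETWEEN 1900 AND 2100)   -- must hold for every tuple
    )
""")
con.execute("INSERT INTO budget_item (item) VALUES ('stationery')")
print(con.execute("SELECT item, year FROM budget_item").fetchall())
# [('stationery', 2008)] -- a later change of default (say, next January 1)
# would leave this tuple's year untouched; only the CHECK governs them all.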

This relates to the question of whether it is sufficient for integrity to depend only on a predicate evaluated over a given snapshot of the database state, without concern for how the database got to that state. I really like the simplicity of that.
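
By way of illustration (again a toy sqlite3 sketch with invented names, not anyone's real design): a state constraint like the CHECK below is decidable from a single snapshot, whereas something like "a salary may never decrease" is a transition constraint and needs the old state as well, which is exactly the extra complexity I'd rather avoid.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (
        name   TEXT PRIMARY KEY,
        salary INTEGER NOT NULL CHECK (salary >= 0)  -- state constraint: one snapshot suffices
    );

    CREATE TRIGGER salary_never_decreases            -- transition constraint: needs OLD and NEW
    BEFORE UPDATE OF salary ON employee
    WHEN NEW.salary < OLD.salary
    BEGIN
        SELECT RAISE(ABORT, 'salary may not decrease');
    END;
""")

con.execute("INSERT INTO employee VALUES ('Alice', 50000)")
try:
    con.execute("UPDATE employee SET salary = 40000 WHERE name = 'Alice'")
except sqlite3.IntegrityError as exc:
    print(exc)   # salary may not decrease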

I’m a bit suspicious of reading too much into the history of updates on a database unless one distinguishes between data-entry corrections and real changes (i.e. external predicates that are functions of time). In the latter case it seems much better to model the temporal external predicates in the schema itself, i.e. use a temporal database, and then the update history isn’t significant.
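
Roughly what I mean, as a sqlite3 sketch with invented attribute names: once the valid-time interval is part of the predicate, the tuple itself says when the fact held, and the update history of the relation carries no information of its own.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE employee_salary (
        name       TEXT NOT NULL,
        salary     INTEGER NOT NULL,
        valid_from TEXT NOT NULL,             -- ISO dates, half-open interval
        valid_to   TEXT NOT NULL,
        CHECK (valid_from < valid_to),
        PRIMARY KEY (name, valid_from)
    )
""")
# "Alice earned 50000 during 2007" and "earns 55000 from 2008-01-01" are just
# two tuples; a data-entry correction rewrites a tuple, a real change adds one.
con.execute("INSERT INTO employee_salary VALUES ('Alice', 50000, '2007-01-01', '2008-01-01')")
con.execute("INSERT INTO employee_salary VALUES ('Alice', 55000, '2008-01-01', '9999-12-31')")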

Database history seems relevant to auditing (and the requirements that follow from the likes of the Sarbanes-Oxley Act). I would hope that a good DBMS provides a generic, hacker-proof means to calculate prior states that would meet the requirements of the law courts for establishing culpability. The nice thing is that this is independent of the logical model, and even of whether or not it is a temporal database.
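
I don't know of a standard mechanism for this, but the sort of thing I imagine is an append-only change log kept by the DBMS itself (the names below are hypothetical), from which any prior state can be recomputed by replay; making it genuinely tamper-evident would need something extra, such as hash-chaining the entries or write-once storage.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE change_log (
        seq     INTEGER PRIMARY KEY,   -- total order of committed changes
        tbl     TEXT NOT NULL,
        pk      TEXT NOT NULL,
        col     TEXT NOT NULL,
        new_val TEXT
    )
""")

def state_as_of(con, cutoff_seq):
    """Replay the log up to cutoff_seq, rebuilding {(tbl, pk, col): value}."""
    state = {}
    for tbl, pk, col, val in con.execute(
            "SELECT tbl, pk, col, new_val FROM change_log "
            "WHERE seq <= ? ORDER BY seq", (cutoff_seq,)):
        state[(tbl, pk, col)] = val
    return state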
