Re: How to ensure data consistency?

From: ddtl <fake_at_address.com>
Date: Tue, 07 Sep 2004 23:24:47 -0700
Message-ID: <h98tj0p6bo8e15h4dmsbn68cul78u2ldjr_at_4ax.com>


>Well, it looks as if this could be done using triggers.

Initially I also thought so, but after checking the documentation (for PostgreSQL) I realized that it is probably wrong: a trigger fires either before or after the row is inserted into the table. If it fires before, it can change the row submitted by the user (or reject it). So, suppose that you insert something into the super-table. The submitted data passes all the tests - everything is correct there - so the trigger has nothing to do: no reason to reject the query. The trigger cannot say: "hey, your query is indeed correct and I allowed it to pass, but you didn't fill in the additional properties in the sub-table" - after the row is stored, the trigger is gone.
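For what it's worth, the usual way around this timing problem is to defer the check from row time to commit time instead of using a plain row trigger. PostgreSQL supports this via DEFERRABLE INITIALLY DEFERRED constraints (and CREATE CONSTRAINT TRIGGER). A minimal sketch of the idea, using sqlite3 only so the example is self-contained, with made-up table names (super/sub): the super-table carries a deferred FK back to the sub-table, so a super row without its sub row is rejected when the transaction commits.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Hypothetical schema: sub holds the additional properties, super the
# common ones. The circular, deferred FK from super back to sub is the
# trick: it is not checked per-row, only at COMMIT.
conn.executescript("""
    CREATE TABLE sub (
        t_id  INTEGER PRIMARY KEY
              REFERENCES super (t_id),
        extra TEXT NOT NULL
    );
    CREATE TABLE super (
        t_id   INTEGER PRIMARY KEY
               REFERENCES sub (t_id) DEFERRABLE INITIALLY DEFERRED,
        common TEXT NOT NULL
    );
""")

# Complete pair inside one transaction: commit succeeds.
conn.execute("INSERT INTO super VALUES (1, 'common stuff')")
conn.execute("INSERT INTO sub   VALUES (1, 'extra stuff')")
conn.commit()

# Super row alone: the deferred check fires at COMMIT and rejects it.
conn.execute("INSERT INTO super VALUES (2, 'lonely row')")
try:
    conn.commit()
    print("commit succeeded (bug!)")
except sqlite3.IntegrityError:
    conn.rollback()
    print("commit rejected: no matching sub row")
```

The trigger itself never needed to "come back later" - the deferred constraint is what looks at the final state of the transaction.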

>Another approach
>would be using a different data model, illustrated below. However, there
>still needs to be validation (with triggers?) that the id's of the
>specified specialization are non null.
>
>TS
>t_id (PK)
>specialization_type: Either "a", "b", or "c".
>b_id (FK)
>c_id (FK)
>[...]

Is that the main (super) table? If I understand correctly, you want to put into each row a number of columns for the common properties and then add N more columns, where N is the number of groups with additional properties, each of them an FK? If so, it won't work (try it!). Maybe I don't understand what you mean, though.

Received on Wed Sep 08 2004 - 08:24:47 CEST
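On the validation the quoted text asks about (non-null id for the chosen specialization): that part, at least, can be expressed declaratively with a CHECK constraint rather than a trigger. A sketch using sqlite3 for self-containedness; the names (TS, specialization_type, b_id, c_id) come from the quoted message, the "[...]" columns are left out, and whether the model as a whole is workable is exactly what is under debate here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TS (
        t_id INTEGER PRIMARY KEY,
        specialization_type TEXT NOT NULL
            CHECK (specialization_type IN ('a', 'b', 'c')),
        b_id INTEGER,   -- FK to the b sub-table, omitted here
        c_id INTEGER,   -- FK to the c sub-table, omitted here
        -- exactly the matching id must be filled in, the others left NULL
        CHECK ((specialization_type = 'b') = (b_id IS NOT NULL)
           AND (specialization_type = 'c') = (c_id IS NOT NULL))
    );
""")

# A 'b' row with its b_id filled in passes.
conn.execute("INSERT INTO TS VALUES (1, 'b', 10, NULL)")

# A 'b' row with no b_id is rejected at INSERT time - no trigger needed.
try:
    conn.execute("INSERT INTO TS VALUES (2, 'b', NULL, NULL)")
    print("accepted (bug!)")
except sqlite3.IntegrityError:
    print("rejected: b_id missing for specialization 'b'")
conn.commit()
```

Note this only checks the ids are present and mutually exclusive; it does not, by itself, guarantee the referenced sub-table rows carry sensible data.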
