Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Referential Integrity Enforcement: Do With Constraints or Triggers?

From: internetmaster <youlove_at_me.com>
Date: Fri, 26 Jul 2002 17:55:57 -0400
Message-ID: <3D41C56D.2000707@me.com>


In an earlier thread I mentioned the following observation:

http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=a20d28ee.0207260158.3ecabf94%40posting.google.com&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26group%3Dcomp.databases.oracle.tools

I was wrong about one thing. The database does have primary keys defined, but no foreign keys. I went browsing through the data dictionary and found that the database has a ton of triggers. It appears they're using triggers to enforce referential integrity rather than using constraints.

They have triggers on the parent tables that handle cascading deletes and prevent illegal inserts on child tables where the key value doesn't exist on the parent table.
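For concreteness, here is a sketch of what that trigger-based approach typically looks like. The table, column, and trigger names below are made up for illustration; I haven't seen the actual code in this database.

```sql
-- Hypothetical sketch of trigger-enforced RI (names are invented).

-- Cascade the delete from the parent to one child table:
CREATE OR REPLACE TRIGGER project_cascade_del
  AFTER DELETE ON project
  FOR EACH ROW
BEGIN
  DELETE FROM project_task
   WHERE project_id = :OLD.project_id;
END;
/

-- Reject child inserts whose key has no matching parent row:
CREATE OR REPLACE TRIGGER project_task_chk_parent
  BEFORE INSERT ON project_task
  FOR EACH ROW
DECLARE
  n NUMBER;
BEGIN
  SELECT COUNT(*) INTO n
    FROM project
   WHERE project_id = :NEW.project_id;
  IF n = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Parent project does not exist');
  END IF;
END;
/
```

One caveat worth noting: a SELECT inside a trigger doesn't see uncommitted rows from other sessions, so checks like this can let orphans slip through under concurrent activity unless extra locking is coded in, whereas a declared foreign key handles that for you.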

My question is: is it less expensive to handle the RI this way? I can see where creating a ton of constraints would get to be cumbersome, and perhaps triggers would be easier from a maintenance perspective.

For example, there is one parent table (Project) that has about 15 child tables associated to it logically. The developers of this application created 2 triggers to handle the insert and delete anomalies, whereas 30 constraints would have had to be coded.
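For comparison, the declarative version would be one foreign-key clause per child table (again with hypothetical names); a single constraint covers both the illegal-insert check and the cascading delete:

```sql
-- Hypothetical declarative equivalent: one constraint per child table
-- replaces both the insert-check and the cascade-delete trigger logic.
ALTER TABLE project_task
  ADD CONSTRAINT fk_task_project
  FOREIGN KEY (project_id)
  REFERENCES project (project_id)
  ON DELETE CASCADE;
```

Repeated for each of the 15 child tables, that's 15 such clauses against the Project parent.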

From a performance perspective, what are the ramifications of this design decision? How expensive is a constraint compared to a trigger?

Thanks.

Received on Fri Jul 26 2002 - 16:55:57 CDT

