Re: Data Constraints AND Application Constraints

From: FrankHamersley <>
Date: Fri, 18 Mar 2005 11:10:58 GMT
Message-ID: <6Hy_d.2574$>

Kenneth Downs wrote:
> FrankHamersley wrote:

>>Now that the "vs" thread has subsided I got to wondering how many
>>practitioners of database constraints as a key element of schema designs
>>consider there remains a need for further constraint (related) checking
>>in the application code itself?
>>I will declare in advance my predilection is that no one design layer
>>trusts the layers above or below it.  I accept the overheads it incurs
>>and find this multiple-sieving approach often traps inadvertent
>>design or rendition faults in the layers above or below.  These are not
>>(often) technical integrity failures, more likely business rule
>>oversights, but when left in place they do occasionally trap "real weird
>>sh:t" - especially the intermittent bugs that almost always combine with
>>Murphy's Law for maximum impact.
>>What is the consensus/spectrum of views?
>>Cheers, Frank.

> Our view goes like this:
> The authoritative copy of the business rules is a machine-readable spec that
> resides outside of the conventional tiers, but it is used to implement them
> and test them.
> At implementation, our design has the database server implement all business
> rules, including not just constraints but also automation. A builder diffs
> the db to the spec and reconciles.
> The builder then provides a copy of the spec to the next layer up, in its
> native form. This layer, being a desktop client or intermediate tier,
> knows about rules and implements them only insofar as it would be a
> convenience to the user, normally to save a round trip to the db server.
> Generally though the db server has to be the final implemented authority on
> what happens, because any other path allows stray agents to connect to the
> db server and thwart the developers' intent.

All good points so far. Given that performance is always a pet subject of mine, I can particularly appreciate the point about limiting inter-layer traffic.
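The division of labour described above - the database as final authority, with the client duplicating a rule only as a convenience to save a round trip - can be sketched as follows. This is a minimal illustration, not anyone's actual builder output; the schema, table, and function names are all hypothetical, and SQLite stands in for whatever DBMS is in play.

```python
import sqlite3

# The database remains the final authority via a CHECK constraint;
# the client mirrors the rule only to avoid a wasted round trip on
# obviously bad input. Schema and names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    "  id INTEGER PRIMARY KEY,"
    "  qty INTEGER NOT NULL CHECK (qty > 0)"
    ")"
)

def qty_looks_valid(qty):
    """Client-side mirror of the CHECK constraint (a convenience only)."""
    return isinstance(qty, int) and qty > 0

def place_order(qty):
    if not qty_looks_valid(qty):      # saves a round trip on bad input
        return "rejected client-side"
    try:
        conn.execute("INSERT INTO orders (qty) VALUES (?)", (qty,))
        return "accepted"
    except sqlite3.IntegrityError:
        return "rejected by database" # the server is still the authority
```

Note that dropping `qty_looks_valid` changes nothing about correctness, only about traffic: the CHECK constraint catches the same inputs, one round trip later.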

> The "Really weird sh:t" that you mention has always turned out in my
> experience to be traceable to db design artifacts. The best solution we've
> come up with is a basic insistence that many iterations of development must
> be affordable, so that we can get those design flaws out by successive
> approximation.

This subject is the real meat of my original post/interest (apols to all vegan readers) :-)

I accept there is always a place for iterative improvement; of course, though, we all strive to build solutions that go live with as few defects as possible, and especially to avoid crippling bugs that risk sinking the project entirely!

Back to "really weird sh:t" - again I accept that almost all failures are design flaws of some sort - but a key point is that not all of them are yours. As evidenced by the need for patches to DBMS packages (and OSes), none of the underlying software is flawless, and the users certainly aren't.

So the question restated is: how much trust should you put in the layers above and below? My view is that trusting any other layer - even a fully constrained database - is "fair weather" programming.
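The point about stray agents made earlier in the thread illustrates why the distrust has to run in both directions: even if every application-layer check is bypassed, a constraint held in the schema still holds the line. A minimal sketch, assuming a hypothetical accounts table in SQLite:

```python
import sqlite3

# "No layer trusts the next": a stray agent talking straight to the
# database bypasses all application checks, but the schema constraint
# still rejects the bad state. Schema and names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts ("
    "  id INTEGER PRIMARY KEY,"
    "  balance INTEGER NOT NULL CHECK (balance >= 0)"
    ")"
)
conn.execute("INSERT INTO accounts (balance) VALUES (100)")

def stray_withdrawal(amount):
    """Hits the DB directly, skipping every application-layer check."""
    try:
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = 1",
            (amount,),
        )
        return "applied"
    except sqlite3.IntegrityError:
        return "blocked by schema constraint"
```

An overdraft attempt (`stray_withdrawal(500)` against a balance of 100) raises an integrity error at the database, with no cooperation needed from the layer above.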

IMO this poor kung fu leads to inherently buggy or tragic performance outcomes, much like the parlous security profile exhibited by the most commonly used x86 OS.

Personally I always build for Roaring Forties and Southern Ocean conditions (when I can get away with it, and often even when I can't), because when the "sh:t hits the fan" my code at least squawks loudly about where the fault originated, and can often typify it well enough to lead to a permanent repair, or at least a better recovery outcome.
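One way to make a layer "squawk loudly" is to wrap the lower layer's error with enough context to typify the fault and name its origin, rather than swallowing it. A sketch under stated assumptions: the `LayerFault` class, layer names, and `register_part` function are all illustrative inventions, with SQLite again standing in for the DBMS.

```python
import sqlite3

class LayerFault(Exception):
    """Illustrative wrapper: records which layer rejected the data and why."""
    def __init__(self, layer, operation, cause):
        super().__init__(f"[{layer}] {operation} failed: {cause}")
        self.layer = layer

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (sku TEXT PRIMARY KEY)")

def register_part(sku):
    try:
        conn.execute("INSERT INTO parts (sku) VALUES (?)", (sku,))
    except sqlite3.IntegrityError as exc:
        # Re-raise with the fault typified: which layer, which operation,
        # and the underlying cause - a loud squawk instead of a silent fail.
        raise LayerFault("db-server", f"register_part({sku!r})", exc) from exc

register_part("A-100")
```

A duplicate `register_part("A-100")` now surfaces as `[db-server] register_part('A-100') failed: ...`, pointing the repair effort at the right layer instead of leaving an intermittent mystery for Murphy's Law to exploit.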

Cheers, Frank.
