Re: Data Constraints AND Application Constraints
Date: Fri, 18 Mar 2005 11:10:58 GMT
Message-ID: <6Hy_d.2574$C7.1568_at_news-server.bigpond.net.au>
Kenneth Downs wrote:
> FrankHamersley wrote:
>
>>Now that the "vs" thread has subsided I got to wondering how many
>>practitioners of database constraints as a key element of schema designs
>>consider there remains a need for further constraint (related) checking
>>in the application code itself?
>>
>>I will declare in advance my predilection is that no one design layer
>>trusts the layers above or below it. I accept the overheads this incurs,
>>and find this multiple sieving approach often traps inadvertent
>>design or rendition faults in the layers above or below. These are not
>>(often) technical integrity failures, more likely business rule
>>oversights, but when left in place they do occasionally trap "real weird
>>sh:t" - especially the intermittent bugs that almost always combine with
>>Murphy's Law for maximum impact.
>>
>>What is the consensus/spectrum of views?
>>
>>Cheers, Frank.
>
> Our view goes like this:
>
> The authoritative copy of the business rules is a machine-readable spec that
> resides outside of the conventional tiers, but is used to implement them
> and test them.
>
> At implementation, our design has the database server implement all business
> rules, including not just constraints but also automation. A builder diffs
> the db to the spec and reconciles.
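As an aside for anyone following along, that diff-and-reconcile step might look roughly like this in toy form. This is my own sketch, not Kenneth's builder: the spec format, table, and column names are entirely invented for illustration.

```python
# Hypothetical sketch of "diff the db to the spec": compare a
# machine-readable spec of tables/columns against what the live
# database reports, and emit the changes needed to reconcile them.
# Both dictionaries below stand in for real catalog queries.

spec = {"account": {"id": "INTEGER", "balance": "INTEGER"}}  # desired schema
live = {"account": {"id": "INTEGER"}}                        # what the db has now

def diff(spec, live):
    """Return a list of changes that would bring 'live' in line with 'spec'."""
    changes = []
    for table, cols in spec.items():
        if table not in live:
            changes.append(f"CREATE TABLE {table}")
            continue
        for col, typ in cols.items():
            if col not in live[table]:
                changes.append(f"ADD COLUMN {table}.{col} {typ}")
    return changes

print(diff(spec, live))  # → ['ADD COLUMN account.balance INTEGER']
```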
>
> The builder then provides a copy of the spec to the next layer up, in its
> native form. This layer, being a desktop client or intermediate tier,
> knows about rules and implements them only insofar as it would be a
> convenience to the user, normally to save a round trip to the db server.
>
> Generally though the db server has to be the final implemented authority on
> what happens, because any other path allows stray agents to connect to the
> db server and thwart the developers' intent.
All good points so far. Given that performance is a pet subject of mine, I can particularly appreciate the point about limiting inter-layer traffic.
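To make the arrangement concrete, here is a minimal sketch (mine, not a quote of Kenneth's system) of the two points together: the rule lives in the schema as a CHECK constraint, a well-behaved client may pre-check it to save a round trip, but a stray agent connecting directly still cannot thwart it. It uses Python's built-in sqlite3; the table and the non-negative-balance rule are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        -- the business rule lives in the schema, not the client:
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")

def client_side_ok(balance):
    """Convenience duplicate of the rule: saves a round trip on bad input."""
    return balance >= 0

# A stray agent that skips the client check still gets stopped by the db:
try:
    conn.execute("INSERT INTO account (balance) VALUES (?)", (-50,))
except sqlite3.IntegrityError as e:
    print("rejected by the database:", e)
```

The duplication is deliberate: the client copy exists only as a user convenience, while the database copy is the one that actually decides.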
> The "Really weird sh:t" that you mention has always turned out in my
> experience to be traceable to db design artifacts. The best solution we've
> come up with is a basic insistence that many iterations of development must
> be affordable, so that we can get those design flaws out by successive
> approximation.
This subject is the real meat of my original post/interest (apols to all vegan readers) :-)
I accept there is always a place for iterative improvement; of course, though, we all strive to build solutions with as few defects as possible when going live, and above all to avoid the crippling bugs that risk sinking a project entirely!
Back to "really weird sh:t" - again, I accept that almost all failures are design flaws of some sort, but a key point is that not all of them are yours. As evidenced by the need for patches to DBMS packages (and OS's), none of the underlying software is flawless - and the users certainly aren't.