Re: DB classical structure violation

From: Bernard Peek <bap_at_shrdlu.com>
Date: Thu, 27 Jun 2002 13:43:22 +0100
Message-ID: <2DZS4OFqhwG9EwaH_at_diamond9.demon.co.uk>


In message <3D1624D7.1E0D_at_ix.netcom.com>, Lee Fesperman <firstsql_at_ix.netcom.com> writes
>Mariano Abdala wrote:
>>
>> Of course the idea is to keep the integrity. What we'd be gaining is
>> redundancy, in order to improve performance in some cases (at least
>> that's what I'm trying to prove).
>
>But, redundancy directly impacts integrity. Redundancy means you have
>the same 'fact' in two different places. If the recording of the fact
>doesn't match in each place, then you have an integrity problem.
>
>To avoid that, you must add constraints to ensure that multiple
>recordings of the fact do match. This will affect performance.

Sometimes. It's a choice between different inefficiencies. The constraints only need to be tested when the denormalised data is changed. If that data is completely static, then the constraints only have to be applied when the database is first constructed, and there is zero performance impact at run-time.
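A minimal sketch of the trade-off being discussed, using SQLite from Python. The schema and names (customers/orders, a redundant customer_name column) are hypothetical, not from the thread: the trigger is the consistency constraint, and note it fires only on writes to the denormalised column, so reads of the redundant copy pay no constraint cost.

```python
import sqlite3

# Hypothetical schema: orders denormalises the customer's name
# (normally stored only in customers) to avoid a join on reads.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    customer_name TEXT NOT NULL   -- redundant copy of customers.name
);
-- The constraint, expressed as a trigger: it keeps the redundant
-- copy consistent, and is tested only when denormalised data changes.
CREATE TRIGGER orders_name_check
BEFORE INSERT ON orders
FOR EACH ROW WHEN NEW.customer_name !=
    (SELECT name FROM customers WHERE id = NEW.customer_id)
BEGIN
    SELECT RAISE(ABORT, 'customer_name does not match customers.name');
END;
""")

con.execute("INSERT INTO customers VALUES (1, 'Mariano')")
con.execute("INSERT INTO orders VALUES (1, 1, 'Mariano')")  # consistent: accepted

try:
    con.execute("INSERT INTO orders VALUES (2, 1, 'Lee')")  # mismatch: rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# Reads pay no constraint cost: the name comes straight from orders,
# with no join and no trigger execution.
name, = con.execute("SELECT customer_name FROM orders WHERE id = 1").fetchone()
```

If the denormalised data really is static, the trigger (or an equivalent one-off consistency query) need only run while the database is being loaded, which is the zero-run-time-cost case described above.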

-- 
Bernard Peek
bap_at_shrdlu.com

In search of cognoscenti
Received on Thu Jun 27 2002 - 14:43:22 CEST
