Re: Nearest Common Ancestor Report (XDb1's $1000 Challenge)

From: Hugo Kornelis <hugo_at_pe_NO_rFact.in_SPAM_fo>
Date: Wed, 02 Jun 2004 09:23:57 +0200
Message-ID: <>

On Tue, 01 Jun 2004 17:42:56 -0700, Gene Wirchenko wrote:

>Hugo Kornelis <hugo_at_pe_NO_rFact.in_SPAM_fo> wrote:
>>On Tue, 01 Jun 2004 17:16:32 -0300, Leandro Guimaraens Faria Corsetti
>>Dutra wrote:
>>>Em Sat, 22 May 2004 14:20:47 +0200, Hugo Kornelis escreveu:
>>>> There may be very valid reasons for violating
>>>> normalization rules in the design of a relational database.
>>> None whatsoever, if so which?
>>Denormalizing to increase performance. I'm sure you've heard about it.
> Done due to inadequate tools.

Hi Gene (and Leandro too),

As I already mentioned in this discussion, I'm not a theorist. The tools may be inadequate, but they're the tools I've got to work with. As soon as I find a customer who is willing to pay me to work on a "true" relational database, I will. Until then, I'll stick to "untrue" relational databases such as SQL Server.

> Normalisation is done at the
>logical level. Mucking up ones logical design over an implementation
>issue is not good. You can get into horrible messes with loss of
>database integrity.

That's exactly why I made the distinction between DEnormalized design and UNnormalized design. First, you make a proper logical design. Then, if necessary (and only then), you may decide to denormalize. Every denormalization gets documented. Every possible step that helps reduce the risk of loss of data integrity should be taken.
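To make that concrete, here is a minimal sketch of the idea: a redundant total column added for read performance, with the redundancy documented in the schema and guarded by triggers so the copies can't drift apart. The schema, table, and column names are invented for illustration (the post names no specific schema), and SQLite via Python stands in for whatever engine you actually use.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (
    OrderID    INTEGER PRIMARY KEY,
    -- DENORMALIZED: duplicates SUM(OrderLines.Amount) so reads avoid
    -- an aggregate join. Documented here; maintained by the triggers
    -- below, never by application code.
    OrderTotal NUMERIC NOT NULL DEFAULT 0
);
CREATE TABLE OrderLines (
    OrderID INTEGER NOT NULL REFERENCES Orders(OrderID),
    LineNo  INTEGER NOT NULL,
    Amount  NUMERIC NOT NULL,
    PRIMARY KEY (OrderID, LineNo)
);
-- Triggers guard the redundancy: every change to the detail rows is
-- reflected in the denormalized total automatically.
CREATE TRIGGER trg_lines_ins AFTER INSERT ON OrderLines
BEGIN
    UPDATE Orders SET OrderTotal = OrderTotal + NEW.Amount
    WHERE OrderID = NEW.OrderID;
END;
CREATE TRIGGER trg_lines_del AFTER DELETE ON OrderLines
BEGIN
    UPDATE Orders SET OrderTotal = OrderTotal - OLD.Amount
    WHERE OrderID = OLD.OrderID;
END;
""")

conn.execute("INSERT INTO Orders (OrderID) VALUES (1)")
conn.execute("INSERT INTO OrderLines VALUES (1, 1, 10), (1, 2, 15)")
conn.execute("DELETE FROM OrderLines WHERE OrderID = 1 AND LineNo = 2")
total = conn.execute(
    "SELECT OrderTotal FROM Orders WHERE OrderID = 1"
).fetchone()[0]
print(total)  # 10
```

The same pattern carries over to SQL Server with T-SQL triggers (or, for this particular case, an indexed view); the point is that the denormalization is declared once in the schema, not scattered through application code.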

Best, Hugo


(Remove _NO_ and _SPAM_ to get my e-mail address)
Received on Wed Jun 02 2004 - 09:23:57 CEST
