Re: Normalizing vs. Denormalizing

From: Austin Moseley <moseba_at_audv55.aud.alcatel.com>
Date: 1996/04/25
Message-ID: <4lp398$b1o_at_news01.aud.alcatel.com>


>In article <4km5s7$4pl_at_maverick.tad.eds.com>

>>: to spend time and effort down the road (probably very soon)on your existing
>>: system. RDMS such as Oracle are designed to handle normalization very
>>: well even with the basic configuration and with a help of a good DBA you
>>: can design a 3rd or even 4th deg. normalize system without any significant
>>: performance problem. Consider demormalization only when the performance
>>
>>Bull. Accepting an answer like this requires little thought from the
>>acceptor, of course. Unfortunately, as a result, little thought will have
>>gone into the use for which the database is being designed. Normalizing
>>just because "it's what you should do" is really stupid. I wouldn't make
>>my design decisions based on what Oracle (et al) are supposed to do, but
>>rather on what use my customers will be putting their data to.
>

Performance is the number one reason to normalize. If a table is read-only and fairly small, then a denormalized schema with the appropriate indexes is fine.

But all other tables, especially those that are updated often and carry lots of related data, should be normalized. Otherwise you will waste a lot of CPU, memory, and disk space holding and manipulating duplicate data and needlessly large indexes, all while fighting fragmentation, and the whole problem will get exponentially worse over time.

To see this, create two normalized tables with a cross-reference (xref) table between them, add one-to-many and many-to-many rows, then denormalize the data into a single table: you will end up with a lot of duplicate data, your indexes will grow in size, and so will the rate of fragmentation.
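The experiment above can be sketched in a few lines of SQL. This is a minimal illustration run through Python's sqlite3 module (any RDBMS would do); the author/book schema and all table and column names are my own illustrative choices, not from the original post.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two normalized tables plus an xref table for the many-to-many link.
cur.executescript("""
CREATE TABLE author (author_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE book   (book_id   INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE author_book (author_id INTEGER, book_id INTEGER);
""")

cur.executemany("INSERT INTO author VALUES (?, ?)",
                [(1, "Codd"), (2, "Date")])
cur.executemany("INSERT INTO book VALUES (?, ?)",
                [(10, "Relational Model"), (20, "SQL Guide")])
# Many-to-many rows: both authors appear on both books.
cur.executemany("INSERT INTO author_book VALUES (?, ?)",
                [(1, 10), (1, 20), (2, 10), (2, 20)])

# Denormalize everything into one wide table: one row per pairing.
cur.execute("""
CREATE TABLE denorm AS
SELECT a.name, b.title
FROM author a
JOIN author_book ab ON a.author_id = ab.author_id
JOIN book b         ON ab.book_id  = b.book_id
""")

# Every name and every title is now stored twice -- the duplication
# that inflates indexes and feeds fragmentation as rows are updated.
rows = cur.execute(
    "SELECT name, title FROM denorm ORDER BY name, title").fetchall()
for name, title in rows:
    print(name, "|", title)
```

With only two authors and two books the denormalized table already stores each value twice; each additional xref row multiplies the duplication, which is the growth the post is describing.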

-Austin

Received on Thu Apr 25 1996 - 00:00:00 CEST