Re: Normalizing vs. Denormalizing

From: Will Kooiman <wkooiman@csac.com>
Date: 1996/03/20
Message-ID: <4iq2ia$jii@news01.aud.alcatel.com>


epepa@aol.com (EPepa) wrote:
>I'm in the process of helping to develop an Oracle database for a plant
>that's very concerned with speed. They enter about 5000 orders per month
>and each order is one record (no detail lines). Since I took the Oracle
>classes, I have the normalization techniques down pretty well, but my
>concern is the time that it would take in disk reads in order to access
>the several tables required to normalize this company's orders. (Each
>order should store information into several tables that would have 4-5
>sets of 5-10 pieces of data each, for a total of about 35 denormalized
>columns.) If I denormalize the data, it would reduce the number of disk
>reads and all of the information should be fairly contiguous, but it seems
>to me that that would defeat the purpose of a relational database.
>
>Does anyone have any ideas on where you would draw the line?

Be very careful about denormalizing this design. In almost every case, a normalized design will outperform a denormalized design.
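To make that concrete, here's a minimal sketch of the kind of normalized layout I mean. All the table and column names are made up, since I don't know your actual data:

    CREATE TABLE customers (
        cust_id    NUMBER PRIMARY KEY,
        cust_name  VARCHAR2(40)
    );

    CREATE TABLE orders (
        order_no   NUMBER PRIMARY KEY,
        cust_id    NUMBER REFERENCES customers,
        order_date DATE
    );

    -- One row per repeating group, instead of stringing 4-5 sets
    -- of 5-10 columns across one wide order record.
    CREATE TABLE order_details (
        order_no   NUMBER REFERENCES orders,
        detail_seq NUMBER,
        detail_val VARCHAR2(40),
        PRIMARY KEY (order_no, detail_seq)
    );

Each repeating group becomes rows in a child table keyed by the order number, and the wide 35-column record goes away.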

Oracle actually joins tables very fast. I have worked on several systems that regularly joined 15-20 tables and still had response times under a second. One of them ran on a "slow" RS/6000 with 300,000+ records.
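For example, pulling one order back together from the sketch above is just an indexed join. The primary keys already give Oracle the indexes it needs, so each table is resolved in a few block reads (same made-up names as before):

    SELECT o.order_no, o.order_date, c.cust_name, d.detail_val
      FROM orders o, customers c, order_details d
     WHERE o.order_no = :order_no
       AND c.cust_id  = o.cust_id
       AND d.order_no = o.order_no
     ORDER BY d.detail_seq;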

Conversely, EVERY system I have ever worked on that had performance problems was "denormalized for performance".

Also, Oracle does block I/O, not record I/O. Normalized databases tend to fit more records per block, because data isn't duplicated, and that translates into LESS I/O.
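A back-of-envelope illustration, with row sizes invented purely for the arithmetic: say the 35-column denormalized order row runs 400 bytes, while the normalized order header runs 100 bytes because the repeating data lives in child tables. In a 4K block that's roughly 10 rows per block versus 40, so scanning a month's orders looks like:

    denormalized: 5000 orders / ~10 rows per block = ~500 blocks read
    normalized:   5000 orders / ~40 rows per block = ~125 blocks read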

This is a huge topic, and the only way to really settle it is with lots of debate and/or tests.

Will Kooiman
Computer Systems Authority
wkooiman@csac.com