Re: A Normalization Question

From: Neo <neo55592_at_hotmail.com>
Date: 12 Jul 2004 20:54:10 -0700
Message-ID: <4b45d3ad.0407121954.19f3b2a9_at_posting.google.com>


> From "Fundamentals of Database Systems", Elmasri/Navathe, 3rd Ed.
> (Electronic Version) Addison-Wesley Copyright 2000:
> "Normalization of data can hence be looked upon as a process of analyzing
> the given relation schemas based on their FDs and primary keys to achieve
> the desirable properties of (1) minimizing redundancy and (2) minimizing the
> insertion, deletion, and update anomalies..."

The above definition is given in the context of the RM and is limited, because not all data models use or require a relation (as defined by the RM). The above definition is a subset of mine: with respect to dbs, normalization is the process of eliminating duplicate things, or replacing them with a reference to the original thing (ie fact, entity, object, relation, etc) being represented. This is equivalent to xNF, where x can be infinite.
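A minimal sketch of that definition (my illustration, not from any db product): each distinct value is stored once, and facts hold a reference to it rather than a duplicate copy. The `Store` class and its method names are hypothetical.

```python
# Toy illustration of "replace duplicates with a reference to the original":
# distinct values are stored once; facts refer to them by id.

class Store:
    """A toy value store: each distinct value kept once, referenced by id."""
    def __init__(self):
        self._by_value = {}   # value -> id
        self._by_id = []      # id -> value

    def ref(self, value):
        """Return the id for `value`, storing it only on first sight."""
        if value not in self._by_value:
            self._by_value[value] = len(self._by_id)
            self._by_id.append(value)
        return self._by_value[value]

    def deref(self, ref_id):
        """Return the value a reference points at."""
        return self._by_id[ref_id]

store = Store()
# Three facts mention 'brown'; the string is stored once, referenced thrice.
rows = [("hair", store.ref("brown")),
        ("eyes", store.ref("brown")),
        ("car", store.ref("brown"))]
assert len(store._by_id) == 1   # no redundant copies of 'brown'
```

One update to the single stored value is then seen through every reference, which is exactly why the duplication-free form avoids update anomalies.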

> 1. "Normalization of data can hence be looked upon as a process of analyzing
> the given relation schemas based on their FDs and primary keys..."

A limited definition, as not all data models use relations.

> 3. Nowhere is it ever stated that a degree of redundancy approaching zero is
> the goal, or even a desireable outcome. Normalizing to the nth degree is
> not necessarily a good thing.

Whether an xNF is good or bad is a judgement that I am not making. What I am pointing out is that a db containing the string 'brown' three times has redundant data and can result in update anomalies.

Received on Tue Jul 13 2004 - 05:54:10 CEST
