Re: MV Keys

From: Marshall Spight <marshall.spight_at_gmail.com>
Date: 1 Mar 2006 17:01:24 -0800
Message-ID: <1141261284.404545.215620_at_v46g2000cwv.googlegroups.com>


dawn wrote:
>
> > The others have formal definitions, based on functional
> > dependencies. 1NF is mostly informally defined (except for Date's
> > definition, which however renders 1NF pointless and irrelevant---but
> > then, so is 2NF, and arguably 3NF),
>
> Yes, and this damages the def of all NF's because they are all defined
> first as requiring 1NF.

Meh; it doesn't seem like a big deal to me. If we drop the whole idea of 1NF entirely, does anything of significance change in the definition of the other normal forms?

> Although all NF's are defined as first requiring 1NF, at least with
> Codd and his camps.

Indirectly, insofar as for all N, normal form N subsumes the definition of normal form N-1. Again, is there any reason we should care?

> I agree with Date tossing out normalization as defined by Codd, but not
> with his attempt to redefine 1NF to be meaningless. It is difficult to
> move forward if people still have the mistaken impression that database
> theory requires that we remove repeating groups.

Repeating groups are still a bad idea, regardless of whether we allow nested relations or not. I don't agree that allowing nested relations means we're going to want to start using repeating groups.
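To make the distinction concrete, here is a minimal sketch (hypothetical field names, not from the thread) of the difference between a repeating group, i.e. fixed positional slots stamped into the record, and a set-valued (nested) attribute:

```python
# A "repeating group": a fixed number of positional slots in one record.
# Adding a fourth phone means changing the schema; querying means
# enumerating every slot by name.
flat = {"name": "Ann", "phone1": "555-0101", "phone2": "555-0102", "phone3": None}

# A nested relation: the phones form a single set-valued attribute.
nested = {"name": "Ann", "phones": {"555-0101", "555-0102"}}

# Membership test against the repeating group needs knowledge of all slots:
in_flat = "555-0102" in {flat["phone1"], flat["phone2"], flat["phone3"]}

# Against the nested attribute it is a single, slot-independent test:
in_nested = "555-0102" in nested["phones"]
```

The point being argued: the awkwardness lives in the positional slots, and accepting nested relations does not force you back into them.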

> The redefinitions of the term "normalize" make it difficult to
> communicate too. The industry no longer thinks we must normalize data
> and many, like me, don't even think that should be a goal.

Really? You think redundancy and update anomalies are no problem? Or are you using your super-narrow definition of "normalization" here?

> Functional
> dependencies are another matter. If we want to redefine the term
> normalize, we should be clear that it no longer requires 1NF (formally
> named "normalization" by Codd).

The term "normalize" refers to any of a bazillion different methodologies for putting structured data into a canonical form. For example, case insensitive string comparison can be effected by doing a literal string compare on two strings that have been normalized to lower case.
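The case-folding example above can be sketched in a few lines (a minimal illustration, not anything from the thread; `casefold` is Python's slightly stronger version of lowercasing):

```python
def normalize(s: str) -> str:
    # Put the string into a canonical form so that equality of the
    # normalized values models case-insensitive equality of the originals.
    # casefold() handles cases plain lower() misses, e.g. German sharp s.
    return s.casefold()

def ci_equal(a: str, b: str) -> bool:
    # Case-insensitive comparison effected as a literal comparison
    # of two normalized strings.
    return normalize(a) == normalize(b)
```

The general pattern is the same for any canonicalization: compare (or index, or deduplicate) the normalized forms rather than the raw values.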

So we're not really "redefining" anything. In fact, even if we dropped the idea of 1NF entirely, I still wouldn't call that a redefinition of anything.

What it comes down to is a question of whether 1NF is useful and convenient or not. (This is assuming no one shows me an update anomaly associated with 1NF.) It looks to me like it's not. I don't therefore conclude that this idea requires a complete redefinition of everything. I don't see how, say, the definition of BCNF will need any alteration at all.

Marshall

Received on Thu Mar 02 2006 - 02:01:24 CET