Re: Normalisation

From: Jon Heggland <>
Date: Sat, 9 Jul 2005 14:04:03 +0200
Message-ID: <>

In article <xfCze.140548$>, says...
> >>>A set can be transformed into a unary relation with a simple operation.
> >>>A string can be transformed into a binary relation (integer and
> >>>character) with a simple operation.
> >>
> >>That requires logarithmic space, and not constant space as my
> >>transformation. So it is arguably more complex.
> >
> > Please elaborate. Assuming for the sake of the argument that you are
> > right, so what?
> It indicates that in one case there is a larger similarity than in the
> other because you need more work to do the transformation. You're not
> asking me to explain the stated complexity classes of the operations,
> are you?

Well, yes, I actually am. Sorry if it is trivial, but I don't see the difference. Or the logarithm.

Anyway, that is an implementation matter. The transformation at the logical level is trivial.
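To make the triviality concrete, here is how I would sketch the two transformations mentioned upthread (my own illustration; the function names are made up, and "trivial" here means a one-line comprehension, not a claim about storage cost):

```python
def set_to_relation(s):
    """A set becomes a unary relation: one single-attribute tuple per element."""
    return {(x,) for x in s}

def string_to_relation(text):
    """A string becomes a binary relation of (position, character) pairs, 1-indexed."""
    return {(i, c) for i, c in enumerate(text, start=1)}

# The set {1, 2} becomes the unary relation {(1,), (2,)}:
assert set_to_relation({1, 2}) == {(1,), (2,)}
# The string "abc" becomes {(1, 'a'), (2, 'b'), (3, 'c')}:
assert string_to_relation("abc") == {(1, 'a'), (2, 'b'), (3, 'c')}
```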

> Usually it is relatively well-known which operations are possible in a
> DBMS and which aren't. That makes it in practice actually a quite stable
> notion even though it is a relative one.

Fair enough. It is the odd one out of the normal forms, though, since the others aren't relative in that way. And it has the unfortunate consequence that higher normal forms don't imply 1NF, if I understand you correctly.

> > It is also rather
> > complicated, imo, since you have to refer to operations over signatures
> > and proper classes as opposed to sets/domains.
> The definition does not refer to proper classes, and it is always a bit
> dangerous to call something complicated just because you had trouble
> understanding it. :-)

Complexity can be measured pretty objectively, no? :)

> As any good database researcher

How do you know I'm any good? :)

> you probably know
> and understand the notion of "genericity". Just as a test to see if you
> really understood it, can you tell me the relationship between this
> notion and the notion of 1NF I defined?

It would be easier if you would care to restate your 1NF definition, including (if necessary) the definition of atomicity (and other nontrivial terms), but I'll have a go. :)

"Relation" is a generic (is "generic type" bad form?), and allowing operators that manipulate generics breaks atomicity and leads to paradoxes and optimisation problems. Except the classic relational operator; those are okay.

> I think the situation is a bit more complex than that. For me there are
> actually two logical levels: one at the conceptual level and one at the
> external level (as defined in the ANSI/SPARC model). For the external
> level I would agree that the form of the data model should be dictated
> by purely logical arguments. It should simply properly model how the
> users see their data. No more, no less.
> However, at the conceptual level the task of the model becomes more
> complex. Its job is to unify all the different models of the different
> user groups, but in a relatively implementation independent way. That
> means, for example, that if two groups want to nest the relations
> differently, then they probably should be modeled at this level by flat
> relations. It is for this level that I think 1NF is still useful.

Fair enough, I guess.
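The nesting point does make sense to me when spelled out: a flat relation at the conceptual level can be nested either way on demand for each external view. A made-up sketch (relation and attribute names are mine):

```python
from collections import defaultdict

# Flat conceptual-level relation: (person, project) assignments.
flat = {("alice", "p1"), ("alice", "p2"), ("bob", "p1")}

def nest(relation, key_index):
    """Group a binary relation on one attribute, nesting the other into a set."""
    groups = defaultdict(set)
    for t in relation:
        groups[t[key_index]].add(t[1 - key_index])
    return {k: frozenset(v) for k, v in groups.items()}

# One user group sees projects nested under people...
by_person = nest(flat, 0)
assert by_person["alice"] == frozenset({"p1", "p2"})

# ...another sees people nested under projects. Both views derive from
# the same flat relation, so neither nesting is privileged.
by_project = nest(flat, 1)
assert by_project["p1"] == frozenset({"alice", "bob"})
```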

Received on Sat Jul 09 2005 - 14:04:03 CEST
