Re: A Normalization Question

From: D Guntermann <guntermann_at_hotmail.com>
Date: Fri, 9 Jul 2004 18:08:24 GMT
Message-ID: <I0LJq0.4G_at_news.boeing.com>


"Alan" <not.me_at_uhuh.rcn.com> wrote in message news:UTvHc.49505$MT5.399_at_nwrdny01.gnilink.net...
>
> "Dan" <guntermannxxx_at_verizon.com> wrote in message
> news:YDmHc.40521$qw1.28576_at_nwrddc01.gnilink.net...
> >
> > "Alan" <alan_at_erols.com> wrote in message
> > news:2l2btjF813i3U1_at_uni-berlin.de...
> > >
> > > "Neo" <neo55592_at_hotmail.com> wrote in message
> > > news:4b45d3ad.0407061849.580874d6_at_posting.google.com...
> > > > > Yes, [RM] has limitations, but normalizing data is not one of them.
> > > > > The RM defines normalization.
> > > >
> > > > RM defines a limited form of normalization. The general form of
> > > > normalization, which is the central theme of all xNFs (where x may be
> > > > infinite), allows one to identify 'brown', 'brown', 'brown' as being
> > > > redundant, which XDb1/TDM normalizes.
> > > >
> > > > > You just refuse to accept normalization ...
> > > >
> > > > I accept the general form of normalization that can be applied to all
> > > > data models and allows one to recognize that 'brown', 'brown', 'brown'
> > > > is redundant. I refuse to accept RM's limited form of normalization as
> > > > the general form of normalization which doesn't allow one to recognize
> > > > that 'brown', 'brown', 'brown' is redundant.
> > >
> > > It's not, it's not, it's not. You can preach this nonsense till the cows
> > > come home, but it will never be true. Several examples have been
> > > provided to you.
> > >
> > > >
> > > > > > A better reference is C.J. Date's "An Intro to Database Systems"...
> > > > >
> > > > > How does that make it better?
> > > > > Anyways how would you know it is better since you never read
> > > > > Navathe?
> > > >
> > > > While I have not read all 873 pages of Elmasri/Navathe's "Fund of Db
> > > > Sys" 2nd Ed that sits several books under my C.J. Date's "Intro to Db
> > > > Sys" 6th Ed, I have read enough of it to know that, compared to that of
> > > > Date's p288-9, their fundamental explanation of normalization on p407
> > > > is limited: "Normalization of data can be looked on as a process during
> > > > which unsatisfactory relation schemas are decomposed by breaking up
> > > > their attributes into smaller relation schemas that possess desirable
> > > > properties". C.J. Date's is better because his fundamental explanation
> > > > comes closer to the general form of normalization that can be applied
> > > > to data in any model, even those that don't have any relations.
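
For what it's worth, here is a toy illustration (in Python; the example and
the data are mine, not from either book) of the decomposition the
Elmasri/Navathe definition describes, and of what it does and does not
remove:

    # Assume name -> car and car -> color, so color depends on the key
    # only transitively; the "unsatisfactory" schema is decomposed into
    # two smaller ones, per the definition quoted above.
    people = [("john", "vw",  "brown"),
              ("mary", "bmw", "brown"),
              ("pete", "vw",  "brown")]

    owns   = sorted({(name, car) for name, car, _ in people})
    colors = sorted({(car, color) for _, car, color in people})
    # owns:   [('john', 'vw'), ('mary', 'bmw'), ('pete', 'vw')]
    # colors: [('bmw', 'brown'), ('vw', 'brown')]

The fact that a vw is brown is now recorded once, which is all that RM
normalization promises; two distinct 'brown' strings still remain, and that
remaining duplication is what Neo claims his value-level scheme collapses.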
> > >
> > > That's an old version, maybe 12 years old. They are up to 4th Ed. now.
> > >
> > > Anyway, as I stated elsewhere, I give up.
> > >
> > >
> > Don't give up!
> >
> > I see this discussion as very beneficial to the entire group, or at least
> > for me. It forces everyone to revisit the fundamentals and re-examine our
> > own understanding of them.
> >
> > Actually, if Neo's ideas were practical at the physical level and he never
> > introduced his form of "normalization" to the logical user level as a form
> > of data model, I wouldn't have an issue with his "implementation" at all
> > as long as it behaved functionally as a user would expect -- operations
> > and inference rules over semantic units.
> >
> > - Dan
> >
> >
>
> Unfortunately, I don't agree that his implementation is practical.

I also believe this to be true. I said, "if Neo's ideas were practical at the physical level...".

> You wind up storing nothing but pointers to data. This would be a nightmare
> when it comes time to extract the data. Imagine trying to debug a report.
> Then there is the question of data entry. How would the system know what
> data was already entered? Ex: A user goes to enter the string "Brown" as a
> car color. The system would need to check to see if that string was already
> entered. The overhead would be enormous.

Yes. Or consider machine-to-machine data interchange: another encoding/decoding layer (in effect, a new protocol) would be needed to translate references back into coherent values. The overhead would be enormous, I would think.
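
To make concrete what we are both pointing at, here is a rough sketch (in
Python; XDb1's actual internals are not public, so every name below is mine
and purely illustrative) of a store that keeps each distinct value exactly
once, plus the translation layer an interchange would then need:

    # Each distinct value is stored once; rows hold integer references.
    values = []    # ref -> value
    index = {}     # value -> ref

    def ref_of(value):
        # The entry-time check Alan describes: every insert must first
        # look up whether the value is already stored.
        if value not in index:
            index[value] = len(values)
            values.append(value)
        return index[value]

    cars = [("john", ref_of("brown")),
            ("mary", ref_of("brown")),
            ("pete", ref_of("brown"))]
    # 'brown' is now physically stored once and referenced three times.

    def encode_for_wire(rows):
        # To interchange, the value dictionary must travel with the rows...
        return {"values": values, "rows": rows}

    def decode_from_wire(payload):
        # ...and every field must be dereferenced before the receiver
        # sees an actual value: the extra protocol layer I mean above.
        vals = payload["values"]
        return [(owner, vals[ref]) for owner, ref in payload["rows"]]

    print(decode_from_wire(encode_for_wire(cars)))
    # -> [('john', 'brown'), ('mary', 'brown'), ('pete', 'brown')]

Even with hashing to make the membership test cheap, every write pays the
probe and every read pays the dereference, over every value in the database.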

- Dan
Received on Fri Jul 09 2004 - 20:08:24 CEST
