Re: Is this bad design ?

From: mAsterdam <mAsterdam_at_vrijdag.org>
Date: Thu, 11 Mar 2004 00:03:48 +0100
Message-ID: <404f9ee3$0$558$e4fe514c_at_news.xs4all.nl>


Dawn M. Wolthuis wrote:

> ...
> I'm thinking in terms of the thick of a bell curve -- not those that are
> huge enough to end up in one tail or so small they are in the other.

So yes, computerized and no, not normalized? (checking whether I get your point)

>>> ...
>>>>Dawn, what would you do if we extended the example and said that there
>>>>is also another table called Father, and that every Child must also
>>>>have a Father?  The Child can't be modelled as an attribute of both
>>>>tables, can it?  And then another table called School, and another
>>>>called Gang, ..., all with similar 1:M relationships.  What then?
>>
>>> You are absolutely correct that if the requirements 
>>> were different, then I would model it differently.
>>> I would also write all other aspects of the
>>> applications involved differently.  The requirements rule!

>>They also change.

>
> Yes, yes! We want to project well enough in advance to minimize major
> changes, but then ensure we have an implementation that we can handily
> refactor as needed. In the relational model, just having an attribute of an
> entity that switches to cardinality > 1 requires a refactoring to put that
> attribute in a separate table. It is the overall cost (including risks) of
> all the refactorings required over the life of a system that is relevant.

http://www.hyperdictionary.com/computing/refactoring :

 > Definition:
 >
 > Improving a computer program by reorganising its internal structure
 > without altering its external behaviour.

At first I had problems understanding what you meant by refactoring in this
context, so I looked it up. I think you are talking about changing the
internal structure as a consequence of a change in the desired external
behaviour (a.k.a. the requirements). Refactoring is just the opposite of that -
even though the goal of refactoring is indeed to get a piece of software that
is easier to maintain (adapt to changing requirements). One way of refactoring
is to eliminate unwanted redundancy.
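
To make that distinction concrete with the Mother/Child example from earlier
in the thread, here is a sketch in SQL (the column names and the IDENTITY key
are made up by me; your DBMS may spell the auto-generated key differently):

    -- Before the requirement changes: at most one child per mother,
    -- so the child is just an attribute of Mother.
    CREATE TABLE Mother (
        mother_id   INTEGER PRIMARY KEY,
        name        VARCHAR(100) NOT NULL,
        child_name  VARCHAR(100)           -- fine while the cardinality is 1
    );

    -- After the requirement becomes "a mother may have many children",
    -- the attribute has to move to a table of its own:
    CREATE TABLE Child (
        child_id    INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        mother_id   INTEGER NOT NULL REFERENCES Mother (mother_id),
        name        VARCHAR(100) NOT NULL
    );

That schema change is driven by a change in the desired external behaviour,
so by the definition above it is not refactoring, however similar the
mechanics may look.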

So I guess that at some point along the bell curve you mentioned one has to
normalize the data, halting all external changes until it is done. Major
business risks. I am not saying that normalized is always better, but I do
think it becomes necessary at some point in order to manage growing complexity.
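
In SQL terms that normalization step looks roughly like this (again only a
sketch, using the made-up names from above):

    -- 1. Create the Child table as sketched above.
    -- 2. Copy the existing single values across:
    INSERT INTO Child (mother_id, name)
        SELECT mother_id, child_name
        FROM Mother
        WHERE child_name IS NOT NULL;
    -- 3. Drop the old attribute:
    ALTER TABLE Mother DROP COLUMN child_name;

Every query and every bit of application code that still reads
Mother.child_name breaks at step 3, so the applications either stop or get
changed in lockstep; that is where the business risk comes from.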

Now with hindsight one could say: why not structure your data right in the
first place? But I have also seen people *over*-structuring data for all kinds
of future development scenarios. This led to a lot of YAGNI code, and the
changes that *did* occur were not anticipated (ok, some of them were). Received on Thu Mar 11 2004 - 00:03:48 CET
