Re: WWW/Internet 2009: 2nd CFP until 21 September

From: Walter Mitty <wamitty_at_verizon.net>
Date: Sun, 16 Aug 2009 03:03:54 GMT
Message-ID: <uGKhm.2337$Jg.1158_at_nwrddc01.gnilink.net>


"paul c" <toledobythesea_at_oohay.ac> wrote in message news:zmJhm.41463$PH1.15652_at_edtnps82...
> Mr. Scott wrote:
> ...
>> Pardon the pun, but your argument appears to be more applicable to
>> inapplicable nulls. ...
>
> Oh, here we go again, more twists and turns. I think Darwen said the
> record for the number of different kinds of nulls was somewhere in the
> twenties.
>
>
This is one of the more screwy discussions among theoreticians. You have to draw a distinction between indication and inference. The only thing that a null INDICATES is that data is missing. The reason why the data is missing is either an INFERENCE on the part of the reader, or else it's an out-of-band message (a metamessage) between the writer and the reader.

There are only two cases that I think are worth distinguishing. The first is, I guess, the inapplicable null. It arises from the fact that the number of cells in a table is constrained to be the product of the number of rows and the number of columns, while the number of placeholders actually needed might be smaller. This case can be obviated by normalization. I'm perhaps in the minority in c.d.t. in saying that there are cases where a design that's less than fully normalized can be a good one.
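
For what it's worth, here's a rough sketch of the kind of decomposition I mean, in Python with SQLite. The table and column names (emp, emp_commission, commission) are invented for illustration, not taken from anything upthread.

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Single table: every row must carry a "commission" cell, so
    # non-salespeople end up with an inapplicable null as a placeholder.
    cur.execute("CREATE TABLE emp (emp_id INTEGER PRIMARY KEY,"
                " name TEXT, job TEXT, commission REAL)")
    cur.execute("INSERT INTO emp VALUES (1, 'Alice', 'SALES', 0.1)")
    cur.execute("INSERT INTO emp VALUES (2, 'Bob', 'CLERK', NULL)")

    # Decomposed design: commission lives in its own table keyed on emp_id,
    # and a row exists only where the attribute actually applies.
    cur.execute("CREATE TABLE emp2 (emp_id INTEGER PRIMARY KEY,"
                " name TEXT, job TEXT)")
    cur.execute("CREATE TABLE emp_commission (emp_id INTEGER PRIMARY KEY"
                " REFERENCES emp2, commission REAL)")
    cur.execute("INSERT INTO emp2 VALUES (1, 'Alice', 'SALES')")
    cur.execute("INSERT INTO emp2 VALUES (2, 'Bob', 'CLERK')")
    cur.execute("INSERT INTO emp_commission VALUES (1, 0.1)")

    # Reassembling the original shape brings the null back, but now it is
    # produced by the outer join rather than stored in a base table.
    for row in cur.execute(
            "SELECT e.name, c.commission FROM emp2 e "
            "LEFT OUTER JOIN emp_commission c ON c.emp_id = e.emp_id"):
        print(row)   # ('Alice', 0.1) then ('Bob', None)

The point of the sketch is only that the stored placeholder disappears once the table is split; whether the split is worth it is the design question I was alluding to.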

The other case worth mentioning is that things are not what they should be. If there's any comprehensive theory on things that get screwed up, I'm unaware of it. Maybe the famous Murphy, of Murphy's law, created such a theory. Theory or no, people who build systems in the real world try to build systems that retain some of their value when things get screwed up.
