Re: Temporal database - no end date

From: Marshall <marshall.spight_at_gmail.com>
Date: 22 Jan 2007 19:16:13 -0800
Message-ID: <1169522173.749900.17740_at_38g2000cwa.googlegroups.com>


On Jan 22, 6:15 pm, "David" <davi..._at_iinet.net.au> wrote:
> Marshall wrote:
>
> > If you've explained how to do that, I've missed it. How can you
> > not pick a granularity? The pigeonhole principle would seem to
> > indicate that you must. I assume we're still talking about software
> > on digital computers here.
>
> By granularity do you mean unit of time? Note that without chronon
> quantization and assuming floating point number representation of
> times, the choice of unit of time has no effect on the level of
> quantization because the "relative scale" of the unit can be
> absorbed into the exponent. Rather, the quantization depends only on
> the size of the mantissa.

There are precision issues associated with using integral quantities, and there are precision issues associated with using floating point quantities. The specifics differ, but I consider that a difference of detail rather than any significant qualitative difference. Both are compromises where numbers are concerned.

And while I may have missed it, I don't think anyone here has suggested "use a float" is the answer to this issue.
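
But to make the compromise concrete either way, here is a small Python sketch (the 40-year timestamp is just an illustrative value; the behaviour assumes IEEE-754 binary64 doubles, and math.ulp needs Python 3.9 or later):

    # The absolute quantum of a double grows with its magnitude, so a
    # float-valued timestamp loses fine resolution far from the epoch.
    import math

    one_year = 365.25 * 24 * 3600      # seconds in a Julian year
    t = 40 * one_year                  # a timestamp ~40 years after epoch

    # math.ulp(x) is the gap from x to the next representable double.
    print(math.ulp(1.0))               # ~2.2e-16 s of resolution near t = 1 s
    print(math.ulp(t))                 # ~2.4e-07 s of resolution at 40 years

    print(t + 1e-6 > t)                # True: a microsecond still registers
    print(t + 1e-9 == t)               # True: a nanosecond is absorbed entirely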

> On another note I think you are continually pushing the conversation
> into physical concerns when the subject of the thread is very clearly
> to do with the underlying logical model.

In general, the industry is far too concerned about physical matters, and there are those of us who push hard in the other direction as a kind of antidote to that. However, it is not the case that the logical layer is completely divorced from physical concerns.

There is one area in particular that is of the utmost importance. The one overriding limitation that the physical level imposes on the logical level is that the logical level must be realizable. Any logical model that is not implementable is not worth much. This is an issue everywhere except for purely theoretical or mathematical constructs. If we are talking about building software, we must prefer even the least solution that can exist to the greatest solution that cannot exist.

If someone can demonstrate to me a physical implementation that can support a logical model with unlimited precision, I will gratefully and humbly apologize, and admit that my bringing physical issues into the conversation was a mistake.

> Is your argument that the reals are not relevant because they can't
> be represented exactly on a digital computer? That's rather lame
> when discussing the pros and cons of different logical models.

Indeed. Fortunately I have never said any such thing. However, there are properties that the reals have that our approximations don't, such as the power of the continuum. If a designer believes that using double precision floating point gives his logical model numbers with the power of the continuum, he is in for a fall.
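
A quick Python sketch of why (math.nextafter needs Python 3.9 or later); the point is simply that the doubles are a finite set:

    import math

    # nextafter(x, y) steps to the adjacent representable double toward y.
    # Between 1.0 and its successor there is no double at all, while the
    # same real interval contains uncountably many reals.
    print(math.nextafter(1.0, 2.0))    # 1.0000000000000002

    # [1.0, 2.0) holds exactly 2**52 doubles, one per mantissa pattern:
    print(2 ** 52)                     # 4503599627370496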

> What do you think floating point number representations are trying to
> model?

The reals.

> Themselves as a pattern of bits? How do you explain why 1.0 /
> 3.0 on a real computer comes out remarkably close to one third, or
> why repeating the calculation x = (x*x+2)/(2*x) quickly converges to
> something remarkably close to sqrt(2)?

I have no idea what you are trying to get at in this paragraph. You're asking me to explain to you why an approximation of the reals behaves approximately as the reals do? If it were a bug report I would close it with status "Behaves As Designed."
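
For what it's worth, running your own iteration makes my point for me. A Python sketch (the recurrence you quote is Newton's method for sqrt(2)):

    import math

    x = 1.0
    for i in range(6):
        x = (x * x + 2) / (2 * x)
        print(i, x)                    # 1.5, 1.41666..., then it flatlines

    # It converges to within an ulp or two of the best double near sqrt(2),
    # and no further: approximately right, exactly as designed.
    print(abs(x - math.sqrt(2)) <= 2 * math.ulp(x))   # True
    print(x * x == 2)                  # False: the exact real is unreachable

It behaves remarkably like the reals because it was engineered to approximate them; it stops short of them because it must.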

Marshall
