Re: Modelling Disjoint Subtypes

From: David Cressey <>
Date: Mon, 26 Mar 2007 10:56:31 GMT
Message-ID: <zFNNh.1876$E46.1231_at_trndny09>

"Marshall" <> wrote in message
> On Mar 25, 11:53 am, "David Cressey" <> wrote:
> > "Marshall" <> wrote in message
> > > As an aside, there exist systems in which the storage cost
> > > *at runtime* for type information is zero, because the types
> > > exist only at compile time, and are completely removed
> > > after.
> >
> > If you are saying what I think you are saying, then I disagree.
> >
> > For example, let's say I have
> > float x, a, b;
> > x = a + b;
> >
> > If I look at the variables x, a, and b at runtime, the type is gone.
> > But if I look at the code generated by the compiler, I'm going to find
> > that the plus sign is represented by a floating point addition
> > operation. So, to the extent that the operator indicates the type of
> > the operand, the information is there at runtime, although buried in
> > the code.
> >
> > This could be important if you were writing a decompiler.


> It seems pretty clear we agree on what is actually happening.
> However there may be some disagreement on what terminology
> best describes that.

> It hinges on your phrase: "to the extent that the operator indicates
> the type ..." The example you picked is one where the operators
> are actually supported in the CPU itself. This is a fairly unusual
> case. The "extent" may not be all that much in the usual case.

This isn't quite true. In the case of a CPU that doesn't support floating
point arithmetic (e.g. the Intel 8086), the "operator" I'm referring to
could be a call to a floating point addition subroutine. In the case of a
Java compiler, the "operator" could be anything in the operator repertoire
of the JVM. Either way, it still gives the clue that the operands "must" be
floating point numbers.
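
To make that concrete, here's a rough Java sketch (mine, not from the
thread). Even though the JVM keeps no type tag on a local float at run
time, the bytecode javac emits is type-specific, so a decompiler can read
the operand types back out of the opcodes:

    public class FloatAdd {
        static float add(float a, float b) {
            // javac compiles this to something like:
            //   fload_0, fload_1, fadd, freturn
            // The fadd opcode only works on floats, so the operand type
            // is recoverable from the instruction alone.
            return a + b;
        }

        static int addInts(int a, int b) {
            // The same source-level "+" becomes iadd here -- a different
            // instruction, betraying a different operand type.
            return a + b;
        }

        public static void main(String[] args) {
            System.out.println(add(1.5f, 2.25f));  // 3.75
            System.out.println(addInts(1, 2));     // 3
            // Try: javac FloatAdd.java && javap -c FloatAdd
        }
    }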

> Since numbers are on my mind lately, let's consider rationals
> and integers. Suppose we have some rational numbers x and y.
> Further suppose we do something if they are integers and
> something else otherwise.


> if (isInt(x) and isInt(y)) {
> // consider the code here
> } else { ... }

> Now, inside the first pair of braces, we know *statically*
> that the denominator of both x and y is 1. So if we multiply
> x and y, the compiler, which ordinarily would be doing
> two multiplies (x.numerator * y.numerator,
> x.denominator * y.denominator) may well throw the second
> multiply away, since its result is a constant 1. So if
> you looked at the generated code, you'd see only one
> multiply and possibly falsely conclude that x and y are
> integer.

A compiler generates code based on static analysis. If the compiler-generated code fails to multiply the denominators, there are only two possibilities: either x and y are somehow constrained to take on only integer values, or a run time error (possibly undetected) will occur when one of them violates the isInt constraint.
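
To illustrate the distinction, here's a hypothetical Rational class (the
names and representation are mine, purely for the sake of the example).
The narrowed multiply is safe only because the isInt guard has already
established that both denominators are 1; without that constraint it
would quietly produce wrong answers -- the "possibly undetected" error
above:

    final class Rational {
        final long num;
        final long den;   // invariant: den > 0

        Rational(long num, long den) {
            if (den <= 0) throw new IllegalArgumentException("den must be > 0");
            this.num = num;
            this.den = den;
        }

        boolean isInt() { return den == 1; }

        // General case: two multiplications.
        Rational times(Rational other) {
            return new Rational(num * other.num, den * other.den);
        }

        // Narrowed case: one multiplication, valid only when both
        // values are known to have denominator == 1.
        Rational timesAssumingInts(Rational other) {
            return new Rational(num * other.num, 1);
        }
    }

    class Demo {
        static Rational multiply(Rational x, Rational y) {
            if (x.isInt() && y.isInt()) {
                // Inside this branch the denominator multiply may
                // legitimately be dropped.
                return x.timesAssumingInts(y);
            }
            return x.times(y);
        }
    }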

> Or consider an example with float representations of Fahrenheit
> and Celsius as two different types. If the types are erased,
> and we want to compare two temperatures to see which one
> is higher, in both cases they will be identical unadorned floats;
> the distinction between F and C is gone.

Units of measure are a separable issue. For the case you present, the unit of measure can be treated as a linear equation. It's complicated in the case of temperature, because temperature does not inherently have a ratio scale. [long pseudo-scientific rant about absolute zero snipped by the original author]. Take something like mass instead: 123.45 kilograms can be read as a ratio, namely the ratio between the given mass and the mass of a one-kilogram object. If the mass is given as a rational (e.g. 12345/100), that doesn't change anything. You can cascade the ratios.
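
For what it's worth, here's a hypothetical Java sketch of that "unit of
measure as a linear equation" reading (the type names are mine). Each
unit is its own wrapper type, and the conversion between them is a fixed
linear equation -- affine in the temperature case, because of the offset
your example runs into. Erase the wrappers to bare floats and the
F-versus-C distinction is gone:

    class Temperatures {
        record Celsius(double value) {
            Fahrenheit toFahrenheit() {
                return new Fahrenheit(value * 9.0 / 5.0 + 32.0);  // F = 9/5*C + 32
            }
        }

        record Fahrenheit(double value) {
            Celsius toCelsius() {
                return new Celsius((value - 32.0) * 5.0 / 9.0);   // C = 5/9*(F - 32)
            }
        }

        // Comparison is only meaningful after converting to a common unit.
        static boolean warmer(Celsius a, Fahrenheit b) {
            return a.value() > b.toCelsius().value();
        }

        public static void main(String[] args) {
            // 30 C is about 86 F, so it is warmer than 80 F.
            System.out.println(warmer(new Celsius(30), new Fahrenheit(80)));  // true
        }
    }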
