Re: Computational complexity (was Re: Normalisation)
Date: Tue, 19 Jul 2005 10:40:20 +0200
In article <11dodnf29qpvma1_at_corp.supernews.com>,
> > Estimating complexity on Turing machines is one thing; doing it for
> > usable programming languages / systems is slightly different. I know
> > theory, but not *all* theory. Sorry, but we can't all be omniscient.
> The programming language (and the constant space/time assumptions for
> "int") are an abstraction -- and (like all abstractions) are leaky.
Hm? The Turing machine is even more of an abstraction. Neither technique is ideal. Which one is best to use depends on the circumstances, and the circumstances I am used to strongly favour one over the other.
> <oldfartmode>I dunno, kids these days with their fancy dual-core Athlons
> -- whatever happened to coding assembler to fit in a 256 byte page?
Or using microcode to construct instruction sets? That was good fun, actually.
Everything is reality at some level. But sorry -- I was referring to the debating technique as much as to the idea of counters taking log n space.
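(A small sketch of my own, not from the thread: the point about counters is just that storing a count that reaches n takes about log2(n) bits, so the "constant-space int" abstraction leaks once values are unbounded.)

```python
def counter_bits(n: int) -> int:
    """Bits needed to store a counter that has reached the value n."""
    return max(1, n.bit_length())

# The space grows with log n, not a constant:
print(counter_bits(1))       # 1 bit
print(counter_bits(10**6))   # 20 bits
print(counter_bits(10**12))  # 40 bits
```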
> Engineering is the process of balancing the constraints of physical
> reality with cost and requirements. To do that you must _know_ the
Agreed. But it is no better to be unaware of "practical" complexity assessment than to quietly ignore the complexity associated with strings of unbounded length (unless such strings are specifically called for), I think.
I don't know where you got the impression that I am unaware of (or opposed to?) that kind of consideration.
-- Jon
Received on Tue Jul 19 2005 - 10:40:20 CEST