Re: Computational complexity (was Re: Normalisation)
Date: Tue, 19 Jul 2005 10:40:20 +0200
Message-ID: <MPG.1d46dada2a522faf9896fa@news.ntnu.no>
In article <11dodnf29qpvma1@corp.supernews.com>,
misha@no_mishapen_spam.co.uk says...
> > Estimating complexity on Turing machines is one thing; doing it for
> > usable programming languages / systems is slightly different. I know
> > theory, but not *all* theory. Sorry, but we can't all be omniscient.
>
> The programming language (and the constant space/time assumptions for
> "int") are an abstraction -- and (like all abstractions) are leaky[1].
Hm? The Turing machine is even more of an abstraction. Neither model is ideal; which one is best to use depends on the circumstances, and the circumstances I am used to strongly favour one over the other.
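To make the leak concrete: once an integer outgrows the machine word, even addition stops being constant time. A quick Python sketch (operand sizes and timings are illustrative only, CPython's arbitrary-precision ints assumed):

    import timeit

    # Adding two k-bit integers costs O(k) once k exceeds the word size,
    # so the "int operations are O(1)" assumption only holds for small k.
    for bits in (64, 64000, 6400000):
        x = (1 << bits) - 1
        t = timeit.timeit(lambda: x + x, number=10000)
        print(f"{bits:>9}-bit operands: {t:.3f}s for 10000 additions")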
> <oldfartmode>I dunno, kids these days with their fancy dual-core Athlons
> -- whatever happened to coding assembler to fit in a 256 byte page?
Or using microcode to construct instruction sets? That was good fun, actually.
> > Sophistry. This is the kind of thing that gives theory a bad name.
>
> No. Reality.
Everything is reality at some level. But sorry; I was referring to the debating technique as much as to the idea of counters taking log n space.
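For the record, the log n point itself is easy to see: a counter that has reached n occupies ceil(log2(n+1)) bits. A minimal Python sketch (the CPython-specific object sizes shown are just for illustration):

    import sys

    # The space a counter occupies grows with log n; the "constant space"
    # view only holds while n still fits in a machine word.
    for n in (255, 10**9, 10**30):
        print(n, "->", n.bit_length(), "bits,",
              sys.getsizeof(n), "bytes in CPython")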
> Engineering is the process of balancing the constraints of physical
> reality with cost and requirements. To do that you must _know_ the
> constraints.
Agreed. But being unaware of "practical" complexity assessment is no better than quietly ignoring the complexity that comes with strings of unbounded length (unless such strings are specifically called for), I think.
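The string point in miniature: equality comparison is O(length), not O(1), so any analysis that counts comparisons as unit cost quietly assumes bounded strings. A rough Python sketch (lengths arbitrary, timings illustrative):

    import timeit

    # Comparing two equal strings of length L must scan all L characters;
    # distinct objects are built deliberately so identity can't short-cut.
    for length in (10, 10**4, 10**7):
        a = "x" * length
        b = "x" * length
        t = timeit.timeit(lambda: a == b, number=1000)
        print(f"len={length:>9}: {t:.4f}s for 1000 comparisons")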
> [1] Jon better not apply for a job at Fog Creek, me thinks
> (http://www.joelonsoftware.com/articles/fog0000000319.html).
I don't know where you got the impression that I am unaware of (or opposed to?) that kind of consideration.
-- Jon