Date: Sat, 12 Jul 2008 11:54:59 -0700 (PDT)
On Jul 12, 6:03 am, JOG <j..._at_cs.nott.ac.uk> wrote:
> On Jul 12, 2:27 am, Marshall <marshall.spi..._at_gmail.com> wrote:
> > I am calling bullshit on the above position, attributed to
> > Wittgenstein.
> > I am calling bullshit on the idea that "meaning and knowledge
> > cannot be encoded in any formal representation."
> Then we disagree whole-heartedly. Great guns.
I know! It's like the first time I've ever disagreed with someone on the Internet! :-)
> > > Either way, knowledge is generally accepted in AI research as
> > > unencodable in a descriptive model. I would love to claim to have
> > > formulated such conclusions myself, but I am merely reiterating
> > > Clancey, Brooks and Cantwell Smith's famous papers, the well-documented
> > > demise of expert systems, the $35 million wasted on projects like CYC,
> > > etc, etc, etc.
> > Lately I have developed an allergic reaction to various ideas
> > asserting that brains are somehow magical and mystical,
> This is a straw man. You are attributing mysticism
> where it is not claimed.
I am clear that no one is using the term "magic" to describe how brains work. Nonetheless, I assert that this is what various claims of the uncomputability of the brain reduce to.
> It is merely a statement that meaning comes from how our
> senses react to the world, as opposed to your view of the brain as a
> Turing machine churning up statements of first order logic.
"How our senses react to the world" is entirely mechanizable. I would agree that a computer with no inputs or outputs is not going to be able to do anything useful, in exactly the same way that a brain floating in a vat of nutrients also won't.
> > and thought is
> > something that we not only can't currently explain computationally,
> > but never will be able to explain computationally. It's just bullshit.
> Yeah, that's right. Human thought is not like a big calculator.
> Go figure.
Go "figure" you say? As in, "to compute or calculate?" (To be said in a Dr. Evil voice.) (OK, that was completely lame of me, I admit.)
> > Earlier you mentioned "What Computers Still Can't Do."
> > Reading for example this:
> > I see no argument that doesn't amuse me with its lameness.
> > I would type more, but I have a pressing engagement. Perhaps
> > later?
> Absolutely. I'm interested in how you have formulated your wishful
> 1960's style opinions - misguided as they are ;)
That AI researchers of the past were overly optimistic is no indication, one way or the other, of what is possible mechanically. We programmers are often excessively optimistic in project estimates. Guilty! And of course where decades are involved, the error factor may also be in decades.
I will also acknowledge up front that this question is not settled, and the only thing that will settle it for sure is when we have a machine that is obviously as smart as a human, and as generally capable cognitively. (Alternatively, a solid refutation of the Church-Turing thesis would prove it impossible. But that won't happen.) Nonetheless, I claim that the evidence, while not absolute, has already moved beyond a reasonable doubt as to the outcome. And the astonishingly poor arguments mustered against the inevitable, despite the failure of _every_ _single_ previous man-will-never-build argument, just piss me off.
The brain does some amazing things. How might it accomplish them? By processing information. It has inputs and outputs. Yes, these are amazingly complex, but even the physiology of the brain is exactly what we would expect from a mechanical model. We see a large bundle of nerves that pass information into and out of the brain at the base, down the spinal column. We see that the highest-bandwidth input, vision, has a dedicated, wide channel. We see that we can map, for example, specific areas of the primary motor cortex directly to specific motor activities, and in fact the map itself has the same physical layout as the thing it is mapping. (The so-called sensory and motor homunculi.)
Today, we cannot build a robot that has the flexibility, the suppleness, the self-contained power system, the self-repairing capabilities of the human body. And yet no one ever writes a book saying we *never* will be able to. Why is that? Because it's a stupid claim; we can trivially see from extrapolation that such a thing is possible. In fact, we have an obvious existence proof: the human body. The situation with the brain is no different. We can't build it today, but that is simply because we aren't there yet; we will be at some point in the future. Nothing magically prevents us from ever getting there. No signs of any invisible barrier have yet been reported. And again, we have an obvious existence proof of a mechanical object that is as cognitively able as a brain, and that is the brain itself.
What other candidates besides "computation" exist for describing what the mind does? If it's not computation, then it's ... ?
What must be necessary for the mind to be non-algorithmic? The brain must have, at some fairly low level, some fundamental operation that is non-algorithmic. The idea requires that the brain has some primitive operation that is instantiable in a physical object (three pounds of fatty meat) but that it is impossible to abstract over. For if we could abstract this primitive, we could compute with it.
Consider that idea: impossible to abstract over.
THAT is an extraordinary claim. Has there ever been any process in history that we haven't been able to abstract?
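To make the point concrete, here's a toy sketch (mine, not from the thread; the `neuron_like` primitive is an invented stand-in): once a physical process can be described as a mapping from inputs to outputs, it can be wrapped as an ordinary function and composed with the rest of a computation. That *is* abstracting over it.

```python
# Illustrative sketch: any deterministic, observable input->output
# process can be abstracted as a callable and then computed with.
def make_abstraction(primitive):
    """Wrap an observable process as a function, memoizing observations."""
    table = {}
    def abstracted(x):
        if x not in table:
            table[x] = primitive(x)   # observe the process once per input
        return table[x]
    return abstracted

# A stand-in "physical" primitive: a crude threshold unit.
neuron_like = make_abstraction(lambda v: 1 if v > 0.5 else 0)
print([neuron_like(v) for v in (0.2, 0.7, 0.9)])  # -> [0, 1, 1]
```

The claim that some brain primitive is "impossible to abstract over" amounts to claiming no such wrapper could ever exist for it.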
Or again, consider the computational equivalence of every computational system ever designed (above a certain low threshold). Where does that ceiling come from? It might be credible to suggest that there are processing primitives we haven't thought of yet, primitives that might be necessary for consciousness, IF we saw a great diversity of computational models with a great diversity of expressive power. That might indicate we hadn't covered them all yet. But instead we see exactly the opposite: *every* computational model, *every* set of primitives we can design, above a low threshold of power, is equally expressive. Clearly we understand well when we have reached a full set of processing primitives: any Turing-complete system will do. HERE now is a hard invisible barrier, and this barrier strongly denies the possibility of a mechanism that would be available to the brain but not to a machine.
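A small illustration of the same convergence one level down (my example, not from the thread): in boolean logic, NAND alone is a complete basis, so a "different" primitive set built only from NAND has exactly the same expressive power as the usual AND/OR/NOT set.

```python
# Sketch: different primitive sets converge on the same expressive power.
# NAND by itself is a complete basis for boolean logic, analogous to the
# way any Turing-complete primitive set is a complete basis for computation.
def nand(a, b):
    return 1 - (a & b)

def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))

# The NAND-built gates agree with the native operators on all inputs.
assert all(AND(a, b) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(OR(a, b)  == (a | b) for a in (0, 1) for b in (0, 1))
assert all(NOT(a)    == (1 - a) for a in (0, 1))
```

Adding more gate types to this basis buys convenience, never new expressive power -- the same pattern we see with Turing-complete systems.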
Various claims are sometimes made about possibilities in physics that might account for some special mechanism the brain has access to. Usually these are some kind of quantum effects. My understanding is that the idea that the brain takes advantage of quantum effects is not generally accepted, but even if it were true, that doesn't change the situation. Quantum effects are computable. Quantum computers cannot compute anything that regular computers can't. Even if some hitherto-undescribed quantum effect exists, it will be possible to build an abstraction for it. I would be astonished to find that our computational models aren't already up to the task, but even if they aren't, we can simply expand them.
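And indeed, simulating quantum evolution on a classical machine is just (potentially expensive) linear algebra. A minimal sketch of my own, not from the thread: a single qubit passing through a Hadamard gate, tracked with ordinary floating-point arithmetic.

```python
# Sketch: quantum state evolution is matrix-vector multiplication, which
# a classical computer performs directly (the cost grows with system size,
# but nothing here is uncomputable).
import math

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit amplitude vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]            # the qubit starts in |0>
state = apply(H, state)       # now in superposition
probs = [abs(a) ** 2 for a in state]
print(probs)                  # both measurement probabilities close to 0.5
```

The simulation gets exponentially costly as qubits are added, but that is a capacity problem, not a barrier of principle -- which is exactly the distinction at issue here.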
The greatest weakness in the entire debate, however, is the capacity issue. Lack of computing capacity is a complete explanation for what computers can't do (yet). The entire issue is quantitative, not qualitative. The quantitative argument defeats all the anti-computation arguments handily, from the Chinese room on, and it explains exactly what we are seeing: computers get more capable every year. Tasks that were once beyond reach come into reach, then become easy. 3D rendering was once completely out of reach. Then it was possible, but very slow. Then it was possible in real time, then it was cheap to do on an Xbox. Our primitive wireframe drawings gave way to scanline rendering, then to raytracing, then to radiosity. Each step requires more computing power, and there is no indication that this process has some intrinsic hard upper limit.
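A back-of-envelope sketch of that progression (the per-pixel operation counts below are invented, order-of-magnitude placeholders, not measurements): each rendering advance is the same kind of arithmetic, just more of it per frame.

```python
# Toy sketch: each rendering technique is quantitatively, not qualitatively,
# harder than the last. Operation counts are invented illustrative numbers.
width, height = 640, 480
pixels = width * height

ops_per_pixel = {
    "wireframe": 1,        # a few line segments per frame
    "scanline":  10,       # shade every covered pixel
    "raytrace":  1_000,    # trace rays, intersect geometry
    "radiosity": 100_000,  # global illumination between surfaces
}

for technique, ops in ops_per_pixel.items():
    print(f"{technique:>9}: ~{pixels * ops:,} ops/frame")
```

The same hardware that once managed only wireframe eventually runs radiosity; nothing changed but capacity.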
Certainly some problems remain out of reach. These problems are hard. Some problems are hard even for humans: consider that a baby sits there and listens to people talking for a year before attempting single-word utterances, or consider how hard it is to learn a new language. Is it any surprise, then, that mechanical translation is hard for machines? No, it is not. It is a question of capacity. As more language translation efforts use very large corpora, we see ever greater success: again, a capacity issue. That we can mechanize an English-Arabic, Arabic-English dictionary might cause us irrational exuberance about translating one language into the other, but it turns out that natural language translation is about a lot more than just dictionaries.
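A toy contrast of the two approaches (my own sketch with invented French data, not from the thread): word-for-word dictionary lookup misses idioms, while even a trivial phrase table mined from aligned text captures them -- and bigger corpora yield bigger phrase tables.

```python
# Toy sketch: dictionary lookup vs. corpus-derived phrase translation.
# All entries are invented illustrative data.
dictionary = {"good": "bon", "morning": "matin"}
phrase_table = {"good morning": "bonjour"}   # as if learned from aligned text

def translate(sentence):
    """Prefer whole-phrase matches; fall back to word-by-word lookup."""
    if sentence in phrase_table:
        return phrase_table[sentence]
    return " ".join(dictionary.get(word, word) for word in sentence.split())

print(translate("good morning"))   # -> "bonjour", not the literal "bon matin"
```

The dictionary alone would produce "bon matin," which no French speaker says; the phrase-level statistics are what the large corpora buy you.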
In short, it is obvious at this point that man-will-never-build arguments are doomed. They didn't hold for flying, swimming, driving faster than a horse, walking on the moon, or anything else; they do not hold for thinking. There is no hint of a mechanism available to the brain that is not available to a computer circa 1950. There *is* a huge difference in power between those two, and between the fastest things we have today, and this power difference is a complete explanation for the situation we find ourselves in as far as what cognitive tasks our computers can handle and what ones they can't. And our computers continue to get faster and yes, smarter, all the time.
Marshall

Received on Sat Jul 12 2008 - 13:54:59 CDT