Date: Sun, 13 Jul 2008 09:07:36 -0700 (PDT)
>> Marshall wrote:
>> That AI researchers of the past were overly optimistic is no
>> indication, one way or the other, of what is possible mechanically.
>> We programmers are often excessively optimistic in project
>> estimates. Guilty!
Yes, and the ones who were proved wrong bailed out of AI to peddle their rule-based crap (snake oil?) as the Semantic Web, keeping themselves in nice cushy jobs, wasting more money on more junk, and leaving everyone else to find a strategy that might actually work.
>> I will also acknowledge up front that this question is not
>> settled, and the only thing that will settle it for sure is
>> when we have a machine that is obviously as smart as
>> a human, and as generally capable cognitively. (Alternatively,
>> a solid refutation of the Church-Turing thesis would prove
>> it impossible. But that won't happen.)
I'm afraid Turing is against you, so it's a mistake to reference him. He claimed that universal Turing machines could be turned to "any well-defined task by being supplied with the appropriate programme". Yup, that was any "well-defined" task, not any task at all. He wasn't stupid, that Turing lad.
Like many at the time, Minsky mistakenly took this to mean that "Mental processes resemble the kind of processes found in computer programs: arbitrary symbol associations, treelike storage schemes, conditional transfers, and the like".
Oh. Dear. You see, that's what modern AI rejected, not some bizarre notion of not being able to mechanize things.
>> The situation with the brain is no different. We can't build it today, but
>> it is simply because we aren't there yet; we will be at
>> some point in the future. Nothing magically prevents
>> us from ever getting there.
Who has ever said that? Who are you arguing with? Creationists? ;)
>> The greatest weakness in the entire debate, however,
>> is the capacity issue. Lack of computing capacity is
>> a complete explanation for what computers can't do (yet.)
Ok, this one is just ridiculous. Let's take the bastion of good old-fashioned AI: chess. In the '90s the chess AI Deep Blue was processing over 200 million board positions a second. That's right, 200 million every single second. Let's compare that to a grandmaster, who can examine about 8. Yup, that's 199,999,992 fewer positions per second than the AI.
Oh yup, just CLEARLY the problem is that our computers aren't quick enough!
I'd say there's a lot of pattern matching going on.
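To put that gap in perspective, here's a back-of-the-envelope sketch (assuming the commonly cited average branching factor of ~35 for chess and roughly 3 minutes of thinking time per move; all the constants are illustrative, not Deep Blue's actual parameters):

```python
import math

POSITIONS_PER_SEC = 200_000_000  # Deep Blue's rough search rate
HUMAN_PER_SEC = 8                # positions a grandmaster consciously examines
BRANCHING = 35                   # commonly cited average branching factor of chess
SECONDS_PER_MOVE = 180           # ~3 minutes of thinking time per move

# The raw gap the post is talking about:
gap = POSITIONS_PER_SEC - HUMAN_PER_SEC
print(f"gap per second: {gap:,}")  # 199,999,992

# What that buys an exhaustive searcher: solve 35**d <= budget for depth d.
budget = POSITIONS_PER_SEC * SECONDS_PER_MOVE
depth = math.log(budget) / math.log(BRANCHING)
print(f"exhaustive search reaches only about {depth:.1f} plies")
```

Even at 200 million positions a second, brute force buys you roughly 6-7 plies of exhaustive lookahead, while the grandmaster examining 8 positions a second routinely out-plans that horizon. That's the point: the human isn't searching, they're pattern matching.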
>> That we can mechanize an English-Arabic,
>> Arabic-English dictionary might cause us irrational
>> exuberance around the idea of translating one into the
>> other, but it turns out that natural language translation
>> is about a lot more than just dictionaries.
Yes, /exactly/. Language (and its meaning) can't be neatly externalized in a formalized description, packaged off as a set of rules. That's exactly what Wittgenstein said, the guy you called an idiot earlier. You have to be embodied in the world that language is referring to, to understand its meaning. Otherwise it's all just syntax. And that's certainly not how we learn language: we bootstrap by direct experience of what the words refer to.
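The dictionary point is easy to make concrete. A toy sketch (the mini-dictionary below is hypothetical, with rough Latin transliterations of Arabic words, purely for illustration):

```python
# Hypothetical toy English->Arabic word list (transliterated, illustrative only).
DICT = {
    "it": "hiya", "is": "takun", "raining": "tumtir",
    "cats": "qitat", "and": "wa", "dogs": "kilab",
}

def word_for_word(sentence):
    """Mechanized dictionary lookup: substitute each word, nothing more."""
    return " ".join(DICT.get(w, f"<{w}?>") for w in sentence.lower().split())

print(word_for_word("It is raining cats and dogs"))
```

The lookup succeeds on every word, yet the output is a literal string about felines and canines falling from the sky, not the idiom's meaning. The dictionary is pure syntax; the semantics were never in it.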
>> As we see more language
>> translation efforts using very large corpora, we see
>> ever greater success: again it is a capacity issue.
I work on the periphery of this area and I can guarantee you that what you're saying is just not true. CLIR is proving very resistant to just throwing more words at it, thank you very much. To be honest, expecting a disembodied computer representation to understand words like "tranquility" or "trust" is like expecting a blind person to understand what "blue" means.