Date: Sun, 13 Jul 2008 06:46:16 -0700 (PDT)
On Jul 12, 7:54 pm, Marshall <marshall.spi..._at_gmail.com> wrote:
> On Jul 12, 6:03 am, JOG <j..._at_cs.nott.ac.uk> wrote:
> > On Jul 12, 2:27 am, Marshall <marshall.spi..._at_gmail.com> wrote:
> > > I am calling bullshit on the above position, attributed to
> > > Wittgenstein.
> > > I am calling bullshit on the idea that "meaning and knowledge
> > > cannot be encoded in any formal representation."
> > Then we disagree whole-heartedly. Great guns.
> I know! It's like the first time I've ever disagreed with
> someone on the Internet! :-)
> > > > Either way, knowledge is generally accepted in AI research as
> > > > unencodable in a descriptive model. I would love to claim to have
> > > > formulated such conclusions myself, but I am merely reiterating
> > > > the famous papers of Clancey, Brooks and Cantwell Smith, the well
> > > > documented demise of expert systems, the $35 million wasted on
> > > > projects like CYC, etc, etc, etc.
> > > Lately I have developed an allergic reaction to various ideas
> > > asserting that brains are somehow magical and mystical,
> > This is a straw man. You are attributing mysticism
> > where it is not claimed.
> I am clear that no one is using the term "magic" to describe
> how brains work.
How strange! You said you have an allergic reaction to the viewpoint that "brains are somehow magical"... despite being clear that no one has said that? Are you next going to object to the viewpoint that "unicorns aren't real", given that no one has said that either?
> Nonetheless, I assert that this is what
> various claims of the uncomputability of the brain reduce to.
Some may reduce to that. But not here, and certainly not in modern AI. The issue is not about uncomputability anyhow; it's about not being able to build robust systems out of externalized representations of knowledge.
Nice cartoon (as xkcd often is). Another red herring, of course, given that the cartoon refers to paranormal science and has nothing to do with the subject at hand, but I'm sure you're being mischievous. My favourite was this one:
> > It is merely a statement that meaning comes from how our
> > senses react to the world, as opposed to your view of the brain as a
> > Turing machine churning up statements of first order logic.
> "How our senses react to the world" is entirely mechanizable.
Ah, the religion of the great big calculator in the sky. Amen to the hard drive in my head! Jesting aside, this is actually completely beside the point - the statement I originally made concerned formalised descriptions, not mechanisability (if anyone thought the latter was impossible, AI researchers would have given up and gone to the pub; all they are doing is taking a different approach to mechanisation, making it situated).
You've written a long post here, so I'm going to pay it the respect of putting the kettle on and reading it with due diligence ;) All best, Jim.
> I would agree that a computer with no inputs or outputs is
> not going to be able to do anything useful, in exactly the
> same way that a brain floating in a vat of nutrients also
> [snipped pre-caffeine intake...]
Received on Sun Jul 13 2008 - 15:46:16 CEST