Re: Transactions: good or bad?

From: Costin Cozianu <c_cozianu_at_hotmail.com>
Date: Tue, 17 Jun 2003 20:43:53 -0700
Message-ID: <bcon23$kt4js$1_at_ID-152540.news.dfncis.de>


>>>A computer can emulate the behavior of a small neuron network without
>>>any problem.
>>>
>>
>>This is unsupported speculation, not science. Your "philosophers" should
>>teach you better than this.
>
>
> It is pure science.
>

No, it is pure speculation, as long as, for example, a network of "artificial" neurons used for optical character recognition makes a supercomputer look stupid next to a six-year-old.

>
>>>We know pretty well how a neuron works.
>>>
>>
>>Obviously your claims are just hot air, since we're very far from "real"
>>or "unreal" AI.
>
>
> You try to distort everything. Understanding the mechanism of a single
> neuron is not the same as understanding how billions of neurons work
> together. It is like saying you know all about how a Pentium 4 works
> because you know how a transistor works.
>

So if you don't know how billions of neurons work together, what's the purpose of this discussion?

>
>>We don't really know how neurons work.
>
>
> It seems you are not very well informed.
>
> http://ic.ucsc.edu/~bruceb/psyc123/neuron.html
>

Great. So you determined that neurons transmit electrical signals. That's very fancy. A computer transmits electrical signals better and faster, and in larger numbers. Yet no network of transistors is remotely capable of doing what is trivial for the least capable humans.

Therefore all you said about neurons and computers is nice and dandy, and you may call them theories, but they have very little explanatory power.

This is most definitely not "we know how neurons work". It is: we have a scientific theory of "how neurons work", but it hasn't been tested all that much, and consequently we don't really know if that's all there is to it.

>
>>>Perhaps, but what is sure is that there is not any scientific theory
>>>that might give us a hint that we can not construct intelligent
>>>computers.
>>>
>>
>>Modern mathematics gives us plenty.
>
>
> Mathematics is not science; they don't use the scientific method.
>

They don't use your Popperisms. Maybe it's Popper who really has a problem; it is most definitely *not* Mathematics.

Modern society would be shit in the absence of Mathematics, whereas smart people do just fine without ever worrying about that Popper.

Actually the really smart ones like Girard and Dijkstra mock the likes of Popper.
>
>>What he was referring to in the context is that philosophy is not a
>>science and philosophical arguments have nothing to do (therefore no
>>intellectual standing) within a science, and especially within
>>Mathematics.
>
>
> Philosophy of science discusses what science is.

Yes, that's exactly what they do: discuss. Net value of futile discussions in science: close to 0.

> Saying that Popper is
> not an intellectual is complete nonsense.
>
> If you had a minimal grasp of philosophy of science you would know
> that maths are not science.
>

It's like saying that if you had a good grasp of cocaine, you'd know that cocaine is good. No thanks. Philosophy of science is for weenies; software engineers are concerned with Mathematics (including Computing Science).

>
>>>"que le lenguage informatique es déterministe - il s'exécute dans un
>>>ordre précis"
>>>
>>>It is complete nonsense.
>>>
>>
>>Why don't you write him a nice letter.
>
>
> I am not very interested in discussing computer science with
> someone who says nonsense like that.
>

Ha-ha. You might want to say that you are not qualified to write a letter to Girard.

>>Trashing doesn't come close to an argument, and while Girard is
>>justified at any time to dismiss Alfredo Novoa's opinionated nonsense as
>>trash, the reverse does not hold.
>
>
> He does not offer any justification for his absurd assertion.
>
> Connect a sound card to a computer and you can have non-deterministic
> results.
>

Therefore you might be able to predict the results of the lottery, right?

Oh boy, you are definitely in trolling mode. If you carefully read the articles in question, you'd come to realize the trivial fact that non-determinism alone is not enough to escape the halting problem *and* successfully prove theorems. You'd need an oracle. But I guess Popper didn't concern himself with oracles.

>
>>I can understand your lack of knowledge in Math, but this nonsense you
>>just wrote is inexcusable, especially after trashing Girard.
>>
>>Read my lips: chess has a finite model, chess *is* finite. If you're
>>not able to see the obvious, you are borderline trolling.
>
>
> It is finite only because the number of turns is limited.

No, it is not. The number of turns in chess is not limited by anything.

> The number
> of turns is the halt condition, if not we could fall in infinite
> loops.
>

No, that's elementary: you don't have to fall into an infinite loop, unless of course you don't get it.

Since the number of positions you have to evaluate is finite, it doesn't matter that the number of moves is infinite. The only thing you care about in a chess program is to assign a value to a position and choose the next best move.
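
To make that concrete, here is a minimal minimax sketch in Python; the Position interface (legal_moves, apply, is_terminal, score) is a hypothetical placeholder of mine, not any real chess library:

	def minimax(position, depth, maximizing):
	    # The set of chess positions is finite, so the recursion always
	    # bottoms out; no artificial turn limit is needed.
	    if depth == 0 or position.is_terminal():
	        return position.score()
	    values = [minimax(position.apply(m), depth - 1, not maximizing)
	              for m in position.legal_moves()]
	    return max(values) if maximizing else min(values)

	def best_move(position, depth=4):
	    # Assign a value to every successor position; pick the best move.
	    return max(position.legal_moves(),
	               key=lambda m: minimax(position.apply(m), depth - 1, False))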

Now, do you finally get it?

>>You obviously have not done enough Math to know this is trivially false.
>>A human will extend the model he's working with if needed; a computer
>>will not.
>
>
> If you extend the search space on the fly the search is still a
> search.
>

No: you extend the model, you extend the rules of the game, you create new concepts and new theorems, you abstract things, and you think at a higher level of abstraction.

>>Oh, strong AI has been largely discredited as a scientific hoax by now.
>
>
> And such discredit was also discredited. You should read Copeland or
> shut up.
>

I read what I want to read and what does me good. Philosophy is for weenies: you read it if you like it; if you don't, you haven't lost anything. If you want a good AI book, I'd recommend Russell and Norvig.

Real men read Mathematics :)

>
>>Humans can decide which theorems are important and which are not,
>
>
> It is orthogonal to theorem proving.
>

No, it is not. Deciding which things are important and which are not is branch pruning.

>
>>You don't "prove" empirical theories. We were talking about formal
>>theories here. Popper told you the difference ?
>
>
> "Computers can not prove theorems" is not a formal theory.
>

Oh, yes it is. Computers are entirely built and work upon formal theory.

It's clearer and clearer that reading Popper didn't do you much good. Try Dijkstra, Parnas, Knuth, Wirth, Hoare. They have more substance, they never speak nonsense or speculation, and the reading will do you good.

A good software engineer should not ever have enough time to read the likes of Popper :) There's too much science to read in our profession to worry about the Poppers of the world.

>
>>>Do you think the human mind is supernatural? :-)
>>
>>You haven't explained what's your definition of super-natural
>
>
> A supernatural thing does not follow the natural laws.
>

And natural laws are whatever Alfredo or Popper think they are. So before Einstein came along, light was supernatural, since it didn't abide by the "natural laws" of the time.

>
>>, but for
>>all intents and purposes the human brain is part of what common language
>>calls "nature".
>
>
> Then prove that we can not emulate the behavior of a "natural thing"
> using computers, taking into account that computers are not restricted
> to the von Neumann architecture and Turing Machine equivalents.

Do you know of any computer in existence that is proven more powerful than Turing equivalent?

> The burden of proof is on your side.

Here you show your lack of mathematical culture and your indoctrination with philosophical nonsense.

It is not me who needs to disprove your beliefs. Until you have constructed a formal theory that fully supports your beliefs, your belief that such a theory can be constructed is worth pretty much nothing.

>
>
>>>But this could change (or not). If you don't know what intelligence is
>>>then you can not prove we can not construct an intelligent machine
>>>ever.
>>>
>>
>>Within the current definition of computing machines (Turing equivalent
>>that is), it's been proven "beyond a reasonable doubt".
>
>
> Where?
>

I refer you back to Girard. Maybe this time you'll actually read it and think about it.

> By the way see this:
>
> "Computer scientists and logicians have shown that if conventional
> digital computers are considered in isolation from random external
> inputs (such as a bit stream generated by radioactive decay), then
> given enough time and tape, Turing machines can compute any function
> that any conventional digital computer can compute. (We won't consider
> whether Turing machines and modern digital computers remain equivalent
> when both are given external inputs, since that would require us to
> change the definition of a Turing machine.)"
>
> http://plato.stanford.edu/entries/turing-machine/
>
> Computers may have random external inputs. This invalidates Girard's
> claims.
>

Another pearl of the month. To (approximately) quote David Parnas: "If you set all the monkeys in the world to typing randomly for 5 years, it is not *impossible* that they reproduce all the works of Shakespeare, but it is *extremely unlikely*."

That's how much external sources of randomness can account for in AI.
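
Just how unlikely? A back-of-the-envelope sketch in Python; every number in it is an illustrative assumption of mine, not anything Parnas computed:

	keys = 27                    # letters plus space, ignoring case
	phrase_len = 30              # roughly one short line of Shakespeare
	p_one_try = keys ** -phrase_len

	monkeys = 10 ** 12           # generously more monkeys than exist
	keystrokes = 10 * 5 * 365 * 24 * 3600   # 10 keys/sec for 5 years
	tries = monkeys * keystrokes

	print(p_one_try * tries)     # expected hits: ~2e-22, i.e. never

And that is for one line, not the collected works.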

In any case, a source of randomness is in no way different from plain-vanilla input like a file. As a matter of fact, if you have programmed on Unix, you know it has a special file, '/dev/random'.
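
A minimal sketch, just to show there's nothing magic about it; this is ordinary file I/O (Unix only):

	# Read 16 bytes of randomness exactly as you would read any file.
	with open('/dev/random', 'rb') as f:
	    noise = f.read(16)
	print(noise.hex())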

If you think intelligence can be rooted in randomness, well, let me invite you to recreate the works of Shakespeare.

> The noise generated by a cheap sound card could be a good random
> external input :-)
>

Yes, and reproduce the works of Shakespeare. Let's do it.

>
>>>Which ones?
>>>
>>
>>Search in finite models with proof.
>
>
> If we find a proof in a search in a finite model it does not
> invalidate the proof.
>

All it does is invalidate the method you used to claim that computers are no worse theorem provers than humans.

>
>>No. You need to be able to understand what theorems are significant and
>>what theorems are not interesting.
>
>
> And it is orthogonal to the discussion. The discussion is about the
> proof of given conjectures.
>

In order to prove a given conjecture, a computer is going to have to decide which theorems are not important. Otherwise, for any non-trivial theory, you can prove an infinite number of theorems.
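
A toy illustration in Python (mine, not from the thread): blind forward chaining with the single inference rule p |- (p AND p) already yields an infinite stream of valid but worthless theorems, which is exactly why pruning is unavoidable:

	from itertools import islice

	def forward_chain(axiom):
	    theorem = axiom
	    while True:              # never terminates without pruning
	        yield theorem
	        theorem = '({0} AND {0})'.format(theorem)

	for t in islice(forward_chain('p'), 3):
	    print(t)
	# p
	# (p AND p)
	# ((p AND p) AND (p AND p))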

>
>>In so doing humans enhance their ability to construct Mathematics over
>>an infinite domain of all the junk that may otherwise come out by
>>mechanizing axioms and inference rules.
>
>
> Humans can not search in an infinite domain. The number of symbols you
> can handle in your life is finite.
>

Yes, but as a human, I get to choose the important symbols. Computers don't have this privilege, unless specifically directed by a human.

>
>>>By the way mathematics are not science.
>>>
>>
>>The BS statement of the month.
>
>
> It is trivially true.
>

Oh yes, it is trivially true if you hijack the word "science" for stupid Popperistic purposes. Next time you're tempted to get the sense of a word from Popper, try the following:

	www.dictionary.com
	Encyclopedia Britannica
	Oxford English Dictionary

For normal people Mathematics *is* science.

>
> Alfredo

Cheers,
Costin
