Re: Transactions: good or bad?

From: Bob Badour <bbadour_at_golden.net>
Date: Sun, 15 Jun 2003 13:15:56 -0400
Message-ID: <5KnHa.12$WR3.903214_at_mantis.golden.net>


"Costin Cozianu" <c_cozianu_at_hotmail.com> wrote in message news:bci44j$j85o4$1_at_ID-152540.news.dfncis.de...
> Alfredo Novoa wrote:
> > Costin Cozianu <c_cozianu_at_hotmail.com> wrote in message
> > news:<bcfed9$idmcu$1_at_ID-152540.news.dfncis.de>...
> >
> >>Alfredo Novoa wrote:
> >>Playing chess is not a proof. For all the programs we are talking about,
> >>playing chess is a search in a finite model.
> >
> > It is not finite if you don't limit the number of turns, and a
> > complete search is infeasible. A chess problem is not very different
> > from proving a theorem.
>
> I can understand your lack of knowledge in Math, but this nonsense you
> just wrote is inexcusable, especially after trashing Girard.
>
> Read my lips: chess has a finite model, chess *is* finite. If you're
> not able to see the obvious, you are borderline trolling.

The board and the pieces have a finite number of states just as my computer has a finite number of states.

Where is your proof that all chess games halt?
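
Note that a finite state space does not, by itself, force termination. Here is a minimal Python sketch--purely illustrative, nothing to do with real chess rules--of a process confined to three states that never halts unless some external rule stops it:

    # A "game" confined to a finite set of positions that never terminates
    # on its own: nothing about the finiteness of the state space forces a
    # halt.
    positions = ["A", "B", "C"]              # finite state space

    def play_forever():
        i = 0
        while True:                          # revisits the same states forever
            state = positions[i % len(positions)]
            i += 1
            # no terminal condition is ever reached, so the loop never exits

    # Calling play_forever() would run indefinitely despite only three states.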

> >>>But a computer does not need to think in order to prove a theorem or
> >>>to find a stalemate in five movements.
> >>
> >>Exactly, it only needs to compute. In order to prove Pythagoras' theorem
> >>you have to think.
> >
> >
> > Even if it were true (you cannot prove it, and we don't know what
> > thinking is), it is not the case for all theorems.
> >
>
> It is the case for more than enough theorems, those that count. It is
> so much the case that a computer will not "find" a proof unless it is
> launched by a mathematician in a particular domain, and on a particular
> theorem, where the mathematician who launched the program either knows
> for sure that the search is finite, or knows or has good reason to
> believe that the particular resolution mechanism utilized by the program
> will find the proof.

How then have computers finally proved conjectures that stumped mathematicians for decades? Surely, if the mathematician "knew" the search would be finite, the mathematician must already have a proof for the conjecture.

The only thing mathematicians have--with respect to proofs--that computers lack is impatience. The computer can just keep chugging away at the problem, and it's the mathematician who pulls the plug to start over.

> Give the computer Peano's axiomatic construction, and let it solve
> modern algebraic problems, and it'll blow smoke.

How, exactly, did humans do any better?

> >>Oh, great. So you admit to confusing search in a finite model with
> >>proving.
> >
> > Brute force and other search approaches are not limited to finite
> > models, and if you have proved something, the method used does not
> > matter.
> >
> > Human proving is also a search. Humans are good at restricting the
> > search.
> >
>
> You obviously have not done enough Math to know this is trivially false.
> A human will extend the model he's working with if needed; a computer
> will not.

Except that the article cited earlier specifically contradicts your statement above. The strategy the computer used for the proof involved extending its model with previously proved theorems.
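
For what it's worth, reusing previously proved results is mechanical enough to sketch. Below is a toy forward-chaining loop in Python--the rule format is invented purely for illustration--where every newly derived fact is added back to the working set and can feed later derivations:

    # Toy forward chaining: each derived fact goes back into the knowledge
    # base, so later steps can build on earlier results. Rules are
    # (premises, conclusion) pairs -- a format made up for this example.
    def forward_chain(axioms, rules):
        known = set(axioms)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and premises <= known:
                    known.add(conclusion)   # extend the working set
                    changed = True
        return known

    axioms = {"A"}
    rules = [({"A"}, "B"), ({"A", "B"}, "C"), ({"C"}, "D")]
    print(forward_chain(axioms, rules))     # {'A', 'B', 'C', 'D'} (set order varies)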

> >>>It is very easy to create software which proves not entirely trivial
> >>>boolean algebra theorems.
> >>
> >>Yes, but we are not talking trivialities here.
> >
> >
> > Then you are talking only about very hard to solve theorems. You were
> > not specific.
> >
>
> The discussion started with the claim that you can construct an AI
> system that will prove your programs, or at least that will be able to
> generate an exhaustive set of tests for your programs.

With all due respect, Costin, Alfredo made neither of those claims. You have been talking past his points without addressing them. You introduced the concept of computer thought to turn the discussion into a pissing match. The requirement for thinking computers did not exist in anything to which you were responding.

> >>Do you know the Halting problem?
> >
> >
> > More or less.
> >
> > http://www.wikipedia.org/wiki/Halting_problem
> >
> > It is a problem for humans too, but we can establish heuristic halt
> > conditions.
> >
>
> No, it is not a problem for humans because we have not established that
> humans can be modelled by Turing Machines.

It matters not whether humans can be modelled by Turing Machines. It only matters whether computer programs can be modelled by Turing Machines for the Halting Problem to apply. After all, the Halting Problem simply states that no single procedure can decide, for every possible program and input, whether that program halts. It does not prevent anyone--human or computer--from proving that some class of programs halts or that some other class of programs does not halt.
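
To make that last point concrete, here is a minimal Python sketch using a toy program representation invented just for the example: a "program" is a list of fixed loop bounds, and for that restricted class halting is trivially decidable, which is all the point requires.

    # Halting IS decidable for restricted classes of programs. Here a
    # "program" is just a list of fixed loop bounds -- a representation
    # made up for this example, not any real tool's format.
    def halts(program):
        # Every program in this class halts: each loop runs a fixed,
        # finite number of iterations, so the decision is trivial.
        return all(isinstance(bound, int) and bound >= 0 for bound in program)

    def run(program):
        steps = 0
        for bound in program:
            for _ in range(bound):
                steps += 1
        return steps

    example = [3, 5, 2]
    assert halts(example)       # decidable for this restricted class
    assert run(example) == 10   # and it does in fact halt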

> > Lots of humans fail trying to prove some theorems, and some proofs are
> > found when humans are looking for other things.
>
> That's what computers don't do.

Again, the article previously cited contradicts your statement.

> >>No. Humans are creative theorem provers.
> >
> >
> > Kasparov said that Deep Blue made several creative moves, and
> > computers have proven theorems which mathematicians thought were
> > reserved for creative theorem provers.
> >
>
> Very "creative" indeed. Creative by means of an exhaustive partial search.
>
> Are we talking fantasies here?

Where is your proof that humans use anything other than exhaustive partial search for creativity? Or are you talking fantasies here? For that matter, where is your proof that fantasies are anything other than exhaustive partial searches?
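
For reference, "exhaustive partial search" is not a mysterious notion. A depth-limited minimax--sketched below in Python over a hand-built tree whose names and scores are invented for the example--examines every continuation up to a cutoff and nothing beyond it:

    # Depth-limited minimax: searches every continuation up to `depth`,
    # then stops -- an exhaustive but partial search. The tree and scores
    # below are made up purely for illustration.
    def minimax(node, depth, maximizing, children, evaluate):
        kids = children(node)
        if depth == 0 or not kids:
            return evaluate(node)
        values = [minimax(k, depth - 1, not maximizing, children, evaluate)
                  for k in kids]
        return max(values) if maximizing else min(values)

    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
    scores = {"a1": 3, "a2": 5, "b": 0}

    best = minimax("root", 2, True,
                   children=lambda n: tree.get(n, []),
                   evaluate=lambda n: scores[n])
    print(best)   # 3: every line to depth 2 was examined, nothing deeper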

Costin, it is clear you are interested only in the pissing match and in drawing the discussion away from the original points made to obscure them.

> > It has not been proved that we cannot construct creative computers, but
> > creativity is not always the only way. Automated theorem proving
> > progresses a lot faster than strong artificial intelligence.
> >
>
> Oh, strong AI has been largely discredited as a scientific hoax by now.

How is that observation relevant as a response to Alfredo's implicit statement above that strong AI is neither necessary nor effective for theorem proving?

I don't know what you are trying to achieve with this pissing match and by talking past Alfredo.

> And the progress, while significant for theorem provers as human
> helpers, when considered against your absurd theories that there's no
> difference in principle between computers as mathematicians and humans
> as mathematicians, well, that progress is not even good enough to
Where did Alfredo present the absurd theory you ascribe to him? Costin, you can do better than constructing straw men to burn. I suggest you stop putting words in Alfredo's mouth and respond instead to what he actually says.

> >>>>It's provably impossible to create such a software that will take any
> >>>>sufficiently expressive formal system, a proposition in the system and
> >>>>*reliably* come up with the decision whether that proposition is a
> >>>>theorem or not.
> >
> >
> > If you meant we cannot create software which can prove ALL theorems,
> > then of course, but humans cannot either.
>
> Humans can decide which theorems are important and which are not, which
> are worth pursuing and which are not, which research direction should
> be abandoned altogether.

What prevents a similarly informed computer from doing the same?

> >>>I don't think there is anything supernatural in the human mind. If we
> >>>can think, other machines also could. But you don't need to be
> >>>self-conscious in order to prove a theorem.
> >>>
> >>
> >>You don't "believe" there's anything supernatural. And you don't know,
> >>you only "believe" you are a machine.
> >
> >
> > Well, according to Popper we cannot prove any scientific theory with
> > certainty; they can only be shown to be true beyond a reasonable
> > doubt, and the only certainty is in disproving scientific theories.
> >
>
> You don't "prove" empirical theories. We were talking about formal
> theories here. Did Popper tell you the difference?

Um, we measure the axioms for one and make them up for the other?

> >>>>When I see the first computer take the decision to submit a paper
> >>>>to a scientific journal, all by itself, then I'll reconsider my position.
> >>>
> >>>
> >>>You are mixing very different things.
> >>>
> >>
> >>No, it is you who are mixing very different things.
> >
> >
> > Which ones?
> >
>
> Search in finite models with proof.

Proof is proof. Where is your proof that humans avoid search in finite models to prove? Or are you simply blowing smoke for the purposes of a pissing match?
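
To put something concrete behind "proof is proof": for propositional logic, exhaustively searching the finite model of truth assignments is a complete proof procedure. A minimal Python sketch, with a formula encoding invented just for the example:

    # For propositional formulas, checking every truth assignment is both a
    # finite search and a proof. Formulas here are plain Python functions of
    # booleans -- an encoding made up for the example, not a prover's format.
    from itertools import product

    def is_tautology(formula, n_vars):
        # Exhaustive search of the finite model (all 2**n_vars assignments).
        # If no counterexample turns up, that exhaustive search is the proof.
        return all(formula(*values)
                   for values in product([False, True], repeat=n_vars))

    # One of De Morgan's laws: not (p and q)  <->  (not p) or (not q)
    de_morgan = lambda p, q: (not (p and q)) == ((not p) or (not q))
    assert is_tautology(de_morgan, 2)

    # A non-theorem is refuted by the same search:
    assert not is_tautology(lambda p, q: p or q, 2)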

> > You are mixing the ability of proving theorems with the motivations
> > and desires for sending a paper to a scientific journal.
> >
> > You don't need to know what a scientific journal is in order to prove
> > a theorem.
> >
>
> No. You need to be able to understand which theorems are significant and
> which theorems are not interesting. A journal will publish only
> interesting theorems. A human scientist will generally send to
> publication only articles that he deems interesting and as having a chance
> of being published. He will also write the article skipping over obvious
> proofs, or over less obvious but mechanical and unsubtle proofs (perhaps
> putting them in an appendix), and so on and so forth. Evaluating and
> communicating knowledge, that is.

First, a sufficiently informed computer could automate all of those decisions. Second, none of the above is relevant to any of Alfredo's points you were responding to when you started talking past him and embarked on this uninteresting, futile pissing match.

<additional irrelevant pissing match fodder snipped>
