# Re: Fitch's paradox and OWA

Date: Thu, 31 Dec 2009 13:29:51 -0700

Message-ID: <8R7%m.50$YP1.43_at_newsfe15.iad>

Marshall wrote:

> On Dec 30, 8:16 pm, Barb Knox <s..._at_sig.below> wrote:

>> Marshall <marshall.spi..._at_gmail.com> wrote:
>>
>>> By the nature of the construction of predicate logic, every arithmetic
>>> formula must be either true or false in the standard model of the
>>> natural numbers.
>>
>> But, we have no satisfactory way to fully characterise that standard
>> model! We all think we know what the natural numbers are, but Goedel
>> showed that there is no first-order way to define them, and I don't know
>> of *any* purely formal (i.e., syntactic) way to do so.


> I was more under the impression that Goedel showed there
> was no complete finite theory of them, rather than no
> way to define them.

Goedel showed neither of the two things you've mentioned.

> Are you saying those are equivalent?

If I'm the one answering this question, then no: defining a model of a formal system is not the same as demonstrating anything about the formal system that's supposed to be about that model. Naturally.


>> (The usual ways to define them are not fully syntactic, but rely on
>> "the full semantics" of 2nd-order logic, or "a standard model" of set
>> theory, both of which are more complicated than just relying on "the
>> Standard Model" of arithmetic in the first place.)


> Here's a possible definition:
>
>   nat := 0 | succ nat
>
>   x + 0 = x
>   x + succ y = succ (x + y)
>
>   x * 0 = 0
>   x * succ y = x + (x * y)
>
> Is there some way this definition is not fully syntactic?

Yes: the part "nat := 0 | succ nat" isn't syntactical. [In the context
of FOL, being syntactical means being part of a FOL language/formula,
which this part doesn't seem to be.]
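For what it's worth, the quoted definition does read naturally as an inductive (recursive) definition rather than as a FOL formula. A minimal sketch of that reading (my own illustration, not from the thread), encoding 0 as None and succ n as a tagged pair:

```python
# Sketch of "nat := 0 | succ nat" as an inductive datatype, with + and *
# defined by the quoted recursion equations. The encoding (None for 0,
# ("succ", n) for succ n) is my own choice for illustration.

ZERO = None

def succ(n):
    return ("succ", n)

def add(x, y):
    # x + 0 = x ;  x + succ y = succ (x + y)
    if y is ZERO:
        return x
    return succ(add(x, y[1]))

def mul(x, y):
    # x * 0 = 0 ;  x * succ y = x + (x * y)
    if y is ZERO:
        return ZERO
    return add(x, mul(x, y[1]))

def to_int(n):
    # Convert the unary encoding back to a Python int for display.
    return 0 if n is ZERO else 1 + to_int(n[1])

two = succ(succ(ZERO))
three = succ(two)
# to_int(add(two, three)) == 5, to_int(mul(two, three)) == 6
```

Note that this reading leaves the point of contention intact: what runs above is a grammar plus rewrite rules, not a first-order formula, so whether it counts as "syntactic" in the FOL sense is exactly what's in dispute.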


> It certainly seems to me that the above is fully syntactic,
> and is a complete definition of basic arithmetic.

That's *not* the canonical presentation of arithmetic: what happened to the usual syntactical symbol '<' in your "complete definition"?

> Are there statements that are true of this definition that
> can't be captured by any finite theory? Sure there
> are, but that has nothing to do with whether it's
> a proper syntactic definition. To say it's not a syntactic
> definition, you have to point out something about
> it that's not syntactic, or not correct as a model
> of the naturals.

Setting aside the missing '<', what you've defined above in no way conforms to the _FOL definition of a model_, which the naturals are supposed to constitute collectively. For example, what is the set of 2-tuples that would interpret your '+'?
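To make the model-theoretic point concrete: in a first-order structure, a function symbol such as '+' is interpreted by its graph, a set of 2-tuples pairing each argument pair with its value. A sketch (my own illustration; the cutoff is arbitrary) showing only a finite fragment of the infinite graph the standard model requires:

```python
# In a FOL structure, '+' is interpreted by its graph: the set of pairs
# ((a, b), a + b). The standard model's graph is infinite; this only
# enumerates a finite fragment for display. CUTOFF is an arbitrary bound.

CUTOFF = 5

plus_graph = {((a, b), a + b) for a in range(CUTOFF) for b in range(CUTOFF)}
# e.g. ((2, 3), 5) is in plus_graph; ((2, 3), 6) is not
```

The recursion equations quoted above describe how to compute '+', but a model is an assignment of such (infinite) sets to the symbols, which is a different kind of object.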


>>> If it's actually the case (that every statement of basic arithmetic
>>> is either true or false) then it's not a shortcoming to say so.
>>> On the contrary, that would be a virtue.
>>
>> Speaking philosophically (since I'm posting from sci.philosophy.tech),
>> entities which in some sense exist but are thoroughly inaccessible seem
>> to be of little value. This applies to the truth values of any
>> statements which can never be known to be true or false.


> While I have sympathy for that position, I don't think it's
> tenable in the long run. Or anyway, it's not tenable to go
> from "of little value" to suggesting that we should, say,
> not attend to the real numbers because of the existence
> of uncomputable numbers, or suggest that statements
> that are undecidable one way or the other are somehow
> neither true nor false. What they are is undecidable.

Formulas that are undecidable in a first-order theory are in a different class from formulas that can't be modeled at all, i.e., that can't be assigned a truth value.

I asked you before:

"(1) There are infinitely many counterexamples of GC.

Tell me what you'd even suspect as a road-map to assign true or false to (1)?"
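A brute-force check makes the difficulty vivid: a finite search can at most turn up individual counterexamples; it can confirm neither that infinitely many exist nor that none do. A sketch (the function names and the search bound are my own choices):

```python
# Brute-force Goldbach check over small even numbers, to illustrate why
# (1) resists truth assignment by finite means.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    # True iff the even number n >= 4 is a sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

counterexamples = [n for n in range(4, 1000, 2) if not goldbach_holds(n)]
# counterexamples == [] for this range
```

That the list comes back empty for this range settles nothing about (1) either way, which is the point.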

Now let (1') be defined as:

(1') df= (1) /\ A1 /\ A2 /\ ... /\ A9

where A1 - A9 are the axioms of Q (a la Shoenfield). Tell us, Marshall: in what models, or what kinds of models, do you think you could assign 'true' or 'false' to (1')? If you really can't - and I don't think you can - then don't you at least entertain the possibility that there are arithmetic statements that can't be true or false?

Why must a statement be true or false when _there's no way_ to assign a truth value to it anyway? Other than that we might have grown accustomed to it, what kind of reasoning is that?

OK, I might sound a bit rhetorical here. But can you technically answer my question about (1')?

Received on Thu Dec 31 2009 - 21:29:51 CET