Home -> Community -> Usenet -> comp.databases.theory -> Re: Programming is the Engineering Discipline of the Science that is Mathematics
Keith H Duggar wrote:
[ A lot of irrelevant chaff skipped]
> > 1. PT is a logic generalization.
>
> No, I the OP claimed quote:
>
> "interestingly, one view of logic is as a specialization
> of conditional probability theory. One that deals only
> with certainty (1) and impossibility (0) rather than a
> range of probability." -- KHD
>
> In other words to paraphrase myself and Cox:
>
> "interestingly, one view of probability theory, the Cox
> formulation, is as a generalization of logic. One that
> deals with a range of probability corresponding to a
> degree of rational belief bounded in the extremes by
> certainty (1) and impossibility (0)."
>
> > My response was that PT is not truth functional in the
> > sense the propositional logic is namely that the compound
> > sentence truth, in logic, is determined by its
> > constituent's truth values
>
> No, your response was quote
>
> "PT cannot be 'a generalization of logic' because PT
> 'connectives' (+/*) are not truth functional." -- vc
>
> Which, I tried to explain to you is wrong for two reasons.
> First, (+/*) are NOT connectives in PT they are the real
> operators addition and multiplication. Second, PT uses the
> SAME connectives as logic. The connectives haven't changed
> they are still truth-functional as well as probability-
> functional.
OK, good, so you continue to claim that P(A and B) is determined solely by P(A) and P(B), just as you did before:
"When you apply the connectives to a probability-valued statements you get probability-valued statements whose probability depends only on the constituent probabilities."
Now, I'll rephrase my puzzle in Jaynes/Bayesian terms:
Let [0..1] be a real line interval from which a point is chosen randomly. Let a and b be two sub-intervals of [0..1] whose lengths are 1/3 and 1/8 respectively. Let A and B be the two propositions 'the random point is chosen from a' and 'the random point is chosen from b' respectively. Then, by the Indifference Principle (see the Jaynes book), we can assign prior probabilities P(A)=1/3 and P(B)=1/8. Show how to derive P(A and B) given only P(A) and P(B). If you can show that, you can claim that "probability depends only on the constituent probabilities". Are you unable to do that?
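A minimal sketch of the puzzle in Python with exact rationals (my own illustration, not part of the exchange): two placements of a and b with the same lengths, hence the same P(A) and P(B), yield different values of P(A and B) depending on how the intervals overlap:

```python
from fractions import Fraction

def overlap(a, b):
    """Length of the intersection of two sub-intervals (lo, hi) of [0, 1]."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(hi - lo, Fraction(0))

third, eighth = Fraction(1, 3), Fraction(1, 8)

# Case 1: b nested inside a  ->  P(A and B) = 1/8
a1, b1 = (Fraction(0), third), (Fraction(0), eighth)
# Case 2: a and b disjoint   ->  P(A and B) = 0
a2, b2 = (Fraction(0), third), (Fraction(1, 2), Fraction(1, 2) + eighth)

# Same marginals in both cases ...
assert a1[1] - a1[0] == a2[1] - a2[0] == third   # P(A) = 1/3
assert b1[1] - b1[0] == b2[1] - b2[0] == eighth  # P(B) = 1/8
# ... yet different joint probabilities:
print(overlap(a1, b1), overlap(a2, b2))  # 1/8 0
```

So the marginals alone do not pin down the joint probability, which is the point of the puzzle.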
>
>
> > I've provided an example, trivial to anyone who's read an
> > introduction to PT, and asked to compute P(A and B) given
> > P(A) and P(B). There has been no answer yet. Are you
> > unable to answer the question ?
>
> Rest assured there is an answer and I am able to answer
> it.
So what is the answer?
> > What kind of sentences do you have in mind whose
> > probability is one?
>
> > Please provide a meaningful example.
>
> What are you talking about? It's not hard. It's trivial and
> irrelevant! For proving probability theorems or that
> probability is a generalization of logic it does not matter
> what A, B, C etc stand for! "My first name is Keith" "My
> last name is Duggar" "My first name is Keith or my last name
> is Duggar". Does that satisfy you? I hope so because it is
> TOTALLY irrelevant (and somewhat VI honestly).
>
> [disjunction nagging]
>
> > 2. In some case, namely when probabilities are 0 and 1,
> > the probabilistic statements 'reduce' to logical
> > statements. I asked to provide two or more statements
> > whose probability would be one and show what the
> > probability of the disjunction of such statements might
> > be. There has been no answer. Are you unable to answer the
> > question ?
What you've provided is not a meaningful example, but just two mutually irrelevant true propositions (similar to introducing irrelevancies, like the penguins and the leaking roof in the Jaynes book). To provide a meaningful example of reducing probabilities, take two relevant statements, say A='It will rain today' and B='The roof will leak' (see the same book), assign priors, and show how P(A and B) can be equal to one.
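For reference, a brute-force check (my own sketch, not from the thread) of why such an example forces both priors to certainty: since P(A and B) <= min(P(A), P(B)), the joint can only reach one when both marginals do. Enumerating joint distributions on a coarse rational grid:

```python
from fractions import Fraction
from itertools import product

# Enumerate joint distributions over the four atoms
# (A&B, A&~B, ~A&B, ~A&~B) with probabilities on a grid of eighths.
grid = [Fraction(i, 8) for i in range(9)]
for p_ab, p_anb, p_nab in product(grid, repeat=3):
    p_nanb = 1 - p_ab - p_anb - p_nab
    if p_nanb < 0:
        continue                          # not a valid distribution
    p_a, p_b = p_ab + p_anb, p_ab + p_nab
    assert p_ab <= min(p_a, p_b)          # joint never exceeds a marginal
    if p_ab == 1:                         # P(A and B) = 1 ...
        assert p_a == 1 and p_b == 1      # ... only with P(A) = P(B) = 1
```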
Also, your 'basic proof' is quite meaningless (as I pointed out in another message):
> One last time I will provide one of these basic proofs. And
> I will provide more than you ask for, that is below is a
> proof for the complete reduction to logic in the limit of
> certainty (1) and impossibility (0).
>
> P(A) = 1
> P(A or B) = P(~(~A and ~B))
> P(A or B) = 1 - P(~A and ~B)
> P(A or B) = 1 - P(~B|~A)P(~A)
Since P(~A) equals zero, the conditional P(~B|~A) above is undefined, so the statement does not make sense. We'll try an argument from authority (since you appear impervious to simple logic):
(the Jaynes book)
"In our formal probability symbols (those with a capital P) P(A|B) ... We repeat the warning that a probability symbol is undefined and meaningless if the conditioning statement B happens to have zero probability in the context of our problem ..."
> P(A or B) = 1 - P(~B|~A)(1-P(A))
meaningless
> P(A or B) = 1 - P(~B|~A)(0)
meaningless
> P(A or B) = 1 - 0
> P(A or B) = 1
>
> P(B) = 1
> (same as above just swap A and B)
meaningless, see above.
>
> P(A) = 0
> P(B) = 0
> P(~A) = 1
> P(~B) = 1
> P(~A and ~B) = 1 [conjunction proved in previous post]
> P(A or B) = P(~(~A and ~B))
> P(A or B) = 1 - P(~A and ~B)
> P(A or B) = 1 - 1
> P(A or B) = 0
>
> thus
>
> A : B : A or B
> 0 : 0 : 0
> 0 : 1 : 1
> 1 : 0 : 1
> 1 : 1 : 1
Naturally, there can be no "thus" (see above).
[ Irrelevant chaff and the refusal to answer (1) and (2) skipped ]

Received on Sat Jun 10 2006 - 22:11:19 CDT