Logic 101


This is a work in progress. It will be added to as I have time and inclination. If anyone wants to help, or especially if you see an error, please PM me.

A note on style: I will use italics for two reasons. Most instances of italics will be to introduce technical words used in logic, or technical usages of common words. It is a bad habit among philosophers to use common words in often narrow senses - senses specific to philosophy. Logicians are no different. I will also use italics for simple emphasis, where I think it might be helpful. Hopefully, the context will make the distinction clear.

I have also decided that I will sometimes provide links for these words. These links lead to material that may be of interest, but are not necessarily primarily explicative - they may just be interesting ideas connected to the words. If they confuse, I suggest you ignore them. In some cases, I have added a link primarily for amusement - perhaps only for my own.

A note on notation: Different writers use different systems of notation for the symbols used in logic. I am following the notation I first learnt, Irving Copi's, as a matter of convenience. Much thanks to TheStumps and Carleas for their invaluable assistance in this regard.
Faust

Introduction

Probably one of the most difficult aspects of discussing logic is to define it. First, we must limit our usage of the word, for it has many. Here, we are considering only what is called formal, deductive logic – but we will not necessarily use those qualifiers after this.

Logic Defined

We will define logic by its purpose – which is to distinguish good arguments from bad arguments. Of course, now we must say what we mean by good and bad. For this, we will introduce another word that can be used in different ways, but for which we will adopt a specific and therefore technical usage – validity.

Logic distinguishes valid arguments from those that are invalid, according to a peculiar set of rules. More on those rules later. Suffice it, for the moment, to say that valid arguments are those in which, if the premises are true, then the conclusion must also be true. This is called a valid inference, or a valid argument. It should be noted that valid logical arguments then produce only what is called analytic truth – which is to say that the truth of the conclusion is dependent on the truth of the preceding premises and not upon any matters of fact.

It should also be noted that logic is not concerned with the actual process of inference, but only with the statements used to make an inference and their relations to each other. These statements are also called propositions, claims or assertions, often indiscriminately. Some writers draw a distinction between statements and propositions, but I will not. None of these are usually called sentences, although they have been by some writers in the past. I will draw this last distinction and will not use "sentence" as a synonym for statement, etc.

But these assertions are conveyed by sentences. We can properly say that an assertion has in common with the sentence used to convey it that sentence's meaning, and that such a sentence can be of only one kind: the declarative sentence. To illustrate, any declarative sentence may be translated into another language, but the assertion it makes remains the same in either language. Likewise, the same sentence may make different claims when used in different contexts.

Thus “All men are mortal” is the same proposition as “All men will someday die”, even though they are two different sentences. And “I went to jail” makes a different claim in a recitation of one’s criminal history than it does during a Monopoly game, or so it could, at least.

So logic has as its subject matter arguments and the statements used to make them. So, what is an argument? It’s a collection of statements (premises) of which the last (the conclusion) is claimed to follow from those that precede it, which is to say that those statements preceding the conclusion are to be considered grounds for the conclusion. While the word “argument” can mean many other things in common parlance, it has this technical meaning in logic.

It's worthwhile to note that logic does not create this relationship between the premises and the conclusion, however – a valid logical argument only tests and affirms this relationship. Again, we are not, as logicians, concerned with the actual process of inference – but only with subjecting completed arguments to a method which will clarify and illustrate that process, by testing it for errors.

Truth

Propositions are either true or false. If we cannot determine whether a declarative sentence is either true or false, then it does not contain a statement, for the purposes of logic. Another way to say this is that logic presupposes that truth or falsity can be assigned to any proposition – an assignable truth or falsity being part of the definition of a proposition. While indeterminacy may be fascinating to the philosopher, it’s useless to the logician. And while the issue of indeterminacy has been insinuated into the subject of logic, we will not consider it here.

Arguments are never said to be true or false. They are valid or invalid. While it may be of great interest to us that statements be either true or false, the word "truth" is not applicable to arguments themselves. The seat of truth (or falsity) in logic is then propositions and not arguments. Again - arguments can be valid or invalid, but not true or false.

Valid and invalid arguments alike may contain true premises as easily as false ones, and while it may seem odd at first glance, any argument, valid or invalid, can have a true proposition as its conclusion even if its premises are not true. These results would be accidental to the argument, but still possible.

There are two conditions that must be satisfied to establish the truth of any argument’s conclusion as a conclusion, however. The argument must be valid, and the premises must all be true (this is also called a sound argument). The logician is concerned only with the first of those conditions. Thus the logician, qua logician, is not concerned with truth per se. The truth of the premises used in an argument must be established, or merely accepted, prior to the argument. And the truth of the conclusion (as a conclusion of the argument at hand), being in part dependent upon the truth of the premises, is similarly of no importance to the logician. Logic is the study of validity, and not of truth.

Faust
Unrequited Lover of Wisdom

Posts: 16846
Joined: Sat May 21, 2005 6:47 pm

Re: Logic 101

Statements

There are two kinds of statements used in logic: simple statements and compound statements. Compound statements are those that contain two or more simple statements. Thus “My dog has fleas” is a simple statement and “My dog has fleas and he barks at the mailman" is a compound statement, consisting of the simple statement "My dog has fleas" and another simple statement, "he barks at the mailman". There are several types of compound statements - this one is a conjunction.

Conjunction

In ordinary language, there are several ways to conjoin simple statements into conjunctions. In some cases, the choice is merely among conjoining words - "My dog has fleas but he loves a bath", for instance, is a conjunction that doesn't contain the word "and". Such statements as "Jack and Jill went up the hill" are also conjunctions, but differ from our first example in that the word "and" does not appear between two simple statements; it nonetheless indicates that two simple statements are conjoined. But in symbolizing any of these compound statements, we must observe certain conventions.

As we have said, statements are either true or false. Another way to say this is that all statements have a truth value. That is, any statement, if true, has the truth value true and any statement, if false, has the truth value false. Here we are only introducing the nomenclature truth value.

A conjunction is true only if both of its conjuncts - that is, the simple statements being conjoined - are true. This is merely what we mean when we make a statement that is a conjunction - we are asserting that both (or all) of the simple statements are true, at the same time. So, if even one of the conjuncts is false, then the conjunction, taken as a whole, is also false. Conjunctions are called truth-functionally compound statements for this reason – the truth value of a conjunction is wholly determined by the truth of its component parts.

The reason logicians use symbols is that, with them, the correctness of the above paragraph is more easily seen. So, let's call any two statements “p” and “q”, following convention.

So, given any two statements p and q there are only four possible sets of truth values available. These four sets determine the truth value of their conjunction:

1. p is true and q is true (“p and q” is true)
2. p is true and q is false (“p and q” is false)
3. p is false and q is true (“p and q” is false)
4. p is false and q is false (“p and q” is false)

We’ll use a dot (•) as the symbol for “and”. With this, we’ll now construct a truth table mirroring the values stated above, which will serve as the definition for that symbol. It is a definition for • because it accounts for the truth value of the conjunction in every possible case.

Code: Select all
     p        q        p  •  q
1.   T        T           T
2.   T        F           F
3.   F        T           F
4.   F        F           F

Note that the last column provides the truth value for the conjunction taken as a whole.
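If it helps to see the table generated mechanically, here is a minimal sketch of my own (not part of the text above), using Python's built-in `and`, which behaves exactly like the dot on truth values:

```python
from itertools import product

# Enumerate every assignment of truth values to p and q,
# in the same order as the four rows of the table above.
for p, q in product([True, False], repeat=2):
    print(p, q, p and q)  # the last column is the conjunction p • q
```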

To illustrate this in our examples, above, we will write

"My dog has fleas and he barks at the mailman" as "My dog has fleas • He barks at the mailman"

"My dog has fleas but he loves a bath" as "My dog has fleas • My dog loves a bath"

"Jack and Jill went up the hill" as "Jack went up the hill • Jill went up the hill"

If we assign a symbol to each simple statement, and by convention use capital letters to do so,

"My dog has fleas • He barks at the mailman" is written as "F • B"

"My dog has fleas • My dog loves a bath" is written as "F • L"

"Jack went up the hill • Jill went up the hill" is written as "A • I"

The choice of letters used as symbols is rather arbitrary - often the first letter of an important word in the proposition is used - but this is really only a mnemonic device.

Negation

Another kind of truth-functionally compound statement is negation. It is the combination of any statement and its denial. So the negation of “My dog has fleas” can be written “It is not true that my dog has fleas”, “My dog doesn’t have fleas”, “There are no fleas on my dog”, etc., but if “My dog has fleas” is symbolized as “D” (following again the convention that simple statements are symbolized by upper-case letters), then any of these expressions can be symbolized the same way. We’ll use the tilde (~) as that symbol and we then have ~D. So, following our convention above, for any statement p, its negation is symbolized as ~p.

And we can construct a truth table to define ~ as well.

Code: Select all
p        ~p
T         F
F         T

Disjunction

Next we consider disjunctions, indicated by the word “or” - also truth-functionally compound (they're all going to be). Here, the simple statements that make up the components of this kind of compound statement are called disjuncts, or alternatives. “Or” is a little trickier than “and”, for it has two common meanings – the inclusive and the exclusive. The inclusive, or weak, use of the word “or” is as in “Fleas can get on either my cat or my dog”, meaning that both of my pets are susceptible to fleas (the meaning is inclusive of both). But if I say, “My pets never have fleas at the same time: it’s either one or the other” we have the exclusive, or strong sense of “or”.

Here’s the difference – the weak sense asserts that at least one disjunct is true, but the strong sense asserts that at least one is true, but not both. That at least one disjunct is true is a meaning common to both senses - it is what is called the common partial meaning, and it is this common partial meaning that we will use to define our symbol for disjunction. For this reason, the symbol for "or", or disjunction, will account for all instances of "or" - both the weak and the strong senses. We will use the symbol (v) for disjunction, and define it thusly:

Code: Select all
     p        q        p  v  q
1.   T        T           T
2.   T        F           T
3.   F        T           T
4.   F        F           F

Here we see that if at least one disjunct is true, then the disjunction, again taken as a whole, is also true. The only case in which the whole disjunction is false is when both disjuncts are false.

But what of the exclusive sense of “or”? Here, we must use a more complex-looking (but not more logically complex) expression. For here we mean “either but not both”. We need a little more “punctuation” for this, which may be familiar from mathematics – parentheses.

The statement "I will go to the doctor and I will go to the butcher or I will go to the movies" is ambiguous. It could mean that I will go either to both the doctor and the butcher or to neither, going to the movies instead; or it could mean that I will go to the doctor and then either to the butcher or the movies. Here, the word "either" and its placement in the sentence helps us to determine just what statement I am making. It's not always easy to divine the statement contained in a sentence, but that is a "pre-logical" problem. The presence and placement of "either" is often, but not always, a help. In symbolizing such statements, we need to remove the ambiguity, for one formulation is a conjunction and the other a disjunction.

"I will go to the doctor and I will go to the butcher or I will go to the movies" can be rewritten as "I will go to the doctor" • "I will go to the butcher" v "I will go to the movies". But we have no symbol for "either", and this may or may not be a help, anyway.

This statement is just as ambiguous, of course. We will solve this the same way as we would in numeric algebra - with parentheses. We'll assign a capital letter to each of the simple statements we are using here, such that "I will go to the Doctor" is D, "I will go to the Butcher" is B, and "I will go to the Movies" is M. We will then have D • B v M. And just as we would in an algebraic expression, we will assign parentheses depending upon what our meaning really is. So we have a choice between (D • B) v M and D • (B v M). And again we will see that one expression is a conjunction and one a disjunction.

So, now that we have introduced parentheses, let's get back to the exclusive, or strong, sense of “or”. Some systems do use a symbol for the strong sense of "or", but we will not, here. We will use symbols that we already have at our disposal, so that we can see how parentheses are used, and so we can see how conjunctions and disjunctions can be combined using them.

So let's again write any disjunction as p v q. If we mean this in the strong sense, we mean "either p or q, but not both p and q". "Either p or q" is symbolized as "p v q", of course. "But", as we have seen, will be symbolized as •. "Both p and q" signifies a conjunction, so we have "p • q". And "not", of course, is ~. We need to apply this ~ to the conjunction as a whole, so it will be written as ~(p • q).

So our entire expression "p or q but not (both) p and q" will be (p v q) • ~(p • q).

Whew!
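As a check (a sketch of my own, not Faust's), we can confirm that this expression really captures the exclusive "or": on truth values, "one but not both" is exactly inequality, so (p v q) • ~(p • q) should agree with `p != q` in every case:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    strong_or = (p or q) and not (p and q)  # (p v q) • ~(p • q)
    assert strong_or == (p != q)  # exclusive "or" is truth-value inequality
    print(p, q, strong_or)
```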


Conditionals

Next we will consider conditional statements, also called implications. These are "if, then" statements, such as "If I knew you were coming, then I'd have baked a cake". The first part of these compound statements (between the "if" and the "then") is called the antecedent, and the part after the "then" is called the consequent. Implications do not assert that either the antecedent or the consequent is true, but only that the consequent is true if the antecedent is true. Here we say that the antecedent implies the consequent.

Like disjunctions, implications are of more than one sort. Implications may be logical, definitional or causal. (Some writers include a fourth type - "decisional", but I have my doubts about that.) A logical implication is such: "If all men are mortal and Socrates is a man, then Socrates is mortal". A definitional implication is: "If all bachelors are unmarried men and I am a bachelor then I am an unmarried man". And a causal implication would be such as: "If water is heated to 100 degrees Celsius then it will boil".

We will adopt the symbol (⊃) for implication, as in "If (A)you're my friend, then (B)I don't need enemies" - where we will write A ⊃ B. But just as with disjunctions, we will need to see what the different meanings of these implications have in common - we must find the common partial meaning of different implications, so that we can use ⊃ in every case of a conditional, no matter what specific type of implication is being made.

As we have seen, implications assert only that if their antecedents are true, then their consequents are true. So, to define conditionals, we will look to see how those implications can be false. We can see that our causal implication would be false if we heated water to 100 degrees and it didn't boil. So we know that any implication is false when its consequent is false but its antecedent true. So any implication "if p (heated to 100 degrees C, in our example) then q (water boils)" is false if p • ~ q is known to be true, for in saying p • ~q, we are saying that the conjunction of p (100 degree heat) and the negation of q (i.e. the water does not boil) is true. Which we know is not the case in our example of the boiling point of water.

So, if p • ~q is false, its negation must be true, and this is ~(p • ~q). So we have an expression that tells us when an implication is true. And we can now construct our truth table to define our symbol "⊃".

Code: Select all
p    q    ~q    p • ~q    ~(p • ~q)
T    T     F      F           T
T    F     T      T           F
F    T     F      F           T
F    F     T      F           T

Note: p ⊃ q has the same truth values as ~(p • ~q) above, because the latter, as we have said, is the common partial meaning gleaned from the different meanings of the former. As representative only of that common partial meaning, ⊃ is a weak form of implication, known as material implication. No causal connection, for instance, is indicated by this symbol, even if, in a given case, such a connection is meant.

So, this definition does not represent the only meaning of a conditional, but rather the part of the various meanings of conditionals that those various meanings have in common - that implications cannot be true if their consequents are false but their antecedents true. And thus we can use ⊃ for any conditional statement.
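A quick mechanical check (again a sketch of my own) confirms that ~(p • ~q) agrees in every case with the perhaps more familiar ~p v q, which we will meet later as the rule of Material Implication:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    implies = not (p and not q)       # ~(p • ~q), the definition above
    assert implies == ((not p) or q)  # the equivalent form ~p v q
    print(p, q, implies)
```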


Material Equivalence

Two statements are materially equivalent when they have the same truth value. We will use (≡) here to indicate material equivalence. An easy example of material equivalence is: p ≡ ~~p. Clearly, since negation reverses the truth value of any statement p, negating the negation of p restores the original truth value. There are a couple of instances of material equivalence that are very handy for the logician, both of which we will examine now, and which we shall see again when we examine rules for the validity of arguments.

The first is ~(p • q) ≡ (~p v ~q). This can be proved with a truth table, but it can also be explained without one. As we have seen, conjunctions, such as p • q, are true only if both conjuncts are true. ~(p • q) states that the conjunction is not true, so we can infer that ~p v ~q - that is, that either p is not true or q is not true. So, to say one is to say the other - and thus they are materially equivalent.

Similarly, our (weak-sense) disjunctions assert that at least one disjunct is true, so to negate a disjunction, we'd have to show that both disjuncts are false. So ~(p v q) - the negation of a disjunction - is the same as asserting the conjunction of the negations of (both of) those component statements p and q. We thus arrive at ~(p v q) ≡ (~p • ~q), for in the first we are saying that the disjunction of p and q is false, which amounts to saying that the conjunction of the negation of those statements is true - that both statements (p and q) are false, in other words.

~(p • q) ≡ (~p v ~q)
and
~(p v q) ≡ (~p • ~q)

are known as DeMorgan's Theorems.
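Both theorems can be verified exhaustively; this sketch (my own illustration) runs all four truth-value assignments:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    # ~(p • q) ≡ (~p v ~q)
    assert (not (p and q)) == ((not p) or (not q))
    # ~(p v q) ≡ (~p • ~q)
    assert (not (p or q)) == ((not p) and (not q))
print("DeMorgan's Theorems hold in every case")
```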

We have now defined enough symbols to formulate many of the arguments used in propositional logic. We will next look at argument forms and some rules for constructing arguments that are of those forms.


Argument Forms

Just as arguments can be valid or invalid, so can argument forms. Any argument form can be proved invalid if we can construct an argument of that specific form which has true premises and a false conclusion. This is so because, as we have said, validity is a purely formal characteristic of arguments, having nothing to do with the subject of the argument. Here we will see what is meant by argument form.

Earlier, we distinguished between how specific premises are symbolized (we're using capital letters, like A, B, C...) and how the notion of any premise (i.e. premises in general) is to be symbolized (lower-case letters). I will follow the convention that p, q, r, s...will be those statement variables, which we will use to describe argument forms, as opposed to specific arguments. To construct our argument forms, we will introduce another convention - that (.:) (a period followed by a colon) will mean "therefore".

Let us begin with this argument form:

p v q
~p
.: q

Which is known as the Disjunctive Syllogism (DS). A substitution instance of this form (that is, a specific argument using this form) is:

1. Either I am dead or I am alive.
2. I am not dead.

Therefore, I am alive.

We can symbolize this argument, where A is "I am dead" and B is "I am alive" as:

1. A v B
2. ~A
.: B

This argument, and the form it uses, can be proven with a truth table, as we have done before, by accounting first for all the possible truth values of the individual terms and next for the resultant truth values of each premise.

Code: Select all
p    q    p v q    ~p
T    T      T       F
T    F      T       F
F    T      T       T
F    F      F       T

The first two columns account for the possible truth values of the terms used to make the compound statements in the argument and column three and four show the resultant truth values as those terms are in fact being used to make those statements. Note that column two is also the argument's conclusion and that only in row three do we find only true premises - we will see there also, in column two of row three, a true conclusion. Thus we have proved the validity of the form of this argument. All substitution instances will be valid, therefore.
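The same test can be run mechanically: enumerate every row, and confirm that wherever both premises are true, the conclusion is true as well. A sketch of my own:

```python
from itertools import product

# Disjunctive Syllogism: premises p v q and ~p, conclusion q.
for p, q in product([True, False], repeat=2):
    premises_true = (p or q) and (not p)
    if premises_true:  # only row three of the table above
        assert q       # the conclusion must be true there
print("DS is valid: no row has true premises and a false conclusion")
```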

Some familiar argument forms are Modus Ponens, Modus Tollens and the Hypothetical Syllogism.

Modus Ponens (MP):

p ⊃ q
p
.:q

Modus Tollens (MT):

p ⊃ q
~q
.:~p

Hypothetical Syllogism (HS):

p ⊃ q
q ⊃ r
.:p ⊃ r

These can be shown valid with truth tables, as above. The method of construction of these truth tables is the same as with the Disjunctive Syllogism - first, the individual terms are assigned all possible combinations of truth values, then the implications using those terms are assigned the resultant values. We then check every row in which all the premises are true: if the conclusion is also true in each such row, the argument form is valid. And as the form is valid, so will all substitution instances be valid.
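All three forms can be checked at once with a small brute-force validity test. This is a sketch of my own (the function name `valid` is my invention, not standard notation): an argument form is valid when no assignment of truth values makes every premise true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, n):
    """True if no assignment of n truth values makes all the
    premises true while the conclusion is false."""
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=n)
               if all(prem(*vals) for prem in premises))

implies = lambda a, b: (not a) or b  # material implication

# Modus Ponens, Modus Tollens, Hypothetical Syllogism
assert valid([lambda p, q: implies(p, q), lambda p, q: p],
             lambda p, q: q, 2)
assert valid([lambda p, q: implies(p, q), lambda p, q: not q],
             lambda p, q: not p, 2)
assert valid([lambda p, q, r: implies(p, q), lambda p, q, r: implies(q, r)],
             lambda p, q, r: implies(p, r), 3)
```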


Statement Forms

Statement forms are described with statement variables (our p, q, r, s...again). A statement form is a sequence of these variables such that when specific statements are substituted (A, B, C...), making sure that the same statement replaces the same variable throughout, the result is a statement of that same sequence. So where the specific statements are A, B and C, the statement form p ⊃ (q v r) will have, as a substitution instance, A ⊃ (B v C).

The statement "I am in London or I am in Paris" is, as we have seen, of the form p v q. Now, this could be true or false - it's true if one of the disjuncts is true, and false if neither of them are. But "I am in London or I am not in London" is always true. No matter what truth values we assign to the terms of this statement, one disjunct is going to be true, and so the disjunction itself will always be true. Statement forms that have only true substitution instances are called tautologies. This statement form is p v ~p and is just such a form - it's a tautology.

But the statement "I am in London and I am not in London" is of course not a disjunction, but a conjunction - a statement form of another kind, rendered as p • ~p. As we have seen, the conjuncts would both have to be true for this conjunction to also be true, and since it is obvious that this can not be the case, we see that this is a statement form with no true substitution instances. This we call a contradiction.

Both of these forms have one thing in common - their truth or falsity is of a purely formal type. No matters of fact apply to either when determining their truth values. So it's logically impossible for tautologies to be false and just as logically impossible for contradictions to be true. Statement forms that are neither tautologies nor contradictions are contingencies. These are statements the truth of which depends upon whatever happens to be the case - they either reflect facts or they do not. "I am in London" is a contingency.
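The three kinds of statement form can be told apart mechanically; this sketch (the labels are my own) classifies a form by the truth values of its substitution instances:

```python
from itertools import product

def classify(form, n):
    """Tautology, contradiction, or contingency, for a form of n variables."""
    values = [form(*vals) for vals in product([True, False], repeat=n)]
    if all(values):
        return "tautology"        # only true substitution instances
    if not any(values):
        return "contradiction"    # no true substitution instances
    return "contingency"          # some of each

assert classify(lambda p: p or not p, 1) == "tautology"       # p v ~p
assert classify(lambda p: p and not p, 1) == "contradiction"  # p • ~p
assert classify(lambda p, q: p or q, 2) == "contingency"      # p v q
```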


Rules for the Method of Deduction

We have seen that argument forms can be shown to be valid using truth tables. Deduction employs these forms and others as a shorter method of showing validity than truth tables can afford. These argument forms then constitute Rules of Deduction, or Rules of Inference, for their validity can be previously established, and as they are argument forms, we know that all substitution instances will also be valid.

Nine Rules of Inference

Here are nine rules of inference. They are not all the rules we will need, and they can be stated in somewhat different ways, but along with what are called Rules of Replacement, they are considered the most basic rules of deductive method by many writers.

Modus Ponens (MP)

p ⊃ q
p
.:q

Modus Tollens (MT)

p ⊃ q
~q
.:~p

Hypothetical Syllogism (HS)

p ⊃ q
q ⊃ r
.:p ⊃ r

Disjunctive Syllogism (DS)

p v q
~p
.:q

Constructive Dilemma (CD)

(p ⊃ q) • (r ⊃ s)
p v r
.:q v s

Destructive Dilemma (DD)

(p ⊃ q) • (r ⊃ s)
~q v ~ s
.:~p v ~r

Simplification (Simp)

p • q
.:p

Conjunction (Conj)

p
q
.:p • q

Addition (Add)

p
.:p v q


Rules of Replacement

In addition to the nine rules above, logic employs these ten Rules of Replacement.

We have said that our compound statements are truth-functional statements, meaning that the truth value of these statements is determined by the truth values of their component simple statements. But this also means that any part of these statements can be replaced by another expression, if that expression is logically equivalent, and therefore preserves the truth value of the part replaced. Thus, to return to a previous example, ~~p is a replacement for p.

As we will consider these rules as also rules of inference, I will number them as one with our previous Nine Rules. As with my system of notation, this also follows Copi. Some of these will be familiar from Algebra (because they are algebra).

10. DeMorgan's Theorems (DeM)

~(p • q) ≡ (~p v ~q)
~(p v q) ≡ (~p • ~q)

11. Commutation (Comm)

(p v q) ≡ (q v p)
(p • q) ≡ (q • p)

12. Association (Ass) [Okay, (Assoc)]

[p v (q v r)] ≡ [(p v q) v r]
[p • (q • r)] ≡ [(p • q) • r]

13. Distribution (Dist)

[p • (q v r)] ≡ [(p • q) v (p • r)]
[p v (q • r)] ≡ [(p v q) • (p v r)]

14. Double Negation (DN)

p ≡ ~~p

15. Transposition (Trans)

(p ⊃ q) ≡ (~q ⊃ ~p)

16. Material Implication (Impl)

(p ⊃ q) ≡ (~p v q)

17. Material Equivalence (Equiv)

(p ≡ q) ≡ [(p ⊃ q) • (q ⊃ p)]
(p ≡ q) ≡ [(p • q) v ( ~p • ~q)]

18. Exportation (Exp)

[(p • q) ⊃ r] ≡ [p ⊃ (q ⊃ r)]

19. Tautology (Taut)

p ≡ (p v p)
p ≡ (p • p)

Even now, these rules are not complete. But the validity of many arguments can be determined using these rules.
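Because each Rule of Replacement is a material equivalence, each can be verified over all assignments, just as with DeMorgan's Theorems. A sketch of my own, checking a few of them:

```python
from itertools import product

implies = lambda a, b: (not a) or b  # material implication

for p, q, r in product([True, False], repeat=3):
    # 13. Distribution
    assert (p and (q or r)) == ((p and q) or (p and r))
    assert (p or (q and r)) == ((p or q) and (p or r))
    # 15. Transposition
    assert implies(p, q) == implies(not q, not p)
    # 18. Exportation
    assert implies(p and q, r) == implies(p, implies(q, r))
print("checked replacement rules over every assignment")
```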


The Method of Deduction

Formal Proofs

A formal proof of validity for any argument is a sequence of statements each of which is either a premise of the argument or a statement that follows from any preceding statement (whether a premise of the original argument or one derived therefrom) by a rule of inference, which ends with the conclusion of that argument. That probably sounds more complicated than it is.

Let's sketch out a simple argument.

1. If either the Celtics win their series or the Lakers win their series then I win a hundred bucks.
2. The Celtics win their series and the Suns win their series.
3. Therefore, I win a hundred bucks.

Where "Celtics win their series" is C, "Lakers win their series" is L, "Suns win their series" is S, and "I win a hundred bucks" is W, this can be symbolized thusly:

1. (C v L) ⊃ W
2. C • S
.:W

Because this argument is so simple, we would not likely need to make a formal argument to collect on our bet. But there is an argument to be made, and the formal proof is as follows. Because we will number the lines, I will place the conclusion to the right, on the last line of the argument. To the right of every derived line of the proof, I will list the lines from which it was derived and the rule of inference employed in deriving it.

Code: Select all
1. (C v L) ⊃ W
2. C • S    /.:W
3. C         2, Simp
4. C v L     3, Add
5. W         4, 1 MP

Here we see first the argument itself, and then the additional statements we need to construct our proof. In this case, we see that each of the statements of the formal proof is derived using one of the first nine rules of inference, which have been previously established as valid argument forms. Line 3 is derived from a line from the argument itself. Line 4 is derived from a line in the proof. And line 5 is derived from a line of the argument and a line from the proof.
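We can also confirm this particular argument by brute force, without constructing a proof at all. A sketch of my own, with C, L, S, W as above:

```python
from itertools import product

implies = lambda a, b: (not a) or b  # material implication

# Premises: (C v L) ⊃ W  and  C • S.  Conclusion: W.
for C, L, S, W in product([True, False], repeat=4):
    if implies(C or L, W) and (C and S):
        assert W  # every row with true premises has a true conclusion
print("the argument is valid")
```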

This is not the only proof of this argument that we could construct, but, given our rules, it's the most elegant proof. That is, it's the proof that requires the fewest premises. The following is a much longer proof, but it is logically equivalent to the preceding one. Note also that the Replacement Rules can be applied either to entire lines of a proof or to parts of lines. The first nine rules of inference apply only to entire lines.

Code: Select all
1. (C v L) ⊃ W
2. C • S    /.:W
3. ~(C v L) v W            1, Impl
4. (~C • ~L) v W           3, DeM
5. W v (~C • ~L)           4, Comm
6. (W v ~C) • (W v ~L)     5, Dist
7. W v ~C                  6, Simp
8. ~C v W                  7, Comm
9. C ⊃ W                   8, Impl
10. C                      2, Simp
11. W                      9, 10 MP

A rough analogy in mathematics is that we could perform a series of additions where the numbers added are all the same - all 3's, for instance, say, ten times (that is, nine additions, adding 3 to 3 and then adding 3 to the previous sum eight times). Or we could multiply - 3 x 10. The answer is the same, and the operations are mathematically equivalent - the only difference being the number of operations we must perform. Elegance is often desirable, but never necessary. One reason that the study of formal logic is useful is that it can help us to make our arguments more elegant, and therefore perhaps more easily understood. And if we can eliminate a controversial premise from our argument, we might win wider acceptance for that argument.

Just sayin'.

Given that more than one proof may be constructed for any given argument, we may notice that the method of deduction differs from the use of truth tables in that constructing a truth table is purely mechanical - there is only one truth table for any argument or argument form. The construction of a truth table requires no thinking - we just plug in the truth values and look for any line in the table that shows all true premises and a false conclusion. If there is no such line, the argument is valid.

Deduction - the application of argument forms and replacement rules that have already been shown to be valid - does require some thinking. Often, it requires trial and error. But once the proof is constructed, checking to see if the proof is valid is also purely mechanical - either the rules are applied correctly or they are not. As in any technique, practice helps. And constructing a formal proof is often easier than using truth tables, because long and complicated arguments require equally long (but not complicated) truth tables.

Either method can be used - there is no logical difference, again because the rules of inference and the operations used in them have been shown to be valid and been defined by truth tables. In fact, some of the rules of inference are redundant. Modus Tollens, for instance, can be derived from a combination of Material Implication, Commutation and Disjunctive Syllogism, or, more elegantly, from Transposition and Modus Ponens.
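That redundancy claim can itself be checked by truth table. A small sketch (the variable and function names are my own):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Modus Tollens: p ⊃ q, ~q  /.: ~p - valid iff no row defeats it
mt_valid = all(
    implies(implies(p, q) and (not q), not p)
    for p, q in product([True, False], repeat=2)
)

# Transposition: (p ⊃ q) ≡ (~q ⊃ ~p) - the equivalence MT reduces to;
# Modus Ponens with ~q then yields ~p
transposition = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)

print(mt_valid, transposition)  # True True
```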

Faust
Unrequited Lover of Wisdom

Posts: 16846
Joined: Sat May 21, 2005 6:47 pm

Re: Logic 101

Conditional Proofs

Sometimes arguments produce a conclusion which is itself a conditional, or implication (B ⊃ C, e.g.). There is a rule of inference that makes such arguments much easier to prove than our first nineteen rules alone allow, and that also lets us prove some arguments whose validity those nineteen rules cannot establish. This is the Rule of Conditional Proof.

The general form of any argument is p ⊃ q, where the premises, taken together, are p, and the conclusion is q. Every argument is a conditional, or implication, in other words - "If (the premises), then (the conclusion)." In the case of an argument the conclusion of which is itself a conditional, we can make this form a little more specific with (where P is the premises, A is the antecedent of the conclusion and C is the consequent of the conclusion) P ⊃ (A ⊃ C).

Let's return to our replacement rule of Exportation. It states that [(p • q) ⊃ r] ≡ [p ⊃ (q ⊃ r)]. We'll notice that the right side of this equivalence is the same form as our P ⊃ (A ⊃ C) above. So our form for any argument that has a conditional as a conclusion is logically equivalent to [(p • q) ⊃ r], which translates to (P • A) ⊃ C for our specific argument above. This amounts to making the antecedent of the conclusion an additional premise of the argument, isolating the consequent C as the conclusion - that is, the conclusion is no longer a conditional. This of course means that we have a somewhat different argument than the one we started with.

But because these two argument forms are logically equivalent, we can provide a proof of one by providing a proof of the other. There are times when that additional premise will allow us a much shorter proof than we could construct without it - arguments with conditionals as conclusions will often contain conditionals as premises - think Modus Ponens, Constructive Dilemma, or Destructive Dilemma.
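The Exportation equivalence that licenses this move can be confirmed over all eight assignments. A minimal sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Exportation: [(p • q) ⊃ r] ≡ [p ⊃ (q ⊃ r)] holds in every row
exportation = all(
    implies(p and q, r) == implies(p, implies(q, r))
    for p, q, r in product([True, False], repeat=3)
)
print(exportation)  # True
```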

Let's examine a simple argument that has as its conclusion an implication.

Code: Select all
1. (T ⊃ E) • (A ⊃ L)     /.: (T v A) ⊃ (E v L)

We'll invoke the Rule of Conditional Proof (CP) thusly:

Code: Select all
2. T v A             /.: E v L       CP
3. E v L                             1, 2 CD

Using a Conditional Proof, we have provided a proof for the original argument.


Basically, you just take a look at the argument and say, "Dude, that's absurd!".

Sometimes it works.

Just kidding.

The Reductio Ad Absurdum method is also called the Rule of Indirect Proof (IP). Here, we assume the opposite (or negation) of the conclusion and seek to derive an explicit contradiction. Since any conclusion can be validly deduced from a contradiction, this contradiction can be used to deduce the original conclusion. In practice, the proof is ended at the contradiction, but that is because deducing the original conclusion from such a contradiction is always possible. So, the proof is indirect, but complete. To be clear, an IP does not prove the validity of the argument from the contradiction itself, but only because that contradiction will in every case allow for the deduction of the original conclusion.

Let's see an IP of a simple argument.

Code: Select all
1. A v (B • C)
2. A ⊃ C       /.: C
3. ~C           IP
4. ~A           2, 3 MT
5. B • C        1, 4 DS
6. C • B        5, Comm
7. C            6, Simp
8. C • ~C       7, 3 Conj
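The argument proved indirectly here is also mechanically checkable; a minimal sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# 1. A v (B • C)
# 2. A ⊃ C        /.: C
valid = all(
    implies((A or (B and C)) and implies(A, C), C)
    for A, B, C in product([True, False], repeat=3)
)
print(valid)  # True
```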

The nineteen rules, plus the rule of Conditional Proof and that of Indirect Proof are the rules we need to prove the validity of arguments with basic propositional logic. But we need a few more rules in order to assess arguments the premises of which are not truth-functionally compound.


Propositional Functions

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

With all the rules we have reviewed thus far, we still cannot properly assess the validity of this argument. That's because the premises are not truth-functionally compound. The validity of this argument depends upon the logical structure of the premises, and not upon the truth values of the component simple statements of the compound premises, as in the arguments we have previously reviewed.

Singular Propositional Functions

"Socrates is human" is a singular proposition - it makes the claim that an individual entity (Socrates) has the attribute of being human. Socrates is called the subject here, and "human" is the predicate. Any (non-negative) singular proposition asserts that the (individual) subject term has as an attribute the predicate term. In this case, the subject term is Socrates, but it can be a dog, a bus or a fire hydrant - in ordinary language, the subject term is always a noun. Predicate terms can be adjectives, nouns or verbs.

Symbolizing propositional functions can look complicated, but if we take it step by step, we will see that it is not. We will symbolize any individual as "x", and specific individuals by other lower-case letters. To begin symbolizing our singular proposition then, Socrates becomes "s" (we will use only a through w for individuals). So, x is an individual variable (much like it is in algebra - a placeholder), and (in this case) s is an individual constant. Individual constants are substitution instances of individual variables, much as, when symbolizing statements, A, B and C were substitution instances of p, q and r. When we substitute an individual constant for an individual variable, it's called an instantiation.

So we have s as our subject term, of which we want to predicate "human". By convention, terms for attributes (predicate terms), will be placed to the left of subject terms, and will be written in upper-case letters. Just as we use "s" for Socrates, for ease of reading, we will usually use the first letter of the predicate word as our predicate term - just to make it easier to remember as we read the argument in symbolic form. Following our rules so far, we symbolize "Socrates is human" as Hs.

Hs, then, is a statement - the result of an instantiation of the propositional function, Hx.

Note: Some writers use parentheses around the subject term (the individual constant "s", above) - resulting in H(s). Again following Copi, and because I think it's a good idea anyway, I will not do this.

General Propositional Functions

To formulate "All men are mortal" we need to go through some preliminaries. General propositions, as we might guess, do not name individuals. Like singular propositions, though, they are formed from propositional functions. The process in this case is not called instantiation, but generalization, or quantification. We will see that a general proposition is formed from a propositional function by placing either a universal quantifier or an existential quantifier before it.

Let's examine a general statement - "Everything is mortal". We will restate this as "Given any thing at all, it is mortal". The "it" we will replace with x, as we did with singular propositions. This is why we have restated the proposition in the way we have - so that we have the "it". This "it" refers, of course, to "thing". So we will use x for that term as well - this x we will always put in parentheses - (x).

So we have "Given any x, x is mortal". And following our practice for symbolizing a predicated subject, we will write "x is mortal" as Mx. So we have (x)Mx.

The phrase "given any x", which is a standard formulation, and symbolized, again, as (x) - parentheses and all, is called a universal quantifier.

Another kind of general proposition is represented by "Something is mortal". For reasons similar to those we had above, we will want to restate this as "There is at least one x such that x is mortal". Note that there is no difference, for our purposes, between "some" (more than one) and "at least one" (perhaps only one). Following the same rules we used for universal quantifiers above, we will then have "There is at least one x such that Mx ("it is mortal")". To fully symbolize this, we translate "There is at least one x", which is called an existential quantifier, into ∃x.

The fully symbolized proposition is then (∃x)Mx - "Something is mortal" has become "There is at least one thing (∃x) such that it is mortal (Mx)". The "such that it is" is accounted for by the positioning of the terms.

As we have said, a general proposition is formed by placing a quantifier - either universal or existential, as the case requires - before a propositional function. A universal quantification of a propositional function is true if and only if all of its substitution instances are true, and an existential quantification of a propositional function is true if and only if at least one of its substitution instances is true. This means that, if we assume that there is at least one individual, then if the universal quantification of a given propositional function is true, so also is the existential quantification of that propositional function.
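This relationship between the two quantifiers can be illustrated on a finite domain. In the sketch below, a domain is simply a Python list and an attribute a predicate function - both illustrative assumptions, not part of the formal system:

```python
def universal(domain, phi):
    # (x)Φx: true iff every substitution instance is true
    return all(phi(x) for x in domain)

def existential(domain, phi):
    # (∃x)Φx: true iff at least one substitution instance is true
    return any(phi(x) for x in domain)

def mortal(x):
    return True  # toy attribute: everything in this domain is mortal

domain = ["Socrates", "a dog", "a fire hydrant"]

# With at least one individual, (x)Mx carries (∃x)Mx with it
print(universal(domain, mortal), existential(domain, mortal))  # True True

# On an empty domain the universal is vacuously true but the
# existential is false - hence the "at least one individual" proviso
print(universal([], mortal), existential([], mortal))  # True False
```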

Four Types of Quantification

We must also formulate two other subject-predicate propositions - "Something is not mortal" and "Nothing is mortal". These are symbolized, respectively as (∃x)~Mx and (x)~Mx.

We will adopt the convention that phi (Φ) will represent any attribute whatever, which gives us these four quantification types:

(x)Φx

(∃x)Φx

(x)~Φx

(∃x)~Φx

We may now examine more closely the relationships between them.

Again assuming at least one individual exists, (x)Φx and (x)~Φx are contraries - they might both be false but they cannot both be true. (∃x)Φx and (∃x)~Φx are called subcontraries - they can both be true but they cannot both be false. There are two sets of contradictories - (x)Φx and (∃x)~Φx, and also (∃x)Φx and (x)~Φx. Within each of these sets, one must be true and the other must be false. And finally, the truth of (∃x)Φx is implied by the truth of (x)Φx (as was mentioned earlier) and the truth of (∃x)~Φx is implied by the truth of (x)~Φx.
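These relations can be spot-checked over every small non-empty domain, representing each individual simply by whether or not it has Φ. A sketch (the function names are mine):

```python
from itertools import product

def A_univ(d):      return all(d)                   # (x)Φx
def E_univ_neg(d):  return all(not v for v in d)    # (x)~Φx
def I_exist(d):     return any(d)                   # (∃x)Φx
def O_exist_neg(d): return any(not v for v in d)    # (∃x)~Φx

for n in (1, 2, 3):
    for d in product([True, False], repeat=n):      # non-empty domains only
        assert not (A_univ(d) and E_univ_neg(d))    # contraries: never both true
        assert I_exist(d) or O_exist_neg(d)         # subcontraries: never both false
        assert A_univ(d) != O_exist_neg(d)          # contradictories
        assert I_exist(d) != E_univ_neg(d)          # contradictories
        assert (not A_univ(d)) or I_exist(d)        # (x)Φx implies (∃x)Φx

print("All relations hold on every non-empty domain checked")
```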

Now we'll examine four propositions related to our syllogism, to see how we will symbolize them.

{Note: The system we are going to use is not the only system - the other system known to me is based on Set Theory. I personally prefer that system, perhaps because the "New Math" that I was taught as a child was also based on Set Theory. I believe that the system I will here present is more intuitive and makes for easier and more literal translations from ordinary language. It has the additional advantage that no new symbols need be introduced in order to use it.}


A, E, I, O Propositions

We will now examine what are called A, E, I, O propositions - or, more properly, proposition forms.

They are:

All s is p (Universal Affirmative or A)

All s is not p (Universal Negative or E)

Some s is p (Particular Affirmative or I)

Some s is not p (Particular Negative or O)

Let's flesh these out with some ordinary-English claims.

All humans are mortal (A)
No humans are mortal (from "all humans are not mortal") (E)
Some humans are mortal (I)
Some humans are not mortal (O)

We'll take them in order:

All humans are mortal will become, using a pattern familiar from the last section, "Given any individual thing, if it is human then it is mortal".

That is, given any individual thing (x), if x is human then x is mortal, (x is human ⊃ x is mortal).

The final form then is (x)[Hx ⊃ Mx], where (x) is "any individual" and Hx is "x is human" and Mx is "x is mortal" as we have seen in the previous section.

Thus the E proposition will be (x)[Hx ⊃ ~Mx]

The I proposition will be (∃x)[Hx • Mx]

And the O proposition will be (∃x)[Hx • ~Mx]

These A, E, I, O propositions are, of course, not the only forms of general propositions, but they are commonly used, and have been famous since Aristotle, who emphasized them.
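The four forms can be evaluated on a toy domain; the individuals and attributes below are illustrative assumptions only:

```python
# Each individual is a dict of attributes
domain = [
    {"human": True,  "mortal": True},   # e.g. Socrates
    {"human": False, "mortal": True},   # e.g. a dog
]

def H(x): return x["human"]
def M(x): return x["mortal"]

a = all((not H(x)) or M(x) for x in domain)        # A: (x)[Hx ⊃ Mx]
e = all((not H(x)) or (not M(x)) for x in domain)  # E: (x)[Hx ⊃ ~Mx]
i = any(H(x) and M(x) for x in domain)             # I: (∃x)[Hx • Mx]
o = any(H(x) and (not M(x)) for x in domain)       # O: (∃x)[Hx • ~Mx]

print(a, e, i, o)  # True False True False
```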


Four More Rules of Inference

Now that we have entered the realm of quantifiers and functions, we need to add four more rules of inference. And we’ll start with simple versions of those rules. There are some restrictions on how these rules are used, which we will get to later. It should be noted that the restrictions on their use, when fully stated, will change the way these rules are stated formally, but the basic idea of each will be stated here.

The first one is Universal Instantiation. (U. I.)

(x)Φx / .: Φv

In the first expression [(x)Φx], we are saying that all x's have a given attribute (Φ stands for any attribute whatever) - this is just the general form of the quantifier for expressing an attribute common to all members of a class. It's one of the general propositional functions we saw a couple of sections ago. The second expression represents any substitution instance of that function - we're adopting the use of "v" as a stand-in for "any individual" with that same attribute, much as we have used "x" as a variable.

With (x)Φx / .: Φv, what is being claimed is quite simple - that because a universal quantification of a propositional function [(x)Φx] is true only if all substitution instances of that quantification are true, we can use U. I. as a rule of inference. We can, in other words, always validly infer any substitution instance - Φv - from the universal quantification, (x)Φx.

The next rule is Universal Generalization. (U. G.)

Φy /.: (x)Φx

Here, "y" stands in for any arbitrarily selected individual. What is being stated is that if we arbitrarily select an individual - say, an individual mortal - and assume nothing about it except its mortality, then we can claim that this individual, Φy, stands for any substitution instance of Φx. In other words, it's the reverse of Universal Instantiation: in this example, what is true of an arbitrarily selected individual mortal is also true of all mortals.

Existential Generalization (E. G.)

Φv /.: (∃x)Φx

This is actually another aspect of Universal Instantiation. With U.I. we rely on the claim that all substitution instances of a universal quantification are true. With E. G. we are essentially saying that an existential quantification of a propositional function is true if at least one substitution instance of it is true - so from a true instance, Φv, we may infer (∃x)Φx.

Existential Instantiation (E.I.)

(∃x)Φx / .: Φv

You will note that this rule appears to be the "reverse" of E.G. It isn't - it's a separate assertion. What we're stating here is that the existential quantification (∃x) of a propositional function (Φx) asserts that there is at least one individual which will yield a true substitution instance of that existential quantification - which will, in other words, have that attribute. As all we may know about that individual might be that it possesses that attribute, we will name it, as an instance of "v", by use of another variable, "w". This is, as before, merely a convention. The "w" is not a proper name, but a placeholder for an individual about which we have only the knowledge that it has that specified attribute. The "w" is, therefore, still just a variable.

While we will discuss restrictions on the use of all of these rules later, one restriction on the use of this rule should be stated now. That individual "w" must be one that has not yet occurred within the present context - which has not been used in the argument, in other words. Without this restriction, we would allow an untenable situation, which I will here illustrate:

1. Some humans are mortal
2. Some cats are mortal
/.: some humans are cats

Without the restriction that E.I must use only individuals who have not previously been named, we would have:

1. (∃x)[Hx•Mx]
2. (∃x)[Cx•Mx] /.: (∃x)[Hx•Cx]
3. Hw•Mw.....1. E.I. Here, we are plugging in that w, to claim that there is one true instance of (∃x)[Hx•Mx]
4. Cw•Mw.....2. E.I.

Line four is the reason that we must include the above restriction - we cannot, in other words, use "w" again. This might not appear wrong at first, for all line 4 says is that there is at least one cat which is mortal. The problem is that it allows:

5. Hw.........3, Simp
6. Cw.........4, Simp
7. Hw•Cw....5,6 conj
8. (∃x)[Hx•Cx] 7 E.G.

This claims that some humans are cats - a conclusion which clearly does not follow.
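We can exhibit this invalidity with a countermodel - a domain on which both premises come out true and the conclusion false. A sketch, with an illustrative two-individual domain:

```python
domain = [
    {"human": True,  "cat": False, "mortal": True},  # a mortal human
    {"human": False, "cat": True,  "mortal": True},  # a mortal cat
]

premise1 = any(x["human"] and x["mortal"] for x in domain)   # (∃x)[Hx • Mx]
premise2 = any(x["cat"] and x["mortal"] for x in domain)     # (∃x)[Cx • Mx]
conclusion = any(x["human"] and x["cat"] for x in domain)    # (∃x)[Hx • Cx]

print(premise1, premise2, conclusion)  # True True False - the form is invalid
```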
