
BUSINESS MATHEMATICS LOGIC

MOHAMMED SHADAB

(Reg No: CC0018BK33AM25AAB)

GREAT EASTERN MANAGEMENT SCHOOL, BANGALORE

2009-2010
BUSINESS MATHEMATICS LOGIC
CERTIFICATE

This is to certify that the Project work

"LOGIC"

submitted to the college by the candidate

MOHAMMED SHADAB, bearing Reg No: CC0018BK33AM25AAB,

is the product of bona fide research carried out by the candidate

under my supervision in BUSINESS MATHEMATICS.

(GUIDE)

DR. G. S. HEDGE
Lecturer, Business Mathematics
Great Eastern Management School
BANGALORE
SEP 2009
Acknowledgment

The project work was carried out under the remarkable guidance of Dr. G. S. Hedge, Lecturer, Great Eastern Management School. I am grateful for his valuable guidance.

I also express my sincere gratitude and thanks to all the subjects who participated in the study.
CONTENTS

1. Statements and Logical Operators; Exercises for Section 1
2. Logical Equivalence, Tautologies and Contradictions; Exercises for Section 2
3. The Conditional and the Biconditional; Exercises for Section 3
4. Tautological Implications and Tautological Equivalences; Exercises for Section 4
5. Rules of Inference; Exercises for Section 5
6. Arguments and Proofs; Exercises for Section 6
7. Predicate Calculus; Exercises for Section 7

• 1 Nature of logic
o 1.1 Logical form
o 1.2 Deductive and inductive reasoning
o 1.3 Consistency, soundness, and completeness
o 1.4 Rival conceptions of logic
• 2 History of logic
• 3 Topics in logic
o 3.1 Syllogistic logic
o 3.2 Sentential (propositional) logic
o 3.3 Predicate logic
o 3.4 Modal logic
o 3.5 Informal reasoning
o 3.6 Mathematical logic
o 3.7 Philosophical logic
o 3.8 Logic and computation
• 4 Controversies in logic
o 4.1 Bivalence and the law of the excluded middle
o 4.2 Is logic empirical?
o 4.3 Implication: strict or material?
o 4.4 Tolerating the impossible
o 4.5 Rejection of logical truth
INTRODUCTION

Logic, from the Greek λογική (logiké), is defined by the Penguin Encyclopedia to be "The formal systematic study of the principles of valid
inference and correct reasoning". As a discipline, logic dates back to
Aristotle, who established its fundamental place in philosophy. It became
part of the classical trivium, a fundamental part of a classical education, and
is now an integral part of disciplines such as mathematics, computer science,
and linguistics.

Logic concerns the structure of statements and arguments, in formal systems of inference and natural language. Topics include validity, fallacies and
paradoxes, reasoning using probability and arguments involving causality
and time. Logic is also commonly used today in argumentation theory.

You have been assigned the job of evaluating the attempts of mortals to
prove the existence of God. And many attempts there have been. Three in
particular have caught your attention: they are known as the cosmological
argument, the teleological argument, and the ontological argument.

Cosmological Argument (St. Thomas Aquinas): No effect can cause itself, but requires another cause. If there were no first cause, there would be an
infinite sequence of preceding causes. Clearly there cannot be an infinite
sequence of causes, therefore there is a first cause, and this is God.

Teleological Argument (St. Thomas Aquinas): All things in the world act
towards an end. They could not do this without their being an intelligence
that directs them. This intelligence is God.

Ontological Argument (St. Anselm): God is a being than which none greater
can be thought. A being thought of as existing is greater than one thought of
as not existing. Therefore, one cannot think of God as not existing, so God
must exist.

Are these arguments valid?


Logic is the underpinning of all reasoned argument. The Greeks recognized
its role in mathematics and philosophy, and studied it extensively. Aristotle,
in his Organon, wrote the first systematic treatise on logic. His work in
particular had a heavy influence on philosophy, science and religion through
the Middle Ages.
But Aristotle's logic was logic expressed in ordinary language, so was still
subject to the ambiguities of natural languages. Philosophers began to want
to express logic more formally and symbolically, in the way that
mathematics is written (Leibniz, in the 17th century, was probably the first
to envision and call for such a formalism). It was with the publication in
1847 of G. Boole's The Mathematical Analysis of Logic and A. DeMorgan's
Formal Logic that symbolic logic came into being, and logic became
recognized as part of mathematics. This also marked the recognition that
mathematics is not just about numbers (arithmetic) and shapes (geometry),
but encompasses any subject that can be expressed symbolically with precise
rules of manipulation of those symbols. It is symbolic logic that we shall
study in this chapter.

Since Boole and DeMorgan, logic and mathematics have been inextricably
intertwined. Logic is part of mathematics, but at the same time it is the
language of mathematics. In the late 19th and early 20th century it was
believed that all of mathematics could be reduced to symbolic logic and
made purely formal. This belief, though still held in modified form today,
was shaken by K. Gödel in the 1930s, when he showed that there would
always remain truths that could not be derived in any such formal system.
We'll mention more about this as we go along.

The study of symbolic logic is usually broken into several parts. The first
and most fundamental is the propositional calculus, and this is the subject
of most of this web text. Built on top of this is the predicate calculus, which
is the language of mathematics. We shall study the propositional calculus in
the first six sections and look at the predicate calculus briefly in the last two.
1. Statements and Logical Operators

This on-line text is, for the most part, devoted to the study of so-called
Propositional Calculus. Contrary to what the name suggests, this has
nothing to do with the subject most people associate with the word
"calculus." Actually, the term "calculus" is a generic name for any area of
mathematics that concerns itself with calculating. For example, arithmetic
could be called the calculus of numbers. Propositional Calculus is then the
calculus of propositions. A proposition, or statement, is any declarative
sentence, which is either true (T) or false (F). We refer to T or F as the
truth-value of the statement.

Example 1 Propositions

The sentence "2+2 = 4" is a statement, since it can be either true or false.
Since it happens to be a true statement, its truth value is T.

The sentence "1 = 0" is also a statement, but its truth value is F.

"It will rain tomorrow" is a proposition. For its truth value we shall have to
wait for tomorrow.

The following statement might well be uttered by a Zen Master to a puzzled disciple: "If I am Buddha, then I am not Buddha." This is a statement which,
we shall see later on, really amounts to the simpler statement "I am not
Buddha." As long as the speaker is not Buddha, this is true.

"Solve the following equation for x" is not a statement, as it cannot be


assigned any truth value whatsoever. (It is an imperative, or command,
rather than a declarative sentence.)

"The number 5" is not a proposition, since it is not even a complete


sentence.
is a proposition with truth value T

"Mars is not a planet" is a proposition with truth value F .

is not a proposition

is a proposition with truth value T

"Ode to Spring" is a proposition with truth value F .

is not a proposition

is a proposition with truth value T

"60 = 1" is a proposition with truth value F

is not a proposition

Example 1B Self-Referential Sentences

"This statement is false" gets us into a bind: If it were true, then, since it is
declaring itself to be false, it must be false. On the other hand, if it were
false, then it’s declaring itself false is a lie, so it is true! In other words, if it
is true, then it is false, and if it is false, then it is true, and we go around in
circles. We get out of this bind by refusing to accord it the privileges of
statement hood. In other words, it is not a statement. An equivalent pseudo-
statement is: "I am lying," so we call this liar's paradox.

"This statement is true" may seem like a statement, but there is no way that
its truth value can ever be determined: is it true, or is it false? We thus
disqualify it as well. (In fact, it is the negation of the liar's paradox; see
below for a discussion of negation.)
Sentences such as these are called self-referential sentences, since they
refer to themselves.

Here are some rather amusing (and slightly disturbing) examples of self-referential sentences, the first two being taken from Douglas R. Hofstadter's Metamagical Themas:

"This sentences no verb."

"This sentence was in the past tense."

"This sentence asserts absolutely nothing."

"While the last sentence had nothing to say, this sentence says a lot."

"This sentence has more to say than the last two sentences combined, if you
count the number of words."

We shall use the letters p, q, r, s and so on to stand for propositions. Thus, for example, we might decide that p should stand for the proposition "the moon is round." We shall write

p: "the moon is round"

to express this. We read this as

p is the statement "the moon is round."

We can form new propositions from old ones in several different ways. For
example, starting with p: "I am an Anchovian," we can form the negation of
p: "It is not the case that I am an Anchovian" or simply "I am not an
Anchovian." We denote the negation of p by ~p, read "not p." What we
mean by this is that, if p is true, then ~p is false, and vice-versa. We can
show this in the form of a truth table:

p ~p
T F
F T
On the left are the two possible truth values of p, with the corresponding
truth values of ~p on the right. The symbol ~ is our first example of a logical
operator.

Following is a more formal definition.

Negation

The negation of p is the statement ~p, which we read "not p." Its
truth value is defined by the following truth table.

p ~p
T F
F T

The negation symbol "~" is an example of a unary logical operator (the term "unary" indicates that the operator acts on a single proposition).
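To see the definition in action, here is a minimal Python sketch (our own illustration, not part of the original text; the tf helper is an invented name). Python's "not" behaves exactly like the ~ operator:

    def tf(b):
        # Render a Python boolean in the T/F style used by the tables above.
        return "T" if b else "F"

    # Columns: p, ~p
    for p in (True, False):
        print(tf(p), tf(not p))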

Example 2 Negating Statements

Find the negations of the following propositions.


(a) p: "2+2 = 4"
(b) q: "1 = 0"
(c) r: "Diamonds are a pearl's best friend."
(d) s: "All the politicians in this town are crooks."

Solution
(a) ~p is the statement "it is not true that 2+2 = 4," or more simply,
~p: "2+2 ≠ 4."
(b) ~q: "1 ≠ 0."

(c) ~r: "Diamonds are not a pearl's best friend."

(d) ~s: " Not all the politicians in this town are crooks."

Notice that ~p is false, because p is true. However, ~q is true, because q is false. A statement of the form ~q can very well be true; it is a common mistake to think it must be false.

To say that diamonds are not a pearl's best friend is not to say that
diamonds are a pearl's worst enemy. The negation is not the polar opposite,
but whatever would deny the truth of the original statement. Similarly,
saying that not all politicians are crooks is not the same as saying that no
politicians are crooks, but is the same as saying that some politicians are not
crooks. Negations of statements involving the quantifiers "all" or "some"
are tricky. We'll study quantifiers in more depth when we discuss the
predicate calculus.

Example 3 Conjunction

We can combine two statements with "and": the conjunction of p and q is the statement p ∧ q, read "p and q," which is true only when both p and q are true. If p: "This galaxy will ultimately wind up in a black hole" and q: "2+2 = 4," what is p ∧ q?

Solution

p q: "This galaxy will ultimately disappear into a black hole and 2+2=4," or
the more astonishing statement: "Not only will this galaxy ultimately
disappear into a black hole, but 2+2 = 4!"

q is true, so if p is true then the whole statement p ∧ q will be true. On the other hand, if p is false, then the whole statement p ∧ q will be false.
Example 4 Combining Conjunction with Negation

With p and q as in Example 3, what does the statement p ∧ (~q) say?

Solution

p ∧ (~q) says: "This galaxy will ultimately disappear into a black hole and 2+2 ≠ 4," or "Contrary to your hopes and aspirations, this galaxy is doomed
to eventually disappear into a black hole; moreover, two plus two is
decidedly different from four!"

Since ~q is false, the whole statement p ∧ (~q) is false (regardless of whether p is true or not).

Example 5 Combining Three Statements


Let p: "This topic is boring," q: "This whole web site is boring" and r: "Life
is boring." Express the statement "Not only is this topic boring, but this
whole web site is boring, and in fact life is boring (so there!)" in logical
form.

Solution

The statement is asserting that all three statements p, q and r are true. (Note that "but" is simply an emphatic form of "and.") Now we can combine them all in two steps: Firstly, we can combine p and q to get p ∧ q, meaning "This topic is boring and this web site is boring." We can then conjoin this with r to get: (p ∧ q) ∧ r. This says: "This topic is boring, this web site is boring and life is boring." On the other hand, we could equally well have done it the other way around: conjoining q and r gives "This web site is boring and life is boring." We then conjoin p to get p ∧ (q ∧ r), which again says: "This topic is boring, this web site is boring and life is boring." We shall soon see that

(p ∧ q) ∧ r
is logically the same as
p ∧ (q ∧ r),

a fact called the associative law for conjunction. Thus both answers (p ∧ q) ∧ r and p ∧ (q ∧ r) are equally valid. This is like saying that (1+2)+3 is the same as 1+(2+3). As with addition, we sometimes drop the parentheses and write p ∧ q ∧ r.
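The associative law just stated can be confirmed mechanically. Here is a short Python check of ours, assuming nothing beyond the standard library; it simply tries all eight truth assignments:

    from itertools import product

    # Compare (p ^ q) ^ r with p ^ (q ^ r) on every assignment.
    for p, q, r in product((True, False), repeat=3):
        assert ((p and q) and r) == (p and (q and r))
    print("Associativity of conjunction holds on all 8 rows.")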

As we've just seen, there are many ways of expressing a conjunction in English. For example, if

p: "Waner drives a fast car"


and
q: "Costenoble drives a slow car,"
the following are all ways of saying p ∧ q:
Waner drives a fast car and Costenoble drives a slow car.
Waner drives a fast car but Costenoble drives a slow car.
Waner drives a fast car yet Costenoble drives a slow car.
Although Waner drives a fast car, Costenoble drives a slow car.
Waner drives a fast car even though Costenoble drives a slow car.
While Waner drives a fast car, Costenoble drives a slow car.

Any sentence that suggests that two things are both true is a conjunction.
The use of symbolic logic strips away the elements of surprise or judgement
that can also be expressed in an English sentence.

We now introduce a third logical operator. Starting once again with p: "I am clever," and q: "You are strong," we can form the statement "I am clever or you are strong," which we write symbolically as p ∨ q, read "p or q." Now in English the word "or" has several possible meanings, so we have to agree on which one we want here. Mathematicians have settled on the inclusive or: p ∨ q means p is true or q is true or both are true.

With p and q as above, p ∨ q stands for "I am clever or you are strong, or both." We shall sometimes include the phrase "or both" for emphasis, but even if we do not that is what we mean. We call p ∨ q the disjunction of p and q.
Disjunction

The disjunction of p and q is the statement p ∨ q, which we read "p or q." Its truth value is defined by the following truth table.

p q  p∨q
T T  T
T F  T
F T  T
F F  F

This is the inclusive or, so p ∨ q is true when p is true or q is true or both are true.

Notice that the only way for the whole statement to be false is for both p and q to be false. For this reason we can say that p ∨ q also means "p and q are not both false." We'll say more about this in the next section.

The disjunction symbol "∨" is our second example of a binary logical operator.
2. Logical Equivalence, Tautologies, and Contradictions

Example 1 Constructing a Truth Table

Construct the truth table for ~(p ∧ q).

Solution

Whenever we encounter a complex formula like this, we work from the inside out, just as we might do if we had to evaluate an algebraic expression, like -(a+b). Thus, we start with the p and q columns, then construct the p ∧ q column, and finally, the ~(p ∧ q) column:
p q  p∧q  ~(p∧q)
T T  T    F
T F  F    T
F T  F    T
F F  F    T

Notice how we get the ~(p ∧ q) column from the p ∧ q column: we reverse all its truth values, since that is what negation means.
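Tables like this can be generated rather than written by hand. Here is a small sketch of ours, working from the inside out just as the text describes (tf is again our invented helper):

    from itertools import product

    def tf(b):
        return "T" if b else "F"

    # Columns: p, q, p ^ q, ~(p ^ q).
    print("p q  p^q  ~(p^q)")
    for p, q in product((True, False), repeat=2):
        conj = p and q               # inner formula first: p ^ q
        print(f"{tf(p)} {tf(q)}  {tf(conj)}    {tf(not conj)}")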

Example 2 Constructing a Truth Table

Construct the truth table for p ∨ (p ∧ q).

Solution

Since there are two variables, p and q, we again start with the p and q columns. Working from inside the parentheses, we then evaluate p ∧ q, and finally take the disjunction of the result with p:

p q  p∧q  p∨(p∧q)
T T  T    T
T F  F    T
F T  F    F
F F  F    F

Before we go on...

How did we get the last column from the others? Since we are "or-ing" p with p ∧ q, we must look at the values in the p and p ∧ q columns and "or" those together, according to the instructions for "or." Thus, for example, in the second row, we get T ∨ F = T, and in the third row, we get F ∨ F = F. (If you look at the second row of the truth table for "or" you will see T | F | T, and in the last row you will see F | F | F.)
Example 3 Three Variables

Construct the truth table for ~(p ∧ q) ∧ (~r).

Solution

Here, there are three variables: p, q and r. Thus we start with three initial
columns showing all eight possibilities:

p q r
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F

We now add columns for p ∧ q, ~(p ∧ q) and ~r, and finally ~(p ∧ q) ∧ (~r) according to the instructions for these logical operators. Here is how the table would grow as you construct it:

p q r  p∧q
T T T T
T T F T
T F T F
T F F F
F T T F
F T F F
F F T F
F F F F
p q r  p∧q  ~(p∧q)  ~r
T T T T F F
T T F T F T
T F T F T F
T F F F T T
F T T F T F
F T F F T T
F F T F T F
F F F F T T

and finally,

p q r  p∧q  ~(p∧q)  ~r  ~(p∧q)∧(~r)
T T T T F F F
T T F T F T F
T F T F T F F
T F F F T T T
F T T F T F F
F T F F T T T
F F T F T F F
F F F F T T T
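The same inside-out construction scales to any number of variables, each new variable doubling the number of rows. Here is a reusable sketch (the truth_table helper and its column format are our own invention, not anything from the text):

    from itertools import product

    def tf(b):
        return "T" if b else "F"

    def truth_table(names, *columns):
        # Each column is a (header, function of the assignment) pair.
        print(" ".join(names), " ".join(h for h, _ in columns))
        for values in product((True, False), repeat=len(names)):
            env = dict(zip(names, values))
            row = [tf(v) for v in values] + [tf(f(env)) for _, f in columns]
            print(" ".join(row))

    # The three-variable example above: ~(p ^ q) ^ (~r).
    truth_table(
        ["p", "q", "r"],
        ("p^q", lambda e: e["p"] and e["q"]),
        ("~(p^q)^(~r)", lambda e: (not (e["p"] and e["q"])) and (not e["r"])),
    )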

3. The Conditional and the Biconditional

The Conditional

Consider the following statement: "If you earn an A in logic, then I'll buy
you a Yellow Mustang." It seems to be made up out of two simpler
statements:

p: "You earn an A in logic," and


q: "I will buy you a Yellow Mustang."

What the original statement is then saying is this: if p is true, then q is true, or, more simply, if p, then q. We can also phrase this as p implies q, and we write p → q.

Now let us suppose for the sake of argument that the original statement: "If
you earn an A in logic, then I'll buy you a Yellow Mustang," is true. This
does not mean that you will earn an A in logic; all it says is that if you do so,
then I will buy you that Yellow Mustang. Thinking of this as a promise, the
only way that it can be broken is if you do earn an A and I do not buy you a
Yellow Mustang. In general, we use this idea to define the statement p → q.

Conditional

The conditional p → q, which we read "if p, then q" or "p implies q," is defined by the following truth table.

p q  p→q
T T  T
T F  F
F T  T
F F  T

The arrow " " is the conditional operator, and in p q the statement p
is classed the antecedent, or hypothesis, and q is called the
consequent, or conclusion.

Notice that the conditional is a new example of a binary logical operator -- it assigns to each pair of statements p and q the new statement p → q.

Notes

1. The only way that p → q can be false is if p is true and q is false; this is the case of the "broken promise."

2. If you look at the truth table again, you see that we say that "p → q" is true when p is false, no matter what the truth value of q. This again makes sense in the context of the promise — if you don't get that A, then whether or not I buy you a Mustang, I have not broken my promise. However, it goes against
the grain if you think of "if p then q" as saying that p causes q. The problem
is that there are really many ways in which the English phrase "if ... then ..."
is used. Logicians have simply agreed that the meaning given by the truth
table above is the most useful for mathematics, and so that is the meaning
we shall always use. Shortly we'll talk about other English phrases that we
interpret as meaning the same thing.
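Python has no built-in conditional operator, but the truth table above is captured exactly by encoding p → q as (not p) or q. The implies helper below is our own naming, a sketch rather than a standard function:

    def implies(p, q):
        # p -> q is false only on the "broken promise" row: p true, q false.
        return (not p) or q

    # Columns: p, q, p -> q.
    for p in (True, False):
        for q in (True, False):
            print(p, q, implies(p, q))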

Here are some examples that will help to explain each line in the truth table.

Example 1 (True Implies True) is True

If p and q are both true, then p → q is true. For instance:


If 1+1 = 2 then the sun rises in the east.

Here p: "1+1 = 2" and q: "the sun rises in the east."

Notice that the statements need not have anything to do with one
another. We are not saying that the sun rises in the east because 1+1 =
2, simply that the whole statement is logically true.

Example 2 True Can't Imply False

If p is true and q is false, then p → q is false. For instance:


When it rains, I carry an umbrella.

Here p: "It is raining," and q: "I am carrying an umbrella." In other


words, we can rephrase the sentence as: "If it is raining then I am
carrying an umbrella." Now there are lots of days when it rains (p is
true) and I forget to bring my umbrella (q is false). On any of those
days the statement p → q is clearly false.
Notice that we interpreted "When p, q" as "If p then q."

Example 3 False Implies Anything

If p is false, then p → q is true, no matter whether q is true or not. For


instance:
If the moon is made of green cheese, then I am the King of England.

Here p: "the moon is made of green cheese," which is false, and q: "I am the
King of England." The statement p q is true, whether or not the speaker
happens to be the King of England (or whether, for that matter, there even is
a King of England).

"If I had a million dollars I'd be on Easy Street." "Yeah, and if my


grandmother had wheels she'd be a bus." The point of the retort is that, if the
hypothesis is false, the whole implication is true.

4. Tautological Implications and Tautological Equivalences

Tautological Implications

In this section we enlarge our list of "standard" tautologies by adding ones involving the conditional and the biconditional. From now on, we use small letters like p and q to denote atomic statements only, and uppercase letters like A and B to denote statements of all types, compound or atomic. We first look at some tautological implications, tautologies of the form A → B. You should check the truth table of each of the statements we give to see that they are, indeed, tautologies.

Modus Ponens or Direct Reasoning

[(p → q) ∧ p] → q.

In words: If p implies q, and if p is true, then q must be true.

Example
Letting p: "I love math" and q: "I will pass this course," we get

If my loving math implies that I will pass this course, and if I indeed love
math, then I will pass this course.

Another way of setting this up is in the following argument form:

If I love math, then I will pass this course.


I love math.
Therefore, I will pass this course.

In symbols:

p → q
p
--------
q

Notice that we draw a line in the argument form to separate what we are
given from the conclusion that we draw. This tautology represents the most
direct form of everyday reasoning, hence its name "direct reasoning."
Another bit of terminology: We say that p → q and p together logically imply q.
To check that it is a tautology, we use a truth table.

p q  p→q  (p→q)∧p  [(p→q)∧p]→q
T T  T    T        T
T F  F    F        T
F T  T    F        T
F F  T    F        T
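This truth-table check can be automated: a statement is a tautology exactly when its final column is T on every row. A sketch of ours, reusing the implies encoding from the previous section:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # [(p -> q) ^ p] -> q should evaluate to True on all four rows.
    rows = [implies(implies(p, q) and p, q)
            for p, q in product((True, False), repeat=2)]
    print("Modus Ponens is a tautology:", all(rows))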

Once more, modus ponens says that, if we know that p implies q, and we
know that p is indeed true, then we can conclude that q is also true. This is
sometimes known as affirming the hypothesis. You should not confuse this
with a fallacious argument like: "If I were an Olympic athlete then I would
drink Boors. I do drink Boors, therefore I am an Olympic athlete." (Do you
see why this is nonsense?) This is known as the fallacy of affirming the
consequent. There is, however, a correct argument in which we deny the
consequent:

Modus Tollens or Indirect Reasoning

[(p → q) ∧ (~q)] → (~p)

In words, if p implies q, and q is false, then so is p.

Example
If we once again take p: "I love math" and q: "I will pass this course," we get

If I love math then I will pass this course; but I know that I will fail it.
Therefore, I must not love math.

In argument form:

If I love math, then I will pass this course.


I will fail the course.
Therefore, I do not love math.

In symbols:

p → q
~q
--------
~p

As you can see, this argument is not quite so direct as that in the first
example; it seems to contain a little twist: "If p were true then q would also
be true. However, q is false. Therefore p must also be false (else q would be
true.)" That is why we refer to it as indirect reasoning.

We'll leave the truth table for the exercises. Note that there is again a similar,
but fallacious argument form to avoid: "If I were an Olympic athlete then I
would drink Boors. However, I am not an Olympic athlete. Therefore I can't
drink Boors." This is a mistake Boors sincerely hopes you do not make!

More tautological implications:

Simplification
(p ∧ q) → p
and
(p ∧ q) → q

In words, the first says: If both p and q are true, then, in particular, p is true.

Example
If the sky is blue and the moon is round, then (in particular) the sky is blue.

Argument Form

The sky is blue and the moon is round.


Therefore, the sky is blue.

In symbols:

p ∧ q
--------
p

The other simplification, (p ∧ q) → q, is similar.

Addition
p → (p ∨ q)

In words: If p is true, then we know that either p or q is true.

Example
If the sky is blue, then either the sky is blue or some ducks are kangaroos.

Argument Form

The sky is blue.


Therefore, the sky is blue or some ducks are kangaroos.

In symbols:

p
--------
p ∨ q

Notice that it doesn't matter what we use as q, nor does it matter whether it is true or false. The reason is that the disjunction p ∨ q is true if at least one of p or q is true. Since we start out knowing that p is true, the truth value of q doesn't matter.

Warning
The following are not tautologies:

(p ∨ q) → p;

p → (p ∧ q).
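One quick way to see that these fail is to hunt for a falsifying row. A brute-force sketch of ours, with implies encoded as before:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # Report every assignment that falsifies each candidate.
    for p, q in product((True, False), repeat=2):
        if not implies(p or q, p):
            print("(p v q) -> p fails at p =", p, "q =", q)
        if not implies(p, p and q):
            print("p -> (p ^ q) fails at p =", p, "q =", q)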
5. Rules of Inference

In the last section, we wrote out all our tautologies in what we called "argument form." For instance, Modus Ponens [(p → q) ∧ p] → q was represented as

p → q
p
--------
q

We think of the statements above the line, the premises, as statements given
to us as true, and the statement below the line, the conclusion, as a statement
that must then also be true.

Our convention has been that small letters like p stand for atomic statements.
But, there is no reason to restrict Modus Ponens to such statements. For
example, we would like to be able to make the following argument:

If roses are red and violets are blue, then sugar is sweet and so are you.
Roses are red and violets are blue.
Therefore, sugar is sweet and so are you.

In symbols, this is

(p ∧ q) → (r ∧ s)
p ∧ q
--------
r ∧ s

So, we really should write Modus Ponens in the following more general and
hence usable form:

A → B
A
--------
B

where, as our convention has it, A and B can be any statements, atomic or
compound.

In this form, Modus Ponens is our first rule of inference. We shall use rules
of inference to assemble lists of true statements, called proofs. A proof is a
way of showing how a conclusion follows from a collection of premises.
Modus Ponens, in particular, allows us to say that, if A → B and A both
appear as statements in a proof, then we are justified in adding B as another
statement in the proof.

Example 1 Applying Modus Ponens

Apply Modus Ponens to statements 1 and 3 in the following list of


premises (that is, statements that we take to be true).
1. (p ∧ q) → (r ∧ ~s)
2. ~r ∨ s
3. p ∧ q

Solution

Notice that all the statements are compound statements, and that they
have the following patterns:
1. A → B
2. C
3. A.
Statement A appears twice: in lines (1) and (3). Looking at Modus Ponens, we see that we can deduce B = r ∧ (~s) from these lines. (Line (2) is not going to be used at all; it just goes along for the ride.) Thus, we can enlarge our list as follows:

1. (p ∧ q) → (r ∧ ~s) Premise
2. ~r ∨ s Premise
3. p ∧ q Premise
4. r ∧ (~s) 1,3 Modus Ponens

On the right we have given the justification for each line: lines (1)
through (3) were given as premises, and line (4) follows by an
application of Modus Ponens to lines (1) and (3); hence the
justification "1,3 Modus Ponens."

The above list of four statements constitutes a proof that Statement 4 follows from premises 1-3, and we refer to it as a proof of the argument

(p ∧ q) → (r ∧ ~s) Premise
~r ∨ s Premise
p ∧ q Premise
--------
r ∧ (~s) Conclusion
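A rule of inference can itself be applied mechanically, by matching on the shape of statements rather than their truth values. Here is a toy sketch; the nested-tuple representation and the modus_ponens helper are our own invention, not anything from the text:

    # Represent compound statements as nested tuples, e.g.
    # ("implies", ("and", "p", "q"), ("and", "r", ("not", "s"))).
    def modus_ponens(statements):
        # From A -> B and A on the list, derive B.
        derived = []
        for s in statements:
            if isinstance(s, tuple) and s[0] == "implies":
                _, a, b = s
                if a in statements and b not in statements:
                    derived.append(b)
        return derived

    premises = [
        ("implies", ("and", "p", "q"), ("and", "r", ("not", "s"))),
        ("or", ("not", "r"), "s"),
        ("and", "p", "q"),
    ]
    print(modus_ponens(premises))   # [('and', 'r', ('not', 's'))]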

Example 2 Applying Modus Tollens

Apply Modus Tollens to the following premises:


1. (p ∧ q) → (r ∧ ~s)
2. ~(r ∧ ~s)
3. (p ∧ q) → p

Solution

Looking at the given premises, we see the pattern:


1. A → B
2. ~B
3. A → C

As a rule of inference, Modus Tollens has the following form:

A → B
~B
--------
~A

(In words, if A → B appears on the list, and if ~B also appears on the list, we can add ~A to the list of true statements.)

This matches the first two premises, so we can apply Modus Tollens
to get the following.

1. (p ∧ q) → (r ∧ ~s) Premise
2. ~(r ∧ ~s) Premise
3. (p ∧ q) → p Premise
4. ~(p ∧ q) 1,2 Modus Tollens

We used A → C to represent the statement (p ∧ q) → p, although we could just as well have represented it by D. Since we're not using this statement at all, it doesn't matter how we represent it. On the other hand, in order to be able to use Modus Tollens on lines (1) and (2), it was imperative that we represented line (1) by A → B, and not by the single letter A. If you look at the argument form of Modus Tollens, you will see that it requires a statement of the form A → B (as well as ~B, of course). Part of learning to apply the rules
of inference is learning how to analyze the structure of statements at the
right level of detail.
6. Arguments and Proofs

In Example 5 in the preceding section we saw the following argument.

a → q
b → q
--------
(a ∨ b) → q

Precisely, an argument is a list of statements called premises followed by a statement called the conclusion. (We allow the list of premises to be empty,
as in Example 3 in the preceding section.) We say that an argument is valid
if the conjunction of its premises implies its conclusion. In other words,
validity means that if all the premises are true, then so is the conclusion.
Validity of an argument does not guarantee the truth of its premises, so does
not guarantee the truth of its conclusion. It only guarantees that the
conclusion will be true if the premises are.

Arguments and Validity

An argument is a list of statements called premises followed by a statement called the conclusion.

P1    Premise
P2    Premise
P3    Premise
...   ...
Pr    Premise
--------
C     Conclusion

The argument is said to be valid if the statement

(P1 ∧ P2 ∧ . . . ∧ Pr) → C

is a tautology. In other words, validity means that if all the premises are true,
then the conclusion must be true.
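Validity in this sense can be checked by brute force over every assignment. Here is a sketch of ours for the argument above, with implies encoded as in earlier sketches:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # Valid iff (premise 1 ^ premise 2) -> conclusion holds on every row.
    valid = all(
        implies(implies(a, q) and implies(b, q), implies(a or b, q))
        for a, b, q in product((True, False), repeat=3)
    )
    print("Argument is valid:", valid)   # expected: True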

Question

To show the validity of an argument like

a → q
b → q
--------
(a ∨ b) → q

what we need to do is check that the statement [(a → q) ∧ (b → q)] → [(a ∨ b) → q] is a tautology. So to show that an argument is valid we need to construct a truth table, right?

Answer

Well, that would work, but there are a couple of problems. First, the truth table can get quite large. The truth table for [(a → q) ∧ (b → q)] → [(a ∨ b) → q] has eight rows and nine columns. It gets worse quickly, since each extra variable doubles the number of rows.

Second, checking the validity of an argument mechanically by


constructing a truth table is almost completely unenlightening; it gives
you no good idea why an argument is valid. We'll concentrate on an
alternative way of showing that an argument is valid, called a proof,
that is far more interesting and tells you much more about what is
going on in the argument.

Lastly, while truth tables suffice to check the validity of statements in


the propositional calculus, they do not work for the predicate calculus
we will begin to discuss in the following section. Hence, they do not
work for real mathematical arguments. One of our ulterior motives is
to show you what mathematicians really do: They create proofs.
Question

OK, so what is a proof?

Answer

Informally, a proof is a way of convincing you that the conclusion


follows from the premises, or that the conclusion must be true if the
premises are. Formally, a proof is a list of statements, usually
beginning with the premises, in which each statement that is not a
premise must be true if the statements preceding it are true. In
particular, the truth of the last statement, the conclusion, must follow
from the truth of the first statements, the premises. How do we know
that each statement follows from the preceding ones? We cite a rule of
inference that guarantees that it is so.
Proofs

A proof of an argument is a list of statements, each of which is


obtained from the preceding statements using one of the rules of
inference T1, T2, S, C, or P. The last statement in the proof must be
the conclusion of the argument.

Example
As an example, we have the following proof of the argument given
above, which we considered in the preceding section:
1. a → q Premise
2. b → q Premise
3. ~a ∨ q 1, Switcheroo
4. ~b ∨ q 2, Switcheroo
5. (~a ∨ q) ∧ (~b ∨ q) 3,4 Rule C
6. (~a ∧ ~b) ∨ q 5, Distributive Law
7. ~(a ∨ b) ∨ q 6, De Morgan
8. (a ∨ b) → q 7, Switcheroo

Question

I'm convinced that proofs may be a good thing, but I'm still a little
skeptical. What does a proof actually have to do with the validity of
an argument?
Answer

On the one hand, a proof establishes the validity of an argument. The


reason is that, in a proof, every line must be true if the preceding lines
are true. In particular, the truth of the first lines, the premises, implies
the truth of the last line, the conclusion. Hence a proof does show that
an argument is valid. Much less obvious, but reassuring, is the fact
that every valid argument in propositional calculus has a proof. In
other words, an argument is valid if and only if there is a proof of it.

The only way to learn to find proofs is by looking at lots of examples and
doing lots of practice. In the following examples we'll try to give you some
tips as we go along.
7. Predicate Calculus

The Limits of Propositional Calculus

One of the most famous arguments in logic goes as follows.


All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
There is really no good way to express this argument using propositional
calculus.

Question

What are you talking about? These are just three ordinary statements
in the propositional calculus:
p: All men are mortal.
q: Socrates is a man.
r: Socrates is mortal.

Answer

But then the above argument has the form

p
q
--------
r
and is therefore not a valid argument in the propositional calculus.

Question

OK. That was a tricky one. I now see that we cannot take those statements as atomic statements, but should write them as compound statements. Now I get it! It is just the transitive rule:

Something is a man → It is mortal
Something is Socrates → It is a man
--------
Something is Socrates → It is mortal

Answer
This looks more convincing, but there is another catch: "Something is a man" and "It is a man", while perfectly good sentences, are not propositions (what, after all, are their truth values?). The same goes for the other "statements" in the argument. No matter how we try to rephrase the argument as a valid argument in propositional calculus, we are doomed to run into one technical difficulty or another.

Universal Quantifier

We need to go beyond the propositional calculus to the predicate calculus, which allows us to manipulate statements about all or some things, as suggested by the above attempt at formulating the argument about Socrates.

We begin by rewording the statement "All men are mortal" a little more
slickly than we did above:

"For all x, if x is a man then x is mortal."


The sentence "x is a man" is not a statement in propositional calculus, since
it involve an unknown thing x and we can't assign a truth value without
knowing what x we're talking about. This sentence can be broken down into
its subject, x, and a predicate, "is a man." We say that the sentence is a
statement form, since it becomes a statement once we fill in x. Here is how
we shall write it symbolically: The subject is already represented by the
symbol x, called a term here, and we use the symbol P for the predicate "is a
man." We then write Px for the statement form. (It is traditional to write the
predicate before the term; this is related to the convention of writing
function names before variables in other parts of mathematics.) Similarly, if
we use Q to represent the predicate "is mortal" then Qx stands for "x is
mortal." We can then write the statement "If x is a man then x is mortal" as
Px Qx. To write our whole statement, "For all x, if x is a man then x is
mortal" symbolically, we need symbols for "For all x." We use the symbol "
" to stand for the words "for all" or "for every." Thus, we can write our
complete statement as
x[Px Qx].
The symbol " " is called a quantifier because it describes the number of
things we are talking about: all of them. Specifically, it is the universal
quantifier because it makes a claim that something happens universally.
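Over a finite domain, the universal quantifier can be modelled directly: ∀x[Px → Qx] becomes an all(...) over the domain. Here is a sketch with invented stand-in predicates; the sets men and mortals are our own, chosen to echo the Socrates example:

    # Finite-domain model of "for all x, if x is a man then x is mortal".
    domain = ["Socrates", "Plato", "Fido"]
    men = {"Socrates", "Plato"}                # extension of P, "is a man"
    mortals = {"Socrates", "Plato", "Fido"}    # extension of Q, "is mortal"

    def implies(p, q):
        return (not p) or q

    # ∀x [Px -> Qx]
    print(all(implies(x in men, x in mortals) for x in domain))   # True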

Question
What are those square brackets doing around Px → Qx?

Answer

They define what is called the scope of the quantifier ∀x. That is, they surround what it is we are claiming is true for all x.

Nature of logic

The concept of logical form is central to logic: it is held that the validity of an argument is determined by its logical form, not by its content.
Traditional Aristotelian syllogistic logic and modern symbolic logic are
examples of formal logics.

• Informal logic is the study of natural language arguments. The study of fallacies is an especially important branch of informal logic. The dialogues of Plato are a good example of informal logic.
• Formal logic is the study of inference with purely formal content,
where that content is made explicit. (An inference possesses a purely
formal content if it can be expressed as a particular application of a
wholly abstract rule, that is, a rule that is not about any particular
thing or property. The works of Aristotle contain the earliest known
formal study of logic, which were incorporated in the late nineteenth
century into modern formal logic. In many definitions of logic,
logical inference and inference with purely formal content are the
same. This does not render the notion of informal logic vacuous,
because no formal logic captures all of the nuance of natural
language.)
• Symbolic logic is the study of symbolic abstractions that capture the
formal features of logical inference. Symbolic logic is often divided
into two branches, propositional logic and predicate logic.
• Mathematical logic is an extension of symbolic logic into other
areas, in particular to the study of model theory, proof theory, set
theory, and recursion theory.

These families generally give logic a similar structure: to establish the relation of the sentences in the topic of interest to their representation in logic through the analysis of logical form and semantics, and to present an account of inference relating these formal propositions.

Logical form

Logic is generally accepted to be formal, in that it aims to analyse and represent the form (or logical form) of any valid argument type. The form of an argument is displayed by representing its sentences in the formal
an argument is displayed by representing its sentences in the formal
grammar and symbolism of a logical language to make its content usable in
formal inference. If one considers the notion of form to be too
philosophically loaded, one could say that formalizing is nothing else than
translating English sentences in the language of logic.

This is known as showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form and complexity that makes their use in inference impractical.
It requires, first, ignoring those grammatical features which are irrelevant to
logic (such as gender and declension if the argument is in Latin), replacing
conjunctions which are not relevant to logic (such as 'but') with logical
conjunctions like 'and' and replacing ambiguous or alternative logical
expressions ('any', 'every', etc.) with expressions of a standard type (such as
'all', or the universal quantifier).

Second, certain parts of the sentence must be replaced with schematic letters.
Thus, for example, the expression 'all As are Bs' shows the logical form
which is common to the sentences 'all men are mortals', 'all cats are
carnivores', 'all Greeks are philosophers' and so on.
That the concept of form is fundamental to logic was already recognized in
ancient times. Aristotle uses variable letters to represent valid inferences in the Prior Analytics, leading Jan Łukasiewicz to say that the introduction of
variables was 'one of Aristotle's greatest inventions'. According to the
followers of Aristotle (such as Ammonius), only the logical principles stated
in schematic terms belong to logic, and not those given in concrete terms.
The concrete terms 'man', 'mortal', etc., are analogous to the substitution
values of the schematic placeholders 'A', 'B', 'C', which were called the
'matter' (Greek 'hyle') of the inference.

The fundamental difference between modern formal logic and traditional or Aristotelian logic lies in their differing analysis of the logical form of the sentences they treat.

• In the traditional view, the form of the sentence consists of (1) a subject (e.g. 'man') plus a sign of quantity ('all' or 'some' or 'no'); (2) the copula which is of the form 'is' or 'is not'; (3) a predicate (e.g.
'mortal'). Thus: all men are mortal. The logical constants such as 'all',
'no' and so on, plus sentential connectives such as 'and' and 'or' were
called 'syncategorematic' terms (from the Greek 'kategorei' – to
predicate, and 'syn' – together with). This is a fixed scheme, where
each judgement has an identified quantity and copula, determining the
logical form of the sentence.

According to the modern view, the fundamental form of a simple sentence is given by a recursive schema, involving logical connectives, such as a quantifier with its bound variable, which are joined by juxtaposition to other sentences, which in turn may have logical structure.

The modern view is more complex, since a single judgement of Aristotle's system will involve two or more logical connectives. For example, the sentence "All men are mortal" involves in term logic two non-logical terms "is a man" (here M) and "is mortal" (here D): the sentence is given by the judgement A (M, D). In predicate logic the sentence involves the same two non-logical concepts, here analyzed as m(x) and d(x), and the sentence is given by ∀x[m(x) → d(x)], involving the logical connectives for universal quantification and implication.
But equally, the modern view is more powerful: medieval logicians
recognized the problem of multiple generality, where Aristotelian logic is
unable to satisfactorily render such sentences as "Some guys have all the
luck", because both quantities "all" and "some" may be relevant in an
inference, but the fixed scheme that Aristotle used allows only one to govern
the inference. Just as linguists recognize recursive structure in natural
languages, it appears that logic needs recursive structure.

Deductive and inductive reasoning

Deductive reasoning concerns what follows necessarily from given premises. However, inductive reasoning—the process of deriving a reliable
generalization from observations—has sometimes been included in the study
of logic. Correspondingly, we must distinguish between deductive validity
and inductive validity (called "cogency"). An inference is deductively valid
if and only if there is no possible situation in which all the premises are true
and the conclusion false.

The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. Inductive
validity on the other hand requires us to define a reliable generalization of
some set of observations. The task of providing this definition may be
approached in various ways, some less formal than others; some of these
definitions may use mathematical models of probability. For the most part
this discussion of logic deals only with deductive logic.
Consistency, soundness, and completeness

Among the important properties that logical systems can have:

Consistency, which means that no theorem of the system contradicts another.

Soundness, which means that the system's rules of proof will never allow a
false inference from a true premise. If a system is sound and its axioms are
true then its theorems are also guaranteed to be true.

Completeness, which means that there are no true sentences in the system
that cannot, at least in principle, be proved in the system.

Some logical systems do not have all three properties. As an example, Kurt Gödel's incompleteness theorems show that no standard formal system of arithmetic can be consistent and complete. At the same time, his completeness theorem shows first-order predicate logic, when not extended by specific axioms into an arithmetic formal system with equality, to be complete and consistent.

Rival conceptions of logic


Logic arose (see below) from a concern with correctness of argumentation.
Modern logicians usually wish to ensure that logic studies just those
arguments that arise from appropriately general forms of inference. For
example, Thomas Hofweber writes in the Stanford Encyclopedia of
Philosophy that logic "does not, however, cover good reasoning as a whole.
That is the job of the theory of rationality. Rather it deals with inferences
whose validity can be traced back to the formal features of the
representations that are involved in that inference, be they linguistic, mental,
or other representations".

By contrast, Immanuel Kant argued that logic should be conceived as the science of judgment, an idea taken up in Gottlob Frege's logical and
philosophical work, where thought (German: Gedanke) is substituted for
judgement (German: Urteil). On this conception, the valid inferences of
logic follow from the structural features of judgements or thoughts.

History of logic

The earliest sustained work on the subject of logic is that of Aristotle.[11] In contrast with other traditions, Aristotelian logic became widely accepted in
science and mathematics, ultimately giving rise to the formally sophisticated
systems of modern logic.

Several ancient civilizations have employed intricate systems of reasoning and asked questions about logic or propounded logical paradoxes. In India,
the Nasadiya Sukta of the Rig-Veda (RV 10.129) contains ontological
speculation in terms of various logical divisions that were later recast
formally as the four circles of catuṣkoṭi: "A", "not A", "Neither A nor not
A", and "Both not A and not not A". The Chinese philosopher Gongsun
Long (ca. 325–250 BC) proposed the paradox "One and one cannot become
two, since neither becomes two." Also, the Chinese 'School of Names' is
recorded as having examined logical puzzles such as "A White Horse is not
a Horse" as early as the fifth century BCE. In China, the tradition of
scholarly investigation into logic, however, was repressed by the Qin
dynasty following the legalist philosophy of Han Feizi.

Logic in Islamic philosophy also contributed to the development of modern logic, which included the development of "Avicennian logic" as an alternative to Aristotelian logic. Avicenna's system of logic was responsible
for the introduction of hypothetical syllogism,[16] temporal modal logic, and
inductive logic. The rise of the Asharite school, however, limited original
work on logic in Islamic philosophy, though it did continue into the 15th
century and had a significant influence on European logic during the
Renaissance.

In India, innovations in the scholastic school, called Nyaya, continued from ancient times into the early 18th century, though it did not survive long into
the colonial period. In the 20th century, Western philosophers like Stanislaw
Schayer and Klaus Glashoff have tried to explore certain aspects of the
Indian tradition of logic.

During the later medieval period, major efforts were made to show that
Aristotle's ideas were compatible with Christian faith. During the later
period of the Middle Ages, logic became a main focus of philosophers, who
would engage in critical logical analyses of philosophical arguments.

The syllogistic logic developed by Aristotle predominated until the mid-nineteenth century when interest in the foundations of mathematics
stimulated the development of symbolic logic (now called mathematical
logic). In 1854, George Boole published An Investigation of the Laws of
Thought on Which are Founded the Mathematical Theories of Logic and
Probabilities, introducing symbolic logic and the principles of what is now
known as Boolean logic. In 1879 Frege published Begriffsschrift which
inaugurated modern logic with the invention of quantifier notation. In 1910 Alfred North Whitehead and Bertrand Russell published Principia Mathematica on the foundations of mathematics, attempting to derive
mathematical truths from axioms and inference rules in symbolic logic. In
1931 Gödel raised serious problems with the foundationalist program and logic ceased to focus on such issues.
The development of logic since Frege, Russell and Wittgenstein had a
profound influence on the practice of philosophy and the perceived nature of
philosophical problems (see Analytic philosophy), and Philosophy of
mathematics. Logic, especially sentential logic, is implemented in computer
logic circuits and is fundamental to computer science. Logic is commonly
taught by university philosophy departments often as a compulsory
discipline.

Topics in logic

Syllogistic logic
The Organon was Aristotle's body of work on logic, with the Prior
Analytics constituting the first explicit work in formal logic, introducing the
syllogistic. The parts of syllogistic, also known by the name term logic, were
the analysis of the judgements into propositions consisting of two terms that
are related by one of a fixed number of relations, and the expression of
inferences by means of syllogisms that consisted of two propositions sharing
a common term as premise, and a conclusion which was a proposition
involving the two unrelated terms from the premises.

Aristotle's work was regarded in classical times and from medieval times in
Europe and the Middle East as the very picture of a fully worked out system.
It was not alone: the Stoics proposed a system of propositional logic that
was studied by medieval logicians; nor was the perfection of Aristotle's
system undisputed; for example the problem of multiple generality was
recognised in medieval times. Nonetheless, problems with syllogistic logic
were not seen as being in need of revolutionary solutions.

Today, some academics claim that Aristotle's system is generally seen as having little more than historical value (though there is some current interest
in extending term logics), regarded as made obsolete by the advent of
propositional logic and the predicate calculus. Others use Aristotle in
argumentation theory to help develop and critically question argumentation
schemes that are used in artificial intelligence and legal arguments.

Sentential (propositional) logic

A propositional calculus or logic (also a sentential calculus) is a formal system in which formulae representing propositions can be formed by combining atomic propositions using logical connectives, and a system of formal proof rules allows certain formulas to be established as "theorems".

Predicate logic

Predicate logic is the generic term for symbolic formal systems such as first-
order logic, second-order logic, many-sorted logic, and infinitary logic.

Predicate logic provides an account of quantifiers general enough to express a wide set of arguments occurring in natural language. Aristotelian
syllogistic logic specifies a small number of forms that the relevant part of
the involved judgements may take. Predicate logic allows sentences to be
analyzed into subject and argument in several additional ways, thus allowing
predicate logic to solve the problem of multiple generality that had
perplexed medieval logicians.

The development of predicate logic is usually attributed to Gottlob Frege, who is also credited as one of the founders of analytical philosophy, but the
formulation of predicate logic most often used today is the first-order logic
presented in Principles of Mathematical Logic by David Hilbert and
Wilhelm Ackermann in 1928. The analytical generality of predicate logic
allowed the formalisation of mathematics, drove the investigation of set
theory, and allowed the development of Alfred Tarski's approach to model
theory. It provides the foundation of modern mathematical logic.

Frege's original system of predicate logic was second-order, rather than first-
order. Second-order logic is most prominently defended (against the
criticism of Willard Van Orman Quine and others) by George Boolos and
Stewart Shapiro.

Modal logic

In languages, modality deals with the phenomenon that sub-parts of a sentence may have their semantics modified by special verbs or modal particles. For example, "We go to the games" can be modified to give "We should go to the games," "We can go to the games," and perhaps "We will go to the games." More abstractly, we might say that modality affects the circumstances in which we take an assertion to be satisfied.

The logical study of modality dates back to Aristotle, who was concerned
with the alethic modalities of necessity and possibility, which he observed to
be dual in the sense of De Morgan duality. While the study of
necessity and possibility remained important to philosophers, little logical
innovation happened until the landmark investigations of Clarence Irving
Lewis in 1918, who formulated a family of rival axiomatizations of the
alethic modalities. His work unleashed a torrent of new work on the topic,
expanding the kinds of modality treated to include deontic logic and
epistemic logic. The seminal work of Arthur Prior applied the same formal
language to treat temporal logic and paved the way for the marriage of the
two subjects. Saul Kripke discovered (contemporaneously with rivals) his
theory of frame semantics which revolutionized the formal technology
available to modal logicians and gave a new graph-theoretic way of looking
at modality that has driven many applications in computational linguistics
and computer science, such as dynamic logic.

Informal reasoning

The motivation for the study of logic in ancient times was clear: it
is so that one may learn to distinguish good from bad arguments,
and so become more effective in argument and oratory, and
perhaps also to become a better person. Half of the works of
Aristotle's Organon treat inference as it occurs in an informal
setting, side by side with the development of the syllogistic, and in
the Aristotelian school, these informal works on logic were seen as
complementary to Aristotle's treatment of rhetoric.

This ancient motivation is still alive, although it no longer takes centre stage in the picture of logic; typically dialectical logic will
form the heart of a course in critical thinking, a compulsory course
at many universities.

Argumentation theory is the study and research of informal logic, fallacies, and critical questions as they relate to everyday and practical situations. Specific types of dialogue can be analyzed and
questioned to reveal premises, conclusions, and fallacies.
Argumentation theory is now applied in artificial intelligence and
law.

Mathematical logic
Mathematical logic really refers to two distinct areas of research: the first is
the application of the techniques of formal logic to mathematics and
mathematical reasoning, and the second, in the other direction, the
application of mathematical techniques to the representation and analysis of
formal logic.

The earliest use of mathematics and geometry in relation to logic and philosophy goes back to the ancient Greeks such as Euclid, Plato, and
Aristotle. Many other ancient and medieval philosophers applied
mathematical ideas and methods to their philosophical claims.

The boldest attempt to apply logic to mathematics was undoubtedly the logicism pioneered by philosopher-logicians such as Gottlob Frege and Bertrand Russell: the idea was that mathematical theories were logical tautologies, and the program was to show this by means of a reduction of
mathematics to logic. The various attempts to carry this out met with a series
of failures, from the crippling of Frege's project in his Grundgesetze by
Russell's paradox, to the defeat of Hilbert's program by Gödel's
incompleteness theorems.

Both the statement of Hilbert's program and its refutation by Gödel depended upon their work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory.
Despite the negative nature of the incompleteness theorems, Gödel's
completeness theorem, a result in model theory and another application of
mathematics to logic, can be understood as showing how close logicism
came to being true: every rigorously defined mathematical theory can be
exactly captured by a first-order logical theory; Frege's proof calculus is
enough to describe the whole of mathematics, though not equivalent to it.
Thus we see how complementary the two areas of mathematical logic have
been.

If proof theory and model theory have been the foundation of mathematical
logic, they have been but two of the four pillars of the subject. Set theory
originated in the study of the infinite by Georg Cantor, and it has been the
source of many of the most challenging and important issues in
mathematical logic, from Cantor's theorem, through the status of the Axiom
of Choice and the question of the independence of the continuum
hypothesis, to the modern debate on large cardinal axioms.
Recursion theory captures the idea of computation in logical and arithmetic
terms; its most classical achievements are the undecidability of the
Entscheidungsproblem by Alan Turing, and his presentation of the Church-
Turing thesis. Today recursion theory is mostly concerned with the more
refined problem of complexity classes — when is a problem efficiently
solvable? — and the classification of degrees of unsolvability.

Philosophical logic

Philosophical logic deals with formal descriptions of natural language. Most
philosophers assume that the bulk of "normal" proper reasoning can be
captured by logic, if one can find the right method for translating ordinary
language into that logic. Philosophical logic is essentially a continuation of
the traditional discipline that was called "Logic" before the invention of
mathematical logic. Philosophical logic has a much greater concern with the
connection between natural language and logic. As a result, philosophical
logicians have contributed a great deal to the development of non-standard
logics (e.g., free logics, tense logics) as well as various extensions of
classical logic (e.g., modal logics), and non-standard semantics for such
logics (e.g., Kripke's technique of supervaluations in the semantics of
logic).

Logic and the philosophy of language are closely related. Philosophy of
language has to do with the study of how our language engages and interacts
with our thinking. Logic has an immediate impact on other areas of study.
Studying logic and the relationship between logic and ordinary speech can
help a person better structure their own arguments and critique the
arguments of others. Many popular arguments are filled with errors because
so many people are untrained in logic and unaware of how to correctly
formulate an argument.

Logic and computation

Logic cut to the heart of computer science as it emerged as a discipline: Alan
Turing's work on the Entscheidungsproblem followed from Kurt Gödel's
work on the incompleteness theorems, and the notion of general purpose
computers that came from this work was of fundamental importance to the
designers of the computer machinery in the 1940s.

In the 1950s and 1960s, researchers predicted that when human knowledge
could be expressed using logic with mathematical notation, it would be
possible to create a machine that reasons, or artificial intelligence. This
turned out to be more difficult than expected because of the complexity of
human reasoning. In logic programming, a program consists of a set of
axioms and rules. Logic programming systems such as Prolog compute the
consequences of the axioms and rules in order to answer a query.
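As an illustration of that last point, here is a toy sketch in Python rather
than Prolog (the atom names are invented for the example): it performs
forward chaining over propositional Horn clauses, deriving every consequence
of the facts and rules and then answering a query by membership. Prolog
itself answers queries by backward chaining and unification over first-order
terms, so this is only the simplest cousin of the idea.

# A toy forward-chaining engine over propositional Horn clauses.
# A rule (body, head) says: once every atom in body is derived, derive head.

def consequences(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

facts = {"rainy", "have_umbrella"}
rules = [
    (["rainy"], "ground_wet"),
    (["rainy", "have_umbrella"], "stay_dry"),
]

# Query: is "stay_dry" a consequence of the axioms and rules?
print("stay_dry" in consequences(facts, rules))  # True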

Today, logic is extensively applied in the fields of artificial intelligence
and computer science, and these fields provide a rich source of problems in
formal and informal logic. Argumentation theory is one good example of
how logic is being applied to artificial intelligence. The ACM Computing
Classification System in particular regards:

• Section F.3 on Logics and meanings of programs and F.4 on
Mathematical logic and formal languages as part of the theory of
computer science: this work covers formal semantics of programming
languages, as well as work on formal methods such as Hoare logic;
• Boolean logic as fundamental to computer hardware: particularly, the
system's section B.2 on Arithmetic and logic structures, relating to the
operators AND, NOT, and OR (see the sketch after this list);
• Many fundamental logical formalisms are essential to section I.2 on
artificial intelligence, for example modal logic and default logic in
Knowledge representation formalisms and methods, Horn clauses in
logic programming, and description logic.
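To illustrate the point about Boolean logic and hardware, the short Python
sketch below (illustrative only) prints the truth tables of the three basic
gates; every combinational circuit in the sense of section B.2 is composed
from these operators.

# Truth tables for the basic gates AND, OR, and NOT,
# using 0/1 integers to mirror hardware signal levels.
from itertools import product

print("a b | a AND b | a OR b | NOT a")
for a, b in product([0, 1], repeat=2):
    print(f"{a} {b} |    {a & b}    |   {a | b}    |   {1 - a}")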
Furthermore, computers can be used as tools for logicians. For example, in
symbolic logic and mathematical logic, proofs by humans can be computer-
assisted. Using automated theorem proving, machines can find and check
proofs, as well as work with proofs too lengthy to be written out by hand.

Controversies in logic

Just as we have seen there is disagreement over what logic is about, so there
is disagreement about what logical truths there are.

Bivalence and the law of the excluded middle

The logics discussed above are all "bivalent" or "two-valued"; that is, they
are most naturally understood as dividing propositions into true and false
propositions. Non-classical logics are those systems which reject bivalence.

Hegel developed his own dialectic logic that extended Kant's transcendental
logic but also brought it back to ground by assuring us that "neither in
heaven nor in earth, neither in the world of mind nor of nature, is there
anywhere such an abstract 'either–or' as the understanding maintains.
Whatever exists is concrete, with difference and opposition in itself".

In 1910 Nicolai A. Vasiliev rejected the law of excluded middle and the law
of contradiction and proposed the law of excluded fourth and logic tolerant
to contradiction. In the early 20th century Jan Łukasiewicz investigated the
extension of the traditional true/false values to include a third value,
"possible", so inventing ternary logic, the first multi-valued logic.

Logics such as fuzzy logic have since been devised with an infinite number
of "degrees of truth", represented by a real number between 0 and 1.
Intuitionistic logic was proposed by L.E.J. Brouwer as the correct logic for
reasoning about mathematics, based upon his rejection of the law of the
excluded middle as part of his intuitionism. Brouwer rejected formalization
in mathematics, but his student Arend Heyting studied intuitionistic logic
formally, as did Gerhard Gentzen. Intuitionistic logic has come to be of great
interest to computer scientists, as it is a constructive logic, and is hence a
logic of what computers can do.

Modal logic is not truth conditional, and so it has often been proposed as a
non-classical logic. However, modal logic is normally formalized with the
principle of the excluded middle, and its relational semantics is bivalent, so
this inclusion is disputable.

Is logic empirical?

What is the epistemological status of the laws of logic? What sort of
argument is appropriate for criticising purported principles of logic? In an
influential paper entitled "Is logic empirical?" Hilary Putnam, building on a
suggestion of W.V. Quine, argued that in general the facts of propositional
logic have a similar epistemological status as facts about the physical
universe, for example as the laws of mechanics or of general relativity, and
in particular that what physicists have learned about quantum mechanics
provides a compelling case for abandoning certain familiar principles of
classical logic: if we want to be realists about the physical phenomena
described by quantum theory, then we should abandon the principle of
distributivity, substituting for classical logic the quantum logic proposed by
Garrett Birkhoff and John von Neumann.

Another paper by the same name by Sir Michael Dummett argues that
Putnam's desire for realism mandates the law of distributivity. Distributivity
of logic is essential for the realist's understanding of how propositions are
true of the world in just the same way as he has argued the principle of
bivalence is. In this way, the question, "Is logic empirical?" can be seen to
lead naturally into the fundamental controversy in metaphysics on realism
versus anti-realism.

Implication: strict or material?

It is obvious that the notion of implication formalized in classical logic does
not comfortably translate into natural language by means of "if… then…",
due to a number of problems called the paradoxes of material implication.

The first class of paradoxes involves counterfactuals, such as "If the moon is
made of green cheese, then 2+2=5", which are puzzling because natural
language does not support the principle of explosion. Eliminating this class
of paradoxes was the reason for C. I. Lewis's formulation of strict
implication, which eventually led to more radically revisionist logics such as
relevance logic.
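The puzzle is visible directly in the definition: material implication
"p → q" is just "(not p) or q", so any conditional with a false antecedent
comes out true regardless of the consequent. A one-line check in Python
(illustrative only):

# Material implication: "p -> q" is defined as (not p) or q,
# so a false antecedent makes the conditional true regardless of q.
def material_implies(p, q):
    return (not p) or q

moon_is_green_cheese = False
two_plus_two_is_five = False

# "If the moon is made of green cheese, then 2+2=5" is materially true:
print(material_implies(moon_is_green_cheese, two_plus_two_is_five))  # True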

The second class of paradoxes involves redundant premises, falsely
suggesting that we know the succedent because of the antecedent: thus "if
that man gets elected, granny will die" is materially true if granny happens to
be in the last stages of a terminal illness, regardless of the man's election
prospects. Such sentences violate the Gricean maxim of relevance, and can
be modelled by logics that reject the principle of monotonicity of entailment,
such as relevance logic.

Tolerating the impossible

Hegel was deeply critical of any simplified notion of the Law of Non-
Contradiction. His criticism was based on Leibniz's idea that this law of
logic also requires a sufficient ground in order to specify from what point
of view (or time) one says that something cannot contradict itself; a
building, for example, both moves and does not move: the ground for the
first is our solar system, for the second the earth. In Hegelian dialectic
the law of non-contradiction, of identity, itself relies upon difference and
so is not independently assertable.

Closely related to questions arising from the paradoxes of implication comes
the suggestion that logic ought to tolerate inconsistency. Relevance logic
and paraconsistent logic are the most important approaches here, though the
concerns are different: a key consequence of classical logic and some of its
rivals, such as intuitionistic logic, is that they respect the principle of
explosion, which means that the logic collapses if it is capable of deriving a
contradiction. Graham Priest, the main proponent of dialetheism, has argued
for paraconsistency on the grounds that there are, in fact, true contradictions.
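To see why explosion makes a single contradiction fatal, here is the
standard classical derivation of an arbitrary proposition q from the
premises p and not-p:

1. p (premise)
2. not-p (premise)
3. p or q (from 1, by disjunction introduction)
4. q (from 3 and 2, by disjunctive syllogism)

Paraconsistent logics block this argument, typically by rejecting or
restricting disjunctive syllogism, so that a local contradiction no longer
entails everything.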

Rejection of logical truth

The philosophical vein of various kinds of skepticism contains many kinds
of doubt and rejection of the various bases upon which logic rests, such as
the idea of logical form, correct inference, or meaning, typically leading to
the conclusion that there are no logical truths. Observe that this is opposite
to the usual views in philosophical skepticism, where logic directs skeptical
enquiry to doubt received wisdoms, as in the work of Sextus Empiricus.

Friedrich Nietzsche provides a strong example of the rejection of the usual
basis of logic: his radical rejection of idealization led him to reject truth
as "a mobile army of metaphors, metonyms, and anthropomorphisms—in short ...
metaphors which are worn out and without sensuous power; coins which have
lost their pictures and now matter only as metal, no longer as coins". His
rejection of truth did not lead him to reject the idea of either inference or
logic completely, but rather suggested that "logic [came] into existence in
man's head [out] of illogic, whose realm originally must have been immense.
Innumerable beings who made inferences in a way different from ours
perished". Thus there is the idea that logical inference has a use as a tool
for human survival, but that its existence does not support the existence of
truth, nor does it have a reality beyond the instrumental: "Logic, too, also
rests on assumptions that do not correspond to anything in the real world".
