Northern Kentucky University



CSC 425/525: Extra Notes on Chapter 2

Logic and First Order Predicate Calculus

For those of you who have had MAT 385, this should be review. If you feel you already understand the topic, you may skip sections I-IV below.

I. Introduction

Logic is the study of reasoning. It is applied to prove things, whether mathematical, philosophical, or scientific. It is the basis for a branch of computer programming used in artificial intelligence. It can be applied to perform software verification. Basically, logic is a systematic method for clearly expressing and demonstrating truths.

We focus on symbolic logic, that is, a logic where the statements expressed are symbols that represent concepts. We start with propositional logic where our statements are either true or false. This is a traditional two-valued logic. There are other logics, such as fuzzy logic (which deals with statements that contain a degree of truthfulness, evaluated as a real number between 0 and 1) which will be covered later in the semester.

In logic, we may write a statement in English, in propositional calculus, in predicate calculus, or through some other means. For these notes, we will limit ourselves to these three “languages”. A statement must be well-formed. We refer to such statements as well-formed formulas (wff). There are rules for forming wffs which will be introduced later.

In English, the following are legal logical statements:

• It is sunny out.

• You are a computer science student.

• The number twelve is larger than the number 10.

• A tree is a living entity.

In English, the following are not logical statements:

• How did you do on the exam? (this does not evaluate to true or false)

• He went to the movie. (we do not know who “He” is so we cannot prove whether this is true or false)

• God created the Universe. (we cannot prove this)

• Read the next chapter. (this is a command, not a statement)

Statements can include connectives. In logic, our connectives are AND, OR, NOT, implication (written as →) and equivalence (written as ↔ or ≡, but we will use ↔ in these notes). You should already be familiar with AND, OR and NOT through previous courses. Below are the truth tables for these three connectives (also known as Boolean operators).

|X |Y |X AND Y |X OR Y |NOT X |

|T |T |T |T |F |

|T |F |F |T |F |

|F |T |F |T |T |

|F |F |F |F |T |

As you should expect, the result of AND is true only if all of the arguments are true, and the result of OR is true if any of the arguments are true. AND and OR can be applied to two or more Boolean values. NOT only applies to a single argument and “flips it”, that is, true becomes false and false becomes true.

NOTE: when building a truth table, specify the combinations of T and F values for the arguments (X and Y above) in their proper order. Above, that order is TT, TF, FT, FF. We could also order these as FF, FT, TF, TT. With 2 arguments, there are 2² = 4 combinations; if we had 3 arguments, there would be 2³ = 8 combinations (we will see such an example in a few pages).
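The row counting described above can be sketched in a few lines of Python. This is an illustration only; the truth_rows helper is our own name, not a standard function. It enumerates all combinations in the T-first order used in the tables above:

```python
from itertools import product

# Enumerate all 2^n combinations of truth values for n propositions,
# in the T-first order used in the tables above (True before False).
def truth_rows(n):
    return list(product([True, False], repeat=n))

print(truth_rows(2))
# [(True, True), (True, False), (False, True), (False, False)]
print(len(truth_rows(3)))   # 8 combinations for 3 propositions
```

Listing [False, True] instead would produce the alternate FF, FT, TF, TT ordering mentioned above.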

Implication, also called the conditional connective, can be thought of, in some ways, like an if-then statement, so "X → Y" can also be read as "If X then Y". X is called the antecedent and in programming, we think of it as our condition. Y is the consequent (the conclusion, or what happens if the antecedent is true). However, implication is more far reaching than an if statement. We often use implication to denote a truth preserving rule, which we find when defining classes and their attributes. For instance, since all humans die (eventually), we can write a rule that says "if you are a man, then you will die", or man → mortal. We can also use implication to represent a causal situation (cause and effect) such as "if it rains, then the grass will be wet", or rain → wet grass. Note that in logic, implications must be truth preserving. By saying rain → wet grass, we are actually violating this because there are possibilities where it could rain and yet the grass remain dry (someone who has covered their lawn with a tarp or tent, or if the rain did not fall on that part of the city, etc.). We will not concern ourselves with non-truth-preserving knowledge at this point; we will study it in more detail later in the semester.

As with AND, OR and NOT, we want to derive a truth table for implication. What does the truthfulness of an if statement mean? Consider as an example, “if you are over seven feet in height, then you are considered tall.” The truth table of implications is perhaps less easy to understand:

|X |Y |X → Y |

|T |T |T |

|T |F |F |

|F |T |T |

|F |F |T |

Here, the implication is true if X is false, or if both X and Y are true. It seems reasonable to say that the implication is true when both the antecedent and the consequent are true, but why is the implication true when the antecedent is false? Let's examine our example, "if you are over seven feet in height, then you are considered tall". If you are over seven feet in height, you will be considered tall, so if X is true, Y is true. There are three other possibilities as shown in the truth table. What if you are not over seven feet in height? In this case, X is false. Some people may still consider you to be tall anyway, but some people may not. So Y may be true or Y may be false depending on the circumstance. Since, if X is false and Y is true, this is still a reasonable implication to make, row 3 is correct. And, if X is false and Y is false, this is again a reasonable implication to make, so row 4 is correct. Now consider the situation where X is true and Y is false. This means that you are over seven feet but you are not considered to be tall. This violates our implication, so row 2 is false. In effect, the implication says that if X is true, Y must be true; if X is false, Y may be true or false; but if X is true, Y cannot (must not) be false.

One problem with the above example is that the conclusion is based on human perspective. Let's instead consider an example from mathematics: "if an integer is 4, then its square is 16". Our first row is true: if x = 4, then x² = 16. Let's consider the fourth row. Assume the integer is 3. Then x = 4 is false, and x² = 16 is false, and our statement remains true. Another case is x = -4. Our antecedent is still false because x = 4 is false, but x² = 16 is true. Our statement is still true then. The last possibility is row 2. Since row 2 requires that the antecedent is true, we know x = 4. However, since 4² = 16, there is no way for the consequent to be false. Therefore, the second row in the truth table, where the antecedent is true and the consequent is false, is false.
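One way to internalize the implication table: in a programming language, X → Y behaves exactly like (NOT X) OR Y. A small Python sketch (implies is our own helper name, not a library function):

```python
def implies(x, y):
    # X -> Y is false only when X is true and Y is false
    return (not x) or y

for x in (True, False):
    for y in (True, False):
        print(x, y, implies(x, y))
# Only x=True, y=False prints False, matching row 2 of the table.
```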

Equivalence (↔) between two statements means that the two statements mean the same thing. This is the same as saying if statement1 then statement2 AND if statement2 then statement1, or statement1 is equivalent to statement2 means statement1 → statement2 AND statement2 → statement1. Here is the truth table for equivalence, worked through by applying → and AND.

|X |Y |X → Y |Y → X |(X → Y) AND (Y → X) |

|T |T |T |T |T |

|T |F |F |T |F |

|F |T |T |F |F |

|F |F |T |T |T |

What we see here is that X and Y are equivalent if they coincide (have the same truth values). This is the NOT of XOR or it is the COINCIDENCE Boolean operator.
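The observation that equivalence is the negation of XOR can be verified over all four rows. Here is a quick Python check (equiv and implies are our own helper names; on booleans, x != y behaves as XOR):

```python
def implies(x, y):
    return (not x) or y

def equiv(x, y):
    # X <-> Y defined as (X -> Y) AND (Y -> X)
    return implies(x, y) and implies(y, x)

for x in (True, False):
    for y in (True, False):
        # equivalence coincides with NOT XOR
        assert equiv(x, y) == (not (x != y))
print("equivalence matches NOT XOR on all four rows")
```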

II. Propositional Logic

Propositional logic (also called propositional calculus) is logic as described above, where statements are propositions built of a small set of primitives. These primitives are atomic propositions and the connectives described above (AND, OR, NOT, implication, equivalence). An atomic proposition is a statement that itself does not contain subcomponents such as other propositions or connectives. For simplicity, we will think of propositions as symbols that represent concepts in the real-world such as “it is sunny” or “Joe is a student”, or abstract concepts such as a number or variable (e.g., x = 4). You might think of these symbols as representing statements in English, such as “today is Tuesday”. We will usually denote propositions as single letters (some texts use only lower case letters to represent propositions, other texts use only upper case letters, here, we will use only upper case letters). You have already been viewing propositional calculus in the truth tables above as we have used X and Y in place of English statements.

While we are mentioning English, it should be noted that we can express ideas in many different ways in English. So, while we use AND, OR, NOT, implication and equivalence when writing our logical statements, in English, the same statements might use different words. The list below reflects some of the choices.

• AND: and, but, also, in addition, moreover

• Implication: A implies B, A therefore B, A only if B, B follows from A, A is a sufficient condition for B, B is a necessary condition for A

• Equivalence: A if and only if B, A is necessary and sufficient for B

• NOT: not, it is false that, it is not true that

• OR: or (there is only one way to say OR)

AND is also referred to as conjunction, OR as disjunction, and NOT as negation. You will often see symbols used for these as well:

AND: *, ∧

OR: +, ∨

NOT: ~, ′, ¯ (located over the argument or statement), ¬

Equivalence: ≡

NOTE: From this point forward in these notes, ~ will be used for NOT, * for AND, + for OR, → for implication, and ↔ for equivalence.

A wff in propositional logic will contain propositions, connectives as shown above, and parentheses (or square brackets), where a connective must occur between two propositions except ~, which must occur immediately before a single proposition or parenthesis (in some texts, where ′ is used for NOT, the symbol then appears after the proposition or parenthesis, not before). Here are some examples of wffs:

• A

• A → B

• ~(A * B) + (~C * D) [note here that the NOT applies only to (A * B)]

• ~(~(A * B) + (~C * ~(D + E)))

• A → (~B * C)

And here are some examples that are not wffs:

• ~ (no atomic proposition provided for NOT)

• A B * (connective is in the wrong place)

• A B * C ~D (missing connectives between A and B, and between C and ~D; although in some texts, * can be omitted to denote AND, in which case this is a wff)

• A * → B (missing a proposition after AND)

• (A * B) ) + C (too many parentheses)

• (A * (C + (D → E)) (not enough parentheses)

III. Evaluated Statements

If a statement is a wff, then we can evaluate it. We evaluate a statement by filling in the truth values for propositions, much like you would fill in variable values in algebra, and then applying the operators. All propositions will have values of true or false, and the operators operate on true or false values to yield true or false values themselves. The order in which operators are applied is given by the following list of operator precedence:

1. Items in parentheses, innermost parentheses first, applied left to right

2. Negation

3. AND

4. OR (NOTE: in some texts, AND and OR are considered at the same level of precedence)

5. Implication

6. Equivalence

As an example, the statement A + B * C + D would be performed by first applying B * C, then A + that value, and finally that value + D. That is, it would be applied as if we had fully parenthesized it to be (A + (B * C)) + D. If we considered AND and OR to be at the same level of operator precedence, then this would be ((A + B) * C) + D. The statement ~(A * B → C) will perform A * B first, followed by that → C, followed by negation. If we know that A is false, B is true and C is false, the entire statement is false. We show this by "plugging in the values" for A, B, and C. ~((A * B) → C) becomes ~((false * true) → false), which is ~(false → false), which is ~(true), which is false. We can also show all of the possible values of such a statement by deriving the truth table. Because the statement has multiple connectives, we will build the truth table in steps. First, we will find the truth table for A * B. Next, we will find the truth table for (A * B) → C. Finally, we can negate that truth table to find the statement's truth table. The truth table is shown below. You would build it in stages (column-by-column).

|A |B |C |A * B |A * B → C |~ (A * B → C) |

|T |T |T |T |T |F |

|T |T |F |T |F |T |

|T |F |T |F |T |F |

|T |F |F |F |T |F |

|F |T |T |F |T |F |

|F |T |F |F |T |F |

|F |F |T |F |T |F |

|F |F |F |F |T |F |
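As a sanity check, both the single evaluation and the full table just derived can be reproduced in a few lines of Python (implies and statement are our own helper names for this sketch):

```python
from itertools import product

def implies(x, y):
    return (not x) or y

def statement(a, b, c):
    # ~((A * B) -> C), following the precedence worked out above
    return not implies(a and b, c)

# "plug in" A = false, B = true, C = false
print(statement(False, True, False))   # False, as derived in the text

# rebuild the final truth table column row by row
for a, b, c in product([True, False], repeat=3):
    print(a, b, c, statement(a, b, c))
# Only the row A=True, B=True, C=False yields True.
```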

Notice that the truth table indicates that this statement is true only if A and B are true and C is false. A statement that is always false is known as a contradiction. A statement that is always true is called a tautology. Here are a few further examples. For each statement, a truth table is provided.

A + (~ (B → C))

|A |B |C |B → C |~ (B → C) |A + (~ (B → C)) |

|T |T |T |T |F |T |

|T |T |F |F |T |T |

|T |F |T |T |F |T |

|T |F |F |T |F |T |

|F |T |T |T |F |F |

|F |T |F |F |T |T |

|F |F |T |T |F |F |

|F |F |F |T |F |F |

(A → B) → C

|A |B |C |A → B |(A → B) → C |

|T |T |T |T |T |

|T |T |F |T |F |

|T |F |T |F |T |

|T |F |F |F |T |

|F |T |T |T |T |

|F |T |F |T |F |

|F |F |T |T |T |

|F |F |F |T |F |

(A + B) → (A * B)

|A |B |A + B |A * B |(A + B) → (A * B) |

|T |T |T |T |T |

|T |F |T |F |F |

|F |T |T |F |F |

|F |F |F |F |T |

Recall the definition of equivalence between two propositions, A and B, is (A → B) * (B → A). We saw that the truth table for equivalence shows that equivalence is true if the two propositions are both true, or the two propositions are both false. That is, two statements are equivalent if they coincide. Another way to think about this is that two statements are equivalent if their truth table values are the same. So, this gives us a way to prove that two statements are equivalent.

We show the following two statements are equivalent by generating their truth tables: A → B and ~ B → ~ A.

|A |B |A → B |~ A |~ B |~ B → ~ A |

|T |T |T |F |F |T |

|T |F |F |F |T |F |

|F |T |T |T |F |T |

|F |F |T |T |T |T |

The A → B column and the ~ B → ~ A column match in every row, so the two statements are equivalent. There are also a number of standard equivalence laws, listed below in both their + and * forms:

|Commutative |A + B ↔ B + A |A * B ↔ B * A |

|Associative |(A + B) + C ↔ A + (B + C) |(A * B) * C ↔ A * (B * C) |

|Distributive |A + (B * C) ↔ |A * (B + C) ↔ |

| |(A + B) * (A + C) |(A * B) + (A * C) |

|Identity |A + false ↔ A |A * true ↔ A |

|Null |A + true ↔ true |A * false ↔ false |

|Idempotent |A + A ↔ A |A * A ↔ A |

|Complement |A + ~ A ↔ true |A * ~ A ↔ false |

|DeMorgan’s |~(A + B) ↔ ~ A * ~ B |~ (A * B) ↔ ~ A + ~ B |

Three additional laws apply to operators other than AND and OR:

double negation: ~ (~ A) ↔ A

equivalence: “A is equivalent to B” ↔ (A → B) * (B → A)

implication: A → B ↔ ~A + B

These laws are covered in CSC 362 and other details are skipped here.
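Since each law must hold for every combination of truth values, each can be verified by brute force over all rows. A short Python check of several of them (a sketch; implies is our own helper name):

```python
from itertools import product

def implies(x, y):
    return (not x) or y

for a, b in product([True, False], repeat=2):
    # implication law: A -> B is equivalent to ~A + B
    assert implies(a, b) == ((not a) or b)
    # DeMorgan's laws
    assert (not (a or b)) == ((not a) and (not b))
    assert (not (a and b)) == ((not a) or (not b))
    # double negation
    assert (not (not a)) == a
print("all laws verified over every row")
```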

IV. Predicate Logic

Propositions are used to represent singular statements, such as “It is sunny” or “the car is fast”. But what if I want to express statements about various cars, not just the class car? For instance, we might want to say that “Italian sports cars are fast” but not necessarily other cars. Or “Joe’s car is fast but Susan’s car is not fast”? We need to enhance our logic from propositions to predicates. A predicate is a statement, much like a proposition, that contains one or more variables which represent instances. A variable can have one of two scopes: for all (everything) or there exists (at least one item). With predicates, we can enrich our logic to be far more expressive. Predicates, unlike propositions, will include variables in parentheses (we might also call these arguments) and the predicates are typically preceded with quantifiers which dictate the scope of the variable(s). We will use two quantifiers: the universal quantifier, ∀, means “for all”; the existential quantifier, ∃, means “there exists”. NOTE: predicates in some texts are indicated with upper case letters and variables with lower case letters. Here, all variables will be indicated using lower case letters, and predicates will either be lower case single letters, or lower case words.

The statement ∀x: p(x) means that, for all x, it is a p, or “everything is a p”. For instance, we might say ∀x: round(x), which means “everything is round”. This of course is untrue. So we are more likely to use the universal quantifier in an implication rule. Returning to an earlier statement, “all men are mortal”, we can state this as “for everything in the universe, if it is a man then it is mortal”. This can be written as ∀x: man(x) → mortal(x). Thus, we can write “truth preserving rules”, implications that, if the condition is true, then the conclusion must be true.

The existential quantifier is used to show that at least one instance exists. We might use this to say that “the United States has a president”. This could be expressed as ∃x: presidentofus(x). If we assume that US is a constant that represents the United States, and the predicate president takes two arguments, the person and the country, then we could also write this as ∃x: president(x, US). We can also use the same predicate to say ~∃x: president(x, England). That is, while the US has a president, England does not.

Statements can have multiple quantifiers including quantifiers of both types. Here are some examples:

(∀x)(∀y)(∀z) : father(x, y) * father(y, z) → grandfather(x, z)

“If x is y’s father and y is z’s father, then x is z’s grandfather.”

(∀x)(∀y) : student(x) * teacher(y) → canteach(y, x)

“If x is a student and y is a teacher then y can teach x.”

(∀x)(∃y) : student(x) → teacher(y) * canteach(y, x)

“If x is a student, then there exists someone who teaches x.”

This could also be stated as “all students have a teacher.”

NOTE: We would place [ ] around the entire implication statement to properly denote the scope rule, but we will omit that for brevity. So for instance, the first statement really should read: (∀x)(∀y)(∀z) : [father(x, y) * father(y, z) → grandfather(x, z)].

Let’s go through some examples of translating English sentences into predicate calculus statements.

John loves only Mary: (∀x) loves(John, x) → Mary(x).

“if there is anything that John loves, it is Mary”

Only John loves Mary: (∀x) loves(x, Mary) → John(x).

“if anything loves Mary, it is John.”

All dogs chase all rabbits: (∀x) (∀y) dog(x) * rabbit(y) → chase(x, y).

Some dogs chase all rabbits: (∃x) (∀y) dog(x) * rabbit(y) → chase(x, y).

Some dogs chase rabbits: (∃x) (∃y) dog(x) * rabbit(y) → chase(x, y).

Only dogs chase rabbits: (∀x) (∀y) chase(x, y) * rabbit(y) → dog(x).

Dogs chase only rabbits: (∀x) (∀y) chase(x, y) * dog(x) → rabbit(y).

Notice the subtlety here in how the quantifier can change our interpretation of what the statement means. It can be very tricky!

When we negate a quantifier, we must be careful in what we negate because the position of the negation will change our interpretation of the statement. Consider “Everything is beautiful”, which can be written as (∀x) beautiful(x). If we write ~(∃x) beautiful(x), we are saying “nothing is beautiful” (“there does not exist an x that makes beautiful(x) true”). Here are some additional examples that include two quantifiers (let’s assume that the negation of loves is hates):

Everybody hates somebody: (∀x) (∃y): ~loves(x, y)

Somebody loves everybody: (∃x) (∀y): loves(x, y)

Somebody hates everybody: (∃x) (∀y): ~loves(x, y).

What if we want to negate a statement? For instance, the negation (opposite) of “everything is beautiful” is really “it is false that everything is beautiful”, which means “some things are not beautiful”. This is expressed as (∃x) ~beautiful(x). So notice that to negate a universal quantifier, we change the quantifier and move the negation in front of the predicate.

Negating a statement with a quantifier should remind you of DeMorgan’s law. In DeMorgan’s law, we moved the negation in front of each proposition and changed the connective from AND to OR (or OR to AND). Here, we change the quantifier from universal to existential (or existential to universal) and move the negation to appear in front of the predicate (in fact, we are not merely moving the negation in front of the predicate, but rather negating the predicate). The negation of “nothing is beautiful” is “some things are beautiful”. We can write these two statements as (∀x) ~beautiful(x) and (∃x): beautiful(x) respectively. Again, the two opposites are formed by taking the original, changing the quantifier, and then negating the predicate (changing ~beautiful(x) to beautiful(x)).

In summary, the opposite of (∀x) p(x) is (∃x) ~p(x), and the opposite of (∃x) p(x) is (∀x) ~p(x).

What if we have multiple quantifiers and want to negate a statement? This becomes a bit more complex. Consider again “Everybody hates somebody”, which is (∀x) (∃y): ~loves(x, y). The opposite of this statement is “Everybody hates somebody is false”, or “not everybody hates somebody”, and is written as ~(∀x) (∃y): ~loves(x, y). Notice that this statement has two negations. Can I just cancel them and yield (∀x) (∃y): loves(x, y)? This new statement says that “everybody loves somebody”, but that isn’t what we are saying when we say “not everybody hates somebody”. In fact, the former statement says that everyone has someone that they love, but the latter says that there are some people out there who hate no one (they love everyone).

In order to distribute the negation, we move the negation in by one quantifier and change the outer quantifier. So, ~(∀x) (∃y): ~loves(x, y) is the same as (∃x) ~(∃y): ~loves(x, y), which means that “for some people, there are no people that they do not love”. We can continue to distribute the negation to arrive at (∃x) (∀y): ~~loves(x, y), and now we can apply double negation to get (∃x) (∀y): loves(x, y). This last statement means “some people love everyone”.
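Over a finite domain, “for all” behaves like Python’s all() and “there exists” like any(), so the negation rules above can be checked directly. The domain and the beautiful() predicate below are made up purely for illustration:

```python
# The domain and predicate here are our own invented example.
domain = [1, 2, 3, 4]

def beautiful(x):
    return x % 2 == 0   # arbitrary choice: even numbers are "beautiful"

# ~(forall x: p(x))  is the same as  (exists x: ~p(x))
assert (not all(beautiful(x) for x in domain)) == \
       any(not beautiful(x) for x in domain)

# ~(exists x: p(x))  is the same as  (forall x: ~p(x))
assert (not any(beautiful(x) for x in domain)) == \
       all(not beautiful(x) for x in domain)

print("quantifier negation rules hold on this domain")
```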

V. Using Predicate Calculus in AI

Many AI researchers have used predicate calculus as a way to represent knowledge. There are two reasons for using predicate calculus:

1. Conclusions are truth preserving, and therefore, results from an AI system using predicate calculus can be believed. Some approaches in AI are not based on a rigorous formalism and therefore conclusions drawn may still be doubted.

2. There are available methods to use the represented knowledge. These are modus ponens, modus tollens, resolution, and unification.

Here, we take a look at these methods. It should be noted that the literature often refers to resolution and modus ponens as the same thing. They are not, and resolution will be covered in chapter 14. Here, we will look at modus ponens, modus tollens and unification.

Imagine that you have the implication rule A → B and that you know A is true. You can then conclude B is true. Why? The implication truth table tells us that implication is true if either A and B are both true or A is false. We make the assumption that our implication rule is true (truth preserving). Therefore, since A is known to be true, B must also be true, or else the implication rule is not true (it will not match the truth table) and therefore is not truth preserving, violating our assumption. Now, assume we have the following rules:

A → B

A → C

B * C → D

D → E

We know A is true; this lets us conclude (eventually) that E is true because we can conclude that B and C are both true, therefore B * C is true, and therefore D is true. This process of taking something that we know is true, applying it to the antecedent of a rule, and then concluding the rule’s consequent, is known as modus ponens. It’s very simple yet very useful. Although this is formally called modus ponens, we will see in chapter 3 and later that the same process is used in what is known as data-driven search, or forward chaining.
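The forward-chaining process just described can be sketched in a few lines of Python (a minimal sketch; the rule representation is our own). Each rule is a pair (set of antecedents, consequent); we repeatedly fire any rule whose antecedents are all known until nothing new can be concluded:

```python
# The four rules above: A -> B, A -> C, B * C -> D, D -> E
rules = [
    ({"A"}, "B"),
    ({"A"}, "C"),
    ({"B", "C"}, "D"),
    ({"D"}, "E"),
]

known = {"A"}          # we start knowing only that A is true
changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        # modus ponens: if all antecedents are known, conclude the consequent
        if antecedents <= known and consequent not in known:
            known.add(consequent)
            changed = True

print(sorted(known))   # ['A', 'B', 'C', 'D', 'E']
```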

In modus tollens, we use the implication rule but reverse the search process. If we know A → B and we know that B is false, then we can conclude that A is false. Why? Again, recall the implication truth table. B (the consequent) is false in two rows, T → F and F → F. However, the row T → F is false, and therefore we cannot conclude that A is true. Therefore, A must be false (again, under the assumption that our rule is truth preserving). Suppose we have the following rules:

A → B

B → C

C → D

And we know D is false. We will be able to conclude that A must be false as well: if A were true, then B and C would be true, and therefore D would also have to be true. This “backward” approach is similar to, but not exactly like, backward chaining, also known as goal-driven search.
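The backward propagation of falsity can be sketched the same way as the forward-chaining example (a sketch under the same three rules; names are our own):

```python
# The chained rules above: A -> B, B -> C, C -> D
rules = [("A", "B"), ("B", "C"), ("C", "D")]

known_false = {"D"}    # we start knowing only that D is false
changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        # modus tollens: if X -> Y holds and Y is false, X must be false
        if consequent in known_false and antecedent not in known_false:
            known_false.add(antecedent)
            changed = True

print(sorted(known_false))   # ['A', 'B', 'C', 'D']
```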

When we use predicate calculus, we have a problem. Imagine that I know the following:

∀X: man(X) → mortal(X).

man(Frank Zappa).

Can we conclude mortal(Frank Zappa)? Not directly, because the rule says “X” and the fact says “Frank Zappa”, and X is not Frank Zappa. So, we need an additional mechanism that allows us to unify the variable in a predicate calculus term with the value in a predicate statement. That mechanism is called unification. We denote the unification as {X/Frank Zappa}, which means “substitute X with Frank Zappa”. Now, we can conclude mortal(Frank Zappa).
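A toy unifier for one-argument predicates can make this concrete. This is only a sketch under our own conventions (a single upper-case letter is a variable; anything else is a constant), not a full unification algorithm:

```python
def is_variable(term):
    # our convention for this sketch: a single upper-case letter is a variable
    return len(term) == 1 and term.isupper()

def unify(pattern, fact):
    pred1, arg1 = pattern
    pred2, arg2 = fact
    if pred1 != pred2:
        return None                 # different predicates: no match
    if is_variable(arg1):
        return {arg1: arg2}         # bind the variable to the constant
    return {} if arg1 == arg2 else None

# rule antecedent man(X) against the known fact man(Frank Zappa)
subst = unify(("man", "X"), ("man", "Frank Zappa"))
print(subst)                        # {'X': 'Frank Zappa'}

# apply the substitution {X/Frank Zappa} to the consequent mortal(X)
print(("mortal", subst["X"]))       # ('mortal', 'Frank Zappa')
```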

The book chapter presents an example of a financial advisor. On page 75 of the textbook, you will find the “knowledge base” for the example – the rules and pieces of knowledge that we know. We apply the rules (statements 1-8) to the pieces of knowledge (9-11) to determine whether this person should invest in stocks or savings. Notice that the knowledge base is not complete because we know amount_saved(22000) and earnings(25000, steady), but we need to know values for minsavings(Y) and minincome(Y), which require some mathematical formulas (given on page 74). You should be able to step through the entire example (73-76) to see how the process works. Here, we see an additional example where we can draw some conclusions.

Facts:

1) Poodles are dogs.

2) Cocoa is a poodle.

3) Every dog has his day.

4) Cats and dogs are enemies.

5) Calicos are cats.

6) Fluffy is a calico.

Predicate Calculus form:

1) ∀x: poodle(x) → dog(x).

2) poodle(cocoa).

3) ∀x ∃y: dog(x) → hasday(x, y).

4) ∀x ∀y: dog(x) * cat(y) → enemies(x, y).

5) ∀x: calico(x) → cat(x).

6) calico(fluffy).

First, let’s try to prove “cocoa will have his day”. Applying 1 and 2 gives us dog(cocoa) by unifying x to cocoa. Applying this conclusion to 3 gives us hasday(cocoa, y); therefore, there exists a day that cocoa “will have”.

Now let’s find an enemy for cocoa. Again, 1 and 2 tell us dog(cocoa). Now 5 and 6 can be combined to conclude cat(fluffy) by unifying x to fluffy. Taking these conclusions and applying them to 4 gives us dog(cocoa) * cat(fluffy) → enemies(cocoa, fluffy), or fluffy is an enemy of cocoa, so we have found an enemy of cocoa’s.
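The whole derivation can be sketched as forward chaining over ground facts. This is only an illustration (the rules are hard-coded as Python conditions, and the day y in rule 3 is left implicit for simplicity):

```python
# Starting facts: poodle(cocoa) and calico(fluffy)
facts = {("poodle", "cocoa"), ("calico", "fluffy")}

def step(facts):
    new = set(facts)
    for fact in facts:
        if fact[0] == "poodle":
            new.add(("dog", fact[1]))       # rule 1: poodles are dogs
        if fact[0] == "calico":
            new.add(("cat", fact[1]))       # rule 5: calicos are cats
        if fact[0] == "dog":
            new.add(("hasday", fact[1]))    # rule 3: every dog has his day
    # rule 4: every dog is an enemy of every cat
    for f1 in facts:
        for f2 in facts:
            if f1[0] == "dog" and f2[0] == "cat":
                new.add(("enemies", f1[1], f2[1]))
    return new

# repeat until no new facts can be concluded (a fixed point)
while True:
    updated = step(facts)
    if updated == facts:
        break
    facts = updated

print(("hasday", "cocoa") in facts)              # True
print(("enemies", "cocoa", "fluffy") in facts)   # True
```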

See the textbook and website for additional examples.
