
What is Logic?

The term "logic" comes from the Greek word logos, which is variously translated
as "sentence", "discourse", "reason", "rule", and "ratio".

So what is logic? Briefly speaking, we might define logic as the study of the
principles of correct reasoning.

Nature of logic
Logic is the most important aspect of Artificial Intelligence, and the nature of logic
helps in the formation of theories for intelligent entities.

The concept of logical form is central to logic; it being held that the validity of an
argument is determined by its logical form, not by its content. Traditional
Aristotelian syllogistic logic and modern symbolic logic are examples of formal
logics.

• Informal logic is the study of natural language arguments. The study of
fallacies is an especially important branch of informal logic. The dialogues of
Plato are a good example of informal logic.

• Formal logic is the study of inference with purely formal content, where that
content is made explicit. (An inference possesses a purely formal content if it
can be expressed as a particular application of a wholly abstract rule, that is,
a rule that is not about any particular thing or property. The works of Aristotle
contain the earliest known formal study of logic, which was incorporated in
the late nineteenth century into modern formal logic. In many definitions of
logic, logical inference and inference with purely formal content are the same.
This does not render the notion of informal logic vacuous, because no formal
logic captures the entire nuance of natural language.)

• Symbolic logic is the study of symbolic abstractions that capture the formal
features of logical inference. Symbolic logic is often divided into two
branches, propositional logic and predicate logic.

• Mathematical logic is an extension of symbolic logic into other areas, in
particular to the study of model theory, proof theory, set theory, and recursion
theory.
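The idea that validity depends on logical form rather than content can be made concrete with a small sketch. The Python fragment below (illustrative only; the function names are our own) checks whether a propositional formula is a tautology by brute-force enumeration of truth assignments:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def is_tautology(formula, num_vars):
    """A formula is a tautology if it is true under every truth assignment."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=num_vars))

# Modus ponens as a formula: ((p -> q) and p) -> q.
# It is valid whatever p and q stand for -- form, not content.
modus_ponens = lambda p, q: implies(implies(p, q) and p, q)
print(is_tautology(modus_ponens, 2))  # True
```

The same checker rejects an invalid form such as affirming the consequent, again purely on the basis of its shape.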
What is Artificial Intelligence?

Definition: The science of developing methods to solve problems usually
associated with human intelligence.

Alternate definitions:

• building intelligent entities or agents;
• making computers think or behave like humans;
• studying human thinking through computational models;
• generating intelligent behavior, reasoning, and learning.

Humankind has given itself the scientific name homo sapiens--man the wise--
because our mental capacities are so important to our everyday lives and our
sense of self. The field of artificial intelligence, or AI, attempts to understand
intelligent entities. Thus, one reason to study it is to learn more about ourselves.
But unlike philosophy and psychology, which are also concerned with
intelligence, AI strives to build intelligent entities as well as understand them.
Another reason to study AI is that these constructed intelligent entities are
interesting and useful in their own right. AI has produced many significant and
impressive products even at this early stage in its development. Although no one
can predict the future in detail, it is clear that computers with human-level
intelligence (or better) would have a huge impact on our everyday lives and on
the future course of civilization.

AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain,
whether biological or electronic, to perceive, understand, predict, and
manipulate a world far larger and more complicated than itself? How do we go
about making something with those properties? These are hard questions, but
unlike the search for faster-than-light travel or an antigravity device, the
researcher in AI has solid evidence that the quest is possible. All the researcher
has to do is look in the mirror to see an example of an intelligent system.

AI is one of the newest disciplines. It was formally initiated in 1956, when the
name was coined, although at that point work had been under way for about five
years. Along with modern genetics, it is regularly cited as the “field I would most
like to be in” by scientists in other disciplines. A student in physics might
reasonably feel that all the good ideas have already been taken by Galileo,
Newton, Einstein, and the rest, and that it takes many years of study before one
can contribute new ideas. AI, on the other hand, still has openings for a full-time
Einstein.
The study of intelligence is also one of the oldest disciplines. For over 2000
years, philosophers have tried to understand how seeing, learning, remembering,
and reasoning could, or should, be done. The advent of usable computers in the
early 1950s turned the learned but armchair speculation concerning these mental
faculties into a real experimental and theoretical discipline. Many felt that the new
“Electronic Super-Brains” had unlimited potential for intelligence. “Faster Than
Einstein” was a typical headline. But as well as providing a vehicle for creating
artificially intelligent entities, the computer provides a tool for testing theories of
intelligence, and many theories failed to withstand the test--a case of “out of the
armchair, into the fire.” AI has turned out to be more difficult than many at first
imagined and modern ideas are much richer, more subtle, and more interesting
as a result.

AI currently encompasses a huge variety of subfields, from general-purpose
areas such as perception and logical reasoning, to specific tasks such as playing
chess, proving mathematical theorems, writing poetry, and diagnosing
diseases. Often, scientists in other fields move gradually into artificial
intelligence, where they find the tools and vocabulary to systematize and
automate the intellectual tasks on which they have been working all their lives.
Similarly, workers in AI can choose to apply their methods to any area of human
intellectual endeavor. In this sense, it is truly a universal field.
Four possible goals to pursue in Artificial Intelligence

• Systems that think like humans.
• Systems that think rationally.
• Systems that act like humans.
• Systems that act rationally.

Historically, all four approaches have been followed. As one might expect, a
tension exists between approaches centered around humans and approaches
centered around rationality. (We should point out that by distinguishing between
human and rational behavior, we are not suggesting that humans are necessarily
``irrational'' in the sense of ``emotionally unstable'' or ``insane.'' One merely need
note that we often make mistakes; we are not all chess grandmasters even
though we may know all the rules of chess; and unfortunately, not everyone gets
an A on the exam. Some systematic errors in human reasoning are cataloged by
Kahneman et al.) A human-centered approach must be an empirical science,
involving hypothesis and experimental confirmation. A rationalist approach
involves a combination of mathematics and engineering. People in each group
sometimes cast aspersions on work done in the other groups, but the truth is that
each direction has yielded valuable insights. Let us look at each in more detail.

Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing (Turing, 1950), was designed to
provide a satisfactory operational definition of intelligence. Turing defined
intelligent behavior as the ability to achieve human-level performance in all
cognitive tasks, sufficient to fool an interrogator. Roughly speaking, the test he
proposed is that the computer should be interrogated by a human via a teletype,
and passes the test if the interrogator cannot tell if there is a computer or a
human at the other end. For now, programming a computer to pass the test
provides plenty to work on. The computer would need to possess the following
capabilities:

• natural language processing to enable it to communicate successfully in
English (or some other human language);
• knowledge representation to store information provided before or during
the interrogation;
• automated reasoning to use the stored information to answer questions
and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and
extrapolate patterns.

Turing's test deliberately avoided direct physical interaction between the
interrogator and the computer, because physical simulation of a person is
unnecessary for intelligence. However, the so-called total Turing Test includes
a video signal so that the interrogator can test the subject's perceptual abilities,
as well as the opportunity for the interrogator to pass physical objects ``through
the hatch.'' To pass the total Turing Test, the computer will need

• computer vision to perceive objects, and
• robotics to move them about.

Within AI, there has not been a big effort to try to pass the Turing test. The issue
of acting like a human comes up primarily when AI programs have to interact with
people, as when an expert system explains how it came to its diagnosis, or a
natural language processing system has a dialogue with a user. These programs
must behave according to certain normal conventions of human interaction in
order to make themselves understood. The underlying representation and
reasoning in such a system may or may not be based on a human model.

Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have
some way of determining how humans think. We need to get inside the actual
workings of human minds. There are two ways to do this: through introspection--
trying to catch our own thoughts as they go by--or through psychological
experiments. Once we have a sufficiently precise theory of the mind, it becomes
possible to express the theory as a computer program. If the program's
input/output and timing behavior matches human behavior, that is evidence that
some of the program's mechanisms may also be operating in humans. For
example, Newell and Simon, who developed GPS, the ``General Problem
Solver'' (Newell and Simon, 1961), were not content to have their program
correctly solve problems. They were more concerned with comparing the trace of
its reasoning steps to traces of human subjects solving the same problems. This
is in contrast to other researchers of the same time (such as Wang (1960)), who
were concerned with getting the right answers regardless of how humans might
do it. The interdisciplinary field of cognitive science brings together computer
models from AI and experimental techniques from psychology to try to construct
precise and testable theories of the workings of the human mind. Although
cognitive science is a fascinating field in itself, we are not going to be discussing
it all that much in this book. We will occasionally comment on similarities or
differences between AI techniques and human cognition. Real cognitive science,
however, is necessarily based on experimental investigation of actual humans or
animals, and we assume that the reader only has access to a computer for
experimentation. We will simply note that AI and cognitive science continue to
fertilize each other, especially in the areas of vision, natural language, and
learning.

Thinking rationally: The laws of thought approach


The Greek philosopher Aristotle was one of the first to attempt to codify ``right
thinking,'' that is, irrefutable reasoning processes. His famous syllogisms
provided patterns for argument structures that always gave correct conclusions
given correct premises. For example, ``Socrates is a man; all men are mortal;
therefore Socrates is mortal.'' These laws of thought were supposed to govern
the operation of the mind, and initiated the field of logic.

The development of formal logic in the late nineteenth and early twentieth
centuries, which we describe in more detail in Chapter 6, provided a precise
notation for statements about all kinds of things in the world and the relations
between them. (Contrast this with ordinary arithmetic notation, which provides
mainly for equality and inequality statements about numbers.) By 1965, programs
existed that could, given enough time and memory, take a description of a
problem in logical notation and find the solution to the problem, if one exists. (If
there is no solution, the program might never stop looking for it.) The so-called
logicist tradition within artificial intelligence hopes to build on such programs to
create intelligent systems.

There are two main obstacles to this approach. First, it is not easy to take
informal knowledge and state it in the formal terms required by logical notation,
particularly when the knowledge is less than 100% certain. Second, there is a big
difference between being able to solve a problem ``in principle'' and doing so in
practice. Even problems with just a few dozen facts can exhaust the
computational resources of any computer unless it has some guidance as to
which reasoning steps to try first. Although both of these obstacles apply to any
attempt to build computational reasoning systems, they appeared first in the
logicist tradition because the power of the representation and reasoning systems
is well-defined and fairly well understood.

Acting rationally: The rational agent approach


Acting rationally means acting so as to achieve one's goals, given one's beliefs.
An agent is just something that perceives and acts. (This may be an unusual use
of the word, but you will get used to it.) In this approach, AI is viewed as the
study and construction of rational agents.
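The notion of an agent as something that perceives and acts can be sketched minimally in code. The rule table below is a hypothetical placeholder, not a standard API:

```python
class ReflexAgent:
    """A minimal agent: something that perceives and acts.

    The mapping from percepts to actions is an invented example."""
    def __init__(self, rules):
        self.rules = rules  # percept -> action table

    def act(self, percept):
        # Choose the action the rules prescribe; default to doing nothing.
        return self.rules.get(percept, "noop")

# A toy thermostat-like agent (illustrative only):
agent = ReflexAgent({"too_cold": "heat_on", "too_hot": "heat_off"})
print(agent.act("too_cold"))  # heat_on
```

Rational agent design then asks: given its percepts, does the agent's choice of action best achieve its goals?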

In the “laws of thought” approach to AI, the whole emphasis was on correct
inferences. Making correct inferences is sometimes part of being a rational
agent, because one way to act rationally is to reason logically to the conclusion
that a given action will achieve one's goals, and then to act on that conclusion.
On the other hand, correct inference is not all of rationality, because there are
often situations where there is no provably correct thing to do, yet something
must still be done. There are also ways of acting rationally that cannot be
reasonably said to involve inference. For example, pulling one's hand off of a hot
stove is a reflex action that is more successful than a slower action taken after
careful deliberation.

All the “cognitive skills” needed for the Turing Test are there to allow rational
actions. Thus, we need the ability to represent knowledge and reason with it
because this enables us to reach good decisions in a wide variety of situations.
We need to be able to generate comprehensible sentences in natural language
because saying those sentences helps us get by in a complex society. We need
learning not just for erudition, but because having a better idea of how the world
works enables us to generate more effective strategies for dealing with it. We
need visual perception not just because seeing is fun, but in order to get a better
idea of what an action might achieve--for example, being able to see a tasty
morsel helps one to move toward it.

The study of AI as rational agent design therefore has two advantages. First, it is
more general than the “laws of thought” approach, because correct inference is
only a useful mechanism for achieving rationality, and not a necessary one.
Second, it is more amenable to scientific development than approaches based
on human behavior or human thought, because the standard of rationality is
clearly defined and completely general. Human behavior, on the other hand, is
well-adapted for one specific environment and is the product, in part, of a
complicated and largely unknown evolutionary process that still may be far from
achieving perfection. We will see that despite the apparent simplicity with which
the problem can be stated, an enormous variety of issues come up when we try
to solve it. One important point to keep in mind: we will see before too long that
achieving perfect rationality--always doing the right thing--is not possible in
complicated environments. The computational demands are just too high.

What is intelligence?
Definitions

• Intelligence – inter ligare (Latin) – the capacity of creating connections
between notions.

• Wikipedia: the ability to solve problems.

• WordNet: the ability to comprehend; to understand and profit from
experience.

• Complex use of creativity, talent, and imagination.

• Biology – intelligence is the ability to adapt to new conditions and to
successfully cope with life situations.

• Psychology – a general term encompassing various mental abilities,
including the ability to remember and use what one has learned, in order
to solve problems, adapt to new situations, and understand and
manipulate one’s reality.

• Nonlinear, non-predictable behavior.

More generally, intelligence is the ability to face problems in an un-programmed
(creative) manner.

Intelligence exists as a very general mental capability involving the ability to
reason, plan, solve problems, think abstractly, comprehend complex ideas, learn
quickly, and learn from experience. The brain processes involved are little
understood.

Mainstream thinking in psychology regards human intelligence not as a single
ability or cognitive process but rather as an array of separate components.
Research in AI has focused chiefly on the following components of intelligence:
learning, reasoning, problem-solving, perception, and language-understanding.

Learning

Learning is distinguished into a number of different forms. The simplest is
learning by trial-and-error. For example, a simple program for solving mate-in-one
chess problems might try out moves at random until one is found that
achieves mate. The program remembers the successful move and next time the
computer is given the same problem it is able to produce the answer
immediately. The simple memorising of individual items--solutions to problems,
words of vocabulary, etc.--is known as rote learning.
Rote learning is relatively easy to implement on a computer. More challenging is
the problem of implementing what is called generalisation. Learning that involves
generalisation leaves the learner able to perform better in situations not
previously encountered. A program that learns past tenses of regular English
verbs by rote will not be able to produce the past tense of e.g. "jump" until
presented at least once with "jumped", whereas a program that is able to
generalise from examples can learn the "add-ed" rule, and so form the past tense
of "jump" in the absence of any previous encounter with this verb. Sophisticated
modern techniques enable programs to generalise complex rules from data.
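The contrast between rote learning and generalisation can be illustrated with a small sketch. The class names and the simplified "add -ed" rule below are our own; real learners induce such rules from data rather than having them built in:

```python
# Rote learner: only answers what it has memorised.
class RoteLearner:
    def __init__(self):
        self.memory = {}
    def learn(self, verb, past):
        self.memory[verb] = past
    def past_tense(self, verb):
        return self.memory.get(verb)  # None for unseen verbs

# Generalising learner: applies an induced "add -ed" rule to new verbs.
class RuleLearner:
    def past_tense(self, verb):
        # Assumed regular-verb rule; real systems also handle exceptions.
        if verb.endswith("e"):
            return verb + "d"
        return verb + "ed"

rote = RoteLearner()
rote.learn("walk", "walked")
print(rote.past_tense("jump"))           # None: never saw "jumped"
print(RuleLearner().past_tense("jump"))  # jumped
```

The rote learner fails on any verb outside its memory, whereas the rule-based learner performs well in situations not previously encountered.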

Reasoning

To reason is to draw inferences appropriate to the situation in hand. Inferences
are classified as either deductive or inductive. An example of the former is "Fred
is either in the museum or the cafe; he isn't in the cafe; so he's in the museum",
and of the latter "Previous accidents just like this one have been caused by
instrument failure; so probably this one was caused by instrument failure". The
difference between the two is that in the deductive case, the truth of the premises
guarantees the truth of the conclusion, whereas in the inductive case, the truth of
the premise lends support to the conclusion that the accident was caused by
instrument failure, but nevertheless further investigation might reveal that,
despite the truth of the premise, the conclusion is in fact false.
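The deductive example about Fred is an instance of disjunctive syllogism, which can be sketched as follows (a toy illustration over plain strings; real reasoners work over structured logical formulas):

```python
def disjunctive_syllogism(disjuncts, negated):
    """From 'A or B' and 'not B', conclude 'A'."""
    remaining = [d for d in disjuncts if d != negated]
    if len(remaining) == 1:
        return remaining[0]
    return None  # premises do not determine a unique conclusion

# "Fred is either in the museum or the cafe; he isn't in the cafe."
print(disjunctive_syllogism(["museum", "cafe"], "cafe"))  # museum
```

Because the inference is deductive, the conclusion is guaranteed whenever the premises are true; no further investigation can overturn it.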

There has been considerable success in programming computers to draw
inferences, especially deductive inferences. However, a program cannot be said
to reason simply in virtue of being able to draw inferences. Reasoning involves
drawing inferences that are relevant to the task or situation in hand. One of the
hardest problems confronting AI is that of giving computers the ability to
distinguish the relevant from the irrelevant.

Problem-solving

Problems have the general form: given such-and-such data, find x. A huge
variety of types of problem is addressed in AI. Some examples are: finding
winning moves in board games; identifying people from their photographs; and
planning series of movements that enable a robot to carry out a given task.

Problem-solving methods divide into special-purpose and general-purpose. A
special-purpose method is tailor-made for a particular problem, and often exploits
very specific features of the situation in which the problem is embedded. A
general-purpose method is applicable to a wide range of different problems. One
general-purpose technique used in AI is means-end analysis, which involves the
step-by-step reduction of the difference between the current state and the goal
state. The program selects actions from a list of means--which in the case of,
say, a simple robot, might consist of pickup, putdown, move-forward, move-back,
move-left, and move-right--until the current state is transformed into the goal
state.
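Means-end analysis as described above can be sketched in a toy 2-D grid world. The action names follow the text; the grid environment and the greedy difference-reduction strategy are simplifying assumptions:

```python
def means_end_analysis(start, goal, max_steps=100):
    """Greedily pick the action that reduces the difference between the
    current state and the goal state, until they coincide."""
    x, y = start
    gx, gy = goal
    plan = []
    while (x, y) != (gx, gy) and len(plan) < max_steps:
        if x < gx:
            x, plan = x + 1, plan + ["move-right"]
        elif x > gx:
            x, plan = x - 1, plan + ["move-left"]
        elif y < gy:
            y, plan = y + 1, plan + ["move-forward"]
        else:
            y, plan = y - 1, plan + ["move-back"]
    return plan

print(means_end_analysis((0, 0), (2, 1)))
# ['move-right', 'move-right', 'move-forward']
```

Each step removes part of the remaining difference, so the current state is transformed into the goal state.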

Perception

In perception the environment is scanned by means of various sense-organs,
real or artificial, and processes internal to the perceiver analyse the scene into
objects and their features and relationships. Analysis is complicated by the fact
that one and the same object may present many different appearances on
different occasions, depending on the angle from which it is viewed, whether or
not parts of it are projecting shadows, and so forth.

At present, artificial perception is sufficiently well advanced to enable a
self-controlled car-like device to drive at moderate speeds on the open road, and a
mobile robot to roam through a suite of busy offices searching for and clearing
away empty soda cans. One of the earliest systems to integrate perception and
action was FREDDY, a stationary robot with a moving TV 'eye' and a pincer
'hand' (constructed at Edinburgh University during the period 1966-1973 under
the direction of Donald Michie). FREDDY was able to recognise a variety of
objects and could be instructed to assemble simple artifacts, such as a toy car,
from a random heap of components.

Language-understanding

A language is a system of signs having meaning by convention. Traffic signs, for
example, form a mini-language, it being a matter of convention that, for example,
the hazard-ahead sign means hazard ahead. This meaning-by-convention that is
distinctive of language is very different from what is called natural meaning,
exemplified in statements like 'Those clouds mean rain' and 'The fall in pressure
means the valve is malfunctioning'.

An important characteristic of full-fledged human languages, such as English,
which distinguishes them from, e.g., bird calls and systems of traffic signs, is their
productivity. A productive language is one that is rich enough to enable an
unlimited number of different sentences to be formulated within it.

It is relatively easy to write computer programs that are able, in severely
restricted contexts, to respond in English, seemingly fluently, to questions and
statements, for example the Parry and Shrdlu programs described in the section
Early AI Programs. However, neither Parry nor Shrdlu actually understands
language. An appropriately programmed computer can use language without
understanding it, in principle even to the point where the computer's linguistic
behaviour is indistinguishable from that of a native human speaker of the
language. What, then, is involved in genuine understanding, if a computer that
uses language indistinguishably from a native human speaker does not
necessarily understand? There is no universally agreed answer to this difficult
question. According to one theory, whether or not one understands depends not
only upon one's behaviour but also upon one's history: in order to be said to
understand one must have learned the language and have been trained to take
one's place in the linguistic community by means of interaction with other
language-users.

The Role of Logic in Artificial Intelligence

Theoretical computer science developed out of logic, the theory of computation
(if this is to be considered a different subject from logic), and some related areas
of mathematics.

So theoretically minded computer scientists are well informed about logic even
when they aren't logicians. Computer scientists in general are familiar with the
idea that logic provides techniques for analyzing the inferential properties of
languages, and with the distinction between a high-level logical analysis of a
reasoning problem and its implementations.

Logic, for instance, can provide a specification for a programming language by
characterizing a mapping from programs to the computations that they license. A
compiler that implements the language can be incomplete, or even unsound, as
long as in some sense it approximates the logical specification. This makes it
possible for the involvement of logic in AI applications to vary from relatively
weak uses in which the logic informs the implementation process with analytic
insights, to strong uses in which the implementation algorithm can be shown to
be sound and complete. In some cases, a working system is inspired by ideas
from logic, but acquires features that at first seem logically problematic but can
later be explained by developing new ideas in logical theory. This sort of thing
has happened, for instance, in logic programming.

In particular, logical theories in AI are independent from implementations. They
can be used to provide insights into the reasoning problem without directly
informing the implementation. Direct implementations of ideas from logic—
theorem-proving and model-construction techniques—are used in AI, but the AI
theorists who rely on logic to model their problem areas are free to use other
implementation techniques as well. Thus, in Moore 1995b (Chapter 1), Robert C.
Moore distinguishes three uses of logic in AI: as a tool of analysis, as a basis for
knowledge representation, and as a programming language.

A large part of the effort of developing limited-objective reasoning systems goes
into the management of large, complex bodies of declarative information. It is
generally recognized in AI that it is important to treat the representation of this
information, and the reasoning that goes along with it, as a separate task, with its
own research problems.
The evolution of expert systems illustrates the point. The earliest expert systems,
such as MYCIN (a program that reasons about bacterial infections, see Buchanan
& Shortliffe 1984), were based entirely on large systems of procedural rules, with
no separate representation of the background knowledge—for instance, the
taxonomy of the infectious organisms about which the system reasoned was not
represented.

Later generation expert systems show a greater modularity in their design. A
separate knowledge representation component is useful for software engineering
purposes—it is much better to have a single representation of a general fact that
can have many different uses, since this makes the system easier to develop and
to modify. And this design turns out to be essential in enabling these systems to
deliver explanations as well as mere conclusions.
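The modular design described here, with a declarative knowledge base kept separate from a generic inference engine that records explanations, can be sketched as follows. The medical rule contents below are invented for illustration and are not taken from MYCIN:

```python
# Knowledge base: declarative if-then rules, kept separate from the engine.
# Rule contents are invented examples, not actual medical knowledge.
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_test"),
]

def infer(findings, rules):
    """Generic engine: fire any rule whose premises all hold, and record
    which premises produced each conclusion (enabling explanations)."""
    known = set(findings)
    explanation = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                explanation.append((sorted(premises), conclusion))
                changed = True
    return known, explanation

known, why = infer({"fever", "stiff_neck"}, RULES)
print(why)  # each derived conclusion with the premises that justified it
```

Because the rules are a single declarative representation, the same knowledge serves both for drawing conclusions and for explaining them, and can be modified without touching the engine.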

Knowledge representation is an area in artificial intelligence that is concerned
with how to formally "think", that is, how to use a symbol system to represent "a
domain of discourse" - that which can be talked about, along with functions that
may or may not be within the domain of discourse that allow inference
(formalized reasoning) about the objects within the domain of discourse to occur.
Generally speaking, some kind of logic is used both to supply a formal semantics
of how reasoning functions apply to symbols in the domain of discourse, as well
as to supply (depending on the particulars of the logic), operators such as
quantifiers, modal operators, etc. that, along with an interpretation theory, give
meaning to the sentences in the logic.

When we design a knowledge representation (and a knowledge representation
system to interpret sentences in the logic in order to derive inferences from them)
we have to make trades across a number of design spaces, described in the
following sections. The single most important decision to be made however is the
expressivity of the KR. The more expressive, the easier (and more compact) it is
to "say something". However, more expressive languages are harder to
automatically derive inferences from. An example of a less expressive KR would
be propositional logic. An example of a more expressive KR would be auto-
epistemic temporal modal logic. Less expressive KRs may be both complete and
consistent (formally less expressive than set theory). More expressive KRs may
be neither complete nor consistent.
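The trade-off can be made concrete at the less expressive end of the spectrum. Propositional entailment is decidable by model enumeration, at exponential cost in the number of variables (a toy sketch; the function names are our own):

```python
from itertools import product

def entails(kb, query, variables):
    """Propositional entailment by model enumeration: KB |= query iff the
    query holds in every model of the KB. The check always terminates,
    but costs O(2^n) in the number of variables -- the price and the
    payoff of a less expressive, decidable KR."""
    for model in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, model))
        if kb(env) and not query(env):
            return False
    return True

# KB: p and (p -> q); query: q.
kb = lambda e: e["p"] and ((not e["p"]) or e["q"])
print(entails(kb, lambda e: e["q"], ["p", "q"]))  # True
```

A more expressive KR, such as a modal or first-order logic, can say far more in far less space, but no such exhaustive, always-terminating check is available in general.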

The key problem is to find a KR (and a supporting reasoning system) that can
make the inferences your application needs in time, that is, within the resource
constraints appropriate to the problem at hand. This tension between the kinds of
inferences an application "needs" and what counts as "in time" along with the
cost to generate the representation itself makes knowledge representation
engineering interesting.
Goals of Artificial Intelligence

Two main goals of AI:

• To understand human intelligence better. We test theories of human
intelligence by writing programs which emulate it.

The main goal of AI is to understand human intelligence in a better way;
in order to reflect a truer image and make a more accurate human-like
computational system, we must develop a better understanding of
intelligence and all the components associated with it.

Without proper knowledge about intelligence we cannot go towards
making an efficient and flawless AI-controlled system.

• To create useful “smart” programs able to do tasks that would normally require
a human expert.

The vision of AI is to create systems that can interact with humans,
adapt to a changing environment, and provide accurate and logical
conclusions. However capable such a system becomes, it must not be
complex to use; it should be simple enough that every individual can
understand how it works.

Foundations of AI

Different fields have contributed to AI in the form of ideas, viewpoints, and
techniques.

• Philosophy: logic, reasoning, mind as a physical system, foundations of
learning, language, and rationality.
• Mathematics: formal representation and proof algorithms, computation,
(un)decidability, (in)tractability, probability.
• Psychology: adaptation, phenomena of perception and motor control.
• Economics: formal theory of rational decisions, game theory.
• Linguistics: knowledge representation, grammar.
• Neuroscience: physical substrate for mental activities.
• Control theory: homeostatic systems, stability, and optimal agent design.
Major research areas (Applications)

• Natural Language Understanding


For the future the obvious progression will be to incorporate natural language
understanding--allowing the computer to not only convert your words into text,
but also to understand what you mean, and to respond accordingly. That will
require the computer to convert recognized sentences into a format that it can
understand, such as a system command or a database query.

• Image, Speech and pattern recognition


Image, speech, and pattern recognition form an important research area; with the
help of these capabilities, effective and efficient AI systems can be built and
brought to life.

• Uncertainty Modeling
Uncertainty has been a concern to engineers, managers, and scientists for many
years in engineering and the sciences. With the help of artificially intelligent
systems, uncertainty modeling can be done with accuracy.

• Expert systems
An expert system is software that attempts to provide an answer to a problem, or
clarify uncertainties where normally one or more human experts would need to
be consulted. Expert systems are most common in a specific problem domain,
and are a traditional application and/or subfield of artificial intelligence.

• Virtual Reality
Virtual reality (VR) is a technology which allows a user to interact with a
computer-simulated environment, whether that environment is a simulation of the
real world or an imaginary world. Most current virtual reality environments are
primarily visual experiences, displayed either on a computer screen or through
special or stereoscopic displays, but some simulations include additional sensory
information, such as sound through speakers or headphones.

• Decision making
Decision making can be regarded as an outcome of mental processes (cognitive
process) leading to the selection of a course of action among several
alternatives. Every decision making process produces a final choice. The output
can be an action or an opinion of choice.

• Autonomous control, robotics


Autonomous robots are robots which can perform desired tasks in unstructured
environments without continuous human guidance. Many kinds of robots have
some degree of autonomy. Different robots can be autonomous in different ways.
A high degree of autonomy is particularly desirable in fields such as space
exploration, cleaning floors, mowing lawns, and waste water treatment.
