
Written By:

Electrical Engineer

PART I

Introduction
Artificial Intelligence (AI) is a term that, in its broadest sense, indicates the ability of an artifact to perform the kinds of functions that characterize human thought. The possibility of developing such an artifact has intrigued human beings since ancient times. With the growth of modern science, the search for AI has taken two major directions: psychological and physiological research into the nature of human thought, and the technological development of increasingly sophisticated computing systems.

In the latter sense, the term AI has been applied to computer systems and programs capable of performing tasks more complex
than straightforward programming, although still far from the
realm of actual thought. The most important fields of research in
this area are information processing, pattern recognition, game-
playing computers, and applied fields such as medical diagnosis.
Current research in information processing deals with programs
that enable a computer to understand written or spoken information
and to produce summaries, answer specific questions, or
redistribute information to users interested in specific areas of this
information. Essential to such programs is the ability of the system to generate grammatically correct sentences and to establish linkages between words, ideas, and associations with other ideas.
Research has shown that whereas the logic of language structure, its syntax, submits to programming, the problem of meaning, or semantics, lies far deeper, in the direction of true AI.
In medicine, programs have been developed that analyze the disease symptoms, medical history, and laboratory test results of a patient, and then suggest a diagnosis to the physician. The diagnostic program is an example of so-called expert systems, programs designed to perform tasks in specialized areas as a human would. Expert systems take computers a step beyond straightforward programming, being based on a technique called rule-based inference, in which pre-established rule systems are used to process the data. Despite their sophistication, such systems still do not approach the complexity of true intelligent thought.

Many scientists remain doubtful that true AI can ever be developed. The operation of the human mind is still little understood, and computer design may remain essentially incapable of analogously duplicating those unknown, complex processes. Various routes are being used in the effort to reach the goal of true AI. One approach is to apply the concept of parallel processing: interlinked and concurrent computer operations. Another is to create networks of experimental computer chips, called silicon neurons, that mimic data-processing functions of brain cells. Using analog technology, the transistors in these chips emulate nerve-cell membranes in order to operate at the speed of neurons.
History

The intellectual roots of AI, and the concept of intelligent machines, may be found in Greek mythology. Intelligent artifacts have appeared in literature since then, with real (and fraudulent) mechanical devices actually demonstrating behavior with some degree of intelligence. After modern computers became available following World War II, it became possible to create programs that perform difficult intellectual tasks. Even more importantly, general-purpose methods and tools have been created that allow similar tasks to be performed.

In this brief history, the beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problem solving, which included basic work in learning, knowledge representation, and inference, as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems. The article ends with a brief examination of influential organizations and current issues facing the field.

Fifty years ago, Herbert A. Simon and Allen Newell had a
Christmas break story that would top them all. 'Over the Christmas
holiday,' Dr. Simon famously blurted to one of his classes at
Carnegie Institute of Technology, 'Al Newell and I invented a
thinking machine.' It was another way of saying that they had
invented artificial intelligence -- in fact, the only way of saying it
in the winter of 1955-56 because no one had gotten around to
inventing the term 'artificial intelligence.'
Related media file inside the CD.
Quotation
Computers double their performance every 18 months. So the danger is real
that they could develop intelligence and take over the world.
Stephen Hawking (1942 - )
British physicist.

What is Artificial Intelligence?


DEFINITIONS
Artificial intelligence is a term that encompasses many definitions, such as the following:
“The goal of work in artificial intelligence is to build
machines that perform tasks normally requiring human
intelligence.”
“Research scientists in artificial intelligence try to get machines to exhibit behavior that we call intelligent behavior when we observe it in human beings.”
The goal of artificial intelligence research is “to construct computer programs which exhibit behavior that we call intelligent when observed in human beings.”

“thinking is a continuum, an n-dimensional continuum…
comparisons can be made between men and machine in the
continuum of thinking… in this context then, the goal of artificial
intelligence can be stated—it is simply an attempt to push
machine behavior further out into this continuum.”
“Artificial intelligence [is] defined generally as the attempt to construct mechanisms that perform tasks requiring intelligence when performed by humans.”
The ability of a digital computer or computer-controlled
robot to perform tasks commonly associated with intelligent
beings. The term is frequently applied to the project of developing
systems endowed with the intellectual processes characteristic of
humans, such as the ability to reason, discover meaning,
generalize, or learn from past experience.

Why Create Artificial Intelligence?


Even though artificial intelligence may have positive outcomes,
why create it if it has the possibility of being as destructive as
some scientists predict? Some scientists firmly believe that these
"creatures" would not be as malicious towards humans as humans
are towards animals. Is this a risk worth taking? According to
scientist Hubert Dreyfus, it is not worth considering the negative
implications because there is only a remote possibility that
artificial intelligence will be dangerous.

This does not seem to be a responsible position to take. If humans have the power to analyze and think before acting, then it should be done. History demonstrates human error very clearly. The bombing of Hiroshima is a prime example of how the use of technology was not explored in advance for its potential repercussions. Atomic energy, not meant for mass destruction by the scientists who invented it, was abused by those who did not understand the capabilities of the new technology. Artificial intelligence, if not carefully analyzed, could have negative consequences.

Related media file inside the CD.

Different Topics & Details

Automata Theory
Automata theory is the concept that describes how machines mimic human behavior. The theory proposes that human physical functions and behavior can be simulated by a mechanical or computer-controlled device.
Applications of automata theory have included imitating human
comprehension and reasoning skills using computer programs,
duplicating the function of muscles and tendons by hydraulic
systems or electric motors, and reproducing sensory organs by
electronic sensors such as smoke detectors.
The concept of automata, or manlike machines, has historically
been associated with any self-operating machine, such as watches
or clockwork songbirds driven by tiny springs and gears. But in the
late 20th century, the science of robotics (the development of
computer-controlled devices that move and manipulate objects)
has replaced automata as it relates to replicating motion based on
human anatomy (see Robot). Modern theories of automata
currently focus on reproducing human thought patterns and
problem-solving abilities using artificial intelligence and other
advanced computer-science techniques.

Artificial Life Interactive Video Environment
(ALIVE)

The Artificial Life Interactive Video Environment (ALIVE) is a virtual reality system where people can interact with virtual creatures without being constrained by headsets, goggles, or special sensing equipment. The system is based on a magic mirror metaphor: a person in the ALIVE space sees their own image on a large-screen TV as if in a mirror. Autonomous, animated characters join the user's own image in the reflected world.

Using a single camera (the same one used to create the video
image), the vision-based tracking system extracts the user's head,
hand, and foot positions, as well as the gesture information. The
autonomous characters use this information along with their own
motivations to act in believable and entertaining ways.

ALIVE is currently the primary application of the IVE system, a joint project between the Autonomous Agents group and the Vision and Modeling group at the MIT Media Lab.

Expert Systems

Expert systems are computer software programs that mimic the expertise of human specialists. Expert systems have two components: a knowledge base that provides rules and data, and an inference engine that enables the expert system to form conclusions. For example, an expert system that diagnoses blood disease in a patient would require a knowledge base that included data on physiology, blood pathogens, disease symptoms, and treatment options. The inference engine searches through the knowledge base, concludes which possible disease or diseases the patient has, and then suggests various treatments based on that diagnosis.
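
To make the two-part architecture concrete, here is a minimal Python sketch of a rule-based inference engine. The medical rules and fact names are invented for illustration; a real diagnostic system would have a far larger knowledge base:

    # Minimal sketch of an expert system: a knowledge base of if-then
    # rules plus an inference engine that forward-chains over known facts.
    KNOWLEDGE_BASE = [
        # (conditions that must all be known, conclusion to add)
        ({"fever", "low_red_cell_count"}, "possible_blood_infection"),
        ({"possible_blood_infection", "positive_culture"}, "bacterial_infection"),
        ({"bacterial_infection"}, "suggest_antibiotics"),
    ]

    def infer(facts):
        """Repeatedly apply rules until no new conclusions appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in KNOWLEDGE_BASE:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"fever", "low_red_cell_count", "positive_culture"}))
    # the result includes 'bacterial_infection' and 'suggest_antibiotics'
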
Mimic
Mimic is a simple perceptron. Its purpose is to learn to fire a single output neuron when the corresponding input neuron is fired. The network may be trained automatically or manually.
Neurons are circles in various shades of pink. The brighter the shade, the more active the neuron is. Neurons which are not firing at all are black. Connections are represented by rectangles running between neurons. Positive connections are blue; negative connections are red. The stronger a connection, the thicker the rectangle. The Mimic network has four input neurons on the left, two hidden-layer neurons in the middle, and four output neurons on the right.
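
The booklet does not give Mimic's internals, but a network of the same 4-2-4 shape can be sketched in a few lines of Python (numpy assumed available). Here it is trained by backpropagation, one common choice, so that each output neuron learns to fire when the corresponding input neuron fires:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 2)), np.zeros(2)   # input -> hidden
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden -> output

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    patterns = np.eye(4)          # each pattern fires one input neuron
    for epoch in range(20000):    # outputs should come to mimic inputs
        h = sigmoid(patterns @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        d2 = (patterns - y) * y * (1 - y)   # output-layer error signal
        d1 = (d2 @ W2.T) * h * (1 - h)      # error pushed back to hidden
        W2 += 0.5 * h.T @ d2
        b2 += 0.5 * d2.sum(axis=0)
        W1 += 0.5 * patterns.T @ d1
        b1 += 0.5 * d1.sum(axis=0)

    # after training this should be close to the 4x4 identity pattern
    print(np.round(sigmoid(sigmoid(patterns @ W1 + b1) @ W2 + b2), 2))
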

Application, in computer science, a computer program designed to
help people perform a certain type of work. An application thus
differs from an operating system (which runs a computer), a utility
(which performs maintenance or general-purpose chores), and a
language (with which computer programs are created). Depending
on the work for which it was designed, an application can
manipulate text, numbers, graphics, or a combination of these
elements. Some application packages offer considerable computing
power by focusing on a single task, such as word processing;
others, called integrated software, offer somewhat less power but
include several applications, such as a word processor, a
spreadsheet, and a database program. See also Computer;
Operating System; Programming Language; Spreadsheet Program;
Utility.

Building a Human Computer


It is entirely possible that the processes involved in brain function are so complex as to make an effective understanding of them impossible in any practical sense. In that case, AI is rendered a practical impossibility. At the other end of the spectrum, a few scientists hold that there is nothing all that special about consciousness and that any machine packed with enough intelligence will automatically acquire consciousness along the way.
Related software ELIZA (talk with the computer) inside the CD.

Robots

A robot is a computer-controlled machine that is programmed to move, manipulate objects, and accomplish work while interacting with its environment. Robots are able to perform repetitive tasks more quickly, cheaply, and accurately than humans. The term robot originates from the Czech word robota, meaning “compulsory labor.” It was first used in the 1921 play R.U.R. (Rossum's Universal Robots) by the Czech novelist and playwright Karel Capek. The word robot has been used since to refer to a machine that performs work to assist people or work that humans find difficult or undesirable.

Related media file inside the CD.

HOW ROBOTS WORK


Robotics: This robotic hand is capable of performing the delicate task of picking up and holding an egg without breaking it. A tactile array sensor located on the right half of its gripping mechanism sends information to the robot's control computer about the pressure the robotic hand exerts; given this information, the control computer instructs the robotic hand to loosen, tighten, or maintain the current gripping force. This feedback loop repeats continuously, enabling the robotic hand to stay between the two extremes of dropping and crushing the egg.
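
The feedback loop in this caption is easy to sketch in Python as a proportional controller; the target pressure, the gain, and the assumption that measured pressure equals applied grip force are all invented for illustration:

    TARGET_PRESSURE = 2.0   # enough to hold the egg without crushing it
    GAIN = 0.3              # how strongly to correct on each cycle

    def control_step(measured_pressure, grip_force):
        """One pass of the loop: tighten if pressure is low, loosen if high."""
        error = TARGET_PRESSURE - measured_pressure
        return grip_force + GAIN * error

    grip = 0.0
    for _ in range(20):
        # a real robot reads this from the tactile array sensor; here we
        # pretend the measured pressure equals the applied grip force
        grip = control_step(measured_pressure=grip, grip_force=grip)
    print(round(grip, 3))   # settles near TARGET_PRESSURE
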
The inspiration for the design of a robot manipulator is the human
arm, but with some differences. For example, a robot arm can
extend by telescoping—that is, by sliding cylindrical sections one
over another to lengthen the arm. Robot arms also can be
constructed so that they bend like an elephant trunk. Grippers, or
end effectors, are designed to mimic the function and structure of
the human hand. Many robots are equipped with special purpose
grippers to grasp particular devices such as a rack of test tubes or
an arc-welder.
The joints of a robotic arm are usually driven by electric motors. In
most robots, the gripper is moved from one position to another,
changing its orientation. A computer calculates the joint angles
needed to move the gripper to the desired position in a process
known as inverse kinematics.
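
For the simplest case, a two-link planar arm, the inverse kinematics can be written out directly with the law of cosines. The following Python sketch (link lengths invented) returns the two joint angles that place the gripper at a target point:

    import math

    L1, L2 = 1.0, 0.8   # link lengths, illustrative values

    def inverse_kinematics(x, y):
        """Return (shoulder, elbow) angles in radians for target (x, y)."""
        d2 = x * x + y * y
        cos_elbow = (d2 - L1**2 - L2**2) / (2 * L1 * L2)   # law of cosines
        if abs(cos_elbow) > 1:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)
        # direction to the target, minus the offset caused by the bent elbow
        shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                                 L1 + L2 * math.cos(elbow))
        return shoulder, elbow

    print(inverse_kinematics(1.2, 0.5))
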
Some multijointed arms are equipped with servo, or feedback,
controllers that receive input from a computer. Each joint in the
arm has a device to measure its angle and send that value to the
controller. If the actual angle of the arm does not equal the
computed angle for the desired position, the servo controller moves
the joint until the arm's angle matches the computed angle.
Controllers and associated computers also must process sensor information collected from cameras that locate objects to be grasped, or from touch sensors on grippers that regulate the grasping force.
Any robot designed to move in an unstructured or unknown
environment will require multiple sensors and controls, such as
ultrasonic or infrared sensors, to avoid obstacles. Robots, such as
the National Aeronautics and Space Administration (NASA)
planetary rovers, require a multitude of sensors and powerful
onboard computers to process the complex information that allows
them mobility. This is particularly true for robots designed to work
in close proximity with human beings, such as robots that assist persons with disabilities and robots that deliver meals in a hospital. Safety must be integral to the design of human service robots.

Related media file inside the CD.

USES FOR ROBOTS


Hospital Robot: Helpmate is a robot that independently navigates through hospital corridors, delivering meal trays, paperwork, and supplies. The robot employs multiple sensors to safely navigate and work in close proximity to people. (Photo Researchers, Inc./Hank Morgan/Science Source)

In 1995 about 700,000 robots were operating in the industrialized world. Over 500,000 were used in Japan, about 120,000 in Western
Europe, and about 60,000 in the United States. Many robot
applications are for tasks that are either dangerous or unpleasant
for human beings. In medical laboratories, robots handle
potentially hazardous materials, such as blood or urine samples. In
other cases, robots are used in repetitive, monotonous tasks in
which human performance might degrade over time. Robots can
perform these repetitive, high-precision operations 24 hours a day
without fatigue. A major user of robots is the automobile industry.

General Motors Corporation uses approximately 16,000 robots for
tasks such as spot welding, painting, machine loading, parts
transfer, and assembly. Assembly is one of the fastest growing
industrial applications of robotics. It requires higher precision than
welding or painting and depends on low-cost sensor systems and
powerful inexpensive computers. Robots are used in electronic
assembly where they mount microchips on circuit boards.
Robots on an Automobile Assembly Line: The automobile industry uses robots to help manufacture cars. This time-lapse video shows robots assembling an automobile chassis. Robots are especially useful in performing activities on an assembly line that humans find repetitive and monotonous.
Activities in environments that pose great danger to humans, such
as locating sunken ships, cleanup of nuclear waste, prospecting for
underwater mineral deposits, and active volcano exploration, are
ideally suited to robots. Similarly, robots can explore distant
planets. NASA's Galileo, an unpiloted space probe, reached Jupiter in 1995 and performed tasks such as determining the chemical content of the Jovian atmosphere.

Related media file inside the CD.

IMPACT OF ROBOTS
Robotic manipulators create manufactured products that are of
higher quality and lower cost. But robots can cause the loss of
unskilled jobs, particularly on assembly lines in factories. New
jobs are created in software and sensor development, in robot
installation and maintenance, and in the conversion of old factories
and the design of new ones. These new jobs, however, require
higher levels of skill and training. Technologically oriented
societies must face the task of retraining workers who lose jobs to
automation, providing them with new skills so that they can be
employable in the industries of the 21st century.
Related media file inside the CD.
FUTURE TECHNOLOGIES
Wabot-2 and Inventor: An inventor plays a duet with his robotic creation, Wabot-2, at the Tokyo Exposition. Building this kind of robot is a challenging task because the dexterity of the human hand is perhaps the most difficult function to recreate mechanically. Although Wabot-2's performance may not be emotional, with an electronic scanning eye and quality components, the technical accuracy will be extremely high. (Hutchison Library/Michael Macintyre)

Automated machines will increasingly assist humans in the manufacture of new products, the maintenance of the world's infrastructure, and the care of homes and businesses. Robots will be able to make new highways, construct steel frameworks of buildings, clean underground pipelines, and mow lawns.
Prototypes of systems to perform all of these tasks already exist.
One important trend is the development of microelectromechanical
systems, ranging in size from centimeters to millimeters. These
tiny robots may be used to move through blood vessels to deliver
medicine or clean arterial blockages. They also may work inside
large machines to diagnose impending mechanical problems.
Perhaps the most dramatic changes in future robots will arise from
their increasing ability to reason. The field of artificial intelligence
is moving rapidly from university laboratories to practical
application in industry, and machines are being developed that can
perform cognitive tasks, such as strategic planning and learning
from experience. Increasingly, diagnosis of failures in aircraft or
satellites, the management of a battlefield, or the control of a large
factory will be performed by intelligent computers.

The Future of A.I.

Artificial intelligence in the 90's is centered around improving conditions for humans. But is that the only goal in the future? Research is focusing on building human-like robots, because scientists are interested in human intelligence and are fascinated by trying to copy it. If A.I. machines are capable of doing tasks originally done by humans, then the role of humans will change. Robots have already begun to replace factory workers, and they are acting as surgeons, pilots, astronauts, etc. According to Crevier, a computer scientist, robots will take over the jobs of clerical workers, then middle managers, and on up. Eventually what society will be left with are machines working at every store and humans on every beach. As Moravec puts it, we'll all be living as millionaires.

Applications of AI

Game playing
You can buy machines that can play master level chess for a
few hundred dollars. There is some AI in them, but they play
well against people mainly through brute force computation--
looking at hundreds of thousands of positions. To beat a
world champion by brute force and known reliable heuristics
requires being able to look at 200 million positions per
second.
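
The brute-force idea here is minimax search: try every move, assume the opponent replies with the move worst for you, and pick the branch with the best guaranteed outcome. A toy Python sketch follows (the "game" is just a hand-built tree of outcome scores, not chess):

    def minimax(state, maximizing):
        """Return the best score achievable from this state."""
        if isinstance(state, (int, float)):   # a leaf: a final position's score
            return state
        scores = [minimax(child, not maximizing) for child in state]
        return max(scores) if maximizing else min(scores)

    # lists are positions with moves to choose from; numbers are outcomes
    tree = [[3, 5], [2, [9, 1]], [7, 4]]
    print(minimax(tree, maximizing=True))   # prints 4
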
Speech recognition
In the 1990s, computer speech recognition reached a practical
level for limited purposes. Thus United Airlines has replaced
its keyboard tree for flight information by a system using
speech recognition of flight numbers and city names. It is
quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
Understanding natural language
Just getting a sequence of words into a computer is not
enough. Parsing sentences is not enough either. The
computer has to be provided with an understanding of the
domain the text is about, and this is presently possible only
for very limited domains.
Computer vision
The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Expert systems
A "knowledge engineer" interviews experts in a certain
domain and tries to embody their knowledge in a computer
program for carrying out some task. How well this works
depends on whether the intellectual mechanisms required for
the task are within the present state of AI. When this turned
out not to be so, there were many disappointing results. One
of the first expert systems was MYCIN in 1974, which
diagnosed bacterial infections of the blood and suggested
treatments. It did better than medical students or practicing
doctors, provided its limitations were observed. Namely, its
ontology included bacteria, symptoms, and treatments and
did not include patients, doctors, hospitals, death, recovery,
and events occurring in time. Its interactions depended on a
single patient being considered. Since the experts consulted
by the knowledge engineers knew about patients, doctors,
death, recovery, etc., it is clear that the knowledge engineers
forced what the experts told them into a predetermined
framework. In the present state of AI, this has to be true. The
usefulness of current expert systems depends on their users
having common sense.
Heuristic classification
One of the most feasible kinds of expert system given the
present knowledge of AI is to put some information in one of
a fixed set of categories using several sources of information.
An example is advising whether to accept a proposed credit
card purchase. Information is available about the owner of
the credit card, his record of payment, and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
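
A sketch of this kind of classifier in Python; the categories, thresholds, and rules are invented for illustration, but they show how several sources of information are combined into one of a fixed set of answers:

    def classify_purchase(good_payment_record, amount, fraud_reports):
        """Place a proposed credit card purchase into a fixed category."""
        if fraud_reports > 2:
            return "reject"      # establishment has a history of fraud
        if not good_payment_record and amount > 500:
            return "review"      # weak payment record on a large purchase
        if amount > 5000:
            return "review"      # very large purchases always get a look
        return "accept"

    print(classify_purchase(True, 120, fraud_reports=0))    # accept
    print(classify_purchase(False, 900, fraud_reports=1))   # review
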

PART II

About Natural Intelligence

Let us first recap the most important features of the neural networks found in the brain. Firstly the brain contains many
billions of very special kinds of cell - these are the nerve cells or
neurons. These cells are organized into a very complicated
intercommunicating network. Typically each neuron is physically
connected to tens of thousands of others. Using these connections
neurons can pass electrical signals between each other. These
connections are not merely on or off - the connections have varying
strength which allows the influence of a given neuron on one of its
neighbors to be either very strong, very weak (perhaps even no
influence) or anything in between. Furthermore, many aspects of
brain function, particularly the learning process, are closely
associated with the adjustment of these connection strengths. Brain
activity is then represented by particular patterns of firing activity
amongst this network of neurons. It is this simultaneous
cooperative behavior of very many simple processing units which
is at the root of the enormous sophistication and computational
power of the brain.

About Artificial Intelligence

Artificial neural networks are computers whose architecture is modeled after the brain. They typically consist of many hundreds
of simple processing units which are wired together in a complex
communication network. Each unit or node is a simplified model
of a real neuron which fires (sends off a new signal) if it receives a
sufficiently strong input signal from the other nodes to which it is
connected. The strength of these connections may be varied in
order for the network to perform different tasks corresponding to
different patterns of node firing activity. This structure is very
different from traditional computers.

Human Analogy

Neural Network
In computer science, a neural network is a highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain. Neural networks address problems
that are often difficult for traditional computers to solve, such as
speech and pattern recognition. They also provide some insight
into the way the human brain works. One of the most significant
strengths of neural networks is their ability to learn from a limited
set of examples.
Artificial Neural Network: The neural networks that are increasingly being used in computing mimic those found in the nervous systems of vertebrates. The main characteristic of a biological neural network, top, is that each neuron, or nerve cell, receives signals from many other neurons through its branching dendrites. The neuron produces an output signal that depends on the values of all the input signals and passes this output on to many other neurons along a branching fiber called an axon. In an artificial neural network, bottom, input signals, such as signals from a television camera's image, fall on a layer of input nodes, or computing units. Each of these nodes is linked to several other "hidden" nodes between the input and output nodes of the network. There may be several layers of hidden nodes, though for simplicity only one is shown here. Each hidden node performs a calculation on the signals reaching it and sends a corresponding output signal to other nodes. The final output is a highly processed version of the input.
Neural networks were initially studied by computer and cognitive
scientists in the late 1950s and early 1960s in an attempt to model
sensory perception in biological organisms. Neural networks have
been applied to many problems since they were first introduced,
including pattern recognition, handwritten character recognition,
speech recognition, financial and economic modeling, and next-
generation computing models.

HOW A NEURAL NETWORK WORKS
Neural networks fall into two categories: artificial neural networks
and biological neural networks. Artificial neural networks are
modeled on the structure and functioning of biological neural
networks. The most familiar biological neural network is the
human brain. The human brain is composed of approximately 100
billion nerve cells called neurons that are massively
interconnected. Typical neurons in the human brain are connected
to on the order of 10,000 other neurons, with some types of
neurons having more than 200,000 connections. The extensive
number of neurons and their high degree of interconnectedness are
part of the reason that the brains of living creatures are capable of
making a vast number of calculations in a short amount of time.
See also Neurophysiology.
Neurons
Biological neurons have a fairly simple large-scale structure,
although their operation and small-scale structure is immensely
complex. Neurons have three main parts: a central cell body, called
the soma, and two different types of branched, treelike structures
that extend from the soma, called dendrites and axons. Information

26
from other neurons, in the form of electrical impulses, enters the
dendrites at connection points called synapses. The information
flows from the dendrites to the soma, where it is processed. The
output signal, a train of impulses, is then sent down the axon to the
synapses of other neurons.

Artificial neurons, like their biological counterparts, have simple structures and are designed to mimic the function of biological
neurons. The main body of an artificial neuron is called a node or
unit. Artificial neurons may be physically connected to one another
by wires that mimic the connections between biological neurons,
if, for instance, the neurons are simple integrated circuits.
However, neural networks are usually simulated on traditional
computers, in which case the connections between processing
nodes are not physical but are instead virtual.
Artificial neurons may be either discrete or continuous. Discrete
neurons send an output signal of 1 if the sum of received signals is
above a certain critical value called a threshold value, otherwise
they send an output signal of 0. Continuous neurons are not
restricted to sending output values of only 1s and 0s; instead they
send an output value between 1 and 0 depending on the total
amount of input that they receive—the stronger the received signal,
the stronger the signal sent out from the node and vice-versa.
Continuous neurons are the most commonly used in actual
artificial neural networks.
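
Both neuron types can be written in a few lines of Python (the weights and signals below are invented):

    import math

    def discrete_neuron(inputs, weights, threshold=1.0):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total > threshold else 0           # fire or stay silent

    def continuous_neuron(inputs, weights):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1.0 / (1.0 + math.exp(-total))          # value between 0 and 1

    signals, strengths = [1, 0, 1], [0.9, 0.4, 0.5]
    print(discrete_neuron(signals, strengths))              # 1 (1.4 > 1.0)
    print(round(continuous_neuron(signals, strengths), 2))  # about 0.80
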

Artificial Neural Network Architecture
The architecture of a neural network is the specific arrangement
and connections of the neurons that make up the network. One of
the most common neural network architectures has three layers.
The first layer is called the input layer and is the only layer
exposed to external signals. The input layer transmits signals to the
neurons in the next layer, which is called a hidden layer. The
hidden layer extracts relevant features or patterns from the
received signals. Those features or patterns that are considered
important are then directed to the output layer, the final layer of the
network. Sophisticated neural networks may have several hidden
layers, feedback loops, and time-delay elements, which are
designed to make the network as efficient as possible in
discriminating relevant features or patterns from the input layer.
DIFFERENCES BETWEEN NEURAL
NETWORKS AND TRADITIONAL COMPUTERS
Neural networks differ greatly from traditional computers (for
example personal computers, workstations, mainframes) in both
form and function. While neural networks use a large number of
simple processors to do their calculations, traditional computers generally use one or a few extremely complex processing units.
Neural networks also do not have a centrally located memory, nor
are they programmed with a sequence of instructions, as are all
traditional computers.
The information processing of a neural network is distributed
throughout the network in the form of its processors and
connections, while the memory is distributed in the form of the
weights given to the various connections. The distribution of both
processing capability and memory means that damage to part of
the network does not necessarily result in processing dysfunction
or information loss. This ability of neural networks to withstand
limited damage and continue to function well is one of their
greatest strengths.
Neural networks also differ greatly from traditional computers in
the way they are programmed. Rather than using programs that are
written as a series of instructions, as do all traditional computers,
neural networks are “taught” with a limited set of training
examples. The network is then able to “learn” from the initial
examples to respond to information sets that it has never
encountered before. The resulting values of the connection weights
can be thought of as a ‘program’.
Neural networks are usually simulated on traditional computers.
The advantage of this approach is that computers can easily be
reprogrammed to change the architecture or learning rule of the
simulated neural network. Since the computation in a neural
network is massively parallel, the processing speed of a simulated
neural network can be increased by using massively parallel
computers—computers that link together hundreds or thousands of
CPUs in parallel to achieve very high processing speeds.
NEURAL NETWORK LEARNING
In all biological neural networks the connections between
particular dendrites and axons may be reinforced or discouraged.
For example, connections may become reinforced as more signals
are sent down them, and may be discouraged when signals are infrequently sent down them. The reinforcement of certain neural
pathways, or dendrite-axon connections, results in a higher
likelihood that a signal will be transmitted along that path, further
reinforcing the pathway. Paths between neurons that are rarely
used slowly atrophy, or decay, making it less likely that signals
will be transmitted along them.
The role of connection strengths between neurons in the brain is
crucial; scientists believe they determine, to a great extent, the way
in which the brain processes the information it takes in through the
senses. Neuroscientists studying the structure and function of the
brain believe that various patterns of neurons firing can be
associated with specific memories. In this theory, the strength of
the connections between the relevant neurons determines the
strength of the memory. Important information that needs to be
remembered may cause the brain to constantly reinforce the
pathways between the neurons that form the memory, while
relatively unimportant information will not receive the same
degree of reinforcement.
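
The reinforce-or-atrophy rule described above is, in spirit, Hebbian learning, and can be sketched in Python (the learning rate, decay rate, and activity sequence are invented):

    LEARN_RATE = 0.1   # growth when both neurons fire together
    DECAY = 0.01       # slow atrophy of unused connections

    def update_weight(weight, pre_active, post_active):
        if pre_active and post_active:
            return weight + LEARN_RATE   # pathway reinforced by use
        return weight * (1 - DECAY)      # unused pathway slowly decays

    w = 0.5
    for pre, post in [(1, 1), (1, 1), (0, 0), (0, 1), (1, 1)]:
        w = update_weight(w, pre, post)
    print(round(w, 3))   # strengthened overall by the repeated co-firing
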

Decision Making and Learning


The Hopfield model is an artificial neural network designed to
model the memory recall process of the brain. It can recover a
perfect image or memory when presented with only a part of the
original memory. It is also robust in that connections between
nodes can be altered to some degree without causing a catastrophic
loss of memories. However, the brain is much more than a memory
storing device - it has a processing capability. The brain receives
input from various sensory sources, extracts certain features from
this information, and by comparing this processed information
with past experience, can formulate new actions.

To illustrate these ideas, consider the visual system of the frog. The frog possesses sets of nerve cells just behind the retina whose function is to discriminate only the four following events:
• a moving object penetrates the frog's field of vision.
• a moving object penetrates the field of vision and stops.
• the general level of lighting in the field of vision decreases suddenly.
• a small, dark object, round in form, enters the field of vision and moves around in an erratic manner.

The first three events put the frog into a state of alert. The first case
can be interpreted as the arrival of an intruder. The second case
involves the intruder stopping and the danger becoming real. The
third case can be interpreted as the arrival of a predator which is
overshadowing the frog. All three cases give rise to the "escape"
response. The last case suggests an insect is close and it causes an
attack by the frog regardless of whether or not there is really prey
there. The responses of the frog, attack or flight, are triggered
entirely visually. So, the visual neurons of the frog are "wired-up"
in order that, when they receive a picture of the frog's environment
from its eyes, that information is processed into one of the four
predetermined possibilities. This information is then sent to the rest
of the brain, in order to produce a response. This feature of being
able to extract certain simple features from perhaps a very complex
image is commonly referred to as pattern recognition. It is a crucial
feature of the brain which allows it to make sense of a very
complex and ever changing world.

The Perceptron - a network for decision making


An artificial neural network which attempts to emulate this pattern
recognition process is called the Perceptron. In this model, the
nodes representing artificial neurons are arranged into layers. The
signal representing an input pattern is fed into the first layer. The
nodes in this layer are connected to another layer (sometimes
called the "hidden layer"). The firing of nodes on the input layer is
conveyed via these connections to this hidden layer. Finally, the
activity on the nodes in this layer feeds onto the final output layer,
where the pattern of firing of the output nodes defines the response of the network to the given input pattern. Signals are only
conveyed forward from one layer to a later layer - the activity of
the output nodes does not influence the activities on the hidden
layer.

In contrast to the Hopfield network, this network produces its response to any given input pattern almost immediately - the firing
pattern of the output is automatically stable. There is no relaxation
process to a stable firing pattern, as occurs with the Hopfield
model.

To try to simplify things, we can think of a simple model in which the network is made up of two screens - the nodes on the first
(input) layer of the network are represented as light bulbs which
are arranged in a regular pattern on the first screen. Similarly, the
nodes of the third (output) layer can be represented as a regular
array of light bulbs on the second screen. There is no screen for the
hidden layer - that is why it is termed "hidden"! Instead we can
think of a black box which connects the first screen to the second.
Of course, the magic of how the black box will function depends
on the network connections between hidden nodes which are
inside. When a node is firing, we show this by lighting its bulb.
See the picture for illustration.

We can now think of the network functioning in the following way: a given pattern of lit bulbs is set up on the first screen. This
then feeds into the black box (the hidden layer) and results in a
new pattern of lit bulbs on the second screen. This might seem a rather pointless exercise in flashing lights except for the following
crucial observation. It is possible to "tweak" the contents
the black box (adjust the strengths of all these internode
connections) so that the system can produce any desired pattern on
the second screen for a very wide range of input patterns. For
example, if the input pattern is a triangle, the output pattern can be
trained to be a triangle. If an input pattern containing a triangle and
a circle is presented, the output can be still arranged to be a
triangle. Similarly, we may add a variety of other shapes to the
network input pattern and teach the net to only respond to
triangles. If there is no triangle in the input, the network can be
made to respond, for example, with a zero.

In principle, by using a large network with many nodes in the hidden layer, it is possible to arrange that the network still spots
triangles in the input pattern, independently of what other junk
there is around. Another way of looking at this is that: the network
can classify all pictures into one of two sets - those containing
triangles and those which do not. The perceptron is said to be
capable of both recognizing and classifying patterns.

Furthermore, we are not restricted to spotting triangles; we could simultaneously arrange for the network to spot squares, diamonds, or whatever we wanted. We could be more ambitious and ask that the network respond with a circle whenever we present it with a picture which contains both triangles and squares but not diamonds.
There is another important task that the perceptron can perform
usefully: the network may be used to draw associations between
objects. For example, whenever the network is presented with a
picture of a dog, its output may be a cat. Hopefully, you are
beginning to see the power of this machine at doing rather complex
pattern recognition, classification and association tasks. It is no
coincidence, of course, that these are the types of task that the
brain is exceptionally good at.


Discussion of Memory in Humans


One of the most important functions of our brain is the laying
down and recall of memories. It is difficult to imagine how we
could function without both short and long term memory. The
absence of short term memory would render most tasks extremely
difficult if not impossible - life would be punctuated by a series of
one time images with no logical connection between them.
Equally, the absence of any means of long term memory would
ensure that we could not learn by past experience. Indeed, much of
our impression of self depends on remembering our past history.

Our memories function in what is called an associative or content-addressable fashion. That is, a memory does not exist in some
isolated fashion, located in a particular set of neurons. All
memories are in some sense strings of memories - you remember
someone in a variety of ways - by the color of their hair or eyes,
the shape of their nose, their height, the sound of their voice, or
perhaps by the smell of a favorite perfume. Thus memories are
stored in association with one another. These different sensory
units lie in completely separate parts of the brain, so it is clear that the memory of the person must be distributed throughout the brain
in some fashion. Indeed, PET scans reveal that during memory
recall there is a pattern of brain activity in many widely different
parts of the brain.

Notice also that it is possible to access the full memory (all aspects
of the person's description for example) by initially remembering
just one or two of these characteristic features. We access the
memory by its contents not by where it is stored in the neural
pathways of the brain. This is very powerful; given even a poor photograph of that person we are quite good at reconstructing the person's face quite accurately. This is very different from a
traditional computer where specific facts are located in specific
places in computer memory. If only partial information is available
about this location, the fact or memory cannot be recalled at all.

A Description of the Hopfield Network


The Hopfield neural network is a simple artificial network which is able to store certain memories or patterns in a manner rather
network is presented with only partial information. Furthermore
there is a degree of stability in the system - if just a few of the
connections between nodes (neurons) are severed, the recalled
memory is not too badly corrupted - the network can respond with
a "best guess". Of course, a similar phenomenon is observed with
the brain - during an average lifetime many neurons will die but we
do not suffer a catastrophic loss of individual memories - our
brains are quite robust in this respect (by the time we die we may
have lost 20 percent of our original neurons).

The nodes in the network are vast simplifications of real neurons - they can only exist in one of two possible "states" - firing or not firing. Every node is connected to every other node with some strength. At any instant of time a node will change its state (i.e. start or stop firing) depending on the inputs it receives from the other nodes.

If we start the system off with any general pattern of firing and non-firing nodes then this pattern will in general change with time.
To see this think of starting the network with just one firing node.
This will send a signal to all the other nodes via its connections so
that a short time later some of these other nodes will fire. These
new firing nodes will then excite others after a further short time
interval and a whole cascade of different firing patterns will occur.
One might imagine that the firing pattern of the network would
change in a complicated perhaps random way with time. The
crucial property of the Hopfield network which renders it useful
for simulating memory recall is the following: we are guaranteed
that the pattern will settle down after a long enough time to some
fixed pattern. Certain nodes will be always "on" and others "off".
Furthermore, it is possible to arrange that these stable firing
patterns of the network correspond to the desired memories we
wish to store!

The reason for this is somewhat technical but we can proceed by analogy. Imagine a ball rolling on some bumpy surface. We
imagine the position of the ball at any instant to represent the
activity of the nodes in the network. Memories will be represented
by special patterns of node activity corresponding to wells in the
surface. Thus, if the ball is let go, it will execute some complicated
motion but we are certain that eventually it will end up in one of
the wells of the surface. We can think of the height of the surface
as representing the energy of the ball. We know that the ball will
seek to minimize its energy by seeking out the lowest spots on the
surface -- the wells.

Furthermore, the well it ends up in will usually be the one it started off closest to. In the language of memory recall, if we start the network off with a pattern of firing which approximates one of the "stable firing patterns" (memories) it will "under its own steam"
end up in the nearby well in the energy surface thereby recalling
the original perfect memory.

The smart thing about the Hopfield network is that there exists a
rather simple way of setting up the connections between nodes in
such a way that any desired set of patterns can be made "stable
firing patterns". Thus any set of memories can be burned into the
network at the beginning. Then if we kick the network off with any
old set of node activity we are guaranteed that a "memory" will be
recalled. Not too surprisingly, the memory that is recalled is the
one which is "closest" to the starting pattern. In other words, we
can give the network a corrupted image or memory and the
network will "all by itself" try to reconstruct the perfect image. Of
course, if the input image is sufficiently poor, it may recall the
incorrect memory - the network can become "confused" - just like
the human brain. We know that when we try to remember
someone's telephone number we will sometimes produce the
wrong one! Notice also that the network is reasonably robust - if
we change a few connection strengths just a little the recalled
images are "roughly right". We don't lose any of the images
completely.
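
The whole scheme fits in a short Python sketch (numpy assumed available): two patterns are stored with a Hebbian rule, and a corrupted version of one is then recalled by repeated node updates. The patterns themselves are invented:

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])   # the stored "memories"
    n = patterns.shape[1]

    # Hebbian storage: strengthen connections between co-firing nodes
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)               # no node connects to itself

    def recall(state, sweeps=10):
        state = state.copy()
        for _ in range(sweeps):          # update one node at a time
            for i in range(n):
                field = W[i] @ state
                if field != 0:           # leave the node unchanged on a tie
                    state[i] = 1 if field > 0 else -1
        return state

    corrupted = np.array([1, -1, 1, -1, 1, 1])   # last node flipped
    print(recall(corrupted))   # settles back to the first stored pattern
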

PART III

(a) Software

Computer sciences
Artificial intelligence (AI) research seeks to enable computers and
machines to mimic human intelligence and sensory processing
ability, and models human behavior with computers to improve our
understanding of intelligence. The many branches of AI research
include machine learning, inference, cognition, knowledge
representation, problem solving, case-based reasoning, natural
language understanding, speech recognition, computer vision, and
artificial neural networks.
A key technique developed in the study of artificial intelligence is
to specify a problem as a set of states, some of which are solutions,
and then search for solution states. For example, in chess, each
move creates a new state. If a computer searched the states
resulting from all possible sequences of moves, it could identify
those that win the game. However, the number of states associated
with many problems (such as the possible number of moves
needed to win a chess game) is so vast that exhaustively searching
them is impractical. The search process can be improved through
the use of heuristics—rules that are specific to a given problem and
can therefore help guide the search. For example, a chess heuristic
might indicate that when a move results in checkmate, there is no
point in examining alternate moves.
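
The following Python sketch shows the idea on a toy problem (reach 21 from 1 using the moves "+1" and "*2"); the heuristic, distance to the goal, steers a best-first search so that promising states are expanded first:

    import heapq

    GOAL = 21

    def moves(state):
        return [state + 1, state * 2]    # each move creates a new state

    def heuristic(state):
        return abs(GOAL - state)         # problem-specific guidance

    def search(start):
        frontier = [(heuristic(start), start, [start])]
        seen = set()
        while frontier:
            _, state, path = heapq.heappop(frontier)  # most promising first
            if state == GOAL:
                return path
            if state in seen or state > 2 * GOAL:     # prune hopeless states
                continue
            seen.add(state)
            for nxt in moves(state):
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
        return None

    print(search(1))   # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
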
Programming
A programming language, in computer science, is an artificial language used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as
English, programming languages have a vocabulary, grammar, and
syntax. However, natural languages are not suited for
programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted
in multiple ways. The languages used to program computers must
have simple logical structures, and the rules for their grammar,
spelling, and punctuation must be precise.
1st, 2nd, 3rd, 4th, and 5th generations
1st generation
Machine language or Binary
2nd generation
AUTOCODER, SAP, and SPS assembly languages
3rd generation
COBOL (Common Business-Oriented Language), FLOWMATIC, C and C++, Java, LISP 1, MATHLAB, LPL and PL/I, BALM, ALGOL-60, and LISP 1.5.
4th generation
IBM's ADRS2, APL, CSP and AS, Power Builder, Access.
5th generation
PROLOG
About Some Useful Language
Machine language or Binary
In machine languages, instructions are written as sequences of 1s
and 0s, called bits, that a computer can understand directly. An
instruction in machine language generally tells the computer four
things: (1) where to find one or two numbers or simple pieces of
data in the main computer memory (Random Access Memory, or
RAM), (2) a simple operation to perform, such as adding the two
numbers together, (3) where in the main memory to put the result
of this simple operation, and (4) where to find the next instruction
to perform. While all executable programs are eventually read by
the computer in machine language, they are not all programmed in
machine language. It is extremely difficult to program directly in
machine language because the instructions are sequences of 1s and
0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
Assembly language
Computer programmers use assembly languages to make machine-
language programs easier to write. In an assembly language, each
statement corresponds roughly to one machine language
instruction. An assembly language statement is composed with the aid of easy-to-remember commands. The command to add the
contents of the storage register A to the contents of storage register
B might be written ADD B,A in a typical assembly language
statement. Assembly languages share certain features with machine
languages. For instance, it is possible to manipulate specific bits in
both assembly and machine languages. Programmers use assembly
languages when it is important to minimize the time it takes to run
a program, because the translation from assembly language to
machine language is relatively simple. Assembly languages are
also used when some part of the computer has to be controlled
directly, such as individual dots on a monitor or the flow of
individual characters to a printer.
C Programming Language
C (computer), in computer science, a programming language
developed by Dennis Ritchie at Bell Laboratories in 1972; so
named because its immediate predecessor was the B programming
language. Although C is considered by many to be more a
machine-independent assembly language than a high-level
language, its close association with the UNIX operating system, its
enormous popularity, and its standardization by the American
National Standards Institute (ANSI) have made it perhaps the
closest thing to a standard programming language in the
microcomputer/workstation marketplace. C is a compiled language
that contains a small set of built-in functions that are machine
dependent. The rest of the C functions are machine independent
and are contained in libraries that can be accessed from C
programs. C programs are composed of one or more functions defined by the programmer; thus C is a structured programming language. See also C++; Computer; Library; UNIX.
Dylan Programming Language
Dylan is a new object-oriented dynamic language (OODL) being
developed by Apple. This language development effort has the
goal of developing a practical tool for writing mainstream
commercial applications. The intent is to combine the best qualities
of static languages (small, fast programs) with the best qualities of
dynamic languages (rapid development, code that's easy to read,
write, and maintain). It differs from C++ in many important ways that make it powerful and flexible. Dylan has a number of features that distinguish it from C++, including:

1. automatic memory management
2. clean, consistent syntax
3. fully and consistently object-oriented model
4. dynamic as well as static type checking
5. support for incremental compilation
6. first-class functions and classes

Logo Programming Language


Devised in the late 60s by Papert and his colleagues at MIT as an
educational aid for children, Logo is a subset of LISP and can be
used as a serious programming language. Indeed, Burke and
Genise's (1987) undergraduate text exclusively uses Logo to
explain fundamental principles of computer science, and Shafer's
(1986) book on Macintosh AI programming is based on Logo.
Logo is compact, making it an excellent vehicle for exploring AI
on a PC. A version of Logo exists for virtually every major brand
of desktop computer, with versions in most cases marketed by the
computer manufacturer.

Logo is best known for graphics; the user programs a cursor (called a "turtle") to draw figures on the screen. Graphics aside, Logo has a number of list manipulation functions (in a syntax that is more friendly than LISP's) that make it ideal for AI applications,
and random number generating functions that make it suitable for
simulations. Goldenberg and Feurzeig (1987) show how Logo can
be used to explore computational linguistics, and Harvey's three
volumes (1985, 1986, 1987) give a wide range of other
applications and projects.

OPS Programming Language


Carnegie-Mellon University scientists Herbert Simon and Allen
Newell have investigated human problem-solving processes for
many decades. One of their ideas, the production system,
represents knowledge as a set of condition-action rules (also called
"if-then rules"). If the conditions for a rule are satisfied, the rule's
specified action occurs. In the late 1970s, Charles Forgy developed OPS5, a programming language that embodies this idea. The
production system became the dominant knowledge representation
methodology in expert systems, and OPS5 became popular among
expert system developers. OPS5 stores data in working memory,
and if-then rules in production memory. Rules in OPS5 are
completely independent of one another. They can be placed in
production memory in any order. If the data in working memory
match the conditions of a rule in production memory, the rule's
actions take place. Possible actions include modifying the contents
of working memory (which might then match the conditions of
another rule), reading information from a file, displaying
information on the screen, and calling external programs. In some
versions of the language, a rule's actions can cause a new rule to be
created, allowing the development of systems that learn.
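To make the match-act cycle concrete, here is a minimal forward-chaining sketch in Python. This is not OPS5 syntax; the facts, rule names, and working-memory representation are invented for illustration (real OPS5 also adds refinements such as refraction and the Rete matching algorithm):

```python
# Working memory holds facts; production memory holds if-then rules.
# Each cycle, any rule whose conditions all match working memory fires.

working_memory = {("temperature", "high")}

def turn_fan_on(wm):
    wm.add(("fan", "on"))

def ring_alarm(wm):
    wm.add(("alarm", "ringing"))

production_memory = [                       # order-independent rules
    ({("temperature", "high")}, turn_fan_on),
    ({("temperature", "high"), ("fan", "on")}, ring_alarm),
]

changed = True
while changed:                              # the recognize-act cycle
    changed = False
    for conditions, action in production_memory:
        before = set(working_memory)
        if conditions <= working_memory:    # all conditions match
            action(working_memory)
        if working_memory != before:
            changed = True

print(working_memory)
# Firing the first rule adds ("fan", "on"), which then satisfies the
# second rule's conditions: data changes drive further rule firings.
```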
Smalltalk Programming Language
Objects in Smalltalk can represent real-world objects, and their
methods can model the behaviors of those real-world objects. For
example, the objects "automobile" and "truck" inherit data (like
"has wheels" and "has an engine") and methods (like "accelerate to
maximum speed") from the class "Vehicle". These objects,

44
however, may differ on their specific values for the "wheels" and
"engine" properties, and on their "accelerate" method. Within a
Smalltalk program, messages are sent to objects instructing them to
invoke their methods. In a program that simulates the movement of
vehicles, an "accelerate" message might be sent to "car" and
"truck". Each object would then active its particular "accelerate"
method.
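The following Python sketch loosely mirrors this example; it is an analogy rather than Smalltalk syntax, and the class and method details are invented. Sending the "accelerate" message corresponds to calling the method, and each object responds with its own version:

```python
# Car and Truck inherit shared data and behavior from Vehicle, and
# each responds to the same "accelerate" message in its own way.

class Vehicle:
    has_wheels = True
    has_engine = True

    def accelerate(self):
        return "vehicle accelerating to maximum speed"

class Car(Vehicle):
    wheels = 4

    def accelerate(self):          # overrides the inherited method
        return "car accelerating to its maximum speed"

class Truck(Vehicle):
    wheels = 18

    def accelerate(self):
        return "truck accelerating to its (lower) maximum speed"

for obj in (Car(), Truck()):       # "send the accelerate message"
    print(obj.accelerate())
```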
PROLOG
In computer science, an acronym for programming in logic, a
computer programming language important in the development of
artificial intelligence software during the 1970s and 1980s. Unlike traditional programming languages, which primarily process numerical data through explicit sequences of instructions, PROLOG processes symbols and the relationships between them. It is designed to perform search functions that
establish relationships within a program. This combination of
symbolic processing and logic searching made PROLOG the
preferred language during the mid-1980s for creating programs
that mimic human behavior.

PROLOG was developed in the early 1970s at the University of Marseille in France by Alain Colmerauer. Colmerauer believed that traditional
computer-programming languages, such as Fortran and COBOL,
were inappropriate for representing human logic in a machine.
Colmerauer's primary goal was to communicate with computers
using conversational language instead of programmer's jargon. He
concluded that strict symbolic logic was the appropriate bridge
between human and machine.

A PROLOG program is made up of facts and rules that are usually limited to a single domain, such as marine life, accounting, or
aircraft maintenance. Once a database is built for that domain,
PROLOG searches the database and forms relationships between
facts. PROLOG's functions are designed to prove that a
proposition is either valid or invalid. This is done by applying logic to the available facts, such as “A hammerhead is a shark” and
“Madeline likes all sharks.” Rules are built by combining facts:
“Madeline likes X if X is a shark.” If the program's database
identifies certain symbolic entities—such as hammerheads, makos,
and great whites—as sharks, then PROLOG can use the rule to
determine that Madeline likes hammerheads, makos, and great
whites, even though that information was not specifically
programmed into the database.
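For readers without a PROLOG system at hand, the same deduction can be sketched in Python (the encoding of facts and the rule are invented for illustration; in PROLOG this would be the facts shark(hammerhead). shark(mako). shark(great_white). and the rule likes(madeline, X) :- shark(X).):

```python
# The Madeline example as naive rule application in Python.
facts = {("shark", "hammerhead"), ("shark", "mako"), ("shark", "great_white")}

# Rule: if ("shark", X) holds, then ("likes", "madeline", X) holds.
derived = {("likes", "madeline", x) for (pred, x) in facts if pred == "shark"}

# True, even though this fact was never stated explicitly:
print(("likes", "madeline", "mako") in derived)
```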

This form of artificial intelligence is valuable in situations in which a fault or malfunction can be specifically identified, as in equipment maintenance and repair, and in cases in which the
answers are of the YES/NO or TRUE/FALSE variety. Due to the
rigidity of the applied logic, however, PROLOG has difficulty with
imprecise data or fuzzy sets.

In the United States, the artificial-intelligence community of the late 1970s ignored PROLOG in favor of a competing artificial-
intelligence language, LISP, which was developed by John
McCarthy at the Massachusetts Institute of Technology in
Cambridge, Massachusetts. In Europe, however, PROLOG
captured the interest of researchers and by the mid-1980s it became
the preferred language for building expert systems. In 1981, when
the Japanese government initiated a national project to develop
commercial artificial intelligence, it adopted PROLOG as its
standard programming language.

Many of the features that were once unique to PROLOG are now
used in modern object-oriented programming, a programming
technique that is becoming the standard for software development.
LISP
In computer science, acronym for List Processing. A list-oriented
computer programming language developed in 1959-1960 by John
McCarthy and used primarily to manipulate lists of data. LISP was a radical departure from the procedural languages (Fortran,
ALGOL) then being developed; it is an interpreted language in
which every expression is a list of calls to functions. LISP
continues to be heavily used in research and academic circles and
has long been considered the “standard” language for artificial-intelligence (AI) research, although PROLOG has made inroads into that position in recent years.
Fuzzy Logic
In computer science, a form of logic used in some expert systems
and other artificial-intelligence applications in which variables can
have degrees of truthfulness or falsehood represented by a range of
values between 1 (true) and 0 (false). With fuzzy logic, the outcome of an operation can be expressed as a degree of truth rather than as a certainty. For example, in addition to being either true or
false, an outcome might have such meanings as probably true,
possibly true, possibly false, and probably false. See also Artificial
Intelligence; Expert Systems.
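As a small illustration (a sketch using the classic min/max operators, not any particular fuzzy-logic library), truth values can be any number between 0 and 1:

```python
# Zadeh's fuzzy operators: min for AND, max for OR, 1 - x for NOT.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm = 0.7       # "the room is warm" is mostly true
humid = 0.2      # "the room is humid" is mostly false

print(f_and(warm, humid))   # 0.2 -> "warm AND humid" is mostly false
print(f_or(warm, humid))    # 0.7 -> "warm OR humid" is mostly true
print(f_not(warm))          # ~0.3 -> "not warm" is mostly false
```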
Fortran
In computer science, acronym for FORmula TRANslation. The
first high-level computer language (developed 1954-1958 by John
Backus) and the progenitor of many key high-level concepts, such
as variables, expressions, statements, iterative and conditional
statements, separately compiled subroutines, and formatted
input/output. Fortran is a compiled, structured language. The name
indicates its scientific and engineering roots; Fortran is still used
heavily in those fields, although the language itself has been
expanded and improved vastly over the last 35 years to become a
language that is useful in any field. See also Compiled Language;
High-Level Language; Structured Programming.
COBOL
In computer science, acronym for COmmon Business-Oriented
Language, a verbose, English-like programming language
developed between 1959 and 1961. Its establishment as a required
language by the U.S. Department of Defense, its emphasis on data structures, and its English-like syntax (compared to those of
Fortran and ALGOL) led to its widespread acceptance and usage,
especially in business applications. Programs written in COBOL,
which is a compiled language, are split into four divisions:
Identification, Environment, Data, and Procedure. The
Identification division specifies the name of the program and
contains any other documentation the programmer wants to add.
The Environment division specifies the computer(s) being used
and the files used in the program for input and output. The Data
division describes the data used in the program. The Procedure
division contains the procedures that dictate the actions of the
program. See also Computer.
Pascal
A concise procedural computer programming language, designed
1967-1971 by Niklaus Wirth. Pascal, a compiled, structured language built upon ALGOL, simplifies syntax while adding data
types and structures such as subranges, enumerated data types,
files, records, and sets. Acceptance and use of Pascal exploded
with Borland International's introduction in 1984 of Turbo Pascal,
a high-speed, low-cost Pascal compiler for MS-DOS systems that
has sold over a million copies in its various versions. Even so,
Pascal appears to be losing ground to C as a standard development
language on microcomputers. See also C; Compiled Language.
BASIC
In computer science, acronym for Beginner's All-purpose
Symbolic Instruction Code. A high-level programming language
developed by John Kemeny and Thomas Kurtz at Dartmouth
College in the mid-1960s. BASIC gained its enormous popularity
mostly because of two implementations, Tiny BASIC and
Microsoft BASIC, which made BASIC the first lingua franca of
microcomputers. Other important implementations have been
CBASIC (Compiled BASIC), Integer and Applesoft BASIC (for
the Apple II), GW-BASIC (for the IBM PC), Turbo BASIC (from
Borland), and Microsoft QuickBASIC. The language has changed over the years. Early versions are unstructured and interpreted.
Later versions are structured and often compiled. BASIC is often
taught to beginning programmers because it is easy to use and
understand and because it contains the same major concepts as
many other languages thought to be more difficult, such as Pascal
and C. See also Compiled Language; High-Level Language;
Interpreted Language; Structured Programming.
Object Oriented Programming
In computer science, a type of high-level computer language that
uses self-contained, modular instruction sets for defining and
manipulating aspects of a computer program. These discrete,
predefined instruction sets are called objects and they may be used
to define variables, data structures, and procedures for executing
data operations. In OOP, objects have built-in rules for
communicating with one another. By using objects as stable,
preexisting building blocks, programmers can pursue their main
objectives and specify tasks from the top down, manipulating or
combining objects to modify existing programs and to create
entirely new ones. See also Computer Program, Programming
Language.
One especially powerful feature of OOP languages is a property
known as inheritance. Inheritance allows an object to take on the
characteristics and functions of other objects to which it is
functionally connected. Programmers connect objects by grouping
them together in different classes and by grouping the classes into
hierarchies. These classes and hierarchies allow programmers to
define the characteristics and functions of objects without needing
to repeat source code, the coded instructions in a program. Thus,
using OOP languages can greatly reduce the time it takes for a
programmer to write an application, and also can reduce the size of
the program. OOP languages are flexible and adaptable, so
programs or parts of programs can be used for more than one task.
Programs written with OOP languages are generally shorter in length and contain fewer bugs, or mistakes, than those written with
non-OOP languages.
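A brief Python sketch of the inheritance property described above (the widget example and its names are invented): shared behavior is written once in a base class, and each subclass adds only what differs, so no source code is repeated:

```python
class Widget:                       # base class of the hierarchy
    def __init__(self, x, y):
        self.x, self.y = x, y       # position shared by every widget

    def move(self, dx, dy):         # written once, inherited by all
        self.x += dx
        self.y += dy

class Button(Widget):               # adds only what is specific to it
    def click(self):
        return "button pressed"

class Label(Widget):
    def __init__(self, x, y, text):
        super().__init__(x, y)      # reuse the base initializer
        self.text = text

b = Button(0, 0)
b.move(5, 3)                        # inherited, not re-implemented
print(b.x, b.y)                     # -> 5 3
```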
Object-oriented programming began with Simula, a programming
language developed from 1962 to 1967 by Ole-Johan Dahl and
Kristen Nygaard at the Norwegian Computing Center in Oslo,
Norway. Simula introduced definitive features of OOP, including
objects and inheritance. In the early 1970s Alan Kay developed
Smalltalk, another early OOP language, at the Palo Alto Research
Center of the Xerox Corporation. Smalltalk made revolutionary
use of a graphical user interface (GUI), a feature that allows the
user to select commands using a mouse. GUIs became a central
feature of operating systems such as Macintosh OS and Windows.
The most popular OOP language is C++, developed by Bjarne
Stroustrup at Bell Laboratories in the early 1980s. In 1995 Sun
Microsystems, Inc., released Java, an OOP language that can run
on most types of computers regardless of platform. In some ways
Java represents a simplified version of C++ but adds other features
and capabilities as well, and it is particularly well suited for writing
interactive applications to be used on the World Wide Web.

(b) Hardware
Speech processing
The main topics include:

* The human vocal and auditory systems
* Characteristics of speech signals: phonemes, prosody, IPA notation
* The lossless tube model of speech production
* Time- and frequency-domain representations of speech; window characteristics and time/frequency resolution tradeoffs
* Properties of digital filters: mean log response, resonance gain and bandwidth relations, the bandwidth expansion transformation, all-pass filter characteristics
* Autocorrelation and covariance linear prediction of speech; optimality criteria in the time and frequency domains; alternate LPC parameterizations
* Speech coding: PCM, ADPCM, CELP
* Speech synthesis: language processing, prosody, diphone and formant synthesis; time-domain pitch and speech modification
* Speech recognition: hidden Markov models and the associated recognition and training algorithms; language modelling; large-vocabulary recognition; acoustic preprocessing for speech recognition

Image processing
Modern digital technology has made it possible to manipulate
multi-dimensional signals with systems that range from simple
digital circuits to advanced parallel computers. The goal of this
manipulation can be divided into three categories:

* Image processing: image in -> image out
* Image analysis: image in -> measurements out
* Image understanding: image in -> high-level description out

We will focus on the fundamental concepts of image processing. Space does not permit us to make more than a few introductory remarks about image analysis. Image understanding requires an approach that differs fundamentally from the theme of this section. Further, we will restrict ourselves to two-dimensional (2D) image processing, although most of the concepts and techniques that are to be described can be extended easily to three or more dimensions. Readers interested in greater detail than presented here, or in other aspects of image processing, are referred to the sources listed in the References section.

We begin with certain basic definitions. An image defined in the "real world" is considered to be a function of two real variables,
for example, a(x,y) with a as the amplitude (e.g. brightness) of the
image at the real coordinate position (x,y). An image may be
considered to contain sub-images sometimes referred to as regions-
of-interest, ROIs, or simply regions. This concept reflects the fact
that images frequently contain collections of objects, each of which can be the basis for a region. In a sophisticated image processing
system it should be possible to apply specific image processing
operations to selected regions. Thus one part of an image (region)
might be processed to suppress motion blur while another part
might be processed to improve color rendition.

The amplitudes of a given image will almost always be either real numbers or integer numbers. The latter is usually a result of a
quantization process that converts a continuous range (say,
between 0 and 100%) to a discrete number of levels. In certain
image-forming processes, however, the signal may involve photon
counting which implies that the amplitude would be inherently
quantized. In other image forming procedures, such as magnetic
resonance imaging, the direct physical measurement yields a
complex number in the form of a real magnitude and a real phase.
For the remainder of this section we will consider amplitudes as reals
or integers unless otherwise indicated.

Definition
A digital image a[m,n] described in a 2D discrete space is derived
from an analog image a(x,y) in a 2D continuous space through a
sampling process that is frequently referred to as digitization. The
mathematics of that sampling process is described under Image Sampling below. For now we will look at some basic definitions associated with
the digital image. The effect of digitization is shown in Figure 1.

The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel.
The value assigned to the integer coordinates [m,n] with
{m=0,1,2,...,M-1} and {n=0,1,2,...,N-1} is a[m,n]. In fact, in most
cases a(x,y)--which we might consider to be the physical signal
that impinges on the face of a 2D sensor--is actually a function of
many variables including depth (z), color (λ), and time (t). Unless otherwise stated, we will consider the case of 2D, monochromatic, static images in this section.

The image shown in Figure 1 has been divided into N = 16 rows and M = 16 columns. The value assigned to every pixel is the
average brightness in the pixel rounded to the nearest integer
value. The process of representing the amplitude of the 2D signal
at a given coordinate as an integer value with L different gray
levels is usually referred to as amplitude quantization or simply
quantization.
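As a rough sketch of digitization (sampling plus amplitude quantization), the following NumPy code samples an invented continuous test image a(x,y) on a 16 x 16 grid and quantizes the amplitude to L = 256 gray levels:

```python
import numpy as np

def a(x, y):
    # An arbitrary continuous "scene" with values in [0, 1].
    return 0.5 * (1.0 + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y))

N, M = 16, 16                            # rows and columns
L = 256                                  # number of gray levels

n = np.arange(N).reshape(-1, 1) / N      # row (y) sample coordinates
m = np.arange(M).reshape(1, -1) / M      # column (x) sample coordinates
samples = a(m, n)                        # point-sample the scene

digital = np.round(samples * (L - 1)).astype(np.uint8)   # quantize

print(digital.shape, digital.dtype)      # (16, 16) uint8
print(digital[0, :4])                    # a few pixel values a[m, n]
```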

Image Sampling

Converting from a continuous image a(x,y) to its digital representation b[m,n] requires the process of sampling. In the ideal sampling system a(x,y) is multiplied by an ideal 2D impulse train:

b[m,n] = a(x,y) · Σ_m Σ_n d(x − mXo, y − nYo) = Σ_m Σ_n a(mXo, nYo) · d(x − mXo, y − nYo)

(with the sums running over all integers m and n), where Xo and Yo are the sampling distances or intervals, d(*,*) is the ideal impulse function, and the second equality follows from the sifting property of the impulse function. (At some point, of course, the impulse function d(x,y) is converted to the discrete impulse function d[m,n].) Square sampling implies that Xo = Yo.
Sampling with an impulse function corresponds to sampling with an infinitesimally small point. This, however, does not correspond to the usual situation as illustrated in Figure 1. To take the effects of a finite sampling aperture p(x,y) into account, we can modify the sampling model as follows:

b[m,n] = [a(x,y) ⊗ p(x,y)] · Σ_m Σ_n d(x − mXo, y − nYo)

where ⊗ denotes 2D convolution: the image is first blurred by the aperture and then ideally sampled. The combined effect of the aperture and sampling is best understood by examining the Fourier domain representation:

B(Ω, Ψ) = (1/(XoYo)) · Σ_m Σ_n A(Ω − mΩs, Ψ − nΨs) · P(Ω − mΩs, Ψ − nΨs)

where Ωs = 2π/Xo is the sampling frequency in the x direction and Ψs = 2π/Yo is the sampling frequency in the y direction. The aperture p(x,y) is frequently square, circular, or Gaussian, with the associated Fourier transform P(Ω, Ψ).
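A small NumPy sketch of the difference between ideal point sampling and a finite square aperture (the test image and sizes are invented); averaging over the aperture acts as a low-pass prefilter applied before the ideal sampling:

```python
import numpy as np

fine = 256                                   # fine grid standing in for a(x, y)
x = np.linspace(0.0, 1.0, fine, endpoint=False)
a = 0.5 * (1.0 + np.sin(8 * np.pi * x[None, :]) * np.cos(8 * np.pi * x[:, None]))

N = 16                                       # sampling grid (Xo = Yo = 1/16)
step = fine // N                             # fine-grid points per interval

point_sampled = a[::step, ::step]            # ideal (point) sampling

blocks = a.reshape(N, step, N, step)         # finite square aperture:
aperture_sampled = blocks.mean(axis=(1, 3))  # average each cell, then sample

# The aperture-averaged image varies less: high frequencies are attenuated.
print(point_sampled.std(), aperture_sampled.std())
```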

Different transducers
Cameras can take the place of eyes. For speech processing, the input side offers a vast range of microphones of differing sensitivities, and the output side offers speakers (loudspeakers, headphones, woofers, etc.); there are likewise many sensors for detecting light, and so on. Ultimately, everything depends on your design logic and on how you apply artificial intelligence, whether you are working on expert systems or robotics.
Hydraulic system
A hydraulic system is the delivery system of a modern braking setup. It uses fluid to transmit the force applied at the pedal to the wheel cylinders, where it is converted back into mechanical energy to actuate the brake shoes or disc calipers.
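The force multiplication follows Pascal's law (a standard result, with illustrative numbers invented here): the pressure is the same everywhere in the fluid, so

F_wheel = F_pedal × (A_wheel / A_pedal)

For example, 100 N applied to a 2 cm² master-cylinder piston produces a pressure of 50 N/cm², which delivers 100 × (8/2) = 400 N at an 8 cm² wheel-cylinder piston.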

Stepper motors
Stepper motors are used in robots to produce their various movements. To identify the wires of a unipolar stepper motor:

1. Isolate the Common Power wire(s) by using an ohmmeter to check the resistances between pairs of wires. The Common Power wire will be the one with only half as much resistance between it and all the others.

This is because the Common Power wire has only one coil between it and each other wire, whereas each of the other wires has two coils between them. Hence half the resistance.

2. Identify the wires to the coils by supplying a voltage on the Common Power wire(s) and keeping one of the other wires grounded while grounding each of the remaining three wires in turn and observing the results.

* Select one wire and ground it. Assume it is connected to Coil 4.
* Keeping it grounded, ground each of the other three wires one by one.
* Grounding one wire should make the rotor turn a little clockwise: that will be the wire connected to Coil 3.
* Grounding one wire should make the rotor turn a little anticlockwise: that will be the wire connected to Coil 1.
* Grounding one wire should do nothing: that will be the wire connected to Coil 2.
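Once the coils are identified, driving the motor is a matter of energizing them in sequence. The sketch below assumes a Raspberry Pi with the RPi.GPIO library and a suitable transistor driver between each GPIO pin and coil; the pin numbers are hypothetical:

```python
import time
import RPi.GPIO as GPIO          # assumes a Raspberry Pi host

COIL_PINS = [17, 18, 27, 22]     # hypothetical GPIO pins for Coils 1-4

# Full-step sequence: energizing the coils in order steps the rotor;
# traversing the sequence in reverse reverses the rotation.
SEQUENCE = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

GPIO.setmode(GPIO.BCM)
for pin in COIL_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

try:
    for _ in range(128):                      # repeat the sequence 128 times
        for pattern in SEQUENCE:
            for pin, level in zip(COIL_PINS, pattern):
                GPIO.output(pin, level)
            time.sleep(0.01)                  # let the rotor settle each step
finally:
    GPIO.cleanup()                            # always release the pins
```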

Study Approach to Artificial Intelligence
Fundamentals
1-The AI Field
2-Problem Solving
3-Expert Systems

Knowledge Engineering
1-Knowledge Acquisition
2-Knowledge Representation
3-Inferencing
4-Uncertainty

AI Technologies
1-Natural Language Processing
2-Speech Processing
3-Computer Vision
4-Robotics

Building Knowledge-Based Systems


1-Development Process
2-Tools
3-User Interface And Other Design Topics

Implementing AI Solutions
1-Integration
2-Implementation
3-Impacts

Advanced Topics
1-Neural Computing
2-Special Topics And The Future

Application And Cases


1-Illustrative Application
2-Project And Case Study
3-Mission-Critical Expert System

References
Books
Introduction to Artificial Intelligence, by Eugene Charniak

Introduction to Artificial Intelligence: Second, Enlarged Edition, by Philip C. Jackson

Intelligence and Artificial Intelligence: An Interdisciplinary Debate, by Ulrich Ratsch, I. O. Stamatescu, and M. M. Richter

Artificial Intelligence: Methods and Applications, edited by Nikolaos G. Bourbakis

Bayesian Artificial Intelligence, by Kevin B. Korb and Ann Nicholson

Parallel Computing for Pattern Recognition & Artificial Intelligence, by N. Ranganathan

Artificial Intelligence and Automation, by Bourbakis

Artificial Intelligence Illuminated, by Ben Coppin

Artificial Intelligence Programming, by Eugene Charniak, Christopher K. Riesbeck, Drew V. McDermott, and J. Meehan

Artificial Intelligence in Logic Design, edited by Svetlana N. Yanushkevich

Artificial Intelligence Applications and Innovations, by B. Wang

Artificial Intelligence for Computer Games: An Introduction, by John David Funge

Artificial Intelligence: A Philosophical Introduction, by Copeland

Fundamentals of the New Artificial Intelligence: Beyond Traditional Paradigms, by Toshinori Munakata

Artificial Intelligence in Medicine: 9th Conference on Artificial Intelligence in Medicine, edited by Michel Dojat, Elpida Keravnou, and Pedro Barahona

Websites
www.aaai.com
www.elsevier.com/locate/artint
www.alicebot.org
www.ai.sri.com
www.cs.berkeley.edu/~russell/ai.html
www.a-i.com
www.singinst.org
www.inf.ed.ac.uk
www.auai.org
www.ijcai.org
www.ida.liu.se/ext/etai
www.aiai.ed.ac.uk
www.tandf.co.uk/journals/titles/08839514.asp
www.oefai.at/ofai-launch.html
www.cs.iastate.edu/~honavar/aigroup.html
www.eccai.org
www.ifi.unizh.ch/ailab
www.acm.org/sigart
www.gameai.com/ai.html
www.elsevier.com/wps/product/cws_home/505627
www.aisb.org.uk
www.norvig.com/paip.html
www.generation5.org
www.iiconference.org
www.dsic.upv.es/ecai2004
www.neuron.co.uk
