
Introduction to Artificial Intelligence

Linda MacPhee-Cobb, herselfsai.com

1994
Contents

0.1 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
0.2 License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1 Neurology and Machine Learning 4
1.1 Some neurology that is related to artificial intelligence . . . . . . 4
2 Searching 9
2.1 Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 GUI Java Search Tool . . . . . . . . . . . . . . . . . . . . 14
3 Games and Game Theory 60
3.1 Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2 Intelligent games . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4 Misc. AI 64
4.1 AI Language, speech . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.1.1 Hidden Markov Models . . . . . . . . . . . . . . . . . . . 65
4.2 Fuzzy Stuff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Evolutionary AI . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.4 Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.5 Turing Machines, State Machines and Finite State Automaton . 94
4.6 Blackboard Systems . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.7 User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.8 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . 96
4.9 Bayesian Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5 Reasoning Programs and Common Sense 98
5.1 Common Sense and Reasoning Programs . . . . . . . . . . . . . . 98
5.2 Knowledge Representation and Predicate Calculus . . . . . . . . 99
5.3 Knowledge based/Expert systems . . . . . . . . . . . . . . . . . . 105
5.3.1 Perl Reasoning Program 'The Plant Dr.' . . . . . . . . . 108
6 Agents, Bots, and Spiders 125
6.1 Spiders and Bots . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.1.1 Java Spider to check website links . . . . . . . . . . . . . 126

6.2 Adaptive Autonomous Agents . . . . . . . . . . . . . . . . . . . . 151
6.3 Inter-agent Communication . . . . . . . . . . . . . . . . . . . . . 151
6.3.1 Java Personal Agent . . . . . . . . . . . . . . . . . . . . . 154
7 Neural Networks 244
7.1 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.2 Hebbian Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.3 Perceptron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.4 Adeline Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . . 247
7.5 Adaptive Resonance Networks . . . . . . . . . . . . . . . . . . . . 248
7.6 Associative Memories . . . . . . . . . . . . . . . . . . . . . . . . 248
7.7 Probabilistic Neural Networks . . . . . . . . . . . . . . . . . . . . 249
7.8 Counterpropagation Network . . . . . . . . . . . . . . . . . . . . 250
7.9 Neural Net Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . 250
7.10 Kohnonen Neural Nets (Self Organizing Networks) . . . . . . . . 250
7.10.1 C++ Self Organizing Net . . . . . . . . . . . . . . . . . . 252
7.11 Backpropagation . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
7.11.1 GUI Java Backpropagation Neural Network Builder . . . 266
7.11.2 C++ Backpropagation Dog Track Predictor . . . . . . . 328
7.12 Hopfield Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.12.1 C++ Hopfield Network . . . . . . . . . . . . . . . . . . . 396
8 AI and Neural Net Related Math Online Resources 404
8.1 General Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
8.1.1 C OpenGL Sierpinski Gasket . . . . . . . . . . . . . . . . 405
8.1.2 C OpenGL 3D Gasket . . . . . . . . . . . . . . . . . . . . 408
8.1.3 C OpenGL Mandelbrot . . . . . . . . . . . . . . . . . . . 410
8.2 Specific Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
9 Bibliography 419
9.1 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
9.2 Links to other AI sites . . . . . . . . . . . . . . . . . . . . . . . . 421

0.1 Preface
The website for this book has moved! http://herselfsai.com
There are new chapters, source code, and news on that website. Source code can be downloaded directly from there.
After I finish adding the new chapters and updating and correcting the old chapters on the website I'll update this PDF. For now you should consider this PDF outdated; 1994 was a long time ago in the world of artificial intelligence.
This book is intended to be an introduction to artificial intelligence and neural networks. I have included lots of source code in C/C++ and Java, since having examples to experiment with can make the concepts clearer. Although I touch on some math, there isn't enough time and space to go through it in detail. A long list of Internet resources that do an excellent job of explaining the math needed for AI and NN is included at the back.

0.2 License
This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

Chapter 1

Neurology and Machine Learning

1.1 Some neurology that is related to artificial intelligence
One of the more difficult tasks artificial intelligence creators face is: how do you know it is intelligent? There is not even agreement as to what constitutes intelligence in people, let alone what constitutes intelligence in a machine.
There is some agreement on what is required for artificial intelligence: pattern recognition; the ability to create or learn; the ability to keep and access stored information; problem solving ability; communication ability; and the ability to form intentions, i.e. self awareness.
Much progress has been made using neural nets to do pattern recognition. The ability to create, learn, keep and access stored information has been partially achieved with blackboard systems and neural nets. Communication is lagging behind the other areas. Some speech recognition has been done with neural nets, but this is pattern recognition, not actual two-way communication. The ability to form intentions is hotly debated. What constitutes self-awareness in a machine is also hotly debated, and a definition or test has not yet been agreed upon.
It is not yet clear whether creativity is a part of general intelligence or a separate entity. The main traits of creativity are generally agreed to be: a lack of conventionality and the willingness to question the status quo; the ability to recognize connections, be they similar or dissimilar; and an appreciation of and skill in any of the arts. Persistence or motivation is often considered a characteristic as well. It is also necessary to have knowledge in the field: a lucky guess that is not understood is hardly creative.
It is clear there is such a thing as 'general intelligence'. We can all readily identify intelligence, or the lack thereof, in people we come across. It is also clear that intelligence tests measure one's ability to take tests, and one's education, rather than one's intelligence. General intelligence remains fixed over the life of an individual. Education grows as a person learns more, whether through self education, academia or other methods. Some areas of the brain can be damaged without harming a person's intelligence, instead only costing the person some memories or skills.
Hemispherectomy, done in extreme cases of epilepsy, surgically removes one hemisphere of the brain. There is some paralysis on the opposite side of the body, but interestingly intelligence is preserved. Often intelligence is increased after surgery, probably because the seizures and the devastating effect they have are stopped.
Psychologists have tested for self awareness by placing dots on the foreheads of sleeping animals and judging them self aware if they recognized the dot in the mirror as being on them. It is now clear that self awareness is a matter of degree, not a have-or-have-not property of beings.
It is also becoming clear that consciousness is a dynamic, ongoing process in the brain. It is not something that can be found in the pieces of the brain but only in the operation of the brain. How a piece of software works cannot be determined by dismantling the computer into circuits and chips, nor can consciousness be understood by only looking at pieces of the brain.
The level of intelligence of a species is related to a constant multiplied by brain size divided by body size. Male human brains average 1371 cc, female brains average 1216 cc. Normal IQ scores have been documented for brains between 735 cc and 1470 cc. Before anyone gets confused, remember that it is the excess neurons above and beyond what are needed for body maintenance that matter; since male bodies are usually much larger than female bodies, more neurons are needed for general maintenance.
There are about ten billion neurons in the human brain. Each of these has about ten thousand connections to other neurons. There are over two hundred known neurotransmitters interacting with these neurons. The axon is the single connection leading away from the neuron, sending out a frequency through 10,000 or so branches off to other neurons. Dendrites are the many connections leading in to the neuron. The incoming dendrite electric pulses are superimposed on each other; the intensity of the incoming wave is the important part. Even at rest a small pulse is maintained in the neuron. There is a set point, a preferred point, that the population returns to between excitation states. As a person ages the steady state amplitude increases, and more neurons act as parts of groups, which also increases the amplitude; neurotransmitters can increase or decrease this amplitude.
Neurons are in the gray matter of the cortex. The ones you are born with are all you get in the neocortex. Neurons increase branching and size as you learn skills and knowledge, and from birth to adolescence they die off if unused. If some do not die off, mental illness results. They communicate using neurotransmitters. The shape varies by task. There are two main types of neurons in the brain, spiny and smooth. The spiny neurons make up about 80% of the neurons in the brain and are further broken into two groups, pyramidal and stellate. Neurons change their behavior with experience. Axons are in the white matter of the cortex and form the long distance connections. They increase with age. The more white matter, the faster communication occurs. Older people think faster.
Neurons do not affect things individually. They each affect the conditions in the neighborhood. Each is connected to every other neuron in the brain within a few connections. The neurons form populations that have many semi autonomous independent elements; each has many weak interactions with many others; the input-output relationships are non-linear; and from the neuron's point of view there is endless energy coming in and leaving. The connections between neurons can be in series or in parallel, branch into many, or reduce from many neurons to few or one. The feedback can be both cooperative, both inhibitory, or one cooperative and the other inhibitory. Some neurons have only local connections and can be contributory or inhibitory. Some neurons are long distance; these are always excitatory.
When the density of connections is deep enough the neurons begin acting as part of the group rather than as individuals. Chaotic attractors and point attractors form to stabilize the pattern. Once part of a group the neuron gives as many pulses as it receives. The 40 Hz background cycle keeps the steady state going instead of dying off. Positive and negative feedback loops are what allow for intentional responses to stimuli.
A neuron takes the incoming pulses, converts them to waves, sums them, converts the integrated signal to a pulse train, and sends it out on the axon if it is over a certain threshold. The charge travels down the dendrite toward the soma (the main part of the cell), and neurotransmitters are released as the charge moves along from one neuron to the next. When the frequency of pulses increases, each pulse is diminished in the amount it adds to the wave amplitude, so the amplitude cannot increase above a certain level. Incoming flows are excitatory, outgoing flows act as inhibitors. The neurotransmitters turn the flow off and on and then rapidly diminish. After firing, neurons need time to recover before they can re-fire.
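The threshold behavior described above can be sketched as a toy model: sum the incoming amplitudes, fire if the total crosses a threshold, then rest for a short refractory period before firing again. The class name and the constants below are invented purely for illustration; this is not a neurological model.

//ToyNeuron.java
//a toy illustration of the threshold behavior described above
class ToyNeuron {

    double threshold = 1.0;      //firing threshold (arbitrary units)
    double restingLevel = 0.1;   //small pulse maintained even at rest
    int refractory = 0;          //time steps left before the neuron can fire again

    //sum the incoming dendrite amplitudes and decide whether to fire
    public boolean step(double[] incoming) {

        if (refractory > 0) {    //still recovering from the last firing
            refractory--;
            return false;
        }

        double sum = restingLevel;
        for (double amplitude : incoming)
            sum += amplitude;

        if (sum > threshold) {   //integrated signal crossed the threshold: fire
            refractory = 2;      //needs time to recover before it can re-fire
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ToyNeuron n = new ToyNeuron();
        double[][] inputs = { {0.2, 0.3}, {0.5, 0.6}, {0.5, 0.6}, {0.1, 0.1} };
        for (double[] in : inputs)
            System.out.println(n.step(in) ? "fire" : "quiet");
    }
}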
Neuromodulatory neurons receive input from all over the brain, most importantly the limbic system during the formation of intentional action. They have widely branching axons that do not form synapses but release the neuromodulators throughout the brain, forming a global influence; they move about in the neuropil. Neuromodulators (histamine, serotonin, dopamine, melatonin, CCK, endorphins, vasopressin, oxytocin, acetylcholine, noradrenaline and others) enhance or diminish the effectiveness of synapses, bringing lasting, cumulative changes.
Neurotransmitters act locally. One of them, oxytocin, is a neurotransmitter released during orgasm and childbirth; it erases memories and is also related to bonding between couples and between parents and children. NMDA modulation of glutamate may be related to intelligence.
The cerebral cortex is about 2.67 square feet when stretched out. It is about six cells deep. The major wrinkles are common to everyone, just as everyone's basic face structure has two eyes, a nose and a mouth. The wrinkles are individual in the same way that people's faces are individual despite the same basic features. The bottom two layers of the cortex send connections to other parts of the brain, the third layer from the bottom receives incoming signals, and the top three layers receive input from that third-from-the-bottom layer.
Layer 1: up
Layer 2: up
Layer 3: up
Layer 4: incoming
Layer 5: outgoing
Layer 6: outgoing
Large areas of the cortex are known to perform different tasks, such as language or math. Gifted people tend to have a more differentiated pre-frontal cortex, and their brain organization is also different. There are also smaller areas, about a half inch square, known as 'bumps' or 'patches'; each of these includes millions of neurons and flashes off and on at five to twenty times per second. Most perceptions, behaviors and experiences are somehow recorded in these patches.
The frontal lobes of the cortex contain the motor cortices, the connections to the muscles and nerves that control motion. This part of the brain also contains a map of the body. The frontal lobes are highly involved in forming intent, and in the length of attention spans. The pre-frontal cortex fires at different rates during delayed-choice tasks, depending on the previous focus of attention, and is most active during IQ tests. Different small areas of the pre-frontal cortex are used for different types of tasks. The difference in the pre-frontal cortex is not in structure but in the places it connects to. The left pre-frontal cortex encodes memories and the right pre-frontal cortex retrieves memories. Working memory, also found in this area, is not just a blank scratch pad but performs other functions as well. Dorsal and lateral areas of the frontal lobe deal with cognitive functions, while the medial and ventral areas handle social skills and empathy.
The hippocampuses are two structures about the size and shape of your little finger deep inside your brain. They release ACh (acetylcholine) along one of the two cortex layers where dendrites are found when a new thing is to be learned. The hippocampuses are responsible for sending the signals to compress new information into existing information, treat it as something new and separate, or recall existing information.
Human and animal learning is broken into three main groupings: instrumental conditioning, classical conditioning, and observational learning. In instrumental conditioning, specific behavior is rewarded or punished. In classical conditioning, two stimuli are presented together repeatedly and the animal or person learns to associate one stimulus with the other. This is the same as Pavlov's conditioning of dogs with bells, to which the dog does not initially salivate, and food, to which the dog salivates from the beginning. In observational learning, behavior is learned by watching others.
Memory in humans and animals has three main divisions: sensory memory covers after-images persisting in the eye after focus is turned away; short term working memory is where only a few things are kept, the working buffer or cache; long term storage handles semi-permanent to permanent information storage. Falsely implanted memories do not record sensory data. We are now beginning to be able to differentiate between real memories, which have recorded sensory data, and false memories using fMRIs.

Chapter 2

Searching

2.1 Searching
Searches are broken into two main categories: uninformed (brute-force, blind) searches and informed (heuristic, directed) searches. Uninformed searches are done when there is no information about a preferred search path. Informed searches have some information to help pick search paths; usually a rule of thumb is used to reduce the search area. A traveling salesman search going from Boston to Dallas is uninformed if it begins searching randomly, or methodically with no preference. An informed search knows Dallas is southwest of Boston, so it begins and concentrates its search in that direction.
Directed graphs (state-space graphs) are used to keep track of possible steps and the state of the world from step to step. The edges are used to define steps and the nodes define states of the world. A state-space graph has three basic components: a start node; functions that transform a state description representing one state into the representation after an action is taken; and a goal condition.
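Those three components map directly onto code. The small interface below is one possible sketch (the names are invented for this illustration, not part of the search tool later in this chapter) of how a search problem can be handed to a generic search routine: a start node, a successor function for the transform step, and a goal test.

//SearchProblem.java
//a minimal sketch of the three state-space components described above
import java.util.List;

interface SearchProblem<S> {

    S startState();                 //the start node

    List<S> successors(S state);    //functions that transform one state into the next

    boolean isGoal(S state);        //the goal condition
}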
Four main criteria for evaluating search strategies are:
• completeness; likelihood of finding a solution if a solution exists
• time complexity; (path cost) time to find a solution (order O(n))
• space complexity; amount of memory (RAM) needed
• optimality; does it find the best solution, if several solutions exist?
There are three steps you must take to avoid loops in your search algorithm:
do not return to the state you just came from; do not create paths with cycles
in them; do not regenerate prior states.
The Breadth First Search first checks all of the nodes directly connected to the start node, then it checks the nodes connected to each of those nodes. In a tree graph it checks the top level, then the second level nodes, and so on. If a solution exists, the Breadth First Search will find it (it is a complete, algorithmic search), and it will find the shallowest solution first.
• Breadth First Search: Algorithm
• top::
• Is the queue empty?
• true: quit, no solution
• false:
• Remove the first node
• Is it the solution node?
• true: return node and quit
• false: expand node and put the children of this node at the end of the queue
• loop to top:
Depth First Search expands the most recently generated node first rather than searching level by level. (Uniform Cost Search is the related variant that expands the lowest-cost node first; if the cost is equal on all levels it behaves the same as the Breadth First Search.) Depth First will probably find a solution faster than Breadth First if several solutions exist. It may get stuck on dead ends and not find a solution even if a solution exists, so it is not a complete, algorithmic search. It may not find the shallowest or least cost solution, but it uses far less memory than the Breadth First Search. Usually a boundary, a depth bound, is placed so that if a solution isn't found at a certain depth the search backs up and tries the next section.
• Depth First Search: Algorithm
• top::
• Is the queue empty?
• true: quit, no solution
• false:
• Remove the first node
• Is it the solution node?
• true: return node and quit
• false: expand node and put the children of this node at the front of the queue
• loop to top:

Iterative Deepening uses only the memory of a Depth First Search, but will find the shallowest solution if any solution exists. It does a Depth First Search with a depth bound of one, then a Depth First Search with a depth bound of two, and continues until a solution is found.
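One compact way to write Iterative Deepening is a depth-limited depth-first search wrapped in a loop that raises the bound by one each pass. The sketch below assumes the SearchProblem interface sketched earlier; both are illustrations rather than part of the GUI search tool.

//IterativeDeepening.java
//depth-first search with a depth bound, repeated with bounds 1, 2, 3, ...
import java.util.List;

class IterativeDeepening<S> {

    public S search(SearchProblem<S> p, int maxDepth) {
        for (int bound = 1; bound <= maxDepth; bound++) {   //raise the depth bound each pass
            S found = depthLimited(p, p.startState(), bound);
            if (found != null)
                return found;       //shallowest solution found
        }
        return null;                //no solution within maxDepth
    }

    private S depthLimited(SearchProblem<S> p, S state, int bound) {
        if (p.isGoal(state))
            return state;
        if (bound == 0)
            return null;            //hit the depth bound, back up and try elsewhere
        for (S child : p.successors(state)) {
            S found = depthLimited(p, child, bound - 1);
            if (found != null)
                return found;
        }
        return null;
    }
}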
Constraint Satisfaction Search is a search in which a set of variables must meet a set of constraints or conditions rather than reach a goal (scheduling and the eight queens problem are examples of this). There are two main methods of solving constraint problems. One is 'Constructive Methods', which builds a solution piece by piece; the second is 'Heuristic Repair', which starts from a random candidate solution and moves any piece that doesn't fit into a place where it meets the constraints, as sketched below. The graph search algorithms are used to look for solutions in state or constraint graphs.
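As an illustration of the Heuristic Repair idea, the sketch below solves the eight queens problem by starting from a random placement, then repeatedly moving a conflicting queen to the row in its column that violates the fewest constraints (often called min-conflicts). The class name and the step limit are arbitrary choices made for the example.

//HeuristicRepair.java
//heuristic repair (min-conflicts) for the eight queens problem
import java.util.Random;

class HeuristicRepair {

    static int n = 8;
    static Random rand = new Random();

    //count how many other queens attack the queen at (col, row)
    static int conflicts(int[] rows, int col, int row) {
        int count = 0;
        for (int c = 0; c < n; c++) {
            if (c == col) continue;
            if (rows[c] == row || Math.abs(rows[c] - row) == Math.abs(c - col))
                count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] rows = new int[n];
        for (int c = 0; c < n; c++) rows[c] = rand.nextInt(n);   //random starting placement

        for (int step = 0; step < 10000; step++) {
            int col = rand.nextInt(n);
            if (conflicts(rows, col, rows[col]) == 0) continue;  //this piece already fits
            int bestRow = rows[col], best = Integer.MAX_VALUE;
            for (int row = 0; row < n; row++) {                  //move it where it breaks the fewest constraints
                int c = conflicts(rows, col, row);
                if (c < best) { best = c; bestRow = row; }
            }
            rows[col] = bestRow;
        }

        boolean solved = true;
        for (int c = 0; c < n; c++)
            if (conflicts(rows, c, rows[c]) > 0) solved = false;
        System.out.println(solved ? "solved" : "not solved after 10000 repairs");
        for (int c = 0; c < n; c++) System.out.print(rows[c] + " ");
    }
}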
Greedy Search is a best-first strategy: it arrives at a solution by making a sequence of choices, each of which looks the best at the moment. This is like making change in a store: first you deal out the largest coins, quarters, then when the remaining difference is less than .25 you hand out dimes until it is less than .10, and so on (a code sketch follows the algorithm below). Cost is estimated using a heuristic function h(n), which gives the estimated cost of the cheapest path from node n to the goal state. Greedy search is similar to Depth First Search.
• Greedy Search: Algorithm
• top::
• Have we got a solution?
• true: quit, return answer
• false: grab the largest/best selection we can
• loop to top:
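The change-making example can be written directly as code. This sketch assumes US coin values; like any greedy method it only gives the best answer when the problem cooperates, which happens to be the case for these coins.

//GreedyChange.java
//make change the greedy way: always hand out the largest coin that still fits
class GreedyChange {

    public static void main(String[] args) {

        int[] coins = { 25, 10, 5, 1 };   //quarters, dimes, nickels, pennies
        int amount = 67;                  //67 cents to hand back

        for (int coin : coins) {
            int count = amount / coin;    //grab the largest/best selection we can
            amount -= count * coin;
            System.out.println(count + " x " + coin + " cents");
        }
    }
}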
The following searches are heuristic. They combine one or another of the
above searches with a rule of thumb or a weighting method to direct the search.
Several evaluation functions exist which are used to construct an ordered search.
A* (A star): This algorithm combines a best-first search with a uniform cost search, usually breadth first. h(n) is the best-first heuristic estimate, which is added to g(n), the known path cost: f(n) = g(n) + h(n). The lowest cost f(n) is followed first. Often the example given is the fifteen square puzzle where you slide the numbers to put them in order. A* works by examining the surrounding squares, beginning with the most promising of them, and repeating that until the puzzle is solved.
• A*: Algorithm
• Put the first node on SearchList
• loop::
• Pop the top node on SearchList and put it on DoneList
• if no nodes are on SearchList, break, no solution
• is this node the solution?
• ..yes
• ....break with answer
• ..no
• ....calculate f(n) for each node off of this node
• ....check DoneList, if a node is on DoneList discard it
• ....add each node to SearchList in order of smallest f(n) (including previous nodes on SearchList)
• ....loop::
h(n), an 'admissible heuristic', must be chosen in a way that it never overestimates the path cost. If you are trying to find a route between two cities then h(n) is the straight line distance between the two cities. The better the h(n) function is, the better the search will work.
Some examples of h(n): for the sliding block children's puzzle that has numbered blocks you order by sliding them about, h(n) might be the number of tiles not yet in the correct location; for a path from one location to another, h(n) might be the distance from the expanded city node to the goal.
For a given h(n) function, A* is optimally efficient: it will find and expand fewer nodes than any other algorithm using that heuristic. A* is a complete algorithm, and it stays sub-exponential as long as the error of h(n) grows no faster than O(log h*(n)), where h*(n) is the true cost of reaching the goal. It will also find the lowest cost path from start to finish if there is a path.
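The search tool later in this chapter orders cities only by straight-line distance to the goal, which is closer to a greedy best-first ordering. A fuller sketch of f(n) = g(n) + h(n) with a priority queue is shown below; the interfaces and names are invented for the illustration and the heuristic is assumed never to overestimate.

//AStarSketch.java
//a sketch of A* using f(n) = g(n) + h(n); names here are for illustration only
import java.util.*;

class AStarSketch<S> {

    interface Heuristic<S> { double h(S state); }               //estimate that never overestimates
    interface Expander<S>  { Map<S, Double> successors(S s); }  //neighbor -> step cost

    public List<S> search(S start, S goal, Expander<S> expand, Heuristic<S> heur) {

        Map<S, Double> g = new HashMap<>();          //best known path cost so far
        Map<S, S> cameFrom = new HashMap<>();
        PriorityQueue<S> open =                      //ordered by f(n) = g(n) + h(n)
            new PriorityQueue<>(Comparator.comparingDouble(s -> g.get(s) + heur.h(s)));

        g.put(start, 0.0);
        open.add(start);

        while (!open.isEmpty()) {
            S current = open.poll();
            if (current.equals(goal)) {              //lowest f(n) node is the goal: done
                LinkedList<S> path = new LinkedList<>();
                for (S s = current; s != null; s = cameFrom.get(s)) path.addFirst(s);
                return path;
            }
            for (Map.Entry<S, Double> e : expand.successors(current).entrySet()) {
                double tentative = g.get(current) + e.getValue();
                if (tentative < g.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                    g.put(e.getKey(), tentative);    //found a cheaper path to this node
                    cameFrom.put(e.getKey(), current);
                    open.remove(e.getKey());         //re-insert so the queue re-orders it
                    open.add(e.getKey());
                }
            }
        }
        return null;                                 //no path exists
    }
}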
IDA* (iterative deepening A*): A* that does a depth-first search up to a cost limit in place of the breadth-first part.
SMA* (simplified memory bounded A*): A* constrained to stay within memory space; this algorithm fills available memory, then drops the highest cost node to make room for the new node.
RBFS (recursive best-first search): this uses depth-first and best-first together. It calculates f(n) for all the nodes expanded off the current node, then backs up the tree and re-calculates f(n) for the previous nodes. Then it expands the smallest f(n) of those nodes.
Planning out a set of steps to reach a goal may be done with STRIPS rules using different search techniques. Searching a group of plans begins with an incorrect or incomplete plan that is changed until it satisfies the situation. Sometimes a rule is learned if it will save time and has general applicability. STRIPS combines state-space search and situational calculus in an effort to overcome the problems of situational calculus. Situational calculus is a form of first-order predicate calculus with states, actions, and the effects of states after actions have taken place. A list of states is kept. States are treated as things and actions are treated as functions. The effects of actions are mapped onto the states. The effects of actions on the states cannot always be inferred, and this is a major weakness of situational calculus. STRIPS has a set of precondition literals, a set of delete literals and a set of add literals. To obtain the after-action state using a forward search, the literals in the delete list are removed and all of the literals in the add list are added. Everything not in the delete list is carried over to the next state.
A recursive STRIPS method, adding to each achieved part of the state, can also be used. This is the method used in the 'General Problem Solver', a commonsense reasoning program. It uses a global data structure that is set to the initial state and changed until the goal state is reached. The Sussman anomaly occurs if a state closer to the goal must be undone to achieve the goal state; breadth-first searches can sometimes work around this. A backward search with STRIPS works backward, grabbing sub-goals as it goes. It is usually more efficient, but it is also more complicated, and Sussman anomalies appear here as well.
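A STRIPS operator and the forward application just described fit in a few lines of code: if every precondition literal is in the current state, remove the delete-list literals, add the add-list literals, and carry everything else over. The blocks-world literals below are made up for the example.

//StripsSketch.java
//forward application of a STRIPS operator: preconditions, delete list, add list
import java.util.*;

class StripsSketch {

    static class Operator {
        Set<String> pre, del, add;
        Operator(Set<String> pre, Set<String> del, Set<String> add) {
            this.pre = pre; this.del = del; this.add = add;
        }

        //the operator applies only when every precondition literal holds
        boolean applicable(Set<String> state) { return state.containsAll(pre); }

        //delete the delete-list literals, add the add-list literals, keep the rest
        Set<String> apply(Set<String> state) {
            Set<String> next = new HashSet<>(state);   //everything not deleted carries over
            next.removeAll(del);
            next.addAll(add);
            return next;
        }
    }

    public static void main(String[] args) {
        Set<String> state = new HashSet<>(Arrays.asList(
            "on(A,Table)", "on(B,Table)", "clear(A)", "clear(B)", "handEmpty"));

        //stack A onto B (literals invented for the example)
        Operator stackAonB = new Operator(
            new HashSet<>(Arrays.asList("clear(A)", "clear(B)", "on(A,Table)", "handEmpty")),
            new HashSet<>(Arrays.asList("on(A,Table)", "clear(B)")),
            new HashSet<>(Arrays.asList("on(A,B)")));

        if (stackAonB.applicable(state))
            state = stackAonB.apply(state);

        System.out.println(state);
    }
}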
Following is the code showing the effects of various types of searches between US cities. It provides a GUI interface and graphical output.

2.1.1 GUI Java Search Tool

//AStar.java
//www.timestocome.com
//Fall 2000

//for use with the gui search tool


//this class does A* searches

import java.io.*;
import java.util.*;

class AStar
{

public void AStar(){ }

public Vector AStarSearch(Vector in, String start, String finish)


{

Vector out = new Vector();


Vector temp = new Vector();

City begin = new City ( "", "", 0, 0);


City end = new City ( "", "", 0, 0);

//prime loop and set up things


Enumeration i = in.elements();

while(i.hasMoreElements() ){
City t = (City)i.nextElement();
if ( ((t.city + ", " + t.state).compareTo(start) ) == 0){
begin = t;
}
if( ((t.city + ", " + t.state).compareTo(finish) ) == 0){
end = t;
}
}

//get top node off of vector and see if it is destination
temp.addElement(begin);
out.addElement(begin);

// if so return... nothing to do
if( (finish.compareTo(start) ) == 0 ){
return out;
}

//loop
Enumeration j = temp.elements();

while ( j.hasMoreElements() ){
// is queue empty? if so return failure

// else get top node off temp


City z = (City)temp.remove(0);

//see if it is destination if so return successful

if( ( (finish).compareTo(z.city + ", " + z.state) ) == 0) {

return out;

}else{

//else grab each edge city, sort by closest distance


//to destination city and put at front of queue

grabEdges ge = new grabEdges();


Vector q1 = ge.grabEdges(in, z);

Vector q2 = sortEdges( q1, end );

Enumeration k = q2.elements();

while( k.hasMoreElements() ) {

City tc1 = (City)k.nextElement();

if( out.contains(tc1) ){

}else{

out.add(0, tc1);
temp.add(0, tc1);

}
}

}

}
//end loop

return out;
}

public Vector sortEdges (Vector v, City destination )


{

Vector sorted = new Vector();

if( (v.size() == 1) || (v.size() == 0) ){

//nothing to do, bail
return v;
}

//repeatedly move the city closest to the destination into the sorted list
while( v.size() > 0 ){

double smallestDistance = 999999;
City closestCity = null;

Enumeration i = v.elements();

while( i.hasMoreElements() ){

City tempCity = (City) i.nextElement();

double d = Math.sqrt( (tempCity.lat - destination.lat)*
(tempCity.lat - destination.lat) +
(tempCity.lon - destination.lon)*
(tempCity.lon - destination.lon) );

//remember the smallest distance seen so far and the city it belongs to
if( d <= smallestDistance){

smallestDistance = d;
closestCity = tempCity;
}
}

sorted.add(closestCity);
v.remove(closestCity);
}

return sorted;
}
}
//Breadth.java
//www.timestocome.com
//Fall 2000

//for use with the gui search tool


//this class does the breadth-first
//search

//guaranteed to find a solution, if a solution exists.


//will always find shallowest solution first

import java.io.*;
import java.util.*;

class Breadth
{

public void Breadth(){ }

public Vector breadthSearch(Vector in, String start, String finish)


{

Vector out = new Vector();


Vector temp = new Vector();
City begin = new City ( "", "", 0 ,0);
City end = new City ( "", "", 0, 0);

//prime loop and set up things


Enumeration i = in.elements();

while(i.hasMoreElements() ){
City t = (City)i.nextElement();
if ( ((t.city + ", " + t.state).compareTo(start) ) == 0){
begin = t;
}
if( ((t.city + ", " + t.state).compareTo(finish) ) == 0){
end = t;
}

}

//get top node off of vector and see if it is destination


temp.addElement(begin);
out.addElement(begin);

// if so return... nothing to do
if( (finish.compareTo(start) ) == 0 ){
return out;
}

//loop
Enumeration j = temp.elements();

while ( j.hasMoreElements() ){
// is queue empty? if so return failure

// else get top node off temp


City z = (City)temp.remove(0);

//see if it is destination if so return successful

if( ( (finish).compareTo(z.city + ", " + z.state) ) == 0) {


out.add(z);
return out;

}else{

//else grab each edge city and add to back of queue


grabEdges ge = new grabEdges();
Vector q1 = ge.grabEdges(in, z);
Enumeration k = q1.elements();

while( k.hasMoreElements() ) {

City tc1 = (City)k.nextElement();

if( out.contains(tc1) ){

}else{

out.add(tc1);
temp.add(tc1);

}
}

}
}
//end loop

return out;
}
}

//Depth.java
//www.timestocome.com
//Fall 2000

//for use with the gui search tool


//this class performs the depth-first searches

import java.io.*;
import java.util.*;

class Depth
{

public void Depth(){ }

public Vector depthSearch(Vector in, String start, String finish)


{

Vector out = new Vector();


Vector temp = new Vector();

City begin = new City ( "", "", 0, 0);


City end = new City ( "", "", 0, 0);

//prime loop and set up things


Enumeration i = in.elements();

while(i.hasMoreElements() ){
City t = (City)i.nextElement();
if ( ((t.city + ", " + t.state).compareTo(start) ) == 0){
begin = t;
}
if( ((t.city + ", " + t.state).compareTo(finish) ) == 0){
end = t;
}
}

//get top node off of vector and see if it is destination

temp.addElement(begin);
out.addElement(begin);

// if so return... nothing to do
if( (finish.compareTo(start) ) == 0 ){
return out;
}

//loop
Enumeration j = temp.elements();

while ( j.hasMoreElements() ){
// is queue empty? if so return failure

// else get top node off temp


City z = (City)temp.remove(0);

//see if it is destination if so return successful

if( ( (finish).compareTo(z.city + ", " + z.state) ) == 0) {

return out;

}else{

//else grab each edge city and add to front of queue


grabEdges ge = new grabEdges();
Vector q1 = ge.grabEdges(in, z);
Enumeration k = q1.elements();

while( k.hasMoreElements() ) {

City tc1 = (City)k.nextElement();

if( out.contains(tc1) ){

}else{

out.add(0, tc1);
temp.add(0, tc1);

}
}

}

}
//end loop

return out;
}
}

//City.java
//www.timestocome.com
//Fall 2000

import java.util.*;

class City
{

String city;
String state;
double lat;
double lon;

Vector edge = new Vector();

public City (String c, String s, double latitude, double longitude)


{
city = c;
state = s;
lat = latitude;
lon = longitude;
}
}

class Edge
{

String city1;
int routeNumber;
double length;

public Edge( String s1, int number)


{
city1 = s1;
routeNumber = number;

}

public void setLength( double lat1, double lat2, double lon1, double lon2)
{
//squared differences of latitude and longitude
double temp =( (lat1-lat2)*(lat1-lat2) + (lon1-lon2)*(lon1-lon2) );
length = ( Math.sqrt(temp ) * 100); //roughly convert to miles
}
}

//CityList.java
//www.timestocome.com
//Fall 2000

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.awt.event.*;

class CityList extends JFrame implements ListSelectionListener


{

public CityList (int i)


{

String list[] = new String[51];

if( i == 1){
list[0] = "Start";
}else if( i ==2){
list[0] = "Finish";
}

//fill in the list of state capitals

list[1]= "Montgomery, Al";


list[2]= "Junea, Ak";
list[3]= "Phoenix, Ax";
list[4]= "Little Rock, Ar";
list[5]= "Sacramento, Ca";
list[6]= "Denver, Co";
list[7]= "Hartford, Cn";
list[8]= "Dover, De";
list[9]= "Tallahassee, Fl";

list[10]= "Atlanta, Ga";


list[11]= "Honolulu, Hi";
list[12]= "Boise, Id";
list[13]= "Springfield, Il";
list[14]= "Indianapolis, In";
list[15]= "Des Moines, Ia";
list[16]= "Topeka, Ks";
list[17]= "Frankfort, Ky";
list[18]= "Baton Rouge, La";

list[19]= "Augusta, Me";

list[20]= "Annapolis, Md";


list[21]= "Boston, Ma";
list[22]= "Lansing, Mi";
list[23]= "St Paul, Mn";
list[24]= "Jackson, Ms";
list[25]= "Jefferson City, Mo";
list[26]= "Helena, Mt";
list[27]= "Lincoln, Ne";
list[28]= "Carson City, Nv";
list[29]= "Concord, Nh";

list[30]= "Trenton, Nj";


list[31]= "Sante Fe, Nm";
list[32]= "New York, Ny";
list[33]= "Raleigh, Nc";
list[34]= "Bismark, Nd";
list[35]= "Columbus, Oh";
list[36]= "Oklahoma City, Ok";
list[37]= "Salem, Or";
list[38]= "Harrisburg, Pa";
list[39]= "Providence, Ri";

list[40]= "Columbia, Sc";


list[41]= "Pierre, Sd";
list[42]= "Nashville, Tn";
list[43]= "Austin, Tx";
list[44]= "Salt Lake City, Ut";
list[45]= "Montpelier, Vt";
list[46]= "Richmond, Va";
list[47]= "Olympia, Wa";
list[48]= "Charleston, Wv";
list[49]= "Madison, Wi";

list[50]= "Cheyenne, Wy";

JList jlist = new JList(list);


JScrollPane jscrollpane = new JScrollPane(jlist);

// jlist.setVisibleRowCount(10);

jlist.addListSelectionListener(this);
}

public void valueChanged(ListSelectionEvent e)
{
JList source = (JList)e.getSource();
Object[] values = source.getSelectedValues();
String selection = (String)values[0];
}
}

//DrawMap.java
//www.timestocome.com
//Fall 2000

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
import java.util.*;

class DrawMapPanel extends JPanel


{

Vector p, e;
int s=0;
int minh=0, minw=0, maxw=0, maxh=0;
int height=0, width=0;

public DrawMapPanel(Vector point, Vector edge, int h, int w)


{

p = point;
e = edge;
height = h;
width = w;

//create enumerator for cities


City tempCity;
Enumeration counter = point.elements();
boolean firstLoopX = true;
boolean firstLoopY = true;

while (counter.hasMoreElements() ) {

tempCity = (City)counter.nextElement();

//find min/max height


if( tempCity.lon > maxh ){
maxh = (int)tempCity.lon;
}
if( firstLoopY ){
minh = (int)tempCity.lon;
firstLoopY = false;
}else if ( tempCity.lon < minh ){

minh = (int)tempCity.lon;
}

//find min/max width


if( tempCity.lat > maxw ){
maxw = (int)tempCity.lat;
}
if( firstLoopX ){
minw = (int)tempCity.lat;
firstLoopX = false;
}else if ( tempCity.lat < minw ){
minw = (int)tempCity.lat;
}
}

//scale using width and hieght


//compare scale width and hieght and use the smaller of the two
if ( (minw - 10) > 0){
minw -= 10;
}
if( (minh -10) > 0){
minh -= 10;
}

maxw += 10;
maxh += 10;

int yDim = maxh-minh;


int xDim = maxw-minw;

double scaleX=0, scaleY=0;

//downsize or upscale?
if( (yDim > h) || (xDim > w) ){
scaleX = xDim/w;
scaleY = yDim/h;
}else{
scaleX = w/xDim;
scaleY = h/yDim;
}

//keep scaling in proportion


int scale=0;
if( scaleX < scaleY){
scale = (int)scaleX;

}else{
scale = (int)scaleY;
}

s = scale+1;
}

public void paint(Graphics g1)


{

Color c = new Color ( 200, 255, 200);


g1.setColor(c);

g1.fillRect( 0, 0, height, width);



Color c1 = new Color ( 0, 20, 0);


g1.setColor(c1);
//loop back through cities drawing and labeling points
//create enumerator for cities
City tempCity1;
Enumeration counter1 = p.elements();
int x=0, y=0;
int left = s*minw;
int bottom = s*minh;

while (counter1.hasMoreElements() ) {

tempCity1 = (City)counter1.nextElement();
//shift left and scale x
x = (int) ( (tempCity1.lat)*s - left );
x = width - x; //latitude runs right to left, not left to right

// shift top and scale y


y = (int) ( (tempCity1.lon)*s - bottom );
y = height - y; //(0,0) is top left corner, not bottom left

g1.drawString(tempCity1.city, y-2, x-2);


g1.drawOval( y, x, 3, 3);

//create enumerator for edges
Enumeration counter2 = tempCity1.edge.elements();
Edge tempEdge = new Edge( "", 0);

while( counter2.hasMoreElements() ) {

tempEdge = (Edge)counter2.nextElement();

//match tempEdge.city1 to City in city vector


//yuch loops with in loops.... clean this up when time allows
Enumeration counter3 = p.elements();
City tempCity3 = new City( "", "", 0, 0);

while( counter3.hasMoreElements() ){

tempCity3 = (City)counter3.nextElement();
if( ( (tempCity3.city).compareTo(tempEdge.city1) ) == 0) {

//grab city1's lat and long and adjust it as above

int x1 = (int) ( (tempCity3.lat)*s - left );


x1 = width - x1; //latitude runs right to left, not left to right

// shift top and scale y


int y1 = (int) ( (tempCity3.lon)*s - bottom );
y1 = height - y1; //(0,0) is top left corner, not bottom left

//draw a line from (x, y) to the adjusted city1


g1.drawLine( y, x, y1, x1);

//label edges
}
}
}
}
}
}

class DrawMapFrame extends JFrame
{

public DrawMapFrame(Vector point, Vector edge)


{
int height=600, width=800;

setTitle("Map");
setSize(width, height);

addWindowListener(new WindowAdapter(){
public void windowClosing(WindowEvent e)
{
// System.exit(0);
}
} );

Container contentPane1 = getContentPane();

contentPane1.add(new DrawMapPanel(point, edge, width, height) );


}
}

public class DrawMap


{

public void begin()


{

Vector p = new Vector();


Vector e = new Vector();

try{
//collect the data
GetData gd = new GetData();
p = gd.getCity();
e = gd.getEdge(p);
String[] words = gd.buildList(p);
} catch(Exception ex){}

JFrame frame1 = new DrawMapFrame(p, e);

frame1.show();
}
}
//GetData.java
//www.timestocome.com
//Fall 2000

import java.io.*;
import java.util.*;

class GetData
{

public Vector getCity()throws Exception


{

//read in city file (city.dat) and build a vector with an element for each city
//cityName State latitude longitude
FileReader filereader = new FileReader("city.dat");
StreamTokenizer streamtokenizer = new StreamTokenizer(filereader);
String wordIn = "", tempCity = "", tempState = "";
double tempLat = 0, tempLong = 0;
int count = 3;
Vector v = new Vector();

//read and parse file


while(streamtokenizer.nextToken() != StreamTokenizer.TT_EOF ){

if( (streamtokenizer.ttype == StreamTokenizer.TT_WORD) && (count==3) ){


tempCity = streamtokenizer.sval;
count = 0;
}else if( (streamtokenizer.ttype == StreamTokenizer.TT_WORD) && (count==0) ){
tempState = streamtokenizer.sval;
count = 1;
}else if( (streamtokenizer.ttype == StreamTokenizer.TT_NUMBER) && (count==1) ){
tempLat = streamtokenizer.nval;
count = 2;
}else if( (streamtokenizer.ttype == StreamTokenizer.TT_NUMBER) && (count==2) ){
tempLong = streamtokenizer.nval;
count = 3;

//create new city object and add to vector


City c = new City( tempCity, tempState, tempLat, tempLong);
v.add(c);

}
}

filereader.close();

return v;
}

public Vector getEdge(Vector city)throws Exception


{
//for each edge in edge.dat add it to the edge vector for city1, then
//add it to the edge vector for city 2
//routeNumber city1 city2

//open edge file,


FileReader filereader = new FileReader("edge.dat");
StreamTokenizer streamtokenizer = new StreamTokenizer(filereader);

String tempCity1 = "", tempCity2 = "";


double tempRouteNumber = 0;
int count = 2;

//read and parse file, do while more edges


while(streamtokenizer.nextToken() != StreamTokenizer.TT_EOF ){

//read in edge
if( (streamtokenizer.ttype == StreamTokenizer.TT_NUMBER) && (count==2) ){
tempRouteNumber = streamtokenizer.nval;
count = 0;

}else if( (streamtokenizer.ttype == StreamTokenizer.TT_WORD) && (count==0) ){


tempCity1 = streamtokenizer.sval;
count = 1;

}else if( (streamtokenizer.ttype == StreamTokenizer.TT_WORD) && (count==1) ){


tempCity2 = streamtokenizer.sval;
count = 2;

//System.out.println( "Edge=> " + tempRouteNumber +","+tempCity1+","+tempCity2);

int i = 0;
City tempCity;
boolean c1=false, c2=false;
Enumeration counter = city.elements();

while( counter.hasMoreElements() ){

tempCity = (City)counter.nextElement();

//match first city


//add to edge vector for that city
if( ( ( (tempCity.city).compareTo(tempCity1) ) == 0)&&(!c1) ) {
c1=true;
//add edge to first city
Edge tempEdge = new Edge( tempCity2, (int)tempRouteNumber);
tempCity.edge.addElement(tempEdge);
}

//match second city


//add to edge vector for that city
if( ( ( (tempCity.city).compareTo(tempCity2) ) == 0)&&(!c2) ) {
c2=true;
//add edge to second city
Edge tempEdge = new Edge( tempCity1, (int)tempRouteNumber);
tempCity.edge.addElement(tempEdge);

}
}
}
}

//close edge file


filereader.close();

return city;
}

public void printData(Vector v)


{

Enumeration counter = v.elements();


City tempCity = new City (" ", " ", 0, 0);

System.out.println("*****************************************************");

while( counter.hasMoreElements() ){

tempCity = (City)counter.nextElement();

System.out.println( "\n**" + tempCity.city + ", " + tempCity.state + " " +


tempCity.lat + " " + tempCity.lon );

Enumeration counter2 = tempCity.edge.elements();


Edge tempEdge = new Edge( " ", 0);

while( counter2.hasMoreElements() ){
tempEdge = (Edge)counter2.nextElement();
System.out.println( tempEdge.city1 + " " + tempEdge.routeNumber + " "
+ tempEdge.length);
}
}
}

public String[] buildList(Vector v)


{

Enumeration counter = v.elements();


City tempCity = new City (" ", " ", 0, 0);

int length = v.size();

String list[] = new String[length];


int i=0;

while( counter.hasMoreElements() ){

tempCity = (City)counter.nextElement();

list[i] = ( tempCity.city + ", " + tempCity.state );


i++;
}

return list;
}
}
//grabEdges.java
//www.timestocome.com
//Fall 2000

import java.io.*;
import java.util.*;

class grabEdges{

public grabEdges(){}

//grab the edges/roads from a city and add cities


//that they connect to to the back of the vector
public Vector grabEdges (Vector cities, City city)
{

Vector queue = new Vector();

//get the city each edge connects to and add it to temp


// loop while more edges
Enumeration counter = city.edge.elements();
Edge tempEdge = new Edge( "", 0);

while( counter.hasMoreElements() ) {

// get city that edge connects to


// look up the city in the city vector
tempEdge = (Edge)counter.nextElement();

//match tempEdge.city1 to City in city vector


//yuch loops with in loops.... clean this up when time allows
Enumeration counter3 = cities.elements();
City tempCity3 = new City( "", "", 0, 0);

while( counter3.hasMoreElements() ){

tempCity3 = (City)counter3.nextElement();
if( ( (tempCity3.city).compareTo(tempEdge.city1) ) == 0) {
// add that city to the end of the out vector
queue.addElement(tempCity3);

}

}
}

return queue;
}
}

//Jpanel.java
//www.timestocome.com
//Fall 2000

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

class Jpanel extends JPanel {

Jpanel ()
{
setBackground( Color.white );
}

public void paintComponent (Graphics g )


{
super.paintComponent( g );

}
}

//printData.java
//www.timestocome.com
//Fall 2000

import java.io.*;
import java.util.*;
import javax.swing.*;

class printData{

public void printData(){}

public void print(Vector v, JTextArea out)


{

Enumeration counter = v.elements();


City tempCity = new City (" ", " ", 0, 0);

while( counter.hasMoreElements() ){

tempCity = (City)counter.nextElement();

out.append( "\n\n **" + tempCity.city + ", " + tempCity.state );

Enumeration counter2 = tempCity.edge.elements();


Edge tempEdge = new Edge( " ", 0);

while( counter2.hasMoreElements() ){
tempEdge = (Edge)counter2.nextElement();

}
}

}
}

//Search.java
//www.timestocome.com
//Fall 2000

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import javax.swing.event.*;
import java.util.*;

public class Search extends JFrame implements ListSelectionListener


{

Jpanel mainPanel;
JPanel userPanel;
JPanel outputPanel;
JPanel menuPanel;
JPanel buttonPanel;

static Vector v = new Vector();

static String message = "Welcome to the Times to Come Search Engine" +


"\n Select the type of search you would like to "+
"\n perform from the menu, then select your "+
"\n starting and ending cities from the drop "+
"\n down lists." ;

static JTextArea output = new JTextArea ( message, 10, 30);

static int choice = 0;


String[] words;
static String selection = "";
static String startSelection = "";
static String finishSelection = "";

public Search ()
{
super ("http://www.timestocome.com");

try{
//collect the data
GetData gd = new GetData();
v = gd.getCity();

Vector v1 = gd.getEdge(v);
//printData(v);
words = gd.buildList(v);
} catch(Exception e){}

Container contentPane = getContentPane();

JScrollPane scrollpaneText = new JScrollPane();


scrollpaneText.add(output);

JButton enter = new JButton ("Begin Search");


JButton start = new JButton ("Starting Location");
JButton finish = new JButton ("Ending Location");

mainPanel = new Jpanel();


userPanel = new Jpanel();
outputPanel = new Jpanel();
menuPanel = new Jpanel();
buttonPanel = new Jpanel();

Color b = new Color( 0, 0, 100);

mainPanel.setBorder( BorderFactory.createBevelBorder( 0 , b, Color.gray) );

JMenuBar mbar = createMenu();


setJMenuBar(mbar);

JList wordList = new JList(words);


JScrollPane scrollPane = new JScrollPane(wordList);

userPanel.add(scrollPane);
wordList.addListSelectionListener(this);

mainPanel.setLayout( new BoxLayout(mainPanel, BoxLayout.Y_AXIS ) );


contentPane.add(mainPanel);

enter.addActionListener(b1);
start.addActionListener(b2);
finish.addActionListener(b3);

buttonPanel.add(enter);
buttonPanel.add(start);

buttonPanel.add(finish);

mainPanel.add(buttonPanel);
mainPanel.add(userPanel);

scrollpaneText.setViewportView(output);
outputPanel.add(scrollpaneText);
mainPanel.add(outputPanel);
}

public static void main( String args[] )


{

final JFrame f = new Search();

f.setBounds( 10, 10, 600, 400 );


f.setVisible( true );
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);

f.addWindowListener( new WindowAdapter() {


public void windowClosed( WindowEvent e){
System.exit(0);
}
});
}

public static JMenuBar createMenu()


{
JMenuBar jmenubar = new JMenuBar();

jmenubar.setUI( jmenubar.getUI() );
JMenu jmenu1 = new JMenu("Searches");
JMenu jmenu4 = new JMenu("Draw Map");
JMenu jmenu2 = new JMenu("Help");
JMenu jmenu3 = new JMenu("Quit");

JRadioButtonMenuItem m1 = new
JRadioButtonMenuItem("Breath-First");
m1.addActionListener(a1);

JRadioButtonMenuItem m2 = new
JRadioButtonMenuItem("Depth-First");
m2.addActionListener(a2);

JRadioButtonMenuItem m3 = new
JRadioButtonMenuItem("A*");
m3.addActionListener(a3);

JMenuItem m8 = new JMenuItem("Map");


m8.addActionListener(a8);

JMenuItem m6 = new JMenuItem("About");


m6.addActionListener(a6);

JMenuItem m7 = new JMenuItem("Exit");


m7.addActionListener(a7);

jmenu1.add(m1);
jmenu1.add(m2);
jmenu1.add(m3);

ButtonGroup group = new ButtonGroup();


group.add(m1);
group.add(m2);
group.add(m3);

jmenu2.add(m6);

jmenu3.add(m7);

jmenu4.add(m8);

jmenubar.add(jmenu1);
jmenubar.add(jmenu4);
jmenubar.add(jmenu2);
jmenubar.add(jmenu3);

return jmenubar;
}

static ActionListener a1 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m1 = ( JMenuItem )e.getSource();
choice = 1;
output.setText("\n Breadth First algorithm");

}
};

static ActionListener a2 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m2 = ( JMenuItem )e.getSource();
choice = 2;
output.setText("\n Depth First algorithm");

}
};

static ActionListener a3 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m3 = ( JMenuItem )e.getSource();
choice = 3;
output.setText("\n A* algorithm");

}
};

static ActionListener a6 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m6 = ( JMenuItem )e.getSource();
output.setText("http://www.timestocome.com"+
"\nIntelligent tools for an intelligent world" +
"\n'From then to now TimesToCome.com'" +
"\n\n\nFall 2000"+

"\nCopyright Times to Come under GNU Copyleft'');
}
};

static ActionListener a7 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m7 = ( JMenuItem )e.getSource();
output.setText("Thank you . . . " );
System.exit(0);

}
};

static ActionListener a8 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m8 = ( JMenuItem )e.getSource();
output.setText("Creating map in separate window " );
DrawMap map = new DrawMap();
map.begin();
}
};

static ActionListener b2 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
startSelection = selection;
output.append("\nStarting Location " + startSelection);

}
};

static ActionListener b3 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
finishSelection = selection;
output.append("\nEnding Location " + finishSelection );

}
};

//what to do when enter key is hit...


static ActionListener b1 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
switch (choice){

case 0:
output.setText ("\n\nChoose a type of search");
break;

case 1:
output.setText ("\n\n Breadth First Search" +
"\n from " + startSelection + " to " + finishSelection );

Breadth d1 = new Breadth();


Vector tempV1 = new Vector();

tempV1 = d1.breadthSearch (v, startSelection, finishSelection);

printData pd1 = new printData();


pd1.print(tempV1, output);
output.append("\n\nThat took "+ tempV1.size() +" tries to find");

break;

case 2:
output.setText ("\n\n Depth First Search" +
"\n from " + startSelection + " to " + finishSelection );

Depth d2 = new Depth();


Vector tempV2 = new Vector();

tempV2 = d2.depthSearch (v, startSelection, finishSelection);

printData pd2 = new printData();


pd2.print(tempV2, output);
output.append("\n\nThat took "+ tempV2.size() +"tries to find");
break;

case 3:
output.setText ("\n\n A* " + "\n from " + startSelection +
" to " + finishSelection);

AStar d3 = new AStar();


Vector tempV3 = new Vector();

tempV3 = d3.AStarSearch (v, startSelection, finishSelection);

printData pd3 = new printData();


pd3.print(tempV3, output);
output.append("\n\nThat took "+ tempV3.size() +"tries to find");
break;

default:
output.setText ("\n\n I am so confused... " + choice );
break;
}

}
};

public void valueChanged(ListSelectionEvent evt)


{
JList source = (JList)evt.getSource();
Object[] values = source.getSelectedValues();

//output.setText( "\n\n you selected " + (String)values[0] );


selection = (String)values[0];
}
}

---city.dat file---
Montgomery AL 32.4 86.3
Phoenix AZ 33.5 112.1
LittleRock AR 34.7 92.4
Sacramento CA 38.5 121.4
Denver CO 39.8 104.9
Hartford CT 41.8 72.7
Tallahassee FL 30.5 84.3
Atlanta GA 33.8 84.4
Boise ID 43.6 116.2
Springfield IL 39.8 89.6
Indianapolis IN 39.8 86.1
DesMoines IA 41.6 93.6
Topeka KS 39.0 95.7
Frankfort KY 38.2 84.9
BatonRouge LA 30.4 91.1
Augusta ME 44.3 69.7
Boston MA 42.3 71.0
Lansing MI 42.7 84.6
SaintPaul MN 44.8 93.0
Jackson MS 32.3 90.2
JeffersonCity MO 38.6 92.2
Helena MT 46.6 112.0
Lincoln NE 40.8 96.7
CarsonCity NV 39.1 119.7
Concord NH 43.2 71.6
Trenton NJ 40.2 74.8
SantaFe NM 35.7 106.0
Raleigh NC 35.8 78.7
Bismark ND 46.8 100.8
Columbus OH 40.0 83.0
OklahomaCity OK 35.5 97.5
Salem OR 45.0 123.0
Harrisburg PA 40.3 76.9
Providence RI 41.8 71.4
Columbia SC 34.0 80.9
Nashville TN 36.2 86.8
Austin TX 30.3 97.8
SaltLakeCity UT 40.8 111.9
Montpelier VT 44.3 72.6
Richmond VA 37.5 77.5
Olympia WA 47.0 122.9
Charleston WV 38.4 81.6
Madison WI 43.0 89.4
Cheyenne WY 41.1 104.8

---edge.dat file ---
5 Sacramento Salem
5 Olympia Salem
10 BatonRouge Tallahassee
15 Helena SaltLakeCity
20 Atlanta Columbia
24 Nashville Atlanta
25 Denver SantaFe
25 Denver Cheyenne
35 SaintPaul DesMoines
35 KansasCity OklahomaCity
35 OklahomaCity Austin
35 Topeka OklahomaCity
39 Madison Springfield
40 SantaFe LittleRock
40 Raleigh Nashville
40 Nashville LittleRock
55 Springfield Jackson
64 Frankfort Charleston
65 Nashville Montgomery
65 Nashville Indianapolis
69 Indianapolis Lansing
70 Denver Topeka
70 Harrisburg Columbus
70 Indianapolis Columbus
70 Trenton Harrisburg
70 Topeka JeffersonCity
74 Indianapolis Springfield
75 Atlanta Frankfort
76 Lincoln Denver
80 Sacramento CarsonCity
80 SaltLakeCity CarsonCity
80 SaltLakeCity Cheyenne
80 Lincoln Cheyenne
83 Richmond Harrisburg
84 Harrisburg Hartford
84 SaltLakeCity Boise
85 Atlanta Montgomery
89 Montpelier Concord
93 Boston Concord
94 SaintPaul Madison
94 Bismark SaintPaul
95 Boston Augusta
95 Richmond Trenton
95 Providence Boston
545 Austin BatonRouge

555 Jackson Austin
560 Raleigh Columbia
565 SantaFe OklahomaCity
565 Jackson BatonRouge
565 Austin LittleRock
565 SaltLakeCity Sacramento
577 SantaFe Phoenix
583 Flagstaff SantaFe
585 Montgomery Jackson
585 SaltLakeCity Denver
585 Atlanta Tallahassee
585 BatonRouge Atlanta
589 Olympia Boise
601 Sacramento Phoenix
605 Topeka DesMoines
610 Olympia Helena
629 Frankfort Nashville
634 JeffersonCity Frankfort
635 Richmond Raleigh
638 Frankfort Indianapolis
641 Charleston Richmond
646 Columbus Frankfort
647 Columbus Charleston
656 DesMoines Lincoln
679 CarsonCity Boise
679 Lincoln Topeka
680 Montpelier Hartford
770 Boston Montpelier

map.dat
#city #state #lat #long #main routes connecting/through capitol city
Montgomery AL 32.4 86.3 85->Atlanta, 65->Nashville
Junea AK 58.4 134.1
Phoenix AZ 33.5 112.1 17-40-25->Santa Fe, 10-99->Sacramento
Little Rock AR 34.7 92.4 30-35->Austin, 40->Santa Fe, 40->Nashville
Sacramento CA 38.5 121.4 5->Salem, 80->CarsonCity, 50-15->SaltLakeCity
Denver CO 39.8 104.9 25->SantaFe, 70-15->SaltLakeCity, 76->Lincoln
Hartford CT 41.8 72.7 91->89 Montpelier, 84->Harrisburg
Dover DE 39.1 75.5
Tallahassee FL 30.5 84.3 10->Baton Rouge, 10-75->Atlanta
Atlanta GA 33.8 84.4 75-10->Baton Rouge, 85-10->Baton Rouge, 24->Nashville
Honolulu HI 25.0 168.0
Boise ID 43.6 116.2 84->Salt Lake City, 84-5->Olympia, 84-95->Carson City
Springfield IL 39.8 89.6 39->Madison, 74->Indianapolis
Indianapolis IN 39.8 86.1 74-64->Frankfort, 69->Lansing, 65->Nashville
DesMoines IA 41.6 93.6 35-70->Topeka, 80->Lincoln, 35->Saint Paul
Topeka KS 39.0 95.7 70-29-80->Lincoln, 35->Oklahoma City, 70->Denver, 70->Jefferson City
Frankfort KY 38.2 84.9 64-70->Jefferson City, 75->Atlanta, 75-71->Columbus
BatonRouge LA 30.4 91.1 10-55->Jackson, 10-35->Austin, 10->Tallahassee
Augusta ME 44.3 69.7 95->Boston
Annapolis MD 39.0 76.5
Boston MA 42.3 71.0 95->Augusta, 95->Providence
Lansing MI 42.7 84.6 69->Indianapolis
SaintPaul MN 44.8 93.0 35->Des Moines, 94->Bismark, 94->Madison
Jackson MS 32.3 90.2 20-35->Austin, 20-65->Mongomery, 55->Springfield
JeffersonCity MO 38.6 92.2 70->Topeka
Helena MT 46.6 112.0 15->SaltLakeCity, 15-90-5->Olympia
Lincoln NE 40.8 96.7 76->Denver, 76-80->DesMoines
CarsonCity NV 39.1 119.7 80->Sacremento, 80->SaltLakeCity
Concord NH 43.2 71.6 93->Boston, 89->Monteplier
Trenton NJ 40.2 74.8 95->Richmond, 95->NewYork
SantaFe NM 35.7 106.0 25->Denver, 25-40->OklahomaCity, 25-40-17->Flagstaff
NewYork NY 42.7 73.8 90->Boston, 87->NewYork
Raleigh NC 35.8 78.7 40-95->Richmond, 40-95-20->Columbia
Bismark ND 46.8 100.8 94->StPaul
Columbus OH 40.0 83.0 70->Harrisburg, 70->Indianapolis
OklahomaCity OK 35.5 97.5 35->Topeka, 35->Austin, 40-25->SantaFe, 35->KansasCity
Salem OR 45.0 123.0 5->Olympia, 5->Sacramento
Harrisburg PA 40.3 76.9 70->Trenton, 70->Columbus, 83->Richmond
Providence RI 41.8 71.4 95->Boston
Columbia SC 34.0 80.9 20-40->Raleigh, 20->Atlanta
Pierre SD 44.4 100.3
Nashville TN 36.2 86.8 65-64->Franfort, 40->Raleigh, 65->Montgomery
Austin TX 30.3 97.8 35->OklahomaCity, 35-20->Jackson
SaltLakeCity UT 40.8 111.9 15->Helena, 80->CarsonCity, 80-25->Denver

Montpelier VT 44.3 72.6 89->Concord, 89-91-90->Boston
Richmond VA 37.5 77.5 95-40->Raliegh, 64-77->Charleston, 95-15->Harrisburg
Olympia WA 47.0 122.9 5->Salem
Charleston WV 38.4 81.6 77-70->Columbus, 64->Frankfort
Madison WI 43.0 89.4 39-55->Springfield, 94->StPaul
Cheyenne WY 41.1 104.8 25->Denver, 80->SaltLakeCity, 80->Lincoln

---README---
To compile the program just compile each *.java file
>javac AStar.java
>javac Breadth.java
>javac City.java
>javac Depth.java
>javac DrawMap.java
>javac GetData.java
>javac Jpanel.java
>javac Search.java
>javac grabEdges.java
>javac printData.java

To run the program


>java Search

A Graphical interface will open with instructions


You can create your own data files, using the same layout as the ones provided

Chapter 3

Games and Game Theory

3.1 Game Theory


Game theory is the study of how decision makers act and react in various situations, like negotiating business deals. It is used quite a bit in the study of economics and politics. John Von Neumann laid much of the groundwork for game theory. This is the field that recently gained some fame with the 'A Beautiful Mind' book and movie about John Nash and the Nash Equilibrium. Using simplified models, often based on only a few rules, many behaviors of people in various situations can be predicted.
In these game models it is assumed that the players will make rational choices
in each decision. A rational choice is a choice where the player chooses the best,
or one of the best if there are more than one top choice for herself. A player is
presented with a set of actions from which she chooses one. She may prefer one
action over another or may consider some actions to be equally preferable. The
only restriction on the actions preferences is that if a player prefers action A
over action B, and she prefers action B over actions C then she must also prefer
action A over C. A payo function is used to describe the bene t of each action
from the player's point of view. For example I may visit a used car lot and nd
a car that is worth 1,000 by my valuation, you may value that car at 500. So
the payo function for the car for me is 1,000 for you it is 500. If I see another
car I like that I value at 250, it only means I prefer the rst car to the second
car. It does not mean that I prefer it 4 times as much. The values are only to
show ordering of choices.
The Nash equilibrium is the point at which neither player can do better,
no matter what the other player decides to do. A good example is the well-known
game of 'Prisoner's Dilemma': two crooks who worked together on a crime have
each been caught and are being held separately. Each has a choice of finking
or remaining quiet. If one finks, he walks with 1 year of jail time and the other
person gets 4 years. If neither finks, there is enough evidence to put each away
for 2 years.

                   Crook 2: quiet   Crook 2: fink
Crook 1: quiet          2,2              0,3
Crook 1: fink           3,0              1,1
(Crook 1's score is listed first in each cell.)
The highest combined score of all the plays occurs when both crooks remain quiet and each
receives 2 years of jail time. But if Crook 1 finks, he gets a score of 3 (1 year of jail
time) against an opponent who remains quiet, or a score of 1 (3 years of jail time)
if the other finks too. So his best bet is to fink, and the same holds for the other
crook. The Nash equilibrium is at fink/fink (1,1), since finking is the best move for
each player individually.
The payoff function for this game is the same for each player: f(Fink, Quiet) >
f(Quiet, Quiet) > f(Fink, Fink) > f(Quiet, Fink). We could just as easily score it
3, 2, 1, 0 rather than counting out years, or 32, 21, 9, 1; the scores serve only
to order the choices.
A clearer example is a game in which two players move pieces on a
3-D game board. Each player can move in the x, y, or z direction.

            X1        Y1        Z1
   X2     2*,1*      0,1*      0,0
   Y2     0,0        2*,1      1*,2*
   Z2     1,2*       1,0       0,1

(In each cell the first number is player 2's payoff and the second is player 1's.)
A * marks the best move for a player given what the other player does. If player
1 moves in the Y direction, player 2's best move is also in the Y direction (2*,1).
The squares where both players' entries are starred are (X1, X2) and (Z1, Y2); these
are both Nash equilibria. Finding the Nash equilibrium this way is called
'best response'.
Suppose that instead of fixed numbers I have a function describing the payoff
for each player. I might have A(x) = y^2 and B(y) = (1/2)x + xy. To find the
Nash equilibrium I take the derivative of each function, set it to zero, solve, and
plot. Any place the resulting curves cross on the plot is a Nash equilibrium.
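
The best-response procedure above can also be carried out mechanically. Below is a minimal
Java sketch (the class name and array layout are my own illustration of the 3-D board game
just described, with rows as player 2's moves, columns as player 1's, and the first number
in each cell being player 2's payoff). It stars each player's best replies and reports the
cells starred for both, i.e. the pure-strategy Nash equilibria.

// BestResponse.java: a minimal sketch of 'best response' Nash search
// for the 3x3 bimatrix game above. Names and layout are illustrative only.
public class BestResponse {

    // payoff[row][col][0] = row player's (player 2's) payoff, [1] = column player's (player 1's)
    static int[][][] payoff = {
        { {2,1}, {0,1}, {0,0} },   // X2 row
        { {0,0}, {2,1}, {1,2} },   // Y2 row
        { {1,2}, {1,0}, {0,1} }    // Z2 row
    };

    public static void main(String[] args) {
        int n = payoff.length, m = payoff[0].length;
        boolean[][] rowBest = new boolean[n][m], colBest = new boolean[n][m];

        // For each column, star the row player's best reply (ties all starred)
        for (int c = 0; c < m; c++) {
            int best = Integer.MIN_VALUE;
            for (int r = 0; r < n; r++) best = Math.max(best, payoff[r][c][0]);
            for (int r = 0; r < n; r++) rowBest[r][c] = (payoff[r][c][0] == best);
        }
        // For each row, star the column player's best reply
        for (int r = 0; r < n; r++) {
            int best = Integer.MIN_VALUE;
            for (int c = 0; c < m; c++) best = Math.max(best, payoff[r][c][1]);
            for (int c = 0; c < m; c++) colBest[r][c] = (payoff[r][c][1] == best);
        }
        // Cells starred for both players are pure-strategy Nash equilibria
        for (int r = 0; r < n; r++)
            for (int c = 0; c < m; c++)
                if (rowBest[r][c] && colBest[r][c])
                    System.out.println("Nash equilibrium at row " + r + ", col " + c);
    }
}

Run on the table above it reports row 0/column 0 and row 1/column 2, matching the
equilibria (X1, X2) and (Z1, Y2).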

3.2 Intelligent games


Game developers are right up there with the government in the progress and
contributions they have made to AI. The game environment is an easier one to
work in because the developers control the environment and can deal with specific
issues rather than with the whole real world. Also, there is plenty of money to
buy cool equipment and get top-notch people involved. Huge progress has been
made in 2D and 3D graphics, search algorithms, data mining and bots in the
game field.
Most game-playing design in artificial intelligence is the search for
quick, intelligent search routines. Game programs are concerned with reasoning
about actions. Not only must the path of possible moves be sought, but the
program must also consider the opponent's moves, which are unknown to the program
until they are made.
It is not possible, even for most simple games, to search all the possible
routes the game can take; too much time and hardware would be needed. For

example, tic-tac-toe is one of the simplest games we all know. A game tree that
mapped all possible moves from start to finish would contain 9!, or 362,880,
possible move sequences. The first player has 9 choices of which box to play in, the second
8 choices since the first player has taken one, the first player's second move
has 7 choices, and so on. So the top level of the tree has 9 nodes. Each
level in the tree represents a turn in the game. The second level has 8
nodes off of each of the original 9 nodes, and so on. You can imagine what
chess or other more complicated games have as the number of possible moves.
Pruning is used to take sections off of the search tree that make no difference
to play. Heuristic (rule of thumb) evaluations allow approximations to save
search time. For instance, in the tic-tac-toe tree described above, once the first
player chooses a position to play, the other 8 nodes of the top layer can be
trimmed off and only the 8 subtrees under the chosen node need to be searched.
Since it is not usually practical to calculate each possible outcome, a cutoff
is usually put in place. As an example, for each board in play we can
calculate the advantage by adding up the point values of the pieces on the board
or adding points for position. The program can then see which board gives
it a higher score. The program need only calculate five or so
moves ahead, calculate the advantage at each node and choose the best path.
Rather than calculating a set number of moves ahead, the program can use an
iterative-deepening approach and calculate until time runs out. A quiescent
search restricts the above approach: it eliminates moves that are likely to
cause wild swings in the score. The horizon problem occurs when searches do
not look ahead to the end of the game. This is a currently unsolved problem in
game programming.
The Min-Max algorithm assumes a 'zero sum game', such as tic-tac-toe, where
what is good for one player is bad for the other player. The algorithm assumes
that both players will play perfectly and attempt to maximize their own scores,
and it only generates trees on the nodes that are likely to be played.
Max is the computer, Min is the opposing player, and it is assumed Max gets the
first turn.
- generate the game tree down to the maximum level to be checked
- assign each terminal state a value: high values are most beneficial to Max,
negative values are most beneficial to Min, and zero holds no advantage for
either player
- go up one level, giving each node the best score from the layer below it
(the maximum on Max's turns, the minimum on Min's turns)
- continue up the tree one level at a time until the top is reached
- pick the move whose node has the highest score
The alpha-beta method determines whether an evaluation even needs to be made
of a node during the Min-Max search. It proceeds through the nodes like Min-Max,
but eliminates (prunes) branches that can never affect the final choice. The
program begins by proceeding with the Min-Max algorithm systematically through
the nodes of the tree. First we go down one branch of the tree and calculate the
score for that node. Then we proceed down the next branch. If the score at one
of the leaves shows that the branch cannot beat a score already obtained in a
previous branch, we don't finish evaluating the rest of that branch; we move on
to the next one. The search can be shallower rather than deep, saving time. Further
gains in speed can be made by caching information from branches in a lookup
table, re-ordering results, extending some searches and shortening others, using
probabilities rather than exact numbers for cutoffs, and using parallel
algorithms.
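
The description above can be condensed into a short recursive routine. Below is a minimal
sketch of Min-Max with alpha-beta pruning for a generic zero-sum game; the GameState
interface and its methods are placeholders invented for this illustration, not code used
elsewhere in this book.

import java.util.List;

// Illustrative interface: a real game would supply these methods.
interface GameState {
    boolean isTerminal();          // game over?
    int evaluate();                // large positive favors Max, large negative favors Min
    List<GameState> successors();  // states reachable in one move
}

public class AlphaBeta {

    // Returns the minimax value of 'state', pruning branches that
    // cannot change the final decision.
    static int search(GameState state, int depth, int alpha, int beta, boolean maxTurn) {
        if (depth == 0 || state.isTerminal()) return state.evaluate();

        if (maxTurn) {
            int best = Integer.MIN_VALUE;
            for (GameState child : state.successors()) {
                best = Math.max(best, search(child, depth - 1, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (alpha >= beta) break;   // beta cutoff: Min will avoid this branch
            }
            return best;
        } else {
            int best = Integer.MAX_VALUE;
            for (GameState child : state.successors()) {
                best = Math.min(best, search(child, depth - 1, alpha, beta, true));
                beta = Math.min(beta, best);
                if (alpha >= beta) break;   // alpha cutoff: Max will avoid this branch
            }
            return best;
        }
    }
}

A caller would start with something like search(root, 5, Integer.MIN_VALUE,
Integer.MAX_VALUE, true) to look roughly five moves ahead, matching the fixed-depth
cutoff discussed earlier.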
Ordering may be used to save time as well. In chess, captures would be
considered first, followed by forward moves, followed by backward moves. Alternatively,
ordering can consider the nodes with the highest values first.
The program must try to find a winning strategy that does not depend on
the human user's moves. Humans often make small goals and consider moves
that work toward that goal, e.g. capture the queen; David Wilkins' PARADISE is
the only program so far to do this successfully. Another approach is to use book
learning: several boards are loaded into a table in memory, and if the same board
comes into play the computer can look up what to do from there. Monte
Carlo simulation has been used successfully in games with non-deterministic
information, such as Scrabble, dice, and card games.
Temporal-difference learning is derived from Samuel's machine learning research.
Several games are played out and kept in a database. This works well
with board games like backgammon, chess and checkers. Neural nets can be
trained to play games this way, TD-Gammon being one of the more famous examples.
Most of the AI in games is scripted rather than programmed in traditional
languages, so it is an easy starting place for beginners; Python is the currently
preferred language. All the data is predefined in a file so the script can look
it up. This means the script doesn't have to be changed whenever the
data changes during play, which is especially useful for bot programming.

Chapter 4

Misc. AI

4.1 AI Language, speech


There are many problems to be worked through involving language and AI.
Many breakthroughs in AI will follow a system that can actually understand
language, and it is pretty high on the cool things list. There are two major parts
to machine language: speech recognition and generation; and natural language
understanding and generation.
Speech recognition uses neural nets, hidden Markov models, Bayesian networks
and other tools to tease out what a person is saying and to figure out how
to pronounce words when responding or reading text to a user.
Language understanding uses regular expressions, as in ELIZA, to create
responses to the user. State machines and first-order predicate calculus have
had some successes here. Some work has also been done with neural nets and word
vectors. Language understanding programs have four main inter-related areas
to juggle: syntax, semantics, inference, and generation. Syntax is the structural
relationship between words. Semantics is the meaning of the words. Inference
concerns what is meant or desired from the request or conversation. Generation
is the ability of the software to create a suitable response. Ambiguity is a
large part of the problem: the same word can mean different things in different
contexts, and the same sentence can mean different things as well. For example,
'I made her walk' could mean that I forced her to stride, or that I poured cement
and made her a walkway.
A language is a set of sentences that may be used to convey information.
English contains about 800,000 words, including technical terms, although no
one individual probably knows more than 60,000 words. Most written English
uses less than 10,000 words and college educated people use about 5,000 words in
conversation. Grammar is the syntax or sequences that make up the sentences.
Syntax as well as the words used convey information. An interesting point is
that wherever people have formed groups, a language has developed.

4.1.1 Hidden Markov Models
These have been used in speech recognition, handwriting recognition and currently
in many bio-technology projects.
Markov Chain: a statistical technique that uses a weighted automaton, a
weighted directed graph, in which the input sequence uniquely determines the
path through the automaton to the observed output.
Hidden Markov Model: a weighted automaton in which the input does not
uniquely determine a single path; the states are hidden. The Viterbi algorithm is
the most commonly used algorithm for processing these models.
Viterbi Algorithm: traces through the state graph combining the probabilities,
backtracking when a path kept from a previous level turns out to score higher.
Example, for the words need (n-iy-d), neat (n-iy-t), new (n-uw), knee (n-iy),
with a graph running Begin -> n (1.0) -> either iy (.64) or uw (.36) ->
optionally t (.24) or d (.315) -> End.
Possible paths are:
- new = n uw => 1.0 x .36 => .36
- neat = n iy t => 1.0 x .64 x .24 => .128
- need = n iy d => 1.0 x .64 x .445 => .178
- knee = n iy => 1.0 x .64 x .315 => .2016
The first pass checks n uw (1.0 x .36 = .36) and n iy (1.0 x .64 = .64); .64 is
the higher probability, so we pursue that. The next pass gives us iy t (.64 x .24
= .128), iy (.64 x .315 = .2016) and iy d (.64 x .445 = .178). But these are
smaller than the .36 we collected as a high probability in the previous pass, so
we backtrack to that. If there were more levels in the graph we would continue
this loop until reaching the end.
The weights are calculated as weight = -log(actual probability), so if the
probability of n uw is .44 the graph weight is -log(.44) => .36.
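
A general form of the Viterbi recursion is easy to write down in code. The sketch below
decodes the most likely state path for a small hidden Markov model; the two-state transition
and emission tables in main are made-up toy numbers for illustration, not the pronunciation
weights from the example above.

public class Viterbi {

    // v[t][s] = best probability of any state path that ends in state s
    // after emitting the first t+1 observations; backptr records the path.
    static int[] decode(double[] start, double[][] trans, double[][] emit, int[] obs) {
        int nStates = start.length, T = obs.length;
        double[][] v = new double[T][nStates];
        int[][] backptr = new int[T][nStates];

        for (int s = 0; s < nStates; s++)
            v[0][s] = start[s] * emit[s][obs[0]];

        for (int t = 1; t < T; t++) {
            for (int s = 0; s < nStates; s++) {
                double best = -1; int bestPrev = 0;
                for (int p = 0; p < nStates; p++) {
                    double score = v[t - 1][p] * trans[p][s] * emit[s][obs[t]];
                    if (score > best) { best = score; bestPrev = p; }
                }
                v[t][s] = best;
                backptr[t][s] = bestPrev;
            }
        }

        // pick the best final state, then follow the back pointers
        int last = 0;
        for (int s = 1; s < nStates; s++) if (v[T - 1][s] > v[T - 1][last]) last = s;
        int[] path = new int[T];
        path[T - 1] = last;
        for (int t = T - 1; t > 0; t--) path[t - 1] = backptr[t][path[t]];
        return path;
    }

    public static void main(String[] args) {
        // toy 2-state model with two possible observation symbols
        double[] start = {0.6, 0.4};
        double[][] trans = {{0.7, 0.3}, {0.4, 0.6}};
        double[][] emit  = {{0.9, 0.1}, {0.2, 0.8}};
        int[] obs = {0, 0, 1};
        for (int s : decode(start, trans, emit, obs)) System.out.print(s + " ");
    }
}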

4.2 Fuzzy Stuff
Fuzzy logic has had great success in running computer-operated machinery.
For instance, if I write a program to control the thermostat in my home I
can set it for 'cool'. Coming out of winter into summer, 60°F feels cool; going
from summer into fall, 70° feels cool. I might describe cool as between 50° and
70°, warm as between 60° and 80°, cold as anything less than 60° and hot as anything
over 70°. So the computer doesn't quite get it when I say I would like the home to be
warm: should it be 60°, even though that is also cool?
Softening and fuzzing the data enables the computer to deal with
overlapping or otherwise not clear-cut data. It also keeps the machine from
jumping about too much when the inputs change. Groups of overlapping data
are hard coded into the software along with rules for fuzzing it.
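
A minimal sketch of the idea, using made-up triangular membership functions for the
overlapping 'cool' and 'warm' ranges described above (the exact breakpoints here are my
own illustration, not prescribed values):

public class FuzzyTemp {

    // Triangular membership: 0 outside [lo, hi], 1 at the peak, linear in between.
    static double triangle(double x, double lo, double peak, double hi) {
        if (x <= lo || x >= hi) return 0.0;
        return x < peak ? (x - lo) / (peak - lo) : (hi - x) / (hi - peak);
    }

    static double cool(double t) { return triangle(t, 50, 60, 70); }
    static double warm(double t) { return triangle(t, 60, 70, 80); }

    public static void main(String[] args) {
        for (double t : new double[]{55, 62, 68, 75}) {
            System.out.printf("%.0f F: cool=%.2f warm=%.2f%n", t, cool(t), warm(t));
        }
        // A 65 degree reading is partly 'cool' and partly 'warm' at the same time;
        // the controller acts on these degrees of membership instead of hard cutoffs.
    }
}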

4.3 Evolutionary AI
Genetic programs create individual programs that compete for survival. Those
that do well reproduce, usually with another program that did well. The child
gets a random mix of traits from the parents and may do better or worse than
the parents. Some of these programs are written for specific problem-solving
ability, others for general skills. Often a mutation is thrown in that
affects a very small percentage of the population.
The simplest of these is Life. A grid of squares is laid out and life multiplies
or dies off depending on the number of occupied neighboring cells. The newer,
more complex versions have genetic code that children inherit as subroutines
from both parents, with a bit of randomness mixed in, and they compete in a
survival-of-the-fittest environment. The hope is that after many generations we will
have intelligence.
Artificial societies are also being used to study and predict what real-world
societies will do. Using a simple version of Life you can change the rules and
mimic real-world situations. This method is also being used by archaeologists
to determine what caused the rises and falls of past civilizations. In 1971
the economist Thomas C. Schelling used such a method to show how neighborhoods
can segregate even when individual racism is not the driving cause. Usually a few
very simple rules are all that are needed for realistic simulations to develop.
Game of Life by Conway, in Java. A grid of squares is randomly marked
on or off. If a square has fewer than 2 neighbors it dies of loneliness, if it has 2
neighbors it stays the same, if it has 3 neighbors a birth occurs, and if it has more
than 3 neighbors it dies of overcrowding.
//www.timestocome.com

//Conway's Game of Life ( example of cellular automaton )


//This is a grid of 30 x 30 squares, though any number can be used
//if a square has 0 or 1 neighbor dies of loneliness
//if a square has 2 neighbors no change occurs
//if a square has 3 neighbors a birth occurs
//if a square has 4 or more neighbors dies of overcrowding.

import java.awt.*;
import java.awt.event.*;
import java.util.*;

public class Life extends Frame implements Runnable, WindowListener


{

int xdim = 500;


int ydim = 500;
Thread animThread; // to continuously call repaint();
int boxsize = 10;
Rectangle box = new Rectangle ( 0, 0, boxsize, boxsize );
int gridsize = 30;
int board[][] = new int[gridsize][gridsize];
long delay = 1000L; // how long to pause between redraws
int startposition = 1; //1 is random setup, 2-7 are well known movers

public static void main(String[] args)


{
new Life();
}

//set up frame and start animation


public Life()
{

super ( " Life " );
setBounds ( 0, 0, xdim, ydim );
setVisible ( true );
addWindowListener ( this );
animThread = new Thread ( this );
animThread.start();

setupBoard( startposition );
} // end constructor (closing brace restored; lost at a page break)

void setupBoard( int setup )


{
//erase board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
board[i][j] = 0;
}
}

if ( setup == 1 ){ //random
Random r = new Random ( System.currentTimeMillis() );
for ( int i=0; i<gridsize; i++ ){
for ( int j=0; j<gridsize; j++ ){
if ( r.nextInt()%2 == 0 ){
board[i][j] = 1;
}
}
}

}else if ( setup == 2 ){ //glider


int center = gridsize/2;
board[center][center] = 1;
board[center-2][center+1] = 1;
board[center][center+1] = 1;
board[center-1][center+2] = 1;
board[center][center+2] = 1;

}else if ( setup == 3 ){ //small exploder


int center = gridsize/2;
board[center][center] = 1;
board[center-1][center+1] = 1;
board[center][center+1] = 1;

board[center+1][center+1] = 1;
board[center-1][center+2] = 1;
board[center+1][center+2] = 1;
board[center][center+3] = 1;

}else if ( setup == 4 ){ //exploder


int center = gridsize/2;
board[center][center] = 1;
board[center-2][center] = 1;
board[center+2][center] = 1;
board[center-2][center+1] = 1;
board[center+2][center+1] = 1;
board[center-2][center+2] = 1;
board[center+2][center+2] = 1;
board[center-2][center+3] = 1;
board[center+2][center+3] = 1;
board[center-2][center+4] = 1;
board[center+2][center+4] = 1;
board[center][center+4] = 1;

}else if ( setup == 5 ) { //10 cell row


int center = gridsize/2;
for ( int i=center-5; i<center+5; i++){
board[center][i] = 1;
}

}else if ( setup == 6 ) { //fish


int center = gridsize/2;
board[center][center] = 1;
board[center+1][center] = 1;
board[center+2][center] = 1;
board[center+3][center] = 1;
board[center-1][center+1] = 1;
board[center+3][center+1] = 1;
board[center+3][center+2] = 1;
board[center-1][center+3] = 1;
board[center+2][center+3] = 1;

}else if ( setup == 7 ){ //pump


int center = gridsize/2;
board[center][center] = 1;
board[center+1][center] = 1;
board[center+3][center] = 1;
board[center+4][center] = 1;
board[center][center+1] = 1;
board[center+1][center+1] = 1;

board[center+3][center+1] = 1;
board[center+4][center+1] = 1;
board[center+1][center+2] = 1;
board[center+3][center+2] = 1;
board[center-1][center+3] = 1;
board[center+1][center+3] = 1;
board[center+3][center+3] = 1;
board[center+5][center+3] = 1;
board[center-1][center+4] = 1;
board[center+1][center+4] = 1;
board[center+3][center+4] = 1;
board[center+5][center+4] = 1;
board[center-1][center+5] = 1;
board[center][center+5] = 1;
board[center+4][center+5] = 1;
board[center+5][center+5] = 1;
} // end setup == 7 (closing braces restored; lost at a page break)
} // end setupBoard

//update the game board: life rules go here


void updateBoard()
{

int count;
int oldboard[][] = new int[gridsize][gridsize];

//copy board to old board and clean new board


for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
oldboard[i][j] = board[i][j];
board[i][j] = 0;
}
}

//for each square check surrounding squares and get a count of neighbors
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){

count = 0;

//count neighbors but don't run off edge
if ( i>0 ){ if ( j>0 ){ count += oldboard[i-1][j-1]; } }
if ( i>0 ){ count += oldboard[i-1][j]; }
if ( i>0 ){ if ( j<(gridsize-1) ){ count += oldboard[i-1][j+1]; }}

if ( j>0 ){ count += oldboard[i][j-1]; }


if ( j<(gridsize-1) ){ count += oldboard[i][j+1]; }

if ( i<(gridsize-1) ){ if ( j>0 ){ count += oldboard[i+1][j-1]; } }


if ( i<( gridsize-1) ){ count += oldboard[i+1][j]; }
if ( i<( gridsize-1) ){ if ( j<(gridsize-1) ){ count += oldboard[i+1][j+1]; }}

if ( count < 2 ) { board[i][j] = 0; } //die of loneliness


else if ( count == 2 ){ board[i][j] = oldboard[i][j]; } //no change, stable
else if ( count == 3 ){ board[i][j] = 1; } //a new life is born
else if ( count > 3 ) { board[i][j] = 0; } //die of overcrowding

}
}
}

//animation loop
public synchronized void run()
{
while ( true ) { //keep animating forever!!!
try {
updateBoard(); // calc location
Thread.sleep( delay ); // after if happens, wait a bit
repaint( 0L ); // request redraw
wait(); // wait for redraw

} catch( Exception ex ) { System.err.println( "Error: " + ex ); }


}
}

public void update(Graphics g) { paint(g); }

//repaint the scene


public synchronized void paint(Graphics g)

{
//draw background
g.setColor( Color.white );
g.fillRect( 0, 0, xdim, ydim );

int x = 15;
int y = 35;

//draw board
for ( int i=0; i<gridsize; i++ ){
for ( int k=0; k<gridsize; k++ ){

Color background = new Color ( 210, 210, 210 );


if ( board[i][k] == 0 ){ g.setColor ( background ); }
else{ g.setColor ( Color.blue ); }

g.fillRect ( x, y, boxsize, boxsize );


x += boxsize+5;
}
y += boxsize+5;
x = 15;
}

notifyAll();
}

public void windowOpened( WindowEvent ev ) {}


public void windowActivated( WindowEvent ev ) {}
public void windowIconified( WindowEvent ev ) {}
public void windowDeiconified( WindowEvent ev ) {}
public void windowDeactivated( WindowEvent ev ) {}
public void windowClosed( WindowEvent ev ) {}

public void windowClosing(WindowEvent ev)


{
animThread = null;
setVisible(false);
dispose();
System.exit(0);
}

} // end class Life (closing brace restored)
Another example is flocking. This is done by creating a flock of animals and
having them follow 3 rules: move in the same direction as the rest of the flock;
move to position yourself in the center of the flock; and don't collide with the
other birds. The larger the flock, the more interesting the behavior you will see.

//www.timestocome.com

//flocking example
//rules
//1. Avoid collisions with other birds
//2. Attempt to match velocity to rest of the group
//3. Attempt to remain in center of the flock

import java.awt.*;
import java.awt.event.*;
import java.util.*;

public class Flocking extends Frame implements Runnable, WindowListener


{

int xdim = 530;


int ydim = 530;
Thread animThread; // to continuously call repaint();
int boxsize = 2;
Rectangle box = new Rectangle ( 0, 0, boxsize, boxsize );
int gridsize = 100;
int board[][] = new int[gridsize][gridsize];
long delay = 300L; // how long to pause between redraws
int flocksize = 30;
int xcenter = 0;
int ycenter = 0;
int oldxcenter = 0;
int oldycenter = 0;
int birds[][] = new int[flocksize][2];
int xdirection = 0;
int ydirection = 0;

public static void main(String[] args)


{
new Flocking();
}

//set up frame and start animation


public Flocking()

{
super ( " Flocking " );
setBounds ( 0, 0, xdim, ydim );
setVisible ( true );
addWindowListener ( this );
animThread = new Thread ( this );
animThread.start();

//set board to zeros


for ( int i=0; i<gridsize; i++ ){
for ( int j=0; j<gridsize; j++ ){
board[i][j] = 0;

}
}

//lets start with the birds, assume no collisions on random numbers


Random r = new Random ( System.currentTimeMillis() );
for ( int i=0; i<flocksize; i++){
int x = r.nextInt ( gridsize );
int y = r.nextInt ( gridsize );
board[x][y] = 1;
xcenter += x;
ycenter += y;
birds[i][0] = x;
birds[i][1] = y;
}

//now calc center of flock


xcenter /= flocksize;
ycenter /= flocksize;
//board[xcenter][ycenter] = 2;
} // end constructor (closing brace restored; lost at a page break)

//***********************************************************************************
//update here
void updateBoard()
{

//don't change board as we calculate

int oldboard[][] = board;
int oldbirds[][] = birds;

//get flock direction


xdirection = xcenter - oldxcenter;
ydirection = ycenter - oldycenter;

if ( xdirection < 0 ) { xdirection = -1; }


else if ( xdirection > 0 ) { xdirection = 1; }
else { xdirection = 0; }

if ( ydirection < 0 ) { ydirection = -1; }


else if ( ydirection > 0 ) { ydirection = 1; }
else { ydirection = 0; }

//don't have direction x and y = 0 or flock stops moving


// so lets randomly choose a new direction
if (( xdirection == 0 ) && ( ydirection == 0 )){
Random r = new Random ( System.currentTimeMillis() );
int random = r.nextInt();
if ( random%8 == 0 ){ xdirection = -1; ydirection = -1; }
else if ( random%7 == 0 ){ xdirection = -1; ydirection = 0; }
else if ( random%6 == 0 ){ xdirection = -1; ydirection = 1; }
else if ( random%5 == 0 ){ xdirection = 1; ydirection = -1; }
else if ( random%4 == 0 ){ xdirection = 1; ydirection = 0; }
else if ( random%3 == 0 ){ xdirection = 1; ydirection = 1; }
else if ( random%2 == 0 ){ xdirection = 0; ydirection = -1; }
else { xdirection = 0; ydirection = 1; }
}

//save so can get direction on next pass


oldxcenter = xcenter;
oldycenter = ycenter;

/*
//send data to user while debugging code
for ( int i=0; i<flocksize; i++){
System.out.println ( birds[i][0] + ", " + birds[i][1] );
}
System.out.println ( "center " + xcenter + ", " + ycenter );
System.out.println ( "direction " + xdirection + ", " + ydirection );
System.out.println();

*/

//aim each bird toward center of flock


//aim each bird in general direction of flock
//do not hit other birds
for ( int i=0; i<flocksize; i++){

int x = 0; int y = 0;

//move toward center of flock


if ( xcenter < birds[i][0] ){ x--; }
if ( xcenter > birds[i][0] ){ x++; }
if ( ycenter < birds[i][1] ){ y--; }
if ( ycenter > birds[i][1] ){ y++; }

//move in direction of flock


x += xdirection*(flocksize/10);
y += ydirection*(flocksize/10);

//don't run off edge of world


int edgebuffer = flocksize/10;
if ( (birds[i][0] - x) < edgebuffer ){
if ( x < 0 ) { x *= -edgebuffer; } //change direction

}else if ( (birds[i][0] + x) > gridsize-edgebuffer ){


if ( x > 0 ) { x *= -edgebuffer; } //change direction
}

if ( (birds[i][1] - y ) < edgebuffer ){


if ( y < 0 ) { y *= -edgebuffer; } //change direction
}else if (( birds[i][1] + y) > gridsize-edgebuffer ){
if ( y > 0 ) { y *= -edgebuffer; } //change direction
}

//are we standing still?


if ( x == 0 && y == 0 ){
//*****randomize this if it works also don't run over other birds
if ( birds[i][0] > edgebuffer ){ x--; }
else if ( birds[i][0] < gridsize-edgebuffer ) { x++; }
if ( birds[i][1] > edgebuffer ) { y--; }

else if ( birds[i][1] < gridsize-edgebuffer ) { y++; }
}

//avoid other birds


for ( int z=0; z<flocksize; z++){
if (( birds[i][0]+x == birds[z][0] ) && ( birds[i][1]+y == birds[z][1] )){
//we have a clash
x = 0; y = 0; //pause a moment

//****** you are here we need a better solution *****


Random r = new Random ( System.currentTimeMillis() );
int random = r.nextInt();
//change aim slightly make sure still on board and not ontop of someone else
if ( random%4 == 0 ) {}
else if ( random%3 == 0 ) {}
else if ( random%2 == 0 ) {}
else {}
}
}

//update bird
birds[i][0] += x;
birds[i][1] += y;

}//end move each bird

//recalculate center
xcenter = 0; ycenter = 0;
for ( int i=0; i<flocksize; i++ ) {
xcenter += birds[i][0];
ycenter += birds[i][1];
}
xcenter /= flocksize;
ycenter /= flocksize;

//update board

for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
if ( board[i][j] == 1 ) { board[i][j] = 0; }
}
}

for ( int i=0; i<flocksize; i++){


//add in new bird position
int w = birds[i][0];
int h = birds[i][1];
board[w][h] = 1;
}

//board[xcenter][ycenter] = 2;

}
//***********************************************************************************

//animation loop
public synchronized void run()
{
while ( true ) { //keep animating forever!!!
try {
updateBoard(); // calc location
repaint( 0L ); // request redraw
wait(); // wait for redraw
Thread.sleep( delay ); // after if happens, wait a bit

} catch( Exception ex ) { System.err.println( "Error: " + ex ); }


}
}

public void update(Graphics g) { paint(g); }

//repaint the scene


public synchronized void paint(Graphics g)
{
//draw background
g.setColor( Color.white );

g.fillRect( 0, 0, xdim, ydim );

int x = 5;
int y = 25; //clear title bar

Color background = new Color( 220, 220, 230 );

//draw board
for ( int i=0; i<gridsize; i++ ){
for ( int k=0; k<gridsize; k++ ){

if ( board[i][k] == 0 ){ g.setColor ( background); }


else if ( board[i][k] == 2 ) { g.setColor ( Color.red ); }
else if ( board[i][k] == 3 ) { g.setColor ( Color.green ); }
else{ g.setColor ( Color.black ); }

g.fillRect ( x, y, boxsize, boxsize );


x += 5;
}

y += 5;
x = 5;
}

notifyAll();
}

public void windowOpened( WindowEvent ev ) {}


public void windowActivated( WindowEvent ev ) {}
public void windowIconified( WindowEvent ev ) {}
public void windowDeiconified( WindowEvent ev ) {}
public void windowDeactivated( WindowEvent ev ) {}
public void windowClosed( WindowEvent ev ) {}

public void windowClosing(WindowEvent ev)


{
animThread = null;
setVisible(false);
dispose();
System.exit(0);
}

}

This is the same as Conway's Life above except that I added DNA to make
things a bit more interesting. A day counter moves along a DNA strand, 2 marks
per day. When a child is born, a mix of both parents' DNA makes up the baby's.
The longer a creature lives, the brighter its color: red for one sex, blue for the
other.

import java.util.*;

public class Creature


{
int dna_length = 24; //20 plus some junk dna
int dna[] = new int[dna_length];
int x=0, y=0;
int age=0, sex=0;
boolean alive=true;

Random r = new Random ( System.currentTimeMillis());

Creature( int xMax, int yMax )


{

xMax--; yMax--; //so we don't run off end of array

if ( r.nextInt()%2 == 0 ){ sex = 0; } else { sex = 1; }


age = 0;
alive = true;

for ( int i=0; i<dna_length; i++){


if ( r.nextInt()%2 == 0 ){ dna[i] = 0; } else { dna[i] = 1; }
}

x = r.nextInt( xMax );
y = r.nextInt( yMax );
} // end constructor (closing brace restored; lost at a page break)

void grow(){ age++; }

void newLocation(int w, int h)


{
x = w;
y = h;
}

void babydna ( int babydna[] )
{
dna = babydna;
}
}

import java.awt.*;
import java.awt.event.*;
import java.util.*;

public class Dnalife extends Frame implements Runnable, WindowListener


{

int xdim = 500;


int ydim = 510;
Thread animThread; // to continuously call repaint();
int gridsize = 40;
int board[][] = new int[gridsize][gridsize];
long delay = 1000L; //length of pause between redraws
int boxsize = (int)(xdim/(gridsize+10));
Rectangle box = new Rectangle ( 0, 0, boxsize, boxsize );
int empty = -1;
Vector creatures = new Vector();
int day = 0;
int generations = 0;
int startingCreatures = gridsize*gridsize/5;

public static void main(String[] args)


{
new Dnalife();
}

//set up frame and start animation


public Dnalife()
{
super ( " DNA Life " );
setBounds ( 0, 0, xdim, ydim );
setVisible ( true );
addWindowListener ( this );

setupBoard();

animThread = new Thread ( this );


animThread.start();
}

void setupBoard()
{
//erase board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
board[i][j] = empty;
}
}

Random r = new Random ( System.currentTimeMillis());

//set up original creatures


for ( int i=0; i<startingCreatures; i++){
creatures.add(new Creature( gridsize, gridsize ));

//position on gameboard
int x = ((Creature)creatures.elementAt(i)).x;
int y = ((Creature)creatures.elementAt(i)).y;
if ( board[x][y] == empty ){
board[x][y] = i;
}else{ //someone has this spot pick a new one
boolean done = false;

while ( !done ){ //keep trying till we find an empty spot


x = r.nextInt( gridsize );
y = r.nextInt( gridsize );
if ( board[x][y] == empty ){
((Creature)creatures.elementAt(i)).newLocation( x, y );
done = true;
}
}
}
}
}

//update the game board here


void updateBoard()
{

int dead=0;
int born=0;
generations++;

//clear board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++){
board[i][j] = empty;
}
}

int numberOfCreatures = creatures.size();


int m = numberOfCreatures-1;

//place on board and age one year


numberOfCreatures = creatures.size();
for ( m=0; m<numberOfCreatures; m++){
Creature tempCreature = (Creature) creatures.elementAt(m);
board[ tempCreature.x ][ tempCreature.y ] = m;
tempCreature.grow();
}

//update date
if ( day > 9 ){ day = 0; }else{ day++; }
int mark = day*2; //dna marker

//walk through board


for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++){

int neighbors = 0;
int neighborhood[] = new int[8];
for ( int k=0; k<8; k++){ neighborhood[k] = empty; }

//count neighbors
if ( i > 0 ){ if ( board[i-1][j] != empty ){
neighborhood[neighbors] = board[i-1][j];
neighbors++; } } //north

if (( i > 0 ) && ( j < gridsize-1 )){ if ( board[i-1][j+1] != empty ){
neighborhood[neighbors] = board[i-1][j+1];
neighbors++; }} // north east

if ( j < gridsize-1 ){ if ( board[i][j+1] != empty ){


neighborhood[neighbors] = board[i][j+1];
neighbors++; }} // east

if (( i < gridsize-1 ) && ( j < gridsize-1 )) { if ( board[i+1][j+1] != empty ){


neighborhood[neighbors] = board[i+1][j+1];
neighbors++; }} // south east

if ( i < gridsize-1 ){ if ( board[i+1][j] != empty ){


neighborhood[neighbors] = board[i+1][j];
neighbors++; }} // south

if (( i < gridsize-1) && ( j > 0 )){ if ( board[i+1][j-1] != empty ){


neighborhood[neighbors] = board[i+1][j-1];
neighbors++; }} // south west

if ( j > 0 ){ if ( board[i][j-1] != empty ){


neighborhood[neighbors] = board[i][j-1];
neighbors++; }} // west

if (( i > 0 ) && ( j > 0 )){ if ( board[i-1][j-1] != empty ){


neighborhood[neighbors] = board[i-1][j-1];
neighbors++; }} //north west

//if neighbors = 3 && one neighbor opposite sex have baby


//if neighbors > 3 die of over crowding
//if neighbors < 3 die of loneliness pick direction based on dna
if (( neighbors > 0 )&&( board[i][j] != empty )){

//see if any neighbors opposite sex


//only check one sex otherwise duplicate
int sex = ((Creature)creatures.elementAt(board[i][j])).sex;

//see if mating time


if ((( board[i][j] != empty ) && (((Creature)creatures.elementAt(board[i][j])).dna[mark] == 1 )) &&
(((Creature)creatures.elementAt(board[i][j])).dna[mark+1] == 0 ))
{

for ( int k=0; k<8; k++){

if ( neighborhood[k] != empty ){

//?found a mate?
if (((Creature)creatures.elementAt(neighborhood[k])).sex != sex ){

//is it mating time?


if ((((Creature)creatures.elementAt(neighborhood[k])).dna[mark] == 1 ) &&
(((Creature)creatures.elementAt(neighborhood[k])).dna[mark+1] == 0 ))
{

//if so create new Creature add to vector


Creature c = new Creature( gridsize, gridsize );
int row = c.x;
int col = c.y;
boolean done = false;

if ( board[row][col] == empty ){
creatures.add ( c );
born++;
done = true;

}else{ //need to reposition

Random r = new Random ( System.currentTimeMillis());


int count = 0;

while ( !done ){

count++;

if ( count > 8 ) { // crowded board, too crowded for a new life


done = true;
}

int xPos = r.nextInt( gridsize );


int yPos = r.nextInt( gridsize );
if ( board[xPos][yPos] == empty ){
c.x = xPos;
c.y = yPos;
creatures.add ( c );
done = true;
born++;
}//end if
}//end while
} //end else

//take dna from parents for baby
Random r = new Random ( System.currentTimeMillis());
int dnaMix = r.nextInt(20);
int newDna[] = new int[24]; // full dna_length so later dna[mark+1] reads on the child stay in bounds

//first parent
for ( int p1=0; p1<dnaMix; p1++){
newDna[p1] = ((Creature)creatures.elementAt(board[i][j])).dna[p1];
}
//second parent
for ( int p2=dnaMix; p2<20; p2++){
newDna[p2] = ((Creature)creatures.elementAt(neighborhood[k])).dna[p2];
}
//pass dna onto baby
((Creature)creatures.lastElement()).babydna(newDna);

}//end if mating time, second parent


}// if found mate
}//if neighborhood[k] !empty
}//for k
}//if mating time first parent

if (( neighbors < 1 )&&( board[i][j] != empty )){ //die lonliness


if (((Creature)creatures.elementAt(board[i][j])).dna[mark] == 0 ){
if (((Creature)creatures.elementAt(board[i][j])).dna[mark+1] == 0 ){
((Creature)creatures.elementAt(board[i][j])).alive = false;
dead++;
}
}
}else if (( neighbors > 5 )&&( board[i][j] != empty )){ //die overcrowding
if (((Creature)creatures.elementAt(board[i][j])).dna[mark] == 0 ){
if (((Creature)creatures.elementAt(board[i][j])).dna[mark+1] == 0 ){
((Creature)creatures.elementAt(board[i][j])).alive = false;
dead++;
}
}
}

}//for j

}//for i

//remove dead
int length = creatures.size();
for ( int i=0; i<length; i++){
if ( ((Creature)creatures.elementAt(i)).alive == false ){
creatures.removeElementAt(i);
length--;
}
}

//********* need to redraw board here to reflect new vector numbers ****************//
//erase board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++){
board[i][j] = empty;
}
}

//place creatures on board


length = creatures.size();
for ( int i=0; i<length; i++){

//position on gameboard
int x = ((Creature)creatures.elementAt(i)).x;
int y = ((Creature)creatures.elementAt(i)).y;
board[x][y] = i;

}
//************************************************************************************//

System.out.println ( " generation " + generations + " vector size " + creatures.size() +
" dead " + dead );
}

//animation loop
public synchronized void run()
{
while ( true ) { //keep animating forever!!!
try {

Thread.sleep( delay ); // wait a bit


updateBoard(); // calc location
repaint( 0L ); // request redraw
wait(); // wait for redraw

} catch( Exception ex ) { System.err.println( "Error: " + ex ); }


}
}

public void update(Graphics g) { paint(g); }

//repaint the scene


public synchronized void paint(Graphics g)
{

//draw background
g.setColor( Color.white );
g.fillRect( 0, 0, xdim, ydim );
Color background = new Color ( 210, 210, 210 );
Color color = new Color ( 0, 100, 0 );
int age=0, sex=0;

//margins
int x = 5;
int y = 25;

//first make sure we initialized everything this keeps graphics thread from
//trying to draw board before we've set up initial conditions
if ( creatures.size() > 0 ){

//draw current state of affairs


for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++){

g.setColor ( background ); // if square not empty this gets changed

//color? show age and sex


int mark = board[i][j];
if (( mark != empty ) && ( mark >= 0)){

sex = ((Creature)creatures.elementAt(mark)).sex;
age = ((Creature)creatures.elementAt(mark)).age;

if ( sex == 0 ){
if ( age < 255 ){ color= new Color( age*2, 0, 0 );
}else{ color = new Color ( 255, 0, 0 ); }

}else{
if ( age < 255 ) { color = new Color ( 0, 0, age*2 );
}else{ color = new Color ( 0, 0, 255 ); }
}

g.setColor ( color );
}

//fill in the block


g.fillRect( x, y, boxsize, boxsize );

x += boxsize+1;

}
y += boxsize +1;
x = 5;
}

}//end if board initialized creatures.size() > 0

notifyAll();
}

public void windowOpened( WindowEvent ev ) {}


public void windowActivated( WindowEvent ev ) {}
public void windowIconified( WindowEvent ev ) {}
public void windowDeiconified( WindowEvent ev ) {}
public void windowDeactivated( WindowEvent ev ) {}
public void windowClosed( WindowEvent ev ) {}

public void windowClosing(WindowEvent ev)
{
animThread = null;
setVisible(false);
dispose();
System.exit(0);
}
} // end class Dnalife (closing brace restored)

4.4 Computer Vision


There are two types of computer eyes in use now. One, known as 'spot' or
'jumping' eyes, sends out a narrow laser beam and measures the amount of light
that comes back. A second type of computer vision uses 'imaging eyes'; these
form pictures much as digital cameras do.
Image filtering from 2-D into 3-D involves filtering many items such as shading,
color, intensity, texture, etc. The information is then used to create a scene.
First the image is decomposed into pixels or small blocks of color and light
intensity. Then, to erase spikes of light or clean up random noise in the picture, a
filter operation (convolution) is slid over the image that averages pixels adjacent
to each other and replaces the center pixel with the average. Usually
a two-dimensional Gaussian is used as the weighting function. Another method
looks for pixels that vary greatly from the pixels surrounding them and gives such
a pixel the average of its surroundings. This method tends not to blur the
image as much as the first.
Next some type of edge detection is performed. One method enhances areas
that have large changes in color from pixel to pixel. This is done by a process
that, in effect, takes the second derivative of the image; where the second
derivative is zero an edge occurs. This is done by creating a sliding window
with a positive and a negative section. As it slides over the image it compares
the pixels underneath to each other; the sum is zero over areas that don't change.
A method used in maps is 'contouring', which measures the difference in intensity
of the pixels rather than color. This pulls out and clarifies areas of different
depths.
The convolution and edge detection are generally combined into one operation.
This results in the Laplacian of the Gaussian function being performed
over the image. There are newer, better techniques available now than this one.
Another method to differentiate objects in an image attempts to find
areas of gradual change in color, light intensity, etc. An area is a region of
homogeneous pixels that differ by no more than a small amount, epsilon; adjacent
regions are not homogeneous. Split and Merge [Horowitz & Pavlidis] is one
such method. The whole image is split into equal parts and these are tested for
homogeneity; if a region is not homogeneous the splitting continues
until all regions are homogeneous. Regions are then merged with neighboring
regions that are homogeneous with them. This method and the one above
run into many problems differentiating shadows from edges.
Now scene analysis is done to extrapolate a scene from the information
gathered. For this part more information is needed: other scenes, stereo vision,
or the positions of a moving camera. In one method a line drawing is extrapolated
and the junctions of the lines are matched to table entries to determine whether the
object extends outward or inward. If the scene contains well-known objects, the
objects may be stored as line drawings in a table to be matched.
Gaussian: G(x, y) = 1 / (2 * pi * sigma^2) * e^(-(x^2 + y^2) / (2 * sigma^2))
average: (X_i + X_j + X_k + ...) / NumberOfXs
standard deviation: sigma = sqrt( sum of (X_i - average)^2 / NumberOfXs )
The standard deviation sets the width of the bell-shaped Gaussian curve, which
gives closer pixels a higher weight.
Laplacian: del^2 F(x, y) = d^2 F(x, y)/dx^2 + d^2 F(x, y)/dy^2
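
To make the filtering step concrete, here is a small sketch of building a normalized
Gaussian kernel and sliding it over a grayscale image stored as a 2-D array of doubles.
The kernel radius and sigma are arbitrary example choices, not values from the text.

public class GaussianSmooth {

    // Build a normalized (2r+1)x(2r+1) Gaussian kernel with the given sigma.
    static double[][] kernel(int r, double sigma) {
        double[][] k = new double[2 * r + 1][2 * r + 1];
        double sum = 0;
        for (int y = -r; y <= r; y++)
            for (int x = -r; x <= r; x++) {
                k[y + r][x + r] = Math.exp(-(x * x + y * y) / (2 * sigma * sigma));
                sum += k[y + r][x + r];
            }
        for (double[] row : k)
            for (int x = 0; x < row.length; x++) row[x] /= sum;   // weights sum to 1
        return k;
    }

    // Slide the kernel over the image, replacing each pixel with the weighted
    // average of its neighborhood (edges are clamped to the border pixel).
    static double[][] convolve(double[][] img, double[][] k) {
        int h = img.length, w = img[0].length, r = k.length / 2;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double v = 0;
                for (int dy = -r; dy <= r; dy++)
                    for (int dx = -r; dx <= r; dx++) {
                        int yy = Math.min(h - 1, Math.max(0, y + dy));
                        int xx = Math.min(w - 1, Math.max(0, x + dx));
                        v += img[yy][xx] * k[dy + r][dx + r];
                    }
                out[y][x] = v;
            }
        return out;
    }
}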

4.5 Turing Machines, State Machines and Finite State Automaton
A Turing machine consists of a black box and an infinite tape. The tape is
divided into blocks, and each block holds one symbol. The tape head can be on
one block at a time; it can read or write that block and then move one block
forward or one block backward, or it can stop. The black box has a controller
and a finite memory, and decides what to do based on the information read. Turing
machines can describe any finitely describable function that maps one set of
strings onto another set of strings. It is believed that any function that can
be computed can be computed by a Turing machine. Automaton theory is a
branch of mathematics that describes the operation of computers, especially
that of Turing machines. A Turing machine that can read but not write is a
finite state automaton. These are used extensively in text searches, pattern
matching and regular expressions.
A state machine keeps a temporal representation of the state of its environment.
It tracks the state it begins in, as well as the states into which actions or
sensory data move it. Finite state machines are used as part of the
AI engine in games; this type of setup can be computationally efficient for
updating game characters. A finite state machine is a set consisting of: a finite
set of states; a finite set of input symbols; a finite set of output symbols; the
next-state function; and the next-output function. In a neural net state machine the
states are generally stored in some type of vector. The network has a hidden
unit for each feature of the environment it needs. The input consists of an input
for each sense, plus the input from the hidden units. There is an output for
each possible action that can be taken. Another way to store the information
is with a map that is updated. This is known as 'iconic representation' and is
used for software agents.
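
As a small concrete example (my own, not taken from any program in this book), the
finite state machine below recognizes binary strings that contain the substring "11".
It has exactly the pieces listed above: a finite set of states, a finite input alphabet,
and a next-state function.

public class ContainsDoubleOne {

    // States: 0 = no '1' seen yet, 1 = one '1' just seen, 2 = "11" found (accepting).
    static final int[][] NEXT = {
        // input '0', input '1'
        {0, 1},   // from state 0
        {0, 2},   // from state 1
        {2, 2}    // from state 2 (absorbing accept state)
    };

    static boolean accepts(String input) {
        int state = 0;
        for (char c : input.toCharArray()) {
            state = NEXT[state][c == '1' ? 1 : 0];   // the next-state function
        }
        return state == 2;
    }

    public static void main(String[] args) {
        System.out.println(accepts("10101"));  // false
        System.out.println(accepts("10110"));  // true
    }
}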

4.6 Blackboard Systems


A blackboard system works like a blackboard in a room full of people. Some
people, agents or parts of the program are allowed to add, change, erase or read
the information on the blackboard. There is usually an overseer who requests
that missing sections get filled in or updated; when each piece has done its work
the information is complete. There is also a conflict resolution piece to settle
disputes. Each agent or program is an expert on an area or type of information.
So the blackboard is just a centralized data structure to which various agents
or subroutines have access.
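
The shape of the idea can be sketched in a few lines: a shared map plays the role of the
blackboard, and each knowledge source reads it and may add to it. The interface and the
two 'expert' lambdas below are invented for illustration only.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each knowledge source reads the shared blackboard and may contribute to it.
interface KnowledgeSource {
    void contribute(Map<String, String> blackboard);
}

public class BlackboardDemo {
    public static void main(String[] args) {
        Map<String, String> blackboard = new HashMap<>();
        blackboard.put("raw_text", "temperature 72");

        List<KnowledgeSource> experts = List.of(
            b -> { if (b.containsKey("raw_text")) b.put("value", "72"); },   // parsing expert
            b -> { if (b.containsKey("value")) b.put("label", "warm"); }     // labelling expert
        );

        // A simple controller: keep asking the experts until nothing new is added.
        int before;
        do {
            before = blackboard.size();
            for (KnowledgeSource ks : experts) ks.contribute(blackboard);
        } while (blackboard.size() > before);

        System.out.println(blackboard);   // raw_text, value and label entries
    }
}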

4.7 User Interfaces


Computers were originally used only by programmers and other people familiar
with their workings. The GUI (Graphical User Interface) was first demonstrated
by Ivan Sutherland in Sketchpad, his PhD thesis; others attribute
the GUI design to the Smalltalk project at Xerox. Apple was the
company that brought the GUI to the public, followed by Windows and lastly by
Unix/Linux, whose users stubbornly considered it unnecessary overhead.
Later came hypertext, and the Internet morphed into the WWW as we know
it today. Hypertext was first used by Vannevar Bush in MEMEX; the term
hypertext was coined later by Ted Nelson. The universities used it in information
tools and Apple used it in HyperCard, bringing the idea to the public. Others
attribute the first use of hyperlinks to DynaText.
Interactive interfaces with bots and agents combined with speech recognition
are the current cutting edge in interfaces. Many new concerns come with them.
A too-human-appearing interface may be scary, or may be attributed
far more intelligence than it deserves; consider how much intelligence the
public already attributes to computers. The user interface should speed up the
task and make it easier, not exist to entertain, and should not slow down the task
at hand. If the task is user-intensive, such as word processing or searches,
then speech in place of typing will not slow things down, since the computer is
already tied to the user's speed. Otherwise traditional keyboard and mouse
interfaces should be used.
Other, non-anthropomorphic interfaces are also being developed. People
take in large amounts of information visually: looking at a plot is far more
informative than looking at a list of numbers, and a graphical user interface can
convey much more information than a textual one. TreeViz conveys information about
the files stored on a user's hard drive. It uses color, sound and shape to map the
whole drive on the screen in front of the user for easy cleanups.

4.8 Support Vector Machines


Support Vector Machines (SVMs) are based on a non-linear generalization of the
'Generalized Portrait' algorithm developed in Russia in the 1960s. They have
been around since the 1970s but only recently have begun to attract attention.
They have been used successfully for handwriting and speech recognition, as
well as speaker recognition, and they have the ability to pull objects such as faces
out of images.
Support vector machines sort data into two classes: it is in the set or it is
not in the set. Data which is not linearly separable can become linearly separable
in higher dimensions. However, if data is put into too many dimensions then the
data classes are memorized rather than learned; this is known as over-fitting,
and the SVM will not handle new data properly. Explicit bounds on the error rate
of an SVM can be calculated, which cannot be done for neural networks.
We want to create a hyperplane that gives the maximum possible distance
between the points in the set and the points out of the set, with the maximum
margin around it. So we want the maximum distance between the point in each
set that is closest to the other group, and we then create a margin of two lines
between the data. The main method used to do this is Lagrange multipliers (the
resulting optimization is the 'quadratic programming problem'; most good
spreadsheets and math programs have it built in).
Lagrange multipliers are used to find the extrema of f(x) subject to constraints
g(x), where f and g are functions with continuous first partial derivatives:
grad f = lambda_1 grad g_1 + lambda_2 grad g_2 + ..., where lambda is the
Lagrange multiplier.
The 'kernel' is a formula for the dot product in the higher-dimensional 'feature
space'. Feature space is the higher-dimensional space we map our data
into to make it linearly separable. Polynomials of various degrees and
Gaussian kernels are the most commonly used.
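
For concreteness, the two kernels just mentioned can be computed directly from the original
points, without ever constructing the higher-dimensional feature space. The sketch below
uses arbitrary example parameter values.

public class Kernels {

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Polynomial kernel of degree d: K(x, y) = (x . y + c)^d
    static double polynomial(double[] x, double[] y, double c, int d) {
        return Math.pow(dot(x, y) + c, d);
    }

    // Gaussian (RBF) kernel: K(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    static double gaussian(double[] x, double[] y, double sigma) {
        double sq = 0;
        for (int i = 0; i < x.length; i++) sq += (x[i] - y[i]) * (x[i] - y[i]);
        return Math.exp(-sq / (2 * sigma * sigma));
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0}, b = {0.5, -1.0};
        System.out.println(polynomial(a, b, 1.0, 2));  // degree-2 polynomial kernel
        System.out.println(gaussian(a, b, 1.0));       // Gaussian (RBF) kernel
    }
}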

4.9 Bayesian Logic


Bayesian neural networks and expert systems (a.k.a. uncertainty representation,
belief networks, probabilistic networks) are based on "Bayes' rule" or
"Bayes' theorem", a statistical theorem developed by Thomas Bayes in the 18th
century. It flips a conditional probability around, given the original conditional
probability. This is used to deal with uncertainty in expert systems, and it
often forms the main part of spell-correcting and speech recognition programs.

P(y|x) = P(x|y) P(y) / P(x)

Example:
- P(A) is the event of a person having cancer (10%)
- P(B) is the event of a person being a smoker (50%)
- P(B|A) is the percentage of cancer patients who smoke (80%)
- We wish to know the likelihood of a smoker having cancer:
P(A|B) = (0.8 x 0.1) / 0.5 = 0.16, or a 16% chance.
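
The arithmetic of that example, written out in code with the same numbers:

public class BayesRule {

    // P(A|B) = P(B|A) * P(A) / P(B)
    static double posterior(double pBgivenA, double pA, double pB) {
        return pBgivenA * pA / pB;
    }

    public static void main(String[] args) {
        double pCancer = 0.10;            // P(A)
        double pSmoker = 0.50;            // P(B)
        double pSmokerGivenCancer = 0.80; // P(B|A)
        // probability that a smoker has cancer
        System.out.println(posterior(pSmokerGivenCancer, pCancer, pSmoker));  // 0.16
    }
}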

A Bayesian network is a directed acyclic graph; an acyclic graph cannot
cycle back to previous conditions. Its nodes (occurrences) contain the possible
outcomes and tables of the probabilities of each, given the inputs to the node.
The connecting edges carry the effects of occurrences on one another. The
probabilities of all outcomes at a node must total 100%, and all occurrences must be
accounted for. A node must be conditionally independent of any subset of
nodes that are not descendants of it; this reduces the number of possibilities
that must be calculated for each node.
There are three commonly used patterns of inference in Bayes networks:
top-down, which uses a chain rule to add up probabilities; bottom-up, which uses
Bayes' rule; and a hybrid of the two. All of these use recursion in their algorithms,
making them computationally intensive.
Children of a parent node can be independent of each other, none of them
contributing to the probabilities of another. In that case the parent is said to
d-separate them. This can be used to cut down the number of calculations
needed to work through the net.
The network is trained by seeding it with the likely probabilities. When
something new happens the probabilities are re-evaluated. This causes all the
probabilities to be re-calculated (remember, they must total 100%). The network
structure must also be re-determined; often this can be done before training
occurs. Hidden nodes can sometimes help reduce the size of the network.

Chapter 5

Reasoning Programs and Common Sense

5.1 Common Sense and Reasoning Programs


Programs that have common sense are among the earliest attempts at artificial
intelligence. The problem with commonsense programs is that there is no easy way
to define the knowledge in a form the computer can use without the computer
understanding language. Little success was made here, but the descendants of
commonsense programs, expert systems, have been very successful at problem
solving of the type people learn in college, like engineering, law and medical
diagnosis. From these studies came the computer languages Lisp and Prolog.
John McCarthy's paper on common sense shaped the artificial intelligence
movement for several decades. The Advice Taker is a program that was to have
common sense. John McCarthy states that 'A program has common sense if
it automatically deduces for itself a sufficiently wide class of immediate consequences
of anything it is told and what it already knows'. So the goal is for the
Advice Taker to be able to discover simple abstractions. McCarthy defines proof
of a program having common sense to be: that all behaviors be representable
in the system; interesting changes in behavior must be expressible in a simple
way; the system must understand the concept of partial success; and the system
must be able to create subroutines to be used in procedures.
Another attempt at a general problem-solving program following this type of
reasoning is the "General Problem Solver" [Newell, Shaw, Simon, Ernst]. The
program was given objects and operators with which to compare objects, take
actions, and develop situations. An initial situation and a goal are given to
the program and the program determines how to get from here to there. Three
basic steps are taken to do this:
- evaluate the difference between the current situation and the goal state
- find an operator that lowers the difference between the current and goal state
- if it is legal to apply this operator, do so; otherwise determine a situation that
can use that operator and set it as a short-term goal.
Reasoning programs are the next step in this line of research. They will
need to be able to sense and/or find information about their environment, to
prove whether or not solutions exist to the problems given to them. They will need to
be able to reason out steps from initial situations to goals. But they will need
a language to define objects, goals, operators, logic, temporal relations, states of
being and other things in order to accomplish this.
Common sense is very different from intelligence or education. Some people
have one, two, or all three of these qualities. Teaching and testing for common
sense has not progressed well with people and will probably not do well with
computers until we have a greater understanding of exactly what common sense
is, how it is acquired and how it can be tested for. One of the problems in
developing these systems is putting common sense into a language that is easily
understood by people and computers. A second major problem has been
representing time and changes that occur over time. Common sense seems to be
learned from doing rather than from being taught, so it may be that agents will gain
common sense about the computer network they exist on, or further down the
line robots may gain a bit of what we consider common sense about our world.

5.2 Knowledge Representation and Predicate Calculus
One of the trickier parts of designing artificial intelligence is how to represent
information to a thing that cannot understand language. A good knowledge
representation language will combine the ease of spoken and written languages,
like English, with the conciseness of a computer language, like C.
Propositional (boolean) Logic
This consists of a syntax, the semantics and rules for deducing new sentences.
Symbols are used to represent facts which may or may not be true. W may
represent the fact that it is windy outside. The symbols are combined with boolean
logic to generate new sentences. Facts can be true or false only. The program
may know a statement is true or false or be uncertain as to its truth or falsity.
Backus-Naur Form (BNF) is the grammar used to put sentences together. The
semantics are just the interpretation of a statement about a proposition. Truth
tables are used to define the semantics of sentences or test the validity of statements.
Truth table for AND:
- T and T = T
- T and F = F
- F and F = F
- F and T = F
First Order Logic (first-order predicate calculus)
This consists of objects, predicates on objects, connectives and quantifiers.
Predicates are the relations between objects, or properties of the objects.
Connectives and quantifiers allow for universal sentences. Relations between objects
can be true or false, as can statements about the objects themselves. The program may not
know whether something is true or false, or may give it a probability of truth or
falsity.
Procedural Representation
This method of knowledge representation encodes facts along with the sequence
of operations for manipulating and processing the facts. This is what expert
systems are based on. Knowledge engineers question experts in a given field and
code this information into a computer program. It works best when experts
follow set procedures for problem solving, e.g. a doctor making a diagnosis.
The most popular of the procedural representations is the declarative. In a
declarative representation the user states facts, rules, and relationships. These
represent pure knowledge, and they are processed with hard-coded procedures.
Relational Representation
Collections of knowledge are stored in table form. This is the method used for
most commercial databases, relational databases. The information is manipulated
with relational calculus using a language like SQL. This is a flexible way
to store information but not good for storing complex relationships. Problems
arise when more than one subject area is attempted: a new knowledge base
has to be built from scratch for each area of expertise.
Hierarchical Representation
This is based on inherited knowledge and the relationships and shared attributes
between objects. It is good for abstracting or granulating knowledge. Java and
C++ are based on this.
Semantic Net
A data graph structure is used, and concrete and abstract knowledge is represented
about a class of problems. Each net is designed to handle a specific
problem. The nodes are the concepts, features or processes. The edges are the
relationships (is a, has a, begins, ends, duration, etc.). The edges are bidirectional;
backward edges are called 'Hyper Types' or 'Back Links'. This allows backward
and forward walking through the net. The reasoning part of the nets includes:
expert systems; blackboard architecture; and a semantic net description of the
problem. These are used for natural language parsing and databases.
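As a minimal sketch of the idea (the concepts and relation labels below are invented for illustration, not taken from a particular system), a semantic net node can be modeled as a named concept with labeled edges stored in both directions so the net can be walked forward and backward:

//Toy semantic net sketch: nodes are concepts, edges are labeled
//relationships stored in both directions ("back:" links) so the net
//can be walked forward and backward.
import java.util.*;

class ConceptNode {
    final String name;
    //relation label -> neighboring concepts reachable by that relation
    final Map<String, List<ConceptNode>> edges = new HashMap<>();

    ConceptNode(String name) { this.name = name; }

    //add a forward edge and the corresponding back link
    void relate(String relation, ConceptNode target) {
        edges.computeIfAbsent(relation, k -> new ArrayList<>()).add(target);
        target.edges.computeIfAbsent("back:" + relation, k -> new ArrayList<>()).add(this);
    }
}

public class SemanticNetDemo {
    public static void main(String[] args) {
        ConceptNode canary = new ConceptNode("canary");
        ConceptNode bird   = new ConceptNode("bird");
        ConceptNode wings  = new ConceptNode("wings");
        canary.relate("is a", bird);
        bird.relate("has a", wings);
        System.out.println("canary is a " + canary.edges.get("is a").get(0).name);
        System.out.println("back link from bird: " + bird.edges.get("back:is a").get(0).name);
    }
}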
Predicate Logic and Propositional Logic
Most of the logic done with AI is predicate logic. It is used to represent objects,
functions and relationships. Predicate logic allows representation of complex
facts about things and the world (if A then B). A 'knowledge base' is a set of
facts about the world called 'sentences'. These are put in a form of 'knowledge
representation language'. The program will 'ASK' to get information from the
knowledge base and 'TELL' to put information into the knowledge base. Using
objects, relations between them, and their attributes, almost all knowledge can
be represented. It does not do well deriving new knowledge. The knowledge
representation must take perceptions and turn them into sentences for the program
to be able to use them, and it must take queries and put them into a form
the program can understand.
Frames
Each frame has a name and a set of attribute-value pairs called slots. The
frame is a node in a semantic network. Hybrid frame systems are meant to
overcome serious limitations in current setups. They work much like an object
oriented language: a frame contains an object, its attributes, relationships and
its inherited attributes. This is much like Java classes, where we have a main class
and sub-classes that have attributes, relationships, and methods for use.
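As a brief illustration of that analogy (the frame names, slot names and values are made up for the example), a frame can be sketched as a named node whose slots are attribute-value pairs and whose unfilled slots are inherited from a parent frame:

//Minimal frame sketch: a frame holds slots (attribute-value pairs)
//and an optional parent frame whose slots are inherited when a slot
//is not filled locally.
import java.util.*;

class Frame {
    final String name;
    final Frame parent;                       //inheritance link, may be null
    final Map<String, Object> slots = new HashMap<>();

    Frame(String name, Frame parent) { this.name = name; this.parent = parent; }

    void set(String slot, Object value) { slots.put(slot, value); }

    //look up a slot locally first, then in the inherited frames
    Object get(String slot) {
        if (slots.containsKey(slot)) return slots.get(slot);
        return parent == null ? null : parent.get(slot);
    }
}

public class FrameDemo {
    public static void main(String[] args) {
        Frame bird = new Frame("bird", null);
        bird.set("locomotion", "flies");
        Frame penguin = new Frame("penguin", bird);
        penguin.set("locomotion", "walks");        //overrides the inherited value
        Frame canary = new Frame("canary", bird);
        System.out.println(canary.get("locomotion"));   //flies (inherited)
        System.out.println(penguin.get("locomotion"));  //walks (local)
    }
}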
A logic has a language, inference rules, and semantics. Two logical languages
are propositional calculus and predicate calculus. Propositional calculus, which
is a descendant of boolean algebra, is a language that can express constraints
among objects, values of objects, and inferences about objects and the values of
objects.
The elements of propositional calculus are:
Atoms: the smallest elements
Connectives: or, and, implies, not
Sentences: aka 'well-formed formulas', wffs
The legal wffs:
disjunction: or
conjunction: and
implication: implies
negation: not
Rules of inference are used to produce other wffs:
modus ponens: (x AND (x implies y)) implies y
AND introduction: x, y implies (x AND y)
AND commutativity: (x AND y) implies (y AND x)
AND elimination: (x AND y) implies x
OR introduction: x implies (x OR y)
NOT elimination: NOT (NOT x) implies x
resolution: combining rules of inference into one rule, for example
(x OR y) AND (NOT y OR z) implies (x OR z)
Horn clauses: a clause having at most one positive (true) literal; there are three
types: a single atom (q); an implication or rule (p AND q => r); a set of negative
literals (p AND q =>). These have linear time algorithms.
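As a quick sanity check (not part of the original list), the propositional rules above can be verified by brute force over every truth assignment; the short Java sketch below checks modus ponens and the resolution example this way:

//Verify two propositional inference rules by enumerating every truth
//assignment: a rule is valid if "premises imply conclusion" is true
//in every row of the truth table.
public class RuleCheck {
    static boolean implies(boolean a, boolean b) { return !a || b; }

    public static void main(String[] args) {
        boolean modusPonensValid = true, resolutionValid = true;
        boolean[] tf = { true, false };
        for (boolean x : tf)
            for (boolean y : tf)
                for (boolean z : tf) {
                    //modus ponens: (x AND (x implies y)) implies y
                    modusPonensValid &= implies(x && implies(x, y), y);
                    //resolution: ((x OR y) AND (NOT y OR z)) implies (x OR z)
                    resolutionValid &= implies((x || y) && (!y || z), x || z);
                }
        System.out.println("modus ponens valid: " + modusPonensValid);
        System.out.println("resolution valid:   " + resolutionValid);
    }
}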

Definitions
Semantics: associations of elements of a language with the elements of the domain
Proposition: a statement about an atom; for example, in 'The car is running',
the car is the atom and is running is the proposition
interpretation: the association of the proposition with the atom
denotation: in a given interpretation, the proposition associated with the atom
is the denotation
value: TRUE or FALSE, given to an atom
knowledge base: a collection of propositional calculus statements that are true
in the domain
truth table: a tabular format for representing states

X Y X OR Y
0 0 0
0 1 1
1 0 1
1 1 1
satisfies: a statement is satisfied if it is true under a given interpretation
model: an interpretation that satisfies each statement in a set of statements
validity: a statement that is TRUE under all interpretations
equivalence: statements are equivalent if their truth values are identical under
all interpretations
Examples
DeMorgan's Laws: NOT(x OR y) == (NOT x) AND (NOT y)
NOT(x AND y) == (NOT x) OR (NOT y)
Contrapositive: (x implies y) == (NOT y implies NOT x)
Biconditional: x == y holds exactly when (x implies y) AND (y implies x)
entailment, |=: if a statement x is true under every interpretation for which
each of the sentences in a set Δ has the value TRUE, then Δ |= x
sound: for a set Δ and wff w, Δ |- w implies Δ |= w (everything provable is entailed)
complete: for a set Δ and wff w, Δ |- w whenever Δ |= w (everything entailed can be proved)

propositional satisfiability, aka PSAT: finding a model for the formula that
comprises the conjunction of all the statements in the set Δ
Predicate Calculus takes propositional calculus further by allowing statements
about propositions as well as about objects. This is first-order
predicate calculus.
It contains:
object constants: strings of characters naming terms, e.g. xyz, linda, paris
relation constants: divided by, distance to/from, larger than
function constants: small, big, blue
functional expressions: examples: distance(here, there); xyz
worlds: can have infinite objects, functions on objects, relations over objects
interpretations: map object constants onto objects in the world
quantifiers: can be universal or apply to a selected object or group of objects
Predicate calculus is used to express mathematical theories. It consists of
sentences, inference rules and symbols. First-order predicate calculus symbols
consist of variables about which a statement can be made, logic symbols (and,
or, not, for all, there exists, implies) and punctuation ('(', ')').
If we have a set S in which all of the statements are true then S is a model.
If S implies U then U is true for all models of S and NOT U is false for all
models of S. If we make a set S' which has all of the statements of S plus the
statement NOT U, it has no model, since all statements in a model must be true. S'
is unsatisfiable because there is no way for the statements of S and the statement
NOT U, both of which are in S', to be true at the same time. This is used to
prove formulas in theorem proving: to show that S implies U it is sufficient to show
that S' = S plus NOT U is unsatisfiable.
Resolution and unification
Resolution: prove A true by showing that NOT A leads to a contradiction (is
unsatisfiable). Unification: take two predicate logic sentences and, using substitutions,
make them the same. Unification is the single operation that can be done on data
structures (expression trees) in Prolog. These are the techniques used to process
predicate logic knowledge and they are the basis for Lisp and Prolog.
Resolution is one way to prove unsatisfiability.
- First replace each statement with its clause form equivalent (D implies C
becomes (NOT D) OR C).
- Then move each NOT inward so it applies to individual symbols (NOT(D AND C)
becomes (NOT D) OR (NOT C)).
- Remove all dummy variables so each item is represented by only one symbol
(if G or H can represent dogs, replace all the G's with H's, or all the H's with G's,
so the representation is consistent).
- Using Skolem functions, remove all of the 'there exists' quantifiers (replace
'for all x, for all y, there exists z such that P(x, y, z)' with
'for all x, for all y, P(x, y, g(x, y))', where g is a new Skolem function).
- Last, remove all of the universal 'for all' quantifiers (the remaining variables
are understood to be universally quantified).
Another method for proving unsatisfiability is the Unification Procedure.
This uses substitutions, for example:
- substitution B = {g(z)/x, a/y}
- clause C = P(x, y), Q(b, y)
- applying B to C gives CB = P(g(z), a), Q(b, a)

C is a more general clause than CB; each substitution produces a more specific instance.
- Set L = the empty set (no substitutions).
- Set k = 0.
- If CL contains only one literal, return L as the most general unification of C and stop.
- If CL contains more than one literal, replace all elements of C and L that are
the same with dummy variables.
- Construct the disagreement set for CL (for example, CL = P(g(x), a, f(u, v)), P(u, a, z)
gives the disagreement set {g(x), u}).
- Substitute back so CL becomes P(g(x), a, f(g(x), v)), P(g(x), a, z).
- If there are no more disagreements stop; otherwise repeat the procedure.
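A minimal Java sketch of the substitution idea follows (an assumption-laden toy: arguments are flat tokens, the single letters x, y, z stand for variables, everything else is a constant, and function terms and the occurs check are left out):

//Toy unification sketch for flat predicates. Returns a substitution
//map that makes the two argument lists identical, or null if two
//different constants clash.
import java.util.*;

public class UnifyDemo {

    static boolean isVar(String t) { return t.equals("x") || t.equals("y") || t.equals("z"); }

    static Map<String, String> unify(String[] a, String[] b) {
        if (a.length != b.length) return null;
        Map<String, String> subst = new HashMap<>();
        for (int i = 0; i < a.length; i++) {
            String s = subst.getOrDefault(a[i], a[i]);   //apply bindings found so far
            String t = subst.getOrDefault(b[i], b[i]);
            if (s.equals(t)) continue;
            if (isVar(s)) subst.put(s, t);               //bind the variable
            else if (isVar(t)) subst.put(t, s);
            else return null;                            //two different constants clash
        }
        return subst;
    }

    public static void main(String[] args) {
        //P(x, a) unified with P(b, y) gives the substitution {x=b, y=a}
        System.out.println(unify(new String[]{"x", "a"}, new String[]{"b", "y"}));
        //P(a, a) and P(a, b) clash, so the result is null
        System.out.println(unify(new String[]{"a", "a"}, new String[]{"a", "b"}));
    }
}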
Unsatisfiability proofs are often done with the Binary Resolution Procedure.
This enables construction of clauses that are logically implied by a given set
of clauses. For example, suppose C = {Li} and D = {Mi}, where the Li and Mi
are literals and none of the literals in one clause appear in the other (use dummy
variables if necessary). Let l be a subset of L and m a subset of M. Choose the
most general substitution G so that the resolvent is F = (L - l)G ∪ (M - m)G;
for example the most general G might be {f(x)/y, f(x)/z}. Follow the procedure
above for getting the most general clause. Now the binary resolution procedure
is applied: once all the possible resultant clauses (the set R(S)) are obtained from
the set S, the sets R(i+1)(S) = Ri(S) ∪ R(Ri(S)) are formed. This is continued
until the empty clause is derived (S is unsatisfiable) or time runs out (no proof).

5.3 Knowledge based/Expert systems
There are knowledge based agents and expert systems that reason using rules
of logic. These systems do what an expert in a given field might do: tax
consulting, medical diagnosis, etc. They do well at the type of problem solving
that people go to a university to learn. Usually predicate calculus is used to
work through a given problem. This type of problem solving is known as 'system
inference'. The program should be able to infer relationships, functions between
sets, some type of grammar, and some basic logic skills. The system needs to
have three major properties: soundness, confidence that a conclusion is true;
completeness, the system has the knowledge to be able to reach a conclusion;
and tractability, it is realistic that a conclusion can be reached.
Reasoning is commonly done with if-then rules in expert systems. Rules
are easily manipulated, forward chaining can produce new facts and backward
chaining can check a statement's accuracy. The newer expert systems are set up
so that users who are not programmers can add rules and objects and alter
existing rules and objects. This provides a system that can remain current and
useful without having to have a full time programmer working on it.
There are three main parts to an expert system: the knowledge base, a set of if-then
rules; working memory, a database of facts; and the inference engine, the reasoning
logic that works over the rules and data.
The knowledge base is composed of sentences. Each sentence is a representation
of a fact or facts about the world the agent exists in, or facts the expert
system will use to make determinations. The sentences are in a language known
as the knowledge representation language.
Rule learning for knowledge based and expert systems is done with either
inductive or deductive reasoning. Inductive learning creates new rules that are
not derivable from previous rules about a domain. Deductive learning creates
new rules from existing rules and facts.
Rules are made of antecedent clauses (if), conjunctions (and, or) and consequent
clauses (then). A rule in which all antecedent clauses are true is ready to
fire, or triggered. Rules are generally named for ease of use and usually have a
confidence index. The confidence index (certainty factor) shows how true something
is, i.e. 100% a car has four wheels, 50% a car has four doors. Sometimes
sensors are also part of the system; they may monitor states in the computer or
the environment. The Rete algorithm is the most efficient of the forward chaining
algorithms.
Reasoning can be done using 'Horn Clauses'; these are first-order predicate
calculus statements that have at most one positive literal. Horn clauses have
linear time algorithms and this allows for a faster method of reasoning
through lots of information. This is usually done with Prolog or Lisp. Clauses
are ordered as such: goal, facts, rules. Rules have one or more negative literals
and one positive literal that can be strung together in conjunctions that imply
a true literal. A fact is a rule that has no negative literals. A list of literals
without a consequent is a goal. The program loops, checking the list in
order; when a resolution is performed, a new loop is begun with that resolution.
If the program resolves its goal, the proof can be given in tree form, an 'and/or
tree'.
Nonmonotonic reasoning is used to fix problems created by a change in information
over time. More information coming in negates a previous conclusion
and a new one needs to be drawn.
A conflict resolution process must be put in place as well to deal with conflicting
information. This can be done by: first come, first served; keeping the most
specific rule; triggering the rule with the most recently changed data; or taking
a rule out of the conflict resolution set once it has fired.
Forward chaining takes the available facts and rules and deduces new facts,
which it then uses to deduce more new facts, or to invoke actions. Forward chaining
can also be done simply by the application of if-then statements. The Rete
algorithm is the most efficient at doing forward chaining right now; it compiles
the rules into a network that it traverses efficiently. This is similar to the
blackboard systems.
Dynamic knowledge bases, known as truth maintenance systems, may be
used. These use a 'spreadline', which is similar to a spreadsheet that will calculate
missing and updated values as other values are entered.

General algorithm, forward chaining (a minimal Java sketch of this loop follows the list):
- load rule base into memory
- load facts into memory
- load initial data into memory
- match rules to data and collect triggered rules
- LOOP
- if conflict resolution done BREAK
- use conflict resolution to resolve conflicts among rules
- fire selected rules
- END LOOP
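The sketch below assumes rules whose antecedents and consequent are plain strings (the plant-care facts and rules are invented for the example), and it reduces conflict resolution to "fire each triggered rule only once":

//Minimal forward chaining sketch: working memory is a set of facts,
//a rule fires when all of its antecedents are present, and firing adds
//its consequent as a new fact. Loop until nothing new can be added.
import java.util.*;

public class ForwardChain {

    static class Rule {
        List<String> antecedents;
        String consequent;
        Rule(List<String> a, String c) { antecedents = a; consequent = c; }
    }

    public static void main(String[] args) {
        List<Rule> rules = Arrays.asList(
            new Rule(Arrays.asList("leaves yellow", "soil wet"), "too much water"),
            new Rule(Arrays.asList("too much water"), "water less often"));
        Set<String> facts = new HashSet<>(Arrays.asList("leaves yellow", "soil wet"));

        boolean changed = true;
        while (changed) {                       //LOOP until no rule adds a new fact
            changed = false;
            for (Rule r : rules) {
                if (facts.containsAll(r.antecedents) && !facts.contains(r.consequent)) {
                    facts.add(r.consequent);    //fire the rule once
                    changed = true;
                }
            }
        }
        System.out.println(facts);              //contains the two derived facts
    }
}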
Backward chaining evaluates a goal and moves backward through the rules
to see if it is true. An example is a medical diagnosis expert system that takes in
information from questions and then returns a diagnosis. Prolog systems are
backward chaining.

General algorithm, backward chaining (a sketch follows the list):
- load rule base into memory
- load facts into memory
- load initial data
- specify a goal
- load rules specific to that goal onto a stack
- LOOP
- if stack is empty BREAK
- pop stack
- WHILE MORE ANTECEDENT CLAUSES
- if antecedent is false, pop stack and NEXT WHILE
- if antecedent is true, fire rule and NEXT WHILE
- if antecedent is unknown, PUSH onto stack (we may later ask the user for more
information about this antecedent)
- END LOOP
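A minimal sketch of the same idea, with invented rules and facts and with recursion standing in for the explicit stack: a goal holds if it is a known fact, or if some rule concludes it and every antecedent of that rule can itself be proved.

//Minimal backward chaining sketch: to prove a goal, either find it in
//the fact base or find a rule whose consequent is the goal and prove
//every antecedent of that rule recursively.
import java.util.*;

public class BackwardChain {

    static class Rule {
        List<String> antecedents;
        String consequent;
        Rule(List<String> a, String c) { antecedents = a; consequent = c; }
    }

    static List<Rule> rules = Arrays.asList(
        new Rule(Arrays.asList("leaves yellow", "soil wet"), "too much water"),
        new Rule(Arrays.asList("too much water"), "needs better drainage"));
    static Set<String> facts = new HashSet<>(Arrays.asList("leaves yellow", "soil wet"));

    static boolean prove(String goal) {
        if (facts.contains(goal)) return true;          //goal is a known fact
        for (Rule r : rules) {
            if (r.consequent.equals(goal)) {
                boolean all = true;
                for (String a : r.antecedents) all = all && prove(a);
                if (all) return true;                   //rule fires, goal proved
            }
        }
        return false;                                   //no rule establishes the goal
    }

    public static void main(String[] args) {
        System.out.println(prove("needs better drainage"));   //true
        System.out.println(prove("insects"));                 //false
    }
}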
The Rete algorithm is considered to be the best algorithm for forward chaining
expert systems. It is the fastest, but it also requires a lot of memory. It exploits
temporal redundancy (rules only alter a few facts at a time) and structural
similarity in the left hand sides of rules to do so.
The Rete is an acyclic graph that has a root node. The nodes are patterns
and the paths are the left hand sides of the rules. The root node has one kind
node attached to it for each kind of fact. Each kind node has one alpha node
attached to it for each rule and pattern. The alpha nodes have associated
memories which describe relationships. Each rule has two beta nodes; the left
part comes from alpha(i) and the right from alpha(i+1). Each beta node stores the
JOIN relationships. Changes to facts are entered at the root and propagated
through the graph.
Knowledge based agents loop through two main functions: one is to sense
the world and TELL the knowledge base what it senses; the second is to ASK what it
should do about what it senses, which it then does. An agent can be constructed
by giving it all the sentences it will need to perform its functions. An agent can
also be constructed with a learning mechanism that takes perceptions about the
environment and turns them into sentences that it adds to the knowledge base.

5.3.1 Perl Reasoning Program 'The Plant Dr.'
The Plant Doctor. It gets user input from an HTML form and uses the
plantdr.pl Perl program to figure out the problem, using weighted symptoms and
forward chaining. The database is hard coded in the Perl script since this is a
very small system.

#!/usr/bin/perl

print "Content-type: text/html";


print "\n\n";
print "<html>";
print "<head>";
print "<title>Plant Doctor</title>";
print "<body bgcolor\=#ffffdd>\n";

$method = $ENV{'REQUEST_METHOD'};
if( $method !~ /POST/ ){
print "<p align=center><blink>invalid input method used</blink><p>\n";
print "<p> please use the post method <p>";

exit (0);
}

$bytes = $ENV{'CONTENT_LENGTH'};
read (STDIN, $query, $bytes );

(@variables) = split ('&', $query );

#possible problems
$tooMuchWater = 0;
$tooLittleWater = 0;
$tooMuchSun = 0;
$tooLittleSun = 0;
$tooMuchHumidity = 0;
$tooLittleHumidity = 0;
$tooMuchFertilizer = 0;
$tooLittleFertilizer = 0;
$tooHighTemperature = 0;
$tooLowTemperature = 0;
$extremeChange = 0;

$insects = 0;
$chemicals = 0;

#get input values


for $a_variable (@variables){
( $var_name, $value) = split ( '=', $a_variable);
$form{$var_name} = $value;
}

#apply symptoms to problems


if ($form{'a1'} eq 'a1' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$extremeChange ++;

}
if ($form{'a2'} eq 'a2' ) {
$tooLittleSun ++;
$tooLittleWater ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
$tooLittleHumidity ++;
$tooLowTemperature ++;
}
if ($form{'b1'} eq 'b1' ) {
$tooMuchSun ++;

}
if ($form{'b2'} eq 'b2' ) {
$tooLittleSun ++;
$tooLittleFertilizer ++;
$tooLittleHumidity ++;

}
if ($form{'b3'} eq 'b3' ) {
$tooLittleFertilizer ++;

$tooMuchSun ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
$chemicals ++;

}
if ($form{'b4'} eq 'b4' ) {
$tooLittleSun +=2;

}
if ($form{'b5'} eq 'b5' ) {
$tooMuchWater ++;
}
if ($form{'b6'} eq 'b6' ) {
$tooMuchSun += 2;
}
if ($form{'c1'} eq 'c1' ) {
$tooLittleWater ++;
$tooLittleSun ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
}
if ($form{'d1'} eq 'd1' ) {
$tooMuchSun ++;

}
if ($form{'d2'} eq 'd2' ) {
$tooLittleWater ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$chemicals ++;
$tooLittleHumidity ++;
$tooLowTemperature ++;
}
if ($form{'d3'} eq 'd3' ) {
$tooMuchWater ++;
$tooMuchSun ++;
$tooLittleHumidity ++;

}
if ($form{'d4'} eq 'd4' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$insects ++;
$tooLowTemperature ++;
}
if ($form{'e1'} eq 'e1' ) {
$tooLowTemperature += 2;
$tooLittleWater ++;

}
if ($form{'e2'} eq 'e2' ) {
$tooLittleWater += 2;

}
if ($form{'e3'} eq 'e3' ) {
$tooLittleWater ++;

}
if ($form{'e4'} eq 'e4' ) {
$tooLittleWater ++;

}
if ($form{'e5'} eq 'e5' ) {
$insects += 2;

}
if ($form{'e6'} eq 'e6' ) {
$insects +=2;
}
if ($form{'f1'} eq 'f1' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$tooMuchSun ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;

$tooLittleHumidity ++;
$tooLowTemperature ++;

}
if ($form{'f2'} eq 'f2' ) {
$tooLittleSun ++;
$tooLittleFertilizer ++;

}
if ($form{'f3'} eq 'f3' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$tooLowTemperature ++;

}
if ($form{'f4'} eq 'f4' ) {
$tooLittleSun ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooHighTemperature ++;

}
if ($form{'f5'} eq 'f5' ) {
$tooLittleSun += 2;

}
if ($form{'f6'} eq 'f6' ) {
$tooMuchWater ++;

}
if ($form{'f7'} eq 'f7' ) {
$tooMuchWater ++;
$tooMuchHumidity ++;
}

if ($form{'g1'} eq 'g1' ) {
$tooLittleSun += 4;
$tooHighTemperature ++;

}
if ($form{'g2'} eq 'g2' ) {
$tooLittleSun ++;
$tooHighTemperature ++;

}
if ($form{'g3'} eq 'g3' ) {
$tooLowTemperature ++;
$tooHighTemperature ++;
$tooLittleSun ++;
$tooMuchFertilizer ++;

}
if ($form{'g4'} eq 'g4' ) {
$tooLowTemperature ++;
$tooHighTemperature ++;

}
if ($form{'g5'} eq 'g5' ) {
$tooLittleSun ++;
}
if ($form{'h1'} eq 'h1' ) {
$tooMuchWater ++;

}
if ($form{'h2'} eq 'h2' ) {
$insects += 2;

}
if ($form{'h3'} eq 'h3' ) {
$insects += 2;
}
if ($form{'h4'} eq 'h4' ) {
$tooLittleWater += 2;
}
if ($form{'h5'} eq 'h5' ) {
$tooLittleWater ++;
}
#scores
$tooMuchWater /= 13;

$tooLittleWater /= 11;
$tooMuchSun /= 5;
$tooLittleSun /= 10;
$tooMuchHumidity /= 1;
$tooLittleHumidity /= 2;
$tooMuchFertilizer /= 7;
$tooLittleFertilizer /= 7;
$tooHighTemperature /= 9;
$tooLowTemperature /= 8;
$extremeChange /= 1;
$insects /= 5;
$chemicals /= 2;

if ($tooMuchWater > .07){ $tmw = 1; }


if ($tooLittleWater > .07 ){ $tlw = 1; }
if ($tooMuchSun > .1 ) { $tms = 1; }
if ($tooLittleSun > .07 ) { $tls = 1;}
if ($tooMuchHumidity > .1) { $tmh = 1;}
if ($tooLittleHumidity > .1) { $tlh = 1;}
if ($tooMuchFertilizer > .1) { $tmf = 1;}
if ($tooLittleFertilizer > .1 ) { $tlf = 1; }
if ($tooHighTemperature > .1 ) { $tht = 1; }
if ($tooLowTemperature > .1 ) { $tlt = 1; }
if ($insects > .25 ) { $i = 1;}
if ($extremeChange > .1) {$ex = 1;}

$foundAnswer = 0;

print "\n<br><br><b>Recommendations</b><br><br>";
print "\n<center><table width=500><tr><td>";

print "\n With any plant the best care is that that comes closest";
print " to what it lives like in the wild. Find out as much as ";
print " you can about its native habits and duplicate them as ";
print " best as you can. Always use a well drained pot for your";
print " plant, none of them like to sit in water.";
print "<br><br><br><br>";
print "<br><br><b>Analysis: </b><br>";

$total = $tmw +$tlw +$tms +$tls +$tmh +$tlh +$tmf +$tlf +$tht +$tlt +$i +$ex;

if ( $total == 1) {
if ($tmw){
print "\n<br> You are very likely over watering your plant.";
print " Make sure that the pot the plant is in has ";
print " excellent drainage. If it does then water ";
print " less frequently. Try half as often for a ";
print " start and watch how your plant responds.";
print " Clay pots will hold the water less than ";
print " plastic will, consider using clay pots if ";
print " you are not already doing so.";
$foundAnswer = 1;
}
if ($tlw){
print "\n<br> You are probably underwatering your plant.";
print " Either water more frequently, or if you are ";
print " not likely to water more frequently, then ";
print " re-pot your plant in a soil that will hold ";
print " water longer. Try adding potting soil to ";
print " orchid mulch, or moss for your orchids. ";
print " Add moss or water holding pellets to your ";
print " house plants that are already in soil.";
$foundAnswer = 1;
}
if ($tms){
print "\n<br> It is too bright, this plant wants less light.";
print " Try moving it back from the window a foot and ";
print " see how the plant responds. If that doesn't ";
print " work, try a less sunny window or lace curtains ";
print " to help filter the light. The plants that have ";
print " purple undersides to the leaves usually need less ";
print " light than the plants with all green on their leaves.";

$foundAnswer = 1;
}
if ($tls){
print "\n<br>This plant needs more sun.";
print " Try moving it closer to the window, a foot";
print " makes an enormous difference. If that ";
print " doesn't make a difference try a sunnier ";
print " window, or a lamp if no other window is brighter. ";
print " African Violets will grow quite happily and flower ";
print " with only a table lamp directly over them.";

$foundAnswer = 1;
}
if ($tmh){
print "\n<br>This plant needs a drier atmosphere.";
print " On top of the refrigerator is very dry,";
print " if that is a sunny location. Or next to ";
print " a heater if there is one near a window.";
$foundAnswer = 1;
}
if ($tlh){
print "\n<br>This plant needs higher humididty.";
print " The bathroom and kitchen are the most";
print " humid rooms in the home. If neither of ";
print " those will work, try putting a tray of ";
print " water under the plant. (Make sure the ";
print " plant is above, not in the water). Or ";
print " try a small table top fountain near the ";
print " plant. If the plant is very small, make or ";
print " place it in a terrarium.";
$foundAnswer = 1;
}
if ($tmf){
print "\n<br>Too much fertilizer, cut down the dosage.";
print " Fertilize weakly, weekly, a general rule of thumb ";
print " is to use half of the listed dose on the bottle.";
$foundAnswer = 1;
}
if ($tlf){
print "\n<br>This plant needs fertilizer.";
print " Fertilize weakly, weekly. Find a good ";
print " all purpose fertilizer at your nursery.";
$foundAnswer = 1;
}
if ($tht){
print "\n<br>It is too warm for this plant.";
print " Perhaps closer to a window or ";
print " door will give it cooler air? ";
$foundAnswer = 1;
}
if ($tlt){
print "\n<br>This plant is too cold.";
print " Is this plant outdoors past its ";
print " season? Or too close to a window ";
print " or door if inside?";
$foundAnswer = 1;
}

if ($i){
print "\<br>This plant has bugs.";
print " If they are tiny (spider mites)";
print " or aphids, then just mix a quart";
print " of water with a tablespoon of ";
print " cooking oil and a tablespoon of";
print " liquid dish soap. Spray the infected plant";
print " once a day until the bugs are gone, then";
print " give it a good rinsing off in the sink.";
print " A few days of spraying will cure";
print " most infestations. Otherwise ";
print " head to your local supply store ";
print " for insecticides.";
$foundAnswer = 1;
}
}   # end of the single-problem advice block: if ($total == 1)

if ( $chemicals >= .5) {


print "\n<br><br>";
print "\n<br>Houseplants are sensitive to chemicals in the water, ";
print " especially fluoride. Or you might have a salt or ";
print " fertilizer buildup in the soil, is there a layer of ";
print " white crystals on the pot? If so try re-potting your";
print " plant. If not, your tap water may have too many ";
print " chemicals in it for the plant.";
$foundAnswer = 1;
}

if ( ( $extremeChange > .5 ) && ( !$waterFlag) && (!$sunFlag )


&& (!$temperatureFlag ) && (!$bakeFlag) ){

print "\n<br><br>Did you just bring this plant home, or relocate it?";
print " It sounds like it is unhappy about a recent move.";
print " Ficus in particular is sensitive about moves.";
print " Give it a little time to adjust or if it continues";
print " to be unhappy move it to a better location.";
$foundAnswer = 1;
}

if ( ($tooMuchWater + $tooLittleWater) >1 ){
$waterFlag = 1;
print "\n<br>Are you watering the plant consistently?";
print " Plants do not like to cycle through wet and dry spells.";
$foundAnswer = 1;
}

if (( $tooMuchWater > .5 )||( $tooLittleWater > .5)){


if ( $tooMuchWater > $tooLittleWater ) {
print "\n<br><br>";
print "\n<br>If this is a cactus, the top inch of dirt should";
print " be bone dry before you re-water";
print " If this is an orchid, try re-potting it in bark";
print " Most other house plants want about the top quarter";
print " inch of soil to be dry before re-watering.";
print " Also check the drainage, there may be water remaining";
print " in the pot after you water, rather than running through.";
$foundAnswer = 1;
}else {
print "\n<br><br>";
print "\n<br>That is some thirsty plants you have there";
print " Try re-watering when ever the top inch of ";
print " soil is dry.";
print " If this is an orchid, try watering every day or two";
print " or repot and mix dirt in with the mulch to hold the";
print " water a bit longer.";
print " For other houseplants water more frequently, or ";
print " you can buy additives for the soil that will hold the";
print " water longer.";
$foundAnswer = 1;

}
}

if ( ( $tooHighTemperature + $tooMuchSun + $tooLittleHumidity) > 1) {


$bakeFlag = 1;
print "\n<br><br>Ouch! The plant is baking, less sun, less heat!";
$foundAnswer = 1;
}

if ( ($tooLittleSun + $tooMuchSun) > 1 ){
$sunFlag = 1;
print "\n<br><br>The light is wrong, it may be too short or long or ";
print " too intense or not bright enough";
$foundAnswer = 1;
}

if ( ($tooMuchSun > .5) || ($tooLittleSun > .3) ){


if ($tooMuchSun > $tooLittleSun){
print "\n<br><br>";
print "\n<br>Try a lace curtain or partially close the blind.";
print " You can also try moving the plant back from the window";
print " The light goes down quickly even a foot makes a huge";
print " difference.";
$foundAnswer = 1;
}else {
print "\n<br><br>";
print "\n<br>The bane of the house plant lover, too few sunny ";
print " windows. Move the plant to a brighter window ";
print " if you can, if not try moving it closer to the ";
print " window you have it in, or put a lamp near the plant ";
print " to give it additional light.";
$foundAnswer = 1;
}
}

if ( ($tooLittleHumidity + $tooMuchHumidity) > 1){


$humidityFlag = 1;
print "\n<br><br>Is the plant near a radiator or heat source?";
print " Try relocating it somewhere where the humidity ";
print " is more consistently.";
$foundAnswer = 1;
}

if (( $tooMuchHumidity >= .5 ) || ( $tooLittleHumidity >= .5 )){


if ($tooMuchHumidity > $tooLittleHumidity){
print "\n<br><br>Your plant doesn't like so much dampness.";
print " The kitchen and bathrooms are the most humid";
print " rooms in the house. Try moving it out of either";

print " if it is in one of those rooms. Sometimes a ";
print " sunnier location will help.";
$foundAnswer = 1;
}else {
print "\n<br>Your plant seems to want more humidity. ";
print " Try placing the plant in the kitchen or bathroom";
print " to give it more humidity, or place a tray of water";
print " with pebbles in it to keep the plant out of the ";
print " water under the plant. You can buy table fountains";
print " cheaply now, try placing a fountain near the plant";
print " if you don't like the other options.";
$foundAnswer = 1;
}
}

if ( ($tooMuchFertilizer + $tooLittleFertilizer) > 1 ){


$fertilizerFlag = 1;
"print \n<br><br>Fertilizer is perhaps not consistent? ? ";
$foundAnswer = 1;
}

if ( ($tooMuchFertilizer > .5) || ( $tooLittleFertilizer > .5)){

if ( $tooMuchFertilizer > $tooLittleFertilizer){


print "\n<br><br>";
print " Try using the fertilizer 1/2 strength";
print " each watering";
$foundAnswer = 1;
}else {
print "\n<br><br>";
print " Try using a fertilizer half strength each";
print " watering.";
$foundAnswer = 1;
}
}

if ( ($tooHighTemperature + $tooLowTemperature) > 1 ){


$temperatureFlag = 1;

print "\n<br><br>Is it drafty near the plant?";
print " Orchids are about the only plants that like a draft";
print " and not all the orchids like it!";
$foundAnswer = 1;
}

if ( ($tooHighTemperature > .5) || ($tooLowTemperature > .5)){

if ( $tooHighTemperature > $tooLowTemperature){


print "\n<br><br>Too warm ";
print " Can you move this plant to a cooler location?";
print " if not, try putting a gentle fan on it.";
$foundAnswer = 1;
}else {
print "\n<br><br>Too cool or drafty";
print " Try location less drafty, or warmer.";
$foundAnswer = 1;

}
}

$tooMuchWater *= 100;
$tooLittleWater *= 100;
$tooMuchSun *= 100;
$tooLittleSun *= 100;
$tooMuchHumidity *= 100;
$tooLittleHumidity *= 100;
$tooLittleFertilizer *= 100;
$tooMuchFertilizer *= 100;
$tooHighTemperature *= 100;
$tooLowTemperature *= 100;
$extremeChange *= 100;
$insects *= 100;
$chemicals *= 100;

if ( ! $foundAnswer ){
print "\n<br>I did not find an answer for you. ";
print " Check the following table for likely causes and see if ";
print " the items listed might apply, ";
print " Or go back to the form and see if any ";
print " other conditions might apply and re-submit ";
print " the form.";

print "\n<br><br><br><br>";
print "\n<table>";
print "\n<th>Likely Sources of Problem<br></th>";
if ( $tooMuchWater){
print sprintf "\n<tr><td>Too Much Water (%d%%)</td></tr>", $tooMuchWater;
}
if ( $tooLittleWater ){
print sprintf "\n<tr><td>Too Little Water (%d%%)</td></tr>", $tooLittleWater;
}
if ($tooMuchSun ){
print sprintf "\n<tr><td>Too Much Light (%d%%)</td></tr>", $tooMuchSun;
}
if ($tooLittleSun){
print sprintf "\n<tr><td>Too Little Light (%d%%)</td></tr>", $tooLittleSun;
}
if ($tooMuchHumidity){
print sprintf "\n<tr><td>Too Much Humidity (%d%%)</td></tr>", $tooMuchHumidity;
}
if ($tooLittleHumidity){
print sprintf "\n<tr><td>Too Little Humidity (%d%%)</td></tr>", $tooLittleHumidity;
}
if ($tooMuchFertilizer ){
print sprintf "\n<tr><td>Too Much Fertilizer (%d%%)</td></tr>", $tooMuchFertilizer;
}
if ($tooLittleFertilizer ){
print sprintf "\n<tr><td>Too Little Fertilizer (%d%%)</td></tr>", $tooLittleFertilizer;
}
if ($tooHighTemperature) {
print sprintf "\n<tr><td>Too High Temperature (%d%%)</td></tr>", $tooHighTemperature;
}
if ($tooLowTemperature){
print sprintf "\n<tr><td>Too Low Temperature (%d%%)</td></tr>", $tooLowTemperature;
}
if ( $extremeChange ){
print sprintf "\n<tr><td>Too extreme of a change (%d%%)</td></tr>", $extremeChange;
}
if ($insects ){
print sprintf "\n<tr><td>Insects (%d%%)</td></tr>", $insects;
}
if ($chemicals){
print sprintf "\n<tr><td>Chemicals (%d%%)</td></tr>", $chemicals;
}
print "\n</table>";

}

print "\n</td></tr></table>";

print "\n<p></body>";
print "\n<p></html>";

Chapter 6

Agents, Bots, and Spiders

6.1 Spiders and Bots


'Bot' is short for 'robot' and refers to a software robot. A bot goes out onto
the Internet and pulls back data. Bots are used to handle repetitive tasks, to
index the web for search engines, by games to be your avatar, and to do price
checks and other web searches. A spider is a specialized bot. A spider starts
on a given web page and is restricted to a given domain or set of domains. The
spider traverses the area by collecting links on the page and going from link to
link. Java and Python are the preferred languages for them.
Since computers do not understand language, teaching bots to gather and
sort information is quite a challenge. WebMate uses multiple TF-IDF vectors,
each one in a different domain of interest to the user, as well as 'Trigger Pair'
word searches in documents, and WebMate learns from watching the user. It
runs between the browser and the HTTP server, monitoring transactions. WebMate
learns the classifications rather than having the user select and feed them
to the program. The program learns incrementally and changes as the user's
interests change. The learning algorithm is run whenever the user flags a document
as useful. Information gathering agents may use the SIMS architecture.
Each agent is a specialist in a different subject. These agents use KQML as the
communication language between them, and LOOM as the knowledge representation
language. One agent is created, then others are instantiated to become
experts in different areas of knowledge (flight schedules, hotel locations and
rates, etc.) and a domain (type of database, physical location, etc.). A
network of these agents is then put together in an acyclic graph.

6.1.1 Java Spider to check website links
This program is a Java spider that traverses a website. It starts with a file you
give it, grabs the links from that file and sorts them into links internal to that site
and links external to that site. It creates a list of each, grabs each of the
internal pages, and parses them, pulling out the links and again adding them to
either the internal or external list. It then checks each link and lets you know
which ones are incorrect.

//GUIGetIP.java
//www.timestocome.com
//Fall 2000

//get the ip address and the name of the host machine this program is run on

import java.net.*;

public class GUIGetIP {

public String[] GetIP () throws Exception


{

String out[] = new String[2];

InetAddress host = null;


host = InetAddress.getLocalHost();
byte ip[] = host.getAddress();

out[0] = host.getHostName();

out[1] = "";
for (int i=0; i<ip.length; i++){

if( i>0)out[1] += ".";


out[1] += ip[i] & 0xff ;
}

return out;

}
}

//GUIIPtoName.java
//www.timestocome.com
//Fall 2000

//This converts an IP address to the host name. This seems a bit flakey, I
//understand earlier java versions had trouble with this command as
//well. Sometimes it gives the name, sometimes it just returns the
//ip address.

import java.net.*;

public class GUIIPtoName{

public String iptoName( String host )


{

InetAddress address = null;

try{
address = InetAddress.getByName( host );

}catch( UnknownHostException e){


return ( "Invalid IP or malformed IP");
//System.exit(0);
}

return address.getHostName();
}
}

//GUINsLookup.java
//www.timestocome.com
//lookup an ip address given a host name

import java.net.*;

public class GUINsLookup {

public String guiNslookup( String host)


{

InetAddress address = null;

try{

address = InetAddress.getByName(host);

}catch (UnknownHostException e){

return ("Unknown host");


//System.exit(0);
}

byte[] ip = address.getAddress();
String temp = "";

for (int i=0; i<ip.length; i++){


if( i>0 ) temp += (".");
temp += ((ip[i]) & 0xff);
}

return temp;
}
}

//jpanel.java
//www.timestocome.com
//Fall 2000

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

class Jpanel extends JPanel {

Jpanel ()
{
setBackground( Color.white );
}

public void paintComponent (Graphics g )


{
super.paintComponent( g );

}
}

//LinkInfo.java
//www.timestocome.com
//Fall 2000

//part of the LinkChecker program.


//This class stores the url of the link being
//checked, the file we found this url in and
//the status of this link.

import java.io.*;
import java.net.*;
import java.util.*;

class LinkInfo
{
String fileContainingLink;
String stringLink;
String sourceFile;
String info = "None";
URL link;

//add in other pages external links found on


Vector otherLocations = new Vector();

public LinkInfo( String fcl, String sf, String lu) throws Exception
{
sourceFile = sf;
stringLink = lu;
link = new URL (lu);
fileContainingLink = fcl;
}
public void setInfo (String i)


{
info = i;
}

public void print ()


{

System.out.println( "<>link " + link );
}

public String toString()


{
return stringLink;
}
}

//ListEntry.java
import java.io.*;
import java.net.*;
import java.util.*;

public class ListEntry


{

String source; //html file that link was taken from


URL url; //link to be checked
String notes; //good, bad, error messages

ListEntry( String s, String u)


{
notes = null;
source = s;
try {
url = new URL(u);
}catch (MalformedURLException e){
notes = "Malformed URL Exception";
}
}

public void addNote( String n)


{
this.notes = n;
}
}

//www.timestocome.com
//Fall 2000

//copyright Times to Come


//under the GNU Lesser General Public License
//version 2.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

public class GUILinkCheckerV1 extends JFrame


{

Jpanel mainPanel;
JPanel userPanel;
JPanel outputPanel;
JPanel menuPanel;

static String message = "Welcome to Times to Come Website Tools!" +


"\nChoose an Option from the menu above";
static JTextArea output = new JTextArea ( message, 15, 30);
static int choice = 0;
static JTextField tfInput = new JTextField (30);

public GUILinkCheckerV1 ()
{
super ("Times to Come Link Checker");

Container contentPane = getContentPane();

JLabel lInput = new JLabel ("Your Input: ");


JButton enter = new JButton ("Enter");

mainPanel = new Jpanel();


userPanel = new Jpanel();
outputPanel = new Jpanel();

menuPanel = new Jpanel();

Color b = new Color( 0, 0, 100);

mainPanel.setBorder( BorderFactory.createBevelBorder( 0 , b, Color.gray) );

JMenuBar mbar = createMenu();


setJMenuBar(mbar);

mainPanel.setLayout( new BoxLayout(mainPanel, BoxLayout.Y_AXIS ) );


contentPane.add(mainPanel);

userPanel.add(lInput);
userPanel.add(tfInput);
userPanel.add(enter);
enter.addActionListener(b1);

mainPanel.add(userPanel);

outputPanel.add(output);
mainPanel.add(outputPanel);
}

public static void main( String args[] )


{

final JFrame f = new GUILinkCheckerV1();

f.setBounds( 10, 10, 600, 400 );


f.setVisible( true );
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);

f.addWindowListener( new WindowAdapter() {


public void windowClosed( WindowEvent e){
System.exit(0);
}
});
}
public static JMenuBar createMenu()


{

JMenuBar jmenubar = new JMenuBar();

jmenubar.setUI( jmenubar.getUI() );
JMenu jmenu1 = new JMenu("Options");
JMenu jmenu2 = new JMenu("Help");
JMenu jmenu3 = new JMenu("Quit");

JRadioButtonMenuItem m1 = new
JRadioButtonMenuItem("Get local host information");
m1.addActionListener(a1);

JRadioButtonMenuItem m2 = new
JRadioButtonMenuItem("Convert IP number to domain name");
m2.addActionListener(a2);

JRadioButtonMenuItem m3 = new
JRadioButtonMenuItem("Convert domain name to IP number");
m3.addActionListener(a3);

JRadioButtonMenuItem m4 = new
JRadioButtonMenuItem("Check website for bad links");
m4.addActionListener(a4);

JMenuItem m6 = new JMenuItem("About");


m6.addActionListener(a6);

JMenuItem m7 = new JMenuItem("Exit");


m7.addActionListener(a7);

jmenu1.add(m1);
jmenu1.add(m2);
jmenu1.add(m3);
jmenu1.add(m4);

ButtonGroup group = new ButtonGroup();


group.add(m1);
group.add(m2);
group.add(m3);
group.add(m4);

jmenu2.add(m6);

jmenu3.add(m7);

jmenubar.add(jmenu1);
jmenubar.add(jmenu2);
jmenubar.add(jmenu3);

return jmenubar;
}
static ActionListener a1 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m1 = ( JMenuItem )e.getSource();
choice = 1;
output.setText("\nUse this to get your temporary online IP address and "+
"\nname if your computer does not have a permanent IP number"+
"\n\n\nTo use:"+
"\nJust hit the Enter button now.");

}
};

static ActionListener a2 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m2 = ( JMenuItem )e.getSource();
choice = 2;
output.setText("\nUse this to get a domain name from an IP number."+
"\nThis is useful to find out who is visiting your site"+
"\nor trying to hack into your home machine."+
"\n\nTo use:"+
"\nEnter the number (see sample next line)"+
"\n127.0.0.1"+
"\n in 'Your Input' and then hit the Enter button." +
"\n\n\nIf the original number is returned instead of the"+
"\ndomain name, that means it wasn't found");
}
};

static ActionListener a3 = new ActionListener()

{
public void actionPerformed( ActionEvent e )
{
JMenuItem m3 = ( JMenuItem )e.getSource();
choice = 3;
output.setText("\nThis function gets an IP number from a domain name. "+
"\nI'm not sure it is very useful unless your site IP " +
"\nnumber changes for some reason?"+
"\n\nTo use:"+
"\nEnter the domain name (see sample next line)"+
"\nwww.yoursite.com"+
"\n in 'Your Input' and then hit the Enter button.");
}
};

static ActionListener a4 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m4 = ( JMenuItem )e.getSource();
choice = 4;
output.setText("\nThis will check all of your website links beginning"+
"\nwith the top page. It grabs the links off of that page"+
"\nchecks to see if they are internal to the site, then "+
"\ngrabs all internal pages and checks their links for "+
"\ninternal and external links. "+
"\nIt does not yet grab the links from a frame. For those"+
"\nenter the main framed page and let the link checker run"+
"\nfrom there."+
"\nIt also does not yet check javascript links and plugins"+
"\nI'll add them in later when I have time." +
"\n\nTo use: " +
"\nEnter your site's top page URL (see sample next line)"+
"\nhttp://www.yoursite.com/index.html"+
"\n in 'Your Input' and then hit the Enter button."+
"\n\n\nThis may take a while on a large site. Be patient!");
}
};
static ActionListener a6 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m6 = ( JMenuItem )e.getSource();
output.setText("http://www.timestocome.com"+
"\nFall 2000"+
"\nCopyright Times to Come under GNU Copyleft" );

}
};

static ActionListener a7 = new ActionListener()


{
public void actionPerformed( ActionEvent e )
{
JMenuItem m7 = ( JMenuItem )e.getSource();
output.setText("Thank you . . . " );
System.exit(0);

}
};

//what to do when enter key is hit...


static ActionListener b1 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
switch (choice){

case 0:
output.setText ("\n\nPick an option from the menu first");
break;

case 1:
output.setText ("\n\n Getting your IP address . . .");

try{
GUIGetIP guigetip = new GUIGetIP();
String answer[] = new String[2];
answer = guigetip.GetIP();
output.setText( "\n" + answer[0] + "\n" + answer[1] + "\n");
}catch(Exception e1){}

break;

case 2:
output.setText("\n\n Getting Domain Name from IP address . . .");

if( tfInput.getText().equals("") ){


output.setText ("Enter an IP address above");
}else{
GUIIPtoName guiiptoname = new GUIIPtoName( );
String answer1 = guiiptoname.iptoName( tfInput.getText() );
output.setText ("\n" + answer1 );
}

break;

case 3:
output.setText ("\n\n Performing Name Server lookup . . .");

if( tfInput.getText().equals("") ){


output.setText("\n\nEnter Domain name www.site.com above");
}else{
GUINsLookup guinslookup = new GUINsLookup();
String answer2 = guinslookup.guiNslookup( tfInput.getText() );
output.setText ( "\n" + answer2 );
}
break;

case 4:
output.setText ("\n\n Checking website links . . .");
output.append ("\n\n This may take a while on a large site,");
output.append ("\n\n or one with lots of links.");

if( tfInput.getText().equals("") ){


output.setText ( "\n\nEnter site URL above http://www.site.com ");
}else{

try{
GraphicLinkCheckerV1 glckr = new GraphicLinkCheckerV1();
glckr.Main( tfInput.getText(), output );
}catch(Exception e2){}
}

break;

default:
output.setText ("\n\n I am so confused... " + choice );
break;
}

}
};
}

//www.timestocome.com
//Fall 2000

//copyright Times to Come


//under the GNU Lesser General Public License
//Version 2.1
//see http://www.timestocome.com/copyleft.txt for details

//Program to verify links on website are current.


//Traverses a site using the first page input by a
//user and checks all links on the first page
//and adds them to an internal and external list
//if the page is internal to the web site, then
//it gets checked and links pulled from it and sorted.

//Program doesn't check links that are javascript... opening new windows
// but will pick up a link that isn't and most people who
// use javascript put up the same link with out it
// for those who don't use javascript so this should be ok.
// It does pick up links that are javascript mouse rollovers.

//Program also does not check flash links 'clsid'


//or 'mailto' links or 'ftp' links.

//also does not check framed pages, however,


// the user can begin with the pages called
// by the frameset page instead of the top
// page for the site and check framed sections that way.

//if there is a dns error the program will hang. So does ping and netscape.
//Sorry but I've not yet the time to fix it.

import java.io.*;
import java.net.*;
import java.util.*;
import java.awt.*;

import javax.swing.*;
import java.awt.event.*;

public class GraphicLinkCheckerV1


{

static Vector externalList = new Vector();


static Vector internalList = new Vector();
static Vector toBeCheckedInternalList = new Vector();
static Vector badList = new Vector();
static String top = "";
static int pageCount = 1;
static String sourceFile = "";

//* main loop


public void Main (String beginningPage, JTextArea out )throws Exception
{

//*get top internal link and get page, removing from internal list
//* remove page from url if explicitly stated and end with directory
LinkInfo first = new LinkInfo( "Start page ", beginningPage, beginningPage);

//*internal links are sub-directories of here or in this directory.


top = trimUrl(beginningPage); //save only directory information

//is site/page up and about?


boolean testflag = false;

//*prime main loop with usr input

URLConnection urlconnection = first.link.openConnection();


int contentlength = urlconnection.getContentLength();
parsePage(urlconnection, contentlength);

while ( !(toBeCheckedInternalList.isEmpty()) ) {

//*get top link off internal list


LinkInfo tempLI = (LinkInfo)toBeCheckedInternalList.firstElement();

//*new top link so subdirectory links are properly pieced together


top = trimUrl(tempLI.stringLink);

//give user feed back so we know we are not off lost in cyberspace
pageCount ++;

try{

InputStream testConnection = tempLI.link.openStream();


testflag = true;

}catch(IOException e){

testflag = false;   //page could not be opened; skip the parse step below
toBeCheckedInternalList.removeElementAt(0);
}

if(testflag){

try{
//*get useful link info if page not found or site down
//*and add to info section of LinkInfo and get page and page size

URLConnection urlconn = tempLI.link.openConnection();


int contentlgth = urlconn.getContentLength();

//*sift links out of this page

parsePage(urlconn, contentlgth);

//*remove so we can get next link


//and add to good internal links list
toBeCheckedInternalList.removeElementAt(0);

}catch(IOException e){

//System.out.println("File not found " + tempLI.stringLink);


//add to bad internal list
badList.addElement(tempLI);
toBeCheckedInternalList.removeElementAt(0);

}
}
}//*end while loop over internal pages

//now check links outside our site
//System.out.println("\n\n\nChecking external links: ");
out.setText("\n\n\n Checked " + pageCount + " pages on website\n");

pageCount = 0;
Enumeration e5 = externalList.elements();

//while still links in external list...


while(e5.hasMoreElements() ){

pageCount ++;

LinkInfo tempLinkInfo5 = (LinkInfo)e5.nextElement();

//check if host site is up?


try{

InputStream testConnection = tempLinkInfo5.link.openStream();

}catch(IOException e){

badList.addElement(tempLinkInfo5);
break;
}

}//end enumeration loop

//*print list internal list as 'Pages checked' so user knows


//*what pages were checked on the site
//System.out.println( "\n\n\nWebsite Pages checked");
Enumeration e2 = internalList.elements();

while(e2.hasMoreElements() ){
LinkInfo tempLinkInfo2 = (LinkInfo)e2.nextElement();
}

//print list "bad links"

out.append("\n\n\n Problems in website links (if any)");


Enumeration e3 = badList.elements();

while(e3.hasMoreElements() ){
LinkInfo tempLinkInfo3 = (LinkInfo)e3.nextElement();

out.append("\nBad Link: " + tempLinkInfo3.stringLink +
"\n In Page: " + tempLinkInfo3.fileContainingLink);
}

}// * end main

//*********************************************************
//*********************************************************

//*collect links found in pages on website and sort for checking


public static void parsePage(URLConnection urlconnection, int contentlength)
throws Exception
{

String[] links = new String[1000];


String link = "";
int count = 0;
int character;
String source = urlconnection.getURL().toString();

//*parse page and get links


if (contentlength > 0){

InputStream in = urlconnection.getInputStream();

//*if link external to site add to external list and page it is on


//*if internal add to back of internal link list if not already there
while ( (character = in.read() ) != -1 ){

//*find all links excepting in frames


if( (char)character == '<'){

character = in.read();

if( ( (char)character == 'a') || ( (char)character == 'A') ){


character = in.read();

//*dump href="
while( (char)character != '"'){
character = in.read();
}

character = in.read(); //*skip over first quotation mark
while( (char)character != '"'){
link += (char)character;
character = in.read();
}

links[count] = link;
count++;
link = "";
}

}//>
}//*end while loop find next link
in.close();

//*sort into internal and external and fix up link formatting if nec.
//*ditch mailto, ftp, flash, and javascript links...
for(int i=0; i<count; i++){

String inLink;

//*ditch ftp links and mailto links


if( links[i].startsWith("ftp") || links[i].startsWith("Ftp") ||
links[i].startsWith("FTP") || links[i].startsWith("mail") ||
links[i].startsWith("Mail") || links[i].startsWith("MAIL") ){

//*add to internal links if not already there and add to toBeCheckedInternal


}else if( inOrOut(links[i], top) ){

inLink = fixUp(links[i], top);


links[i] = inLink;

//*ditch javascript links...


boolean javascript = false;
if( (links[i].indexOf("javaS") > 0 ) || (links[i].indexOf("Javas") > 0)
|| (links[i].indexOf("JAVAS") > 0 ) || (links[i].indexOf("JavaS") > 0)
|| (links[i].indexOf("javas") >0 ) ){
javascript = true;
}

//*check for flash links and ditch them...
boolean flash = false;
if( links[i].indexOf("clsid") > 0){
flash = true;
}

//*duplicate checking ditch duplicate internal links


LinkInfo tempLinkInfo = new LinkInfo(source, top, links[i]);
Enumeration e = internalList.elements();
boolean flag = false;

while(e.hasMoreElements()){
LinkInfo tempLinkInfo2 = (LinkInfo)e.nextElement();

if( (tempLinkInfo2.link).equals(tempLinkInfo.link) ){
flag = true;
break;
}

//*do nothing else but add to list

}if( (!flag)&&(!javascript)&&(!flash) ){
internalList.addElement(new LinkInfo(source, top, links[i]));
toBeCheckedInternalList.addElement(new LinkInfo(source, top, links[i]));
flag = false;
}

//*add to external links if not a duplicate


}else{

//check for duplicates and keep running list of


//source file info so user knows where to find
//links that have to be fixed.

LinkInfo tempLinkInfo1 = new LinkInfo(source, top, links[i]);


Enumeration e1 = externalList.elements();
boolean flag1 = false;

while(e1.hasMoreElements() ){

LinkInfo tempLinkInfo2 = (LinkInfo)e1.nextElement();

if( (tempLinkInfo2.link).equals(tempLinkInfo1.link) ){

//System.out.println("Found duplicate external link " + links[i]);
//add duplicate pages for user reference
tempLinkInfo2.otherLocations.addElement(tempLinkInfo1.sourceFile);
flag1 = true;
break;
}

}//*end while loop

if (!flag1){

externalList.addElement( new LinkInfo(source, top, links[i]) );

}
}
}
}//*end if (contentlength > 0)
}//*end parsePage

public static boolean inOrOut (String lk, String base)


{
//*determine if link internal or external (if off of top directory must be internal)
if( lk.startsWith(base) ){
return true;

}else{ //*if not off top directory and begins with http must be external

if( lk.startsWith("http") || lk.startsWith("HTTP") || lk.startsWith("Http")){


return false;

}else { //*anything left must be internal


return true;

}
}
}

public static String fixUp (String lk, String base)


{
String temp = "";

//*patch together internal links if needed before adding to vector
//*does it begin with http? if so ok do nothing and return string
if( lk.startsWith(base) ){
return lk;

//*else attach base url to string


}else{
return (base + lk);
}
}

public static String trimUrl (String lk)


{
//*see if we have a file name at the end of our url or a directory
//*if it is a file name trim file name off of url, so when we
//*attach it to internal urls we don't get confused.

int length = lk.length();

//*first see if we have a file on the end


//*if not just return the original url
if( lk.endsWith(".HTML") || lk.endsWith(".html") || lk.endsWith(".Html")
|| lk.endsWith(".HTM") || lk.endsWith(".htm") || lk.endsWith(".Htm") ){

//*we need to trim string

char temp[] = lk.toCharArray();


char backwards[] = new char[length];
String bw ="";

//*first reverse the string and trim back to first '/'


int j=0;
boolean flag = false;

for( j=(length-1); j>=0; j-- ){

if ((int)temp[j] == 47){
flag = true;
}

if( flag )
bw += temp[j];
}

//*flip trimmed string back around

char temp2[] = bw.toCharArray();
String tempString = "";

for( int k=(bw.length()-1); k>0; k--){


tempString += temp2[k];
}
return tempString + "/";

//*or all is well just return original string


}else{
return lk + "/";
}
}
}

6.2 Adaptive Autonomous Agents
Mobile agents have been in use since the early 1980s, when they were used to
balance loads on homogeneous networks. Telescript, introduced in the early
1990s by General Magic, was the first to be known as a 'Mobile Agent'. Java
and Python are the preferred languages for agents.
Agents are programs that operate with little or no human supervision. In
time they will initiate actions, form goals, construct plans of action, migrate
to different locations and communicate with other agents. They respond to
events and adjust behavior accordingly without human intervention. They will
interact with other agents and with people to accomplish goals. Agents will
continue to exist and remember training and tasks even if the user's computer
crashes or is turned off. If they are well designed, agents will have personality
and, like a good secretary, will intrude only when necessary.
There are different classes of agents depending on the agent's abilities: they
may be static or mobile; react to events or not; work alone or with other agents;
learn or be hardwired; autonomous or not.
Intelligent agents solve several classes of problems: they simplify distributed
computing, information retrieval, and the sorting and classification of data, and they
handle repetitious tasks for the user. Already agents have taken over many tasks users
do not wish to do themselves, like scheduling appointments, answering email,
sorting news group information and getting the current news stories that match
the user's interests. As the agent learns more about its user it will become more
useful to that user.
An agent's ability to solve problems may be in the individual agent, or the
agent may serve as a dumb part of a group that can solve a problem (think of
bees or ants working together). Agents that work as a minor part of a group
form a more stable system and may be able to handle tasks not easily done
by computers. Without a central intelligence the group may grow stronger and
smarter. This type of agent setup may scale up better than individual agents.

6.3 Inter-agent Communication


One of the obstacles in designing agents is finding a common language for them
to use to communicate with each other and with other people's agents. Right now
each designer writes a communication system for her own agents, so they can
only talk to each other, much like people who speak only one language can
converse only with others who speak that same language. Several ideas have been
put forth; the following are the most promising.
KSE (Knowledge Sharing Effort), sponsored by the Advanced Research
Projects Agency, is a project to put together a uniform method for agents to
communicate with each other. KQML is one of the projects being developed under
it. The two main subproblems are translating from one representation language to
another and sharing content among applications. A complete solution must provide
an interaction protocol, a communication language, and a transport
protocol (SMTP, HTTP, ...).
KQML (Knowledge Query and Manipulation Language) uses messages that carry
information about the type of content they are transmitting: assertion, request,
or query. Performatives are the primitives which define the permissible operations
agents may perform in an effort to communicate with each other. KQML uses special
agents called facilitators that handle many tasks: tracking the locations of agents
by specific identity or type of service; tracking the services available from and
needed by agents; acting like post offices by holding, forwarding and receiving
messages for agents; translating between agent communication languages; breaking
complex problems into parts and distributing the tasks to agents that can handle
them; and monitoring the agents.
KQML uses three categories, or levels, for agent communication: content, the
content of the message (text, binary strings, etc.); communication, which covers
the sender, the recipient and message ids; and message, which identifies the
protocol for message transfer and handles encoding and descriptions of the content.
The requirements of KQML are: form, it should be simple, declarative and easily
understood by humans; semantics, it should be familiar, unambiguous and well
grounded in theory; implementation, it needs to be efficient and backward
compatible; networking, it must be platform independent across networks and
support both synchronous and asynchronous messaging; environment, which will be
distributed, dynamic and heterogeneous; reliability, it must be reliable and
secure; and content, the language should be layered, like all networked software.
There are still some difficulties with KQML: ambiguities, vagueness, misdirections,
misclassifications, and some things that are needed but do not yet exist among the
statements known as performatives (statements that work just by declaration).
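
As an illustration of the message format (the agent names, the ontology and the
query below are invented for this example; only the performative and its keyword
parameters come from KQML), a typical performative is written as a Lisp-style
expression:

(ask-one
    :sender      user-agent           ; communication level: who is asking
    :receiver    weather-agent        ; communication level: who should answer
    :reply-with  q1                   ; id used to match the eventual reply
    :language    KIF                  ; language the :content field is written in
    :ontology    weather              ; vocabulary the content terms come from
    :content     (temperature boston ?t))   ; the query itself

A facilitator can route or log a message like this by reading only the outer
parameters; it never needs to understand the :content field.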
KIF (Knowledge Interchange Format) is a particular syntax, similar
to Lisp, for first-order predicate calculus communication between agents. KIF can
be used to translate from one language format to another, and it can also be used
to communicate between agents directly. It is a first-order predicate calculus
written in prefix form. It supports the definition of objects, functions, relations,
rules and meta-knowledge. It is not a programming language. KIF has three main
parts: variables, operators, and constants. It has two types of variables:
individual variables (which begin with ?) and sequence variables (which begin with @).
It has four kinds of operators: term (objects), rule (legal logical inference steps),
sentence (facts), and definition (constants). A form is a sentence, rule or definition.
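
As a small sketch of the notation (the relation names father, mother, human,
mortal and parent are invented for the example; the prefix form, the operators
and the ? variable syntax are KIF's):

(father john mary)                            ; a sentence stating a simple fact
(forall (?x) (=> (human ?x) (mortal ?x)))     ; a quantified sentence over the individual variable ?x
(defrelation parent (?x ?y) :=
    (or (father ?x ?y) (mother ?x ?y)))       ; a definition introducing the constant parent

Because every statement is an ordinary first-order sentence, a receiving agent can
translate it into its own internal representation or simply pass it along unchanged.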
ACL (Agent Communication Language) has three pieces: vocabularies, KIF,
and KQML. The vocabulary is an open-ended dictionary of terms that can
be referenced by agents.
Telescript is an object-oriented remote programming language for use with
mobile agents, developed by General Magic. It has three main parts: a language
for developing agents and environments; an interpreter for the Telescript
language; and communication protocols (TCP/IP). An entire application can be
written in Telescript, but usually a combination of Telescript and C/C++ is
used.
KAoS (Knowledgeable Agent-oriented System) differs from the other inter-agent
communication methods in that it considers not only the message but the
sequence of messages in which it occurs. This enables agents to coordinate
frequently recurring interactions. KAoS makes use of 'agent-oriented programming',
an extension of object-oriented programming, which provides a consistent structure
for the agents and an easier way to do agent programming. Each agent contains
knowledge (facts, beliefs), desires, intentions and capabilities. From birth the
agent goes into a loop of updating its structure and formulating and acting on
intentions, unless it is in a cryogenic state, until its death.
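
The 'update the structure, then formulate and act on intentions' life cycle can
be sketched in Java. This is only an illustration of the idea, not the actual
KAoS framework; every class, field and method name below is invented:

import java.util.ArrayDeque;
import java.util.Queue;

//a toy agent holding the four parts named above: knowledge, desires,
//intentions and capabilities are reduced here to simple queues and methods
public class LoopingAgent
{
    private final Queue<String> inbox = new ArrayDeque<String>();      //incoming messages
    private final Queue<String> intentions = new ArrayDeque<String>(); //actions committed to
    private boolean cryogenic = false;  //a frozen agent skips its loop
    private boolean alive = true;

    public void deliver ( String message ) { inbox.add ( message ); }

    //from 'birth' until 'death' the agent alternates between updating its
    //internal structure from new messages and acting on its intentions
    public void live ()
    {
        while ( alive ){
            if ( !cryogenic ){
                updateStructure();
                formulateAndActOnIntentions();
            }
            if ( inbox.isEmpty() && intentions.isEmpty() ){ alive = false; } //toy stopping rule
        }
    }

    private void updateStructure ()
    {
        //fold each incoming message into the agent's state; here every
        //message simply becomes an intention to reply
        String msg;
        while ( ( msg = inbox.poll() ) != null ){
            intentions.add ( "reply to: " + msg );
        }
    }

    private void formulateAndActOnIntentions ()
    {
        String intention;
        while ( ( intention = intentions.poll() ) != null ){
            System.out.println ( "acting on intention: " + intention ); //stand-in capability
        }
    }

    public static void main ( String[] args )
    {
        LoopingAgent a = new LoopingAgent();
        a.deliver ( "(request :verb get-weather :from user)" );
        a.live();
    }
}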
Communication takes place with messages containing verbs and information.
The messages are structured much like KQML messages. Communication between
agents takes place only within the domain the agents are in; proxy agents
communicate between domains in a given environment, and mediation agents
communicate with outside agents. Instances of agents of particular classes are
created to work in various domains, and specialized agents are created using
inheritance. Domain managers control the entry and exit of agents in a domain,
and matchmaker agents give access to, and information about, the services
available in their domain.

6.3.1 Java Personal Agent

---Agent.java--

//www.timestocome.com
//2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;

public class Agent


{
public static void main ( String[] arg )
{
JFrame frame = new buildFrame();
frame.setBackground( Color.white );
frame.show();
}
}

class buildFrame extends JFrame


{
//screen size
int width;
int height;

public buildFrame()
{

Toolkit tk = Toolkit.getDefaultToolkit();
Dimension d = tk.getScreenSize();

int w = d.width; //screen size


int h = d.height; //screen height

width = 800; //window size, assigned to the class fields rather than shadowing them with locals


height = 800;

String name = "";

try{
FileReader fr = new FileReader ( "data/user.txt" );
BufferedReader br = new BufferedReader ( fr );

br.readLine(); //username
name = br.readLine();

fr.close();
}catch (IOException e){}

setTitle ( name );
setSize ( width, height );
setLocation ( width/3, height/10 );

addWindowListener ( new WindowAdapter()


{ public void windowClosing ( WindowEvent e )
{ System.exit( 0 ); }
});

Container contentPane = getContentPane();

MainPanel bp = new MainPanel( );


bp.setBackground ( Color.white );
contentPane.add ( bp );

}
}

--Conversation.java--

import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
import java.util.*;

public class Conversation


{
Conversation()
{

JFrame frameChat = new createFrame();


frameChat.setBackground( Color.white );
frameChat.show();

}
}

class createFrame extends JFrame implements KeyListener


{

int baseScore = 10;


TextArea computer, user;
String cText = "Hello!"; //computer opener

public createFrame()
{

String name = "Conversation";

setTitle ( name );
setSize ( 400, 400 );

setLocation ( 100, 300 );

addWindowListener ( new WindowAdapter()


{ public void windowClosing ( WindowEvent e )
{ System.exit( 0 ); }
});

Container cp = getContentPane();
cp.setBackground ( Color.white);

computer = new TextArea();


Color computercolor = new Color ( 230, 255, 230);
computer.setBackground ( computercolor );
computer.setText (cText );
computer.setEditable(false);

user = new TextArea();


Color usercolor = new Color ( 230, 230, 255 );
user.setBackground ( usercolor );
user.setText ( "" );

JSplitPane jsp = new JSplitPane( JSplitPane.VERTICAL_SPLIT, computer, user );


cp.add ( jsp);

user.addKeyListener( this );
user.requestFocus();
}

public void keyTyped ( KeyEvent e){

int baseScore = 10, k = 10, t=0;


FileReader fr;
BufferedReader br;
FileWriter fw;
BufferedWriter bw;
String s="", n="";
String in, out;
String responses[] = { "", "", "", "", "", "", "", "", "", "",

"", "", "", "", "", "", "", "", "", "" };

if ( e.getKeyChar() == e.VK_ENTER ){

//grab user entered string


String uText = user.getText();
uText = uText.trim(); //String.trim() returns a new string

if ( uText.indexOf ( "\n" ) > -1 ){


uText = uText.substring ( 1, uText.length() );
}

//post user string in conversation window


computer.append( "\n>>" + uText );

//update file ctext


computerUpdate( uText, cText );

//update file utext


cText = userUpdate( uText, cText );

//post response in conversation window


computer.append ( "\n>>" + cText );

//clear user frame


user.setText ("");

}
}

public void keyPressed ( KeyEvent e){}


public void keyReleased ( KeyEvent e ){}

void computerUpdate( String u, String c )


{

//cleanup white space


u = u.trim(); //String.trim() returns a new string rather than modifying in place
c = c.trim();

//does the file cText exist?
String fileName = "data/" + c;
try{
FileReader fr = new FileReader ( fileName );
BufferedReader br = new BufferedReader ( fr );
//yes
// is string uText listed?
// yes
// add one to uText

boolean flag = false;


int rcount = 0, k = 10;
String in = "", s = "", n = "";
String responses[] = { "", "", "", "", "", "", "", "", "", "",
"", "", "", "", "", "", "", "", "", "" };

while (( in = br.readLine()) != null ){

//break string into response and score


int marker = in.indexOf ( '#' );
if ( marker >= 0 ){
s = in.substring ( 0, marker );
n = in.substring ( marker + 1, in.length());

Integer j = new Integer ( n );


k = j.intValue();

//store responses in an array


responses[rcount] = in;
rcount++;

//we found a match


if ( s.compareTo ( u ) == 0 ){
k++;
responses[rcount-1]= s + "#" + n.valueOf(k);
flag = true;

}//end if we found a match

}//end if ( marker >= 0 )
}//end while reading lines

// no
// add uText with base score

// else create a line with that user string and base score
if ( !flag ) {

responses[rcount] = u + "#" + String.valueOf ( baseScore );
}


//write this to end of list in file

//re-write file with updated data


try {

FileWriter fw = new FileWriter ( "data/" + c );


BufferedWriter bw = new BufferedWriter ( fw );

for ( int i=0; i<=(rcount); i++ ){


bw.write ( responses[i] + "\n" );
}

bw.flush();
fw.close();
}catch ( IOException ioe3){}

fr.close();

//no file does not exist


}catch ( FileNotFoundException e ){
// create the file
try{
FileWriter fw = new FileWriter ( fileName );
BufferedWriter bw = new BufferedWriter ( fw );
// add uText with base score

bw.write( u + "#" + baseScore );


bw.flush();
fw.close();

}catch( IOException e2){}

}catch ( IOException e1 ){}
}

String userUpdate ( String u, String c )
{
String reply = "";

//cleanup white space


u = u.trim(); //String.trim() returns a new string rather than modifying in place
c = c.trim();

//does file uText exist?


String fileName = "data/" + u;
try {
FileReader fr = new FileReader ( fileName );
BufferedReader br = new BufferedReader ( fr );
// yes
// are there any responses?

// yes
// grab random one of top 3 scorers
// set reply to that and return

// read in responses and collect top 3 scores


String topthree[] = {" ", " ", " " };
int topscore[] = { 0, 0, 0 };
int t = 0, k = 0;
String s = "", n = "", in = "";
int linecount = 0;

while (( in = br.readLine()) != null ){

linecount++;
int marker = in.indexOf ( '#' );
s = in.substring ( 0, marker );
n = in.substring ( marker + 1, in.length());
Integer j = new Integer ( n );
k = j.intValue();

for ( int i=0; i<3; i++){


if ( k > topscore[i] ){
topscore[i] = k;
topthree[i] = s;

}
}
}

// randomly pick one of those responses


Random r = new Random();
t = r.nextInt(3); //an index 0-2 into topthree

// set computer string to that and append dialog


if ( linecount >= 3){

reply = topthree[t];
}else if ( linecount >1 ){
reply = topthree[1];
}else{
reply = topthree[0];
}

fr.close();

// no file does not exist


}catch( FileNotFoundException e){

// create the file


try {
FileWriter fw = new FileWriter ( "data/" + c );
BufferedWriter bw = new BufferedWriter ( fw );

// find nearest match

// are there responses in the file?


// yes
// randomly pick one
// add it to the new file
// set response to it and return

// no responses in file
// randomly pick a file
// randomly grab a response
// add it to new file
// set response to it and return
reply = "I don't know?";

// find a close match to user string in the file list
// randomly pick a string from that file
//break user text into words
String uwords[] = new String[20];
char utemp[] = u.toCharArray();
String utempWord = "";
int uwordCount = 0;

for ( int i=0; i<u.length(); i++){

if ( utemp[i] == '#' ){
uwords[uwordCount] = utempWord;
utempWord = "";
uwordCount++;
i = u.length();

}else if ( utemp[i] != ' ' ){


utempWord += utemp[i];

}else{
uwords[uwordCount] = utempWord;
utempWord = "";
uwordCount ++;
}
}

//read in directory of responses


File dir = new File ("data/");
File dirList[] = dir.listFiles();

//break each file name into words


//*** make the 1024 much larger before final.!!!
String dwords[][] = new String[1024][50];
String dtempWord = "";
int dwordCount = 0;
char dtemp[];
int mLast = 0;
int m = 0;

//for keeping score

int topScore = 0;
int tempScore[] = new int[dirList.length];
for ( int i=0; i< dirList.length; i++){ tempScore[i] = 0;}
int tempLocation = 0;

// now for each file name


//and for each word in the file name

for ( m=0; m<dirList.length; m++){

//so we don't connect last word to first word


if ( m != mLast ){
dwords[m-1][dwordCount] = dtempWord;
dtempWord = "";
dwordCount = 0;
mLast = m;
}

//convert file name to char array


fileName = dirList[m].toString();

//remove 'data/' from file list names


String tempfileName = fileName.substring ( 5, fileName.length() );
dtemp = tempfileName.toCharArray();

//break file name into words


for ( int l=0; l<dtemp.length; l++){
if ( dtemp[l] != ' '){
dtempWord += dtemp[l];
}else {
dwords[m][dwordCount] = dtempWord;
dwordCount++;
dtempWord = "";
}
}

//catch last word of last file


dwords[m-1][dwordCount] = dtempWord;

//now we have a list of user words
//and a list of words in the files
//****fix max count for q!!!
for ( m=0; m<dirList.length; m++){
for ( int p=0; p<uwordCount; p++){
for ( int q=0; q<20; q++){
if (( uwords[p] != null ) && ( dwords[m][q] != null)){
if ( uwords[p].compareTo ( dwords[m][q] ) == 0 ){
tempScore[m]++;
}
}
}

//get high score and which file scores high


if ( tempScore[m] > topScore ){
topScore = tempScore[m];
tempLocation = m;
}

}//end second m loop

//now we have our file tempLocation


//grab the top response from it and return it to the user from the computer

try{
FileReader fr = new FileReader ( dirList[tempLocation] );
BufferedReader br = new BufferedReader ( fr );

// read in responses and pick a random one


String pickstring[] = new String[20];
int count = 0;
String in, n = "";

while (( in = br.readLine()) != null ){

int marker = in.indexOf ( '#' );


String s = in.substring ( 0, marker );
n = in.substring ( marker + 1, in.length());

pickstring[count] = s;
count++;

Random r = new Random();


int t = 0;

if ( count > 0 ){
t = r.nextInt(count);
}else{
t = r.nextInt(dirList.length);
}

reply = pickstring[t];

// add that string with base score to the file just created
String out = c + "#" + n.valueOf ( baseScore );

// create a file with the same name as the user string

try{
fw = new FileWriter ( "data/" + u );
bw = new BufferedWriter ( fw );

bw.write(out);

bw.flush();
// close the file
fw.close();
}catch (IOException e4 ){}

}catch ( IOException x ) { System.out.println ( "IOException " + x ); }

// }catch ( IOException e){}

}catch ( IOException e3) {} //try create new file

}catch ( IOException e1){}

return reply;
}
}

--Fetch2.java
import java.io.*;
import java.net.*;
import java.util.Date;
import java.util.*;
import java.text.*;

class Fetch2 extends Thread


{

URL url;
File filename;
int score;
String description;
String words[] = new String[10];
long downloadtime;
long parsetime;

Fetch2(URL u, String[] w){


url = u;
words = w;
}

public void run()


{

downloadtime = System.currentTimeMillis();

//grab the file from the net and save it to disk


try {

HttpURLConnection uc = (HttpURLConnection) url.openConnection();


String response = uc.getResponseMessage();
uc.setInstanceFollowRedirects ( false );

String data = "";


int c;

if ( response.compareTo ( "OK") == 0 ) {
InputStream in = uc.getInputStream();

while ( ( c = in.read() ) != -1 ){
data += (char)c;
}

in.close();
uc.disconnect();

//save to disk
String name = url.toString();
name = name.substring ( 7, name.length() );
name = name.replace ( '/', '_' );

File file = new File ( "workSpace/" + name);


FileWriter fw = new FileWriter ( file );
BufferedWriter bw = new BufferedWriter ( fw );

bw.write ( data );
bw.flush();
bw.close();

downloadtime -= System.currentTimeMillis();

parsetime = System.currentTimeMillis();

//score doc and pull interesting links from page


Found f = new Found ( url, file );
int s = f.score ( words );
if ( s > 0 ) {f.pullLinks ();}
f.sortLinks( words );

description = f.docDescription + "\n<br> " + f.getDesc();


score = (int)f.total;
filename = f.file;
URL promisingLinks[] = f.topLinks;

parsetime -= System.currentTimeMillis();

Search.imhome( score, filename, url, description, promisingLinks, downloadtime, parsetime );

}//end if response OK

}catch ( IOException e ){
downloadtime -= System.currentTimeMillis();
Search.imhome( url, downloadtime );
System.out.println ( url + "::" + e );
}

}//end public void run


}//end class Fetch

--Find.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.io.*;
import java.net.*;
import java.util.Date;
import java.text.*;
import java.util.*;

class Find extends Thread


{

//load usr list of terms


//get count of terms

// public static void main ( String args[] )


Find(){}

public void run()
{

int numberOfWords = 0;
int numberOfUrls = 0;
int maxURLS = 512;
int maxWords = 10;
URL urlList[] = new URL[maxURLS];
String wordList[] = new String[maxWords];

Vector docs = new Vector();

//load usr list of starting urls


//get count of urls
int c = 0;

try{
FileReader fr = new FileReader ( "data/news.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

while (( in = br.readLine() ) != null ){


try{
urlList[c] = new URL ( in );
c++;
}catch (MalformedURLException e){ }
}
}catch ( IOException ex ) { }

numberOfUrls = c;

//create list of key words we are hoping to find


try{
FileReader fr = new FileReader ( "data/newskeys.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

c = 0;
while (( in = br.readLine() ) != null ){
wordList[c] = in;
c++;
}

}catch ( IOException ex ) { }

numberOfWords = c;

int pagecount = 0;
int i = 0;
int loop = 0;

//main loop
while (( i < maxURLS ) && ( urlList[i] != null )){ //check the index first so we never read past the end of the array

File newfile = getDocument( urlList[i] ); //download url


Found tempFound = new Found( urlList[i], newfile ); //create object
tempFound.score( wordList ); //see how relevent this page is

pagecount++;
System.out.println ( "pagecount = " + pagecount );
System.out.println ( "page " + urlList[i] );

if ( tempFound.total == 0 ){ //it's junk


newfile.delete();//lets keep things clean

}else{ //its good


tempFound.pullLinks();
tempFound.sortLinks( wordList );

//add top links to list to download

for ( int l=0; l<tempFound.topLinks.length; l++){


if ( tempFound.topLinks[l] != null ){
if ( numberOfUrls < maxURLS ){
urlList[numberOfUrls] = tempFound.topLinks[l];
}
numberOfUrls++; //tack it to the end of the list
}
}

docs.addElement ( tempFound ); //add to vector


}//end else

i++;
}//end url list while

//now sort the list by score


sort ( docs, 0, (docs.size()-1) );

//create an html page for user with info


//vector is sorted low to high do we want all the pages?

//check vector size and grab top 0-20 pages
//create an html page
//grab the url as a link and the top 20 or so words after the <body> tag
//wrap up page

//create file
File resultsFile = new File ( "searchresults.html");
FileWriter fw;

try {
fw = new FileWriter ( resultsFile );
BufferedWriter bw = new BufferedWriter ( fw );

//write header, intro....


String header = new String ( " \n<html><title>Search Results</title><body> ");
bw.write ( header );
int start = 0;
if ( docs.size() >20 ){
start = docs.size() - 20;
}

//reverse the order


for ( int q=(docs.size()-1); q>start; q--){

//grab file from doc


File f = ((Found)docs.elementAt(q)).file;

//send to getDesc
String desc = ((Found)docs.elementAt(q)).getDesc();
//create a link for desc...
String link = "\n\n<a href=\"" + ((Found)docs.elementAt(q)).url +"\">";
bw.write ( "<table border=3 ><tr><td>");
bw.write ( link );

bw.write ( desc );
bw.write ( "<br><br>");

String docD = ((Found)docs.elementAt(q)).docDescription;

bw.write ( docD );
bw.write ( "</td></tr></table><br><br><br>");

}

//write footer
String footer = new String ( "\n</body></html>" );
bw.write ( footer );

//close file
bw.flush();
bw.close();

}catch (IOException e ){}

//how can user recall or save this page? need to add that in here
//pop up window with infor, save button, erase button, close window button
//add user tool to main agent to bring page back up
SearchPanel sp = new SearchPanel();

// need a cleanup routine so news dir doesn't get huge with old stuff
}

//so just what is it we downloaded?

//want to sort on a double -- docs.elementAt(x).total


static Vector sort ( Vector d, int lb, int ub)
{
int j = d.size()/2;

if ( lb < ub ){
j = partition ( d, lb, ub );
sort ( d, lb, j-1 );
sort ( d, j+1, ub );
}

return d;
}

static int partition ( Vector d, int lb, int ub )
{

double a = ((Found)d.elementAt(lb)).total;
Found aFound = (Found)d.elementAt(lb);

int up = ub;
int down = lb;

while ( down < up ){


while (( ((Found)d.elementAt(down)).total <= a ) && (down < ub ))
down++;
while ( ((Found)d.elementAt(up)).total > a )
up--;
if ( down < up ){
//exchange
Found tempD = (Found)d.elementAt(down);
Found tempU = (Found)d.elementAt(up);

d.setElementAt( tempU, down);


d.setElementAt( tempD, up);

}
}
d.setElementAt( (Found)d.elementAt(up), lb);
d.setElementAt ( aFound, up );

return up;
}

//*********************
//if file not found or other error, remover from url list
//so we don't keep trying to download the same bad file
//*********************************************

private static File getDocument( URL u )


{

int c;
String data = "";
File newsFile = new File( "/news/dummy" );

try {

URLConnection urlconnection = u.openConnection();


//System.out.println ( "downloading... " + urlconnection );

new Date ( urlconnection.getLastModified());


int contentlength = urlconnection.getContentLength();

data = "";

if ( contentlength > 0 ){
InputStream in = urlconnection.getInputStream();

while ( ( c = in.read() ) != -1 ){
data += (char)c;
}

in.close();

//dump to file for parsing /etc


String name = u.toString();
name = name.substring( 7, name.length() );
name = name.replace ( '/', '_' );

newsFile = new File ( "news/" + name );


FileWriter fw = new FileWriter ( newsFile );
BufferedWriter bw = new BufferedWriter ( fw);

bw.write( data );
bw.flush();
bw.close();
}

}catch (IOException e ){
System.out.println ( "Error getting news: " + e );
}

return newsFile;

}

}//end class Find

--Found.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.io.*;
import java.net.*;
import java.util.Date;
import java.text.*;

class Found
{

int urlarraysize = 4096;


URL url;
int total = 0;
File file;
String urlDescription[] = new String [urlarraysize];
int linkScore[] = new int[urlarraysize];
URL urlList[] = new URL[urlarraysize];;
String docDescription = "";
int links = 0; //number of links located
URL topLinks[] = new URL[urlarraysize];
int wordsFound = 0;

//new
Found( URL u, File f){

file = f;
url = u;

docDescription = u.toString();

for ( int i=0; i<urlarraysize; i++){


urlList[i] = null;
}
}//end constructor

//score
int score ( String wordList[] )
{

int count = 0;
//int tempArray[] = new int[wordarraysize];
int wordTally[] = new int[ wordList.length ];

// read in file, word by word


try{
FileReader fr = new FileReader( file );
StreamTokenizer st = new StreamTokenizer ( fr );
StreamTokenizer st1 = new StreamTokenizer ( fr );
String in;
String in2;

while ( st.nextToken() != st.TT_EOF){

if ( st.ttype == st.TT_WORD){
in = st.sval;

// if a word matches one on list


for ( int j=0; j< wordList.length; j++){

if ( wordList[j] != null ){
if ( in.compareToIgnoreCase( wordList[j] ) == 0){
total++;

System.out.println ( "found word: " + wordList[j] + " in file: " + file );

//check for other words on list in vicinity


//set to current position
st1 = st;
//move ahead one position
st1.nextToken();
for ( int k=0; k<20; k++){
if ( st1.nextToken() != st1.TT_EOF ){
if ( st1.ttype == st1.TT_WORD ){

in2 = st1.sval;

for ( int l=0; l<wordList.length; l++){


if ( wordList[l] != null ){
if ( in2.compareToIgnoreCase ( wordList[l] ) == 0 ){

total++;
}
}
}
}
}
}

}
}
}
}
}//end while

}catch (IOException e){


System.out.println ( "Error: " + e );
}

return total;
}//end score

//pullLinks
//add in link location.....
void pullLinks()
{

String linkDescription = "";

try{
//read document
FileReader fr = new FileReader ( file );
int c;
String urlName = new String ("http://");
int wordcount = 0;

while ( ( c = fr.read()) != -1 ){
char x = (char) c;

if ( x == ' ' ){
wordcount++;
}

//parse out links <a href ....</a>


if ( x == '<' ){

x = (char) fr.read();
if (( x == 'A' ) || ( x == 'a' )){

x = (char) fr.read();
if ( x == ' ' ){

x = (char) fr.read();
if (( x == 'H' ) || ( x == 'h')){

x = (char) fr.read();
if (( x == 'R' ) || ( x == 'r' )){

x = (char) fr.read();
if (( x == 'E' ) || ( x == 'e' )){

x = (char) fr.read();

if (( x =='F' ) || ( x == 'f')){

x = (char) fr.read();
if ( x == '=' ){

x = (char) fr.read();
if ( x == '"' ){

x = (char ) fr.read();
if ( x == 'h' ){

x = (char) fr.read();
if ( x == 't' ){

x = (char) fr.read();
if ( x == 't' ){

x = (char) fr.read();
if ( x == 'p' ){

x = (char) fr.read();//skip //
x = (char) fr.read();
x = (char) fr.read();

while ( ( c = (char)fr.read()) != '"'){

urlName += (char) c;
}

urlList[links] = new URL ( urlName );

urlName = "http://";

//skip stuff from end of url to '>'

while ( ( c = (char)fr.read()) != '>'){

}
//c = fr.read(); //skip '>'

while ( ( c = (char)fr.read()) != '<' ){
linkDescription += (char)c;

}
urlDescription[links] = ( linkDescription );
linkDescription = "";
links++;
}
}
}
}
}
}
}
}
}
}
}
}
}

}//end while

}catch ( IOException e ){
System.out.println ( e );
}

public void sortLinks( String wordList[] )


{

//for each link we collected


for ( int i=0; i<urlarraysize; i++){
if ( urlDescription[i] != null ){}
}

//number of words in word list
int numberWords = 0;
for ( int i=0; i<wordList.length; i++){
if ( wordList[i] != null ){
numberWords ++;
}
}

//compare link description to user words ( +2 for each word )


for ( int i=0; i<urlarraysize; i++){
for (int j=0; j<numberWords; j++){

//break url description into words


if ( urlDescription[i] != null ){
char p[] = urlDescription[i].toCharArray();
String w[] = new String[20];
String wrd = "";
int r = 0;
for ( int q=0; q<p.length; q++){
if (( p[q] == ' ' ) && ( r<20 )){ //break we got a word

//pull out html terms...


if ( wrd.compareToIgnoreCase("br") == 0){
}else if ( wrd.compareToIgnoreCase("td") == 0 ){
}else if ( wrd.compareToIgnoreCase("table") == 0 ){
}else if ( wrd.compareToIgnoreCase("body") == 0 ){
}else if ( wrd.compareToIgnoreCase("img") == 0 ){
}else if ( wrd.compareToIgnoreCase("src") == 0 ){
}else if ( wrd.compareToIgnoreCase("height" ) == 0){
}else if ( wrd.compareToIgnoreCase("width" ) == 0){
}else if ( wrd.compareToIgnoreCase("col" ) == 0 ){
}else if ( wrd.compareToIgnoreCase("row") == 0 ){
}else if ( wrd.compareToIgnoreCase("form") == 0 ){
}else if ( wrd.compareToIgnoreCase("li") == 0 ){
}else if ( wrd.compareToIgnoreCase("tr") == 0 ){
}else if ( wrd.compareToIgnoreCase("nbsp") == 0 ){
}else if ( wrd.compareToIgnoreCase("class") == 0 ){
}else if ( wrd.compareToIgnoreCase("b") == 0 ){
}else if ( wrd.compareToIgnoreCase("href") == 0){
}else if ( wrd.compareToIgnoreCase("p") == 0 ){

}else{ //add to list

w[r] = wrd;
wrd = "";
r++;
}
}else{
wrd += p[q];
}
}//end loop over the characters of the description

//get number of words


int ct = 0;
for ( int q=0; q<20; q++){
if ( w[q] != null ){
ct++;
}
}
int cnt = 0;
for ( int q=0; q<ct; q++){
if (( w[q] != null ) &&( wordList[j] != null )){
if ( w[q].compareToIgnoreCase( wordList[j] ) == 0){

linkScore[i]++;
}
}
}
}
}
}

//collect links with promise


for ( int i=0; i<urlarraysize; i++){
// if ( linkScore[i] != 0 ){
if (urlList[i] != null ){

topLinks[i] = urlList[i];
//total += linkScore[i];
}
// }

}
}//end sortLinks

//get page title


String getDesc ()
{

String d = "";

FileReader fr;

try {
fr = new FileReader ( file );
int c;

while ( ( c = fr.read()) != -1 ){
char x = (char)c;

if ( x == '<'){
x = (char) fr.read();

//grab doc title


if (( x == 't') || ( x == 'T')){
x = (char) fr.read();
if (( x == 'i') || ( x == 'I')){
x = (char) fr.read();
if (( x == 't')||( x == 'T')){
x = (char) fr.read();
if (( x == 'l')||( x == 'L')){
x = (char) fr.read();
if (( x == 'e') || ( x == 'E')){
x = ( char) fr.read();
if ( x == '>'){

x = (char) fr.read();
for (int q=0; q<100; q++){
if ( x != '<'){
d += x;
x = (char) fr.read();

}
}
}
}
}
}
}
}
}

}//end while

d += "</a>";
return d;

}catch ( IOException e ){ System.out.println ( e );}

return d;
}

}//end class Found

--GetIP.java--

//www.timestocome.com

//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

//this gets your current ip address


//even if you are behind a firewall

//it gets it from http://www.timestocome.com/webtools/getip.shtml


//you can also use http://checkip.dyndns.org
//or you can set up a page to check ip numbers on your own website
//directions are given at http://www.timestocome.com/webtools/ipfinder.html

//this class is written to go with the www.timestocome.com agent

import java.io.*;
import java.net.*;
import java.util.Date;

class GetIP
{

GetIP(){}

public String todaysIP ()


{

URL url;
URLConnection urlconnection;

try {
url = new URL ( "http://www.timestocome.com/webtools/getip.shtml");
urlconnection = url.openConnection();
}catch (MalformedURLException e){
return ( "There is a problem with the URL " + e);
}catch (IOException e1){
return ( "The site can not be reached " + e1);
}

int contentlength = urlconnection.getContentLength();


char parsethis[] = new char[contentlength];

//get ip number from webserver


try{
int i = 0;
if ( contentlength > 0 ) {

InputStream in = urlconnection.getInputStream ();


int c;

while ( (c = in.read() ) != -1 ) {
parsethis[i] = (char) c;
i++;
}

in.close();
}
}catch (IOException e2 ) {
return ( "Unable to get IP number from server " + e2 );
}

//parse out ip address from html


int start = 0, stop = 0;
for ( int j=0; j<contentlength; j++){

if (( parsethis[j] == 'b' ) && ( parsethis[j+1] == 'l') &&


( parsethis[j+2] == 'a') &&
( parsethis[j+3] == 'c') &&
( parsethis[j+4] == 'k') )
start = j+9;

if ( ( parsethis[j] == '/' ) && ( parsethis[j+1] == 'b' ) &&
( parsethis[j+2] == 'o') && ( parsethis[j+3] == 'd') )
stop = j-3;
}

String address = "";


for (int j=start; j<(stop-1); j++){
address += parsethis[j];
}

String in = "";
//save ip to file if changed and notify if changed
try {
FileReader fr = new FileReader ( "data/ip.txt" );
BufferedReader br = new BufferedReader ( fr );

in = br.readLine();
fr.close();

}catch ( IOException e ) { System.out.println ( e );}

if ( address.equals( in ) ){

return ( " IP address is unchanged: " + address );

}else{

try {
FileWriter fw = new FileWriter ( "data/ip.txt" );
BufferedWriter bw = new BufferedWriter ( fw );
fw.write ( address );
fw.flush();
fw.close();
}catch ( IOException e ) {}

return ( " New IP address is: " + address + " old address: " + in );

}
}//end todaysIP
}//end class GetIP

--Joke.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

//to be included with Agent.java


//this class reads a directory of text
//files each with a joke, named 0.txt... someNumber.html
//figures out how many are in the directory
//and randomly feeds one to the agent.

//any jokes may be added in txt format


//just use the next available number

//.....add weights for adjusting jokes


//user likes and pick jokes more likely to
//please user

//keep track of jokes used so we don't use


//the same joke too often, no matter how much
//it is liked
import java.io.*;
import java.util.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.filechooser.*;

public class Joke


{

JFileChooser fc;
JButton openButton;
JButton closeButton;

int result;

//key word list


//table of user and agent ratings by file
File list[]; //list of files in joke directory
File list1[]; //list of file to add to joke directory
String lastFile;
int last; //number of last file
int numberFiles;

Joke()
{
//read directory and get list length/number of jokes
File dir = new File ( "jokes" );
list = dir.listFiles();

String tmp = "";


lastFile = tmp.valueOf(list.length -1) + ".html";
last = list.length - 1;
}

//update table if more/less than last look


//send this info to agent which will store/retrieve
//permanent data for all sub sections

void rating( String wordList[] )


{

//read in list of files


//read directory and get list length/number of jokes
File dir = new File ( "jokes" );
File list[] = dir.listFiles();
numberFiles = list.length;
int numberWords = wordList.length;
int wordTally[] = new int[numberWords];
int tempArray[] = new int[2048];
double score[] = new double[numberFiles];

//create new rating file


//for each file in directory
for ( int i=0; i< numberFiles; i++){

int count = 0;
// read in file, word by word
try{
FileReader fr = new FileReader( list[i] );
StreamTokenizer st = new StreamTokenizer ( fr );
String in;

while ( st.nextToken() != st.TT_EOF){

if ( st.ttype == st.TT_WORD){
in = st.sval;

// if a word matches one on list


for ( int j=0; j<numberWords; j++){
// put corresponding code in rating array
// add one to word tally

if ( in.compareToIgnoreCase( wordList[j] ) == 0){


tempArray[count] = (j+1);
wordTally[j]++;
}
}
}
count++;
}

}catch (IOException e){}

//calculate score for this file


//word count score
double weight = 2.0;
double s = 0.0;
for ( int k=0; k<numberWords; k++){
s += weight * wordTally[k];
weight -= 0.10;
}

//proximity score
weight = 1.0;
int x = 0; //sub total
int y = 0; //total

for ( int k=0; k<2048; k++){
if ( tempArray[k] != 0 ){
x++;
}else{
x = 0;
}

y += x;
}//end proximity loop

score[i] = s + y/count;

//reset wordTally
for ( int k=0; k<numberWords; k++){
wordTally[k] = 0;
}
//reset tempArray
for ( int k=0; k<2048; k++){
tempArray[k] = 0;
}

}//end loop over each joke file

//create sorted file list


File temp = new File ("");
int index = 0;
double small = 0.0;

for ( int i = numberFiles-1; i>0; i--){


small = score[0];
index = 0;

for ( int j=1; j<=i; j++){


if ( score[j] < small ){
small = score[j];
index = j;
temp = list[j];
}
}
score[index] = score[i];

score[i] = small;
list[index] = list[i];
list[i] = temp;
}//end selection sort loop

try{

//write sorted file list to disk


FileWriter fw = new FileWriter( "data/jokeSort.txt" );
BufferedWriter bw = new BufferedWriter ( fw );

for ( int i=0; i<numberFiles; i++){


bw.write( list[i].toString() );
bw.newLine();
}
bw.flush();

bw.close();
}catch ( IOException e ){}
}//end rating

File tellJoke ()
{

try{
//read in sorted file list
FileReader fr = new FileReader ( "data/jokeSort.txt" );
BufferedReader br = new BufferedReader ( fr );

for (int i=0; i<numberFiles; i++){


list[i] = new File ( br.readLine() );
}
}catch ( IOException e ) {}

//randomly pick one that is rated in top half


int number = (int)(Math.random() * last/2);

//remove it from the list so it won't get re used

//and return file handle to agent
return list[number];
}

void addNew ( )
{

//get directory name where new files are stored


fc = new JFileChooser ();
int result = fc.showOpenDialog( null );

if ( result == 1 ){ //do nothing cancelled operation


}else{

fc.setFileSelectionMode ( fc.DIRECTORIES_ONLY );
File dir = fc.getCurrentDirectory();

String testString = "Should I copy the files in " + dir +


" to jokes directory and delete them from " + dir + "?";

//double check file copy-delete


int reply = JOptionPane.showConfirmDialog ((Component)
null, testString,
"Confirm file move/delete",
JOptionPane.YES_NO_OPTION);

if ( reply == JOptionPane.YES_OPTION ){
System.out.println ( "ok");

//read new files into a list


list1 = dir.listFiles();

//move over new files and change the name on the way
//we already have the last file name
for ( int i=0; i<list1.length; i++){

String newname = ( "jokes/" + (last+1+i) + ".html" );

File newfile = new File( newname );
list1[i].renameTo( newfile );
}

}else if ( reply == JOptionPane.NO_OPTION ) {} //do nothing

}
}
}//end class Joke

--JokesList.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;

public class JokesList extends JPanel implements ActionListener


{

int listLength = 10;

private JComboBox jokeskeys = new JComboBox();


private String keys[] = new String[10];

JokesList ()
{

//open list files and read the lists into the arrays
try{
FileReader fr = new FileReader ( "data/jokekeys.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

for ( int i=0; i<listLength; i++){


keys[i] = br.readLine();
jokeskeys.addItem ( keys[i]);
}

fr.close();

}catch ( IOException e ){

keys[0] = "Enter";
keys[1] = "the";
keys[2] = "keywords";
keys[3] = "you";
keys[4] = "wish";
keys[5] = "to";
keys[6] = "search";
keys[7] = "for";
keys[8] = "here";
keys[9] = "";
}

jokeskeys.setEditable(true);
jokeskeys.getEditor().addActionListener ( this );

JPanel listPanel = new JPanel();


listPanel.setBackground ( Color.white );
jokeskeys.setBackground ( Color.white );

listPanel.add ( jokeskeys );
add ( listPanel );
}

public void actionPerformed ( ActionEvent e )


{

String newItem = (String)jokeskeys.getEditor().getItem();


int place = jokeskeys.getSelectedIndex();
keys[place] = newItem;

jokeskeys.removeItemAt ( place );
jokeskeys.insertItemAt ( newItem, place );

//update the file
try {
FileWriter fw = new FileWriter ( "data/jokekeys.txt"); //write back to the file the keywords were read from
BufferedWriter bw = new BufferedWriter ( fw );

for ( int i=0; i<listLength; i++){


bw.write ( keys[i] );
bw.newLine();
}

bw.flush();
fw.close();

}catch ( IOException e1) {}


}
}

--MainPanel.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.text.*;
import javax.swing.event.*;
import java.io.*;
import java.net.*;

class MainPanel extends JPanel implements ActionListener


{

private JButton newsButton;


private JButton helpButton;
private JButton exitButton;

private String message;


private JEditorPane output;
private JPanel outputPanel;

public MainPanel()
{

output = new JEditorPane();

try {
output.setPage ( "file:index.html" );
}catch (IOException e){
System.out.println ( e );
}

outputPanel = new JPanel();


outputPanel.add ( output );

add ( new JScrollPane ( output ) );


outputPanel.setBackground ( Color.white );
add ( outputPanel );

Color buttonColor = new Color ( 204, 204, 255);

newsButton = new JButton ( " Search" );


helpButton = new JButton ( "Help" );
exitButton = new JButton ( "Quit" );

newsButton.setBackground ( buttonColor );
helpButton.setBackground ( buttonColor );
exitButton.setBackground ( buttonColor );

JPanel buttonPanel = new JPanel();

buttonPanel.add ( Box.createRigidArea ( new Dimension ( 112, 55 )));

buttonPanel.add ( helpButton );
buttonPanel.add ( exitButton );

buttonPanel.setBackground ( Color.white );
buttonPanel.setBorder ( BorderFactory.createLineBorder( buttonColor ));
buttonPanel.setLayout ( new BoxLayout ( buttonPanel, BoxLayout.X_AXIS));
buttonPanel.add ( Box.createRigidArea ( new Dimension ( 112, 55 )));

newsButton.addActionListener ( this );
helpButton.addActionListener ( this );
exitButton.addActionListener ( this );

//searches
JPanel listPanel = new JPanel();

JPanel listPanel3 = new JPanel( );


JLabel newsLabel = new JLabel ( "Keywords" );
NewsList nl = new NewsList();
nl.setBackground ( Color.white );
listPanel3.setBackground ( Color.white );
listPanel3.add ( newsLabel );
listPanel3.add ( nl );

JPanel listPanel4 = new JPanel( );


JLabel urlLabel = new JLabel ( " Starting URLS ");
URLList ul = new URLList();
ul.setBackground ( Color.white );
listPanel4.setBackground ( Color.white );
listPanel4.add ( urlLabel);
listPanel4.add ( ul );

listPanel.setBackground ( Color.white );
listPanel.setBorder ( BorderFactory.createLineBorder( buttonColor ));

listPanel.add ( listPanel3 );
listPanel.add ( listPanel4 );
listPanel.add ( newsButton );

add ( buttonPanel );
add ( listPanel );

}

public void actionPerformed ( ActionEvent evt )


{
Object source = evt.getSource();
Color color = Color.white;

if ( source == newsButton ){

output.setText ( "<html><head></head><body><br> ... Searching ... <br>" +


"<br> Depending on the speed of your internet <br> " +
"connection, and how many pages your maximum search<br>" +
"and how fast your computer is, and how much memory <br>" +
"you have this search can take a very long time. " +
"<br><br>Leave the agent running, the search will run " +
"<br> in the backbground. When it is done a page of "+
"<br>results will be stored in the 'results' directory"+
"<br>and the agent will let you know the search is done." +
"<br>2 status bars will show you the progress while the searching"+
"<br>is ongoing. One shows the progress downloading pages from "+
"<br>the internet, one shows the average score of pages downloaded."
);

Thread t = new Search();


t.start();

}else if ( source == helpButton ){

output.setText( "<html><head></head><body><p>" +
"<br>http://www.timestocome.com"+
"<br>Winter 2002-2003"+
"<br>Copyright TimestoCome.com"+
"<br>contact theboss@timestocome.com"+
"<br>for information." +
"<br><br><br>"+
"<br>Be sure to enter your name, email, "+
"zip code and agent name to begin individualizing "+
"your agent" +
"</body></html>" );

}else if ( source == exitButton ){

System.exit(0);
}

setBackground( color );
repaint();
}
}

class LinkFollower implements HyperlinkListener


{
private JEditorPane pane;

public LinkFollower ( JEditorPane pane )


{
this.pane = pane;
}

public void hyperlinkUpdate ( HyperlinkEvent evt )


{
if ( evt.getEventType() == HyperlinkEvent.EventType.ACTIVATED) {
try{
pane.setPage ( evt.getURL() );
}catch (Exception e){
}
}
}
}

--NewsList.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;

public class NewsList extends JPanel implements ActionListener


{

int listLength = 10;

private JComboBox newskeys = new JComboBox();


private String urlKeys[] = new String[10];

NewsList ()
{

//open list files and read the lists into the arrays
try{
FileReader fr = new FileReader ( "data/newskeys.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

for ( int i=0; i<listLength; i++){


urlKeys[i] = br.readLine();
newskeys.addItem ( urlKeys[i]);
}

fr.close();

}catch ( IOException e ){

urlKeys[0] = "Enter";
urlKeys[1] = "the";
urlKeys[2] = "keywords";
urlKeys[3] = "you";
urlKeys[4] = "wish";
urlKeys[5] = "to";
urlKeys[6] = "search";
urlKeys[7] = "for";
urlKeys[8] = "here";
urlKeys[9] = "";
}

newskeys.setEditable(true);
newskeys.getEditor().addActionListener ( this );

JPanel listPanel = new JPanel();

listPanel.setBackground ( Color.white );
newskeys.setBackground ( Color.white );

listPanel.add ( newskeys );
add ( listPanel );
}

public void actionPerformed ( ActionEvent e )


{

String newItem = (String)newskeys.getEditor().getItem();


int place = newskeys.getSelectedIndex();
urlKeys[place] = newItem;

newskeys.removeItemAt ( place );
newskeys.insertItemAt ( newItem, place );

//update the file

try {
FileWriter fw = new FileWriter ( "data/newskeys.txt"); //write back to the file the keywords were read from
BufferedWriter bw = new BufferedWriter ( fw );

for ( int i=0; i<listLength; i++){


bw.write ( urlKeys[i] );
bw.newLine();
}

bw.flush();
fw.close();

}catch ( IOException e1) {}


}
}

--Pages.java--

import java.net.*;
import java.io.*;

class Pages
{

int score;
File filename;
URL url;
String description;

Pages ( int s, File f, URL u, String d )


{
score = s;
filename = f;
url = u;
description = d;
}

int getScore()
{
return score;
}

File getFile()
{
return filename;
}

URL getURL()
{
return url;
}

String getDescription()
{
return description;
}

}//end Pages class

--Progress.java--
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

public class Progress extends JFrame


{

Progress( progress p )
{
addWindowListener ( new WindowAdapter ()
{ public void windowClosed ( WindowEvent e ) {} } );

Container container = getContentPane();


container.add ( p );
}
}

--Search.java--

import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
import java.net.*;
import java.util.*;

public class Search extends Thread


{

int numberOfWords = 0;
int numberOfUrls = 0;
//static int maxURLS = 256;
static int maxURLS = 100;
static int maxWords = 10;
URL urlList[] = new URL[maxURLS];
static URL downloadedURLS[] = new URL[maxURLS];
static String wordList[] = new String[maxWords];
static Vector docs = new Vector();
static int totalPages = 0;
static int threadCount = 0;
static progress p; //page count
static progress p1; //score average
static long pt = 0;
static long dt = 0;
static int tpg = 0;
static int tpb = 0;
static int done = 0;
static int totalScore = 0;
static double averageScore = 0;
static int pageCount = 0;

public void run ( )


{

getUserInput();

//set up a progress bar to give feedback to user


p = new progress( maxURLS );
p1 = new progress ( 100 );

JFrame f = new Progress( p );

f.setTitle ( "Page Count ... " );
f.setSize ( 200, 60 );
f.setBackground ( Color.white );
f.setVisible(true);

JFrame f1 = new Progress( p1 );


f1.setTitle ( "Average Score ... " );
f1.setSize ( 200, 60 );
f1.setBackground ( Color.white );
f1.setVisible(true);

//create several threads based on url list size if


// it grows add threads, remove some as it shrinks
// up to some max number of threads and max number of
// urls to fetch

Thread fetch[] = new Thread[10];

//jump start with usr defined urls


for ( int i=0; i<10; i++){
if ( urlList[i] != null ){

fetch[i] = new Fetch2 ( urlList[i], wordList );


fetch[i].start();
totalPages++;
threadCount++;
downloadedURLS[i] = urlList[i];
}
}
}//end run

static void imhome (int s, File f, URL u, String d, URL links[], long dtime, long ptime )
{

threadCount--;
System.out.println ( "thread count: " + threadCount + " url: " + u );

pt += ptime;
dt += dtime;

tpg++;

p.setValue ( tpg ); //update progress bar


p1.setValue ( (int)averageScore );

Pages pg = new Pages( s, f, u, d); //add a page to the vector

totalScore += s;
pageCount++;
averageScore = totalScore/pageCount;

//check the score


//if its a dud dont' bother with it...
if ( pg.score < averageScore ){
}else{

docs.addElement(pg);

if ( totalPages < maxURLS ) {

Thread fetchl[] = new Thread[links.length];

for ( int i=0; i<links.length; i++){


if ( links[i] != null ){

int duplicateFlag = 0;

//don't re-download the same page


for ( int j=0; j<maxURLS; j++){
if ( downloadedURLS[j] == null){
}else{

String tempA = links[i].toString();


String tempB = downloadedURLS[j].toString();

if ( tempA.compareTo(tempB) == 0 ){
duplicateFlag = 1;
}
}
}

if ( duplicateFlag == 0 ){

fetchl[i] = new Fetch2 ( links[i], wordList );
fetchl[i].start();
totalPages++;

downloadedURLS[totalPages] = links[i];
//need to send this info to user....
p.setValue ( totalPages );
threadCount++;
}

}
}
}

}//end else if pg score < 0

//wait for all the downloads to complete


if (( threadCount <= (maxURLS/20) ) && ( totalPages > ( maxURLS*.90))){ //allow for a few threads that never return
System.out.println ( "time to finish " + threadCount );
finish();
}
}

//download failed
static void imhome ( URL u, long dtime )
{
threadCount--;

tpb++;

if ( threadCount <= 1 ){
System.out.println ( "time to finish " + threadCount);
finish();
}
}

static void finish()


{

if ( done == 1){
return;
}
//when list is empty sort vector and grab a percent or
//number of the highest scoring pages

sort ( docs, 0, (docs.size()-1) );

System.out.println ( "Sorted list");


for ( int i=0; i<docs.size(); i++){
Pages p = (Pages)docs.elementAt(i);
System.out.println ( p.score + ", " + p.url );
}

//create user page


//save this page to the 'results' directory and let user know we are done
//put search info up in window for user.
createPage();

System.out.println ( "Total pages: " + (tpg+tpb));


System.out.println ( "Avg Download time(sec): " + ((-dt/(tpg+tpb))/10000) + ", Avg Parse

//delete all the files in the working directory 'workSpace'


File dir = new File ("workSpace");
File rmList[] = dir.listFiles();
for ( int i=0; i<rmList.length; i++){
rmList[i].delete();
}

done = 1;
}

static void createPage()


{

//create an html page for user with info


//vector is sorted low to high do we want all the pages?
//check vector size and grab top 0-20 pages
//create an html page
//grab the url as a link and the top 20 or so words after the <body> tag
//wrap up page

try {

//unique file name


Date d = new Date();
long t = d.getTime();
Long l = new Long ( t );
String fn = l.toString();

//create file
File resultsFile = new File ( "results/" + fn);
FileWriter fw;

fw = new FileWriter ( resultsFile );


BufferedWriter bw = new BufferedWriter ( fw );

//write header, intro....


String header = new String ( " \n<html><title>Search Results</title><body> ");
bw.write ( header );
int start = 0;
if ( docs.size() >20 ){
start = docs.size() - 20;
}

//reverse the order


for ( int q=(docs.size()-1); q>start; q--){

//grab file from doc


File f = ((Pages)docs.elementAt(q)).filename;

//send to getDesc
String desc = ((Pages)docs.elementAt(q)).description;
//create a link for desc...
String link = "\n\n<a href=\"" + ((Pages)docs.elementAt(q)).url +"\">";
double scr = ((Pages)docs.elementAt(q)).score;

System.out.println ( "link: " + link );
System.out.println ( "desc: " + desc );
System.out.println ( "score: " + scr );

bw.write ( "<table border=3 <tr><td>");


bw.write ( link );
bw.write ( desc );
bw.write ( "</a><br><br>");
bw.write ( "</td></tr></table><br><br><br>");
}

//write footer
String footer = new String ( "\n</body></html>" );
bw.write ( footer );

//close file
bw.flush();
bw.close();

}catch (IOException e ){}

//how can user recall or save this page? need to add that in here
//pop up window with info, save button, erase button, close window button
//add user tool to main agent to bring page back up
// SearchPanel sp = new SearchPanel();
}

void getUserInput()
{

//load usr list of starting urls


//get count of urls

try{
FileReader fr = new FileReader ( "data/news.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

while (( in = br.readLine() ) != null ){

try{

urlList[numberOfUrls] = new URL( in );


numberOfUrls++;

}catch (MalformedURLException e){ }


}
}catch ( IOException ex ) { }

//create list of key words we are hoping to find


try{
FileReader fr = new FileReader ( "data/newskeys.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

while (( in = br.readLine() ) != null ){


wordList[numberOfWords] = in;
numberOfWords++;
}

}catch ( IOException ex ) { }

}//end getUserInput

//want to sort on a double -- docs.elementAt(x).total


static Vector sort ( Vector d, int lb, int ub)
{
int j = d.size()/2;

if ( lb < ub ){
j = partition ( d, lb, ub );
sort ( d, lb, j-1 );
sort ( d, j+1, ub );
}

return d;
}

static int partition ( Vector d, int lb, int ub )
{

double a = ((Pages)d.elementAt(lb)).score;
Pages aFound = (Pages)d.elementAt(lb);

int up = ub;
int down = lb;

while ( down < up ){


while (( ((Pages)d.elementAt(down)).score <= a ) && (down < ub ))
down++;
while ( ((Pages)d.elementAt(up)).score > a )
up--;
if ( down < up ){
//exchange
Pages tempD = (Pages)d.elementAt(down);
Pages tempU = (Pages)d.elementAt(up);

d.setElementAt( tempU, down);


d.setElementAt( tempD, up);

}
}
d.setElementAt( (Pages)d.elementAt(up), lb);
d.setElementAt ( aFound, up );

return up;
}

}//end Search

--SearchPanel.java--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import javax.swing.*;
import javax.swing.event.*;
import javax.swing.text.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
import java.net.*;

class SearchPanel
{
SearchPanel()
{
JFrame sf = new SearchFrame();

sf.setBackground ( Color.white );
sf.show();
}
}

class SearchFrame extends JFrame


{

SearchFrame ()
{

setTitle ( "Search Results");


setSize ( 600, 800 );

setDefaultCloseOperation( DISPOSE_ON_CLOSE);

addWindowListener ( new WindowAdapter()


{
public void windowClosing ( WindowEvent e )
{ //close this frame
}
});

Container scp = getContentPane();


ShowResults sr = new ShowResults();
sr.setBackground ( Color.white );
scp.add( sr );

}
}

class ShowResults extends JPanel implements ActionListener


{

JButton savethis, getold, clean, home;


JEditorPane dataout;
JTextField jtf;

ShowResults(){

//show results html page


dataout = new JEditorPane();
dataout.setEditable( false );
dataout.addHyperlinkListener ( new LinkFollower ( dataout ));
JScrollPane sp = new JScrollPane ( dataout );

try {
dataout.setPage ( "file:searchresults.html" );

}catch (IOException e){System.out.println ( e );}

JPanel dataoutPanel = new JPanel();


dataoutPanel.add ( sp );
dataoutPanel.setBackground ( Color.white );

//text field for same file name


jtf = new JTextField ( " save_search.html " );

Color buttonColor = new Color ( 204, 204, 255 );

//add buttons for user to save this,


//review previous pages
//clean out old data
savethis = new JButton ("Save these Search Results");
savethis.setBackground ( buttonColor );
savethis.addActionListener( this );

getold = new JButton ( "Get previous saved Results");


getold.setBackground ( buttonColor );
getold.addActionListener ( this );

clean = new JButton ( "Clean up" );


clean.setBackground ( buttonColor );
clean.addActionListener ( this );

home = new JButton ( "Back to Search Page" );


home.setBackground ( buttonColor );
home.addActionListener ( this );

JPanel b1Panel = new JPanel();


b1Panel.setLayout ( new BoxLayout ( b1Panel, BoxLayout.X_AXIS));
b1Panel.add ( Box.createRigidArea ( new Dimension ( 77, 30 )));
b1Panel.setBackground ( Color.white );
b1Panel.add ( jtf );
b1Panel.add ( Box.createRigidArea ( new Dimension ( 72, 30 )));

b1Panel.add ( savethis );
b1Panel.add ( Box.createRigidArea ( new Dimension ( 77, 30 )));

JPanel bPanel = new JPanel();


bPanel.setLayout ( new BoxLayout ( bPanel, BoxLayout.X_AXIS));
bPanel.add ( Box.createRigidArea ( new Dimension ( 40, 30 )));

bPanel.setBackground ( Color.white );
bPanel.add ( getold );
bPanel.add ( clean );
bPanel.add ( home );

bPanel.add ( Box.createRigidArea ( new Dimension ( 40, 30 )));

b1Panel.setBorder ( BorderFactory.createLineBorder ( buttonColor ));


bPanel.setBorder ( BorderFactory.createLineBorder ( buttonColor ));
dataoutPanel.setBorder ( BorderFactory.createLineBorder ( buttonColor ));

add ( bPanel );
add ( b1Panel );
add ( dataoutPanel );
}

public void actionPerformed ( ActionEvent e )


{
Object source = e.getSource();

if ( source == savethis ){
//save to search directory
//get name from user and copy-move
//searchresults.html to search/username.html
File old = new File ( "searchresults.html" );
String newfile = "search/" + jtf.getText();
File newf = new File ( newfile );
old.renameTo( newf );
jtf.setText ( "saved file");

}else if ( source == getold ){
//list files in search dir and
//show the one user picks

File searchdir = new File ( "search" );


JFileChooser jfc = new JFileChooser( searchdir );

jfc.addChoosableFileFilter ( new Filter1() );


jfc.showOpenDialog( null );

File f = jfc.getSelectedFile();
try{
dataout.setPage( "File:" + f );
}catch ( IOException ex ){}

}else if ( source == clean ){


//rm all files in news
File d = new File ( "news" );
File list[] = d.listFiles();
for (int l=0; l<list.length; l++){
list[l].delete();
}
//rm searchresults.html
File temp = new File( "searchresults.html" );
temp.delete();
}else if ( source == home ){

try {
dataout.setPage ( "file:searchresults.html" );
}catch (IOException e2){System.out.println ( e2 );}
}
}
}

/*
class LinkFollower implements HyperlinkListener
{
private JEditorPane pane;

public LinkFollower ( JEditorPane pane )


{

this.pane = pane;
}

public void hyperlinkUpdate ( HyperlinkEvent evt )


{
if ( evt.getEventType() == HyperlinkEvent.EventType.ACTIVATED) {
try{
pane.setPage ( evt.getURL() );
}catch (Exception e){
}
}
}
}
*/

//filter file chooser stuff


class Filter1 extends javax.swing.filechooser.FileFilter
{

public boolean accept( File fileobj )


{

String extension = "";

if ( fileobj.getPath().lastIndexOf('.') >0 )
extension =
fileobj.getPath().substring(
fileobj.getPath().lastIndexOf ('.') + 1).toLowerCase();

if ( !extension.equals( "" ) )
return extension.equals ( "html" );
else
return fileobj.isDirectory();
}

public String getDescription()


{
return "HTML files (*.html)";
}

}

--URLList.java

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;

public class URLList extends JPanel implements ActionListener


{

int listLength = 10;

private JComboBox URLkeys = new JComboBox();


private String keys[] = new String[10];

URLList ()
{

//open list files and read the lists into the arrays
try{
FileReader fr = new FileReader ( "data/news.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;

for ( int i=0; i<listLength; i++){


keys[i] = br.readLine();
URLkeys.addItem ( keys[i]);
}

fr.close();

}catch ( IOException e ){

keys[0] = "http://www.cnn.com";
keys[1] = "http://www.foxnews.com";
keys[2] = "http://www.drudgereport.com";
keys[3] = "http://www.slashdot.org";
keys[4] = "http://www.boston.com";
keys[5] = "http://www.wired.com";
keys[6] = "http://www.projo.com";
keys[7] = "http://news.bbc.co.uk/";
keys[8] = "http://www.nando.com";
keys[9] = "http://www.timestocome.com/blogs/blogs.html";

//no saved list was found, fall back to the defaults
for ( int i=0; i<listLength; i++){
URLkeys.addItem ( keys[i] );
}
}

URLkeys.setEditable(true);
URLkeys.getEditor().addActionListener ( this );

JPanel listPanel = new JPanel();

listPanel.setBackground ( Color.white );
URLkeys.setBackground ( Color.white );

listPanel.add ( URLkeys );
add ( listPanel );
}

public void actionPerformed ( ActionEvent e )


{

String newItem = (String)URLkeys.getEditor().getItem();


int place = URLkeys.getSelectedIndex();
keys[place] = newItem;

URLkeys.removeItemAt ( place );

URLkeys.insertItemAt ( newItem, place );

//update the file


try {
FileWriter fw = new FileWriter ( "data/keys.txt");
BufferedWriter bw = new BufferedWriter ( fw );

for ( int i=0; i<listLength; i++){


bw.write ( keys[i] );
bw.newLine();
}

bw.flush();
fw.close();

}catch ( IOException e1) {}


}
}

--User.java

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import javax.swing.*;
import javax.swing.event.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;

public class User


{

public User()
{
JFrame f = new userFrame();
f.setBackground( Color.white );
f.show();
}
}

class userFrame extends JFrame implements ActionListener


{

int width = 400;


int height = 120;

JTextArea userName;
JTextArea agentName;
JTextArea zipcode;
JTextArea email;

public userFrame()
{

setTitle ( "User and Agent Information" );


setSize ( width, height );
setLocation ( width/2, height/2 );

addWindowListener( new WindowAdapter(){


public void windowClosed ( WindowEvent e) {}
});

Container cp = getContentPane();

JPanel userPanel = new JPanel( new GridLayout ( 5, 2 ));


userPanel.setBackground ( Color.white );
userPanel.setBorder ( BorderFactory.createRaisedBevelBorder() );

String un = "", an = "", zc = "", em = "";

try {
FileReader fr = new FileReader ( "data/user.txt");
BufferedReader br = new BufferedReader ( fr );
String in;

un = br.readLine();
an = br.readLine();
zc = br.readLine();
em = br.readLine();

fr.close();

}catch ( IOException e ) {}

JLabel userNamel = new JLabel ( "Your Name: " );


JLabel agentNamel = new JLabel ( "Agent Name: ");
JLabel zipcodel = new JLabel ( "Your Zip Code: ");
JLabel emaill = new JLabel ( "Your Email Address: ");

userNamel.setBorder ( BorderFactory.createEtchedBorder() );

agentNamel.setBorder ( BorderFactory.createEtchedBorder() );
zipcodel.setBorder ( BorderFactory.createEtchedBorder() );
emaill.setBorder ( BorderFactory.createEtchedBorder() );

userName = new JTextArea( un );


agentName = new JTextArea ( an );
zipcode = new JTextArea( zc );
email = new JTextArea( em );

userName.setBorder ( BorderFactory.createEtchedBorder() );
agentName.setBorder ( BorderFactory.createEtchedBorder() );
zipcode.setBorder ( BorderFactory.createEtchedBorder( ) );
email.setBorder ( BorderFactory.createEtchedBorder() );

JButton done = new JButton ( "Done");


done.addActionListener ( this );

userPanel.add ( userNamel );
userPanel.add ( userName );
userPanel.add ( agentNamel );
userPanel.add ( agentName );
userPanel.add ( zipcodel );
userPanel.add ( zipcode );
userPanel.add ( emaill );
userPanel.add ( email );
userPanel.add ( done );
cp.add( userPanel );
}

public void actionPerformed ( ActionEvent evt)


{

try {
FileWriter fw = new FileWriter ( "data/user.txt" );
BufferedWriter bw = new BufferedWriter ( fw );

bw.write ( userName.getText() );
bw.newLine();
bw.write ( agentName.getText() );
bw.newLine();
bw.write ( zipcode.getText() );
bw.newLine();

bw.write ( email.getText() );
bw.newLine();

bw.flush();
bw.close();

}catch ( IOException e ){ }

//close window
setVisible ( false );
}
}
--Weather.java

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

import java.io.*;
import java.net.*;
import java.util.Date;

public class Weather


{

URL url;

Weather(String zip)
{

String putURLtogether = "http://www.srh.noaa.gov/zipcity.php?inputstring=" + zip;


try{
url = new URL( putURLtogether );
}catch ( MalformedURLException e ){}
}

String poll ()
{
int c;
String data = "<Html><Head></Head><Body><br><br>";
data += "Brought to you from <a href=\"http://www.noaa.gov\">NOAA</a>";
data += "<br><br><table><tr><td><b>";

try {

URLConnection urlconnection = url.openConnection();


int contentlength = urlconnection.getContentLength();

InputStream in = urlconnection.getInputStream();

//scan the page for the string "7-Day", then for "<td><b>",
//and copy everything up to the closing </table> into data
while ( ( c = in.read() ) != -1 ){
if ( (char)c == '7'){
if ( (char)in.read() == '-'){
if ( (char)in.read() == 'D' ){
if ( (char)in.read() == 'a' ){
if ( (char)in.read() == 'y' ){
while ( ( c = in.read() ) != -1 ){
if ( (char)c == '<'){
if ( (char)in.read() == 't' ){
if ( (char)in.read() == 'd' ){
if ( (char)in.read() == '>' ){
if ( (char)in.read() == '<'){
if ( (char)in.read() == 'b'){
if ( (char)in.read() == '>' ){

while ( (c = in.read()) != -1 ){

data += (char)c;

if ( data.endsWith ( "</table>") ){
break;
}

}
}
}
}
}
}
}
}
}
}
}
}
}
}
}

in.close();

data += "</Body></Html>";

return data;

}catch (IOException e ){
return ( "Error getting weather: " + e );
}
}
}

--progress.java--
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

public class progress extends JPanel


{

JProgressBar jprogressbar = new JProgressBar();

progress( int max )


{
setBackground ( Color.white );
jprogressbar.setMinimum ( 0 );
jprogressbar.setValue ( 0 );
jprogressbar.setMaximum ( max );
jprogressbar.setForeground ( Color.red );
jprogressbar.setBackground ( Color.white );
jprogressbar.setOrientation( JProgressBar.HORIZONTAL );
jprogressbar.setBorder ( BorderFactory.createRaisedBevelBorder());
add( jprogressbar );
}

void setValue( int x)


{
jprogressbar.setValue( x );
}

}
--README--

//www.timestocome.com
//Winter 2002/2003

//copyright Times to Come


//under the GNU Lesser General Public License
//version 0.1
//available for viewing at http://www.timestocome.com/copyleft.txt

This agent is far from done. So far it checks your local weather,
tells you a joke, does deep link searches and gets your ip number.
It will also learn to converse with you. The more you converse with
the agent the better at conversation it will become.

To compile

javac Agent.java

To run
java Agent

If you get out of memory errors try


java -Xmx64M Agent

or
java -Xmx128M Agent

------------------------------------------------
data for the agent is as follows:
for conversations:
store files in a directory named 'data'
the file name should be a sentence
the file data is a list of responses to that sentence, each followed by
a '#' and a score (10 to start). The more a response gets used the higher
its score will be

example:
File name 'How are you?'
File data
I'm fine and you?#12
Excellent#10

for jokes
store jokes in a directory named 'jokes'
The file names should be sequential numbers followed by .html
store each joke as a regular HTML webpage, using the background
and fonts of your choice

example:
File name '1.html'
<html>
<head></head>
<body>
<br>Why did the chicken cross the road?
<br>To get to the other side.
</body>
</html>

Chapter 7

Neural Networks

7.1 Neural Networks


Neural nets are good at doing what computers traditionally do not do well: pattern
recognition. They are good for sorting data, classifying information, speech
recognition, diagnosis, and prediction of non-linear phenomena. Neural nets
are not programmed but learn from examples, either with or without supervised
feedback.
Modeled after the human brain, they give more weight to connections used
frequently and reduce the size (weight) of connections not used. Some neural
nets must be supervised while learning: they are given data to sort and feedback
on whether the data is correctly sorted; forward feed backpropagation networks are
the best understood and most successful of these. Others, such as self organizing
networks, figure things out for themselves.
McCulloch and Pitts, in 1943, proved that networks composed of neurodes
could represent any finite logical expression. In 1949 Hebb defined a method for
updating the weights in neural networks. Kolmogorov's Theorem was published
in the 1950's. It states that any mapping between two sets of numbers can be
done exactly with a three layer network. He did not refer to neural networks
in his paper; this was applied later. His paper also describes how the neural
network is to be constructed. The input layer has one neurode for every input.
These neurodes have a connection to each neurode in the hidden layer. The
hidden layer has (2*n + 1) neurodes, where n is the number of inputs. The hidden
layer sums a set of continuous, real, monotonically increasing functions, like the
sigmoid function. The output layer has one neurode for every output. Rosenblatt,
in the late 1950's, developed the Perceptron ANN (artificial neural network). In the
1960's Cooley and Tukey devised the Fast Fourier Transform algorithm, which
made signal processing with neural networks feasible. Widrow and Hoff then
developed Adaline. 1969 was the year neural networks almost died. The book
published by Minsky and Papert showed that the XOR function could not be
computed by the perceptron, Adaline, and other similar single-layer networks. 1972
brought new interest when Kohonen and Anderson independently published papers
about networks that learned without supervision (self organizing maps, SOM).
Grossberg and Carpenter developed ART (adaptive resonance theory), which learns
without supervision, in the 1970's and 1980's. The late 1970's brought the
Neocognitron, for visual pattern recognition. Hopfield published his influential
paper on recurrent networks in 1982, and Rumelhart and McClelland published
"Parallel Distributed Processing" (PDP), books that described neural networks in
a way that was easy to understand.
If a neural net is too large it will memorize rather than learn. Neural nets
usually are composed of three layers: input, hidden, and output. More layers
can be added, but usually little is gained from doing so. The connections vary
by the network type. Some nets have connections from each node in one layer
to the next, some have backward connections to the previous layer, and some
have connections within the same layer. Neural networks map sets of inputs
to sets of outputs. Learning is what shapes the neural network's surface.
Supervised learning algorithms take inputs and match them to outputs, correcting
the network if the output does not match the desired output. Unsupervised
learning algorithms do not correct the output given by the neural net. The net
is provided with inputs, but not with outputs.
Training data for a neural net should be fairly representative of the actual
data that will be used. All possibilities should be covered and the proportion
of data in each area should match the proportion in the real data. There are
several ways of training neural nets: hard coded weights determined by
experience or mathematical formulas can serve in place of a training algorithm;
supervised training uses input and matching output patterns to let the net set
the weights; graded training only uses input patterns, but the neural net
receives feedback on how accurate its answer is; unsupervised training uses
only input patterns, and whatever output the net produces is accepted as the answer.
Autonomous learning in neural nets is different from other unsupervised
learning systems in that the neural net can learn selectively; it doesn't learn
every pattern input, only those that are 'important'. An autonomous learning
neural net has the following capabilities: it organizes information into categories
without outside input and will reorganize them if it makes sense to do so; it
retrieves information from less than perfect input; it is configured to work in
parallel to keep speed reasonable; the system is always selectively learning;
priorities given to input patterns can change; it can generalize; it has more
memory space than it needs; and it must be able to expand and add to its knowledge
rather than overwriting previously learned knowledge. Of course something this
wonderful should also make your coffee and sort your email for you too.
Simulated annealing is a statistical way to solve optimization problems, like
setting a schedule or wiring a network. Boltzmann networks use this algorithm
to learn. A random solution is chosen and compared to the current best solution
found. The better of the two is kept, and then, depending on the problem, some
random changes are made. The amount of randomness in each loop is decreased
over time, allowing the net to slowly settle into a solution. The randomness helps
to keep the net from settling into a local minimum rather than the global minimum.
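A minimal sketch of that loop in Java, minimizing an arbitrary one-dimensional cost function; the cost function, starting point and cooling schedule here are illustrative assumptions, not part of the text:

import java.util.Random;

// Minimal sketch of a simulated annealing loop.
// The cost function, starting point and cooling schedule are illustrative.
public class AnnealDemo {

    // an arbitrary bumpy cost function with several local minima
    static double cost(double x) {
        return x * x + 3.0 * Math.sin(5.0 * x);
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        double best = 10.0 * rnd.nextDouble() - 5.0;   // random starting solution
        double current = best;
        double temperature = 5.0;                      // amount of randomness

        while (temperature > 0.01) {
            // make a random change whose size shrinks as the temperature drops
            double candidate = current + temperature * (rnd.nextDouble() - 0.5);
            double delta = cost(candidate) - cost(current);

            // keep better solutions, and occasionally keep worse ones
            // so the search can escape local minima
            if (delta < 0 || rnd.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
            }
            if (cost(current) < cost(best)) best = current;

            temperature *= 0.995;                      // slowly decrease the randomness
        }
        System.out.println("best x " + best + " cost " + cost(best));
    }
}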
The Lyapunov function, also known as the energy function, is used to test
for convergence of the neural network. The function decreases as the network
changes and assures stability.

7.2 Hebbian Learning


"When the axon of cell A is near enough to excite a cell B and repeatedly or
persistently takes part in firing it, some growth process or metabolic change
takes place in one or both cells such that A's efficiency, as one of the cells firing
B, is increased." [D.O. Hebb, The Organization of Behavior]
In other words, in a neural net, the connections between neurodes get larger
weights if they are repeatedly used during training.
There are adjustments that have been made to this rule. Weights are
bounded between -1.0 and 1.0. Neurodes that are not used are decreased in
value. Neohebbian learning takes this into consideration. It iteratively computes
each node's connection weights using NewWeight = OldWeight - F * ForgottenWeight
+ N * NewLearningWeight, where F and N are constants between 0 and 1.0, F being
how quickly to forget and N being how quickly to learn.
Differential Hebbian learning adjusts the learning and forgetting in proportion
to the amount of change in weight since the last cycle, which is just the
derivative of the neurode's output over time.
Drive reinforcement theory, developed by Harry Klopf, is a learning system
that modifies differential Hebbian learning. The weight increase depends on the
product of the change in the output signal of the receiving neurode and the
weighted sum of the inputs over time. This allows some temporal learning to
occur in the system. This system is closer to the classical conditioned training
done by Pavlov.
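A minimal sketch of the Neohebbian update above in Java. The values of F and N are assumptions, and the sketch interprets ForgottenWeight as the current weight and NewLearningWeight as the product of the two neurodes' outputs; neither interpretation is spelled out in the text.

// Minimal sketch of a Neohebbian weight update.
// F and N are illustrative constants; the interpretation of the
// forgetting and learning terms is an assumption.
public class NeohebbianDemo {

    static final double F = 0.1;   // how quickly to forget
    static final double N = 0.3;   // how quickly to learn

    // One update step: the weight grows when both neurodes fire together,
    // and decays a little every cycle.
    static double update(double oldWeight, double preOutput, double postOutput) {
        double newWeight = oldWeight - F * oldWeight + N * (preOutput * postOutput);
        // keep the weight bounded between -1.0 and 1.0 as described above
        return Math.max(-1.0, Math.min(1.0, newWeight));
    }

    public static void main(String[] args) {
        double w = 0.0;
        for (int i = 0; i < 10; i++) {
            w = update(w, 1.0, 1.0);     // both cells firing
            System.out.println("cycle " + i + " weight " + w);
        }
    }
}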

7.3 Perceptron
Rosenblatt added the learning law to the McCulloch-Pitts neurode to make the
Perceptron, which is the first of the neural net learning models. The perceptron
has one layer of inputs and one layer of outputs, but only one group of weights.
If data points on a plot are linearly separable (we can draw a straight line
separating points that belong in different categories), then we can use this learning
method to teach the neural net to properly separate the data points.
The McCulloch-Pitts neurode fires a +1 if the neurode's total input (the
sum of each input * its weight, plus some bias) is greater than the set
threshold. If it is less than the set threshold, or if there is any inhibitory input,
a -1 is fired. If the weights are chosen to be 1 for each input and the threshold
is zero, then with the bias chosen to be 0.5 - input*weight the neurode works
as an AND function. If the bias is chosen to be -0.5 the neurode acts as an
OR function. If the bias is chosen to be 0.5 it behaves as a NOT operator. Any
logical function can be created using only AND, OR and NOT gates, so a neural
net can be created with McCulloch-Pitts neurodes to solve any logical function.
We start with a weight vector that has its tail at the origin and its head at a
randomly picked point. Each data point is input to the neurode and it responds with
either a +1 or a -1; if the response is wrong, the input vector multiplied by the
correct output is added to the weight vector. This is done until all data points
are input and the neurode gives the correct output for each point.
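A minimal sketch of this training loop in Java, using an illustrative linearly separable data set (the AND function); the data, the constant bias input and the variable names are assumptions:

// Minimal sketch of the perceptron training loop described above.
// The training data and starting weights are illustrative.
public class PerceptronDemo {

    public static void main(String[] args) {
        // two inputs plus a constant 1.0 bias input, and the desired outputs (+1/-1)
        double[][] x = { {0,0,1}, {0,1,1}, {1,0,1}, {1,1,1} };
        int[] d = { -1, -1, -1, 1 };          // logical AND, linearly separable
        double[] w = { 0.0, 0.0, 0.0 };       // weight vector starts at the origin

        boolean allCorrect = false;
        while (!allCorrect) {
            allCorrect = true;
            for (int p = 0; p < x.length; p++) {
                double sum = 0.0;
                for (int i = 0; i < w.length; i++) sum += w[i] * x[p][i];
                int out = (sum > 0) ? 1 : -1;
                if (out != d[p]) {
                    // wrong answer: add the input times the correct output to the weights
                    for (int i = 0; i < w.length; i++) w[i] += d[p] * x[p][i];
                    allCorrect = false;
                }
            }
        }
        System.out.println("weights: " + w[0] + " " + w[1] + " " + w[2]);
    }
}

Because the AND data is linearly separable, the loop is guaranteed to stop after a finite number of corrections.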
The perceptron fell out of favor since it can only handle linearly separable
functions, which means simple functions like XOR or parity can not be computed
by it. Minsky and Papert published a book, 'Perceptrons', in 1969, that
proved that one and two layer neural nets could not handle many real world
problems, and research in neural nets fell off for about twenty years.
An additional layer and set of weights can enable the perceptron to handle
functions that are not linear. A separate layer is needed for each vertex needed
to separate the function. A 1950's paper by A.N. Kolmogorov contained a proof
that a three layer neural network could perform any mapping exactly between
any two sets of numbers.
Multi layered perceptrons were developed that can handle XOR functions.
Hidden layers are added and they are trained using backpropagation or a similar
training algorithm. Using one layer, linearly separable problems can be solved.
Using two layers, regions can be sorted, and with three layers enclosed regions
can be sorted.

7.4 Adeline Neural Nets


Adaline, the ADAptive LINear Neuron, was developed by Widrow and Hoff in 1959.
It is a classic example of an 'Adaptive Filter Associative Memory Neural Net'
or 'Adaptive Linear Element'. It has only an input layer consisting of a node
for each input and an output layer that has only one node. It can learn to
sort linear input into two groups. Inputs are real numbers between -1..+1. The
neurode forms a weighted sum of all inputs and outputs a +/-1. There is one
input with a weighted synapse for every number in the input vector. It has an
extra input, the 'mentor', used during training, which carries the expected output
for the given input.
Adaline can only separate data into two groups. The data must be linearly
separable. The Adaline's training starts with a straight line drawn anywhere
on the plot provided it intersects the origin. The training effectively rotates
this line until it properly separates the data into the two groups, using the least
mean squares algorithm. The angle of this line is the angle Adaline tests against
the dot product of the input vector and the weight vector. If the angle between
the two vectors is less than this test angle a 1 is output; if it is greater, a 0 is
output.
Dot product: A . B, or Ax*Bx + Ay*By + ..., or |A||B|cos(theta), where theta is
the angle between vector A and vector B (a bold A or B represents the length
of the vector). The weight adjustment is:
[expected response - actual response] * [learning constant (< 1)] *
[Ia^2 + Ib^2 + Ic^2 ...]^(1/2) / [Wa^2 + Wb^2 + Wc^2 ...]^(1/2) * [Ia, Ib, Ic, ...]
(now adjust the weights by the amounts in this vector). The learning constant
must be less than 2 or the network will not stabilize.
Input patterns are used to set the initial weights, during which time the
mentor node is set to +/-1 depending on the desired output. Following that
a training set, different from the initial set, is tried. If the answer is correct we
do nothing. If the answer is not correct the weights are adjusted using the delta
rule.
The delta rule changes the weights in proportion to the amount they are
incorrect. The distance is determined by subtracting the network's actual response
from the expected response; multiply this by a training constant; multiply
by the size and direction of the input pattern vector; and use this information to
determine the change in weight. This is also known as the Least Mean Squared
Rule: ChangeInWeight = 2 * LearningRate * InputNode_j * (DesiredOutput -
ActualOutput)
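A minimal sketch of the delta (LMS) rule above for a single Adaline-style node in Java. The data set, learning rate and epoch count are assumptions, and the linear weighted sum is used as the 'actual output' during training, as is usual for Adaline.

// Minimal sketch of the delta (least mean squared) rule for one Adaline node.
// The data set, learning rate and epoch count are illustrative.
public class AdalineDemo {

    public static void main(String[] args) {
        // two inputs plus a constant bias input; desired outputs are +1 or -1
        double[][] x = { {0,0,1}, {0,1,1}, {1,0,1}, {1,1,1} };
        double[] d = { -1, -1, -1, 1 };       // logical AND, linearly separable
        double[] w = { 0.1, 0.1, 0.1 };
        double learningRate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int p = 0; p < x.length; p++) {
                // the linear weighted sum is the actual output used by the LMS rule
                double sum = 0.0;
                for (int i = 0; i < w.length; i++) sum += w[i] * x[p][i];
                // ChangeInWeight = 2 * LearningRate * Input * (Desired - Actual)
                for (int i = 0; i < w.length; i++)
                    w[i] += 2 * learningRate * x[p][i] * (d[p] - sum);
            }
        }
        // after training, threshold the weighted sum to get the +/-1 answer
        for (double[] in : x) {
            double sum = 0.0;
            for (int i = 0; i < w.length; i++) sum += w[i] * in[i];
            System.out.println((sum > 0 ? 1 : -1));
        }
    }
}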
Collections of Adalines in a layer can be taught multiple patterns. Adalines
can have additional inputs that are powers or multiplications of inputs and are
then referred to as higher order networks. These may work better at pattern solving
than a many layered single order network. This may be used in more than two
dimensions: a line separates linear data in a plane, a plane separates linear
data in three dimensions, etc. Adalines and Madalines can be used to clean up
noise from data provided there is a good copy of the data to learn from during
training.

7.5 Adaptive Resonance Networks


Developed by spouses Stephen Grossberg and Gail Carpenter, Adaptive Resonance
Theory, ART, is a self organizing network that learns without supervised
training. ART uses a competitive input-output training to allow the network to
learn new information without losing information already learned.
These networks consist of an input (comparison) layer with a node for each
input dimension and an output (recognition) layer that has a node for each
category. There is a hidden layer between them that filters information fed
back to the input layer from the output layer. There are also controls for each
layer to control the direction of information. Competitive training occurs and
the highest valued node wins.
Patterns are presented to the input layer, which tries to find the closest
matching weight vector. If a matching weight vector is found it is compared to
the categories for a match. If there are weight and category matches then the
network is in resonance and training is performed to better match the weights.
If no category is found a new one is created.

7.6 Associative Memories


Associative memory stores information by associating or correlating it with other
memories. Most neural nets have the capability to store memory this way.
Associative memory systems can recall information based on garbled input; details
are stored in a distributed fashion; they are accessible by content; they are very
robust; and, most importantly, they can generalize. The two classes of associative
memory, classified by how they store memories, are auto-associative and
hetero-associative.
Auto-associative: each data item is associated with itself. Used for cleaning
up and recognizing handwriting. Training is done by giving the same pattern
to the input and output nodes.
Hetero-associative: different data items are associated with each other. One
pattern is given and another is output; a translation program would fall in this
category. This one is trained by giving one input pattern to the input nodes
and the desired output pattern to the output nodes.
The main architectures for associative memory neural networks are: crossbar
(aka Hopfield); adaptive filter networks; competitive filter networks.
Adaptive filter networks, like Adalines, test each neurode to see if it is the
pattern specific to that neurode. These are used in signal processing.
Competitive filter networks, like Kohonen's, have neurodes competing to be
the one that matches the pattern. They self-organize and they perform
statistical modeling without outside aid or input.

7.7 Probabilistic Neural Networks


Probabilistic neural networks are forward feed networks built with three layers.
They are derived from Bayes Decision Networks. They train quickly since
the training is done in one pass of each training vector, rather than several.
Probabilistic neural networks estimate the probability density function for each
class based on the training samples, using a Parzen window or a similar probability
density estimator. This is calculated for each test vector. Usually a spherical
Gaussian basis function is used, although many other functions work equally well.
Vectors must be normalized prior to input into the network. There is an
input unit for each dimension in the vector. The input layer is fully connected
to the hidden layer. The hidden layer has a node for each training vector, grouped
by classification. Each hidden node calculates the dot product of the input vector
with its training vector, subtracts 1 from it, and divides the result by the standard
deviation squared. The output layer has a node for each pattern classification. The
sum for each hidden node is sent to the output layer and the highest value wins.
The probabilistic neural network trains immediately but execution time is
slow and it requires a large amount of space in memory. It really only works for
classifying data. The training set must be a thorough representation of the data.
Probabilistic neural networks handle data that has spikes and points outside the
norm better than other neural nets.
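A minimal sketch of this kind of classifier in Java, assuming a tiny two-class data set and a smoothing width sigma, and writing the Gaussian kernel in its squared-distance form rather than the normalized dot-product form described above:

// Minimal sketch of a probabilistic neural network classifier:
// one pattern node per training vector, a spherical Gaussian kernel,
// and one class sum per category. The data and sigma are illustrative.
public class PnnDemo {

    static double[][] train = { {1.0, 1.0}, {1.2, 0.9}, {-1.0, -1.1}, {-0.9, -1.0} };
    static int[] trainClass = { 0, 0, 1, 1 };
    static double sigma = 0.5;

    // Parzen-style estimate: sum a Gaussian kernel over the training vectors
    // of each class, then pick the class with the largest value.
    static int classify(double[] x) {
        double[] classSum = new double[2];
        for (int p = 0; p < train.length; p++) {
            double dist2 = 0.0;
            for (int i = 0; i < x.length; i++) {
                double diff = x[i] - train[p][i];
                dist2 += diff * diff;
            }
            classSum[trainClass[p]] += Math.exp(-dist2 / (2.0 * sigma * sigma));
        }
        return (classSum[0] >= classSum[1]) ? 0 : 1;
    }

    public static void main(String[] args) {
        System.out.println(classify(new double[] {0.8, 1.1}));   // expect class 0
        System.out.println(classify(new double[] {-1.1, -0.8})); // expect class 1
    }
}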

7.8 Counterpropagation Network
The counterpropagation network is a hybrid network. It consists of an outstar
network and a competitive filter network. It was developed in 1986 by
Robert Hecht-Nielsen. It is guaranteed to find the correct weights, unlike regular
backpropagation networks that can become trapped in local minima during
training.
The input layer neurodes connect to each neurode in the hidden layer. The
hidden layer is a Kohonen network which categorizes the pattern that was input.
The output layer is an outstar array which reproduces the correct output pattern
for the category.
Training is done in two stages. The hidden layer is first taught to categorize
the patterns and the weights are then fixed for that layer. Then the output
layer is trained. Each pattern that will be input needs a unique node in the
hidden layer, which is often too large to work on real world problems.

7.9 Neural Net Meshes


Meshes are used in visualization, image processing, neurology and physics
applications. They are a grid of regular or irregular shape that stores information
or represents a shape rather than a flat object. Neural nets are used to adjust
the meshes in 3d graphics.
Meshes are also derived from Pask's Conversation Theory. The gist of the meshes
is that distributed information (like that of the Internet) adapts to the
semantic expectations of the users. The system then self organizes to meet
expectations.

7.10 Kohonen Neural Nets (Self Organizing Networks)
The Kohonen Self Organizing Map (Network) uses unsupervised, competitive
learning. These networks are used for data clustering, as in speech recognition
and handwriting recognition. They are also used for sparsely distributed data.
Self Organizing Networks consist of two layers, an input layer and a Kohonen
layer. The input layer has a node for each dimension of the input vector. The
input nodes distribute the input pattern to each node in the Kohonen layer,
so the two layers are fully connected. The output layer has at least as many
nodes as categories to be recognized. One neurode in the output layer will be
activated for each pattern. Each input is connected to each output and there
are no connections within the layers.
The network uses lateral inhibition, which is how the vision system works in
people. Connections are formed to neighboring neurodes which are inhibitory.
The strength of the neurode is inversely proportional to its distance from
other nodes. The neurode with the strongest signal dampens the neurodes
close to it using a Mexican Hat function (so called because it looks like a
Mexican hat). The Mexican Hat function is also used in wavelets and image
processing. An example is 1.5x^4 - 4x^2 + 2; try plotting this between -2 and 2.
The neurodes close to the one activated take part in the training, the others do
not. To make it computationally efficient a step function is used instead of a
true Mexican hat function.
Self organization is a form of unsupervised learning. It sets the weights with a
'winner take all' algorithm. Each neurode learns a classification. Input vectors
will be classed into the group to which they are closest.
General algorithm:
The weights between the nodes are initialized to random values between 0.0 and 1.0.
Then the weight vector is normalized.
The learning rate is set between 1.0 and 0.0 and decreased linearly each iteration.
The neighborhood size is set and decreased linearly each iteration.
The input vector is normalized and fed into the network.
The input vector is multiplied by the connection weights and the total is accumulated by the Kohonen network nodes.
The winning node's output is set to one and all the other nodes are set to zero.
Weights are adjusted: Wnew = Wold + trainingConstant * (input - Wold).
Training continues until a winning node vector meets some minimum error standard.

7.10.1 C++ Self Organizing Net
//som.cpp
//this is an example of a 'Self Organizing Kohonen Map'
//http://www.timestocome.com

//this is the driver program for the kohonen network (somlayer.cpp)


//the algorithm and other notes are there.

#include "somlayer.cpp"

int main (int argc, char **argv)


{

//new kohonen network


network kohonen;

//read in data
kohonen.getData();
kohonen.readInputFile();

//set up nodes, layers and weights


kohonen.createNetwork();

//train the network


kohonen.train();

//dump input to a file for user


kohonen.print();

return 0;
}
//somlayer.cpp
//www.timestocome.com
//
//
//This program is a C/C++ program demonstrating the
//self organizing network (map) {algorithm by Kohonen}
//This is an unsupervised network
//one neurode in the output layer will
//be activated for each different input pattern
//The activated node will be the one whose weight
//vector is closest to the input vector
//
//It reads in a data file of vectors in the format:
//99.99 88.88 77.77
//66.66 55.55 44.44
//
//algorithm
//weight array is created
//(number of input dimensions) X (number of input dimensions * number of vectors)
//the weights are initialized to a random number between 0 and 1
//weight vectors are normalized
//the learning rate is set to one and linearly decremented
// depending on maximum number of iterations
//the neighborhood size is set to the max allowed by the kohonen output layer size
// and decremented linearly depending on the maximum number of iterations
//the input vector is normalized
//each input is multiplied by a connecting weight and sent to each output node
//the inputs for each output node are summed
//the winning node is set to one
//the other output nodes are set to zero
//the distance between the winning node and the input vector are checked
//if the distance is not inside minimum acceptable the weights are adjusted
// Wnew = Wold + trainingConstant * (input - Wold)
//the neighborhood size and training constant are decreased
//
//and the next loop is begun.

#ifndef _LAYER_CPP
#define _LAYER_CPP

#include <iostream.h>
#include <stdlib.h>
#include <math.h>

#include <time.h>
#include <stdio.h>
#include <string>

#define MAX_DIMENSIONS 100


#define MAX_VECTORS 100

class network{

private:

int vectorsIn, weightColumn;


int nodesIn, nodesK;
char fileIn[128], fileOut[128];
int maxIterations;
double distanceTolerance;
int decreaseNeighborhoodSize, neighborhoodSize;
double decrementLearningConstant, learningConstant;
double kohonen[MAX_DIMENSIONS*MAX_VECTORS];
double weights[MAX_DIMENSIONS][MAX_DIMENSIONS*MAX_VECTORS];
double inputArray[MAX_VECTORS][MAX_DIMENSIONS];
int winningNode;
double distance;
int firstLoop;
double trackDistance[MAX_VECTORS];
int trackWinner[MAX_VECTORS];

void normalizeWeights()
{

for ( int i=0; i<vectorsIn; i++){


double total = 0.0;

for ( int j=0; j<weightColumn; j++){


total += weights[i][j] * weights[i][j];
}

double temp = sqrt(total);

for( int k=0; k<weightColumn; k++){


weights[i][k] = weights[i][k] / temp;

}
}
}

void normalizeInput()
{

for ( int i=0; i<vectorsIn; i++){


double total = 0.0;

for ( int j=0; j<nodesIn; j++){


total += inputArray[i][j] * inputArray[i][j];
}

for( int j=0; j<nodesIn; j++){


inputArray[i][j] = inputArray[i][j] / sqrt(total);
}
}
}

public:

network(){}

~network(){}

void createNetwork()
{

// initialize weights to a value between 0.0 and 1.0


srand (time(0));

for (int i=0; i<vectorsIn; i++){


for (int j=0; j<weightColumn; j++){

int max = 1;

weights[i][j] = (double) rand()/RAND_MAX;

}
}

normalizeWeights();
}

void getData()
{

//get from user


// number of input nodes
cout << "*****************************************************"<< endl;
cout << "* Enter the number of input nodes needed. (This is *"<< endl;
cout << "* number of dimensions in your input vector. *"<< endl;
cout << "*****************************************************"<< endl;
cin >> nodesIn;

// number of input vectors


cout << "*****************************************************"<< endl;
cout << "* Enter the number of vectors to be learned. *"<< endl;
cout << "*****************************************************"<< endl;
cin >> vectorsIn;

// name of input file


cout << "*****************************************************"<< endl;

cout << "* Enter the name of your input file containing the *"<< endl;
cout << "* vectors. *"<< endl;
cout << "*****************************************************"<< endl;
cin >> fileIn;

// name of file to output results to


cout << "*****************************************************"<< endl;
cout << "* Enter the name of the file for output. *"<< endl;
cout << "*****************************************************"<< endl;
cin >> fileOut;

// distance tolerance
cout << "*****************************************************"<< endl;
cout << "* Enter the distance tolerance that is acceptable *"<< endl;
cout << "*****************************************************"<< endl;
cin >> distanceTolerance;

// max number of iterations before giving up


cout << "*****************************************************"<< endl;
cout << "* Enter the maximum iterations for learning cycle *"<< endl;
cout << "* before giving up. *"<< endl;
cout << "*****************************************************"<< endl;
cin >> maxIterations;

//determine output layer size & initialize


nodesK = nodesIn * vectorsIn;
for(int i=0; i<nodesK; i++){
kohonen[i] = 0;
}

//vectorsIn = nodesIn;
weightColumn = nodesIn * vectorsIn;

//determine amount to decrease neighborhood size


//every so many loops reduce neighborhood size
decreaseNeighborhoodSize = (int)maxIterations/nodesK;
if ( decreaseNeighborhoodSize < 1 ){
decreaseNeighborhoodSize = 1;
}

//the learning constant starts at 1.0 and is decremented each learning iteration
decrementLearningConstant = 1.0/maxIterations;
}
//user gave us the number of floating numbers per row (dimensions)
//and the number of lines (vectors)
//only a space is used between the numbers, no commas or other markers.
void readInputFile()
{

FILE *fp = fopen( fileIn, "r");

//read in vectors
for (int i=0; i<vectorsIn; i++){
for (int j=0; j<nodesIn; j++){
fscanf ( fp, "%lf", &inputArray[i][j]);
}
}

fclose (fp);
normalizeInput();
}

void train ()
{

double distance = 0.0;


double oldDistance = 0.0;
int distanceCount = 0;

//for each vector, loop


for (int x=0; x<vectorsIn; x++){

cout << "************vector " << x << " *************"<< endl;


for ( int q=0; q<nodesIn; q++){
cout << " " << inputArray[x][q];

}
cout << endl << endl;

//determine initial neighborhood size


neighborhoodSize = nodesK;
int count = 0;

//set initial values that aren't set in createNetwork


learningConstant = 1.0;
double distance = 0.0;
firstLoop = 1;
int winningNode = 0;

// inner loop
// see if outside number iterations = break from inner loop
while (count < maxIterations ){

count++;
cout << "\n loop number " << count;
cout << "\tdistance " << distance;
cout << "\twinning node " << winningNode << endl;

// multiply input by its weight connecting to each kohonen node


// sum total for each node in kohonen layer
for ( int i=0; i<nodesIn; i++){// for each input dimension
for ( int k=0; k<nodesK; k++){ //for each kohen node
kohonen[k] += inputArray[x][i] * weights[i][k];
}
}

// see which is winning node


double winner = 0.0;

for( int i=0; i<nodesK; i++){


if (kohonen[i] > winner){
winner = kohonen[i];
winningNode = i;
trackWinner[x] = i;
}
}

// set winner to one

// set all other outputNodes to zero
for( int i=0; i<nodesK; i++){
if( i != winningNode ){
kohonen[i] = 0.0;
}else{
kohonen[i] = 1.0;
}
}

// see if in distance tolerance = break from inner loop


// we're done with this vector, do next
oldDistance = distance;
distance = 0.0;

for ( int i=0; i<nodesIn; i++){


distance += inputArray[x][i] - weights[i][winningNode];
}

distance = sqrt ( distance * distance);


trackDistance[x] = distance;

if (distance < distanceTolerance){


cout << "Node " << winningNode << " won. " << endl;
cout << "Distance is with in tolerance! " << endl;
cout << "loop number " << count << endl;
break;
}

//shake things up if distance grows for too long


if (distance > oldDistance ){
distanceCount ++;
}

if (distanceCount > 5){


distanceCount = 0;
neighborhoodSize = nodesK;
}

// reduce neighborhood size


int right = nodesK, left = 0;

if (( count % decreaseNeighborhoodSize == 0)
&& (neighborhoodSize > 1)){

neighborhoodSize--;

//keep inside weight array bounds


if( winningNode > neighborhoodSize ){
left = winningNode - neighborhoodSize;
}else{
left = 0;
}

if ( winningNode + neighborhoodSize < nodesK){


right = winningNode + neighborhoodSize;
}else{
right = nodesK;
}
}

// reduce learning constant


learningConstant -= decrementLearningConstant;

//flip flag
firstLoop = 0;

// adjust weights Wn = Wo + LC * (input - Wo)


// do the whole matrix the first pass and do
// only the neighborhood of the winning node on
// subsequent passes.

for ( int i=left; i<right; i++){

//update each dimension of the weight vector for kohonen node i
for ( int j=0; j<nodesIn; j++){
weights[j][i] += learningConstant * (inputArray[x][j] - weights[j][i]);

//keep things from blowing up
if (weights[j][i] < 0.0){
weights[j][i] = 0.0;
}

if( weights[j][i] > 1.0){
weights[j][i] = 1.0;
}
}
}

// re-normalize weights
normalizeWeights();

}//end inner loop count < maxIterations

}//end loop for each vector

}//end train()

void print()
{

//open file
FILE *fp = fopen ( fileOut, "w");

//headings
fprintf (fp, "\n\n\n data from training run \n");

//print weight array


fprintf( fp, "\nWeight Array\n");

for ( int i=0; i<vectorsIn; i++){


fprintf (fp, "\n");
for ( int j=0; j<weightColumn; j++){
fprintf (fp, " %lf ", weights[i][j]);
}
}

//headings
fprintf ( fp, "\n\n\nnormalized input vectors\t\twinning node\tdistance\n");

//print vectors, winning node for each and distance for each
for ( int i=0; i<vectorsIn; i++){

//input vector //winning node number //final distance tolerance


fprintf( fp, "\n");

for ( int j=0; j<nodesIn; j++){


fprintf ( fp, " %lf ", inputArray[i][j]);
}

fprintf (fp, "\t %d\t %lf ", trackWinner[i], trackDistance[i]);
}

//close output file
fclose(fp);
}

};

#endif // _LAYER_CPP

7.11 Backpropagation
Forward Feed Back Propagation networks (aka Three Layer Forward Feed Networks)
have been very successful. Some uses include teaching neural networks to
play games, speak and recognize things. Backpropagation can be used
on several network architectures. The networks are all highly interconnected
and use non-linear transfer functions. The network must have at minimum
three layers, but rarely needs more than three layers.
Back-propagation supervised training for forward-feed neural nets uses pairs
of input and output patterns. The weights on all the vectors are set to random
values. Then input is fed to the net and propagates to the output layer and the
errors are calculated. Then the error correction is propagated back through the
hidden layer and then to the input layer of the network. There is one input neurode
for each number (dimension) in the input vector, and one output neurode
for each dimension in the output vector. So the network maps IN-dimensional
space to OUT-dimensional space. There is no set rule for determining the number
of hidden layers or the number of neurodes in the hidden layer. However,
if too few hidden neurodes are chosen then the network can not learn. If too
many are chosen, then the network memorizes the patterns rather than learning
to extract relevant information. A rule of thumb for choosing the number of
hidden neurodes is to choose log2(X), where X is the number of patterns. So
if you have 8 distinct patterns to be learned, then log2(8) = 3 and 3 hidden
neurodes are probably needed. This is just a rule of thumb; experiment to see
what works best for your situation.
The delta rule is used for error correction in backpropagation networks. This
is also known as the least mean squared rule: NewWeight = OldWeight + 2 *
LearningConstant * NeurodeOutput * (DesiredOutput - ActualOutput). The delta
rule uses local information for error correction. This rule looks for a minimum.
In an effort to find a minimum it may find a local minimum rather than the
global minimum. Picture trying to find the deepest hole in your yard: if you
measure small sections at a time you may locate a hole but it may not be the
deepest in the yard. The generalized delta rule seeks to correct this by looking
at the gradient for the entire surface, not just local gradients.
The error is aimed at zero during training. It is calculated
as: Error = (1/2) * (sum over each output of (desired - actual)^2). To get the error
close to zero, within a tolerance, we use iteration. Each iteration we move
a step downward. We take the gradient, the derivative of a vector, and use
the steepest descent to minimize the error. So the NewWeight = OldWeight +
StepSize * (-gradient_W e(W)).
The derivative of the function T(x) = 1/(1 + e^(-x)) is just T(x) * (1 - T(x)),
so using the chain rule we arrive at the error correction term
(desired - actual) * actual * (1 - actual), which is passed back through each node's
output weight and each node's hidden weight; the weight is then changed by the
amount of the error correction term as it propagates back through the network.
To train the net all weights are randomly set to a value between -1.0 and 1.0.
To do the calculations going forward through the net:
Each NodeInput is multiplied by each weight connected to it.
Each HiddenNode sums up these incoming weighted inputs and adds a bias to the total.
This value is used as x in the sigmoid function 1/(1 + e^(-x)).
If this value is greater than the threshold the HiddenNode fires this value, else it fires zero.
Each HiddenNode output is multiplied by each weight connected to it.
Each OutputNode sums up these incoming weighted values and adds a bias to the total.
This value is used as x in the sigmoid function 1/(1 + e^(-x)).
This is the value output by the OutputNode.
To calculate the adjustments during training, you figure out the error and
propagate it back like this:
Adjust weights between HiddenNodes and OutputNodes:
ErrorOut = (OutputNode)*(1-OutputNode)*(DesiredOutput - OutputNode)
ErrorHidden = (HiddenNode)*(1-HiddenNode)*(Sum of ErrorOut*Weight for each weight connected to this node)
LearningRate = LearningConstant * HiddenNode
(LearningConstant is usually set to something around 0.2)
Adjustment = ErrorOut * LearningRate
Weight = Weight + Adjustment
Adjust weights between HiddenNodes and InputNodes:
Adjustment = (ErrorHidden)*(LearningConstant)*(NodeInput)
Weight = Weight + Adjustment
Adjust Threshold:
On OutputNode, Threshold = Threshold - ErrorOut * LearningRate
On HiddenNode, Threshold = Threshold - ErrorHidden * LearningRate
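A small worked example of one output-layer weight update, following the rules above with assumed values (HiddenNode output 0.6, OutputNode output 0.8, DesiredOutput 1.0, LearningConstant 0.2, current weight 0.5):
ErrorOut = 0.8 * (1 - 0.8) * (1.0 - 0.8) = 0.032
LearningRate = 0.2 * 0.6 = 0.12
Adjustment = 0.032 * 0.12 = 0.00384
Weight = 0.5 + 0.00384 = 0.50384
The weight moves slightly upward, since the output was below the desired value and the hidden node's output was positive.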
If you use a neural net that also accounts for imaginary numbers you can
adapt this function so it is not always positive and calculate all of the four
derivatives needed.
Numerous iterations are required for a backpropagation network to learn, so
it is not practical for neural nets that must learn in 'real time'. It
will not always arrive at a correct set of weights; it may get trapped in a local
minimum rather than the actual (global) minimum. This is a problem with the
'steepest descent' algorithm. A momentum term that allows the calculation to slide
over small bumps is sometimes employed. Back propagation networks do not scale
well. They are only good for small neural nets.

7.11.1 GUI Java Backpropagation Neural Network Builder

//backpropagation.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001

import javax.swing.*;
import java.io.*;

class backpropagation{

private double trainingConstant;


private double threshold;
private File trainingDataFile;
private neuralnet nnToTrain;
private double[][] vectorsIn;
private double[][] vectorsOut;
private int numberOfVectors = 0;
private double[][] neurodeOutputArray;
private JTextArea message = new JTextArea();
private int nodesPerLayer[];
private int max, outNodes, noLayers, inNodes;
private double allowedError;

backpropagation
(neuralnet n, double c, double t, File f, JTextArea info, int noV, double err)
throws Exception
{

allowedError = err;
max = n.maxNodes;

outNodes = n.out;
noLayers = n.numberOfLayers;
inNodes = n.in;
trainingConstant = c;
threshold = t;
nnToTrain = new neuralnet();
nnToTrain = n;
nnToTrain.threshold = t;

numberOfVectors = noV;
message = info;

trainingDataFile = f;
FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr);
String lineIn;
vectorsIn = new double[numberOfVectors][inNodes];
vectorsOut = new double[numberOfVectors][outNodes];
neurodeOutputArray = new double[noLayers][max];
nodesPerLayer = new int[noLayers+1];
nodesPerLayer[0] = inNodes;

message.setText( "done initializing variables");

for(int k=1; k<(noLayers - 1); k++){


nodesPerLayer[k]=nnToTrain.hiddenLayers[k-1];
}

nodesPerLayer[noLayers - 1] = outNodes;

message.append("\n parsing input file into arrays");

//now parse them into arrays


StreamTokenizer st = new StreamTokenizer(fr);
int k = 0, j =0, i =0;

while(st.nextToken() != st.TT_EOF){

message.setText("\n reading token..." );

if(st.ttype == st.TT_NUMBER){

if( i < inNodes){

vectorsIn[k][i] = st.nval;
i++;

}else if( j < outNodes){

vectorsOut[k][j] = st.nval;
j++;

if(j == outNodes){

k++;
i = 0;
j = 0;
}
}

}
}

info.setText("...loaded i-o vectors for training....");
}

public neuralnet train()


{

//*********forward we go*******************
//propagate input through nn
int vectorNumber = 0;
while(vectorNumber < numberOfVectors){

long loopNumber = 0;
boolean noConvergence = false;
boolean gotConvergence = false;

while( !noConvergence && !gotConvergence){

//?safety bail after so many loops, assume no convergence today.....


loopNumber ++;
if(loopNumber > 1000){
noConvergence = true;
message.setText("Convergence Failure on training vector # "
+ (vectorNumber+1) );
}

//****for each training pair....*********!!!!!!!!!!!!

//input the input vector to each node in first layer


for(int i=0; i<inNodes; i++){
neurodeOutputArray[0][i] = vectorsIn[vectorNumber][i];
}

//*for each layer after first


//output = sum incoming weights,
//input value to sigmoid function 1/(1 + exp ^(-x))
for(int l=1; l< noLayers; l++){

for(int n=0; n < nodesPerLayer[l]; n++){

double temp = 0;

//sum incoming weights * output from previous layer
for(int w=0; w<nodesPerLayer[l-1]; w++){
temp += neurodeOutputArray[l-1][w] * nnToTrain.weightTable[l-1][w][n];
}

//run through sigmoid 1/(1 + e^(-x))
double temp2 = 1.0 / ( 1.0 + Math.exp( -temp ) );

//fire this value if over threshold, else fire zero
if( temp2 >= threshold){
neurodeOutputArray[l][n] = temp2;
}else{
neurodeOutputArray[l][n] = 0.0;
}
}
}

//*******and back we go***************

//create 2 arrays, one to store currentLayerError, one to store previousLayerError


double errorVectorCurrent[] = new double[max];
double errorVectorPrevious[] = new double[max];
double desired = 0, actual = 0;

//calculate error vector { desired-actual, desired-actual, ...}


//and calculate errors for output layer
for(int i=0; i<outNodes; i++){

desired = vectorsOut[vectorNumber][i];
actual = neurodeOutputArray[noLayers-1][i];

//delta for an output node: actual * (1 - actual) * (desired - actual)
errorVectorCurrent[i] = (actual)*(1-actual)*(desired-actual);
}

//for each layer work back the error


for( int layer = (noLayers-1); layer>0; layer--){

for(int node=0; node<nodesPerLayer[layer]; node++){

//error term of this node, calculated above for the output layer
//or copied from the previous pass through this loop
double delta = errorVectorCurrent[node];

for(int wgt=0; wgt<nodesPerLayer[layer-1]; wgt++){

//output of the node connecting to the input end of this weight
double pvsOut = neurodeOutputArray[layer-1][wgt];

//accumulate the error for the previous layer before adjusting the weight
//ErrorHidden = (HiddenNode)*(1-HiddenNode)*(sum of ErrorOut*Weight)
errorVectorPrevious[wgt] +=
pvsOut * (1-pvsOut) * nnToTrain.weightTable[layer-1][wgt][node] * delta;

//adjust the weight: trainingConstant * error of this node * output of sending node
nnToTrain.weightTable[layer-1][wgt][node] +=
trainingConstant * delta * pvsOut;
}
}

//mv previousLayerError to currentLayerError


//initialize previousLayerError
for(int k=0; k<max; k++){
errorVectorCurrent[k] = errorVectorPrevious[k];
errorVectorPrevious[k] = 0.0;
}

}//end for each layer

//?quit when error below a certain threshold?


double errorCheck = 0.0;

for(int k=0; k<max; k++){

errorCheck += Math.sqrt( (desired-actual)*(desired-actual) );

message.append("\nDesired-Actual=error " +
desired+ "-" +actual+ "=" +(desired-actual) );
}

if (errorCheck < allowedError ){

gotConvergence = true;
message.append("\n\n got convergence...");
}

}//end while !noConvergence && !gotConvergence

vectorNumber ++;
}//end while vectorNumber < numberOfVectors

message.append("\n\nTraining run is done");

return nnToTrain;
}

}//end class backpropagation
//DisplayNet.java
//http://www.timestocome.com
//Neural Net Building Program

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;

public class DisplayNet extends JFrame


{

jpaneldisplaynet jpdn;

public DisplayNet(int i, int o, int h[], double w[][][])


{

super ( "Display Neural Net");

Container rootPanel = getContentPane();

JScrollBar sby = new JScrollBar();


jpdn = new jpaneldisplaynet(i, o, h, w);

rootPanel.add(jpdn);
rootPanel.add(sby, BorderLayout.EAST);

sby.addAdjustmentListener(new AdjustmentListener()
{
public void adjustmentValueChanged( AdjustmentEvent evt){
JScrollBar sb = (JScrollBar)evt.getSource();
jpdn.setScrolledPosition(evt.getValue());
jpdn.repaint();
}
});

}

void display(int in, int out, int hidden[], double weight[][][]){

//create window
final JFrame f = new DisplayNet(in, out, hidden, weight);
f.setBounds( 100, 50, 400, 600);
f.show();

//destroy window
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);

f.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
f.setVisible(false);
}
});
}
}

class jpaneldisplaynet extends JPanel


{

int inNodes;
int outNodes;
int hiddenNodes[];
double weights[][][];
int noLayers;
int scrollx = 0, scrolly = 0;
int layers[];

jpaneldisplaynet(int i, int o, int h[], double w[][][])
{

inNodes = i;
outNodes = o;
hiddenNodes = h;
weights = w;
noLayers = h.length + 2;

layers = new int[noLayers+1];


layers[0] = inNodes;

for(int k=1; k<(noLayers - 1); k++){


layers[k]=hiddenNodes[k-1];
}

layers[noLayers - 1] = outNodes;
}

public void paint(Graphics g)


{

Color backColor = new Color(225, 255, 225);


int x = 50;
int y = 50;
int q = 100;
int c = 0;
int rows = 0;
int cols = noLayers;

g.setColor(backColor);
g.fillRect(0, 0, 1280, 960);

Color nodeColor = new Color( 0, 80, 0);
Color weightColor = new Color(0, 0, 255);

g.setColor(nodeColor);

//Heading...
g.drawString("The node and layer locations are in green, " +
"weights are in blue.", x, y-30);
g.drawString("The leftmost layer is the input, " +
"the rightmost layer is output.", x, y-20);

for(int i=0; i<hiddenNodes.length; i++){


c++;
if(hiddenNodes[i]>rows){ rows = hiddenNodes[i];}
}

//get number of rows


if(inNodes > rows){
rows = inNodes;
}else if( outNodes > rows){
rows = outNodes;
}

int max;
if(rows>cols){
max = rows;
}else{
max = cols;
}

int r = 80; c = 40;


NumberFormat nf = NumberFormat.getNumberInstance();
nf.setMaximumFractionDigits(3);

for(int i=0; i<cols; i++){

r -= (scrolly*40);

for(int j=0; j<layers[i]; j++){

g.setColor(nodeColor);
int printRow = r + (j+1)*20;

g.drawString( "Nd " + (j+1) + " L # " + (i+1) +


" ", (c + (i*100)), printRow);

g.setColor(weightColor);
for(int k=0; k<layers[i+1]; k++){

if(weights[i][j][k] != 0){
g.drawString( " " + nf.format(weights[i][j][k]) +
" ", (c+(i*100)), printRow+(20*(k+1)));
r = printRow + 20*(k+1);
}
}

}
r = 80; //reset at end of column
}
}

//to eliminate flicker


public void update(Graphics g)
{
paint(g);
}

//scroll bar stuff


public void setScrolledPosition( int locationy)
{
scrolly = locationy;

}
}
//filefilter.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001

import java.io.File;
import javax.swing.filechooser.*;

class filefilter extends javax.swing.filechooser.FileFilter


{

public boolean accept (File fileobj)


{

String extension = "";

if(fileobj.getPath().lastIndexOf('.') > 0)
extension = fileobj.getPath().substring(
fileobj.getPath().lastIndexOf('.')
+ 1).toLowerCase();

if( !extension.equals("") )
return extension.equals("net");
else
return fileobj.isDirectory();
}

public String getDescription()


{
return "Neural Net Files (*.net)";
}

}
//DisplayVectors.java
//http://www.timestocome.com
//Neural Net Building Program

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class DisplayVectors extends JFrame


{

jpaneldisplayvectors jpdv;

public DisplayVectors()
{

super ( "Input and Output Vectors");

Container rootPanel = getContentPane();

jpdv = new jpaneldisplayvectors();

rootPanel.add(jpdv);
}

void display(){

//create window
final JFrame f = new DisplayVectors();
f.setBounds( 200, 200, 180, 600);
f.setVisible(true);

//destroy window
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);

f.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
f.setVisible(false);
}
});
}
}

class jpaneldisplayvectors extends JPanel


{

jpaneldisplayvectors()
{
setBackground(Color.white);
}

public void paintComponent(Graphics g)


{

super.paintComponent(g);

}
}

//gui.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.File;
import javax.swing.filechooser.*;
import java.io.*;

public class gui extends JFrame


{

static jpanel jpanelNew;


static jpanel jpanelTrain;
static jpanel jpanelUse;

static JTextArea output = new JTextArea("http://www.TimesToCome.com", 8, 47);


jpanel jpanelInformation;

static int choice = 0;


static String outputText = "";
static Container rootPane;

static JFileChooser jfilechooserOpen = new JFileChooser();


static JFileChooser jfilechooserSave = new JFileChooser();

Color c = new Color( 225, 255, 225);

//build
static JTextField NumberInputs;
static JTextField NumberOutputs;
static JTextField NumberHidden;
static JTextField NumberPerHidden;
JButton jbuttonBuild;

//train
static JTextField TrainingConstant;

static JTextField Threshold;
static JTextField TrainingVectorFile;
static JTextField NoTrainingVectors;
static JTextField Error;
JTextField fileTrain;
JButton jbuttonTrain;

//use
JTextField filetouse;
static JTextField vectorfiletouse;
static JTextField NoVectors;
JButton jbuttonUse;

//file stuff
static File currentFile;
static neuralnet nn;

public gui()
{

super ("http://www.TimesToCome.com");

JLabel LBlank1 = new JLabel(" ");


JLabel LBlank2 = new JLabel(" ");
JLabel LBlank3 = new JLabel(" ");

//new net info


jpanelNew = new jpanel("Create a New Neural Net");

NumberInputs = new JTextField(5);


JLabel LNumberInputs = new JLabel("Number of neurodes input layer: ");

NumberOutputs = new JTextField(5);


JLabel LNumberOutputs = new JLabel("Number of neurodes output layer: ");

NumberPerHidden = new JTextField(20);


JLabel LNumberPerHidden = new JLabel("Number of neurodes in hidden layers: ");

jpanelNew.add(Box.createRigidArea(new Dimension(570, 5)));

JPanel jp1 = new JPanel();
jp1.setBackground(c);
jp1.setLayout(new BoxLayout(jp1, BoxLayout.X_AXIS));
jp1.add(LNumberInputs);
jp1.add(NumberInputs);
jp1.add(Box.createHorizontalStrut(20));
jpanelNew.add(jp1);

JPanel jp3 = new JPanel();


jp3.setBackground(c);
jp3.setLayout(new BoxLayout(jp3, BoxLayout.X_AXIS));
jp3.add(LNumberPerHidden);
jp3.add(NumberPerHidden);
jp3.add(Box.createHorizontalStrut(20));
jpanelNew.add(jp3);

JPanel jp2 = new JPanel();


jp2.setBackground(c);
jp2.setLayout(new BoxLayout(jp2, BoxLayout.X_AXIS));
jp2.add(LNumberOutputs);
jp2.add(NumberOutputs);
jp2.add(Box.createHorizontalStrut(20));
jpanelNew.add(jp2);

jbuttonBuild = new JButton("Build");


jpanelNew.add(jbuttonBuild);
jbuttonBuild.addActionListener(jb1);

//training info
jpanelTrain = new jpanel( "Train a Neural Net");

TrainingConstant = new JTextField(5);


JLabel LTrainingConstant = new JLabel("Training constant: ");

Threshold = new JTextField(20);
JLabel LThreshold = new JLabel("Threshold: ");

Error = new JTextField(20);


JLabel LError = new JLabel("Allowed Error: ");

TrainingVectorFile = new JTextField(20);


JLabel LTrainingVectorFile = new JLabel("Name of training vector file: ");

NoTrainingVectors = new JTextField(20);


JLabel LNoTrainingVectors = new JLabel("Number training vector pairs: ");

NoVectors = new JTextField(20);


JLabel LNoVectors = new JLabel("Number of vectors to process: ");

jpanelTrain.add(Box.createRigidArea(new Dimension(570, 5)));

JPanel jp4 = new JPanel();


jp4.setBackground(c);
jp4.setLayout(new BoxLayout(jp4, BoxLayout.X_AXIS));
jp4.add(LTrainingConstant);
jp4.add(TrainingConstant);
jp4.add(Box.createHorizontalStrut(20));
jpanelTrain.add(jp4);

JPanel jp5 = new JPanel();


jp5.setBackground(c);
jp5.setLayout(new BoxLayout(jp5, BoxLayout.X_AXIS));
jp5.add(LThreshold);
jp5.add(Threshold);
jp5.add(Box.createHorizontalStrut(20));
jpanelTrain.add(jp5);

JPanel jp6 = new JPanel();


jp6.setBackground(c);
jp6.setLayout(new BoxLayout(jp6, BoxLayout.X_AXIS));
jp6.add(LTrainingVectorFile);
jp6.add(TrainingVectorFile);
jp6.add(Box.createHorizontalStrut(20));
jpanelTrain.add(jp6);

JPanel jp7 = new JPanel();


jp7.setBackground(c);
jp7.setLayout(new BoxLayout(jp7, BoxLayout.X_AXIS));
jp7.add(LNoTrainingVectors);

jp7.add(NoTrainingVectors);
jp7.add(Box.createHorizontalStrut(20));
jpanelTrain.add(jp7);

JPanel jp8 = new JPanel();


jp8.setBackground(c);
jp8.setLayout(new BoxLayout(jp8, BoxLayout.X_AXIS));
jp8.add(LError);
jp8.add(Error);
jp8.add(Box.createHorizontalStrut(20));
jpanelTrain.add(jp8);

jbuttonTrain = new JButton("Train");


jpanelTrain.add(jbuttonTrain);
jbuttonTrain.addActionListener(jb2);

//usage info
jpanelUse = new jpanel( "Use a Neural Net");
jpanelUse.add(Box.createRigidArea(new Dimension(570, 5)));

JPanel jp9 = new JPanel();


jp9.setBackground(c);
jp9.setLayout(new BoxLayout(jp9, BoxLayout.X_AXIS));
jp9.add(LNoVectors);
jp9.add(NoVectors);
jp9.add(Box.createHorizontalStrut(20));
jpanelUse.add(jp9);

vectorfiletouse = new JTextField(20);


JLabel Lvectorfiletouse = new JLabel("Vector file to use: ");

JPanel jpf4 = new JPanel();


jpf4.setBackground(c);
jpf4.setLayout(new BoxLayout( jpf4, BoxLayout.X_AXIS));
jpf4.add(Lvectorfiletouse);
jpf4.add(vectorfiletouse);
jpf4.add(Box.createHorizontalStrut(20));
jpanelUse.add(jpf4);

jbuttonUse = new JButton("Process");
jpanelUse.add(jbuttonUse);
jbuttonUse.addActionListener(jb3);

//information
jpanelInformation = new jpanel( "Information");
JScrollPane scrollpaneText = new JScrollPane();
scrollpaneText.add(output);
scrollpaneText.setViewportView(output);
jpanelInformation.add(scrollpaneText);

//file open and save stuff


jfilechooserOpen.addChoosableFileFilter(new filefilter());
jfilechooserSave.addChoosableFileFilter(new filefilter());

//set up interface
rootPane = getContentPane();
rootPane.setBackground(Color.white);
rootPane.setLayout(new FlowLayout());
rootPane.add(jpanelNew);
rootPane.add(jpanelTrain);
rootPane.add(jpanelUse);
rootPane.add(jpanelInformation);

//add in menu
jmenubar();
}
public static void main( String argv [])


{

//create window
JFrame f = new gui();
f.setBounds( 100, 100, 650, 700);
f.setVisible(true);

//destroy window
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);

f.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
System.exit(0);
}
});
}
//build menus and add mouse click listeners...


void jmenubar()
{

JMenuBar jmenubar = new JMenuBar();

jmenubar.setUI( jmenubar.getUI() );

JMenu jmenu1 = new JMenu("File");


JMenu jmenu2 = new JMenu("Help");
JMenu jmenu4 = new JMenu("View");
JMenu jmenu5 = new JMenu("Print");

JMenuItem m1 = new JMenuItem("New");


m1.addActionListener(a1);

JMenuItem m2 = new JMenuItem("Open");


m2.addActionListener(a2);

JMenuItem m3 = new JMenuItem("Save");


m3.addActionListener(a3);

JMenuItem m4 = new JMenuItem("Train");


m4.addActionListener(a4);

JMenuItem m6 = new JMenuItem("About");


m6.addActionListener(a6);

JMenuItem m7 = new JMenuItem("Exit");
m7.addActionListener(a7);

JMenuItem m8 = new JMenuItem("Net");


m8.addActionListener(a8);

JMenuItem m11 = new JMenuItem("Net");


m11.addActionListener(a11);

JMenuItem m13 = new JMenuItem("Introduction");


m13.addActionListener(a13);

JMenuItem m15 = new JMenuItem("Process");


m15.addActionListener(a15);

jmenu1.add(m1);
jmenu1.add(m2);
jmenu1.add(m3);
jmenu1.add(m4);
jmenu1.add(m15);
jmenu1.addSeparator();
jmenu1.add(jmenu5);
jmenu1.addSeparator();
jmenu1.add(m7);

jmenu4.add(m8);

jmenu5.add(m11);

jmenu2.add(m13);
jmenu2.add(m6);

jmenubar.add(jmenu1);
jmenubar.add(jmenu4);
jmenubar.add(jmenu2);

setJMenuBar(jmenubar);

}
//create new neural net
static ActionListener a1 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m1 = ( JMenuItem )e.getSource();

output.setText("\n Use this to create a new neural net");


output.append("\n Enter the number of neurodes for input and output layers.");
output.append("\n For the hidden layer enter the neurodes per layer, ");
output.append("\n separated by commas. Ex: 4, 5, 7 for 3 hidden layers");
output.append("\n So there would be 4 neurodes in the layer next to the ");
output.append("\n input layer, 5 in the next layer and 7 in the ");
output.append("\n hidden layer next to the output layer.");
output.append("\n Hit [Build] when all the information is entered.");

}
};

//open existing net


static ActionListener a2 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m2 = ( JMenuItem )e.getSource();

int result = jfilechooserOpen.showOpenDialog(null);


File fileobj = jfilechooserOpen.getSelectedFile();

if(result == JFileChooser.APPROVE_OPTION){
output.setText("Opening... " + fileobj.getPath());
currentFile = fileobj;

try{
FileInputStream fis = new FileInputStream(currentFile);
ObjectInputStream ois = new ObjectInputStream(fis);
nn = (neuralnet)ois.readObject();
ois.close();
}catch(Exception exception){}

}
}
};

//save net
static ActionListener a3 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m3 = ( JMenuItem )e.getSource();

int result = jfilechooserSave.showSaveDialog(null);


File fileobj = jfilechooserSave.getSelectedFile();

if(result == JFileChooser.APPROVE_OPTION){
output.setText("Saving... " + fileobj.getPath());
currentFile = fileobj;

try{
FileOutputStream fos = new FileOutputStream(currentFile);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(nn);
oos.flush();
oos.close();
}catch(Exception exception){}

}
}
};

//train net
static ActionListener a4 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m4 = ( JMenuItem )e.getSource();

output.setText("\n Use this to train a neural net that you have ");
output.append("\n created and saved.");
output.append("\n Enter the training constant (~0.2), the threshold (~0.2, in the range 0.0-1.0),");
output.append("\n and the file where your training vectors are stored.");
output.append("\n The training vector file should be of the format:");
output.append("\n (1.0, 4.3, 5.6) (3.6, 6.7, 5.2, 5.3)");
output.append("\n The first vector per line should be the input vector,");

output.append("\n and the second should be the output vector");
output.append("\n Make sure you open a file from the file menu to train.");
output.append("\n press [Train] when you are ready to begin.");

}
};

//about
static ActionListener a6 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m6 = ( JMenuItem )e.getSource();
output.setText( "\nhttp://www.timestocome.com"+
"\nNeural Net Building Program"+
"\nCopyright (C) 2001 Linda MacPhee-Cobb"+
"\nThis program is free software; you can"+
"\nredistribute it and/or modify"+
"\nit under the terms of the GNU General "+
"\nPublic License as published by"+
"\nthe Free Software Foundation; either "+
"\nversion 2 of the License, or "+
"\nany later version."+
"\n\nThis program is distributed in the hope"+
"\nthat it will be useful,"+
"\nbut WITHOUT ANY WARRANTY; without even "+
"\nthe implied warranty of "+
"\nMERCHANTABILITY or FITNESS FOR A PARTICULAR "+
"\nPURPOSE. See the"+
"\nGNU General Public License for more details."+
"\nYou should have received a copy of the "+
"\nGNU General Public License"+
"\nalong with this program; if not, "+
"\nwrite to the Free Software"+
"\nFoundatation, Inc., 59 Temple Place, "+
"\nSuite 330, Boston, Ma 02111-1307"+
"\nUSA"+
"\nI may be reached via the website http://www.timestocome.com"+
"\nlinda macphee-cobb"+
"\nwinter 2000-2001");

}
};

//exit
static ActionListener a7 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m7 = ( JMenuItem )e.getSource();
output.setText( "Thank you . . . ");
System.exit(0);

}
};

//display net
static ActionListener a8 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m8 = ( JMenuItem )e.getSource();

if(nn == null ){
output.setText("Please open or create a neural net to display.");
}else{
DisplayNet dn = new DisplayNet(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);
dn.display(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);
}
}
};

//print net
static ActionListener a11 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m11 = ( JMenuItem )e.getSource();

output.setText("This will print the weights and node information to" +


"\na file 'printme.txt' in this directory which you can" +
"\nthen print.");

if(nn == null ){
output.setText("Please open or create a neural net to print.");
}else{
//print to a file
PrinterNet pn = new PrinterNet(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);

//this is supposed to print to printer...


//the code is here, but it doesn't work under linux
//running cups. It is probably a cups problem
//comment out print to a file and try this instead if you
//are not running cups
/*
try{
PrintNet pn = new PrintNet(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);
pn.print();
}catch (Exception exception){
output.setText("\n unable to create file");
}
*/
}

}
};

//help
static ActionListener a13 = new ActionListener()
{

public void actionPerformed( ActionEvent e)


{
Help h = new Help();
h.display();

}
};

//process vectors
static ActionListener a15 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m15 = ( JMenuItem )e.getSource();

choice = 15;
output.setText("From the file menu open the neural net");
output.append("\nYou wish to use, then enter the name");
output.append("\nof the file containing the vectors you");
output.append("\nwish to process. The vector file format");
output.append("\nshould be:");
output.append("\n(3.2, 5.66, 2.0)");
output.append("\nwith one vector per line");

//open a neural net to use


try{
FileInputStream fis = new FileInputStream(currentFile);
ObjectInputStream ois = new ObjectInputStream(fis);
nn = (neuralnet)ois.readObject();
ois.close();
}catch(Exception exception){}

//open vector file and verify

}
};

//build net
static ActionListener jb1 = new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
int i = 0;
int o = 0;
String h = "";

if( (NumberInputs.getText().compareTo("") != 0 ) &&


(NumberOutputs.getText().compareTo("") != 0 ) &&
(NumberPerHidden.getText().compareTo("") != 0 ) ){

i = (int)(Double.valueOf(NumberInputs.getText() ).doubleValue());
o = (int)(Double.valueOf(NumberOutputs.getText() ).doubleValue());
h = NumberPerHidden.getText();

if( (i == 0) || (o == 0)){
output.setText ("Please enter valid numbers for input and output neurodes");
}else if( h.compareTo("") == 0){
output.setText ("Please enter the number of neurodes per hidden layer.");
}else{
output.setText("\nBuilding...");
nn = new neuralnet( i, o, h, output);
nn.setInitWeights(output);

}
}else{
output.setText("Please enter all values needed");
}

}
};

//train net
static ActionListener jb2 = new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
double tconstant = -1.0;
double threshold = -1.0;
String fname = "";
double error = -1.0;

output.setText(" ");

//get information from user


if( (TrainingConstant.getText().compareTo("") != 0 ) &&
(Threshold.getText().compareTo("") != 0 ) ){

tconstant = (Double.valueOf(TrainingConstant.getText() ).doubleValue());


threshold = (Double.valueOf(Threshold.getText() ).doubleValue());
fname = TrainingVectorFile.getText();
int notrainingvectors =
(int)(Double.valueOf(NoTrainingVectors.getText() ).doubleValue());
error = (Double.valueOf(Error.getText() ).doubleValue() );

//quick error check and let user know if there is trouble


if( (tconstant<=0.0)||(threshold<0.0)){

output.setText( "Enter a training constant and threshold > 0.0 please.");

}else if( fname.compareTo("") == 0){

output.setText(" Please enter the name of the training file");

}else if( notrainingvectors <= 0){

output.setText("Enter the number of training vector pairs in the training file");

}else if(error < 0.0 ){

output.setText("Enter the allowed error in output please");

}else{

currentFile = new File(fname);

if(currentFile.isFile()){

if(nn == null){

output.setText
("\nPlease open or create a neural net to train.");

}else{
try{

backpropagation bp = new backpropagation


(nn, tconstant, threshold, currentFile, output, notrainingvectors, error);

nn = bp.train();

}catch(Exception exc){}
}

}else{
output.setText("\n Check training file name and path ");
}
}

}else{
output.setText("Please fill in all of the blanks");
}

}
};

//process vectors
static ActionListener jb3 = new ActionListener()
{
public void actionPerformed(ActionEvent e)
{

int nv = -1;
nv = (int)(Double.valueOf(NoVectors.getText() ).doubleValue());
output.setText("\n Processing...");
File processfile;

//JTextField vectorfiletouse;
String fname = vectorfiletouse.getText();

if( fname.compareTo("")==0){
output.setText("Please enter a file name");

}else if( nv <= 0){

output.setText("Enter the number of vectors to process");

}else if(nn == null){

//open a neural net


try{
output.setText( "Opening neural net ");

FileInputStream fis = new FileInputStream(fname);


ObjectInputStream ois = new ObjectInputStream(fis);
nn = (neuralnet)ois.readObject();
ois.close();
}catch(Exception exception){}

}else{

processfile = new File(fname);

try{

process pv = new process(nn, nv, output, processfile);


pv.doit();

}catch(Exception exc){
output.setText("\n Hellfire and damnation. I believe we ran off the end");
output.append("\n of an array. Double check your numbers.");
}

}
}
};

}
//Help.java
//http://www.timestocome.com
//Neural Net Building Program

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class Help extends JFrame


{

jpanelHelp jph;

public Help()
{

super ( "Help");

Container roothelpPanel = getContentPane();

jph = new jpanelHelp();

roothelpPanel.add(jph);
}
void display(){

//create window
final JFrame f1 = new Help();
f1.setBounds( 200, 200, 180, 600);
f1.setVisible(true);

//destroy window
f1.setDefaultCloseOperation(DISPOSE_ON_CLOSE);

f1.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
f1.setVisible(false);

}
});
}
}

class jpanelHelp extends JPanel


{

String newA = "File->New ";


String newB = " Use this option to create a new neural net.";
String newC = " This creates a forward feed, backpropagation" ;
String newD = " net. The sigmoid function is used as the " ;
String newE = " threshold function and its derivative for the ";
String newF = " backpropagation. You set the number of layers " ;
String newG = " and the number of neurodes per layer. Do keep in ";
String newH = " mind the amount of ram on your computer, this " ;
String newI = " net requires a large amount of ram. You can ";
String newJ = " set your own threshold values for firing, and " ;
String newK = " your own learning constants." ;

String importA = "File->Open" ;


String importB = " Use this option to import a net you " ;
String importC = " previously created and saved with this ";
String importD = " program. " ;

String saveA = "File->Save" ;


String saveB = " Use this option to save the net you are";
String saveC = " currently working on." ;

String trainA = "File->Train" ;


String trainB = " Use this option to train your neural net";
String trainC = " You will need several input and output vector " ;
String trainD = " pairs for training." ;
String trainE = " The output for each neurode will be less than 1.0";
String trainF = " So set the threshold with this in mind.";
String trainG = " Try a training constant of 0.2 if you are";
String trainH = " unsure what to use.";
String trainI = " The training file should have pairs of in/output vectors.";
String trainJ = " (ain, bin, cin,) (zout, yout, xout, wout)";
String trainK = " one pair to a line.";

String processA = "File->Process";
String processB = " Use this to process a file of input vectors";
String processC = " through the net you've created.";
String processD = " Use the format (a, b, c, d) one vector ";
String processE = " per line for the input file.";

String printWA = "File->Print->Net";


String printWB = " Use this to send to the printer the weight ";
String printWC = " tables of this net";

String displayNA = "View->Net ";


String displayNB = " Use this option to display the net in a new window.";

jpanelHelp()
{
setBackground(Color.white);
}

public void paintComponent(Graphics g)


{
int x = 80;
int y = 15;
int incY = 15;

super.paintComponent(g);
g.drawString(newA, x, y);
g.drawString(newB, x, y + incY);
g.drawString(newC, x, y + 2*incY);
g.drawString(newD, x, y + 3*incY);
g.drawString(newE, x, y + 4*incY);
g.drawString(newF, x, y + 5*incY);
g.drawString(newG, x, y + 6*incY);
g.drawString(newH, x, y + 7*incY);
g.drawString(newI, x, y + 8*incY);
g.drawString(newJ, x, y + 9*incY);
g.drawString(newK, x, y + 10*incY);

g.drawString(importA, x, y + 12*incY);
g.drawString(importB, x, y + 13*incY);
g.drawString(importC, x, y + 14*incY);

g.drawString(importD, x, y + 15*incY);

g.drawString(saveA, x, y + 17*incY);
g.drawString(saveB, x, y + 18*incY);
g.drawString(saveC, x, y + 19*incY);

g.drawString(trainA, x, y + 21*incY);
g.drawString(trainB, x, y + 22*incY);
g.drawString(trainC, x, y + 23*incY);
g.drawString(trainD, x, y + 24*incY);
g.drawString(trainE, x, y + 25*incY);
g.drawString(trainF, x, y + 26*incY);
g.drawString(trainG, x, y + 28*incY);
g.drawString(trainH, x, y + 29*incY);
g.drawString(trainI, x, y + 30*incY);
g.drawString(trainJ, x, y + 31*incY);
g.drawString(trainK, x, y + 32*incY);

g.drawString(processA, x, y + 33*incY);
g.drawString(processB, x, y + 34*incY);
g.drawString(processC, x, y + 35*incY);
g.drawString(processD, x, y + 36*incY);
g.drawString(processE, x, y + 37*incY);

g.drawString(printWA, x, y + 38*incY);
g.drawString(printWB, x, y + 39*incY);
g.drawString(printWC, x, y + 40*incY);

g.drawString(displayNA, x, y + 41*incY);
g.drawString(displayNB, x, y + 42*incY);

}
}

//jpanel.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001

import javax.swing.*;
import java.awt.*;

class jpanel extends JPanel


{

jpanel(String s)
{
Color c = new Color(225, 255, 225);

setBackground(c);
setBorder(BorderFactory.createTitledBorder(
BorderFactory.createEtchedBorder(),s));
setLayout(new BoxLayout(this, BoxLayout.Y_AXIS));
}
public void paintComponent( Graphics g )


{
super.paintComponent(g);

}
}

//neuralnet.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001

import javax.swing.*;
import java.util.*;
import java.io.*;

public class neuralnet implements Serializable {

//number of nodes in input and output layers


int in, out;

//no one would build a nn with more than a


//few hidden layers..... but lets set this high just in case....
private int[] tempArray = new int[100];

//number of nodes in each hidden layer


int[] hiddenLayers;

//the weight table is in 3d


//it makes the code cleaner and hopefully a bit quicker
//double[layer number][node number]
//[connected to node number in next layer going forward]
double[][][] weightTable;
int maxNodes = 0;
int numberOfConnections = 0;
int numberOfLayers = 0;
double threshold = 0.0;

neuralnet(){}

neuralnet(int i, int o, String s, JTextArea info)


{
in = i;
out = o;

if(in > out) {
maxNodes = in;
}else{
maxNodes = out;
}

//break string s up in to a list of lengths of hidden layers


char temp[] = s.toCharArray();
int count = 0;
String tempS = "";

for(int j=0; j<temp.length; j++){

if(temp[j] != ','){
tempS += temp[j];
}else{
tempArray[count] = (int) Double.valueOf(tempS).doubleValue();

if(tempArray[count] > maxNodes){


maxNodes = tempArray[count];
}

count++;
tempS ="";
}
}

//get the last number...


if(tempS.compareTo("") != 0){
tempArray[count] = (int) Double.valueOf(tempS).doubleValue();

if(tempArray[count] > maxNodes){


maxNodes = tempArray[count];
}

hiddenLayers = new int[count+1];


for(int j=0; j<count+1; j++){
hiddenLayers[j] = tempArray[j];
}
}
//build a 3-d weight table to store our weights in
numberOfLayers = count + 3;
numberOfConnections = maxNodes;
weightTable = new double[numberOfLayers][maxNodes][numberOfConnections];

//let user know what we have done...


info.setText(" I successfully created your neural net");

info.append("\n Use the file->save menu to save your neural net.");


info.append("\n\n\n Use the view->net to see what I have done.");
info.append("\n The weights are set with random numbers until it is trained.");
}
//initialize the table with random numbers between -1.0 and 1.0
public void setInitWeights(JTextArea information)
{
//set all weights to zero

for(int i=0; i<numberOfLayers; i++){


for(int j=0; j<maxNodes; j++){
for(int k=0; k<numberOfConnections; k++){
weightTable[i][j][k] = 0.0;
}
}
}

//now put in random numbers for existing connections


//leave other sections of array set to zero...
int nodeCount[] = new int[numberOfLayers];

nodeCount[0] = in;

for(int i=0; i<(numberOfLayers-2); i++){


nodeCount[i+1] = hiddenLayers[i];

}

nodeCount[numberOfLayers-1] = out;

Random seed = new Random();

for(int i=0; i < numberOfLayers-1; i++){

for(int j=0; j < (nodeCount[i]); j++){

for(int k=0; k< nodeCount[i+1]; k++){

double number = seed.nextDouble();

int negative = seed.nextInt();

if( (negative % 2) == 0){


number -= 1.0;
}

weightTable[i][j][k] = number;

}
}
}

}
}
//neurode.java
//http://www.timestocome.com
//Neural Net Building Program

class neurode {

private int layerType; //0 = input; 1 = hidden; 2 = output


private int layerNumber; //0 for first/input; 1 for first hidden...
private int layerRow; //0 for first position in vector

private double threshold; //value may be different for different layers


private double learningConstant; //may be different for different layers
private double value; //the sum of inputs * threshold function

private boolean fire; //have we reached threshold?

neurode( int type, int number, int row, double t, double l)


{
layerType = type;
layerNumber = number;
layerRow = row;
threshold = t;
learningConstant = l;
}

void calculateValue()
{

//sum incoming values


//multiply by sigmoid function
//check to see if over threshold
//if over threshold set fire to true
//else set fire to false

}

}
//PrinterNet.java
//http://www.timestocome.com
//Neural Net Building Program

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;

public class PrinterNet extends JFrame


{

int inNodes;
int outNodes;
int hiddenNodes[];
double weights[][][];
int noLayers;
int scrollx = 0, scrolly = 0;
int layers[];

public PrinterNet(int i, int o, int h[], double w[][][])


{

PrintJob pj = getToolkit().getPrintJob(this, "Print Neural Net", null);


Graphics g = pj.getGraphics();

/*
inNodes = i;
outNodes = o;
hiddenNodes = h;
weights = w;
noLayers = h.length + 2;

layers = new int[noLayers+1];


layers[0] = inNodes;

for(int k=1; k<(noLayers - 1); k++){


layers[k]=hiddenNodes[k-1];
}

layers[noLayers - 1] = outNodes;

int x = 50;
int y = 50;
int q = 100;
int c = 0;
int rows = 0;
int cols = noLayers;

g.drawString("The leftmost layer is the input,


the rightmost layer is output.", x, y-20);

for(int l=0; l<hiddenNodes.length; l++){


c++;
if(hiddenNodes[l]>rows){ rows = hiddenNodes[l];}
}

//get number of rows


if(inNodes > rows){
rows = inNodes;
}else if( outNodes > rows){
rows = outNodes;
}

int max;
if(rows>cols){
max = rows;
}else{
max = cols;
}

int r = 80; c = 40;


NumberFormat nf = NumberFormat.getNumberInstance();
nf.setMaximumFractionDigits(3);

for(int l=0; l<cols; l++){

r -= (scrolly*40);

for(int j=0; j<layers[l]; j++){

int printRow = r + (j+1)*20;

g.drawString( "Nd " + (j+1) + " L # " + (l+1) +


" ", (c + (i*100)), printRow);

for(int k=0; k<layers[l+1]; k++){

if(weights[l][j][k] != 0){
g.drawString( " " + nf.format(weights[l][j][k]) +
" ", (c+(i*100)), printRow+(20*(k+1)));
r = printRow + 20*(k+1);
}
}

}
r = 80; //lreset at end of column
}
*/
g.dispose();
pj.end();

}
}
//PrintNet.java
//http://www.timestocome.com
//Neural Net Building Program

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;
import java.io.*;

public class PrintNet


{

int inNodes;
int outNodes;
int hiddenNodes[];
double weights[][][];
int noLayers;
int layers[];
String fileName = "printme.txt";

public PrintNet(int i, int o, int h[], double w[][][])


{

inNodes = i;
outNodes = o;
hiddenNodes = h;
weights = w;
noLayers = h.length + 2;

layers = new int[noLayers+1];


layers[0] = inNodes;

for(int k=1; k<(noLayers - 1); k++){


layers[k]=hiddenNodes[k-1];
}

layers[noLayers - 1] = outNodes;

}

public void print() throws Exception


{

int x = 50;
int y = 50;
int q = 100;
int c = 0;
int rows = 0;
int cols = noLayers;

FileWriter fw = new FileWriter(fileName);


BufferedWriter bw = new BufferedWriter(fw);

for(int i=0; i<hiddenNodes.length; i++){


c++;
if(hiddenNodes[i]>rows){ rows = hiddenNodes[i];}
}

//get number of rows


if(inNodes > rows){
rows = inNodes;
}else if( outNodes > rows){
rows = outNodes;
}

int max;
if(rows>cols){
max = rows;
}else{
max = cols;
}

int r = 80; c = 40;


NumberFormat nf = NumberFormat.getNumberInstance();
nf.setMaximumFractionDigits(3);

for(int i=0; i<cols; i++){

String s1 = new String("\n\n Layer # " + (i+1));


bw.write(s1, 0, s1.length());

for(int j=0; j<layers[i]; j++){

int printRow = r + (j+1)*20;

String s2 = new String( "\nNd " + (j+1) + " Weights=> ");


bw.write(s2, 0, s2.length());

for(int k=0; k<layers[i+1]; k++){

if(weights[i][j][k] != 0){

String s3 = new String( " " + nf.format(weights[i][j][k]) + ", ");


bw.write(s3, 0, s3.length());

r = printRow + 20*(k+1);
}
}

}
r = 80; //reset at end of column
}

bw.close();

}
}
//process.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001

import javax.swing.*;
import java.io.*;

class process{

private double trainingConstant;


private double threshold;
private File dataFile;
private neuralnet nn;
private double[][] vectorsIn;
private double[][] vectorsOut;
private int numberOfVectors = 0;
private double[][] neurodeOutputArray;
private JTextArea message = new JTextArea();
private int nodesPerLayer[];
private int max, outNodes, noLayers, inNodes;
// private double allowedError;
private double[][] answerArray;

process(neuralnet n, int noV, JTextArea info, File f)


throws Exception
{

max = n.maxNodes;
outNodes = n.out;
noLayers = n.numberOfLayers;
inNodes = n.in;
threshold = n.threshold;
nn = n;
numberOfVectors = noV;

message = info;
answerArray = new double[noV][outNodes];

dataFile = f;
FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr);
String lineIn;

vectorsIn = new double[numberOfVectors][inNodes];


vectorsOut = new double[numberOfVectors][outNodes];
neurodeOutputArray = new double[noLayers][max];

nodesPerLayer = new int[noLayers+1];


nodesPerLayer[0] = inNodes;

for(int k=1; k<(noLayers - 1); k++){


nodesPerLayer[k]=nn.hiddenLayers[k-1];
}

nodesPerLayer[noLayers - 1] = outNodes;

//now parse them into arrays


StreamTokenizer st = new StreamTokenizer(fr);
int k = 0, j =0, i =0;

while(st.nextToken() != st.TT_EOF){

if(st.ttype == st.TT_NUMBER){

if( i < inNodes){

vectorsIn[k][i] = st.nval;
i++;

if(i == inNodes){

k++;

i = 0;

}
}

}
}

info.setText("...loaded vectors for processing....");
}
public void doit()


{

//propagate input through nn


int vectorNumber = 0;
while(vectorNumber < numberOfVectors){

//input the input vector to each node in first layer


for(int i=0; i<inNodes; i++){
neurodeOutputArray[0][i] = vectorsIn[vectorNumber][i];
}
//*for each layer after first


//output = sum incoming weights,
//input value to sigmoid function 1/(1 + exp ^(-x))
for(int l=1; l< noLayers; l++){

for(int n=0; n < nodesPerLayer[l]; n++){

double temp = 0;

//sum incoming weights * output from previous layer


for(int w=0; w<nodesPerLayer[l-1]; w++){

temp += neurodeOutputArray[l-1][w] * nn.weightTable[l-1][w][n];

}

//run through sigmoid


double temp2 = 1/ ( 1 + Math.pow(Math.E, -temp) );

//check if over threshold


if( temp2 >= threshold){
//update neurodeOutputArray
neurodeOutputArray[l][n] = temp2;

//save for user


if( l == (noLayers - 1) ){ //this must be output layer
answerArray[vectorNumber][n] = temp2;

}
}
}
}
vectorNumber ++;
}

for(int i=0; i<numberOfVectors; i++){

message.append ( "\n vector # " + i + "input ");

for(int k=0; k<inNodes; k++){

message.append( " " + vectorsIn[i][k] + ", ");
}
message.append(" output " );


for(int j=0; j<outNodes; j++){

message.append( " " + answerArray[i][j] + ", ");

}
}

message.append("\n\ndone processing vectors");

}

}
//weighttable.java
//winter 2000-2001

//we need a table to store the weights after we are done


//and to hold them while we train the net

class weighttable {

//use a 3-d table


//row is the position in the row of this neurode
//column is the neurode in the forward row this neurode is connecting to
//depth is the layer this neurode resides in.
private double[][][] weights;

private boolean training = false; //set this to true to change weights


private int maxRow, maxColumn, maxLayer;

void createTable(int r, int c, int l)


{
//allocate space for the table
maxRow = r;
maxColumn = c;
maxLayer = l;

//set up random weights for each weight


//that are between -1.0 and 1.0

//put zeros in extra table positions


//set training to true
}

void saveTable()
{
//print to a file
}

void printTable()
{
//print to screen or printer
}

void loadTable()
{
//load a saved table into memory for use
}

void setWeights()
{
//if training set to true
//backpropagation training
//else print error message to user
}

}
//printme.txt

Layer # 1
Nd 1 Weights=> 0.51, 0.48,
Nd 2 Weights=> 0.953, 0.181,
Nd 3 Weights=> -0.374, 0.273,
Nd 4 Weights=> -0.082, 0.227,

Layer # 2
Nd 1 Weights=> 0.923, 0.75, 0.64,
Nd 2 Weights=> 0.319, -0.259, -0.929,

Layer # 3
Nd 1 Weights=> -0.571, -0.081, 0.154, 0.786, -0.390, -0.105,
Nd 2 Weights=> 0.742, 0.825, -0.558, -0.221, -0.851, -0.564,
Nd 3 Weights=> 0.128, -0.869, 0.333, 0.742, -0.767, -0.641,

Layer # 4
Nd 1 Weights=>
Nd 2 Weights=>
Nd 3 Weights=>
Nd 4 Weights=>
Nd 5 Weights=>
Nd 6 Weights=>

//process.txt
(1,3,5,7)
(2,4,6,8)
(1,2,3,4)
(0.1, 0.2, 0.3, 0.4)

//train.txt
(0.4, -0.4) (.9)

//test2.net
(0.1, 0.2, 0.3, 0.4) (0.5, 0.6, 0.7, 0.8)
(0.5, 0.6, 0.7, 0.8) (0.9, 1.0, 1.1, 1.2)

7.11.2 C++ Backpropagation Dog Track Predictor

This is a neural net created for the fortune/games section. It is a
backpropagation net that picks the winners of a race given the information
available online before the races. It comes with two Perl tools that clean
up the downloaded data so you can train the net on your favorite track.
Training creates the weight tables you can then use to build a tool that
grabs current race information and predicts winners.

This is a set of tools to download the data files from a race track,
clean them up and train a neural net to predict the winner of the race
given only the information available online before a race.
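
The net itself is small: 24 inputs (handicap, odds and weight for each of
the 8 dogs), 16 hidden neurodes and 8 outputs, one per post position. As a
rough sketch of the forward pass the programs below implement (the function
and array names here are illustrative only, not part of the listings), each
layer sums its weighted inputs, squashes the sum with the sigmoid and only
fires if the result clears a 0.2 threshold:

//forward pass sketch (illustrative only)
#include <cmath>

const int NODESIN = 24, NODESHIDDEN = 16, NODESOUT = 8;

void forwardPass(const double in[NODESIN],
                 const double wI[NODESIN][NODESHIDDEN],
                 const double wO[NODESHIDDEN][NODESOUT],
                 double out[NODESOUT])
{
    const double threshold = 0.20;
    double hidden[NODESHIDDEN] = {0.0};

    //input layer -> hidden layer
    for (int j = 0; j < NODESIN; j++)
        for (int k = 0; k < NODESHIDDEN; k++)
            hidden[k] += in[j] * wI[j][k];

    for (int k = 0; k < NODESHIDDEN; k++) {
        hidden[k] = 1.0 / (1.0 + std::exp(-hidden[k]));   //sigmoid
        if (hidden[k] < threshold) hidden[k] = 0.0;       //don't fire
    }

    //hidden layer -> output layer
    for (int m = 0; m < NODESOUT; m++) out[m] = 0.0;
    for (int k = 0; k < NODESHIDDEN; k++)
        for (int m = 0; m < NODESOUT; m++)
            out[m] += hidden[k] * wO[k][m];

    for (int m = 0; m < NODESOUT; m++) {
        out[m] = 1.0 / (1.0 + std::exp(-out[m]));
        if (out[m] < threshold) out[m] = 0.0;
    }
}

The three output values that come back highest are read as the predicted
win, place and show dogs.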

First pick the track you wish to use:


http://www.ildado.com/dog_racing_tracks_usa.html

Then download the Entries/Results/Charts for a while. I would use at
least a month's worth of data. The tracks re-use the file names on a
monthly basis, so if you use more than a month's data you will need to
change the file names or store them in a different directory.

After that you need to clean up the data. ALWAYS work on a copy, not
your original downloads.

1) Create a working directory
2) Create a directory 'data'
3) Create a directory 'trainingdata'
4) Put a copy of the downloaded data in the 'trainingdata' directory
5) Run cleandata.pl
6) Run formatdata.pl
7) Edit the resulting data.dat file and remove any lines that contain only
commas and no data, and any lines that have commas without data between
them (the layout of a data.dat line is sketched after the file list below).

8) Then you can train your neural net: compile dogs.cpp or run the
precompiled executable included in the file list below. This creates a
weight table and an error file so you can see how well the training is
working.

9) predictor.cpp can be compiled and run from a command prompt; it asks
the user for the information for a race and outputs the predicted winners,
using the weight table created by dogs.cpp.

10) testnet.cpp can be used to run test data through your net and see
how accurate it is at predicting the winners.

12) File list


-cleandata.pl used to pull relevant information from downloaded files
-formatdata.pl used to put the cleaned data into a file the neural net
program can read
-dogs.cpp is the training program
-predictor.cpp gets user input and predicts winners
-testnet.cpp runs through a training file and gives info on accuracy
of the net
-example.error.dat -training error debug file
-example.test.data.dat - data.dat file used for testing
-example.testData.dat - testing debug file
-example.training.data.dat - data.dat file used for training
-example.weights.dat - example weight file
-dogs - compiled, executable dogs.cpp
-predictor - compiled, executable predictor.cpp
-testnet - compiled, executable testnet.cpp
**only a few sample lines are shown in the example files here
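
For reference, here is roughly how one line of data.dat maps onto a
32-value training row the way readData() in dogs.cpp treats it (the helper
below is illustrative only and not part of the tool set): three input
values per dog in post order, with the handicap already scaled to 0..1 by
formatdata.pl, odds scaled down by 10 (as predictor.cpp's userInput does)
and weights by 100, followed by eight target slots in which the actual win,
place and show dogs are marked 0.75, 0.50 and 0.25.

//data.dat row layout sketch (illustrative only)
const int DOGS = 8;

void encodeRace(const double handicap[DOGS],   //already 0..1 from formatdata.pl
                const double odds[DOGS],
                const double weight[DOGS],
                int win, int place, int show,  //post positions 1..8
                double row[32])
{
    for (int d = 0; d < DOGS; d++) {
        row[d * 3]     = handicap[d];
        row[d * 3 + 1] = odds[d]   / 10.0;
        row[d * 3 + 2] = weight[d] / 100.0;
    }
    //desired outputs: winner highest, then place, then show
    for (int d = 0; d < DOGS; d++) row[24 + d] = 0.0;
    row[24 + (win - 1)]   = 0.75;
    row[24 + (place - 1)] = 0.50;
    row[24 + (show - 1)]  = 0.25;
}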

---cleandata.pl---
#!/usr/bin/perl

#program to clean up the entry and result files


#from the greyhound tracks so the data can be
#fed into the neural net for training.

#input files are entries...


#G-greyhound
#E for entries
#2 letter track id
#two digit day of month
#S/A/E/L student, afternoon, evening, late night

#results
#G - greyhound
#R for results
#2 letter track id
#two digit day of month
#S/A/E/L student, afternoon, evening, late night

#data files are created for each race


#2 letter track id
#two digit day of month
#S/A/E/L student, afternoon, evening, late night
#race number

#input data file will have the


#number and name of each dog, the odds, and the weight (odds are divided out)
#the 3 or 4 number handicap
#the ordered list of winners

#read in list of entries files in directory into an array


opendir( DIRECTORY, 'trainingdata')
or die "Can't open directory trainingdata.";

#$temp = join ( ', ', readdir(DIRECTORY));


while (defined ($file = readdir(DIRECTORY))){

push (@filelist, $file);


}
closedir (DIRECTORY);

$i = 0;
foreach $item (@filelist){

if ( ( $item =~ /GR[A-Z, 0-9]+.HTM/)
|| ( $item =~ /^\./)
|| ( $item =~ /^\.\./) )
{
$oldFileName = $item;
delete @filelist[$i];
}
$i++;
}

foreach $item (@filelist){


if ($item){
push (@entriesfiles, $item);
}
}

#FILE LOOP
#open the first/next entries file
foreach $item (@entriesfiles){

open (INPUT, "trainingdata/$item")


or die "Couldn't open training data file $item";

$oldFileName = $item;
$race = 0;

#read races and grab data


#RACE LOOP
while (<INPUT>){

if ( /Grade/ ){

$race++;

#create a new file for the first race with the correct file name
$newFileName = $oldFileName;
$newFileName =~ s/G/$race/;
$newFileName =~ s/HTM/txt/;
$newFileName = lc ($newFileName);

#open file for writing


open ( OUT, ">data/$newFileName")

or die "Couldn't open data/$newFileName";
}
#for each dog -- write out dogs number, name, odds(divided), weight
#parse line
if ( /^[1-8][\s]/ ){

$parseline = $_;

$parseline =~ s/(^[1-8][\s])([A-Za-z|\.|\-|\s|\']+)([0-9]+[\-][0-9]+[\s])([A-Za-z|\.|\-|\s|\'|\&]+[\s]*)([\(][0-9]+[\)])/$1 $2 $3 $4 $5 $6 $7 $8 $9/;

$dogNumber = $1;
$dogName = $2;
$odds = $3;
$weight = $5;

$oddsTop = $odds;
$oddsBot = $odds;

if ( $oddsBot == 0){
#print " \n !!! $dogNumber, $dogname, $odds, $weight";

}else{

$oddsTop =~ s/\-\d+//;
$oddsBot =~ s/\d+\-//;
$odds = $oddsTop/$oddsBot;
}

$dogNumber =~ s/\s//;
$weight =~ s/\(//;
$weight =~ s/\)//;

#write line to file


print OUT "\n $dogNumber,$dogName,$odds,$weight";

#for (0..4) #write out the handicaps'


#use a zero for the 4th if there are only 3

}
if ( /^Track Handicapper:/ ){
$parseline = $_;

$parseline =~ s/([Track Handicapper: ])([0-9])([\-])([0-9])([\-])([0-9])([\-]*)([0-9]*)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;

$h1 = $2;
$h2 = $4;
$h3 = $6;
$h4 = $8;

if ( ! $h4){
$h4 = 0;
}
print OUT "\nH: $h1, $h2, $h3, $h4";
close OUT;
}

}#end race loop


close (INPUT);

#open the correct results file


$resultsFile = $oldFileName;
$resultsFile =~ s/GE/GR/;
$flag = 0;

#if not found rm the entries file and grab the next entries file
if ( ! (open (INPUT2, "trainingdata/$resultsFile"))){

#delete the entry file


#delete the data files with the races for that entry file
print "\n missing results file $resultsFile";
print "\n files to delete are:";

if ( "trainingdata/$resultsFile" ){
print "\n trainingdata/$resultsFile";
unlink ("trainingdata/$resultsFile");
}
if ( "trainingdata/$oldFileName"){
print "\n trainingdata/$oldFileName";
unlink ("trainingdata/$oldFileName");
}
if ( "data/$newFileName"){
print "\n data/$newFileName";
unlink ("data/$newFileName");

}

}else{

open (INPUT2, "trainingdata/$resultsFile");

while (<INPUT2>){

#get the winners from the file for the correct race
#find each race
if ( /Grade:/ ){

$flag++;
$flag1 = 0;
$flag2 = 0;
$flag3 = 0;
$raceNo = $flag;
$done = 0;
}

elsif( (/^[0-9]/) && (! $flag1) && (! $flag2) && (! $flag3) ){


#cleanup data for file
$parseline = $_;
$first = $parseline;
$first =~ s/([1-8])([A-Za-z|\.|\'|\&|\s]+)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;
$firstNumber = $1;
$firstName = $2;
$flag1 = 1;
}
elsif ( (/^[0-9]/) && ($flag1) && (! $flag2) && (! $flag3) ){


#clean up data for file
$parseline = $_;
$second = $parseline;
$second =~ s/([1-8])([A-Za-z|\.|\'|\&|\s]+)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;
$secondNumber = $1;
$secondName = $2;
$flag2 = 1;
}
elsif ( (/^[0-9]/) && ($flag1) && ($flag2) && (! $flag3) ){


#cleanup data for file
$parseline = $_;

$third = $parseline;
$third =~ s/([1-8])([A-Za-z|\.|\'|\&|\s]+)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;
$thirdNumber = $1;
$thirdName = $2;
$flag3 = 1;
}

if ( ($flag1) && ($flag2) && ($flag3) && (! $done) ){

$done = 1;

#open correct output file


$outputFileName = $oldFileName;
$outputFileName =~ s/G/$raceNo/;
$outputFileName =~ s/HTM/txt/;
$outputFileName = lc ($outputFileName);

open ( OUT2, ">>data/$outputFileName")


or die "Couldn't open data/$outputFileName";

#add the winners at the end of the file


print OUT2 "\n$firstNumber, $firstName";
print OUT2 "\n$secondNumber, $secondName";
print OUT2 "\n$thirdNumber, $thirdName";

#close the OUT file


close (OUT2);
}
}
close (INPUT2);
}

} #END FILE LOOP

--formatdata.pl---
#!/usr/bin/perl

#open output file


$outputfile = "data.dat";
open (OUT, ">$outputfile") or die ("Can not create output file");

#get directory list of files


opendir (DIRECTORY, 'data');

push (@filelist, readdir(DIRECTORY) );


$races = @filelist;
print "\n there are $races races";

#win place show counter


$i = 0;

#dog Number counter


$d = 1;

#read in a file
foreach $file (@filelist){

print "\n File:> $file\n";

if (( $file eq '.') || ( $file eq '..') ){


#do nothing
}else{

open (FILEHANDLE, "data/$file") or die ("Can not open data/$file");

while (<FILEHANDLE>){

#collect the 4 handicaps


if ( $_ =~ /H:\s[1-8]/ ){

$temp = $_;
$temp =~ s/([H:\s]+)([1-8])([\,])([\s])([1-8])([\,])([\s])([1-8])([\,])([\s])([0-8])/$1 $2 $3 $4 $5 $6 $7 $8 $9 $10 $11/;

$h1 = $2;
$h2 = $5;
$h3 = $8;
$h4 = $11;
}
#collect each dog's number, odds and weight

elsif ( $_ =~ /(\s[1-8])/){

$temp = $_;

$temp =~ s/([\s])([1-8])([\,\s])([A-Za-z|\.|\-|\s|\']+)([\s\,])([\d+|\.]+)([\,])([\d]+)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;

$pos = $2;
$odds = $6;
$weight = $8;

if ( $d == 1){
$d = 2;
$p1 = 0;
$o1 = $6;
$w1 = $ 8;

}elsif ( $d == 2){
$d = 3;
$p2 = 0;
$o2 = $6;
$w2 = $ 8;

}elsif ( $d == 3){
$d = 4;
$p3 = 0;
$o3 = $6;
$w3 = $ 8;

}elsif ( $d == 4){
$d = 5;
$p4 = 0;
$o4 = $6;
$w4 = $ 8;

}elsif ( $d == 5){
$d = 6;
$p5 = 0;
$o5 = $6;
$w5 = $ 8;

}elsif ( $d == 6){
$d = 7;
$p6 = 0;
$o6 = $6;
$w6 = $ 8;

}elsif ( $d == 7){
$d = 8;
$p7 = 0;
$o7 = $6;
$w7 = $ 8;

}elsif ( $d == 8){
$d = 1;
$p8 = 0;
$o8 = $6;
$w8 = $8;
}
}
#collect the win, place, show numbers


elsif ( $_ =~ /[1-8]/ ) {

$temp = $_;
$temp =~ s/([1-8])([\,\s]) /$1/;

if ($i == 0){
$win = $1;
$i++;
}elsif ($i == 1){
$place = $1;
$i++;
}elsif ( $i == 2){
$show = $1;
$i = 0;

}

if ( $h1 == 1){
$p1 = 1;
}elsif ( $h1 == 2 ){
$p2 = 1;
}elsif ( $h1 == 3 ){
$p3 = 1;
}elsif ( $h1 == 4 ) {
$p4 = 1;
}elsif ( $h1 == 5) {
$p5 = 1;
}elsif ( $h1 == 6 ){
$p6 = 1;
}elsif ( $h1 == 7) {
$p7 = 1;
}elsif ( $h1 == 8 ){
$p8 = 1;
}
if ( $h2 == 1){
$p1 = 2;
}elsif ( $h2 == 2 ){
$p2 = 2;
}elsif ( $h2 == 3 ){
$p3 = 2;
}elsif ( $h2 == 4 ) {
$p4 = 2;
}elsif ( $h2 == 5) {
$p5 = 2;
}elsif ( $h2 == 6 ){
$p6 = 2;
}elsif ( $h2 == 7) {
$p7 = 2;

}elsif ( $h2 == 8 ){
$p8 = 2;
}
if ( $h3 == 1){
$p1 = 3;
}elsif ( $h3 == 2 ){
$p2 = 3;
}elsif ( $h3 == 3 ){
$p3 = 3;
}elsif ( $h3 == 4 ) {
$p4 = 3;
}elsif ( $h3 == 5) {
$p5 = 3;
}elsif ( $h3 == 6 ){
$p6 = 3;
}elsif ( $h3 == 7) {
$p7 = 3;
}elsif ( $h3 == 8 ){
$p8 = 3;
}
if ( $h4 == 1){
$p1 = 4;
}elsif ( $h4 == 2 ){
$p2 = 4;
}elsif ( $h4 == 3 ){
$p3 = 4;
}elsif ( $h4 == 4 ) {
$p4 = 4;
}elsif ( $h4 == 5) {
$p5 = 4;
}elsif ( $h4 == 6 ){
$p6 = 4;
}elsif ( $h4 == 7) {
$p7 = 4;
}elsif ( $h4 == 8 ){
$p8 = 4;
}

if ( $h4 == 0){
$p1 /= 3;
$p2 /= 3;
$p3 /= 3;
$p4 /= 3;
$p5 /= 3;
$p6 /= 3;

$p7 /= 3;
$p8 /= 3;
}else{
$p1 /= 4;
$p2 /= 4;
$p3 /= 4;
$p4 /= 4;
$p5 /= 4;
$p6 /= 4;
$p7 /= 4;
$p8 /= 4;
}

#write the information one race (file) per line


#deliminated by commas

#print "$p1,$o1,$w1,$p2,$o2,$w2,$p3,$o3,$w3,$p4,$o4,$w4,$p5,$o5,$w5,$p6,$o6,$w6,$p7,$o7,$w7,$p8,$o8,$w8,$win,$place,$show\n";

print OUT "$p1,$o1,$w1,$p2,$o2,$w2,$p3,$o3,$w3,$p4,$o4,$w4,$p5,$o5,$w5,$p6,$o6,$w6,$p7,$o7,$w7,$p8,$o8,$w8,$win,$place,$show,\n";

close (FILEHANDLE);

# print "$p1,$o1,$w1,$p2,$o2,$w2,$p3,$o3,$w3,$p4,$o4,$w4,$p5,$o5,$w5,$p6,$o6,$w6,$p7,$o7,$w7,$p8,$o8,$w8,$win,$place,$show\n";

close (OUT);

//---dogs.cpp---
//www.timestocome.com
//neural net to better pick winning dogs
//data is downloaded from the racing tracks
//and parsed using cleandata.pl followed by formatData.pl
//this program then takes that data and creates a weight table
//using a backpropagation neural net.

//later a program will be written to get race information


//from the user and output the predicted winning dogs
//through a browser interface.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;

#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8
#define VECTORSIN 2000
#define LOOPSMAX 100
#define ERRORMAX 1.0

int trainingRoutine (double wgtsI[NODESIN][NODESHIDDEN],


double wgtsO[NODESHIDDEN][NODESOUT],
double vctrIn[VECTORSIN][32], double vctrOut[NODESOUT], int vectorCount);

int readData (double v[VECTORSIN][32]);

void debugInfo(int i, int loops, double vctrIn[VECTORSIN][NODESIN+NODESOUT],


double outputNodes[NODESOUT], double totalError, int goodRaces,
double wgtsI[NODESIN][NODESHIDDEN], double wgtsO[NODESHIDDEN][NODESOUT]);

int testData (int i, double vctrIn[VECTORSIN][NODESIN + NODESOUT],


double outputNodes[NODESOUT] );

void randomizeWeights ( double weightsI[NODESIN][NODESHIDDEN],
double weightsO[NODESHIDDEN][NODESOUT]);

int main (void)


{

//create neural net with 24 inputs (3 per dog)


//16 hidden nodes and 8 output nodes

//create 2-2d weight tables

//weights between input and hidden layer


double weightsI[NODESIN][NODESHIDDEN];

//weights between hidden layer and output


double weightsO[NODESHIDDEN][NODESOUT];

//output vector
double outputData[NODESOUT];
for (int i=0; i<NODESOUT; i++){
outputData[i] = 0.0;
}

//randomize initial weights


randomizeWeights(weightsI, weightsO);

//create table to store info


//2 d table, one row per vector
// store input/output/nnoutput in columns.
double information[VECTORSIN][NODESIN + 2 * NODESOUT ];

//open input file for reading (data.dat)


//open data file, parse the data and stuff it
//in an array
// handicap/odd/weight * 8 + 8 final positions
double inputData[VECTORSIN][32];
int numberOfVectors = readData( inputData );

//run training routine
trainingRoutine ( weightsI, weightsO, inputData, outputData, numberOfVectors);

//write out final weights


//create, open, write, close weight files
ofstream fout("weights.dat");

if (!fout.is_open()){
cerr << "Could not create weights.dat" << endl;
exit(1);
}

fout << "\nWeights between input and hidden layers\n\n";


for( int i=0; i< NODESIN; i++){
fout << "\n\n";
for ( int j=0; j<NODESHIDDEN; j++){
fout << " " << weightsI[i][j] << ",";
}
}
fout << "\n\nWeights between hidden and output layers\n\n";
for ( int i=0; i< NODESHIDDEN; i++){
fout << "\n\n";
for (int j=0; j< NODESOUT; j++){
fout << " " << weightsO[i][j] << ",";
}
}

fout.close();

return 0;
}
int trainingRoutine (double wgtsI[NODESIN][NODESHIDDEN],
double wgtsO[NODESHIDDEN][NODESOUT],
double vctrIn[VECTORSIN][NODESIN+NODESOUT],
double vctrOut[NODESOUT], int vectorCount)
{

double outputNodes[NODESOUT];
double hiddenNodes[NODESHIDDEN];
double errorO[NODESOUT];
double errorI[NODESHIDDEN];
double bias = 0.0;
double threshold = 0.20;
double learningRateI = 0.20;
double learningRateO = 0.20;
double errorAdjustmentO[NODESOUT];
double errorAdjustmentI[NODESHIDDEN];
int loops = 0;
int badloops = 0;
int goodRaces=0, badRaces = 0;

for (int i=0; i<NODESOUT; i++){


outputNodes[i] = 0.0;
}

for ( int i=0; i<NODESHIDDEN; i++){


hiddenNodes[i] = 0.0;
}

//********main training loop

//for each vector


for ( int i = 0; i< vectorCount; i++ ){

//reset some things for each vector


double totalError = 0.0;
for (int m=0; m<NODESOUT; m++){ outputNodes[m] = 0.0; }
for ( int m=0; m<NODESOUT; m++){ errorO[m] = 0.0; }
for (int m=0; m< NODESHIDDEN; m++){ errorI[m] = 0.0; }
for ( int m=0; m<NODESHIDDEN; m++){ hiddenNodes[m] = 0.0; }
badloops = 0;

//loop until converge for the vector,
//or maximum error allowed error is reached.
for ( int loops=0; loops< LOOPSMAX; loops++){

//take the input vector and multiply it by each weight


//sum up the total for each of the hidden neurodes
for( int j=0; j<NODESIN; j++){
for (int k=0; k<NODESHIDDEN; k++){
hiddenNodes[k] += vctrIn[i][j] * wgtsI[j][k];

}
}

//add bias
//stabilize with sigmoid function
//see if over threshold to fire
for ( int j=0; j<NODESHIDDEN; j++){

hiddenNodes[j] += bias;
hiddenNodes[j] = 1/ (1 + exp(-hiddenNodes[j]));

//don't fire if below threshold


if ( hiddenNodes[j] < threshold){
hiddenNodes[j] = 0.0;
}
}

//now do this again for the next layer


//accumulate total input
for (int j=0; j<NODESHIDDEN; j++){
for (int k=0; k< NODESOUT; k++){
outputNodes[k] += hiddenNodes[j] * wgtsO[j][k];

}
}

for ( int j=0; j<NODESOUT; j++){

outputNodes[j] += bias;
outputNodes[j] = 1/ (1 + exp(-outputNodes[j]));

//don't fire if below threshold
if (outputNodes[j] < threshold){
outputNodes[j] = 0.0;
}
}

//determine error
totalError = 0.0;
for (int j=0; j< NODESOUT; j++){
errorO[j] = outputNodes[j] - vctrIn[i][NODESIN + j];
totalError += errorO[j];
}

//propagate the output error back to the hidden layer
for (int j=0; j<NODESHIDDEN; j++){

errorAdjustmentI[j] = 0.0;
}

for (int j=0; j<NODESHIDDEN; j++){

for (int k=0; k<NODESOUT; k++){
errorAdjustmentI[j] += wgtsO[j][k] * errorO[k];
}
}

for (int j=0; j<NODESHIDDEN; j++){

errorAdjustmentI[j] = hiddenNodes[j] * ( 1 - hiddenNodes[j])
* errorAdjustmentI[j];
}

//now adjust each weight


//output layer
for(int j=0; j<NODESHIDDEN; j++){
for ( int k=0; k< NODESOUT; k++){
wgtsO[j][k] += learningRateO * hiddenNodes[j] * errorO[k];

}
}

//input layer

for( int j=0; j<NODESIN; j++){
for (int k=0; k< NODESHIDDEN; k++){
wgtsI[j][k] += learningRateI * vctrIn[i][j] *
errorAdjustmentI[k] ;

}
}

int done = testData(i, vctrIn, outputNodes);


if ( ( done == 1 ) || (badloops > LOOPSMAX) ){

if ( done == 1){ goodRaces++; }


badloops = 0;
debugInfo( i, loops, vctrIn, outputNodes, totalError,
goodRaces, wgtsI, wgtsO );
break;

} else {

if ( totalError > 3.0 ){

learningRateO = .60;
learningRateI = .60;
}else if ( totalError > 2.0 ){
learningRateO = .50;
learningRateI = .50;
}else if ( totalError > 1.0 ){
learningRateO = .40;
learningRateI = .40;
}else if (totalError > 0.5){
learningRateO = .30;
learningRateI = .30;
}else {
learningRateO = .10;
learningRateI = .10;
}

badloops++;
}
//debugInfo( i, loops, vctrIn, outputNodes, totalError, goodRaces, wgtsI, wgtsO );

}//end loops over a vector

}//******************end training loop (move to next vector)

return 0;
}

void randomizeWeights ( double weightsI[NODESIN][NODESHIDDEN],


double weightsO[NODESHIDDEN][NODESOUT]){

//randomize initial weights


srand ( time(0));
double n;

for (int i=0; i<NODESIN; i++){


for ( int j=0; j<NODESHIDDEN; j++){

n = rand() % 2;

if ( n == 0){
weightsI[i][j] = ((double) (rand()))/RAND_MAX;
}else {
weightsI[i][j] = (-1.0) * ((double) (rand()))/RAND_MAX;
}
}
}

for (int i=0; i<NODESHIDDEN; i++){


for (int j=0; j<NODESOUT; j++){

n = rand() % 2;

if ( n == 0){
weightsO[i][j] = ((double) (rand()))/RAND_MAX;
}else {
weightsO[i][j] = (-1.0) * ((double) (rand()))/RAND_MAX;
}
}
}

}

//read in the data file that was created by


//the two perl routines and parse the data into
//an array, one line per vector, 24 inputs, 3 outputs
//no idiot checking since we created this file and
//checked it ourselves, assume proper formatting
//of the data

int readData (double v[VECTORSIN][32])


{

ifstream fin ("data.dat");


char tempString[256];
int count = 0; //how many vectors do we have?
int track = 0;

if ( !fin.is_open()){
printf ( "\nCould NOT open data.dat ");
exit (1);
}

//while more vectors in to read


while ( fin ){

fin >> tempString;


track = 0;

//27 numbers all separated by 26 commas


//double, double, int, .... x 8
//handicap (between 0 and 1)
//odds (between 0 and )
//weight (between 0 and )

int length = strlen(tempString);

char temp[257];
int endOfLine = 0;

int j=0;
for (int i=0; i<length; i++){

if ( tempString[i] != ',' ){

//take the non ',' char and concat it to end of temp


temp[j] = tempString[i];
j++;

}else{

//convert temp to a double


temp[j+1] = '\0';
float tempNumber = atof (temp);

//put it into training array


v[count][track] = (double) tempNumber;

//adjust odds
if (( track%3 == 0) &&( track < 24)){
// v[count][track] /= 10.0;
}

//adjust dog's weight so net is not dominated by them


if( ( (track+1) %3 == 0)&&(track < 24)){
v[count][track] /= 100.0;
}
//adjust handicap
if (( (track+2)%3 == 0) && (track < 24)) {
v[count][track] /= 10.0;
}

//jump to next column in array and reset temp array


track++;
for( int k=0; k<257; k++){
temp[k] = ' ';
}
j=0;

}
}
count++;
}

count -= 1;

//adjust win/place/show
for ( int i=0; i<count; i++){

double win = v[i][24];


double place = v[i][25];
double show = v[i][26];

v[i][24] = 0;
v[i][25] = 0;
v[i][26] = 0;

int w = (int)(23 + win);


int p = (int)(23 + place);
int s = (int)(23 + show);

v[i][w] = .75;
v[i][p] = .50;
v[i][s] = .25;
}

fin.close();
return count;
}
void debugInfo(int i, int loops, double vctrIn[VECTORSIN][NODESIN+NODESOUT],


double outputNodes[NODESOUT], double totalError, int goodRaces,
double wgtsI[NODESIN][NODESHIDDEN], double wgtsO[NODESHIDDEN][NODESOUT])
{

//store some data in a text file each loop


//input // output // actual output
ofstream fptr;
fptr.open("error.dat", ios::app);

if (!fptr.is_open()){
cerr << "Could not create error.dat" << endl;
exit(1);
}

//this file can get quite large, I only used it for debugging
//dump some info to file for review
fptr <<"\n*********************************************************\n";
fptr << "\n\n Vector " << i << ", loop Number " << loops << endl;

fptr << "\n Input \t" ;


for ( int z=0; z <= NODESIN; z++ ){
fptr << vctrIn[i][z] << ", ";
if ( z%3 == 0 ){
fptr << endl ;
}
}

fptr << "\n\n Actual \t Desired " << endl;


for (int z=0; z<NODESOUT; z++){
fptr << outputNodes[z] << "\t" << vctrIn[i][z + NODESIN] << endl;
}

totalError = sqrt (totalError * totalError);


fptr << "\n\n\ntotalError " << totalError << "\t\t";

//for readablility only


int first=0, second=0, third=0;
for (int m=0; m<NODESOUT; m++){
if ( vctrIn[i][m + NODESIN] == .75 ){
first = m+1;
}else if ( vctrIn[i][m + NODESIN] == .50 ){
second = m+1;
}else if ( vctrIn[i][m + NODESIN] == .25 ){
third = m+1;
}
}
fptr <<"\n Actual: " << first << ", " << second << ", " << third << "\t\t";

//convert output to easily readable information
int w = 0, p = 0, s = 0;
double win=0.0, place=0.0, show=0.0;
double temp[NODESOUT];
for ( int m=0; m< NODESOUT; m++){
temp[m] = outputNodes[m];
}
for (int m=0; m< NODESOUT; m++){
if( temp[m] > win){
win = temp[m];
w = m+1;
}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}

fptr << "\n Predicted:" ;


fptr << " " << w << ", " << p << ", " << s;
fptr << endl;

/*
//weights
fptr << endl;

fptr << "\n Weights (input side) " << endl;


for ( int m=0; m<NODESIN; m++){
for ( int n=0; n< NODESHIDDEN; n++){

fptr << wgtsI[m][n] << ",\t ";
}
fptr << endl;
}

fptr << "\n Weights (output side) " << endl;


for (int m=0; m < NODESHIDDEN; m++){
for (int n=0; n< NODESOUT; n++){
fptr << wgtsO[m][n] << ",\t ";
}
fptr << endl;
}

fptr <<"\n**********************************************************\n";
fptr <<"good races " << goodRaces << " bad races " << i-goodRaces << endl;
*/
fptr.close();
}
int testData (int i, double vctrIn[VECTORSIN][NODESIN + NODESOUT],


double outputNodes[NODESOUT] )
{

//convert raw numbers to win/place/show dog


int first=0, second=0, third=0;
for (int m=0; m<NODESOUT; m++){
if ( vctrIn[i][m + NODESIN] == .75 ){
first = m+1;
}else if ( vctrIn[i][m + NODESIN] == .50 ){
second = m+1;
}else if ( vctrIn[i][m + NODESIN] == .25 ){
third = m+1;
}
}
//convert output to easily readable information


int w = 0, p = 0, s = 0;
double win=0.0, place=0.0, show=0.0;

double temp[NODESOUT];
for ( int m=0; m< NODESOUT; m++){
temp[m] = outputNodes[m];
}
for (int m=0; m< NODESOUT; m++){
if( temp[m] > win){
win = temp[m];
w = m+1;
}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}

if ( ( first == w) && ( second == p ) && ( third == s ) ){


return 1;
}else{
return 0;
}
}

//---predictor.cpp---

//this part of dogs track neural net reads in


//the weight table and gets info from user to predict the
//winner

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;

#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8

int readWeights ( double weightsI[NODESIN][NODESHIDDEN],


double weightsO[NODESHIDDEN][NODESOUT]);

int predict( double dataIn [NODESIN], double dataOut[NODESOUT],


double weightsI[NODESIN][NODESHIDDEN],double weightsO[NODESHIDDEN][NODESOUT]);

int userInput( double dataIn [NODESIN]);


int userOutput( double dataOut [NODESOUT]);

int main (void)


{

//set up stuff
double weightsI[NODESIN][NODESHIDDEN];
double weightsO[NODESHIDDEN][NODESOUT];
double outputData[NODESOUT];
for (int i=0; i<NODESOUT; i++){ outputData[i] = 0.0; }

//read in weights from file
readWeights(weightsI, weightsO);

//get info from user


//and create a vector to put through net
double dataIn[NODESIN];
userInput(dataIn);

//run input data through nn


predict(dataIn, outputData, weightsI, weightsO);

//output data to user


userOutput(outputData);

return 0;
}

int predict (double dataIn[NODESIN], double outputNodes[NODESOUT],


double wgtsI[NODESIN][NODESHIDDEN], double wgtsO[NODESHIDDEN][NODESOUT])
{

double bias = 0.0;


double threshold = 0.20;
double hiddenNodes[NODESHIDDEN];

for (int m=0; m<NODESOUT; m++){ outputNodes[m] = 0.0; }


for ( int m=0; m<NODESHIDDEN; m++){ hiddenNodes[m] = 0.0; }

//take the input vector and multiply it by each weight


//sum up the total for each of the hidden neurodes
for( int j=0; j<NODESIN; j++){
for (int k=0; k<NODESHIDDEN; k++){
hiddenNodes[k] += dataIn[j] * wgtsI[j][k];

}
}

//add bias

//stabilize with sigmoid function
//see if over threshold to fire
for ( int j=0; j<NODESHIDDEN; j++){

hiddenNodes[j] += bias;
hiddenNodes[j] = 1/ (1 + exp(-hiddenNodes[j]));

//don't fire if below threshold


if ( hiddenNodes[j] < threshold){
hiddenNodes[j] = 0.0;
}
}

//now do this again for the next layer


//accumulate total input
for (int j=0; j<NODESHIDDEN; j++){
for (int k=0; k< NODESOUT; k++){
outputNodes[k] += hiddenNodes[j] * wgtsO[j][k];

}
}

for ( int j=0; j<NODESOUT; j++){


outputNodes[j] += bias;
outputNodes[j] = 1/ (1 + exp(outputNodes[j]));

//don't fire if below threshold
if (outputNodes[j] < threshold){
outputNodes[j] = 0.0;
}
}

return 0;
}

int userInput( double dataIn [NODESIN])
{

//initalize vector
for ( int i=0; i<NODESIN; i++){
dataIn[i] = 0.0;
}

//handicap, odds, weight x 8


//h1, o1, w1, h2, o2, w2, h3, o3, w3, h4, o4, w4 ...
// 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ...
//we need the weight and odds for each of the 8 dogs
//divide the odds by 10
//divide the weight by 100

for (int d=0; d<8; d++){

cout << "\n Enter dog " << (d+1)


<< "'s odds, weight separated by a space and hit enter =>> " ;
cin >> dataIn[1 + d*3];
cin >> dataIn[2 + d*3];
dataIn[1 + d*3] /= 10.0;
dataIn[2 + d*3] /= 100.0;
}

int h1=0, h2=0, h3=0, h4=0;


//we need the handicaps in order
cout << "\n Now enter the handicaps for the race, if there are 3 instead of 4";
cout << "\n handicaps enter a zero for the 4th " << endl;

cout << "\n hit enter after each number " << endl;
cin >> h1;
cin >> h2;
cin >> h3;
cin >> h4;

//put a zero in dogs not listed


//if there are 3 handicaps then set the dog's handicaps to 1, 2/3, 1/3
if ( h4 ==0 ){
h1--; dataIn[h1*3] = 1.0;

h2--; dataIn[h2*3] = .67;
h3--; dataIn[h3*3] = .34;
}else{
//if there are 4 handicaps then set the dog's handicaps to 1, 3/4, 1/2, 1/4
h1--; dataIn[h1*3] = 1.0;
h2--; dataIn[h2*3] = .75;
h3--; dataIn[h3*3] = .50;
h4--; dataIn[h4*3] = .25;
}

return 0;
}

//read in weight file


int readWeights (double weightsI[NODESIN][NODESHIDDEN],
double weightsO[NODESHIDDEN][NODESOUT] )
{

//counters to keep track of weights


int input = 0;
int hiddenI = -1;
int hiddenO = 0;
int output = 0;

ifstream fin ("weights.dat");


char tempString[256];

if ( !fin.is_open()){
printf ( "\nCould NOT open weights.dat ");
exit (1);
}

//while more weights to read


while ( fin ){

fin >> tempString;

//weights file has one row for each input node and one weight for
//each hidden node in the row
//then we have one row for each hidden node
//and one weight for each output

int length = strlen(tempString);


char temp[257];
int endOfLine = 0;

int j=0;
for (int i=0; i<length; i++){

if ( tempString[i] != ',' ){

//take the non ',' char and concat it to end of temp


temp[j] = tempString[i];
j++;

}else{

//convert temp to a double


temp[j] = '\0'; //terminate the accumulated token
float tempNumber = atof (temp);

//put it into weight array


//weightsI[NODESIN][NODESHIDDEN] = (double) tempNumber;
//weightsO[NODESHIDDEN][NODESOUT] = (double) tempNumber;
if ( input < NODESIN ){
if ( hiddenI < (NODESHIDDEN-1) ){
hiddenI++;
weightsI[input][hiddenI] = tempNumber;
}else {
hiddenI = 0;
input++;
if ( input < NODESIN ){
weightsI[input][hiddenI] = tempNumber;
}else {
weightsO[hiddenI][output] = tempNumber;
}
}
}else{
if (output < (NODESOUT-1)){
output++;
weightsO[hiddenO][output] = tempNumber;

} else {
output = 0;
hiddenO++;
weightsO[hiddenO][output] = tempNumber;
}
}

//jump to next column in array and reset temp array


for( int k=0; k<257; k++){
temp[k] = ' ';
}
j=0;
}
}

}

fin.close();
return 0;
}

int userOutput( double dataOut [NODESOUT]){

//convert output to easily readable information


int w = 0, p = 0, s = 0;
double win=0.0, place=0.0, show=0.0;
double temp[NODESOUT];

for ( int m=0; m< NODESOUT; m++){


temp[m] = dataOut[m];
}

for (int m=0; m< NODESOUT; m++){


if( temp[m] > win){
win = temp[m];
w = m+1;
}

}

temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}

temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}

cout << "\n Predicted:" ;


cout << " " << w << ", " << p << ", " << s;
cout << endl;

return 0;
}

//---testnet.cpp

//neural net to better pick winning dogs


//data is downloaded from the racing tracks
//and parsed using cleandata.pl followed by formatData.pl
//this program then takes that data and creates a weight table
//using a backpropagation neural net.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;

#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8
#define VECTORSIN 2000
#define LOOPSMAX 100
#define ERRORMAX 1.0

int trainingRoutine (double wgtsI[NODESIN][NODESHIDDEN],


double wgtsO[NODESHIDDEN][NODESOUT], double vctrIn[VECTORSIN][32],
double vctrOut[NODESOUT], int vectorCount);

int readData (double v[VECTORSIN][32]);

void debugInfo(int i, int loops, double vctrIn[VECTORSIN][NODESIN+NODESOUT],


double outputNodes[NODESOUT], double totalError, int goodRaces,
double wgtsI[NODESIN][NODESHIDDEN], double wgtsO[NODESHIDDEN][NODESOUT]);

int testData (int i, double vctrIn[VECTORSIN][NODESIN + NODESOUT],


double outputNodes[NODESOUT] );

int readWeights ( double weightsI[NODESIN][NODESHIDDEN],
double weightsO[NODESHIDDEN][NODESOUT]);

int main (void)


{

//create neural net with 24 inputs (3 per dog)


//16 hidden nodes and 8 output nodes

double weightsI[NODESIN][NODESHIDDEN];
double weightsO[NODESHIDDEN][NODESOUT];

//output vector
double outputData[NODESOUT];
for (int i=0; i<NODESOUT; i++){
outputData[i] = 0.0;
}

//read in weights from file


readWeights(weightsI, weightsO);

//create table to store info


//2 d table, one row per vector
// store input/output/nnoutput in columns.
double information[VECTORSIN][NODESIN + 2 * NODESOUT ];

//open input file for reading (data.dat)


//open data file, parse the data and stuff it
//in an array
// handicap/odd/weight * 8 + 8 final positions
double inputData[VECTORSIN][32];
int numberOfVectors = readData( inputData );

//run training routine


trainingRoutine(weightsI, weightsO, inputData, outputData, numberOfVectors);

}

//read in weight file


int readWeights (double weightsI[NODESIN][NODESHIDDEN],
double weightsO[NODESHIDDEN][NODESOUT] )
{

//counters to keep track of weights


int input = 0;
int hiddenI = -1;
int hiddenO = 0;
int output = 0;

ifstream fin ("weights.dat");


char tempString[256];

if ( !fin.is_open()){
printf ( "\nCould NOT open weights.dat ");
exit (1);
}

//while more weights to read


while ( fin ){

fin >> tempString;

//weights file has one row for each input node and one weight for
//each hidden node in the row
//then we have one row for each hidden node
//and one weight for each output

int length = strlen(tempString);


char temp[257];

int endOfLine = 0;

int j=0;
for (int i=0; i<length; i++){

if ( tempString[i] != ',' ){

//take the non ',' char and concat it to end of temp


temp[j] = tempString[i];
j++;

}else{

//convert temp to a double


temp[j] = '\0'; //terminate the accumulated token
float tempNumber = atof (temp);

//put it into weight array


//weightsI[NODESIN][NODESHIDDEN] = (double) tempNumber;
//weightsO[NODESHIDDEN][NODESOUT] = (double) tempNumber;
if ( input < NODESIN ){
if ( hiddenI < (NODESHIDDEN-1) ){
hiddenI++;
weightsI[input][hiddenI] = tempNumber;
}else {
hiddenI = 0;
input++;
if ( input < NODESIN ){
weightsI[input][hiddenI] = tempNumber;
}else {
weightsO[hiddenI][output] = tempNumber;
}
}
}else{
if (output < (NODESOUT-1)){
output++;
weightsO[hiddenO][output] = tempNumber;
} else {
output = 0;
hiddenO++;
weightsO[hiddenO][output] = tempNumber;
}
}

//jump to next column in array and reset temp array
for( int k=0; k<257; k++){
temp[k] = ' ';
}
j=0;
}
}

}

fin.close();
return 0;
}

int trainingRoutine (double wgtsI[NODESIN][NODESHIDDEN],


double wgtsO[NODESHIDDEN][NODESOUT],
double vctrIn[VECTORSIN][NODESIN+NODESOUT],
double vctrOut[NODESOUT], int vectorCount)
{

double outputNodes[NODESOUT];
double hiddenNodes[NODESHIDDEN];
double errorO[NODESOUT];
double errorI[NODESHIDDEN];
double bias = 0.0;
double threshold = 0.20;
double learningRateI = 0.20;
double learningRateO = 0.20;
double errorAdjustmentO[NODESOUT];
double errorAdjustmentI[NODESHIDDEN];
int loops = 0;
int badloops = 0;
int goodRaces=0, badRaces = 0;
int loopCount = 0;

for (int i=0; i<NODESOUT; i++){


outputNodes[i] = 0.0;
}

for ( int i=0; i<NODESHIDDEN; i++){
hiddenNodes[i] = 0.0;
}

//for each vector


for ( int i = 0; i< vectorCount; i++ ){

//reset some things for each vector


double totalError = 0.0;
for (int m=0; m<NODESOUT; m++){ outputNodes[m] = 0.0; }
for ( int m=0; m<NODESOUT; m++){ errorO[m] = 0.0; }
for (int m=0; m< NODESHIDDEN; m++){ errorI[m] = 0.0; }
for ( int m=0; m<NODESHIDDEN; m++){ hiddenNodes[m] = 0.0; }
badloops = 0;

//take the input vector and multiply it by each weight


//sum up the total for each of the hidden neurodes
for( int j=0; j<NODESIN; j++){
for (int k=0; k<NODESHIDDEN; k++){
hiddenNodes[k] += vctrIn[i][j] * wgtsI[j][k];

}
}

//add bias
//stabilize with sigmoid function
//see if over threshold to fire
for ( int j=0; j<NODESHIDDEN; j++){

hiddenNodes[j] += bias;
hiddenNodes[j] = 1/ (1 + exp(-hiddenNodes[j]));

//don't fire if below threshold


if ( hiddenNodes[j] < threshold){
hiddenNodes[j] = 0.0;

}
}

//now do this again for the next layer


//accumulate total input
for (int j=0; j<NODESHIDDEN; j++){
for (int k=0; k< NODESOUT; k++){
outputNodes[k] += hiddenNodes[j] * wgtsO[j][k];

}
}

for ( int j=0; j<NODESOUT; j++){


outputNodes[j] += bias;
outputNodes[j] = 1/ (1 + exp(outputNodes[j]));

//don't fire if below threshold
if (outputNodes[j] < threshold){
outputNodes[j] = 0.0;
}
}

//determine error
totalError = 0.0;
for (int j=0; j< NODESOUT; j++){
errorO[j] = outputNodes[j] - vctrIn[i][NODESIN + j];
totalError += errorO[j];
}

debugInfo( i, loops, vctrIn, outputNodes, totalError,


goodRaces, wgtsI, wgtsO );

}//******************end training loop (move to next vector)

return 0;
}

//read in the data file that was created by
//the two perl routines and parse the data into
//an array, one line per vector, 24 inputs, 3 outputs
//no idiot checking since we created this file and
//checked it ourselves, assume proper formatting
//of the data

int readData (double v[VECTORSIN][32])


{

ifstream fin ("data.dat");


char tempString[256];
int count = 0; //how many vectors do we have?
int track = 0;

if ( !fin.is_open()){
printf ( "\nCould NOT open data.dat ");
exit (1);
}

//while more vectors in to read


while ( fin ){

fin >> tempString;


track = 0;

//27 numbers per line, each followed by a comma


//double, double, int, .... x 8
//handicap (between 0 and 1)
//odds (between 0 and )
//weight (between 0 and )

int length = strlen(tempString);


char temp[257];
int endOfLine = 0;

int j=0;
for (int i=0; i<length; i++){

if ( tempString[i] != ',' ){

//take the non ',' char and concat it to end of temp


temp[j] = tempString[i];
j++;

}else{

//convert temp to a double


temp[j] = '\0'; //terminate the accumulated token
float tempNumber = atof (temp);

//put it into training array


v[count][track] = (double) tempNumber;

//handicap column is already between 0 and 1, no adjustment needed
if (( track%3 == 0) &&( track < 24)){
// v[count][track] /= 10.0;
}

//adjust dog's weight so net is not dominated by them

if( ( (track+1) %3 == 0)&&(track < 24)){
v[count][track] /= 100.0;
}

//adjust odds
if (( (track+2)%3 == 0) && (track < 24)) {
v[count][track] /= 10.0;
}

//jump to next column in array and reset temp array


track++;
for( int k=0; k<257; k++){
temp[k] = ' ';
}
j=0;
}
}
count++;

}

count -= 1;

//adjust win/place/show
for ( int i=0; i<count; i++){

double win = v[i][24];


double place = v[i][25];
double show = v[i][26];

v[i][24] = 0;
v[i][25] = 0;
v[i][26] = 0;

int w = (int)(23 + win);


int p = (int)(23 + place);
int s = (int)(23 + show);

v[i][w] = .75;
v[i][p] = .50;
v[i][s] = .25;
}

fin.close();
return count;
}

void debugInfo(int i, int loops, double vctrIn[VECTORSIN][NODESIN+NODESOUT],


double outputNodes[NODESOUT], double totalError, int goodRaces,
double wgtsI[NODESIN][NODESHIDDEN], double wgtsO[NODESHIDDEN][NODESOUT])
{

//store some data in a text file each loop


//input // output // actual output
ofstream fptr;
fptr.open("testData.dat", ios::app);

if (!fptr.is_open()){
cerr << "Could not create testData.dat" << endl;
exit(1);

}

static double score = 0.0;

//this file can get quite large, I only used it for debugging
//dump some info to file for review
fptr <<"\n******************************************************\n";
fptr << "\n Vector # " << i << endl;

totalError = sqrt (totalError * totalError);


fptr << "\n\n\ntotalError " << totalError << "\t\t";

//for readablility only


int first=0, second=0, third=0;
for (int m=0; m<NODESOUT; m++){
if ( vctrIn[i][m + NODESIN] == .75 ){
first = m+1;
}else if ( vctrIn[i][m + NODESIN] == .50 ){
second = m+1;
}else if ( vctrIn[i][m + NODESIN] == .25 ){
third = m+1;
}
}

fptr << "\n Actual:\t" << first << ",\t" << second << ",\t" << third
<< "\t\t";

//convert output to easily readable information


int w = 0, p = 0, s = 0;
double win=0.0, place=0.0, show=0.0;
double temp[NODESOUT];
for ( int m=0; m< NODESOUT; m++){
temp[m] = outputNodes[m];
}
for (int m=0; m< NODESOUT; m++){
if( temp[m] > win){
win = temp[m];
w = m+1;

}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}

fptr << "\n Predicted: \t" << w << ", \t" << p << ", \t" << s;
fptr << endl;

int right = 0;
if ( (first == w )||(second == w )||(third == w)){
right++;
}
if ( (second == p )||(first == p )||(third == p)){
right++;
}
if ( (third == s)||(first == s)||(second ==s)){
right++;
}

score += right/3.0;

fptr << "\ncorrectly guessed dogs : "<< right<<" average "<<


score/(i+1) << endl;

fptr.close();
}

int testData (int i, double vctrIn[VECTORSIN][NODESIN + NODESOUT],
double outputNodes[NODESOUT] )
{

//convert raw numbers to win/place/show dog


int first=0, second=0, third=0;
for (int m=0; m<NODESOUT; m++){
if ( vctrIn[i][m + NODESIN] == .75 ){
first = m+1;
}else if ( vctrIn[i][m + NODESIN] == .50 ){
second = m+1;
}else if ( vctrIn[i][m + NODESIN] == .25 ){
third = m+1;
}
}

//convert output to easily readable information


int w = 0, p = 0, s = 0;
double win=0.0, place=0.0, show=0.0;
double temp[NODESOUT];
for ( int m=0; m< NODESOUT; m++){
temp[m] = outputNodes[m];
}
for (int m=0; m< NODESOUT; m++){
if( temp[m] > win){
win = temp[m];
w = m+1;
}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;

}
}

if ( ( first == w) && ( second == p ) && ( third == s ) ){


return 1;
}else{
return 0;
}
}

--example.error.dat

***********************************************************************

Vector 0, loop Number 60

Input 0,
0.5, 0.52, 1,
0.45, 0.82, 0.666667,
0.35, 0.72, 0.333333,
0.25, 0.68, 0,
0.6, 0.61, 0,
1, 0.84, 0,
1.2, 0.61, 0,
0.8, 0.69, 0.5,

Actual Desired
0.229886 0.5
0.632139 0.75
0.117174 0
0.228958 0.25
0.126706 0
0.166924 0
0.0692887 0
0.22751 0

totalError 0.298586
Actual: 2, 1, 4
Predicted: 2, 1, 4

***********************************************************************

Vector 1, loop Number 41

Input 0,
1, 0.55, 0,
0.8, 0.6, 0,
1.2, 0.54, 0,
0.6, 0.55, 0.666667,
0.35, 0.69, 0,

0.5, 0.6, 0.333333,
0.25, 0.58, 1,
0.45, 0.63, 0.25,

Actual Desired
0.288367 0.25
0.219338 0
0.145928 0
0.109193 0
0.113589 0
0.500957 0.75
0.289854 0.5
0.129178 0

totalError 0.296404
Actual: 6, 7, 1
Predicted: 6, 7, 1

***********************************************************************

Vector 2, loop Number 61

Input 0,
0.5, 0.74, 0,
1.2, 0.69, 0.333333,
0.25, 0.59, 0,
0.6, 0.57, 0.666667,
0.35, 0.71, 0,
0.8, 0.7, 1,
0.45, 0.6, 0,
1, 0.69, 0,

Actual Desired
0.0956196 0
0.0998712 0
0.0435176 0
0.0440099 0
0.158237 0.25
0.157403 0
0.599425 0.75
0.413257 0.5

totalError 0.111342
Actual: 7, 8, 5
Predicted: 7, 8, 5

***********************************************************************

Vector 3, loop Number 49

Input 0,
1, 0.55, 0,
0.6, 0.58, 0,
0.8, 0.55, 0,
1.2, 0.55, 1,
0.45, 0.74, 0.666667,
0.35, 0.67, 0.333333,
0.25, 0.76, 0,
0.5, 0.55, 0,

Actual Desired
0.076147 0
0.0524968 0
0.517485 0.75
0.0393935 0
0.105092 0
0.377277 0.5
0.373286 0.25
0.155083 0

totalError 0.196261
Actual: 3, 6, 7
Predicted: 3, 6, 7

***********************************************************************

Vector 4, loop Number 44

Input 0,
0.8, 0.58, 0,

1, 0.6, 0,
0.6, 0.57, 0,
0.5, 0.6, 1,
0.45, 0.63, 0.666667,
0.35, 0.73, 0,
1.2, 0.71, 0.333333,
0.25, 0.72, 0,

Actual Desired
0.0416575 0
0.0540597 0
0.478243 0.5
0.0508952 0
0.152083 0.25
0.126667 0
0.147998 0
0.532813 0.75

totalError 0.0844163
Actual: 8, 3, 5
Predicted: 8, 3, 5

***********************************************************************

--example.test.data.dat
0.5,4.5,64,0.25,3.5,56,0,2.5,61,0,6,58,0,8,67,0,10,58,1,12,64,0.75,5,64,2,3,5,

0,4.5,74,0,6,58,0.333333333333333,8,79,0.666666666666667,2.5,81,0,3.5,62,0,10,
72,0,12,70,1,5,58,4,8,1,

0.333333333333333,12,63,1,2.5,58,0,4.5,64,0,8,59,0.666666666666667,5,55,0,3.5,
72,0,6,62,0,10,66,2,5,1,

0,3.5,60,0,10,67,0,12,72,1,5,67,0,4.5,74,0,6,68,0.333333333333333,8,59,
0.666666666666667,2.5,58,7,8,3,

1,3.5,62,0,4.5,62,0,5,59,0,8,79,0.333333333333333,6,61,0,2.5,57,0,10,77,
0.666666666666667,12,59,2,3,7,

0.333333333333333,5,71,0,2.5,71,0,12,54,1,8,61,0,4.5,52,0.666666666666667,10,
77,0,3.5,73,0,6,54,4,3,1,

0,3.5,81,0,5,70,1,12,76,0.333333333333333,4.5,71,0,2.5,67,0,10,65,0,8,61,
0.666666666666667,6,62,8,3,7,

0,8,60,0.666666666666667,10,78,0,3.5,55,0,5,76,0,12,75,0.333333333333333,6,57,
1,2.5,77,0,4.5,59,2,3,5,

0,8,75,0,12,75,1,5,62,0.333333333333333,4.5,69,0,2.5,68,0.666666666666667,6,72,
0,3.5,62,0,10,72,4,8,5,

0.666666666666667,4.5,64,0,3.5,59,0,8,60,0.333333333333333,12,57,0,2.5,63,0,6,
62,0,5,59,1,10,60,1,6,5,

0.333333333333333,5,62,0,2.5,79,0,6,77,0,12,72,0.666666666666667,8,75,1,3.5,69,
0,4.5,69,0,10,58,6,7,1,

0,8,58,0,2.5,73,0,12,72,0,15,58,0,5,60,0,6,82,0,3.5,68,0,8,63,6,5,8,

0,5,73,0,12,70,0.25,4.5,75,0,10,70,0.5,8,65,1,2.5,58,0,6,57,0.75,3.5,69,1,5,7,

0,5,74,1,12,58,0.666666666666667,10,68,0.333333333333333,8,60,0,4.5,76,0,3.5,
74,0,2.5,73,0,6,71,4,3,7,

0,8,55,0,12,71,1,10,77,0.333333333333333,6,71,0.666666666666667,5,64,0,4.5,64,
0,2.5,57,0,3.5,68,2,4,7,

0,8,75,1,3.5,61,0,2.5,79,0,10,75,0,4.5,61,0,12,73,0.666666666666667,5,56,
0.333333333333333,6,58,8,6,7,

0,6,72,0,3.5,55,1,5,61,0.333333333333333,10,74,0,12,69,0,4.5,59,
0.666666666666667,2.5,70,0,8,69,4,1,3,

1,8,60,0,10,82,0,3.5,69,0,4.5,63,0.333333333333333,12,64,0,5,72,0,6,58,
0.666666666666667,2.5,72,3,8,5,

0.333333333333333,5,62,0,3.5,62,1,6,64,0,2.5,53,0,8,61,0,4.5,63,
0.666666666666667,10,59,0,12,73,2,1,6,

0,8,61,0.333333333333333,6,64,0,12,72,1,5,75,0.666666666666667,2.5,76,0,10,64,
0,4.5,58,0,3.5,74,3,5,6,

1,5,62,0,10,58,0.666666666666667,2.5,60,0,4.5,73,0,12,79,0,3.5,60,0,6,71,
0.333333333333333,8,64,8,5,1,

0.333333333333333,12,72,0,10,58,0,4.5,73,0,2.5,57,0.666666666666667,8,60,0,6,
60,0,3.5,74,1,5,73,1,2,8,

0.333333333333333,4.5,55,0,8,72,0.666666666666667,6,70,0,2.5,60,0,5,73,1,3.5,
57,0,10,59,0,12,74,7,1,2,

0.666666666666667,8,74,0,12,72,0,15,59,0.333333333333333,3.5,75,1,5,72,0,6,68,
0,2.5,64,0,4.5,67,8,3,4,

0.666666666666667,2.5,67,1,8,77,0,12,68,0,3.5,54,0,4.5,61,0.333333333333333,10,
79,0,6,57,0,5,63,3,8,2,

0.333333333333333,3.5,74,0,10,70,0,12,62,1,2.5,71,0,5,54,0.666666666666667,6,
64,0,4.5,59,0,8,72,2,3,4,

1,12,74,0,10,57,0.333333333333333,3.5,56,0,4.5,72,0,5,59,0,2.5,58,0,6,64,
0.666666666666667,8,70,3,5,4,

0.5,5,69,0.25,6,58,0,8,56,0,3.5,64,0.75,2.5,73,1,12,57,0,10,75,0,4.5,69,1,2,3,

0.333333333333333,4.5,67,0,10,76,0,3.5,59,0,2.5,56,0,8,63,1,6,59,0,12,73,
0.666666666666667,5,75,8,4,5,

0,3.5,62,0.333333333333333,8,79,0,10,55,1,12,64,0,2.5,63,0.666666666666667,5,68,
0,4.5,78,0,6,55,7,5,2,

0,6,60,0,5,70,0.666666666666667,4.5,70,0.333333333333333,8,65,0,12,73,0,3.5,
70,0,2.5,64,1,10,73,6,2,7,

0,2.5,73,1,12,61,0,3.5,65,0,8,76,0,4.5,63,0.333333333333333,5,74,0,10,63,

0.666666666666667,6,58,7,8,2,

0,3.5,57,1,6,66,0,8,72,0,12,58,0.333333333333333,4.5,76,0.666666666666667,5,
59,0,10,86,0,2.5,77,5,8,6,

0.666666666666667,6,60,0,8,58,0.333333333333333,10,74,1,3.5,58,0,12,60,0,2.5,
64,0,4.5,61,0,5,61,7,8,2,

1,3.5,65,0,12,63,0.333333333333333,10,79,0,4.5,61,0,5,65,0.666666666666667,2.5,
64,0,6,69,0,8,69,2,6,5,

0,8,56,0,3.5,61,0.333333333333333,10,73,0,6,59,1,12,75,0,2.5,61,
0.666666666666667,5,58,0,4.5,69,5,3,7,

1,12,71,0,10,74,0,3.5,69,0.333333333333333,4.5,62,0,5,65,0,6,57,0,2.5,55,
0.666666666666667,8,65,3,7,8,

--example.testData.dat

***********************************************************************

Vector # 0

totalError 0.291978
Actual: 5,1,3
Predicted: 8, 5, 7

correctly guessed dogs : 1 average 0.333333

***********************************************************************

Vector # 1

totalError 0.459928
Actual: 6,1,3
Predicted: 8, 2, 1

correctly guessed dogs : 1 average 0.333333

***********************************************************************

Vector # 2

totalError 0.310754
Actual: 3,4,5
Predicted: 8, 1, 2

correctly guessed dogs : 0 average 0.222222

***********************************************************************

Vector # 3

totalError 0.308097
Actual: 3,4,5
Predicted: 8, 2, 5

correctly guessed dogs : 1 average 0.25

---example.training.data.data
0,5,52,1,4.5,82,0.666666666666667,3.5,72,0.333333333333333,2.5,68,0,6,61,0,10,84
,0,12,61,0,8,69,2,1,4,

0,10,55,0,8,60,0,12,54,0,6,55,0.666666666666667,3.5,69,0,5,60,0.333333333333333,
2.5,58,1,4.5,63,6,7,1,

0,5,74,0,12,69,0.333333333333333,2.5,59,0,6,57,0.666666666666667,3.5,71,0,8,70,1 ,
4.5,60,0,10,69,7,8,5,

0,10,55,0,6,58,0,8,55,0,12,55,1,4.5,74,0.666666666666667,3.5,67,0.333333333333333,
2.5,76,0,5,55,3,6,7,

0,8,58,0,10,60,0,6,57,0,5,60,1,4.5,63,0.666666666666667,3.5,73,0,12,71,
0.333333333333333,2.5,72,8,3,5,

0.333333333333333,2.5,72,0,10,54,0,12,55,0,8,64,0,5,53,0.666666666666667,3.5,57,
1,9,67,0,6,60,7,3,1,

0.333333333333333,2.5,56,0,10,55,0,12,63,1,4.5,70,0,6,58,0,5,63,0,8,57,
0.666666666666667,3.5,64,1,2,7,

0,10,71,0,6,73,0,5,60,0,12,82,0.666666666666667,3.5,57,0.333333333333333,2.5,58,
1,4.5,71,0,8,56,2,7,4,

0,8,63,0,12,58,0,6,70,0,5,75,0,10,64,1,4.5,76,0.666666666666667,3.5,65,
0.333333333333333,5,67,2,7,8,

0.333333333333333,2.5,65,0,6,60,1,4.5,73,0,10,76,0,5,73,0,12,58,0,8,73,
0.666666666666667,3.5,69,1,5,3,

0.333333333333333,,,0,4.5,68,1,6,68,0,3.5,61,0,8,59,0,15,71,0,8,58,
0.666666666666667,12,60,2,3,7,

0.333333333333333,8,79,0,5,60,0.666666666666667,12,53,0,3.5,68,0,5,60,0,10,73,1,
6,64,0,4.5,72,1,7,4,

0,3.5,58,0.333333333333333,8,74,0,2.5,67,0,10,73,0,6,71,0,12,55,1,5,73,
0.666666666666667,4.5,77,1,2,8,

0,3.5,57,0,12,56,0,5,64,1,8,78,0.333333333333333,4.5,74,0,2.5,57,0,10,75,
0.666666666666667,6,58,8,3,7,

1,2.5,65,0,4.5,74,0,12,69,0.666666666666667,6,57,0,3.5,57,0,8,79,0,5,55,
0.333333333333,10,60,1,2,4,

0.333333333333333,5,74,0,2.5,59,0,10,72,0.666666666666667,6,58,1,3.5,58,0,4.5,72
,0,8,74,0,12,57,1,5,3,

0,3.5,65,0.333333333333333,8,55,0,2.5,75,0,6,72,1,5,72,0,4.5,65,0,12,73,
0.666666666666667,10,58,4,2,7,

1,3.5,73,0,4.5,59,0,10,77,0.333333333333333,5,69,0,2.5,79,0,8,61,0,12,51,
0.666666666666667,6,70,4,2,5,

--example.weights.dat

Weights between input and hidden layers

0.312235, -0.144395, 0.189134, 0.0486099, 1.33523, -0.740377, -0.588529,


-0.738951, 0.393378, -0.424741, -0.165845, -0.105024,
-0.509166, -0.559777, -0.322799, 0.611471,

-0.0139035, 0.192289, 0.215884, 0.42839, 1.22424, -0.557538, 0.748829,


0.811425, -0.567415, 0.478949, -0.930788, -0.815847,
-0.675085, 0.143432, 0.907429, -0.791605,

0.386623, -0.491939, -0.355795, -1.21822, -0.0578587, 0.373054, 0.400678,


0.0381052, 0.333938, -0.551939, 0.666651, 0.496774, -0.0996118
-0.338756, 0.395109, -0.0946093,

-0.408185, -0.105322, 0.502763, -1.20785, -0.659191, 0.722583, -0.123789,


0.297074, -0.0360918, -0.0410618, 0.0563667, -0.262915,
-0.346919, -0.161358, -0.732031, 0.903635,

-1.08379, -0.481309, -0.722972, -0.553411, -1.00448, 0.100583, -0.383445,


-1.08466, 0.624652, -0.430412, -0.844843, 0.332018, 0.756297,
0.934539, 0.749344, 0.34311,

-1.1259, 0.888236, -0.932168, -0.617218, -0.902981, -0.815684, -0.794956,


-0.707963, -0.116212, 1.10673, -0.600709, -0.608273, -0.584743
-0.846697, -0.380985, 0.864341,

-1.34424, -0.868917, -1.02222, 0.481363, 0.383186, 0.12025, -0.711846,


-0.817075, -0.0823566, -0.334653, -0.584699, 0.280099,
0.693849, 0.504637, -0.316601, 0.218029,

-0.522936, -0.258518, 1.31827, -1.00274, -0.749833, 0.412043, 0.672309,


-0.653525, 0.485816, 0.498209, -0.252008, -0.812285, 0.602662,
0.0212719, 0.685901, 0.149714,

-0.194676, 0.767563, 0.0731506, 0.526717, 0.691788, -0.0190201, -0.560069,


-1.11794, 1.4468, 1.27252, -0.645344, -0.191523, -0.0549552,
-0.168464, 0.452096, -0.604246,

0.407949, 0.717795, 0.28697, 0.417937, -0.135654, 0.510972, 0.795559, 0.789836


-0.646869, 1.02887, 0.4916, 0.655899, -0.384916, -0.286857,
0.142541, -0.47817,

-0.830313, 0.402075, -1.00981, -0.815833, 0.300742, -1.28449, -1.72554,
-0.289421, 0.121561, -0.379148, 0.359904, 0.410897, 0.326119,
0.581417, 0.588857, 0.205351,

-0.0858752, -0.602156, -0.365654, 0.252064, 0.125631, 0.390546, 0.595961,


-0.369163, 0.490337, 0.537631, -0.573171, 0.135279, -0.293407,
-0.36908, 0.776284, -0.86512,

0.188277, 0.416897, 0.566542, -0.655683, 0.0156013, -0.141072, 0.0233197,


-0.504302, -0.47505, 0.716924, -0.306786, 0.0982113,
-0.495284, 0.482457, -0.0417226, -0.393999,

-1.16647, -0.975248, -0.680646, 0.459391, -0.774258, -0.129249, 0.25338,


-0.489245, 0.857715, -0.0587445, 0.764717, 0.344552, -0.983711,
0.530571, -0.174586, 0.203183,

-1.06123, -0.24899, -0.376542, 0.218682, -0.927047, -0.75323, 0.0776283,


-0.837238, 1.17516, 0.89532, 0.637019, -0.753449, 0.728891,
0.645152, 0.20859, -0.834837,

0.616967, 0.159905, 0.809849, 0.466468, -0.564717, -0.0996914, 0.627333,


-0.669686, 1.06178, 0.997358, -0.131645, 0.230917, 0.975456,
0.764013, -0.724403, -0.114928,

0.0263852, 0.68227, 0.387013, -0.249744, 0.0457159, 0.616758, -0.36878,


-0.59523, -0.131195, 0.932923, -0.0913405, 0.574165, -0.715751,
-0.252504, 0.937027, 0.967609,

-1.34687, 0.773338, -0.0186759, -0.262033, 0.0128024, 0.140075, -0.314998,


-0.120705, 0.7326, -0.00204962, -0.472609, -0.438004,
0.265128, 0.629909, 0.0579386, 0.689211,

-0.381792, -0.22498, -0.307052, 0.706148, 0.863566, -0.346275, -0.0547047,


-0.572196, 0.212506, 0.838752, 0.837534, -0.711632, 0.878157,
0.304817, 0.939058, 0.862614,

-0.0394301, 0.778428, 0.166991, -0.659354, -0.806244, -0.931821, -0.122326,


0.0331636, 0.134506, -0.171992, -0.171233, 0.339171,
-0.648418, 0.987479, -0.638016, 0.8682,

0.502675, -0.106586, -0.0402063, -0.261886, -0.837591, -0.996017, 0.0727204,


-1.08123, -0.46855, 0.411652, 0.849411, 0.152663, -0.044132,
-0.084293, -0.990235, -0.829724,

0.534, 0.930342, -0.0334462, -0.541376, -0.898799, -0.945154, -0.0932906,

-0.172246, 0.500135, 0.356741, 0.483515, -0.630037, -0.447533,
0.209796, 0.666823, 0.272065,

0.557009, -0.916039, -0.857561, -1.05275, -0.540343, -1.21212, 0.0937207,


0.485678, 0.408676, 1.11695, 0.121299, 0.0235673, 0.184337,
0.00527234, 0.272882, -0.0452072,

0.037835, -0.226153, -0.0816479, -0.34464, 0.674777, -0.51579, -1.15405,


0.65004, -0.25135, 0.721084, -0.108838, 0.402305, -0.624131,
-0.14548, 0.684915, -0.0506577,

Weights between hidden and output layers

-0.298384, 0.07719, 0.815364, -0.951424, -0.0749845, 0.554561, 0.116341,


0.365722,

0.547885, 0.224371, 0.0232105, -0.322847, -0.360129, 0.0665348, 0.227563,


-0.350492,

0.561964, -0.368009, 0.243067, 0.476517, 0.0313402, -0.0298075, -0.269999,


-0.0454656,

0.311055, 0.394885, -0.408628, -0.435073, 1.15274, 0.783402, 0.709321,


-0.779884,

-0.0911446, -0.117496, 0.391258, 0.684276, -0.515919, 0.413093, -0.435185,


-0.0110238,

0.24413, -0.466538, 0.492854, -0.680361, -0.856781, 0.815862, 0.0720998,


0.487135,

0.258172, -0.0607109, 0.349455, -0.0211922, 0.293106, 0.650396, 0.0906403,


0.143155,

0.18206, 0.0795079, 1.00199, -0.83526, 0.94233, 0.218302, 0.852732, 0.250201,

0.710023, -0.234678, 0.688201, 0.799773, -0.168626, -0.6546, 0.838424,


-0.341209,

0.260963, 0.181634, 0.0915149, 0.295776, 0.590944, 1.04424, 0.133481, 1.16185,

0.165425, -0.169801, 0.524056, -0.197722, -0.17384, -0.105439, -0.0679423,


-0.303441,

0.19013, 0.164908, -0.203131, -0.0488369, 0.063752, -0.121208, 0.534113,
0.394686,

0.209294, 0.0435774, -0.171238, 0.373208, -0.149831, -0.351153, 0.209717,


0.228079,

0.841326, -0.0469582, 0.83333, -0.139613, -0.00181498, -0.326188, -0.0477979,


0.248631,

-0.646495, 1.07442, -0.272258, 0.743573, -0.393897, 0.262803, 0.457984,


0.170558,

-0.472425, -0.320962, 0.344959, -0.0988614, 0.621647, 1.64686, 0.601372,


-0.0463286,

7.12 Hopfield Networks
John Hopfield brought us these networks in 1982. These networks
can be generalized and are robust. These networks can also be described math-
ematically. On the downside they can only store about 15% as many states as they
have neurodes, and the patterns stored must have Hamming distances that are
about 50% of the number of neurodes.
Hopfield networks, aka crossbar systems, are networks that recall what is
fed into them. This makes them useful for restoring degraded images. It is a fully
connected net: every node is connected to every other node. The nodes are not
connected to themselves.
Calculating the weight matrix for a Hopfield network is easy. This is an
example with 3 input vectors. You can train the network to match any number
of vectors provided that they are orthogonal.
Orthogonal vectors are vectors that give zero when you calculate the dot
product.
orthogonal (0, 0, 0, 1) (1, 1, 1, 0) = 0*1 + 0*1 + 0*1 + 1*0 = 0
orthogonal (1, 0, 1, 0) (0, 1, 0, 1) = 1*0 + 0*1 + 1*0 + 0*1 = 0
NOT orthogonal (0, 0, 0, 1) (0, 1, 0, 1) = 0*0 + 0*1 + 0*0 + 1*1 = 1
Orthogonal vectors are perpendicular to each other.
To calculate the weight matrix for the orthogonal vectors (0, 1, 0, 0), (1, 0,
1, 0), (0, 0, 0, 1)
First make sure all the vectors are orthogonal:
(0, 1, 0, 0) (1, 0, 1, 0) = 0*1 + 1*0 + 0*1 + 0*0 = 0
(0, 1, 0, 0) (0, 0, 0, 1) = 0*0 + 1*0 + 0*0 + 0*1 = 0
(1, 0, 1, 0) (0, 0, 0, 1) = 1*0 + 0*0 + 1*0 + 0*1 = 0
Change the zeros to negative ones in each vector
(0, 1, 0, 0) === (-1, 1, -1, -1)
(1, 0, 1, 0) === (1, -1, 1, -1)
(0, 0, 0, 1) === (-1, -1, -1, 1)
Multiply each vector (written as a column) by its transpose to form an outer-product
matrix:

\[
\begin{pmatrix} -1 \\ 1 \\ -1 \\ -1 \end{pmatrix}
\begin{pmatrix} -1 & 1 & -1 & -1 \end{pmatrix}
=
\begin{pmatrix}
 1 & -1 &  1 &  1 \\
-1 &  1 & -1 & -1 \\
 1 & -1 &  1 &  1 \\
 1 & -1 &  1 &  1
\end{pmatrix}
\tag{7.1}
\]

\[
\begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}
\begin{pmatrix} 1 & -1 & 1 & -1 \end{pmatrix}
=
\begin{pmatrix}
 1 & -1 &  1 & -1 \\
-1 &  1 & -1 &  1 \\
 1 & -1 &  1 & -1 \\
-1 &  1 & -1 &  1
\end{pmatrix}
\tag{7.2}
\]

\[
\begin{pmatrix} -1 \\ -1 \\ -1 \\ 1 \end{pmatrix}
\begin{pmatrix} -1 & -1 & -1 & 1 \end{pmatrix}
=
\begin{pmatrix}
 1 &  1 &  1 & -1 \\
 1 &  1 &  1 & -1 \\
 1 &  1 &  1 & -1 \\
-1 & -1 & -1 &  1
\end{pmatrix}
\tag{7.3}
\]

The third step is to put zeros on the main diagonal of each matrix and add
them together. (Putting zeros on the main diagonal keeps each node from being
connected to itself.) For the first matrix this gives

\[
\begin{pmatrix}
 0 & -1 &  1 &  1 \\
-1 &  0 & -1 & -1 \\
 1 & -1 &  0 &  1 \\
 1 & -1 &  1 &  0
\end{pmatrix}
\tag{7.4}
\]

Adding all three zero-diagonal matrices gives the weight matrix:

\[
\begin{pmatrix}
 0 & -1 &  1 &  1 \\
-1 &  0 & -1 & -1 \\
 1 & -1 &  0 &  1 \\
 1 & -1 &  1 &  0
\end{pmatrix}
+
\begin{pmatrix}
 0 & -1 &  1 & -1 \\
-1 &  0 & -1 &  1 \\
 1 & -1 &  0 & -1 \\
-1 &  1 & -1 &  0
\end{pmatrix}
+
\begin{pmatrix}
 0 &  1 &  1 & -1 \\
 1 &  0 &  1 & -1 \\
 1 &  1 &  0 & -1 \\
-1 & -1 & -1 &  0
\end{pmatrix}
=
\begin{pmatrix}
 0 & -1 &  3 & -1 \\
-1 &  0 & -1 & -1 \\
 3 & -1 &  0 & -1 \\
-1 & -1 & -1 &  0
\end{pmatrix}
\tag{7.5}
\]
The Hopfield network is fully connected; each node connects to every other
node with the corresponding entry of the weight matrix:
[n1] - [n2] = weight is -1
[n1] - [n3] = weight is 3
[n1] - [n4] = weight is -1
[n2] - [n1] = weight is -1
[n2] - [n3] = weight is -1
[n2] - [n4] = weight is -1
[n3] - [n1] = weight is 3
[n3] - [n2] = weight is -1
[n3] - [n4] = weight is -1
[n4] - [n1] = weight is -1
[n4] - [n2] = weight is -1
[n4] - [n3] = weight is -1
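The same construction is easy to code. The short sketch below is only an
illustration (it is separate from the Hopfield listing in the next subsection, and
the file name, pattern values and array sizes are just choices made for this example);
it checks that the stored patterns are pairwise orthogonal and then accumulates
the zero-diagonal outer products into a weight matrix.

--weightdemo.cpp (illustrative sketch)--
//build a Hopfield weight matrix from binary patterns
#include <iostream>
using namespace std;

const int N = 4; //neurodes
const int P = 3; //patterns to store

int main()
{
    int patterns[P][N] = { {0,1,0,0}, {1,0,1,0}, {0,0,0,1} };
    int w[N][N] = {0}; //weight matrix, starts at zero

    //check that every pair of patterns is orthogonal (dot product == 0)
    for (int a=0; a<P; a++){
        for (int b=a+1; b<P; b++){
            int dot = 0;
            for (int i=0; i<N; i++){ dot += patterns[a][i] * patterns[b][i]; }
            if (dot != 0){
                cout << "patterns " << a << " and " << b << " are not orthogonal" << endl;
            }
        }
    }

    //convert 0/1 to -1/+1 and add each outer product with a zeroed diagonal
    for (int p=0; p<P; p++){
        for (int i=0; i<N; i++){
            for (int j=0; j<N; j++){
                if (i == j){ continue; } //no self connections
                int si = patterns[p][i] ? 1 : -1; //bipolar values
                int sj = patterns[p][j] ? 1 : -1;
                w[i][j] += si * sj;
            }
        }
    }

    //print the resulting weight matrix
    for (int i=0; i<N; i++){
        for (int j=0; j<N; j++){ cout << w[i][j] << "\t"; }
        cout << endl;
    }
    return 0;
}

Run on the three vectors above it prints the same matrix as (7.5).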
These networks can also be described as having a potential energy surface
with conical holes representing the data. Holes having similar depth and di-
ameter represent data with similar properties. The input data seeks the lowest
potential energy and settles in to the closest hole. The energy surfaces of these
networks are mathematically equivalent to that of 'spin glasses'.
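A standard way to write that energy surface is

\[
E = -\tfrac{1}{2}\sum_{i}\sum_{j \neq i} w_{ij}\, s_i\, s_j ,
\]

where s_i is the state of node i and w_{ij} is the weight between nodes i and j.
Each single-node update can only lower E or leave it unchanged, which is why
the state settles into one of the stored minima.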
Some problems with these neural nets are that they are computationally intensive
and use lots of memory; although I haven't seen it mentioned, I would guess race
conditions may present a problem, since data is updated continuously at each
node, with the output from one becoming the input for another.
BAM, bidirectional associative memory, is an example of a Hopfield network.
It consists of two fully connected layers, one for input and one for output; the
nodes in each layer have no connections to other nodes in the same layer.
The weights are bidirectional, meaning that there is communication in both
directions along the weight vector. BAM networks take only -1's and 1's as input
and only output -1's and 1's, so if you are working with binary data, you must
convert the zeros to -1's. The weights are calculated in the same way as the
Hopfield example above. The nodes are either on or off.
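To make the bidirectional recall concrete, here is a minimal sketch. It assumes a
single stored pattern pair and made-up layer sizes; it is not the BAM code from
this book, only an illustration of the forward and backward passes described above.

--bamsketch.cpp (illustrative sketch)--
//recall a stored pattern pair from a noisy probe
#include <iostream>
using namespace std;

const int NX = 4; //input layer size
const int NY = 2; //output layer size

//sign threshold used on both layers
int sgn(int v){ return (v >= 0) ? 1 : -1; }

int main()
{
    //one stored pair of bipolar patterns
    int x[NX] = { 1, -1, 1, -1 };
    int y[NY] = { 1, -1 };

    //weight matrix w[i][j] = x[i] * y[j] (sum over pairs if storing more)
    int w[NX][NY];
    for (int i=0; i<NX; i++){
        for (int j=0; j<NY; j++){ w[i][j] = x[i] * y[j]; }
    }

    //forward pass: recall y from a noisy copy of x (last element flipped)
    int probe[NX] = { 1, -1, 1, 1 };
    int yOut[NY];
    for (int j=0; j<NY; j++){
        int total = 0;
        for (int i=0; i<NX; i++){ total += probe[i] * w[i][j]; }
        yOut[j] = sgn(total);
    }

    //backward pass: recall x from the recovered y
    int xOut[NX];
    for (int i=0; i<NX; i++){
        int total = 0;
        for (int j=0; j<NY; j++){ total += yOut[j] * w[i][j]; }
        xOut[i] = sgn(total);
    }

    cout << "recovered y:";
    for (int j=0; j<NY; j++){ cout << " " << yOut[j]; }
    cout << "\nrecovered x:";
    for (int i=0; i<NX; i++){ cout << " " << xOut[i]; }
    cout << endl;
    return 0;
}

Even though the probe has one element flipped, both passes settle back to the
stored pair, which is the error-correcting recall described above.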

7.12.1 C++ Hopfield Network
--hopfield.cpp---
//Linda MacPhee-Cobb
//www.timestocome.com

//example Hopfield network

#include <stdio.h>
#include <iostream>
using namespace std;

class neurode
{

private:
int total;

public:

neurode() { }

int threshold( int inputVector[4], int numberOfNodes, int locationOfNode,


int weightArray[4][4])
{

total = inputVector[locationOfNode];

for( int i=0; i<numberOfNodes; i++){
total += inputVector[i] * weightArray[locationOfNode][i];
}

//fire only if the weighted sum is positive
if( total > 0){
return 1;
}else{
return 0;
}
}
};

int main ()
{

int v1[] = {1, 0, 1, 0};


int v2[] = {0, 1, 0, 1};

int nodeLocation;
int const nodes=4;

int weightArray[nodes][nodes] = {
{ 0, -1, 1, -1},
{-1, 0, -1, 1},
{ 1, -1, 0, -1},
{-1, 1, -1, 0}
};

int output[nodes];
neurode n[nodes];

for( int i=0; i<nodes; i++){


output[i] = n[i].threshold(v1, nodes, i, weightArray);
}

cout << "\n Input vector 1: {1,0,1,0} output " << endl;
for(int i=0; i<nodes; i++){
cout << " " << output[i];
}
cout << endl;

for( int i=0; i<nodes; i++){


output[i] = n[i].threshold(v2, nodes, i, weightArray);
}

cout << "\n Input vector 2: {0,1,0,1} output " << endl;
for(int i=0; i<nodes; i++){

cout << " " << output[i];
}
cout << endl;

return 0;
}

--Network.java
//www.timestocome.com
//Fall 2000
//class network is needed for the hopfield network

import java.util.*;

public class Network


{

public Vector v = new Vector();


int output[] = new int[4];

public int threshold( int k )


{

if (k>=0){
return 1;
}else{
return 0;
}
}

public void activation( int p[] )


{
for(int i=0; i<4; i++){

( (Neuron)v.elementAt(i) ).activation = ( (Neuron)v.elementAt(i) ).act(4, p);


output[i] = threshold(( (Neuron)v.elementAt(i) ).activation);
}
}

//create single layer 4 neuron fully connected network


Network( int a[], int b[], int c[], int d[] )

{

Neuron a1 = new Neuron(a);


Neuron b1 = new Neuron(b);
Neuron c1 = new Neuron(c);
Neuron d1 = new Neuron(d);

v.add(a1);
v.add(b1);
v.add(c1);
v.add(d1);
}

public static void main( String args[])


{

int[] pattern1 = { 0, 1, 0, 1};


int[] pattern2 = { 1, 0, 1, 0};

//this set of weights remembers the patterns


//{1,0,1,0} and {0, 1, 0, 1}
//to determine the weights take pattern1 and convert all the
//0's to -1's { -1, 1, -1, 1}
//get A transpose (1 ) * (1, -1, 1, -1) =( 1, -1, 1, -1)
//................(-1) ( -1, 1, -1, 1)
//................(1 ) ( 1, -1, 1, -1)
//................(-1) ( -1, 1, -1, 1)
//now subtract 1 from each number on main diagonal
// ( 0, -1, 1, -1)
// ( -1, 0, -1, 1)
// ( 1, -1, 0, -1)
// ( -1, 1, -1, 0)
//do the same for each pattern and add them together.
//more patterns can be added in provided they are orthoganal
// ( the dot product is 0)
int[] weight1 = { 0, -2, 2, -2};
int[] weight2 = { -2, 0, -2, 2};
int[] weight3 = { 2, -2, 0, -2};
int[] weight4 = { -2, 2, -2, 0};

System.out.println( "\n This is a Hopfield single-layer four neurode" +

" network. The network recalls two input patterns" +
" {1,0,1,0} and {0,1,0,1}.\n\n\n");

//create the network


Network hopfield1 = new Network ( weight1, weight2, weight3, weight4 );

//input the first pattern and get activations of neurons


hopfield1.activation( pattern1 );

//see what the network output is


for( int i=0; i<4; i++){

if( hopfield1.output[i] == pattern1[i] ) {


System.out.println( hopfield1.output[i] + " matches " + pattern1[i] +
", element of pattern 1");
}else{
System.out.println( " mismatch for " + i +
", element of pattern 1");
}
}

System.out.println("\n\n");
//try the second pattern
Network hopfield2 = new Network (weight1, weight2, weight3, weight4);
hopfield2.activation(pattern2);

for( int i=0; i<4; i++){

if( hopfield2.output[i] == pattern2[i] ) {


System.out.println( hopfield2.output[i] + " matches "
+ pattern2[i] + ", element of pattern 2");
}else{
System.out.println( " mismatch for " + i
+ ", element of pattern 2");
}
}
}
}

---Neuron.java
//www.timestocome.com
//Fall 2000

//class Neuron is needed for the Hopfield network

public class Neuron


{

public int activation;

//weights on edges to each other neuron


//this is a directed graph with an edge
//in each direction from and to each neuron
protected int weight[] = new int[4];

//each neuron has a weighted output


//synapse to everyother node.
Neuron(int j[])
{
for(int i=0; i<4; i++){

weight[i] = j[i];
}
}

//threshold function is 0 for the hopfield network


//we take the dot product of the input vector and the
//weight vector. If this is >0 the neuron fires,
//else it does not.
public int act (int a, int b[])
{
int activation = 0;

for(int i=0; i<a; i++){
activation += b[i] * weight[i];
}

return activation;
}
}

Chapter 8

AI and Neural Net Related


Math Online Resources

This section contains URLs to examples, tutorials, online books and courses in
various AI/NN math topics. There is never one book that can do everything or
cover everything. I did not wish to discourage those uncomfortable with math
from using this book and its programs. Those of you who are comfortable
with math should pursue the following topics if you are unfamiliar with any of
them.

8.1 General Topics


Calculus ocw.mit.edu/18/18.013a/f01/index.html Calculus with applications
( MIT Open Courseware )
Chaos www.imho.com/grae/chaos/ Chaos Theory
Complex Variables ocw.mit.edu/18/18.04/f99/index.htm Complex Variables
with applications ( MIT Open Courseware )
Dynamics www.ams.org/online bks/coll9/ Dynamical Systems, G. Birkhoff (
online/downloadable textbook )
www.aa.washinton.edu/courses/aa571 Course AA571 Principles of Dy-
namics ( not yet complete )
www.gris.uni-tuebingen.de/projects/dynsys/latex/dissp/dissp.html Approx-
imation of Continuous Dynamical Systems by Discrete Systems and Their
Graphics Use
Linear Algebra www.maths.uq.oz.au/ krm/ela.html Elementary Linear Alge-
bra ( lecture notes and worked problems ) www.math.unl.edu/ tshores/linalgtext.html
Linear Algebra and Applications ( online textbook )

joshua.smcvt.edu/linalg.html Linear Algebra ( online/downloadable text-
book ) ocw.mit.edu/18/18.06/f02/index.html Linear Algebra ( MIT Open
Courseware )

Fractals

8.1.1 C OpenGL Sierpinski Gasket

--gasket.cpp--
//open a display window
//and generate the Sierpinski gasket

#include <stdlib.h>
#include <stdio.h>
#include <gl/glut.h>
#include <gl/glu.h>
#include <gl/gl.h>

void display (void){


typedef GLfloat point2[2]; //define a point array

point2 vertices[3]={{0.0, 0.0},{250.0, 500.0},{500.0, 0.0}};//triangle

int j;
long int k;
long random_number(); //random_number number generator
point2 p = {75.0, 50.0}; //start somewhere

glClear(GL_COLOR_BUFFER_BIT); //clear window

//compute and plot 120,000 points


for( k=0; k<120000; k++){
j = rand()%3; //pick one of the 3 vertices at random_number

//compute 1/2 way point


p[0] = (p[0] + vertices[j][0])/2.0;
p[1] = (p[1] + vertices[j][1])/2.0;

//plot point
glBegin(GL_POINTS);
glVertex2fv(p);
glEnd();

}
glFlush(); //plot quickly.. only benefit if on a network
}

void myinit (void){


//attributes
glClearColor(1.0, 1.0, 1.0, 0.0); //backround

glColor3f(1.0, 0.0, 0.0); //draw color

//set up viewing
glMatrixMode(GL_PROJECTION);

//2d coordinate sys lower left corner 0,0


gluOrtho2D(0.0, 500.0, 0.0, 500.0);
glMatrixMode(GL_MODELVIEW);
}

void main(int argc, char **argv)


{
glutInit(&argc, argv);
glutInitWindowSize(500, 500);
glutInitDisplayMode(GLUT_RGB|GLUT_SINGLE);
(void)glutCreateWindow("2d Sierpinski gasket");
glutDisplayFunc(display);
myinit();
glutMainLoop();
}

8.1.2 C OpenGL 3D Gasket
--3dgasket.cpp
//open a display window
//and generate the Sierpinski gasket
//in 3d

#include <stdlib.h>
#include <stdio.h>
#include <gl/glut.h>
#include <gl/glu.h>
#include <gl/gl.h>

void display (void){


typedef GLfloat point[3]; //define a point array

point vertices[4]={{0.0, 0.0, 0.0},{250.0, 250.0, 250.0},


{250.0, 0.0, 0.0}, {500.0, 0.0, 500.0}};//3d triangle

int j;
long int k;
// long random(); //random number generator
point p = {250.0, 100.0, 250.0}; //start somewhere

glClear(GL_COLOR_BUFFER_BIT); //clear window

//compute and plot 500,000 points


for( k=0; k<500000; k++){
j = rand()%4; //pick one of the 4 vertices at random

//compute 1/2 way point


p[0] = (p[0] + vertices[j][0])/2.0;
p[1] = (p[1] + vertices[j][1])/2.0;
p[2] = (p[2] + vertices[j][2])/2.0;

//plot point
glBegin(GL_POINTS);
//color depends on location
glColor3f(p[0]/250.0, p[1]/250.0, p[2]/250.0);
glVertex3fv(p);
glEnd();

}
glFlush(); //plot quickly.. only benefit if on a network
}

void myinit (void){
//attributes
glClearColor(1.0, 1.0, 1.0, 0.0); //backround

//set up viewing
glMatrixMode(GL_PROJECTION);
glLoadIdentity();

//3d coordinate sys lower left corner 0,0


glOrtho(0.0, 500.0, 0.0, 300.0, -500.0, 500.0);
glMatrixMode(GL_MODELVIEW);
}

void main(int argc, char **argv)


{
glutInit(&argc, argv);
glutInitWindowSize(500, 500);
glutInitDisplayMode(GLUT_RGB|GLUT_SINGLE);
(void)glutCreateWindow("3d Sierpinski gasket");
glutDisplayFunc(display);
myinit();
glutMainLoop();
}

8.1.3 C OpenGL Mandelbrot

--mandelbrot.cpp
//mandelbrot in opengl

#include <stdlib.h>
#include <math.h>
#include <gl/glut.h>

#define MAX_ITER 100 //number of times through the loop

//N X M MATRIX
#define N 500
#define M 500

//can't usually pass things to open gl so declare as globals


// any information they might need
double height; //2.5 for a full image
double width; //2.5 for a full image
double cx; //-0.5 for a full image
double cy; //0.0 for a full image
int max = MAX_ITER; //remember this is a nested loop
int n = N;
int m = M;
GLubyte image[N][M]; //unsigned bytes for image
typedef float complex[2];

//prototypes
void calculate(void);
void add(complex a, complex b, complex p);
void mult(complex a, complex b, complex p);
float mag2(complex a);
void form(float a, float b, complex p);
void mouse(int btn, int state, int x, int y);
void display(void);
void myReshape(int w, int h);
void myinit();

void main (int argc, char *argv[]){

//initial position
cx = -0.5;
cy = 0.0;
width = 2.5;
height = 2.5;

system("clear");

glutInit(&argc, argv);

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(N, M);
glutCreateWindow("Mandelbrot");
myinit();
glutReshapeFunc(myReshape);
glutDisplayFunc(display);
glutMouseFunc(mouse);

glutMainLoop();
}

void calculate(void)
{

int i, j, k;
float x, y, v;
complex c0, c, d;

printf("\n cx %lf, cy %lf", cx, cy);

for(i=0; i<n; i++)


for(j=0; j<m; j++){

x = i * (width/(n-1)) + cx - width/2; //begin here


y = j * (height/(m-1)) + cy - height/2;

form(0, 0, c); //create complex number


form(x, y, c0);
for(k=0; k<max; k++){
mult(c, c, d);
add(d, c0, c);
v = mag2(c);

if(v > 4.0) break;//assume not in set if mag > 4

}
if(v > 1.0) v = 1.0; //if > 1 set to backround
image[i][j] = 255*v;
}
printf("\n done w/ new image map");
display();
}

void add(complex a, complex b, complex p){


p[0] = a[0] + b[0]; //real part
p[1] = a[1] + b[1]; //complex part
}

void mult(complex a, complex b, complex p){


p[0] = a[0] * b[0] - a[1] * b[1];
p[1] = a[0] * b[1] + a[1] * b[0];
}

float mag2(complex a){


return(sqrt)(a[0] * a[0] + a[1] * a[1]);
}

void form(float a, float b, complex p){


p[0] = a;
p[1] = b;
}

void mouse(int btn, int state, int x, int y)


{

//left button magnifies image by 2x's


if(btn ==GLUT_LEFT_BUTTON && state == GLUT_DOWN){

height /=2.0;
width /=2.0;

calculate();
}

//right button moves image


if(btn ==GLUT_RIGHT_BUTTON && state == GLUT_DOWN){
cx = (double)y/500.0; //cx is really center-y
if(x < 250) cx *=(-1);

cy = (double)x/500.0;
if(y > 250) cy *=(-1);

calculate();
}
}

void display(void){

glClear(GL_COLOR_BUFFER_BIT);

glDrawPixels(n, m, GL_COLOR_INDEX, GL_UNSIGNED_BYTE, image);

glutSwapBuffers(); //single buffered window, this just flushes the drawing
}

void myReshape(int w, int h){

calculate();

glMatrixMode(GL_PROJECTION);

gluOrtho2D(0.0, (GLfloat)n, 0.0, (GLfloat)m); //left, right, bottom, top

glMatrixMode(GL_MODELVIEW);
}

void myinit(){

GLfloat redmap[256], greenmap[256], bluemap[256];


int i;

glClearColor (1.0, 1.0, 1.0, 1.0); //backround color


gluOrtho2D(0.0, (GLfloat)n, 0.0, (GLfloat)m); //left, right, bottom, top

for(i=0; i < 256; i++){ //define amount of each color


redmap[i]= 1 - i/100.0; //numbers btwn 0 and 1
greenmap[i] = 1 - i/100.0;
bluemap[i] =1 - i/300.0;
}

//create color maps


glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 256, redmap);
glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 256, greenmap);
glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 256, bluemap);
}

davis.wpi.edu/ matt/courses/fractals/index.htm Using Fractals to Sim-
ulate Natural Phenomena
www.math.okstate.edu/mathdept/dynamics/lecnotes/lecnotes.html Dy-
namical Systems and Fractals Lecture Notes
Fuzzy www-2.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq.html faq Fuzzy
logic
www.paulbunyan.net/users/gsirvio/nonlinear/fuzzylogic.html Fuzzy Logic
Logic www.trentu.ca/academic/math/sb/pcml/welcome.html A Problem Course
in Mathematical Logic ( downloadable/online text book)
Nonlinear systems
Optimization Theory www.economics.utoronto.ca/osborne/MathTutorial/IND.HTM
Tutorial on Optimization Theory and Difference and Differential Equa-
tions by Martin Osborne, online book and course outline.
Probability www.dartmouth.edu/ chance/teaching aids/books articles/probability book/book.html
Introduction to Probability ( online/downloadable textbook )
archives.math.utk.edu/topics/statistics.html The Math Archives, proba-
bility
www.netcomuk.co.uk/ vaillant/proba/index.html Probability.net ( tutori-
als )
Statistics www.psych.utoronto.ca/courses/c1/statstoc.htm Statistics Tutorial
archives.math.utk.edu/topics/statistics.html The Math Archives, Statis-
tics
Topology www.geom.umn.edu/ bancho /Flatland/ Flatland, A Romance of
Many dimensions, Edwin A Abbott ( online/downloadable textbook )
at.yorku.ca/i/a/a/b/23.htm Topology Course ( Lecture Notes )
Vector Math www.cubic.org/ submissive/sourcerer/vector1.htm Simple Primer
on Vector Math
kestrel.nmt.edu/raymond/ph13xbook/node21.html Math Tutorial Vectors
www.ping.be/math/ Math Tutorial
Wavelets www.public.iastate.edu/ rpolikar/WAVELETS/waveletindex.html Wavelet
tutorial
davis.wpi.edu/ matt/courses/wavelets/ Wavelets in Multiresolution Anal-
ysis

8.2 Specific Topics
Bayes balducci.math.ucalgary.ca/bayes-theorem.html Bayes Theorem
members.tripod.com/ Probability/bayes01.htm Bayes' Theorem
engineering.uow.edu.au/Courses/Stats/File2414.html Bayes' Theorem
Boltzmann function www.cs.berkeley.edu/ murphyk/Bayes/bayes.html Boltzmann Equation
www.ph.ed.ac.uk/ jmb/thesis/node18.html The Boltzmann Equation
uracil.cmc.uab.edu/ harvey/Tutorials/math/Boltzmann.html The Boltz-
mann Distribution
Fokker-Planck Equation tangaroa.oce.ordt.edu/cmg3b/node2.html The Fokker-Planck Equation
www.d .aau.dk/ hoyrup/master/node17.html Solution of the Fokker-Planck
Equation
Gradient hyperphysics.phy-astr.gsu.edu/hbase/gradi.html The Gradient
web.mit.edu/wwmath/vectorc/summary.html Vector Calculus Summary
www.ma.iup.edu/projects/CalcDEMma/vecdcalc/vecdi calc.html Vector
Differential Calculus
www.mas.ncl.ac.uk/ sbrooks/book/nish.mit.edu/2006/Textbook/Nodes/chap01/node26.html
Vector Calculus
Gibbs Probability research.microsoft.com/ szli/MRF Book/Chapter 1/node13.html Markov
Gibbs Equivalence
iew3.technion.ac.il/Academ/Grad/STdep/crystal.php Gibbs Fields and Phase
Segregation
www.blc.arizona.edu/courses/bioinformatics/book pages/gibbs.html The
Gibbs Sampler
Gibbs Sampler Convergence Theorem www.utdallas.edu/ golden/ANNCOURSESTUFF/lecture notes/lec11.notes
Boltzmann Machine, Brain State in a Box
Hessian Matrix This is the matrix of second partial derivatives (the Jacobian of the
gradient). It is used to verify critical points to find minimums and maximums.
thesaurus.maths.org/dictionary/map/word/2148 Hessian matrix
rkb.home.cern.ch/rkb/AN16pp/node118.html Hessian
www-sop.inria.fr/saga/logiciels/AliAS/node7.html General purpose solv-
ing algorithm with Jacobian and Hessian
Invariant Sets An invariant set is the region of the state space such that any trajectory
initiated in the region will remain there for all time. This is used in judging
the stability of neural networks.
cnls.lanl.gov/ nbt/Book/node105.html Invariant Sets
www.amsta.leeds.ac.uk/ carsten/preprints/article/node4.html Invariant Sets

www.cnbc.cmu.edu/ bard/xppfast/lin2d.html The Phase Plane for a Lin-
ear System
Jacobian Matrix The matrix of first partial derivatives. It is used to obtain partial
derivatives of implicit functions and to map a correspondence between two planes.
rkb.home.cern.ch/rkb/AN16pp/node135.html#134 Jacobi Determinant
thesaurus.maths.org/dictionary/map/word/946 Jacobian
www-sop.inria.fr/saga/logiciels/AliAS/niode7.html General purpose solv-
ing algorithm with Jacobian and Hessian
Lagrange Multipliers lagrange.pdf An excellent example from a homework problem from ww2.lafayette.edu/ math/Gary/ (P.
Gordon @ Lafayette)
Lipschitz Condition Shows the possibility of finding a global minimum.
thesaurus.maths.org/dictionary/map/word/10115 Lipschitz Condition
m707.math.arizona.edu/ restrepo/475B/Notes/source/node3.html Some im-
portant theorems on odes
www.gris.uni-tuebingen.de/projects/dynsys/latex/dissp/node7.html Con-
tinuous Dynamical Systems
Lyapunov Function This is used to evaluate the stability of a critical point in a dynamical
system. It is also known as the 'characteristic multiplier' or the 'Floquet
multiplier'.
The Lyapunov exponent is defined through d(t) = d_0 e^{\lambda t},
which describes the separation between two trajectories that begin very
close to each other.
cepa.newschool.edu/het/essays/math/lyapunov.htm Lyapunov's Method
www.irisa.fr/bibli/publi/pi/1994/845/845.html PI-845 Lyapunov's stabil-
ity of large matrices by projection methods
MAP Risk functions (maximum a posteriori estimate)
www.ccp4.ac.uk/courses/proceedings/1997/g bricogne/main.html Maximum
Entropy Methods and the Bayesian Programme
www.cs.berkeley.edu/ murphyk/Bayes/bayes.html A Brief Introduction to
Graphical Models and Bayesian Networks
Markov Random Fields research.microsoft.com/ szli/MRF Book/Chapter 1/node11.html Markov
Random Fields
omega.math.albany.edu:8008/cdocs/summer99/lecture3/l3.html" An In-
troduction to Markov Chain Monte Carlo
dimacs.rutgers.edu/ dbwilson/exact/ Website for Perfectly Random Sam-
pling with Markov Chains

Method of Newton www.ma.iup.edu/projects/CalcDEMma/newton/newton.html Newton's Method
archives.math.utk.edu/visual.calculus/3/newton.5/ Visual Calculus, New-
ton's Method
www.mapleapps.com/categories/mathematics/calculus/html/NewtonSlides.html
Slide Show about Newton's Method
Multivariable Taylor's Theorem (a generalization of the Mean Value Theorem) This is used to approximate a function.
www.math.gatech.edu/ carlen/2507/notes/Taylor.html Taylor's Theorem
with several variables
thesaurus.maths.org/dictionary/map/word/2933 Taylor's theorem
Probability Mass (Density) functions
www.mathworks.com/access/helpdesk/help/toolbox/stats/tutoria5.shtml
Statistics Toolbox
Sampling error
Sigma Function ce597n.www.ecn.purdue.edu/CE597N/1997F/students/michael.a.kropinski.1/project/tutorial
The Normal Distribution Tutorial
Steepest Descent www.gothamnights.com/Trond/Thesis/node26.html Method of Steepest
Descent
cauchy.math.colostate.edu/Resources/SD CG/sd/index.html Steepest De-
scent Method www.uoxray.uoregon.edu/dale/papers/CCP4 1994/node8.html
Steepest Descent
Stochastic Approximation Theorem Several theories used to show that an unpredictable, or random system
will converge or become stable.
Wald Test
Zipf's Law linkage.rockefeller.edu/wli/zipf/ Zipf's Law references
www.few.eur.nl/few/people/vanmarrewijk/geography/zipf/ Zipf's Law
More General information thesaurus.maths.org/index.html Maths Thesaurus
rkb.home.cern.ch/rkb/titleA.html The Data Analysis BriefBook www-
sop.inria.fr/saga/logiciels/AliAS/AliAS.html An Algorithms Library of In-
terval Analysis for Equation Systems
www.cs.utk.edu/ mclennan/Classes/594-MNN/ CS 594 Math for Neural
Nets ( not yet complete )
www.ams.org/online bks/ American Mathematical Society online books
www.math-atlas.org The Mathematical Atlas
www.nr.com Numerical Recipes
ocw.mit.edu/global/department18.html MIT Open Courseware Math Sec-
tion, lectures, notes, quizzes, homework and solutions along with text book
information

Chapter 9

Bibliography

9.1 Bibliography
Books
Artificial Intelligence: A New Synthesis, Nils J. Nilsson, Morgan Kaufmann
Publishers, 1998, #1-55860-467-7
Artificial Intelligence, A Modern Approach, Stuart Russell and Peter Norvig,
Prentice Hall Series in Artificial Intelligence, 1995, #0-13-103805-2
C++ Neural Networks and Fuzzy Logic, Dr. Valluru B. Rao and Hayagriva
V. Rao, MIS Press, 1995, #1-55851-552-6
Constructing Intelligent Agents with Java, Joseph P. Bigus and Jennifer
Bigus, Wiley Computer Publishing, 1998, #0-471-19135-3
Introduction to Artificial Intelligence, Philip C. Jackson Jr., Dover Publica-
tions, 1985, #0-486-24864-X
Mathematical Methods for Neural Network Analysis and Design, Richard
M. Golden, Bradford-MIT Press, 1996, #0-262-07174-6
Naturally Intelligent Systems, Maureen Caudill and Charles Butler, A Brad-
ford Book/The MIT Press, 1990, #0-262-03156-6
Neural Network and Fuzzy Logic Applications in C/C++, Stephen T. Wel-
stead, Wiley, 1994, #0-471-30974-5
Programming and Deploying Java Mobile Agents with Aglets, Danny B.
Lange and Mitsuru Oshima, Addison Wesley, 1998, #0-201-32582-9
Programming Intelligent Agents for the Internet, Mark Watson, Computing
McGraw-Hill, 1996, #0-07-912206-X
Signal and Image Processing with Neural Networks, A C++ Sourcebook,
Timothy Masters, John Wiley and Sons, 1994, #0-471-04963-8
Software Agents, Jeffrey M. Bradshaw, AAAI Press/The MIT Press, 1997,
#0-262-52234-9
Thinking in Complexity, Klaus Mainzer, Springer, 1997, #3-540-62555-0
Online Sources

An Introduction to Bayesian Networks and their Contemporary Applica-
tions, Moisies, www.cs.ust.uk/ samee/bayesian/intro.html
The Rete Algorithm, yoda.cis.temple.edu:8080/UGAIWWW/lectures/rete.html
Birth of a Learning Law, Stephen Grossberg, cns-web.bu.edu/Profiles/Grossberg/Learning.html
Overview of Support Vector Machines, Chew, Hong Gunn, www.eleceng.adelaide.edu.au/Personal/hgche
WebMate: A Personal Agent for Browsing and Searching, Liren Chen, Katia
Sycara, citeseer.nj.nec.com/cs
Artificial Intelligence Gets Real, Stephen W. Plain, www.zdnet.com/computershopper/edit/cshopper/co
Evolution, Error and Intentionality, Daniel C. Dennett, ase.tufts.edu/costud/papers/evolerr.htm
The Construction of Programs with Common Sense, John McCarthy
Artificial Intelligence, Logic and Formalizing Common Sense, John McCarthy, www-formal.stanford.edu/jmc
Modeling Adaptive Autonomous Agents, Pattie Maes, pattie@media.mit.edu
Hopkins Scientists Shed Light on How the Brain Thinks, Gary Stephenson,
gstephenson@jhmi.edu
Knowledge Discovery in Databases, Tools and Techniques, Peggy Wright,
www.acm.org/crossroads/xrds5-2/kdd.html
Minds, Brains, and Programs, John R. Searle, www.cogsci.ac.uk/bbs/Archive/bbs.searle2.html
www.opencyc.org Open source version of Cyc
www.markwatson.com/opencontent/opencontent.htm Practical Artificial Intelligence Programming in Java, by Mark Watson; a downloadable book with example code.
www.cs.dartmouth.edu/ brd/Teaching/AI/Lectures/Summaries/planning.html#STRIPS
STRIPS
robotics.stanford.edu/ koller/papers/position.html Structured Representations and Intractability
www-cs-students.stanford.edu/ pdoyle/quail/notes/pdoyle/search.html Search
Methods
www.ams.org/new-in-math/cover/turing.html Turing Machines (AMS site)
www.cogs.susx.ac.uk/users/bend/atc/2000/web/nicholn/ A Tutorial Intro-
duction to Turing Machines
www.turing.org.uk/turing/scrapbook/tmjava.html A Turing Machine Ap-
plet
cgi.student.nada.kth.se/cgi-bin/d95-aeh/get/umeng Turing Machines (several applets to demonstrate a Turing machine)
www.ktiworld.com/GBB/information bibli.html Blackboard Systems
www.cs.cmu.edu/afs/cs/project/tinker-arch/www/html/1998/Lectures/20.Blackboard/base.000.htm
A Slide Show on Blackboard Architectures
Start with this paper! It gives an excellent introduction: Introduction to Support Vector Machines, by Dustin Boswell. I downloaded it from www.work.caltech.edu/ boswell/IntroToSVM.pdf
Lagrange Multipliers - here is an excellent example that explains how to use Lagrange multipliers, from ww2.lafayette.edu/ math/Gary/ (Math 263, 'Lagrange Multiplier Solutions 1. Find the extreme values ...'); a copy is kept here as lagrange.pdf in case that one disappears. The basic condition is sketched below.
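The core idea, stated informally (my own summary, not taken from the linked notes): to find the extrema of f(x, y) subject to a constraint g(x, y) = c, solve
\nabla f = \lambda \nabla g together with g(x, y) = c
for x, y and the multiplier \lambda. The same device turns the constrained margin-maximization problem in SVM training into its dual form.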

www.support-vector-machine.org Support Vector Machines (mailing list and
links)
www.eleceng.adelaide.edu.au/Personal/hgchew/svmdoc/svmdoc.html Overview of Support Vector Machines; this has a nice description of how the kernel is calculated
citeseer.nj.nec.com/burges98tutorial.html A Tutorial on Support Vector Machines for Pattern Recognition; this is considered the best introduction and is quite in-depth.
www.cis.ysu.edu/ john/835/notes/notes6.html Situation Calculus

9.2 Links to other AI sites


www.ai.mit.edu AI lab at MIT
www.aaai.org American Assoc. AI
www.cs.cmu.edu/afs/cs.cmu.edu/project/ai-repository/ai/html/air.html
CMU AI Repository
www.cs.reading.ac.uk/people/dwc/ai.html WWW VL AI
www.jair.org Journal of AI Research
www.ai.sri.com SRI AI Center
www.robotwisdom.com/ai/index.html Outsider's Guide to AI
members.vavo.com/Plamen/aicogsci.html Hetrion Guide to Cognitive Sci-
ences and AI
www.primenet.com/pcai PC AI Journal
bubl.ac.uk/link/a/artificialintelligenceresearch.htm Internet Resources for AI
www.geocities.com/ResearchTriangle/Lab/8751/ AI
www-formal.stanford.edu/jmc/whatisai/whatisai.html What is AI
www.cs.washington.edu/research/projects/ai/www/ AI Research Group
www.phy.syr.edu/courses/modules/MM/AI/ai.html MMM AI Introduc-
tion
www-aig.jpl.nasa.gov JPL AI Group
www.cs.brown.edu/research/ai/ Brown Univ. CS AI
www.isis.ecs.soton.ac.uk/resources/aiinfo/ ISIS AI Resources
www.gameai.com/ai.html Game AI

www.drc.ntu.edu.sg/users/mgeorg/conferences.epl AI Events
www.cs.man.ac.uk/ai/ AI Group
www.calresco.org/tutorial.htm Tutorials on AI
www.enteract.com/ rcripe/aipages/ai-intro.htm What is AI concerned
with?
www.psych.utoronto.ca/ reingold/courses/ai/nn.html AI Neural Nets,
What are they?
www.landfield.com/faqs/ai-faq/neural-nets/part1 AI NN FAQ
www.neuroguide.com Neurosciences on the Internet
www.brainsource.com Brain Source, Neuropsychology and Brain Resources
and information
faculty.washington.edu/ wcalvin/bk9/ The Cerebral Code: Thinking a Thought in the Mosaics of the Mind, William H. Calvin, an online book.
www.nimh.nih.gov/neuroinformatics/index.cfm Neuroinformatics, The Human Brain Project
www.firstmonday.dk/issues/issue5 2/ronfeldt/ Game Theory in Auto Racing
www.economics.utoronto.ca/osborne/ Martin J. Osborne's home page. Martin has written several books on game theory and has several chapters of a forthcoming book, 'Introduction to Game Theory', online that you can download and read. It gives a very clear explanation of the Nash equilibrium.
www.few.eur.nl/few/people/vanmarrewijk/geography/zipf/ Zipf's Law
as it relates to geographical economics, trade, location and growth
www.gametheorysociety.org Game Theory Society not much here yet, but
it does have a good list of books.
plato.stanford.edu/entries/game-theory/ Game Theory, history and introduction; a short paper from a philosopher's standpoint
economics101.org/ch17/micro17/ a PowerPoint game theory introduction
linkage.rockefeller.edu/wli/zipf/ Zipf's Law references
news.bbc.co.uk/1/hi/in depth/sci tech/2000/dot life/2225879.stm Computer
Games Start Thinking, BBC Article
www.gameai.com/ The Game AI Page Open Source Software, publications,
and people.

www.research.ibm.com/massive/tdl.html Temporal Difference Learning and TD-Gammon
www.botepidemic.com Bot Epidemic at the forefront of game bot develop-
ment
www.ibm.com/news/morechess.html IBM story on 'Deep Thought'
www.ai.sri.com/ wilkins/bib-chess.html Papers on Chess by David Wilkins
www.rome.ro/ John Romero's Home page
www.gamedev.net GameDev.net - all your game development needs
www-cs-students.stanford.edu/ amitp/gameprog.html Amit's Game Pro-
gramming Page
www.twilightminds.com/bbe.html Brainiac Behavior Engine
personalityforge.com Personality Forge
etext.lib.virginia.edu/helpsheets/regex.html regular expressions
www.alicebot.org/ A.L.I.C.E.
www-ai.ijs.si/eliza/eliza.html Eliza
cogsci.ucsd.edu/ asaygin/tt/ttest.html One of the main benchmarks of
AI is the 'Turing Test'
www.loebner.net/Prizef/loebner-prize.html The Loebner Prize holds a Turing test each year and awards a prize to the winner
www-2.cs.cmu.edu/ awb/ Alan Black, Carnegie Mellon has several useful
publications online
www.isip.msstate.edu/projects/switchboard/ Download Switchboard-1 data transcriptions. Switchboard-1 is a corpus of telephone conversations collected by Texas Instruments in 1990-91. It contains 2,400 two-sided phone conversations.
www.cs.columbia.edu/nlp/ Columbia Natural Language Processing Group
has some really cool projects you might want to check out
perun.si.umich.edu/ radev/u/db/acl/ Association for Computational Lin-
guistics There are searchable references and information on conferences.
www-cs-students.stanford.edu/ pdoyle/quail/notes/pdoyle/natlang.html
AI Natural Language

www.eas.asu.edu/ cse476/atns.htm Introduction to Natural Language Pro-
cessing
www.bestweb.net/ sowa/misc/mathw.htm Mathematical Background
www.cs.tamu.edu/research/CFL/ Center for Fuzzy Logic, Robotics and
Intelligent Systems
www.seattlerobotics.org/encoder/mar98/fuz/index.html Fuzzy Logic Tutorial
www.csu.edu.au/complex systems/fuzzy.html Fuzzy Systems - A Tutorial
cbl.leeds.ac.uk/ paul/prologbook/node18.html First Order Predicate Cal-
culus
www.earlham.edu/ peters/courses/logsys/low-skol.htm The Löwenheim-Skolem Theorem
www.alcyone.com/max/links/alife.html Artificial Life Links
jasss.soc.surrey.ac.uk/JASSS.html Journal of Artificial Societies and Social Simulation
www.santafe.edu/sfi/indexResearch.html Santa Fe Institute (there are several research projects here related to this topic)
www.theatlanticmonthly.com/issues/2002/04/rauch.htm Seeing Around
Corners, The Atlantic Monthly (excellent article)
www.angelfire.com/id/chaplincorp Chaplin Corp has a Java/Neural net program that evolves.
www.aist.go.jp/NIBH/ b0616/Lab/Links.html Applets for neural networks and artificial intelligence
lslwww.epfl.ch/ moshes/introal/introal.html An Introduction to Artificial Life
www.cs.cmu.edu/afs/cs.cmu.edu/project/alv/member/www/projects/ALVINN.html
Autonomous Land Vehicle In a Neural Network (ALVINN)
www-iri.upc.es/people/ros/WebThesis/tutorial.html Spatial Realizabil-
ity of Line Drawings
www-2.cs.cmu.edu/afs/cs/project/cil/ftp/html/v-pubs.html Computer
Vision Online Publications, books and tutorials
www.kurzweilai.net Ramona
www.alicebot.org ALICE
ananova.com Ananova
Different interfaces and information

www.cs.umd.edu/hcil/pubs/treeviz.shtml TreeViz
ccs.mit.edu/papers/CCSWP181 Experiments with Oval
dq.com/homefind Dynamic HomeFinder is another example of a graphical
interface that speeds up queries and imparts more information than could
be absorbed in a textual display.
www.research.microsoft.com/ui/persona/home.htm Persona Project Mi-
crosoft This is a project to develop a user interface with emotions, that
interacts socially and appears intelligent.
www.arcbridge.com/ACTidoc.htm ACTidoc is an agent interface that builds documents on the fly for learning.
agents.www.media.mit.edu/groups/agents MIT Media Lab for Software
Agents Group
www.pitt.edu/ circle/Projects/Atlas.html Atlas tutoring system
www.pitt.edu/ vanlehn/andes.html Andes, An intelligent tutoring system
for physics
www.ryerson.ca/ dgrimsha/courses/cps720/agentEnvironment.html Agent Environment Types
robot8.cps.unizar.es/grtr/navegacion/pfnav.htm A New Potential Field Based Navigation Method
vision.ai.uiuc.edu/dugad/ Rakesh Dugad's Homepage, has a good down-
loadable tutorial on HMM
uirvli.ai.uiuc.edu/dugad/hmm tut.html A Tutorial on Hidden Markov Mod-
els
home.ecn.ab.ca/ jsavard/crypto/co040503.htm Hidden Markov Models
www-2.cs.cmu.edu/ javabayes/ Java Bayes
powerlips.ece.utexas.edu/ joonoo/Bayes Net/bayes.html Tools for Bayesian
Belief Networks
omega.albany.edu:8008/JaynesBook.html Probability Theory: The Logic
of Science ( a statistics book with lots of information on Bayesian logic)
www.cs.helsinki.fi/research/cosco/Calendar/BNCourse/ Bayesian Networks, Course notes
www.cyc.com CYC is a current attempt at building a common sense program; there is an OpenCyc version that you can download and play with on your home computer.
www.ee.cooper.edu/courses/course pages/past courses/EE459/StrIPS
General Problem Solver

citeseer.nj.nec.com/vila94survey.html A survey on Temporal Reasoning
www-formal.stanford.edu/jmc/frames.html Programs with Common Sense,
John McCarthy and his home page
www.acm.org/crossroads/xrds5-2/kdd.html Knowledge Discovery in Databases:
Tools and Techniques
www.kdnuggets.com KD Nuggets: Data Mining, Web Mining, and Knowl-
edge Discovery Guide
www.opencyc.org OpenCyc This is an open source project of Cyc, one of the
most general and complete knowledge based systems.
cui.unige.ch/db-research/Enseignement/analyseinfo/AboutBNF.html
About BNF notation
www.mv.com/ipusers/noetic/iow.html InOtherWords Lexical Database is
a good example of a semantic net.
www.botspot.com/ Bot Spot
interviews.slashdot.org/article.pl?sid=02/07/26/0332225 Slashdot inter-
view with ALICE bot creator Dr. Wallace
alice.sunlitsurf.com/ A.L.I.C.E. AI Foundation
www.dis.uniroma1.it/ iocchi/pub/webnet97.html Information Access on the Web
ict.pue.udlap.mx/people/alfredo/ihc-o99/clases/agentes.html A Tax-
onomy of Agents
lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/AIA/AIA.html
Autonomous Interface Agents
www.isi.edu/isd/LOOM/LOOM-HOME.html Loom Project Home Page
www.ai.mit.edu/projects/iiip/conferences/www95/kr-panel.html Building
Global Knowledge Webs
www.cs.umbc.edu/kqml KQML Web
meta2.stanford.edu/sharing/knowledge.html Knowledge Sharing
ksi.cpsc.ucalgary.ca/KAW/KAW96/bradshaw/KAW.html KAoS: An Open
Agent Architecture Supporting Reuse, Interoperability, and Extensibility
www.cs.umbc.edu/kse/kif/ KIF Knowledge Interchange Format
piano.stanford.edu/concur/language/ Agent Communication Language (ACL)
myspiders.biz.uiowa.edu/ My Spiders

www.microsoft.com/products/msagent/devdownloads.htm MS has a free
agent developer's kit you can download and use
www.bonzi.com Bonzi Buddy, Intelligent Agent (free)
dsp.jpl.nasa.gov/members/payman/swarm/ Swarm Intelligence
www-cia.mty.itesm.mx/ lgarrido/Repositories/IA/index.html Intelligent
Agents Repository
agents.media.mit.edu/index.html MIT Media Lab: Software Agents
homepages.feis.herts.ac.uk/ comqkd/aaai-social.html Socially Intelligent
Agents
www.insead.fr/CALT/Encyclopedia/ComputingSciences/Groupware/VirtualCommunities/
Aglets Library for Java from IBM, this is open source, free code
agents.umbc.edu/ UMBC Agent Web, News and Information on Agents
www.java-agent.org/ Java Agent Services
alicebot.org/ A.L.I.C.E. AI Foundation
agents.umbc.edu/technology/asl.shtml Agent Programming and Script-
ing Languages
www.agentbase.com/survey.html Agent-Based Systems
yoda.cis.temple.edu:8080/UGAIWWW/lectures/rete.html The Rete Al-
gorithm
www.cyc.com CYC, a current, Internet-based common sense knowledge base; there is an open source version you can download and use at home.
www.cis.temple.edu/ ingargio/cis587/readings/wumpus.shtml Wumpus
World
www.cs.cmu.edu/ illah/PAPERS/interleave.txt Time-Saving Tips for Prob-
lem Solving with Incomplete Information
davis.wpi.edu/ matt/courses/soms/ Self Organizing Maps a short course
www.calresco.org/sos/sosfaq.htm Self-Organizing Systems FAQ
www.hh.se/staff/denni/sls course.html Learning and Self Organizing Systems, lecture notes and problems for a graduate level computer class
pespmc1.vub.ac.be/Papers/BootstrappingPask.html Bootstrapping knowl-
edge representations

www.c3.lanl.gov/ rocha/ijhms pask.html Adaptive Recommendation and
Open-Ended Semiosis
artsandscience.concordia.ca/edtech/ETEC606/paskboyd.html Reflections on the Conversation Theory of Gordon Pask
www.cs.colostate.edu/ anderson/res/graphics/ Neural Networks in Com-
puter Graphics
www.ticam.utexas.edu/reports/2002/0202.pdf Neural Nets for Mesh Assessment
www.anc.ed.ac.uk/ amos/hopfield.html Why Hopfield Networks?
www.geocities.com/CapeCanaveral/1624/ Neural Networks at Your Fingertips
www.geocities.com/CapeCanaveral/1624/cpn.html Counter Propagation
Network C source code example to determine the angle of rotation using
computer vision
homepages.goldsmiths.ac.uk/nikolaev/311pnn.htm Probabilistic Neural
Networks
www.cs.wisc.edu/ bolo/shipyard/neural/local.html A Basic Introduction
to Neural Networks
www.shef.ac.uk/psychology/gurney/notes/contents.html Neural Nets:
A short online book
www.cse.unsw.edu.au/ cs9417ml/MLP2/BackPropagation.html Backpropagation
rana.usc.edu:8376/ yuri/kohonen/kohonen.html Java applet demonstrat-
ing SOM
www.willamette.edu/ gorr/classes/cs449/Unsupervised/SOM.html Kohonen's
SOM
davis.wpi.edu/ matt/courses/soms/ A course on SOM
www.quantumpicture.com/index.htm Flo Control an image recognition
neural net to keep the cat from bringing its victims into the house.
www.neci.nec.com/homepages/flake/nodelib/html NODElib, a programming library for rapidly developing neural network simulations
wol.ra.phy.cam.ac.uk/mackay/itprnn/book.html Textbook: Information Theory, Inference and Learning Algorithms, David MacKay, a downloadable textbook
www.shef.ac.uk/psychology/gurney/notes/index.html Neural Nets by
Kevin Gurney

www.maths.uwa.edu.au/ rkealley/ann all/ Artificial Neural Networks, An Introductory Course
nips.djvuzone.org/ Advances in Neural Information Processing Systems, Vol-
umes 0 to 13
www.ee.mu.oz.au/courses/431-469/subjectinfo.html 431-469 Multimedia Signal Processing Course, lecture notes, problems and solutions from the University of Melbourne
www.willamette.edu/ gorr/classes/cs449/intro.html Neural Networks, an
online course
www.mindpixel.com/ Mindpixel
www.ai-forum.org/forum.asp?forum id=1 AIForums
www.gamedev.net/community/forums/forum.asp?forum id=9 GameDev.net
www.generation5.org/cgi-local/ubb/Ultimate.cgi?action=intro Generation5 Forum
sodarace.net/forum/forum.jsp?forum=16 Sodarace
www.igda.org/Forums/forumdisplay.php?forumid=30 IGDA Forums

