
CHAPTER TWELVE

Knowledge representation and myth

W. F. CLOCKSIN

INTRODUCTION

Most research in artificial intelligence (AI) and cognitive science is founded on the basic premises that intelligence is realized through a representational system based on a physical symbol system, and that intelligent activity is goal-oriented, symbolic activity. The best known expositors of these premises have been Newell and Simon (1972). The essential ingredient is representation (the idea that relevant aspects of the world need to be encoded in symbolic form in order to be usable), and research on the practical principles of representation has been a lively area for at least the past twenty years (e.g. Bobrow and Collins 1975). By now most AI and cognitive theories are representationist: the idea that cognition involves the processing of internal 'mental' tokens that stand for external entities or processes. The state of the art is summed up by Hunt (1989) as follows:
The cognitive science movement has been dominated by the computational view of thought, which sees thinking as the manipulation of an internal representation ('mental model') of an external domain. The representation must be expressed in some internal language containing designs for well-formed structures and operations upon them. The analogy is more to a computer programming language than to a natural language. Following Fodor (1975), the internal language will be called 'mentalese'. A computational theory of thought must define the mentalese language and describe a hypothetical machine that can execute programs written in it.

In this contribution I shall attempt to begin to show that representationism may be one-sided: not everybody feels like taking representationism as the starting point for cognitive science and AI.


Arguments put forward by Maze (1991) [...] and by Brooks [...] together with [...] science can have useful applications if [...] such as knowledge representation and [...]. The purpose of this [...] is to [...] representations [...] also propose a link between cognition and [...] cannot be decoupled from the [...] conception of myth is to be [...]tional. Myths are [...] that they cannot be [...]
[...] 'identify conceptual structures at the level of cognition', this aim has the hidden effect of:
(a) being a structuralist hermeneutic: the idea that the world must be interpreted in terms of an underlying structure that explains, interconnects, and organizes what is perceived;
(b) being self-defining as to scope: the organization of cognition into levels is supplied by the theory, not by the experience of the program;
(c) being self-defining as to detailed nature: representationist theories are founded on introspection or 'interior reflection'. Any type of reflection on the interior life is bound to be expressed in symbolic form.


It could be that the sterility of representationist programs as observed by Brooks (1991) and Winograd (1990) is the result of hidden (and rather limited) agendas being unintentionally built into programs. It is possible that all these problems are engendered by AI research's affinity with symbolic representations defined within a formal system having classical truth-theoretic semantics or some nonstandard or modal semantics. Two books by Turner (1984, 1990) give a useful state-of-the-art summary of this symbolic logic approach to knowledge representation. But I suggest that this approach is misleading if we expect it to tell us anything about human cognition. Real human cognition and behaviour is not 'rational'; that is, it cannot guarantee that contradictions will not be derived. The fact that some people, namely trained logicians, might be rational on paper for a few moments each day does not refute the point. The performance of any type of derivation or calculation is a symbol game, the rules of which can be applied with more or less precision by a suitably trained person. The development of logic is useful for understanding argument and for computer programming, by reducing these to a symbol game, but using it to characterize knowledge representations in the service of cognition and behaviour is a job for which it was never intended, and for which a number of researchers consider it unsuited.

EXPERIMENTS WITH REPRESENTATION-FREE SYSTEMS

Recent research has been involved with carrying out complicated perception and action tasks without explicit representations and without the requirement for articulating the goal structure of a problem. It may be argued that the general direction of connectionist research is toward representation-free systems, particularly when the aim is to train networks to control systems whose input/output relationship is too difficult or impractical or impossible to model analytically. However, this aim is not exclusive to connectionism, and connectionism has not stopped researchers from trying to build representationist schemes into networks (Hinton 1987). The value of connectionist research is not that it has proved the value of neural networks (it has not), but that it has motivated investigators who might otherwise not do so to consider 'black box' techniques for developing systems that construct their own input/output relations. Apart from this, most results in connectionism may be placed in the category of implementation techniques.

One example of the investigation of representation-free systems is the State-Space Robotics project (Clocksin and Moore 1989). The experimental system consists of a six-degrees-of-freedom robot arm and two video cameras connected to a minicomputer. The system engages in unattended, unsupervised real-time trial-and-error learning of reaching and rotary pursuit tasks, and after a short period of learning, the system carries out the tasks successfully. The behaviour of the system is such that it appears as though the system designers have 'solved' the so-called correspondence problem of binocular vision and have 'solved' the so-called real-time inverse dynamics equations for the robot arm. In fact, no such algorithms or explicit representations were used. The system operates by trial-and-error filling-in of a ten-dimensional state-space memory that represents the product space of each perceived variable (derived from the video signal) and each output variable (signal to robot arm joints). This space represents the coordination relation of the robot situated in its environment. The robot behaviour incrementally converges toward behaviour that is 'naturally selected' by the environment, and which is not aided by any sort of built-in trajectory plan. The state-space memory is implemented using the textbook technique of a k-D binary tree. The learning procedure, which involves finding near-neighbours in the tree, converges quickly without the requirement for repeated presentation of data. The tree-like structures that form in the memory as a result of learning are at least as resemblant of biological neural structures as are the so-called neural networks.

Situated Robotics (Brooks), State-Space Robotics (Clocksin and Moore), and Animate Perception (Ballard 1991; Whitehead and Ballard 1991) are three of the several independent strands of research that turn out to have a common aim: that representations are not built into the systems by a designer. Any post hoc analysis of a system's memory contents may reveal structures which to a third-party observer appear to be representational, but the point is that the system's designer did not put them there. More importantly, the system's designer did not specify rules of formation by which these structures are constructed and modified in a context-dependent way in the course of the system's operation.
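For readers who prefer a concrete illustration, the following sketch conveys the flavour of such a trial-and-error state-space memory. It is not the system of Clocksin and Moore (1989): the one-dimensional 'plant', the names, the perturbation strategy, and the brute-force near-neighbour search (standing in for the k-D binary tree of the actual implementation) are all assumptions introduced purely for illustration.

    import random

    class StateSpaceMemory:
        """Memory over the product space of perceived and output variables.

        Each entry pairs a perceived outcome with the command that was tried
        and the error it produced.  The real system stored such points in a
        k-D binary tree for fast near-neighbour lookup; a brute-force search
        is used here only to keep the sketch short.
        """
        def __init__(self):
            self.points = []                     # list of (state, command, error)

        def add(self, state, command, error):
            self.points.append((state, command, error))

        def nearest(self, state):
            """Return the stored point whose perceived state is closest."""
            if not self.points:
                return None
            return min(self.points, key=lambda p: abs(p[0] - state))

    def plant(u):
        """Unknown, non-linear input/output relation standing in for the arm."""
        return 0.8 * u + 0.1 * u ** 3

    def act(memory, target, noise=0.2):
        """One trial: recall a near neighbour, perturb it, record what happened."""
        guess = memory.nearest(target)
        command = guess[1] if guess else random.uniform(-1.0, 1.0)
        command += random.gauss(0.0, noise)      # trial-and-error perturbation
        outcome = plant(command)                 # what the 'arm' actually did
        memory.add(outcome, command, abs(outcome - target))
        return command, outcome

    memory = StateSpaceMemory()
    target = 0.5
    for trial in range(200):
        act(memory, target)

    best = min(memory.points, key=lambda p: p[2])
    print("best command found:", round(best[1], 3), "outcome:", round(best[0], 3))

The point of the sketch is only that the memory fills in a product space of perceived and output variables as a side-effect of acting, with no trajectory plan or explicit representation supplied by the designer.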

A MYTH-HERMENEUTIC OF COGNITION AND REPRESENTATION
The points raised in the following discussion have been inspired by the work of the analytical psychologist Jung. Some acquaintance with his ideas of the collective unconscious, archetype, and myth is assumed. Most AI researchers agree that the aim of knowledge representation is to identify conceptual structures at the level of cognition. Leaving aside for the moment the implicit structuralist hermeneutic of that statement, knowledge representations try to represent an interior life, and they originate with some sort of introspection or interior reflection. Owing to our heritage (from species and culture) of 'interior archaeology', as studied by psychoanalysts, any type of reflection on the interior life is bound to be expressed in symbolic form, in the same language that has become associated with myths. Myth must therefore be considered a characteristic of being human, and is not merely a matter of archaic primitivism used only by unintelligent people who can understand in no other way. Nor is myth the sole property of peoples whose technology has not developed (indeed, the advanced technology of industrialized countries is such as to project and empower our myths to a high degree). Nor is myth a matter of the decay of reason or of a disordered unconscious. I shall not discuss those aspects of myth usually of interest to anthropologists, in which myth is merely a practical service to answer questions of aetiology and eschatology: for personal empowerment, to justify existing social systems, and to account for traditional rites and customs.

In the devising and use of knowledge representations as technical artefacts, we are dealing with a myth-creation process. Researchers often think of knowledge representations as animated, metaphorically alive. The representations 'live' in a space of human concerns and have components that store data of interest to us. They communicate among themselves along links (or connections or channels) and even reproduce (the 'spawning' of processes is a frequently used metaphor).

During the past thirty years, artificial intelligence research has unintentionally given rise to a whole new mythology. There is a level at which aspects of standard knowledge-representation techniques can be identified with the archetypes of the collective unconscious. For example, the roles that nodes may take as suppliers (animus) or receivers (anima) of data, the organization into layers or hierarchy, and the minor pantheon of active and autonomous 'agents'. The 'binding' and 'substitution' of variables within structures corresponds directly to notions of substitution in the semiotics of myth. There are many more such correspondences to be found by those disposed to look for them.


The use of formal logic as a framework for representing knowledge can be seen not only as an operational technique, but as a particularly rich projection of archetypes. The methodology of rigorous axiomatizing of knowledge (which flourishes in AI research notwithstanding the fact that human 'rationality' is frail and fallible) may on the one hand be identified with a compulsion for order: deeply rooted in the unconscious as the masterly 'hero' archetype. Alternatively, the adherence to logic as a representational framework can be seen also as a projection of the archetype of the virgin: sublime, immune to the pitfalls of human weakness, and free from the defects of the world. Finally, the idea that knowledge representation in the service of cognition proceeds by means of the matching of terms according to the unification algorithm (or similar algorithms) has an essential archetypal meaning in terms of the union of the 'above' (the term in the AI program) with the 'below' (the term in the input data): in archetypal terms this is associated with the dream of the earth that reaches up to touch the sky; the symbol of ultimate unity in the mythic concept of a divine marriage. The fruit of this marriage is the binding of variables to terms; so the logician's concept of instantiation is archetypally related to the theologian's concept of incarnation.

Researchers often ask each other whether there is a psychological validity to their work on knowledge representation. Certainly the psychocultural/psychoanalytic validity seems to be there already in the form that representations have taken, but I expect this comes as little comfort, for it says more about the researchers themselves than about the validation of particular scientific results.
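Since the passage turns on the technical notion of unification, a minimal sketch may help to fix the idea being mythologized: matching a term from the program ('above') against a term from the data ('below') yields a substitution that binds variables, that is, an instantiation. This toy unifier is written for illustration only (it omits the occurs check and is not the algorithm of any particular system); it treats terms as nested tuples and variables as capitalized strings.

    def is_var(t):
        """Variables are strings beginning with an upper-case letter, e.g. 'X'."""
        return isinstance(t, str) and t[:1].isupper()

    def bind(var, term, subst):
        """Extend the substitution, checking consistency with existing bindings."""
        if var in subst:
            return unify(subst[var], term, subst)
        new = dict(subst)
        new[var] = term
        return new

    def unify(a, b, subst=None):
        """Return a substitution making terms a and b equal, or None on failure."""
        if subst is None:
            subst = {}
        if a == b:
            return subst
        if is_var(a):
            return bind(a, b, subst)
        if is_var(b):
            return bind(b, a, subst)
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None

    # The term 'above' (in the program) meets the term 'below' (in the data):
    program_term = ("parent", "X", "bill")
    data_term    = ("parent", "alice", "bill")
    print(unify(program_term, data_term))   # {'X': 'alice'}

Running it on the two terms shown prints {'X': 'alice'}: the 'fruit of the marriage' is precisely this binding of a variable to a term.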

PURPOSIVE BEHAVIOUR

One example of a myth is the notion of goal-directed (purposive) behaviour. In particular, researchers' desires to attribute concepts such as purpose to animals need explanation. People seem to have a predilection for teleological explanations of behaviour. For example, some researchers of animal behaviour have observed what appears to be cheating, lying, and other forms of deceit among populations of (non-human) animals. This is taken as evidence for the idea that these animals manipulate and represent knowledge, since it is assumed that a situation needs to be represented to a sufficient degree before the animal can decide to use the representation as information necessary to plan an act of deceit. But according to the myth-hermeneutic, the idea that the animal represents knowledge is actually a myth, constructed not by the animal but by the researcher, who according to psychoanalytic theory is drawing upon the primitive 'trickster' archetype in his unconscious in order to interpret his observations of the animal.


There is no reason at all to believe that an animal's act that appears to us to be deceitful is intentional, even if the behaviour is reliably reproducible. For, what appear to be goals may well be achieved as a consequence of naturally selected behaviour without supposing that the animal 'had in mind' what it 'wanted' to achieve. There is no known limit to the complexity of a 'mere' reflex behaviour, even without considering the evolution of such behaviour over the time span of a whole species in its environment. Even the most basic of finite-state machines can be programmed to exhibit complicated forms of deceit without an inbuilt knowledge-representation scheme. The fact that a suitable program need not be written by a person, but may simply evolve over a period of time as the result of a successive enumeration of incrementally perturbed programs, is in addition to but beside the point.

So people seem to be predisposed to assume that deceitful acts by animals are intentional. Such assumptions are not restricted to animals and deceitful acts. The experiments of Michotte showed vividly that people are willing to attribute not only intentions but whole personalities to little coloured cardboard squares moving about on a screen. We all know that cardboard squares do not have self-awareness, and the attribution of personalities is just an entertaining fantasy. As for machines, we think that machines do not have self-awareness, but many people like to think it might be possible to program machines to have self-awareness. Equipping machines with what we consider to be the necessary mechanism for intelligent behaviour is not considered a fantasy, but a serious business for many AI researchers. As for animals, we are genuinely not sure of the extent of self-awareness, so it is easier for us to attribute ratiocination and its presumed attendant knowledge representations to animals. As for people, we take such behaviour for granted, and furthermore we seem to be generous in the attribution of human characteristics to objects and creatures other than humans. The point is that according to the mythic hermeneutic, all these attributions have a common cause and cognitive foundation: the myth-formation process at work in the most primitive layers of our psyche.
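The earlier claim that even a simple finite-state machine can produce behaviour an observer would describe as deceitful, without any representation of another agent's beliefs, can be made concrete with a toy example. The scenario (an animal that gives a false alarm call whenever a rival approaches while it is feeding) and every state and symbol name below are invented purely for illustration.

    # A three-state machine whose fixed transition table happens to produce
    # behaviour an observer might describe as 'deceitful': it emits a (false)
    # alarm call whenever a rival approaches while it is feeding.  Nothing in
    # the machine models the rival's beliefs; there is only state and input.
    TRANSITIONS = {
        # (state, input)      : (next_state, output)
        ("foraging", "food")  : ("feeding",  "eat"),
        ("foraging", "rival") : ("foraging", "ignore"),
        ("feeding",  "rival") : ("hiding",   "alarm_call"),   # the 'lie'
        ("feeding",  "quiet") : ("feeding",  "eat"),
        ("hiding",   "quiet") : ("feeding",  "eat"),
        ("hiding",   "rival") : ("hiding",   "wait"),
    }

    def run(inputs, state="foraging"):
        outputs = []
        for symbol in inputs:
            state, output = TRANSITIONS.get((state, symbol), (state, "ignore"))
            outputs.append(output)
        return outputs

    print(run(["food", "quiet", "rival", "quiet", "rival"]))
    # ['eat', 'eat', 'alarm_call', 'eat', 'alarm_call']

Nothing in the table models the rival at all; the 'lie' is simply one fixed transition among others, yet a third-party observer could be tempted to read intent into it.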

DISCUSSION

Knowledge representations are outward expressions of an interior life, and the act of investigating knowledge representations is also an outward expression of the interior life. Thus it should not be surprising that archetypes of the unconscious are stirred to conscious expression by any encounter with computational realities that might correspond to their meaning. We can therefore expect that artificial intelligence research, particularly in the area of knowledge representation, should be a great call to the forces of the unconscious, because AI has the aim of investigating cognition itself via what are presumed to be working models. Yet AI researchers are not accustomed to a methodology of discernment: in this case, the querying of the extent to which their new knowledge representation technique or language really is an objective representation of some aspect of cognition, or the extent to which it is a manifestation of 'the images that surge and tumble in the unconscious archaeological layers of our psyche', to use Boff's (1979) phrase.

Some philosophers of science have also suggested that science and myth are closer relations than has been previously admitted. For example, Hesse (1983) suggests that it is a mistake to interpret the pressure towards universalizable science as the search for a comprehensive true theory corresponding to reality. Instead, scientific theory is interpreted as one type of response to cultural needs for myth and ideology. Thus, conflicting mythologies (including scientific ones) should not be seen as autonomous atoms within the social milieu, but as interacting systems of cognition and value. Popper (1992) has commented that 'poetry and science have the same origin. They originate in myths.'

What form might a cognitive architecture based on a mythic hermeneutic take? First, the assumption is that at the lowest level of cognition there is found a number of primitive archetypes that interact with perceptions and memories to form subconscious patterns of activation I shall call subnarratives. After further elaboration and interaction with perception and memory, some of these subnarratives may surface into conscious life to be articulated as myths. These myths probably take a primitive form of narratives which are articulated over time and are closely coupled with rhythmic motor behaviour. Such a primitive form is to be contrasted with the general AI view of symbolic conceptual statements, or a network of such statements, that 'reside' in the brain.

It is possible that further insight may be obtained from Cupitt's (1991) work on narratives, although from Cupitt's standpoint he must consider narrative to fulfil a cultural need rather than the basic organic need proposed here. I regard cultural influences as essential to the development of cognition, but they function to constrain the variety of perceptions and memories that are exposed to the system, and consequently to constrain the range of possible narratives generated by the system.


Precursors of the architecture may also be discerned from Minsky's (1985) concept of the society of mind, in which multiple mental agents are specialized for primitive tasks and are grouped in ways to carry out more complicated tasks. Minsky's Builder and Wrecker agents, for example, may relate to primitive archetypes. Minsky's description of Wrecker strongly suggests the Jungian Trickster archetype, while Builder is probably a composite of more primitive Jungian archetypes in which the Hero features. Minsky's notion of frames can be seen as a third-party description of how narratives might be assembled. This of course does not necessarily imply that frames have any further instrumental or representational significance. I consider narratives to be concept-free, contextualized only within the behaviour of the neural substrate.
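Purely as an illustrative sketch, and not as a specification given anywhere in this chapter, the architecture outlined above might be caricatured as follows: archetypes act as low-level pattern generators whose activations, biased by memory, accumulate percepts into subnarratives, some of which eventually 'surface' as articulated narratives. Every name, threshold, and data structure below is an assumption introduced solely for the example.

    import random
    from collections import defaultdict

    random.seed(0)
    ARCHETYPES = ["hero", "trickster", "virgin", "wrecker", "builder"]

    def activation(archetype, percept, memory):
        """Toy affinity between an archetype and a percept, biased by memory."""
        # Affinity is random apart from a memory bias; a fuller model would
        # derive it from the content of the percept itself.
        return random.random() + 0.1 * memory[archetype]

    def step(percept, memory, subnarratives):
        """One cycle: archetypes react to a percept and extend subnarratives."""
        for archetype in ARCHETYPES:
            if activation(archetype, percept, memory) > 0.5:   # arbitrary threshold
                subnarratives[archetype].append(percept)       # pattern of activation
                memory[archetype] += 1                         # memory reinforces it

    def surfaced(subnarratives, length=3):
        """Subnarratives long enough to 'surface' as articulated narratives."""
        return {k: v for k, v in subnarratives.items() if len(v) >= length}

    memory = defaultdict(int)
    subnarratives = defaultdict(list)
    for percept in ["stranger", "gift", "threat", "stranger", "loss", "threat"]:
        step(percept, memory, subnarratives)

    for archetype, narrative in surfaced(subnarratives).items():
        print(archetype, "->", " / ".join(narrative))

The sketch is deliberately concept-free in the sense used above: nothing is stored except patterns of activation over time, and whatever 'narratives' emerge are a by-product of thresholds and reinforcement rather than of symbolic statements residing anywhere.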

CONCLUSION

There is a connection between symbol, myth, and an important area of AI research: the representation of what are presumed to be cognitive processes underlying knowledge and behaviour. I have suggested how knowledge representation might be interpreted according to a hermeneutic of myth. Any practical consequence of this approach will involve the implementation of a cognitive architecture based on archetypes and narratives, in contrast to the more widely known architecture based on, for example, the knowledge base and inference engine with its essential requirement for logical representation. And perhaps one day Marvin Minsky's book The society of mind will be affectionately remembered, not as a work of science, but as one of the foremost mythologies of our time.

REFERENCES

Ballard, D.H. (1991). Animate vision. Artificial Intelligence, 48, 57-86.
Bobrow, D. and Collins, A. (eds) (1975). Representation and understanding. Academic Press, London.
Boff, L. (1979). The maternal face of God. Collins, London.
Brooks, R.A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-60.
Clocksin, W.F. and Moore, A.W. (1989). Experiments in adaptive state-space robotics. In Proceedings of the seventh conference of the Society for Artificial Intelligence and Simulation of Behaviour (ed. A.G. Cohn), pp. 115-25.
Cupitt, D. (1991). What is a story? SCM Press, London.
Fodor, J. (1975). The language of thought. Harvester Press, Hemel Hempstead, Herts.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-46.
Hesse, M. (1983). Cosmology as myth. Concilium, 166, 49-54.
Hinton, G.E. (1987). Learning distributed representations of concepts. Proceedings of the Cognitive Science Society (CSS-9), 8.
Hunt, E.B. (1989). Cognitive science: definition, status and questions. Annual Review of Psychology, 40, 603-29.
McDermott, D.V. (1976). Artificial intelligence meets natural stupidity. SIGART Newsletter, 57, 4-9. Reprinted in Mind design (ed. J. Haugeland), pp. 143-60. MIT Press, Cambridge, Mass.
Maze, J.R. (1991). Representationism, realism and the redundancy of 'mentalese'. Theory and Psychology, 1(2), 163-85.
Michotte, A. (1963). The perception of causality. Basic Books, New York.
Minsky, M. (1985). The society of mind. Simon and Schuster, New York.
Newell, A. and Simon, H.A. (1972). Human problem solving. Prentice-Hall, Englewood Cliffs, NJ.
Popper, K. (1992). In search of a better world: lectures and essays from thirty years. Routledge, London.
Turner, R. (1984). Logics for artificial intelligence. Ellis Horwood, Chichester, Sussex.
Turner, R. (1990). Truth and modality for knowledge representation. Ellis Horwood, Chichester, Sussex.
Whitehead, S.D. and Ballard, D.H. (1991). Learning to perceive and act by trial and error. Machine Learning, 7, 45-83.
Winograd, T. (1990). Thinking machines? Can there be? Are we? In The foundations of artificial intelligence (ed. D. Partridge and Y. Wilks). Cambridge University Press.
