
Science, Mind, and Limits of Understanding

Noam Chomsky
The Science and Faith Foundation (STOQ), The Vatican, January 2014
One of the most profound insights into language and mind, I think, was Descartes's recognition of what we may call "the creative aspect of language use": the ordinary use of language is typically innovative without bounds, appropriate to circumstances but not caused by them, a crucial distinction, and can engender thoughts in others that they recognize they could have expressed themselves. Given the intimate relation of language and thought, these are properties of human thought as well. This insight is the primary basis for Descartes's scientific theory of mind and body. There is no sound reason to question its validity, as far as I am aware. Its implications, if valid, are far-reaching, among them what it suggests about the limits of human understanding, as becomes clearer when we consider the place of these reflections in the development of modern science from the earliest days.
It is important to bear in mind that insofar as it was grounded in these terms,
Cartesian dualism was a respectable scientific theory, proven wrong (in ways that
are often misunderstood), but that is the common fate of respectable theories.

The background is the so-called mechanical philosophy, mechanical science in modern terminology. This doctrine, originating with Galileo and his contemporaries,
held that the world is a machine, operating by mechanical principles, much like the
remarkable devices that were being constructed by skilled artisans of the day and
that stimulated the scientific imagination much as computers do today; devices with
gears, levers, and other mechanical components, interacting through direct contact
with no mysterious forces relating them. The doctrine held that the entire world is
similar: it could in principle be constructed by a skilled artisan, and was in fact
created by a super-skilled artisan. The doctrine was intended to replace the resort to
occult properties on the part of the neoscholastics: their appeal to mysterious
sympathies and antipathies, to forms flitting through the air as the means of
perception, the idea that rocks fall and steam rises because they are moving to their "natural place," and similar notions that were mocked by the new science.

The mechanical philosophy provided the very criterion for intelligibility in the
sciences. Galileo insisted that theories are intelligible, in his words, only if we can "duplicate [their posits] by means of appropriate artificial devices." The same
conception, which became the reigning orthodoxy, was maintained and developed
by the other leading figures of the scientific revolution: Descartes, Leibniz, Huygens,
Newton, and others.

Today Descartes is remembered mainly for his philosophical reflections, but he was
primarily a working scientist and presumably thought of himself that way, as his
contemporaries did. His great achievement, he believed, was to have firmly
established the mechanical philosophy, to have shown that the world is indeed a
machine, that the phenomena of nature could be accounted for in mechanical terms
in the sense of the science of the day. But he discovered phenomena that appeared
to escape the reach of mechanical science. Primary among them, for Descartes, was
the creative aspect of language use, a capacity unique to humans that cannot be
duplicated by machines and does not exist among animals, which in fact were a
variety of machines, in his conception.

As a serious and honest scientist, Descartes therefore invoked a new principle to accommodate these non-mechanical phenomena, a kind of creative principle. In the
substance philosophy of the day, this was a new substance, res cogitans, which
stood alongside of res extensa. This dichotomy constitutes the mind-body theory in
its scientific version. Then followed further tasks: to explain how the two substances
interact and to devise experimental tests to determine whether some other creature
has a mind like ours. These tasks were undertaken by Descartes and his followers,
notably Géraud de Cordemoy; and in the domain of language, by the logician-grammarians of Port Royal and the tradition of rational and philosophical grammar
that succeeded them, not strictly Cartesian but influenced by Cartesian ideas.

All of this is normal science, and like much normal science, it was soon shown to be
incorrect. Newton demonstrated that one of the two substances does not exist: res
extensa. The properties of matter, Newton showed, escape the bounds of the
mechanical philosophy. To account for them it is necessary to resort to interaction
without contact. Not surprisingly, Newton was condemned by the great physicists of
the day for invoking the despised occult properties of the neo-scholastics. Newton
largely agreed. He regarded action at a distance, in his words, as "so great an Absurdity, that I believe no Man who has in philosophical matters a competent Faculty of thinking, can ever fall into it." Newton however argued that these ideas, though absurd, were not "occult" in the traditional despised sense. Nevertheless, by invoking this absurdity, we concede that we do not understand the phenomena of the material world. To quote one standard scholarly source, "By 'understand' Newton still meant what his critics meant: 'understand in mechanical terms of contact action.'"

It is commonly believed that Newton showed that the world is a machine, following mechanical principles, and that we can therefore dismiss "the ghost in the machine," the mind, with appropriate ridicule. The facts are the opposite: Newton exorcised the machine, leaving the ghost intact. The mind-body problem in its scientific form did indeed vanish as unformulable, because one of its terms, body, does not exist in any intelligible form. Newton knew this very well, and so did his great contemporaries.

John Locke wrote that we remain in "incurable ignorance of what we desire to know" about matter and its effects, and no "science of bodies [that provides true explanations is] within our reach." Nevertheless, he continued, he was convinced by "the judicious Mr. Newton's incomparable book" that it is too bold a presumption "to limit God's power, in this point, by my narrow conceptions." Though gravitation of matter to matter is "inconceivable to me," nevertheless, as Newton demonstrated, we must recognize that it is within God's power "to put into bodies, powers and ways of operations, above what can be derived from our idea of body, or can be explained by what we know of matter." And thanks to Newton's work, we know that God has done so. The properties of the material world are inconceivable to us, but real nevertheless. Newton understood the quandary. For the rest of his life, he sought some way to overcome the absurdity, suggesting various possibilities, but not committing himself to any of them because he could not show how they might work and, as he always insisted, he would not "feign hypotheses" beyond what can be experimentally established.

Replacing the theological with a cognitive framework, David Hume agreed with these conclusions. In his history of England, Hume describes Newton as "the greatest and rarest genius that ever arose for the ornament and instruction of the species." His most spectacular achievement was that while he "seemed to draw the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored [Nature's] ultimate secrets to that obscurity, in which they ever did and ever will remain."

As the import of Newton's discoveries was gradually assimilated in the sciences, the
absurdity recognized by Newton and his great contemporaries became scientific
common sense. The properties of the natural world are inconceivable to us, but that
does not matter. The goals of scientific inquiry were implicitly restricted: from the
kind of conceivability that was a criterion for true understanding in early modern
science from Galileo through Newton and beyond, to something much more limited:
intelligibility of theories about the world. This seems to me a step of considerable
significance in the history of human thought and inquiry, more so than is generally
recognized, though it has been understood by historians of science.

Friedrich Lange, in his classic 19th century history of materialism, observed that we "have so accustomed ourselves to the abstract notion of forces, or rather to a notion hovering in a mystic obscurity between abstraction and concrete comprehension, that we no longer find any difficulty in making one particle of matter act upon another without immediate contact, through void space without any material link. From such ideas the great mathematicians and physicists of the seventeenth century were far removed. They were all in so far genuine Materialists in the sense of ancient Materialism that they made immediate contact a condition of influence." This transition over time is "one of the most important turning-points in the whole history of Materialism," he continued, depriving the doctrine of much significance, if any at all. "What Newton held to be so great an absurdity that no philosophic thinker could light upon it, is prized by posterity as Newton's great discovery of the harmony of the universe!"

Similar conclusions are commonplace in the history of science. In the mid-twentieth century, Alexander Koyré observed that Newton demonstrated that "a purely materialistic pattern of nature is utterly impossible (and a purely materialistic or mechanistic physics, such as that of Lucretius or of Descartes, is utterly impossible, too)"; his mathematical physics required the admission into the body of science of incomprehensible and inexplicable "facts" imposed upon us by empiricism, by what is observed and our conclusions from these observations.

With the disappearance of the scientific concept of body (material, physical, etc.), what happens to the second substance, res cogitans/mind, which was left untouched by Newton's startling discoveries? A plausible answer was suggested by John Locke, also within the reigning theological framework. He wrote that just as God added to matter such inconceivable properties as gravitational attraction, he might also have "superadded" to matter the capacity of thought. In the years that followed, Locke's God was reinterpreted as nature, a move that opened the topic to inquiry. That path was pursued extensively in the years that followed, leading to the conclusion that mental processes are properties of certain kinds of organized matter. Restating the fairly common understanding of the time, Charles Darwin, in his early notebooks, wrote that there is no need to regard thought, "a secretion of the brain," as more wonderful than gravity, a property of matter; all inconceivable to us, but that is not a fact about the external world; rather, about our cognitive limitations.

It is of some interest that all of this has been forgotten, and is now being rediscovered. Nobel laureate Francis Crick, famous for the discovery of DNA, formulated what he called the "astonishing hypothesis" that our mental and emotional states are in fact "no more than the behavior of a vast assembly of nerve cells and their associated molecules." In the philosophical literature, this rediscovery has sometimes been regarded as a radical new idea in the study of mind. To cite one prominent source, the radical new idea is "the bold assertion that mental phenomena are entirely natural and caused by the neurophysiological activities of the brain." In fact, the many proposals of this sort reiterate, in virtually the same words, formulations of centuries ago, after the traditional mind-body problem became unformulable with Newton's demolition of the only coherent notion of body (or physical, material, etc.). For example, 18th century chemist/philosopher Joseph Priestley's conclusion that properties "termed mental" reduce to "the organical structure of the brain," stated in different words by Locke, Hume, Darwin, and many others, and almost inescapable, it would seem, after the collapse of the mechanical philosophy that provided the foundations for early modern science, and its criteria of intelligibility.

The last decade of the twentieth century was designated "the Decade of the Brain." In introducing a collection of essays reviewing its results, neuroscientist Vernon Mountcastle formulated the guiding theme of the volume as the thesis of the new biology that "Things mental, indeed minds, are emergent properties of brains, [though] these emergences are ... produced by principles that we do not yet understand," again reiterating eighteenth century insights in virtually the same words.

The phrase "we do not yet understand," however, should strike a note of caution. We might recall Bertrand Russell's observation in 1927 that chemical laws "cannot at present be reduced to physical laws." That was true, leading eminent scientists, including Nobel laureates, to regard chemistry as no more than a mode of computation that could predict experimental results, but not real science. Soon after Russell wrote, it was discovered that his observation, though correct, was understated. Chemical laws never would be reducible to physical laws, as physics was then understood. After physics underwent radical changes, with the quantum-theoretic revolution, the new physics was unified with a virtually unchanged chemistry, but there was never reduction in the anticipated sense.

There may be some lessons here for neuroscience and philosophy of mind.
Contemporary neuroscience is hardly as well-established as physics was a century
ago. There are what seem to me to be cogent critiques of its foundational
assumptions, notably recent work by cognitive neuroscientists C.R. Gallistel and
Adam Philip King. The common slogan that study of mind is neuroscience at an
abstract level might turn out to be just as misleading as comparable statements
about chemistry and physics ninety years ago. Unification may take place, but that
might require radical rethinking of the neurosciences, perhaps guided by
computational theories of cognitive processes, as Gallistel and King suggest.

The development of chemistry after Newton also has lessons for neuroscience and
cognitive science. The 18th century chemist Joseph Black recommended that
chemical affinity "be received as a first principle, which we cannot explain any more than Newton could explain gravitation, and let us defer accounting for the laws of affinity, till we have established such a body of doctrine as he has established concerning the laws of gravitation." The course Black outlined is the one that was actually followed as chemistry proceeded to establish a rich body of doctrine.


Historian of chemistry Arnold Thackray observes that the triumphs of chemistry were "built on no reductionist foundation" but rather achieved "in isolation from the newly emerging science of physics." Interestingly, Thackray continues, Newton and his followers did attempt to pursue the thoroughly Newtonian and reductionist task of "uncovering the general mathematical laws which govern all chemical behavior" and to develop a principled science of chemical mechanisms based on physics and its concepts of interactions among "the ultimate permanent particles of matter." But the Newtonian program was undercut by Dalton's "astonishingly successful weight-quantification of chemical units," Thackray continues, shifting "the whole area of philosophical debate among chemists from that of chemical mechanisms (the why? of reaction) to that of chemical units (the what? and how much?)," a theory that was "profoundly antiphysicalist and anti-Newtonian" in its rejection of the unity of matter, and its dismissal of short-range forces. Continuing, Thackray writes that Dalton's ideas were "chemically successful. Hence they have enjoyed the homage of history, unlike the philosophically more coherent, if less successful, reductionist schemes of the Newtonians."

Adopting contemporary terminology, we might say that Dalton disregarded the explanatory gap between chemistry and physics by ignoring the underlying
physics, much as post-Newtonian physicists disregarded the explanatory gap
between Newtonian dynamics and the mechanical philosophy by rejecting the
latter, and thereby tacitly lowering the goals of science in a highly significant way,
as I mentioned.

Contemporary studies of mind are deeply troubled by the explanatory gap between the science of mind and neuroscience, in particular between computational theories of cognition, including language, and neuroscience. I think they would be well-advised to take seriously the history of chemistry. Today's task is to develop a "body of doctrine" to explain what appear to be the critically significant phenomena of language and mind, much as chemists did. It is of course
wise to keep the explanatory gap in mind, to seek ultimate unification, and to
pursue what seem to be promising steps towards unification, while nevertheless
recognizing that as often in the past, unification may not be reduction, but rather
revision of what is regarded as the fundamental discipline, the reduction basis,
the brain sciences in this case.

Locke and Hume, and many less-remembered figures of the day, understood that
much of the nature of the world is inconceivable to us. There were actually two
different kinds of reasons for this. For Locke and Hume, the reasons were primarily
epistemological. Hume in particular developed the idea that we can only be
confident of immediate impressions, of appearances. Everything else is a mental
construction. In particular, and of crucial significance, that is true of identity through time, problems that trace back to the pre-Socratics: the identity of a river or a tree or, most importantly, a person as they change through time. These are mental constructions; we cannot know whether they are properties of the world, a metaphysical reality. As Hume put the matter, we must maintain "a modest skepticism to a certain degree, and a fair confession of ignorance in subjects, that exceed all human capacity," which for Hume includes virtually everything beyond appearances. We must refrain from "disquisitions concerning their real nature and operations." It is the imagination that leads us to believe that we experience external continuing objects, including a mind or self. The imagination, furthermore, is "a kind of magical faculty in the soul, which ... is inexplicable by the utmost efforts of human understanding," so Hume argued.

A different kind of reason why the nature of the world is inconceivable to us was provided by the judicious Mr. Newton, who apparently was not interested in the epistemological problems that vexed Locke and Hume. Newton scholar Andrew Janiak concludes that Newton regarded such global skepticism as irrelevant: he takes the possibility of our knowledge of nature for granted. For Newton, the primary epistemic questions confronting us are raised by physical theory itself. Locke and Hume, as I mentioned, took quite seriously the new science-based skepticism that resulted from Newton's demolition of the mechanical philosophy, which had provided the very criterion of intelligibility for the scientific revolution. That is why Hume lauded Newton for having "restored [Nature's] ultimate secrets to that obscurity, in which they ever did and ever will remain."

For these quite different kinds of reasons, the great figures of the scientific
revolution and the Enlightenment believed that there are phenomena that fall
beyond human understanding. Their reasoning seems to me substantial, and not
easily dismissed. But contemporary doctrine is quite different. The conclusions are
regarded as a dangerous heresy. They are derided as "the new mysterianism," a term coined by philosopher Owen Flanagan, who defined it as "a postmodern position designed to drive a railroad spike through the heart of scientism." Flanagan
is referring specifically to explanation of consciousness, but the same concerns hold
of mental processes in general.

The new mysterianism is compared today with the old mysterianism, Cartesian
dualism, its fate typically misunderstood. To repeat, Cartesian dualism was a
perfectly respectable scientific doctrine, disproven by Newton, who exorcised the
machine, leaving the ghost intact, contrary to what is commonly believed.

The new mysterianism, I believe, is misnamed. It should be called truism -- at least, for anyone who accepts the major findings of modern biology, which regards humans as part of the organic world. If so, then they will be like all other organisms in having a genetic endowment that enables them to grow and develop to their
mature form. By simple logic, the endowment that makes this possible also
excludes other paths of development. The endowment that yields scope also
establishes limits. What enables us to grow legs and arms, and a mammalian visual
system, prevents us from growing wings and having an insect visual system.

All of this is indeed truism, and for non-mystics, the same should be expected to
hold for cognitive capacities. We understand this well for other organisms. Thus we
are not surprised to discover that rats are unable to run prime number mazes no
matter how much training they receive; they simply lack the relevant concept in
their cognitive repertoire. By the same token, we are not surprised that humans are
incapable of the remarkable navigational feats of ants and bees; we simply lack the
cognitive capacities, though we can sometimes duplicate their feats with
sophisticated instruments. The truisms extend to higher mental faculties. For such
reasons, we should, I think, be prepared to join the distinguished company of
Newton, Locke, Hume and other dedicated mysterians.

For accuracy, we should qualify the concept of mysteries by relativizing it to organisms. Thus what is a mystery for rats might not be a mystery for humans, and what is a mystery for humans is instinctive for ants and bees.

Dismissal of mysterianism seems to me one illustration of a widespread form of dualism, a kind of epistemological and methodological dualism, which tacitly adopts
the principle that study of mental aspects of the world should proceed in some
fundamentally different way from study of what are considered physical aspects of
the world, rejecting what are regarded as truisms outside the domain of mental
processes. This new dualism seems to me truly pernicious, unlike Cartesian dualism,
which was respectable science. The new methodological dualism, in contrast, seems
to me to have nothing to recommend it.

Far from bewailing the existence of mysteries-for-humans, we should be extremely grateful for it. With no limits to growth and development, our cognitive capacities
would also have no scope. Similarly, if the genetic endowment imposed no
constraints on growth and development of an organism it could become only a
shapeless amoeboid creature, reflecting accidents of an unanalyzed environment,
each quite unlike the next. Classical aesthetic theory recognized the same relation
between scope and limits. Without rules, there can be no genuinely creative
activity, even when creative work challenges and revises prevailing rules.

Contemporary rejection of mysterianism, that is, truism, is quite widespread. One recent example that has received considerable attention is an interesting and informative book by physicist David Deutsch. He writes that potential progress is unbounded as a result of the achievements of the Enlightenment and early modern science, which directed science to the search for best explanations. As philosopher/physicist David Albert expounds his thesis, with the introduction of that particular habit of concocting and evaluating new hypotheses, there was a sense in which we could do anything. The capacities of a community that has mastered that method to survive, and to learn, and to remake the world according to its inclinations, are (in the long run) literally, mathematically, infinite.

The quest for better explanations may well indeed be infinite, but infinite is of course not the same as limitless. English is infinite, but doesn't include Greek. The
integers are an infinite set, but do not include the reals. I cannot discern any
argument here that addresses the concerns and conclusions of the great mysterians
of the scientific revolution and the Enlightenment.
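The point can be put in elementary set-theoretic terms; the notation below is a gloss added here for concreteness, not part of the argument's own text. An infinite set can still exclude almost everything:

```latex
% The integers are infinite yet a vanishingly small part of the reals.
\[
\mathbb{Z} \subsetneq \mathbb{R}, \qquad
|\mathbb{Z}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}| .
\]
```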

We are left with a serious and challenging scientific inquiry: to determine the innate
components of our cognitive nature in language, perception, concept formation,
reflection, inference, theory construction, artistic creation, and all other domains of
life, including the most ordinary ones. By pursuing this task we may hope to
determine the scope and limits of human understanding, while recognizing that
some differently structured intelligence might regard human mysteries as simple
problems and wonder that we cannot find the answers, much as we can observe the
inability of rats to run prime number mazes because of the very design of their
cognitive nature.

There is no contradiction in supposing that we might be able to probe the limits of human understanding and try to sharpen the boundary between problems that fall
within our cognitive range and mysteries that do not. There are possible
experimental inquiries. Another approach would be to take seriously the concerns of
the great figures of the early scientific revolution and the Enlightenment: to pay
attention to what they found inconceivable, and particularly their reasons. The
mechanical philosophy itself has a claim to be an approximation to common sense
understanding of the world, a suggestion that might be clarified by experimental
inquiry. Despite much sophisticated commentary, it is also hard to escape the force of Descartes's conviction that free will is "the noblest thing we have," that there is nothing "we comprehend more evidently and more perfectly," and that it would be absurd to doubt something that we comprehend intimately and experience within ourselves merely because it is by its nature incomprehensible to us, if indeed we do not have intelligence enough to understand the workings of mind, as he
speculated. Concepts of determinacy and randomness fall within our intellectual
grasp. But it might turn out that free actions of men cannot be accommodated in
these terms, including the creative aspect of language and thought. If so, that might be a matter of cognitive limitations, which would not preclude an intelligible theory of such actions, far as this is from today's scientific understanding.

Honesty should lead us to concede, I think, that we understand little more today
about these matters than the Spanish physician-philosopher Juan Huarte did 500
years ago when he distinguished the kind of intelligence humans shared with
animals from the higher grade that humans alone possess and is illustrated in the
creative use of language, and proceeding beyond that, from the still higher grade
illustrated in true artistic and scientific creativity. Nor do we even know whether
these are questions that lie within the scope of human understanding, or whether
they fall among what Hume took to be Nature's ultimate secrets, consigned to "that obscurity in which they ever did and ever will remain."

Can Civilization Survive Capitalism?


Noam Chomsky
Alternet, March 5, 2013
The term "capitalism" is commonly used to refer to the U.S. economic system, with
substantial state intervention ranging from subsidies for creative innovation to the
"too-big-to-fail" government insurance policy for banks.

The system is highly monopolized, further limiting reliance on the market, and
increasingly so: In the past 20 years the share of profits of the 200 largest
enterprises has risen sharply, reports scholar Robert W. McChesney in his new book
"Digital Disconnect."

"Capitalism" is a term now commonly used to describe systems in which there are
no capitalists: for example, the worker-owned Mondragon conglomerate in the
Basque region of Spain, or the worker-owned enterprises expanding in northern
Ohio, often with conservative support -- both are discussed in important work by the
scholar Gar Alperovitz.

Some might even use the term "capitalism" to refer to the industrial democracy
advocated by John Dewey, America's leading social philosopher, in the late 19th
century and early 20th century.

Dewey called for workers to be "masters of their own industrial fate" and for all
institutions to be brought under public control, including the means of production, exchange, publicity, transportation and communication. Short of this, Dewey argued, politics will remain "the shadow cast on society by big business."

The truncated democracy that Dewey condemned has been left in tatters in recent
years. Now control of government is narrowly concentrated at the peak of the
income scale, while the large majority "down below" has been virtually
disenfranchised. The current political-economic system is a form of plutocracy,
diverging sharply from democracy, if by that concept we mean political
arrangements in which policy is significantly influenced by the public will.

There have been serious debates over the years about whether capitalism is
compatible with democracy. If we keep to really existing capitalist democracy -- RECD for short -- the question is effectively answered: They are radically incompatible.

It seems to me unlikely that civilization can survive RECD and the sharply
attenuated democracy that goes along with it. But could functioning democracy
make a difference?

Let's keep to the most critical immediate problem that civilization faces:
environmental catastrophe. Policies and public attitudes diverge sharply, as is often
the case under RECD. The nature of the gap is examined in several articles in the
current issue of Daedalus, the journal of the American Academy of Arts and
Sciences.

Researcher Kelly Sims Gallagher finds that "One hundred and nine countries have
enacted some form of policy regarding renewable power, and 118 countries have
set targets for renewable energy. In contrast, the United States has not adopted any
consistent and stable set of policies at the national level to foster the use of
renewable energy."

It is not public opinion that drives American policy off the international spectrum.
Quite the opposite. Opinion is much closer to the global norm than the U.S.
government's policies reflect, and much more supportive of actions needed to
confront the likely environmental disaster predicted by an overwhelming scientific
consensus -- and one that's not too far off; affecting the lives of our grandchildren,
very likely.

As Jon A. Krosnick and Bo MacInnis report in Daedalus: "Huge majorities have favored steps by the federal government to reduce the amount of greenhouse gas emissions generated when utilities produce electricity. In 2006, 86 percent of respondents favored requiring utilities, or encouraging them with tax breaks, to reduce the amount of greenhouse gases they emit. Also in that year, 87 percent favored tax breaks for utilities that produce more electricity from water, wind or sunlight. [...] These majorities were maintained between 2006 and 2010 and shrank somewhat after that."

The fact that the public is influenced by science is deeply troubling to those who
dominate the economy and state policy.

One current illustration of their concern is the "Environmental Literacy Improvement Act" proposed to state legislatures by ALEC, the American Legislative Exchange Council, a corporate-funded lobby that designs legislation to serve the needs of the corporate sector and extreme wealth.

The ALEC Act mandates "balanced teaching" of climate science in K-12 classrooms.
"Balanced teaching" is a code phrase that refers to teaching climate-change denial,
to "balance" mainstream climate science. It is analogous to the "balanced teaching"
advocated by creationists to enable the teaching of "creation science" in public
schools. Legislation based on ALEC models has already been introduced in several
states.

Of course, all of this is dressed up in rhetoric about teaching critical thinking -- a fine
idea, no doubt, but it's easy to think up far better examples than an issue that
threatens our survival and has been selected because of its importance in terms of
corporate profits.

Media reports commonly present a controversy between two sides on climate change.

One side consists of the overwhelming majority of scientists, the world's major
national academies of science, the professional science journals and the
Intergovernmental Panel on Climate Change.

They agree that global warming is taking place, that there is a substantial human
component, that the situation is serious and perhaps dire, and that very soon, maybe within decades, the world might reach a tipping point where the process will
escalate sharply and will be irreversible, with severe social and economic effects. It
is rare to find such consensus on complex scientific issues.

The other side consists of skeptics, including a few respected scientists who caution
that much is unknown -- which means that things might not be as bad as thought,
or they might be worse.

Omitted from the contrived debate is a much larger group of skeptics: highly
regarded climate scientists who see the IPCC's regular reports as much too
conservative. And these scientists have repeatedly been proven correct,
unfortunately.

The propaganda campaign has apparently had some effect on U.S. public opinion,
which is more skeptical than the global norm. But the effect is not significant
enough to satisfy the masters. That is presumably why sectors of the corporate
world are launching their attack on the educational system, in an effort to counter
the public's dangerous tendency to pay attention to the conclusions of scientific
research.

At the Republican National Committee's Winter Meeting a few weeks ago, Louisiana
Gov. Bobby Jindal warned the leadership that "We must stop being the stupid
party ... We must stop insulting the intelligence of voters."

Within the RECD system it is of extreme importance that we become the stupid
nation, not misled by science and rationality, in the interests of the short-term gains
of the masters of the economy and political system, and damn the consequences.

These commitments are deeply rooted in the fundamentalist market doctrines that
are preached within RECD, though observed in a highly selective manner, so as to
sustain a powerful state that serves wealth and power.

The official doctrines suffer from a number of familiar "market inefficiencies," among them the failure to take into account the effects on others in market
transactions. The consequences of these "externalities" can be substantial. The
current financial crisis is an illustration. It is partly traceable to the major banks and
investment firms' ignoring "systemic risk" -- the possibility that the whole system
would collapse -- when they undertook risky transactions.

Environmental catastrophe is far more serious: The externality that is being ignored
is the fate of the species. And there is nowhere to run, cap in hand, for a bailout.

In future, historians (if there are any) will look back on this curious spectacle taking
shape in the early 21st century. For the first time in human history, humans are
facing the significant prospect of severe calamity as a result of their actions -- actions that are battering our prospects of decent survival.

Those historians will observe that the richest and most powerful country in history,
which enjoys incomparable advantages, is leading the effort to intensify the likely
disaster. Leading the effort to preserve conditions in which our immediate
descendants might have a decent life are the so-called "primitive" societies: First
Nations, tribal, indigenous, aboriginal.

The countries with large and influential indigenous populations are well in the lead
in seeking to preserve the planet. The countries that have driven indigenous
populations to extinction or extreme marginalization are racing toward destruction.

Thus Ecuador, with its large indigenous population, is seeking aid from the rich
countries to allow it to keep its substantial oil reserves underground, where they
should be.

Meanwhile the U.S. and Canada are seeking to burn fossil fuels, including the
extremely dangerous Canadian tar sands, and to do so as quickly and fully as
possible, while they hail the wonders of a century of (largely meaningless) energy
independence without a side glance at what the world might look like after this
extravagant commitment to self-destruction.

This observation generalizes: Throughout the world, indigenous societies are struggling to protect what they sometimes call "the rights of nature," while the
civilized and sophisticated scoff at this silliness.

This is all exactly the opposite of what rationality would predict -- unless it is the
skewed form of reason that passes through the filter of RECD.

Noam Chomsky on Where Artificial Intelligence Went Wrong

Noam Chomsky interviewed by Yarden Katz


The Atlantic, November 1, 2012
[Note to readers: this interview is preceded by an introductory text written by the
interviewer.]

If one were to rank a list of civilization's greatest and most elusive intellectual
challenges, the problem of "decoding" ourselves -- understanding the inner
workings of our minds and our brains, and how the architecture of these elements is
encoded in our genome -- would surely be at the top. Yet the diverse fields that took
on this challenge, from philosophy and psychology to computer science and
neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term "Artificial
Intelligence" (AI) to describe the study of intelligence by implementing its essential
features on a computer. Instantiating an intelligent system using man-made
hardware, rather than our own "biological hardware" of cells and tissues, would
show ultimate understanding, and have obvious practical applications in the
creation of intelligent devices or even robots.

Some of McCarthy's colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first.
Noam Chomsky and others worked on what became cognitive science, a field aimed
at uncovering the mental representations and rules that underlie our perceptual and
cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant
paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, where
animal behavior was reduced to a simple set of associations between an action and
its subsequent reward or punishment. The undoing of Skinner's grip on psychology
is commonly marked by Chomsky's 1959 critical review of Skinner's book Verbal
Behavior, a book in which Skinner attempted to explain linguistic ability using
behaviorist principles.

Skinner's approach stressed the historical associations between a stimulus and the
animal's response -- an approach easily framed as a kind of empirical statistical
analysis, predicting the future as a function of the past. Chomsky's conception of
language, on the other hand, stressed the complexity of internal representations,
encoded in the genome, and their maturation in light of the right data into a
sophisticated computational system, one that cannot be usefully broken down into a
set of associations. Behaviorist principles of associations could not explain the

richness of linguistic knowledge, our endlessly creative use of it, or how quickly
children acquire it with only minimal and imperfect exposure to language presented
by their environment. The "language faculty," as Chomsky referred to it, was part of
the organism's genetic endowment, much like the visual system, the immune
system and the circulatory system, and we ought to approach it just as we approach
these other more down-to-earth biological systems.

David Marr, a neuroscientist colleague of Chomsky's at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential
book Vision, one that Chomsky's analysis of the language capacity more or less fits
into. According to Marr, a complex biological system can be understood at three
distinct levels. The first level ("computational level") describes the input and output
to the system, which define the task the system is performing. In the case of the
visual system, the input might be the image projected on our retina and the output
might be our brain's identification of the objects present in the image we had
observed. The second level ("algorithmic level") describes the procedure by which
an input is converted to an output, i.e. how the image on our retina can be
processed to achieve the task described by the computational level. Finally, the
third level ("implementation level") describes how our own biological hardware of
cells implements the procedure described by the algorithmic level.
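To make the three levels concrete, here is a rough sketch in Python, using a deliberately trivial task (sorting a list) in place of vision; the task and the function names are illustrative assumptions, not anything drawn from Marr or Chomsky.

```python
# A toy illustration of Marr's three levels of analysis, using sorting
# rather than vision. The task and names are illustrative only.

from typing import List

# 1. Computational level: WHAT is the task? Specify the input-output relation.
#    Input: a list of numbers. Output: the same numbers in non-decreasing order.
def is_correct_output(inp: List[int], out: List[int]) -> bool:
    return sorted(inp) == out  # the specification, independent of any procedure

# 2. Algorithmic level: HOW, in terms of representations and steps?
#    One of many procedures that satisfies the computational-level spec.
def insertion_sort(inp: List[int]) -> List[int]:
    out: List[int] = []
    for x in inp:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# 3. Implementation level: what physical machinery runs the algorithm?
#    Here, the Python interpreter on a CPU; in the brain, neurons and synapses.

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5]
    result = insertion_sort(data)
    assert is_correct_output(data, result)
    print(result)  # [1, 1, 3, 4, 5]
```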

The approach taken by Chomsky and Marr toward understanding how our minds
achieve what they do is as different as can be from behaviorism. The emphasis here
is on the internal structure of the system that enables it to perform a task, rather
than on external association between past behavior of the system and the
environment. The goal is to dig into the "black box" that drives the system and
describe its inner workings, much like how a computer scientist would explain how a
cleverly designed piece of software works and how it can be executed on a desktop
computer.

As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner's behaviorist paradigm
-- an achievement commonly referred to as the "cognitive revolution," though
Chomsky himself rejects this term. While this may be a relatively accurate depiction
in cognitive science and psychology, behaviorist thinking is far from dead in related
disciplines. Behaviorist experimental paradigms and associationist explanations for
animal behavior are used routinely by neuroscientists who aim to study the
neurobiology of behavior in laboratory animals such as rodents, where the
systematic three-level framework advocated by Marr is not applied.

In May of last year, during the 150th anniversary of the Massachusetts Institute of Technology, a symposium on "Brains, Minds and Machines" took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.

The gathering was meant to inspire multidisciplinary enthusiasm for the revival of
the scientific question from which the field of artificial intelligence originated: how
does intelligence work? How does our brain give rise to our cognitive abilities, and
could this ever be implemented in a machine?

Noam Chomsky, speaking in the symposium, wasn't so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more
modern, computationally sophisticated form. Chomsky argued that the field's heavy
use of statistical techniques to pick regularities in masses of data is unlikely to yield
the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is
unlikely to yield general principles about the nature of intelligent beings or about
cognition.

This critique sparked an elaborate reply to Chomsky from Google's director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI's new methods and definition of progress are not far off from what happens in the other sciences.

Chomsky acknowledged that the statistical approach might have practical value,
just as in the example of a useful search engine, and is enabled by the advent of
fast computers capable of processing massive data. But as far as a science goes,
Chomsky would argue it is inadequate, or more harshly, kind of shallow. We wouldn't
have taught the computer much about what the phrase "physicist Sir Isaac Newton"
really means, even if we can build a search engine that returns sensible hits to
users who type the phrase in.

It turns out that related disagreements have been pressing biologists who try to
understand more traditional biological systems of the sort Chomsky likened to the
language faculty. Just as the computing revolution enabled the massive data
analysis that fuels the "new AI", so has the sequencing revolution in modern biology
given rise to the blooming fields of genomics and systems biology. High-throughput
sequencing, a technique by which millions of DNA molecules can be read quickly
and cheaply, turned the sequencing of a genome from a decade-long expensive
venture to an affordable, commonplace laboratory procedure. Rather than
painstakingly studying genes in isolation, we can now observe the behavior of a
system of genes acting in cells as a whole, in hundreds or thousands of different
conditions.

The sequencing revolution has just begun and a staggering amount of data has
already been obtained, bringing with it much promise and hype for new
therapeutics and diagnoses for human disease. For example, when a conventional
cancer drug fails to work for a group of patients, the answer might lie in the genome of
the patients, which might have a special property that prevents the drug from
acting. With enough data comparing the relevant features of genomes from these
cancer patients and the right control groups, custom-made drugs might be
discovered, leading to a kind of "personalized medicine." Implicit in this endeavor is
the assumption that with enough sophisticated statistical tools and a large enough
collection of data, signals of interest can be weeded out from the noise in large
and poorly understood biological systems.
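A minimal sketch of the kind of comparison this endeavor assumes, with invented numbers: test whether a single genomic feature separates patients who responded to a drug from those who did not. The data and threshold are illustrative assumptions, not real results.

```python
# Minimal sketch: does one genomic feature differ between drug responders
# and non-responders? The data are invented for illustration only.

from statistics import mean, stdev
from math import sqrt

responders     = [2.1, 2.4, 1.9, 2.2, 2.6]   # expression level of some gene
non_responders = [1.2, 1.0, 1.5, 1.1, 1.3]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

print(round(welch_t(responders, non_responders), 2))
# A large |t| hints that the feature separates the groups; with thousands of
# features, multiple-testing corrections are what keep "signal" from being
# an artifact of the noise.
```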

The success of fields like personalized medicine and other offshoots of the sequencing revolution and the systems-biology approach hinges upon our ability to
deal with what Chomsky called "masses of unanalyzed data" -- placing biology in
the center of a debate similar to the one taking place in psychology and artificial
intelligence since the 1960s.

Systems biology did not rise without skepticism. The great geneticist and Nobel-prize-winning biologist Sydney Brenner once defined the field as "low input, high throughput, no output science." Brenner, a contemporary of Chomsky who also participated in the same symposium on AI, was equally skeptical about new systems approaches to understanding the brain. When describing an up-and-coming systems approach to mapping brain circuits called Connectomics, which seeks to map the wiring of all neurons in the brain (i.e. diagramming which nerve cells are connected to others), Brenner called it a "form of insanity."

Brenner's catch-phrase bite at systems biology and related techniques in neuroscience is not far off from Chomsky's criticism of AI. An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery.
Yet, ever-improving technologies yield massive data related to the system, only a
fraction of which might be relevant. Do we rely on powerful computing and
statistical approaches to tease apart signal from noise, or do we look for the more
basic principles that underlie the system and explain its essence? The urge to
gather more data is irresistible, though it's not always clear what theoretical
framework these data might fit into. These debates raise an old and general
question in the philosophy of science: What makes a satisfying scientific theory or
explanation, and how ought success be defined for science?

I sat with Noam Chomsky on an April afternoon in a somewhat disheveled conference room, tucked in a hidden corner of Frank Gehry's dazzling Stata Center
at MIT. I wanted to better understand Chomsky's critique of artificial intelligence and
why it may be headed in the wrong direction. I also wanted to explore the
implications of this critique for other branches of science, such as neuroscience and
systems biology, which all face the challenge of reverse-engineering complex
systems -- and where researchers often find themselves in an ever-expanding sea of
massive data. The motivation for the interview was in part that Chomsky is rarely
asked about scientific topics nowadays. Journalists are too occupied with getting his
views on U.S. foreign policy, the Middle East, the Obama administration and other
standard topics. Another reason was that Chomsky belongs to a rare and special
breed of intellectuals, one that is quickly becoming extinct. Ever since Isaiah Berlin's
famous essay, it has become a favorite pastime of academics to place various
thinkers and scientists on the "Hedgehog-Fox" continuum: the Hedgehog, a
meticulous and specialized worker, driven by incremental progress in a clearly
defined field versus the Fox, a flashier, ideas-driven thinker who jumps from
question to question, ignoring field boundaries and applying his or her skills where
they seem applicable. Chomsky is special because he makes this distinction seem
like a tired old cliche. Chomsky's depth doesn't come at the expense of versatility or
breadth, yet for the most part, he devoted his entire scientific career to the study of
defined topics in linguistics and cognitive science. Chomsky's work has had
tremendous influence on a variety of fields outside his own, including computer
science and philosophy, and he has not shied away from discussing and critiquing
the influence of these ideas, making him a particularly interesting person to
interview. Videos of the interview can be found here.

Katz: I want to start with a very basic question. At the beginning of AI, people were
extremely optimistic about the field's progress, but it hasn't turned out that way.
Why has it been so difficult? If you ask neuroscientists why understanding the brain
is so difficult, they give you very intellectually unsatisfying answers, like that the
brain has billions of cells, and we can't record from all of them, and so on.

Chomsky: There's something to that. If you take a look at the progress of science,
the sciences are kind of a continuum, but they're broken up into fields. The greatest
progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an
advantage that no other branch of sciences has. If something gets too complicated,
they hand it to someone else.

Katz: Like the chemists?

Chomsky: If a molecule is too big, you give it to the chemists. The chemists, for
them, if the molecule is too big or the system gets too big, you give it to the
biologists. And if it gets too big for them, they give it to the psychologists, and
finally it ends up in the hands of the literary critic, and so on. So what the
neuroscientists are saying is not completely false.

However, it could be -- and it has been argued in my view rather plausibly, though
neuroscientists don't like it -- that neuroscience for the last couple hundred years
has been on the wrong track. There's a fairly recent book by a very good cognitive
neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that
neuroscience developed kind of enthralled to associationism and related views of
the way humans and animals work. And as a result they've been looking for things
that have the properties of associationist psychology.

Katz: Like Hebbian plasticity? [Editor's note: A theory, attributed to Donald Hebb,
that associations between an environmental stimulus and a response to the
stimulus can be encoded by strengthening of synaptic connections between
neurons.]
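As a rough gloss on the rule described in the editor's note (an illustration added here, not part of the interview), the Hebbian idea can be written as a one-line weight update; the learning rate and activity values are arbitrary.

```python
# Minimal sketch of a Hebbian update: a synaptic weight grows in proportion
# to the product of pre- and post-synaptic activity ("cells that fire
# together wire together"). Values and learning rate are arbitrary.

def hebbian_step(w: float, pre: float, post: float, lr: float = 0.1) -> float:
    return w + lr * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # paired activity samples
    w = hebbian_step(w, pre, post)
print(w)  # 0.2: only the two coincident firings strengthened the connection
```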

Chomsky: Well, like strengthening synaptic connections. Gallistel has been arguing
for years that if you want to study the brain properly you should begin, kind of like
Marr, by asking what tasks is it performing. So he's mostly interested in insects. So
if you want to study, say, the neurology of an ant, you ask what does the ant do? It
turns out the ants do pretty complicated things, like path integration, for example. If
you look at bees, bee navigation involves quite complicated computations, involving
position of the sun, and so on and so forth. But in general what he argues is that if
you take a look at animal cognition, human too, it's computational systems.
Therefore, you want to look at the units of computation. Think about a Turing machine,
say, which is the simplest form of computation, you have to find units that have
properties like "read", "write" and "address." That's the minimal computational unit,
so you got to look in the brain for those. You're never going to find them if you look
for strengthening of synaptic connections or field properties, and so on. You've got
to start by looking for what's there and what's working and you see that from Marr's
highest level.
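A bare-bones illustration (again, added here rather than part of the interview) of the minimal computational units Chomsky mentions: a toy Turing machine whose whole repertoire is read, write, and move (addressing) over a tape. The sample program, which simply flips bits, is an invented example.

```python
# A bare-bones Turing machine: computation reduced to read, write, and
# move (addressing) over a tape, plus a finite control table. The example
# program inverts a string of bits, then halts.

def run(tape, table, state="start", head=0):
    tape = dict(enumerate(tape))            # addressable tape cells
    while state != "halt":
        symbol = tape.get(head, "_")        # READ the cell under the head
        write, move, state = table[(state, symbol)]
        tape[head] = write                  # WRITE a symbol back
        head += {"L": -1, "R": 1}[move]     # change the ADDRESS (move head)
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}

print(run("0110", flip_bits))  # -> 1001
```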

Katz: Right, but most neuroscientists do not sit down and describe the inputs and
outputs to the problem that they're studying. They're more driven by say, putting a
mouse in a learning task and recording as many neurons as possible, or asking if Gene
X is required for the learning task, and so on. These are the kinds of statements that
their experiments generate.

Chomsky: That's right.

Katz: Is that conceptually flawed?

Chomsky: Well, you know, you may get useful information from it. But if what's
actually going on is some kind of computation involving computational units, you're
not going to find them that way. It's kind of, looking at the wrong lamp post, sort of.
It's a debate... I don't think Gallistel's position is very widely accepted among
neuroscientists, but it's not an implausible position, and it's basically in the spirit of
Marr's analysis. So when you're studying vision, he argues, you first ask what kind
of computational tasks is the visual system carrying out. And then you look for an
algorithm that might carry out those computations and finally you search for
mechanisms of the kind that would make the algorithm work. Otherwise, you may
never find anything. There are many examples of this, even in the hard sciences,
but certainly in the soft sciences. People tend to study what you know how to study,
I mean that makes sense. You have certain experimental techniques, you have
certain level of understanding, you try to push the envelope -- which is okay, I
mean, it's not a criticism, but people do what you can do. On the other hand, it's
worth thinking whether you're aiming in the right direction. And it could be that if
you take roughly the Marr-Gallistel point of view, which personally I'm sympathetic
to, you would work differently, look for different kind of experiments.

Katz: Right, so I think a key idea in Marr is, like you said, finding the right units to
describe the problem, sort of the right "level of abstraction" if you will. So if we
take a concrete example of a new field in neuroscience, called Connectomics, where
the goal is to find the wiring diagram of very complex organisms, find the
connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This
approach was criticized by Sydney Brenner, who in many ways is [historically] one of
the originators of the approach. Advocates of this field don't stop to ask if the wiring
diagram is the right level of abstraction -- maybe it's not, so what is your view on
that?

Chomsky: Well, there are much simpler questions. Like here at MIT, there's been an
interdisciplinary program on the nematode C. elegans for decades, and as far as I
understand, even with this miniscule animal, where you know the wiring diagram, I
think there's 800 neurons or something ...

Katz: I think 300...

Chomsky: ...Still, you can't predict what the thing [C. elegans nematode] is going to
do. Maybe because you're looking in the wrong place.

Katz: I'd like to shift the topic to different methodologies that were used in AI. So
"Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the
tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or
derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a
history of science perspective that even very recently, these approaches have been
almost wiped out from the mainstream and have been largely replaced -- in the field
that calls itself AI now -- by probabilistic and statistical models. My question is, what
do you think explains that shift and is it a step in the right direction?

Chomsky: I heard Pat Winston give a talk about this years ago. One of the points he
made was that AI and robotics got to the point where you could actually do things
that were useful, so it turned to the practical applications and somewhat, maybe not
abandoned, but put to the side, the more fundamental scientific questions, just
caught up in the success of the technology and achieving specific goals.

Katz: So it shifted to engineering...

Chomsky: It became... well, which is understandable, but would of course direct
people away from the original questions. I have to say, myself, that I was very
skeptical about the original work. I thought it was first of all way too optimistic, it
was assuming you could achieve things that required real understanding of systems
that were barely understood, and you just can't get to that understanding by
throwing a complicated machine at it. If you try to do that you are led to a
conception of success, which is self-reinforcing, because you do get success in
terms of this conception, but it's very different from what's done in the sciences. So
for example, take an extreme case, suppose that somebody says he wants to
eliminate the physics department and do it the right way. The "right" way is to take
endless numbers of videotapes of what's happening outside the window, and feed
them into the biggest and fastest computer, gigabytes of data, and do complex
statistical analysis -- you know, Bayesian this and that [Editor's note: A modern
approach to analysis of data which makes heavy use of probability theory.] -- and
you'll get some kind of prediction about what's gonna happen outside the window
next. In fact, you get a much better prediction than the physics department will
ever give. Well, if success is defined as getting a fair approximation to a mass of
chaotic unanalyzed data, then it's way better to do it this way than to do it the way
the physicists do, you know, no thought experiments about frictionless planes and
so on and so forth. But you won't get the kind of understanding that the sciences
have always been aimed at -- what you'll get at is an approximation to what's
happening.

And that's done all over the place. Suppose you want to predict tomorrow's weather.
One way to do it is okay I'll get my statistical priors, if you like, there's a high
probability that tomorrow's weather here will be the same as it was yesterday in
Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick
that in, and you get a bunch of assumptions like that, you run the experiment, you
look at it over and over again, you correct it by Bayesian methods, you get better
priors. You get a pretty good approximation of what tomorrow's weather is going to
be. That's not what meteorologists do -- they want to understand how it's working.
And these are just two different concepts of what success means, of what
achievement is. In my own field, language fields, it's all over the place. Like
computational cognitive science applied to language, the concept of success that's
used is virtually always this. So if you get more and more data, and better and
better statistics, you can get a better and better approximation to some immense
corpus of text, like everything in The Wall Street Journal archives -- but you learn
nothing about the language.
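
[Editor's note: a minimal sketch, added here by the editor and not taken from the
interview, of the kind of "success" being described -- a model judged only by how
closely it approximates a corpus. The toy corpus and the bigram model are purely
illustrative assumptions.]

    # Hypothetical illustration: a bigram model whose only measure of success
    # is how well it fits the text it is given.
    from collections import Counter
    import math

    def train_bigram(tokens):
        return Counter(tokens), Counter(zip(tokens, tokens[1:]))

    def perplexity(tokens, unigrams, bigrams, vocab_size):
        # Add-one smoothing; a lower value means a closer fit to the corpus.
        log_prob = 0.0
        for prev, word in zip(tokens, tokens[1:]):
            p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
            log_prob += math.log(p)
        return math.exp(-log_prob / (len(tokens) - 1))

    corpus = "the dog chased the cat the cat chased the dog".split()
    held_out = "the dog chased the cat".split()
    unigrams, bigrams = train_bigram(corpus)
    print(perplexity(held_out, unigrams, bigrams, vocab_size=len(unigrams)))
    # More text and better statistics push this number down, in the spirit of the
    # approach being described, without saying anything about why the sentences
    # are structured the way they are.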

A very different approach, which I think is the right approach, is to try to see if you
can understand what the fundamental principles are that deal with the core
properties, and recognize that in the actual usage, there's going to be a thousand
other variables intervening -- kind of like what's happening outside the window, and
you'll sort of tack those on later on if you want better approximations, that's a
different approach. These are just two different concepts of science. The second one
is what science has been since Galileo, that's modern science. The approximating
unanalyzed data kind is sort of a new approach, not totally, there's things like it in
the past. It's basically a new approach that has been accelerated by the existence
of massive memories, very rapid processing, which enables you to do things like
this that you couldn't have done by hand. But I think, myself, that it is leading
subjects like computational cognitive science into a direction of maybe some
practical applicability...

Katz: ...in engineering?

Chomsky: ...But away from understanding. Yeah, maybe some effective engineering.
And it's kind of interesting to see what happened to engineering. So like when I got
to MIT, it was 1950s, this was an engineering school. There was a very good math
department, physics department, but they were service departments. They were
teaching the engineers tricks they could use. The electrical engineering
department, you learned how to build a circuit. Well if you went to MIT in the 1960s,
or now, it's completely different. No matter what engineering field you're in, you
learn the same basic science and mathematics. And then maybe you learn a little
bit about how to apply it. But that's a very different approach. And it resulted maybe
from the fact that really for the first time in history, the basic sciences, like physics,
had something really to tell engineers. And besides, technologies began to change
very fast, so not very much point in learning the technologies of today if it's going to
be different 10 years from now. So you have to learn the fundamental science that's
going to be applicable to whatever comes along next. And the same thing pretty
much happened in medicine. So in the past century, again for the first time, biology
had something serious to tell to the practice of medicine, so you had to understand
biology if you want to be a doctor, and technologies again will change. Well, I think
that's the kind of transition from something like an art, that you learn how to
practice -- an analog would be trying to match some data that you don't
understand, in some fashion, maybe building something that will work -- to science,
what happened in the modern period, roughly Galilean science.

Katz: I see. Returning to the point about Bayesian statistics in models of language
and cognition. You've argued famously that speaking of the probability of a
sentence is unintelligible on its own...

Chomsky: ..Well you can get a number if you want, but it doesn't mean anything.

Katz: It doesn't mean anything. But it seems like there's almost a trivial way to unify
the probabilistic method with acknowledging that there are very rich internal mental
representations, comprised of rules and other symbolic structures, and the goal of
probability theory is just to link noisy sparse data in the world with these internal
symbolic structures. And that doesn't commit you to saying anything about how
these structures were acquired -- they could have been there all along, or there
partially with some parameters being tuned, whatever your conception is. But
probability theory just serves as a kind of glue between noisy data and very rich
mental representations.
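
[Editor's note: one way to state Katz's suggestion schematically, with notation added
by the editor rather than taken from the interview. In LaTeX:

    P(\text{structure} \mid \text{data}) \;\propto\; P(\text{data} \mid \text{structure})\,P(\text{structure})

where the space of candidate structures -- the rich internal representations -- is fixed
in advance, and probability theory only mediates how the noisy data bear on them.]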

Chomsky: Well... there's nothing wrong with probability theory, there's nothing
wrong with statistics.

Katz: But does it have a role?

Chomsky: If you can use it, fine. But the question is what are you using it for? First of
all, first question is, is there any point in understanding noisy data? Is there some
point to understanding what's going on outside the window?

Katz: Well, we are bombarded with it [noisy data], it's one of Marr's examples, we
are faced with noisy data all the time, from our retina to...

Chomsky: That's true. But what he says is: Let's ask ourselves how the biological
system is picking out of that noise things that are significant. The retina is not trying
to duplicate the noise that comes in. It's saying I'm going to look for this, that and
the other thing. And it's the same with say, language acquisition. The newborn
infant is confronted with massive noise, what William James called "a blooming,
buzzing confusion," just a mess. If say, an ape or a kitten or a bird or whatever is
presented with that noise, that's where it ends. However, the human infant,
somehow, instantaneously and reflexively, picks out of the noise some scattered
subpart which is language-related. That's the first step. Well, how is it doing that?
It's not doing it by statistical analysis, because the ape can do roughly the same
probabilistic analysis. It's looking for particular things. So psycholinguists,
neurolinguists, and others are trying to discover the particular parts of the
computational system and of the neurophysiology that are somehow tuned to
particular aspects of the environment. Well, it turns out that there actually are
neural circuits which are reacting to particular kinds of rhythm, which happen to
show up in language, like syllable length and so on. And there's some evidence that
that's one of the first things that the infant brain is seeking -- rhythmic structures.
And going back to Gallistel and Marr, it's got some computational system inside
which is saying "okay, here's what I do with these things" and say, by nine months,
the typical infant has rejected -- eliminated from its repertoire -- the phonetic
distinctions that aren't used in its own language. So initially of course, any infant is
tuned to any language. But say, a Japanese kid at nine months won't react to the
R-L distinction anymore, that's kind of weeded out. So the system seems to sort out
lots of possibilities and restrict it to just ones that are part of the language, and
there's a narrow set of those. You can make up a non-language in which the infant
could never do it, and then you're looking for other things. For example, to get into
a more abstract kind of language, there's substantial evidence by now that such a
simple thing as linear order, what precedes what, doesn't enter into the syntactic
and semantic computational systems, they're just not designed to look for linear
order. So you find overwhelmingly that more abstract notions of distance are
computed and not linear distance, and you can find some neurophysiological
evidence for this, too. Like if artificial languages which use linear order are invented
and taught to people -- say, you negate a sentence by doing something to the third
word -- people can solve the puzzle, but apparently the standard language areas of
the brain are not activated; other areas are activated, so they're treating it as a
puzzle, not as a language problem. You need more work, but...

Katz: You take that as convincing evidence that activation or lack of activation for
the brain area ...

Chomsky: ...It's evidence, you'd want more of course. But this is the kind of
evidence, both on the linguistics side you look at how languages work -- they don't
use things like "third word in sentence." Take a simple sentence like "Instinctively,
eagles that fly swim" -- well, "instinctively" goes with "swim", it doesn't go with "fly",
even though that doesn't make sense. And that's reflexive. "Instinctively", the adverb,
isn't looking for the nearest verb, it's looking for the structurally most prominent
one. That's a much harder computation. But that's the only computation which is
ever used. Linear order is a very easy computation, but it's never used. There's a
ton of evidence like this, and a little neurolinguistic evidence, but they point in the
same direction. And as you go to more complex structures, that's where you find
more and more of that.
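
[Editor's note: a toy sketch by the editor, not from the interview, of the two
computations being contrasted. The miniature parse tree and the helper function
below are illustrative assumptions, not a claim about how the brain represents the
sentence.]

    # Hypothetical illustration: for "Instinctively, eagles that fly swim",
    # the linearly closest verb to the adverb is "fly", but the structurally
    # most prominent (main-clause) verb is "swim".
    sentence = ["instinctively", "eagles", "that", "fly", "swim"]

    # A toy constituent tree: (S (ADV instinctively) (NP eagles (RC that (VP fly))) (VP swim))
    tree = ("S",
            ("ADV", "instinctively"),
            ("NP", "eagles", ("RC", "that", ("VP", "fly"))),
            ("VP", "swim"))

    def verbs_with_depth(node, depth=0):
        """Yield (verb, depth) pairs for every VP in the tree."""
        _label, *children = node
        for child in children:
            if isinstance(child, tuple):
                if child[0] == "VP":
                    yield child[1], depth + 1
                yield from verbs_with_depth(child, depth + 1)

    adverb = sentence.index("instinctively")
    linear_choice = min(["fly", "swim"], key=lambda v: abs(sentence.index(v) - adverb))
    structural_choice = min(verbs_with_depth(tree), key=lambda pair: pair[1])[0]

    print(linear_choice)      # "fly"  -- minimal linear distance, the easy computation
    print(structural_choice)  # "swim" -- minimal structural distance, the one actually used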

That's, in my view at least, the way to try to discover how the system is actually
working, just like in vision, in Marr's lab, people like Shimon Ullman discovered some
pretty remarkable things like the rigidity principle. You're not going to find that by
statistical analysis of data. But he did find it by carefully designed experiments.
Then you look for the neurophysiology, and see if you can find something there that
carries out these computations. I think it's the same in language, the same in
studying our arithmetical capacity, planning, almost anything you look at. Just trying
to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just as
it wouldn't have gotten Galileo anywhere. In fact, if you go back to this, in the 17th
century, it wasn't easy for people like Galileo and other major scientists to convince
the NSF [National Science Foundation] of the day -- namely, the aristocrats -- that
any of this made any sense. I mean, why study balls rolling down frictionless planes,
which don't exist? Why not study the growth of flowers? Well, if you tried to study
the growth of flowers at that time, you would get maybe a statistical analysis of
what things looked like.

It's worth remembering that with regard to cognitive science, we're kind of
pre-Galilean, just beginning to open up the subject. And I think you can learn something
from the way science worked [back then]. In fact, one of the founding experiments
in history of chemistry, was about 1640 or so, when somebody proved to the
satisfaction of the scientific world, all the way up to Newton, that water can be
turned into living matter. The way they did it was -- of course, nobody knew
anything about photosynthesis -- so what you do is you take a pile of earth, you
heat it so all the water escapes. You weigh it, and plant in it a branch of a willow tree,
and pour water on it, and measure the amount of water you put in. When you're
done, the willow tree has grown, you again take the earth and heat it so all the
water is gone -- same as before. Therefore, you've shown that water can turn into
an oak tree or something. It is an experiment, it's sort of right, but it's just that you
don't know what things you ought to be looking for. And they weren't known until
Priestley found that air is a component of the world, it's got nitrogen, and so on, and
you learn about photosynthesis and so on. Then you can redo the experiment and
find out what's going on. But you can easily be misled by experiments that seem to
work because you don't know enough about what to look for. And you can be misled
even more if you try to study the growth of trees by just taking a lot of data about
how trees grow, feeding it into a massive computer, doing some statistics and
getting an approximation of what happened.

Katz: In the domain of biology, would you consider the work of Mendel, as a
successful case, where you take this noisy data -- essentially counts -- and you leap
to postulate this theoretical object...

Chomsky: ...Well, throwing out a lot of the data that didn't work.

Katz: ...But seeing the ratio that made sense, given the theory.

Chomsky: Yeah, he did the right thing. He let the theory guide the data. There was
counter data which was more or less dismissed, you know you don't put it in your
papers. And he was of course talking about things that nobody could find, like you
couldn't find the units that he was postulating. But that's, sure, that's the way
science works. Same with chemistry. Chemistry, until my childhood, not that long
ago, was regarded as a calculating device. Because you couldn't reduce it to physics.
So it's just some way of calculating the result of experiments. The Bohr atom was
treated that way. It's the way of calculating the results of experiments but it can't be
real science, because you can't reduce it to physics, which incidentally turned out to
be true, you couldn't reduce it to physics because physics was wrong. When
quantum physics came along, you could unify it with virtually unchanged chemistry.
So the project of reduction was just the wrong project. The right project was to see
how these two ways of looking at the world could be unified. And it turned out to be
a surprise -- they were unified by radically changing the underlying science. That
could very well be the case with say, psychology and neuroscience. I mean,
neuroscience is nowhere near as advanced as physics was a century ago.

Katz: That would go against the reductionist approach of looking for molecules that
are correlates of...

Chomsky: Yeah. In fact, the reductionist approach has often been shown to be
wrong. The unification approach makes sense. But unification might not turn out to
be reduction, because the core science might be misconceived, as in the
physics-chemistry case, and I suspect very likely in the neuroscience-psychology case. If
Gallistel is right, that would be a case in point that yeah, they can be unified, but
with a different approach to the neurosciences.

Katz: So is unification a worthy goal, or should the fields proceed in parallel?

Chomsky: Well, unification is kind of an intuitive ideal, part of the scientific
mystique, if you like. It's that you're trying to find a unified theory of the world. Now
maybe there isn't one, maybe different parts work in different ways, but your
assumption is until I'm proven wrong definitively, I'll assume that there's a unified
account of the world, and it's my task to try to find it. And the unification may not
come out by reduction -- it often doesn't. And that's kind of the guiding logic of
David Marr's approach: what you discover at the computational level ought to be
unified with what you'll some day find out at the mechanism level, but maybe not in
terms of the way we now understand the mechanisms.

Katz: And implicit in Marr it seems that you can't work on all three in parallel
[computational, algorithmic, implementation levels], it has to proceed top-down,
which is a very stringent requirement, given that science usually doesn't work that
way.

Chomsky: Well, he wouldn't have said it has to be rigid. Like for example,
discovering more about the mechanisms might lead you to change your concept of
computation. But there's kind of a logical precedence, which isn't necessarily the
research precedence, since in research everything goes on at the same time. But I
think that the rough picture is okay. Though I should mention that Marr's conception
was designed for input systems...

Katz: information-processing systems...

Chomsky: Yeah, like vision. There's some data out there -- it's a processing system --
and something goes on inside. It isn't very well designed for cognitive systems. Like
take your capacity to carry out arithmetical operations..

Katz: It's very poor, but yeah...

Chomsky: Okay [laughs]. But it's an internal capacity, you know your brain is a
controlling unit of some kind of Turing machine, and it has access to some external
data, like memory, time and so on. And in principle, you could multiply anything,
but of course not in practice. If you try to find out what that internal system is of
yours, the Marr hierarchy doesn't really work very well. You can talk about the

computational level -- maybe the rules I have are Peano's axioms [Editor's note: a
mathematical theory (named after Italian mathematician Giuseppe Peano) that
describes a core set of basic rules of arithmetic and natural numbers, from which
many useful facts about arithmetic can be deduced], or something, whatever they
are -- that's the computational level. In theory, though we don't know how, you can
talk about the neurophysiological level, nobody knows how, but there's no real
algorithmic level. Because there's no calculation of knowledge, it's just a system of
knowledge. To find out the nature of the system of knowledge, there is no algorithm,
because there is no process. Using the system of knowledge, that'll have a process,
but that's something different.
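
[Editor's note: for reference, a standard statement of the Peano axioms, added by the
editor rather than quoted from the interview. In LaTeX:

    \begin{align*}
    &\text{(P1)}\quad 0 \in \mathbb{N}\\
    &\text{(P2)}\quad n \in \mathbb{N} \Rightarrow S(n) \in \mathbb{N}\\
    &\text{(P3)}\quad S(n) \neq 0\\
    &\text{(P4)}\quad S(m) = S(n) \Rightarrow m = n\\
    &\text{(P5)}\quad \bigl[\varphi(0) \wedge \forall n\,\bigl(\varphi(n) \Rightarrow \varphi(S(n))\bigr)\bigr] \Rightarrow \forall n\,\varphi(n)\\
    &\text{addition:}\quad m + 0 = m, \qquad m + S(n) = S(m + n)
    \end{align*}

The axioms and the recursive definition of addition fix what the sums are -- the
computational level -- without specifying any procedure for computing them.]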

Katz: But since we make mistakes, isn't that evidence of a process gone wrong?

Chomsky: That's the process of using the internal system. But the internal system
itself is not a process, because it doesn't have an algorithm. Take, say, ordinary
mathematics. If you take Peano's axioms and rules of inference, they determine all
arithmetical computations, but there's no algorithm. If you ask how a number
theoretician applies these, well, all kinds of ways. Maybe you don't start with the
axioms and the rules of inference. You take the theorem, and see if you can
establish a lemma, and if it works, then see if you can ground this lemma in
something, and finally you get a proof, which is a geometrical object.

Katz: But that's a fundamentally different activity from me adding up small numbers
in my head, which surely does have some kind of algorithm.

Chomsky: Not necessarily. There's an algorithm for the process in both cases. But
there's no algorithm for the system itself, it's kind of a category mistake. You don't
ask the question what's the process defined by Peano's axioms and the rules of
inference, there's no process. There can be a process of using them. And it could be
a complicated process, and the same is true of your calculating. The internal system
that you have -- for that, the question of process doesn't arise. But for your using
that internal system, it arises, and you may carry out multiplications all kinds of
ways. Like maybe when you add 7 and 6, let's say, one algorithm is to say "I'll see
how much it takes to get to 10" -- it takes 3, and now I've got 3 left, so I gotta go
from 10 and add 3, I get 13. That's an algorithm for adding -- it's actually one I was
taught in kindergarten. That's one way to add.
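
[Editor's note: a small illustrative sketch by the editor of the "make ten" procedure
just described, alongside a second procedure that computes the same sum a different
way; neither is claimed to be what any particular person actually does.]

    # Hypothetical illustration: two algorithms, one computational-level fact (7 + 6 = 13).

    def add_make_ten(a, b):
        """Top up a to 10, then add what is left of b (assumes a <= 10 <= a + b)."""
        to_ten = 10 - a              # it takes 3 to get from 7 to 10
        return 10 + (b - to_ten)     # 3 left over from 6, so 10 + 3 = 13

    def add_count_on(a, b):
        """Start at a and count up b times."""
        total = a
        for _ in range(b):
            total += 1
        return total

    print(add_make_ten(7, 6), add_count_on(7, 6))  # 13 13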

But there are other ways to add -- there's no one right algorithm. These are
algorithms for carrying out the process of using the cognitive system that's in your head.
And for that system, you don't ask about algorithms. You can ask about the
computational level, you can ask about the mechanism level. But the algorithmic
level doesn't exist for that system. It's the same with language. Language is kind of
like the arithmetical capacity. There's some system in there that determines the
sound and meaning of an infinite array of possible sentences. But there's no
question about what the algorithm is. Like there's no question about what a formal
system of arithmetic tells you about proving theorems. The use of the system is a
process and you can study it in terms of Marr's levels. But it's important to be
conceptually clear about these distinctions.

Katz: It just seems like an astounding task to go from a computational level theory,
like Peano axioms, to Marr level 3 of the...

Chomsky: mechanisms...

Katz: ...mechanisms and implementations...

Chomsky: Oh yeah. Well..

Katz: ..without an algorithm at least.

Chomsky: Well, I don't think that's true. Maybe information about how it's used,
that'll tell you something about the mechanisms. But some higher intelligence --
maybe higher than ours -- would see that there's an internal system, it's got a
physiological basis, and I can study the physiological basis of that internal system,
not even looking at the process by which it's used. Maybe looking at the process by
which it's used gives you helpful information about how to proceed. But it's
conceptually a different problem. That's the question of what's the best way to
study something. So maybe the best way to study the relation between Peano's
axioms and neurons is by watching mathematicians prove theorems. But that's just
because it'll give you information that may be helpful. The actual end result of that
will be an account of the system in the brain, the physiological basis for it, with no
reference to any algorithm. The algorithms are about a process of using it, which
may help you get answers. Maybe like inclined planes tell you something about the
rate of fall, but if you take a look at Newton's laws, they don't say anything about
inclined planes.

Katz: Right. So the logic for studying cognitive and language systems using this kind
of Marr approach makes sense, but since you've argued that language capacity is
part of the genetic endowment, you could apply it to other biological systems, like
the immune system, the circulatory system....

Chomsky: Certainly, I think it's very similar. You can say the same thing about study
of the immune system.

Katz: It might even be simpler, in fact, to do it for those systems than for cognition.

Chomsky: Though you'd expect different answers. You can do it for the digestive
system. Suppose somebody's studying the digestive system. Well, they're not going
to study what happens when you have a stomach flu, or when you've just eaten a
Big Mac, or something. Let's go back to taking pictures outside the window. One way
of studying the digestive system is just to take all data you can find about what
digestive systems do under any circumstances, toss the data into a computer, do
statistical analysis -- you get something. But it's not gonna be what any biologist
would do. They want to abstract away, at the very beginning, from what are
presumed -- maybe wrongly, you can always be wrong -- irrelevant variables, like do
you have stomach flu.

Katz: But that's precisely what the biologists are doing, they are taking the sick
people with the sick digestive system, comparing them to the normals, and
measuring these molecular properties.

Chomsky: They're doing it at an advanced stage. They already understand a lot
about the study of the digestive system before you compare them, otherwise you
wouldn't know what to compare, or why one is sick and one isn't.

Katz: Well, they're relying on statistical analysis to pick out the features that
discriminate. It's a highly fundable approach, because you're claiming to study sick
people.

Chomsky: It may be the way to fund things. Like maybe the way to fund study of
language is to say, maybe help cure autism. That's a different question [laughs]. But
the logic of the search is to begin by studying the system abstracted from what you,
plausibly, take to be irrelevant intrusions, see if you can find its basic nature -- then
ask, well, what happens when I bring in some of this other stuff, like stomach flu.

Katz: It still seems like there's a difficulty in applying Marr's levels to these kinds of
systems. If you ask, what is the computational problem that the brain is solving, we
have kind of an answer, it's sort of like a computer. But if you ask, what is the
computational problem that's being solved by the lung, that's very difficult to even
think -- it's not obviously an information-processing kind of problem.

Chomsky: No, but there's no reason to assume that all of biology is computational.
There may be reasons to assume that cognition is. And in fact Gallistel is not saying
that everything in the body ought to be studied by finding read/write/address
units.

Katz: It just seems contrary to any evolutionary intuition. These systems evolved
together, reusing many of the same parts, same molecules, pathways. Cells are
computing things.

Chomsky: You don't study the lung by asking what cells compute. You study the
immune system and the visual system, but you're not going to expect to find the
same answers. An organism is a highly modular system, has a lot of complex
subsystems, which are more or less internally integrated. They operate by different
principles. The biology is highly modular. You don't assume it's all just one big mess,
all acting the same way.

Katz: No, sure, but I'm saying you would apply the same approach to study each of
the modules.

Chomsky: Not necessarily, not if the modules are different. Some of the modules
may be computational, others may not be.

Katz: So what would you think would be an adequate theory that is explanatory,
rather than just predicting data, the statistical way, what would be an adequate
theory of these systems that are not computing systems -- can we even understand
them?

Chomsky: Sure. You can understand a lot about say, what makes an embryo turn
into a chicken rather than a mouse, let's say. It's a very intricate system, involves all
kinds of chemical interactions, all sorts of other things. Even the nematode, it's by
no means obvious -- in fact there are reports from the study here -- that it's all just
a matter of a neural net. You have to look into complex chemical interactions that
take place in the brain, in the nervous system. You have to look into each system on
its own. These chemical interactions might not be related to how your arithmetical
capacity works -- probably aren't. But they might very well be related to whether
you decide to raise your arm or lower it.

Katz: Though if you study the chemical interactions it might lead you into what
you've called just a redescription of the phenomena.

Chomsky: Or an explanation. Because maybe that's directly, crucially, involved.

Katz: But if you explain it in terms of chemical X has to be turned on, or gene X has
to be turned on, you've not really explained how organism-determination is done.
You've simply found a switch, and hit that switch.

Chomsky: But then you look further, and find out what makes this gene do such and
such under these circumstances, and do something else under different
circumstances.

Katz: But if genes are the wrong level of abstraction, you'd be screwed.

Chomsky: Then you won't get the right answer. And maybe they're not. For
example, it's notoriously difficult to account for how an organism arises from a
genome. There's all kinds of production going on in the cell. If you just look at gene
action, you may not be at the right level of abstraction. You never know, that's what
you try to study. I don't think there's any algorithm for answering those questions,
you try.

Katz: So I want to shift gears more toward evolution. You've criticized a very
interesting position you've called "phylogenetic empiricism." You've criticized this
position for not having explanatory power. It simply states that: well, the mind is the
way it is because of adaptations to the environment that were selected for. And these
were selected for by natural selection. You've argued that this doesn't explain
anything because you can always appeal to these two principles of mutation and
selection.

Chomsky: Well you can wave your hands at them, but they might be right. It could
be that, say, the development of your arithmetical capacity, arose from random
mutation and selection. If it turned out to be true, fine.

Katz: It seems like a truism.

Chomsky: Well, I mean, doesn't mean it's false. Truisms are true. [laughs].

Katz: But they don't explain much.

Chomsky: Maybe that's the highest level of explanation you can get. You can invent
a world -- I don't think it's our world -- but you can invent a world in which nothing
happens except random changes in objects and selection on the basis of external
forces. I don't think that's the way our world works, I don't think it's the way any
biologist thinks it is. There are all kinds of ways in which natural law imposes
channels within which selection can take place, and some things can happen and
other things don't happen. Plenty of things that go on in the biology of organisms
aren't like this. So take the first step, meiosis. Why do cells split into spheres and
not cubes? It's not random mutation and natural selection; it's a law of physics.
There's no reason to think that laws of physics stop there, they work all the way
through.

Katz: Well, they constrain the biology, sure.

Chomsky: Okay, well then it's not just random mutation and selection. It's random
mutation, selection, and everything that matters, like laws of physics.

Katz: So is there room for these approaches which are now labeled "comparative
genomics", like the Broad Institute here [at MIT/Harvard] is generating massive
amounts of data, of different genomes, different animals, different cells under
different conditions and sequencing any molecule that is sequenceable. Is there
anything that can be gleaned about these high-level cognitive tasks from these
comparative evolutionary studies or is it premature?

Chomsky: I am not saying it's the wrong approach, but I don't know anything that
can be drawn from it. Nor would you expect to.

Katz: You don't have any examples where this evolutionary analysis has informed
something? Like Foxp2 mutations? [Editor's note: A gene that is thought to be
implicated in speech or language capacities. A family with a stereotyped speech
disorder was found to have genetic mutations that disrupt this gene. This gene
evolved to have several mutations unique to the human evolutionary lineage.]

Chomsky: Foxp2 is kind of interesting, but it doesn't have anything to do with
language. It has to do with fine motor coordinations and things like that. Which
takes place in the use of language, like when you speak you control your lips and so
on, but all that's very peripheral to language, and we know that. So for example,
whether you use the articulatory organs or sign, you know hand motions, it's the
same language. In fact, it's even being analyzed and produced in the same parts of
the brain, even though one of them is moving your hands and the other is moving
your lips. So whatever the externalization is, it seems quite peripheral. I think
they're too complicated to talk about, but I think if you look closely at the design
features of language, you get evidence for that. There are interesting cases in the
study of language where you find conflicts between computational efficiency and
communicative efficiency.

Take the case I mentioned of linear order. If you want to know which verb the
adverb attaches to, the infant is reflexively using minimal structural distance, not
minimal linear distance. Well, using minimal linear distance is computationally
easy, but it requires having linear order available. And if linear order is only a reflex
of the sensory-motor system, which makes sense, it won't be available. That's
evidence that the mapping of the internal system to the sensory-motor system is
peripheral to the workings of the computational system.

Katz: But it might constrain it like physics constrains meiosis?

Chomsky: It might, but there's very little evidence of that. So for example the left
end -- left in the sense of early -- of a sentence has different properties from the
right end. If you want to ask a question, let's say "Who did you see?" You put the
"Who" infront, not in the end. In fact, in every language in which a wh-phrase -- like
who, or which book, or something -- moves to somewhere else, it moves to the left,
not to the right. That's very likely a processing constraint. The sentence opens by
telling you, the hearer, here's what kind of a sentence it is. If it's at the end, you
have to have the whole declarative sentence, and at the end you get the
information I'm asking about. If you spell it out, it could be a processing constraint.
So that's a case, if true, in which a processing constraint, externalization, does affect
the computational character of the syntax and semantics.

There are cases where you find clear conflicts between computational efficiency and
communicative efficiency. Take a simple case, structural ambiguity. If I say, "Visiting
relatives can be a nuisance" -- that's ambiguous. Relatives that visit, or going to
visit relatives. It turns out in every such case that's known, the ambiguity is derived
by simply allowing the rules to function freely, with no constraints, and that
sometimes yields ambiguities. So it's computationally efficient, but it's inefficient for
communication, because it leads to unresolvable ambiguity.

Or take what are called garden-path sentences, sentences like "The horse raced
past the barn fell". People presented with that don't understand it, because the way
it's put, they're led down a garden path. "The horse raced past the barn" sounds like
a sentence, and then you ask what's "fell" doing there at the end. On the other
hand, if you think about it, it's a perfectly well formed sentence. It means the horse
that was raced past the barn, by someone, fell. But the rules of the language when
they just function happen to give you a sentence which is unintelligible because of
the garden-path phenomena. And there are lots of cases like that. There are things
you just can't say, for some reason. So if I say, "The mechanics fixed the cars". And
you say, "They wondered if the mechanics fixed the cars." You can ask questions
about the cars, "How many cars did they wonder if the mechanics fixed?" More or
less okay. Suppose you want to ask a question about the mechanics. "How many
mechanics did they wonder if fixed the cars?" Somehow it doesn't work, can't say
that. It's a fine thought, but you can't say it. Well, if you look into it in detail, the
most efficient computational rules prevent you from saying it. But for expressing
thought, for communication, it'd be better if you could say it -- so that's a conflict.

And in fact, in every case of a conflict that's known, computational efficiency wins. The
externalization is yielding all kinds of ambiguities but for simple computational
reasons, it seems that the system internally is just computing efficiently, it doesn't
care about the externalization. Well, I haven't made that very plausible, but if you
spell it out, it can be made into quite a convincing argument, I think.

That tells something about evolution. What it strongly suggests is that in the
evolution of language, a computational system developed, and later on it was
externalized. And if you think about how a language might have evolved, you're
almost driven to that position. At some point in human evolution, and it's apparently
pretty recent given the archeological record -- maybe last hundred thousand years,
which is nothing -- at some point a computational system emerged which had new
properties, that other organisms don't have, that has kind of arithmetical-type
properties...

Katz: It enabled better thought before externalization?

Chomsky: It gives you thought. Some rewiring of the brain, that happens in a single
person, not in a group. So that person had the capacity for thought -- the group
didn't. So there isn't any point in externalization. Later on, if this genetic change
proliferates, maybe a lot of people have it, okay then there's a point in figuring out a
way to map it to the sensory-motor system and that's externalization but it's a
secondary process.

Katz: Unless the externalization and the internal thought system are coupled in
ways we just don't predict.

Chomsky: We don't predict, and they don't make a lot of sense. Why should it be
connected to the external system? In fact, say your arithmetical capacity isn't. And
there are other animals, like songbirds, which have internal computational systems,
bird song. It's not the same system but it's some kind of internal computational
system. And it is externalized, but sometimes it's not. A chick in some species
acquires the song of that species but doesn't produce it until maturity. During that
early period it has the song, but it doesn't have the externalization system. Actually
that's true of humans too, like a human infant understands a lot more than it can
produce -- plenty of experimental evidence for this, meaning it's got the internal
system somehow, but it can't externalize it. Maybe it doesn't have enough memory,
or whatever it may be.

Katz: I'd like to close with one question about the philosophy of science. In a recent
interview, you said that part of the problem is that scientists don't think enough
about what they're up to. You mentioned that you taught a philosophy of science
course at MIT and people would read, say, Willard van Orman Quine, and it would go
in one ear and out the other, and people would go back to doing the same kind of science
that they were doing. What are the insights that have been obtained in philosophy
of science that are most relevant to scientists who are trying to let's say, explain
biology, and give an explanatory theory rather than redescription of the
phenomena? What do you expect from such a theory, and what are the insights that
help guide science in that way? Rather than guiding it towards behaviorism which
seems to be an intuition that many, say, neuroscientists have?

Chomsky: Philosophy of science is a very interesting field, but I don't think it really
contributes to science; it learns from science. It tries to understand what the sciences
do, why do they achieve things, what are the wrong paths, see if we can codify that
and come to understand. What I think is valuable is the history of science. I think we
learn a lot of things from the history of science that can be very valuable to the
emerging sciences. Particularly when we realize that in, say, the emerging cognitive
sciences, we really are in a kind of pre-Galilean stage. We don't know what we're
looking for any more than Galileo did, and there's a lot to learn from that. So for
example one striking fact about early science, not just Galileo, but the Galilean
breakthrough, was the recognition that simple things are puzzling.

Take say, if I'm holding this here [cup of water], and say the water is boiling [putting
hand over water], the steam will rise, but if I take my hand away the cup will fall.
Well why does the cup fall and the steam rise? Well for millennia there was a
satisfactory answer to that: they're seeking their natural place.

Katz: Like in Aristotelian physics?

Chomsky: That's the Aristotelian physics. The best and greatest scientists thought
that was the answer. Galileo allowed himself to be puzzled by it. As soon as you allow
yourself to be puzzled by it, you immediately find that all your intuitions are wrong.
Like the fall of a big mass and a small mass, and so on. All your intuitions are wrong
-- there are puzzles everywhere you look. That's something to learn from the history
of science. Take the one example that I gave to you, "Instinctively eagles that fly
swim." Nobody ever thought that was puzzling -- yeah, why not. But if you think
about it, it's very puzzling, you're using a complex computation instead of a simple
one. Well, if you allow yourself to be puzzled by that, like the fall of a cup, you ask
"Why?" and then you're led down a path to some pretty interesting answers. Like
maybe linear order just isn't part of the computational system, which is a strong
claim about the architecture of the mind -- it says it's just part of the externalization
system, secondary, you know. And that opens up all sorts of other paths, same with
everything else.

Take another case: the difference between reduction and unification. History of
science gives some very interesting illustrations of that, like chemistry and physics,
and I think they're quite relevant to the state of the cognitive and neurosciences
today.
