Issues concerning the unity of minds, bodies and the world have often
recurred in the history of philosophy and, more recently, in scientific
models. Taking into account both the philosophical and scientific
knowledge about consciousness, this book presents and discusses some
theoretical guiding ideas for the science of consciousness. The authors
argue that, within this interdisciplinary context, a consensus appears
to be emerging that the conscious mind and the functioning brain are
two aspects of a complex system that interacts with the world. How can
this concept of reality, one that includes the existence of
consciousness, be approached both philosophically and scientifically?
The Unity of Mind, Brain and World is the result of a three-year online
discussion between the authors, who present a diversity of perspectives
that tend towards a theoretical synthesis, aimed at contributing to the
insertion of this field of knowledge into the academic curriculum.
Edited by
Alfredo Pereira Jr. and Dietrich Lehmann
University Printing House, Cambridge CB2 8BS, United Kingdom
Introduction 1
Alfredo Pereira Jr. and Dietrich Lehmann
1 Body and world as phenomenal contents of the brain's
reality model 7
Bjorn Merker
2 Homing in on the brain mechanisms linked to
consciousness: The buffer of the perception-and-action
interface 43
Christine A. Godwin, Adam Gazzaley, and
Ezequiel Morsella
3 A biosemiotic view on consciousness derived from
system hierarchy 77
Ron Cottam and Willy Ranson
4 A conceptual framework embedding conscious
experience in physical processes 113
Wolfgang Baer
5 Emergence in dual-aspect monism 149
Ram L. P. Vimal
6 Consciousness: Microstates of the brain's electric
field as atoms of thought and emotion 191
Dietrich Lehmann
7 A foundation for the scientific study of consciousness 219
Arnold Trehub
Index 338
Bjorn Merker
1.1 Introduction 7
1.2 Stratagems of solitary confinement 9
1.3 A dual-purpose neural model 11
1.3.1 The orienting domain: A nested remedy for the liabilities
of mobility 11
1.3.2 The decision domain: Triple play in the behavioral final
common path 14
1.3.3 A forum for the brain's final labors 18
1.3.4 A curious consequence of combined implementation 22
1.4 Inadvertently conscious 24
1.5 Conclusion: A lone but real stumbling block on the road to a science
of consciousness 32
1.1 Introduction
The fact that we find ourselves surrounded by a world of complex objects
and events directly accessible to our inspection and manipulation might
seem too trivial or commonplace to merit scientific attention. Yet here,
as elsewhere, familiarity may mask underlying complexities, as we dis-
cover when we try to unravel the appearances of our experience in causal
terms. Consider, for example, that the visual impression of our surround-
ings originates in the pattern of light and darkness projected from the
world through our pupils onto the light-sensitive tissue at the back of our
eyes. On the retina a given sudden displacement of that projected image
behaves the same whether caused by a voluntary eye movement, a passive
displacement of the eye by external impact, or an actual displacement
of the world before the still eye. Yet only in the latter two cases do we
experience any movement of the world at all. In the first case the world
remains perfectly still and stable before us, though the retinal image has
undergone the selfsame sudden displacement in all three cases. But that
means that somewhere between our retina and our experience, the facts
I am indebted to Louise Kennedy for helpful suggestions regarding style and presentation,
and also to Wolfgang Baer, Henrik Malmgren, and Ezequiel Morsella for helpful comments
on matters of content.
the brain's indirect access to the world (Helmholtz 1867; Witkin 1981,
pp. 29–36).
Nor is the brain's control of its body any more direct than is its access
to circumstances in the world on which that control must be based.
Between the brain and the body movements it must control lie sets of
linked skeletal joints, each supplied by many muscles to be squirted
with acetylcholine through motor nerves in a sequence and in amounts
requisite to match the resultant movement to targets in the world. In
such multi-joint systems, degrees of freedom accumulate across linked
joints (not to mention muscles). A given desired targeting movement
accordingly does not have a unique specification either in terms of the
joint kinematics or the muscle dynamics to be employed in its execution
(Bernstein 1967; Gallistel 1999, pp. 6–7).
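Bernstein's degrees-of-freedom point can be made concrete with a toy planar arm. The sketch below is my own illustration, not a model from this chapter: once an arm has three joints, a single fingertip target is reached by many distinct joint configurations, so the targeting movement has no unique kinematic specification.

```python
from math import cos, sin, atan2, acos, hypot, pi

L1 = L2 = L3 = 1.0           # link lengths of a toy planar three-joint arm
TARGET = (1.2, 0.8)          # desired fingertip position

def fingertip(t1, t2, t3):
    """Forward kinematics: relative joint angles -> fingertip (x, y)."""
    x = L1*cos(t1) + L2*cos(t1 + t2) + L3*cos(t1 + t2 + t3)
    y = L1*sin(t1) + L2*sin(t1 + t2) + L3*sin(t1 + t2 + t3)
    return x, y

def solve_given_shoulder(t1):
    """Fix the shoulder angle, then solve the two remaining joints with
    standard two-link inverse kinematics; None if unreachable."""
    bx, by = L1*cos(t1), L1*sin(t1)                 # elbow position
    dx, dy = TARGET[0] - bx, TARGET[1] - by
    d = hypot(dx, dy)
    if not abs(L2 - L3) <= d <= L2 + L3:
        return None
    t3 = acos((d*d - L2*L2 - L3*L3) / (2*L2*L3))    # wrist bend
    alpha = atan2(dy, dx) - atan2(L3*sin(t3), L2 + L3*cos(t3))
    return t1, alpha - t1, t3                       # relative angles

# Sweep the "free" shoulder angle: every hit is a different posture that
# places the fingertip on exactly the same target.
solutions = [s for s in (solve_given_shoulder(-pi/4 + k*pi/40)
                         for k in range(40)) if s is not None]
errors = [hypot(fingertip(*s)[0] - TARGET[0], fingertip(*s)[1] - TARGET[1])
          for s in solutions]
```

Fixing the shoulder angle and solving the rest analytically is only one arbitrary way of resolving the redundancy; how the brain itself resolves it is exactly the kind of ill-posed inverse problem discussed in the text.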
On both the sensory and motor sides of its operations the brain is faced,
in other words, with under-specified or ill-posed problems in the sensing
and control tasks it must discharge. We know, nevertheless, that somehow
it has managed to finesse these so-called inverse problems, because we
are manifestly able to function and get about competently even in quite
complex circumstances. The brain has in fact mastered its problems in
this regard to such an extent that it allows us to remain oblivious to the
difficulties, to proceed with our daily activities in a habitual stance of
naive realism. We look, and appear to confront the objects of the world
directly. We decide to reach for one or another of them, and our arm
moves as if by magic to land our hand and fingers on the target. Much
must be happening behind the scenes of our awareness to make such
apparent magic possible.
Reliable performance in inherently underconstrained circumstances is
only possible on the basis of the kind of inferential, probabilistic, and
optimization approaches to which engineers resort when faced with sim-
ilar problems in building remote controllers for power grids or plant
automation (McFarland 1977). In such approaches a prominent role is
played by so-called forward and inverse models of the problem domain to
be sensed or controlled, and they have been proposed to play a number of
roles in the brain as well (Kawato et al. 1993; Wolpert et al. 1995; Kawato
1999). In effect they move the problem domain inside the brain (note:
this does not mean into our inner life ) in the form of a neural model,
in keeping with a theorem from the heyday of cybernetics stating that an
optimal controller must model the system it controls (Conant and Ashby
1967).
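The role of such internal models can be illustrated with the eye-movement example from the introduction. The following one-dimensional sketch is my own simplification, not a mechanism proposed in the chapter: an efference copy of the eye-movement command feeds a trivial forward model whose prediction is subtracted from the observed retinal shift, so that only externally caused motion survives as perceived world motion.

```python
def forward_model(eye_command_deg):
    """Predicted visual consequence of a voluntary eye rotation: the
    retinal image shifts by the same amount in the opposite direction."""
    return -eye_command_deg

def perceived_world_motion(retinal_shift_deg, eye_command_deg):
    """Subtract the prediction from the observed retinal shift; whatever
    remains is attributed to motion of the world itself."""
    return retinal_shift_deg - forward_model(eye_command_deg)

# Case 1: a voluntary 10-degree saccade shifts the image -10 degrees on
# the retina; the shift is fully predicted, so the world appears still.
voluntary = perceived_world_motion(-10.0, 10.0)

# Case 2: the world itself shifts -10 degrees before the still eye; the
# identical retinal shift, with no efference copy, is seen as world motion.
external = perceived_world_motion(-10.0, 0.0)
```

The two cases receive the selfsame retinal displacement, yet only the one lacking an efference copy is experienced as movement of the world, in line with the introductory example.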
There is every reason to believe that a number of these neural models
contribute crucially to shaping the contents of our experience. They may
be involved, for example, in the cancellation of sensory consequences of
1.3.1 The orienting domain: A nested remedy for the liabilities of mobility
The already mentioned contamination of information about the world by
the sensory consequences of self-motion is not the brain's only problem
caused by bodily mobility. The body moves not only with respect to the
world, but relative to itself as well. The brain's sensor arrays come in
several modalities differently distributed on the body and move with its
movements. Its twisting and flexing cause sensors to move with respect to
one another, bringing the spatial information they convey out of mutual
alignment. In the typical case of gaze shifts employing combined eye and
head movements, vision and audition are displaced with respect to the
somatosensory representation of the rest of the body. To this is added
misalignment between vision and audition when the eyes deviate in their
orbits (Sparks 1999). The brain's sensors-in-motion problem combines,
in other words, aspects of sensor fusion (Mitchell 2007) with those of
movement contamination of sensor output (von Holst and Mittelstaedt
1950).
A number of local solutions or piecemeal remedies for one or another
part of this problem complex are conceivable. Insects, for example,
rely on a variety of mechanisms of local feedback, gating, efference
copy, inter-modal coordinate transformations, and perhaps even for-
ward models to this end (see examples reviewed by Webb 2004; also
Altman and Kien 1989). More centralized brains than those of insects
offer the possibility of re-casting the entire sensors-in-motion problem
in the form of a comprehensive, multi-modal solution. In so doing,
the fundamental role of gaze displacements in the orchestration of
behavior can be exploited to simplify the design of the requisite neural
mechanism.
The first sign of evolving action in the logistics of the brain's control
of behavior is typically a gaze movement. Peripheral vision suffices
for many purposes of ambient orientation and obstacle avoidance (Tre-
varthen 1968; Zettel et al. 2005; Marigold 2008). However, when loco-
motion is initiated or redirected towards new targets or planned while
traversing complex terrain, the gaze leads the rest of the body by fixating
strategic locations ahead (Marigold and Patla 2007). This is even more
so for reaching and manipulative activity, down to their finely staged
details. Fine-grained behavioral monitoring of eyes, hand, and fingers
during reaching and manipulation in the laboratory has disclosed the
lead given by the gaze in such behavior (Johansson et al. 2001). Arm and
fingers follow the gaze as if attached to it by an elastic band. In fact the
coupling of arm or hand to the gaze appears to be the brain's default
mode of sensorimotor operation (Gorbet and Sergio 2009; Chang et al.
2010).
The centrality of gaze shifts, also called orienting movements, in the
orchestration of behavior makes them the brain's primary and ubiquitous
output. The gaze moves through combinations of eye and head move-
ments and these can be modelled to a first approximation as rotatory
displacements of eyes in orbit and head on its cervical pivot, using a con-
venient rotation-based geometry (see Masino 1992; Smith 1997; Merker
while the head map turns around it, interposed between egocenter and sta-
tionary model world surround. Translatory and other locomotion-related
sensory effects would be registered in the model space as continuous
replacement of the mapped contents of the world map as new portions
of the physical world come within range of receptor arrays during loco-
motion. Note the purely geometric terms in which these map operations
are introduced. They imply no commitment regarding the manner in
which they might be implemented neurally, whether through gain-fields
or other means. Figure 1.1 illustrates the principle of the orienting
domain in minimal outline.
So far this model scheme is only an attempt to manage the sensors-
in-motion problem by segregating its natural clusters of correlated vari-
ances into the separate zones of mobile and deformable body on the one
hand and enclosing movement-stabilized world surround on the other.
Needless to say this model sketch employing rotation-based nesting is
a bare-bones minimum only. To accommodate realistic features such as
limb movements it must be extended through means such as supplemen-
tal reference frames centered on, say, shoulder or trunk (see McGuire
and Sabes 2009). These might be implemented by yoking so-called gain-
fields associated with limbs to those of the gaze (Chang et al. 2009), which
would directly exploit the leading behavioral role of the gaze emphasized
in the foregoing. However implemented, the centrality and ubiquity of
gaze movements in the orchestration of behavior means that a simplifi-
cation of the brain's sensors-in-motion problem is available in the nested
rotation-based format proposed for the orienting domain.
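A minimal sketch of the rotation-based nesting may help fix ideas. It is a hypothetical illustration only; as the text stresses, the geometric scheme commits to no particular neural implementation. Sensor-frame coordinates counter-rotate with the combined head-on-body and eye-in-head rotations, and undoing those nested rotations re-anchors contents in a movement-stabilized world zone:

```python
from math import cos, sin, radians

def rotate(point, angle_rad):
    """Rotate a 2-D point about the egocenter (the origin)."""
    x, y = point
    return (x*cos(angle_rad) - y*sin(angle_rad),
            x*sin(angle_rad) + y*cos(angle_rad))

WORLD_POINT = (2.0, 1.0)   # a fixed location in the world frame

def retinal(world_point, head_deg, eye_deg):
    """Sensor-frame coordinates: the retina counter-rotates with the
    combined head-on-body and eye-in-head rotations."""
    return rotate(world_point, -radians(head_deg + eye_deg))

def world_map(retinal_point, head_deg, eye_deg):
    """Undo the nested rotations to re-anchor the content in the
    movement-stabilized world zone."""
    return rotate(retinal_point, radians(head_deg + eye_deg))

# Across arbitrary gaze shifts the retinal coordinates change, yet the
# recovered world-map coordinates stay put: a stabilized world surround.
gazes = [(0, 0), (30, 10), (-45, 20), (90, -15)]
recovered = [world_map(retinal(WORLD_POINT, h, e), h, e) for h, e in gazes]
```

The point of the exercise is the factoring itself: the body zone absorbs the rotations while the world zone, obtained by compensating for them, remains still.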
[Figure 1.1 (schematic): the orienting domain in minimal outline, showing the BODY zone nested within the movement-stabilized WORLD zone, with the egocenter looking out through a visual aperture.]
for control over behavior (McFarland and Sibly 1975). Few situations
are entirely devoid of opportunities for meeting alternate needs, and one
or more alternatives may present themselves at any time in the course of
progress toward a prior target. The utility of switching depends in part
on when in that progress an option makes its appearance. Close to a goal,
it often makes sense to discount the utility of switching (McFarland and
Sibly 1975), unless, of course, a windfall is on offer. Keeping options
open can pay, but the capacity for doing so must not result in dithering.
The liveliness of the world sets the pace of the controller's need to
update assessments, and saddles it with a perpetual moment-to-moment
decision process regarding what to do next. In the pioneering analysis
just cited, McFarland and Sibly (1975) introduced the term behavioral
final common path for a hypothetical interface between perceptual, motor,
and motivational systems engaged in a final competitive decision process
determining moment-to-moment behavioral expression. In a previous
publication I sketched out how savings in behavioral resource expenditure
are available by exploiting inherent functional dependencies among the
brain's three principal task clusters (Merker 2007a), and to do so the
brain needs such an interface, as we shall see.
The three task clusters consist of selection of targets for action in
the world (target selection), the selection of the appropriate action for a
given situation and purpose (action selection), and the ranking of needs
by motivational priority (motivational ranking). Though typically treated
as separate functional problems in neuroscience and robotics, the three
are in fact bound together by intimate mutual dependencies, such that
a decision regarding any one of them seldom is independent of the state
of the others (Merker 2007a, p. 70). As an obvious instance, consider
prevailing needs in their bearing on target selection. More generally,
bodily action is the mediator between bodily needs and opportunities in
the world. This introduces the on-going position, trajectory, and energy
reserves of the body and its parts as factors bearing not only on target
selection (see Kording and Wolpert 2006) but also on the ranking of
needs. Thus the three task clusters are locked into mutual dependencies.
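The mutual dependencies among the three task clusters can be caricatured in a few lines of code. This is a toy of my own construction; the names, weights, and utility function are illustrative assumptions, not the proposed mechanism. It shows a joint decision space in which motivational priority, target distance, and bodily energy reserves settle the outcome together:

```python
# Motivational priorities and opportunities in the world; all names and
# numbers are illustrative assumptions.
needs = {"hunger": 0.9, "thirst": 0.4}
targets = [
    {"name": "waterhole",  "satisfies": "thirst", "distance": 1.0},
    {"name": "fruit_tree", "satisfies": "hunger", "distance": 8.0},
]

def utility(target, needs, energy):
    """Payoff scaled by the ranked need, minus a locomotion cost that
    grows as bodily energy reserves shrink."""
    travel_cost = target["distance"] / energy
    return needs[target["satisfies"]] - 0.05 * travel_cost

def decide(needs, energy):
    """Settle target selection, action cost, and motivational ranking in
    one joint decision space."""
    return max(targets, key=lambda t: utility(t, needs, energy))["name"]

# With full reserves the stronger need wins despite the distance; with
# depleted reserves the nearby waterhole wins instead: no single task
# cluster settles the outcome on its own.
fresh = decide(needs, energy=10.0)
tired = decide(needs, energy=0.5)
```

Even in this caricature, changing a bodily state variable reverses the target decision without any change in the needs themselves, which is the sense in which the three clusters are locked into mutual dependencies.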
The existence of these dependencies means that savings are available
by subjecting them to an optimizing regime. To do so, they must be
brought together in a joint decision space in which to settle trade-offs,
conflicts, and synergies among them through a process amounting to
multiple constraint satisfaction in a multi-objective optimization frame-
work (for which, see Pearl 1988; Tsang 1993). Each of the three task
clusters is multi-variate in its own right and must be interfaced with
the others without compromising the functional specificities on which
their mutual dependencies turn. Those specificities include, for sensory
combined eye and head movements that furnish the rationale for casting
the orienting domain in nested rotation-based format. Taken together
these indications suggest that the two domains lend themselves to joint
implementation in a single unitary neural mechanism.
3 As such it would be in receipt of signals from any system in the brain, cortical or
subcortical, relevant to decision making aimed at the very next action (typically a targeted
gaze movement, as we have seen). The decision making in question therefore should by
no means be identified with deliberative processes or prefrontal executive activity. Such
activities serve as inputs among many others converging on the more basic and late
decision process envisaged here (see Merker 2007b, pp. 114, 118).
Let its place in the system be taken, then, by a miniature analog map,
compactly housing an intrinsic connectivity (presumably inhibitory)
dedicated to competitive decision making (Merker 2007a, p. 76; see also
Richards et al. 2006). This central map would be connected with both
world and body zones, in parallel and in tandem as far as its afferents
go, but with principal efference directed to eye and head movement
control circuitry associated with the body zone (cf. Deubel and Schneider
1996). This competitive mechanism lodged at the system's egocentric
origin might be regarded as either a decision maker or a monitoring
function, depending on which stage of its dynamic progress towards its
principal output (triggering the next gaze shift) is under consideration.
Its monitoring aspect is most in evidence at low levels of situational
decision-making pressure. It is marked e in Fig. 1.2.
So far, this decision nexus lacks one of the three principal sources of
afference it needs in order to settle residual options among the brain's
three task clusters, namely afference from the composite realm of moti-
vational variables. Again, the orienting domain offers a convenient topo-
logical space, so far unused, through which a variety of biasing signals
of this kind can be brought to bear on the decision mechanism. While
the world zone must extend up to the very surface of the body map,
there is nothing so far occupying the space between that surface and the
decision nexus occupying the egocenter inside its head representation.
That space can be rendered functional through the introduction of a vari-
ety of biasing signals, motivational ones among them, along and across
the connectivity by which the decision nexus is interfaced with the body
and world zones. This would embed the miniature analog decision map
in a system of afferents of diverse origin injecting bias into its decision
process, as depicted schematically in Fig. 1.2.
Interposed between the model body surface and the egocentric deci-
sion nexus, this multi-faceted system of extrinsically derived signals
would interact with those derived from body and world zones in their
influence on the central decision nexus. Current values of motivational
state variables would thus be introduced into the constraint satisfaction
regime as a whole (see Sibly and McFarland 1974 for a state-space treat-
ment of motivation). In keeping with their biasing function they would
not assume the spatially articulated forms characterizing the contents of
world and body zones, but something more akin to forces, fields, and ten-
sional states. Even then, each would have some specificity reflecting the
source it represents, implemented by means of differential localization
within the space enclosed by the model body surface. There they would
in effect supply the neural body with what amounts to agency vectors,
animating it from within, as it were. Motivational needs undoubtedly
[Figure 1.2 (schematic): the reality model with the decision nexus e at the egocenter, and BIAS and MOTIVATION signals occupying the space between the model BODY surface and the egocenter, all enclosed by the WORLD zone with its visual aperture.]
represent the principal source of such signals, but the functional logic
is not limited to these. Other signals of global import, such as the out-
come of a range of cognitive and memory-based operations conducted
elsewhere, could enter the scheme as biasing signals in this manner.
The introduction of the biasing mechanism into the body interior of
the reality model completes in outline the mechanism as a whole. Each of
its components (three separate content zones hosting bias, body, and
world contents, nested and arrayed in tandem around a central
egocentrically positioned decision nexus) is essential to its mode of operation.
The mechanism is unlikely to serve any useful purpose in the absence
of any one of them. It is thus a functionally unitary mechanism which
allows the highly diverse signals reflecting needs, bodily conformations,
and world circumstances to interact directly. Dynamic interactions across
the three within their shared coordinate space supply a kind of
functional "common currency" (cf. McFarland and Sibly 1975; Cabanac
1992) that allows the brain to harvest in real time the savings hidden
among their multiple mutual dependencies (Merker 2007a).
The substantial and mutually reinforcing advantages of implementing
constraint satisfaction among the brain's principal task clusters in the
setting of the orienting domain suggest that the brain may in fact have
equipped itself with an optimizing mechanism of this kind.4 Whether it
actually has done so can only be determined by empirically canvassing
the brain for a candidate instantiation of a neural system that matches
the functional properties of the proposed mechanism point for point.
Preliminary results of pursuing such a search into the functional anatomy
of the brain are contained in a recent publication of mine (Merker 2012).
Here, however, we are concerned only with the functional implications
and consequences of such a hypothetical arrangement, and turn to one
of those consequences next.
4 The present account violates two programmatic commitments of the subsumption archi-
tecture of behavior-based robotics introduced by Brooks (1986), namely the stipulations
"little sensor fusion" and "no central models". It would therefore seem to have to forego
some of the advantages claimed for subsumption architectures, but this is only apparently
so. The reality model of the present account is assumed to occupy the highest functional
level (which is not necessarily cortical; see Merker 2007a) of such an architecture with-
out being the sole input to behavior control. The initial phase of limb withdrawal on
sudden noxious stimulation and the vestibulo-ocular reflex are examples of behaviors
which access motor control independently of the reality model. See Merker (2007a,
pp. 69, 70, and 116) for further details.
as the mechanism that settles upon one or another of the open options,
with agency relative to the contents of the reality model.
In the setting of the model's spatial organization, this places the deci-
sion nexus in the presence of those contents in the sense that its egocen-
trically defined "here" is separated from the "there" of any and all con-
tents, and relates to them via relationally unidirectional causal options.
Relative to the vantage point of the decision nexus such a state of affairs
has all the attributes of a first-person perspective imbued with agency,
and accordingly defines a conscious mode of operation for the reality
model it serves (see Merker 1997 and Merker forthcoming for additional
detail).
To be clear on this crucial point of attribution of conscious status: the
crux of the matter is neither decision making itself nor its occurrence
in a neural medium. Decisions are continually being made in numerous
subsystems of the brain (in fact wherever outcomes are settled
competitively) without for that reason implying anything regarding conscious
status. It is only in the setting of the orienting domain, on account of the
interactions mandated by global constraint satisfaction within its egocen-
tric and spatially nested format, that decision making of the particular
kind we have considered has this consequence. This is because it entails
a global partitioning of the decision space into an asymmetric relation
between monitor and monitored, marked by an intervening set of causal
options. It is on account of this functional asymmetry that an inherently
perspectival relation between an agent, the decision-making vantage
point (egocenter), and the tri-partite contents of the reality space (bias,
body, world) is established. Such a relation is the defining feature of
the first-person perspective of consciousness, and of it alone, rendering
the mechanism that implements it conscious. It operates consciously,
that is, by virtue of this functional format alone, and not by virtue of
anything that has been or needs to be added to it in order to make it
conscious.
The only natural setting in which such a format is likely to arise would
seem to be the evolution of centralized brains, given the numerous spe-
cific and interlocking functional requirements that must conspire in order
to fashion such a format. Its functional utility is predicated solely on the
predicament of a brain captive in a skull and under pressure to opti-
mize its efficiency as controller. Since the proposed mechanism would
generate savings in behavioral resource expenditure, it would hardly be
surprising if some lineages of animals, our own included, had in fact
evolved such a mechanism. If, therefore, the claim that such a functional
format defines a conscious mode of operation is sound, it would be worth
examining the thesis that it is the so-far hypothetical presence of such a
mechanism in our own brains that accounts for our status as conscious
beings. For that to be the case we ourselves would have to be a part of,
and a quite specific part of, that mechanism. This follows from the fact
that the functional asymmetry at the heart of the mechanism ensures
that the only way to attain to consciousness on its terms is to occupy the
position of egocentrically placed decision maker within it. Let us exam-
ine, therefore, the fit of that possibility with some of what we know about
our own conscious functioning.
Consider, first, the curious format of our visual waking experience, that
namely, by which we face, from a position inside our head, a surrounding
panoramic visual world through an empty, open, cyclopean aperture in
our upper face region. Anyone can perform the Hering triangulation to
convince themselves that the egocentric perspective point of their visual
access to the world is actually located inside their head, some 4 cm right
behind the bridge of the nose (Roelofs 1959; Howard and Templeton
1966). That, in other words, is the point from where we look. But that
point, biology tells us, is occupied and surrounded on all sides by biological
tissues rather than by empty space. How then can it be that looking from
that point we do not see body tissues, but rather the vacant space through
which in fact we confront our visual world in experience?
From the present perspective such an arrangement would be the brain's
elegant solution to the problem of implementing egocentric nesting of
body and world, given that the body in fact is visually opaque, but the
egocenter must lodge inside the model body's head, for simplicity of
compensatory coordinate transformations between body and world. The
problem the brain faces in this regard is the following: the body must
be included in the visual mapping of things accessible from the egocen-
tric origin inside the body's head. Such inclusion is unproblematic for
distal parts of the body, which can be mapped into the reality space as
any other visual objects in the world. However, in the vicinity of the
egocenter itself, persistence in the veridical representation of the body
as visually opaque would block the egocenter from visual access to the
world, given the egocenter's location inside the head representation of the
body.
In this mapping quandary the brain has an option regarding the design
of a model neural body that is not realizable in a physical body. That
option is to introduce a neural fiction into the reality model. This is the
cyclopean aperture through which the egocenter is interfaced with the
visual world from its position inside the head (Merker 2007a, p. 73). But
this is exactly the format in which our visual experience demonstrably
comes to us, namely that format of direct confrontation of the surround-
ing world from inside the body which naive realism erroneously interprets
as a direct confrontation of the physical universe.
We actually do find ourselves looking out at the world from inside our
heads through an oval empty aperture. This view, though for one eye
only, is captured in Ernst Mach's classical drawing, reproduced here as
Fig. 1.3 (Mach 1897). When both eyes are open the aperture is a symmet-
rical ovoid within which the nose typically disappears from view. What
then is this paramount fact of our conscious visual macrostructure other
than direct, prima facie, evidence that our brain in fact is equipped with
a mechanism along the lines proposed here, and that we do in fact form
part of this mechanism by supplying its egocentric perspectival origin?
This body, which we can see and touch and which is always present
wherever we are and obediently follows our every whim, would accord-
ingly be the model neural body contrived as part of the brains reality
model. And this rich and complex world that extends all around us and
stays perfectly still, even when we engage in acrobatic contortions, would
be the brain model's synthesized world, stabilized at considerable neural
expense. How else could that world remain unaffected by those contor-
tions, given that visual information about it comes to us through signals
from our retinae, organs which flail along with the body during those
contortions?
From this point of view a number of phenomena pertaining to the
nature and contents of our consciousness can be interpreted as products
of the workings of the proposed reality model and of our suggested place
as decision-making egocenter within its logistics. Recall the suggestion
that the representationally unutilized space between the model neural
body wall and the egocenter lodged inside of it could be used to introduce
a variety of biasing signals from motivational and other systems. This is
where in fact we do experience a variety of impulses and tensional states
impelling us to a variety of actions in accordance with their location and
qualitative attributes. Motivational signals such as hunger, fear, bladder
distension, and so forth, do in fact enter our consciousness as occurrences
in various parts of our body interior (such as our chest region). Each of
these variously distributed biases feels a bit different and makes us want
to do different things (Sachs 1967; Izard 1991, 2007). Thus bladder dis-
tension is not only experienced in a different body location than is hunger
or anger, it feels different from them, and each impels us to do different
things. Common to them all is their general, if vague, phenomenological
localization to the body interior, in keeping with what was proposed in
the section devoted to the decision domain.
Far from all bodily systems and physiological mechanisms are thus
able to intrude on our consciousness, or have any reason to do so. An
analysis by Morsella and colleagues shows that those among them that
do so involve, in one way or another, action on the environment (or
on the body itself) by musculoskeletal means (Morsella 2005; Morsella
Fig. 1.3 Ernst Mach's classical rendition of the view through his left
eye. Inspection of the drawing discloses the dark fringe of his eyebrow
beneath the shading in the upper part of the figure, the edge of his
moustache at the bottom, and the silhouette of his nose at the right-
hand edge of the drawing. These close-range details framing his view
are available to our visual experience, particularly with one eye closed,
though not as crisply defined as in Machs drawing. In a full cyclo-
pean view with both eyes open the scene is framed by an ovoid within
which the nose typically disappears from view (see Harding, 1961, for a
detailed first-person account). Apparently, Mach was a smoker, as indi-
cated by the cigarette extending forward beneath his nose. The original
drawing appears as Figure 1 in Mach (1897, p. 15). It is in the pub-
lic domain, and is reproduced here in a digitally retouched version,
courtesy of Wikimedia (http://commons.wikimedia.org/wiki/File:Ernst
Mach Innenperspektive.png, accessed March 1, 2013).
Body-world: Phenomenal contents of the reality model 29
and Bargh 2007; Morsella et al. 2010). This principle fits well with the
present perspective, which traces the very existence and nature of con-
sciousness to functional attributes of a mechanism designed to optimize
exactly such behavioral deployment. Thus, the regulation of respiration
is normally automatic and unconscious, but when blood titres of oxygen
and carbon dioxide go out of bounds it intrudes on consciousness in the
form of an overwhelming feeling of suffocation (Liotti et al. 2001; see also
Merker 2007a, p. 73). Correcting the cause of such blood gas deviations
may require urgent action on the environment (say, to remove a physical
obstruction from the airways or to get out of a carbon dioxide–filled pit).
The critical nature of the objective matches the urgency of the feeling
that invades our consciousness in such emergencies. For additional con-
siderations and examples bearing on the relation between motivational
factors and consciousness, see Cabanac (1992, 1996) and Denton et al.
(2009).
Just as many bodily processes lack grounds for being represented in
consciousness, so do many neural ones. As noted in the introduction,
the busy neural traffic that animates the many way stations along our
sensory hierarchies is not accessible to consciousness in its own right.
Only its final result, a synthetic product of many such sources conjointly,
enters our awareness: the rich and multi-modal world that surrounds
us (Merker 2012). There is no dearth of evidence regarding neural
activity unfolding implicitly without entering consciousness (for vision
alone, see Rees 2007 and references therein). This includes activity at
advanced stages of semantic interpretation, motor preparation at the
cortical level, and even instances of prefrontal executive activity (Luck
et al. 1996; Dehaene et al. 1998; Eimer and Schlaghecken 1998; Frith
et al. 1999; Gaillard et al. 2006; Lau and Passingham 2007; van Gaal et al.
2008).
One straightforward interpretation of this kind of evidence assigns
conscious contents to a separate and dedicated neural mechanism, as
proposed under the name of the conscious awareness system by Daniel
Schacter (1989, 1990). The present conception of a dedicated reality
model is in good agreement with that proposal in its postulation of a
unitary neural mechanism hosting conscious contents. In fact, on the
present account, the reality model must exclude much of the brain's
ongoing activity, sensory as well as motor, in order to protect the
integrity of its constraint satisfaction operations. To serve their purpose
those operations must range solely over the model's internal contents
with respect to one another, and they should occur exclusively within
the nested format that hosts them in the reality space. Such functional
independence is presumably most readily achieved through local and
30 Bjorn Merker
5 The cerebral cortex appears to offer a most inhospitable environment for such an arrange-
ment. The profuse, bidirectional, and exclusively excitatory nature of cortical inter-areal
connectivity poses a formidable obstacle to any design requiring a modicum of functional
independence (see Merker 2004). There is also no known cortical area (or combination
of areas) whose loss will render a patient unconscious (cf. Merker 2007a). On the present
account, the cerebral cortex serves, rather, as a source of much of the
sophisticated information utilized by the model's reality synthesis, supplied
to it by remote and convergent
cortical projections. Candidate loci of multi-system convergence are of course available
in a number of subcortical locations. See Merker (2012) for further details.
in present terms reality space contents are the contents of our sensory
consciousness, this means that we should be oblivious to veridical sen-
sory changes introduced at the exact time of a reset. So we are, indeed, as
demonstrated by the well-documented phenomenon of change blindness
(Simons and Ambinder 2005).
The only conscious contents that appear to reliably escape oblivion
in the reset are those held in focal attention (Rensink et al. 1997), a
circumstance most readily interpretable as indicating a privileged rela-
tion between the contents of focal attention and memory (Turatto et al.
2003; see also Wolfe 1999, and Merker 2007a, p. 77), allowing pre- and
post-reset focal content to be compared. Focal attention and its contents
accordingly would be the key factor maintaining continuity of conscious-
ness across frequent resets of model content, as might be expected from
the central role of a competitive decision mechanism of limited capac-
ity at its operational core. The more intense and demanding the focal
attentional engagement, the higher the barrier against its capture by an
alternate content, as dramatically demonstrated in inattentional blindness
(Mack 2003; Most et al. 2005; see also Cavanagh et al. 2010 for further
considerations relevant to update operations).
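The asymmetry described above can be caricatured in a few lines of illustrative code. This is purely a sketch of the logic, not the author's model: the function names and scene items are invented for demonstration. Because the reality model is rebuilt at each reset, only an item whose pre-reset content was held by focal attention can be compared across the reset, so only its changes are detectable.

```python
# Toy sketch of change blindness across a model "reset": the scene is
# rebuilt from scratch, so only the item held in focal attention retains
# a pre-reset value that can be compared after the reset.

def detect_changes(pre_scene, post_scene, focus):
    # Focal attention preserves one item's pre-reset content in memory.
    remembered = {focus: pre_scene[focus]}
    # After the reset, only remembered content supports change detection.
    return [item for item, value in remembered.items()
            if post_scene[item] != value]

pre  = {"door": "open", "sign": "red", "cup": "full"}
post = {"door": "shut", "sign": "red", "cup": "empty"}

print(detect_changes(pre, post, focus="door"))  # change at the attended item
print(detect_changes(pre, post, focus="sign"))  # the cup's change goes unnoticed
```

On this caricature, the unattended change to the cup is invisible whenever attention is elsewhere, mirroring the change-blindness findings cited above.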
More generally, our limited capacity to keep in mind or track inde-
pendent items or objects simultaneously (Miller 1956; Mandler 1975;
Cowan 2001) presumably reflects the late position of the reality model
(consciousness) in the brain's functional economy. As emphasized previously,
the decision nexus engages only the final (residual) options to
be settled in order to trigger the next gaze movement (and reset), and
as such forms the ultimate informational bottleneck of the brain as a
whole. It receives convergent afference from the more extensive (though
still compact) world, body, and bias maps of the reality space, and they
in turn are convergently supplied by the rest of the brain. Some such
arrangement is necessary if the brain's distributed labors are to issue,
as they must, in unitary coherent behavioral acts (McFarland and Sibly
1975). Moreover, in its capacity as final convergence zone, the decision
nexus brings the informational advantages of the quite general bottleneck
principle to bear on the model's global optimization task (Damasio 1989;
Moll and Miikkulainen 1997; Kirby 2002).
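The limited-capacity, competitive selection attributed here to the decision nexus can be sketched, purely as an illustration, as a winner-take-all choice over a handful of residual options. The function name, the candidate actions, and the numeric weights below are all invented for demonstration and are not drawn from the chapter.

```python
# Toy winner-take-all sketch: many distributed sources converge on a few
# weighted "residual options" (the bottleneck), and a single winner
# triggers the next behavioral act.

def decision_nexus(options, capacity=4):
    # Bottleneck: only the few strongest options reach the decision stage.
    residual = sorted(options.items(), key=lambda kv: kv[1],
                      reverse=True)[:capacity]
    # Competitive selection: the single strongest residual option wins.
    winner, _ = max(residual, key=lambda kv: kv[1])
    return winner

bids = {"look left": 0.2, "look right": 0.7, "reach": 0.4,
        "withdraw": 0.1, "vocalize": 0.05}
print(decision_nexus(bids))  # prints "look right"
```

However many sources feed the bids, exactly one act issues from the bottleneck, which is the point of the McFarland and Sibly argument cited above.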
The crucial position of the decision nexus at the core of the reality space
may account for a further, quite general, aspect of our consciousness as
well: our sense of facing the world as an arena of possibilities within
which we exercise choice among alternative actions. As we have seen, the
model's operational logic interposes causal options between the decision
nexus and the veridical contents of the reality space. Our sense of having
options and making choices, a sense that presumably underlies the
[Figure: schematic of the reality model. Nested world, body, and brain maps, with the model's visual aperture, span the sensory and motor interfaces and receive motivational input.]
not be allowed to deceive us into thinking that the world that appears to
us in direct sensory experience is the physical universe itself rather than
a model of it.
A sound theory of consciousness therefore must abandon, in its turn
and on this point uniquely, trust in the deliverances of consciousness as
a guide to the realities we wish to understand. It must affirm, in other
words, the soundness of the fundamental tenet of philosophical idealism,
namely that the first person has direct access to contents of consciousness
alone, and to nothing but contents of consciousness. That tenet mandates
that the world we experience around us be included among the contents
of consciousness. Doing so keeps our conception of consciousness from
being confined to less than its actual scope, and so saves us from fatal
error.
REFERENCES
Achlioptas D., Naor A., and Peres Y. (2005). Rigorous location of phase transi-
tions in hard optimization problems. Nature 435:759–764.
Adelson E. H. and Bergen J. R. (1991). The plenoptic function and the elements
of early vision. In Landy M. S. and Movshon J. A. (eds.) Computational
Models of Visual Processing. Cambridge, MA: MIT Press, pp. 1–20.
Altman J. S. and Kien J. (1989). New models for motor control. Neural Comput
1:173–183.
Andersen R. A. and Mountcastle V. B. (1983). The influence of the angle of gaze
upon the excitability of the light-sensitive neurons of the posterior parietal
cortex. J Neurosci 3:532–548.
Bernstein N. A. (1967). The Co-ordination and Regulation of Movements. Oxford:
Pergamon.
Brandt T., Dichgans J., and Koenig E. (1973). Differential effects of central
versus peripheral vision on egocentric and exocentric motion perception.
Exp Brain Res 16:476–491.
Brooks R. A. (1986). A robust layered control system for a mobile robot. IEEE J
Robotic Autom 2:14–23.
Cabanac M. (1992). Pleasure: The common currency. J Theor Biol 155:173–200.
Cabanac M. (1996). The place of behavior in physiology. In Fregly M. J. and
Blatteis C. (eds.) Handbook of Physiology, Section IV: Environmental Physiol-
ogy. Oxford University Press, pp. 1523–1536.
Cavanagh P., Hunt A. R., Afraz A., and Rolfs M. (2010). Visual stability based
on remapping of attention pointers. Trends Cogn Sci 14:147–153.
Chang S. W. C., Papadimitriou C., and Snyder L. H. (2009). Using a compound
gain field to compute a reach plan. Neuron 64:744–755.
Conant R. C. and Ashby W. R. (1970). Every good regulator of a system must
be a model of that system. Int J Syst Sci 1:89–97.
Cowan N. (2001). The magical number 4 in short-term memory: A reconsider-
ation of mental storage capacity. Behav Brain Sci 24:87–185.
2.1 Introduction
Discovering the events in the nervous system that are responsible for the
instantiation of conscious states remains one of the most daunting chal-
lenges in science (Crick 1995; Koch 2004). This puzzle is often ranked
as one of the top unanswered scientific questions (e.g., Roach 2005).
To the detriment of the scientist, the problem is unfortunately far more
difficult than non-experts may surmise: Investigators focusing on the
problem not only lack any inkling of how something like consciousness
could arise from something like the brain; they cannot even begin to
fathom how something like consciousness could emerge from any set of
real or hypothetical circumstances whatsoever. When speaking about
conscious states, we are referring to the most basic form of consciousness,
the kind falling under the rubrics of "subjective experience," "qualia,"
"sentience," "basic awareness," and "phenomenal state." This basic form
of consciousness has been defined by Nagel (1974), who claimed that an
organism has basic consciousness
44 Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella
1 Similarly, Block (1995) claimed, "the phenomenally conscious aspect of a state is what
it is like to be in that state" (p. 227). For good reason, some theoreticians argue that the
term should be "conscious experience" instead of "conscious state." We will continue to
use the latter only because it will make the terminology consistent with that of previous
publications.
2 The unconscious mind comprises information-processing events in the nervous system
that, though capable of systematically influencing behavior, cognition, motivation, and
emotion, do not influence the organism's subjective experience in such a way that the
organism can directly detect, understand, or report the occurrence or nature of these
events (Morsella and Bargh 2010b).
Mechanisms of consciousness: The perception-action buffer 45
associated with conscious states (Section 2.3), and (3) home in on the
mental representations (the tokens of mental operations) associated with
conscious states (Section 2.4). In a funneled approach, each section
attempts to home in on the correlates of consciousness at a more micro
level than the previous section. As is evident later, the literature reviewed
in the three sections reveals that conscious states are restricted to only a
subset of nervous and mental processes. We conclude our chapter with
a simplified framework the Buffer of the Perception-and-Action Interface
(BPAI) that attempts to present in schematic form the modal findings
(but not the exhaustive findings) and conclusions about the nature of
conscious states in the nervous system.
3 In binocular rivalry (Logothetis and Schall 1989), an observer is presented with different
visual stimuli to each eye (e.g., an image of a house in one eye and of a face in the
other). It might seem reasonable that, faced with such stimuli, one would perceive an
image combining both objects a house overlapping a face. Surprisingly, however, an
observer experiences seeing only one object at a time (a house and then a face), even
though both images are always present. At any moment, the observer is unaware of
the computational processes leading to this outcome; the perceptual conflict and its
resolution are unconscious.
appears to be more tractable and less widespread in the brain than that
of, say, vision or higher-level processing such as music perception. Unlike
other sensory modalities, afference from the olfactory sensory system
bypasses the thalamic first-order relay neurons and, after processing
in the olfactory bulb, directly influences the olfactory (piriform) cortex
(Haberly 1998). Specifically, afferents from the olfactory sensory system
bypass the thalamus and directly target regions of the ipsilateral cortex
(Shepherd and Greer 1998; Tham et al. 2009). Importantly, this is not
to say that a conscious brain experiencing only olfaction does not
require the thalamus: in post-cortical stages of processing, the mediodorsal
nucleus of the thalamus does receive inputs from cortical regions that
are involved in olfactory processing (Haberly 1998).
Because olfactory afferents bypass the relay thalamus, one can con-
clude that, at least for olfactory experiences and under the assumptions
described in the following, a conscious state of some sort need not include
the first-order thalamic nuclei (Morsella et al. 2010). Accordingly, pre-
vious findings suggest that the olfactory bulb, which has been described
as being functionally equivalent to the first-order relay of the thalamus
(Kay and Sherman 2007), is not required for endogenic olfactory con-
sciousness (Mizobuchi et al. 1999; Henkin et al. 2000). Specifically,
knowledge regarding the neural correlates of conscious olfactory percep-
tions, imagery, and hallucinations (Markert et al. 1993; Mizobuchi et al.
1999; Leopold 2002), as revealed by direct stimulation of the brain (Pen-
field and Jasper 1954), neuroimaging (Henkin et al. 2000), and lesion
studies (Mizobuchi et al. 1999), suggests that endogenic, olfactory con-
sciousness does not require the olfactory bulb. In addition, it seems that
patients can still experience explicit, olfactory memories following bilat-
eral olfactory bulbectomies, though the literature is in want of systematic
studies regarding this important observation.
Regarding the mediodorsal thalamic nucleus (MDNT), although it
likely plays a significant role in olfactory discrimination (Eichenbaum
et al. 1980; Slotnick and Risser 1990; Tham et al. 2011), identification,
and hedonics (Sela et al. 2009), as well as in more general cognitive pro-
cesses, including memory (Markowitsch 1982), learning (Mitchell et al.
2007), and attentional processes (Tham et al. 2009; Tham et al. 2011),
no study we are aware of has found a lack of basic conscious olfactory
experience resulting from lesions of the MDNT (but see theorizing in
Plailly et al. 2008). Regarding second-order thalamic relays such as the
MDNT, one must keep in mind that their internal circuitry seems similar
to that of first-order relays (Sherman and Guillery
2006), which is quite simple compared with, say, that of a cortical column.
Nevertheless, because to date there is no strong theorizing regarding the
involve competition for control of the skeletal muscle output system and
are triggered by incompatible skeletomotor plans, as when one holds
one's breath while underwater, suppresses uttering something, or inhibits
a prepotent response in a laboratory response interference paradigm (e.g.,
the Stroop and flanker tasks; Stroop 1935; Eriksen and Eriksen 1974).
(In the Stroop task, one must name the color in which a word is written.
When the word and color are incongruent [e.g., "RED" in blue], response
conflict leads to interference [e.g., increased response times]. When the
color matches the word [e.g., "RED" in red] or the color is presented on a
neutral stimulus [e.g., "XXXX"], there is little or no interference.)
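The congruency logic of this paradigm can be sketched in a few lines of illustrative code. The timing values and function names here are invented for demonstration only; they are not parameters from the Stroop literature.

```python
# Toy sketch of Stroop-style response conflict (illustrative values only).
# Naming the ink color is slowed when the automatically read word
# activates a competing color-naming response.

BASE_RT_MS = 600        # hypothetical baseline color-naming time
CONFLICT_COST_MS = 100  # hypothetical cost of an incompatible word response

def trial_type(word, ink):
    if word == "XXXX":              # neutral stimulus: no competing word
        return "neutral"
    if word.lower() == ink.lower():
        return "congruent"
    return "incongruent"

def predicted_rt(word, ink):
    # Only an incompatible word/color pairing adds the conflict cost.
    cost = CONFLICT_COST_MS if trial_type(word, ink) == "incongruent" else 0
    return BASE_RT_MS + cost

print(predicted_rt("RED", "red"))    # congruent: 600
print(predicted_rt("XXXX", "red"))   # neutral: 600
print(predicted_rt("RED", "blue"))   # incongruent: 700, i.e., slower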
multiple agentic systems. Each system has its peculiar operating prin-
ciples and phylogenetic origins. Most effector systems do not suffer
from this particular kind of multi-determined guidance. Although sim-
ple motor acts suffer from the degrees of freedom problem, because
there are countless ways to instantiate a motor act such as grasping a
handle (Rosenbaum 2002), action goal selection (e.g., what action goal to
implement next) suffers from this problem to a greater extent (Morsella
and Bargh 2010a). In action goal selection the challenge is met, not
by unconscious motor algorithms (as in the case of motor programming;
Rosenbaum 2002), but by the ability of conscious states to crosstalk infor-
mation and constrain what we do by having the inclinations of multiple
systems constrain and curb skeletomotor output: one system "protests"
one exploratory act (e.g., touching a flame), while another "reinforces"
another act (e.g., eating something sweet).
It has been known since at least the nineteenth century that, though
often functioning unconsciously (as in the frequent actions of breath-
ing, blinking, and postural shifting), skeletal muscle is the only bodily
effector that can be consciously controlled, but why this is so has never
been addressed theoretically. SIT introduces a systematic reinterpreta-
tion of this age-old fact: skeletomotor actions are at times consciously
mediated because they are directed by multiple, encapsulated systems
that, when in conflict, require conscious states to crosstalk and yield
adaptive action. Although identifying still higher-level systems is beyond
the purview of SIT, PRISM correctly predicts that certain aspects of the
expression (or suppression) of emotions (e.g., aggression, affection, dis-
gust, and so forth), reproductive behaviors, parental care, and addiction-
related behaviors should be coupled with conscious states, for the action
tendencies of such processes may compromise skeletal muscle plans (of
other systems).
In support of SIT, experiments have revealed that incompatible skele-
tomotor intentions (e.g., to point right and left, to inhale and not inhale)
do produce strong, systematic intrusions into consciousness, but no
such changes accompany smooth muscle conflicts or conflicts occur-
ring at perceptual stages of processing (e.g., intersensory processing; see
meta-analysis of evidence in Morsella et al. 2011). Accordingly, of the
many conditions in interference paradigms, the strongest perturbations
in consciousness (e.g., urges to err) are found in conditions involving
the activation of incompatible action plans (Morsella et al. 2009a,d).
Conversely, when distinct processes lead to harmonious action plans, as
when a congruent Stroop stimulus activates harmonious word-reading
and color-naming plans (e.g., "RED" is presented in red), there are
few such perturbations in consciousness, and participants may even be
unaware that more than one cognitive process led to a particular overt
action plan (e.g., uttering "red"). This phenomenon, called "synchrony
blindness" (Molapour et al. 2011), is perhaps more striking in the congruent
(pro-saccade) condition of the anti-saccade task, in which distinct
brain regions/processes indicate that the eyes should move in the
same direction (see Morsella et al. 2011). Regarding the Stroop data,
after carefully reviewing the behavioral and psychophysiological evidence,
MacLeod and MacDonald (2000) concluded that participants often do
read the word inadvertently in the congruent condition but that they
may be unaware of this process: "The experimenter (perhaps the participant
as well) cannot discriminate which dimension gave rise to the
response on a given congruent trial" (p. 386; see also Eidels et al. 2010;
Roelofs 2010). For additional evidence regarding synchrony blindness in
the Stroop task, see Molapour et al. (2011).
In summary, SIT is unique in its ability to explain the effects in con-
sciousness of (1) intersensory conflicts, (2) smooth muscle conflicts,
(3) synchrony blindness, and (4) conflicts from action plans (e.g., holding
one's breath). In sum, the SIT framework has been successful
in homing in on the component processes of action production
that are associated with consciousness. Based on the crosstalk function
of the phenomenal state, integrated action-goal selection can take into
account the "votes" of the often conflicting component response systems
(Morsella 2005), as when one system wants to approach a stimulus
and another system wants to avoid the stimulus. It has been proposed
that the well-known lateness of consciousness in processing stems from
the fact that phenomenal states must integrate information (which is
necessary for one system to veto another) from neural sources having
different processing speeds (Libet 2004). These "votes" can be
construed as tendencies based on inborn or learned knowledge. This
knowledge has been proposed to reside in the neural circuits of the ventral
thalamocortical processing stream (Goodale and Milner 2004; Sher-
man and Guillery 2006), where information about the world is repre-
sented in a unique manner (e.g., representing the invariant aspects of the
world, involving allocentric coordinates) unlike that of the dorsal stream
(e.g., representing the variant aspects of the world, using egocentric
coordinates).
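The allocentric/egocentric contrast drawn here can be made concrete with a minimal sketch, ignoring rotation and using invented names and coordinates purely for illustration: an allocentric (world-fixed) location stays invariant while the egocentric (observer-relative) one changes as the observer moves.

```python
# Toy sketch of the same object location expressed egocentrically
# (relative to the observer) and allocentrically (relative to the world),
# caricaturing the dorsal "variant" vs. ventral "invariant" contrast.
# Translation only; rotation is ignored for simplicity.

def to_allocentric(observer_pos, egocentric_offset):
    ox, oy = observer_pos
    ex, ey = egocentric_offset
    return (ox + ex, oy + ey)

def to_egocentric(observer_pos, allocentric_pos):
    ox, oy = observer_pos
    ax, ay = allocentric_pos
    return (ax - ox, ay - oy)

cup_world = (5, 3)                    # allocentric: fixed in the world
print(to_egocentric((0, 0), cup_world))  # (5, 3) from the origin
print(to_egocentric((2, 1), cup_world))  # (3, 2) after the observer moves
```

The allocentric coordinates of the cup never change across the two calls; only the egocentric ones do, which is the sense in which the two streams are said to represent invariant versus variant aspects of the world.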
control seems more far reaching than it actually is at one moment of time.
Goodale and Milner (2004) go on further to propose that it is through the
top-down activation of low-level retinotopic perceptual representations,
representations that are common to both the ventral and dorsal process-
ing streams (e.g., retinotopic representations in the visual system), that
the ventral system interacts with the dorsal system.
acting (e.g., saying "hello"), dreaming (e.g., saying "hello" in the dream
world), or observing the action of another (e.g., hearing "hello"). This is
consistent with the Sensorium Hypothesis (Müller 1843; James 1890; Gray
2004; Morsella and Bargh 2010a) that action/motor processes are largely
unconscious (Grossberg 1999; Goodale and Milner 2004; Gray 2004),
and the contents of consciousness are influenced primarily by perceptual-
based (and not action-based) events and processes (e.g., priming by per-
ceptual representations). (See brain stimulation evidence in Desmurget
et al. 2009.) Accordingly, it has been proposed that, in terms of stages
of processing, that which characterizes conscious content is the notion
of perceptual afference (information arising from the world that affects
sensory-perceptual systems) or perceptual re-afference (information aris-
ing from corollary discharges or efference copies of our own actions;
cf. Christensen et al. 2007; Obhi et al. 2009), both of which are cases of
afferent processing. Sherrington (1906) aptly referred to these two, sim-
ilar kinds of information as exafference (when the source of information
stems from the external world) and reafference (when the source is from
our own actions). As mentioned earlier, it seems that we do not have
direct, conscious access to motor programs or other kinds of efference
generators (Grossberg 1999; Rosenbaum 2002; Morsella and Bargh
2010a), including those for language (Levelt 1989), emotional systems
(e.g., the amygdala; Anderson and Phelps 2002; Ohman et al. 2007), or
executive control (Crick 1995; Suhler and Churchland 2009). It is for
this reason that, when speaking, one often does not know exactly which
words one will utter next until the words are uttered or subvocalized
following word retrieval (Levelt 1989; Slevc and Ferreira 2006).
Importantly, these conscious contents (e.g., urges and perceptual rep-
resentations) are similar to (or perhaps one and the same with) the
contents that occupy the buffers in working memory, a large-scale
mechanism that is intimately related to both consciousness and action
production (Fuster 2003; Baddeley 2007). Working memory is one of
the main topics of our next section.
even one and the same. When delaying uttering the word "L'Chayim"
(the action goal) until the appropriate cue is experienced (a toast is
made), the knowledge guiding the realization of the action goal (i.e., to
utter "L'Chayim") could have stemmed from (1) hearing another person
say the word, (2) imagining saying the word, or (3) having a memory
of what the word sounds like. In this way, the action-goal representa-
tions that influence action are provided either by the external world, as
in the case of interference during the Simon task, or by memory sys-
tems that historically have been part of the ventral processing stream
(Milner and Goodale 1995), a system concerned with adaptive action
goal selection (Morsella and Bargh 2010a) rather than motor control,
which has been associated with the dorsal pathway (Goodale and Milner
2004).
Our review leads one to conclude that voluntary action production
(for an explanation regarding why skeletal muscles have been
regarded as "voluntary" muscles, see earlier) is usually guided by the
organisms ability to foreground one action goal representation over
another (Curtis and D'Esposito 2009; Johnson and Johnson 2009). Such
"refreshing" of a representation (Johnson and Johnson 2009) keeps a
representation in the foreground of, say, working memory, and is intimately
related to consciousness: the representation that is intentionally
refreshed occupies the conscious field. Critical to this foregrounding
process is attention, which is a limited resource (Cowan 1988). Interestingly,
it was James (1890) who concluded that, to guide action (which is medi-
ated in large part unconsciously), all the conscious will can do is attend
to one representation over another. Thus, the will usually resides in
a buffer that is concerned with skeletal muscle action and is limited to
selecting (the modal process), vetoing (Libet 2004), or manipulating
(e.g., in the case of mental rotation; Shepard and Metzler 1971) these
perceptual-like representations.
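The refreshing-under-decay dynamic described above can be caricatured in a short sketch. The decay and boost values, the function name, and the two memory items are all invented for demonstration; this is not a model proposed by the chapter.

```python
# Toy sketch of "refreshing" in working memory: all representations
# decay each step, and attending to one boosts it, keeping it in the
# foreground of the conscious field.

DECAY = 0.8   # hypothetical per-step retention
BOOST = 0.5   # hypothetical attentional boost

def step(activations, attended):
    # Everything decays; only the attended representation is refreshed.
    new = {item: a * DECAY for item, a in activations.items()}
    new[attended] += BOOST
    return new

wm = {"cheers": 1.0, "salud": 1.0}
for _ in range(5):
    wm = step(wm, attended="cheers")
print(max(wm, key=wm.get))  # prints "cheers"
```

The unattended item fades while the refreshed one remains foregrounded, illustrating why attention, as a limited resource, ends up selecting which representation occupies the conscious field.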
Figure 2.1 illustrates the basic components of the BPAI, in its modal
form of processing. In the figure, the phonological representation of the
word "cheers" is held in mind consciously, activated above a conscious
threshold (i.e., supraliminally) by some external stimulus or sustained
through refreshing and attentional processing in working memory. In this
case, the conscious representation can be construed as a memory of the
perceptual consequences of the action. The representation is flanked
by unconscious representations that, because they are unconscious, are
incapable of being broadcast to the same extent as the conscious
representation for "cheers". Below the conscious representation is a schematic
of the conscious field through which the representation is broadcasted.
The detectors of response systems receive and process the broadcasted
[Figure: schematic of the BPAI. A conscious memory of the perceptual consequences of action ("CHEERS") is flanked by unconscious memories of perceptual consequences of action ("HOUSE," "TOAST," "SALUD," "WIND"); the conscious representation is broadcast through the conscious field to the detectors of response systems (whose operations may be unconscious) and, if selected for production, passed to unconscious motor programming.]
2.6 Conclusion
With a brutally reductionistic approach (Morsella et al. 2010), in our
literature review we attempted to home in on both the unique func-
tions and nervous events/organizations (e.g., circuits) associated with
conscious states (e.g., Morsella et al. 2010). Specifically, we sought to
(1) home in on the neuroanatomical loci constituting conscious states
(Section 2.2), (2) home in on the basic component mental processes
associated with conscious states (Section 2.3), and (3) home in on the
mental representations associated with conscious states (Section 2.4).
Each section attempted to home in on the correlates of consciousness at
a more micro level than the last section. The literature reviewed in the
three sections reveals that conscious states are restricted to only a subset
of nervous and mental processes. Our BPAI model illustrates the modal
findings and conclusions in schematic form.
REFERENCES
Anderson A. K. and Phelps E. A. (2002). Is the human amygdala critical for the
subjective experience of emotion? Evidence of intact dispositional affect in
patients with amygdala lesions. J Cognitive Neurosci 14:709–720.
Arendes L. (1994). Superior colliculus activity related to attention and to con-
notative stimulus meaning. Cognitive Brain Res 2:65–69.
Baars B. J. (1998). The function of consciousness: Reply. Trends Neurosci 21:201.
Baars B. J. (2002). The conscious access hypothesis: Origins and recent evidence.
Trends Cogn Sci 6:47–52.
Baars B. J. (2005). Global workspace theory of consciousness: Toward a cognitive
neuroscience of human experience. Prog Brain Res 150:45–53.
Baddeley A. D. (2007). Working Memory, Thought and Action. Oxford University
Press.
Bargh J. A. and Morsella E. (2008). The unconscious mind. Perspect Psychol Sci
3:73–79.
Barr M. L. and Kiernan J. A. (1993). The Human Nervous System: An Anatomical
Viewpoint, 6th Edn. Philadelphia, PA: Lippincott.
Barsalou L. W. (1999). Perceptual symbol systems. Behav Brain Sci 22:577–609.
Baumeister R. F. and Masicampo E. J. (2010). Conscious thought is for facil-
itating social and cultural interactions: How simulations serve the animal-
culture interface. Psychol Rev 117:945–971.
Bellebaum C., Koch B., Schwarz M., and Daum I. (2008). Focal basal ganglia
lesions are associated with impairments in reward-based reversal learning.
Brain 131:829–841.
Berger C. C., Bargh J. A., and Morsella E. (2011). The what of doing:
Introspection-based evidence for James's ideomotor principle. In Durante
A. and Mammoliti C. (eds.) The Psychology of Self-Control. New York: Nova
Publishers, pp. 145–149.
Berthoz A. (2002). The Brain's Sense of Movement. Cambridge, MA: Harvard
University Press.
Berti A. and Pia L. (2006). Understanding motor awareness through normal and
pathological behavior. Curr Dir Psychol Sci 15:245–250.
Bindra D. (1974). A motivational view of learning, performance, and behavior
modification. Psychol Rev 81:199–213.
Bindra D. (1978). How adaptive behavior is produced: A perceptual-motivational
alternative to response-reinforcement. Behav Brain Sci 1:41–91.
Block N. (1995). On a confusion about a function of consciousness. Behav Brain
Sci 18:227–287.
Boly M., Garrido M. I., Gosseries O., Bruno M. A., Boveroux P., Schnakers C.,
et al. (2011). Preserved feedforward but impaired top-down processes in the
vegetative state. Science 332:858–862.
Buchsbaum B. R. and D'Esposito M. (2008). The search for the phonological
store: From loop to convolution. J Cogn Neurosci 20:762–778.
Buck L. B. (2000). Smell and taste: The chemical senses. In Kandel E. R.,
Schwartz J. H., and Jessell T. M. (eds.) Principles of Neural Science, 4th Edn.
New York: McGraw-Hill, pp. 625–647.
Buzsáki G. (2006). Rhythms of the Brain. New York: Oxford University Press.
Chaiken S. and Trope Y. (1999). Dual-Process Models in Social Psychology. New
York: Guilford.
Chalmers D. (1996). The Conscious Mind: In Search of a Fundamental Theory.
New York: Oxford University Press.
Christensen M. S., Lundbye-Jensen J., Geertsen S. S., Petersen T. H., Paulson
O. B., and Nielsen J. B. (2007). Premotor cortex modulates somatosensory
cortex during voluntary movements without proprioceptive feedback. Nature
Neurosci 10:417–419.
Cicerone K. D. and Tanenbaum L. N. (1997). Disturbance of social cognition
after traumatic orbitofrontal brain injury. Arch Clin Neuropsych 12:173–188.
Clark A. (2002). Is seeing all it seems? Action, reason and the grand illusion. J
Consciousness Stud 9:181–202.
Coenen A. M. L. (1998). Neuronal phenomena associated with vigilance and
consciousness: From cellular mechanisms to electroencephalographic pat-
terns. Conscious Cogn 7:42–53.
68 Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella
Cohen J. D., Dunbar K., and McClelland J. L. (1990). On the control of auto-
matic processes: A parallel distributed processing account of the Stroop
effect. Psychol Rev 97:332361.
Cooney J. W. and Gazzaniga M. S. (2003). Neurological disorders and the struc-
ture of human consciousness. Trends Cogn Sci 7:161166.
Cowan N. (1988). Evolving conceptions of memory storage, selective attention,
and their mutual constraints within the human information-processing sys-
tem. Psychol Bull 104:163191.
Crick F. (1995). The Astonishing Hypothesis: The Scientific Search for the Soul. New
York: Touchstone.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci
6:18.
Curtis C. E. and DEsposito M. (2009). The inhibition of unwanted actions.
In Morsella E., Bargh J. A. and Gollwitzer P. M. (eds.) Oxford Handbook of
Human Action. New York: Oxford University Press, pp. 7297.
Damasio A. R. (1989). Time-locked multiregional retroactivation: A systems-
level proposal for the neural substrates of recall and recognition. Cognition
33:2562.
Damasio A. R. (2010). Self Comes to Mind: Constructing the Conscious Brain. New
York: Pantheon.
Dehaene S. and Naccache L. (2001). Towards a cognitive neuroscience of con-
sciousness: Basic evidence and a workspace framework. Cognition 79:1
37.
Del Cul A., Baillet S., and Dehaene S. (2007). Brain dynamics underlying the
nonlinear threshold for access to consciousness. PLoS Biol 5:e260.
Desmurget M., Reilly K. T., Richard N., Szathmari A., Mottolese C., and Sirigu
A. (2009). Movement intention after parietal cortex stimulation in humans.
Science 324(5928):811813.
Desmurget M. and Sirigu A. (2010). A parietal-premotor network for movement
intention and motor awareness. Trends Cogn Sci 13:411419.
Di Lollo V., Enns J. T., and Rensink R. A. (2000). Competition for consciousness
among visual events: The psychophysics of reentrant visual pathways. J Exp
Psychol Gen 129:481507.
Doesburg S. M., Green J. L., McDonald J. J., and Ward L. M. (2009). Rhythms
of consciousness: Binocular rivalry reveals large-scale oscillatory network
dynamics mediating visual perception. PLoS ONE 4:114.
Duprez T. P., Serieh B. A., and Reftopoulos C. (2005). Absence of memory
dysfunction after bilateral mammillary body and mammillothalamic tract
electrode implantation: Preliminary expereince in three patients. Am J Neu-
roradiol 26:195198.
Edelman G. M. and Tononi G. (2000). A Universe of Consciousness: How Matter
Becomes Imagination. New York: Basic Books.
Eichenbaum H., Shedlack K. J., and Eckmann K. W. (1980). Thalamocor-
tical mechanisms in odor-guided behavior. Brain Behav Evolut 17:255
275.
Eidels A., Townsend J. T., and Algom D. (2010). Comparing perception of
Stroop stimuli in focused versus divided attention paradigms: evidence for
dramatic processing differences. Cognition 114:129150.
Mechanisms of consciousness: The perception-action buffer 69
Haggard P., Aschersleben G., Gehrke J., and Prinz W. (2002). Action, binding
and awareness. In Prinz W. and Hommel B. (eds.) Common Mechanisms in
Perception and Action: Attention and Performance, Vol. 19. Oxford University
Press, pp. 266285.
Hallett M. (2007). Volitional control of movement: The physiology of free will.
Clin Neurophysiol 117:11791192.
Harle E. (1861). Der Apparat des Willens [The apparatus of the will] Zeitschrift
fur Philosophie und philosophische Kritik 38:499507.
Heath M., Neely K. A., Yakimishyn J., and Binsted G. (2008). Visuomotor
memory is independent of conscious awareness of target features. Exp Brain
Res 188:517527.
Heilman K. M., Watson R. T., Valenstein E. (2003). Neglect: Clinical and
anatomic issues. In Feinberg T. E. and Farah, M .J. (eds.) Behavioral Neurol-
ogy and Neuropsychology, 2nd Edn. New York: McGraw-Hill, pp. 303311.
Henkin R. I., Levy L. M., and Lin C. S. (2000). Taste and smell phantoms
revealed by brain functional MRI (fMRI). Neuroradiology 24:106123.
Herz R. S. (2003). The effect of verbal context on olfactory perception J Exp
Psychol: Gen 132:595606.
Ho V. B., Fitz C. R., Chuang S. H., and Geyer C. A. (1993). Bilateral basal
ganglia lesions: Pediatric differential considerations. RadioGraphics 13:269
292.
Hommel B. (2009). Action control according to TEC (theory of event coding).
Psychol Res 73:512526.
Hommel B. and Elsner B. (2009). Acquisition, representation, and control of
action. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford
Handbook of Human Action. New York: Oxford University Press, pp. 371
398.
Hommel B., Musseler J., Aschersleben G., and Prinz W. (2001). The theory of
event coding: A framework for perception and action planning. Behav Brain
Sci 24:849937.
Hubbard J., Gazzaley A., and Morsella E. (2011). Traditional response interfer-
ence effects from anticipated action outcomes: A response-effect compati-
bility paradigm. Acta Psychol 138:106110.
Hull C. L. (1943). Principles of Behavior. New York: Appleton-Century.
Hummel F. and Gerloff C. (2005). Larger interregional synchrony is associated
with greater behavioral success in a complex sensory integration task in
humans. Cereb Cortex 15:670678.
Jackendoff R. S. (1990). Consciousness and the Computational Mind. Cambridge,
MA: MIT Press.
James W. (1890). Principles of Psychology. New York: Holt.
Jeannerod M. (2006). Motor Cognition: What Action Tells the Self. New York:
Oxford University Press.
Johnson H. and Haggard P. (2005). Motor awareness without perceptual aware-
ness. Neuropsychologia 43:227237.
Johnson M. R. and Johnson M. K. (2009). Toward characterizing the neural
correlates of component processes of cognition. In Roesler F., Ranganath
C., Roeder B., and Kluwe R. H. (eds.) Neuroimaging of Human Memory:
Mechanisms of consciousness: The perception-action buffer 71
Llinas R., Ribary U., Contreras D., and Pedroarena C. (1998). The neuronal
basis for consciousness. Philos T Roy Soc B 353:18411849.
Logothetis N. K. and Schall J. D. (1989). Neuronal correlates of subjective visual
perception. Science 245:761762.
Lotze R. H. (1852). Medizinische Psychologie oder Physiologie der Seele. Leipzig:
Weidmannsche Buchhandlung.
MacLeod C. M. and McDonald P. A. (2000). Interdimensional interference in
the Stroop effect: Uncovering the cognitive and neural anatomy of attention.
Trends Cogn Sci 4:383391.
Mainland J. D. and Sobel N. (2006). The sniff is part of the olfactory percept.
Chem Senses 31:181196.
Markert J. M., Hartshorn D. O., and Farhat S. M. (1993). Paroxysmal bilateral
dysomia treated by resection of the olfactory bulbs. Surg Neurol 40:160163.
Markman A. B. (1999). Knowledge Representation. Hillsdale, NJ: Lawrence Erl-
baum Associates, Publishers.
Markowitsch H. J. (1982). Thalamic mediodorsal nucleus and memory: A critical
evaluation of studies in animals and man. Neurosci Biobehav R 6:351380.
McGurk H. and MacDonald J. (1976). Hearing lips and seeing voices. Nature
264:746748.
Merker B. (2007). Consciousness without a cerebral cortex: A challenge for
neuroscience and medicine. Behav Brain Sci 30:63134.
Metcalfe J. and Mischel W. (1999). A hot/cool-system analysis of delay of grati-
fication: Dynamics of willpower. Psychol Rev 106:319.
Miller B. L. (2007). The human frontal lobes: An introduction. In Miller B. L.
and Cummings J. L. (eds.) The Human Frontal Lobes: Functions and Disorders,
2nd Edn. New York: Guilford, pp. 311.
Miller N. E. (1959). Liberalization of basic S-R concepts: Extensions to conflict
behavior, motivation, and social learning. In Koch S. (ed.) Psychology: A
Study of Science, Vol. 2. New York: McGraw-Hill, pp. 196292.
Milner B. (1966). Amnesia following operation on the temporal lobes. In Whitty
C. W. M. and Zangwill O. L. (eds.) Amnesia. London: Butterworths,
pp. 109133.
Milner A. D. and Goodale M. (1995). The Visual Brain in Action. New York:
Oxford University Press.
Mitchell A. S., Baxter M. G., and Gaffan D. (2007). Dissociable performance on
scene learning and strategy implementation after lesions to magnocellular
mediodorsal thalamic nucleus. J Neurosci 27:1188811895.
Mizobuchi M., Ito N., Tanaka C., Sako K., Sumi Y., and Sasaki, T. (1999).
Unidirectional olfactory hallucination associated with ipsilateral unruptured
intracranial aneurysm. Epilepsia 40:516519.
Molapour T., Berger C. C., and Morsella E. (2011). Did I read or did I
name? Process blindness from congruent processing outputs. Conscious
Cogn 20:17761780.
Moody T. C. (1994). Conversations with zombies. J Consciousness Stud 1:196
200.
Morsella E. (2005). The function of phenomenal states: Supramodular interac-
tion theory. Psychol Rev 112:10001021.
Mechanisms of consciousness: The perception-action buffer 73
Plailly J., Howard J. D., Gitelman D. R., and Gottfried J. A. (2008). Attention to
odor modulates thalamocortical connectivity in the human brain. J Neurosci
28:52575267.
Prinz W. (2003). How do we know about our own actions? In Maasen S., Prinz W.,
and Roth G. (eds.) Voluntary Action: Brains, Minds, and Sociality. London:
Oxford University Press, pp. 2133.
Rakison D. H. and Derringer J. L. (2008). Do infants possess an evolved spider-
detection mechanism? Cognition 107:381393.
Rizzolatti G., Sinigaglia C., and Anderson F. (2008). Mirrors in the Brain: How
Our Minds Share Actions, Emotions, and Experience. New York: Oxford Uni-
versity Press.
Roach J. (2005, June 30). Journal Ranks Top 25 Unanswered Science Questions.
National Geographic News. URL: news.nationalgeographic.com (accessed
March 6, 2013).
Roelofs A. (2010). Attention and facilitation: Converging information versus
inadvertent reading in Stroop task performance. J Exp Psychol Learn 36:411
422.
Rolls E. T., Judge S. J., and Sanghera M. (1977). Activity of neurons in the
inferotemporal cortex of the alert monkey. Brain Res 130:229238.
Rosenbaum D. A. (2002). Motor control. In Pashler H. (series ed.) Yantis S.
(vol. ed.) Stevens Handbook of Experimental Psychology: Vol. 1. Sensation and
Perception, 3rd Edn. New York: John Wiley & Sons, Inc., pp. 315339.
Rossetti Y. (2001). Implicit perception in action: Short-lived motor represen-
tation of space. In Grossenbacher P. G. (ed.) Finding Consciousness in the
Brain: A Neurocognitive Approach. Amsterdam: John Benjamins Publishing,
pp. 133181.
Schmahmann J. D. (1998). Dysmetria of thought: Clinical consequences of cere-
bellar dysfunction on cognition and affect. Trends Cogn Sci 2:362371.
Sela L., Sacher Y., Serfaty C., Yeshurun Y., Soroker N., and Sobel N. (2009).
Spared and impaired olfactory abilities after thalamic lesions. J Neurosci
29(39):1205912069.
Sergent C. and Dehaene S. (2004). Is consciousness a gradual phenomenon?
Evidence for an all-or-none bifurcation during the attentional blink. Psychol
Sci 15:720728.
Sheerer E. (1984). Motor theories of cognitive structure: A historical review.
In Prinz W. and Sanders A. F. (eds.) Cognition and Motor Processes. Berlin:
Springer-Verlag.
Shepard R. N. (1984). Ecological constraints on internal representation: Res-
onant kinematics of perceiving, imagining, thinking and dreaming. Psychol
Rev 91:417447.
Shepard R. N. and Metzler J. (1971). Mental rotation of three dimensional
objects. Science 171:701703.
Shepherd G. M. and Greer C. A. (1998). Olfactory bulb. In Shepherd G. M.
(ed.) The Synaptic Organization of the Brain, 4th Edn. New York: Oxford
University Press, pp. 159204.
Sherman S. M. and Guillery R. W. (2006). Exploring the Thalamus and Its Role in
Cortical Function. Cambridge, MA: MIT Press.
Mechanisms of consciousness: The perception-action buffer 75
Voss M. (2011). Not the mystery it used to be: Theme program: Consciousness.
APS Observer 24(6). URL: www.psychologicalscience.org/index.php/
publications/observer/2011/july-august-11/not-the-mystery-it-used-to-be
.html (accessed February 27, 2013).
Weiskrantz L. (1992). Unconscious vision: The strange phenomenon of blind-
sight. The Sciences 35:2328.
Wolford G., Miller M. B., and Gazzaniga M. S. (2004). Split decisions. In Gaz-
zaniga M. S. (ed.) The Cognitive Neurosciences III. Cambridge, MA: MIT
Press, pp. 11891199.
Zatorre R. J. and Jones-Gotman M. (1991). Human olfactory discrimination
after unilateral frontal or temporal lobectomy. Brain 114:7184.
Zeki S. and Bartels A. (1999). Toward a theory of visual consciousness. Conscious
Cogn 8:225259.
3 A biosemiotic view on consciousness derived
from system hierarchy
3.1 Biosemiotics
3.2 Consciousness
3.3 Awareness versus consciousness
3.4 From awareness to consciousness
3.5 Scale and its implications
3.6 Hierarchy and its properties
3.7 Ecosystemic inclusion through birationality
3.8 The implications of birationality
3.9 Hyperscale: Embodiment and abstraction
3.10 A hierarchical biosemiosis
3.11 Energy and awareness
3.12 Stasis neglect and habituation
3.13 A birational derivation of consciousness
3.14 Coda
3.1 Biosemiotics
Biosemiotics (from the Greek words bios, meaning "life", and semeion,
meaning "sign") is the interpretation of scientific biology through semi-
otics: the representation of natural entities and phenomena as signs and
sign processes. A sign is taken to indicate any entity or characteristic to
an interpretant, which may itself be an interpretation by an intelligent being
or another sign. De Saussure (2002) maintained that a sign is dyadic,
consisting of signifier and signified, whose elements must be com-
bined in the brain to have meaning, whereas Peirce (1931-1958) held
more generally that any sign process (semiosis) is irreducibly triadic, in
terms of representative sign, represented entity, and interpretant. Peirce
enumerated three categories of experiencing signs:
- Firstness, as a quality of feeling (referenced to an abstraction);
- Secondness, as an actuality (referenced to a correlate);
- Thirdness, as a forming of habits (referenced to an interpretant).
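Peirce's triadic sign relation, in which the interpretant may itself be a further sign, can be sketched as a recursive data structure. This is our illustration only, not part of the text: the field names and the smoke/fire example are invented for the sketch.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Sign:
    """A minimal triadic sign: representative sign, represented entity, interpretant."""
    representamen: str                # the representative sign itself
    obj: str                          # the represented entity
    interpretant: Union["Sign", str]  # an interpretation, or a further sign

def interpret(sign: Sign) -> str:
    """Follow the chain of interpretants until semiosis terminates."""
    if isinstance(sign.interpretant, Sign):
        return interpret(sign.interpretant)
    return sign.interpretant

# A chain of semiosis: smoke signifies fire, whose interpretant is itself a sign.
alarm = Sign("alarm bell", "danger", "flee")
smoke = Sign("smoke", "fire", alarm)
print(interpret(smoke))  # -> "flee"
```

The recursion in `interpretant` is the point: unlike De Saussure's two-place signifier/signified pairing, the triadic relation allows sign processes to chain indefinitely.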
78 Ron Cottam and Willy Ranson
1 Our thanks to Edwina Taborsky for this example of firstness, secondness, and thirdness.
Biosemiotics of consciousness: System hierarchy 79
3.2 Consciousness
Consciousness is notoriously difficult to define, especially as the attempt
must be made from within consciousness itself. Vimal (2009) has presented
an overview of the numerous meanings attributed in the literature to the term
consciousness, grouping them according to the two criteria of function
("easy problems", such as detection, discrimination, recognition, cogni-
tion, etc.) and experience (i.e., aspects of the "hard problem" [Chalmers
1995]). In general, the easy problems could be carried out by a suitably
programmed digital computer. However, all of the lowest-level processing
elements of a digital computer (the "gates") are prevented from inter-
acting with each other by the computer's clock signal, which is intended
to synchronize their (local) operations. This means that any gate's unde-
sirable physical characteristics are completely isolated from those of all
the others, and individual gates are only capable of reacting according to
the large-scale (global) design imposed by the computer's manufacturer
and programmer. Consequently, in its instantiation as an information
processor, a digital computer itself has no unified global character at all.2
We question, therefore, whether function is a suitable criterion for the
unified nature of consciousness, while noting that function is, in many
cases, driven by intention: itself a current content of the evolved state
of experience.
2 Local and global are words which may have very different meanings in different situations.
By local we mean: not broad or general; confined to a particular location or site; spatially
isolated. In its extreme form, local reduces to a dimensionless spatiotemporal point, or a
closed impenetrable logic system. By global we mean: all-inclusive; relating to an entire
system; spatially unrestricted. In its extreme form, global expands to simultaneously
encompass the entire Universe as nonlocality.
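The clocking argument can be made concrete with a toy synchronous circuit simulation. This is our sketch, not the authors' model: the netlist and gate functions are invented for illustration. Because every gate computes its next output only from the previous tick's snapshot, gates never interact within a tick, and the circuit's behavior is fixed entirely by the imposed design.

```python
def tick(state, netlist):
    """Synchronous update: all gates read the OLD state, then switch at once."""
    snapshot = dict(state)  # the clock isolates gates from one another
    return {gate: fn(snapshot) for gate, fn in netlist.items()}

# A three-gate ring of inverters: each NOT gate inverts its neighbour's output.
netlist = {
    "a": lambda s: 1 - s["c"],
    "b": lambda s: 1 - s["a"],
    "c": lambda s: 1 - s["b"],
}
state = {"a": 0, "b": 0, "c": 0}
for _ in range(4):
    state = tick(state, netlist)
    print(state)
```

Under the clock the ring simply alternates between all-0 and all-1, a globally deterministic pattern dictated by the netlist; any physical idiosyncrasy of an individual gate is invisible, which is the sense in which the computer has no unified global character of its own.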
Bohm: I would say that the degree of consciousness in the atomic world is very
low, at least of self-consciousness.
Weber: But it's not dead or inert. That is what you are saying.
Bohm: It has some degree of consciousness in that it responds in some way, but
it has almost no self-consciousness (the italics are Weber's).
...
Weber: ...you are saying: This is a universe that is alive (in its appropriate way) and
somehow consciousness at all the levels (the italics are Weber's).
Bohm: Yes, in a way.
5 Definitions of life are many and varied, from the philosophical to the purely biological.
Most of these are to some degree self-referential, being based on criteria which delineate
the recognizably living from the recognizably non-living (e.g., metabolism; reproduction).
The definition we will adopt here is an extension of the biosemiotic one (Sebeok 1977)
that life and semiosis are equivalent: that living entities are capable of interpreting signs,
of signaling, and of self-sustenance.
6 "The immediate present, could we seize it, would have no character but its Firstness.
Not that I mean to say that immediate consciousness (a pure fiction, by the way) would
be Firstness, but that the quality of what we are immediately conscious of, which is no
fiction, is Firstness" (Peirce 1931-1958).
lungs and fins turn into shovels. Whatever happens to be at hand is made
use of (Sigmund 1993). Consequently, although we believe that
nature tends towards the hierarchical relationships we describe, this may
not always be apparent in the results of evolution. Gilaie-Dotan et al.
(2009), for example, have demonstrated non-hierarchical functioning in
the human cortex.
9 We use the words externalist and internalist here in a general philosophical sense, indi-
cating a view from outside or a view from inside a system, respectively, and not in
relation to the positions in theory of mind of externalism or internalism, which refer to the
neural-plus-external or purely neural origin of the mind.
[Figure: a multi-scalar assembly ordered by size, with regions of partial inter-communication between the scales.]
11 The property of a fractal we call upon here is that of detailed recursive self-similarity
across endless magnifications of the internal structure of the complex inter-scalar layer.
this gain can be simply associated with scale itself, or whether there are
further properties associated with a scalar assembly, or hierarchy?12
12 The reader should note that, given the partial nature of inter-scalar and intra-scalar
communications, heterarchy is subsumed into hierarchy as we will describe it.
13 Salthe has now renamed the scale hierarchy a "composition hierarchy", and the specification
hierarchy a "subsumption hierarchy" (Salthe 2012).
14 In the literature a distinction is often made between the "scales" (in terms of physical
size) and the "levels" (in terms of function) of a hierarchical description. Although this
distinction has value for either a scale or a specification hierarchy, it disappears for a model
hierarchy, which can be successfully used to represent either structure or function.
15 Private communication.
[Figure: a model hierarchy, running from simpler models to more complicated models, separated by complex inter-model regions.]
16 N.B. For simplicity we have not explicitly taken account of subatomic levels in this
sequence.
and process closure for an individual scale resolves the "open or closed
system?" dichotomy; vagueness of the scalar representations in hyperscale
resolves the "this scale or that scale?" multiple operational partitioning
of a multi-scalar system.
We should now begin to distinguish between inanimate and animate
entities, which until now we have lumped together. The distinction from a
structural assessment is one of degree of the extent to which inter-scalar
communications are more or less complete. In a clearly inanimate entity,
for example, a crystal, the inter-scalar informational differences are so
limited that microscopic and macroscopic properties approximately coin-
cide. We say "approximately" because there are no real physical systems
where microscopic and macroscopic properties precisely coincide. Even
for crystals, the archetypical inanimate entities, minute differences in
cross-scalar measurements appear (Cottam and Saunders 1973). Organ-
isms, however, exhibit enormous inter-scalar informational divergences,
making first-person viewpoints dominate third-person ones and making
the definition of properties elusive. The reader should note that we are
here in no way belittling the importance of other observable character-
istics of living systems, merely relating their implications to inter-scalar
differences.
The most important aspect of hyperscale to an organism is the way in
which it provides a means for the organism to exhibit itself to the outside
world; it may set up a facade which corresponds more or less to different
high- or low-scalar properties. A cell, for example, tries to close itself
off from the outside world by a lipid barrier, while permitting specific
survival-related inward and outward communications.
19 In this work we refer to "logic" as a set of rules which may/must be followed, and "rationality"
as the path of signs towards a desired end which is followed using logic.
20 The De Broglie-Bohm interpretation of quantum theory (Bohm and Hiley 1993) has
been described in terms of "hidden variables" which must be added in to standard quantum
theory to make it nonlocally self-consistent and complete.
21 Following David Bohm's nomenclature: "implicate" for hidden order and "explicate" for explicit order.
23 Note that we are here only indirectly referring to Einstein's General Relativity, as one
of a large family of different relativistic phenomena.
Cottam et al. 2004a). "Simple" and "complex" have a wide range of related
implications, and any attempt to describe them as being only disjoint
categories is doomed to failure. In this context a specific semiotic sign
cannot be in any way independent of others: its implications are subject
to the same birational local/global control as are the scalar levels.
Hoffmeyer and Emmeche (1991) have maintained that life necessitates
two types of coding: digital, related to the genetic code of DNA, and
analogue, related to the semiotic structures of metabolism.
more or less exist; some do not. But which are which? Simply, we can
take partial abstractions which are the result of embodiment as being
more "real" than those which are not. Existence is not absolute: existence
depends on the grounds from which it is derived, and on the domain in
which it exists!
The previous discussion is not just for its own sake: we must decide
on the reality of a system's unification, and on its character. We are
consequently led to accept the reality of unification and its character. It
is, however, impossible to formulate an accurate representation of the
internal scales of an organism from any one of its scalar levels, or from
outside. But we can create from the outside an approximate model of the
organism's various scales which will be sufficient for most purposes. We
should remember that any internal details which are invisible from our
outside platform will only cause a problem if they contradict the overall
picture we create. We will refer to this external approximate model of the
internal scales of an organism as a hyperscalar representation. There does
not appear to be any reason why such an external model cannot be
created inside the brain of the organism.
If we ourselves form a representation of any entity, then it is by its very
nature a more or less complete third-person hyperscalar image. If we do
so for an organism, we risk great inaccuracy, as the cross-scalar informa-
tion transfers are restrictedly structural and more novel in character. If
an organism forms such an external viewpoint of itself and its environ-
ment in its own brain, then this corresponds to the first-person perspec-
tive of its mind. We form a hyperscalar perspective of ourselves and the
world about us, which we construct from the entire history of our individ-
ual and social existences, including the "facts" of our believed reality,
numerous apparently consistent but insufficiently investigated logical
suppositions, and as-yet untested or normally abandoned hypothetical
models which serve to fill in otherwise inconvenient or glaringly obvious
omissions in its landscape: for example, the convenient supposition of
a flat Earth.
So it is not only that your brain doesn't react to things that do not
change; it also finally removes from consciousness temporally regular
features of a scene. This combination of neglect of stasis and habituation
is of vital importance in reducing the amount of incoming information
that the brain must do something with. It is not that static features
or regularities do not arrive at the retina or brain; it is simply that their
processing is curtailed early on. This is somewhat analogous to the tech-
nique in electronics of setting up a circuit by first applying to it a static
bias voltage, thus fixing the region within which the time-varying signal
voltage can be successfully processed.28
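Following the bias-voltage analogy above, stasis neglect can be sketched as a running baseline that adapts away static input, passing only what changes. This is our illustration of the analogy, not a model from the text; the adaptation rate and the signal values are arbitrary assumptions.

```python
def stasis_filter(samples, adapt=0.5):
    """Return the 'novelty' left once a running baseline adapts away stasis."""
    baseline, novelty = 0.0, []
    for x in samples:
        novelty.append(x - baseline)        # only departures from stasis pass
        baseline += adapt * (x - baseline)  # baseline habituates toward the input
    return novelty

signal = [5, 5, 5, 5, 9, 5, 5]              # a static level with one change
print([round(n, 2) for n in stasis_filter(signal)])
```

The response to the unchanging level decays toward zero, while the single change at the fifth sample produces a large response: static features still "arrive", but their processing is curtailed.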
In noticing things that change we are reacting to their signs; initially, in
our electric train example, we react to the firstness of something which
flashes by the window. This sign may be equated to "?": a questioning.
After we notice that the same effect occurs again, it seems that there is an
outside effect, but it is not immediately clear what. The sign throws us
into secondness; it throws us back on our history and experience to attempt
to abductively correlate the effect with some prior knowledge or phe-
nomenon. With further repetitions of the effect we will (probably) focus
on the train's electric power, the necessity to get that power from some-
where, the need for cables to supply the power, and the need for pylons
to support the cables in a regular array alongside the track. We sink
into the thirdness of habit, of no longer being surprised by the pylons'
appearance. But now the sign drives us towards acceptance, towards no
longer being startled by the regular effects, because each appearance is
predictable on the basis of previous appearances: the sign slowly decays
and finally disappears through habituation.
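The sign's slow decay can be caricatured numerically: each predictable repetition reduces its salience until it drops below the threshold of notice. This sketch is ours; the decay factor and threshold are arbitrary assumptions, not quantities from the text.

```python
def habituate(salience, decay=0.5, threshold=0.1):
    """Count repetitions until a sign's salience fades below notice."""
    repetitions = 0
    while salience >= threshold:
        repetitions += 1
        salience *= decay  # each predictable appearance surprises less
    return repetitions

print(habituate(1.0))  # appearances before the pylons' sign disappears
```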
It is not only our brains which carry out this selective process. As we
described earlier, most of the information which crosses between the
scales of a natural hierarchy is statically structural and not novel. It is
information which supports the hierarchy's status quo. Particularly as a
result of stasis neglect, information processing across the scales of any
natural hierarchy is greatly reduced. But it is in the brain that we can find
the most important consequence of habituation.
28 Note that this bias voltage can be effectively zero, as in the quasi-linear setup of a class
B amplifier.
[Fig. 3.4 Unifications of the first and second hyperscales. (a) The unification of hyperscale. (b) The second hyperscale derived from the ecosystemic complex regions.]
and produce system unification. We have pointed out that this unifica-
tion really exists, in the form of hyperscale (Fig. 3.4a). Complex regions
develop between adjacent scalar levels, and these regions are also corre-
lated, to produce a second hierarchy, whose properties are complemen-
tary to those of the first one. Consequently, the complex-region hierarchy
is also associated with a hyperscalar unification, which is complementary
to the first one (see Fig. 3.4b). In common with the individual scales,
these two sub-hierarchies are partially isolated, partially in communica-
tion, and are thus endowed with partial autonomy. This biosemiotic dual-
ity of character, an extreme generalization of Hoffmeyer and Emmeche's
(1991) "dual-coding", is a major departure from conventional monora-
tional views of an organism, and it applies equally well both to individual
scales and their ecosystems and to their hyperscalar unifications. A con-
sequence is that some kind of evolution29 will take place at all scalar levels
of an organism, from its macro relationships with the natural world down
to interactions of the components of its cells.
Quantum systems exhibit multiple context-dependent discrete lev-
els of existence, from lower to higher energies. It is more than coin-
cidence that this structure, with its quantally defined discrete mod-
els separated by unformalizable intermediate regions, mirrors that of
a natural ecosystemic birational hierarchy. The general hierarchy we
have described appears to be generic for all other hierarchies, including
quantum mechanical energy structures, which are its low-dimensional
derivatives. In its general form, a natural ecosystemic birational hierarchy
provides a framework within which life and consciousness can flourish
29 We consider that the formalization of Evolution into the three Darwinian operators of
mutation, reproduction, and selection is a late-stage crystallization of an earlier more
fluid process, closer to the evolution of chemical reactions.
[Fig. 3.5 Mutual observation between the model hyperscale and its ecosystem hyperscale.]
3.14 Coda
We strongly believe that high levels of conceptual consciousness are
impossible without embodiment, and that therefore any idea that con-
sciousness could transcend the physicality of life is mistaken. The deriva-
tion of consciousness from lower, more localized forms of awareness poses
a more pragmatic question: why, from a high level of consciousness,
are we apparently unaware of these lower awarenesses? The adoption
of a carrier-plus-signal description for the mutual observation of the
two hyperscales suggests that these lower-level awarenesses may well
be present, but that they may not occupy the center of attention, and
may only be recognizable as lower-level neural noise. Striking support
for neural birationality comes from the degree to which the two hemi-
spheres of the brain apparently concentrate on different styles of process-
ing (Tommasi 2009; Glickstein and Berlucchi 2010): "linear, sequential,
logical, symbolic" for the left hemisphere and "holistic, random, intuitive,
concrete, nonverbal" for the right (Rock 2004, p. 124), corresponding
to the dual rationalities we have described, and to the primitives of logic
and emotion, respectively (Cottam et al. 2008b).
The two hemispheres of the brain are normally connected together
by the largest nerve tract in the brain: the corpus callosum. Only bilat-
eral and extensive damage of the cerebral hemispheres provokes stupor
or coma. Unilateral lesions cannot by themselves cause coma
(Piets 1998). Studies carried out in the 1940s following sectioning of
the corpus callosum in human patients (Akelaitis 1941) as a treatment
for intractable epilepsy (Akelaitis et al. 1942) intriguingly indicated that
this massive neural intervention resulted in no definite behavioral deficits.
Later experiments carried out by Sperry et al. (1969) provided even more
startling results: the split-brain subjects of neural bifurcation provided
direct verbal confirmation that the left and right hemispheres afford separate domains of consciousness, apparently supporting the dual-hyperscalar
argument we have presented.
REFERENCES
Adams F. C. and Laughlin G. (1997). A dying universe: The long-term fate and evolution of astrophysical objects. Rev Mod Phys 69(2):337–372.
Akelaitis A. J. (1941). Psychobiological studies following section of the corpus callosum: A preliminary report. Am J Psychiat 97:1147–1157.
Akelaitis A. J., Risteen W. A., Herren R. Y., and van Wagenen W. P. (1942). Studies on the corpus callosum. III. A contribution to the study of dyspraxia in epileptics following partial and complete section of the corpus callosum. Arch Neurol Psychiatr 47:971–1008.
Antoniou I. (1995). Extension of the conventional quantum theory and logic for large systems. Presented at the International Conference Einstein meets Magritte: An Interdisciplinary Reflection on Science, Nature, Human Action and Society, Brussels, Belgium, May 29–June 3, 1995.
Aristotle (2012). Physics. Trans. Hardie R. P. and Gaye R. K. http://etext.library.adelaide.edu.au/a/aristotle/physics/ (accessed February 27, 2013).
Artmann S. (2007). Computing codes versus interpreting life. In Barbieri M. (ed.) Introduction to Biosemiotics: The New Biological Synthesis. Dordrecht: Springer.
Barendregt M. and van Rappard J. F. H. (2004). Reductionism revisited. Theory and Psychology 14(4):453–474.
Barry R. J. (2009). Habituation of the orienting reflex and the development of preliminary process theory. Neurobiol Learn Mem 92:235–242.
Bob P. (2011). Brain, Mind and Consciousness: Advances in Neuroscience Research. New York: Springer.
Bohm D. and Hiley B. J. (1993). The Undivided Universe: An Ontological Interpretation of Quantum Theory. London: Routledge.
Bohr N. H. D. (1998). Causality and complementarity: Supplementary papers. In Faye J. and Folse H. J. (eds.) The Philosophical Writings of Niels Bohr, Vol. 4. Woodbridge, CT: Oxbow Press.
Boltzmann L. (1896). Lectures on Gas Theory. Trans. Brush S. G. Berkeley, CA: University of California Press.
Brenner J. E. (2008). Logic in Reality. Berlin: Springer.
Brenner J. E. (2010). The philosophical logic of Stéphane Lupasco. Logic and Logical Philosophy 19:243–284.
Brier S. (2006). The paradigm of Peircean biosemiotics. Signs 2:20–81.
Busch P., Heinonen T., and Lahti P. (2007). Heisenberg's uncertainty principle. Phys Rep 452:155–176.
Chalmers D. J. (1995). Facing up to the problem of consciousness. J Consciousness Stud 2(3):200–219.
Chandler D. (2012). Semiotics for Beginners. URL: www.aber.ac.uk/media/Documents/S4B/semiotic.html (accessed February 27, 2013).
Chapman M. (1999). Constructivism and the problem of reality. J Appl Dev Psychol 20(1):31–43.
Clausius R. (1865). The Mechanical Theory of Heat. Trans. Browne W. R. Charleston, SC: BiblioBazaar.
Collier J. D. (1999). Autonomy in anticipatory systems: Significance for functionality, intentionality and meaning. In Dubois D. M. (ed.) Computing Anticipatory Systems: CASYS'98, 2nd International Conference, AIP
Melzack R. and Wall P. D. (2008). The Challenge of Pain. London: Penguin Books.
Merker B. (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behav Brain Sci 30:63–81.
Merrell F. (2003). Sensing Corporeally: Toward a Posthuman Understanding. Toronto: University of Toronto Press.
Metzinger T. (2004). The subjectivity of subjective experience: A representationalist analysis of the first-person perspective. Networks 3–4:33–64.
Morin A. (2006). Levels of consciousness and self-awareness: A comparison and integration of various neurocognitive views. Conscious Cogn 15(2):358–371.
Morris C. W. (1971). Writings on the General Theory of Signs. Part One: Foundations of the Theory of Signs. The Hague: Mouton.
Nagel T. (1986). The View from Nowhere. New York: Oxford University Press.
Nakayama K. (1985). Biological image motion processing: A review. Vision Res 25(5):625–660.
Oppenheim P. and Putnam H. (1958). Unity of science as a working hypothesis. In Feigl H., Scriven M., and Maxwell G. (eds.) Concepts, Theories and the Mind-Body Problem. Minneapolis: University of Minnesota Press, pp. 3–36.
Pauli W. and Jung C. G. (1955). The Interpretation of Nature and the Psyche. New York: Pantheon Books.
Pearson C. (1999). The semiotic paradigm. Presented at the 7th International Congress of the International Association of Semiotic Studies, Dresden, Germany, October 6–11, 1999.
Peirce C. S. (1931–1958). Collected Papers of Charles Sanders Peirce. Hartshorne C. and Weiss P. (eds.) Vols. 1–6; Burks A. (ed.) Vols. 7–8. Cambridge, MA: Belknap Press.
Piets C. (1998). Anatomical substrates of consciousness. Eur J Anaesthesiol 15:4–5.
Pirsig R. (1974). Zen and the Art of Motorcycle Maintenance. New York: William Morrow.
Polkinghorne J. (2005). Exploring Reality. New Haven, CT: Yale University Press.
Popper K. R. (1990). A World of Propensities. Bristol: Thoemmes.
Ravasz E., Somera A. L., Mongru D. A., Oltvai Z. N., and Barabási A.-L. (2002). Hierarchical organization of modularity in metabolic networks. Science 297:1551–1555.
Rock A. (2004). The Mind at Night: The New Science of How and Why We Dream. New York: Basic Books.
Rosen R. (1991). Life Itself. New York: Columbia University Press.
Ruiz-Mirazo K. and Moreno A. (2004). Basic autonomy as a fundamental step in the synthesis of life. Artificial Life 10(3):235–259.
Salthe S. N. (1985). Evolving Hierarchical Systems. New York: Columbia University Press.
Salthe S. N. (2012). Hierarchical structures. Axiomathes 22. doi: 10.1007/s10516-012-9185-0.
Schopenhauer A. (1818). The World as Will and Representation. Trans. Payne E. F. J. Indian Hills, CO: The Falcon's Wing Press, 1958.
Biosemiotics of consciousness: System hierarchy 111
West G. B., Brown J. H., and Enquist B. J. (2000). The origin of universal scaling laws in biology. In Brown J. H. and West G. B. (eds.) Scaling in Biology. Oxford: Oxford University Press, p. 87.
Wiener N. (1954). The Human Use of Human Beings: Cybernetics and Society. Boston, MA: Houghton Mifflin.
Wilby J. (1994). A critique of hierarchy theory. Syst Practice 7(6):653–670.
4 A conceptual framework embedding
conscious experience in physical processes
Wolfgang Baer
4.1 Introduction
Traditional neuroscience assumes that consciousness is a phenomenon
that can be explained within the context of classical physics, chemistry,
and biology. This approach relates conscious processes to the activity
of a brain, composed of particles and fields evolving within a space-
time continuum. However, logical inconsistencies emphasized by the "explanatory gap" (Levine 1983) and the "hard problem" of consciousness (Chalmers 1997) suggest that no system as currently defined by
classic physics could explain how physical activities in the brain can pro-
duce conscious sensations located at large distances from that brain.
Though physiological investigations highlight important data process-
ing paths and provide useful medical advances, the hard problem of
consciousness is not to be answered by detailed knowledge of the prop-
erties of biochemical structures. Rather, the conceptual foundations of
our conscious experience need to be re-examined in order to determine
the adequacy of our physical theories to incorporate consciousness and
to suggest new ideas that might be required.
114 Wolfgang Baer
1998; Bieberich 2000; Hagan 2002) are examples of this research direc-
tion.
The very likely possibility that the brain employs quantum effects, and
operates as a quantum computer, does little to explain the hard prob-
lem of consciousness, because quantum theory is itself an ontological
mystery that has defied our understanding since its inception. Though
successful as a calculation tool, quantum theory itself does not provide
a coherent interpretation of what its symbols, such as the wave function
Ψ, actually mean, let alone how consciousness could arise from them.
A comprehensive review of interpretations (Blood 2009) shows that no
interpretation is fully adequate. The three most popular ones are:
The Copenhagen probability interpretation from Max Born
(Faye 2008): The wave function completely describes the physical system. The wave function squared is interpreted as a probability for getting a possible measurement result. The
probability is spread out in space but collapses instantaneously
at the point of measurement to the one result observed.
The Pilot Wave interpretation from David Bohm (Goldstein
2012): The wave function is like a message to the pilot of a
ship, that is, of a particle; the message accompanies the actual particle and ferrets out the best path for it to take. Since real particles are only guided by the pilot waves, no collapse to the actual observed reality is necessary.
The Many Worlds interpretation from Hugh Everett (Everett
1983; Mensky 2006): The wave function squared is a proba-
bility, but rather than collapsing into a single observed result, all possibilities are real, since all realities exist in parallel worlds.
I would also like to mention one additional interpretation by Alfred
Landé (1973), which proposes a tangible physical world of particles that can only change their momentum, angular momentum, and energy by discrete multiples of Planck's constant, and derives the probabilities of such changes from the shape of the gravito-electric object in question.
This eliminates probabilities and maintains a belief in a tangible inde-
pendent reality.
Despite interpretation difficulties, quantum theory does however pro-
vide a significant step toward an explanation of conscious phenomena by
admitting a necessary role for the observer and his extensions through
measuring instruments. The question "How does a classic object generate consciousness?" is unanswerable in classical physics, because classical physics is based upon a naïve realist philosophy that simply
assumes entities are what they appear to be. Quantum theory, on the
Framework embedding consciousness in physics 117
a classic brain that has been attached as an ad hoc and external non-
quantum element in order to make the theory useful. This is not to say
that the analysis of the measurement process or the construction of mea-
suring instruments does not involve physics and does not represent a
valuable body of knowledge. Similarly, the analysis of the brain or the
reverse engineering performed by neurophysiologists and psychologists
also represents a valuable body of knowledge. However, these investi-
gations address the easy problem of consciousness because they look
at the brain from an external third-person perspective, which does not
answer the question of how biochemical activity becomes conscious expe-
riences, or how possibility waves become observables in our externalized
measuring instruments.
Since neither classic nor quantum physics adequately addresses the consciousness phenomena, we will introduce the third approach men-
tioned above. This approach follows Bohm by assuming there is some
ontological explanation underlying the Copenhagen probabilistic inter-
pretation of quantum theory and advances some of his major ideas. These
include:
1. Thought is a cyclic process connecting memory with its environment
(Bohm 1983, p. 59),
2. The perception of space is derived from a physical plenum that is the
ground for the existence of everything (Bohm 1983, p. 192),
3. The plenum can be visualized as a field of space cells each modeled
as a clock (Bohm 1983, p. 96),
4. An additional quantum force directs the motion of physical particles
(Bohm 1983, p. 65).
It is generally accepted that Bohm's hidden-variables model underlying quantum theory was disproved by experiments testing Bell's Theorem
(Aspect 1982). In our opinion these experiments are flawed. They may
have shown that physical reality is not composed of independent objects
but do not exclude independent cognitive processes as building blocks
of a cognitive universe. Our goal is to provide a simple mathematical
description of a conscious universe that can be graphically presented as
a set of interacting processing loops of which we are one. This approach
begins with a review of what we do when achieving conscious awareness
of our bodies and its environment. The key concept is the recognition
that our perceived environment, including space and its objective con-
tent, is the display node happening inside our own processing loop. Such
a loop is conceived as a closed cycle in time, with time defined as the
name of the state of a system. The concepts of processes and events will
replace the concepts of fundamental particles as the building blocks of the
Universe. Interacting processing loops will contain classic entities (mass, charge, and their fields in space-time) and provide a new basis of physics
and, by extension, a new basis for the host of neuroscience-related disci-
plines.
The implication of this new direction is that quantum theory describes
a linear approximation to the activities performed by a cognitive being.
The solidity of space-time as the a priori background to quantum oper-
ations will be identified with the permanence of memory cells within
which processing activities occur. The possibility waves of quantum the-
ory will be identified with the data content of such permanent memory
cells, which are themselves stable repeating processing cycles. Newtonian
physics of the classical world is conceived as the physics of observables, while
quantum theory describes the physics of our knowledge processing capacity
within which observables appear.
The idea that the Universe is conscious and can be divided into indi-
vidual cognitive parts has been proposed by many authors, representing
a Panpsychist philosophical viewpoint (Nagel 1979, p. 181). The fact
that it can be modeled as a processing machine has also been advanced
as a logical extension of that idea (Fredkin 1982; Kafatos and Nadeau
1990; Schmidhuber 1990; Svozil 2003). Both religious and philosophi-
cal traditions have long identified the physical world as a manifestation
of the consciousness of a god or gods. The idea was also adopted in
Niels Bohr's contention that measurement creates electrons (McEvoy 2001) or John Archibald Wheeler's conclusion that the universe measures itself (Wheeler 1983). The final jump, connecting our consciousness to
its manifestation as our physical world, is addressed in this chapter. A dis-
covery of the mechanism by which the universe is conscious answers the
question of how a piece of the Universe, that is, our brain, is conscious.
Such an advance in our understanding of the physical world is therefore
equivalent to an advance in our understanding of consciousness.
[Fig. 4.1: a first-person cognitive loop in which external and internal Measurement, m( ), and eXplanation, x( ), functions surround a description of naïve physical reality; the upper section depicts the first-person's sensations] and a naïve physical reality model that explains those sensations in the lower section. A much more realistic version of the first-person visual view was originally drawn by Ernst Mach (1867), who promoted the idea that all theory of reality should be based upon sensory experience.
For this reason we have called the man sitting in his armchair Professor
Mach.
Mach's ideas greatly influenced the Vienna positivists (Carnap 2000), who believed there should be a clear distinction between an observational language, describing what can be seen, and a theoretical language, describing the causal reality generating those appearances. To the extent
that pictures are a form of language this distinction is reflected in the
difference between the upper and lower portion of Fig. 4.1. In the upper
portion of Fig. 4.1 the desert, sky, armchair, body, and apple describe
optical sensations experienced by the first-person observer with what
would be called observational language by Positivists. In the lower por-
tion the desert, sky, armchair, body of, and apple being held by Mach
describe what are believed to be real physical objects and would be drawn
in theoretical language by positivists.
the explanatory role in our cognitive process. Both Mach's NCC as well as our own first-person NCC incorporate our belief in an external world,
and this incorporated belief generates our sensation of a whole world,
including the small parts that represent our observation of both our
brains.
Once we understand the architecture of a cognitive cycle and the role
a reality model plays in it we can turn our attention to the question of
accuracy and functionality of the model we have chosen to believe in. The
question before us is whether our incorporation of a naïve reality belief
is adequate to explain consciousness and, if not, what incorporation of a
new belief might be proposed to replace it.
Though the naïve reality model is efficient and useful for individuals who are concerned with navigating their bodies, brains, and psyches through everyday challenges, it does little to help those who wish to understand how these entities actually work and in what context these everyday challenges are embedded. Mach's brain, seen by a first-person neurophysiologist, is only the observational display resulting from a measurement
made by the neurophysiologist on his own underlying reality belief. The
brain he can observe does not generate sensations any more than one
image on a television screen generates its neighboring image. Both are
the result of a generation process that occurs outside the observables
displayed. It is the process and the mechanism behind the appearance
of the observable brain that explains the appearance not the appearance
itself. Mach's brain, defined as a classic biological object by naïve realists and described in the lower portion of Fig. 4.1, is likewise only the first-person's internal mechanism that generates the observable brain we
see, not the actual reality. This generation mechanism is more analogous
to the refresh memory of a television system. The ultimate cause of dis-
played images lies outside the television system boundary, but what is
seen is derived from the content of the refresh memory inside that sys-
tem. This generation mechanism would therefore incorporate our idea
of what Mach's brain is like, not be the actual mechanism that generates Mach's thoughts, which is outside the boundaries of our first-person
loop.
To understand this cognitive generation process it will be necessary
to replace the naïve-reality model of physical reality with a generalized
symbolic version and separate the sensory modalities, normally fused
in everyday operations, into distinct display channels. This will allow
us to examine how observational display channels influence each other
through our physical reality model and how the visualization of such a
model, rather than the model itself, is often falsely taken to be explanation
for sensations.
[Fig. 4.2: the inner cognitive cycle, with measurement m( ) and explanation x( ) functions linking the observational displays to the reality model.]
The processing involving the two observable modalities and the gen-
eralized model can be described as follows. Starting with an optical
sensation at the top, the first-person calculates the change in space channel (ψ3) in our simulated retina. The simulated retina is shown as a rectangle with one side in his mental processing mechanism and the other side in his model of physical reality. This information is combined with the content of the other sensor arrays to produce an updated array output in the reality model function. This is formally indicated by the equation

(ψ′1, ψ′2, ψ′3, . . .) = T(ψ1, ψ2, ψ3, . . . , Δt, t). (4.1)
Further details of this function will be presented as the discussion proceeds. For now we note that the expected touch array content ψ2 has been updated to ψ′2, which is measured to produce an updated expected touch display. This display serves to provide the first-person with the
feeling of knowing where entities are. The expected-touch-sensations
are then explained by the configuration of simulated touch sensors in the
model. These confirm the solid outlines of model objects that could be
measured by simulated optic sensors and processed back into the optic
display with which we started. During normal operation the expected-
touch-sensation is quickly superimposed with the optical sensation to
produce a feeling of solidity and comfort. When they do not match a
feeling of dizziness is felt. This feeling of discomfort is evidence of the
continued calculations carried out by the first-person loop in its attempt
to establish stability and equilibrium.
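As an illustration only, the cycle just described (optical sensation, reality-model update via Eq. (4.1), expected-touch display) can be sketched in a few lines of code. The array sizes, the relaxation rule inside T, and the display function are hypothetical placeholders chosen to make the control flow concrete, not Baer's actual transforms.

```python
# Toy sketch of one trip around the cognitive cycle of Eq. (4.1).
# All function bodies below are hypothetical illustrations.
def T(optic, touch, reality, dt, t):
    # Relax the stored reality model toward the current optic array ...
    new_reality = [r + dt * (o - r) for o, r in zip(optic, reality)]
    # ... and derive the expected-touch array from the updated model.
    new_touch = [1.0 if r > 0.5 else 0.0 for r in new_reality]
    return optic, new_touch, new_reality

def measure(array):
    # M(): render an array as an observable display (here, a string).
    return "".join("#" if v > 0.5 else "." for v in array)

optic = [0.0, 1.0, 1.0, 0.0]      # psi_1: current optical sensation
touch = [0.0, 0.0, 0.0, 0.0]      # psi_2: expected-touch array
reality = [0.0, 0.0, 0.0, 0.0]    # psi_3: reality-model contents
for t in range(5):                 # five trips around the loop
    optic, touch, reality = T(optic, touch, reality, dt=0.5, t=t)
print(measure(touch))              # expected-touch display -> ".##."
```

After a few cycles the expected-touch display comes to agree with the optical sensation, the toy analogue of the "feeling of solidity and comfort" described above; a mismatch would persist as the continued recalculation the text associates with dizziness.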
Optical and expected-touch modalities were used in the last paragraph in order to familiarize the reader with examples of how
sensations are processed around the cognitive cycle architecture. How-
ever, looking at an object and generating a co-located touch sensation
does not define one's stake-your-life-on-it reality. We may see an apple
and document our belief that it is real by generating an expected-touch-
sensation, but, before finalizing this conclusion, incorporating additional
information is appropriate. Was the optical sensation a reflection in a
mirror? The sensation of the moon might require observation of its orbit.
For a bird the motion of passing wind or depression of branches it lands
on may be adequate to verify its objective actuality. The exact informa-
tion is not important to our discussion. What is important is that one
registers the result of reality testing by generating a sensation that visualizes one's true beliefs. An example of a much-held belief has been added
as a third sensation category incorporated in the inner cognitive cycle
shown in Fig. 4.2. The quad circle and square are designed to look like
a center-of-mass/charge icon. It is intended to document the belief that
Fig. 4.3 Architecture of a human thought process that creates the feeling of permanent objects in our environment. [Diagram: physical change converted to observable sensation, and observable sensation converted to physical change.]
branch. We have designated the L() and P() boxes with symbols differing from M() and X() in order to emphasize their role in establishing and recalling the permanent entity that changes, in contrast to the change
itself. The dual-cognitive cycle architecture connecting change with the
entity changing can be applied to many situations and used to build
hierarchies of change within change within change and so on.
The arcs show the transport of entities from one process to the next
and have no further processing significance.
The model of the human thought-process contains several symbols
that are defined as follows. A is a description of what models the real apple while ΔA is a description of the change in A. I will use bold
underlined first capital letters as symbols-of-physical-reality that refer
to descriptions of entities-in-themselves. Lower case letters will refer to
symbols-of-sensations or observable descriptions in the Positivist tra-
dition discussed earlier. Hence Fig. 4.3 shows the optical apple sensation, referenced in English as "apple" and contracted to a mathematical symbol Δa. This sensation is connected to a change ΔA, while the apple reality belief sensation, referenced as the apple's mass-charge and contracted to a, is connected to the real Apple, A,
doing the changing. The words "real apple," or the capitalized name "Apple," refer to entities outside the cognitive loop, which can never be
experienced directly and may or may not be correctly modeled by the
vectors A.
The logic of the calculation in the cognitive loop is that the apple
sensation propagates through a series of transformations, X(), backwards
through a causal chain until it is finally explained as a change in a model
of a real external entity. At some point inside the T() transform, A + ΔA exists as a combined system and then separates to release ΔA.
This change is propagated through the model and to the simulated first-
person optical detector array where it is processed by the M() function
to produce the report of a change, Δa, which is displayed as the first-person's optical apple sensation. The inner circle represents a processing
path that handles the change in the entity itself, while the outer processing
cycle maintains a fixed and permanent belief in the entity itself. If no
change happens, the feeling of the permanent structure persists in the
first-person.
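The inner/outer logic just described can be caricatured as two nested cycles: the inner one shuttles a change between explanation X() and measurement M(), while the outer one keeps refreshing the permanent belief through P() and L(). The function names follow the chapter's labels, but their bodies below are hypothetical stand-ins chosen only to make the control flow concrete.

```python
# Toy rendering of the dual-cycle architecture of Fig. 4.3.
A = {"mass": 1.0, "charge": 1.0}   # permanent reality belief (outer loop)

def X(sensation):                   # eXplain: sensation -> change dA
    return {"dA": sensation}

def M(dA):                          # Measure: change dA -> sensation da
    return dA["dA"]

def P(entity):                      # Project: entity -> permanence feeling
    return f"solid object of mass {entity['mass']}"

def L(feeling, entity):             # Learn: refresh the stored belief
    return dict(entity)             # unchanged if nothing new happened

sensation = 0.3
for _ in range(3):                  # three cognitive cycles
    dA = X(sensation)               # inner loop: explain the change ...
    sensation = M(dA)               # ... then measure it back out
    A = L(P(A), A)                  # outer loop: refresh the permanence
print(sensation, P(A))
```

With no new change arriving, the inner loop simply holds the sensation while the outer loop re-creates the same permanence feeling on every pass, mirroring the text's point that the feeling of a permanent structure persists when nothing changes.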
The A is created as a reality belief during a creation-learning process
L() and similar symbols populate the physical reality model as a set of
permanent structures used to explain sensations. These structures were
created as an explanation of the permanent reality feeling symbolically represented by the mass-charge icon. This feeling is generated by the projection function P() from A. Thus the outer cycle constantly refreshes
The groundwork for such a substitution has already been laid. Figure
4.3 shows the architecture of the human thought-process required to cre-
ate the feeling of reality we experience when looking at everyday objects.
This architecture shows two interacting loops. In the inner loop an apple sensation is transformed into a change ΔA that interacts with an entity A being changed. This entity in turn determines the feeling of solid reality
associated with a in the outer loop. Nothing has been said about the size
of this entity. It is used as an example of any entity being accommodated.
As presented, A refers to the entire macroscopic body of an apple and ΔA to the cumulative changes occurring on its surface required to emit light rays. We are talking on the order of Avogadro's number (6.02 × 10²³) of individual quantum actions. This is nothing close to the quantum limit.
In this domain one would expect classic physics to apply. However, we
are not presenting the physics of observables but rather the physics of the
processing system within which observables occur. That is, the physics of
a cognitive being reduced to the essential form of a cognitive loop. The
loop does quantum theory.
A single cycle of such a loop processes the physical change of an entity
we call the Brain from the past through a display of sensory experience
called now which then influences the changes in the Brain at the future
side of the time interval circumscribed by the cycle. A single closed
cycle inner loop as shown in Fig. 4.3 simply holds the change as a static
observable experience, that is, the recall of a memory appears as a thing
but is a stable activity. The general architecture describes what you, the reader, conceived as a processing loop, do. The formation, changes,
and destruction of processing loops forms a new vision of reality as inter-
acting cognitive loops. The development of the physics describing such a
new vision is in its infancy. What can be said at this juncture is that in the
limit of small changes which do not destroy the entities being changed
the theory of cognitive loops will converge to the theory of quantum
mechanics. This approximation will be discussed in the next section.
emitting a photon and returns to its ground state.
the concave mirror and bounces back toward the Atom and the process
repeats. The inner cycle of Fig. 4.3 has now been completed. During
each cycle a change is processed from a mass-charge separation to a photon and back again. If we identify the mass-charge separation as the change and a passing photon with an observable sensation, an atom emitting and absorbing a
photon would be an extremely simple cognitive system that, if completely
isolated, would maintain the single experience of a light flash forever.
This simple case shows how the architecture of quantum theory can
be used to explain consciousness. The conscious system described by
quantum theory is visualized as a pattern of observables explained as
a mass-charge separation structure which acts as the content of a permanent Hilbert (that is, memory) space (see NCCs in Fig. 4.2). The
mass-charge structure emits gravito-electric influence patterns which
are processed through interconnecting logic gates with which a quan-
tum physical reality model is implemented to produce influences that
determine new mass-charge separation patterns. The action required to
produce the separation patterns are measured as observable sensations.
Some of these sensations are derived from and are used to control the
mass-charge separation patterns not in an internal Hilbert space defining
memory but rather an external Hilbert space defining the sensor arrays
that interact with the rest of physical reality.
The qualitative descriptions provided above do not prove that quantum theory describes a cognitive loop containing consciousness.
However, the plausibility of this hypothesis is increased by providing a
detailed mapping between the nomenclature of quantum theory and the
general description of operations and functions of a cognitive loop. To
do so requires us to formalize an assumption about the nature of physical
reality. Let us assume for the moment that A models a physical entity that
is composed of a mass field, m(x), and a charge field, ch(x), spread out in
space. Here, x, is the name of a space cell in which the mass or charge dis-
tributions are located. Furthermore the mass projects a gravito-inertial
influence field while the charge projects an electromagnetic field. The
generally accepted functions relating masses to their influence fields are
Einsteins general relativity equations and those relating charges to their
influences are Maxwells equations. These influence fields apply physical
forces so that each mass-charge point particle feels the combined forces
from all other such entities. Since the gravito-inertial forces do not nec-
essarily pull the mass in the same direction as the electromagnetic-forces
pull the charge the mass-charge combination is pulled apart.
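The separation mechanism sketched in this paragraph reduces to simple vector arithmetic: the gravito-inertial field acts on the mass, the electromagnetic field acts on the charge, and whenever the two forces differ, the pair is pulled apart. The numbers below are arbitrary illustrations, not values from the chapter.

```python
# Hypothetical numbers only: a point entity carrying both mass and charge.
# Gravito-inertial forces act on the mass, electromagnetic forces on the
# charge; since the two fields need not point the same way, a net relative
# pull arises that tends to separate mass from charge.
F_on_mass = (0.0, -9.8)     # e.g. gravity pulling the mass downward
F_on_charge = (3.0, 0.0)    # e.g. an electric field pulling the charge sideways
pull_apart = tuple(fc - fm for fc, fm in zip(F_on_charge, F_on_mass))
print(pull_apart)           # nonzero vector -> the pair is pulled apart
```

A zero `pull_apart` vector would correspond to the textbook habit of treating the particle as a single bundle of properties; the nonzero case is precisely the possibility the next paragraph takes up.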
This introduces a new possibility that has not been used by physicists
because of their habit of defining particles as single bundles of properties
without asking the question, What holds mass and charge together?
The answer is not known at this point; however, we can speculate that if
the separation is small enough the two properties will be held together
by a linear restoring force characterized by F_c = k_c z, where k_c is a mass-charge attraction spring constant and z is the world-line distance between a mass and charge. This force has been identified as the cognitive force
(Baer et al. 2012). The action held in the separation can be formally
identified with quantum theory by a series of substitutions,
E(x) dt = ∫ k_c Z* dZ = ∫ Z* (k_c d/dt) Z dt
        = ∫ ψ* (iℏ d/dt) ψ dt = ∫ ψ* H ψ dt (4.3)

Where:
E(x) dt = the action in a cycle of standard length dt, that is, small enough that the energy density E(x) can be treated as a constant.
k_c = ih/2π = iℏ; the spring constant.
Hψ = iℏ dψ/dt; the Schrödinger equation.
Z = ψ, the wave function, for small enough Z.
h = Planck's constant; ℏ = h/2π.
x = the Hilbert space cell name labeling each of the Z(x) or ψ(x). If space is the only observable, x equals Cartesian coordinates (x, y, z).
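Under these identifications, a state that merely rotates in phase satisfies the Schrödinger relation term by term. The following sketch checks this numerically in natural units (ℏ = 1); the energy value is an arbitrary illustration, not a quantity taken from the chapter.

```python
import cmath

# Check that a pure phase rotation, psi(t) = psi0 * exp(-i*E*t/hbar),
# satisfies the Schrodinger relation i*hbar d(psi)/dt = H psi = E*psi.
hbar = 1.0            # natural units
E = 2.5               # hypothetical energy of the cycle
psi0 = 1.0 + 0.0j

def psi(t):
    return psi0 * cmath.exp(-1j * E * t / hbar)

t, dt = 0.7, 1e-6
dpsi_dt = (psi(t + dt) - psi(t - dt)) / (2 * dt)  # central difference
lhs = 1j * hbar * dpsi_dt                          # i*hbar d(psi)/dt
rhs = E * psi(t)                                   # H psi for this state
assert abs(lhs - rhs) < 1e-6
```

The assertion passes because differentiating the phase factor pulls down exactly the factor −iE/ℏ, which the i·ℏ prefactor converts back into the energy E, the same cancellation that makes the far-right integral in Eq. (4.3) the measurement equation of quantum theory.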
The total energy in the entire mass-charge configuration is calculated
by integrating Equation 4.3 over all space cells. The reader will recognize
the integral on the far right as the form of the measurement equation of
quantum theory. The full mapping to the architecture of quantum theory
is accomplished in Fig. 4.4.
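This integration over space cells can be illustrated numerically. The sketch below is my own illustration, not part of the chapter's formalism: the harmonic-oscillator potential, the Gaussian ground state, the grid, and natural units (ħ = m = 1) are all assumptions. It discretizes space into cells and sums Ψ*(x)HΨ(x) over them, recovering the oscillator's known ground-state energy.

```python
import numpy as np

# Discretize "space cells": a uniform grid standing in for the cell names x.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Harmonic-oscillator ground state in natural units (hbar = m = omega = 1).
psi = np.pi**-0.25 * np.exp(-x**2 / 2.0)

# Apply H = -(1/2) d^2/dx^2 + (1/2) x^2 via a central finite difference.
d2psi = np.zeros_like(psi)
d2psi[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
H_psi = -0.5 * d2psi + 0.5 * x**2 * psi

# "Measurement equation": total energy = sum over cells of psi* H psi dx.
E = np.sum(psi * H_psi) * dx
print(E)  # close to the exact ground-state energy 0.5
```

The sum over grid cells plays the role of the integral over space cells; refining the grid only sharpens the agreement with the exact value.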
Here the classic world observable is defined by a spatial-temporal
energy function. The existence of this observable is processed into a
real physical world displacement by the explanation Process III. The
result is a description of the physical reality in terms of a displacement
pattern as a function of time and space. The time derivative of this dis-
placement pattern is related to the Hamiltonian energy operator. This
relation is known as the Schrödinger equation or von Neumann's Pro-
cess II. Lastly, observables are extracted from the displacement pattern
by von Neumann's Process I (von Neumann 1955).
Figure 4.4 shows material energy in the inner ring corresponding to
the explain-measurement cycle in Fig. 4.3. This cycle carried the change
around the cycle [ . . . A - a - A . . . ], which in this case is inter-
preted as a displacement-energy cycle. In Fig. 4.3 the inner loop pertains
to a change, while the outer loop pertains to the permanent entity being
changed. In Fig. 4.4 the permanent entity being changed is felt as the
140 Wolfgang Baer
[Figure 4.4: Mapping of the explanation-measurement cycle onto the
architecture of quantum theory. The classic-world observable E(x,t)
(Process 0) is explained into the quantum-world displacement pattern
Ψ(x,t) (von Neumann's Process III); the pattern evolves under the
Schrödinger equation HΨ(x,t) = iħ dΨ(x,t)/dt (von Neumann-Schrödinger
Process II); and observables are extracted via the measurement integral
∫ Ψ*(x,t) H Ψ(x,t) dt (von Neumann's Process I).]
4.7 Conclusion
We have analyzed the human thinking process and identified its cognitive
processes with the activities described by the formulation of quantum
theory. This identification allows us to conclude that physics already
describes an integrated mind-body mechanism. Although quantum the-
ory is only a linear approximation to the full understanding of cognitive
beings, the recognition that matter generates influences that generate
matter in an endless loop, coupled with the recognition that those
influences are the sensations experienced by, and in turn remembered by, the
brain opens the door for further development both in physics and the
cognitive sciences.
The buildup of mass-charge configurations into electrons, atoms,
molecules, and biological structures organizes fundamental cognitive
activity into forms that could be called human. However, when seek-
ing the origin of consciousness one must reduce even an electron to its
fundamental mass-charge pattern and recognize the process that prop-
agates influence fields and controls that mass-charge configuration. We
propose a new tool that visualizes the cause of mental experiences as
separation patterns and directly equates those experiences to the energy
fields which hold the charge and mass together. This separation energy
is not limited to the human brain but is exhibited by all matter. Thus,
the entire universe, and every part of it, exhibits a form of primitive
consciousness.
REFERENCES
Aspect A., Grangier P., and Roger G. (1982). Experimental realization of
Einstein-Podolsky-Rosen-Bohm gedankenexperiment: A new violation of
Bell's inequalities. Phys Rev Lett 49(2):91-94.
Atmanspacher H. and Primas H. (2006). Pauli's ideas on mind and matter in the
context of contemporary science. J Consciousness Stud 13(3):5-50.
Baer W. (2010a). Introduction to the physics of consciousness. J Consciousness
Stud 17(3-4):165-191.
Baer W. (2010b). Theoretical discussion for quantum computation in biological
systems. Quantum Information and Computation VIII, Paper #7702-31, URL:
http://dx.doi.org/10.1117/12.850843 (accessed March 22, 2013).
Baer W. (2011). Cognitive operations in the first-person perspective. Part 1: The
1st person laboratory. Quantum Biosystems 3(2):2644.
Baer W., Pereira A., and Bernroider G. (2012). The Cognitive Force in the Hierarchy
of the Quantum Brain. URL: https://sbs.arizona.edu/project/consciousness/
report poster detail.php?abs=1278 (accessed March 22, 2013).
Bernroider G. and Roy S. (2004). Quantum classical correspondence in the
brain: Scaling, action distances and predictability behind neural signals.
Forma 19:55-68.
Bieberich E. (2000). Probing quantum coherence in a biological system by means
of DNA amplification. Biosystems 57(2):109-124.
Blood C. (2009). Constraints on Interpretations of Quantum Mechanics. URL: http://
arxiv.org/abs/0912.2985 (accessed March 6, 2013).
Bohm D. (1983). Wholeness and the Implicate Order. London: Ark.
Carnap R. (2000). The observation language versus the theoretical language. In
Carnap R. and Schlick T. (eds.) Readings in the Philosophy of Science. Mt.
View, CA: Mayfield, pp. 166 ff.
Cahill R. T. (2003). Process Physics. Proc Stud Suppl 2003 (5) URL:
www.mountainman.com.au/process physics/HPS13.pdf (accessed March 6,
2013).
Rosa L. P. and Faber J. (2004). Quantum models of the mind: Are they com-
patible with environment decoherence? Phys Rev E 70:031902.
Schmidhuber J. (1990). Zuse's Thesis: The Universe is a Computer. URL: www.
idsia.ch/juergen/digitalphysics.html (accessed February 27, 2013).
Schwartz J. M., Stapp H. P., and Beauregard M. (2004). Quantum physics in
neuroscience and psychology: A neurophysical model of mind-brain interac-
tion. Philos T R Soc B 360(1458):1309-1327.
Stapp H. P. (1993). Mind, Matter, and Quantum Mechanics. Berlin: Springer.
Summhammer J. and Bernroider G. (2007). Quantum entanglement in the volt-
age dependent sodium channel can reproduce the salient features of neuronal
action potential initiation. URL: arXiv:0712.1474v1 (accessed February 27,
2013).
Svozil K. (2003). Calculating Universe. URL: arXiv:physics/0305048v2 (accessed
February 27, 2013).
Tegmark M. (2000). The importance of quantum decoherence in brain processes.
Phys Rev E 61(4):4194-4206.
Velmans M. (2000). Understanding Consciousness. London: Routledge.
Vitiello G. (2001). My Double Unveiled: The Dissipative Quantum Model of the
Brain. Amsterdam: John Benjamins.
von Neumann J. (1955). The Mathematical Foundations of Quantum Mechanics.
Princeton University Press.
Walker H. (2000). The Physics of Consciousness. New York: Perseus.
Wheeler J. A. (1983). Law without law. In Wheeler J. A. and Zurek W. H. (eds.)
Quantum Theory and Measurement. Princeton University Press, pp. 182 ff.
Wigner E. P. (1983). The problem of measurement. In Wheeler J. A. and Zurek
W. H. (eds.) Quantum Theory and Measurement. Princeton University Press,
pp. 324 ff.
5 Emergence in dual-aspect monism
Ram L. P. Vimal
5.1 Introduction
Subjective experiences potentially pre-exist in the Universe, in analogy to a
tree that potentially pre-exists in the seed. However, the issue of how a spe-
cific subjective experience (SE) is actualized/realized/experienced needs
rigorous investigation. In this regard, I have developed two hypotheses:
(1) the existence of mechanisms of matching and selection of SE patterns
in the brain-mind-environment, and (2) the possibility of explaining the
emergence of consciousness from the operation of these mechanisms.
The former hypothesis was developed in the theoretical context of
Dual-Aspect1 Monism (Vimal 2008b) with Dual-Mode (Vimal 2010c)
The work was partly supported by VP-Research Foundation Trust and Vision Research
Institute Research Fund. The author would like to thank Alfredo Pereira, Wolfgang
Baer, Andrew Fingelkurts, Ron Cottam, Dietrich Lehmann, and other colleagues for
their critical comments, suggestions, and grammatical corrections.
1 One could argue that the term "dual aspect" resembles dualism and the term
"double aspect" suggests complementarity (such as wave-particle complementarity). In this
150 Ram L. P. Vimal
chapter, however, these terms are used interchangeably and represent the inseparable
mental (from the subjective first-person perspective) and physical (from the objective
third-person perspective, and/or matter-in-itself) aspects of the same state of same
entity. This is close to the double aspect theory of Fechner and Spinoza (Stubenberg
2010).
Emergence in dual-aspect monism 151
5.2.1 Materialism
The current dominant view of science is materialism, which assumes
that mind/consciousness/SE somehow arises from non-experiential matter,
such as the NNs of the brain. In materialism (Levine 1983; Loar 1990,
1997; Levin 2006, 2008; Papineau 2006), qualia/SEs (such as redness)
are assumed to mysteriously emerge from, or reduce to (or be identical
with), relevant states of NNs. This is taken as a brute fact ("that's just
the way it is").
2 See 't Hooft (2005, p. 4) for primitive quantum field, Bohm (1990) for quantum
potential, and Hiley and Pylkkanen (2005) for primitive mind-like quality at the quantum
level via active information.
should observe the appearance of the electron, but we cannot see the
electron (as it is too small); we can measure its physical properties (such
as mass, spin, charge in the Standard Model). We will never know the
first-person experience (if any) of the electron because for that we would
need to be an electron. One could then argue that the physical aspect
of a state of the electron is dominant and its mental aspect is latent
for us.
3 In general, PEs are precursors of SEs. In hypothesis H1 , PEs are precursors of SEs in
the sense that PEs are superposed SEs in unexpressed form in the mental aspect of every
entity-state, from which a specific SE is selected via a matching and selection
process in the brain-environment system. In hypotheses H2 and H3, PEs are precursors of SEs in the
sense that SEs somehow arise/emerge from PEs.
4 The dual-mode concept is derived from thermofield dissipative quantum brain dynamics
(Globus 2006; Vitiello 1995).
7 The DAMv framework was discussed in detail in (Vimal 2008b, 2010c) and was elabo-
rated further in (Bruzzo and Vimal 2007; Caponigro et al. 2010; Caponigro and Vimal
2010; MacGregor and Vimal 2008; Vimal 2008a, 2009a, 2009b, 2009c, 2009d, 2010a,
2010b, 2010d, 2010e, 2010f, 2010g; Vimal and Davia 2010).
other human-like cognitions. The states of dead bodies (of humans,
animals, birds, and plants) and inert entities (such as cars, rocks, buildings,
roads, bridges, water, air, fire, Sun, Moon, planets, galaxies, and so on)
and other classical macro entities and micro entities (such as elementary
particles) have the dominant physical aspect and latent mental aspect.
When we march on to quantum entities, the dominance of aspects
needs further clarification: we are puzzled about the third-person perspective
on them, as we are unable to visualize them and must depend on our models
and indirect effects to know about them. We see quantum effects, such as
non-local effects (the EPR hypothesis; Einstein et al. 1935), proved in
Aspect's experiments (Aspect 1999). These results allow a description in terms
of probabilities/potentialities. These are mind-like effects (Stapp 2009a,
2009b, 2001) from the objective third-person perspective. Furthermore,
we will never know what quantum entities experience; so, the mental
aspect of a state of a quantum entity is hidden. Therefore, we propose
that the state of a quantum entity has a dominant physical aspect and a
latent mental aspect. However, the quantum mental aspect is not like a
human mind; rather, the quantum mind-like aspect had to co-evolve with
its inseparable physical aspect over billions of years, and the end products
are the human mind (mental aspect) and the inseparable human brain
(physical aspect), respectively.
This concept of varying degrees of the dominance of aspects depending on
the levels of entities is introduced to encompass most views. For example:
(1) in materialism, matter is the fundamental entity and mind arises from
matter. This can be re-framed by considering the state of the fundamen-
tal entity in materialism as a dual-aspect entity with dominant physical
aspect and latent mental aspect. (2) In interactive substance dualism, mind
and matter are on equal footing, they can independently exist, but they
can also interact. This can be re-framed as: the state of mental entity
has dominant mental aspect and latent physical aspect, and that of mate-
rial entity has dominant physical aspect and latent mental aspect. (3) In
idealism, consciousness/mind is the fundamental reality, and matter (i.e.,
matter-in-itself in addition to its appearances) emerges from it. This can
be re-framed, as the state of the fundamental entity (in idealism) is a dual-
aspect entity with dominant mental aspect and latent physical aspect; the
matter-in-itself arises from the physical aspect. Thus, the DAMv frame-
work encompasses and bridges most views; hence it is closer to being a
general framework.
the universe starting from the physical and mental aspect of the state
of quantum empty-space at the Big Bang to finally the physical and
mental aspect of the states of brain-mind over 13.72 billion years. It
can be summarized as: [Dual-aspect fundamental primal entity (such as
the unmanifested state of Brahman, sunyata, the quantum empty-space/void
at the ground state of the quantum field with minimum energy, or the
Implicate Order: the same entity with different names)] → [Quantum
fluctuation in the physical/mental aspect of the unmanifested state of the
primal entity] → Big Bang → [Very early dual-aspect universe (Planck
epoch, Grand unification epoch, Electroweak epoch: Inflationary epoch
and Baryogenesis): dual-aspect universe with dual-aspect unified field →
dual-aspect four fundamental forces/fields (gravity as curvature of space,
electromagnetic, weak, and strong) via inflation in the dual-aspect
space-time continuum] → [Early dual-aspect universe (supersymmetry
breaking, Quark epoch, Hadron epoch, Lepton epoch, Photon epoch:
Nucleosynthesis, Matter domination, Recombination, Dark ages):
dual-aspect fundamental forces/fields, elementary particles (fermions and
bosons), and antiparticles (anti-fermions) in the dual-aspect space-time
continuum] → [Dual-aspect structure formation (Reionization, Formation
of stars, Formation of galaxies, Formation of groups, clusters, and
superclusters, Formation of our Solar System, Today's Universe):
dual-aspect matter (fermions and composites, galaxies, stars, planets,
Earth, and so on), bosons, and fields, and dual-aspect life and
brain-states (experiential and functional consciousness, including
thoughts and other cognition, as the mental aspect (Vimal 2009b, 2010d),
and NNs and electrochemical activities as the physical aspect) in the
dual-aspect space-time continuum] → [Ultimate fate of the dual-aspect
universe: Big Freeze, Big Crunch, Big Rip, Vacuum Metastability Event,
and Heat Death, OR dual-aspect Flat Universe (Krauss 2012)]. In the
DAMv framework, the state of the
dual-aspect unified field has the inseparable mental and physical aspects,
which co-evolved and co-developed eventually over 13.72 billion years
(Krauss 2012) to our mental and physical aspects of brain-state. The
mental aspect was latent until life appeared; then its degree of dominance
increased from inert matter to plants to animals to humans; for
awake, conscious, active humans, both aspects are equally dominant; for
inert entities, the mental aspect is latent and the physical aspect is dominant.
According to Velmans:
Reflexive Monism [RM] is a dual-aspect theory . . . which argues that the one
basic stuff of which the universe is composed has the potential to manifest both
physically and as conscious experience. In its evolution from some primal undif-
ferentiated state, the universe differentiates into distinguishable physical enti-
ties, at least some of which have the potential for conscious experience, such as
human beings . . . the human mind appears to have both exterior (physical) and
interior (conscious experiential) aspects . . . According to RM . . . conscious states
and their neural correlates are equally basic features of the mind itself. . . . the
reflexive model also makes the strong claim that, insofar as experiences are any-
where, they are roughly where they seem to be. . . . representations in the mind/brain
have two (mental and physical) aspects, whose apparent form is dependent on
the perspective from which they are viewed. (Velmans 2008)
the self-as-object (me) and (2) the three steps of the self-as-knower:
protoself, core self, and autobiographical self.
According to Damasio:
8 During non-REM (slow wave) sleep, the inferior frontal gyrus, the parahippocampal
gyrus, the precuneus and the posterior cingulate cortex, as well as the brain stem and
cerebellum are active (Dang-Vu et al. 2008).
9 Prophets/rishis/seers usually have three kinds of transcendental experiences during
revelation/samadhi/mystic states, with altered activities in various brain areas: bliss,
inner-light perception, and the unification of subject and objects.
One could query: precisely how can sentience arise, be acquired, happen,
or emerge from non-sentient matter? In other words, it seems that he
assumes that subjective experiences, including the self (protoself, core self,
and autobiographical self), somehow emerge from non-mental/non-experiential
matter, such as the related neural networks and their activities. It is
unclear precisely how an experiential entity can emerge from a
non-experiential entity, and what the evidence is for that mechanism.
Damasio writes further:
Feeling states first arise from the operation of a few brain-stem nuclei . . . The
signals are not separable from the organism states where they originate. The
ensemble constitutes a dynamic, bonded unit. I hypothesize that this unit enacts a
functional fusion of body states and perceptual states . . . protofeeling . . . . (Dama-
sio 2010, pp. 257-263)
10 Leibniz's monads and parallel (soul-experience and body-representation) duals
(Leibniz 1714) seem to address the problems of Descartes and Spinoza, namely, the
problematic interaction between mind and matter arising in Descartes's framework and
the lack of individuation (individual creatures as merely accidental) inherent in
Spinoza's framework. Monads could be the ultimate elements of the universe, human
beings, and/or God. Leibniz's monad could be absolutely simple, without parts, and
hence without extension, shape or divisibility . . . subject to neither generation nor
corruption [ . . . ] a monad can only begin by creation and end by annihilation
(Rutherford 1995, pp. 132-133).
Siva is the mental aspect and Sakti is the physical aspect of the same state
of the primal entity (such as Brahm) (Raina Swami Lakshman Joo 1985).
Kashmir Shaivism seems close to neutral monism: Siva (Purus.a, con-
sciousness, mental aspect) and Sakti (Prakr.ti, Nature, matter, physical
aspect) are two projected aspects of the third transcendental ground
level entity (Brahm, Mahatripurasundar) (personal communication by
S.C. Kak).
The primal neutral entity of Neutral Monism (Stubenberg 2010)
might have various names, such as: (1) primal information, (2) the
aspectless unmanifested state of Brahman (also called karan (causal) Brahman)
of Sankaracharya's Advaita (Radhakrishnan 1960), (3) the Buddhist
emptiness (Sunyata) (Nagarjuna and Garfield 1995), (4) Kashmir Shaivism's
Mahatripurasundar/Brahm (Raina Swami Lakshman Joo 1985), and
(5) the empty-space of physics at the ground state of a quantum field (such as
the Higgs field, with non-zero strength everywhere), along with quantum
fluctuations (Krauss 2012).
The state of the primal entity appears aspectless (or neutral) because its
mental and physical aspects are latent. After the cosmic fire (such as the
Big Bang), the manifestation of the universe starts from the latent dual-aspect
unmanifested state of the primal entity, and then the latent physical and
mental aspects gradually change their degree of dominance, depending
on the levels of entities over about 13.72 billion years of co-evolution;
perhaps, first, the physical aspect (matter-in-itself and its appearances,
such as formation of galaxies, stars, planets) evolved and then after bil-
lions of years (perhaps, about 542 million years ago during the Cambrian
explosion) the mental aspect (consciousness/experiences) co-evolved in
humans/animals. In other words, the mental aspect (from a first-person
perspective) becomes evident or dominant in conscious beings after over
13 billion years of co-evolution, rather than being evident before the
onset of the universe, when the mental aspect was presumably latent.
However, one could argue for a cosmic consciousness, different from
our consciousness, which might be the mental aspect of any state of the
universe. As there are certainly innumerable states of the universe, cosmic
consciousness might vary according to these states.
In our conventional daily mind-dependent reality, Neutral Monism
may be unpacked in the DAMv as follows: the state of the apparently aspect-
less neutral entity (quantum spacetime geometry or information) pro-
posed by Neutral Monism would have both the mental and physical aspects
latent/hidden. These latent aspects become dominant depending on mea-
surements. If it is the subjective first-person measurement, then the
mental aspect of a brain-state shows up as subjective experiences. If
it is the objective third-person measurement (such as in fMRI), then the
domain . . . My own view is that, relative to the physical domain, there is just one
sort of strongly emergent quality, namely, consciousness. (Chalmers 2006)
(1) radical novelty (features not previously observed in the system); (2) coher-
ence or correlation (meaning integrated wholes that maintain themselves over
some period of time); (3) a global or macro level (i.e., there is some property
of wholeness); (4) it is the product of a dynamical process (it evolves); and
(5) it is ostensive it can be perceived . . . The mind is an emergent result of
neural activity . . . Emergence requires some form of interaction; it's not sim-
ply a matter of scale . . . Emergence does not have logical properties; it cannot be
deduced (predicted). (Corning 2012)
clearly have NNs in the brain (physical aspect) and related subjective expe-
riences (mental aspect); however, it is indeed an assumption. This
assumption is similar to the assumption of God, soul, Brahman, physics
vacuum/empty-space with virtual particles, strings in string theory,
and other fundamental assumptions. Further investigation is needed to
address the brute fact problem. One speculative attempt is as follows.
One could ask: what is the origin of the inseparable mental and physical
aspects of the state of each entity in the DAMv framework? To address
this, let us consider wave-particle duality and the brain's NN-state.
As per Fingelkurts et al. (2010b), the physical brain produces a highly
structured and dynamic electromagnetic field. If we apply the concept
of the wave-particle inseparable dual-aspect of the state of a wavicle to
the brain-NN-state, then it seems that there are three inseparable aspects
of the same brain-NN-state: (1) the physical particle-like NN, (2) the
wave-like electromagnetic field generated by the activities of the NN in
the brain, and (3) the related phenomenal subjective experience (SE).
The wave-like electromagnetic
field is mind-like as per mind-like nondual monism based on the wave-only
hypothesis (Stapp 2009a, 2009b, 2001). Moreover, as per CEMI field
theory (McFadden 2002a, 2002b, 2006; see also Lehmann, this volume,
Chapter 6), SE is like looking at the CEMI field from the inside. In the
previous list, one could argue that (2) and (3) can be combined as the mental
aspect of the brain-NN-state. If this is acceptable, then one could argue that:
(1) the origin of the mental aspect is the wave-aspect of wave-particle
duality (as electromagnetic field radiation is mind-like because a photon
can be anywhere within a field of radius 186,000 miles after one second
of electromagnetic radiation); and (2) the origin of the physical aspect
is its particle aspect. Thus, in physics, it seems that the mental aspect
is already built in from first principles, and we do not have to insert the
mental aspect into physics by hand. If this is correct, then the brute
fact problem is addressed. However, one could argue that both the wave
and particle aspects of a wavicle are physical, because the energy (E), the
frequency of the wave (ν), and the mass (m) of the particle are related by
E = hν = mc², where h is the Planck constant and c is the speed of light.
Thus, it is debatable.
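The relation E = hν = mc² invoked here can be checked with a short calculation; the choice of a green-light frequency is mine, for illustration only.

```python
# Constants (SI units).
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

# A visible-light photon (green, ~550 nm) -- an illustrative choice.
wavelength = 550e-9
nu = c / wavelength          # frequency, Hz
E = h * nu                   # photon energy via E = h*nu, joules
m_equiv = E / c**2           # equivalent mass via E = m*c^2, kg

print(E)        # about 3.6e-19 J
print(m_equiv)  # about 4.0e-36 kg
```

The tiny equivalent mass illustrates why, for a photon, the wave description (via ν) and the mass-energy description (via mc²) are two readings of the same quantity.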
REFERENCES
Allen M. and Williams G. (2011). Consciousness, plasticity, and connectomics:
The role of intersubjectivity in human cognition. Front Psychol 2:20, e-
document, 16 pp. URL: www.frontiersin.org/Consciousness_Research/10
.3389/fpsyg.2011.00020/abstract (accessed February 28, 2013).
Aspect A. (1999). Bell's inequality test: More ideal than ever. Nature 398:
189-190.
Atmanspacher H. (2007). Contextual emergence from physics to cognitive neu-
roscience. J Consciousness Stud 14(1-2):18-36.
Baars B. J. (2005). Global workspace theory of consciousness: Toward a cognitive
neuroscience of human experience. Prog Brain Res 150:45-53.
Bedau M. A. (1997). Weak emergence. Philos Perspectives 11:375-399.
Bernroider G. and Roy S. (2005). Quantum entanglement of K+ ions, multiple
channel states and the role of noise in the brain. In Stocks N. G., Abbott D.,
and Morse A. P. (eds.) Fluctuations and Noise in Biological, Biophysical, and
Biomedical Systems III, SPIE Conference Proceedings, 584129.
Blackmore S. (1993). Dying to Live: Science and the Near Death Experience. Lon-
don: Grafton.
Blackmore S. J. (1996). Near-death experiences. J Roy Soc Med 89(2):73-76.
Bohm D. (1990). A new theory of the relationship of mind and matter. Philos
Psychol 3(2):271-286.
Broad C. D. (1925). The Mind and Its Place in Nature. London: Routledge &
Kegan Paul.
Bruzzo A. A. and Vimal R. L. P. (2007). Self: An adaptive pressure arising from
self-organization, chaotic dynamics, and neural Darwinism. J Integr Neurosci
6(4):541-566.
Caponigro M., Jiang X., Prakash R., and Vimal R. L. P. (2010). Quantum
entanglement: Can we see the implicate order? Philosophical speculations.
NeuroQuantology 8(3):378-389.
Caponigro M. and Vimal R. L. P. (2010). Quantum interpretation of Vedic
theory of mind: An epistemological path and objective reduction of thoughts.
Journal of Consciousness Exploration and Research 1(4):402-481.
Hamer D. (2005). The God Gene: How Faith Is Hardwired into Our Genes. New
York: Anchor Books.
Hameroff S. (1998). "Funda-Mentality": Is the conscious mind subtly linked to
a basic level of the universe? Trends Cogn Sci 2(4):119-127.
Hameroff S. and Penrose R. (1998). Quantum computation in brain micro-
tubules? The Penrose-Hameroff Orch OR model of consciousness. Philos
T Roy Soc A 356:1869-1896.
Hameroff S. and Powell J. (2009). The conscious connection: A psycho-physical
bridge between brain and pan-experiential quantum geometry. In Skrbina
D. (ed.) Mind That Abides: Panpsychism in the New Millennium. Amsterdam:
John Benjamin, pp. 109127.
Hanna R. and Chadha M. (2011). Non-conceptualism and the problem of per-
ceptual self-knowledge. Eur J Philos 19:184-223.
Hari S. (2010). Eccles's Mind Field, Bohm-Hiley Active Information, and
Tachyons. Journal of Consciousness Exploration and Research 1(7):
850-863.
Hari S. D. (2011). Mind and Tachyons: How Tachyon changes quantum potential
and brain creates mind. NeuroQuantology 9(2), e-document.
Hartline P. H., Vimal R. L. P., King A. T., Kurylo D. D., and Northmore D.
P. (1995). Effects of eye position on auditory localization and neural repre-
sentation of space in superior colliculus of cats. Exp Brain Res 104(3):
402-408.
Hiley B. J. and Pylkkanen P. (2005). Can mind affect matter via active informa-
tion? Mind Matter 3(2):7-27.
t Hooft G. (ed.) (2005). Fifty Years of Yang-Mills Theory. Hackensack, NJ, and
London: World Scientific Publishing.
Kant I. (1929). Critique of Pure Reason, Trans. Smith N. K. London: Macmillan.
Kim J. (1999). Making sense of emergence. Philos Stud 95:3-36.
Klemenc-Ketis Z., Kersnik J., and Grmec S. (2010). The effect of carbon diox-
ide on near-death experiences in out-of-hospital cardiac arrest survivors: A
prospective observational study. Crit Care 14(2):R56.
Koch C. (2012). Consciousness: Confessions of a Romantic Reductionist. Cambridge,
MA: MIT Press.
Krauss L. M. (2012). A Universe from Nothing: Why There Is Something Rather
than Nothing? New York: Free Press.
Leibniz G. W. (1714). Monadologie (The Monadology: An Edition for Students)
Trans. Rescher N. University of Pittsburgh Press.
Levin J. (2006). What is a phenomenal concept? In Alter T. and Walter S. (eds.)
Phenomenal Concepts and Phenomenal Knowledge. New Essays on Consciousness
and Physicalism. Oxford University Press, pp. 87110.
Levin J. (2008). Taking type-B materialism seriously. Mind Lang 23(4):
402-425.
Levine J. (1983). Materialism and qualia: The explanatory gap. Pac Philos Quart
64:354-361.
Litt A., Eliasmith C., Kroona F. W., Weinstein S., and Thagarda P. (2006). Is
the brain a quantum computer? Cognitive Sci 30(3):593-603.
Loar B. (1990). Phenomenal states. Philosophical Perspectives 4:81-108.
Loar B. (1997). Phenomenal states. In Block N., Flanagan O., and Guzeldere G.
(eds.) The Nature of Consciousness, revised edn. Cambridge, MA: MIT Press,
pp. 597-616.
Lutz A. and Thompson E. (2003). Neurophenomenology: Integrating subjec-
tive experience and brain dynamics in the neuroscience of consciousness. J
Consciousness Stud 10(9-10):31-52.
Lyon P. (2004). Autopoiesis and knowing: Reflections on Maturanas biogenic
explanation of cognition. Cybernetics and Human Knowing 11(4):2116.
MacGregor R. J. and Vimal R. L. P. (2008). Consciousness and the structure of
matter. J Integr Neurosci 7(1):75-116.
Maturana H. (2002). Autopoiesis, structural coupling and cognition: A history of
these and other notions in the biology of cognition. Cybernetics and Human
Knowing 9(3-4):5-34.
McFadden J. (2002a). The conscious electromagnetic information (Cemi) field
theory: The hard problem made easy? J Consciousness Stud 9(8):45-60.
McFadden J. (2002b). Synchronous firing and its influence on the brain's elec-
tromagnetic field: Evidence for an electromagnetic field theory of conscious-
ness. J Consciousness Stud 9(4):23-50.
McFadden J. (2006). The CEMI field theory: Seven clues to the nature of con-
sciousness. In Tuszynski J. A. (ed.) The Emerging Physics of Consciousness.
Heidelberg: Springer, pp. 385-404.
McLaughlin B. P. (1992). The rise and fall of British emergentism. In Becker-
mann A., Flohr H., and Kim J. (eds.) Emergence or Reduction? Essays on the
Prospects of Nonreductive Physicalism (Foundations of Communication). Berlin:
De Gruyter, pp. 49-93.
Nagarjuna and Garfield J. L. (1995). The Fundamental Wisdom of the Middle
Way: Nagarjuna's Mulamadhyamakakarika (translation and commentary by
Garfield J. L.). New York/Oxford: Oxford University Press.
Nani A. and Cavanna A. E. (2011). Brain, consciousness, and causality. J Cos-
mology 14:4472-4483.
Northoff G. and Bermpohl F. (2004). Cortical midline structures and the self.
Trends Cogn Sci 8(3):102-107.
Northoff G., Heinzel A., de Greck M., Bermpohl F., Dobrowolny H., and
Panksepp J. (2006). Self-referential processing in our brain: A meta-analysis
of imaging studies on the self. Neuroimage 31(1):440-457.
Papineau D. (2006). Phenomenal and Perceptual Concepts. In Alter T. and
Walter S. (eds.) Phenomenal Concepts and Phenomenal Knowledge. New Essays
on Consciousness and Physicalism. Oxford University Press, pp. 111-144.
Pereira A., Jr. (2007). Astrocyte-trapped calcium ions: The hypothesis of a
quantum-like conscious protectorate. Quantum Biosystems 2:80-92.
Poznanski R. R. (2002). Towards an integrative theory of cognition. J Integr
Neurosci 1(2):145-156.
Poznanski R. R. (2009). Model-based neuroimaging for cognitive computing. J
Integr Neurosci 8(3):345-369.
Prevos P. (2002a). A persistent self? The Horizon of Reason, weblog post.
URL: http://prevos.net/humanities/philosophy/persistent/ (accessed March
6, 2013).
188 Ram L. P. Vimal
Prevos P. (2002b). The self in Indian philosophy. The Horizon of Reason, weblog post. URL: http://prevos.net/humanities/philosophy/self/ (accessed February 28, 2013).
Radhakrishnan S. (1960). Brahma Sutra: The Philosophy of Spiritual Life. London: Ruskin House, George Allen & Unwin Ltd.
Raina Swami Lakshman Joo (1985). Kashmir Shaivism: The Secret Supreme. Srinagar and New York: Universal Shaiva Trust and State University of New York Press.
Raju P. T. (1985). Structural Depths of Indian Thought (SUNY Series in Philosophy). New York and New Delhi: State University of New York and South Asian Publishers.
Rudrauf D., Lutz A., Cosmelli D., Lachaux J. P., and Le Van Quyen M. (2003). From autopoiesis to neurophenomenology: Francisco Varela's exploration of the biophysics of being. Biol Res 36(1):27–65.
Rutherford D. (1995). Metaphysics: The later period. In Jolley N. (ed.) The Cambridge Companion to Leibniz. Cambridge University Press.
Sankaracharya A. (1950). The Brihadaranyaka Upanishad, 3rd Edn. Trans. Madhavananda S. Mayavati, Almora, Himalayas, India: Swami Yogeshwarananda, Advaita Ashrama.
Sayre K. (1976). Cybernetics and the Philosophy of Mind. Atlantic Highlands: Humanities Press.
Schwalbe M. L. (1991). The autogenesis of the self. J Theor Soc Behav 21:269–295.
Schwartz G. E. and Russek L. G. (2001). Celebrating Susy Smith's soul: Preliminary evidence for the continuance of Smith's consciousness after her physical death. J Relig Psychical Res 24(2):82–91.
Searle J. (2007). Biological naturalism. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Oxford: Blackwell, pp. 325–334.
Searle J. R. (2004). Comments on Noë and Thompson, "Are there neural correlates of consciousness?" J Consciousness Stud 11(1):80–82.
Shoemaker S. (2002). Kim on emergence. Philos Stud 58(1–2):53–63.
Skrbina D. (2009). Minds, objects, and relations: Toward a dual-aspect ontology. In Skrbina D. (ed.) Mind that Abides: Panpsychism in the New Millennium. Amsterdam: John Benjamins, pp. 361–382.
Stapp H. P. (2001). Von Neumann's formulation of quantum theory and the role of mind in nature. URL: http://arxiv.org/abs/quant-ph/0101118 (accessed February 28, 2013).
Stapp H. P. (2009a). Mind, Matter, and Quantum Mechanics, 3rd Edn. Heidelberg: Springer.
Stapp H. P. (2009b). Nondual quantum duality. Plenary talk at Marin Conference Science and Nonduality (Oct 25, 2009). URL: www-physics.lbl.gov/stapp/NondualQuantumDuality.pdf (accessed February 28, 2013).
Stenger V. J. (2011). Life after death: Examining the evidence. In Loftus J. (ed.) The End of Christianity. Amherst: Prometheus Books, pp. 305–332.
Steriade M., McCormick D. A., and Sejnowski T. J. (1993). Thalamocortical oscillations in the sleeping and aroused brain. Science 262(5134):679–685.
Emergence in dual-aspect monism 189
6 Consciousness: Microstates of the brain's electric field as atoms of thought and emotion
Dietrich Lehmann
1 This chapter does not offer a general review. It is centered around the work done by our
research unit, represents my proposals, and specifically draws on results obtained in our
experimental studies.
Microstates of brain electric field: Atoms of thought/emotion 193
present as soon as all parts are in place and in the requisite condition
(gasoline, temperature, ignition machinery), and it is not present if not
all requisite conditions are met. Speed (consciousness) is not a property
of the engine's (of the brain's) individual parts, and is not available if
the functional state of the system (its condition) is not adequate. The
difference between railroad engines that produce speed and brains that
produce consciousness is that we know how to construct railroad engines
and how to put them into the adequate condition, but we do not know
how to do this for brains. This metaphor, however, only concerns the
emergent property of consciousness, not the incorporation of consciousness
as measurable configuration and dynamics of the brain's electromagnetic
field that is discussed in Section 6.2.
It is reasonable to assume that brains with increasing structural com-
plexity develop higher levels of consciousness, thereby providing the pos-
sibility for richer, more detailed inner aspects.
Only relatively few components of brain work qualify as candidates for
access to consciousness. The vast majority of brain processes run their
course non-consciously (see, e.g., Dehaene and Changeux 2011). For
example, it is not possible to consciously know why something cannot be
remembered and why something else can be remembered, although the
results of these inaccessible processes are available in consciousness as
recall or failure to recall. Thus, the results of the non-conscious processes
qualify as candidates for consciousness, but few actually reach this stage.
projected into the outside world, and that this outside world will act in
ways that cannot be directly controlled by the self. But over time, multiple
trial and error experience makes it evident to the child that not everything
that happens is due to independent outside activity: experience shows
that it is not helpful to hit the wall if one happened to bounce one's head
against it. Hence, the laboriously acquired ability to project experience
to the outside world must be restricted to appropriate cases. This reality-
driven distinction is not always self-evident and simple, especially in
stressful conditions: we, as adults, still wrestle with the temptation to
ascribe to the outside world much of what we generate in ourselves.
In the course of individual development, the widening and refining
of the properties of consciousness is evident in the individual's ability
to recognize more of the sources of his/her subjective experience. Con-
scious awareness of the self is formed by interactions with other people;
it seriously deteriorates during social isolation (Reemtsma 1997).
The development of consciousness in the individual reflects the devel-
opment of consciousness in the history of mankind. Not too long ago,
the predominating belief in society was that personal decisions actually
are instilled into people by gods or other exterior, non-human, good
or evil forces, deities, or devils. Homer told the story that the goddess
Athena, disguised as Telemachos' uncle Mentor, walked in front of
Telemachos, guiding him on his way into the world. These convictions
about control of personal decisions by exterior forces led to dire conse-
quences. People who were thought to act under the influence of bad forces
often were tortured to rid them of these evil influences. Slowly, insight
grew that motivations develop in the individual itself, even though the
generating mechanisms remained totally obscure for a long time. Even-
tually, the rule of kings was not accepted any more as god-given, and
individual experience moved into the center of attention. But for a long
time philosophers thought about "thinking" and not about "consciousness":
In Europe, a need for, and thus the use of, the word "consciousness"
arose first in the seventeenth century in England, distinguishing
consciousness from conscience. The German word "Bewusstsein" for
consciousness was introduced in the eighteenth century (Wolff 1720).
Before these times apparently there was no need to name the concept
of a reflective awareness of ones own thoughts. In fact, still today the
same word is used for consciousness and conscience in several European
languages.
Consciousness is detectable through behavioral observation in animals.
Chimpanzees, dolphins, and crows display conscious behavior when viewing
themselves in a mirror (Gallup 1970; Prior et al. 2008). They attempt
to remove disfiguring marks (e.g., a mark on the forehead), quite like
gravity in mass. The earth exists. Why it exists is a matter of belief
systems. The question of why anything exists can also be formulated
correctly in terms of grammar and syntax, but similarly, there is no
meaningful answer.
[Figure 6.1 here: two power spectra, labeled VISUAL HALLUCINATIONS and BODY IMAGE DISTURBANCES.]
Fig. 6.1 Power spectra of EEG recordings during times when subjects
signaled experiencing visual hallucinations or body image disturbances.
Mean power (vertical, arbitrary units) per frequency bin (Hz, horizontal)
across the six subjects that had both types of experience after cannabis
ingestion. Dots mark frequency bins that showed significant differences
(single dot p < 0.05; double dot p < 0.025; after Koukkou and Lehmann
1976).
the organization of the system (Ashby 1960). The brain potential maps
describe the potential landscape of higher and lower potential values by
iso-potential lines, similar to geographical maps whose iso-altitude lines
describe mountains and valleys. Positive and negative potential values are
defined in reference to the potential value at an arbitrarily preselected
location (the "reference location," i.e., the location where the reference
electrode is attached). This arbitrary choice of a reference location
naturally does not influence the measured potential landscape, just as a
rising or falling water level does not alter a geographical landscape. Figure
6.2 shows a sequence of maps of momentary potential distributions. It
is noteworthy that at visual examination, the map series shows no wave
fronts and no wave travelling phenomena, although these aspects are
classical topics in EEG studies. Examination of series of momentary brain
potential maps shows that the mapped landscapes show brief time peri-
ods of quasi-stable spatial configurations that are concatenated by very
rapid changes of landscape (Fig. 6.2). Thus, the map series is reminis-
cent of a volcano landscape with outbreaks once here, then there. This
is illustrated by plots of the locations of extreme potentials over time
(Fig. 6.3). The plots show that these reference electrode-independent
map features occur in restricted sub-areas of the head surface (Lehmann
1971).
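The reference-independence described above can be checked numerically. The sketch below uses synthetic data, not actual recordings: re-referencing adds a (time-varying) constant to every channel of each momentary map, so the potential landscape, i.e. the differences between electrode values, and the Global Field Power remain unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
maps = rng.standard_normal((100, 19))        # time x electrodes, reference A

# Re-reference to electrode 0: subtract its value from all channels.
# This adds a constant offset per map -- like a rising or falling water
# level under a fixed geographical landscape.
reref = maps - maps[:, [0]]

# The landscape itself -- voltage differences between electrode pairs --
# is identical under both references:
diff_a = maps[:, 3] - maps[:, 7]
diff_b = reref[:, 3] - reref[:, 7]
print(np.allclose(diff_a, diff_b))           # True

# So is Global Field Power computed against the average reference:
gfp_a = (maps - maps.mean(axis=1, keepdims=True)).std(axis=1)
gfp_b = (reref - reref.mean(axis=1, keepdims=True)).std(axis=1)
print(np.allclose(gfp_a, gfp_b))             # True
```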
Data-driven analysis approaches for brain electric data can parse
the series of momentary maps into temporal segments of quasi-stable
map landscapes (Lehmann and Skrandies 1980; Lehmann et al. 1987;
Pascual-Marqui et al. 1995). We called these segments microstates
(Lehmann et al. 1987). Their mean duration during no-task resting is
in the range of about 100 ms. Microstates also are observed in event-
related potential (ERP) data (Lehmann and Skrandies 1980; Michel and
Lehmann 1993; Koenig et al. 1998; Gianotti et al. 2007). Figure 6.3
shows a series of maps of momentary potential distributions where the
field was mapped at successive time points of maximal field strength, and
where the terminations of identified microstates are marked.
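A minimal sketch of this parsing step, using synthetic data rather than the published algorithms: Global Field Power (GFP) is the spatial standard deviation of each momentary map, maps are conventionally sampled at the GFP peaks, and segment durations follow from runs of identical map labels (the labels here are random stand-ins for the classes).

```python
import numpy as np

def gfp(maps):
    # Global Field Power: spatial standard deviation of each momentary map
    return maps.std(axis=1)

def gfp_peaks(g):
    # Indices of local maxima of the GFP curve (moments of maximal field
    # strength, at which maps are conventionally sampled)
    return np.flatnonzero((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:])) + 1

def segment_durations_ms(labels, srate_hz):
    # Durations of runs of identical labels (quasi-stable segments), in ms
    edges = np.flatnonzero(np.diff(labels)) + 1
    edges = np.concatenate(([0], edges, [len(labels)]))
    return np.diff(edges) * 1000.0 / srate_hz

# Toy example: 4 s of 19-channel "EEG" at 250 Hz, random class labels
rng = np.random.default_rng(0)
maps = rng.standard_normal((1000, 19))
g = gfp(maps)
peaks = gfp_peaks(g)
labels = rng.integers(0, 4, size=1000)
durs = segment_durations_ms(labels, 250.0)
print(len(peaks), round(durs.mean(), 1))
```

With real EEG, the labels come from clustering the peak maps (e.g., a modified k-means), and, as noted above, resting durations then average on the order of 100 ms.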
The microstates are classified into different classes on the basis of the
spatial configuration of their electric landscapes. Four microstate classes
with specific spatial configurations of their maps of electric potential
distribution (Fig. 6.4) were observed during no-task resting conditions
(Wackermann et al. 1993; Koenig et al. 2002; Britz et al. 2010).
Physics tells us that different potential landscapes on the surface of
the head must have been generated by activity of different geometries
of the generating sources, that is, of different populations of neurons in
the brain. It is reasonable to assume that different neuronal activity will
execute different functions in information processing. Thus, one could
imagine developing a microstate dictionary that describes the function
executed by each type of microstate. Indeed, we found that different
classes of microstates are associated with different subjective experiences
or different types of information processing. We observed the results
during spontaneously occurring thinking as well as during reading of
words displayed on a computer screen. The studies are briefly reviewed
next.
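The class assignment just described can be sketched as absolute spatial correlation against template maps. The templates below are random stand-ins for the four standard classes, not the published ones; taking the absolute value implements the polarity-invariance used in microstate analysis.

```python
import numpy as np

def assign_class(maps, templates):
    # Normalize maps and templates spatially (zero mean, unit norm), then
    # pick the template with the highest ABSOLUTE spatial correlation:
    # the continually reversing polarity of spontaneous EEG is ignored,
    # only the spatial configuration counts.
    def norm(x):
        x = x - x.mean(axis=1, keepdims=True)
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    corr = norm(maps) @ norm(templates).T   # (n_maps, n_classes)
    return np.abs(corr).argmax(axis=1)

rng = np.random.default_rng(2)
templates = rng.standard_normal((4, 19))    # stand-ins for classes A-D
# A map equal to template 2 with inverted polarity plus a little noise:
probe = -templates[[2]] + 0.05 * rng.standard_normal((1, 19))
print(assign_class(probe, templates))       # polarity flip still maps to class 2
```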
[Figure 6.4 here: four potential maps, panels A, B, C, D.]
Fig. 6.4 Maps of the potential distribution on the head surface of the
four standard microstate classes during no-task resting, obtained from
496 healthy 6- to 80-year-old subjects (data of Koenig et al. 2002).
Head seen from above, nose up, left ear left; iso-potential lines in equal
microvolt steps; maps are normalized for unity Global Field Power.
Black and white areas indicate opposite polarities, but note that for
spontaneous brain activity the continually reversing polarity of the brain
electric field is irrelevant; only the spatial configuration is used for the
microstate assessment.
6.2.6 External and internal input of same content is processed in the same
cortical area: Visual-concrete versus abstract thoughts
Applying a conjunction analysis to both studies reviewed previously, we
found (Lehmann et al. 2010) that the results for visual-concrete and
abstract thoughts were similar for information generated interiorly
(spontaneous thoughts) and for information presented from exterior
sources (words read on the computer display): (1) The microstate
potential maps of visual-concrete thought content had orientations of
the brain electric field axis that were rotated counterclockwise relative
to the microstate potential maps of abstract thought content. (2) The
brain electric gravity center of the microstate maps was more posterior
and more right-hemispheric for visual-concrete thought content than for
abstract thought content. (3) Subsequent LORETA functional tomography
conjunction analyses of the microstate data demonstrated activation that
was significant in common across the two studies: right posterior for
visual-concrete thought content (Brodmann areas 20, 36, 37) and left
anterior for abstract thought content (Brodmann areas 47, 38, 13), as
illustrated in Fig. 6.5.
Fig. 6.5 Glass brain views of the brain sources that were active during
microstates associated with spontaneous or induced visual-concrete
imagery and during microstates associated with spontaneous or induced
abstract thought. The displayed localizations reached Fisher's p < 0.05 in
a conjunction analysis that combined the results of two studies that
investigated spontaneous unrestrained thinking and reading of visual-
concrete or abstract words (after Lehmann et al. 2010).
6.2.13 Microstates and fMRI resting state networks and default states
The four typical microstate classes were also observed in studies analyzing
simultaneous recordings of multichannel EEG and functional magnetic
resonance imaging (fMRI) (Britz et al. 2010; Musso et al. 2010) that
demonstrated microstate-associated networks which corresponded to
fMRI-described resting state networks (see also Yuan et al. 2012).
REFERENCES
Allport D. A. (1968). Phenomenal simultaneity and the perceptual moment hypothesis. Br J Psychol 59(4):395–406.
Ashby W. R. (1960). Design for a Brain: The Origin of Adaptive Behavior, 2nd Edn. New York: John Wiley & Sons, Inc.
Baars B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
Baars B. J. (2002). Behaviorism redux? Trends Cogn Sci 6(6):268–269.
Baars B. J. and Gage N. M. (2010). Cognition, Brain and Consciousness: An Introduction to Cognitive Neuroscience, 2nd Edn. London: Academic Press.
Barendregt H. (2006). The Abhidhamma model AM0 of consciousness and some of its consequences. In Kwee M. G. T., Gergen K. J., and Koshikawa F. (eds.) Buddhist Psychology: Practice, Research and Theory. Taos, NM: Taos Institute Publishing, pp. 331–349.
Bleuler E. (1911). Dementia Praecox oder Gruppe der Schizophrenien. Leipzig: Deuticke.
Blumenthal A. L. (1977). The Process of Cognition. Englewood Cliffs, NJ: Prentice-Hall.
Boas F. (1920). The methods of ethnology. Am Anthropol, New Series 22(4):311–321.
Bodenstein G., Schneider W., and Malsburg C. V. (1985). Computerized EEG pattern classification by adaptive segmentation and probability-density-function classification: Description of the method. Comput Biol Med 15(5):297–313.
Breakspear M., Williams L. M., and Stam C. J. (2004). Topographic analysis of phase dynamics in neural systems reveals formation and dissolution of dynamic cell assemblies. J Comput Neurosci 16:49–68.
Buddha (2000). The Connected Discourses of the Buddha: A New Translation of the Samyutta Nikaya. Trans. Bodhi B. Boston, MA: Wisdom Publications.
Kinoshita T., Strik W. K., Michel C. M., Yagyu T., Saito M., and Lehmann D. (1995). Microstate segmentation of spontaneous multichannel EEG map series under Diazepam and Sulpiride. Pharmacopsychiatry 28:51–55.
Klinger E. (1978). Modes of normal conscious flow. In Pope K. S. and Singer J. L. (eds.) The Stream of Consciousness. London: Plenum Press, pp. 91–116.
Koenig T., Kochi K., and Lehmann D. (1998). Event-related electric microstates of the brain differ between words with visual and abstract meaning. Electroenceph Clin Neurophysiol 106:535–546.
Koenig T., Lehmann D., Merlo M. C. G., Kochi K., Hell D., and Koukkou M. (1999). A deviant EEG brain microstate in acute, neuroleptic-naive schizophrenics at rest. Europ Arch Psychiat Clin Neurosci 249:205–211.
Koenig T., Prichep L. S., Lehmann D., Valdes-Sosa P., Braeker E., Kleinlogel H., et al. (2002). Millisecond by millisecond, year by year: Normative EEG microstates and developmental stages. NeuroImage 16:41–48.
Kondakor I., Lehmann D., Michel C. M., Brandeis D., Kochi K., and Koenig T. (1997). Prestimulus EEG microstates influence visual event-related potential microstates in field maps with 47 channels. J Neural Transm (Gen Sect) 104(2–3):161–173.
Koukkou M. and Lehmann D. (1976). Human EEG spectra before and during cannabis hallucinations. Biol Psychiat 11(6):663–677.
Koukkou M. and Lehmann D. (1983). Dreaming: The functional state-shift hypothesis, a neuropsychophysiological model. Brit J Psychiat 142(3):221–231.
Koukkou M., Lehmann D., and Angst J. (eds.) (1980). Functional States of the Brain: Their Determinants. Amsterdam: Elsevier.
Koukkou M., Lehmann D., Strik W. K., and Merlo M. C. (1994). Maps of microstates of spontaneous EEG in never-treated acute schizophrenia. Brain Topography 6(3):251–252.
Kounios J. and Smith R. W. (1995). Speed-accuracy decomposition yields a sudden insight into all-or-none information processing. Acta Psychol 90(1–3):229–241.
Kuhlo W., Heintel H., and Vogel F. (1969). The 4–5 c/sec rhythm. Electroenceph Clin Neurophysiol 26(6):613–618.
LeDoux J. E., Wilson D. H., and Gazzaniga M. S. (1977). A divided mind: Observations on the conscious properties of the separated hemispheres. Ann Neurol 2(5):417–421.
Lehmann D. (1971). Multichannel topography of human alpha EEG fields. Electroenceph Clin Neurophysiol 31(5):439–449.
Lehmann D. (1977). Cortical activity and phases of the respiratory cycle. Proc. 18th Int. Congress, International Society for Neurovegetative Research, Tokyo, Japan, pp. 87–89. URL: http://dx.doi.org/10.5167/uzh-77939 (accessed February 28, 2013).
Lehmann D. (1990). Brain electric microstates and cognition: The atoms of thought. In John E. R. (ed.) Machinery of the Mind. Boston, MA: Birkhäuser, pp. 209–224.
Lehmann D., Faber P. L., Achermann P., Jeanmonod D., Gianotti L. R. R., and Pizzagalli D. (2001). Brain sources of EEG gamma frequency during volitionally meditation-induced, altered states of consciousness, and experience of the self. Psychiatry Res: Neuroimaging 108(2):111–121.
Lehmann D., Faber P. L., Galderisi S., Herrmann W. M., Kinoshita T., Koukkou M., et al. (2005). EEG microstate duration and syntax in acute, medication-naive, first-episode schizophrenia: A multi-center study. Psychiatry Res Neuroimaging 138(2):141–156.
Lehmann D., Faber P. L., Tei S., Pascual-Marqui R. D., Milz P., and Kochi K. (2012). Reduced functional connectivity between cortical sources in five meditation traditions detected with lagged coherence using EEG tomography. Neuroimage 60(2):1574–1586.
Lehmann D., Grass P., and Meier B. (1995). Spontaneous conscious covert cognition states and brain electric spectral states in canonical correlations. Int J Psychophysiol 19(1):41–52.
Lehmann D., Ozaki H., and Pal I. (1987). EEG alpha map series: Brain microstates by space-oriented adaptive segmentation. Electroenceph Clin Neurophysiol 67(3):271–288.
Lehmann D., Pascual-Marqui R. D., and Michel C. (2009). EEG microstates. Scholarpedia 4(3):7632. URL: http://goo.gl/uks7i (accessed February 28, 2013).
Lehmann D., Pascual-Marqui R. D., Strik W. K., and Koenig T. (2010). Core networks for visual-concrete and abstract thought content: A brain electric microstate analysis. NeuroImage 49(1):1073–1079.
Lehmann D. and Skrandies W. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroenceph Clin Neurophysiol 48(6):609–621.
Lehmann D., Strik W. K., Henggeler B., Koenig T., and Koukkou M. (1998). Brain electric microstates and momentary conscious mind states as building blocks of spontaneous thinking: I. Visual imagery and abstract thoughts. Int J Psychophysiol 29(1):1–11.
Lehmann D., Wackermann J., Michel C. M., and Koenig T. (1993). Space-oriented EEG segmentation reveals changes in brain electric field maps under the influence of a nootropic drug. Psychiatry Res Neuroimaging 50(4):275–282.
Lévi-Strauss C. (1955). Tristes Tropiques. Paris: Plon.
Libet B., Gleason C. A., Wright E. W., and Pearl D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain 106:623–642.
Mark V. (1996). Conflicting communicative behavior in a split-brain patient: Support for dual consciousness. In Hameroff S. R., Kaszniak A. W., and Scott A. C. (eds.) Toward a Science of Consciousness: The First Tucson Discussions and Debates. Cambridge, MA: MIT Press, pp. 189–196.
McFadden J. (2002). Synchronous firing and its influence on the brain's electromagnetic field: Evidence for an electromagnetic field theory of consciousness. J Consciousness Stud 9(4):23–50.
Strik W. K., Chiaramonti R., Muscas G. C., Paganini M., Mueller T. J., Fallgatter A. J., et al. (1997). Decreased EEG microstate duration and anteriorisation of the brain electrical fields in mild and moderate dementia of the Alzheimer type. Psychiatry Res 75(3):183–191.
Strik W. K., Dierks T., Becker T., and Lehmann D. (1995). Larger topographical variance and decreased duration of brain electric microstates in depression. J Neural Transm (Gen Sect) 99:213–222.
Stroud J. M. (1955). The fine structure of psychological time. In Quastler H. (ed.) Information Theory in Psychology. Glencoe, IL: Free Press.
Tei S., Faber P. L., Lehmann D., Tsujiuchi T., Kumano H., Pascual-Marqui R. D., et al. (2009). Meditators and non-meditators: EEG source imaging during resting. Brain Topography 22(3):158–165.
Trehub A. (1969). A Markov model for modulation periods in brain output. Biophys J 9(7):965–969.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16(2):310–330.
Trimmel M. and Schweiger E. (1998). Effects of an ELF (50 Hz, 1 mT) electromagnetic field (EMF) on concentration in visual attention, perception and memory including effects of EMF sensitivity. Toxicol Lett 96–97:377–382.
Uleman J. S. and Bargh J. A. (eds.) (1989). Unintended Thought. New York: Guilford Press.
Van Rullen R., Carlson T., and Cavanagh P. (2007). The blinking spotlight of attention. Proc Natl Acad Sci USA 104(49):19204–19209.
Wackermann J., Lehmann D., Michel C. M., and Strik W. K. (1993). Adaptive segmentation of spontaneous EEG map series into spatially defined microstates. Int J Psychophysiol 14(3):269–283.
Wackermann J., Putz P., Buchi S., Strauch I., and Lehmann D. (2002). Brain electrical activity and subjective experience during altered states of consciousness: Ganzfeld and hypnagogic states. Int J Psychophysiol 46:123–146.
Wever R. (1977). Effects of low-level, low-frequency fields on human circadian rhythms. Neurosci Res Program Bull 15(1):39–45.
Whitehead A. N. (1929). Process and Reality: An Essay in Cosmology. New York: Macmillan.
Wolff C. (1720). Vernünfftige Gedancken von Gott, der Welt und der Seele des Menschen, auch allen Dingen überhaupt. Halle, Germany: Rengerische Buchhandlung.
Woodworth R. S. and Schlosberg H. (1954). Experimental Psychology. New York: Holt.
Yuan H., Zotev V., Phillips R., Drevets W. C., and Bodurka J. (2012). Spatiotemporal dynamics of the brain at rest: Exploring EEG microstates as electrophysiological signatures of BOLD resting state networks. NeuroImage 60(4):2062–2072.
7 A foundation for the scientific study
of consciousness
Arnold Trehub
For any instance of conscious content, there is a corresponding analog in the biophysical
state of the brain.
Brain representations are transparent because they are about one's world
and are not experienced as the activity of one's brain. A conscious brain
representation is privileged because no one other than the owner of the
egocentric space can experience its contents from the same perspective.
These conditions are definitive of subjectivity.
[Figure 7.2 here: the 3-D retinoid (egocentric space), with Z-planes from nearest to farthest, the self-locus, and the I-token (I!) linked to sensory/perceptual experience and needs & motives.]
Fig. 7.2 The retinoid system. The self-locus anchors the I-token (I!) to
the retinoid origin of egocentric space. I! has reciprocal synaptic links
to all sensory/cognitive processes. The retinoid system is the brain's
substrate for the first-person perspective; that is, subjectivity (Trehub
2007).
[Figure 7.3 here: schematic of the world, body, and brain; retinoid space contains the phenomenal world with a self image and I-tokens (I!), linked to unconscious processors and action.]
Fig. 7.4 Illusory experience of a central surface sliding over the
background (after Pinna and Spillmann 2005).
it has the neuronal mechanisms that accomplish this task. When the
perspective drawing is viewed, the heuristic self-locus traces the converging
perspective lines through the depth of the retinoid's Z-planes. As this
happens, the excitation patterns on the depth planes are successively primed
and objects are represented in the retinoid's Z-plane space from near
to far. Because of the retinoid's size-constancy mechanism, the brain's
representation of the far disc in the 2D display is enlarged relative to
the near disc, and this is reflected in the relative size of fMRI activation
in V1. Thus what has been a puzzling illusion is explained in the
retinoid model as the result of the natural operation of a particular kind
of neuronal brain mechanism.
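The size-constancy step in this account can be caricatured in a few lines. Assuming, purely for illustration, that the represented size scales with the depth of the Z-plane an object is primed onto (a stand-in for the retinoid model's actual mechanism), two discs of identical display size end up with different internal sizes:

```python
def represented_size(angular_size, z_plane_depth):
    # Toy size-constancy rule: represented size grows with assigned depth
    # (an illustrative stand-in, not the retinoid model's real mechanism)
    return angular_size * z_plane_depth

# Two discs of identical size in the 2D display, primed onto near and far
# Z-planes by the traced perspective lines:
near = represented_size(1.0, z_plane_depth=2.0)
far = represented_size(1.0, z_plane_depth=5.0)
print(near, far)  # the far disc's representation is larger
```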
7.4 Conclusion
I have argued that a solid foundation for the scientific study of conscious-
ness can be built on three general principles. First is the metaphysical
assumption of dual-aspect monism in which private descriptions and
public descriptions are separate accounts of a common underlying real-
ity. Second is the adoption of the bridging principle of corresponding
REFERENCES
Lübke J., Markram H., Frotscher M., and Sakmann B. (1996). Frequency and dendritic distribution of autapses established by layer 5 pyramidal neurons in the developing rat neocortex: Comparison with synaptic innervation of adjacent neurons of the same class. J Neurosci 16:3209–3218.
Murray S. O., Boyaci H., and Kersten D. (2006). The representation of perceived angular size in human primary visual cortex. Nat Neurosci 9:429–434.
Pereira A., Jr. and Ricke H. (2009). What is consciousness? Towards a preliminary definition. J Consciousness Stud 16:28–45.
Pereira A., Jr., Edwards J. C. W., Lehmann D., Nunn C., Trehub A., and Velmans M. (2010). Understanding consciousness: A collaborative attempt to elucidate contemporary theories. J Consciousness Stud 17:213–219.
Pinna B. and Spillmann L. (2005). New illusions of sliding motion in depth. Perception 34:1441–1458.
Tamás G., Buhl E. H., and Somogyi P. (1997). Massive autaptic self-innervation of GABAergic neurons in cat visual cortex. J Neurosci 17:6352–6364.
Trehub A. (1977). Neuronal models for cognitive processes: Networks for learning, perception and imagination. J Theor Biol 65:141–169.
Trehub A. (1978). Neuronal model for stereoscopic vision. J Theor Biol 71:479–486.
Trehub A. (1991). The Cognitive Brain. Cambridge, MA: MIT Press.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16:310–330.
Trehub A. (2011). Evolution's gift: Subjectivity and the phenomenal world. Journal of Cosmology 14:4839–4847.
Trehub A. (2013). Where am I? Redux. J Consciousness Stud 20(1–2):207–225.
van der Loos H. and Glaser E. M. (1972). Autapses in neocortex cerebri: Synapses between a pyramidal cell's axon and its own dendrites. Brain Res 48:355–360.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. New York: Routledge.
8 The proemial synapse:
Consciousness-generating glial-neuronal
units
Bernhard J. Mitterauer
8.1 Introduction
Present philosophical foundations of neuroscience (Bennett and
Hacker 2003) are exclusively based on the functions of the neuronal
system. But the first and elementary philosophical question should be:
Why has nature created our brain with a double cellular structure consist-
ing of both the neuronal and the glial systems? Therefore, a real natural
philosophy of the brain must refer to the structures and functions of both
cell types or systems. Neurophilosophy or philosophy of neuroscience
[Figure 8.1 here: schematic of the glutamatergic tripartite synapse, showing the presynapse and postsynapse, the astrocyte and astrocytic syncytium coupled by gap junctions (g.j.), glutamate (GLU), Ca2+, ATP, the receptors prR, acR, and poR, transporter molecules (t), vesicles (v), and the numbered steps 1–11 described in the caption, together with negative feedback and connections to neuronal networks.]
Fig. 8.1 Schematic diagram of possible glial-neuronal interactions at the glutamatergic tripartite synapse (modified after Newman 2005). Sensorimotor
networks compute environmental information, activating the presynapse (1). The activated presynapse releases glutamate (GLU) from
vesicles (v), occupying both postsynaptic receptors (poR) and receptors on the astrocyte (acR) (2). GLU also activates gap junctions (g.j.) in the
astrocytic syncytium, enhancing the spreading of Ca2+ waves (3). In parallel, the occupancy of acR by GLU also activates Ca2+ within the
astrocyte (4). This mechanism drives the production of GLU (5) and adenosine triphosphate (ATP) (6) within the astrocyte, now functioning as
gliotransmitters. Whereas the occupancy of the extrasynaptic pre- and postsynaptic receptors by GLU is excitatory (7), the occupancy of these
receptors by ATP is inhibitory (8). In addition, neurotransmission is also inactivated by the reuptake of GLU in the membrane of the presynapse,
mediated by transporter molecules (t) (9). ATP inhibits the presynaptic terminal via occupancy of cognate receptors (prR), temporarily turning
off synaptic neurotransmission in the sense of a negative feedback (10). Synaptic information processing is transmitted to neuronal networks,
activating the synapse again (11).
8.4.1 Hypothesis
According to Guenther (1976), there are two basic ways in which brain
research can proceed. One can treat the brain as a mere physical piece of
matter. Or, we can investigate how nature has constructed all its compo-
nents, and following which laws or principles behavior is produced. The
second approach in brain theory is faced with both theoretical and
technical obstacles, since it is incapable of unraveling how the brain
contributes to the solution of the riddle of subjectivity. Instead of going uphill from
the cellular or molecular level, we may proceed by posing the following
questions: What is the highest achievement of the human brain? Which
role does subjectivity play? How and where does consciousness arise?
Presently, I start out with this question: where and how in the brain
could the basic interplay between the subjective and objective parts of
subjectivity be generated, based on the dialectics of volition and cogni-
tion?
My hypothesis is that the interplay of the subjective subjectivity (Ego) and the objective subjectivity (Thou), or in other words, the dialectics of volition and cognition, already occurs at the synaptic level of the brain. Applying the model of a GNU, a component embodying the subjective (volitional) subjectivity and a second component embodying the objective (cognitive) subjectivity can be described. The subjective volitional functions are formalized as ordered relations, the objective cognitive functions as exchange relations. Both synaptic components and their special types of relations interact in a dialectic manner, generating a cyclic proemial relationship (Guenther 1976). This novel type of relationship may underlie all consciousness-generating processes in the brain based on intersubjective reflection.
Ri(xi−1, yi−1), where Ri+1 = xi+1 or yi+1. The subscript i signifies higher or lower logical orders.
[Figure: schematic of a glial-neuronal unit embedded in its environment, showing the presynaptic and postsynaptic neurons, surrounding glia coupled by gap junctions (gj), neurotransmitters (NT), gliotransmitters (GT), and presynaptic, postsynaptic, and glial receptors (peR, poR, gR).]
neuronal cell systems may allow the interpretation that the former system operates more subjectively and the latter system more objectively.
Pereira and Furlan (2010) argued that the astroglial network is the organism's Master Hub, which integrates somatic signals with neuronal processes to generate subjective feelings. Neuro-astroglial interactions in the whole brain compose the domain where the knowing and feeling components of consciousness come together (Pereira Jr, this volume, Chapter 10). As long as orthodox neuroscience does not consider the functions of the glial system and its pertinent interactions with the neuronal system, a distinction between a subjective and an objective component of the operations of the brain, as an organ that embodies and generates subjectivity, is not possible. Moreover, our models open a new window onto the study of the brain basis of the pathophysiology of mental disorders and ethics.
[Figure: four neurons (N1–N4) with synapses (Sy) contacting two astrocytes (Acx, Acy) coupled by a gap junction (g.j.); the astrocytic processes (P1–P4) carry receptor qualities (Rq: a, ab, abc, abcd).]
[Figure: tritogrammatic tree, listing the tritograms of lengths 2 through 5 in columns.]
Fig. 8.4 Tritogrammatic tree. Generation of 52 tritograms (n = 5), corresponding to 52 astrocytic processes. Each tritogram represents a qualitative astrocytic receptor sheet. The structure of tritograms with lengths 1–5 is represented by a tree. Generation rule: a tritogram x of length n + 1 may be generated from a tritogram y of length n if x is equal to y on the first n places; for example, 12133 may be generated from 1213 but not from 1212. The numerals represent places of the same or different qualities, interpreted as astrocytic receptors on the endfeet of the processes. Each tritogram corresponds to an astrocytic process.
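For readers who want to reproduce the counts in the caption, the generation rule can be sketched as an enumeration of restricted-growth sequences. The sketch below is my own illustration, not part of Mitterauer's text; it assumes that tritograms start at 1 and that each new place may exceed the running maximum of the earlier places by at most one, which reproduces the stated counts:

```python
def tritograms(n):
    """Enumerate tritograms of length n: the first place is 1, and each
    subsequent place may exceed the maximum of the earlier places by at
    most 1 (so 1213 extends to 12131 ... 12134, while 12133 arises from
    the prefix 1213, never from 1212)."""
    seqs = [(1,)]
    for _ in range(n - 1):
        seqs = [s + (v,) for s in seqs for v in range(1, max(s) + 2)]
    return seqs

# The counts follow the Bell numbers: 2, 5, 15, 52 tritograms for
# n = 2, 3, 4, 5 -- matching the 52 astrocytic processes of Fig. 8.4.
print([len(tritograms(n)) for n in range(2, 6)])  # [2, 5, 15, 52]
```

The 15 tritograms of length 4 correspond to the 15 columns of the tree, and the 52 of length 5 to the 52 astrocytic processes named in the caption.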
248 Bernhard J. Mitterauer
[Figure: the 15 tritograms of length 4, numbered [1]–[15], interpreted as receptor qualities on astrocytic processes; each tritogram corresponds to one astrocytic process.]
the issue of how and where such motile behavior of astrocytes may be controlled.
to adopt about them for the purpose of predicting their behavior. Bennett and Hacker (2003) are right in their criticism that Dennett misconstrues what modern philosophers since Brentano have called intentionality. Of course, intention and intentionality implicitly play a role in theoretical neurobiology, but these conceptions are mostly left undefined (Kelso 2000). In the chaos-theoretical approach to brain function, especially, a definition of these conceptions is not yet possible (Werner 2004). Here an attempt is made to define the conception of intentional programs in terms of the underlying theory of intentionality. Accordingly, an intentional program generates a specific multi-relational structure in an appropriate inner or outer environment, based on the principle of feasibility of that program.
[Figure: a synapse (Sy) coupled via gap junctions (g.j.) to a syncytium of six astrocytes (Ac1–Ac6).]
[Table: the 24 permutations of the values 1–4, numbered 1–24.]
[Table: generation of the permutations by the negation operators (N1, N2, N3), which exchange the value pairs 1 and 2, 2 and 3, and 3 and 4, respectively.]
computer systems for robot brains have also been proposed (Mitterauer 1988; Thomas and Mitterauer 1989). Here, I attempt to elaborate further on this possible intentional programming in our brains, focusing on glial-neuronal interaction.
Table 8.4 The 24 permutations of the quadrivalent value system and (below) the 24 Hamilton loops generated from them.
permutations
1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4
2 2 3 3 4 4 1 1 3 3 4 4 1 1 2 2 4 4 1 1 2 2 3 3
3 4 2 4 2 3 3 4 1 4 1 3 2 4 1 4 1 2 2 3 1 3 1 2
4 3 4 2 3 2 4 3 4 1 3 1 4 2 4 1 2 1 3 2 3 1 2 1
number of the permutation 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Hamilton loop 1 1 8 24 9 17 16 2 7 23 10 18 15 3 6 22 11 19 14 4 5 21 12 20 13
Hamilton loop 2 24 1 17 8 16 9 23 2 18 7 15 10 22 3 19 6 14 11 21 4 20 5 13 12
Hamilton loop 3 24 17 1 16 8 9 23 18 2 15 7 10 22 19 3 14 6 11 21 20 4 13 5 12
Hamilton loop 4 17 24 16 1 9 5 18 22 15 2 10 7 19 22 14 3 11 6 20 21 13 4 12 5
Hamilton loop 5 17 16 24 9 1 9 18 15 23 10 2 7 19 14 22 11 3 6 20 13 21 12 4 5
Hamilton loop 6 16 17 9 24 8 1 15 18 10 23 7 2 14 19 11 22 6 3 13 20 12 21 5 4
Hamilton loop 7 24 7 23 8 16 15 1 6 22 9 17 14 2 5 21 10 18 13 3 4 20 11 19 12
Hamilton loop 8 23 24 16 7 15 8 22 1 17 6 14 9 21 2 18 5 13 10 20 3 19 4 12 11
Hamilton loop 9 23 16 24 15 7 8 22 17 1 14 6 9 21 18 2 13 5 10 20 19 3 12 4 11
Hamilton loop 10 16 23 15 24 8 7 17 22 14 1 9 6 18 21 13 2 10 5 19 20 12 3 11 4
Hamilton loop 11 16 15 23 8 24 7 17 14 22 9 1 6 18 13 21 10 2 5 19 12 20 11 3 4
Hamilton loop 12 15 16 8 23 7 24 14 19 9 22 6 1 13 18 10 21 5 2 12 19 11 20 4 3
Hamilton loop 13 23 22 6 15 7 14 24 21 5 16 8 13 1 20 4 17 9 12 2 19 3 18 10 11
Hamilton loop 14 22 23 15 6 14 7 21 24 16 5 13 8 20 1 17 4 12 9 19 2 18 9 11 10
Hamilton loop 15 22 15 23 14 6 7 21 16 24 13 5 8 20 17 1 12 4 9 19 18 2 11 3 10
Hamilton loop 16 15 22 14 23 7 6 16 21 13 24 8 5 17 20 12 1 9 4 18 19 11 2 10 3
Hamilton loop 17 15 14 22 7 23 6 16 13 21 8 24 5 17 12 20 9 1 4 18 11 19 10 2 3
Hamilton loop 18 14 15 7 22 6 23 13 16 8 21 5 24 12 17 9 20 4 1 11 18 10 19 3 2
Hamilton loop 19 22 5 21 6 14 13 23 4 20 7 15 12 24 3 19 8 16 11 1 2 18 9 17 10
Hamilton loop 20 5 22 6 21 13 14 4 23 7 20 12 15 3 24 8 19 11 16 2 1 9 18 10 17
Hamilton loop 21 5 6 22 13 21 14 4 7 23 12 20 15 3 8 24 11 19 16 2 9 1 10 18 17
Hamilton loop 22 14 21 13 22 6 5 15 20 12 23 7 4 16 19 11 24 8 3 17 18 10 1 9 2
Hamilton loop 23 14 13 21 6 22 5 15 12 20 7 23 4 16 11 19 8 24 3 17 10 18 9 1 2
Hamilton loop 24 13 14 6 21 5 22 12 15 7 20 4 23 11 16 8 19 3 24 10 17 9 18 2 1
The permutation where the counting starts is stepwise displaced from the extreme left to the extreme right; however, one can start on any permutation. The matrix shows 24 Hamilton loops.
Biologically speaking, a Guenther matrix formalizes a combinatorics of
cyclic pathways generated in an astrocytic syncytium. The length of a
cycle determines the expansion of an astrocytic domain.
If each astrocyte forms a domain as a glial-neuronal unit (Hub), the
interactions of astrocytes via gap junctions can produce larger domains
(Master Hubs). A Hamilton loop generates a specific cyclic pathway
through the gap junctions of two or more astrocytes dependent on the
valuedness of the permutation system. Note that in a quadrivalent permutation system (shown in Table 8.4), the Hamilton loops define a complex
combinatorics within a Master Hub. This may be in accordance with
the concept of a dynamic compartmentalization in the glial syncytium
(Dermietzel 1998).
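The combinatorial growth behind these loop counts can be made concrete with a brute-force counter. The following is an illustrative sketch of my own (a generic undirected graph, with astrocytes taken as hypothetical vertices and gap junctions as edges), not the Guenther-matrix construction used in the chapter:

```python
from itertools import permutations

def count_hamilton_loops(n, edges):
    """Count directed Hamilton loops by testing every ordering of the
    vertices 1..n-1 after fixing vertex 0 -- work that grows as (n-1)!."""
    adj = {(a, b) for a, b in edges} | {(b, a) for a, b in edges}
    count = 0
    for order in permutations(range(1, n)):
        path = (0,) + order
        if all((path[i], path[i + 1]) in adj for i in range(n - 1)) \
                and (path[-1], path[0]) in adj:
            count += 1
    return count

# A fully gap-junction-coupled syncytium of 5 astrocytes: every ordering
# closes into a loop, giving (5-1)! = 24 directed Hamilton loops.
complete = [(i, j) for i in range(5) for j in range(i)]
print(count_hamilton_loops(5, complete))  # 24
```

Checking every ordering costs (n − 1)! tests, which is why exhaustive search becomes infeasible already for modestly sized syncytia.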
Generally, a loop, or a cycle, can be interpreted as an elementary reflection system. Importantly, in the generation of a Hamilton loop based on negative language, a special kind of reflection is hidden, akin to the proemial relations already described in GNUs. On closer inspection, negations operate as a cyclic proemial relationship, as illustrated in Fig. 8.6.
In each negation (N1, N2, N3), first the lower value dominates the higher one, then the relation reverses, since the higher value now dominates the lower one. Therefore, a negation operates as a cyclic proemial relationship. If we assume that the proemial relationship may underlie all consciousness-generating processes in the brain based on intersubjective reflection, then the astrocytic syncytia can also be interpreted as elementary consciousness-generating systems.
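The cyclic action of these negations can be sketched in a few lines of code. This is my own illustration, following the Guenther-style convention that the negation Ni exchanges the values i and i + 1; it is not a formula from the chapter:

```python
def negate(perm, i):
    """Negation operator Ni: exchange the values i and i+1 wherever
    they occur, leaving all other values fixed."""
    swap = {i: i + 1, i + 1: i}
    return tuple(swap.get(v, v) for v in perm)

start = (1, 2, 3, 4)
# Applying a negation twice restores the permutation: the relation
# "reverses", as in a cyclic proemial relationship.
assert negate(negate(start, 2), 2) == start

# Repeated application of N1, N2, N3 reaches all 24 permutations of a
# quadrivalent value system -- the columns of the Hamilton-loop matrix.
seen, frontier = {start}, [start]
while frontier:
    p = frontier.pop()
    for i in (1, 2, 3):
        q = negate(p, i)
        if q not in seen:
            seen.add(q)
            frontier.append(q)
print(len(seen))  # 24
```

Because each Ni is an exchange of neighboring values, chains of negations walk cyclically through the whole permutation system, which is what the Hamilton loops of Table 8.4 traverse.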
Mathematically, the Hamilton loop problem is a problem of type NP, which stands for non-deterministic polynomial time. Cook (1971) provided a means of demonstrating that certain NP-type problems are highly unlikely to be solvable by an efficient, polynomial-time algorithm. Moreover, Cook (1971) proved that if one of these NP-complete problems could be solved by a polynomial-time algorithm, every other problem of type NP could also be solved in polynomial time. Since computing the number of Hamilton loops already becomes intractable in a pentavalent (n = 5) permutation system (Thomas and Mitterauer 1989), our brain may be endowed with an unimaginable reflection potency. Of course, the neuronal networks also play a basic
[Table: a sequence of negation operators (row N) and the permutations of the values 1–4 that they generate.]
virtue. There was no result until Protagoras realized that the opponents
only repeatedly changed their positions (Plato 1903).
Admittedly, my mainly theoretical interpretation of synaptic information processing as a basic mechanism of intersubjective reflection is not really experimentally testable. However, it can demonstrate the limits of experimental brain research, not only concerning consciousness research, but also in regard to explaining the individuality of subjective systems such as animals and humans. As I have mentioned again and again, robotics may represent an alternative approach (Mitterauer 2000). If we are able to implement principles of subjectivity in a robot brain based on biomimetic structures and functions, we can learn from its behavior where we are right and where we are wrong, or where we are confronted with the metaphysical limits of our scientific investigations. As a modest step in this direction, we have proposed a biomathematical model of intentional autonomous multiagent systems (Pfalzgraf and Mitterauer 2005). It represents a formal interpretation of the components and interactions in a glial-neuronal synapse, especially considering the role of intentions in the design of conscious robots.
REFERENCES
Aquinas T. St. (1988). In Martin C. (ed.) The Philosophy of Thomas Aquinas: Introductory Readings. New York: Routledge, pp. 38–49.
Araque A., Parpura V., Sanzgiri R. P., and Haydon P. G. (1999). Tripartite synapses: Glia, the unacknowledged partner. Trends Neurosci 22:208–215.
Atmanspacher H. and Rotter S. (2008). Interpreting neurodynamics: Concepts and facts. Cogn Neurodyn 2(4):297–318.
Auld D. S. and Robitaille R. (2003). Glial cells and neurotransmission: An inclusive view of synaptic function. Neuron 40:389–400.
Baars B. J. (2002). The conscious access hypothesis. Trends Cogn Sci 6:47–52.
Bennett M. R. and Hacker P. M. S. (2003). Philosophical Foundations of Neuro-
science. Malden, MA: Blackwell.
Brentano F. (1995). Psychology from an Empirical Standpoint. London: Routledge.
Buber M. (1970). I and Thou. New York: Free Press.
Chialvo D. R. (2006). The Brain Near the Edge. 9th Granada Seminar on Compu-
tational Physics, Granada, Spain. URL: http://arxiv.org/pdf/q-bio/0610041.
pdf (accessed February 28, 2013).
Churchland P. (2007). Neurophilosophy at Work. Cambridge University Press.
Cook S. A. (1971). The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing. New York: ACM, pp. 151–158.
Cooper M. S. (1995). Intercellular signalling in neuronal-glial networks. BioSystems 34:65–85.
Dennett D. (1978). The Intentional Stance. Cambridge, MA: Little, Brown.
Dermietzel R. (1998). Diversification of gap junction proteins (connexins) in the central nervous system and the concept of functional compartments. Cell Biol Int 22:719–730.
Dermietzel R. and Spray D. C. (1998). From neuroglue to glia: A prologue. Glia 24:1–7.
Gaietta G., Deerinck T. J., Adams S. R., Bouwer J., Tour O., Laird D. W., et al. (2002). Multicolor and electron microscopic imaging of connexin trafficking. Science 296:503–507.
Giaume C. and Theis M. (2009). Pharmacological and genetic approaches to study connexin-mediated channels in glial cells of the central nervous system. Brain Res Rev 63(1–2):160–176.
Gourine A. V., Kasymov V., Marina N., Tang F., Figueiredo M. F., Lane S., et al. (2010). Astrocytes control breathing through pH-dependent release of ATP. Science 329:571–575.
Guenther G. (1962). Cybernetic ontology and transjunctional operations. In Yovits M. C., Jacobi G. T., and Goldstein G. D. (eds.) Self-Organizing Systems. Washington DC: Spartan Books, pp. 313–392.
Guenther G. (1963). Das Bewußtsein der Maschinen. Baden-Baden: Agis-Verlag.
Guenther G. (1966). Superadditivity. Biological Computer Laboratory 3,3. Urbana,
IL: University of Illinois.
Guenther G. (1976). Beiträge zur Grundlegung einer operationsfähigen Dialektik, Vol. 1. Hamburg: Meiner.
Guenther G. (1980). Martin Heidegger und die Weltgeschichte des Nichts. In Guenther G. (ed.) Beiträge zur Grundlegung einer operationsfähigen Dialektik. Hamburg: Meiner.
Leonid Perlovsky
9.1 Introduction
Cognitive modeling of consciousness assumes that conscious process-
ing enables attention to mental processes (Baars 1988; Perlovsky 2006a,
2011). During evolution, it is possible that this ability became adaptive
when the increasing complexity of mental life and its corresponding brain
functions offered choices beyond mere instinctual drives. The evolution
of consciousness consisted of a differentiation of the psyche into vari-
ous aspects. With the emergence of language, human cultural evolution
overtook genetic evolution. Using language, humans have created a large
number of mental representations which are available to consciousness.
How does language interact with cognition? What are the functions of conscious and unconscious processes in everyday conversations, thinking, and creativity?
Humans subjectively feel conscious most of the time, but most mental operations are unconscious. In this chapter, I consider the functions of conceptual and emotional mechanisms; free will and how it can be reconciled with scientific determinism; the scientific understanding of Self; aesthetic emotions in language and cognition; and the beautiful, the sublime, and musical emotions, their cognitive functions, and cultural evolution.
In simple organisms, only minimal adaptation is required. An instinct directly wired to action is sufficient for survival, and unconscious processes can efficiently allocate resources and will. However, in complex organisms, various instincts might contradict one another. Undifferentiated unconscious mental functions result in ambivalence and ambitendency; every position entails its own negation, leading to an inhibition. This inhibition cannot be resolved by unconscious processes that do not differentiate among alternatives (see Godwin et al., this volume, Chapter 2). The ability for conscious processing is needed to resolve
these instinctual contradictions, by suppressing some processes and allo-
cating power to others. By differentiating alternatives, consciousness can
direct a psychological function to a goal.
This chapter emphasizes that consciousness is not a single word with a
capital C but a differentiated phenomenon. It appears in the evolution
of life, initially with simple contents, and gradually differentiates. Its
contents become diverse and complex. The biological evolution from
animals to humans and the cultural evolution of humans consist mostly
of a differentiation of the contents of consciousness.
In the pre-human world, the differentiation of the psyche and the increase of consciousness (the differentiation of its contents) was a slow process. One reason is possibly a fundamental ambivalence of the value of consciousness for organisms. Whereas consciousness requires differentiation of
mental functions, survival demands unification; all mental mechanisms
of an organism must be coordinated with instinctual drives and among
themselves. Evolutionary increase in differentiation and consciousness
is advantageous only if it is paralleled with unified functioning; in other
words, differentiation must be combined with unity.
This interplay between differentiation and unification suggests that the
evolution of consciousness must be a slow genetic process coordinating
differentiation and unification. Indeed, mental states of animals seem to
be unified. Animals are capable of complex dishonest behavior (e.g.,
when distracting a predator from a nest), but this behavior has devel-
oped with evolution; an individual animal is not making a conscious
decision and (as far as existing data suggest) does not face paralyzing
A cognitive model of language and conscious processes 267
9.2.2 Dynamic logic and the role of mathematics in the scientific method
Barsalou (1999) emphasized that representations are distributed in the
brain (e.g., color is stored in a different part of the brain than shape).
During a concrete perception or cognition, the concept-representation
required to match an object or event in the world is reassembled, or a
similar experience is recreated from memory. These processes Barsalou
(1999) called simulators.
A mathematical theory of these simulator processes, modeling the emergence of concrete and conscious representations from vague, distributed, and unconscious ones, has been developed in Perlovsky (1987, 2001, 2006a,b, 2007c, 2009b, 2010b), in Ilin and Perlovsky (2010), and in Perlovsky and Ilin (2010a).
Mathematical models are essential for science. Even so, mathematics by itself does not prove anything about the world, either outside or inside the mind. The power of mathematics, and its importance for science, comes from its use in the scientific method. The scientific method began with Newton's mathematical models of planetary motions. These models described the known motions of the planets and predicted unknown phenomena, such as the existence and orbit of Neptune. Theoretical predictions of unknown phenomena, and confirmations of these predictions by experimental observations, constitute the essence of the scientific method.
Mathematics by itself does not explain nature; the most fundamental
essence of science is scientific intuitions about how the world works.
Whereas it is possible to have many different intuitions about complex
phenomena, mathematics leads to unambiguous predictions that could
be experimentally verified, thus proving or disproving the theory. Intuitions about the world, together with mathematical methods that describe these intuitions and explain a vast amount of available knowledge from a few first principles, are rare events signaling the coming of a new theory. Mathematically explaining vast knowledge from a few basic principles is what Einstein, Poincaré, Dirac, and other scientists called the beauty of a scientific theory, the first proof of its validity (Dirac 1982; Einstein [see McAllister 1999]; Poincaré 2001). The final proofs of a scientific theory are experimental confirmations of its mathematical predictions.
A mathematical theory predicting the theoretical and experimental results of Barsalou (1999) and Bar et al. (2006) has been developed in Perlovsky (1987, 2001, 2006a). This model (or theory) is called dynamic logic, and its fundamental property is the emergence of concrete and conscious representations from vague, distributed, and unconscious ones. The reason for its name, dynamic logic, is that unlike classical logic, which describes static states (e.g., this is a chair), dynamic logic describes processes: the evolution from vague toward concrete states.
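As a rough illustration of this vague-to-crisp process, consider the following toy sketch. It is not Perlovsky's actual model; the data, the Gaussian similarity measure, and the annealing schedule are all my illustrative assumptions. Two initially vague model-representations fuzzily share four signals, and as the vagueness (sigma) shrinks, the associations and the models become crisp:

```python
import math

data = [0.9, 1.1, 4.0, 4.2]   # "signals" from the world
models = [0.0, 5.0]           # initial vague representations
sigma = 4.0                   # large width = high vagueness

while sigma > 0.05:
    # fuzzy association of every signal with every model
    assoc = []
    for x in data:
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in models]
        total = sum(w)
        assoc.append([wi / total for wi in w])
    # move each model toward the signals it (fuzzily) explains
    models = [sum(assoc[i][j] * data[i] for i in range(len(data)))
              / sum(assoc[i][j] for i in range(len(data)))
              for j in range(len(models))]
    sigma *= 0.7              # anneal: vague -> crisp

print([round(m, 1) for m in models])  # [1.0, 4.1]
```

The models settle on the two signal clusters (about 1.0 and 4.1): crisp, conscious-style representations emerging from vague, distributed associations, which is the qualitative behavior dynamic logic is meant to capture.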
[Fig. 9.3 diagram: two parallel hierarchies. Cognition: sensory-motor signals, objects, situations, abstract ideas. Language: words for objects, phrases for situations, and higher language models, connected to the cognitive hierarchy at every level.]
Fig. 9.3 The dual hierarchy of language and cognition. Language learn-
ing is grounded in surrounding language at all levels of the hierarchy.
Learning of embodied cognitive models is grounded in direct experience
of sensory-motor perceptions only at the lower levels. At higher levels,
their learning from experience has to be guided by contents of language
models. This connection of language and cognition is motivated by the knowledge instinct (KI)
and the corresponding aesthetic emotions. Different emotionalities of
languages produce different cognition and different cultural evolutions.
acquired from personal experience, or what they have read or heard from
TV anchorpersons, teachers, or peers. High in the hierarchy all of us are
like five-year-old children: we talk, but we do not understand; the contents of cognitive representations are vague and unconscious, while due to the crispness of language representations we may remain convinced that these are our own clear, conscious thoughts.
To summarize this argument: abstract ideas cannot be perceived by the senses. Language acts like eyes for abstract concepts. But unlike eyes, these language eyes cannot be closed. We cannot switch off language and directly experience the vagueness and diminished consciousness of cognitive representations of abstract concepts. This principal difference between consciousness of language and consciousness of cognition creates many misunderstandings and false mysteries about consciousness (like the word consciousness itself).
This combination of conscious language and unconscious, vague cognition applies even more strongly to the highest ideas near the top of the mind hierarchy. Even distinguished scientists and philosophers can talk
at length about the meaning of life, or what is beautiful, or what is spir-
itually sublime, about their belief or disbelief in God, but do they really
understand? All these ideas are related to representations near the top of
mental hierarchy (Perlovsky 2007d, 2008, 2010a,d, 2011, 2012c, at press
c). Nobody can be conscious about contents of cognitive representations
at the highest levels.
This does not mean that conversations or books on this topic are useless. On the contrary, understanding the contents of the highest representations is extremely important for everyone's life. The more we understand, the better we can achieve what is important in our lives. And still, as we come to understand the contents of representations at high levels, the mind will always create still higher representations, whose contents will forever remain unconscious, hidden. We never become mechanistic automata; there is always room for mystery in human life and beliefs.
9.7 Creativity
Creativity is an ability to make the contents of mental representations more conscious. In this way, every learning process involves creativity. The word learning is used when one makes more conscious the personal contents that have already existed in culture and in language; learning is usually directed from language (collective culture) to cognition (personal understanding). Creativity is reserved for the personal discovery of contents that have not existed in culture and language. Creativity is directed
1 We note that the meaning of the word heuristic has changed over the centuries. When Archimedes cried out Eureka! in the streets of his city, Syracuse, he meant a genuinely creative discovery. Today, especially in cognitive science after Tversky and Kahneman (1974), heuristic means readily available knowledge.
discussed within classical logic. Let us repeat: free will is not about logically connecting two logical states of mind.
How can the question about free will be answered within the devel-
oped theory of mind? Free will does not exist in inanimate matter. Free
will exists as a cultural concept-representation (in addition it has ancient
animal roots, possibly much older than representations). The contents
of this concept include all related discussions in cultural texts, literature,
poetry, art, in cultural norms. This cultural knowledge gives the basis for
developing corresponding language representations in individual minds;
language representations are mostly conscious. Clearly, individuals differ
by how much cultural contents they acquire from surrounding language
and culture. The dual model suggests that, based on this personal lan-
guage representation of free will, every individual develops his or her
personal cognitive representation of this idea, which assembles his or
her related experiences in real life, language, thinking, and acting into
a coherent whole (Perlovsky 2011). Free will really exists in the ability
to make logical and conscious decisions from unconscious and illogi-
cal contents of cognitive representations. Free will is a mental ability to
increase ones understanding and to make conscious decisions by bringing
unconscious contents into consciousness.
9.9.1 Self
The culturally acquired knowledge of free will becomes an inseparable part of the intuition of Self. Self is another, sometimes controversial, topic in discussions of consciousness. What is Self? Self is more than a concept. Like all concepts, we understand it due to the corresponding mental representation. But this representation is likely of a more ancient origin than most representations in the human mind. It belongs to what Jung (1921) called archetypes, and to what today we might understand as primordial neural mechanisms, precursors of representations. The unified perception of the entire functioning of the organism is imperative for survival. It existed unconsciously long before consciousness or representations originated. With the emergence of conscious, differentiated perception of one's own functioning, the Self became a representation. In simple organisms it was once an automatic, unconscious part of functioning. Today, humans' diverse knowledge has to be reconciled with a unitary understanding of Self, which has become a complex task imperative for survival. With the emergence of language, and the ability to consciously differentiate language representation
special names, but which comprise a larger part of conscious and unconscious emotional life.
An understanding of one's surroundings is imperative for survival and for the satisfaction of instinctual needs; therefore, the most important and fundamental instinctual mechanism is the instinct for knowledge (sometimes called the need for understanding), which drives the mind to improve the similarities between cognitive representations, the corresponding objects and events, and their language representations (the learning of language representations is driven by the instinct for language, Pinker 1994, which connects language only to surrounding language, not to the surrounding world). At lower levels of the mental hierarchy (say, objects) the knowledge instinct acts automatically, and the associated emotions of its satisfaction or dissatisfaction (if minor) remain below the threshold of consciousness. At higher levels these emotions are conscious. At the level of situations, these emotions become conscious if a situation is not understood or contradicts expectations (this is a staple of thriller movies). Positive aesthetic emotions may become conscious if we understand something after exerting significant effort.
Emotional research mostly discusses basic emotions, related to the satisfaction of bodily instincts. Basic emotions are usually named by words (such as rage); there are about 150 English words with emotional connotations, but only a few different basic emotions (Ortony and Turner
1990; Izard 1992; Ekman 1999; Russell and Barrett 1999; Lindquist
et al. at press; Petrov et al. at press). Richness of human emotional life
(Cabanac 2002) is mostly due to aesthetic emotions, which are experi-
enced as emotions of beautiful, sublime, as musical emotions, emotions
of cognitive dissonances, and emotions heard in prosody of human voice
(Perlovsky 2006b, 2007b, 2008, 2009a, 2010a,b,c,d,e, 2011, at press
a,d; Perlovsky et al. 2010; Fontanari et al. at press).
Emotions of the beautiful are aesthetic emotions related to satisfaction
of the instinct for knowledge at higher levels of the hierarchy of the mind
(Perlovsky 2001, 2006a, 2010a, 2010b,c,d, 2011). Contents of cognitive
representations at high levels are vague and unconscious. As previously
discussed, these contents are related to the meaning of life, and they
cannot be made conscious; yet the knowledge instinct drives the mind to
a better understanding of these contents. Understanding of the meaning
of life is so important that when we feel these contents becoming a bit more clarified and conscious, or even when we feel that something like this really exists, we feel the presence of the emotion of the beautiful.
The purpose of art since time immemorial has been to penetrate this mystery, to use this striving for meanings to invoke the feeling of the beautiful, and to convince us that the meaning really exists. So it is not
REFERENCES
Arbib M. A. (2005). From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behav Brain Sci 28:105–124.
Aristotle (1995). In Barnes J. (ed.) The Complete Works. Princeton University
Press [the revised Oxford translation; original work VI BCE].
Baars B. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.
Bar M., Kassam K. S., Ghuman A. S., Boshyan J., Schmid A. M., Dale A. M., et al. (2006). Top-down facilitation of visual recognition. P Natl Acad Sci USA 103:449–454.
Barsalou L. W. (1999). Perceptual symbol systems. Behav Brain Sci 22:577–660.
Bering J. (2010). Scientists say free will probably doesn't exist, but urge: Don't stop believing! Sci Am Mind. URL: http://blogs.scientificamerican.com/bering-in-mind/2010/04/06/scientists-say-free-will-probably-doesnt-exist-but-urge-dont-stop-believing/ (accessed March 8, 2013).
Bielfeldt D. (2009). Freedom and neurobiology: Reflections on free will, language, and political power. Zygon 44(4):999–1002.
Cabanac M. (2002). What is emotion? Behav Process 60:69–83.
Darwin C. R. (1871). The Descent of Man, and Selection in Relation to Sex. London: John Murray.
Deacon T. W. (1989). The neural circuitry underlying primate calls and human language. Human Evolution 4(5):367–401.
Deming R. W. and Perlovsky L. I. (2007). Concurrent multi-target localization,
data association, and navigation for a swarm of flying sensors. Inform Fusion
8:316330.
Dirac P. A. M. (1982). The Principles of Quantum Mechanics. Oxford University
Press.
Ekman P. (1999). Basic emotions. In Dalgleish T. and Power M. (eds.) Handbook
of Cognition and Emotion. Chichester: John Wiley & Sons, Ltd.
Festinger L. (1957). A Theory of Cognitive Dissonance. Evanston, IL: Row,
Peterson.
Fontanari J. F. and Perlovsky L. I. (2004). Solvable null model for the distribution
of word frequencies. Phys Rev E 70:042901.
Fontanari J. F., Cabanac M., Cabanac M.-C., and Perlovsky L.I. (at press). A
structural model of emotions of cognitive dissonances, Neural Networks.
Fontanari J. F., Tikhanoff V., Cangelosi A., Ilin R., and Perlovsky L. I. (2009).
Cross-situational learning of objectword mapping using Neural Modeling
Fields. Neural Networks 22(56):579585.
Glassman R. (1983). Free will has a neural substrate: critique of Joseph F.
Rychlaks discovering free will and personal responsibility. Zygon 18(1):
1782.
Godel K. (1934). Kurt Godel Collected Works, Vol. I. Feferman S (ed.). New York:
Oxford University Press.
Grossberg S. (1988). Neural Networks and Natural Intelligence. Cambridge, MA:
MIT Press.
Grossberg S. and Levine D. S. (1987). Neural dynamics of attentionally mod-
ulated Pavlovian conditioning: Blocking, interstimulus interval, and sec-
ondary reinforcement. Appl Opt 26(23):50155030.
Ilin R. and Perlovsky L. I. (2010). Cognitively inspired neural network for recog-
nition of situations. Int J Nat Comp Res 1(1):3655.
Izard C. E. (1992). Basic emotions, relations among emotions, and emotion-
cognition relations. Psychol Rev 99:561565.
Jaynes J. (1976). The Origin of Consciousness in the Breakdown of the Bicameral
Mind. Boston, MA: Houghton Mifflin.
Jung C. G. (1921). Psychological Types. In The Collected Works, Vol. 6. Bollingen
Series, Vol. 20. Princeton University Press.
Kant I. (1790). Critique of Judgment. Trans. Bernard J. H. (1914). London:
Macmillan.
296 Leonid Perlovsky
Perlovsky L. I. (2001). Neural Networks and Intellect: Using Model Based Concepts.
New York: Oxford University Press.
Perlovsky L. I. (2006a). Toward physics of the mind: Concepts, emotions, con-
sciousness, and symbols. Phys Life Rev 3(1):2255.
Perlovsky L. I. (2006b). Music the first principle. Music Theatre. URL: www
.ceo.spb.ru/libretto/kon_lan/ogl.shtml (accessed March 8, 2013).
Perlovsky L. I. (2007a). Cognitive high level information fusion. Inform Sciences
177:20992118.
Perlovsky L. I. (2007b). Evolution of languages, consciousness, and cultures.
IEEE Comput Intell M 2(3):2539.
Perlovsky L. I. (2007c). Neural dynamic logic of consciousness: the knowledge
instinct. In Perlovsky L. I. and Kozma R. (eds.) Neurodynamics of Higher-
Level Cognition and Consciousness. Heidelberg: Springer.
Perlovsky L. I. (2008). Music and consciousness. Leonardo 41(4):420421.
Perlovsky L. I. (2009a). Language and emotions: Emotional Sapir-Whorf
Hypothesis. Neural Networks 22(56):518526.
Perlovsky L. I. (2009b). Vague-to-Crisp neural mechanism of perception. IEEE
Trans Neural Network 20(8):13631367.
Perlovsky L. I. (2010a). Intersections of mathematical, cognitive, and aesthetic
theories of mind. Psychol Aesthetics Creativity Arts 4(1):1117.
Perlovsky L. I. (2010b). Neural mechanisms of the mind, Aristotle, Zadeh, and
fMRI. IEEE Trans. Neural Networ 21(5):718733.
Perlovsky L. I. (2010c). The mind is not a kludge. Skeptic 15(3):5155.
Perlovsky L. I. (2010d). Beauty and art. Cognitive function, evolution,
and mathematical models of the mind. WebmedCentral PSYCHOL
2010;1(12):WMC001322. URL: www.webmedcentral.com/article view/
1322 (accessed March 8, 2013).
Perlovsky L. I. (2010e). Musical emotions: functions, origins, evolution. Phys
Life Rev 7(1):227.
Perlovsky L. I. (2011). Consciousness and free will, a scientific pos-
sibility due to advances in cognitive science. WebmedCentral PSY-
CHOL 2011;2(2):WMC001539. URL: www.webmedcentral.com/article
view/1539 (accessed March 8, 2013).
Perlovsky L. I. (at press a). Cognitive function of musical emotions. Psychomusi-
cology.
Perlovsky L. I. (at press b). Mirror neurons, language, and embodied cognition.
Neural Networks.
Perlovsky L. I. (at press c). The cognitive function of emotions of spiritually
sublime. Rev Psychol Front.
Perlovsky L. I. (at press d). Cognitive function of music. Interdiscipl Sci Rev.
Perlovsky L. I., Bonniot-Cabanac M.-C., and Cabanac M. (2010). Curios-
ity and pleasure. WebmedCentral PSYCHOL 2010;1(12):WMC001275.
URL: www.webmedcentral.com/article view/1275 (accessed March 8,
2013).
Perlovsky L. I., Chernick J. A., and Schoendorf W. H. (1995). Multi-sensor ATR
and identification friend or foe using MLANS. Neural Networks 8(7/8):1185
1200.
298 Leonid Perlovsky
10.1 Introduction
In this chapter, I tackle the double problem of defining what makes a
natural process mental, and what makes a mental process conscious. My
short answer to the first question is that mental processes are those that
operate with forms (in the Aristotelian sense of the term, discussed in the
following) embedded in material systems and with transmission of such
forms from one system to another. A reformulation of the Aristotelian
approach by Peirce (1931) focuses on chains of signs composing semei-
otic processes. The transmission of forms and/or signs composing mental
processes has been scientifically approached with the use of informa-
tion theory, dating back to Broadbent (1958). In contemporary views, a
similar concept of information flow supporting the formation of knowl-
edge was proposed by Dretske (1981), Skyrms (2008, 2010) and others,
including authors in this volume.
The answer to the second problem cannot be so short and occupies
most of this chapter. Information transmission and respective mental
processes are often unconscious. There are complex chains and loops of
This research benefited from financial support by CNPq (Brazilian National Research
Funding Agency), FAPESP (Foundation for Support of Research in the State of São
Paulo), and POSGRAD-UNESP (São Paulo State University Post-Graduation Office).
My thanks to Chris Nunn and Dietrich Lehmann for helping with language, style,
and concepts; Max Velmans, Leonid Perlovsky, Wolfgang Baer, and Ram Vimal for
useful comments; Maria Eunice Quilici Gonzales for our discussion on the nature of
information; and all colleagues who directly or indirectly contributed to this chapter.
300 Alfredo Pereira Jr.
1 "Consciousness is a process that occurs in a subject (the living individual) and the subject
has an experience (he/she interacts with the environment, completing action-perception
cycles) and the experience has reportable informational content (information patterns
embodied in brain activity that can be conveyed by means of voluntary motor activity)"
(Pereira and Ricke 2009, p. 16).
Please note that being reportable does not mean that first-person experiences would
be fully translated into the third-person perspective. Translation of content from the first-
to the third-person perspective is always an approximation.
Triple-aspect monism: A framework for consciousness science
3 My thanks to Max Velmans (personal communication) for raising the issue of cognitive
and perceptual feelings.
4 In the current machine modeling of consciousness paradigm, a machine is consid-
ered to be conscious if possessing control mechanisms with the ability to model the
world and themselves (Aleksander 2007, p. 97). Contributing authors do not deny
the importance of emotional processes in conscious machines, but their models do not
relate consciousness with subjective feelings neither include hypotheses about how to pro-
vide their prototypes with mechanisms able to generate feelings about the information
they process.
Pockett (2000) and McFadden (2002) have argued that the brain's elec-
tromagnetic field is the basis of phenomenal consciousness. A refinement
of this kind of approach is Microstate theory (Lehmann, this volume,
Chapter 6), which states that elementary units of thought are incorporated
in split-second spatial patterns of the brain's electric field. Microstate
theory offers, for the methodology of consciousness science, a sophisticated
treatment of EEG data, allowing the identification of patterns of activity
that characterize normal and abnormal mental functioning.
A second relation within the brain-mind-world triangle is between
knowledge and the world. This conception has distant roots in the
Aristotelian theory of truth, which was conceived as a correspondence
between mental forms and forms of beings in the world. Merker (this volume,
Chapter 1) draws the picture of a brain locked in the skull that is nev-
ertheless able to support mental activity that reflects the world outside
itself, by means of interactions between the neural model of the body
and the neural model of the world in the brain.
Consciousness appears as a process that has physical causes inside the
skull, but also, as with Leibniz's monads, a capacity to reflect the whole
world. The relation of mental and (other) natural forms is not a simple
one; there are forms in someone's mind that do not occur exactly alike in
the world, and there are forms in the world that do not occur exactly alike
in one's mind. Assuming a single (first-person or third-person) perspective,
we tend to conceive mind to be contained in nature, or nature to
be contained in a mind. This apparent paradox, solved in a practical
fashion by Aleksander (2005), is illustrated in Fig. 10.1. TAM theoretically
overcomes this paradox by means of the concept of an evolutionary
process where and when possibilities are progressively actualized. In the
domain of possibilities, any mind is contained in nature in the sense that
all mental forms, including those that refer to natural impossibilities, are
produced by combinations of natural forms. In the domain of actualities,
nature is contained in minds, in the sense that all ideas/concepts about
nature belong to someones mind, which also has ideas/concepts other
than those about nature.
In this epistemological framework, the physical world is both a mind-
independent or noumenal affair, and a phenomenal representation from
the third-person perspective of natural sciences (such as physics, chem-
istry, biology, and sociology). According to the latter, it is described by
means of categories such as matter/energy, forces, space-time, laws of
nature, chemical properties of matter/energy (as in the Periodic Table),
biological properties (genomes, proteomes, metabolomes), biological
populations, and their ecological interactions. All these concepts are (like
any other concepts) mental entities, but (unlike other concepts) believed
Fig. 10.1 The apparent mind and nature paradox: In the noumenal
domain (nature) a phenomenal domain (mind) emerges (left circle).
The phenomenal world contains a representation of the noumenal (right
circle). The domains are not mutually exclusive, but reflect each other
(as in Velmans 1990, 2008). This is, at the same time, a neo-Aristotelian
and post-Kantian view of the relations of mind and nature.
to correspond at least partially to realities out there in the world; that is, it
is assumed that our mind-dependent physics reveals mind-independent
features of the world.6
Viewing consciousness as an important factor in the interaction of
the living individual with the world raises the issue of the biological and
behavioral functions of conscious processing. Although I agree with
Chalmers (1996) that the consideration of functions cannot explain the
phenomenal aspect of conscious experiences, the role of consciousness in
the control of actions, guiding the living individual towards biological fitness
and adaptation, is of central importance for the scientific approach.
This guidance also includes reporting of conscious experiences by exper-
imental subjects, to establish a connection between properties of stimuli,
6 The dispute between epistemological Idealism and Realism may lead to a long series of
philosophical arguments. I assume, with Trehub (this volume, Chapter 7), that our
perceptual representations are most frequently transparent, revealing veridical features
of the world. When looking at a street scene through a glass window, we usually assume
that the medium is transparent, and that what we see is veridical. It is not impossible
that the glass is actually an opaque, high-tech screen and that the street scene is generated
by a computer. Even though this may make for great science fiction, in everyday life the
possibility is not plausible. In the context of the study of communication media, it
is meaningful to say, with theorists like Marshall McLuhan, that "the medium is
the message", but direct perception, with its transparency, is still a player in the
technological game (e.g., direct perception is necessary to watch TV and be affected by
intrinsic messages of this communication medium).
7 This concept appears in Spinoza's (1677) propositions X ("An idea, which excludes
the existence of our body, cannot be postulated in our mind") and XI ("Whatsoever
increases or diminishes, helps or hinders the power of activity in our body, the idea
thereof increases or diminishes, helps or hinders the power of thought in our mind").
qualia such as hunger, thirst, feeling hot or cold, fear, anger, pain,
pleasure, happiness, sadness, color, sound, taste . . . composing mental
states? Are they in some sense physical, chemical, or biological? Or
are they more fundamental, like Plato's Ideas, being (or not) embedded
in material systems, as in Aristotelian hylomorphism? The dominant
assumption in scientific approaches to consciousness has been that mental
forms are biological, resulting from an evolutionary process (there are
several biological views of mind and consciousness, e.g., Millikan 1984;
Block 2009; Edelman et al. 2011; for a criticism of biological reductionism,
see Velmans 2008). The three most influential contemporary theories
of consciousness, Information Integration (Tononi 2005), Neural
Darwinism (Edelman 1987, 1989), and Global Workspace (Baars 1988),
attempt to explain how composite forms are selected and/or
constructed from elementary forms, but do not identify their ultimate
nature.
Although the assumption of a biological origin of mental forms is
likely to be true, it leaves unquestioned the issue of what comes first in
nature, mental or biological forms. Do mental forms somehow emerge
from selective pressure on biological non-mental matter, or do they pre-exist
in nature in potential states, contributing to drive evolutionary processes
as they are actualized? I assume, with Vimal (this volume,
Chapter 5) and possibly with Peirce's philosophy, the second option:
elementary forms composing conscious processes exist in nature in a
potential state, depending on specific processes, such as those found in the
brains of living individuals, to be integrated and actualized into conscious
episodes. Churchland (2012) also assumes the preexistence of forms, but
moves into a Platonic approach when locating them in an abstract mathematical
space. For TAM, the conceptual space of consciousness should
be constructed on the basis of lived experiences (as argued in Pereira Jr.
and Almada 2011).
I give two examples of potential elementary mental forms and the
process by which they are actualized. The smell of sulfur has existed in a
potential state in nature since this element of the periodic table came
into existence. However, it was not actualized (i.e., felt) until a signal
from the element reached a receptor with an adequate mechanism to
actualize the form of the smell. Only when someone felt the smell of
sulfur was the potential form fully actualized. However, the smell of sulfur
is not a creation of the receiver (although it is susceptible to modulation by
different receivers). The basic property of this quale is determined by
the electronic structure of the element and has existed in a potential state
since the element came into existence.
Another example is the taste of salt. This quale has existed in a potential
state since sodium and chloride were first chemically bound. To be felt,
[Fig. 10.2 diagram: superposed layers labeled Physics, Chemistry, Biology, and Conscious Processes, spanning from POTENTIALITY to ACTUALITY along an oblique TIME arrow.]
Fig. 10.2 The TAM tree. According to TAM, a conscious living system
is composed of three superposed aspects: physical-chemical-biological,
informational, and conscious. On the vertical axis, there is continuity
between the aspects; they are conceived as stages in an evolutionary
process by which elementary potential states eternally existing in nature
are progressively combined and actualized. The informational aspect is
characterized, on the horizontal axis, by subjective and objective poles.
A part of information processes, including both subjective and objective
features, emerges as conscious experience. The whole system is moving
in time (represented by the oblique arrow).
We explained first that only the contraries were principles, and later that a sub-
stratum was indispensable, and that the principles were three; our last statement
has elucidated the difference between the contraries, the mutual relation of the
principles, and the nature of the substratum. Whether the form or the substratum
is the essential nature of a physical object is not yet clear. But that the principles
are three, and in what sense, and the way in which each is a principle, is clear.
(Aristotle 2012; Physics, Book 1, Section 7)
In the Zeta book of the Metaphysics, he argues that substances are the ultimate
reality, and that differences in form enable us to distinguish one
substance from another (for a discussion of this controversial thesis,
see Aubenque 1962). For Aristotle, matter is the possibility of being.
Form is responsible for determining the kind of being (e.g., the biological
species), while matter is responsible for individuation (the distinction
between individuals of the same species). In the study of substantial
generation (summarized in Metaphysics, Book Z, Section 8), the reciprocal
action of form and matter suggests that every existing being comes from
the intersection between a material possibility and the action of a form.
Matter and form, as constitutive principles of beings, are also considered
as causes, that is, the material cause and the formal cause. In the
passage from potency to act, there is a common term. A potentiality that
is already present in a piece of matter (e.g., sculptable marble) meets
[Diagram: Mental NC State C (at t1) and Mental NC State D (at t2), linked by three kinds of relation: sensitive, informational, and affective.]
our monist assumption, these partial results are supposed to reflect the
bigger picture to a reasonable degree.
The above cross-aspect relations (sensitive and affective feelings) are
not causal in the ordinary usage of the term in science (making reference
to physical forces, as in Harnad and Scherzer 2008), but can be
conceived as similar to Aristotle's formal causation, which applies,
among other possibilities, to the relations between aspects of the same
system. The unity and individuality of a conscious being in time depend
on such resonances between its aspects. These resonances have been
scientifically approached in the fields of psychosomatics and integrative
medicine (Walach et al. 2012), as well as by means of the somatic marker
hypothesis in neuropsychology (Damasio 1994). For each individual,
the three aspects (physical-chemical-biological, informational, and conscious)
cannot be separated. When a person dies, his/her conscious activity
apparently goes away (this expression, "goes away", is intentionally
dubious), but some of the complexes of forms that he/she constructed
can survive, when re-actualized by other individuals (for instance, when
they read a book written by the first person). Once a complex is actualized
for an individual, it can be reproduced for the same individual or for
others.
A necessary condition for reproducibility is that the complex be represented.
In this sense, a representation is a copy (Pereira 1997) of a
presented complex. For the individual, a copy may be used for executive
functions and working memory. Reasoning with representations (e.g.,
counterfactual thinking), the individual can go beyond the here and now
of existence, reconstruct the past, and project the future. The basis of
cultural evolution is the embodiment of presented complexes in material
media, such as texts and paintings. This embodiment generates cultural
units that can be further copied and re-actualized by other individuals.
Central to the conscious life of individuals in society is an interchange
of information with the environment. Such an environment contains not
only physical-chemical-biological objects and processes, but also cultural
entities. Described as "objective spirit" (Hegel) or "memes" (Richard
Dawkins), cultural forms can also be regarded as potential forms to be
actualized in individual consciousness. Each individual is exposed to
physical and cultural information patterns corresponding to potential
forms that can be actualized in his/her conscious mind.
8 Although using different terminologies (for a criticism of this kind of terminology, see,
for example, Clark 1997), the approaches advocated by Panksepp and myself are highly
compatible. What he calls "sensory affects" partially corresponds to what I call "sensitive
feelings"; his "homeostatic affects" correspond to my "affective feelings", while
his "emotional affects" partially correspond to my "affective feelings". In the following
discussion, I will continue to use my own terminology.
reach the conscious spotlight together with higher level cognitive rep-
resentations; one does not exclude the other.
Several experimental results indicate that a stimulus that is subliminal
for the cognitive network may be supraliminal for the feeling network. For
example, when the picture of a spider is presented for a brief time or masked
by another stimulus, the subjects do not form a visual representation of the
spider, and therefore cannot report visual features of the presented stimulus
(Siegel and Weinberger 2009). However, this presentation can elicit a
supraliminal effect on the feeling network: for example, the subject may feel
fear, develop fearful behaviors, and form a memory of the event,
besides showing an increase in skin conductance and other unconscious effects.
Current paradigms based on a narrow view of GWT have led researchers
to classify these feelings as unconscious and the respective memory
as being of the implicit kind (Yang et al. 2011). An extended GWT
approach would help to revise these assumptions, reserving the classification
of "unconscious" only for those forms that are really not conscious
(e.g., pre-motor contrasted with parietal cortical activations; see Desmurget
et al. 2009).
problems that bedevil the study of the brain, the mind, and the world,
and also opens a new avenue of dialogue with philosophical and religious
traditions.
TAM's framework is close to the philosophy of Hegel, the first philosopher
to elaborate on the concept of consciousness (according to Dove
2006). Hegel's system, described in detail in his Encyclopedia of the Philosophical
Sciences, is composed of three aspects of reality: Idea, Nature,
and Spirit. The world of Ideas contains all possibilities of effective development
of reality. The Ideas express themselves in Nature, but in a very
limited way. In his philosophy of nature (a piece of German romanticism
that shares similar views with other contemporary authors, such as
Goethe, Fichte, and Schelling), Nature is pregnant with Ideas. However,
full expression of these Ideas cannot occur in Nature itself; their development
requires Nature to be negated as a finite reality and resumed as
human culture (the Spirit). Only for human consciousness (for Hegel,
particularly the German culture of his time) would the initial Ideas reveal
their full meaning. In spite of the explicit ethnocentrism of this conception,
Fleischmann (1968) convincingly argued that it is compatible with
current interdisciplinary scientific worldviews.
Charles Sanders Peirce's semeiotic triad of Firstness, Secondness, and
Thirdness can be interpreted as a version of Hegel's Idealism. In the
TAM approach, Firstness is the domain of potentiality, Secondness is the
domain of individual, discrete determinations of being, and Thirdness is
the domain of habits, conceived as projections of continuity in time and
space, obeying the laws of nature, and moving towards self-established
goals. Instead of a dialectical process mediated by the logical operation of
negation, in Peirce we find a semeiotic process of potentialities actualized
as signs possibly converging to the truth in the long run.
TAM gives a twist to the Hegelian major triad inherited by Peirce:
an inversion of order regarding the first two aspects of the triad, from
Mind-Nature to Nature-Mind, as originally proposed by Karl Marx. In
this sense, it points towards a new, fallible Dialectics of Nature based on
scientific developments and emphasizing dynamic interactions, instead
of contradictions, as the modus operandi of natural processes. The Proto-Panpsychism
of TAM suggests a re-conceptualization of the natural sciences,
pointing to the existence of potentialities present in Nature. These potentialities
are like seeds that, in adequate conditions, develop into form, information,
and consciousness. If assumed by scientists, this ontology would
contribute to the "re-enchantment of the world" proposed by Prigogine
and Stengers (1979).
In the brain sciences, TAM leads to a richer view of the ionic, molec-
ular, intra-, and inter-cellular processes such as electrical currents and
3. The Holy Spirit is the actual God, the actualization of unity for
a community of individuals who have faith (strong feelings) about it.
When individuals pray, they act towards the unity. This move can have
effects on their brains/bodies, for example, helping to heal a disease.
TAM provides an adequate framework for ethical discussions of recent
advances in the fields of biotechnology, synthetic biology, personalized
medicine, multi-scale self-organizing systems, and machine conscious-
ness. Scientific and technological progress in these fields raises concerns
about the possibility and limits of human control of the combination of
natural forms. For TAM, these possibilities are conceived as an evolu-
tionary step in the process of actualization of forms that characterize the
evolution of the universe. Therefore, there would be no sufficient a priori
reason to veto these scientific/technological projects; on the contrary, an
adequate focus would be to discuss benefits and risks.
Considering all the above possibilities, TAM is likely to be of interest
for a wide range of theoreticians who look for an integrative view of
consciousness as the unity of mind, brain, and the world.
REFERENCES
Aleksander I. (2005). The World in My Mind, My Mind in the World: Five Steps to
Consciousness. Exeter: Imprint Academic.
Aleksander I. (2007). Machine Consciousness. In Velmans M. and Schneider S.
(eds.) The Blackwell Companion to Consciousness. Malden, MA: Blackwell, pp.
87–98.
Aleksander I. and Morton H. B. (2011). Informational minds: From Aristotle to
laptops (book extract). Int J Mach Consciousness 3(2):383–397.
Almada L. F., Pereira Jr. A., and Carrara-Augustenborg C. (2013). What the
affective neuroscience means for a science of consciousness. Mens Sana
Monographs 11(1):253–273.
Aristotle (2012). The Complete Aristotle. Adelaide, Australia: Feedbooks. URL:
www.feedbooks.com/book/4960/the-complete-aristotle.
Aristotle (1953). La Métaphysique. Trans. and comments Tricot J. Paris: Vrin.
Aubenque P. (1962). Le Problème de l'Être chez Aristote. Paris: PUF.
Baars B. (1988). A Cognitive Theory of Consciousness. New York: Cambridge
University Press.
Baars B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New
York: Oxford University Press.
Baars B. and Franklin S. (2003). How conscious experience and working memory
interact. Trends Cogn Sci 7(4):166–172.
Baldwin J. M. (1896). Consciousness and evolution. Psychol Rev 3:300–309.
URL: www.brocku.ca/MeadProject/Baldwin/Baldwin 1896 b.html.
Bateson G. (1979). Mind and Nature: A Necessary Unity. New York: Dutton.
LeDoux J. (1996). The Emotional Brain. New York: Simon & Schuster.
Leybaert L. and Sanderson M. J. (2012). Intercellular Ca2+ waves: Mechanisms
and function. Physiol Rev 92(3):1359–1392.
Leknes S. and Tracey I. (2008). A common neurobiology for pain and pleasure.
Nat Rev Neurosci 9:314–320.
Mayer E., Martory M. D., Pegna A. J., Landis T., Delavelle J., and Annoni J. M.
(1999). A pure case of Gerstmann syndrome with a subangular lesion. Brain
122(6):1107–1120.
McFadden J. (2002). The conscious electromagnetic information (CEMI)
field theory: The hard problem made easy? J Consciousness Stud 9(4):23–50.
Millikan R. (1984). Language, Thought and Other Biological Categories.
Cambridge, MA: MIT Press.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83(4):435–450.
Newman J. B. and Baars B. J. (1993). A neural attentional model for access to
consciousness: A global workspace perspective. Concept Neurosci 4(2):255–290.
Nixon G. (2010). Hollows of experience. Journal of Consciousness Exploration and
Research 1(3):234–288.
Nunn C. (2007). From Neurons to Notions: Brains, Minds and Meaning. Edin-
burgh: Floris Books.
Nunn C. (2010). Who Was Mrs Willett? Landscapes and Dynamics of the Mind.
London: Imprint Academic.
Panksepp J. (1998). Affective Neuroscience: The Foundations of Human and Animal
Emotions. New York: Oxford University Press.
Panksepp J. (2005). Affective consciousness: Core emotional feelings in animals
and humans. Consciousness Cogn 14:30–80.
Panksepp J. (2007). Affective consciousness. In Velmans M. and Schnei-
der S. (eds.) The Blackwell Companion to Consciousness. Malden, MA:
Blackwell.
Peirce C. S. (1931–1958). Collected Papers of Charles Sanders Peirce.
Hartshorne C. and Weiss P. (eds.) Vols. 1–6; Burks A. (ed.) Vols. 7–8. Cambridge,
MA: Belknap Press.
Pereira Jr. A. (1997). The concept of representation in cognitive neuroscience.
In Riegler A. and Peschl M. (eds.) Does Representation Need Reality? Proceedings
of the International Conference New Trends in Cognitive Science. Vienna:
Austrian Society for Cognitive Science, pp. 49–56.
Pereira Jr. A. and Rocha A. (2000). Auto-organização físico-biológica e a
origem da consciência. In Gonzales M. E. and D'Ottaviano I. (eds.) Auto-Organização:
Estudos Interdisciplinares. Campinas, Brasil: CLE/UNICAMP,
pp. 98–115.
Pereira Jr. A. (2003). The quantum mind/classical brain problem. Neuroquantol-
ogy 1:94–118.
Pereira Jr. A. and Ricke H. (2009). What is consciousness? Towards a preliminary
definition. J Consciousness Stud 16:28–45.
Pereira Jr. A. and Furlan F. (2010). Astrocytes and human cognition: Modeling
information integration and modulation of neuronal activity. Prog Neurobiol
92(3):405–420.