
Neuroscience of Mindfulness: Sugar, Drugs, & Dopamine

BY MATTHEW | JANUARY 10, 2014

MACAROONS Photo Credit: murilocardoso via Compfight cc

Let's imagine that you were to give up sugary sweets for a month, as I recently did as
part of a requirement for a nutrition class. The first few days might be relatively easy
because you are still galvanized by the novelty of your commitment, but as the days
fold themselves into weeks, your resolve will likely begin to waver. Maybe after a
week, the completion of dinner prompts your mind to develop a sugary itch, imagining
various potential items for dessert. Maybe during the second week, the caramels at work
seem to be winking at you each time you walk past their candy-jar home, inviting you
to unleash their sugary goodness on your parched taste buds. Finally, week four arrives;
you hop in the car and speed off to your favorite ice cream parlor, indulging in the
sultry sweet flavor of your preferred variety.
Today we will be studying why sugar, as well as many other substances, has such a
hold on our mind and body. In particular we will learn about our endogenous (native to
our body) stimulus-reward learning system. That is, the system that allows us to learn
the link between a given stimulus (e.g. a caramel candy) and the reward that it provides
(e.g. gustatory pleasure).
To understand our stimulus-reward learning system we will begin by viewing the
system as it functions naturally, and then we will examine the system under the aberrant
influence of substance abuse and addiction. I began the article by using the example of
sugar withdrawal and craving because sugar turns out to cause the same stimulation of
endogenous opioid and dopamine receptors that some substances of abuse do.

Additionally, many of the neural adaptations to repeated sugar binges are also seen in
chronic substance abuse. Thus, your candy bar may seem morally and ethically distinct
from an illicit drug, but to your brain the difference is really only one of magnitude.
(Avena, Rada, & Hoebel, 2008)
Neuroscience is progressing at such a rapid pace that by placing information in
writing today I am essentially guaranteeing some degree of inaccuracy tomorrow.
I hope to buffer this danger by drawing from a multitude of research material as well as
by utilizing a certain degree of creative license.
Those individuals who have read my previous Neuroscience of Mindfulness posts will
know that I use nicknames to indicate the role of a particular neuroanatomical structure
(for example, the medial prefrontal cortex, or mPFC/Emotional Sensor). These
nicknames are meant to be both accurate and purposefully vague. Accurate in that the
nickname attempts to summarize a central role of the neuroanatomical structure in the
brain and vague because any given structure has many functions and any given function
has many more structures that participate in it.
Thus, I hope that by maintaining a degree of ambiguity, these structure-nickname pairs
will serve to familiarize readers with scientifically esoteric nomenclature while
retaining the capacity to absorb evolving scientific knowledge.
With this disclaimer in mind, let us now turn to the stimulus-reward learning system.
For simplicity's sake, some structures will be omitted while others will be
oversimplified. I will do my best to include a reference to these omissions so that the
interested reader can learn more if desired.
To begin our discussion of reward we must start with the two most basic units of the
brain: neurons and glial cells. The brain is made up of approximately equal parts neuron
and glial cell. Glial cells provide support for, and participate in, the general
maintenance of the nearly 86 billion neurons that make up the information processing
networks of the brain. We will study the neuron in great detail over the ensuing
paragraphs because these key cells generate the chemical and electrical signals that
produce our cognitive and emotional experience. (Azevedo et al., 2009)

NEURON By BruceBlaus (Own work) [CC BY 3.0], via Wikimedia Commons

Neurons consist of multiple dendrites, a cell body, and an axon. Neurons talk to one
another through electrical and chemical signals. Neuronal transmission begins with a
depolarization that triggers an action potential, a sort of biological electrical current.
The action potential travels down the axon and causes the release of neurotransmitters
stored in vesicles in the axon terminal. These neurotransmitters diffuse across the
synaptic space between the axon terminal of one neuron and the dendrites of another,
binding to receptors on the post-synaptic neuron (there are some exceptions to this axon-to-dendrite synapse formation that we will not consider).
The post-synaptic neuron will either depolarize and fire an action potential of its
own or be inhibited into silence depending upon the summed excitatory and inhibitory
qualities of the neurotransmitters bound to its various receptors. (Paxinos & Mai,
2004)
Neurotransmitters are biological substances that come in many different forms. For our
purposes we will be focusing primarily on the neurotransmitters key to the central
nervous system (the brain and spinal cord): the amines, amino acids, and peptides.

ACTION POTENTIAL By OpenStax College [CC BY 3.0], via Wikimedia Commons

Neurons have a resting electrical potential of about -70 millivolts (mV).


Neurotransmitters bind to specific ion channels and second-messenger receptors on
post-synaptic neurons, inducing a net change in the resting electrical potential. If this
change moves the neuron towards 0 mV (more positive) it is referred to as
a depolarization, while a change in the negative direction is known as
a hyperpolarization. If a neuron is sufficiently depolarized, an action potential will be
triggered. Hyperpolarized neurons are much more difficult to depolarize; thus,
hyperpolarization inhibits action potentials. (Baynes & Dominiczak, 2014)
Neurotransmitters are broadly referred to as excitatory (tending to cause depolarization)
or inhibitory (tending to cause hyperpolarization). The net influence of the excitatory
and inhibitory neurotransmitters bound to a post-synaptic neuron determines whether or
not an action potential is triggered.

SYNAPSE By Nrets at en.wikipedia [GFDL or CC-BY-SA-3.0], from Wikimedia Commons

To ensure that the synapse does not become overrun with neurotransmitters, pre-synaptic
neurons have reuptake pumps that pump free neurotransmitters back into the
axon terminal to be recycled. One such reuptake pump, known as the dopamine
transporter (DAT), will become important in later discussions. There are also enzymes
in the synapse and the axon terminals that break down unused neurotransmitters, thereby
inactivating them. (Baynes & Dominiczak, 2014)
The main excitatory neurotransmitter in the brain is the amino acid glutamate,
while the main inhibitory neurotransmitter is another amino acid known as
gamma-aminobutyric acid, or GABA for short. (Paxinos & Mai, 2004)
Unfortunately for scientists, many neurotransmitters do not fit neatly into an excitatory
or inhibitory classification system. One such classification-averse neurotransmitter
(technically a neuromodulator) is an amine known as dopamine. As a point of interest,
other neurotransmitters in the amine family include such well-known chemicals as
acetylcholine, norepinephrine, epinephrine, and serotonin. But before we turn to the
dopaminergic star of today's show, we have to quickly mention one final group of
neurotransmitters: the peptides. In particular, we will look briefly at the neuropeptides
endorphin and enkephalin.
Endorphin and enkephalin are part of our endogenous opioid system. Endogenous
opioids are inhibitory neurotransmitters that are integral to our pain modulation
pathways as well as the euphoric experience associated with opiate abuse.
Opioids have a wide range of effects in the brain. Despite their inhibitory nature, opioid
receptors often act to inhibit an inhibitor, an action known as disinhibition. Opioid
disinhibition actually activates a neuron further down the transmission chain. The
exact details are less important than the basic knowledge of the pain-relieving and
subjective euphoria-causing properties of endogenous opioids. (McMahon,
Koltzenburg, Tracey, & Turk, 2013)


Now let's return to dopamine. The monoamine dopamine is an important
neurotransmitter involved in such wide-ranging systems as movement, emotion, reward,
and learning. Dopamine binds to post-synaptic neurons at two different receptor sites:
D1-like receptors and D2-like receptors. Both families of receptors act through a slow
second-messenger pathway involving cyclic adenosine monophosphate (cAMP). D1-like
pathways increase cAMP while D2-like pathways decrease cAMP. cAMP itself has a
complex mechanism of action that we will not examine today. For our purposes, we
will learn about dopamine by examining its effects on a larger scale. (Stern,
Rosenbaum, Fava, Biederman, & Rauch, 2008)

DOPAMINE By Cacycle (Own work) [GFDL, CC-BY-SA-3.0or Public domain], via Wikimedia Commons

There remains a longstanding misconception that dopamine is a so-called pleasure
molecule, producing the experience of pleasure associated with the consumption of
stimuli such as food, sex, or drugs. In fact, dopamine acts at a more basic level by
indicating the incentive salience of a stimulus (McClure, Daw, & Montague, 2003).
Incentive salience refers to the want we feel towards a particular stimulus. A want for a
stimulus is distinct from the pleasure (reward) gained from obtaining said stimulus.
Wants motivate an organism to use the action-selection networks in the brain to
deploy the behavior needed to acquire the stimulus. The pleasure that we enjoy after
obtaining and experiencing the stimulus is a product of a highly diverse set of
neuroanatomical structures that are beyond the scope of today's article. (Schultz, 2002)
Rather than producing pleasure, dopamine functions as a motivating neurotransmitter
in the brain. It is released in amounts proportional to the value of a given stimulus. We
will thus nickname dopamine (DA) "Motivation."
DA/Motivation makes quantitative distinctions between stimuli rather than qualitative
ones. That is to say that if an apple and an orange are equally valued by an individual,
then the amount of dopamine released in response to the consumption of either fruit will
be the same despite the qualitative differences in taste profiles. Dopamine's role in
behavior and learning will hopefully be clarified when we turn to the larger
dopaminergic pathways in the brain later in this article. (McClure et al., 2003)
Aberrant DA/Motivation systems have long been hypothesized to play a role in
psychotic illnesses. Psychosis is a broad term referring to a syndrome that often consists
of, among other symptoms, hallucinations and delusions. Patients with schizophrenia,
bipolar disorder, major depressive disorder, and many other psychiatric disorders can all
manifest the symptomatology of psychosis. With our new knowledge of the role that
DA/Motivation plays in the brain we can briefly examine the proposed mechanisms
behind psychosis.
Psychosis is often treated with a class of drugs known as antipsychotics. Antipsychotics
were first discovered in the 1950s and treat psychosis by, among other mechanisms,
blocking the D2 receptor (a receptor for DA/Motivation found in the brain). Psychotic
illnesses are hypothesized to be precipitated by an overactivity in the DA/Motivation
system (among many other causative factors). Thus, D2 receptor-blocking
antipsychotics are thought to relieve psychotic symptomatology by mitigating this
abnormal activity.

Let's examine the role of DA/Motivation in delusions, one of the many symptoms
that make up the psychotic syndrome. Delusions are beliefs that are firmly held despite
contradictory evidence from objective reality. For example, a common delusion
involves a patient's belief that an unidentified nefarious character is following them.
Let's study this delusion with our knowledge of DA/Motivation physiology in mind.
Imagine that a patient suffering from psychosis sees a few different men wearing white
shirts throughout the course of a day. An individual who is not suffering from a
psychotic illness might shrug this occurrence off as reflecting the popularity of white
shirts, but a patient suffering from psychosis is not as biologically fortunate. The patient
has an overactivity of DA/Motivation that identifies not only objectively significant
but also insignificant stimuli as being highly important. Thus, the patient suffering from
a psychotic illness ascribes a high level of salience to the white-shirted entities. (Kapur,
2003)

WHITE SHIRT Photo Credit: stevendepolo via Compfight cc

With the white-shirted individuals firmly tagged with the DA/Motivation importance
signal, the patient's cortex must construct a believable story to explain why these
white-shirted folks are so important. Out of this explanatory process may arise the conviction
that the seemingly separate white-shirted individuals were, in actuality, the same
person. Furthermore, the cortex may reason that the patient keeps seeing this same
individual because they are following the patient.
Let's come back to the brain's endogenous stimulus-reward learning system and
investigate a little further.
Two areas in the brain that contain a very high density of DA/Motivation-releasing
neurons are the substantia nigra and the ventral tegmental area. There are three major

DA/Motivation tracts that originate from these two areas: the nigrostriatal tract,
the mesolimbic tract, and the mesocortical tract. A fourth pathway that we will ignore
connects the hypothalamus/Cruise Control to the pituitary gland and is known as
the tubero-infundibular tract. Of these four tracts, we will be primarily concerned with
the mesolimbic and mesocortical tracts. (Kaufman & Milstein, 2012)
Each tract is named for its origin and destination; i.e., the nigrostriatal tract originates
in the substantia nigra and projects to the striatum (basal ganglia/Pattern Generator).
Thus, the two tracts of primary interest for today's discussion share the
same meso (meaning "middle") prefix, indicating that they both originate in the
region of the midbrain known as the ventral tegmental area (VTA). If we describe DA
as the "Motivation" of the brain, then we will refer to the VTA as the "Motivator"
because of its role in the production and release of DA/Motivation.
We will consider the mesolimbic and mesocortical pathways together because the brain
is so interconnected as to render complete intellectual dichotomization of any two
networks a nearly impossible and extremely confusing task.
As touched on in previous paragraphs, the VTA/Motivator contains an abundance of
DA/Motivation-releasing neurons. When these neurons depolarize, action potentials
travel upwards along axons that project to various parts of the brain. When action
potentials arrive at the axon terminals they cause the release of DA/Motivation from
pre-synaptic vesicles into the synapse. DA/Motivation diffuses across the synapse
and binds to receptors on the post-synaptic neuron.
We will simplify the pathway by treating it as bidirectional and interconnected so as to
avoid the complications inherent in parsing out various fiber projections. Additionally,
we will omit structures that are not required for our subsequent discussion. I have
chosen to simplify the discussion because I have found that the level of intellectual
clarity is often inversely related to the level of detail; and for the non-neuroscientist,
clarity is paramount.
Of the VTA/Motivators projections we will consider three primary destinations. The
first VTA/Motivator destination that we will consider is the projection to a structure
known as the nucleus accumbens. The nucleus accumbens is a paired structure found in the
basal forebrain and is a central part of a group of structures integral to reward, known
collectively as the ventral striatum. (Haines, 2012)
The nucleus accumbens is theorized to connect a given stimulus to the reward
associated with its consumption or performance. Thus, the nucleus accumbens provides
our ability to learn that the caramel is connected to the pleasurable experience of eating
it. We will nickname the nucleus accumbens (NAcc) the "Connector." (Sescousse,
Caldú, Segura, & Dreher, 2013)
Next up, we see that the VTA/Motivator projects to the amygdala/Emoter. We have
discussed the amygdala/Emoter many times in previous articles, always focusing on
the role it plays in negative emotion. As with most things, the closer one looks at the
amygdala/Emoter the more complex it becomes.
The amygdala/Emoter turns out to be involved in both negative and positive
emotions. The activity of the amygdala/Emoter is not correlated with the type of
emotion, but it is instead correlated with the intensity of the emotion. The
amygdala/Emoter shows increased activity in relation to the salience (intensity) not
the valence (emotional quality) of an emotion. This means that sadness (valence) is not
associated with greater activation of the amygdala/Emoter than happiness; only the
intensity of the respective emotions predicts the degree of amygdala/Emoter
activation. In other words, the amygdala/Emoter paints in all colors of emotion, only
varying the weight of its brush stroke based on intensity. (Sescousse et al., 2013)
The last projection of the VTA/Motivator that we will consider is to the ventromedial
prefrontal cortex and the orbitofrontal cortex. The ventromedial prefrontal cortex is a
subdivision of the medial prefrontal cortex (mPFC/Emotional Sensor) that has played
such a central role in many of my previous articles. The orbitofrontal cortex is a portion
of the prefrontal cortex that is found on the underside of the frontal lobes, directly
above the orbits (eye sockets).
The ventromedial prefrontal cortex is involved in mental reasoning about the self and
others while the orbitofrontal cortex is involved in evaluating potential costs or benefits
of a decision (Barbey, Krueger, & Grafman, 2009). For the purposes of today's
discussion we will use the mPFC/Emotional Sensor as an umbrella term to refer to
the ventromedial prefrontal cortex and the orbitofrontal cortex. While not entirely
anatomically or functionally accurate, the orbitofrontal cortex does involve the medial
portion of the prefrontal cortex, and so our anatomical indiscretion may be forgiven.

DA/MOTIVATION STIMULUS-REWARD LEARNING PATHWAY By Cacycle (Own work) [GFDL, CC-BY-SA-3.0 or Public domain], via Wikimedia
Commons

In summary, the VTA/Motivator projects DA/Motivation-releasing axon terminals
to the NAcc/Connector, the amygdala/Emoter, and the mPFC/Emotional Sensor.
Now that we have summarized the DA/Motivation stimulus-reward learning system,
let's examine the set of structures involved in generating and remembering our
experience of a given stimulus. We will refer to these structures collectively as forming
an "Experiencing Network."
The Experiencing Network begins with our perception of the environment courtesy of
our sense organs (eyes, ears, nose, mouth, and touch-sensitive nerve endings). Our
sense organs provide us with our ability to see, hear, smell, taste, and touch the objects
that make up our external environment. A sixth sense is generated by our ability to
imagine and mentally construct ideas, environments, and scenarios based on previous
experience. These six senses are the portal to our outside world.
The experience of our inside world is provided by a structure known as
the insula/Internal Sensor (among others). The insula/Internal Sensor allows us to
sense our internal bodily states (e.g. the sensation of our heart beat, breathing, or
grumbling stomach). Importantly, this information is sensed at a preconscious level and
preempts our cognitive elaboration. Internal bodily states are often experienced as an
emotional feel because of this preconscious nature. When we're anxious, part of the
reason that we know we are anxious is that our insula/Internal Sensor has provided us

with the preconscious experience of a tightened stomach, racing heart, and thirst for air.
(Sescousse et al., 2013)

LATERAL VIEW OF THE BRAIN

The raw, sensate informational packets gathered from our external and internal
environments are passed along to the cortex, which makes up the outer layer of the
brain and is involved in information processing. The cortex then constructs a
meaningful story of our experience from the raw sensory data. Cortical structures
involved in this interpretation and production of our conscious experience include
structures that we have previously discussed such as the mPFC/Emotional Sensor,
the somatosensory cortex/External Sensor, as well as many other structures that we
have not.
The mPFC/Emotional Sensor is the primary interpretative structure that we will
consider today. The mPFC/Emotional Sensor produces our conscious experience of
the emotional and factual components of a situation based on the sensory and emotional
information it receives. Our sense organs provide the details of the setting, stimulus,
and reward while the preconscious bodily sensations from the insula/Internal Sensor
are interpreted as an emotional feel by the mPFC/Emotional Sensor. The
amygdala/Emoter adds a final sprinkle of emotional flavor to the now conscious
experience. Whether it is a dash of raw joy or a hint of primal fear, the
amygdala/Emoter makes sure that the mPFC/Emotional Sensor is aware of the
intensity of the emotional milieu.

MEDIAL BRAIN

This cognitive narrative is simultaneously recorded into memory by the hippocampus
(HCMP/Memorizer) and the amygdala/Emoter. We have discussed the
HCMP/Memorizer and its role in generating memories in previous articles. We will
briefly revisit how the amygdala/Emoter and the HCMP/Memorizer work together
to form memories.
If a memory were a painting, then the HCMP/Memorizer would sketch the black and
white outline of the event while the amygdala/Emoter would color the remembered
experience with an emotional brush. Thanks to the HCMP/Memorizer and the
amygdala/Emoter, our memories are full of emotional and contextual detail of the
remembered past event.
As we can see, the DA/Motivation stimulus-reward learning system intersects at
many points along our Experiencing Network. These intersections are key to
understanding how we learn about stimuli and rewards.
Before we complete the picture we need to add one more layer to the DA/Motivation
stimulus-reward learning system. At this point you may be wondering, "What exactly
triggers the release of DA/Motivation?" There is a two-part answer to this critical
question.
First, we need to understand the difference between tonic and phasic activity. A tonic
process is one in which there is a constant, low level of activity, while a phasic process
consists of intermittent peaks and valleys, often triggered in response to a certain
stimulus. If a tonic process were like a ball rolling slowly down a never-ending hill,

then a phasic process would be like a ball being intermittently kicked along a field by a
recurring stimulus.
The VTA/Motivator is normally tonically active, meaning that it always produces a
low baseline level of DA/Motivation discharge. While tonic activity of the
DA/Motivation system is required for survival, it turns out that phasic (intermittent)
DA/Motivation activity is necessary for the learning of stimulus-reward pairings.
Unfortunately, here is where the picture gets a little murky (or murkier). Common sense
would suggest that our DA/Motivation stimulus-reward learning system would
respond proportionally to the inherent value of a given reward. Regrettably, this logical
assumption is incorrect. Instead, the degree of activation of the DA/Motivation
stimulus-reward learning system is determined by something called prediction error
(McClure et al., 2003). Prediction error refers to the difference between the expected
value and the actual value of a predicted stimulus-generated reward (McMahon et al.,
2013).
This means that the degree of phasic (intermittent) activation of the
DA/Motivation stimulus-reward learning system depends on the novelty of a
reward and not on the inherent quality of it.
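For readers who like to see the idea stripped to its bones, here is a minimal Python sketch of the concept; the learning rate and update rule are illustrative simplifications, not a model taken from the papers cited here:

```python
# Minimal sketch of the prediction-error idea; the numbers and update rule are illustrative.
def prediction_error(actual_value, expected_value):
    # Positive  -> better than expected -> phasic DA/Motivation burst
    # Near zero -> as expected          -> little or no phasic release
    # Negative  -> worse than expected  -> a transient dip below the tonic baseline
    return actual_value - expected_value

def update_expectation(expected_value, error, learning_rate=0.5):
    # After each experience, nudge the expectation toward what actually happened.
    return expected_value + learning_rate * error
```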
Let's look at an example to clarify this key concept.

DA/MOTIVATION STIMULUS-REWARD LEARNING PATHWAY

Imagine that you have a favorite bakery that makes the most unbelievable scone (my
personal favorite member of the pastry family). The first time you found this bakery the

expected value (reward) of the scone (stimulus) you purchased was low because you
expected just another run-of-the-mill scone. But when you took your first bite and the
sweet, buttery flavor greeted your unprepared taste buds, the actual value of the scone
was revealed to far exceed the expected value. Thus, your first encounter with your new
favorite bakery's scone produced a large prediction error and a proportionally large
phasic activation of the VTA/Motivator and the DA/Motivation stimulus-reward
learning system.
As an evolutionary side note, the DA/Motivation stimulus-reward learning system
evolved so that our ancestors could learn novel associations swiftly. The first time your
ancestor found a red berry that was sweet instead of the bitter one he or she expected, the
VTA/Motivator inundated the brain with DA/Motivation to make sure the lesson
"red berry = delicious calorie source" stuck. Once the association was learned, DA/Motivation could
back off (courtesy of an equilibration between expected and actual reward value),
allowing the memory and cortical systems to take over the knowledge of the arboreal
treat.

SCONE Photo Credit: sweetbeetandgreenbean via Compfight cc

Returning now to your favorite pastry shop, let's imagine that a few months have
passed and your dedication to this wonderful bakery has provided you with many
experiences of the magical scone. Now when you order your scone, the expected value
of the pastry is high and roughly equals the actual value when you take a bite. As a
result, the prediction error is extremely small, and the resultant phasic release of
DA/Motivation is proportionally small, or even absent altogether. (McClure et al.,
2003)
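Here is the bakery story in toy numbers (all of them invented purely for illustration): the first visit produces a large positive prediction error, and the phasic signal shrinks as the expectation catches up:

```python
# Toy version of the bakery story; all values are arbitrary "deliciousness units."
actual_value = 0.9       # the scone really is that good
expected_value = 0.2     # first visit: you expect a run-of-the-mill scone
learning_rate = 0.5      # how quickly expectation catches up with reality

for visit in range(1, 6):
    error = actual_value - expected_value      # prediction error
    phasic_da = max(error, 0.0)                # crude stand-in for the phasic DA burst
    print(f"Visit {visit}: expected={expected_value:.2f}, "
          f"error={error:.2f}, phasic DA signal={phasic_da:.2f}")
    expected_value += learning_rate * error    # expectation moves toward reality
```

The exact numbers mean nothing; the shape of the result, a big burst on the first visit fading toward zero by the fifth, is the point.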

This example is somewhat complicated by the intrinsic DA/Motivation-stimulating
capacity of sugar, but for our purposes we will ignore this (Avena et al., 2008). We
should also note that your continued enjoyment of your sugary scone despite the lack of
phasic DA/Motivation release is made possible by your sense organs, cognitive
networks, and endogenous opioid system. We must remember that DA/Motivation is
not the "pleasure neurotransmitter" perpetuated in popular culture.
Let's examine one final wrinkle before we exhaust our discussion of the now-legendary
scone. Maybe the head baker is on vacation and you purchase a substandard scone made
by an unworthy apprentice. Now your expected value is higher than the actual value of
the scone and the prediction error is negative. Not only does your VTA/Motivator fail
to release a phasic dose of DA/Motivation, but also your baseline tonic
DA/Motivation activity actually transiently decreases. If this tonic dip below baseline
occurs too many times in response to bad scones, you will learn a new aversion to the
once-celebrated bakery.
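The substandard scone is the same arithmetic with the sign flipped; again, the numbers are invented:

```python
# The apprentice's scone: expectation now exceeds reality (numbers still invented).
expected_value = 0.9                    # months of great scones set the bar high
actual_value = 0.3                      # today's scone disappoints
error = actual_value - expected_value   # negative prediction error: the tonic dip
expected_value += 0.5 * error           # the bakery's learned value drifts downward
print(round(error, 2), round(expected_value, 2))   # -0.6 0.6
```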
Let's review this scenario and include all of the structures we have discussed so far.
The sights, sounds, and smells of the bakery likely dominated your perception the first
time you walked through its doors. All of this raw data regarding the bakery
environment gathered by your internal and external sensory structures was then
arranged into an intelligible, conscious story by the mPFC/Emotional Sensor. And as
time passed, the HCMP/Memorizer and amygdala/Emoter continually recorded
your experience into memory.
The first time you tasted your soon-to-be favorite scone, the large prediction error
triggered an equally large release of DA/Motivation from your VTA/Motivator.
The DA/Motivation impacted three important structures within the DA/Motivation
stimulus-reward learning system. The DA/Motivation release in the
NAcc/Connector connected the scone, the ongoing black and white memory of the
bakery environment, and the rewarding consummatory experience. The
VTA/Motivator's axonal projections then arrived at the amygdala/Emoter, spilling
DA/Motivation into the synapse and coloring the affective component of the
HCMP/Memorizer-generated memory as highly salient.

As you ate your scone, your insula/Internal Sensor generated wonderful bodily
undertones of the consummatory experience that were dutifully passed along to your
mPFC/Emotional Sensor. The mPFC/Emotional Sensor gathered all of the
aforementioned data points to create your seamless conscious experience of the bakery
and the heavenly scone. Finally, this conscious experience was marked as extremely
salient by the final projection from the VTA/ Motivator and the DA/Motivation
stimulus-reward learning system to the mPFC/Emotional Sensor itself.

EXPERIENCING NETWORK WITH ACCOMPANYING MEMORY FORMATION

Thus, through a concert of networks and structures, your brain learned to connect the
new bakery to the reward of this heavenly sconal experience.
Now that we have completed our review of the more or less natural reward of a
scrumptious pastry, let's turn to the pathological reward associated with substance
abuse.
Medicinal and recreational substance use has been around for at least 9 millennia
(McGovern et al., 2004). We sometimes conceive of substance use as being a modern
phenomenon, but humans have been exploiting their neurobiology and augmenting their
stimulus-reward learning system for many thousands of years. However, it is only
recently that we have been able to describe the mechanisms of the various substances
of which our species partakes.

Substances of abuse have many different mechanisms for creating the high associated
with their use. This high is made possible by the substances' activity at the neuronal
level.
Americas favorite drug, alcohol, potentiates the inhibitory amino acid neurotransmitter
GABA by binding to the GABA receptor. This increased inhibition produces an
anxiety-relieving effect similar in sensation and mechanism to benzodiazepines such as
Ativan, Xanax, or Valium. Alcohol does not limit its action to the GABA receptor.
Alcohol is hypothesized to stimulate opioid systems, inhibit glutamate pathways, and
affect a wide range of sites throughout the brain.
Opiates, derived from the opium poppy, are perhaps the oldest family of non-alcoholic
drugs known to man. The opiate family includes drugs such as heroin, Oxycontin, and
Vicodin, to name a few (Vicodin also contains acetaminophen, the active ingredient in
Tylenol). Opiates of abuse bind to the opioid μ-receptor, which through a net inhibitory
process produces a subjective experience of euphoria, pain relief, and drowsiness.
Unfortunately, inhibition of pain and anxiety are not opiates' only effects; they also
inhibit breathing at higher doses, making opiates very dangerous in overdose.
Marijuana contains many psychoactive chemicals, but the primary chemical,
Δ-9-tetrahydrocannabinol (THC), acts on cannabinoid (CB) receptors. CB receptors
normally allow the binding of the endogenous chemical anandamide. When THC binds
to the CB receptor, the second messenger cAMP is reduced and this in turn decreases
the general excitability of the brain. This inhibitory action of THC at CB receptors
produces the euphoric and anxiety-reducing effects of marijuana use.
Stimulants include drugs like amphetamines and cocaine. Both drugs act at the
DA/Motivation transporter (DAT), but each drug has its own unique effect on DAT.
DAT normally functions as a reuptake pump to remove unused DA/Motivation from
the synapse. Cocaine blocks DAT (along with the serotonin and norepinephrine reuptake
transporters), stopping the reuptake of DA/Motivation and causing a buildup of
DA/Motivation in the synapse. Amphetamines are structurally similar to
DA/Motivation, so they trick DAT into taking them up into the pre-synaptic axon
terminal while simultaneously making DAT leaky and thus amenable to the
bidirectional movement of DA/Motivation (normally DA/Motivation only
moves into the axon terminal through DAT). Once inside, amphetamines displace
DA/Motivation from its pre-synaptic vesicles. The combination of a now leaky
DAT and the displacement of endogenous DA/Motivation leads to the release of
DA/Motivation into the synapse through DAT. Thus, both cocaine and amphetamines
cause a net increase in DA/Motivation (along with norepinephrine and serotonin)
through different mechanisms of action. (Haines, 2012)
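To see why both routes end in the same place, here is a deliberately cartoonish simulation (made-up rates and arbitrary units, nothing like real pharmacokinetics) in which cocaine is modeled as slowing reuptake through DAT and amphetamines as adding reverse transport out of the axon terminal:

```python
# Cartoon model of synaptic dopamine: steady release in, reuptake out through DAT.
# All rate constants are invented for illustration only.
def simulate_synaptic_da(reuptake_rate, reverse_transport=0.0, steps=50):
    synaptic_da = 0.0
    for _ in range(steps):
        release = 1.0                              # tonic release per time step
        reuptake = reuptake_rate * synaptic_da     # DAT pumps DA back into the terminal
        synaptic_da += release + reverse_transport - reuptake
    return synaptic_da

baseline = simulate_synaptic_da(reuptake_rate=0.5)                            # normal DAT
cocaine = simulate_synaptic_da(reuptake_rate=0.1)                             # DAT blocked
amphetamine = simulate_synaptic_da(reuptake_rate=0.5, reverse_transport=2.0)  # leaky DAT

print(f"baseline ~{baseline:.1f}, cocaine ~{cocaine:.1f}, amphetamine ~{amphetamine:.1f}")
# Both drug conditions leave far more dopamine sitting in the synapse than baseline does.
```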
Despite their disparate mechanisms of primary action, all addictive drugs increase the
release of DA/Motivation. Alcohol, opiates, and marijuana may act on remote
receptors, but these initial actions cause the inhibition of normal inhibitory control
(disinhibition) of the VTA/Motivator, thus increasing its release of DA/Motivation.
In fact, researchers believe that for a substance to be addictive it must activate the
DA/Motivation system. (Cecil, 2012)
Let's use amphetamines as a prototypical substance of abuse to investigate why
exogenous substances are so different from natural stimulus-reward pairs.
The first time an individual consumes amphetamines the series of rewarding events he
or she experiences is very similar to our experience with the scone, with one additional
layer that we will turn to shortly.
The actual value likely far exceeded the expected value the first time our hypothetical
individual used amphetamines, and so the prediction error was quite large. This large
prediction error translated into a large flood of DA/Motivation from the
VTA/Motivator into the NAcc/Connector and amygdala/Emoter.
The sense organs generated the raw data of the external drug-taking experience while
the insula/Internal Sensor added its own bodily flavor of the internal experience.
These unrefined informational packets were then delivered to the mPFC/Emotional
Sensor along with the affective component of the experience courtesy of the
amygdala/Emoter.

The mPFC/Emotional Sensor organized all of these informational packets into an
orderly conscious experience of the high, attaching a large degree of salience to the
experience thanks to the healthy dose of DA/Motivation from the VTA/Motivator
delivered to the mPFC/Emotional Sensor. Simultaneously, the conscious experience
was recorded into memory in full color by the amygdala/Emoter and HCMP/Memorizer.

This may seem very familiar from our bakery discussion in earlier paragraphs.
Unfortunately, we cannot stop here. In addition to this natural cascade of stimulus-reward learning, amphetamines play a nasty trick on the brain.
Amphetamines act on the VTA/Motivator (among other places) to directly
release large amounts of DA/Motivation. Thus, not only do amphetamines stimulate
the natural side of reward learning through prediction error, but they also adulterate
the system and make it impervious to the natural prediction error decay.
The DA/Motivation-producing effects of natural environmental rewards slowly
decay as the expected value approaches the actual value, just like with our scone. The
more times we visit the bakery, the more accurate our predictions of value become, and
before long there is no prediction error to generate a DA/Motivation release. But
amphetamines, along with other addictive drugs, will continue to stimulate large
amounts of DA/Motivation release every time they are consumed no matter how
similar the expected and actual values become.
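Extending the toy model from earlier (still invented numbers; a sketch of the idea rather than a validated model of addiction), we can let the drug contribute a DA/Motivation surge that no amount of accurate prediction can cancel:

```python
# Sketch: a natural reward loses its phasic signal as expectations catch up,
# while a drug that triggers DA release directly keeps signaling regardless.
def da_signal(actual, expected, drug_boost=0.0):
    prediction_error = actual - expected
    return max(prediction_error, 0.0) + drug_boost   # the boost bypasses prediction error

expected_scone, expected_drug = 0.2, 0.2
for exposure in range(1, 6):
    scone_da = da_signal(actual=0.9, expected=expected_scone)
    drug_da = da_signal(actual=0.9, expected=expected_drug, drug_boost=1.5)
    print(f"Exposure {exposure}: scone DA={scone_da:.2f}, drug DA={drug_da:.2f}")
    expected_scone += 0.5 * (0.9 - expected_scone)   # expectations update either way
    expected_drug += 0.5 * (0.9 - expected_drug)
```

The scone column fades toward zero; the drug column never drops below its built-in boost, which is the computational flavor of being impervious to prediction error decay.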

SUBSTANCE ABUSE & THE DA/MOTIVATION STIMULUS-REWARD LEARNING PATHWAY

In fact, even if the high is subjectively unpleasant, the DA/Motivation-stimulating
properties of the drug will ensure that the experience is still recorded
as rewarding. (Sescousse et al., 2013)
Further complicating the picture is the brains adaptation to the increased
DA/Motivation levels caused by the amphetamine abuse. Over time the brain adapts
by desensitizing the entire DA/Motivation stimulus-reward learning system,
downregulating both DA/Motivation receptor density and the production of
DA/Motivation.

Thus, the amphetamine-adapted brain has a lower tonic level of DA/Motivation and
prefers amphetamines to natural rewards. Previously reward-stimulating activities such
as socializing with family and friends, exercise, and eating become secondary to the use
of the addictive substance.
As we can see, substance abuse has a whole litany of negative biological, social, and
emotional consequences. The journey from addiction to sobriety is one measured in
large time intervals, and in the interest of concluding an already long article, we will
skip ahead to the recovery process. The choice to begin the recovery process is very
rarely based on a single event, and instead represents the summation of many small and
large events. Also, recovery is rarely defined by a single, successful bid at sobriety.
Instead, recovery is very often punctuated by multiple periods of abstinence and relapse
that are variable in duration and intensity.
Hopefully now we can better appreciate the complexity of the recovery process with our
new knowledge of the DA/Motivation stimulus-reward learning system. Despite the
challenges of recovery, millions of patients enjoy a sustained sobriety as a result of a lot
of hard work and support. And as the length of sobriety increases, the brain slowly
heals itself (with most drugs of abuse).
I will often tell my patients that "getting sober is easy; it's recovery that is hard." I say
this tongue-in-cheek because no part of giving up an addiction is easy, but the saying
highlights a key issue. When a patient is in the process of sobering up from whatever
substance he or she abused there are plenty of reminders as to why that person should
get sober. These reminders come in the form of withdrawal symptoms.
Withdrawal is a clinical phenomenon that can be simplistically understood as the
reverse of the substance-induced state. Euphoria is replaced by despair, increased
appetite is replaced by nausea, and sedation is replaced by insomnia (to name a few of
the common withdrawal symptoms). In the early stages of recovery, the profoundly
uncomfortable symptoms of withdrawal serve as a constant reminder of the dangers of
substance abuse and motivate the patient to maintain sobriety.
The challenge for many patients comes when they start to feel better and enter the
indefinite recovery stage. After the withdrawal symptoms have abated, all of the

profoundly unpleasant sobriety motivators disperse and the patient is left with all of the
old reasons why he or she might have used in the first place (i.e. anxiety, sadness, etc.).
Additionally, all of the DA/Motivation stimulus-reward learning from previous drug
abuse is still strongly wired into the neuronal architecture of the brain.
While the patient was still abusing a given substance, the DA/Motivation
stimulus-reward learning pathway linked the environments, paraphernalia, and even friends
(stimuli) to the substance-induced high (reward). Thus, just as the pastry aficionado
learned to associate the fabled bakery with an amazing scone, so too does the patient
with a history of substance abuse link the environments, paraphernalia, and friends he
or she used with to an impending high. This strong DA/Motivation link produces the
craving response that patients experience when they return to old environments or
friends they used with.
Importantly, these triggers are gradually unlearned over time as a patient is exposed to
various previously triggering stimuli without succumbing to the urge to use. Thus, the
same biological system that so strongly linked drug abuse to reward can be rewired over
time to support an abstinent brain.
Recovery is a process of redefinition and unflinching self-inquiry; it requires giving up
a large part of the past and opening oneself up to an uncertain future. Old habits must be
relinquished and both physical and social environments must be changed.
Despite the difficulties, if a patient is able to get clean, then time is a powerful ally in
maintaining his or her sobriety. A patient's chance of relapse decreases from a high of
over 60% with only one year of sobriety to less than 15% with five years of sobriety
(Dennis, Foss, & Scott, 2007). With each passing year, patients abandon old habits and
learn new ways of coping with the stresses inherent to their internal and external
environment that had previously driven them to abuse substances.
We have traveled our own long path from sugar addiction, to amphetamine abuse, and
at long last, on to recovery. I hope that the neuroscience of reward has been an
interesting view under the hood of our deepest desires and cravings. As always, it is my
hope that by elucidating the biology of our brains we can gain the distance necessary to
reclaim control of our faculties.

Instead of imagining sugar or drug addiction as a moral failing, it is my hope to reframe
addiction as an unfortunate permutation of our biological stimulus-reward learning
system. Because when we get our moral judgments out of the way and approach the
problem logically, we can use the same biological system to free ourselves from the
grips of reward addiction.

REFERENCES
Avena, N. M., Rada, P., & Hoebel, B. G. (2008). Evidence for sugar addiction: behavioral and neurochemical effects of intermittent, excessive sugar intake. Neuroscience & Biobehavioral Reviews, 32(1), 20-39.
Azevedo, F. A., Carvalho, L. R., Grinberg, L. T., Farfel, J. M., Ferretti, R. E., Leite, R. E., & Herculano-Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513(5), 532-541.
Baynes, J., & Dominiczak, M. H. (2014). Medical Biochemistry: with STUDENT CONSULT Online Access. Elsevier Health Sciences.
Barbey, A. K., Krueger, F., & Grafman, J. (2009). Structured event complexes in the medial prefrontal cortex support counterfactual representations for future planning. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1291-1300.
Cauda, F., Cavanna, A. E., D'agata, F., Sacco, K., Duca, S., & Geminiani, G. C. (2011). Functional connectivity and coactivation of the nucleus accumbens: a combined functional connectivity and structure-based meta-analysis. Journal of Cognitive Neuroscience, 23(10), 2864-2877.
Cecil, R. L. F. (2012). Goldman's Cecil Medicine, Expert Consult Premium Edition: Enhanced Online Features and Print, Single Volume, 24: Goldman's Cecil Medicine (Vol. 2). L. Goldman & A. I. Schafer (Eds.). Elsevier Health Sciences.
Dennis, M. L., Foss, M. A., & Scott, C. K. (2007). An eight-year perspective on the relationship between the duration of abstinence and other aspects of recovery. Evaluation Review, 31(6), 585-612.
Gussow, L. (2013). Toxicology Rounds: Opium, from Ancient Sumeria to Paracelsus to Kerouac. Emergency Medicine News, 35(4), 25.
Haines, D. E. (2012). Fundamental Neuroscience for Basic and Clinical Applications: with STUDENT CONSULT Online Access. Elsevier Health Sciences.
Hyman, S. E., Malenka, R. C., & Nestler, E. J. (2006). Neural mechanisms of addiction: the role of reward-related learning and memory. Annual Review of Neuroscience, 29, 565-598.
Johnstone, E. C., Owens, D. C., & Lawrie, S. M. (2010). Companion to Psychiatric Studies. Elsevier Health Sciences.
Kapur, S. (2003). Psychosis as a state of aberrant salience: a framework linking biology, phenomenology, and pharmacology in schizophrenia. American Journal of Psychiatry, 160(1), 13-23.
Kaufman, D. M., & Milstein, M. J. (2012). Kaufman's Clinical Neurology for Psychiatrists. Elsevier Health Sciences.
Kober, H., Barrett, L. F., Joseph, J., Bliss-Moreau, E., Lindquist, K., & Wager, T. D. (2008). Functional grouping and cortical-subcortical interactions in emotion: A meta-analysis of neuroimaging studies. Neuroimage, 42(2), 998-1031.
Kühn, S., & Gallinat, J. (2012). The neural correlates of subjective pleasantness. Neuroimage, 61(1), 289-294.
McClure, S. M., Daw, N. D., & Montague, P. R. (2003). A computational substrate for incentive salience. Trends in Neurosciences, 26(8), 423-428.
McGovern, P. E., Zhang, J., Tang, J., Zhang, Z., Hall, G. R., Moreau, R. A., & Wang, C. (2004). Fermented beverages of pre- and proto-historic China. Proceedings of the National Academy of Sciences of the United States of America, 101(51), 17593-17598.
McMahon, S., Koltzenburg, M., Tracey, I., & Turk, D. C. (2013). Wall & Melzack's Textbook of Pain: Expert Consult-Online. Elsevier Health Sciences.
Nolte, J. (2009). The Human Brain: An Introduction to Its Functional Anatomy. Elsevier Health Sciences.
Otto, M. W., O'Cleirigh, C. M., & Pollack, M. H. (2007). Attending to emotional cues for drug abuse: Bridging the gap between clinic and home behaviors.
Paxinos, G., & Mai, J. K. (2004). The Human Nervous System. Academic Press.
Phan, K. L., Wager, T., Taylor, S. F., & Liberzon, I. (2002). Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage, 16(2), 331-348.
Sescousse, G., Caldú, X., Segura, B., & Dreher, J. C. (2013). Processing of primary and secondary rewards: a quantitative meta-analysis and review of human functional neuroimaging studies. Neuroscience & Biobehavioral Reviews, 37(4), 681-696.
Schultz, W. (2002). Getting formal with dopamine and reward. Neuron, 36(2), 241-263.
Squire, L. R. (Ed.). (2013). Fundamental Neuroscience. Academic Press.
Stern, T. A., Rosenbaum, J. F., Fava, M., Biederman, J., & Rauch, S. L. (2008). Massachusetts General Hospital Comprehensive Clinical Psychiatry. Elsevier Health Sciences.
