
Experiment No. 1

Reaction time

Problem Statement

To find out the reaction time for words.

Introduction

A word association test assesses personality traits and conflicts: the subject responds to a given word with the first word that comes to mind or with a predetermined type of word, such as an antonym (Harcourt, 2013). The word association test is a common method within psychology which has been used to reveal the private world of an individual. In its simplest form a series of disconnected words (stimulus words) are presented orally or in writing to the respondents, who must respond with the first word which comes to mind (response words).

These associations reveal the respondents’ verbal memories, thought processes, emotional

states, and personalities.

The presentation of stimulus words varies considerably, depending on the methodology, and several different test methods can be distinguished. In a discrete test a stimulus word is presented once, and the respondent must give a single associated response (Nielsen, 1999).

Associative learning

Associative learning is a learning principle that states that ideas and experiences

reinforce each other and can be mentally linked to one another (Spanella, 2015).

Laws of Association

Aristotle counted four laws of association when he examined the processes of

remembrance and recall:

The Law of Contiguity.



Things or events that occur close to each other in space or time tend to get linked together

in the mind. If you think of a cup, you may think of a saucer; if you think of thunder, you immediately think of lightning, since the two often occur one after the other.

The Law of Frequency.

The more often two things or events are linked, the more powerful will be that

association. If you have an eclair with your coffee every day, and have done so for the last

twenty years, the association will be strong indeed -- and you will be fat.

The Law of Similarity.

If two things are similar, the thought of one will tend to trigger the thought of the other.

If you think of one twin, it is hard not to think of the other. If you recollect one birthday, you

may find yourself thinking about others as well.

The Law of Contrast.

On the other hand, seeing or recalling something may also trigger the recollection of

something completely opposite. For example, when we hear the word "hot," we often think

of the word "cold" or if you think of the tallest person you know, you may suddenly recall the

shortest one as well. (Boeree, 2000)

Reaction time

Psychologists examine the nature and probabilities of the response words, and

sometimes the amount of time it takes to respond (Nielsen, 1999). The time that passes

between the introduction of a stimulus and the reaction by the subject to that stimulus is

called reaction time (Psychology Dictionary).



Paired Associative Learning

Paired-associate (PA) learning was invented by Mary Whiton Calkins in 1894 and involves the pairing of two items (usually words), a stimulus and a response, presented together as a pair (e.g., BOAT-CHAIR). When the item pairs are committed to memory, the presentation of the first word (the stimulus word) should evoke the second word (the response word); so presenting "boat" should elicit a response of "chair" (Hilgard, 1987).

Free Association

Free association is an unstructured way of associating a stimulus word with a response word; one can recall words in any order one likes. This technique was introduced by Sigmund Freud to reveal the unconscious mind (Jones, 1999).

Participants

Participants consisted of both males and females.



Experimenter

Name: A.A

Age: 21

Gender: Female

Subject

Name: H.A

Age: 21

Gender: Female

Apparatus

Word Association Test, stopwatch, pencil, paper.

Procedure

The subject was seated comfortably in the testing lab. The subject was relaxed and willing to participate. A Word Association Test consisting of 100 words was used for the experiment. The words on the test were spoken to her one by one, and the participant was asked to say the first thought that came into her mind. The experiment was conducted in a quiet place so that noise did not interfere with it. The time the participant took to respond to each word was noted as the reaction time.

Results

Word    Response    Reaction Time    Word    Response    Reaction Time

Table Chair 1sec Stem Root 1sec

Dark Room 1sec Lamp Light 1sec

Music Enjoyment 1sec Dream GC 1sec



Sickness Pain 2sec Yellow Dirty fellow 1sec

Man Mean 1sec Bread Sandwich 3sec

Deep Thought 1sec Justice Leadership 1sec

Soft Hard 1sec Boy Flirty 2sec

Eating Favorite 3sec Light Dim 1sec

Mountain High 1sec Health Care 1sec

House Peaceful 1sec Bible Holy book 1sec

Black Favorite 1sec Memory Sharp 4sec

Mutton Food 1sec Sheep Animal 4sec

Comfort Zone 1sec Bath Everyday 1sec

Hand Pretty 1sec Cottage Dream 2sec

Short Long 1sec Swift Car 1sec

Fruit Blessing 1sec Blue River 1sec

Butterfly Colorful 1sec Hungry Every time 2sec

Smooth Road 1sec Priest Religious 3sec

Command Computer 1sec Ocean Deep 1sec

Chair Comfortable 1sec Head Headache 3sec

Sweet Chocolate 1sec Stove Sandwich 3sec

Whistle Enjoyment 2sec Long Journey 1sec

Women Mean 2sec Religion Identity 3sec

Cold Drink 1sec Whisky Fun 1sec

Slow Fast 1sec Child Cute 1sec

Wish High 1sec Bitter Sweet 2sec

River Deep 1sec Hammer Instrument 1sec

White Peaceful 2sec Thirsty Crow 1sec



Beautiful Me 1sec City Lahore 2sec

Window Shopping 1sec Square Root 1sec

Rough Look 1sec Butter Bread 1sec

Citizen Patriot 2sec Doctor Engineer 2sec

Foot Blessing 3sec Loud Voice 2sec

Spider Dangerous 1sec Thief Clever 5sec

Needle Pain 1sec Lion Brave 1sec

Red Beautiful 3sec Joy All the time 2sec

Sleep Disturbance 1sec Bed Comfortable 1sec

Anger Aggressive 4sec Heavy Look 3sec

Carpet Smooth 1sec Tobacco Smoking 6sec

Girl Attractive 2sec Baby Cute 1sec

High Hopes 1sec Moon Light 1sec

Working Hours 1sec Scissors Dangerous 1sec

Sour Sweet 2sec Quiet Never 1sec

Earth Round 1sec Green Trees 1sec

Trouble Creator 1sec Salt Pepper 1sec

Soldier Brave 1sec Street Light 1sec

Cabbage Love 1sec King Queen 1sec

Hard Work 1sec Cheese Tasty 1sec

Eagle Clever 2sec Blossom Attractive 1sec

Stomach Empty 2sec Afraid Darkness 1sec

Average Reaction Time = 1.57 sec
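For clarity, the average reaction time reported above is simply the sum of all recorded reaction times divided by the number of stimulus words. A minimal sketch of this calculation in Python (the list of times shown is illustrative, not the full data set):

    # Minimal sketch: computing the average (mean) reaction time.
    # The list below is only illustrative; the actual experiment used 100 words.
    reaction_times = [1, 1, 1, 2, 1, 3, 1, 4]  # seconds, one value per stimulus word

    average_rt = sum(reaction_times) / len(reaction_times)
    print(f"Average reaction time = {average_rt:.2f} sec")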

Discussion

The subject mostly gave contradictory responses to the stimulus words, which showed that the subject has a strong association of contrast.

Conclusion

The average reaction time of the participant was 1.57 seconds.



Experiment No. 2

Flavor Identification

Problem Statement

To find out whether the sense of smell facilitates the sense of taste.

Introduction

The combination of taste and smell, as well as other sensations is called flavor

(Goldstein, 2010). Flavor is the overall impression that we experience from the combination

of nasal and oral stimulation (Lawless, 2001). Flavor is defined as the combined perception

of odor, taste and mouthfeel (Ney, 1988).

Taste and smell are chemical senses because they operate by detecting molecules of

dissolved or vaporous substances that come into contact with the organs of sense and react

chemically within their membranes, stimulating neurotransmitters that send messages to the

brain and produce sensations (Korsmeyer, 1999).

Smell and taste belong to our chemical sensing system (chemosensation). The

complicated process of smelling and tasting begins when molecules released by the

substances around us stimulate special nerve cells in the nose, mouth, or throat. These cells

transmit messages to the brain, where specific smells or tastes are identified (Finger, 2008).

Sense of taste

Gustation (sense of taste) detects chemicals in solution that come into contact with

receptors inside the mouth. Taste is an essential component of flavor. The tongue is a large,

mobile organ comprising several muscles and covered by a mucous membrane. Food entering

the mouth is chewed and mixed with saliva to facilitate swallowing. Taste is a chemical sense

that requires a substance to be dissolved before it can be tasted (Evans & Tippins, 2008). The

sense of taste (gustation) serves as a protector from rotten or putrid food and provides

delightful sensations of creamy chocolate, crunchy carrots etc. (White, Duncan, Baumle,

2010).

The tongue’s taste buds are little bumps that are linked to the brain by nerves. The

taste buds can only recognize tastes when they are in liquid form. So, when we put food into

our mouths it is mixed with saliva to form a liquid that we can taste. The brain tells us what

we taste (Nicanor, 2001).

Taste buds are located mainly along the outside edge of the tongue. Taste receptors are in taste buds located in papillae on the surface of the tongue. A given papilla may contain up to 10 or more taste buds, and each taste bud contains about 50 receptor cells. The receptors for taste are not true neurons but modified skin cells. Taste receptors are gradually sloughed off and replaced, each one lasting about 10 to 14 days (Kalat, 2007).

Basic types of taste

There are four basic types of taste

- Bitter taste buds
- Sour taste buds
- Salty taste buds
- Sweet taste buds

Sense of smell

Sensations are usually thought to be simple basic experiences elicited by simple

stimuli. The sense of smell, also called olfaction, is essential for the survival of many animals and makes life much more interesting for humans. Olfaction is carried out by the olfactory receptors (Goldstein, 1984).

Olfaction (sense of smell) detects chemicals that are airborne, or volatile. Smell is

referred to as one of the chemical senses. The nose is the sensory organ of smell. The sense of

smell has been the subject of research both in the laboratory and in the field. Taste is a poor

sense because much of what we think we taste is really an olfactory contribution of the nose

(Korsmeyer, 1999). Olfaction plays a significant role in the perception of foods. Much of the

taste of a meal derives from olfactory stimulation (Wysocki & Pelchat, 1993).

Odours from food pass upwards into the nasopharynx and nasal cavity to stimulate

olfactory receptors. Much of our interpretation of taste is actually gained through the sense of

smell. Our perception of flavor is comprised of the sensory combination and integration of

odors, tastes, oral irritations & thermal sensations and mouthfeels that arise from a particular

food (Breslin, 2001).

Therefore olfaction is basically the result of the interaction between a chemical stimulus and an olfactory receptor system, causing biological and psychological effects in a living organism (Goldstein, 1984).

Eharnett (2007) concluded in his research that if we did not have a sense of smell, we could still distinguish between something that might be sweet and something that is bitter, but we would not know which food was which, because we identify food based on smell.

Hypothesis

The sense of smell facilitates the sense of taste.

Method

Research Design

Experimental research design was used for studying the role of sense of smell in sense

of taste.

Participants

The university students from the Institute of Applied Psychology were included in the

sample. The age range of the participants was 19-24.

Apparatus

o Paper and Pencil



o Juices (Apple, Mango, Peach, Pineapple and Grapes)

o Straws

Procedure

The participant was asked to come into the experimental room and was seated comfortably. The following instructions were given: "This is a flavor identification experiment. It is a part of our course. You will be given different flavors and you have to identify each flavor. You will be blindfolded at first."

After blindfolding the participant, apple juice was given to her to taste with her nose closed, and she was asked to identify the flavor. Her response was noted down. After the participant had tasted the apple juice, she was introduced to the peach flavor and then the mango flavor, again with the nose closed, and her responses were noted down. The remaining flavors were given to the participant one by one in the same manner.

The participant was then requested to come into the experimental room again. She tasted the juices again, but this time with the nose open. The same procedure was repeated with the nose open, in the same order, and her responses were noted down. At the end of the experiment the participant was thanked for her cooperation.

Results

Table 1

Flavor identification with closed eyes and closed nose

Flavors    Participant 1    Correct (f)    Correct (%)    Incorrect (f)    Incorrect (%)
A          Yes              4              20%            16               80%
B          Yes              5              25%            15               75%
C          Yes              4              20%            16               80%
D          Yes              2              10%            18               90%
E          Yes              5              25%            15               75%

The total frequency of correct responses is 20, which is less than that of incorrect responses, i.e., 80. Therefore, a greater number of participants were unable to correctly identify the flavors with the nose clips on.

Table 2

Flavor identification with closed eyes and open nose

Flavors    Participant 1    Correct (f)    Correct (%)    Incorrect (f)    Incorrect (%)
A          Yes              19             95%            1                5%
B          Yes              18             90%            2                10%
C          Yes              20             100%           0                0%
D          Yes              20             100%           0                0%
E          Yes              20             100%           0                0%

The total frequency of correct responses is 97, which is greater than that of incorrect responses, i.e., 3. Therefore, a greater number of participants were able to correctly identify the flavors without the nose clips.

Results showed that the percentage of total correct responses with the closed nose (45%) was lower than the percentage of total wrong responses (55%), whereas with the open nose the percentage of total correct responses (85%) was greater than the percentage of total wrong responses (15%). These results indicate that the participants identified the flavors much better when the sense of smell was available.
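As a check on these figures, each percentage is simply a frequency divided by the number of responses per flavor, multiplied by 100. A minimal sketch of this calculation in Python, using the closed-nose frequencies from Table 1 (20 responses per flavor):

    # Minimal sketch: correct and incorrect identification rates per flavor.
    # Frequencies are those of Table 1 (closed nose); 20 responses per flavor.
    correct_closed_nose = {"A": 4, "B": 5, "C": 4, "D": 2, "E": 5}
    total_per_flavor = 20

    for flavor, correct in correct_closed_nose.items():
        incorrect = total_per_flavor - correct
        print(f"{flavor}: correct {correct} ({correct / total_per_flavor:.0%}), "
              f"incorrect {incorrect} ({incorrect / total_per_flavor:.0%})")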

Discussion

Results support the hypothesis that the sense of smell facilitates the sense of

taste. Without the sense of smell we would not be able to taste most things. The participants gave more correct responses with the nose open (and eyes closed) than with the nose closed. This shows that the sense of smell plays an important role in the sense of taste.

Conclusion

This experiment shows that the sense of smell plays a role in the sense of taste.

References

Evans, C., & Tippins, E. (2008). Foundations of nursing: An integrated approach. UK: McGraw Hill Education.

Goldstein, E. B. (2010). Sensation and perception. USA: Wadsworth Cengage Learning.

Goldstein, E. B. (2005). The Blackwell handbook of sensation and perception. UK: Blackwell Publishing Ltd.

Korsmeyer, C. (1999). Making sense of taste: Food and philosophy. USA: Cornell University Press.

White, L., Duncan, G., & Baumle, W. (2010). Foundations of adult health nursing (3rd ed.). USA: Cengage Learning, Inc.



Experiment No. 3

Measurement of Differential Threshold by the

Method of Adjustment

Problem Statement

To find out and compare the differential threshold (DL) for the larger comparison line with that for the smaller comparison line.

Introduction

Prior to a century ago the approach to psychological problems consisted primarily of

philosophical speculation. The transition of psychology from a philosophical to a scientific

discipline was greatly facilitated when the German Physicist G. T. Fechner introduced

techniques for measuring mental events (1860). The attempt to measure sensations through

the use of Fechner’s procedures was termed psychophysics and constituted the major research

activity of early experimental psychologists. Since this time psychophysics has consisted

primarily of investigating the relationships between sensations in the psychological domain

and stimuli in the physical domain. Psychophysics is the scientific study of the relation

between stimulus and sensation (Gescheider, 1997).

G. T. Fechner, in his book Elements of Psychophysics set out the principles of

psychophysics, describing the various procedures that experimentalists use to map out the

relationship between matter and mind (Kingdom, 2010).

In the early nineteenth century, German scientists such as E. H. Weber and G. T.

Fechner were interested in the measurement of the sensitivity limits of the human sense

organs. Using measurement techniques of physics and well trained human observers, they

were able to specify the weakest detectable sensations in terms of the stimulus energy

necessary to produce them (Gescheider, 1997).



Threshold

Threshold is the level at which a stimulus is just sufficient to produce a sensation

(Stach, 2008). There are two types of threshold.

Absolute Threshold (AL)

The absolute threshold or stimulus threshold is the smallest amount of stimulus

energy necessary to detect a stimulus (Goldstein, 2010).

Difference Threshold (DL)

Difference threshold is the smallest difference between two stimuli that a person can

detect (Goldstein, 2010).

Fechner’s enduring contribution was to work out the details of three important

sensory test methods and from these methods describe how several important operating

characteristics of sensory systems could be measured. The three basic methods are

- Method of Limits
- Method of Constant Stimuli
- Method of Adjustment

Method of Limits

In the method of limits the experimenter presents stimuli in either ascending or descending order.

Method of Constant Stimuli

In the method of constant stimuli (also called the method of right and wrong cases) the experimenter presents five to nine stimuli with different intensities in random order.

Method of Adjustment

In the method of adjustment (also called the method of average error) the observer or the experimenter adjusts the stimulus intensity in a continuous manner (as opposed to the stepwise presentation of the method of limits) until the observer can just barely detect the stimulus. This barely detectable intensity is then taken as the absolute threshold. This procedure can be repeated several times and the threshold determined by taking the average setting (Goldstein, 2002).

The method could be used to determine difference thresholds based on the variability

of the subject over many attempts at matching. This method is the simplest and most

straightforward technique for deriving threshold data. In it the observer controls the stimulus

magnitude and adjusts it to a point that is just perceptible or just perceptibly different from a

starting level. The threshold is taken to be the average setting across a number of trials by one

or more observers. The method of adjustment has the advantage that it is quick and easy to

implement. Often the method of adjustment is used to obtain a first estimate of the threshold

to be used in the design of more sophisticated experiments (Fairchild, 2005).

Hypothesis

The differential threshold (DL) for the larger comparison line is greater than that for the smaller comparison line.

Method

Research Design

Experimental research design was used.

Participants

University students from the Applied Psychology Department were included in the sample. The age range of the participants was 19-24 years.

Apparatus

o Paper

o Pencil

o Muller Lyer cards



Procedure

At first the subject was seated comfortably. She was instructed about the trials with the Muller Lyer illusion card. She was told that there would be a total of 40 trials, in ascending and descending order: 10 trials from left to right and 10 from right to left in ascending order, and the same for the descending trials.

The experiment was first started with the ascending trials (right to left). Ten right-to-left ascending trials were performed by the participant, and the readings were noted at the point where she found the lines on the standard and variable cards to match.

After the completion of the 10 right-to-left ascending trials she was asked to perform 10 left-to-right ascending trials, and the readings of these trials were noted.

The same trials were then done in descending order: again 10 trials were from left to right and 10 from right to left, and the readings were noted. After the completion of all 40 trials the Muller Lyer illusion card was taken from the participant, and she was thanked for her cooperation in performing the experiment.

Results

Table 1

Descending and ascending trials (left to right and right to left)

Trial      Descending: Right to Left    Descending: Left to Right    Ascending: Right to Left    Ascending: Left to Right
1          4.5                          4.3                          4.2                         4.4
2          4.7                          4.7                          4.4                         4.3
3          4.6                          4.3                          4.4                         4.5
4          4.7                          4.5                          4.1                         4.7
5          4.4                          4.7                          4.2                         4.8
6          4.4                          4.6                          4.0                         4.7
7          4.3                          4.7                          4.9                         4.7
8          4.6                          4.7                          4.1                         4.8
9          4.5                          4.4                          4.3                         4.6
10         4.4                          4.4                          4.4                         4.3
Total      T1 = 45.1                    T2 = 44.9                    T3 = 43                     T4 = 45.8
Average    T1 = 4.51                    T2 = 4.49                    T3 = 4.3                    T4 = 4.58

Descending trials: Point of subjective equality (PSE) = 4.5; Interval of uncertainty (IU) = -0.2; Differential threshold (DL) = -0.1
Ascending trials: Point of subjective equality (PSE) = 4.44; Interval of uncertainty (IU) = 2.8; Differential threshold (DL) = 1.4
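The summary values in Table 1 can be reproduced from the trial readings. A minimal sketch in Python for the ascending trials, assuming the conventions that the reported values appear to follow (PSE as the mean of the two directional averages, IU as the difference of the two directional totals, and DL as half of the IU); this is an inference from the numbers above, not a definitive statement of the method:

    # Minimal sketch of the summary values for the ascending trials in Table 1.
    # Conventions assumed from the reported values: PSE = mean of directional
    # averages, IU = difference of directional totals, DL = IU / 2.
    right_to_left = [4.2, 4.4, 4.4, 4.1, 4.2, 4.0, 4.9, 4.1, 4.3, 4.4]
    left_to_right = [4.4, 4.3, 4.5, 4.7, 4.8, 4.7, 4.7, 4.8, 4.6, 4.3]

    t3, t4 = sum(right_to_left), sum(left_to_right)           # 43.0 and 45.8
    pse = (t3 / len(right_to_left) + t4 / len(left_to_right)) / 2
    iu = t4 - t3                                               # interval of uncertainty
    dl = iu / 2                                                # differential threshold
    print(f"PSE = {pse:.2f}, IU = {iu:.1f}, DL = {dl:.1f}")    # PSE = 4.44, IU = 2.8, DL = 1.4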

Discussion

An illusion is a misinterpretation of a perception. The Muller Lyer illusion is one form of illusion, and it normally occurs when an arrowhead line and an inverted-arrowhead line are perceived at the same time. This phenomenon drives us to explore the limits of the human visual system. The present results show that we perceive the two lines to be different when in fact they are equal (the arrowhead and inverted-arrowhead lines). The results also revealed a difference between the errors of the descending and ascending orders, which is reflected in the point of subjective equality (PSE). The differential threshold for the descending trials is -0.1 and for the ascending trials is 1.4, which means that the differential threshold for the ascending trials is greater than for the descending trials. Therefore, the hypothesis that the differential threshold (DL) for the larger comparison line is greater than for the smaller comparison line is supported.

Conclusion

Results showed that Differential threshold for larger comparison line was greater than

for smaller comparison line.



References

Fairchild, M. D. (2005). Color appearance models (2nd ed.). Retrieved from http://www.books.google.com.pk/books?isbn=0470012692

Gescheider, G. A. (1997). Psychophysics: The fundamentals. Retrieved from http://www.books.google.com.pk/books?isbn=113480122X

Goldstein, E. B. (2002). Sensation and perception (6th ed.). USA: Thomson Learning, Inc.

Goldstein, E. B. (2010). Sensation and perception (8th ed.). USA: Wadsworth Cengage Learning.

Kingdom, F. A. A. (2010). Psychophysics: A practical introduction. Retrieved from http://www.books.google.com.pk/books?isbn=0080920225

Stach, B. A. (2010). Clinical audiology: An introduction (2nd ed.). Retrieved from http://www.books.google.com.pk/books?isbn=0766862887

Experiment No. 4

Digit Span

Problem Statement

To find out whether the forward digit span is greater than the backward digit span.

Introduction

Memory is essential to all our lives. Without a memory of the past, we cannot operate

in the present or think about the future. We would not be able to remember what we did

yesterday, what we have done today or what we plan to do tomorrow. Without memory we

could not learn anything. Memory is involved in processing vast amounts of information. This

information takes many different forms, e.g. images, sounds or meaning. Memory is the term

given to the structures and processes involved in the storage and subsequent retrieval of

information. Memory is the process of maintaining information over time (Matlin, 2005).

Memory is the means by which we draw on our past experiences in order to use this

information in the present (Sternberg, 1999). Memory is a mental system that receives,

stores, organizes, alters and recovers information from sensory input (Coon, 1997).

Encoding refers to making mental representations of information so that it can be placed into

our memories. Storing is the process of placing encoded information into relatively

permanent mental storage for later recall. Retrieving is the process of getting or recalling

information that has been placed into short-term or long-term storage (Plotnik, &

Kouyoumdjian, 2011).

Information Processing Model describes that human memory is much like that of a

computer where information enters the system, is processed and coded in various ways and is

then stored. According to its popular version information first enters a memory store, some of

that information is coded into short term memory and some of short term memory is

transferred into long term memory (Kalat, 2002).



Basic Types of Memory

- Sensory memory
- Short-term memory
- Long-term memory

These are the three basic types. Information first enters sensory memory, which holds

an exact copy of the data for a few seconds. Short-term memory is the next step, and it holds

small quantities of information for a brief period longer than sensory memory. Selective

attention is utilized at this time to regulate what information is transferred to short-term

memory. Unimportant information is removed permanently (Coon, 1997).

Sensory memory refers to an initial process that receives and holds environmental

information in its raw form for a brief period of time, from an instant to several seconds. It

has two subtypes; (1) Iconic memory is a form of sensory memory that automatically holds

visual information for about a quarter of a second or more; as soon as you shift your

attention, the information disappears. The word icon means “image.”, (2) Echoic memory is a

form of sensory memory that holds auditory information for 1 or 2 seconds (Plotnik, &

Kouyoumdjian, 2011). The brief storage of sensory information is called sensory store also

known as iconic memory (Kalat, 2002).

Short-term memory, also called working memory, refers to another process that can

hold only a limited amount of information, an average of seven items, for only a short period

of time, 2 to 30 seconds (Plotnik, & Kouyoumdjian, 2011). Miller’s (1956) Magic number 7

(plus or minus two) provides evidence for the capacity of short term memory. Most adults

can store between 5 and 9 items in their short-term memory. This idea was put forward by

Miller (1956) and he called it the magic number 7. He thought that short-term memory could hold 7 (plus or minus 2) items because it only had a certain number of "slots" in which items could be stored.

Long-term memory refers to the process of storing almost unlimited amounts of

information over long periods of time. There are two types of long term memory; 1.

Declarative (explicit) memory involves memories for facts or events, such as scenes, stories,

words, conversations, faces, or daily events. 2. Procedural memory, also called non-

declarative (implicit) memory, involves memories for motor skills (playing tennis), some

cognitive skills (learning to read), and emotional behaviors learned through classical

conditioning (fear of spiders). We cannot recall or retrieve procedural memories (Plotnik, &

Kouyoumdjian, 2011).

Hypothesis

Forward memory span for digits is greater than backward memory span for digits.

Method

Research Design

Experimental research design was used.

Participants

One university student from Institute of Applied Psychology participated in the

experiment. The age of the participant was 22.

Apparatus

- Paper
- Pencil
- Cards

Procedure

At first the participant was seated comfortably. She was instructed about the trials of the memory experiment. She was told that there would be a total of 16 trials, consisting of forward and backward trials, in which 8 trials would be for forward memory recall of digits and 8 for backward memory recall. She was told that she would be shown 8 cards consisting of different digits, starting from 3 digits and increasing the number of digits with each card. Each card would be shown to her for 2 seconds, and she was expected to recall the digits 20 seconds after watching it.

The experiment was first started with the forward trials. In the first forward trial the participant was shown a card consisting of different digits for 2 seconds. She was asked to recall the 3 digits of Card 1 in a forward manner after 20 seconds, and her response was noted. In the next trial Card 2 was shown to her for 2 seconds, and she was asked to recall the 4 digits of the second card after 20 seconds in the same manner as Card 1, her response again being noted. The same procedure was repeated for the remaining 6 trials, and the memory recall of the participant was noted.

After the forward trials, the backward trials were started using different digits on the cards. In the first backward trial the participant was shown a card consisting of different digits for 2 seconds. She was asked to recall the 3 digits of Card 1 in a backward manner after 20 seconds, and her response was noted. In the next trial Card 2 was shown to her for 2 seconds, and she was asked to recall the 4 digits of the second card after 20 seconds in the same manner as Card 1. The same procedure was repeated for the remaining 6 trials, and the backward memory recall of the participant was noted.

Results

Table 1

Forward Memory Recall for Digits (N =8)

No. of Trials Digits Scores

1 173 3

2 3642 4

3 86647 5

4 247832 6

5 7815574 7

6 55467231 7

7 643485721 7

8 4317645891 6

Table 2

Backward Memory Recall for Digits (N =8)

No. of Trials Digits Scores

1 487 3

2 6545 4

3 58647 5

4 354672 6

5 6832547 7

6 42579136 7

7 637842175 9

8 4237932162 9

The results show that the backward memory recall for digits is greater than the forward memory recall for digits. The scores of the participant were greater for backward memory recall, which reflects her better backward recall.
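For comparison, the mean span scores can be computed directly from the two tables. A minimal sketch in Python, using the scores listed above:

    # Minimal sketch: mean recall scores for forward and backward digit span,
    # using the scores listed in Tables 1 and 2 above.
    forward_scores = [3, 4, 5, 6, 7, 7, 7, 6]
    backward_scores = [3, 4, 5, 6, 7, 7, 9, 9]

    mean_forward = sum(forward_scores) / len(forward_scores)      # 5.625
    mean_backward = sum(backward_scores) / len(backward_scores)   # 6.25
    print(f"Mean forward span score = {mean_forward:.2f}")
    print(f"Mean backward span score = {mean_backward:.2f}")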

Graph

[Line graph of forward and backward digit span: scores (y-axis, 0-10) across trials 1-8 (x-axis), with separate lines for forward digits and backward digits.]

The graph shows the results of the forward and backward digit span. Forward digit span recall remains at its best until the 6th trial and then declines over the last 2 trials, whereas backward digit span recall remains steady until the 6th trial and increases over the last 2 trials.

Discussion

The participant scored higher on backward memory recall for digits (3, 4, 5, 6, 7, 7, 9, and 9). Her forward memory recall for digits was relatively poor, so the hypothesis was not supported. There could be many reasons behind this. Perhaps the participant was very quick at learning the backward digits and organizing them well. There could also have been a lack of attention while she was being shown the forward digits for recall, which led to her poorer scores in forward memory recall (3, 4, 5, 6, 7, 7, 7, and 6).

Conclusion

Backward memory recall for digits is greater than forward memory recall.

References

Coon, D. (1997). Essentials of psychology. New York: Brooks/Cole Publishing.

Kalat, J. W. (2002). Introduction to psychology (6th ed.). USA: Wadsworth Thomson Learning.

Matlin, M. W. (2005). Cognition. Crawfordsville: John Wiley & Sons, Inc.

Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

Plotnik, R., & Kouyoumdjian, H. (2011). Introduction to psychology (9th ed.). Canada: Wadsworth, Cengage Learning.

Sternberg, R. J. (1999). Cognitive psychology (2nd ed.). Fort Worth, TX: Harcourt Brace College Publishers.

Experiment No. 5

Word association
Problem Statement
To find out the personality traits of the participant.

Introduction

The Revised NEO Personality Inventory is a psychological personality inventory,


first published in 1990 as a revised version of inventories dating to 1978. The NEO PI-R
consists of 240 questions intended to measure the Big Five personality traits: Extraversion,
Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Additionally,
the test measures six subordinate dimensions (known as facets) of each of the main
personality factors. The test was developed by Paul Costa, Jr. and Robert McCrae for use
with adult men and women without overt psychopathology, but was later shown to be
potentially useful at younger ages. A shortened version, the NEO Five-Factor
Inventory (NEO-FFI), uses 60 items (12 items per domain).

Both the NEO PI-R and NEO-FFI have been updated over the years, with their last
update published in 2010. While the NEO PI-R is still being published, the NEO-PI-3 and
NEO-FFI-3 feature updated normative data and new forms.

The following are the five factors measured by the NEO Five-Factor Inventory:
Extraversion

Energy, positive emotions, surgency, assertiveness, sociability and the tendency to

seek stimulation in the company of others, and talkativeness. High extraversion is often

perceived as attention-seeking, and domineering. Low extraversion causes a reserved,

reflective personality, which can be perceived as aloof or self-absorbed.

Agreeableness

A tendency to be compassionate and cooperative rather than suspicious

and antagonistic towards others. It is also a measure of one's trusting and helpful nature, and

whether a person is generally well-tempered or not. High agreeableness is often seen as naive

or submissive. Low agreeableness personalities are often competitive or challenging people,

which can be seen as argumentative or untrustworthy.

Neuroticism

The tendency to experience unpleasant emotions easily, such as anger, anxiety,

depression, and vulnerability. Neuroticism also refers to the degree of emotional stability and

impulse control and is sometimes referred to by its low pole, "emotional stability". A high

need for stability manifests as a stable and calm personality, but can be seen as uninspiring

and unconcerned. A low need for stability causes a reactive and excitable personality, often

very dynamic individuals, but they can be perceived as unstable or insecure.

Conscientiousness

A tendency to be organized and dependable, show self-discipline, act dutifully, aim

for achievement, and prefer planned rather than spontaneous behavior. High conscientiousness is often perceived as stubbornness and obsessiveness. People low in conscientiousness are flexible and spontaneous, but can be perceived as sloppy and unreliable.

Openness to experience

Appreciation for art, emotion, adventure, unusual ideas, curiosity, and variety of
experience. Openness reflects the degree of intellectual curiosity, creativity and a preference
for novelty and variety a person has. It is also described as the extent to which a person is
imaginative or independent, and depicts a personal preference for a variety of activities over a
strict routine. High openness can be perceived as unpredictability or lack of focus. Moreover,
individuals with high openness are said to pursue self-actualization specifically by seeking
out intense, euphoric experiences, such as skydiving, living abroad, gambling, et cetera.
Conversely, those with low openness seek to gain fulfillment through perseverance, and are
characterized as pragmatic and data-driven—sometimes even perceived to be dogmatic and
closed-minded. Some disagreement remains about how to interpret and contextualize the
openness factor.
Hypothesis

Participants

Participants consisted of both males and females.

Experimenter

Name: S.Z

Age: 22

Gender: Female

Subject

Name: H.A

Age: 21

Gender: Female

Apparatus

NEO Big Five Factor Inventory, Pencil.

Procedure

The subject was seated comfortably in the testing lab. The environment of the lab was calm and well lit. The subject was relaxed, calm and willing to participate. The participant was informed about the right to confidentiality and privacy, and was told that she had the right to withdraw from the experiment at any time without any undesirable consequences. The participant was given the inventory, and the instructions for filling it in were explained well.

Results

Word    Response    Type of association    Word    Response    Type of association

Table Chair contrast Stem Root contrast

Dark Room similarity Lamp Light similarity

Music Enjoyment contrast Dream sleep similarity

Sickness Pain contrast Yellow sunflower similarity



Man women contrast Bread Sandwich similarity

Deep Thought Similarity Justice Leadership continuity

Soft Hard contrast Boy Flirty Similarity

Eating Favorite Similarity Light Dim contrast

Mountain High Similarity Health Care Similarity

House Peaceful Similarity Bible Holy book Similarity

Black Favorite proximity Memory Sharp continuity

Mutton Food Similarity Sheep Animal Similarity

Comfort Zone Similarity Bath Everyday Similarity

Hand Pretty Similarity Cottage Dream Similarity

Short Long contrast Swift Car Similarity

Fruit Blessing Similarity Blue River Similarity

Butterfly Colorful Similarity Hungry Every time Continuity

Smooth Road contrast Priest Religious Similarity

Command Computer Similarity Ocean Deep Similarity

Chair Comfortable Similarity Head Headache continuity

Sweet Chocolate Similarity Stove Sandwich Similarity

Whistle Enjoyment Similarity Long Journey Similarity

Women Mean contrast Religion Identity continuity

Cold Drink Similarity Whisky Fun Similarity

Slow Fast contrast Child Cute Similarity

Wish High Similarity Bitter Sweet contrast

River Deep Similarity Hammer Instrument Similarity

White Peaceful Similarity Thirsty Crow Similarity

Beautiful Me Similarity City Lahore continuity



Window Shopping Similarity Square Root Similarity

Rough Look proximity Butter Bread continuity

Citizen Patriot Similarity Doctor Engineer contrast

Foot Blessing Similarity Loud Voice continuity

Spider Dangerous Similarity Thief Clever Similarity

Needle Pain Similarity Lion Brave Similarity

Red Beautiful Similarity Joy All the time proximity

Sleep Disturbance continuity Bed Comfortable Similarity

Anger Aggressive Similarity Heavy Look contrast

Carpet Smooth Similarity Tobacco Smoking Similarity

Girl Attractive Similarity Baby Cute Similarity

High Hopes Similarity Moon Light Similarity

Working Hours proximity Scissors Dangerous Similarity

Sour Sweet contrast Quiet Never proximity

Earth Round Similarity Green Trees Similarity

Trouble Creator Similarity Salt Pepper contrast

Soldier Brave Similarity Street Light Similarity

Cabbage vegetable Similarity King Queen contrast

Hard Work contrast Cheese Tasty Similarity

Eagle Clever Similarity Blossom Attractive Similarity

Stomach Empty continuity Afraid Darkness Similarity
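A simple way to summarize the table above is to count how often each type of association occurs. A minimal sketch in Python (only a handful of entries from the table are shown for illustration):

    # Minimal sketch: tallying how often each type of association occurs.
    # Only a few illustrative (word, response, type) entries from the table are shown.
    from collections import Counter

    associations = [
        ("Table", "Chair", "contrast"),
        ("Dark", "Room", "similarity"),
        ("Lamp", "Light", "similarity"),
        ("Justice", "Leadership", "continuity"),
        ("Black", "Favorite", "proximity"),
    ]

    type_counts = Counter(assoc_type for _, _, assoc_type in associations)
    for assoc_type, count in type_counts.most_common():
        print(f"{assoc_type}: {count}")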



Table 1

The following are the scores

Trait Scores

Neuroticism 27

Extraversion 33

Openness 24

Agreeableness 20

Conscientiousness 31

Discussion

From the experiment it was concluded that the participant has a high score in extraversion, which means the participant has a sociable personality and likes making friends. The participant also has a high score in conscientiousness, which means that the participant has good impulse control.

Conclusion

The NEO Five-Factor Inventory is a good inventory which describes a person's personality without bias.

Experiment No. 7

Preparation for Pigeon Experiments

Anatomy and Sensory Capacities of Pigeon Brain

Pigeons are incredibly complex and intelligent animals. They are one of only a small

number of species to pass the mirror test, a test of self-recognition. The anatomy of the avian brain will help us to understand how it works.

The Avian Nervous System consists of central nervous system, including the brain &

spinal cord and peripheral nervous system, including cranial & spinal nerves, autonomic

nerves & ganglia, & sense organs. In anatomy the brain is part of the central nervous system

which is enclosed by the cranium, and in birds it consists of three principal divisions, named

after their position: Hindbrain, Midbrain and Forebrain.

The hindbrain is composed of the medulla oblongata, the direct and comparatively

little modified continuation of the spinal cord, and of the cerebellum, these two parts being

connected with each other by the pedunculi or crura cerebelli.

The midbrain contains the peduncles of the great or forebrain, and the cortex or rind

of the optic lobes. The forebrain is subdivided into the thalamencephalon and into the

cerebral hemispheres.

Due to common ancestry, the brains of reptiles & birds are similar. However, birds

have relatively larger cerebral hemispheres & cerebella. In addition, birds have larger optic

lobes & smaller olfactory bulbs (Husband and Shimizu 1999).

The functions of the avian nervous system are to obtain (via sensory receptors)

information about the internal & external environment, analyze as needed, to respond to that

information, store information as memory & learning and coordinate outgoing motor

impulses to skeletal muscles & the viscera (smooth muscle, cardiac muscle, & glands).

Birds, compared to other vertebrates, have few taste buds. Avian taste buds can be

located on the tongue & floor of the pharynx as well as the palate. Some birds appear to have

a well-developed sense of taste: for example, Sanderlings and Dunlins can distinguish between sand

where no worms had been present & sand where worms had been present. Hummingbirds can

distinguish different types of sugars & solutions with different concentrations of sugar. Many

birds are tolerant of acidic & alkaline solutions, which may permit the exploitation of

otherwise unpalatable food resources, e.g., unripe fruit (Mason & Clark 2000)

Touch receptors known as Herbst corpuscles are abundant in the bills of some birds,

such as waterfowl & shorebirds, & in the tongues of other birds, such as woodpeckers.

Additional touch/pressure receptors (Merkel cells) are found in the dermis (skin) of birds.

It was traditionally thought that birds have a limited sense of smell (because of their small olfactory lobes), but most birds probably can smell & use odors in daily activities & some

birds appear to have a very well developed sense of smell, including Turkey Vultures, Kiwis,

& many seabirds. Pigeons have excellent hearing abilities. They can detect sounds at far

lower frequencies than humans are able to, and can thus hear distant storms and volcanoes.

Humans and birds have brains that are wired in a similar way. Areas important for

high-level cognition such as long-term memory and problem solving are wired up to other

regions of the brain in a similar way. This is despite the fact that both mammal and bird

brains have been evolving down separate paths over hundreds of millions of years.

Evolution has discovered a common blueprint for high-level cognition in brain

development (Nakabayashi, 1999).

Successful navigation in birds is based first on an ability to determine directions in

space (compass sense), relying on the sun, stars and earth's magnetic field. This compass

sense promotes the development of an ability to determine relative location in space (map

sense), which, depending on distance to a goal, exploits predictable variation in the spatial

properties of visual landmarks, atmospheric odors and perhaps the earth's magnetic field. The

hippocampus of birds is a brain region particularly well suited for implementing navigation

based on the map-like representation of familiar landmarks.

Motor neurons of pigeons control muscles of the wings and legs; these cells may have

multiple behavioral functions, perhaps innervating muscles controlling the elaborate dancing

and wing-snapping of these birds. This evidence indicates that sex steroids may control

diverse behaviors in male birds in part by acting directly on the spinal neural circuits (Arends

& Zeigler, 1986).

Birds have been shown in previous studies to possess a range of skills such as a

capacity for complex social reasoning, an ability to problem solve and some have even

demonstrated the capability to craft and use tools. Areas called 'hub nodes' are regions of

the brain that are major centers for processing information and are important for high level

cognition. Prefrontal cortex is important for complex thought such as decision making

(Jechura & Kahn, 2009).

Purchase of Pigeon

The pigeon was purchased from the bird market by the in-charge of the experimental lab. The bird was male, to avoid the process of laying eggs in the case of a female bird.

General Care and Housing

The room in which the pigeon was housed was dry, well ventilated and kept as well lit as possible. Grain was stored in covered plastic containers, which kept it dry, clean

and away from rats and mice. Dry food was given to the pigeon. Food and water containers in

the home cage were well separated. Fresh and clean water was given at all times. Water

container was placed outside the cage where the pigeon could not walk in it, and where the

water was less likely to be contaminated by droppings. Unclean water and dampness in the

room or cage encourage the spread of many diseases and parasites.

Pigeons are tough, hardy, and healthy animals, and most diseases can be prevented by

buying animals from reputable breeders by keeping the room and cages clean and dry, by

feeding sound, seasoned grain, and by avoiding the contamination of food and water

containers.

The pigeon was kept in the animal lab, with a separate cage for each pigeon. An effort was made to maintain the environment of the lab to suit the pigeon, because he was not familiar with that environment. The home cage was labeled with the names of the experimenters and their roll numbers, and a number was also given to the pigeon. The home cage was made of metal.

The food and water containers were of plastic, and were placed on the outside of the

cage. Pigeons are unique in the avian world in that they drink by inserting the whole beak

under water and pumping the water in. They cannot drink any other way. A deep food

container was presented so that the grain could be an inch below the top of the container.

The food and water containers had hooks that were hung on the strip of wood at the front of the box. The water in the water container was kept clean and fresh.

Pigeons, unlike chickens, should not be fed mash or green feed. All the grains were

thoroughly dried and seasoned. Food was brought ready mixed from a commercial feed

house. The fresh and clean water was supplied constantly.

Handling

In catching, holding, or carrying a pigeon it is important to support the bird and to

keep its wings folded. Avoid sudden movements and loud noises for these will produce

emotional behavior in the pigeon. In holding a pigeon, have the bird's head facing towards the

experimenter and his tail away from him. If the pigeon is facing away, the hand will force its

wrists (wing butts). If the pigeon’s face is towards the experimenter, the wing butts will lie in

the natural position. Support the pigeon in the palms of hands, holding down his wings with

the thumbs and securing his legs between first and second fingers. A pigeon has strong legs

and if he places his feet against the palm of the hand and the experimenter will probably be

unable to hold him. Hold the pigeon firmly but gently with his breast against the heel of the

hands. If he struggles, even though the first reaction may be to let go. When transferring a

pigeon to another person, do not release the hold until the other person has the pigeon

securely in hand.

Calculation of Ad libitum Weight

When the pigeon has free access to food and his weight becomes approximately

constant, he has reached what is called his ad libitum weight. For the first few days after the

pigeon arrived in the laboratory, he was given food and water at all times. He was weighed at approximately the same time every day, and a record was kept of his weight. The easiest way to weigh a pigeon is to put him in a covered box; he will stand quietly as long as he is in darkness. When the pigeon's weight became almost constant, his weights for the last three recorded days were added and divided by 3 to determine the ad libitum weight.

The calculation of the ad libitum weight of the pigeon was:

Ad libitum weight = (265 + 259 + 250) / 3 = 258 grams

Pigeon Weight Chart

Date Weight (g)

27 Oct 2015 265

28 Oct 2015 259

29 Oct 2015 250

30 Oct 2015 263



31 Oct 2015 245

1 Nov 2015 251

2 Nov 2015 246

3 Nov 2015 253

4 Nov 2015 260

5 Nov 2015 252

6 Nov 2015 243

7 Nov 2015 240

8 Nov 2015 241

9 Nov 2015 250

10 Nov 2015 242

11 Nov 2015 250

12 Nov 2015 270

13 Nov 2015 265

14 Nov 2015 257

15 Nov 2015 248

16 Nov 2015 233

17 Nov 2015 230

18 Nov 2015 225

19 Nov 2015 220

20 Nov 2015 217

21 Nov 2015 220

22 Nov 2015 210

Calculation of Experimental Weight



Once the ad libitum weight was determined, the pigeon's weight was reduced by giving him a measured amount of food of 3 grams. Hunger motivation is very effective in conditioning. The pigeon was brought down to 80 percent of his weight. Only 3 grams of food was given, along with water, to reduce his weight until he reached his 80 percent weight. To calculate the experimental weight, the ad libitum weight was divided by 100 and multiplied by 80. The pigeon's experimental weight was 206 grams.

Experimental weight = 258 / 100 x 80 = about 206 grams
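Both weight calculations follow directly from the recorded weights: the ad libitum weight is the mean of the three quoted daily weights, and the experimental weight is 80 percent of that value. A minimal sketch in Python:

    # Minimal sketch: ad libitum and experimental (80%) weight calculations,
    # using the three weights quoted in the text above.
    stable_weights = [265, 259, 250]  # grams, the three days used in the text

    ad_libitum_weight = sum(stable_weights) / len(stable_weights)  # 258.0 g
    experimental_weight = ad_libitum_weight / 100 * 80             # 206.4 g, reported as 206
    print(f"Ad libitum weight   = {ad_libitum_weight:.0f} g")
    print(f"Experimental weight = {experimental_weight:.0f} g")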

Weight Reduction

The pigeon’s weight was reduced because to perform experiments. Once the pigeon’s

adlibitum and experimental weight were determined, his experimental weight was his 85

percent of total weight that was 206 grams and his actual weight was 265 grams. The purpose

was to reduce the pigeon’s weight from 265 to 206 grams. For that purpose his measured

food was determined. He was given 5 grams food in whole day. When his weight reached to

272 grams and there was difference of 20-25 grams between experimental and adlibitum

weight his experiments were started.

Chamber Adaptation

Chamber adaptation started after the weight reduction of the pigeon. The main focus of chamber adaptation was to familiarize the pigeon with the chamber, so that at the time of the experiment the pigeon would not show any hesitation. When the pigeon was taken into the laboratory, it was quite a different setting for him; it was a new environment. As the cage and the chamber were both new settings for him, he was made familiar with them, and he learned that he now had to take food from the chamber. Before starting the chamber adaptation, grain or food was placed in the same container that he had used in his cage before.



Chamber Adaptation day 1

On the first day he was not active. He took one big round of the chamber and changed his position. He wandered here and there and observed the chamber. He then stood still and did not show any movement; he made only a little movement and then became still again. He did not eat a single grain of food.

Chamber Adaptation day 2

On the second day his weight was reduced to 262 gm. When he was taken to the chamber he remained still for 2 minutes and then started to take food from the container. While taking food he kept moving his head around the chamber for almost 3 seconds at a time. He was active that day and was pecking intensely. He was then removed from the chamber.

Magazine Training

When the pigeon was well adapted to the chamber, the magazine training was started. In magazine training the pigeon was trained to take food from the magazine. Magazine training is a pre-experimental period of adapting the pigeon to the experimental box and to the food magazine, which provides a good opportunity to measure the operant level of pecking. During magazine training the pigeon learned to associate the light and sound of the magazine with food. The total time the subject spends in the experimental box and the total number of times he pecks the stimulus box need to be noted, as this provides a measure of the operant level of pecking the food box (Reese, 1964). The pigeon's weight was noted before starting the magazine training. It was done in three days.

Magazine Training day 1

On the first day his weight was 253 grams. The pigeon was placed in the chamber; the house light remained on during the session and food was presented in the magazine. Thirty trials were taken by the experimenters with fixed intervals of time; the time interval between trials was determined, for a total of 150 seconds, and the inter-trial interval (VITI) was fixed at 5 seconds. The food was presented in the magazine for five seconds at a time: the magazine remained on for 5 seconds and off for 5 seconds. At the start of the experiment the pigeon was confused because he did not know where to take food from. When he was placed into the chamber he was passive and did not show any movement. He remained still in one corner of the chamber and did not peck at the magazine.

Magazine Training day 2

The next day the same procedure was repeated except for the inter-trial interval (VITI), which was now varied from trial to trial, for a total of 150 seconds. The pigeon's weight was noted before the experiment; his weight was 260 grams. When he was placed into the chamber he did not show any movement for the first 8 trials. At the 9th trial he changed his position. The magazine remained on for different periods of time, such as 5 seconds, 6 seconds, and 3 seconds, but remained off for 5 seconds. At the 11th trial he moved his head a little, looked up, and came near the magazine. At the 13th trial he started pecking at the magazine. At the 18th trial he looked at the window of the chamber; whenever the magazine was off he started to move around the chamber, and he was still again at one spot on the 18th trial. At the 31st trial he started to peck again, and up to the 43rd trial he continuously took the food when the magazine was on and moved around the chamber when the magazine was off.

Table 1

Trials and Responses of Second Day of Magazine Training (N=43)

No. of Trials VITI (sec)

1 4

2 3

3 5

4 7

5 6

6 5

7 3

8 4

9 6

10 7

11 7

12 4

13 5

14 3

15 6

16 7

17 5

18 4

19 6

20 3

21 5

22 3

23 4

24 6

25 7

26 7

27 4

28 5

29 3

30 6

31 6

32 3

33 5

34 7

35 4

36 3

37 6

38 7

39 5

40 4

41 7

42 4

43 5

Magazine Training day 3

On the third day the pigeon was passive on the 1st trial. He moved his head a little in the 2nd trial and moved towards the light at the 3rd trial. From the 4th trial till the end he was actively responding to the magazine and making intense pecks at the food. Whenever the magazine light was off during the trials he moved around the chamber, and when the light was on he ate food from the magazine. He showed very aggressive behavior and made intense pecks.

Table 2

Trials and Responses of Third Day of Magazine Training (N=30)

No. of Trials VITI (sec)

1 4

2 3

3 5

4 7

5 6

6 5

7 3

8 4

9 6

10 7

11 7

12 4

13 5

14 3

15 6

16 7

17 5

18 4

19 6

20 3

21 5

22 3

23 4

24 6

25 7

26 7

27 4

28 5

29 3

30 6

Experiment No. 6

Behavior Shaping through Continuous Reinforcement

Problem Statement

To find out whether behavior gets strengthened through successive approximation by using continuous reinforcement.

Introduction

Learning has been defined as a relatively permanent change in behavior that occurs as

a result of experience (Goldstein, 1994). Behavior shaping is the successive approximation of

desired behaviors. Shaping involves gradually changing the form of an existing response into

a desired behavior through reinforcement (Skinner, 1953). In successive approximation, the form of an existing response is gradually changed across successive trials towards a desired target behavior by rewarding closer and closer segments of the behavior. Shaping is a form of positive reinforcement used in operant conditioning.

There are two types of learning; classical and operant conditioning:

Classical Conditioning

It is a type of learning, first discovered accidentally by Ivan Pavlov. It is a process of

behavior modification by which a subject comes to respond in a desired manner to a



previously neutral stimulus that has been repeatedly presented along with an unconditioned

stimulus that elicits the desired response (Pavlov, 1927).

Operant Conditioning

Operant conditioning is a type of learning, first introduced by B. F. Skinner. It is a process

of behavior modification in which the likelihood of a specific behavior is increased or

decreased through positive or negative reinforcement each time the behavior is exhibited, so

that the subject comes to associate the pleasure or displeasure of the reinforcement with the

behavior.

According to Skinner, operant conditioning is a form of association between behavior and consequence; it is also called response-stimulus conditioning because it forms an association between the animal's response (behavior) and the stimulus that follows it as a consequence. Operant conditioning is also called instrumental learning (Skinner, 1953).

Schedules of Reinforcement

A schedule of reinforcement is the response requirement that must be met to obtain

reinforcement. In other words, a schedule indicates what exactly has to be done for the

reinforcer to be delivered (Powell, Symbaluk, & Honey, 2009).

 Partial reinforcement

 Continuous reinforcement

Partial Reinforcement

A partial (intermittent) reinforcement schedule is one in which only some responses

are reinforced (Powell, Symbaluk, & Honey, 2009). In partial reinforcement the organism or individual is reinforced on some occasions and not on others.

Continuous Reinforcement

A continuous reinforcement schedule is one in which each specified response is

reinforced (Powell, Symbaluk, & Honey, 2009). Continuous reinforcement is used at an early

stage of operant conditioning, when the goal is to familiarize the organism being conditioned

with the basic ground rules of the situation. Continuous reinforcement is very useful when a

behavior is first being shaped or strengthened. Continuous reinforcement must be provided

promptly and consistently in order to work.
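
A minimal sketch, assuming a much-simplified view of the Skinner box, of what a continuous reinforcement contingency looks like in code (every target response is followed by the reinforcer); the function and response labels are hypothetical.

```python
# Minimal sketch of a continuous reinforcement (CRF) contingency: every target
# response is followed by the reinforcer. Names are hypothetical illustrations.

def run_crf(responses, reinforce):
    """Deliver the reinforcer after every target response."""
    deliveries = 0
    for response in responses:
        if response == "peck_key":   # the target response
            reinforce()              # e.g. present the food magazine for 5 seconds
            deliveries += 1
    return deliveries

if __name__ == "__main__":
    observed = ["peck_floor", "peck_key", "peck_key", "peck_floor", "peck_key"]
    print(run_crf(observed, reinforce=lambda: None))  # 3 -> every key peck reinforced
```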

Hypothesis

The behavior gets strengthened through successive approximation by using continuous

reinforcement.

Method

Experimenter Students of MSc III Replica

Subject Pigeon

Apparatus Skinner box, Stop watch, Weight

machine, Paper, Pencil and Eraser

Procedure

Experiment was performed in two days. On first day before starting the experiment,

functioning of Skinner box was properly checked. Magazine was filled with food. It was

made sure that no grain of food was on the floor of Skinner box. All the group members were

assigned different responsibilities for example to control the panel, to record timing, to count

number of pecks etc. Then the pigeon was weighed, brought to the experimental room and put in the Skinner box. The door of the Skinner box was closed and the experiment was started. As soon as the pigeon entered the Skinner box, observation of his behavior was started.

The experiment was done on continuous reinforcement, in which each and every

response was reinforced. Pigeon’s weight was noted down before starting the experiment. His

weight was 265 grams before experiment. The amber light was on during the whole

experiment. The pigeon was very active and aggressive, but he kept pecking on the ground and did not respond to the light. He looked at the light but did not learn how to get food. At first he was reinforced for watching the light or pecking around the light. After thirteen trials the pigeon accidentally pecked on the light and reinforcement was given to him at that moment. After that he became vigilant and active to get food as a reward. Whenever the pigeon pecked on the light, it was counted as one trial and food was presented for 5 seconds; the pigeon's behavior and his number of pecks were noted down. In the start the pigeon was not responding, but later on he learned and pecked on every trial. The experiment was

completed in two days.

On the second day the same procedure was repeated. The pigeon was brought to the

animal lab. Magazine was filled with food. The amber light was on during the whole

experiment. A total of 30 trials were taken. The pigeon pecked on the light within 3 seconds. He performed

very well by continuously pecking and taking food.

Results

Table 1

Pigeon Response on Amber Light Continuous Reinforcement Day 1

No. of Trials No. of Pecks VITI (sec) Reinforcement(sec)

1 - 45 10

2 - 25 10

3 - 30 10

4 - 30 10

5 - 20 10

6 - 15 10

7 - 10 10

8 - 10 10

9 - 15 10

10 - 15 10

11 - 10 10

12 - 15 10

13 1 9 10

14 2 2 10

15 3 2 10

16 1 1 10

17 2 1 10

18 3 1 10

19 2 1 10

20 2 2 10

21 3 1 10

22 3 1 10

23 2 1 10

24 2 1 10

25 5 1 10

26 4 1 10

27 3 0 10

28 2 0 10

29 2 0 10

30 3 0 10

These results show that the pigeon did not peck in the beginning trials. The VITI was greater at the start and then gradually decreased. After 12 trials the pigeon started pecking on the amber light. As the no. of trials increased, the no. of pecks increased and the VITI decreased.

Table 1

Average Results of Pecks and VITI Day 1

Trials Pecks VITI (sec)

1-5 0 30

6-10 0 13

11-15 2 7.6

16-20 6.8 1.2

21-25 5.2 1

26-30 3.8 0.2
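
As an illustrative cross-check only (it was not part of the original analysis), the block averages above can be reproduced in a few lines of Python; the snippet below uses the trial-by-trial VITI values from Table 1 (Day 1) and reproduces the VITI column of the averages table.

```python
# Minimal sketch: 5-trial block means, using the VITI values from Table 1 (Day 1).

viti = [45, 25, 30, 30, 20, 15, 10, 10, 15, 15,
        10, 15, 9, 2, 2, 1, 1, 1, 1, 2,
        1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

def block_means(values, block_size=5):
    """Average consecutive blocks of `block_size` trials."""
    return [sum(values[i:i + block_size]) / block_size
            for i in range(0, len(values), block_size)]

print(block_means(viti))  # [30.0, 13.0, 7.6, 1.2, 1.0, 0.2] -- matches the VITI column above
```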

Graph 1

Line Chart for Continuous Reinforcement (no. of pecks) Day 1

[Figure: average no. of pecks (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Graph 2

Line Chart for Continuous Reinforcement (VITI) Day 1


[Figure: average VITI in seconds (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

These graphs show the average results of continuous reinforcement on day 1. The number of pecks increases and the VITI decreases as the number of trials increases.

Table 2

Pigeon Response on Amber Light Continuous Reinforcement Day 2

No. of Trials No. of Pecks VITI(sec) Reinforcement(sec)

1 2 3 10

2 3 2 10

3 4 0 10

4 2 0 10

5 5 0 10

6 7 0 10

7 8 0 10

8 4 0 10

9 3 0 10

10 6 0 10

11 2 0 10

12 5 0 10

13 3 0 10

14 7 0 10

15 5 0 10

16 5 0 10

17 3 0 10

18 3 0 10

19 4 0 10

20 6 0 10

21 3 0 10

22 2 0 10

23 5 0 10

24 6 0 10

25 5 0 10

26 4 0 10

27 7 0 10

28 6 0 10

29 4 0 10

30 6 0 10

This table shows that the no. of pecks was greater at the beginning of the experiment and the VITI was near zero. Gradually the no. of pecks decreased and the VITI also decreased to zero.

Table 2

Average Results of Pecks and VITI Day 2

Trials Pecks VITI (sec)



1-5 8.4 1

6-10 7.4 0

11-15 5.2 0

16-20 4.6 0

21-25 6.2 0

26-30 5.2 0

Graph 1

Line Chart for Continuous Reinforcement (no. of pecks) Day 2

[Figure: average no. of pecks (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Graph 2

Line Chart for Continuous Reinforcement (VITI) Day 2


[Figure: average VITI in seconds (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

These graphs show that at the start of the experiment the number of pecks was greater and then gradually decreased. After the 20th trial the number of pecks increased again, and after the 25th trial it decreased. The VITI was near zero throughout the experiment.

Discussion

Our hypothesis, that behavior is strengthened through successive approximation by using continuous reinforcement, was supported. The average no. of pecks was

greater on the second day as compared to the 1st day and the VITI was lower on the second

day.

Conclusion

Thus, we can conclude that the behavior can be strengthened through successive

approximation by using continuous reinforcement.



References

Bernstein, D. A. (2013). Essentials of psychology. USA: Cengage Learning

Bruno, F. J. (2002). Psychology: A self teaching guide. New Jersey: John Wiley &

Sons, Inc.

Chance, P. (2009). Learning and behavior: Active learning edition. USA: Wadsworth

Cengage Learning

Charles, M. G. (2010). Psychology for nurses. India: Pearson Education

Kalat, J. (2013). Introduction to psychology. USA: Cengage Learning

Klein, S. B. (2002). Learning principles and applications. USA: McGraw-Hill

Companies

Nicholas, L. (2009). Introduction to psychology. Cape Town: Juta and Company Ltd.

Pavlov, I. P. (1927). Conditioned reflexes. UK: Oxford University Press.

Powell, R. A., Symbaluk, D. G., & Honey, P. L. (2009). Introduction to learning and

behavior. USA: Wadsworth Cengage Learning

Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.

Experiment No. 7

Behavior Shaping through Partial Reinforcement

Problem Statement

To find out whether behavior gets strengthened by using partial reinforcement.

Introduction

Learning occurs not only through associating stimuli but also through associating

behavior with its consequences. Thorndike’s law of effect holds that any response that

produces satisfaction becomes more likely to occur again and that any response that produces

discomfort becomes less likely to recur (Bernstein, 2013).

It is not necessary to reinforce every correct response in order for learning to occur

(Loudon, 2001). Reinforcement may be delivered on a continuous reinforcement schedule or

on one of four types of partial or intermittent reinforcement schedules. Continuous

reinforcement schedules, which reward every correct response, yield rapid changes in

behavior. Conversely partial reinforcement schedules yield a slower rate, but also result in

learning that is more permanent in nature (Loudon, 2001). Behavior learned through partial

reinforcement is very resistant to extinction; this phenomenon is called the partial

reinforcement effect (Bernstein, 2013).

In a partial (or intermittent) reinforcement schedule, some instances of behaviors are

reinforced and others not. Skinner discovered this schedule. There are two main types of

partial reinforcement schedule: ratio and interval. In a ratio schedule, the subject is reinforced

only after making a certain number of responses. In an interval schedule, reinforcement is

based on the amount of time that elapses in between reinforcers. Both the ratio and the

interval schedules may be fixed or variable (Nicholas, 2009).



Schedules of Partial Reinforcement

Fixed Ratio Schedules (FR)

In a fixed ratio schedule, the subject is reinforced after a fixed number of responses

(Nicholas, 2009). On a fixed ratio schedule, reinforcement is contingent upon a fixed,

predictable number of responses (Powell, Symbaluk, & Honey, 2009).

For example, a subject on a fixed ratio 10 (FR-10) schedule will be reinforced every tenth time the

desired response is emitted, independently of how long it took to do so.

Variable Ratio Schedules (VR)

This form of reinforcement is more common in the natural environment. In a variable

ratio schedule the number of responses and the delivery of reinforcement changes from time

to time (Nicholas, 2009). On a variable ratio schedule, reinforcement is contingent upon a

varying, unpredictable number of responses (Powell, Symbaluk, & Honey, 2009).

For example a subject on variable ratio schedule may be reinforced after ten responses,

then after the next five, and then after the next fifteen. Although the number of responses per

reinforcement is not fixed, they are averaged to a certain value.

Fixed Interval Schedules (FI)

Reinforcement in interval schedules is not tied to the subject’s behavior. In a fixed

interval schedule the first response to occur after a fixed amount of time has elapsed is

reinforced. Two conditions must be satisfied for a fixed interval schedule: the prescribed

interval must have elapsed and the subject must make a response (Nicholas, 2009). On a

fixed interval schedule, reinforcement is contingent upon the first response after a fixed,

predictable period of time (Powell, Symbaluk, & Honey, 2009).

For example a subject on a fixed interval-60 schedule will be reinforced for the first

response occurring after an interval of 60 seconds.



Variable Interval Schedules (VI)

In a variable interval schedule, reinforcement follows the first response after a

variable amount of time has elapsed. Although the interval in between reinforced responses is

not fixed, it is averaged to a certain value (Nicholas, 2009). On a variable interval schedule,

reinforcement is contingent upon the first response after a varying, unpredictable period of

time (Powell, Symbaluk, & Honey, 2009).

For example a subject may be rewarded for every first response that occurs after 5

minutes, 12 minutes, 15 minutes, 8 minutes, etc.
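
Purely as an illustration of the four schedule definitions above (not something taken from the report itself), the decision rules can be written as short Python functions; the requirement and interval values are hypothetical examples.

```python
import random

def fixed_ratio_met(responses, requirement=10):
    """FR-10: reinforce every tenth response since the last reinforcer."""
    return responses >= requirement

def variable_ratio_met(responses, requirement):
    """VR: the requirement is redrawn after each reinforcer, so it is unpredictable."""
    return responses >= requirement

def fixed_interval_met(elapsed, responded, interval=60):
    """FI-60: reinforce the first response made after 60 seconds have elapsed."""
    return responded and elapsed >= interval

def variable_interval_met(elapsed, responded, interval):
    """VI: the interval is redrawn after each reinforcer (e.g. 5, 12, 15 or 8 minutes)."""
    return responded and elapsed >= interval

# Example: draw a new variable-ratio requirement that averages 10 responses.
requirement = random.choice([5, 10, 15])
print(variable_ratio_met(responses=12, requirement=requirement))  # True unless 15 was drawn
```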

Hypothesis

Behavior gets strengthened by using partial reinforcement (interval schedules).

Experiment No. 7a

Fixed Ratio Reinforcement

Method

Experimenter Students of MSc III Replica

Subject Pigeon

Apparatus Skinner box, Stop watch, Weight

machine, Paper, Pencil and Eraser

Procedure

Experiment was performed in two days. On first day before starting the experiment,

functioning of Skinner box was properly checked. Magazine was filled with food. It was

made sure that no grain of food was on the floor of Skinner box. All the group members were

assigned different responsibilities for example to control the panel, to record timing, to count

number of pecks etc. Then the pigeon was weighed, brought to the experimental room and was

put in the Skinner box. The door of Skinner box was closed and experiment was started. As

soon as the pigeon entered the Skinner box, his behavior observation was started. The

responses of pigeon were also noted when pigeon started pecking on amber light.

The behavior of pigeon was shaped through partial reinforcement i.e. fixed ratio. The

pigeon in fixed ratio schedule was reinforced for the first response occurring after an interval

of 5 seconds. So when the time completed, it was reinforced i.e. the magazine light was

switched on and food was accessible to pigeon. The pigeon was allowed to eat for only five

seconds. After five seconds the magazine light was switched off and food was no longer

accessible to pigeon. Here first trial was completed and second trial was started. In the second

trial, again the pigeon was reinforced after 5 seconds. In the same way thirty trials were

completed on day 1of reinforcement on a fixed ratio schedule. The behavior of pigeon was

observed and recorded during each trial. Pigeon was active, giving very quick responses and

was keeping vigilant eye on light while he was taking food from magazine.

After completing thirty trials the pigeon was taken out from Skinner box and weighed

again to make sure that he did not eat more food than required. Then he was brought back to

his home cage.

On the second day, same procedure was repeated and the response of the pigeon was

noted. The weight of the pigeon was recorded both before and after. No food was given to the pigeon

during 24 hours gap of session 1 and session 2 because the hunger drive had to be used as a

motivation force to eat food during experiment.

Results

Table 1

Responses of Partial Reinforcement (fixed ratio) Day 1

No. of trials Fixed pecks Additional pecks Reinforcement (sec) VITI (sec)

1 5 0 15 2

2 5 0 15 1

3 5 0 15 2

4 5 0 15 0

5 5 0 15 0

6 5 0 15 0

7 5 0 15 0

8 5 0 15 0

9 5 0 15 0

10 5 0 15 0

11 5 0 15 0

12 5 0 15 0

13 5 0 15 2

14 5 0 15 0

15 5 0 15 0

16 5 0 15 0

17 5 0 15 0

18 5 0 15 0

19 5 0 15 2

20 5 0 15 0

21 5 0 15 0

22 5 0 15 0

23 5 0 15 0

24 5 0 15 0

25 5 0 15 1

26 5 0 15 1

27 5 0 15 0

28 5 0 15 0

29 5 0 15 1

30 5 0 15 0

These results show that the no. of pecks was greater at the beginning of the experiment. From the 1st trial to the 10th trial the no. of pecks increased and then decreased. The VITI was also greater at the start and then decreased to zero.

Table 1

Average Results of Pecks and VITI Day 1

Trials Pecks VITI (sec)

1-5 5 1

6-10 5 0

11-15 5 0.4

16-20 5 0.4

21-25 5 0.2

26-30 5 0.4

Graph 1

Line Chart for Fixed Interval Schedule (no. of pecks) Day 1

[Figure: average no. of pecks (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Graph 2

Line Chart for Fixed Interval Schedule (VITI) Day 1

[Figure: average VITI in seconds (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

These graphs show that the no. of pecks increased till the 10th trial and then decreased till the 25th trial. From the 25th trial the no. of pecks increased again. The VITI was greater till the 5th trial and then it decreased.



Table 2

Responses of Partial Reinforcement (fixed ratio) Day 2

No. of trials Fixed Pecks Additional Pecks Reinforcement (sec) VITI (sec)

1 5 0 15 5

2 5 0 15 1

3 5 0 15 0

4 5 0 15 0

5 5 0 15 0

6 5 0 15 0

7 5 0 15 0

8 5 0 15 0

9 5 0 15 0

10 5 0 15 0

11 5 0 15 0

12 5 0 15 0

13 5 0 15 0

14 5 0 15 0

15 5 0 15 0

16 5 0 15 1

17 5 0 15 1

18 5 0 15 0

19 5 0 15 0

20 5 0 15 0

21 5 0 15 0

22 5 0 15 0

23 5 0 15 0

24 5 0 15 0

25 5 0 15 0

26 5 0 15 0

27 5 0 15 0

28 5 0 15 1

29 5 0 15 1

30 5 0 15 1

This table shows that the no. of pecks remained constant throughout the experiment

and VITI decreased till the end of trials.

Table 2

Average Results of Pecks and VITI Day 2

Trials Pecks VITI (sec)

1-5 5 1.2

6-10 5 0

11-15 5 0

16-20 5 0.4

21-25 5 0

26-30 5 0.6

Graph 1

Line Chart for Fixed Interval Schedule (no. of pecks) Day 2

[Figure: average no. of pecks (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Graph 2

Line Chart for Fixed Interval Schedule (VITI) Day 2

[Figure: average VITI in seconds (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

These graphs show the no. of pecks and VITI. The no. of pecks was greater

throughout the experiment and the VITI was greater till the 5th trial and then it decreased.

Discussion

In the present experiment behavior was reinforced by using a fixed interval schedule. Partial reinforcement (fixed interval) was carried out over two consecutive days. On the first day the no. of pecks was greater at the beginning of the experiment and then decreased, but on the second day the no. of pecks was greater throughout the experiment. The trend of the VITI was the same on both days. So the response was greater on the second day as compared to the first day, and the VITI was the same on both days.

Conclusion

Thus, we can conclude that the behavior gets strengthened by using partial

reinforcement (fixed interval).

Experiment No. 7b

Variable Ratio Reinforcement

Method

Experimenter Students of MSc III Replica

Subject Pigeon

Apparatus Skinner box, Stop watch, Weight

machine, Paper, Pencil and Eraser

Procedure

Experiment was performed in two days. On first day before starting the experiment,

functioning of Skinner box was properly checked. Magazine was filled with food. It was

made sure that no grain of food was on the floor of Skinner box. All the group members were

assigned different responsibilities for example to control the panel, to record timing, to count

number of pecks etc. Then the pigeon was weighed, brought to the experimental room and was

put in the Skinner box. The door of Skinner box was closed and experiment was started. As

soon as the pigeon entered the Skinner box, his behavior observation was started. The

responses of pigeon were also noted when pigeon started pecking on amber light.

The behavior of pigeon was shaped through partial reinforcement i.e. variable ratio.

On this schedule the pigeon was reinforced for the first response occurring after 5±2

seconds. So when the time completed, it was reinforced i.e. the magazine light was switched

on and food was accessible to pigeon. The pigeon was allowed to eat for only five seconds.

After five seconds the magazine light was switched off and food was no longer accessible to

pigeon. Here first trial was completed and second trial was started. In the second trial, again

the pigeon was reinforced after variable intervals. In the same way thirty trials were

completed on day 1of reinforcement on a variable interval schedule. The behavior of pigeon

was observed and recorded during each trial. Pigeon was active, giving very quick responses

and was keeping vigilant eye on light while he was taking food from magazine.

After completing thirty trials the pigeon was taken out from Skinner box and weighed

again to make sure that he did not eat more food than required. Then he was brought back to

his home cage.

On day 2 the same procedure was repeated and the response of the pigeon was noted.

The weight of the pigeon was recorded both before and after. No food was given to the pigeon during

24 hours gap of session 1 and session 2 because the hunger drive had to be used as a

motivation force to eat food during experiment.

Results

Table 1

Responses of Partial Reinforcement (variable ratio) Day 1

No. of trials Variable pecks Additional pecks Reinforcement (sec) VITI (sec)

1 4 0 15 0

2 6 0 15 0

3 3 0 15 0

4 5 0 15 0

5 7 0 15 1

6 6 0 15 0

7 4 0 15 0

8 3 0 15 0

9 5 0 15 0

10 4 0 15 0

11 3 0 15 2

12 6 0 15 0

13 6 0 15 1

14 7 0 15 0

15 3 0 15 0

16 4 0 15 0

17 5 0 15 0

18 7 0 15 0

19 6 0 15 0

20 4 0 15 0

21 5 0 15 0

22 3 0 15 0

23 7 0 15 0

24 7 0 15 0

25 3 0 15 0

26 6 0 15 0

27 5 0 15 0

28 5 0 15 0

29 4 0 15 0

30 7 0 15 0

These results in the table show that the no. of pecks was greater throughout the

experiment and the VITI was near to zero.

Table 1

Average Results of Pecks and VITI Day 1

Trials Pecks VITI (sec)

1-5 6 0.2

6-10 4.8 0

11-15 5.8 0.6

16-20 6 0

21-25 5 0

26-30 5.2 0

Graph 1

Line Chart for Variable Interval Schedule (no. of pecks) Day 1

[Figure: average no. of pecks (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Graph 2

Line Chart for Variable Interval Schedule (VITI) Day 1


[Figure: average VITI in seconds (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

These graphs show that the no. of pecks was greater in the beginning trials then

gradually decreased till 10th trial and then again increased. The VITI was near to zero

throughout the experiment.

Table 2

Responses of Partial Reinforcement (variable ratio) Day 2

No. of trials Variable Pecks Additional Pecks Reinforcement (sec) VITI (sec)

1 4 0 15 0

2 6 0 15 0

3 3 0 15 0

4 5 0 15 0

5 7 0 15 0

6 6 0 15 0

7 4 0 15 0

8 3 0 15 0

9 5 0 15 0

10 4 0 15 0

11 3 0 15 0

12 6 0 15 0

13 6 0 15 0

14 7 0 15 0

15 3 0 15 0

16 4 0 15 0

17 5 0 15 0

18 7 0 15 0

19 6 0 15 0

20 4 0 15 0

21 5 0 15 0

22 3 0 15 0

23 7 0 15 0

24 7 0 15 0

25 3 0 15 0

26 6 0 15 0

27 5 0 15 0

28 5 0 15 0

29 4 0 15 0

30 7 0 15 0

These results show that the no. of pecks was greater throughout the experiment and

VITI was constantly zero in the whole experiment.

Table 2

Average Results of Pecks and VITI Day 2

Trials Pecks VITI (sec)

1-5 4.8 0

6-10 4.2 0

11-15 5 0

16-20 6.2 0

21-25 4.6 0

26-30 4.8 0

Graph 1

Line Chart for Variable Interval Schedule (no. of pecks and VITI) Day 2

[Figure: average no. of pecks and VITI (y-axis) plotted against total no. of trials, 5-30 (x-axis).]
This graph shows that the no. of pecks was greater in the whole experiment. The

pecks increased till the 20th trial and then started to decrease. The VITI was zero throughout

the experiment.

Discussion

The experiment was done on behavior shaping through partial reinforcement (variable

interval). The results show that the average number of responses on the amber light remained high on both days. The VITI was constantly zero on both days.

Conclusion

Thus, we can conclude that the behavior gets strengthened by using partial

reinforcement (variable interval).



References

Bernstein, D. A. (2013). Essentials of psychology. USA: Cengage Learning

Bruno, F. J. (2002). Psychology: A self teaching guide. New Jersey: John Wiley &

Sons, Inc.

Chance, P. (2009). Learning and behavior: Active learning edition. USA: Wadsworth

Cengage Learning

Charles, M. G. (2010). Psychology for nurses. India: Pearson Education

Kalat, J. (2013). Introduction to psychology. USA: Cengage Learning

Klein, S. B. (2002). Learning principles and applications. USA: McGraw-Hill

Companies

Nicholas, L. (2009). Introduction to psychology. Cape Town: Juta and Company Ltd.

Pavlov, I. P. (1927). Conditioned reflexes. UK: Oxford University Press.

Powell, R. A., Symbaluk, D. G., & Honey, P. L. (2009). Introduction to learning and

behavior. USA: Wadsworth Cengage Learning

Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.

Experiment No. 8

Stimulus Generalization

Problem Statement

To find out whether stimulus S1 (amber light) and stimulus S2 (red light) are equally rewarding in strengthening the behavior.

Introduction

Certain situations or objects may resemble one another so closely that the learner will

react to one as he or she has learned to react to the other (Charles, 2010). The concept of

stimulus generalization comes out of behaviorist laboratory research on Pavlov’s classical

conditioning paradigm. Stimulus generalization is the evocation of a non-reinforced response

to a stimulus that is very similar to an original conditioned stimulus (Haskell, 2001).

In operant conditioning, generalization is the ability to emit a learned behavior in

response to a similar stimulus (Bruno, 2002). Someone who receives reinforcement for a

response in the presence of one stimulus will probably make the same response in the

presence of a similar stimulus. The more similar a new stimulus is to the original reinforced

stimulus, the more likely is the same response. This phenomenon is known as stimulus

generalization (Kalat, 2013).

Stimulus generalization occurs when a stimulus that is similar to an original

conditioned stimulus elicits a conditioned reflex (Bruno, 2002). This depends on the degree

of similarity between the new stimulus and the conditioned stimulus; the greater the

similarity the greater the response (Nicholas, 2009). It plays an important role in our daily

lives. For example, consider a child who burns her finger while playing with matches. Most likely,

lighted matches will become conditioned fear stimuli for her. Because of stimulus

generalization she may also have a healthy fear of flames from lighters, fireplaces, and stoves

etc.

Hypothesis

Stimulus S1 (amber light) and stimulus S2 (red light) are equally rewarding in strengthening the behavior.

Method

Experimenter Students of MSc III Replica


Subject Pigeon
Apparatus Skinner box, Stop watch, Weight
machine, Paper, Pencil and Eraser
Procedure

Experiment was performed in two days. On day 1 before starting the experiment,

functioning of Skinner box was properly checked. Magazine was filled with food. It was

made sure that no grain of food was on the floor of Skinner box. All the group members were

assigned different responsibilities for example to control the panel, to record timing, to count

number of pecks etc. Then the pigeon was weighed and his weight was 236 grams. He was brought to

the experimental room and was put in the Skinner box. The door of Skinner box was closed

and experiment was started. As soon as the pigeon entered the Skinner box, his behavior

observation was started.

In this experiment the stimulus generalization was done. The pigeon learned to

generalize between stimulus S1 (amber light) and stimulus S2 (red light). In the previous

experiments pigeon learned to give response on amber light. In this experiment the amber

light and red light both were used. The Amber key light was S1 and the Red key light was S2

and each light was kept on for 5 seconds. First amber key light was switched on for 5

seconds. Pigeon pecked on S1 then the reinforcement (food) was given to the pigeon. In the

second trial red key light was turned on. Pigeon pecked on red light after 5 seconds. Both

lights were randomly switched on according to a schedule of red and amber key lights which

was prepared earlier. On day 1 total 30 trials were given 15 trials of red key light and 15 trials

of amber key light. By keeping in view the weight of pigeon reinforcement was given for 5

seconds.

After completing thirty trials the pigeon was taken out from Skinner box and weighed

again. After the experiment pigeon’s weight was 240. Then he was brought back to his cage.

On day 2 the same procedure was repeated and the response of the pigeon was noted.

The weight of the pigeon was recorded both before and after. No food was given to the pigeon during

24 hours gap of session 1 and session 2 because the hunger drive had to be used as a

motivation force to eat food during experiment.

Results

Table 1

Stimulus (Red and Amber light) Generalization Day 1

No. of trials Stimulus Key No. of pecks VITI (sec) Reinforcement (sec)
1 A 4 0 15
2 R 5 5 15
3 R 4 2 15
4 A 3 0 15
5 R 5 0 15
6 A 4 0 15
7 R 6 3 15
8 A 4 0 15
9 R 3 0 15
10 A 5 0 15
11 R 4 0 15
12 A 5 0 15
13 A 7 0 15
14 R 4 0 15
15 A 3 0 15
16 R 5 0 15
17 A 6 0 15
18 R 5 0 15
19 A 3 0 15
20 R 4 0 15
21 A 5 0 15
22 R 6 0 15

23 A 4 0 15
24 A 5 0 15
25 R 3 0 15
26 A 6 0 15
27 R 3 0 15
28 A 3 0 15
29 R 5 0 15
30 R 5 0 15

These results show that the no. of pecks on amber light was greater than no. of pecks

on the red light. After 10th trial the no. of pecks on red light started to increase and it

increased till the end of experiment. The VITI for amber light was constantly zero throughout

the experiment, and the VITI for the red light was greater at the beginning of the experiment and then

gradually decreased.

Table 1

Average Results of Pecks and VITI of Amber light and Red light Day 1

Trials Amber Light Red Light

Pecks VITI Pecks VITI

10 5.4 0 3 2

20 5.2 0 3.8 0

30 4.8 0 3.6 0

Graph 1

Line Chart for no. of pecks on Amber & Red lights Day 1

[Figure: average no. of pecks (y-axis) for the Amber Light and Red Light series, plotted against total no. of trials, 10-30 (x-axis).]
This graph shows the responses on the amber light and the red light. As can be seen, the number of pecks on the amber light was greater than the number of pecks on the red light.

Graph 2

Line Chart for VITI of Amber & Red lights Day 1

[Figure: average VITI (y-axis) for the Amber Light and Red Light series, plotted against total no. of trials, 10-30 (x-axis).]
This graph shows that the VITI for the amber light was constantly zero throughout the experiment, and the VITI for the red light was about 2 seconds up to the 10th trial and then gradually decreased to zero seconds.



Table 2

Stimulus (Red and Amber light) generalization Day 2

No. of trials Stimulus Key No. of pecks VITI (sec) Reinforcement (sec)
1 A 5 2 15
2 R 7 1 15
3 R 5 2 15
4 A 4 4 15
5 R 3 3 15
6 A 4 2 15
7 R 5 3 15
8 A 7 2 15
9 R 5 0 15
10 A 4 1 15
11 R 5 2 15
12 A 6 4 15
13 A 7 2 15
14 R 4 3 15
15 A 6 2 15
16 R 4 1 15
17 A 6 2 15
18 R 5 2 15
19 A 3 4 15
20 R 6 5 15
21 A 5 4 15
22 R 3 3 15
23 A 4 5 15
24 A 6 2 15
25 R 5 5 15
26 A 7 6 15
27 R 5 2 15
28 A 3 6 15
29 R 7 2 15
30 R 4 1 15

This table shows the number of pecks and VITI of amber light and red light. The

responses on amber light gradually increased till the 20th trial and then started to decrease as

compared to the number of responses on red light. The VITI of red light was greater than the

VITI of amber light.



Table 2

Average Results of Pecks and VITI of Amber light and Red light Day 2

Trials Amber Light Red Light


Pecks VITI Pecks VITI
10 3.8 0.4 4 0.6
20 6.8 0 5.2 0.4
30 4.4 0 4.8 0

Graph 1

Line Chart for no. of pecks on Amber & Red lights Day 2

[Figure: average no. of pecks (y-axis) for the Amber Light and Red Light series, plotted against total no. of trials, 10-30 (x-axis).]

This graph shows the number of pecks on the amber light and the red light. It can be seen that the number of pecks on the amber and red lights was about the same at the start of the experiment, and at the end the number of responses was also about the same for both lights. At the 20th trial the slope of responses for the amber light is higher than the slope for the red light.

Graph 2

Line Chart for Stimulus Generalization (VITI of Amber & Red lights) Day 2

[Figure: average VITI (y-axis) for the Amber Light and Red Light series, plotted against total no. of trials, 10-30 (x-axis).]
The graph shows that the VITI of red light is greater than the VITI of amber light.

Discussion

The experiment showed significant results. The results of the experiment are in favor of the hypothesis. The pigeon learned to respond to both stimuli (amber light and red light). The results suggested that the hypothesis was supported, i.e. stimulus generalization occurs if both stimuli are rewarded.

Conclusion

Thus stimulus S1 (amber light) and stimulus S2 (red light) are equally rewarding in strengthening the behavior.



References

Bernstein, D. A. (2013). Essentials of psychology. USA: Cengage Learning

Bruno, F. J. (2002). Psychology: A self teaching guide. New Jersey: John Wiley &

Sons, Inc.

Chance, P. (2009). Learning and behavior: Active learning edition. USA: Wadsworth

Cengage Learning

Charles, M. G. (2010). Psychology for nurses. India: Pearson Education

Kalat, J. (2013). Introduction to psychology. USA: Cengage Learning

Klein, S. B. (2002). Learning principles and applications. USA: McGraw-Hill

Companies

Nicholas, L. (2009). Introduction to psychology. Cape Town: Juta and Company Ltd.

Pavlov, I. P. (1927). Conditioned reflexes. UK: Oxford University Press.

Powell, R. A., Symbaluk, D. G., & Honey, P. L. (2009). Introduction to learning and

behavior. USA: Wadsworth Cengage Learning

Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.

Experiment No. 9

Stimulus Discrimination

Problem Statement

To find out whether stimulus S1 (red light) and stimulus S2 (green light) are not equally potent in strengthening the behavior.

Introduction

Learning what to do has little value if one does not know when to do it. Learning that

a response is triggered is pointless if the person does not know which response is right.

Discrimination in operant conditioning consists of reinforcing only a specific desired

response and only in the presence of a specific stimulus (Charles, 2010). The opposite of

stimulus generalization in operant conditioning is stimulus discrimination. It is the tendency

for an operant response to be emitted more in the presence of one stimulus than another.

More generalization means less discrimination, and less generalization means more

discrimination (Powell, Symbaluk, & Honey, 2009).

If reinforcement occurs for responding to one stimulus and not another, the result is

discrimination between them, yielding a response to one stimulus and not the other (Kalat,

2013). For example you smile and greet someone you think you know, but then you realize it

is someone else. After several such experiences you learn to recognize the difference between

the two people. A stimulus that indicates which response is appropriate or inappropriate is

called a discriminative stimulus (Kalat, 2013).
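
Purely as an illustration of the contingency described above (responses to one stimulus are reinforced, responses to the other are not), a short hypothetical Python sketch using the stimuli of this experiment:

```python
# Sketch of a discrimination-training contingency: pecks to the red key (S+)
# are reinforced, pecks to the green key (S-) are never reinforced.

S_PLUS, S_MINUS = "red", "green"

def reinforce_trial(stimulus, pecked):
    """Return True if the food magazine should be presented on this trial."""
    return pecked and stimulus == S_PLUS

trials = [("red", True), ("green", True), ("green", False), ("red", True)]
print([reinforce_trial(s, p) for s, p in trials])  # [True, False, False, True]
```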

Hypothesis

Stimulus S1 (red light) and stimulus S2 (green light) are not equally potent in strengthening the behavior.



Method

Experimenter Students of MSc III Replica

Subject Pigeon

Apparatus Skinner box, Stop watch, Weight

machine, Paper, Pencil and Eraser

Procedure

Experiment was performed in two days. On day 1 before starting the experiment,

functioning of Skinner box was properly checked. Magazine was filled with food. It was

made sure that no grain of food was on the floor of Skinner box. All the group members were

assigned different responsibilities for example to control the panel, to record timing, to count

number of pecks etc. Then the pigeon was weighed and his weight was 244 grams. He was brought to the

experimental room and was put in the Skinner box. The door of Skinner box was closed and

experiment was started. As soon as the pigeon entered the Skinner box, his behavior

observation was started.

In this experiment the stimulus discrimination was done. The pigeon learned to

discriminate between stimulus S1 (red light) and stimulus S2 (green light). In the previous

experiment, the pigeon was reinforced on red light. In this experiment the red light and green

light were presented. The red key light was S1 and the green key light was S2 and each light

was kept on for 5 seconds. First red key light was switched on for 5 seconds. Pigeon pecked

on S1 and then the reinforcement (food) was given to the pigeon. In the second trial the red key light was again turned on; the pigeon pecked on the red light and reinforcement was given. In the third trial the green key light was turned on for 5 seconds; the pigeon pecked on the green light but reinforcement was not given, because the purpose was to teach the pigeon to discriminate between the red light and the green light. Whenever the pigeon pecked on the green light, the reinforcement was not given; only on the red light was the reinforcement provided. Both lights were randomly switched on according to a schedule of red and green key lights which was prepared earlier. In the starting trials the pigeon kept on pecking on the green light, but later on he learned not to peck on the green light. On day 1 a total of 60 trials were given; 30 trials of red

key light and 30 trials of green key light. The reinforcement was given for 5 seconds.

After completing 60 trials the pigeon was taken out from Skinner box and weighed

again. After the experiment pigeon’s weight was 252. Then he was brought back to his cage.

On day 2 the same procedure was repeated and the response of the pigeon was noted.

The weight of the pigeon was recorded both before and after. No food was given to the pigeon during

24 hours gap of session 1 and session 2 because the hunger drive had to be used as a

motivation force to eat food during experiment.

Results

Table 1

Discrimination between red Light and Green Light day 1

No. of trials Stimulus Key No. of pecks VITI (sec) Reinforcement (sec)

1 R 4 1 15

2 R 3 0 15

3 G 5 0 0

4 R 3 0 15

5 R 8 0 15

6 G 5 0 0

7 R 5 0 15

8 G 5 1 0

9 G 7 0 0

10 R 4 0 15

11 G 7 0 0

12 G 4 0 0

13 R 9 0 15

14 G 4 0 0

15 G 8 0 0

16 R 4 0 15

17 G 7 0 0

18 G 6 2 0

19 R 7 0 15

20 R 6 0 15

21 G 5 0 0

22 R 8 0 15

23 G 4 3 0

24 R 7 0 15

25 G 8 0 0

26 R 9 0 15

27 R 5 0 15

28 G 7 4 0

29 G 3 1 0

30 R 6 0 15

31 R 7 0 15

32 G 4 2 0

33 R 8 0 15

34 R 9 0 15

35 G 5 4 0

36 R 7 0 15

37 G 4 4 0

38 R 3 0 15

39 G 9 0 0

40 R 7 0 15

41 R 5 0 15

42 G 4 1 0

43 R 7 0 15

44 G 5 0 0

45 R 8 0 15

46 R 5 0 15

47 G 9 4 0

48 G 6 3 0

49 R 7 0 15

50 G 3 2 0

51 R 7 0 15

52 G 8 3 0

53 R 5 0 15

54 G 9 5 0

55 G 5 2 0

56 G 3 1 0

57 R 8 0 15

58 G 4 3 0

59 G 7 4 0

60 G 4 3 0

These results show that the no. of pecks on green light gradually decreased and VITI

increased. As compared to the green light, the no. of pecks on the red light was greater and the VITI

decreased gradually.

Table 1

Average Results of Pecks and VITI of red light and Green light Day 1

Trials Red Light Green Light

Pecks VITI Pecks VITI

10 6.5 0.2 4 0.2

20 5.8 0 3.7 0.3

30 7.6 0 3.4 1.6

40 6.2 0 3.5 2.5

50 6.4 0 2.2 2

60 5.7 0 1.7 3.1

Graph 1

Line Chart for Stimulus Discrimination (pecks on red & Green lights) Day 1

[Figure: average no. of pecks (y-axis) for the Red Light and Green Light series, plotted against total no. of trials, 10-60 (x-axis).]
This graph shows that the no. of pecks on red was much greater than on the green light.

Graph 2

Line Chart for Stimulus Discrimination (VITI of red & Green lights) Day 1

[Figure: average VITI (y-axis) for the Red Light and Green Light series, plotted against total no. of trials, 10-60 (x-axis).]
This graph shows that the VITI for the red light was zero seconds throughout the

experiment but for green light VITI increased after the 20th trial.

Table 2

Discrimination between Red Light and Green Light day 2

No. of trials Stimulus Key No. of pecks VITI (sec) Reinforcement (sec)

1 R 5 0 15

2 R 6 0 15

3 G 3 2 0

4 R 6 0 15

5 R 3 0 15

6 G 6 2 0

7 R 8 0 15

8 G 9 3 0

9 G 5 2 0

10 R 7 0 15

11 G 4 2 0

12 G 8 5 0

13 R 9 0 15

14 G 5 4 0

15 G 3 5 0

16 R 6 0 15

17 G 3 2 0

18 G 6 5 0

19 R 7 0 15

20 R 8 0 15

21 G 7 5 0

22 R 5 0 15

23 G 8 4 0

24 R 9 0 15

25 G 3 5 0

26 R 6 0 15

27 R 4 0 15

28 G 6 5 0

29 G 3 4 0

30 R 5 0 15

31 R 7 0 15

32 G 6 3 0

33 R 9 0 15

34 R 6 0 15

35 G 4 3 0

36 R 7 0 15

37 G 6 5 0

38 R 4 0 15

39 G 7 5 0

40 R 4 0 15

41 R 7 0 15

42 G 7 5 0

43 R 4 0 15

44 G 8 4 0

45 R 4 0 15

46 R 7 0 15

47 G 9 5 0

48 G 5 3 0

49 R 8 0 15

50 G 5 5 0

51 R 7 0 15

52 G 5 3 0

53 R 3 0 15

54 G 7 5 0

55 G 3 5 0

56 G 7 5 0

57 R 6 0 15

58 G 5 5 0

59 G 6 5 0

60 G 8 0

These results show that on the second day the no. of responses on the green light decreased to zero at the end of the experiment, while its VITI increased across the trials. As compared to the green light, the no. of pecks on the red light gradually increased and its VITI decreased to zero.

Table 2

Average Results of Pecks and VITI of Red light and Green light Day 2

Trials Red Light Green Light

Pecks VITI Pecks VITI

10 6.7 0 2.5 2.3

20 5.3 0 1 3.8

30 6.2 0 0.4 4.6

40 6.7 0 2.3 4

50 6.8 0 0.6 4.4

60 8 0 0.1 4.7

Graph 1

Line Chart for Stimulus Discrimination (pecks on red & Green lights) Day 2

[Figure: average no. of pecks (y-axis) for the Red Light and Green Light series, plotted against total no. of trials, 10-60 (x-axis).]
Graph 2

Line Chart for Stimulus Discrimination (VITI of red & Green lights) Day 2

[Figure: average VITI (y-axis) for the Red Light and Green Light series, plotted against total no. of trials, 10-60 (x-axis).]
These graphs show that the number of pecks on the red light was greater than the number of pecks on the green light. The VITI for the red light remained zero for the whole experiment, but the VITI for the green light constantly increased.



Discussion

The experiment showed significant results. The pigeon learned to respond only to the red light. These findings supported the hypothesis.

Conclusion

Thus stimulus S1 (red light) and stimulus S2 (green light) are not equally potent in strengthening the behavior.



References

Bernstein, D. A. (2013). Essentials of psychology. USA: Cengage Learning

Bruno, F. J. (2002). Psychology: A self teaching guide. New Jersey: John Wiley &

Sons, Inc.

Chance, P. (2009). Learning and behavior: Active learning edition. USA: Wadsworth

Cengage Learning

Charles, M. G. (2010). Psychology for nurses. India: Pearson Education

Kalat, J. (2013). Introduction to psychology. USA: Cengage Learning

Klein, S. B. (2002). Learning principles and applications. USA: McGraw-Hill

Companies

Nicholas, L. (2009). Introduction to psychology. Cape Town: Juta and Company Ltd.

Pavlov, I. P. (1927). Conditioned reflexes. UK: Oxford University Press.

Powell, R. A., Symbaluk, D. G., & Honey, P. L. (2009). Introduction to learning and

behavior. USA: Wadsworth Cengage Learning

Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.

Experiment No. 10

Extinction

Problem Statement

To find out whether behaviors which are not reinforced become extinct gradually.

Introduction

A behavior that has been strengthened through reinforcement can also be weakened

through extinction. If the conditioned stimulus is repeatedly presented without the

unconditioned stimulus, the conditional response will become weaker and weaker. Extinction

is the non-reinforcement of a previously reinforced response, the result of which is a decrease

in the strength of that response (Powell, Symbaluk, & Honey, 2009). Pavlov was the first to

demonstrate extinction in the laboratory.

The procedure of repeatedly presenting the conditioned stimulus without the

unconditioned stimulus is called extinction (Chance, 2009). When, as a result of extinction,

the conditioned response no longer occurs (or occurs no more than it did prior to

conditioning), it is said to have been extinguished.

In operant conditioning, extinction happens as a result of withholding reinforcement.

The effect usually is not immediate. In fact when reinforcement is discontinued, first there is

often a brief increase in the strength or frequency of responding before a decline sets in

(Charles, 2010).

Extinction is an active process that is designed to eliminate a conditioned reflex. For

example, if you put coins in a vending machine and it fails to deliver the goods, you may

push the button more forcefully and in rapid succession before you finally give up.

Hypothesis

Behaviors which are not reinforced become extinct gradually.



Method

Experimenter Students of MSc III Replica

Subject Pigeon

Apparatus Skinner box, Stop watch, Weight

machine, Paper, Pencil and Eraser

Procedure

Experiment was performed in one day. Before starting the experiment, functioning of

Skinner box was properly checked. It was made sure that no grain of food was on the floor of

Skinner box. All the group members were assigned different responsibilities for example to

control the panel, to record timing, to count number of pecks etc. The pigeon was brought to the

experimental room and was put in the Skinner box. The door of Skinner box was closed and

experiment was started. As soon as the pigeon entered the Skinner box, his behavior

observation was started.

In this experiment extinction was carried out. The purpose was to extinguish the learned behavior of the pigeon by withholding the reinforcement. In the previous experiment the pigeon had learned to respond on the red light but not on the green light. In this experiment the red light and green light were presented. The red key light was S1 and the green key light was S2, and each light was kept on for 5 seconds. First the red key light was switched on for 5 seconds. The pigeon pecked on S1, but the reinforcement (food) was not given to the pigeon. In the second trial the red key light was again turned on but reinforcement was not given. In the third trial the green key light was turned on for 5 seconds; the pigeon did not peck on the green light. Whenever the pigeon pecked on the red or green lights, the reinforcement was not given. Both lights were randomly switched on according to a schedule of red and green key lights which was prepared earlier. In the starting trials the pigeon kept on pecking on the red light, but later on his pecks on the red light were reduced to 0. On day 1 a total of 60 trials were given; 30 trials of red key light

and 30 trials of green key light. The reinforcement was not given. After completing 60 trials

the pigeon was taken out from Skinner box and weighed again. Then he was brought back to

his cage. The food was given in the cage. No food was given to pigeon during experiment.

Results

Table 1

Extinction of the learned behavior Day 1

No. of trials Stimulus Key No. of pecks VITI (sec) Reinforcement (sec)

1 R 5 0 -

2 R 6 0 -

3 G 4 5 -

4 G 3 0 -

5 G 6 0 -

6 R 8 5 -

7 R 9 1 -

8 G 5 5 -

9 G 8 3 -

10 G 7 0 -

11 R 4 5 -

12 G 4 5 -

13 R 6 0 -

14 R 3 5 -

15 R 4 5 -

16 G 6 0 -

17 R 0 5 -

18 R 0 5 -

19 G 0 0 -

20 G 0 0 -

21 G 0 5 -

22 R 0 1 -

23 R 0 5 -

24 R 0 1 -

25 G 0 5 -

26 G 0 0 -

27 G 0 0 -

28 G 3 5 -

29 G 6 5 -

30 R 4 0 -

31 R 0 1 -

32 R 0 5 -

33 R 0 1 -

34 G 0 1 -

35 G 0 5 -

36 G 0 0 -

37 G 3 5 -

38 R 0 1 -

39 R 0 5 -

40 R 0 5 -

41 R 0 3 -

42 R 0 5 -

43 G 0 2 -

44 G 0 5 -

45 G 0 1 -

46 G 0 1 -

47 G 0 5 -

48 G 0 5 -

49 G 0 5 -

50 R 0 5 -

These results show that the no. of pecks on red light gradually decreased and VITI

increased. At the end of experiment the pigeon didn’t respond.

Table 1

Average Results of Pecks and VITI of Red light and Green light Day 1

Trials Red Light Green Light

Pecks VITI Pecks VITI

10 7 0.2 0.2 4.5

20 6.3 0 0 5

30 5 0.4 0 5

40 3 1.5 0 5

50 1.8 2.4 0 5

60 0.3 4.7 0 5

Graph 1

Line Chart for Extinction (pecks on Red & Green lights) Day 1

[Figure: average no. of pecks (y-axis) for the Red Light and Green Light series, plotted against total no. of trials, 10-60 (x-axis).]
This graph shows that the no. of pecks on red light gradually decreased and there was

zero response on green light.

Graph 2

Line Chart for Extinction (VITI of Red & Green lights)

[Figure: average VITI (y-axis) for the Red Light and Green Light series, plotted against total no. of trials, 10-60 (x-axis).]
This graph shows that the VITI for red light gradually increased after the 30th trial and

the VITI for green light remained constant after 20th trial.

Discussion

By analysis of the results it was concluded that the hypothesis was fully supported: the pigeon's responses gradually decreased towards the end and extinction occurred. This experiment showed that if the reinforcement is not given the pigeon will stop responding.

Conclusion

Thus, behaviors which are not reinforced become extinct gradually.



References

Bernstein, D. A. (2013). Essentials of psychology. USA: Cengage Learning

Bruno, F. J. (2002). Psychology: A self teaching guide. New Jersey: John Wiley &

Sons, Inc.

Chance, P. (2009). Learning and behavior: Active learning edition. USA: Wadsworth

Cengage Learning

Charles, M. G. (2010). Psychology for nurses. India: Pearson Education

Kalat, J. (2013). Introduction to psychology. USA: Cengage Learning

Klein, S. B. (2002). Learning principles and applications. USA: McGraw-Hill

Companies

Nicholas, L. (2009). Introduction to psychology. Cape Town: Juta and Company Ltd.

Pavlov, I. P. (1927). Conditioned reflexes. UK: Oxford University Press.

Powell, R. A., Symbaluk, D. G., & Honey, P. L. (2009). Introduction to learning and

behavior. USA: Wadsworth Cengage Learning

Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.

Experiment No. 11

Spontaneous Recovery

Problem Statement

To find out whether it takes less time to learn the behavior the second time.

Introduction

Learning can be defined as an experiential process resulting in a relatively permanent change in behavior that cannot be explained by temporary states, maturation

or innate response tendencies. Thus learning has three important components; first learning

reflects a change in the potential for a behavior. Second, the behavior changes that learning

causes are not always permanent. As a result of new experiences, previously learned

behavior may not be exhibited. Third, changes in behavior can be due to processes other than learning. Our behavior can change for reasons other than learning; many behavioral changes are due to maturation (Klein, 2002).

Although extinction is a reliable process for weakening a behavior, it would be a

mistake to assume that once a response has been extinguished, it has been permanently

eliminated. Extinction in operant conditioning does not completely erase what has been

learned. Even though much time has passed since a behavior was last rewarded and the

behavior seems extinguished, it may suddenly reappear (Charles, 2010).

As with extinction of a classically conditioned response, extinction of an operant

response is likely to be followed by spontaneous recovery (Skinner, 1938). Spontaneous

recovery is the reappearance of an extinguished response following a rest period after

extinction (Powell, Symbaluk, & Honey, 2009).

Hypothesis

It takes less time to learn the behavior the second time.



Method

Experimenter Students of MSc III Replica

Subject Pigeon

Apparatus Skinner box, Stop watch, Weight

machine, Paper, Pencil and Eraser

Procedure

Experiment was performed in one day. Before starting the experiment, functioning of

Skinner box was properly checked. Magazine was filled with food. It was made sure that no

grain of food was on the floor of Skinner box. All the group members were assigned different

responsibilities for example to control the panel, to record timing, to count number of pecks

etc. Then the pigeon was weighed, brought to the experimental room and was put in the Skinner

box. The door of Skinner box was closed and experiment was started. As soon as the pigeon

entered the Skinner box, his behavior observation was started.

The experiment was done on spontaneous recovery. Reinforcement was given for pecking on the amber light. The amber light was on during the whole experiment. The pigeon was very active and aggressive, but he kept pecking on the ground and did not respond to the light. He looked at the light but did not peck on it, because on the previous day he had learned not to peck on any light. But after 2 seconds the pigeon accidentally pecked on the light and at that moment reinforcement was given to him. After that he became vigilant and active to get food as a reward. Whenever the pigeon pecked on the light, food was presented for 5 seconds, and the pigeon's behavior and his number of pecks were noted down.

After completing 30 trials the pigeon was taken out from Skinner box and weighed again.

Then he was brought back to his cage.



Results

Table 1

Spontaneous Recovery on Amber light Day 1

No. of Trials Stimulus key No. of pecks VITI (sec) Reinforcement (sec)

1 A 5 2 15

2 A 7 2 15

3 A 6 1 15

4 A 2 0 15

5 A 5 0 15

6 A 3 0 15

7 A 6 0 15

8 A 2 0 15

9 A 5 0 15

10 A 2 0 15

11 A 5 0 15

12 A 2 0 15

13 A 5 0 15

14 A 2 0 15

15 A 6 0 15

16 A 2 0 15

17 A 5 0 15

18 A 6 0 15

19 A 3 0 15

20 A 8 0 15

21 A 4 1 15

22 A 2 0 15

23 A 5 1 15

24 A 2 0 15

25 A 5 0 15

26 A 7 0 15

27 A 3 0 15

28 A 5 0 15

29 A 2 0 15

30 A 4 0 15

These results show that the no. of responses gradually increased on the amber light

and the VITI decreased till the end.

Table 1

Average Results of Pecks and VITI Day 1

Trials Pecks VITI (sec)

1-5 5.2 1

6-10 5 0

11-15 6.2 0

16-20 4.8 0

21-25 4.2 0.4

26-30 4.6 0

Graph 1

Line Chart for Spontaneous Recovery (no. of pecks) Day 1

[Figure: average no. of pecks (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Graph 2

Line Chart for Spontaneous Recovery (VITI) Day 1

[Figure: average VITI in seconds (y-axis) plotted against trials in blocks of five, 5-30 (x-axis).]

Discussion

By analysis of the results it was concluded that our hypothesis was fully supported: the pigeon's responding increased again during spontaneous recovery, and he pecked more quickly than in his previous experiments. This experiment showed that extinction does not erase what has previously been learned (the conditioned response), but only produces a decline in conditioned behavior.

Conclusion

Thus, it takes less time to learn the behavior the second time.



References

Bernstein, D. A. (2013). Essentials of psychology. USA: Cengage Learning

Bruno, F. J. (2002). Psychology: A self teaching guide. New Jersey: John Wiley &

Sons, Inc.

Chance, P. (2009). Learning and behavior: Active learning edition. USA: Wadsworth

Cengage Learning

Charles, M. G. (2010). Psychology for nurses. India: Pearson Education

Kalat, J. (2013). Introduction to psychology. USA: Cengage Learning

Klein, S. B. (2002). Learning principles and applications. USA: McGraw-Hill

Companies

Nicholas, L. (2009). Introduction to psychology. Cape Town: Juta and Company Ltd.

Pavlov, I. P. (1927). Conditioned reflexes. UK: Oxford University Press.

Powell, R. A., Symbaluk, D. G., & Honey, P. L. (2009). Introduction to learning and

behavior. USA: Wadsworth Cengage Learning

Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.
