
Modeling the Refractive and Neuro-Sensor Systems of the Eye

Larry N. Thibos and Arthur Bradley
School of Optometry, Indiana University
Bloomington, IN 47405
email: thibos@indiana.edu, bradley@indiana.edu

Published as Chapter 4 (pp. 101-159) in:

VISUAL INSTRUMENTATION:
Optical Design and Engineering Principles
Pantazis Mouroulis, Editor
Copyright 1999 by McGraw-Hill, Inc., New York

TABLE OF CONTENTS

4.1 Introduction - The Purpose of Model Eyes
4.2 Historical Development of Schematic Eyes
4.3 The Indiana Eye - a Simplified Schematic Model for Evaluating the Quality of Retinal Images
4.4 Geometrical Optical Analysis of the Indiana Eye
4.4.1 Surface Power
4.4.2 Magnification
4.4.3 Defocus
4.4.4 Longitudinal Chromatic Aberration (LCA)
4.4.5 Chromatic Difference of Magnification (CDM)
4.4.6 Transverse Chromatic Aberration (TCA)
4.4.7 Spherical Aberration
4.4.8 Oblique Astigmatism
4.4.9 Non-linear Projection of Visual Space
4.5 Calculation of Image Quality for the Indiana Eye
4.5.1 Wave Aberration Function
4.5.2 Polychromatic OTFs
4.5.3 Predicting Visual Performance
4.6 Modeling the Neuro-sensor System of the Eye
4.6.1 Neural Sampling of the Retinal Image
4.6.2 Functional Implications of Neural Sampling
4.6.3 Optical vs. Neural Limits to Visual Performance
4.7 Validation of Eye Models
REFERENCES

4.1 Introduction - The Purpose of Model Eyes
The human eye is an elegant electro-optical image processing
system with many parallels to modern optical instruments. For
example, the eye could be compared to an advanced video camera
with wide field of view, auto-focus and auto-exposure optics, a
variable-resolution sensor with 3 color channels plus an ultra-
sensitive black and white channel, local and global automatic gain
control mechanisms which produce uniform sensitivity over a large
dynamic range, all packaged in a pair of light-weight spheres
mounted in a lubricated gimbal that allows rapid, servo-controlled,
coordinated stereoscopic positioning and the tracking of moving
objects. At the same time, the eye is a typical biological system with
an inherent complexity and variability which cannot easily be
represented by a simple equation or model. Nevertheless, there are
many features of the eye that are common to virtually all human
eyes, which makes it feasible and desirable to represent the average
eye by a functional, neuro-optical, schematic model.

[Figure 4.1: Cross-section of the Human Eye]

Figure 4.1 An anatomically correct drawing of a human eye in cross-section. (Redrawn from Walls, 1942.)

Most model eyes strive to represent the optical components of
the average human eye, which is shown in anatomical cross-section
in Fig. 4.1. As a result, most models fail to capture important
differences between individual eyes or changes in the eye over time.
For example, at birth, the human eye is much smaller, and optically
more powerful than the adult eye. The eye increases in size
gradually throughout childhood, optically coordinating the change
in eye size with the change in optical power. During childhood and
early adulthood the auto-focus capability (called accommodation)
of the human eye provides an impressive dynamic range from
infinity to less than 10 cm from the eye. Throughout adulthood,
however, the capacity to accommodate declines and eventually
disappears at about 55 years of age. In addition to these age
changes that occur in all eyes, there are other changes that occur in
a significant minority of eyes. For example, many people experience
additional ocular growth during childhood or early adulthood that is
not accompanied by a corresponding decrease in optical power of
the eye's dioptric components. These eyes effectively become
optically over-powered, or “near-sighted” (myopic), thus requiring a
spectacle or contact lens with negative power in order to see distant
objects clearly. Given these large differences in the optical
properties of real eyes, the goal of a schematic model eye is to
represent the typical healthy eye, supplemented with tolerances on
individual components which give a sense of the range of values
found in the population.

Aside from natural, biological variability, recent surgical
developments have introduced new sources of variance into the
human eye population. For example, although myopic eyes result
from excessive ocular growth, they are sometimes treated by
surgically reducing the optical power of the cornea. There are two
general strategies for this type of refractive surgery. One uses a
series of radial incisions made around the optical margins of the
cornea which reduce the curvature of the corneal center, but
introduce highly aberrated corneal margins. The other employs
laser technology to reshape the cornea by photo-ablation. Also, in
many older eyes, when the natural lens starts to lose its
transparency, it is surgically removed and replaced with a synthetic
lens and sometimes this synthetic lens is multifocal. Thus, one of
the reasons for the current resurgence of interest in model eyes is
the need to account for the optical consequences of these clinical
procedures.

Numerous model eyes have been developed during the last 150
years to satisfy a variety of needs. In some cases the model eye is
aimed primarily at anatomical accuracy, whereas other models have
made no attempt to be anatomically correct or even have any
anatomical structure at all. There are many reasons why
anatomically correct models are useful. As a biological organ, it
would be impossible to understand the physiology, development,
and biological basis of the eye’s optical properties without an
accurate anatomical model. In the clinical world of refractive
surgery, an anatomically correct model is an essential tool for
planning operations designed to modify the eye’s optical properties.
A good example of this need is the routine cataract surgery to
remove an eye's internal, opacified lens and replace it with a
synthetic lens designed as a substitute for the removed natural lens
and to correct for any refractive error that the eye may have. Since
the refractive error depends upon the lens, cornea and eye length,
and the effective power of the introduced lens depends upon its
location with respect to the cornea and retina, an anatomically
correct optical model is essential for clinicians to calculate the
desired lens power of the synthetic, implanted lens.
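
The arithmetic behind such a calculation can be sketched with the empirical SRK regression formula, a widely used first-order clinical method (it is not the model described in this chapter, and the A-constant below is a typical lens-specific value chosen purely for illustration):

```python
def srk_iol_power(a_constant, axial_length_mm, mean_k_diopters):
    """Empirical SRK estimate of the implant power (diopters) for emmetropia.

    a_constant: lens-specific regression constant (typically near 118)
    axial_length_mm: axial length of the eye in millimeters
    mean_k_diopters: average corneal power from keratometry, in diopters
    """
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Example: an average adult eye (axial length 23.5 mm, corneal power 43.5 D)
power = srk_iol_power(118.4, 23.5, 43.5)
print(round(power, 2))  # about 20.5 D
```

Note how the regression encodes exactly the dependence described in the text: a longer eye or a more powerful cornea both reduce the implant power required.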

The primary goals of anatomically-accurate optical models
developed in the 19th century were to match the gross anatomy of
the eye and to model the paraxial geometrical optics of the eye's
thick-lens system. In the 20th century the goal has been to compute
retinal illumination levels and image quality. In this case, a good
deal of simplicity is achieved by avoiding anatomical details, that is,
by treating the eye as an equivalent thin lens with the same
aberrations, diffraction, scatter, reflection and absorption.
Although such simplified models are not anatomically correct, at
least they are physical models exhibiting physical refracting surfaces
and physical dimensions. By contrast, some models ignore
anatomy completely, becoming purely analytical without any
physical features. The optical properties of such models are
embodied in equations which represent the wave aberration
function and pupil function of the eye. From these functions one
may compute the optical transfer function (OTF) and point spread
function (PSF) by the standard methods described elsewhere in this
book. An important disadvantage of such models is the failure to
account for the effects of light scattering by the lens and retina,
which are increasingly important in older eyes.

As with all modeling, a balance must be struck between
simplification and accurate representation of the system’s details.
The balance point in published model eyes varies from simplistic,
single surface spheres with a fixed refractive index to elaborate
models with multiple aspheric surfaces enclosing gradient-index
(GRIN) refractive media. The two driving forces behind increased
complexity in model eyes are anatomical accuracy and
computational accuracy. The former has produced models of the
gradient-index profile of the eye's lens based on numerous layers of
refracting surfaces. The latter has spurred the use of aspheric
refracting surfaces and variable refractive indices to better model
the monochromatic and polychromatic aberrations of the eye.

Most attempts at modeling the optics of the human eye are
concerned with paraxial questions: is the Gaussian image plane
coincident with the sensory array? and how good is the image in the
plane of the sensory array? In this sense the model eye acts as a
standard for comparison against which one can compare the
performance of real eyes. Although most published models only
consider local image quality, some wide-angle models have been
proposed to predict image location, image geometry, and retinal
illumination across the entire visual field.

Until recently, model eyes were concerned only with light passing
into the eye (the path necessary for vision). However, recent
interest in retinal imaging for medical diagnostic purposes has
highlighted the importance of accurate modeling of light exiting the
eye following reflection from the fundus. Although most individuals
only experience this phenomenon as an annoying “red eye” on their
photographs when a flash is used, clinicians for over a hundred
years have been using reflected light to view the blood vessels and
neural retina at the back of the eye (fundus). For these clinical
applications, the retina becomes the object which is imaged by the
eye’s optical system in conjunction with an ophthalmic instrument
(e.g. fundus camera or ophthalmoscope). This is a challenging task
because the eye contains dense visual pigmentation which absorbs
light for vision, plus additional melanin pigment which absorbs
excess light, thereby reducing internal light scatter. Consequently,
only a small fraction of the light entering the eye actually reflects
back out again. Combining this very low reflectance (e.g. 10⁻⁴) with
the susceptibility of the retina to photic injury leads to a practical
problem of dim fundus images. The clinical solution to this problem
is to dilate the pupil, which unfortunately releases the full impact of
the eye's aberrations on the reflected image. Only recently have the
eye's aberrations been measured with sufficient accuracy to allow
correction with adaptive optics, thereby producing a high resolution
image of the fundus.42

This chapter is devoted to schematic models of the eye and is
organized as follows. In section 4.2 we review the
historical development of model eyes from their inception to the
present. In section 4.3 we describe a simplified physical model
called the Indiana Eye which was designed specifically for the
computation of image quality. The optical properties of the Indiana
Eye are examined in detail in section 4.4 and the use of the model to
compute image quality is outlined in section 4.5. A model of the
neuro-sensor features of the human retina is presented in section
4.6. We conclude our chapter with a discussion in section 4.7 of the
problem of validating eye models which are intended to predict the
optical and neuro-sensory effects on visual performance.

Table 4.1: Summary of Indiana Eye Model

FEATURES
• Single, aspheric refracting surface with variable shape parameter
• Adjustable pupil (axial, lateral, diameter)
• Dispersive, homogeneous medium
• Variable focus (adjustable three ways)

APPLICATIONS
• Focal power
• Retinal magnification factor
• Defocus
• Longitudinal chromatic aberration (LCA)
• Chromatic difference of magnification (CDM)
• Transverse chromatic aberration (TCA)
• Spherical aberration
• Oblique astigmatism
• Non-linear projection of visual space
• Wave aberration function
• Polychromatic optical transfer functions (OTFs)
• Image quality
• Visual performance

Table 4.1 provides a summary of the main features and
application areas of the Indiana Eye which are described in detail in
this chapter. Because of the model's extreme simplicity, optical
analysis of the Indiana Eye will be a straightforward exercise for
experienced optical engineers. The greater challenge will be in
gaining familiarity with the different vocabulary and unconventional
approach used by authors of the visual optics literature. To help
bridge this "jargon gap" which stands as a real barrier to effective

- 4.7 -
communication between the fields of engineering optics and visual
optics, we conclude this introduction by briefly mentioning a few
terms and conventions.

An eye which is well focused for objects located at optical infinity
is called emmetropic. Simple focusing errors responsible for near-
sightedness (myopia), far-sightedness (hyperopia) and orientation-
specific defocus (astigmatism) may be attributed to the cornea, to
the crystalline lens, to elongation of the globe, or to some
combination of these factors. The term astigmatism has an
unconventional meaning in visual optics that can lead to
unnecessary confusion. In standard optical theory of rotationally
symmetric optical systems, astigmatism is one of the classic Seidel
aberrations which is field dependent and vanishes on the optical
axis.53 In visual optics, however, there is no presumption that the
eye's optical system is rotationally symmetric. Therefore the term
astigmatism is used in visual optics to describe the type of focusing
error produced by a toric refracting surface. To avoid confusion,
the field-dependent astigmatism of Seidel is usually called oblique
astigmatism (or off-axis astigmatism) in the visual optics literature.
We adhere to that convention here. Field angle, usually called
eccentricity in visual optics, is measured with respect to the point of
fixation. By definition the fixation point is conjugate to the retinal
fovea, the highly specialized region of retina where cone
photoreceptors are most densely packed and visual acuity is
maximum.

Axial distances of objects and images are typically expressed as
vergences (i.e. index of refraction/distance) using diopters (D =
inverse meters) as the physical unit of measure. This convention is
popular in visual optics because it transforms the awkward Gaussian
conjugate equation n′/l′ = n/l + (n′ − n)/r into a simpler linear
summation rule L′ = L + F, in which image vergence L′ equals the
object vergence L plus the power F of the refracting surface.
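
A short numerical sketch of this vergence arithmetic (our illustration; the 60 D surface and 0.5 m object distance are arbitrary round numbers, not values from the chapter):

```python
def image_vergence(object_vergence_d, surface_power_d):
    """Vergence form of the Gaussian conjugate equation: L' = L + F."""
    return object_vergence_d + surface_power_d

# Object in air (n = 1) located 0.5 m in front of a 60 D refracting
# surface, with water (n' = 4/3) behind the surface.
n, n_prime = 1.0, 4.0 / 3.0
L = n / -0.5                       # object vergence: -2 D
L_prime = image_vergence(L, 60.0)  # image vergence: 58 D
l_prime = n_prime / L_prime        # image distance behind the surface, m
print(round(L_prime, 2), round(l_prime * 1000, 1))  # 58.0 D, 23.0 mm
```

The same calculation done with the Gaussian conjugate equation requires solving for l′ explicitly; the vergence form reduces it to one addition.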

4.2 Historical Development of Schematic Eyes
Some of the greatest optical scientists of all time have worked to
explain and quantify the underlying optical principles of the human
eye. According to Helmholtz's authoritative Treatise on
Physiological Optics 85, Kepler (1602) was the first to understand
that the eye formed an inverted image on the retina and Scheiner
(1625) demonstrated this experimentally in an excised human eye.
Newton (1670) was the first to appreciate that the human eye
suffered from chromatic aberration, and Huygens (1702) actually
built a model eye to demonstrate the inverted image and the
beneficial effects of spectacle lenses. Astigmatism was observed by
Thomas Young (1801) in his own eye and spherical aberration was
measured by Volkman (1846). The first quantitative paraxial model
eye was developed by Listing (1851), which employed a series of
spherical surfaces separating homogeneous media with fixed
refractive indices. Listing's model set a trend that produced
numerous models over the next 100 years, of which the most widely
quoted variants are due to Emsley.28

All of the early paraxial models were designed with spherical
surfaces, but since the emphasis was on paraxial calculation of
focal length, blur circles, image location, and image size, the
considerable off-axis aberrations of these models were ignored. The
most widely used paraxial model eyes were described by the Swedish
ophthalmologist Gullstrand32 in his appendices to Volume 1 of
Helmholtz’s treatise.85 Gullstrand was fully aware that his models
failed to capture the known asphericity of the cornea, but his six
surface model eye did try to capture known GRIN properties of the
lens by including a lens with two refractive indices.

Using paraxial approximations it is possible to reduce a six
(Gullstrand), four (Le Grand), or three (Listing) surface model down
to a single surface model eye as conceived originally by Huygens
(1652) and dubbed a "reduced eye" by Listing (1851). Several
reduced-eye models were produced during the century 1850-1950
(Listing, Gullstrand, Emsley) which allowed calculations of paraxial
image plane, size and position to be made with ease. Surprisingly,
none of these models explicitly included a pupil, an oversight which
appears to have discouraged the use of the reduced eye to model
aberrations of the eye, diffraction effects, or retinal illumination.
Only in the past decade has the reduced eye been shown to be a
legitimate model of these optical properties.

In recent years, several wide-angle models have been developed
to capture three important properties of off-axis image quality in
human eyes. First, oblique astigmatism had been observed in
several studies early in this century29, 63 which prompted the use of
aspherical corneas and lenses to ensure that the sagittal and
tangential foci of their models straddled the retina26, 45, 88 as is
typical for emmetropic eyes. Important practical benefits of
correcting oblique astigmatism include improved imaging of the
peripheral fundus and improved efficacy and accuracy of laser
photocoagulation in the peripheral retina. Some interesting issues
regarding the ocular source of oblique astigmatism have been raised
in the modeling work of Dunne and colleagues. They observed that
their model, and those of others, failed to capture both off-axis
astigmatism and on-axis spherical aberration even though both are
similarly affected by changes in model asphericity.26 In a later study
they observed that off-axis astigmatism is virtually unchanged in
Gullstrand’s model eye when the lens is removed.27 This conflicted
with experimental reports that aphakic eyes (those with the lens
surgically removed) have less off-axis astigmatism.51 By displacing
the model pupil backwards along the optic axis, they were able to
reduce the off-axis astigmatism to levels observed experimentally in
aphakic eyes, and therefore came to the conclusion that aphakic
eyes have reduced off-axis astigmatism due to axial displacement of
the pupil and not because the lens contributes significantly to the
off-axis astigmatism of the eye. Axial displacement of the pupil
(which is typically located at the anterior surface of the lens in
previous aspheric models) will have a profound effect on the model
eye’s off-axis astigmatism, but very little impact on axial spherical
aberration. Therefore, using a model without a lens in which the
pupil can be placed in any plane, it is possible to manipulate surface
asphericity and pupil position to accurately model both off-axis
astigmatism and on-axis spherical aberration.

The second major property required of a wide angle model is an
accurate representation of the non-linear projection of object space
onto the retina. This has been modeled by Lotmar45, Drasdo and
Fowler24 and Kooijman39, all predicting a compression of the
peripheral image. As noted by Lotmar, this non-linear projection is
important in scaling the fundus images of peripheral retinal features
seen by the clinician. Tangential and sagittal minification of
peripheral images are unequal (e.g. radial and tangential
minifications are approximately 64% and 73%, respectively, at 80°
eccentricity according to the model of Drasdo and Fowler) and,
therefore, peripheral images in these models suffer some distortion
as well as minification. This image compression might be expected
to increase retinal illumination for peripheral images. However, this
effect is offset by the oblique projection of the pupil, with the net
result being almost constant retinal illumination across the retina in
these wide angle model eyes.18, 39, 60

The third, and dominant, motivation for recent development of
model eyes has been to account for the effects of monochromatic
and polychromatic aberrations on vision. In general, models
constructed with spherical refracting surfaces have been found to
have far more aberration than is present in real eyes. For example,
the early spherical models of Gullstrand and others, which were not
designed to model spherical aberration, exhibit between 8 D and 12
D of longitudinal spherical aberration (LSA) at the edge of an 8 mm
diameter pupil, whereas human eyes have less than 2.5 D for the
same ray location.43, 81 Discrepancies of this kind have prompted
the adoption of aspheric surfaces and dispersive media to help bring
the models into closer agreement with real eyes. For example, Liou
and Brennan use a four surface, aspheric model plus a GRIN lens to
achieve a realistic value of about 2 D of LSA at a 4 mm ray height.44
Similarly, Navarro et al.54 designed an aspheric multi-surfaced
model with dispersive media in order to compute the effects of
monochromatic and chromatic aberrations on image quality.
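
The several-fold excess of spherical aberration in spherical-surface models is easy to verify by exact ray tracing through a single refracting sphere. The sketch below assumes Emsley's reduced-eye values (60 D of power, n = 4/3); it is our illustration, not code taken from any of the cited models:

```python
import math

N = 4.0 / 3.0                    # refractive index of the ocular medium
R = (N - 1.0) / 60.0 * 1000.0    # radius of curvature (mm) for 60 D power

def axial_crossing_mm(h):
    """Exact trace of a ray parallel to the axis at height h (mm) through a
    single refracting sphere; returns the axial crossing (mm from vertex)."""
    i = math.asin(h / R)                   # angle of incidence
    i_prime = math.asin(math.sin(i) / N)   # Snell's law
    # Sine rule in the triangle (surface point, center, axial crossing)
    cq = R * math.sin(i_prime) / math.sin(i - i_prime)
    return R + cq

def vergence_d(z_mm):
    """Reduced vergence n/z in diopters, for an axial distance in mm."""
    return 1000.0 * N / z_mm

paraxial = axial_crossing_mm(1e-6)   # ~22.2 mm: the paraxial focus
marginal = axial_crossing_mm(4.0)    # ray at the edge of an 8 mm pupil
lsa = vergence_d(marginal) - vergence_d(paraxial)
print(round(lsa, 1))  # roughly 12 D, at the top of the 8-12 D range cited
```

The result confirms the text: a spherical 60 D surface produces on the order of 12 D of LSA at a 4 mm ray height, several times the amount measured in real eyes.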

There is general agreement that chromatic aberration is among
the more serious aberrations of the human eye. Nevertheless, in
spite of the fact that ocular chromatic aberration was first described
over 300 years ago by Newton55 when he was exploring chromatic
dispersion by glass prisms, and has been studied extensively during
the last 50 years, there have been few attempts to model the
chromatic aberration of the human eye. All of the optical media in
the classical paraxial model eyes have a fixed refractive index, as do
most recent aspheric model eyes.26, 39, 45 Few models have included
dispersive media even though it has been known since Helmholtz
that, so far as chromatic aberration is concerned, the eye behaves as
if it were a volume of water separated from air by a single refracting
surface. Surprisingly, a century passed before Le Grand41 took the
next step by modifying the reduced eye to allow refractive index to
vary with wavelength according to a specific formula. According to
a recent paper by Villegas et al.84 this variable-index reduced-eye
model has the same amount of chromatic aberration as Le Grand’s
earlier four surface model eye, which included dispersion by all four
components of the model (cornea, aqueous, lens and vitreous).
Both of Le Grand’s models exhibit approximately the same amount
of transverse and longitudinal chromatic aberration as found in real
eyes.66, 75, 84

The drive to develop models for computing image quality has led
some authors to abandon anatomical correctness in favor of
analytical or physical simplicity. For example, Van Meeteren
(1974) developed a mathematical model of the pupil function and
its variation with wavelength in order to compute image quality in
the human eye for monochromatic and polychromatic light.82 The
drawback of a purely mathematical model, however, is that it does
not readily provide insight into the physics of image formation in
the eye. For this reason we prefer to work with a physical model,
from which we may derive the optical properties which determine
image quality. In the spirit of parsimony, we would like the model
to be as simple as possible, preferably with a single refracting
surface. Much to our surprise, our group at Indiana University has
discovered over the past decade that the classic reduced eye
satisfies these requirements, provided that a few small modifications
are adopted. As shown in the following sections, a modified
reduced eye is capable of accurately describing the chromatic,
spherical, and astigmatic aberrations of the typical human eye.
Because of its simplicity of form the model may be analyzed
geometrically, analytically, and numerically to gain insight into
visual optics problems such as computing the effects of diffraction,
defocus, and aberrations on retinal image quality and visual
performance.

4.3 The Indiana Eye - a Simplified Schematic Model for Evaluating the Quality of Retinal Images
The Indiana Eye is a reduced-eye model which has evolved over
the past decade for the purpose of computing the quality of retinal
images. The design goal has been to produce a physical model that
was simple enough to allow "back-of-the-envelope" reasoning about
ocular aberrations and diffraction, the two major factors which limit
image quality. In designing the model we wished to avoid the
computational complexity of modeling real eyes, with their multiple
refracting surfaces, GRIN profiles, and variable power lenses
constructed from imperfect biological components. Instead, we
aimed to create a tool for thinking about how the eye performs as
an optical instrument, and how the eye interacts with other optical
instrumentation. Thus, unlike most other schematic eye models
described in the visual optics literature, anatomical correctness was
sacrificed to achieve simplicity of form and analysis. Ultimately the
success of the model would be measured not by its structural
similarity to real eyes, but by its ability to predict visual
performance on optically-limited tasks.

Our original impetus for developing a simplified model eye was
the need to account for the visual effects of the eye's chromatic
aberration. For this purpose we used Emsley's reduced eye filled
with water, for which Le Grand41 suggested using Cornu's hyperbolic
formula

n(λ) = a + b/(λ − c)                                            (4.1)

to specify how the refractive index n of the ocular medium varies with
wavelength λ. In this equation λ is specified in microns and the other
parameters are: a=1.31848, b=0.006662, c=0.1292.
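
Eqn. 4.1 is straightforward to evaluate. The sketch below (our illustration) checks the index at the 589 nm reference wavelength and, assuming a 60 D reduced eye (our assumption), estimates the chromatic change in single-surface power F(λ) = (n(λ) − 1)/r across the visible spectrum:

```python
A, B, C = 1.31848, 0.006662, 0.1292   # Le Grand's constants for eqn. 4.1

def cornu_index(wavelength_um):
    """Cornu's hyperbolic dispersion formula, n(lambda) = a + b/(lambda - c),
    with wavelength in microns."""
    return A + B / (wavelength_um - C)

r_m = (cornu_index(0.589) - 1.0) / 60.0   # radius giving 60 D at 589 nm

def surface_power_d(wavelength_um):
    """Paraxial power of the single refracting surface at this wavelength."""
    return (cornu_index(wavelength_um) - 1.0) / r_m

print(round(cornu_index(0.589), 4))   # ~1.333, close to the index of water
# Chromatic difference of power across 400-700 nm, about 2.3 D:
print(round(surface_power_d(0.4) - surface_power_d(0.7), 2))
```

The roughly 2 D spread of power across the visible spectrum is the longitudinal chromatic aberration that motivated this dispersive reduced eye in the first place.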

In order to account for the impact of chromatic aberration and
diffraction on image quality, Thibos71 introduced a pupil into the
reduced eye at the same axial location relative to the mean nodal
point as occurs in Gullstrand's schematic eye.32 Another important
step in the evolution of the reduced-eye model was to change the
refracting surface to an aspheric shape. The initial motivation for
this change was to isolate the effects of chromatic aberration for
study, unencumbered by the large amounts of spherical aberration
associated with a spherical refracting surface in Emsley's model. For
this purpose an ellipsoidal refracting surface with zero spherical
aberration was introduced and the resulting aspheric model was
dubbed the Chromatic Eye.80

To achieve the extra degree of freedom needed to model the
eye's spherical aberration, the aspheric refracting surface of the
Chromatic Eye model has been generalized recently81 to include an
entire family of rotationally symmetric models built from quadric
surfaces of revolution about the z-axis. This family of models is
defined by the equation

z = (x² + y² + pz²)/(2r).                                       (4.2)

In this equation r is the paraxial radius of curvature and p is a shape
parameter which determines the asphericity as follows:

p > 1, oblate ellipsoid,
p = 1, sphere,
0 < p < 1, prolate ellipsoid,
p = 0, paraboloid,
p < 0, hyperboloid.
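
Because z appears on both sides of eqn. 4.2, computing the surface sag means solving the quadratic pz² − 2rz + (x² + y²) = 0 for the root nearest the vertex. A minimal sketch (our illustration; the 5.55 mm radius is read from Fig. 4.2):

```python
import math

def quadric_sag(x, y, r, p):
    """Sag z of the quadric surface x^2 + y^2 + p z^2 = 2 r z (eqn. 4.2).

    Solves p*z^2 - 2*r*z + s = 0 (with s = x^2 + y^2) for the root nearest
    the vertex; the p = 0 case is the paraboloid z = s/(2r).
    """
    s = x * x + y * y
    if p == 0.0:
        return s / (2.0 * r)
    return (r - math.sqrt(r * r - p * s)) / p

r = 5.55  # paraxial radius of curvature in mm (Indiana Eye, Fig. 4.2)
for p in (1.0, 0.6, 0.0, -0.5):  # sphere, Indiana Eye, paraboloid, hyperboloid
    print(p, round(quadric_sag(3.0, 0.0, r, p), 4))
```

For small ray heights all members of the family agree (the sag reduces to s/2r, independent of p), which is exactly why they share the same paraxial power; the surfaces only diverge in the periphery, where spherical aberration arises.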

The shape parameter p, which determines the degree of spherical
aberration of the surface, is related to the eccentricity ε of the
surface by the equation ε² = 1 − p. To achieve the special case of
zero spherical aberration of the Chromatic Eye model requires that
p = 1 − 1/n², which yields an ellipsoidal surface that is also a Cartesian
oval.81

One of the convenient features of formulating the aspheric
refracting surface using eqn. 4.2 is that all of the members of this
family of models have the same radius of curvature r at the origin,
and thus have the same paraxial power and the same paraxial
chromatic aberration. A family member of special interest is the
Chromatic Eye, which was designed to be emmetropic at 589 nm
(the sodium D-line of the solar spectrum). According to eqn. 4.1,
the refractive index of the ocular medium of the model at this
reference wavelength is nD = 1.333. Therefore, to achieve a model
which is free of spherical aberration at this particular wavelength
requires that p = 1 − 1/nD² = 0.4375. In other words, to achieve stigmatic
imaging of a distant axial point source emitting 589 nm light, which
is a special feature of the Chromatic Eye, requires an ellipsoidal
surface of eccentricity ε = 1/nD = 0.75.
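
Both special-case values can be checked in a few lines (our illustration, using the exact value nD = 4/3, which rounds to the 1.333 quoted above):

```python
import math

n_d = 4.0 / 3.0                # index of the ocular medium at 589 nm (~1.333)
p = 1.0 - 1.0 / n_d**2         # shape parameter for zero spherical aberration
eccentricity = math.sqrt(1.0 - p)  # from the relation eps^2 = 1 - p

print(round(p, 6), round(eccentricity, 6))  # 0.4375 0.75
```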

[Figure 4.2: three panels showing Gullstrand's Schematic Eye (top), the General Aspheric Reduced Eye (center), and the Indiana Eye (bottom). The reduced-eye panels are labeled with the points F, N, E′, C, F′, the optical, visual, fixation, and achromatic axes (angles α, ψ, φ), the dispersion formula n(λ) = a + b/(λ − c) with nD = 1.333, a = 1.320535, b = 0.004685, c = 0.214102, the Indiana Eye asphericity p = 0.6, and dimensions (mm) including 2.75, 3.60, 5.55, 11.0, 16.67, 22.22, and 23.89.]

Figure 4.2 Comparison of Gullstrand's schematic eye (top) with the
general form of the reduced eye (center) and the simplified Indiana Eye
(bottom). All dimensions are in mm. Circles mark the location of the nodal
point (N), center of retinal sphere (C), center of exit pupil (E′), anterior
focal point (F) and posterior focal point (F′) of the reduced eye. Surface of
the reduced eye is located midway between the principal points P, P′ of the
Gullstrand model. (Redrawn from Thibos, Ye, Zhang, & Bradley, 1997)

The aspheric reduced eye described above is compared in Fig. 4.2
with Gullstrand’s classic schematic eye as described by Emsley.28 In
its most general form, the reduced eye has a number
of free parameters which can be systematically varied to explore
their impact on retinal image quality or to customize the model to
describe individual eyes. In addition to the parameters specified in
eqns. 4.1 and 4.2, the lateral and axial position of the pupil and the
location of the foveal region of the retina are of major importance
for vision. Pupil position is important because it determines how
image quality will vary across the retinal surface. Foveal location is
important because it establishes the region of maximum neural
resolution. The fovea is a specialized region of the retina where
cone photoreceptors are very tightly packed together in order to
maximize spatial resolution in the approximate center of our visual
field. Outside the fovea, spatial resolution of the neural retina falls
substantially (see section 4.6) and therefore it becomes important
to know the relative locations of the locus of maximum image
quality and the locus of maximum neural resolution.

To help specify the centration of optical and neural axes of the
eye, various reference axes have been defined as shown in Fig. 4.2.
In the general reduced eye there is no particular relationship
assumed between the optical axis (axis of revolution of the aspheric
refracting surface) and the visual axis (join of fovea, nodal point,
and distant fixation point) or the fixation axis (path of chief ray
from distant fixation point to fovea) or the achromatic axis of the
eye (path of chief nodal ray, which is the axis of zero transverse
chromatic aberration). Angle α (the angle between visual and
optical axes) specifies the location of the fovea relative to the
model's axis of symmetry, which will affect the magnitude of
off-axis aberrations present at the fovea. For polychromatic light,
angle ψ (between the visual and achromatic axes) and the closely
related angle φ (between the fixation and visual axes in image space)
are even more important for assessing the quality of foveal images.
This is because ψ and φ are sensitive to decentration of the pupil
relative to the visual axis of maximum neural resolution. If the pupil
is centered on the visual axis, then ψ = φ = 0 which means that the
foveal chief ray coincides with the visual axis and transverse
chromatic aberration (TCA) will be zero at the fovea. However, if
the pupil is not well centered on the visual axis (i.e. φ ≠ 0) , or
equivalently, if the visual axis does not coincide with the eye's
achromatic axis (i.e. ψ ≠ 0), then the fovea will be subjected to the
deleterious effects of TCA, which can have a dramatic impact on
retinal image quality and on visual performance.5, 76, 83

For many engineering and scientific applications, the general
reduced eye has too many degrees of freedom to be useful. What is
needed is an even simpler model constrained by empirical data from
typical human eyes. This simplified model, which we call the
Indiana Eye, is shown in the bottom panel of Fig. 4.2 . The primary
simplifying assumption of the Indiana Eye is that, on average, the
pupil is well centered on the visual axis (i.e. achromatic and visual
axes coincide, so ψ = φ = 0). This assumption is based on recent
studies which have shown that, although individual pupils may be
decentered from the visual axis by as much as 1 mm as shown in
Fig. 4.3 , the statistical mean of angle ψ in the population is not
significantly different from zero in either the horizontal or vertical
meridians.65, 66, 75 This remarkable result implies that the fovea of
the hypothetical average eye (representing the population mean of
human eyes) is spared the deleterious effects of transverse
chromatic aberration which exist everywhere else in the retinal
image. We do not know if this fortunate optical configuration in
human eyes is the result of an active process or an evolutionary
adaptation that has maximized polychromatic retinal image quality
where it would do the most good, namely, in that part of the visual
field in which neural resolving power is maximum. Regardless of
how this happy situation arose, there is a lesson here for the design
of any visual instrument which has the potential for displacing the
exit pupil of the eye+instrument system from the visual axis, thereby
inadvertently introducing unwanted transverse chromatic aberration
at the observer's fovea.10

Figure 4.3. Distribution of pupil displacement from the visual axis
(bottom and left axes) and distribution of angle ψ (angle between visual
axis and achromatic axis of the eye, top and right axes) in a population of
young adult eyes. Ellipse encompasses half of the data points. Redrawn
from Rynders et al., 1995

A second simplifying assumption adopted for the Indiana Eye is
that the visual axis and optical axis coincide (i.e. angle α = 0).
This avoids the complications associated with asymmetric
aberrations which, although present in human eyes, are highly
variable in the population.38, 87 Other constraints of the Indiana Eye
pertain to the free parameters of eqns. 4.1, 4.2. Analysis of
published data indicates the need to increase the dispersion of the
ocular medium slightly so as to bring the model into closer
alignment with experimental measurements of ocular chromatic
aberration.80 The revised values of parameters a, b, c of eqn. 4.1 for
both the Chromatic Eye and the Indiana Eye are: a = 1.320535,
b = 0.004685, c = 0.214102. These changes reduce the constringence
of the ocular medium (as defined by Mouroulis and Macdonald,53
p. 190) from 55 for water to 50 for the Indiana Eye. The radius of
curvature r of eqn. 4.2 was set by Emsley to be 5.55 mm in order
for the reduced eye to have the same anterior and posterior focal
lengths as Gullstrand's schematic eye at the emmetropic reference
wavelength (589 nm). These values for r and for the reference
wavelength were adopted also for the Indiana Eye. (The fact that the
radius of curvature of the model eye is significantly less than that of
the cornea alone, which is 7.8 mm in Gullstrand's schematic eye,
reflects our intentional disregard for anatomical accuracy in order
to represent the eye by a single refracting surface.) As for
asphericity, a recent study of the eye's spherical aberration suggests
a shape-parameter value of p = 0.6 is required to match the
spherical aberration of the Indiana Eye with that of typical human
eyes.81 (This p value of the model is significantly greater than that
of the typical human cornea). For convenient reference, the
parameters of the Indiana Eye model are collected in Table 4.2.

Table 4.2: Parameters of the Indiana-Eye model.

emmetropic wavelength of the model, λD                           589 nm
refractive index for emmetropic wavelength, nD                   1.333
refractive index for wavelength λ in microns,                    a = 1.320535
  n(λ) = a + b/(λ − c)                                           b = 0.004685
                                                                 c = 0.214102
constringence of ocular medium, VD = (nD − 1)/(nF − nC)          50.25
apex radius of curvature, r                                      5.55 mm
shape parameter, p                                               0.6
elliptical eccentricity, ε = √(1 − p)                            0.6325
axial position of the physical pupil from the apex, z            2.55 mm
axial position of the entrance pupil from the apex               2.16 mm
focal power for emmetropic wavelength, FD = (nD − 1)/r           60 diopters
anterior focal length for emmetropic wavelength, fD = 1/FD       16.67 mm
posterior focal length for emmetropic wavelength, fD′ = nD/FD    22.22 mm

4.4 Geometrical Optical Analysis of the Indiana Eye


The goal of this section is to summarize the known optical
properties of the Indiana Eye model. Interested readers are directed
to the published literature for further details.
4.4.1 Surface Power

The paraxial power of the refracting surface in the Indiana Eye is equal to the
change in refractive index across the interface divided by the radius of curvature

F = (n′ − 1)/r        (4.3a)
  = n′/f′             (4.3b)
  = 1/f.              (4.3c)

The alternative expression (4.3b) can be derived from the Gaussian
conjugate equation assuming the object distance is infinite, and a
second alternative expression (4.3c) can be obtained in the same way
by assuming the image distance is infinite. From these three
expressions for F follow two simple relationships involving the
difference and ratio of focal lengths which characterize the reduced
eye

f′ − f = r
f′/f = n′        (4.4)

Equations 4.3 and 4.4 are valid for any emmetropic reduced-eye
model. In what follows we use subscript notation to indicate that a
particular parameter assumes the value from Table 4.2 associated
with the Indiana Eye model.

4.4.2 Magnification

Image sizes on the retina and object dimensions in the external
world are usually expressed not in terms of linear distances, but as
an angle subtended at the nodal point of the eye. This practice
avoids unnecessary multiplication by the eye's magnification factor
and keeps image size and object size equal. For example, in Fig.
4.4(a) the linear distance s along the retina subtends the angle
ω = s/(f′ − r) = s/f at the nodal point. It is less convenient to describe
the angle ζ subtended at the principal plane (pp, which is tangent to
the refracting surface at the vertex) because, unlike ω, angle ζ
applies only to image space but not to object space. Notice,
however, that for small angles the ratio of these two angles is fixed,
ζ/ω = f′/f = n′.


Figure 4.4 Retinal magnification (a) and blur circle diameter (b) for the
Indiana Eye model.

The linear distance on the retina corresponding to 1 degree of
visual angle is called the retinal magnification factor in visual
science. This factor is easily determined for the Indiana Eye model.
The distance from nodal point to retina is 16.67 mm, which means
that 1 mm on the retina subtends roughly 1/17 radian, or 3.5 deg at
the nodal point. Thus each degree of visual angle corresponds to
about 300 microns on the retina. From an anatomical perspective,
300 microns is a long way on the human retina. Individual cone
photoreceptors are about 3 microns in diameter in the central
fovea, so when you look at your fingernail at arm’s length, which
then subtends about 1 degree of visual angle, the retinal image of
your nail spans roughly 100 x 100 = 10,000 cones in area. Little
wonder that even the smallest detail is clearly visible!
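These magnification figures are easy to reproduce. The following sketch (ours, not from the chapter; variable names are arbitrary) computes the retinal magnification factor from the nodal distance f = f′ − r and the Table 4.2 values:

```python
# Retinal magnification of the Indiana Eye. The nodal point sits at the
# center of curvature, so the nodal-point-to-retina distance is
# f = f' - r = 16.67 mm (eqn. 4.4), and 1 deg of visual angle maps to
# f * (pi/180) mm of retina.
import math

f_prime = 22.22            # posterior focal length, mm (Table 4.2)
r = 5.55                   # apex radius of curvature, mm
f = f_prime - r            # nodal point to retina, mm

mm_per_deg = f * math.pi / 180.0     # retinal magnification factor
deg_per_mm = 1.0 / mm_per_deg

print(f"1 deg of visual angle = {1000 * mm_per_deg:.0f} microns of retina")
print(f"1 mm of retina subtends {deg_per_mm:.1f} deg")

# ~3-micron foveal cones: number of cones spanned by a 1-deg image
cones_across = 1000 * mm_per_deg / 3.0
print(f"cones across 1 deg: ~{cones_across:.0f}")
```

The exact values (291 microns per degree, 3.4 degrees per mm) round to the 300 microns and 3.5 deg quoted above.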

4.4.3 Defocus

Uncorrected refractive error is the most common cause of
reduced vision in daily life. Therefore one of the goals of modeling
the eye's optical system is to account for the effect of defocus on
retinal image quality and visual performance. As specified in Fig.
4.2 , the retinal image is clearly focused in the Indiana Eye for
paraxial objects emitting 589 nm light from an infinite distance. To
model focusing errors with the Indiana Eye, changes can be made to
any of the following four parameters: (1) object distance, (2)
surface radius of curvature, (3) axial location of the retina, (4)
wavelength of light. Regardless of which of these factors is used to
model refractive error, the same geometrical optics analysis
illustrated in Fig. 4.4(b) applies. The example chosen for analysis
is myopic defocus in which the incident rays from a point source
come to a focus in front of the retina, thus producing a blur circle
of linear diameter b on the retina. The problem is to derive the
angular subtense β of the retinal blur circle in terms of the pupil
diameter d and the focusing error E in diopters defined below.

A simple approximate solution to the stated problem emerges if
we ignore the small distance between the entrance pupil and the
principal plane of the model. The power of the surface is given by
eqn. 4.3b as n ′ / f ′ whereas the required power to produce a focused
image is n ′ /( f ′ + ∆f ′). The difference between these two quantities is
the vergence error E in image space

E = n′/f′ − n′/(f′ + ∆f′)
  ≅ n′∆f′/f′²        (4.5)

From the geometry of similar triangles in Fig. 4.4(b) ,

b/d = ∆f′/f′.        (4.6)

Dividing both sides of eqn. 4.6 by f ′ and combining the result with
eqn. 4.5 gives

n′b/f′ = dE.        (4.7)

Since the distance ∆f ′ is much smaller than f ′, the ratio b/f ′ may be
taken as an approximate formula for the angle subtended by the
blur circle at the principal plane. As shown previously in
connection with Fig. 4.4(a), the angle subtended at the nodal point
is larger by the factor n′. Thus the left side of eqn. 4.7 is recognized
as the desired visual angle β and so the final approximate formula
for the angular size of the blur circle is the simple linear expression

β = dE        (4.8)

Notice that blur circle diameter β will be in radians if pupil diameter
d is in meters and focus error E is in diopters.
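Eqn. 4.8 is simple enough to apply directly. A minimal sketch (ours, not from the chapter):

```python
# Blur-circle angular diameter from eqn. 4.8: beta = d*E (radians), with
# pupil diameter d in meters and focus error E in diopters.
import math

def blur_circle_deg(pupil_mm, error_diopters):
    """Angular blur-circle diameter (degrees) for the reduced eye."""
    return math.degrees((pupil_mm / 1000.0) * error_diopters)

# Example: 1 D of uncorrected defocus through a 4-mm pupil
beta = blur_circle_deg(4.0, 1.0)
print(f"blur circle: {beta:.2f} deg = {60 * beta:.0f} arcmin")
```

One diopter of defocus through a 4-mm pupil thus blurs a point into a circle of roughly a quarter of a degree (about 14 minutes of arc).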

The preceding analysis formulates the problem of ocular
refractive error in image space. However, focusing errors of the eye
are usually specified in terms of refractive correction, which is
defined as the power K of the spectacle lens or contact lens required
in object space to render the eye emmetropic. To correct the eye's
refractive error requires that K satisfy the modified conjugate
equation

K = L′ − L − F (4.9)

where L is the dioptric distance from the object to the refracting
surface, L′ is the dioptric distance from the refracting surface to the
retina, F is the dioptric power of the refracting surface, and K is the
dioptric power of the correcting lens (assumed to be close to the
eye). For K to be non-zero implies that one or more of the terms on
the right side of eqn. 4.9 is different from the value assigned to the
emmetropic model. We wish to know which term is affected, but
that depends upon the source of the refractive error. To illustrate
the problem, the four methods available for introducing refractive
error into the Indiana Eye are shown in Fig. 4.5 . In each diagram of
this figure the lower ray trace depicts the focus error in image space
and the upper ray trace shows the refractive error corrected by a
spectacle lens. Our approach is to compare the refractive error
specified by eqn 4.9 for the modified model with the following
condition for the emmetropic model

K0 = L0′ − F0 = 0 (4.10)

where the subscript zero is used to denote parameters of the
Indiana Eye given in Table 4.2. This equation has one fewer term
than eqn 4.9 because L0 = 0 when an image is clearly focused on the
retina of the Indiana Eye. Subtracting eqn. 4.10 from 4.9 gives the
desired expression for refractive error,

K = (L′ − L0′) − L − (F − F0)        (4.11)

In the first case of interest (Fig. 4.5(a)), the focusing error is
due to a change in axial location of the object and therefore L is the
only term in eqn. 4.11 which changes. One use of such a model
might be to investigate the effects of presbyopia, a condition
associated with aging in which the eye assumes fixed power because
the crystalline lens is unable to accommodate to changes in object
distance. From eqn. 4.11 we conclude that K = -L, which indicates
that the refractive power of the correcting lens must match the
object vergence, but have opposite sign.


Figure 4.5 Four cases of refractive error caused by altering (a) object
distance, (b) axial length, (c) surface curvature, (d) refractive index. In
each diagram, the lower ray trace shows the focus error in image space and
the upper ray trace shows the focus corrected by a correcting lens of
power K in object space.

The second case of interest (Fig. 4.5(b)) is when the length of
the eye changes, which is the most common cause of myopia. In
this case the refractive error is due entirely to the change in L′, the
dioptric distance from refracting surface to the retina. Given
L0′ = nD/f0′ and L′ = nD/(f0′ + ∆f′) we conclude from eqn. 4.11 that
K/L0′ = −∆f′/(f0′ + ∆f′). This result indicates that the ratio of the power
of the correcting lens to the dioptric length of the standard eye is
the same as the ratio of the change in axial length to the overall
length of the eye.
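As a numerical sketch of this case (ours; the per-millimeter figure follows from the Table 4.2 values), the formula reproduces the familiar clinical rule of thumb of roughly 2.6 D of myopia per millimeter of axial elongation:

```python
# Case (b) of eqn. 4.11: refractive error from axial elongation,
# K/L0' = -df'/(f0' + df'), with L0' = nD/f0' = 60 D for the Indiana Eye.
f0_prime = 22.22   # mm, posterior focal length (Table 4.2)
L0_prime = 60.0    # D, dioptric length of the emmetropic eye

def axial_error(delta_mm):
    """Correcting-lens power K (D) for an eye elongated by delta_mm."""
    return -L0_prime * delta_mm / (f0_prime + delta_mm)

print(f"+1 mm of axial length: K = {axial_error(1.0):.2f} D")   # myopia
print(f"-1 mm of axial length: K = {axial_error(-1.0):.2f} D")  # hyperopia
```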

The third case of interest (Fig. 4.5(c)) is when refractive error
is caused by a change in the radius of curvature of the refracting
surface. Such a model might be used to investigate the effect of
contact lenses or laser refractive surgery which alters the surface
curvature of the cornea. Given F0 = (nD − 1)/r and F = (nD − 1)/(r + ∆r) we
conclude from eqn. 4.11 that K/F0 = ∆r/(r + ∆r). This result indicates
that the ratio of the power of the correcting lens to the power of the
standard eye is the same as the ratio of the change in radius of
curvature to the total radius of curvature of the refracting surface.

The fourth case of interest, altering the refractive index n′ of the
ocular medium (Fig. 4.5(d)), is slightly more complicated than the
others because changing refractive index affects two factors in eqn.
4.11. First, the dioptric power of the refracting surface changes by
the amount F − F0 = (n′ − nD)/r. Second, the dioptric distance from
refracting surface to the retina changes by the amount
L′ − L0′ = (n′ − nD)/f0′. These two factors have opposite sign in eqn. 4.11
so they tend to counteract each other. Specifically,

K = (n′ − nD)/f0′ − (n′ − nD)/r.        (4.12)

Because of their different denominators, the two terms in eqn. 4.12
have unequal weight so they only partially cancel. From eqn. 4.4 we
know that f0′ = rnD /(nD −1) = 4r for the Indiana Eye, which means that
the contribution to refractive error made by the change in surface
power is 4 times that made by the change in dioptric length of the
eye. Another form of this result is obtained by applying eqn. 4.4,
which simplifies eqn. 4.12 to

K = [(nD − n′)/r] · (1/nD).        (4.13)

This result says that the spectacle lens required to correct refractive
error caused by changing refractive index is 1/nD = 0.75 times the
change in power of the refracting surface. For example, an increase
in refractive index sufficient to produce a 4 D increase in surface
power of the Indiana Eye can be corrected by a -3 D spectacle lens,
with the difference being made up by the 1 D increase in dioptric
distance from refracting surface to the retina.
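The arithmetic of this example is easy to check. In the sketch below (ours, not from the chapter) the index increment is chosen to raise the surface power by exactly 4 D, and eqns. 4.12 and 4.13 agree on the −3 D correction:

```python
# Case (d): refractive error from a change of refractive index
# (eqns. 4.12 and 4.13), with parameters from Table 4.2.
r = 0.00555                         # apex radius, m
n_D = 1.333                         # emmetropic index
f0_prime = r * n_D / (n_D - 1.0)    # posterior focal length (eqn. 4.4), ~4r

dF = 4.0                            # target surface-power increase, D
n_prime = n_D + dF * r              # index giving F - F0 = (n' - nD)/r = 4 D

K_12 = (n_prime - n_D) / f0_prime - (n_prime - n_D) / r   # eqn. 4.12
K_13 = (n_D - n_prime) / (r * n_D)                        # eqn. 4.13

print(f"K by eqn. 4.12: {K_12:.2f} D")
print(f"K by eqn. 4.13: {K_13:.2f} D")
print(f"offsetting change in dioptric length: {(n_prime - n_D) / f0_prime:.2f} D")
```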

It is worth pointing out that the refractive error specified in
object space by eqn. 4.12 gives the same numerical result as eqn.
4.5, which quantifies focusing error in image space. Thus, focusing
errors specified in diopters apply equally well to image and object
space characterizations, which is one of the attractions of the
vergence method of formulating imaging problems. Another point
to note is that the above analysis ignored the fact that the effective
power of a spectacle lens depends on its distance d from the eye.8
Taking this effectivity factor into account, the actual power of the
spectacle required to correct the refractive errors described above
is K/(1+dK).

4.4.4 Longitudinal Chromatic Aberration (LCA)

The Chromatic Eye and the Indiana Eye suffer from the same
amount of paraxial chromatic aberration caused by the dispersive
properties of the ocular medium. Refractive index in these models
varies with wavelength according to eqn. 4.1, which in turn
produces a variable refractive error quantified by eqn. 4.13. We
quantify the magnitude of longitudinal chromatic aberration Kλ by
the difference in refractive error between wavelengths λ1 and λ2

Kλ = [n′(λ1) − n′(λ2)]/(rnD)        (4.14)

This longitudinal form of chromatic aberration has been widely
studied in human eyes and the results are summarized in Fig. 4.6.
Various symbols show the chromatic aberration of human eyes measured in 13
different studies over the past 50 years using a wide variety of experimental
methods. The dashed curve is a graph of eqn. 4.14 when the ocular medium
is water, and the solid curve is for the slightly more dispersive medium of the
Indiana Eye. The sign convention is best illustrated by example: the human eye
has too much power (i.e. is myopic) for short wavelengths and so a negative
spectacle lens is required to correct this focusing error. The good agreement
between these various studies is testimony to the fact that, unlike other ocular
aberrations, there is little variability between eyes with regard to chromatic
aberration. Likewise, chromatic aberration does not change significantly over
the life span.37
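The solid curve of Fig. 4.6 can be regenerated from eqns. 4.1 and 4.13. A sketch (ours; the function names are arbitrary) using the revised Cornu constants:

```python
# Chromatic refractive error of the Indiana Eye: index from Cornu's
# formula n(lambda) = a + b/(lambda - c) (eqn. 4.1, lambda in microns),
# refractive error relative to 589 nm from eqn. 4.13.
a, b, c = 1.320535, 0.004685, 0.214102   # Table 4.2 constants
r = 0.00555                              # apex radius, m

def index(wl_um):
    return a + b / (wl_um - c)

n_D = index(0.589)                       # emmetropic reference index, ~1.333

def chromatic_error(wl_um):
    """Refractive error (D) relative to the 589-nm reference."""
    return (n_D - index(wl_um)) / (r * n_D)

for wl in (0.40, 0.50, 0.589, 0.70):
    print(f"{1000 * wl:.0f} nm: {chromatic_error(wl):+.2f} D")
```

The model comes out about 1.7 D myopic at 400 nm and 0.4 D hyperopic at 700 nm, a span of roughly 2 D over that interval, consistent with the data collected in Fig. 4.6.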

[Figure 4.6 plots refractive error (diopters) against wavelength (nm),
referenced to zero at 589 nm, with data from Wald & Griffin (1947), Bedford
& Wyszecki (1957), Ivanoff (1953), Millodot & Sivak (1973), Millodot (1976),
Charman & Jennings (1976), Powell (1981), Lewis et al. (1982), Ware (1982),
Mordi & Adrian (1985), Howarth & Bradley (1986), Cooper & Pease (1988), and
Thibos et al. (1992), overlaid on the water-eye and chromatic-eye curves.]
Figure 4.6 Comparison of published measurements of ocular chromatic
aberration with the traditional water-eye model and with the chromatic
eye model. Published results were put on a common basis by translating
data points vertically until the refractive error was zero at the reference
wavelength (589 nm). (Redrawn from Thibos, Ye, Zhang, & Bradley, 1992).

Figure 4.7 Chromatic defocus in the context of the source luminance
spectrum. The solid curve shows the luminance spectrum of white light
emitted by the P4 phosphor of cathode-ray tubes and arrows mark the
amount of defocus if the eye accommodates for 550 nm. When the peak of
the luminance spectrum is in focus, most of the light is less than 0.25
diopters out of focus. (Redrawn from Thibos, Bradley & Zhang, 1991)

At first glance, 3 D of LCA across the visible spectrum would
seem to be a major problem for maintaining retinal image quality in
the human eye. In clinical terms, 3 D of uncorrected refractive
error would be a serious visual handicap. However, the situation is
not as bad as it seems. If the eye accommodates to focus a
wavelength in the middle of the spectrum (which we would model
by placing the appropriate spectacle lens in front of the Indiana
Eye), then the range of refractive error is effectively halved to ±1.5
D. Furthermore, the retina is not equally sensitive to all wavelengths
in the spectrum and so the greatest amount of defocus will affect
those wavelengths which are least visible. This line of reasoning is
illustrated in Fig. 4.7, which shows the luminance spectrum of
white light tagged with the amount of defocus caused by ocular
chromatic aberration. When the peak of the luminance spectrum is
in focus, most of the light is less than 0.25D out of focus. To put
this in perspective, 0.25 D is standard clinical tolerance for
prescribing spectacle or contact lenses. Consequently, when all the
images of different wavelengths are superimposed on the retina,
visibility of the target will be dominated by wavelengths that are
only slightly defocused. Thus we should not expect an attempt to
correct the chromatic difference of focus of the eye to yield large
improvements in polychromatic retinal image quality or visual
performance for white light. This expectation is borne out by
everyday experience and by careful scientific studies.17, 33 In fact,
not only are the potential benefits of achromatization small, but
new problems surface when attempting to correct the eye's
chromatic aberration which can actually make the situation worse.9, 13

It is worth noting that ophthalmic instruments designed to look at the retinal
features of the eye may not receive the same level of protection against
chromatic blur. This is because protection will be determined by the
instrument's spectral sensitivity, which might be broader and flatter than the
retina's spectral sensitivity. Nevertheless, the Indiana Eye is still useful as an
engineering design tool in this context because it quantifies the amount of
defocus to be expected for a retinal object as a function of the wavelength of
light.

4.4.5 Chromatic Difference of Magnification (CDM)

Chromatic dispersion has another effect on the retinal image
which becomes apparent when imaging extended objects. As
illustrated schematically in Fig. 4.8(a) , the size of the retinal image
will depend on wavelength. It is important to keep in mind that the
images of different wavelengths will also be out of focus and that
image size for defocused objects is strongly affected by axial pupil
location. For example, if the pupil is well in front of the nodal point,
as in the Chromatic Eye, then the rays admitted by the pupil from an
eccentric point of the object are those which strike the refracting
surface of the eye with greater angle of incidence. By Snell's law,
these rays will be subjected to stronger chromatic dispersion and so
different wavelengths will end up at different places on the retina,
yielding a chromatic difference of magnification. On the other
hand, if the pupil were to be centered on the nodal point of the eye,
then the admitted rays would strike the surface with normal
incidence and thus be spared the effects of chromatic dispersion.
This condition holds strictly for a spherical refracting surface, but is
approximately true also in the paraxial region of an elliptical
surface.


Figure 4.8. Chromatic difference of magnification (CDM) of the reduced
eye. Ray diagram at left shows that axial distance z from entrance pupil to
nodal point determines angle of incidence of rays admitted to the retina,
hence the amount of chromatic dispersion. The result is differential image
magnification which is directly proportional to the chromatic difference
of refraction of the eye. Solid rays show path of long wavelengths, dashed
rays show short wavelengths. Graph at right shows experimental
measurement of CDM for red (650 nm) and green (556 nm) light as a
function of the axial position of an artificial pupil (open symbols) and the
natural pupil (filled symbol) reported by Zhang et al. (1993). Solid line is
the least-squares linear regression.

The above ideas can be summarized with a simple formula.
Within the paraxial region it can be shown104 that to first
approximation the chromatic difference of magnification (in
percent) for two wavelengths of light is proportional to the axial
distance z (in cm) from the entrance pupil to nodal point:

CDM = zK (4.15)

The constant of proportionality in this equation is the chromatic
difference of refractive error K (in diopters) for the given pair of
wavelengths. Evaluating this formula between 400-700 nm (K = 2
D) for the Chromatic Eye (z = 0.4 cm) we find CDM =0.8%. Such a
tiny difference in magnification is generally thought to be
insignificant for vision, which would explain why successful
measurements on human eyes have been difficult to obtain.
Experimental estimates of CDM as a function of the z-axis location
of an artificial pupil reported by Zhang et al. 103 are shown in Fig.
4.8(b) . These results confirm the linear relation predicted by eqn.
4.15 and the slope of the least-squares regression line (0.37%/cm)
matches the chromatic difference of refractive error (0.40D) of the
eye for the test wavelengths used in these experiments. However,
the value of CDM determined for the natural eye without an
artificial pupil was less than half the value expected from the
Chromatic Eye model, which suggests that a better match between
experiment and theory would be obtained if the pupil were closer to
the nodal point. A need for a slight axial adjustment of the pupil
(0.6 mm) is also indicated by an analysis of oblique astigmatism of
the model, as will be shown further on.
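Eqn. 4.15 is trivial to evaluate in code. In the sketch below (ours), z is expressed in meters and the chromatic difference of refraction in diopters, so the product is the fractional difference in image size:

```python
# Chromatic difference of magnification, eqn. 4.15: CDM = z * K, with z
# the entrance-pupil-to-nodal-point distance (m) and K the chromatic
# difference of refraction (D); the product is a dimensionless fraction.
def cdm_percent(z_m, dK):
    return 100.0 * z_m * dK

# Chromatic Eye: pupil 4 mm in front of the nodal point, 2 D of LCA (400-700 nm)
print(f"CDM (400-700 nm) = {cdm_percent(0.004, 2.0):.1f} %")

# Predicted slope of the Fig. 4.8 experiment, with dK = 0.40 D between
# the red and green test wavelengths: percent per cm of pupil shift
print(f"predicted slope = {cdm_percent(0.01, 0.40):.2f} %/cm")
```

The predicted slope of 0.40 %/cm is close to the 0.37 %/cm measured by Zhang et al. in Fig. 4.8.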

The major point to note from this discussion is that the
magnitude of CDM depends on the axial location of the pupil.
Consequently, chromatic differences of magnification will become
an increasingly important factor when an observer views through an
artificial pupil or any optical or clinical instrument which moves the
entrance pupil outside the eye.13, 104 Dramatic effects on depth
perception can also result under such circumstances if one eye is
viewing short-wavelength light while the other eye is viewing long-
wavelength light. Depending on the nature of the visual target, the
visual system may interpret the different sized retinal images as
indicating a difference in relative depth of colored objects, resulting
in a distorted perception of the 3-dimensional world. 103

4.4.6 Transverse Chromatic Aberration (TCA)

The foregoing discussion of magnification is developed around
the problem of imaging an extended, polychromatic source. If
attention is directed to a single point of the extended object, then
chromatic dispersion causes the image to be spread out across the
retina as a colored fringe as illustrated in Fig. 4.9(a) . Only one
color can be perfectly in focus at a time and for that emmetropic
wavelength we can locate the retinal image by finding the
intersection of the nodal ray with the retinal sphere. To locate the
center of the blurry image for any other color in the fringe,
however, we must trace the path of the chief ray passing through
the center of the pupil. The angular spread τ of the spectrum of
chief rays generated by a single, white point of light is a measure of
transverse chromatic aberration. In the visual optics literature, this
spread is sometimes called chromatic difference of position.76


Figure 4.9 Transverse chromatic aberration of the reduced eye in image
space for peripheral vision through the natural pupil (a) and for foveal
vision through a displaced artificial pupil (b). In both cases the angular
difference τ in locations of the images at different wavelengths, subtended
at the nodal point, is proportional to the chromatic difference of
refraction. (Redrawn from Thibos, Bradley & Zhang, 1991)

Just as for CDM, pupil location is a critical factor which
determines the magnitude of TCA. For any given wavelength, only
one chief ray can avoid the effects of chromatic dispersion and that
is the one which passes through the nodal point. This chief nodal
ray plays a central role in the study of chromatic aberration because
it defines the axis of zero TCA. For this reason it is known as the
achromatic axis of the eye.75 By definition, this reference axis joins
the nodal point to the center of the pupil, which means it doesn't
necessarily intersect the retina at the fovea. However, as mentioned
earlier in connection with Fig. 4.3 , the fovea of the average person
is very nearly centered on the achromatic axis and for this reason
the achromatic axis is assumed to coincide with the visual axis in the
Indiana Eye.

For object points off the achromatic axis, a simple approximate
formula76 specifies the amount of transverse chromatic aberration τ
(in radians) as a function of the axial location of the entrance pupil
z (in meters, relative to nodal point), the magnitude of LCA Kλ (in
diopters), and the field angle of eccentricity ω:

τ = Kλ z sin ω        (4.16)

For example, assuming Kλ = 2 D and z = 4 mm, a point source 30° in
the peripheral visual field will produce a colored fringe on the retina
which subtends 0.2° of visual angle, which is 8 times the diameter of
cone photoreceptors in that part of the retina, 4 times cone spacing,
and about equal to the spacing between neighboring retinal ganglion
cells.

Although central vision may be spared any large effects of
transverse chromatic aberration in the average naked eye, the fovea
becomes highly vulnerable when viewing through an optical
instrument or an artificial pupil located in front of the eye. This
situation is depicted in Fig. 4.9(b) . If the limiting aperture is
displaced by amount h from the visual axis, then the magnitude of
transverse chromatic aberration present at the fovea is
approximately proportional to h and to the magnitude of LCA75

τ = hKλ        (4.17)

Comparing equations 4.16, 4.17 indicates that each millimeter of
displacement of an external entrance aperture is equivalent to 15° of
eccentric field angle for the naked eye, in the sense that they both
generate the same amount of TCA.
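Both TCA formulas, and the equivalence just stated, can be checked with a short sketch (ours, not from the chapter), using Kλ = 2 D and z = 4 mm as in the example above:

```python
# TCA of the reduced eye: eqn. 4.16 for an oblique field angle omega and
# eqn. 4.17 for an aperture displaced h from the visual axis. K is in
# diopters, z and h in meters, so tau comes out in radians.
import math

K, z = 2.0, 0.004   # chromatic difference of refraction (D); pupil-to-nodal distance (m)

def tca_field(omega_deg):
    """TCA (deg) at eccentricity omega, eqn. 4.16."""
    return math.degrees(K * z * math.sin(math.radians(omega_deg)))

def tca_displaced(h_mm):
    """TCA (deg) for an aperture displaced h mm, eqn. 4.17."""
    return math.degrees(K * h_mm / 1000.0)

print(f"30 deg field angle: {tca_field(30.0):.2f} deg of TCA")
print(f"1 mm aperture displacement: {tca_displaced(1.0):.2f} deg of TCA")

# Field angle producing the same TCA as 1 mm of displacement: sin(omega) = h/z
omega_eq = math.degrees(math.asin(0.001 / z))
print(f"1 mm displacement is equivalent to {omega_eq:.1f} deg of eccentricity")
```

The equivalence comes out at asin(1/4), about 14.5°, which rounds to the 15° quoted above.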

We conclude this discussion of chromatic aberration by noting that CDM in
the Chromatic Eye or Indiana Eye is equivalent to the rate of change of TCA
with field angle76

CDM = dτ/dω = Kλ z cos ω        (4.18)

For small field angles cos ω ≈ 1, which brings eqn. 4.18 into
agreement with eqn. 4.15.

4.4.7 Spherical Aberration

Figure 4.10a depicts spherical aberration of the Indiana Eye in
image space. To quantify the magnitude of spherical aberration, a
ray parallel to the optical axis, but displaced by the distance y, is
traced from left to right. In the absence of spherical aberration, the
ray would intersect the optical axis at point F in the paraxial focal
plane. An aberrated ray, on the other hand, will deviate from the
reference ray by the angle σ, and will cross the optical axis at
distance S instead of F from the vertex at O. We take the angular
error σ as a measure of the transverse spherical aberration (TSA)
and the vergence error (n′/S - n′/F) as a measure of longitudinal
spherical aberration (LSA). (Although most optics textbooks define TSA as the linear distance in the focal plane from the optical axis to the point of intersection of the marginal ray, the choice of an angular measure for TSA is favored in visual optics.) A proportional relationship between LSA and TSA is easily derived for small angles a, b for which the approximation x ≈ tan(x) applies (and neglecting the small distance z = OA):

TSAimage = σ = b − a ≅ y (1/S − 1/F) = y·LSA/n′     (4.19)

An equally valid formulation depicts spherical aberration in object space, as shown in Fig. 4.10b. Again neglecting the small
distance OA, the vergence error of the exiting ray is 1/T, which we
take as a measure of LSA in object space. From the geometry of the
diagram we conclude that the transverse aberration s in object
space is

TSAobject = s ≅ y/T = y·LSA     (4.20)

Notice that if TSA is specified in radians, and y is in meters, then LSA will have units of diopters. Equations 4.19 and 4.20 indicate that TSA is
larger in object space than in image space by the factor n′, the index
of refraction of the ocular medium. Although this is a valid
approximation in the paraxial region, the ratio grows significantly
larger than n′ as ray height increases.81


Figure 4.10. Spherical aberration of the reduced eye depicted in image space (a) and in object space (b). Transverse spherical aberration (TSA) is
defined as the angular error between the refracted ray (marked with
arrow head) and the un-aberrated reference ray. Longitudinal spherical
aberration (LSA) is defined by the vergence error of the refracted ray. Q is
the point of intersection of the ray with the refracting surface and has
coordinates (0,y,z) with respect to the reference frame shown. N is the
paraxial nodal point, which lies at the distance r from the vertex point O.
The z-axis is the axis of symmetry of the refracting surface and the x-axis is
orthogonal to the plane of the diagram at point O. (Redrawn from Thibos,
Ye, Zhang & Bradley, 1997)

Spherical aberration of the aspheric refracting surface specified by eqn. 4.2 has been determined by ray tracing81 and the results, specified in object space, are shown by symbols in Fig. 4.11.
According to fifth-order optical theory, longitudinal spherical
aberration should vary with ray height as

LSA = Ay² + By⁴     (4.21)

where coefficient A is the Seidel value for primary LSA, coefficient B is the value for secondary LSA, and y is ray height. To determine the
values of the unknown coefficients A and B, polynomials of the fifth
degree were fit to the results of ray tracing and the resulting
functions are shown by the solid curves in Fig. 4.11. The numerical values of the two Seidel coefficients determined by regression are summarized for image space in Fig. 4.12(a) and for object space in Fig. 4.12(b). In both cases B is nearly zero for 0 ≤
p ≤ 0.7, indicating that third-order theory provides an adequate
account of spherical aberration of the Indiana Eye over this range of
shape factors. In this domain of third-order theory, the primary
Seidel coefficient A is well described by the linear equations

A = 0.950p − 0.418 Diopters/mm²  (image space)     (4.22a)
A = 0.937p − 0.419 Diopters/mm²  (object space)     (4.22b)

For example, if p = 0.6 and pupil diameter is 6 mm, then by eqns. 4.22 we calculate that primary LSA in object space is 1.3 Diopters at
the pupil margin. Since the two formulas for Seidel coefficient A are
nearly identical, we infer that the difference between LSA in object
and image spaces noted above may be attributed to secondary LSA.
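The worked example can be reproduced numerically from eqns. 4.21, 4.22b, and 4.20 (a sketch in Python; the secondary coefficient B is taken as zero, as justified above for p ≤ 0.7):

```python
import math

p = 0.6          # shape factor of the Indiana Eye
y = 3.0          # marginal ray height in mm (6 mm pupil diameter)

A = 0.937 * p - 0.419          # eqn. 4.22b, object space, diopters/mm^2
LSA = A * y ** 2               # eqn. 4.21 with B = 0: ~1.3 D at the pupil margin

# Corresponding transverse error from eqn. 4.20 (y converted to meters)
TSA = (y / 1000.0) * LSA                      # radians
TSA_arcmin = math.degrees(TSA) * 60.0         # ~13 arcmin
```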

Figure 4.11. Object-space variation of transverse (a) and longitudinal (b) spherical aberration of the Indiana Eye as a function of ray height. Family
of curves is parameterized by p, the shape parameter of the refracting
surface, as indicated by the small number at right of each curve. The flat
curve indicating zero spherical aberration is for p = 0.437. Symbols show
results of ray tracing, continuous curves show 5th degree polynomials fit
to symbols by method of least squares. (Redrawn from Thibos, Ye, Zhang &
Bradley, 1997)


Figure 4.12. Variation of Seidel polynomial coefficients for 5th-order theoretical analysis of spherical aberration of the Indiana Eye in image space (a) and in object space (b). Primary spherical aberration (A) shown by filled symbols is linearly related to p-value over the range 0 < p < 0.7. Secondary spherical aberration (B) shown by open symbols is negligible for p < 0.7. Units of Seidel coefficient A are Diopters/mm², units for B are Diopters/mm⁴. (Redrawn from Thibos, Ye, Zhang & Bradley, 1997)

Spherical aberration of human eyes is often described in the visual optics literature as being highly variable between individuals.
However, some of this variability may be attributed to
contamination of experimental data by coma and other odd-
symmetric aberrations. A recent re-evaluation of the classical
literature on spherical aberration of eyes indicates that the Indiana
Eye with a shape factor in the neighborhood of p=0.6 is
representative of the ocular spherical aberration typically reported
in the literature.81

4.4.8 Oblique Astigmatism

The previous sections have shown that a co-axial, aspheric, reduced-eye model is able to account for several forms of the eye's chromatic and spherical
aberrations. In this section we examine the model's ability to also model oblique
astigmatism, which affects the retinal image of objects located in the peripheral
visual field.

Oblique astigmatism of an aspheric refracting surface may be computed by using the appropriate principal radius of curvature (i.e. sagittal or tangential) in
Coddington's well-known equations.94 For a given field angle, the steps for
computing oblique astigmatism of the Indiana Eye are (i) find the intersection of
the incident chief ray with the refracting surface; (ii) determine the tangential and
sagittal radii of curvature at the intersection point; (iii) determine cosine values of
the angles of incidence and refraction; (iv) apply the modified Coddington’s equations to determine tangential and sagittal focal lengths for distant objects;
(v) determine oblique astigmatism in diopters in both image and object spaces.
Detailed equations which implement these five steps are provided elsewhere.89
Figure 4.13(a) shows the computed tangential and sagittal focal powers for the
Chromatic Eye (i.e. p=0.437, zero spherical aberration). For object points on the
optical axis, focal power of the model eye is 60 D. With the increase of field
angle, focal powers in both tangential and sagittal planes increase. The difference
between tangential and sagittal focal powers is Sturm's interval in diopters,
which is shown in Fig. 4.13(b) for the Indiana Eye (p=0.6).
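The core of steps (iii)-(v) can be sketched for the simplified, hypothetical case of a spherical surface (the model itself uses the local tangential and sagittal curvatures of the aspheric surface, and the angle of incidence of the chief ray must first be found from steps (i)-(ii); here it is simply treated as a free parameter):

```python
import math

n, n_p = 1.0, 4.0 / 3.0        # refractive indices: air, ocular medium
r = (n_p - n) / 60.0           # spherical radius (m) giving 60 D of axial power

def oblique_powers(inc_deg):
    """Tangential and sagittal focal powers (diopters) for a chief ray
    striking a spherical surface at angle of incidence inc_deg, object at
    infinity (Coddington's equations)."""
    i = math.radians(inc_deg)
    i_p = math.asin(n * math.sin(i) / n_p)             # Snell's law
    k = (n_p * math.cos(i_p) - n * math.cos(i)) / r    # oblique surface power
    P_sag = k                                          # sagittal: n'/s' = k
    P_tan = k / math.cos(i_p) ** 2                     # tangential: n'/t' = k/cos^2(i')
    return P_tan, P_sag

# On axis both powers equal the 60 D paraxial value; obliquely they both
# climb and separate, the difference being Sturm's interval.
Pt0, Ps0 = oblique_powers(0.0)
Pt30, Ps30 = oblique_powers(30.0)
sturm = Pt30 - Ps30
```

This reproduces the qualitative behavior of Fig. 4.13: both powers increase with obliquity and the tangential power exceeds the sagittal.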


Figure 4.13. Oblique astigmatism of two members of the family of reduced schematic eyes. (a) Focal powers of the Chromatic Eye in the tangential
and sagittal planes are compared across the visual field. (b) Sturm's
dioptric interval between tangential and sagittal planes in the Indiana Eye
is compared with mean values of astigmatism measured for a population of
human eyes by Rempt et al., 1971. (Redrawn from Wang and Thibos, 1997)

Several studies of the refractive error of human eyes have found that large
amounts of oblique astigmatism are normally present in the peripheral visual
field,29, 52, 63 although the amount and type of astigmatism varies considerably
between individuals.50, 63 Early attempts to simultaneously model oblique
astigmatism and spherical aberration with the same model were unsuccessful.39, 41, 45, 88 The same is true for the general reduced-eye model in Fig. 4.2(b).
Regardless of whether the model is chosen to have large amounts of spherical
aberration (p=1) or zero spherical aberration (p=0.437), the model over-estimates
the oblique astigmatism measured in the population of human eyes.89
However, the model is easily brought into agreement with the empirical data
simply by adjusting the axial location of the pupil slightly. This maneuver works
because axial location of the entrance pupil controls the obliquity of the chief rays
from off-axis object points and thus the degree of astigmatism. The exact
location of the pupil required to fit the astigmatism data depends slightly on the
p-value of the refracting surface. For the Indiana Eye (p=0.6), the pupil must be
shifted 0.8 mm closer to the nodal point to provide the optimum fit to the human
population data as shown in Fig. 4.13(b). With this justification, the Indiana Eye
has a pupil located 2.75 mm from the apex as shown in Fig. 4.2(c). In this
configuration, the Indiana Eye accounts simultaneously for the chromatic aberration, spherical aberration, and oblique astigmatism of the typical human
eye. Although we have not specifically attempted to reproduce the various
types of oblique astigmatism described in the literature (e.g. myopic, hyperopic,
mixed),25, 29 we anticipate that these various forms could be modeled by
modifying the retinal contour of the Indiana Eye to have an aspheric shape.

4.4.9 Non-linear Projection of Visual Space

The Indiana Eye is a wide-angle model which may be used to compute the
non-linear projection of object space onto the retina. Following Drasdo and
Fowler 24 we assumed a spherical retina with 11 mm radius of curvature and
used ray tracing to compute the intersection of the chief ray with the retinal
sphere as a function of field angle. The distance from this point of intersection to
the optic axis along the retinal surface of the Indiana Eye model is compared in
Fig. 4.14(a) with the widely quoted results of Drasdo and Fowler for their 3-
surface aspheric model. This comparison indicates that the Indiana Eye model
has less compression of the peripheral field than does the Drasdo & Fowler
model. Both models have less compression than that of the single human eye
investigated by Frisén and Schöldström.30 As far as we are aware, this latter
study using photocoagulation lesions to mark the retina at known perimetric
angles prior to surgical enucleation is the only direct measurement of the visual
field projection in humans. Although the Indiana Eye does not fit these limited
experimental data as well in the far periphery, the projection of visual space in
the reduced-eye model depends upon the shape factor of the refracting surface,
axial location of the pupil, and radius of curvature of the retinal sphere. Various
combinations of these parameters can produce a range of mapping functions
which includes all those shown in Fig. 4.14(a). Thus, in applications for which the
wide-angle projection of the visual field onto the retina is of importance (e.g.
clinical imaging of peripheral retina), the general reduced eye represents a
flexible tool for examining the impact of eye parameters on the non-linear
projection of visual space onto the retina. However, currently there is
insufficient evidence to warrant adjusting the Indiana Eye to more accurately
represent the projection of visual space in the average eye.


Figure 4.14. Retinal projection of the visual field in the Indiana Eye
model. (a) Distance from an eccentric image to the visual axis, measured
along the curved retinal surface, as a function of field angle of the source.
(b) Retinal magnification factor for Indiana Eye across the visual field,
obtained by differentiating the curve in (a). Also shown are predictions of
a wide-angle model eye from Drasdo and Fowler (1974) and experimental
data from a human eye reported by Frisén and Schöldström.

In section 4.4.2 we mentioned briefly the retinal magnification factor, which is defined as the linear distance on the retina which corresponds to 1 degree in
visual space. Although the paraxial value for the general reduced-eye model is
0.29 mm/deg, this value falls inversely with field angle because of the
compressive mapping of the peripheral field onto the retina. These changes are
quantified by differentiating the curves in Fig. 4.14(a) to produce the curves in
Fig. 4.14(b). The vertical difference between the two curves in Fig. 4.14(b)
indicates that the Indiana Eye model has a slightly longer axial distance from
nodal point to retina than does the model of Drasdo & Fowler. Qualitatively,
both models indicate a compression of the peripheral field, although they differ
slightly in their quantitative details.
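The paraxial value of 0.29 mm/deg follows directly from the geometry of the reduced eye, since one degree at the nodal point subtends an arc of (posterior nodal distance) × (π/180) on the retina. A quick check (Python, assuming the standard 60 D reduced eye with index 4/3):

```python
import math

n_p = 4.0 / 3.0
P = 60.0                     # ocular power, diopters
f_image = n_p / P            # second focal length = axial length, meters
r = (n_p - 1.0) / P          # surface radius = vertex-to-nodal-point distance
pnd = f_image - r            # posterior nodal distance: ~16.67 mm

rmf = pnd * 1000.0 * math.pi / 180.0   # retinal magnification factor, mm/deg
# rmf ~ 0.29 mm/deg, matching the paraxial value quoted in the text
```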

4.5 Calculation of Image Quality for the Indiana Eye


Optical models of the eye for which wave aberration functions
can be easily specified are especially useful for the evaluation of
those visual instruments which are coupled coherently with the eye
(e.g. microscopes, telescopes).14, 57 For this class of instruments
the eye is an integral part of the wavefront-modifying optical
system. Therefore, a specification of the eye's wave aberration
function is necessary for a complete analysis of the interaction between eye and instrument and their combined performance (see chapter 5 of this book for further analysis of this issue).

Because of its simplicity of form, the Indiana Eye is easily analyzed to determine its wave aberration function, from which the monochromatic optical
transfer function (OTF) and related measures of image quality may be derived.
This approach was described for the eye by van Meeteren,82 who formulated a
wave aberration function using Seidel coefficients taken from the visual optics
literature and calibrated in dioptric units. Accordingly, van Meeteren's method
may be applied to the Indiana Eye using theoretical values for the various
aberrations determined above in section 4.4. Below we deal with the additional
requirement for computing the polychromatic OTF of the model across the
visual field. Aside from the refractive errors that are easily corrected by
spectacle lenses (i.e. defocus, astigmatism), chromatic aberration is an important
aberration in the human eye which has an effect on image quality comparable to
the monochromatic aberrations of the eye.5, 74, 82, 83 In the following section we
isolate these effects of chromatic aberration by utilizing the Chromatic Eye
model, which by design is free of monochromatic aberrations paraxially.

Analyzing the retinal image of a large, extended, polychromatic grating is not a trivial problem because, although the defocusing effects of LCA on the OTF are
confined to an attenuation of contrast, the eye's CDM causes image spatial
frequency to vary with wavelength. To avoid this non-linear complication, we
concentrate our attention on a small patch of grating as shown in Fig. 4.15(a) so
that overall magnification differences associated with CDM can be treated as
local differences of image position attributed to TCA. Assuming the effect of
TCA is to displace the retinal image without affecting spatial frequency, the end
result is a spatial phase shift that varies with wavelength. Maximum phase shift
will occur when the grating is oriented perpendicular to the direction of
displacement of the image and zero phase shift occurs when the grating is
parallel to the direction of displacement.


Figure 4.15. Effect of chromatic aberration on images of gratings. (a) Transverse chromatic
aberration produces wavelength-dependent phase shifts in the components of a
polychromatic grating. (b) Longitudinal chromatic aberration is a focus error which induces
a wavefront aberration error.

In general, chromatic phase shifts due to TCA will affect the hue, saturation,
and brightness of the target. Consider, for example, the retinal image of a
grating target with two chromatic components, one with red and black bars, and
the other with green and black bars. Assuming the red and green bars have the
same luminance, the appearance of the target will depend dramatically upon
their relative spatial phase. When the components have the same phase, the
target will look like yellow and black bars, i.e. a luminance pattern with no
chromatic variation. When the components have the opposite phase, however,
the target will look like red and green bars, i.e. a chromatic pattern with no
luminance variation. It follows, therefore, that the eye's TCA has the potential to
introduce luminance artifacts into chromatic targets, and chromatic artifacts into
luminance targets, both of which can have important visual consequences.9, 12, 98, 100, 102

Here we are concerned solely with the effect of phase shift on luminance
contrast so that we may compute the luminance OTF for polychromatic objects.
The situation is complicated somewhat by the fact that chromatic errors of focus
will differentially affect the image contrast of each chromatic component of
polychromatic targets. Nevertheless, we may calculate the polychromatic OTF
for arbitrary field angle ω by taking into account the separate effects of LCA on
contrast, and TCA on phase, as described above for the Chromatic Eye model.

4.5.1 Wave Aberration Function

The geometry for determining the wave aberration function W(x,y,λ) for the reduced eye is shown in Fig. 4.15(b). Our goal is to find the optical path length W = n′·AC in terms of the longitudinal focal shift ∆f′ = SG and the half-angle θ subtended by the exit pupil at the image screen. For the case of simple defocus, Hopkins34 showed that if one accepts the approximation that the distance AD = AG, then for any given wavelength

W = n′ ∆f′ (1 − cos θ)     (4.23)

The axial location z of the pupil affects angle θ because a given pupil will subtend
a larger angle if it shifts towards the retina. To account for this effect it is
convenient to scale the pupil radius a by the factor fD′ /( f D′ − z) to create an
effective pupil radius ap in the principal plane tangent to the refracting surface at
its vertex. Then from the geometry of Fig. 4.15(b)
cos θ = [1 + tan²θ]^(−1/2) = [1 + (ap/fD′)²]^(−1/2)     (4.24)

Combining eqns. 4.23 and 4.24 and applying the power series approximation (1+x)^m ≈ 1 + mx yields a simple expression for the
wave aberration function

W(x, y, λ) = [n′ ∆f′ ap² / (2(fD′)²)] (x² + y²)     (4.25)
2

where (x, y) are coordinates in the pupil plane, normalized to unity at the border of the pupil, and λ is the wavelength of light.

The wave aberration function given in eqn. 4.25 may be simplified further if it
is expressed in dioptric terms. Recall from section 4.4.3 that longitudinal
chromatic aberration is a wavelength-dependent focusing error E which is given
by the approximate formula

E(λ) = n′ ∆f′ / (fD′)²     (4.26)

This is the vergence error produced by the combined effects of a change in refracting power of the surface plus a change in the
dioptric distance to the retina, both of which are caused by a change
in refractive index following a change of wavelength. Substituting
eqn. 4.26 into 4.25 gives the wave aberration function with dioptric
coefficients

W(x, y, λ) = E(λ) (ap²/2) (x² + y²)     (4.27)
2

Dimensionally, W will be in meters if E is in diopters and the
projected pupil radius is in meters. Note that, except for a
difference in notation, eqn. 4.27 is the same as eqn. 2.4 in chapter 2
of this book.
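In dioptric form the wave aberration is trivial to compute. The sketch below (Python) evaluates eqn. 4.27 for a hypothetical 1 D vergence error over a 2.5 mm pupil:

```python
def wave_aberration(x, y, E, a_p):
    """Defocus wave aberration of eqn. 4.27.
    E: vergence (focus) error in diopters; a_p: effective pupil radius in
    meters; (x, y): pupil coordinates normalized to 1 at the pupil border.
    Returns W in meters."""
    return E * a_p ** 2 / 2.0 * (x ** 2 + y ** 2)

# 1 D of defocus over a 2.5 mm pupil gives ~0.78 micrometers of wavefront
# error at the pupil margin (about 1.4 wavelengths at 555 nm).
W_edge = wave_aberration(1.0, 0.0, E=1.0, a_p=0.00125)
```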

4.5.2 Polychromatic OTFs

The results contained in eqns. 4.25 and 4.27 are sufficient to formulate a pupil function for the purpose of computing a series of
monochromatic OTFs by the usual autocorrelation method in
preparation for integrating over wavelength to derive the
polychromatic OTF. However, for many purposes the closed form
solution for square pupils given by Hopkins35 is adequate and easier
to implement. By this method the modulation transfer function
(MTF) is given by

M(s) = B sinc(AB)     (4.28)

where s = ν/νd is the spatial frequency ν normalized to the diffraction cutoff νd, B = 1 − s is a diffraction factor, A = 8sW/λ is a defocus factor, and sinc(x) = sin(πx)/(πx). Numerical comparison of
these two methods indicates that they agree closely provided that an
equivalence of square and circular pupils is based on pupil area
rather than on pupil diameter. One advantage of the general
autocorrelation method is that it permits the inclusion of pupil
apodization, which can be used to model the directional selectivity
of retinal cone photoreceptors known as the Stiles-Crawford
Effect.10, 48
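Eqn. 4.28 is straightforward to implement. A minimal sketch (Python), treating W as the defocus wavefront error at the pupil edge in the same length units as λ:

```python
import math

def mtf_defocus_square(s, W, wavelength):
    """Hopkins' defocus MTF for a square pupil, eqn. 4.28:
    M(s) = B*sinc(A*B), where s is spatial frequency normalized to the
    diffraction cutoff, B = 1 - s, and A = 8*s*W/wavelength."""
    if s >= 1.0:
        return 0.0                     # beyond the diffraction cutoff
    B = 1.0 - s
    A = 8.0 * s * W / wavelength
    x = A * B
    sinc = math.sin(math.pi * x) / (math.pi * x) if x else 1.0
    return B * sinc

# Without defocus the MTF reduces to the diffraction factor B = 1 - s.
m_focused = mtf_defocus_square(0.5, 0.0, 555e-9)      # = 0.5
# Half a micrometer of defocus attenuates contrast strongly; a negative
# value indicates phase reversal (spurious resolution).
m_blurred = mtf_defocus_square(0.5, 0.5e-6, 555e-9)
```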

In the case of off-axis objects or displaced pupils, the phase shift induced by TCA must be taken into account. This phase shift is
directly proportional to grating spatial frequency ν, to the angular
displacement τ, and to sin(α) where α is the orientation of the
grating relative to the visual meridian (i.e. α = 0 for horizontal
gratings imaged on the horizontal meridian)71

φ(λ) = ν τ(λ) sin α     (4.29)

To compute a polychromatic OTF we treat the image of a white grating as the sum of a multitude of monochromatic gratings, each of the same spatial frequency but with a spatial phase that varies with wavelength according to eqn. 4.29 and with contrast attenuation that varies with wavelength according to eqn. 4.28 (or equivalent). Strictly, since magnification varies with wavelength, so
must image frequency. However the change in frequency is less
than 1% over the spectral range of 380-780 nm and therefore is
assumed to be constant locally. If L(x,λ) is the luminance profile (in
a direction x orthogonal to the bars of the grating) of a
monochromatic component of the image of a polychromatic grating
of unit contrast, then the profile of each spectral component of the
image is given by

L(x, λ) = S(λ)[1 + M(λ, ν) cos 2π(νx − φ(λ, ν))]     (4.30)

where the weighting factor S(λ) is the mean luminance of the object
at wavelength λ, which takes into account the photopic spectral
sensitivity curve of the eye and the spectral content of the object,
and M(λ,ν) is the monochromatic modulation transfer function. The
luminance distribution of a polychromatic grating is found by
integrating L(x,λ) over wavelength to yield71

L(x) = L0 [1 + √(A² + B²) cos(2πνx − Φ)]     (4.31)

where the functions A, B, and Φ are given by

A(ν) = (1/L0) ∫ S(λ) M(λ, ν) cos(2πφ(λ, ν)) dλ ,

B(ν) = (1/L0) ∫ S(λ) M(λ, ν) sin(2πφ(λ, ν)) dλ ,     (4.32)

Φ(ν) = arctan(B/A)

Inspection of eqn. 4.31 indicates that the polychromatic MTF of the system is given by √(A² + B²) and the phase transfer function is given
by Φ.
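Eqns. 4.31 and 4.32 translate directly into a discrete summation over sampled wavelengths. The sketch below (Python; the sample lists are hypothetical) also reproduces the red/green cancellation described earlier: two equal-luminance components a half-cycle out of phase yield zero luminance contrast.

```python
import math

def poly_mtf(S, M, phi):
    """Polychromatic MTF and phase transfer (eqn. 4.32) at one spatial
    frequency. S: luminance weights; M: monochromatic MTFs; phi: TCA phase
    shifts in cycles; one entry per sampled wavelength."""
    L0 = sum(S)
    A = sum(s * m * math.cos(2 * math.pi * p) for s, m, p in zip(S, M, phi)) / L0
    B = sum(s * m * math.sin(2 * math.pi * p) for s, m, p in zip(S, M, phi)) / L0
    return math.hypot(A, B), math.atan2(B, A)

# Two equal components in phase: full luminance contrast is preserved.
mtf_in, _ = poly_mtf([1, 1], [1, 1], [0.0, 0.0])
# Two equal components a half-cycle apart: luminance contrast cancels.
mtf_out, _ = poly_mtf([1, 1], [1, 1], [0.0, 0.5])
```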

4.5.3 Predicting Visual Performance

An example of a white-light MTF computed for axial objects (TCA=0) is shown by the dashed curve in Fig. 4.16(a). The exact
spectral composition of white light has little influence on the result
and we chose to base the computation on the P4 phosphor
commonly found in cathode ray tubes. For comparison we also
show the monochromatic MTFs of the Chromatic Eye at the
emmetropic wavelength (589 nm) for a distant object and for objects which are placed at 4 m or 2 m so as to defocus the images
by 0.25 D and 0.5 D, respectively. These results, which were
computed for a 2.5 mm pupil, show that the effect of LCA on the
polychromatic MTF is very nearly the same as the effect of 0.25 D
defocus on monochromatic light. In visual terms, this is a relatively
minor amount of defocus which is comparable in magnitude to the
depth of field of the human eye15 or to the residual refractive error
expected after routine spectacle correction. Given that optical
attenuation of image contrast described by an MTF should translate
directly into a loss of contrast sensitivity (because the contrast of a
test target would have to be boosted proportionally to compensate
for optical losses), these computational results suggest that the
visual loss of contrast sensitivity due to LCA is slight. These results
agree with experimental data for central vision gathered by
Campbell and Gubisch17 who reported less than 0.2 log unit
difference in contrast sensitivity for white and monochromatic
lights over a 10-40 cyc/deg range of spatial frequencies.

Figure 4.16. White-light MTFs of the Chromatic Eye model. (a) The polychromatic MTF (2.5 mm pupil diameter) for TCA=0 (dashed curve, P4 phosphor) is compared with monochromatic MTFs of the
(dashed curve, P4 phosphor) is compared with monochromatic MTFs of the
same model for the diffraction-limited case (0D) and two levels of defocus
(0.25D, 0.5D). Dotted curve represents neural contrast threshold for
detecting interference fringes reported by Campbell and Green (1965). (b)
Off-axis MTFs show the strong effect of TCA on image contrast. Numbers
next to curves indicate field angle. Peripheral neural threshold data are
from Still (1989).

Optical attenuation of image contrast due to chromatic defocus should also affect visual acuity. In order to estimate acuity loss, we
need to know how low image contrast can go before the target is no
longer visible. This requires knowledge of the neural threshold
function, which is available in the literature16 and has been replotted as the dotted curve (labeled "foveal neural threshold") in
Fig. 4.16(a). Through graphical analysis we can determine the
spatial frequency for which the image contrast of a target with 100%
object contrast will fall below threshold. This endpoint is given by
the intersection of the optical MTF and the retinal threshold curve.
For the white-light MTF shown the intersection is at 50 cyc/deg and
for the monochromatic MTF it is 55 cyc/deg. These predicted acuity
values agree almost exactly with the experimental values (52 vs. 55
cyc/deg for subject R.W.G.) reported by Campbell and Gubisch.17
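The graphical procedure can be automated: scan upward in frequency until the optical MTF first falls below the neural threshold. A sketch (Python; the two curves in the example are hypothetical stand-ins, not the published data):

```python
def acuity_cutoff(mtf, threshold, f_max=60.0, df=0.1):
    """Lowest spatial frequency (cyc/deg) at which the image contrast of a
    100%-contrast target falls below the neural contrast threshold."""
    f = df
    while f < f_max:
        if mtf(f) < threshold(f):
            return f
        f += df
    return f_max

# Toy example: a linearly falling MTF against a flat 20% threshold
# crosses near 48 cyc/deg.
cutoff = acuity_cutoff(lambda f: 1.0 - f / 60.0, lambda f: 0.2)
```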

The combined effects of LCA and TCA on white-light MTFs which arise for off-axis objects or, equivalently, for displaced entrance pupils are shown by the solid curves in Fig. 4.16(b). For these
calculations the model was configured the same as described above
and the curve labeled 0° in Fig. 4.16(b) is the same as the dashed
curve in Fig. 4.16(a). Additional computations were performed for 15°, 30°, and 45° field angles, which correspond to mid-peripheral vision. However, the same MTFs could be interpreted as
pertaining to foveal vision through an artificial pupil displaced by 1
mm, 2 mm, or 3 mm from the visual axis. In either case the loss of
image contrast is much greater for the combined effects of TCA and
LCA than for LCA alone. For example, at 20 cyc/deg contrast is
attenuated by a factor of 50, which is sufficient to render invisible
an object of 100% contrast. Paradoxically, even greater attenuation
of contrast occurs when TCA acts alone, for example when the
defocusing effect of LCA is minimized by using a small diameter
pupil. This is because LCA has a protective effect which defeats the
mechanism by which TCA attenuates contrast. Phase shifting will
cancel the luminance contrast of a white-light grating only if the
wavelength components have comparable contrasts. However, LCA
defocuses the phase-shifted component gratings, thereby
attenuating their contrast, which reduces the cancellation of
luminance contrast normally associated with TCA.

To estimate the combined effects of TCA and LCA on visual acuity, we graphically determine the cutoff frequency where each
curve crosses the neural threshold function in Fig. 4.16(b). The
prediction is that acuity should fall by a factor of 2.5 (from 50 to 20
cyc/deg) when viewing a white test grating through an artificial
pupil displaced by 3 mm. About the same loss of acuity is predicted
for the P-31 phosphor used by Green31, who reported a threefold
loss of acuity when observers viewed through a displaced artificial
pupil. The comparison should not be pushed too far because accurate predictions depend upon the precise wavelength which is
in-focus on the retina and upon the degree of other off-axis
aberrations. Nevertheless, we may conclude that ocular chromatic
aberration accounts for most of the acuity loss suffered when
viewing through a displaced pupil.5 Similar predictions apply when
viewing polychromatic interference fringes or conventional targets
produced by a decentered visual stimulator seen in Maxwellian-view
(i.e. illumination source is imaged in the plane of the entrance pupil
of the eye).11, 72 We have experimentally verified these theoretical
predictions by showing that acuity drops threefold when the visual
stimulator is displaced 3 mm from the visual axis.11

Experimental data confirming the loss of image contrast in peripheral vision through the natural pupil are harder to obtain.
Theoretically, the magnitude of the aberration should increase as
the visual stimulus is moved further into the peripheral field.
However, the high neural threshold of peripheral vision,70 shown by
the dashed curve in Fig. 4.16(b), makes the effect less visible.
Nevertheless, using a contrast detection task which extends the
range of vision to well beyond the resolution limit, Cheney19 has
shown a threefold loss of detection acuity at 30° eccentricity in the
peripheral field as predicted by graphical analysis of Fig. 4.16(b).
Alternatively, ophthalmic instruments looking at the peripheral
retina will feel the full effect of the eye's TCA, unprotected by a high
neural threshold.

One of the interesting results to emerge from polychromatic MTF computations is that the model's emmetropic wavelength of 589 nm
is not necessarily the best wavelength to have in focus on the retina.
With a 2800°K white target, for example, the polychromatic MTF
improves for all spatial frequencies if 570 nm is placed in focus by
locating the object at 10 m to compensate for the 0.1 D refractive
error of the model at 570 nm. This optimum wavelength-in-focus
corresponds to the ridge of the surface plot of MTF vs. wavelength
in Fig. 4.17(a).

[Figure 4.17(a): surface plot of MTF vs. spatial frequency and wavelength-in-focus for the reduced eye; 3 mm pupil, 2800 K tungsten source. (b): foveal polychromatic MTFs (0° eccentricity, 2 mm pupil, P31 phosphor) for the diffraction-limited WaterEye and the WaterEye at optimal focus, compared with data of Campbell & Green (1965), Campbell & Gubisch (1967), and van Meeteren & Dunnewold (1983).]

Figure 4.17. Use of Chromatic Eye model to evaluate the defocusing effect
of changing wavelength (a) and as a standard for comparison against
experimental measurements of polychromatic MTFs of human eyes (b).

An important use of the chromatic eye model is to establish a standard against which one can compare the performance of real eyes. For example, Fig. 4.17(b) shows the
diffraction-limited, polychromatic MTF of the reduced eye for the P-
31 phosphor commonly used in CRTs compared to the performance
loss when longitudinal chromatic aberration is included in the
calculation. This Chromatic Eye model matches almost exactly the
more elaborate mathematical model of van Meeteren and
Dunnewold.83 These results provide an upper bound to psychophysical determinations of the polychromatic MTF by Campbell and Green16 and Campbell and Gubisch.17

4.6 Modeling the Neuro-sensor System of the Eye


Our goal in developing an optical model of the eye has been to
provide a useful tool for evaluating the performance of visual
instrumentation operating in two distinct modes: (1) when an
observer uses the instrument to aid viewing of the external world
and (2) when an ophthalmic instrument is used to image the
internal structures of a patient's eye for clinical purposes. In both
of these operational modes the imaging quality of the eye's optical
system has the potential to limit overall system performance.
However, the first mode is much more complicated because it
involves the neuro-sensory system of the observer's retina and
brain, which has the potential to introduce myriad biological and
psychological limitations on performance of the human + machine
system. Which factor, optical or neural, is the weakest link in the
complex chain of events leading to human visual performance on a
visual task depends strongly on the nature of the task. Thus an
engineering design goal of improving retinal image contrast will not
necessarily yield a corresponding improvement in visual
performance - it all depends on the task. Specifying the nature of
the task in a "real world" environment can be a challenge in itself
since visual behavior typically involves a complex combination of
chromatic, spatial, temporal, and stereoscopic aspects of perception
which lead ultimately to a motor response. A comprehensive
account of the limiting factors for such complex behavior is
currently beyond the grasp of visual science. However, recent
research has shown that two fundamental visual tasks of spatial
vision - contrast detection and resolution - are largely constrained
by the quality of the eye's optical system and the neural architecture
of the retina. Thus for these two basic aspects of visual
performance the limiting mechanisms appear to be in the eye itself,
not the brain, and therefore may be usefully included in a schematic
model eye.

Spatial contrast in a retinal image is defined by the difference in luminance of neighboring points, normalized by the mean luminance over the area of interest.59 Typical examples of the contrast detection task would be locating a bright object against a darker or lighter background, or discriminating a grating target from a uniform background of the same mean luminance.
Performance on such tasks is usually quantified by the minimum
contrast in the test object required by an observer to perform the
task. However, the determining factor is whether the amount of
contrast on the retina exceeds the neural threshold for detection.
Since contrast in the retinal image is determined by the optical
quality of the imaging system, the optical limitations for contrast
detection tasks are fully described by the system MTF. For example,
if some optical instrument were to double the retinal contrast of a
given target over that obtained by the unaided eye, then the
minimum contrast of the target required for detection would be
halved.
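This definition of contrast, and the reciprocal relation between retinal contrast gain and detection threshold, can be sketched as follows (Michelson contrast for a grating; the variable names and numerical values are ours, chosen for illustration):

```python
import numpy as np

def michelson_contrast(luminance):
    """Michelson contrast: the luminance difference normalized by the
    sum (twice the mean) of the extreme luminances."""
    lmax, lmin = float(np.max(luminance)), float(np.min(luminance))
    return (lmax - lmin) / (lmax + lmin)

# A grating with mean luminance 100 cd/m^2 and amplitude 20 cd/m^2:
x = np.linspace(0.0, 2.0 * np.pi, 1000)
grating = 100.0 + 20.0 * np.sin(x)
c = michelson_contrast(grating)  # ~0.2

# If an instrument doubled retinal contrast for a given target, the
# target contrast required for detection would be halved:
threshold_unaided = 0.04
threshold_aided = threshold_unaided / 2.0
```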

Typical examples of neural threshold functions for detecting sinusoidal gratings displayed in central or peripheral vision are shown in Fig. 4.16. To obtain such functions requires that the
normal, contrast-attenuating effects of the eye's optical system be
avoided by stimulating the retina with interference fringes produced
by a pair of coherent point sources imaged near the eye's pupil
plane. Major aspects of neural threshold functions have been
attributed to the physical features of individual cone photoreceptors
and to the presence of quantal noise and internal sources of
biological noise.6, 7, 69 Theoretically, cutoff frequency for detecting
gratings with 100% contrast on the retina will be limited by spatial
integration across the entrance aperture of cone photoreceptors.49
Experimental verification of this prediction has been obtained for
foveal vision95 and in the periphery where cone apertures are
significantly larger.77 Thus, for the purposes of modeling
performance on contrast detection tasks, the neural portion of the
human visual system is often simplified to a single photodetector
which performs a weighted integration of luminance falling within
its acceptance aperture. For example, in the "Static Performance
Model" of the USAF Night Vision Laboratory62 the human visual
system is represented by a concatenation of two linear filters, one
optical and the other neural, with performance quantified
statistically to account for the effects of noise.40 Such filtering-
limited and noise-limited models of contrast detection have a firm
scientific basis which is well documented in the literature on human
foveal vision23, 56 and many applied models follow a similar
approach (ACQUIRE,40 VIDEM,1 ORACLE,58 PHIND86).
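The two-stage filtering idea can be sketched as a product of transfer functions. The particular filter shapes below are illustrative assumptions, not the actual parameters of the NVL model or of any eye:

```python
import numpy as np

# Spatial frequency axis in cycles/degree
f = np.linspace(0.0, 60.0, 121)

# Illustrative (assumed) filter shapes:
optical_mtf = np.exp(-f / 20.0)          # attenuation by the eye's optics
neural_mtf = np.exp(-(f / 40.0) ** 2)    # attenuation by the neural aperture

# Concatenated linear filters multiply in the frequency domain:
system_mtf = optical_mtf * neural_mtf

# A frequency is detectable when the contrast delivered through both
# filters exceeds the (noise-determined) neural threshold:
object_contrast, threshold = 1.0, 0.01
detectable = system_mtf * object_contrast > threshold
```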

Although filtering models of contrast detection have been used
extensively to describe performance of human foveal vision, these
models have not always been generalized correctly to cover the
peripheral field.73 To adapt foveal models for peripheral vision, the
usual approach has been to broaden the filter's impulse response
function sufficiently to reduce the filter's low-pass cutoff frequency
to match the reduced resolving power of peripheral vision.
Unfortunately, this is an invalid approach which produces too much
filtering and predicts a cutoff frequency for detection much lower
than has been measured experimentally.79 These difficulties reflect
a failure to distinguish clearly between the visual tasks of contrast
detection and spatial resolution. Although the cutoff spatial
frequency for these two tasks are nearly equal for normal foveal
vision, they can differ by as much as an order of magnitude in
peripheral vision. The underlying reason for this large difference in
performance levels is that a filtering-limited task like contrast
detection is limited by the size of neural apertures, whereas
resolution is sampling-limited and therefore is determined by the
spacing between neurons. Since the relationship between size and
spacing of neural apertures varies across the retina, it is
inappropriate to use information about spacing derived from
measurements of spatial resolution to set aperture size in a filtering
model. Instead, an anatomically correct approach would broaden
the filter's impulse response function in proportion to the diameter
of the entrance aperture of cone photoreceptors, or subsequent
neural stages, for any given location in the peripheral retina. To
illustrate these points further we present below a simplified model
of neural sampling of the retinal image which is developed in greater detail elsewhere.73

4.6.1 Neural Sampling of the Retinal Image

Neural processing of the retinal image begins with the transduction of light energy into corresponding changes of
membrane potential of individual light-sensitive photoreceptor cells
called rods and cones. Photoreceptors are laid out as a thin sheet
which varies systematically in composition across the retina as
shown schematically in Fig. 4.18(a). (This cartoon is only intended to convey a sense of the relative size and spacing of rods and cones, not their true number, which is of the order of 10^8 and 10^6, respectively, per eye.) At the very center of the foveal region, which
corresponds roughly to the center of the eye's field of view, the
photoreceptors are exclusively cones. Because cone-based vision is not as sensitive as rod-based vision, the central fovea is blind to dim
lights (e.g. faint stars) that are clearly visible when viewed
indirectly. At a radial distance of about 0.1-0.2 mm along the
retinal surface (=0.35-0.7° field angle) from the foveal center, rods
appear and in the peripheral retina rods are far more numerous
than cones.22, 61 Each photoreceptor integrates the light flux
entering the cell through its own aperture which, for foveal cones, is
about 2.5 µm in diameter on the retina or 0.5 arcmin of visual
angle.22, 49 Where rods and cones are present in equal density (0.4-
0.5 mm from the foveal center, = 1.4-1.8°) cone apertures are about
double their foveal diameter and about three times larger than rods.
Although rod and cone diameters grow slightly with distance from
the fovea, the most dramatic change in neural organization is the
increase in spacing between cones and the filling in of gaps by large
numbers of rods. For example, in the mid-periphery (30° field
angle) cones are about three times larger than rods, which are now
about the same diameter as foveal cones, and the spacing between
cones is about equal to their diameter. Consequently cones occupy
only about 30% of the retinal surface and the density of rods is
about 30 times that of cones. Given this arrangement of the
photoreceptor mosaic, we may characterize the first neural stage of
the visual system as a sampling process by which a continuous
optical image on the retina is transduced by two inter-digitated
arrays of sampling apertures. The cone array supports photopic
(daylight) vision and the rod array supports scotopic (night) vision.
In either case, the result is a discrete array of neural signals which
we call a neural image.

[Figure 4.18: (a) schematic photoreceptor mosaic; (b) neural organization of the P-system, with photoreceptors (rods & cones) feeding bipolar cells (interneurons) and "On"/"Off" ganglion cells (output neurons) whose fibers form the optic nerve.]

Figure 4.18. Schematic model of neural sampling of the retinal image.


Diagram at left illustrates the relationship between cones (open circles)
and rods (filled circles) across the visual field. Diagram at right illustrates
the neural architecture of the P-system of retinal cells. At the second and
third stages open circles represent on-neurons, hatched circles represent
off-neurons.

Although the entrance apertures of photoreceptors do not
physically overlap on the retinal surface, it is often useful to think
of the cone aperture as being projected back into object space
where it can be compared with the dimensions of visual targets.
This back-projection can be accomplished mathematically by
convolving the optical point-spread function of the eye with the
uniformly-weighted aperture function of the cone. (In taking this
approach we are ignoring the effects of diffraction at the cone
aperture, which would increase the cone aperture still further.) The
result is a spatial weighting function called the receptive field of the
cone. Since foveal cones are tightly packed on the retinal surface,
and since the effect of the eye's optical system is to broaden and
blur the acceptance aperture of cones, the receptive fields of foveal
cones in object space must overlap to some degree. Furthermore,
the convolution result will be dominated by optics since the
minimum width of the optical point spread function (PSF) of the
well-corrected eye is greater than the aperture of foveal cones. Just
the opposite is true in the periphery where cones are widely spaced
and larger in diameter than the optical PSF, provided off-axis
astigmatism and focusing errors are corrected with spectacle lenses.
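A one-dimensional sketch of this back-projection follows, with an assumed Gaussian stand-in for the eye's PSF (the widths and grid are illustrative choices, not measured values):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)   # position in arcmin
dx = x[1] - x[0]

# Assumed Gaussian stand-in for the well-corrected eye's PSF (~1 arcmin wide)
psf = np.exp(-(x / 0.5) ** 2)
psf /= psf.sum() * dx              # normalize to unit area

# Uniformly weighted foveal cone aperture, 0.5 arcmin in diameter
aperture = (np.abs(x) <= 0.25).astype(float)

# The cone's receptive field in object space: the aperture function
# convolved with (blurred by) the optical point-spread function
receptive_field = np.convolve(psf, aperture, mode="same") * dx

# Blurring broadens the profile, which is why the receptive fields of
# neighboring foveal cones overlap even though their physical apertures
# do not.
```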

The neural images encoded by the rod and cone mosaics are
transmitted from eye to brain over an optic nerve which, in humans,
contains roughly one million individual fibers per eye. Each fiber is
an outgrowth of a third order retinal neuron called a ganglion cell.
It is a general feature of the vertebrate retina that ganglion cells are
functionally connected to many rods and cones by means of
intermediate, second order neurons called bipolar cells. As a result,
a given ganglion cell typically responds to light falling over a
relatively large receptive field covering numerous rods and cones.
Neighboring ganglion cells may receive input from the same
receptor, which implies that ganglion cell receptive fields may
physically overlap. Thus in general the mapping from
photoreceptors to optic nerve fibers is both many-to-one and one-
to-many. The net result, however, is a significant degree of image
compression since the human eye contains about 5 times more
cones, and about 100 times more rods, than optic nerve fibers.21, 22
For this reason the optic nerve is often described as an information
bottleneck through which the neural image must pass before
arriving at visual centers of the brain where vast numbers of
neurons are available for extensive visual processing.

It would be a gross oversimplification to suppose that the array
of retinal ganglion cells forms a homogeneous population of neurons.
In fact, ganglion cells fall into a dozen or more physiological and
anatomical classes, each of which looks at the retinal image through
a unique combination of spatial, temporal, and chromatic filters.
Each class of ganglion cell then delivers that filtered neural image
via the optic nerve to a unique nucleus of cells within the brain
specialized to perform some aspect of either visually-controlled
motor-behavior (e.g. accommodation, pupil constriction, eye
movements, body posture, etc.) or visual perception (e.g. motion,
color, form, etc.). Different functional classes thus represent
distinct sub-populations of ganglion cells which exist in parallel to
extract different kinds of biologically useful information from the
retinal image.

In humans, one particular class of retinal ganglion cells called P-cells (by physiologists) or midget cells (by anatomists) is by far the
most numerous everywhere across the retina.64 This P-system has
evolved to meet the perceptual requirements for high spatial
resolution by minimizing the degree of convergence from cones
onto ganglion cells.92 The ultimate limit to this evolutionary
strategy is achieved in the fovea where individual cones exclusively
drive not just one, but two ganglion cells via separate interneurons
called bipolar cells. These bipolar cells are arranged in a push-pull
configuration in which one cell type ("On" cells) signals those
regions of the retinal image which are brighter than average, and a
complementary type of cell ("Off" cell) signals those regions which
are darker than average. Further into the periphery, beyond about
10-15° field angle, there are more cones than ganglion cells so some
convergence is necessary. Nevertheless, even in the mid-periphery
the retina preserves high spatial resolution through the bipolar
stage, thereby delaying convergence until the output stage of
ganglion cells.93

A schematic diagram of the neural organization of the P-system is shown in Fig. 4.18(b). Individual cone photoreceptors make
exclusive contact with two interneurons, one On-bipolar and one
Off-bipolar, through synaptic connections of opposite sign. In
general, signals from several on-bipolars are pooled by a given on-ganglion cell, and similarly for off-cells, to produce complementary
neural images. In the fovea, each ganglion cell connects to a single
bipolar cell, which connects to a single cone, thus producing a
ganglion cell with a receptive field which is identical to that of an individual cone. Incidentally, the cone population which drives the
P-system consists of two subtypes with slightly different spectral
sensitivities. Since foveal ganglion cells are functionally connected
to a single cone, the ganglion cell will inherit the cone's spectral
selectivity, thereby preserving chromatic signals necessary for color
vision. In peripheral retina, P-ganglion cells may pool signals
indiscriminately from different cone types, thus diminishing our
ability to distinguish colors.

4.6.2 Functional Implications of Neural Sampling

The neural architecture of the retina outlined above has important functional implications for the twin tasks of contrast
detection and spatial resolution. Contrast detection depends
ultimately upon the size of the largest receptive fields in the chain
of neurons which supports contrast perception. Since ganglion cell
receptive fields can be no smaller than that of individual cones, and
are generally expected to be larger, the retinal limit imposed on
contrast detection will be set by the spatial filtering characteristics
of ganglion cell receptive fields. To first approximation, the cutoff
spatial frequency for an individual cell is given by the inverse of its
receptive field diameter. For example, a ganglion cell connected to
a foveal cone of diameter 2.5 µm would have a cutoff frequency of
about 120 cyc/deg, which is about twice the optical bandwidth of
the human eye under optimum conditions. Although this is an
extremely high spatial frequency by visual standards, this prediction
has been verified experimentally by using interference fringes to
avoid the eye's optical limitations.95 However, under natural
viewing conditions the optical bandwidth of the retinal image is
typically about 60 cyc/deg,96 which is approximately half the
bandwidth of individual foveal cones. This implies that the cutoff
spatial frequency for signaling contrast by ganglion cells in foveal
vision is determined more by optical attenuation than by cone
diameter.
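The inverse-diameter approximation for a cell's cutoff frequency reduces to a one-line conversion, worked here for the numbers in the text (the function name is ours):

```python
ARCMIN_PER_DEG = 60.0

def aperture_cutoff_cyc_per_deg(diameter_arcmin):
    """First-approximation cutoff frequency of a receptive field:
    the inverse of its angular diameter."""
    return ARCMIN_PER_DEG / diameter_arcmin

# A foveal cone subtending 0.5 arcmin (about 2.5 um on the retina):
print(aperture_cutoff_cyc_per_deg(0.5))   # 120.0 cyc/deg

# At 30 deg eccentricity, cones are about three times larger:
print(aperture_cutoff_cyc_per_deg(1.5))   # 40.0 cyc/deg
```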

The situation is a little more complicated in peripheral retina where a ganglion cell's receptive field may be the union of several
disjoint, widely spaced, cone receptive-fields. We show elsewhere
that ganglion cells of this kind will have secondary lobes in their
frequency response characteristics which extend the cell's cutoff
spatial frequency up to that of individual input cones.73 Thus at 30°
field angle where cones are three times larger than in the fovea,
cutoff frequency would be expected to be three times smaller, or 40 cyc/deg. This is also a very high spatial frequency which rivals the
detection cutoff for normal foveal vision and is an order of
magnitude beyond the resolution limit in the mid-periphery.
Nevertheless, the prediction has been verified using interference
fringes as a visual stimulus.79 A slightly lower cutoff frequency for
contrast detection is obtained under natural viewing conditions with
refractive errors carefully corrected, indicating that optical
attenuation of the eye sets a lower limit to contrast detection than
does neural filtering in peripheral vision, just as it does in central
vision.

Implications of neural sampling for spatial resolution follow from the application of Shannon's sampling theorem, which states that
the highest spatial frequency in the continuous retinal image which
can be faithfully represented by the discrete neural image is set by
the neural sampling density. This so-called Nyquist cutoff frequency
for resolution is equal to half the neural sampling density, or
equivalently, half the inverse of the spacing between sample points.
Spatial frequencies beyond this Nyquist limit will be undersampled
and therefore will be subject to aliasing. What this means in
practice is that fine spatial patterns in the periphery may be visible
but they will be seen as distorted moiré patterns which are not
resolved correctly. Thus, if we understand the term "resolution" to
mean the veridical separation of a spatial pattern into its component
parts, then the presence of aliasing in the neural image indicates
that the retinal image was not resolved by the neural sampling array.
Thus, the sampling limit to visual resolution is characterized by the
onset of aliasing.
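The sampling argument reduces to Shannon's relation between sample spacing and the Nyquist frequency; a minimal sketch (function names and the illustrative spacing value are ours):

```python
def nyquist_limit_cyc_per_deg(spacing_deg):
    """Nyquist frequency of a regular sampling array: half the sampling
    density, i.e. half the inverse of the sample spacing."""
    return 0.5 / spacing_deg

def is_undersampled(spatial_freq_cpd, spacing_deg):
    """Frequencies above the Nyquist limit alias rather than resolve."""
    return spatial_freq_cpd > nyquist_limit_cyc_per_deg(spacing_deg)

# An illustrative peripheral ganglion-cell spacing of 0.05 deg gives a
# 10 cyc/deg resolution limit; a 30 cyc/deg grating would alias:
print(nyquist_limit_cyc_per_deg(0.05))   # 10.0
print(is_undersampled(30.0, 0.05))       # True
```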

Given that the visual system consists of a series of anatomically distinct stages (e.g., photoreceptors, bipolars, ganglion cells), each
of which samples the preceding stage, it stands to reason that the
lowest sampling limit will be set by the coarsest array of the visual
pathway. Over most of the retina, excluding the foveal region, cone
photoreceptors and midget bipolar cells greatly outnumber midget
ganglion cells, which implies that peripheral ganglion cells sub-
sample the photoreceptor array. Consequently, if retinal
undersampling is the limiting factor for spatial resolution, then
human resolution acuity should match the Nyquist frequency of the
cone array in central vision but match the ganglion cell array for
peripheral vision. Many previous studies of human resolution
across the visual field have produced data which are consistent with
this prediction. However, a definitive experiment requires proof that the resolution task performed by observers is sampling-limited, rather than contrast-limited. In other words, it must be
demonstrated that targets beyond the resolution limit remain visible
as aliases of the actual target. Experimental confirmation of
perceptual aliasing was obtained initially for central95, parafoveal97
and peripheral vision79 using interference fringe stimuli, and similar
results have been obtained subsequently using natural viewing of
parafoveal and peripheral targets.2-4, 68, 78

[Figure 4.19: spatial frequency (1-100 c/deg, log axis) vs. visual eccentricity (0-40 deg), showing the cone cutoff, optical cutoff, and RGC Nyquist curves, detection and resolution data for natural viewing and interferometric conditions, and the aliasing zone between them.]

Figure 4.19. Summary of optical and neural limits to pattern detection and pattern resolution across the visual field in humans. Symbols show
psychophysical performance (mean of 3 subjects from Thibos et al., 1987)
for grating detection (squares) and resolution (triangles) tasks under
normal viewing conditions (open symbols) or when viewing interference
fringes (closed symbols). The aliasing zone extends from the resolution
limit to the detection limit. Solid curve drawn through open squares
indicates the optical cutoff of the eye and marks the upper limit to the
aliasing zone for natural viewing (horizontally-hatched area). The
expanded aliasing zone observed with interference fringes (vertically-
hatched area) extends beyond the optical cutoff to a higher value set by
neural aperture size. Upper dotted curve shows computed detection limit of
individual cones (from Curcio et al., 1990) and lower dotted curve shows
computed Nyquist limit of retinal ganglion cells (RGC; from Curcio & Allen,
1990). (Redrawn from Thibos & Bradley, 1995).

Results of our own systematic exploration of the limits to contrast detection and resolution in human vision are summarized in Fig. 4.19. A series of experiments was conducted in which cutoff spatial frequency was measured for two different tasks (contrast detection, pattern resolution), for two different types of
visual targets (interference fringes, sinusoidal gratings displayed on
a computer monitor with the eye's refractive error corrected by
spectacle lenses), at various locations along the horizontal nasal
meridian of the visual field.77, 79 These results show that for the
resolution task, cutoff spatial frequency was the same regardless of
whether the visual stimulus was imaged on the retina by the eye's
optical system (natural view) or produced directly on the retina as
high-contrast, interference fringes. This evidence supports our
conclusion that, for a well-focused eye, pattern resolution is limited
by the ambiguity of aliasing caused by undersampling, rather than
by contrast attenuation due to optical or neural filtering. Aliasing
occurs for frequencies just above the resolution limit, so the
triangles in Fig. 4.19 also mark the lower limit to the aliasing
portion of the spatial frequency spectrum. This lower boundary of
the aliasing zone is accurately predicted by the Nyquist limit
calculated for human P-ganglion cells.21

The upper limit to the aliasing zone is determined by performance on the contrast detection task. Detection cutoff
frequency is significantly lower for natural viewing than for
interferometric viewing at all eccentricities. Consequently, the
spectrum of frequencies for which aliasing occurs is narrower for
natural viewing than for interference fringes. This difference is
directly attributable to imperfections of the eye's optical system
since all else is equal. In both cases the neural system is faced with
identical tasks (contrast detection) of the same stimulus (sinusoidal
gratings). Notice that for natural viewing the aliasing zone narrows
with decreasing field angle and vanishes completely at the fovea,
where contrast sensitivities for detection and for resolution of
gratings are nearly identical. Thus we may conclude that under
normal viewing conditions, the fovea is protected from aliasing
because of optical low-pass filtering. However, in the periphery the
optical bandwidth of the retinal image exceeds the neural Nyquist
frequency (assuming refractive errors are corrected) and so the
eye's optical system is ineffective as an anti-aliasing filter for the
relatively coarse sampling array of the peripheral retinal ganglion
cells.

4.6.3 Optical vs. Neural Limits to Visual Performance

The preceding discussion of neural architecture was motivated by the practical design question of whether it might be possible to
improve retinal image quality and yet gain no benefit in visual
performance. Although it is natural to suppose that increasing
retinal image quality should be beneficial, we have found that this is
not always true. The natural imperfections of the human eye limit
the optical bandwidth of the retinal image to less than the Nyquist
limit of the foveal cone mosaic, thus protecting foveal vision from
the aliasing effects of neural undersampling. If these optical
imperfections are overcome with the aid of improved visual
instrumentation, then the fovea will become exposed to images
beyond the Nyquist limit that may produce perceptual aliasing.8, 95

In orthodox engineering circles, aliasing is generally held to be an undesirable consequence of undersampling which is to be avoided
by anti-alias, low-pass filtering. In a biological system, on the other
hand, anti-alias filtering may have a greater cost than benefit. For
example, optical low-pass filters attenuate not only the high
frequencies to be rejected, but they also attenuate the low
frequencies to be passed. Nature appears to have taken the
orthodox approach for central vision by using the eye's optical
system to eliminate high spatial-frequency components of retinal
images before they can be undersampled by the neural retina, even
at the cost of lower contrast overall. In the human peripheral visual
system, however, the design trade-off nature has made is to tolerate
the possibility of erroneous perception caused by aliasing in
exchange for improved retinal contrast. As pointed out by Snyder et
al.,69 retinal image contrast is greatly increased by avoiding the
substantial amount of low-pass filtering which would be required to
avoid aliasing altogether in the periphery.91 Depending upon the
specific visual task or application, such filtering may or may not be
a hindrance.

The foregoing discussion raises a related question which is relevant to the design of wide-angle visually-coupled systems: what
kind of information is appropriate for peripheral display? Although
central vision is commonly regarded as highly superior to peripheral
vision, in many regards just the opposite is true. Night vision is an
obvious example for which the central blind spot is attributed to the
lack of rods in the retinal fovea. Another broad area in which
peripheral vision excels is in the sensing and control of self movement. For example, the visual control of posture, locomotion, head, and eye movements is largely under the control of motor mechanisms sensitive to peripheral stimulation.36, 47 Many of these
functions of peripheral vision are thought of as reflex-like actions
which, although they can be placed under voluntary control, largely
work in an "automatic-pilot" mode with minimal demands for
conscious attention. This suggests that information regarding body
attitude, self-motion through the environment, and moving objects
is ideally suited for peripheral display since such a strategy
matches the information to be displayed with the natural ability of
the peripheral visual system to extract such information. The
danger, however, is that retinal undersampling in the periphery can
lead to erroneous perception of space, motion, or depth which may
have unintended or undesirable consequences.

4.7 Validation of Eye Models


Validation of the optical properties of model eyes requires
application of paraxial geometric optics equations to predict
Gaussian properties such as total power and image plane location.
Since most models have been developed to account for a specific set
of experimental data, quantitative comparisons are drawn between
model behavior and the experimental data which inspired the
model. For example, Thibos et al. demonstrated75, 80 that their
Chromatic Eye accurately predicts ocular LCA and TCA, and Liou and
Brennan44 show that their model accurately predicts ocular
spherical aberration. Interestingly, Liou and Brennan demonstrated
that the earlier aspheric models of Lotmar, Kooijman, Navarro, etc.
all overestimated ocular spherical aberration, but not by as much as
the paraxial spherical models.43

More challenging validation studies involve model predictions of PSFs, MTFs, resolution limits, and Strehl ratios. Such validations
have been performed on two model eyes. Navarro’s four surface
aspheric model eye54 successfully predicted experimental measures
of the polychromatic MTFs and the effect of pupil size on the Strehl
ratio. Using the reduced eye with neuro-sensory model retina
described above, Thibos and colleagues71, 74, 76 calculated the
effects of chromatic aberration on the polychromatic MTF and
resolution limit of an eye with either centered or decentered pupils.
The model supported experimental evidence17, 31, 74 that the
polychromatic MTF with a centered pupil is only slightly reduced
compared to a focused monochromatic MTF, and that the TCA produced by pupil decentration has a profound effect on
polychromatic image quality, visual resolution, and contrast
sensitivity.

The most convincing validation of a model occurs when it predicts a seemingly unrelated phenomenon, as when a schematic
eye successfully explains complex visual phenomena. For example,
the Indiana Eye model was used to establish that a variety of
perceptual illusions of stereoscopic depth are based solely on the
presence of different amounts of horizontal TCA in the two eyes.98, 100, 101 Also, Woods et al. were able to use a computational model eye exhibiting the same transverse ray aberrations as real eyes to predict the perceived image doubling (diplopia) observed by many
humans who are farsighted or under-accommodated.99 They were
also able to predict highly notched MTFs observed experimentally
under the same circumstances. By augmenting optical models with
neural sampling models of visual resolution, others have
successfully predicted novel illusions of motion reversal and other
motion aliasing effects in peripheral vision.2, 4, 20, 90

Ultimately, the validity of model eyes will be proven by successful application in fields outside visual science to solve important
practical problems. For example, in his book on laser safety Sliney67
describes how a simple model eye can be used to calculate retinal
image intensity for the purpose of defining possible retinal hazards.
However, for point sources, he had to employ experimental data on
point-spread functions from real eyes since a viable model was not
then available to calculate PSFs. Similarly, the visual effects of
aberrations in instruments designed to be used with the eye have
been evaluated experimentally but not yet accounted for by neuro-
optical models of the visual process.14, 46 Such examples illustrate
the need for, and potential utility of, a working model that can
accurately predict retinal image quality and visual performance. In
this spirit, we hope our engineering colleagues will find the Indiana
Eye to be a useful and flexible tool for the optical design and
evaluation of visual instrumentation.
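The kind of retinal-hazard estimate mentioned above can be illustrated with a reduced-eye calculation: for an extended source, the retinal image diameter is the source's angular subtense times the posterior nodal distance (~16.7 mm in the reduced eye), and the retinal irradiance is the power admitted by the pupil spread over that image area. This is a small-angle, aberration-free sketch under assumed default values (3-mm pupil, 90% ocular transmittance), not the detailed procedure of Sliney's book.67

```python
import math

def retinal_irradiance(corneal_irradiance_W_m2, source_angle_rad,
                       pupil_diam_mm=3.0, eye_transmittance=0.9,
                       nodal_distance_mm=16.7):
    """Small-angle estimate of retinal irradiance (W/m^2) for an
    extended source viewed by a reduced eye.

    Power entering the eye = corneal irradiance * pupil area * transmittance;
    it is spread over a retinal image whose diameter equals the source's
    angular subtense times the posterior nodal distance."""
    pupil_area = math.pi * (pupil_diam_mm * 1e-3 / 2) ** 2       # m^2
    image_diam = source_angle_rad * nodal_distance_mm * 1e-3     # m
    image_area = math.pi * (image_diam / 2) ** 2                 # m^2
    power_in = corneal_irradiance_W_m2 * pupil_area * eye_transmittance
    return power_in / image_area

# Example: 10 W/m^2 at the cornea from a source subtending 10 mrad
print(f"{retinal_irradiance(10.0, 0.010):.1f} W/m^2 at the retina")
```

The roughly thousandfold gain from cornea to retina in this example shows why even modest corneal irradiances demand scrutiny; a point source, whose retinal image is set by the eye's PSF rather than by geometry, requires the PSF-based treatment the text calls for.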

ACKNOWLEDGMENTS

The ideas presented in this paper have developed in parallel with
the dissertation research of D. Walsh, F. Cheney, D. Still, X. Zhang,
M. Ye, M. Wilkinson, R. Anderson, M. Rynders, Y. Wang, and T.
Salmon. We thank K. Haggerty for technical support and the
National Eye Institute (National Institutes of Health) for financial
support (grant R01 EY05109).

REFERENCES
1. Akerman, A. and Kinzly, R.E., (1979). Predicting aircraft
detectability. Human Fact., 21, 277-291.

2. Anderson, S.J. and Hess, R.F., (1990). Post-receptoral
undersampling in normal human peripheral vision. Vision Res.,
30, 1507-1515.

3. Anderson, S.J., Mullen, K.T. and Hess, R.F., (1991). Human
peripheral spatial resolution for achromatic and chromatic
stimuli: limits imposed by optical and retinal factors. J. Physiol.,
442, 47-64.

4. Artal, P., Derrington, A.M. and Colombo, E., (1995). Refraction,
aliasing, and the absence of motion reversals in peripheral
vision. Vision Res., 35, 939-947.

5. Artal, P., Marcos, S., Iglesias, I. and Green, D.G., (1996). Optical
modulation transfer and contrast sensitivity with decentered
small pupils in the human eye. Vision Res., 36, 3575-3586.

6. Banks, M.S., Geisler, W.S. and Bennett, P.J., (1987). The physical
limits of grating visibility. Vision Res., 27, 1915-1924.

7. Banks, M.S., Sekuler, A.B. and Anderson, S.J., (1991). Peripheral
spatial vision: limits imposed by optics, photoreceptors, and
receptor pooling. J. Opt. Soc. Am. A, 8, 1775-1787.

8. Bennett, A.G. and Rabbetts, R.B., Clinical Visual Optics, 2nd Ed.,
(Butterworths, London, 1989).

9. Bradley, A., (1992). Perceptual manifestations of imperfect
optics in the human eye: attempts to correct for ocular
chromatic aberration. Optom. Vis. Sci., 69, 515-521.

10. Bradley, A. and Thibos, L.N., Modeling off-axis vision - I: the
optical effects of decentering visual targets or the eye's
entrance pupil, in Vision Models for Target Detection and
Recognition, ed. E. Peli (World Scientific Press, Singapore, 1995),
313-337.

11. Bradley, A., Thibos, L.N. and Still, D.L., (1990). Visual acuity
measured with clinical Maxwellian-view systems: effects of
beam entry location. Optom. Vis. Sci., 67, 811-817.

12. Bradley, A., Zhang, X. and Thibos, L.N., (1992). Failures of
isoluminance caused by ocular chromatic aberrations. Appl.
Opt., 31, 3657-3667.

13. Bradley, A., Zhang, X.X. and Thibos, L.N., (1991). Achromatizing
the human eye. Optom. Vis. Sci., 68, 608-616.

14. Burton, G.J. and Haig, N.D., (1984). Effects of the Seidel
aberrations on visual target discrimination. J. Opt. Soc. Am. A,
1, 373-385.

15. Campbell, F.W., (1957). The depth-of-field of the human eye.
Optica Acta, 4, 157-164.

16. Campbell, F.W. and Green, D.G., (1965). Optical and retinal
factors affecting visual resolution. J. Physiol., 181, 576-593.

17. Campbell, F.W. and Gubisch, R.W., (1967). The effect of
chromatic aberration on visual acuity. J. Physiol., 192, 345-358.

18. Charman, W., (1989). Light on the peripheral retina.
Ophthalmic. Physiol. Opt., 9, 91-92.

19. Cheney, F.E., Detection Acuity in the Peripheral Retina, M.S.
Thesis (Indiana University, 1989).

20. Coletta, N.J., Williams, D.R. and Tiana, C.L.M., (1990).
Consequences of spatial sampling for human motion perception.
Vision Res., 30, 1631-1648.

21. Curcio, C.A. and Allen, K.A., (1990). Topography of ganglion
cells in human retina. J. comp. Neurol., 300, 5-25.

22. Curcio, C.A., Sloan, K.R., Kalina, R.E. and Hendrickson, A.E.,
(1990). Human photoreceptor topography. J. comp. Neurol.,
292, 497-523.

23. De Valois, R.L. and De Valois, K.K., Spatial Vision, (Oxford
University Press, New York, 1988).

24. Drasdo, N. and Fowler, C.W., (1974). Non-linear projection of
the retinal image in a wide-angle schematic eye. Br. J. Ophthal.,
58, 709-714.

25. Dunne, M.C.M., (1995). A computing scheme for determination
of retinal contour from peripheral refraction, keratometry and
A-scan ultrasonography. Ophthalmic. Physiol. Opt., 15, 133-143.

26. Dunne, M.C.M. and Barnes, D.A., (1987). Schematic modelling
of peripheral astigmatism in real eyes. Ophthalmic. Physiol.
Opt., 7, 235-239.

27. Dunne, M.C.M., Barnes, D.A. and Mission, (1991). Effect of iris
displacement on oblique astigmatism in aphakic eyes. Optom.
Vis. Sci., 68, 957-959.

28. Emsley, H.H., Visual Optics, 5th Ed., (Hatton Press Ltd, London,
1952).

29. Ferree, C.E., Rand, G. and Hardy, C., (1931). Refraction for the
peripheral field of vision. Arch. Ophthalmol., 5, 717-731.

30. Frisén, L. and Schöldström, G., (1977). Relationship between
perimetric eccentricity and retinal locus in a human eye. Acta
Ophthalmol., 55, 63-68.

31. Green, D.G., (1967). Visual resolution when light enters the eye
through different parts of the pupil. J. Physiol., 190, 583-593.

32. Gullstrand, A., Appendix II.3: The optical system of the eye, in
H. von Helmholtz, Physiological Optics, English translation:
Southall, J.P.C. (ed.) (Optical Society of America, Washington,
D.C., 1924; original 1909), 350-358.

33. Hartridge, H., (1947). The visual perception of fine detail.
Philos. Trans. R. Soc. Lond. B, 232, 519-671.

34. Hopkins, H.H., Wave Theory of Aberrations, (Oxford University
Press, London, 1950).

35. Hopkins, H.H., (1955). The frequency response of a defocused
optical system. Proc. Roy. Soc. A, 231, 91-103.

36. Howard, I., The Perception of Posture, Self Motion, and the
Visual Vertical, in Handbook of Perception and Human
Performance, ed. K.R. Boff, L. Kaufman and J.P. Thomas (John
Wiley & Sons, New York, 1986), Ch. 18.

37. Howarth, P.A., Zhang, X., Bradley, A., Still, D.L. and Thibos, L.N.,
(1988). Does the chromatic aberration of the eye vary with age?
J. Opt. Soc. Am. A, 5, 2087-2092.

38. Howland, H.C. and Howland, B., (1977). A subjective method
for the measurement of monochromatic aberrations of the eye.
J. Opt. Soc. Am., 67, 1508-1518.

39. Kooijman, A.C., (1983). Light distribution on the retina of a
wide angle theoretical eye. J. Opt. Soc. Am., 73, 1544-1550.

40. Kornfeld, G.H. and Lawson, W.R., (1971). Visual-perception
models. J. Opt. Soc. Am., 61, 811-820.

41. Le Grand, Y., Form and Space Vision, revised ed., ed. G.G. Heath
and M. Millodot (Indiana University Press, Bloomington, 1967).

42. Liang, J. and Williams, D.R., (1997). Aberrations and retinal
image quality of the normal human eye. J. Opt. Soc. Am. A, (in
press).

43. Liou, H.L. and Brennan, N.A., (1996). The prediction of spherical
aberration with schematic eyes. Ophthalmic. Physiol. Opt., 16,
348-354.

44. Liou, H.L. and Brennan, N.A., (1997). Anatomically accurate,
finite model eye for optical modeling. J. Opt. Soc. Am. A, 14,
1684-1695.

45. Lotmar, W., (1971). Theoretical eye model with aspherics. J.
Opt. Soc. Am., 61, 1522-1529.

46. Lyons, K., Mouroulis, P. and Cheng, X., (1996). The effect of
instrumental spherical aberration on visual image quality. J.
Opt. Soc. Am. A, 13, 193-205.

47. Matin, L., Visual Localization and Eye Movements, in Handbook
of Perception and Human Performance, ed. K.R. Boff, L. Kaufman
and J.P. Thomas (John Wiley & Sons, New York, 1986), Ch. 20.

48. Metcalf, H., (1965). Stiles-Crawford apodization. J. Opt. Soc.
Am., 55, 72-74.

49. Miller, W.H. and Bernard, G.D., (1983). Averaging over the
foveal receptor aperture curtails aliasing. Vision Res., 23,
1365-1369.

50. Millodot, M., (1981). Effect of ametropia on peripheral
refraction. Am. J. Optom. Physiol. Optics, 58, 691-695.

51. Millodot, M., (1984). Peripheral refraction in aphakic eyes. Am.
J. Optom. Physiol. Optics, 61, 586-589.

52. Millodot, M. and Lamont, A., (1974). Refraction of the
periphery of the eye. J. Opt. Soc. Am., 64, 110-111.

53. Mouroulis, P. and Macdonald, J., Geometrical Optics and Optical
Design, (Oxford University Press, New York, 1997).

54. Navarro, R., Santamaria, J. and Bescos, J., (1985).
Accommodation-dependent model of the human eye with
aspherics. J. Opt. Soc. Am. A, 2, 1273-1281.

55. Newton, I., The Optical Papers of Isaac Newton, Vol. 1: The
Optical Lectures 1670-1672, ed. A.E. Shapiro (Cambridge
University Press, Cambridge, 1984).

56. Olzak, L.A. and Thomas, J.P., Seeing spatial patterns, in
Handbook of Perception and Human Performance, ed. K. Boff, L.
Kaufman and J.P. Thomas (J. Wiley & Sons, New York, 1986),
7.1-7.56.

57. Overington, I., (1973). Interaction of vision with optical aids. J.
Opt. Soc. Am., 63, 1043-1049.

58. Overington, I., (1982). Towards a complete model of photopic
visual threshold performance. Opt. Engin., 21, 2-13.

59. Peli, E., (1990). Contrast in complex images. J. Opt. Soc. Am. A,
7, 2032-2040.

60. Pflibsen, K.P., Pomerantzeff, O. and Ross, R.N., (1988). Retinal
illuminance using a wide-angle model of the eye. J. Opt. Soc.
Am. A, 5, 146-150.

61. Polyak, S.L., The Retina, (Univ. Chicago Press, Chicago, 1941).

62. Ratches, J.A., (1976). Static performance model for thermal
imaging systems. Opt. Engin., 15, 525-529.

63. Rempt, F., Hoogerheide, J. and Hoogenboom, W.P.H., (1971).
Peripheral retinoscopy and the skiagram. Ophthalmologica,
162, 1-10.

64. Rodieck, R.W., (1988). The primate retina. 4, 203-278.

65. Rynders, M.C., Lidkea, B.A., Chisholm, W.J. and Thibos, L.N.,
(1995). Statistical distribution of foveal transverse chromatic
aberration, pupil centration, and angle psi in a population of
young adult eyes. J. Opt. Soc. Am. A, 12, 2348-2357.

66. Simonet, P. and Campbell, M.C.W., (1990). The optical
transverse chromatic aberration on the fovea of the human eye.
Vision Res., 30, 187-206.

67. Sliney, D. and Wolbarsht, M., Safety with Lasers and Other
Optical Sources, (Plenum Press, New York, 1980).

68. Smith, R.A. and Cass, P.F., (1987). Aliasing in the parafovea with
incoherent light. J. Opt. Soc. Am. A, 4, 1530-1534.

69. Snyder, A.W., Bossomaier, T.R.J. and Hughes, A., (1986). Optical
image quality and the cone mosaic. Science, 231, 499-501.

70. Still, D.L., Optical Limits to Contrast Sensitivity in Human
Peripheral Vision, Ph.D. Thesis (Indiana University, 1989).

71. Thibos, L.N., (1987). Calculation of the influence of lateral
chromatic aberration on image quality across the visual field. J.
Opt. Soc. Am. A, 4, 1673-1680.

72. Thibos, L.N., (1990). Optical limitations of the Maxwellian-view
interferometer. Appl. Opt., 29, 1411-1419.

73. Thibos, L.N. and Bradley, A., Modeling off-axis vision - II: the
effect of spatial filtering and sampling by retinal neurons, in
Vision Models for Target Detection and Recognition, ed. E. Peli
(World Scientific Press, Singapore, 1995), 338-379.

74. Thibos, L.N., Bradley, A. and Still, D.L., (1991). Interferometric
measurement of visual acuity and the effect of ocular chromatic
aberration. Appl. Opt., 30, 2079-2087.

75. Thibos, L.N., Bradley, A., Still, D.L., Zhang, X. and Howarth, P.A.,
(1990). Theory and measurement of ocular chromatic
aberration. Vision Res., 30, 33-49.

76. Thibos, L.N., Bradley, A. and Zhang, X.X., (1991). Effect of ocular
chromatic aberration on monocular visual performance.
Optom. Vis. Sci., 68, 599-607.

77. Thibos, L.N., Cheney, F.E. and Walsh, D.J., (1987). Retinal limits
to the detection and resolution of gratings. J. Opt. Soc. Am. A,
4, 1524-1529.

78. Thibos, L.N., Still, D.L. and Bradley, A., (1996). Characterization
of spatial aliasing and contrast sensitivity in peripheral vision.
Vision Res., 36, 249-258.

79. Thibos, L.N., Walsh, D.J. and Cheney, F.E., (1987). Vision beyond
the resolution limit: aliasing in the periphery. Vision Res., 27,
2193-2197.

80. Thibos, L.N., Ye, M., Zhang, X. and Bradley, A., (1992). The
chromatic eye: a new reduced-eye model of ocular chromatic
aberration in humans. Appl. Opt., 31, 3594-3600.

81. Thibos, L.N., Ye, M., Zhang, X. and Bradley, A., (1997). Spherical
aberration of the reduced schematic eye with elliptical
refracting surface. Optom. Vis. Sci., (in press).

82. van Meeteren, A., (1974). Calculations on the optical
modulation transfer function of the human eye for white light.
Optica Acta, 21, 395-412.

83. van Meeteren, A. and Dunnewold, C.J.W., (1983). Image quality
of the human eye for eccentric entrance pupil. Vision Res., 23,
573-579.

84. Villegas, E.R., Carretero, L. and Fimia, A., (1996). Le Grand eye
for the study of ocular chromatic aberration. Ophthalmic.
Physiol. Opt., 16, 528-531.

85. von Helmholtz, H., Treatise on Physiological Optics, 3rd ed.,
trans. J.P.C. Southall (Optical Society of America, Washington,
1924; original 1909).

86. Vos, J.J. and van Meeteren, A., (1991). PHIND: an analytical
model to predict acquisition distance with image intensifiers.
Appl. Opt., 30, 958-966.

87. Walsh, G., Charman, W.N. and Howland, H.C., (1984). Objective
technique for the determination of monochromatic aberrations
of the human eye. J. Opt. Soc. Am. A, 1, 987-992.

88. Wang, G., Pomerantzeff, O. and Pankratov, M.M., (1983).
Astigmatism of oblique incidence in the human model eye.
Vision Res., 23, 1079-1085.

89. Wang, Y.Z. and Thibos, L.N., (1997). Oblique (off-axis)
aberration of the reduced schematic eye with elliptical
refracting surface. Optom. Vis. Sci., (in press).

90. Wang, Y.Z., Thibos, L.N. and Bradley, A., (1996). Undersampling
produces non-veridical motion perception, but not necessarily
motion reversal, in peripheral vision. Vision Res., 36,
1737-1744.

91. Wang, Y.Z., Thibos, L.N. and Bradley, A., (1997). Effects of
refractive error on detection acuity and resolution acuity in
peripheral vision. Invest. Ophthal. Vis. Sci., (in press).

92. Wässle, H. and Boycott, B.B., (1991). Functional architecture of
the mammalian retina. Physiological Reviews, 71, 447-480.

93. Wässle, H., Grünert, U., Martin, P. and Boycott, B.B., (1994).
Immunocytochemical characterization and spatial distribution
of midget bipolar cells in the macaque monkey retina. Vision
Res., 34, 561-579.

94. Welford, W.T., Aberrations of the Symmetrical Optical System,
(Academic Press, London, 1974).

95. Williams, D.R., (1985). Aliasing in human foveal vision. Vision
Res., 25, 195-205.

96. Williams, D.R., Brainard, D.H., McMahon, M.J. and Navarro, R.,
(1994). Double pass and interferometric measures of the
optical quality of the eye. J. Opt. Soc. Am. A, 11, 3123-3135.

97. Williams, D.R. and Coletta, N.J., (1987). Cone spacing and the
visual resolution limit. J. Opt. Soc. Am. A, 4, 1514-1523.

98. Winn, B., Bradley, A., Strang, N.C., McGraw, P.V. and Thibos,
L.N., (1995). Reversals of the colour-depth illusion explained by
ocular chromatic aberration. Vision Res., 35, 2675-2684.

99. Woods, R.L., Bradley, A. and Atchison, D.A., (1996). Monocular
diplopia caused by ocular aberrations and hyperopic defocus.
Vision Res., 36, 3597-3606.

100. Ye, M., Bradley, A., Thibos, L.N. and Zhang, X., (1991).
Interocular differences in transverse chromatic aberration
determine chromostereopsis for small pupils. Vision Res., 31,
1787-1796.

101. Ye, M., Bradley, A., Thibos, L.N. and Zhang, X.X., (1992). The
effect of pupil size on chromostereopsis and chromatic
diplopia: interaction between the Stiles-Crawford effect and
chromatic aberrations. Vision Res., 32, 2121-2128.

102. Zhang, X., Bradley, A. and Thibos, L., (1991). Achromatizing
the human eye: the problem of chromatic parallax. J. Opt. Soc.
Am. A, 8, 686-691.

103. Zhang, X., Bradley, A. and Thibos, L.N., (1993). Experimental
determination of the chromatic difference of magnification of
the human eye and the location of the anterior nodal point. J.
Opt. Soc. Am. A, 10, 213-220.

104. Zhang, X., Thibos, L.N. and Bradley, A., (1991). Relation
between the chromatic difference of refraction and the
chromatic difference of magnification for the reduced eye.
Optom. Vis. Sci., 68, 456-458.
