Pierre-Yves Bondiau, Grégoire Malandain, Pierre Chauvel, Frédérique Peyrade, Adel Courdi, Nicole Iborra,
Jean-Pierre Caujolle, and Pierre Gastaud
Citation: Medical Physics 30, 1013 (2003); doi: 10.1118/1.1564092
View online: http://dx.doi.org/10.1118/1.1564092
View Table of Contents: http://scitation.aip.org/content/aapm/journal/medphys/30/6?ver=pdfcov
Published by the American Association of Physicists in Medicine
Grégoire Malandain
Institut National de Recherche en Informatique et Automatique, 06 902 Sophia Antipolis, France
(Received 30 July 2002; accepted for publication 5 February 2003; published 23 May 2003)
Recently, radiotherapy possibilities have been dramatically increased by software and hardware developments. Improvements in medical imaging devices have increased the importance of three-dimensional (3D) images, as the complete examination of these data by a physician is not possible. Computer techniques are needed to present only the pertinent information for clinical applications. We describe a technique for an automatic 3D reconstruction of the eye and for merging the CT scan with fundus photographs (retinography). The final result is a virtual eye to guide ocular tumor protontherapy. First, we developed specific software to automatically detect the positions of the eyeball, the optic nerve, and the lens in the CT scan; this automatic method yields a 3D eye reconstruction. Second, we describe the retinography and demonstrate the projection underlying this modality. We then combine the retinography with the eye reconstructed from the CT scan to obtain a virtual eye. The result is a 3D computer scene rendering a virtual eye inside a skull reconstruction. The virtual eye can be useful for the simulation, planning, and control of ocular tumor protontherapy, and it can be adapted to treatment planning to automatically detect the positions of the eye and the organs at risk. It should be highlighted that all the image processing is fully automatic, which makes the results reproducible, a useful property for conducting a consistent clinical validation. The automatic localization of the organ at risk in a CT scan or an MRI could be of great interest for radiotherapy in the future: the comparison of one patient at different times, the comparison of different treatment centers, the possibility of pooling results from different treatment centers, the automatic generation of dose-volume histograms, the comparison between different treatment plans for the same patient, and the comparison between different patients at the same time. It will also be less time consuming. © 2003 American Association of Physicists in Medicine.
DOI: 10.1118/1.1564092
Key words: eye, protontherapy, 3D reconstruction
INTRODUCTION
Software and hardware developments have dramatically increased radiotherapy possibilities. Dosimetry software packages, using three-dimensional (3D) images from the CT scan, provide high precision for both radiotherapy and brachytherapy.
Simultaneously, significant improvements in medical imaging devices have increased the importance of 3D images. The complete examination of these data by a physician is time consuming. Thus, computer techniques are used more frequently to extract and present only the pertinent information for clinical applications.
In this paper, we propose a further improvement to radiotherapy for a specific case: the treatment of eye tumors. To that end, we will use both 3D reconstruction techniques and multimodality merging methods to accurately determine the
TABLE I. Details of the two CT-scan protocols used.

CT-scan protocol 1 (three patients):
  Field of view: 25 cm
  Pixel size: 0.488 × 0.488 mm²
  Distance between slice centers: 1 mm
  Slice thickness: 1.5 mm
  Overlap: 0.25 mm

CT-scan protocol 2 (two patients):
  Field of view: 16.5 cm
  Pixel size: 0.322 × 0.322 mm²
  Distance between slice centers: 1 mm
  Slice thickness: 1.5 mm
  Overlap: 0.25 mm
CT-scan images
The CT-scan images are acquired on a Philips LX scanner. We use the same settings (mAs and kVp) as for a head CT scan. Images are made of 35 slices of 512 × 512 pixels. Two different protocols were used: one allowing us to see the whole skull, the other giving a better resolution for the ocular structures (see Table I). We did not use spiral/helical CT because this technology interpolates between two slices, resulting in a loss of accuracy.
Hough transform
Principle: The basic tool we use is the Hough transform.13 It is a mathematical transformation that can fit different shapes in a space, and it was adapted to each particular structure we wanted to segment. The principle is simple. Given a parametric shape s(xi) depending on n parameters xi, we build an n-dimensional space S, each dimension corresponding to one of the parameters xi.
FIG. 2. Search volume for lens detection. The search for the lens is constrained to a specific volume, which is determined automatically with respect to the sclera detection. This volume lies in the anterior part of the eye, between half the eye radius and the eye radius, in a truncated cone.
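The truncated-cone geometry described in the caption above can be expressed as a simple membership test. This is an illustrative sketch, not the authors' code; the cone half-angle (`half_angle_deg`) is an assumed parameter, since the text only specifies that the volume extends from half the eye radius to the eye radius along the anterior optical axis.

```python
import math

def in_lens_search_volume(p, center, R, axis, half_angle_deg=30.0):
    """Return True if voxel p lies in the lens search volume: the part
    of a cone around the anterior-pointing optical axis whose coordinate
    along that axis, measured from the eye center, lies between R/2 and R.
    The cone half-angle is an assumed illustrative parameter.
    """
    v = [p[i] - center[i] for i in range(3)]
    n = math.sqrt(sum(a * a for a in axis))
    u = [a / n for a in axis]                   # unit optical axis
    z = sum(v[i] * u[i] for i in range(3))      # axial coordinate
    if not (R / 2 <= z <= R):
        return False
    lat2 = sum(a * a for a in v) - z * z        # squared lateral distance
    return lat2 <= (z * math.tan(math.radians(half_angle_deg))) ** 2
```

For example, a voxel on the optical axis at 0.75 R from the center is inside the volume, while a voxel posterior to the center is not.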
largest points of this subregion, the first point being the global maximum of S (typically N = 100). Let us suppose that the last point among these N points has Vmin votes. The evenly weighted barycenter of those N points may yield a bad shape if they represent two or more structures detected by the Hough transform. To ensure getting only one structure, we first extract the connected component in S consisting of points with values larger than Vmin and containing the global maximum (we perform a hysteresis thresholding in S with a lower threshold of Vmin and an upper threshold of the global maximum). Second, we compute the weighted barycenter of this connected component to get the best parameters xi. This last operation finally gives us a better precision than the discretization of S.
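The hysteresis-and-barycenter refinement can be sketched as follows. This is an illustrative reimplementation, not the authors' code; it uses a face-connectivity flood fill to extract the connected component of the accumulator.

```python
import numpy as np
from collections import deque

def refine_peak(S, N=100):
    """Sub-bin peak localization in a Hough accumulator S (any dimension).

    Vmin is the value of the N-th largest bin; we keep the connected
    component (face connectivity) of {S >= Vmin} that contains the global
    maximum, then return its vote-weighted barycenter.
    """
    flat = np.sort(S.ravel())[::-1]
    vmin = flat[min(N, S.size) - 1]
    start = np.unravel_index(np.argmax(S), S.shape)
    mask = S >= vmin
    seen = np.zeros(S.shape, dtype=bool)
    seen[start] = True
    queue = deque([start])
    while queue:                       # flood fill from the global maximum
        p = queue.popleft()
        for axis in range(S.ndim):
            for step in (-1, 1):
                q = list(p)
                q[axis] += step
                q = tuple(q)
                if 0 <= q[axis] < S.shape[axis] and mask[q] and not seen[q]:
                    seen[q] = True
                    queue.append(q)
    coords = np.argwhere(seen).astype(float)
    weights = S[seen].astype(float)    # same (C) ordering as argwhere
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()
```

With two adjacent bins of equal weight, the barycenter lands halfway between them, i.e. at finer-than-bin precision, which is the point of the refinement.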
The sclera: We model the eye as a sphere. The Hough space for the sphere search is four-dimensional: three parameters for the center and one for the radius. First, we extract points belonging to the sclera (the eye border) by thresholding the original 3D CT-scan image. The search is restricted to speed it up: the user specifies a rough region of interest (ROI) containing the eye in the 3D CT-scan image. We build the Hough array S so as to limit the search to an anatomically correct range for the radius, and to the middle of the region of interest for the center, making sure that the complete eye is included in the ROI. The final output of this step is a sphere (a center and a radius R) representing the most probable eyeball boundary. This sphere will represent the eyeball for the virtual eye.
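As an illustration of this four-dimensional sphere search, here is a coarse Hough voting sketch. It is a toy under stated assumptions, not the authors' implementation: each border point votes for all candidate centers lying at distance r from it, with directions sampled on a Fibonacci sphere; the accumulator maximum gives the center and radius.

```python
import numpy as np

def fibonacci_sphere(n):
    """n roughly uniform unit vectors (Fibonacci sphere sampling)."""
    k = np.arange(n)
    z = 1.0 - (2.0 * k + 1.0) / n
    phi = k * 2.399963229728653        # golden angle in radians
    s = np.sqrt(1.0 - z * z)
    return np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=1)

def hough_sphere(edge_points, shape, radii):
    """Coarse 4D Hough transform for a sphere.

    edge_points : (N, 3) integer voxel coordinates on the eye border.
    shape       : spatial shape of the accumulator (the image shape).
    radii       : candidate radii, in voxels.
    Returns the accumulator S and the winning (cx, cy, cz, r).
    """
    radii = list(radii)
    dirs = fibonacci_sphere(512)
    S = np.zeros(tuple(shape) + (len(radii),), dtype=np.int32)
    for ri, r in enumerate(radii):
        # Every edge point votes for all centers at distance r from it.
        c = np.round(edge_points[:, None, :] + r * dirs[None, :, :])
        c = c.reshape(-1, 3).astype(int)
        ok = np.all((c >= 0) & (c < np.array(shape)), axis=1)
        cx, cy, cz = c[ok].T
        np.add.at(S, (cx, cy, cz, np.full(cx.shape, ri)), 1)
    best = np.unravel_index(np.argmax(S), S.shape)
    return S, (best[0], best[1], best[2], radii[best[3]])
```

On synthetic border points of a sphere of radius 5 centered at (10, 10, 10), the accumulator maximum recovers both the center and the radius.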
The lens: The same approach is used to detect the lens, which is modeled by an ellipsoid with specified ratios between its axes: two have a length of r, while the third has a length of r/2. These axes are oriented along the 3D CT-scan image dimensions; this seems reasonable since the acquisition protocol more or less aligns the Y dimension with the optical axis.
The clips
FIG. 4. Flowchart of the data processing used in this project. From the CT scan we build a 3D image in which the region of interest (the region of the eye) is defined. Thresholding and a barycenter method yield the clip coordinates. Thresholding and the Hough transform yield the eyeball coordinates. From the eyeball coordinates, the search volumes for the lens and for the optic nerve are defined. The Hough transform applied within these search volumes yields the lens and optic nerve coordinates. The coordinates of the eyeball, clips, lens, and optic nerve are then combined into a 3D view of the eye.
FIG. 5. Automatic CT-scan reconstruction, 3/4 view: skull, two eyeballs, two lenses, optic nerves, and clips.
spherical retinal surface out onto 2D. The radial distances are considered correct, but considerable distortion and circumferential stretching occur in regions anterior to the equator.
Distortion computation
The current two-dimensional retinal drawing chart is an azimuthal equidistant image of the retina (see Fig. 6). Little work has been done on the distortion produced by this chart.16,17 Borodkin and Thompson built a computerized sphere, measured the length of a 3D arc on this model, and compared it numerically with the length of the corresponding 2D arc on the retinography. They showed that the circumferential distortion is about 57% at the equator and up to 138% for the anterior region. For the region posterior to the equator, we establish the geometrical distortion equation for circumferences lying between the macula and the equator. Referring to Fig. 6, C is the center of the eyeball, M is the macula, E is a point on the equator, and A is a point lying on the 3D arc EM. In the retinography, the projections of C, M, E, and A are, respectively, C′, M′, E′, and A′. From the 3D retina (left) to the retinal drawing chart (right), radial distances are accurate but circumferential distances are distorted.
1018
1018
Let α denote the angle ∠ACM at the eyeball center C. The point A lies on a 3D circle of radius r = R sin α around the optical axis, and its projection A′ lies on a 2D circle of radius r′ = A′M′ in the chart. Since the chart preserves radial distances,

  arc EM / E′M′ = arc AM / A′M′,   (1)

that is,

  (πR/2) / E′M′ = αR / r′.

Since E′M′ = πR/2, it comes from (1) that r′ = αR, hence 2πr′ = 2παR.
We obtain then the following ratio:

  2D circumference / 3D circumference = 2πr′ / 2πr = 2παR / (2πR sin α) = α / sin α.
This last result is the retinography transformation equation; it fits well with the experimental measures presented by Borodkin and Thompson. The original retinography is transformed by this equation to obtain the normalized retinography (NR); see Fig. 7.
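The transformation can be checked numerically: writing α for the angle between the macula and a retinal point as seen from the eyeball center, the chart stretches circumferences by the factor α / sin α, which reproduces the reported 57% distortion at the equator (α = π/2). A minimal sketch:

```python
import math

def chart_distortion(alpha):
    """Circumferential distortion ratio of the azimuthal equidistant
    retinal chart at angular distance alpha (radians) from the macula:
    2D circumference / 3D circumference = alpha / sin(alpha).
    """
    return alpha / math.sin(alpha)

# At the equator, alpha = pi/2 gives a ratio of pi/2 ~ 1.571,
# i.e. about 57% circumferential stretching.
```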
RESULTS
CT-scan reconstruction
Measurements obtained by the four observers (Obs. 1-4):

                           Obs. 1    Obs. 2    Obs. 3    Obs. 4
Eye diameter                33.68     33.65     33.88     33.69
ON diameter                  5.90      5.87      5.59      5.99
Clip1-to-Clip2 distance     11.30     12.10     11.90     12.05
Clip1-to-ON distance        14.90     15.30     15.50     15.10
Clip2-to-ON distance         5.40      5.30      5.50      5.10

Comparison of the mean observer measurement with the software result (last column: absolute difference):

                           Mean Obs. 1-4   Software results   Difference
Eye diameter                   33.75           33.64             0.11
ON diameter                     5.84            6.00             0.16
Clip1-to-Clip2 distance        11.91           11.37             0.54
Clip1-to-ON distance           15.27           14.46             0.81
Clip2-to-ON distance            5.32            5.24             0.08
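As a quick consistency check (interpreting the last column of the comparison as the absolute difference between the mean observer value and the software result), the reported figures agree:

```python
# Values as reported in the comparison table above.
mean_obs = [33.75, 5.84, 11.91, 15.27, 5.32]
software = [33.64, 6.00, 11.37, 14.46, 5.24]
reported_diff = [0.11, 0.16, 0.54, 0.81, 0.08]

# Each reported difference matches |mean - software| after rounding.
diffs = [round(abs(m - s), 2) for m, s in zip(mean_obs, software)]
```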
Since we know the eyeball radius, we can scale the orthogonal projection to make it consistent with the 3D model. Two particular points may be seen or estimated in both the 3D model and the retinography: the origin of the optic nerve, and the macula center (it lies on the optical axis). These constraints determine a single projection of the retinography onto the posterior hemisphere. As we compute the geometrical distortion of the retinography to get the NR, we can simulate an orthogonal projection of the NR to get a three-dimensional mapping of the retina. The result of the retinography transformation equation
FIG. 10. Incorporating the 3D retinography into the CT-scan reconstruction (zoomed view). The eyeball is removed to show the relationship between clips, tumor, and retinography.
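The back-projection of the NR onto the retinal sphere can be sketched as follows. This is an assumption-laden illustration, not the authors' code: chart points are taken in polar coordinates (rho, phi) about the macula image, the optical axis is assumed along -z with the macula at the posterior pole, and the equidistant property gives the angular distance alpha = rho / R.

```python
import math

def chart_to_retina(rho, phi, center, R):
    """Map a point of the normalized retinography, in polar coordinates
    (rho, phi) about the macula image, onto the retinal sphere.

    Radial chart distances equal arc lengths on the sphere, so the
    angular distance from the macula is alpha = rho / R.  The optical
    axis is assumed along -z, the macula at center + (0, 0, -R).
    """
    alpha = rho / R
    x = center[0] + R * math.sin(alpha) * math.cos(phi)
    y = center[1] + R * math.sin(alpha) * math.sin(phi)
    z = center[2] - R * math.cos(alpha)
    return (x, y, z)
```

With this convention, rho = 0 maps to the macula and rho = πR/2 maps to the equator, consistent with the distortion derivation.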
To be clinically helpful, this approach has to be coupled with the dosimetry software to obtain a complete simulation, planning, and control procedure. The localization of the organ at risk currently depends on the operator and is time consuming. The automatic localization of the organ at risk by software could be of great interest for radiotherapy in the future.