
FPGA Implementation of a Pixel Auto-configuration System for an NDB-encoder Sensor

Nicolás Calarco∗1, Federico Zacchigna†2, Fabian Vargas‡3, Ariel Lutenberg†¶4, José Lipovetzky§¶5 and Fernando Perez Quintián∗¶6

∗ Facultad de Ingeniería, Universidad Nacional del Comahue, Buenos Aires 1400, Neuquén Capital, Argentina
1 nicolas.calarco@fain.uncoma.edu.ar, 6 fernando.perezq@fain.uncoma.edu.ar
† Facultad de Ingeniería, Universidad de Buenos Aires, Paseo Colón 850, Capital Federal, Argentina
2 fzacchigna@fi.uba.ar, 4 lse@fi.uba.ar
‡ Catholic University - PUCRS, Av. Ipiranga 6681, Porto Alegre, Brazil
3 vargas@computer.org
§ Comisión Nacional de Energía Atómica and Instituto Balseiro - Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche, Río Negro, Argentina
5 jose.lipovetzky@ieee.org
¶ Consejo Nacional de Investigaciones Científicas y Técnicas - CONICET

Abstract—In previous works we introduced optical encoders based on non-diffractive beams (NDB) and showed that they extend the mechanical tolerance limits beyond those of the optical encoder technologies currently in use. However, in order to make them suitable for commercial fabrication, the alignment of the light sensor with the NDB should be automated. In this work we present a new design for the pixels and an algorithm, implemented in an FPGA, that finds the center of the beam. This algorithm is the first step towards automatically aligning the sensor and is validated using commercial simulation tools.

I. INTRODUCTION

Optical encoders are displacement and rotation sensors used in a broad variety of equipment and applications [1]. For example, they are used in domestic applications like home printers, in the automotive industry for measurement systems, in industrial motion control and factory automation, in medicine for imaging systems, and in high-technology equipment such as radars or robotics [2], [3]. Typically, an optical encoder is composed of a moving head and a fixed scale, such that the head movement produces light intensity variations over a set of photodetectors located at the head, which generate a pair of electrical signals that encode the movement [4]–[6]. Non-diffractive beam (NDB) optical encoders (Fig. 1) present a remarkable performance [7]–[9] by using a special detection geometry with an ad-hoc detectivity function, consisting of a set of concentric annular pixels with a programmable gain [10]–[13], as shown in Fig. 1. The scale is a typical Ronchi grating used in standard optical encoders. In order to work properly, the NDB has to be centered on the photodetector. This is not a trivial issue, because the beam size is just a few micrometers. In order to automate the alignment of the photodetector, we propose a new sensor that is able to execute an auto-alignment algorithm. To reach this goal, the proposed sensor consists of a photodetector array made of hexagonal configurable pixels (see Fig. 1) that are driven by an IP-core. In this way, once the beam impinges on the sensor, the system is able to find the center of the beam and configure the detection pattern around it.

FPGAs are increasingly being used in image sensor applications such as image processing, due to their reconfiguration capability and the possibility of processing information in parallel hardware architectures. They provide much more flexible and faster design capabilities than a standard processor. Some examples are the use of FPGAs for object tracking [14], [15], target recognition [16], real-time stereo vision [17] and optical flow algorithms [18]. However, all these applications work with the information provided by the sensor, but do not modify the internal sensor configuration. Instead, we introduce a CMOS pixel photodetector array that can be externally configured in order to obtain a particular detection geometry and its associated detection pattern. The FPGA is used to program the sensor behavior and not just to process the information it provides.
In this work, we describe the specifications of the system configuration and the implementation of the algorithm that finds the center of the beam. The paper is organized as follows: in section II the proposed detector is introduced; in section III the search algorithm to be implemented in the FPGA is presented, which is described in detail in section IV; finally, in section V the conclusions are presented.

II. PHOTODETECTOR ARRAY PROPOSED DESIGN

The proposed photodetector is an array of hexagonal pixels, similar to a honeycomb. One of the major advantages of hexagonal arrays is that all six first neighbors are at the same distance. This, along with the simple fact that hexagons are "rounder" than squares, provides a better representation of circularly symmetric images than square pixels do. Each pixel of the array is described by coordinates x and y along two axes at an angle of 120°. An illustrative diagram is shown in Fig. 2, together with the representation of a particular detection pattern. The detection pattern representation is as follows: the pixels that have the same color must be interconnected (that is, they must act as a monolithic detector), and the different colors mean that a different gain factor is needed for each group of pixels. With this technique the system is able to form the desired detection pattern around the center of the beam, as was mentioned in section I. For these purposes, each pixel needs a configuration signal in order to know how to connect to its neighbors and, eventually, to the amplification system. The configuration signals are routed to every pixel by parallel lines, one per row, as can also be seen in Fig. 2.

To achieve the interconnection, each pixel has three switches that allow it to connect to three of its neighbors: the West (W), the NorthEast (NE) and the SouthEast (SE) switches, as shown in Fig. 3. In this way, each pixel can be connected with any of its neighbors. To read the current of each group of pixels, reading bitlines (BL) are laid out in one row out of five, as shown in Fig. 4. In each such row, some pixels (those with a black point in Fig. 4) can be connected to the bitline. Those pixels have an extra register that drives the connection switch, named "Bitline Connection" in Fig. 3. In this way, one pixel out of twenty-five (one pixel out of five from one row out of five) has the possibility of being connected to a BL. It is not necessary for our purpose to connect every pixel to a bitline. Once the array is configured, each group of pixels is connected to a bitline, and the current of each group can be read through the ADC by selecting the appropriate entry of the MUX.

Fig. 1. Diagram of an NDB encoder, along with the desired detector geometry and its implementation using hexagonal pixels.

Fig. 2. Scheme of the proposed photodetector array, the honeycomb array, with the representation of a particular detection pattern. The configuration lines of the pixels can also be seen.

Fig. 3. Diagram of the pixel interconnection system and how it looks in an array, with a bitline connection register.
Fig. 4. Diagram of the connections for the reading operation of the sensor.

III. SEARCH ALGORITHM

There are several types of NDB [19], but the best performance of the encoder is achieved with a zero-order Bessel NDB [9]. Fig. 5 shows the profile of the NDB that is used in the experimental setup to validate the proposed design [13]. It can be seen that the beam has circular symmetry, and its center is much brighter than any other part of it. Thus our goal is to find the most illuminated pixel.

The algorithm used to find the brightest pixel is illustrated in Fig. 6. Considering the disposition of the bitlines, at the beginning we can make a quick search to limit the zone of interest. First, each bitline is connected to all the pixels in its row and in the four rows below it, thereby collecting all the current in the horizontal stripe consisting of these five rows. In the figure, the stripes are delimited by thick borders and the points of connection to the bitlines by pink dots. The currents from the bitlines are then read one by one and compared, and the information of the most illuminated horizontal stripe is stored. This is called the "vertical search", because it gives information about the vertical position of the center of the beam. Then, a similar procedure is carried out in the "horizontal search", and the information of the most illuminated vertical stripe is saved. At the end of this procedure, the 5x5-pixel zone that contains the center of the beam is identified. It can be seen in the figure that the reading value of the stripe that contains the center of the beam is larger than the others.

Fig. 6. Illustration of the first search. (a) Search for the most illuminated horizontal stripe (vertical search). (b) Search for the most illuminated vertical stripe (horizontal search). (c) Brightest zone. (d) Most illuminated pixel. The numbers on the BL represent the reading values.
The images shown in Fig. 6 are illustrative, in order to show the process described above; the size of the pixels and the width of the stripes do not represent the real ones. In order to show quantitative results, a more realistic simulation was made. The results are shown in Fig. 7. The histogram shows the sum of the currents of each stripe as read by the corresponding bitline, in arbitrary units (AU). It can be seen that the current obtained from the stripe that contains the center of the beam is clearly higher than the others.

After this "brighter zone" is found, a pixel-by-pixel search is made. First, just one pixel, the one that has a connection to the bitline, is read and its value is stored. Then, the contiguous pixel is connected and the value of the bitline is read again. The new reading, which corresponds to the sum of the two connected pixels, is stored. Both values are subtracted in order to obtain the current of the last pixel. The coordinates and the reading value of the highest-current pixel are stored. This procedure is repeated over the 25 pixels of the area of interest, connecting one more pixel each time. At the end of the process, the value and coordinates of the most illuminated pixel are known.
Fig. 5. Representation of the NDB used. (a) Profile. (b) Bidimensional representation. Units in microns.

Fig. 7. Histogram of the sum of the currents over the stripes formed in the horizontal and vertical searches when the NDB impinges on the photodetector array.

IV. IMPLEMENTATION

A. Architecture

Fig. 8(a) shows a diagram of a System on Chip (SoC) implementation: the light sensor plus the IP-core. In this work the IP-core was implemented in an FPGA to be interfaced with the photodetector array. The proposed architecture is shown in Fig. 8(b). The pixel array and the ADC, as well as the remaining logic in the SoC, were simulated. How this simulation was performed is explained in section IV-B.

The IP-core that configures the sensor has been developed using VHDL and the ISE software, the Xilinx synthesis tool for Xilinx FPGAs. A Xilinx Spartan 3E FPGA, model XC3S500E, has been used for the synthesis and implementation of the IP-core. From the Xilinx ISE software reports, some metrics related to the design have been extracted. Regarding the timing parameters, the minimum period that can be used for the clock (maximum delay) is 9.084 ns, which means that the implementation can work at a maximum frequency of 110.084 MHz. Although this frequency implies that the sensor can be configured very quickly, this is not the most remarkable feature, because the configuration is only performed during the sensor initialization, just once every time the sensor is powered on. Table I shows other important parameters involved in the use of the IP-core.

As can also be seen in the table, the occupied area of the IP-core is extremely low (2% of the LUTs), so a bottleneck of the design, if there is one, could reside in the number of IOs needed, which depends on the size of the sensor (i.e. the number of needed bitlines). This issue can be sorted out by using a serial-parallel interface, which would increase the execution time of the algorithm. However, this would not be a problem, because the algorithm is only executed once at the beginning, as a start-up routine. The low area occupation also makes the design particularly suitable for SoC implementations. The best solution is a trade-off between occupied area and performance.
B. Testbench

A C program that emulates the photodetector array behavior was written, with the purpose of being used during the IP-core testbenching. Also, a VHDL module has been developed to represent the sensor; it interfaces with the C program and the IP-core. The data exchange between the VHDL code and the C code is performed through two pipes, named "from C to VHDL" and "from VHDL to C". This is represented in Fig. 9.

The VHDL module writes its data to the "from VHDL to C" pipe, which is read by the C program and used as input for the sensor emulator. After the sensed value is calculated, the C program writes the response to the "from C to VHDL" pipe, which is then read by the VHDL module. Every time new configuration data is sent to the sensor, this process is repeated and a new sensed value is obtained.

The testbench emulates an N by N photodetector array, which is represented by a matrix of the same size. Each value represents the amount of light impinging on the corresponding pixel. Pixels are arranged as explained in section II, and a scheme of how the array is emulated for the simulations is shown in Fig. 10.

Fig. 8. Diagrams of the architecture of the design. (a) IP-core SoC implementation. (b) Hardware implementation aimed at in this work.

Fig. 9. Data exchange interface between the C-emulated sensor and the VHDL IP-core.

Fig. 10. Scheme of the array used to represent the values read from the pixels in the simulations.

TABLE I
DEVICE UTILIZATION SUMMARY
Selected Device: 3s500efg320-4

  Number of Slices:            142 out of 4656   (3%)
  Number of Slice Flip Flops:   96 out of 9312   (1%)
  Number of 4 input LUTs:      260 out of 9312   (2%)
  Number of IOs:                29
  Number of bonded IOBs:        29 out of 232    (12%)
  Number of GCLKs:               1 out of 24     (4%)

C. Simulations

Simulations were made using ISim, the simulation tool of the ISE software package [20]. The search algorithm was exercised using different sensor values and configurations. As explained in section III, after this initialization process the center of the beam is found and the most illuminated pixel is identified. Fig. 11 shows the state of some relevant variables at the end of the search for an arbitrary array representation. The internal variables br_x and br_y, marked in red, indicate the x and y coordinates of the brightest pixel in the pixel array. This information remains available for the final configuration of the chip. The state register status indicates that the system is ready for the next stage, called sensor mode (sm, blue mark in Fig. 11).

For the evaluation of the system functionality, many configurations of light intensity distributions were analyzed. Fig. 12 illustrates the most relevant cases. Figs. 12(a) and 12(b) show the case of a distribution that represents a non-diffractive beam (NDB), centered and non-centered, respectively.
The case of the representation of an NDB with noise is shown in Fig. 12(c), and Fig. 12(d) shows the case of random noise. The hexagonal arrays shown in Fig. 12 correspond to the region of interest of the detector. The region covers an area of approximately 260 µm by 260 µm, discretized into hexagonal pixels with 7 µm sides. The NDB representations correspond to the beams used in [13].

In cases in which a strong NDB impinges on the sensor (Figs. 12(a) and 12(b)), the algorithm yields the right position coordinates, regardless of the location of the center of the beam. Fig. 12(c) shows the case of a weak NDB plus environmental noise. Despite the fact that the noise is at the same intensity level as the NDB rings, the system accurately retrieves the coordinates of the center. These results indicate that the system is adequate for our purpose. Finally, for arrays representing random distributions of light, such as the one shown in Fig. 12(d), the algorithm yields wrong results. This is because, regardless of the location of the most illuminated pixel, the algorithm first finds the most illuminated zone of 5 by 5 pixels, as was discussed in section III. However, this extreme simulated case of a random light distribution is not expected for a properly designed optical encoder based on an NDB.

Fig. 11. Illustration of the results of the simulations for an arbitrary array. The coordinates of the most illuminated pixel at the end of the simulation process are marked in red.

Fig. 12. Simulated illumination representations. (a) A centered NDB. (b) A non-centered NDB. (c) An NDB with noise. (d) A random distribution. The bars at the right side of each image are the colormaps of light intensities.

V. CONCLUSIONS

In this work a new approach to pixel design with an auto-configuration system has been presented. The system that auto-configures the hexagonal pixel array according to the position of the NDB center was implemented by means of a new algorithm and synthesized on an FPGA. For the prototype validation, a C program was written to represent the sensor behavior. These two achievements endow the hexagonal photodetector array with the auto-configuration abilities required for the functionality of the NDB encoder.

On the other hand, the generated IP-core shows good performance in terms of stability, reliability and operating frequency, and fully meets the requirements of the system. Its main advantage compared to existing approaches found in the literature is the extremely low area overhead, which makes it especially convenient for implementation in either commercial low-cost FPGAs or SoC designs. Future work will include the full implementation of the design, including the final configuration of the sensor for the operation mode, the physical implementation of the pixel array (now in process) and the external circuitry for the gain system.

ACKNOWLEDGMENT

The authors want to thank the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET). This work has been done in the context of research project ANPCyT PICT 2013-0951 and the Programa de Centros Asociados para el Fortalecimiento de Posgrados Brasil/Argentina (CAFP-BA).
REFERENCES

[1] E. O. Doebelin, Measurement Systems Application and Design, 5th Edition. The McGraw-Hill Companies, May 2003, ch. 4, pp. 327–335.
[2] "Encoder Products Company," accessed March 2016. [Online]. Available: http://encoder.com/applications/by-industry/
[3] "The Encoder Company | Impulse Automation Limited," accessed March 2016. [Online]. Available: http://www.theencodercompany.co.uk/applications
[4] L. Wronkowski, "Diffraction model of an optoelectronic displacement measuring transducer," Optics & Laser Technology, vol. 27, no. 2, pp. 81–88, Apr. 1995.
[5] D. Crespo, E. Bernabeu, and T. Morlanes, "Optical encoder based on the Lau effect," Optical Engineering, vol. 39, pp. 817–824, 2000.
[6] D. Crespo, J. Alonso, and E. Bernabeu, "Reflection optical encoders as three-grating moiré systems," Applied Optics, vol. 39, no. 22, pp. 3805–3813, 2000.
[7] A. Lutenberg, F. Perez-Quintián, and M. A. Rebollo, "Optical encoder based on a non diffractive beam," Applied Optics, vol. 47, pp. 2201–2206, 2008.
[8] A. Lutenberg and F. Perez-Quintián, "Optical encoder based on a non diffractive beam II," Applied Optics, vol. 48, pp. 414–424, 2009.
[9] ——, "Optical encoder based on a non diffractive beam III," Applied Optics, vol. 48, pp. 5015–5024, 2009.
[10] N. Rigoni, R. Lugones, A. Lutenberg, and J. Lipovetzky, "Design of a customized CMOS active pixel sensor for a non-diffractive beam optical encoder," in 6ta. Escuela Argentina de Micro-nanoelectrónica, Tecnología y Aplicaciones, Bahía Blanca, 2011, pp. 1–5.
[11] ——, "Programmable gain CMOS photodetector array for a non-diffractive beam optical encoder," in 7ma. Escuela Argentina de Micro-nanoelectrónica, Tecnología y Aplicaciones, Bahía Blanca, 2012, pp. 58–61.
[12] N. Calarco, F. P. Quintián, A. Lutenberg, and J. Lipovetzky, "Programmable gain CMOS photodetector array for a non-diffractive beam optical encoder II," in 8va. Escuela Argentina de Micro-nanoelectrónica, Tecnología y Aplicaciones, Bahía Blanca, 2013, pp. 62–65.
[13] F. P. Quintián, N. Calarco, A. Lutenberg, and J. Lipovetzky, "Performance of an optical encoder based on a nondiffractive beam implemented with a specific photodetection integrated circuit and a diffractive optical element," Applied Optics, vol. 54, no. 25, p. 7640, Sep. 2015.
[14] C. T. Johnston, K. T. Gribbon, and D. G. Bailey, "FPGA based remote object tracking for real-time control," in International Conference on Sensing Technology, 2005, pp. 66–72.
[15] S. Liu, A. Papakonstantinou, H. Wang, and D. Chen, "Real-time object tracking system on FPGAs," in Application Accelerators in High-Performance Computing (SAAHPC), 2011 Symposium on. IEEE, 2011, pp. 1–7.
[16] J. Jean, X. Liang, B. Drozd, and K. Tomko, "Accelerating an IR automatic target recognition application with FPGAs." IEEE Comput. Soc, 1999, pp. 290–291.
[17] P. Zicari, H. Lam, and A. George, "Reconfigurable computing architecture for accurate disparity map calculation in real-time stereo vision," in Proc. International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV13), 2013, pp. 3–10.
[18] J. Diaz, E. Ros, F. Pelayo, E. Ortigosa, and S. Mota, "FPGA-based real-time optical-flow system," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 2, pp. 274–279, Feb. 2006.
[19] J. Durnin, "Exact solutions for nondiffracting beams. I. The scalar theory," Journal of the Optical Society of America A, vol. 4, no. 4, p. 651, Apr. 1987.
[20] ISE Design Suite Overview. Accessed March 2016. [Online]. Available: http://www.xilinx.com/support/documentation/sw_manuals/xilinx11/ise_c_overview.htm
