
Hardware-in-the-Loop Testing of Computer Vision Based Driver Assistance Systems
Mirko Nentwig
Hardware-in-the-Loop Functional Testing
Audi AG
85045 Ingolstadt, Germany
mirko.nentwig@audi.de
Marc Stamminger
Computer Graphics Group
Friedrich-Alexander-Universität Erlangen-Nuremberg
91058 Erlangen, Germany
stamminger@cs.fau.de
Abstract: In this paper we investigate the applicability of real-time generated computer graphics for mono camera lane and vehicle detection algorithms. First we introduce a newly developed hardware-in-the-loop simulator and describe two solutions for feeding in the synthetic images. Then we focus on the creation of the camera and environment model used for the tests: we first discuss the requirements imposed by the lane and vehicle detection algorithms and describe different fields of application. Finally we demonstrate the applicability on three scenarios derived from real test drives and compare the results to reality.
I. INTRODUCTION
A. Motivation
In modern production cars, camera systems are increasingly used for the realisation of advanced driver assistance and safety systems. The reasons are low costs, high robustness and multi-purpose applicability. Computer vision systems are able to detect and classify objects in the traffic area; furthermore, they can estimate descriptive measures, e.g. dimension, distance, speed, acceleration, movement direction and optical flow. This information is used to realise functions that support the driver in reducing the severity of accidents or avoiding them altogether. Some commonly known examples are: lane keeping assistance, emergency braking, road sign recognition, pedestrian detection and distance control.
The most critical aspect when developing advanced driver assistance systems (ADAS) for production cars is the functional safety of these electronic systems: how can it be assured that the systems work correctly in the specified situations? Therefore the ability to test these systems in critical situations and/or with system failures is most important.
B. State-of-the-Art
Nowadays, computer vision based ADAS are mainly validated on real road test drives or via recorded video sequences. The first approach offers the best test conditions and allows the testing of feedback control systems. But in the near future it will not be possible to test all system states in a satisfactory manner without extensive technical, temporal and economic efforts [1] [2], due to the growing functional complexity and autonomy of the assistance and active safety functions.
The application of recorded video sequences is a compromise between what is technically feasible and the effort necessary for the integration of computer vision based ADAS. This approach has the limitation that no feedback control systems can be tested, because the recorded video cannot be changed according to the current vehicle state.
In recent years, a paradigm shift towards a simulation based testing process for computer vision systems has begun [3] [4] [5] [6] [7] [8]. Such simulation systems are a promising alternative to real road test drives or recorded video sequences. It is possible to achieve satisfactory results with acceptable effort and manageable costs. In conjunction with a hardware-in-the-loop test bench it is also possible to evaluate feedback control systems in real-time. Furthermore, these simulated test drives allow testing near to or beyond the limits of the systems, without the risk to driver and vehicle that a real test drive entails. The ADAS can be tested without a real vehicle prototype, yet under reproducible test conditions. Last but not least, it is easy to test a large number of different ECU variants.
The first known application of computer graphics for the validation of an image processing system used for automotive applications was reported in the year 2000 [9]. In this publication a lane detection algorithm was evaluated with different noise patterns acting as disturbances. The testing of a lane keeping assistance system with synthetic images and a hardware-in-the-loop simulator is described in [10] [11]. In [12] the utilisation of a simulation environment for the pre-adjustment of a vision based lane tracker is documented. In [13] the authors evaluated the performance of a pedestrian detection system on synthetic video; the system parameters were optimised using the ground-truth data obtained from the simulation. The real-time hardware-in-the-loop integration of functions for surface, lane, vehicle and road sign recognition, including a sensor data fusion between video, RADAR and GPS, was presented in [14].
C. Contribution
The applicability of real-time generated computer graphics for mono camera lane and vehicle detection algorithms is investigated. First we introduce a newly developed hardware-in-the-loop simulator and describe two solutions for feeding in the synthetic images. Then we focus on the creation of the camera and
environment model used for the tests: we first discuss the requirements imposed by the lane and vehicle detection algorithms and describe different fields of application. Finally we demonstrate the applicability on three scenarios derived from real test drives and compare the results to reality.
II. HARDWARE-IN-THE-LOOP TESTING
In contrast to vehicle road tests, the electronic control units (ECUs) and actuators are connected to a real-time vehicle dynamics and powertrain simulator (cycle time of 1 ms). As during a real road test drive, the electronic components receive sensor and bus information depending on the current vehicle state and driving actions. The ECUs are able to influence the vehicle state via real or simulated actuators, e.g. brakes, steering or engine control. This allows testing complete feedback control systems under safe and reproducible conditions. The hardware-in-the-loop test bench used for testing computer vision based driver assistance systems is depicted in figure 1. As can be seen, the test system is based upon a conventional hardware-in-the-loop simulator, extended by the components necessary for the simulation of the environment outside the vehicle. The traffic and environment simulation supplies the vehicle dynamics model with data about the course of the road. Further, it controls the actions of the other traffic objects and is responsible for the scenario control.
Fig. 1. Overview: Hardware-in-the-Loop simulation
The simulated test vehicle is controlled by a driver model within the virtual environment. The model is able to perform deterministic driving manoeuvres according to previously specified scenarios. Sensors for the perception of the environment outside the vehicle, e.g. front/rear radar, ultrasound, cameras and GPS, are simulated by different models connected to the traffic simulation. These generate coherent sensor data for the corresponding electronic control units, depending on the vehicle position.
In the case of optical cameras there are two common ways of feeding synthetic sensor data into the system. The first is to install the vehicle camera in front of a computer display and to calibrate it carefully to achieve an almost parallel alignment of the display plane and the camera image plane. To prove the applicability of this setup, an error analysis was carried out in [15], assuming that the real camera closely follows the pin-hole model. The author shows that small inaccuracies in the calibration are negligible. In the case of an optical transmission, further effects might influence the camera system's overall performance, such as lens blur or lens distortion. Since real cameras suffer from motion blur, a discrete image generation process may cause significant ghost images if the camera captures a new image while the display is being updated.
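As an illustration of such a calibration check, the sketch below (assuming OpenCV and a chessboard pattern rendered on the display; pattern and square sizes are arbitrary example values, not parameters of our setup) estimates the homography between the display plane and the camera image plane. For a perfectly parallel alignment the perspective terms of the homography vanish.

```python
import cv2
import numpy as np

PATTERN = (9, 6)     # inner chessboard corners shown on the display (example)
SQUARE_PX = 80       # square size in display pixels (example)

def alignment_error(camera_frame):
    """Return a scalar misalignment indicator, or None if the pattern is lost."""
    gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    # Display-plane coordinates of the same corners, row by row, in pixels.
    display_pts = np.array([[x * SQUARE_PX, y * SQUARE_PX]
                            for y in range(PATTERN[1])
                            for x in range(PATTERN[0])], dtype=np.float32)
    H, _ = cv2.findHomography(display_pts, corners.reshape(-1, 2))
    # The perspective terms h20 and h21 are zero for a display plane that is
    # exactly parallel to the image plane; their magnitude indicates tilt.
    return abs(H[2, 0]) + abs(H[2, 1])
```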
A preferable alternative is to feed the synthetic image data directly into the camera ECU: a digital real-time interface can be used to write the image information directly into the image memory.
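A minimal sketch of this feed path, assuming a fixed resolution and bit depth, and using a memory-mapped file purely as a stand-in for the proprietary real-time interface (the renderer stub is likewise a placeholder):

```python
import mmap
import numpy as np

WIDTH, HEIGHT = 1280, 960          # assumed imager resolution
FRAME_BYTES = WIDTH * HEIGHT * 2   # assumed 16 bit per pixel

def render_next_frame():
    # Placeholder for the simulator's real-time rendering step.
    return np.zeros((HEIGHT, WIDTH), dtype=np.uint16)

with open("framebuffer.bin", "w+b") as f:
    f.truncate(FRAME_BYTES)
    buf = mmap.mmap(f.fileno(), FRAME_BYTES)
    for _ in range(100):           # write e.g. 100 frames at the camera rate
        buf.seek(0)
        buf.write(render_next_frame().tobytes())
    buf.close()
```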
III. CAMERA AND ENVIRONMENT MODEL
In this section we describe in detail the developed real-time camera and environment model, which is used to generate realistic synthetic images within the simulator framework.
A. Requirements
Knowledge of the requirements that the lane and vehicle detection algorithms impose on the simulation is necessary to build a reliable sensor model.
Lane Detection Algorithms: In the last two decades a huge number of lane detection algorithms has been developed [16], using different sensors for lane perception, such as camera or LIDAR. In the case of a camera based system, the process of lane detection is structured as shown in fig. 2, following [16].
Fig. 2. Flowchart lane detection algorithm
Almost every algorithm performs an edge based operation for the extraction of relevant features from the input images, but utilizes further features for robust lane detection. Hence the model of the road and lanes will directly influence the algorithm's performance. Furthermore, the internal road model assumes a real and drivable road. Therefore the simulation must cover the following sensor inputs: the camera image and internal vehicle sensor information.
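As a rough illustration of such an edge based extraction stage (a generic OpenCV sketch, not the algorithm under test; all thresholds are placeholder values):

```python
import cv2
import numpy as np

def lane_candidates(frame_bgr):
    """Extract straight lane-marking candidates from a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    edges[: edges.shape[0] // 2, :] = 0   # keep the lower half (road surface)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else lines.reshape(-1, 4)
```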
The appearance of the road may vary due to different kinds of markings: solid or broken lines, Botts' dots, circular reflectors, physical barriers or nothing at all. Furthermore, these may have different colours or suffer from aging effects. The next important aspect is the road surface itself: it may consist of light or dark pavement, concrete, cobblestones, etc., or combinations of different materials. For a simple simulation it might be enough to render a dark plane representing the road pavement, with white lane markings on it. More sophisticated models should incorporate the structure of the tarmac and, depending on the current testing objective, the previously mentioned variants of surfaces and markings.
The internal vehicle sensors are stimulated by the hardware-in-the-loop simulator depending on the current vehicle state.
Vehicle Detection Algorithms: Vehicle detection is a more challenging task in computer vision than lane detection. The classification process is mostly split into two tasks: hypothesis generation and hypothesis verification [17], see fig. 3.
Fig. 3. Flowchart vehicle detection algorithm
Knowledge-based approaches for hypothesis generation may use features such as symmetry, colour or luminance, the shadow below the car, corners, vertical/horizontal edges, or textures. It is also possible to use the optical flow for vehicle detection. In the next step, hypotheses are verified via a template based approach, meaning some kind of analytical model is used for matching. Alternatively, appearance based similarity measures such as Haar wavelets are used; these are commonly trained with real vehicle images.
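The two-stage scheme can be sketched as follows; the cascade file name is a hypothetical stand-in for a classifier trained on real vehicle images, and a knowledge-based hypothesis generation stage would pre-select regions instead of scanning the whole image:

```python
import cv2

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")  # hypothetical model

def detect_vehicles(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Hypothesis generation and verification collapsed into the cascade's
    # sliding-window evaluation; returns bounding boxes (x, y, w, h).
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
```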
We can summarize the used features in three groups:

- Shape: defined by the geometrical polygon models of the simulated vehicles.

- Movement: the objects should move in the same way as in reality.

- Visual Appearance: in computer graphics the visual appearance is defined by the shading equation. If a qualitative model is sufficient, the vehicle surface can be described by a diffuse Lambertian material; this means the surface does not contain any disturbances by highlights from light sources. Otherwise the Phong shading equation is a simple but sufficient approximation (a sketch of its evaluation follows this list); if realistic shading containing reflections on the coating is desired, see [18] for a good approximation. The most common real-time graphics APIs only support local illumination, which means every object is shaded independently, so the vehicle shadow needs to be computed separately. A good and fast approximation is to use planar shadows to achieve the dark shadow below the vehicle. If more realistic shadows are desired, the application of shadow mapping techniques is a good solution [19].
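A minimal evaluation of the Phong equation for a single surface point is sketched below; the material constants are arbitrary example values, and dropping the specular term yields the purely diffuse Lambertian case:

```python
import numpy as np

def phong(n, l, v, base_color, light_color, k_d=0.8, k_s=0.4, shininess=32.0):
    """Phong shading at one point: n = normal, l = light dir, v = view dir."""
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    r = 2.0 * np.dot(n, l) * n - l                 # mirrored light direction
    diffuse = k_d * max(np.dot(n, l), 0.0)         # Lambertian term
    specular = k_s * max(np.dot(r, v), 0.0) ** shininess  # highlight term
    return diffuse * base_color * light_color + specular * light_color

# Example: sun high above, viewer in front of a red car body panel.
rgb = phong(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.2, 1.0]),
            np.array([0.0, -1.0, 1.0]), np.array([0.8, 0.1, 0.1]),
            np.array([1.0, 1.0, 1.0]))
```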
B. Field of Application
Since a model is always an abstraction of reality, it is designed based upon the desired field of application. Basically we distinguish two different use cases, which may overlap:

Qualitative: In this case the simulation is used to test functional aspects of the ADAS that depend on the image processing algorithms, for example whether the lane keeping assistance reacts correctly if the vehicle crosses the marking of a recognised driving lane. For this, the model must provide at least the basic features of the relevant objects to ensure that the image processing system works properly. It can be suitable to strengthen relevant features in comparison to reality.

Quantitative: The quantitative testing focuses on the performance of the image processing algorithms in different situations, e.g. the detection performance on different kinds of lane markings. For evaluation, the estimated outputs can be compared to ground-truth information obtained from the simulation. Furthermore, it is conceivable to evaluate the performance with respect to different environment conditions and other disturbances. This requires realistic and validated models of the environment and disturbances.
In figure 4, different qualities of road modelling are depicted. The simplest and most idealised model is shown in the top left and simply consists of a single coloured pavement with white markings. This simulation quality is suitable for qualitative tests. A more realistic appearance of the same scene is depicted on the right: the road surface contains structured tarmac and some light lane grooves. In the lower section, a synthetic and the real image of a test drive on a wet road are depicted. In both images it can be seen that reflections of the surrounding environment and skylight on the wet road lead to noticeable artefacts; due to modelling inaccuracies there are some differences.

Fig. 4. Different kinds of road models versus reality. Top/Left: single coloured pavement with lane markings; Top/Right: structured pavement and structured lane markings; Bottom/Left: wet pavement with reflections; Bottom/Right: real scene

Variations of realistic car models are shown in comparison to a real vehicle in figure 5. On the left, the Audi R8 is modelled with a 3d model of more than 20000 vertices. The vehicle is shaded with a simple diffuse surface model neglecting reflections; the shadow is simulated with a drop shadow texture. In the middle, for comparison, a more accurately shaded model with reflections and a realistic paint simulation is shown. Furthermore, it contains a more advanced shadow simulation depending on the solar altitude.

Fig. 5. Different kinds of car models versus reality. Left: model with diffuse light and drop shadow only; Middle: model with more sophisticated paint simulation and advanced shadow; Right: real car
C. Realized Model
The realized camera and environment model generates realistic camera images at real-time speed; the frame rate is higher than or equal to the sampling frequency of the camera. Due to the utilisation of the traffic simulation, the movement of all traffic objects including the ego vehicle is causal. This means there are no jumps in object movement, which would lead to a wrong optical flow. Since in reality no object contains perfectly sharp edges, we use anti-aliasing techniques for smoothing. The road and vehicles are described by detailed models sufficient for achieving realistic results in comparison to reality. The modelling of the natural environment is a challenging and complex task: these objects and their attributes are hard to describe by computer graphics due to their complex geometry. Furthermore, it is commonly known that there are always differences in modelling quality between simulation and reality. Therefore we approximate these elements by inhomogeneous textures with similar intensity and average luminance, or by 3d models with similar optical properties.
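A minimal sketch of such a texture approximation, assuming the target mean luminance has been measured from reference footage (the value below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)

def inhomogeneous_texture(size, target_mean, contrast=0.08):
    """Gray texture with a prescribed average luminance in [0, 1]."""
    tex = rng.normal(loc=target_mean, scale=contrast, size=(size, size))
    return np.clip(tex, 0.0, 1.0)

meadow = inhomogeneous_texture(512, target_mean=0.35)  # e.g. measured mean
```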
IV. APPLICATION EXAMPLE
We demonstrate the capabilities of the developed sensor model in three test scenarios based upon real road drives, reconstructed with the method proposed in [20].
A. Scenario Generation
First, the information necessary for the scenario reconstruction is extracted from the test drive record: the course of the road, information about the ego-vehicle, the trajectories of the other traffic and relevant information about the environment, see fig. 6. In the next step the road itself is modelled. The recorded GPS positions are mapped onto a real road database; the road description then serves as input for generating a 3d polygon road model [21]. The road pavement is simulated via a tarmac texture with the same intensity and a similar structure. The road model serves as the reference object for the later modelling steps; therefore all objects are positioned relative to it.

Fig. 6. Scenario generation process

As already mentioned before, we use some simplifications for the creation of the environment model. Surrounding meadows and road shoulders are simulated by inhomogeneous textures with the same intensity; trees and bushes are represented by 3d models of similar shape. In contrast, traffic objects are simulated by highly detailed models of the vehicles that belong to the respective scenario. Finally, the models for the environment and the vehicles are parametrised: the solar altitude is set according to the date and time of day at which the real test drive was executed.
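As an illustration of this parametrisation step, the sketch below computes an approximate solar elevation from date, time and latitude with a standard coarse formula (equating clock time with local solar time); the timestamp and latitude are example values, and the simulator's actual sun model may differ:

```python
import math
from datetime import datetime

def solar_elevation_deg(when: datetime, latitude_deg: float) -> float:
    """Coarse solar elevation (degrees) from local time and latitude."""
    day = when.timetuple().tm_yday
    # Approximate solar declination over the year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))
    # Hour angle: 15 degrees per hour from local solar noon.
    hour_angle = 15.0 * (when.hour + when.minute / 60.0 - 12.0)
    phi, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
    return math.degrees(math.asin(math.sin(phi) * math.sin(d)
                                  + math.cos(phi) * math.cos(d) * math.cos(h)))

# e.g. an assumed afternoon test drive near Ingolstadt (latitude ~48.8 N):
elevation = solar_elevation_deg(datetime(2010, 9, 15, 15, 30), 48.8)
```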
B. Scenarios
The generated scenarios are shown in fig. 7. The real test drive is depicted on the left side and the corresponding simulation output on the right side.

Description:

- Scenario 1: A vehicle and lane detection algorithm is investigated under foggy weather conditions. The ego-vehicle closes up on a car ahead until the vehicle is detected.

- Scenario 2: A lane detection algorithm is tested on a wet road with heavy reflections.

- Scenario 3: A vehicle and lane detection algorithm are tested on a cloudy day with diffuse lighting only.
Fig. 7. Scenarios overview

C. Results and Evaluation

The simulation quality is measured by an image-wise comparison of the lane and vehicle detection algorithms' output variables y. Hence we do not rate the performance of the computer vision systems in these scenarios; instead, we use a similarity measure:
s = \begin{cases}
      y_{Sim} / y_{Real} - 1.0 & \text{if } y_{Sim} / y_{Real} \le 2.0 \\
      1.0                      & \text{if } y_{Sim} / y_{Real} > 2.0
    \end{cases}

where y_{Sim} and y_{Real} are the measured outputs in simulation and reality. For evaluation, the similarity measure s was plotted versus the simulation progress [%].
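For a single pair of outputs, this measure can be transcribed directly:

```python
# Direct transcription of the similarity measure s defined above:
# s = 0 for a perfect match, negative when the simulation output is
# smaller than the real one, and clamped to +1.0 (i.e. +100%) when the
# simulation output exceeds twice the real value.
def similarity(y_sim: float, y_real: float) -> float:
    ratio = y_sim / y_real
    return ratio - 1.0 if ratio <= 2.0 else 1.0
```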
a) Vehicle Detection: The verification of the simulation as a suitable tool for the validation of vehicle detection is done by comparing the estimated relative position and dimension to the outputs in reality. The results for scenarios 1 and 3 are plotted in figure 8 and figure 9.
Fig. 8. Vehicle distance prediction
In the first scenario the measurement vehicle closes up on a vehicle ahead on a foggy day. At a simulation progress of 62.7% the vehicle is detected in reality, in the simulation at 64.1%; this results in a short delay in detection. Subsequently, the deviation in the distance prediction is on average 9.3%, with a maximum of 25.9%. In the third scenario the ego-vehicle is following a car ahead. The average deviation from the real video is 5.1%. In the simulation interval from 28% to 43% the measure drops down to -29%; in conjunction with this, the dimensions of the vehicle are estimated too small, see figure 9. During the simulation, the difference to the real reference is 10%.

Fig. 9. Vehicle dimension prediction
b) Lane Detection: In the case of the lane detection algorithm we verified the simulation quality by comparing the lane foresight distance and the lane width of the ego lane. In fig. 10 and 11 the results of the validation process are depicted.

Fig. 10. Lane foresight distance

In every scenario the lane foresight shows a high variance in comparison to reality. In the first scenario the average deviation is only about 3.19%, with a maximal deviation of 56.48%. In the rainy road scenario the average deviation of the foresight distance is about 18.8%, and the maximal deviation is 100.0%. This huge deviation is caused by the fact that in the real test drive the centre lane marking served as reference. In the third scenario the average deviation is about 11.4% with a maximum deviation of 44.4%. In all three simulation scenarios the positions of and gaps between the markings do not correspond perfectly to reality, which leads to high deviations when comparing single images.

Fig. 11. Estimated lane width

In figure 11 the lane width estimation for the scenarios is shown. In the first and second scenario the deviation is about 0.6% and 0.7%, respectively. The deviation of the lane width prediction in the third scenario is 11.8% with a maximum deviation of 34.5%.
V. CONCLUSIONS AND OUTLOOK
In this publication we presented the prospects of a hardware-in-the-loop simulator for the testing of vehicle and lane detection algorithms. For this purpose, a simulation model according to the driver assistance systems' requirements was created. We demonstrated the performance of the camera model in three scenarios derived from real test drives. These scenarios were modelled and parametrised utilising the video and GPS data records of the test drives.

We achieved a system performance similar to reality. Nevertheless, there are still some inaccuracies in the simulation environment. Therefore, future emphasis should be put on developing improved weather models and algorithms for the automatic adjustment of the scenario parametrisation in accordance with a reference data source. Furthermore, the model should be evaluated on a greater number of test drives and the modelling process should be improved.
REFERENCES
[1] Daimler, "Automatisiertes Fahren präzisiert Prüfmethodik," Daimler Technicity, Tech. Rep., 2010.
[2] W. Mielich and K. Golowko, "Erprobung von Assistenzsystemen zur Minderung von Auffahrunfällen," in Automotive Engineering Partner, vol. 06, 2009, pp. 32-35.
[3] C. Berger and B. Rumpe, "Hesperia: Framework zur szenario-gestützten Modellierung und Entwicklung sensor-basierter Systeme," in Workshop Automotive Software Engineering, 2009.
[4] F. Schmidt and E. Sax, "Funktionaler Softwaretest für aktive Fahrerassistenzsysteme mittels parametrierter Szenario-Simulation," in Proceedings of the Workshop Automotive Software Engineering, 2009.
[5] R. Preiss, C. Gruner, S. T., and H. Winter, "Mobile vision: developing and testing of visual sensors for driver assistance systems," in Advanced Microsystems for Automotive Applications, 2004, pp. 95-108.
[6] D. Tellmann, "Echtzeitfähige Simulationsumgebung zum Test von Fahrerassistenzsystemen," in VDI-Berichte Nr. 1931, 2006, pp. 271-280.
[7] K. von Neumann-Cosel, M. Dupuis, and C. Weiss, "Virtual Test Drive: provision of a consistent tool-set for [d,h,s,v]-in-the-loop," in Proceedings of the Driving Simulation Conference, Monaco, 2009.
[8] D. Gruyer, S. Glaser, and B. Monnier, "SiVIC, a virtual platform for ADAS and PADAS prototyping, test and evaluation," in FISITA 2010 World Automotive Congress, Budapest, Hungary, 2010.
[9] K. A. Redmill, J. I. Martin, and Ü. Özgüner, "Virtual environment simulation for image processing sensor evaluation," in Proceedings of Intelligent Transportation Systems, Dearborn, MI, 2000, pp. 64-70.
[10] S.-O. Müller, "Integration vernetzter Fahrerassistenz-Funktionen mit HiL für den VW Passat CC," in Automotive Engineering Partner, vol. 06, 2009, pp. 60-65.
[11] F. Coskun, Ö. Tuncer, M. E. Karsligil, and L. Güvenç, "Real time lane detection and tracking system evaluated in a hardware-in-the-loop simulator," in 13th International IEEE Conference on Intelligent Transportation Systems, 2010, pp. 1336-1343.
[12] K. von Neumann-Cosel, M. Nentwig, D. Lehmann, J. Speth, and A. Knoll, "Preadjustment of a vision-based lane tracker using Virtual Test Drive within a hardware-in-the-loop simulator," in Proceedings of the Driving Simulation Conference, Monaco, 2009.
[13] J. Bossu, D. Gruyer, J. Smal, and J. Blosseville, "Validation and benchmarking for pedestrian video detection based on a sensors simulation platform," in Intelligent Vehicles Symposium, 2010.
[14] M. Miegler, R. Schieber, M. Nentwig, and T. Ganslmeier, "Virtuelle Erprobung," in Automobilelektronik A8 Sonderausgabe, 2010, pp. 95-108.
[15] C. Burdea, C. Kreis, R. Greul, and B. Torsten, "Eine Kamera zur Spurerkennung in einer virtuellen Umgebung," in 15. VDI-Tagung Erprobung und Simulation in der Fahrzeugentwicklung, 2010.
[16] J. McCall and M. Trivedi, "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation," IEEE Transactions on Intelligent Transportation Systems, vol. 7, 2006, pp. 20-37.
[17] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-710, May 2006.
[18] C. Oat, N. Tatarchuk, and J. Isidoro, Layered Car Paint Shader. Wordware Publishing Inc., 2003.
[19] J. M. Hasenfratz, M. Lapierre, N. Holzschuch, and F. Sillion, "A survey of real-time soft shadows algorithms," Computer Graphics Forum, vol. 22, pp. 753-774, 2003.
[20] M. Nentwig and M. Stamminger, "A method for the reproduction of vehicle test drives for the simulation based evaluation of image processing algorithms," in 13th International IEEE Conference on Intelligent Transportation Systems, 2010, pp. 1307-1312.
[21] T. Ganslmeier, M. Nentwig, K. von Neumann-Cosel, and E. Roth, "Vehicle environment simulation using realistic road networks for predictive driver assistance systems," in FISITA 2010 World Automotive Congress, Budapest, 2010.