
Anti Collision Mechanism in Vehicles
Submitted by:
Naeem Iqbal 2010-EE-092
Zohair Ali Sulahri 2010-EE-096
Mujahid Majeed 2010-EE-101
Usman Ahmed 2010-EE-109
Supervised by: Prof. Dr. Noor Muhammad Sheikh
Department of Electrical Engineering
University of Engineering and Technology Lahore
Anti Collision Mechanism in
Vehicles
Submitted to the faculty of the Electrical Engineering Department
of the University of Engineering and Technology Lahore
in partial fulfillment of the requirements for the Degree of
Bachelor of Science
in
Electrical Engineering.
Internal Examiner External Examiner
Director
Undergraduate Studies
Department of Electrical Engineering
University of Engineering and Technology Lahore
Declaration
We declare that the work contained in this thesis is our own, except where explicitly
stated otherwise. In addition, this work has not been submitted to obtain another degree
or professional qualification.
2010-EE-092:
2010-EE-096:
2010-EE-101:
2010-EE-109:
Date:
Acknowledgments
We would like to thank our parents and teachers for supporting us and especially Dr.
Noor Muhammad Sheikh for helping us throughout this project.
Dedicated to our parents and teachers whose unconditional support
has always been there for our studies.
Contents
Acknowledgments iii
List of Figures vi
List of Tables vii
Abbreviations viii
Abstract ix
1 Introduction 1
1.1 Obstacle Detection Techniques . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 IR Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Sonar Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Laser Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.4 Image Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Motivations and Problem Statement 4
2.1 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 Risky Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Labor Shortage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.3 Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3 Proposed Approach 6
4 Hardware Implementation and Design 8
4.1 Raspberry Pi Board . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.1.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.1.2 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4.1.2.1 Operating System . . . . . . . . . . . . . . . . . . . . . . 9
4.1.2.2 Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4.1.2.3 GPIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.1.2.4 Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.1.2.5 PWM Generation . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.3 Atmega16L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.4 Motor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.5 Motor Driver (H-Bridge) . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.5.1 Direction Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.5.2 Speed Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.6 Mechanical Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 Software Implementation 16
5.1 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.2 Implementation Using Simulink . . . . . . . . . . . . . . . . . . . . . . . . 17
5.3 Block Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
6 Conclusion and Future Directions 21
A Circuit Diagram 22
A.1 PWM Generation Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
A.2 H-Bridge Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
B Codes 24
B.1 Code for PWM Generation in ATMEGA16L . . . . . . . . . . . . . . . . 24
B.2 Code for ACM in Raspberry Pi . . . . . . . . . . . . . . . . . . . . . . . . 24
References 28
List of Figures
1.1 Working of IR Sensor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Sonar Sensor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Laser Scanning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 WHO Statistics About Road Accidents. . . . . . . . . . . . . . . . . . . . 4
3.1 Approach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4.1 Raspberry Pi Board. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4.2 Connections of Raspberry Pi and Atmega16L. . . . . . . . . . . . . . . . . 11
4.3 Logitech C210. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.4 Typical H-Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.5 Direction Control of H-Bridge. . . . . . . . . . . . . . . . . . . . . . . . . 13
4.6 H-Bridge Motor Driver Hardware. . . . . . . . . . . . . . . . . . . . . . . 14
4.7 Hardware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
5.1 Original and Edge Detected Image. . . . . . . . . . . . . . . . . . . . . . . 16
5.2 Edge Detection Using MATLAB. . . . . . . . . . . . . . . . . . . . . . . 17
5.3 Edge Detection Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.4 Block Diagram of ACM in MATLAB. . . . . . . . . . . . . . . . . . . . . 18
5.5 Block Diagram of Python Algorithm. . . . . . . . . . . . . . . . . . . . 19
5.6 Division of input image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.7 Canny Edge Detection Algorithm. . . . . . . . . . . . . . . . . . . . . . . 20
A.1 PWM Generator for H-Bridge. . . . . . . . . . . . . . . . . . . . . . . . . 22
A.2 H-Bridge Simplified Circuit. . . . . . . . . . . . . . . . . . . . . . . . . 23
A.3 H-Bridge Complete Circuit. . . . . . . . . . . . . . . . . . . . . . . . . . . 23
List of Tables
4.1 Raspberry Pi Model B Specifications . . . . . . . . . . . . . . . . . . . 10
4.2 GPIO Truth Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.3 Motors Standard Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 12
Abbreviations
ACM Anti Collision Mechanism
PWM Pulse Width Modulation
USB Universal Serial Bus
RISC Reduced Instruction Set Computer
HDMI High Definition Multimedia Interface
UART Universal Asynchronous Receiver Transmitter
GPIO General Purpose Input Output
Abstract
We have used image processing techniques to avoid obstacles with our prototype. Images
are taken from a camera mounted at the front of the vehicle, and this stream of images
is fed to the Raspberry Pi for processing; based on the results, the Raspberry Pi
maneuvers the prototype so that the obstacle is avoided. This is a very basic form of
an autopilot system for vehicles.
Chapter 1
Introduction
Both humans and animals have different naturally built-in systems to avoid obstacles in
front of them. For example, bats use echolocation to detect obstacles ahead of them
without having any vision. Humans use their eyes to get a direct view of the obstacle,
which the brain processes to decide how to avoid it. Our prototype uses the very same
basic methodology, with a camera and a processor acting as replacements for the eyes
and brain. Mirroring these living organisms, there are many different types of obstacle
detection techniques, which are described in Section 1.1.
1.1 Obstacle Detection Techniques
Four famous obstacle detection techniques are:
IR (Infra-Red) Sensor
Sonar Sensor
Laser Scanning
Image Processing
1.1.1 IR Sensor
The IR sensor technique uses an infra-red emitter and a receiver. It emits IR rays
intermittently, and from the difference between the emitted and the received rays, the
presence of an obstacle can be approximated. However, it is a very crude technique and
has a very limited range. Moreover, completely spanning the front side would require a
whole array of IR sensors, which would increase the complexity of the module. Figure
1.1 shows the working principle of these sensors.
Figure 1.1: Working of IR Sensor.
1.1.2 Sonar Sensor
Sonar sensors emit ultrasonic waves and have a range of 0-255 inches; they too send
and receive waves to detect the obstacle. The problem with these sensors remains the
same: they have a limited range and give no idea of the size of the obstacle. Moreover,
the performance of a sonar sensor also depends on environmental conditions such as
surface condition and humidity. A sonar sensor is shown in Figure 1.2.
Figure 1.2: Sonar Sensor.
1.1.3 Laser Scanning
The laser scanning technique is better than the former two techniques, as it has a broad
range and can also determine the size of the obstacle. But to obtain information about
the size of the obstacle, we have to install an array of laser beams and optical sensors,
which would be both inefficient and expensive.
Figure 1.3: Laser Scanning.
1.1.4 Image Processing
With this technique, we can both detect the obstacle and determine its size. Range is
not a concern, because as long as the obstacle is in the video, no matter how small or
how large, it will be detected. Moreover, it does not require any receiver; a video
camera alone suffices.
Analyzing the above-mentioned techniques, image processing turns out to be the most
efficient. Hence, in our project we have implemented an edge-detection-based image
processing algorithm to detect obstacles. We use inexpensive and easily available
resources to lower the cost of this project.
Chapter 2
Motivations and Problem
Statement
Road accidents are one of the leading causes of human fatalities in most countries
around the globe. According to the WHO, 1.24 million people die each year in road
accidents, and more than 90% of these accidents are due to human error. Figure 2.1
shows some WHO statistics about deaths in road accidents [1]. These accidents could be
greatly reduced if an assistance system were installed in the vehicle. Many researchers
are trying to produce such modules. We aim to design a video-camera-based system that
automatically avoids the obstacles in front of the vehicle.
Figure 2.1: WHO Statistics About Road Accidents.
2.1 Motivations
There are many other factors that contribute to the development of automated vehicles.
Some of them are:
Risky Jobs
Labor Shortage
Reliability
2.1.1 Risky Jobs
Some jobs involve high levels of risk and cannot be done without casualties; in coal
mining, for example, large numbers of people die every year, and there are many other
such jobs.
2.1.2 Labor Shortage
Many developed countries do not have the manpower to drive their industry. Such
unmanned vehicles can be a suitable alternative.
2.1.3 Reliability
Machines do not need rest and deliver consistent efficiency around the clock.
2.2 Problem Statement
Considering the survey report, it is quite desirable to reduce the number of casualties
occurring all over the world. These casualties can be reduced by installing an extra
driver-assistance module in the vehicle. This module should work as an assistant to
the driver: if the driver somehow fails to avoid an obstacle, the module takes over and
prevents the collision of the vehicle; hence the name anti-collision module. We have
designed a prototype of such a module which uses a camera as its obstacle sensor, which
is quite inexpensive compared with the price of the vehicle.
Chapter 3
Proposed Approach
The purpose of our project is to avoid collisions by gathering information about the
surrounding environment. We do this with the help of image processing, which is an
algorithmic technique and requires a central processing unit. We use a Raspberry Pi
board as the central processing unit because it is small, cheap compared with other
boards, and fulfills our requirements. The Raspberry Pi board is the main unit of our
project and takes all the decisions. It supports a large number of operating systems,
as discussed in Table 4.1; we use the Raspbian operating system. We use webcams instead
of expensive cameras to capture images and send them to the Raspberry Pi board for
processing. All the components, e.g. the Raspberry Pi, motor drivers, PWM generation
circuits and camera, are placed on a mechanical structure as shown in Figure 4.7.
Figure 3.1 shows a simple block diagram of our proposed approach.
Figure 3.1: Approach.
The camera captures the images, and the image processing algorithm running on the
Raspberry Pi converts each image to grayscale and applies the Canny edge detector.
After thresholding, the image is divided into two parts, left and right. These images
are converted into arrays using the NumPy module, and the nonzero entries in both
halves are counted. If the left and right halves have the same number of zero (edge-
free) pixels, the Raspberry Pi sends an interrupt signal to the PWM generator so that
it varies the PWM in such a way that the prototype moves forward; if the left half has
more zero pixels than the right, the prototype turns left, and if it has fewer, it
turns right, and this cycle continues. We have also set a threshold value; below it,
no interrupt signal is given to the motor driver circuit and the prototype continues
its motion. We generate four PWMs with an ATmega16L microcontroller, two for each
motor. Speed can be adjusted by varying the duty cycle of the PWM. To prevent any
damage to the microcontroller from the high voltages and currents at the motor end,
the PWM signal is fed to the motor driver (H-bridge) circuit through a TLP250
optocoupler, so that if anything goes wrong on the motor side, such as a short
circuit, it cannot reach the microcontroller. In short, an infinite loop runs on the
Raspberry Pi which takes an image as input from the webcam, reaches a safe decision,
then takes the next image, and so on.
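The decision step of this loop can be sketched in Python with NumPy (an illustrative sketch; the helper name and the 100-pixel dead zone mirror the listing in Appendix B but are otherwise assumptions):

```python
import numpy as np

def decide_direction(edges, threshold=100):
    """Split a binary edge image into left/right halves, count edge
    pixels in each, and choose a maneuver (illustrative dead zone)."""
    h, w = edges.shape
    left = np.count_nonzero(edges[:, : w // 2])
    right = np.count_nonzero(edges[:, w // 2 :])
    if abs(left - right) < threshold:
        return "forward"  # halves roughly balanced: keep going
    # Steer away from the half with more edge pixels (less free space).
    return "right" if left > right else "left"

# Obstacle edges concentrated in the left half -> steer right.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[50:200, 10:150] = 255
print(decide_direction(frame))  # -> right
```

Counting edge pixels and steering away from the busier half is equivalent to the "count the zeros and steer toward the emptier half" description above.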
Chapter 4
Hardware Implementation and
Design
Before going into design details, let's have a brief introduction to all the components
used in this project. Since we use image processing for the ACM, our system consists
of the following components:
Raspberry Pi Board
Camera
Atmega16L Micro-controller
Motor
Motor Driver (H-Bridge)
Mechanical Structure
4.1 Raspberry Pi Board
Raspberry Pi is a credit-card-sized single-board computer developed in the UK by the
Raspberry Pi Foundation.
4.1.1 Hardware
Raspberry Pi is manufactured in two board congurations:
Model A
Model B
We have used Model B, shown in Figure 4.1, which has the advantages of being available
in our university's lab and offering camera support, two USB ports, and a 10/100
Ethernet controller so that it can be used in headless mode. Table 4.1 shows the
complete specifications of Model B.
Figure 4.1: Raspberry Pi Board.
4.1.2 Operation
4.1.2.1 Operating System
Many operating systems have been designed for the Raspberry Pi. We have used the
Raspbian operating system, because it is Linux-based and so has a Linux GUI which we
are already familiar with.
4.1.2.2 Camera
The Raspberry Pi Foundation has manufactured a camera module for video capture, but we
have used a USB webcam instead of the Raspi Cam to achieve our goal, because a USB cam
is much cheaper (the Raspi Cam costs almost $40 including shipment). However, the
results would have been far better if we had used the Raspi Camera module.
We have used a Logitech C210 for our project. This camera connects through a USB port
instead of the CSI camera connector on the board. The RPI-supported resolutions for
this camera are:
320 x 240
640 x 480
SoC                    Broadcom BCM2835 (CPU, GPU, DSP, SDRAM)
CPU                    700 MHz ARM1176JZF-S core
GPU                    Broadcom VideoCore IV, OpenGL ES 2.0,
                       1080p30 Full HD high-profile H.264
Memory                 512 MB (shared with GPU)
USB 2.0 Ports          2
Video Input            A CSI input connector for the RPF-designed camera module
Video Outputs          Composite RCA, HDMI
Audio Outputs          3.5 mm jack, HDMI
On-Board Storage       SD / MMC / SDIO card slot
On-Board Network       10/100 Mbit/s Ethernet
Low-Level Peripherals  GPIO pins, SPI, I²C, UART
Power Rating           700 mA (3.5 W)
Power Source           5 V via Micro-USB or GPIO header
Size                   85.60 mm × 56 mm
Operating System       Arch Linux ARM, Debian GNU/Linux, Gentoo, Fedora,
                       FreeBSD, NetBSD, Plan 9, Raspbian OS, RISC OS,
                       Slackware Linux
Table 4.1: Raspberry Pi Model B Specifications
4.1.2.3 GPIO
We have used the GPIO pins of the RPI to generate interrupts, which are received by
the microcontroller to generate PWMs for the motors. The GPIO truth table for the
operations defined on the microcontroller is given in Table 4.2.
RPI Pin 17 /  RPI Pin 18 /  RPI Pin 4 /   Function
ATmega Pin 38 ATmega Pin 39 ATmega Pin 40
0             0             1             Forward
0             1             1             Turn Left
1             0             0             Turn Right
0             0             0             Stop
Table 4.2: GPIO Truth Table (BCM-mode RPI pins paired with ATmega16 pins)
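The truth table can be expressed as a lookup from maneuver to pin levels (a sketch; the pin ordering follows Table 4.2, and the `gpio` parameter stands in for the RPi.GPIO module so the logic can be exercised off-board):

```python
# Maneuver -> (pin 17, pin 18, pin 4) output levels, as in Table 4.2.
PIN_STATES = {
    "forward": (False, False, True),
    "left":    (False, True,  True),
    "right":   (True,  False, False),
    "stop":    (False, False, False),
}

def drive(maneuver, gpio=None):
    """Write the three GPIO levels for a maneuver.  On real hardware,
    `gpio` is the RPi.GPIO module; any object with an
    `output(pin, level)` method works, so the logic runs off-board too."""
    p17, p18, p4 = PIN_STATES[maneuver]
    if gpio is not None:
        gpio.output(17, p17)
        gpio.output(18, p18)
        gpio.output(4, p4)
    return p17, p18, p4

print(drive("left"))  # -> (False, True, True)
```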
4.1.2.4 Display
Instead of using a costly HDMI display device for our project, we used the RPI in
headless mode, with "head" implying an HDMI display device.
In headless mode, we connect our Raspberry Pi and laptop to the same LAN (local area
network) and open an SSH (Secure Shell) session to the RPI from the laptop. Using this
SSH approach, we have the RPI's window on our laptop, making it easy to work with. This
is one of many reasons why no Windows operating system was designed for the RPI.
4.1.2.5 PWM Generation
The RPI has built-in support for generating PWM. However, we need four PWMs for the
proper working of the two H-bridge-driven motors of the robot, whereas the RPI can
generate only one. We have therefore used an ATmega16L to generate four PWMs, which
are varied depending on the inputs at pins 38, 39 and 40 of the controller, as shown
in Figure 4.2. These inputs of the ATmega16L are driven by the GPIO output pins of the
RPI, as shown in Table 4.2.
Figure 4.2: Connections of Raspberry Pi and Atmega16L.
4.2 Camera
We are using the Logitech C210 camera, shown in Figure 4.3. Its specifications are:
1.3 Mega Pixels Resolution
Auto Focus
Adjustable Base
USB 2 Interface
The purpose of this camera is to capture images and feed them to the Raspberry Pi
board so that a safe decision can be made on the basis of the designed algorithm.
4.3 Atmega16L
The ATmega16L is a low-power CMOS 8-bit microcontroller based on the AVR enhanced RISC
architecture. By executing powerful instructions in a single clock cycle, the
ATmega16L achieves throughputs approaching 1 MIPS per MHz, allowing the designer to
optimize power consumption versus processing speed. The schematic is shown in
Appendix A.
We generate four PWMs, at pins 4 (OC0), 18 (OC1B), 19 (OC1A) and 21 (OC2). Pins 4 and
19 are connected to motor B, while pins 18 and 21 are connected to motor A. We use the
ATmega16L because the Raspberry Pi can generate only a single PWM, which cannot serve
our purpose.
Figure 4.3: Logitech C210.
4.4 Motor
Parameter        Value
Rated Voltage    24 V
No-Load Current  180 mA
No-Load Speed    6000 rev/min
Load Current     1000 mA
Load Speed       4500 rev/min
Output Power     13.5 W
Stall Current    4000 mA
Load Torque      300 gf·cm
Stall Torque     1200 gf·cm
Table 4.3: Motor's Standard Parameters
4.5 Motor Driver (H-Bridge)
An H-bridge consists of four switching elements with the motor at the center, in an
H-like configuration as shown in Figure 4.4. It allows controlling both the speed and
the direction of the motor. Motors are mostly controlled by a microcontroller, which
provides the instructions to the motors but cannot provide the power required to drive
them. The H-bridge circuit takes the microcontroller's small electrical signals and
amplifies them into the high-power output needed for the mechanical motor. The
H-bridge and the microcontroller are linked by an optocoupler, a component that
transfers electrical signals between two isolated circuits using light; opto-isolators
prevent high voltages from affecting the system receiving the signal.
4.5.1 Direction Control
Most DC motors can rotate in two directions depending on how the battery is connected
to the motor. An H-bridge circuit allows a DC motor to be run in either direction with
a low-level logic input signal. Here the switches represent the power MOSFETs used for
switching. In Figure 4.4, if switches 1 and 4 are closed, the motor rotates in the
forward direction; if switches 2 and 3 are closed instead, the motor changes direction
and rotates in reverse, as shown in Figure 4.5. So by using a simple H-bridge we can
easily control the direction of rotation of the motors.
Figure 4.4: Typical H-Bridge.
Figure 4.5: Direction Control of H-Bridge.
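The switch logic above can be modeled as a small function (an illustrative sketch; the leg pairing assumed here follows the usual H-bridge layout, with S1/S3 forming the left leg and S2/S4 the right leg):

```python
def motor_direction(s1, s2, s3, s4):
    """Direction implied by the four H-bridge switches (True = closed).
    Closing S1+S4 drives current one way through the motor, S2+S3 the
    other way; closing both switches of one leg shorts the supply."""
    if (s1 and s3) or (s2 and s4):
        return "shoot-through"  # illegal state: supply shorted through one leg
    if s1 and s4:
        return "forward"
    if s2 and s3:
        return "reverse"
    return "coast"  # no conducting path: motor free-wheels

print(motor_direction(True, False, False, True))  # -> forward
```

In the real circuit these switch states are set by the PWM signals through the gate drivers, never both switches of a leg at once.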
4.5.2 Speed Control
Speed can be controlled using PWM. For that purpose, we generate four PWMs with the
ATmega16L microcontroller, as shown in Appendix A.1. The main principle is to control
the power delivered by varying the duty cycle, so that the conduction time to the load
is controlled. The main advantage of PWM is that power loss in the switching devices
is very low: when a switch is off there is no current, and when it is on there is
almost no voltage drop across it, so losses are negligible. If we used an analog input
to control the speed of the motor, it would not produce significant torque at low
speeds, because the magnetic field created by the small current would be too weak to
turn the rotor. PWM current, in contrast, creates short pulses of magnetic flux at
full strength, which can turn the rotor at extremely slow speeds. Microcontrollers
offer simple commands to vary the duty cycle and frequency of the PWM control signal.
PWM is also used in communications to take advantage of the higher noise immunity of
digital signals. So by using these microcontroller-generated pulses we can increase
the efficiency, accuracy, and thus the reliability of the system. The H-bridge circuit
consists of the following ICs:
IR2103 (Half-Bridge Driver): an eight-pin high-voltage, high-speed power
MOSFET and IGBT driver with dependent high- and low-side referenced output
channels.
IRF3205: an advanced HEXFET power MOSFET from International Rectifier,
using advanced processing techniques to achieve extremely low on-resistance
per silicon area.
1N4148: a high-speed switching diode fabricated in planar technology and
encapsulated in a hermetically sealed leaded glass package.
TLP250: an optocoupler.
Appendix A.2 shows the circuit diagram, and Figure 4.6 shows the hardware of the
H-bridge motor drivers.
Figure 4.6: H-Bridge Motor Driver Hardware.
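The duty-cycle principle can be illustrated numerically (a sketch with illustrative values, not the project's actual PWM settings):

```python
def average_voltage(v_supply, duty_cycle):
    """Mean voltage delivered to the motor by a PWM signal.  The supply
    is either fully on or fully off, so the average scales linearly
    with the fraction of each period spent on (the duty cycle)."""
    return v_supply * duty_cycle

# A 24 V supply (the motor's rated voltage) switched at 50% duty
# delivers 12 V on average, roughly halving the speed.
print(average_voltage(24.0, 0.5))  # -> 12.0
```

Because the switch is always either fully on or fully off, almost no power is dissipated in the switch itself, which is the efficiency advantage described above.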
4.6 Mechanical Structure
The mechanical structure consists of a steel sheet and Teflon tyres; all the other
circuitry is placed on it, as shown in Figure 4.7.
Figure 4.7: Hardware.
Chapter 5
Software Implementation
We have mounted a camera at the front of the prototype to get a live video feed into
the processing module, i.e. a laptop. Image processing can be done using the two
options below:
C, Java or Python language
MATLAB
5.1 Comparison
In C, Java or Python, we have to start from scratch, creating our own methods and then
using them. Although faster than MATLAB, this is much more complex, so we first used
MATLAB's built-in SIMULINK library to get started.
Using MATLAB, we perform edge detection on the images, which gives us a binary matrix
containing 1s wherever there is an edge and 0s otherwise. From these processed images
we determine the obstacle. One such processed image is shown in Figure 5.1, where the
white portions indicate edges and the black portions indicate that there are none.
Figure 5.1: Original and Edge Detected Image.
5.2 Implementation Using Simulink
For a start, we used the SIMULINK library of MATLAB. In Simulink, we made use of the
Image and Video Processing Toolbox. A simple block diagram is given in Figure 5.2.
Figure 5.2: Edge Detection Using MATLAB.
To proceed further, these binary matrices had to be stored in particular variables and
forwarded to the next M-file, which determines whether an obstacle is present or not.
This interlinking could not be achieved in SIMULINK, so we wrote our own M-files which
take the input and process it. Moreover, the unnecessary portion of the video was
removed to reduce unnecessary computation, i.e. only the portion of the video covering
the frontal view of the vehicle was kept.
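Cropping a frame to the frontal view amounts to simple array slicing (a sketch; the exact crop bounds used in the project are not specified, so the ones here are assumptions):

```python
import numpy as np

def crop_frontal(frame, top=60, bottom=240, left=40, right=280):
    """Keep only the portion of a 240x320 frame covering the vehicle's
    frontal view; rows and columns outside it only add computation."""
    return frame[top:bottom, left:right]

frame = np.zeros((240, 320), dtype=np.uint8)  # stand-in for a camera frame
roi = crop_frontal(frame)
print(roi.shape)  # -> (180, 240)
```

Every later stage (edge detection, counting) then runs on the smaller region, which reduces per-frame work proportionally to the discarded area.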
5.3 Block Diagrams
Figures 5.3 & 5.4 show the block diagrams of the edge detection algorithm and the
complete ACM in MATLAB, respectively.
Figure 5.3: Edge Detection Algorithm.
Figure 5.4: Block Diagram of ACM in MATLAB.
Chapter 4. Software Implementation 19
After successful implementation of ACMin MATLAB,We moved to Python and OpenCV
because they are faster and laptop can be replaced by a small board(Raspberry Pi). Fig-
ure 5.5 & 5.6 shows the block diagrams of ACM Algorithm using Pyhton and division
of input image respectively. Due to excessively noisy results in the background subtrac-
tion algorithm, we had to switch to Canny Edge Detection algorithm to obtain Binary
Images that only include obstacles as edges and no background noise(oor cracks) are
detected as obstacles. Figure 5.7 shows the block diagrams of ACM using Canny Edge
Detection algorithm.
Figure 5.5: Block Diagram of Python Algorithm.
Figure 5.6: Division of input image.
Figure 5.7: Canny Edge Detection Algorithm.
Chapter 6
Conclusion and Future Directions
Appendix A
Circuit Diagram
A.1 PWM Generation Circuit
Figure A.1: PWM Generator for H-Bridge.
A.2 H-Bridge Circuit
Figure A.2: H-Bridge Simplified Circuit.
Figure A.3: H-Bridge Complete Circuit.
Appendix B
Codes
B.1 Code for PWM Generation in ATMEGA16L
B.2 Code for ACM in Raspberry Pi
import cv2.cv as cv
import time
import numpy as np
import RPi.GPIO as GPIO

cap = cv.CaptureFromCAM(0)

# Configure the three GPIO outputs that signal the ATmega16L (BCM numbering).
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
GPIO.setup(17, GPIO.OUT)
GPIO.setup(18, GPIO.OUT)

cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_WIDTH, 320)

start_time = time.time()
i = 0
while True:
    img = cv.QueryFrame(cap)
    grey = cv.CreateImage(cv.GetSize(img), img.depth, 1)
    cv.CvtColor(img, grey, cv.CV_RGB2GRAY)

    # Split the 320x240 grayscale frame into quadrants.
    ul = grey[0:120, 0:160]      # upper left
    cv.ShowImage("crop", ul)
    ll = grey[120:240, 0:160]    # lower left
    cv.ShowImage("crop2", ll)
    ur = grey[0:120, 160:320]    # upper right
    lr = grey[120:240, 160:320]  # lower right

    # Background subtraction: difference upper and lower halves on each side.
    cv.AbsDiff(ll, ul, ul)
    cv.AbsDiff(lr, ur, ur)
    array_ul = np.array(ul)
    array_ur = np.array(ur)

    # Binarize the difference images: differences above 70 count as obstacle.
    array_ul = (array_ul > 70).astype(np.uint8)
    array_ur = (array_ur > 70).astype(np.uint8)

    left = np.count_nonzero(array_ul)
    right = np.count_nonzero(array_ur)

    # Steer away from the busier half; the 100-pixel band is a dead zone.
    if abs(left - right) < 100:
        print "forward"
        GPIO.output(17, False)
        GPIO.output(18, False)
        GPIO.output(4, True)
    elif left - right > 100:
        print "right"
        GPIO.output(17, True)
        GPIO.output(18, False)
        GPIO.output(4, False)
    else:
        print "left"
        GPIO.output(17, False)
        GPIO.output(18, True)
        GPIO.output(4, True)

    if cv.WaitKey(10) == 27:  # Esc key exits
        break
    i += 1
Due to excessively noisy results from the background subtraction algorithm, we
switched to the Canny edge detection algorithm to obtain binary images that include
only obstacles as edges, so that background noise (floor cracks) is not detected as
an obstacle.
import cv2.cv as cv
import time
import numpy as np
import RPi.GPIO as GPIO

cap = cv.CaptureFromCAM(0)

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
GPIO.setup(17, GPIO.OUT)
GPIO.setup(18, GPIO.OUT)

cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_WIDTH, 320)

start_time = time.time()
while True:
    frame = cv.QueryFrame(cap)
    grey = cv.CreateImage(cv.GetSize(frame), frame.depth, 1)
    cv.CvtColor(frame, grey, cv.CV_RGB2GRAY)

    # Gaussian blur suppresses floor cracks and sensor noise before Canny.
    blur = cv.CreateImage(cv.GetSize(grey), cv.IPL_DEPTH_8U, grey.channels)
    cv.Smooth(grey, blur, cv.CV_GAUSSIAN, 5, 5)

    canny = cv.CreateImage(cv.GetSize(blur), blur.depth, blur.channels)
    cv.Canny(blur, canny, 30, 150, 3)

    # Split the edge image into left and right halves.
    l_img = canny[0:240, 0:160]
    cv.ShowImage("left", l_img)
    r_img = canny[0:240, 160:320]
    cv.ShowImage("right", r_img)

    left = np.count_nonzero(np.array(l_img))
    right = np.count_nonzero(np.array(r_img))

    # Steer away from the half containing more edge pixels.
    if left - right == 0:
        GPIO.output(17, False)
        GPIO.output(18, False)
        GPIO.output(4, True)
    elif left - right > 0:
        GPIO.output(17, True)
        GPIO.output(18, False)
        GPIO.output(4, False)
    else:
        GPIO.output(17, False)
        GPIO.output(18, True)
        GPIO.output(4, True)

    c = cv.WaitKey(50)
    if c == 27:  # Esc key exits
        exit(0)
References
[1] WHO, Global Status Report on Road Safety 2013. http://www.who.int/violence_injury_prevention/road_safety_status/2013/en/index.html
