

Article
Vision-Based Robot Following Using PID Control
Chandra Sekhar Pati 1, * and Rahul Kala 2
1 Computer Science Engineering, Rajiv Gandhi University of Knowledge and Technologies,
Nuzvid PIN-521202, India
2 Robotics and Artificial Intelligence Laboratory, Indian Institute of Information Technology,
Allahabad PIN-211012, India; rkala001@gmail.com
* Correspondence: patichandrasekhar411@gmail.com; Tel.: +91-961-853-6506

Academic Editor: Manoj Gupta


Received: 15 February 2017; Accepted: 7 June 2017; Published: 10 June 2017

Technologies 2017, 5, 34; doi:10.3390/technologies5020034; www.mdpi.com/journal/technologies

Abstract: Applications such as shopping assistance, porter services, and assistive robotics require
a robot to continuously follow a human or another robot. This paper presents a mobile robot
following another tele-operated mobile robot based on a PID (Proportional–Integral–Derivative)
controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a
follower robot. The master robot is manually controlled, and the follower robot is programmed to
follow the master robot. For the master robot, a Bluetooth module receives the user's commands
from an Android application; these commands are processed by the master robot's controller to
move the robot. The follower robot receives the image from the Kinect sensor mounted on it and
recognizes the master robot. The follower robot identifies the x, y positions by employing the
camera and the depth by using the Kinect depth sensor. By identifying the x, y, and z locations of
the master robot, the follower robot finds the angle and distance between the master and follower
robots, which is given as the error term of a PID controller. Using this, the follower robot follows
the master robot. A PID controller is based on feedback and tries to minimize the error.
Experiments are conducted on two indigenously developed robots: one depicting a humanoid and
the other a small mobile robot. It was observed that the follower robot was easily able to follow the
master robot using well-tuned PID parameters.

Keywords: robot following; master robot; follower robot; RGBD; PID control

1. Introduction
Robots are increasingly being used in home and office environments for a variety of tasks. One of
the application areas of these robots is as ‘personal assistants’. The robots may be used by the human
master to take commands as the human moves, to carry items of interest for the human, and to
complete manipulation tasks that the human specifies, etc. Common applications include robots as
porters to carry luggage, robots to transport goods from one place to another, and robots to help the
human in a supermarket or a shopping mall, etc. All of these applications require a robot to follow a
human. Sometimes, the robot may be required to follow another robot instead.
The master robot is an Android- and Bluetooth-based robot designed by us using an Arduino
and the HC-05 Bluetooth module. The Social Mobile Autonomous Research Test bed (SMART) is a mobile
robot that was designed in IIIT Allahabad for social interaction. The robot has a differential wheel
base for mobility, two controllable arms for human interaction, a vision system, and a speaker set. The
current capability of the robot includes a tour of the laboratory when starting from a fixed location,
without tracking humans. This project is a step towards making SMART a robot assistant that is
capable of following people. As the first step, this paper proposes a mechanism wherein SMART is
made to follow another robot. The paper demonstrates the capability of SMART to follow a master


robot, which can be extended to following humans by developing a human recognition system [1,2].
Another motivation is from swarm robotics, where slave robots do the work and follow the master
robots. The main motivation of this project is to use this knowledge in SMART.
The slave robot uses 3-D vision to detect the master robot, using a Kinect sensor that obtains a
depth image along with a monocular camera image known as the RGB-D image. The Kinect measures
the depth by a triangulation process [3]. The RGB-D images are well suited to solve the ego-motion (3D
motion of a camera within an environment) estimation problem of an agent [4]. Modern day robots are
primarily vision driven. Computer Vision essentially deals with image processing techniques for noise
removal, the detection of the area of interest, and recognition of the object of interest [5]. Further vision
techniques can be used to know the position of an object or to localize the object of interest. In our
case, the follower robot senses the master robot with the help of the Kinect sensor. In this project,
computer vision plays a major role in detecting, recognizing, and localizing the master robot through
the 3D camera of the slave robot.
Every sensor of the robot reports the readings in its own frame of reference. Similarly, every
actuator of the robot obtains inputs in its own frame of reference. Transformations enable conversions
between the frames of reference. In this paper, the follower robot finds the master robot, but the located
position needs to be transformed from the image coordinate frame to Kinect’s coordinate frame, and
ultimately, the follower robot’s frame of reference. This is done based on transformation techniques
with the help of the Kinect calibration matrix.
The problem of robot control relates to the ability to guide the robot such that the traced trajectory
of the robot is as close as possible to the desired trajectory. Closed loop control systems are widely
used, which obtain feedback in the form of an error signal and send corrective signals to the robot such
that the error is reduced.
In this paper, we use an RGB image of the master robot taken from the Kinect mounted on the
slave robot for detection and depth values for finding the distance between the master and slave robots.
Both the master and the follower are mobile robots, so the distance between them varies continuously.
The follower has to maintain a minimum distance from the master and its orientation towards the
master, which is achieved through RGBD image processing. The follower robot follows the master based on
the detections in the RGB image and the depth.
A lot of research has been carried out in the field of robot following, especially in the field of
differential wheel robots. A low computational cost method for terrestrial mobile robots that uses a
laser scanner for following mobile objects and avoiding obstacles was presented by Martinez et al. [6].
The authors demonstrated a simple technique that employed a laser scanner for object following, path
tracking, and obstacle avoidance that was integrated in the outdoor mobile robot Auriga-α.
Breivik and Fossen [7] presented a novel guidance-based path following approach for wheeled
mobile robots. In this paper, the authors developed guidance laws at an ideal, dynamics-independent
level. The robot followed a path based on these guidance laws. The authors presented simulation
results for a unicycle-type WMR (differential wheel robot).
Jamkhandikar et al. [8] have demonstrated the efficient detection and tracking of an object in real
time using color. Here, the authors took images in real time from the webcam and then applied
Euclidean color filtering to each image. Using this approach, the moving object's color was segmented
in the image. After gray scaling and binarizing, the object was covered by a contour and displayed on
the computer. Scale and rotation invariance, as well as faster processing, were the main advantages
of this algorithm.
The implementation of a proportional integral derivative (PID) controlled mobile robot was
presented in the work of Islam et al. [9]. A motor does not change its rpm linearly with pulse width
modulation. They used a PID control loop to solve this problem.
Using an optical sensor, they ascertained the speed of the DC motor of the differential drive robot,
identified the error, and implemented the PID controller. They controlled the speed of the DC motors
with pulse width modulation values from 0 to 255 using an Arduino Uno board.

An Android Phone Controlled Robot Using Bluetooth [10], Bluetooth-Based Android Controlled
Robot [11], and BlueBO: Bluetooth Controlled Robot [12] are a few robots which are controlled by
Android and Bluetooth. In the above three papers, the robots connect to Android through the Bluetooth
module so that the user can directly control the robot with an Android mobile application. This Android
application sends the user-given commands to the Bluetooth module in the robot and, according to
those, the robot moves.

Currently in the literature, all of the high level behaviors of robots are displayed on mobile
robots that come with a rich set of sensors and less noisy actuators, with a heavy price tag. Even if
the problems of service robotics are solved using these robots, the major problem is that they will
remain too expensive for most customers, while the benefits will not match the costs. Hence, the
motivation behind this work is to enable cheap indigenously developed robots which display nearly
the same behaviors, while coming at a significantly lower cost. The sensing may not be rich or the
sensors deployed may be highly noisy. In this direction, what is displayed in the manuscript is the
ability to follow another robot, which could be a person or an object. The same behavior is displayed
in the literature for mobile robots using high resolution lidars, stereo cameras, and IMUs, with off-board
computation in the form of a cluster. The aim is not to beat the performance of these algorithms,
but to show a similar performance using far less sensing and computing. It can be seen that a lot of
research also exists for such low cost robots. Extensive research is conducted for primitive problems
like following a line, following a pre-defined trajectory, following a light source or other easily sensible
features, and using proximity sensors to wander or move towards a direction, etc. However, all of this
research and its easy extensions do not scale up to the problem of autonomously navigating the robot
for applications at the level of service robotics. Such low cost indigenously built robots face errors
at every stage and therefore, the robots are much harder to control, which is the challenge focused upon
in this paper. The novelty is to be able to tune and use a PID controller in highly noisy conditions, with
minimal feedback. This is a major step towards the use of low cost robots for sophisticated behaviors.
2. Methodology

This section discusses the overall approach. First, we have to move the master robot, and the
follower robot then follows the master robot. Figure 1 shows the working principle of the tele-operated
master robot. Figure 2 shows the total working of the follower robot.

Figure 1. Motion of the tele-operated master robot.


Figure 2. Motion of the follower robot.

As is shown in Figure 1, the master robot receives instructions through Bluetooth communication
from the Android mobile by the user. If the received commands are valid, the master robot is moved by
the actuators. If no command is received from the user, the robot stops.
The aim of the follower robot is to follow the master robot by recognizing it through color
filtering. After finding the master, the follower calculates the master robot's x, y, and z coordinates
from the RGBD Kinect image [13]. Here, x, y, and z are the coordinates in the reference frame of the
Kinect sensor, which is mounted on the follower robot (SMART). If the z value is more than a certain
threshold, using the x, y, and z differential values, the wheel velocities are found through the PID
controller. Using these velocities, the follower robot follows the master robot.

2.1. Tele-Operating the Master Robot
The master robot is the tele-operated robot which is designed based on Arduino and the HC-05
Bluetooth module. An Android app connects to the HC-05 Bluetooth module. The information
received through Bluetooth is read by the Arduino board. The Arduino board compares the
received values with the predefined teleoperation commands and sends inputs to the L293D motor
driver. The motor driver amplifies the signals and gives motion to the motors. Since we are using an
HC-05 Bluetooth module, the master controlling range is approximately 10 m. The working philosophy
is given in Figure 3a. The master robot is covered with green colored paper, as shown in Figure 3b, so
that it can be recognized by the follower robot by using color filtering. According to the environment,
it is possible to change the color of the master robot.
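As an illustrative sketch of this command flow, single-character drive commands could be pushed to the HC-05 over a Bluetooth serial link as below. The port name, baud rate, and the F/B/L/R/S command set are assumptions made for illustration, not the exact protocol of our Android application.

```python
# Hedged sketch of the tele-operation command flow. The serial port name,
# baud rate, and single-character command set are illustrative assumptions.
import time

import serial  # pyserial

COMMANDS = {"forward": b"F", "backward": b"B",
            "left": b"L", "right": b"R", "stop": b"S"}

with serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1) as bt:
    bt.write(COMMANDS["forward"])  # drive the master robot forward
    time.sleep(2.0)                # let it move for two seconds
    bt.write(COMMANDS["stop"])     # stop, as when no command is received
```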
Figure 3. (a) Tele-operation of the master robot; (b) The master robot.

2.2. Vision
The vision module is a part of the follower robot and is responsible for recognizing and locating
the master robot. The master robot is assumed to be of a single color, not otherwise present in the
environment. This is achieved by wrapping the master robot with colored paper. A color detection
filter is used, which is a median filter whose ranges are calibrated prior to the start of the
experimentation. The master robot is recognized as the single largest block, with a size greater than a
certain threshold. The center of the area of the block is taken as the center of the robot. Let <u, v> be
the detected master robot in the camera's frame of reference. This needs to be used to control the
robot using the angle to the master in the follower robot's frame of reference. The derivation of that
angle is based on transformations.
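A minimal OpenCV sketch of this detection step is given below. The HSV range for the green wrapping paper and the area threshold are assumed calibration values, standing in for the ranges calibrated before experimentation.

```python
# Sketch of the color-filter detection of the master robot. The HSV range
# and minimum blob area are assumed values, calibrated per environment.
import cv2
import numpy as np

def detect_master(bgr_frame,
                  lower=(45, 80, 80), upper=(75, 255, 255), min_area=500):
    """Return the pixel center <u, v> of the largest green blob, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    mask = cv2.medianBlur(mask, 5)   # median filtering, as described above
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # single largest block
    if cv2.contourArea(blob) < min_area:        # size threshold
        return None
    m = cv2.moments(blob)                       # center of area of the block
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```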
Let the calibration matrix of the Kinect be given by Equation (1).

$$C = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \quad (1)$$
where $f_x$ and $f_y$ are the focal lengths with respect to the X-axis and Y-axis, and $c_x$ and $c_y$ represent
the center of the projection with respect to the X-axis and Y-axis.
The image coordinates are $[u\; v\; 1]^T$, denoting the position of the master robot in the image
coordinate axis system. The position of the master in the Kinect's frame of reference is $[x\; y\; z\; 1]^T$.
The conversion is given by Equation (2), up to the projective scale z, which is directly obtained as the
depth value from the Kinect.

$$z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \quad (2)$$
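Inverting Equation (2) with the measured depth recovers the 3-D position in the Kinect frame. A small sketch follows, where the intrinsic values are illustrative placeholders rather than our Kinect's calibrated parameters.

```python
# Inverse of Equation (2): pixel (u, v) plus the Kinect depth reading z
# give (x, y, z) in the Kinect frame. Intrinsics are assumed placeholders.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point (assumed)

def back_project(u, v, z):
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z
```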

As shown in Figure 4, consider a coordinate axis system $X_F Y_F Z_F$ centered at the follower
robot, with the $Z_F$ axis facing the direction of motion of the robot, $X_F Z_F$ as the ground plane, and $Y_F$
being located vertically above. Let the master be located at a direction of θ in this coordinate system.
Let the coordinate axis system of the follower robot be centered at the Kinect, since the Kinect is
horizontally mounted on the follower robot. Practically, there will be a small translation factor between
the coordinate axis system of the Kinect and the center of mass of the follower robot, which barely
affects the steering mechanism, while the speed control has an equivalent parameter that can be tuned.
Figure 4. Calculation of the angle to the master robot.


The calculation of the angle to the master robot is given by Equations (3)–(6). The superscript F
denotes the follower (Kinect) frame of reference, while the subscript L stands for the leader robot.

$$\tan \theta = \frac{x_L^F}{z_L^F} \quad (3)$$

$$u_L^F = f_x \frac{x_L^F}{z_L^F} + c_x \quad (4)$$

$$\tan \theta = \frac{u_L^F - c_x}{f_x} \quad (5)$$

$$\theta = \tan^{-1}\left(\frac{u_L^F - c_x}{f_x}\right) \quad (6)$$
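In code, Equation (6) is a single expression; only the pixel column u and the intrinsics $f_x$ and $c_x$ are needed, so the steering angle is independent of the measured depth. A sketch with the same assumed intrinsics:

```python
import math

def bearing_to_master(u, fx=525.0, cx=319.5):
    """Angle theta of Equation (6); fx and cx are assumed intrinsics."""
    return math.atan((u - cx) / fx)
```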

The follower moves so as to keep θ as close as possible to 0, and so as to maintain a constant
distance from the master robot.

2.3. PID Control
The control of the robot may be divided into a steering control and a speed control. First, the
steering control is discussed. The steering control uses a Proportional–Integral–Derivative (PID)
controller, which is a control loop feedback mechanism. In PID control, the current output is based
on the feedback of the previous output, which is computed so as to keep the error small. The error is
calculated as the difference between the desired and the measured value, which should be as small as
possible. A correction is applied whose numeric value is based on the sum of three terms, known as
the proportional term, integral term, and derivative term. Such a control scheme is used to control the
differential wheel drive follower mobile robot, which is a highly nonlinear multi-input/multi-output
system. Using PID, the velocities of two differential wheels are found. The derivation formulas of
the two velocities are given in Equations (7)–(11). Here $\omega_P$, $\omega_I$, and $\omega_D$ represent the proportional,
integral, and derivative terms, respectively.

$$\omega_P = k_P\, \theta(t) \quad (7)$$

$$\omega_I = k_I \sum_{\tau=0}^{t} \theta(\tau) \quad (8)$$

$$\omega_D = k_D\, \left(\theta(t) - \theta(t-\Delta)\right) \quad (9)$$

$$\omega = \omega_P + \omega_I + \omega_D \quad (10)$$

$$\omega = k_P\, \theta(t) + k_I \sum_{\tau=0}^{t} \theta(\tau) + k_D\, \left(\theta(t) - \theta(t-\Delta)\right) \quad (11)$$

where ω is the angular speed or the steering control input, $k_P$ is the proportional constant, $k_I$ is
the integral constant, $k_D$ is the derivative constant, and θ is the angle between the follower and
master robots.
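A discrete-time sketch of the steering law of Equation (11) is shown below; folding the sampling interval into the gains is an implementation convenience assumed here, not something specified above.

```python
class SteeringPID:
    """Discrete PID on the bearing error theta, per Equation (11).
    The sampling interval is folded into k_i and k_d (an assumption)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.sum_theta = 0.0    # running sum for the integral term
        self.prev_theta = 0.0   # previous error for the derivative term

    def step(self, theta):
        self.sum_theta += theta
        d_theta = theta - self.prev_theta
        self.prev_theta = theta
        return (self.kp * theta              # proportional term, Eq. (7)
                + self.ki * self.sum_theta   # integral term, Eq. (8)
                + self.kd * d_theta)         # derivative term, Eq. (9)
```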

The speed control of the robot takes as error the distance between the leader and the follower.
The follower should not follow the master very closely, as this is not socially acceptable and because
sudden turns would cause risks due to the uncertain movement of the leader. Further, the distance
cannot be very large, as the follower may easily lose contact with the leader robot. Hence, a
comfortable threshold distance ($d_{min}$) is to be maintained from the leader. The input velocity is given
by Equation (12), where d is the current distance between the master and the slave robots.

$$v = (d - d_{min}) \quad (12)$$

The linear (v) and angular (ω) speeds are used to compute the speeds of the two wheels of the
differential wheel drive follower robot, given by Equations (13) and (14).

$$v_2 = r\omega + v \quad (13)$$

$$v_1 = 2v - v_2 \quad (14)$$

where r is the half distance between the two wheels, $v_1$ is the velocity of the first wheel, and $v_2$ is the
velocity of the second wheel.
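Equations (12)–(14) reduce to a few lines of code, as sketched below; the threshold distance and the half wheel separation are illustrative values, with omega supplied by the steering controller above.

```python
def wheel_speeds(omega, d, d_min=0.5, r=0.2):
    """Map the steering input omega and distance d to the two wheel speeds.
    d_min (m) and the half wheel separation r (m) are assumed values."""
    v = d - d_min        # speed control input, Eq. (12)
    v2 = r * omega + v   # second wheel, Eq. (13)
    v1 = 2 * v - v2      # first wheel, Eq. (14)
    return v1, v2
```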
3. Results

3.1. Tele-Operation of the Master Robot
The Android application that we designed for the Bluetooth communication of the robot is
used to check the tele-operation capability of the master robot. We only control the master robot with
this application. This application has a special feature which can be used to live stream from the
master robot while controlling it. The application is designed on the Eclipse Android SDK platform.
It is connected to the robot's HC-05 Bluetooth module and sends the user commands. The application
is shown in Figure 5.

Figure 5. Android application for controlling the master robot.

3.2. Simulation Results
We first present the results of the control module. Before applying the control algorithm on a
physical robot, simulations using a custom built simulator had to be done. Using this, we checked
whether the approach was feasible or not, and if it was feasible, it could then be implemented on a
physical robot. In the simulation setup, we took all units to be similar to the physical setup area and
operated the robots with speeds similar to the speeds of the master and follower robots. We considered
a 3 × 3 m region as where the real robot moves, and correspondingly, 100 × 100 pixels in the simulation
are taken as being equivalent to the real environment. The master robot speed is 5 cm per second in
practice, so in the simulation, we made it 1.67 pixels per second (5 cm/s × 100 px/300 cm ≈ 1.67 px/s).
Additionally, the follower robot speed should not surpass 1.67 pixels per second, so we used this as a
threshold value.
The simulations were done on shapes like a circle, line, ellipse, and square for the P controller,
PD controller, PI controller, and PID controller. The results of selective simulations are shown in
Figure 6. In all simulations, the constants are taken as k_P = 23.33 rm⁻¹ s⁻¹ (radians meter⁻¹ second⁻¹),
k_D = 1.67 rm⁻¹ s⁻¹, and k_I = 0.033 rm⁻¹ s⁻¹. In all cases, the leader was kept sufficiently ahead of the
follower. The black one is the leader robot and the pink one is the follower robot. The results are also
available as a video at [14].
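For reference, the skeleton below reconstructs the flavor of one such run under the stated constants: the leader traces a circle at 1.67 pixels per second while a P-controlled unicycle follower tracks it. The step size, circle radius, and start pose are assumptions; this is not the authors' simulator code, and the PD, PI, and PID variants would differ only in the line computing omega.

```python
import math

KP = 23.33                # proportional gain from the text
DT = 0.01                 # integration step in seconds (assumed)
PX_PER_M = 100.0 / 3.0    # 100 pixels correspond to 3 m
D_MIN = 0.5 * PX_PER_M    # threshold distance of 0.5 m, in pixels (assumed)
V_MAX = 1.67              # follower speed cap in pixels per second

fx, fy, heading = 30.0, 50.0, math.pi / 2   # follower start pose (assumed)

for step in range(int(120 / DT)):           # simulate 120 seconds
    t = step * DT
    # Leader on a circle of radius 20 px (assumed), centered at (50, 50),
    # moving at 1.67 px/s, i.e., angular rate 1.67 / 20 rad/s.
    lx = 50.0 + 20.0 * math.cos(1.67 * t / 20.0)
    ly = 50.0 + 20.0 * math.sin(1.67 * t / 20.0)
    theta = math.atan2(ly - fy, lx - fx) - heading        # bearing error
    theta = math.atan2(math.sin(theta), math.cos(theta))  # wrap to [-pi, pi]
    d = math.hypot(lx - fx, ly - fy)
    omega = KP * theta                       # P controller on the bearing
    v = min(max(d - D_MIN, 0.0), V_MAX)      # speed control, Eq. (12), capped
    heading += omega * DT                    # unicycle kinematic update
    fx += v * math.cos(heading) * DT
    fy += v * math.sin(heading) * DT
```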
Figure 6. Results for controller on simulations: (a) In circular path; (b) In square path.

Figure 7 shows the results on the use of a P controller on the circular path. The first sub-plot
compares the X values of the follower and the master robots. The second sub-plot compares the Y
values of the follower and the master robots. The third sub-plot indicates the error value θ and how
it changes over the simulation. The last sub-plot indicates the distance between the master robot and
the follower robot. We can observe that after reaching the threshold value, the distance is constant. It
must be noted that the problem is not controlling a system with a static goal point, in which case the
system eventually reaches the goal point with a small error. The master robot acts as a goal for the
following robot. The master robot keeps moving abruptly, and therefore, from a control perspective,
the goal point keeps changing, making sure that no convergence to a zero error occurs. The error
metric is the correlation between the trajectories of the master and the slave. This further needs to
account for the fact that in the simulations, the master robot can follow trajectories like a square,
which do not obey the non-holonomic constraints of the slave robot; hence, smoothening of the
curve is not really an error.

Figure 7. P controller of a circle.

Figure 8 describes the P controller of an elliptical path. The individual sub-plots are as
explained earlier. From the first and second sub-plots, we can observe the elliptical path. The last
graph is not constant; it shows that on the elliptical path, the follower robot cannot move with a
constant speed.

Figure 8. P controller of an ellipse.
Figure 9 displays the P control of a linear path. As explained for the previous graphs, the
sub-plots are in the same order. From the first two graphs, we can observe the approximately linear
path. Here, the error θ is almost constant because it is a linear path. The distance between the
follower and the master is also nearly constant, varying only from 0.5 m to 0.55 m over 80 s, because
no fluctuations are observed in the linear path.

Figure 9. P controller of a line.

Figure 10 describes the P controller of the square path. The sub-plots are again in the same order.
We can observe more fluctuations in X and Y than in the circular, elliptical, and linear paths, because
when the master traverses the edges of the square path, the follower tries to follow a curved path.
From the last graph, we can observe that no constant distance is maintained between them.

Figure 10. P controller of a square.
Figure 11a–d show the PD controller of circle, ellipse, linear, and square paths, respectively. All
of them also have the same properties as the P controller. Simulations are completed with all P, PD,
PI, and PID controllers to find out which one is the best controller.

Figure 11. (a) PD controller of a circle; (b) PD controller of an ellipse; (c) PD controller of a line;
(d) PD controller of a square.
Figure 12a–d show the results of the PI controller on circle, ellipse, linear, and square paths. All
four graphs show similar properties to the P and PD controllers. Figure 13a–d show the results of the
PID controller on circle, ellipse, linear, and square paths. The PID controller also shows similar
properties in all graphs to the P, PD, and PI controllers.

Figure 12. (a) PI controller of a circle; (b) PI controller of an ellipse; (c) PI controller of a line;
(d) PI controller of a square.

Figure 13. (a) PID controller of a circle; (b) PID controller of an ellipse; (c) PID controller of a line;
(d) PID controller of a square.
3.3. Analysis of PID Control
In the practical observations, we applied the P, PD, PI, and PID controllers in turn, rather than
directly applying the PID controller. In the observations, when we applied the P controller, the
follower robot received some jerks in its motion. When using the PD controller, those jerks decreased
in number, and when using the PI controller, the frequency of jerks decreased further, but the follower
still did not achieve a smooth motion. When using the PID controller, the jerks decreased to a greater
extent than when employing the P, PD, and PI controllers, and the follower robot moved almost as
accurately as the master. Among all of the controllers, PID resulted in the least error. In the simulation,
we also acquired results with very slight errors for all controllers, but among them, PID is the best.

The most difficult part of PID control is tuning the k_P, k_D, and k_I values. By tuning on the
physical robot and in simulations, we obtained k_P, k_D, and k_I values of 0.7 rm⁻¹ s⁻¹, 0.05 rm⁻¹ s⁻¹, and
0.001 rm⁻¹ s⁻¹, respectively. For these constant values of PID, our follower robot follows the leader
most accurately.

3.4. Results on the Real Robots

After the simulation, the experiments were repeated on a physical setup consisting of the two
real robots. The results are shown in Figure 14. The results can be better visualized as a video, which is
available at [14]. In the video, the green robot is the master robot and the robot behind it is the
differential wheel follower robot (known as SMART). As discussed earlier, the master robot was
designed as a teleoperation robot with an Arduino, a Bluetooth module, and an Android mobile. The
follower robot was likewise built on an Arduino, but the entire processing was conducted on a laptop
on the Python platform. The laptop takes an RGBD image from the Kinect, which is mounted at the
front of the follower. Through Python–Arduino communication, the velocities of the two motors are
sent to the follower robot. Image processing and the PID controller run on the laptop using Python
and OpenCV.
Figure 14. Results on the physical setup.
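Expressed in code, the physical runs amount to evaluating Equation (11) with these tuned gains at every frame; the sketch below is a minimal restatement, with the error bookkeeping between frames as an assumed implementation detail.

```python
# Equation (11) with the gains tuned in Section 3.3.
KP, KD, KI = 0.7, 0.05, 0.001   # tuned values from the physical experiments

def steering_command(theta, sum_theta, prev_theta):
    """One per-frame PID evaluation; the caller accumulates sum_theta
    (integral) and remembers prev_theta (derivative) between frames."""
    return KP * theta + KI * sum_theta + KD * (theta - prev_theta)
```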

4. Conclusions
In this paper, robot following based on computer vision and PID control for a differential wheel
robot was presented. For this, we designed two robots; one worked as a master and was controlled by
the user through manual control based on Android and Bluetooth. The master consisted of an Arduino
UNO board and an HC-05 Bluetooth module. The second robot worked as the follower. We used the
SMART robot of the IIITA Robotics laboratory as the follower, as it was somewhat bigger and more
convenient for holding the Kinect, a laptop, and an Arduino. Using the Kinect, we produced an RGBD
image of the master robot, which was used to identify and localize the master robot, and the master
robot was then followed by the follower robot using a PID controller.
Through our experiments, it was shown that the follower robot could follow the master robot
with minimum errors. First, we performed the simulation work, identified the minimum error values
(of the angle θ and the distance d), and then implemented the physical model according to the
simulation values. The physical model also worked correctly with minimum error values.
In this project, we did not consider any obstacles in the robot's path. However, in reality, there
will be some static or dynamic obstacles in the path. In our case, the follower robot was not designed
to stop or to take another path without losing the master robot. Therefore, our future work aims to
overcome obstacles as well. Using the Kinect, we can find obstacles from the depth values. After
finding those obstacles, we will design obstacle-free paths with the help of motion planning
algorithms. Further, the aim is to use Extended Kalman Filters for tracking the master and slave
robots, so that the system can work even if the master robot is not in sight, and to eliminate any errors.

Acknowledgments: The work is sponsored by the Indian Institute of Information Technology, Allahabad through
its Summer Research Internship Program initiative.
Author Contributions: C.S.P. and R.K. designed the algorithms. C.S.P. did the development and experimentation
of the work. C.S.P. and R.K. wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Yoshimi, T.; Nishiyama, M.; Sonoura, T.; Nakamoto, H.; Tokura, S.; Sato, H.; Ozaki, F.; Matsuhira, N.
Development of a Person Following Robot with Vision Based Target Detection. In Proceedings of the 2006
IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006;
pp. 5286–5291.
2. Shah, H.N.; Ab Rashid, M.Z.; Tam, T.Y. Develop and Implementation of Autonomous Vision Based Mobile,
Robot Following Human. Int. J. Adv. Sci. Technol. 2013, 51, 81–91.
3. Khoshelham, K.; Elberink, S.O. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping
Applications. Sensors 2012, 12, 1437–1454. [CrossRef] [PubMed]
4. Fang, Z.; Zhang, Y. Experimental Evaluation of RGB-D Visual Odometry Methods. Int. J. Adv. Robot. Syst.
2015, 12, 26. [CrossRef]
5. Gavrila, D.M.; Philomin, V. Real-time object detection for smart vehicles. In Proceedings of the IEEE
International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 87–93.
6. Martinez, J.L.; Pozo-Ruz, A.; Pedraza, S.; Fernández, R. Object Following and Obstacle Avoidance Using a
Laser Scanner in the Outdoor Mobile Robot Auriga-α. In Proceedings of the 1998 IEEE/RSJ International
Conference on Intelligent Robots and Systems, Victoria, BC, Canada, 17 October 1998; pp. 204–209.
7. Breivik, M.; Fossen, T.I. Guidance-based Path Following for Wheeled Mobile Robots. IFAC Proc. Vol. 2005,
38, 13–18. [CrossRef]
8. Jamkhandikar, D.; Mytri, V.D.; Shahapure, P. Object Detection and Tracking in Real Time Video Based on
Color. Int. J. Eng. Res. Dev. 2014, 10, 33–37.
9. Islam, S.M.R.; Shawkat, S.; Bhattacharyya, S.; Hossain, M.K. Differential Drive Adaptive
Proportional-Integral-Derivative (PID) Controlled Mobile Robot Speed Synchronization. Int. J. Comput. Appl.
2014, 108, 5–8.
10. Sharma, A.; Verma, R.; Guptaand, S.; Kaur Bhatia, S. Android Phone Controlled Robot Using Bluetooth.
Int. J. Electron. Electr. Eng. 2014, 7, 443–448.
11. Eshita, R.Z.; Barua, T.; Barua, A.; Dip, A.M. Bluetooth Based Android Controlled Robot. Am. J. Eng. Res.
2016, 5, 195–199.
12. Saini, A.K.; Sharma, G.; Choure, K.K. BluBO: Bluetooth Controlled Robot. In Proceedings of the National
Conference on Knowledge, Innovation in Technology and Engineering, Chhattisgarh, Raipur, India,
10–11 April 2015; pp. 325–328.

13. Zalevsky, Z.; Shpunt, A.; Maizels, A.; Garcia, J. Method and System for Object Reconstruction. U.S. Patent
2010/0177164 A1, 15 July 2010.
14. Video Results. Available online: https://www.youtube.com/playlist?list=PL3c2Sta-bBqCpREFB27ReRXcmVRPruW6v
(accessed on 20 April 2017).

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
