
UGV Team Based Control and GUI Application

I. Introduction
Unmanned ground vehicles (UGVs) are mobile robots that are commonly used today in the
exploration and testing of hazardous or remote areas. There are two categories of UGVs: remote
controlled and autonomous. Our main focus is autonomous control. Autonomy means no human
control either inside or outside the vehicle. In order to have completely autonomous systems, we
must trust that the UGVs will achieve their goal without harm. Safety is ensured
through trajectory planning and obstacle avoidance implemented on the UGVs. Trajectory
planning, obstacle avoidance, and leader-follower schemes for autonomous UGVs have been
investigated extensively for their wide applications, including search and rescue operations,
surveillance, space exploration, and military missions. The UGV system includes a graphical
user interface (GUI) to allow monitoring and simple control of the robots' operations. The
research advances have been applied to many existing platforms, for example, the all-terrain UGVs
(P3-AT) manufactured by MobileRobots. These commercial products have the basic capabilities
of motion and detection, and provide very good platforms for validating more advanced research
results. Our program objectives are to improve the autonomy of the UGV, reduce its energy
consumption, and allow easy extension to other commercial UGV platforms.













[Figure: V formation of UGV team]
II. Autonomous Unknown Obstacle Position Detection:
Unknown territories pose hazards to autonomous unmanned ground vehicles (UGVs) during
operation. Any information that could be used for navigating around obstacles must be
evaluated in real time. Successful functioning of the navigation routines requires high
adaptability in detecting obstacles, both static and dynamic. The goal is to develop an algorithm for
the Pioneer 3-All Terrain (P3-AT) that can detect an obstacle's velocity given a limited range of
sonar detection.
2.1 Software:
MobileRobots supplied the P3-AT with its own software development kit (SDK).
This SDK comprises libraries of C++ classes used to operate the robot platform.
The accompanying software included with the P3-AT is:
ARCOS
ARIA
ArNetworking
SonARNL
MobileSim
MobileEyes
Mapper3Basic
ARCOS- ARCOS is the operating system onboard the robot platform. Information
is passed between a server computer and the host robot platform through Server Information
Packets (SIPs). The SIPs hold information regarding the current state of the robot and its
accessories (position, rotational speed, sensor readings, etc.). The server receives this information
and passes it through a task cycle that determines the next set of instructions. These instructions
are then sent back to the robot and applied by ARCOS.

[Note: MobileRobots has conventionally used "Ar" as an identifier prefix throughout the SDK
libraries it supplies. This prefix can be seen on most class, function, and variable names.]

ARIA- ARIA includes previously developed classes that access the low-level controls of
the P3-AT platform. These libraries are responsible for transferring the tasks that the P3-AT should
execute. The particular values sent within the SIPs are determined by the task resolver included in
the task cycle.
ArNetworking- ArNetworking provides the communication for server/host interaction. It
handles SIP processing, TCP/IP networking, and multithreading. It is also responsible for data
transfer with other running programs such as MobileEyes.
SonARNL- SonARNL provides advanced navigation routines, including map-based path
planning. Given a map of the P3-AT's current environment, SonARNL uses the robot's sonar
readings in a localization algorithm to determine a trajectory to follow in order to reach a given
goal.
MobileSim- MobileSim is the simulation software for the P3-AT. Using the P3-AT
specifications (height, length, weight) along with other parameters (e.g. a map), a simulated robot
platform may be used for testing.
MobileEyes- MobileEyes is the graphical user interface (GUI) application. It assists in
monitoring the P3-AT robot and its accessories. The user can set goals for the robot to drive to,
as well as drive it manually from a remote location.
Mapper3Basic- Mapper3Basic is the environment development application distributed by
MobileRobots. Obstacles, walls, and boundaries are defined by line segments in a two-
dimensional graph to create maps. The files are saved as .map files and then used as parameters
for SonARNL-based programs.
2.2 Hardware:
The Pioneer 3-All Terrain (P3-AT) is equipped with several components used for detection,
positioning and operation. The robot runs ARCOS on a Hitachi H8s processor. Information is
transferred between the robot platform and a server computer through a serial port mounted in
the user control panel located on top of the P3-AT. The optional communication connection that
we use is through a wireless Ethernet device, called Wibox made by Lantronix.

Figure 1 P3-AT Specifications

The P3-AT is also equipped with encoders, sonar sensors, and a gyroscope that we incorporate in
our system design. The encoders are high-precision optical quadrature shaft encoders. The
gyroscope provides rotational rate data at up to 300 degrees per second. Sixteen sonar
sensors are used for obstacle detection and are placed in the front and back of the P3-AT at
specified intervals. A diagram of sonar locations and degree offsets for the front half is shown
below:

Figure 2 P3-AT Sonar Sensors Intervals

2.3 Algorithm:
According to the general equation
v = (p_2 - p_1)/(t_2 - t_1) = Δp/Δt,
in order to determine an unknown obstacle's velocity (v), positional values (p_1 and p_2) for the
obstacle must be determined at separate time instances (t_1 and t_2). The main problem is
determining these positional readings given the P3-AT's limited range of sonar detection. Not
only is the sonar set to detect objects within a five-meter range, but the sonars are also limited to
specified line-of-sight readings (rather than a full 360-degree span of detection). To illustrate this
problem, observe the figures below.
This figure shows each sonar's line of sight. Any obstacle within these lines of sight may be read
by the sonar as the distance from the sonar to that object. However, any obstacle that exists
between these lines of sight may not be distinguished by the P3-AT's sonar.

Figure 3 P3-AT Sonar Readings


Positional values of unknown obstacles will be estimated by utilizing trigonometric properties
and statistical averaging based on interval sonar readings.
Given the P3-AT orientation, position, and sonar readings, corresponding positions can be found
that describe the obstacle's contour shape. ARIA includes member functions that access this
information. The x and y position of the P3-AT can be retrieved using ArPose::getX and
ArPose::getY. The orientation of the P3-AT can be retrieved with ArPose::getTh or
ArPose::getThRad, where the former retrieves the heading in degrees and the latter in radians.
ArRobot::getSonarRange(n) retrieves the measurement reading from the specified sonar n.
By collecting this information we can map each actively operating sonar to a
corresponding positional reading that defines a point on the unknown obstacle's contour. Each
sonar is given a maximum detection distance of 5 meters. Assuming that the unknown
obstacle will approach the UGV from the front, sonar 0 through 7 (as in Figure 2) will
be the only sensors able to detect it. For any one of these front sonars, a current reading of
less than 5 meters marks the sonar as active. Each active sonar reading (r_n) is stored
in an array. The active sonar readings, along with the robot orientation and robot position (x, y),
are then used with trigonometric identities to calculate corresponding positions for each active
sonar. Because each sonar has a specified degree offset from the robot's forward direction, each
evaluation of a corresponding position must be given the sonar number as a parameter.
Depending on the sonar number, the relative degree offset is used to calculate the
corresponding position.
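As a sketch of this mapping (in Python for illustration; the angular offsets listed here are assumptions standing in for the values in Figure 2, not measurements from the robot):

```python
import math

# Assumed angular offsets (degrees) of the eight front sonar relative to the
# robot's forward direction, after Figure 2; treat these values as placeholders.
SONAR_OFFSET_DEG = [90, 50, 30, 10, -10, -30, -50, -90]
MAX_RANGE_MM = 5000  # readings at the 5 m limit mean "no obstacle detected"

def sonar_point(robot_x, robot_y, robot_th_deg, n, reading_mm):
    """Convert sonar n's range reading into a global (x, y) contour point.

    Returns None for an inactive sonar (nothing within the 5 m range).
    """
    if reading_mm >= MAX_RANGE_MM:
        return None
    angle = math.radians(robot_th_deg + SONAR_OFFSET_DEG[n])
    return (robot_x + reading_mm * math.cos(angle),
            robot_y + reading_mm * math.sin(angle))
```

This sketch ignores each sonar's mounting position on the chassis; a full implementation would add that fixed body-frame offset before rotating into the global frame.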

Figure 4



Figure 5
These corresponding positional points will be stored in two arrays containing the x and y
positions. Inactive sonars store garbage values of 9999999 for both x and y.
After storing these positional points, the active sonars are then searched to find the
leftmost and rightmost active sonars. These two sonars are marked as "first" and "last" and
delineate the endpoints of a line segment that approximates the obstacle's position. This line
segment is constructed using a linear regression (best-fit line) of the corresponding positional
points.

The robot orientation is then used again to establish a slope. Two limit lines are then
created, comprising the established slope and boundary conditions set by the leftmost and
rightmost positional points. The positional points determine the placement of the limit lines on
the map, and the intersection points of each limit line with the best-fit line determine the line
segment used for approximating the obstacle's position. The diagram below further describes the
procedure.

Figure 6

The intersection points are determined by solving the pairs of linear equations in the unknowns
x and y, which can be done with simple matrix manipulation.
After determining the intersection points of the best-fit line with the limit lines, a midpoint
is calculated that represents our estimated obstacle position.
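A minimal sketch of that intersection and midpoint step (Python for illustration; the function names are ours, not from the report's code):

```python
def intersect(m1, c1, m2, c2):
    """Intersection of y = m1*x + c1 (best-fit line) and y = m2*x + c2 (limit line).

    Solves the 2x2 linear system for the unknowns x and y.
    """
    if m1 == m2:
        raise ValueError("lines are parallel; no unique intersection")
    x = (c2 - c1) / (m1 - m2)
    return (x, m1 * x + c1)

def estimate_position(best_fit, limit_left, limit_right):
    """Midpoint of the best-fit segment clipped by the two limit lines.

    Each line is given as a (slope, intercept) pair.
    """
    (x1, y1) = intersect(*best_fit, *limit_left)
    (x2, y2) = intersect(*best_fit, *limit_right)
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```

The midpoint of the clipped segment is what the report takes as the estimated obstacle position.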
Results:
The differences in positions provide the most significant information for
velocity determination. To analyze the accuracy of this algorithm, differences in actual object
position are compared to differences in estimated object position.
Using Mapper3Basic, a test environment was created instantiating a single object (a
rectangle) and the robot platform (P3-AT). This mapped environment was configured on an x-y
grid where the set of all possible positions exists in the first quadrant of a graph.
A demonstration program (demo.exe), supplied as one of the example programs, was
used to take readings for the obstacle location algorithm. This program allows sonar information
to be presented through the command prompt. (This demonstration program lists the sonars in
reverse order from that shown in Figure 2.) The information is gathered and sent to the obstacle
position algorithm to determine the obstacle's estimated location.

Robot data:                          x        y
position of robot:                   1.67     0.7      (m)
heading:                             1.57 rad (89.9544 deg)

Obstacle position 1:
obstacle center:                     2.7435   3.9625   (m)
estimated position after algorithm:  2.69236  3.04167  (m)

Obstacle position 2:
obstacle center:                     0.7855   1.8715   (m)
estimated position after algorithm:  1.09121  1.34952  (m)

obstacle position change:            2.86462 (m)
estimated position change:           2.3296  (m)
percent error offset:                18.67 %
Table 7
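The displacement and error figures in Table 7 can be reproduced directly from the two position pairs above; a quick check (Python, using the table's values):

```python
import math

def displacement(p1, p2):
    """Euclidean distance between two (x, y) positions in meters."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Actual and estimated obstacle centers from Table 7 (meters).
actual = displacement((2.7435, 3.9625), (0.7855, 1.8715))         # ~2.86462
estimated = displacement((2.69236, 3.04167), (1.09121, 1.34952))  # ~2.3296
percent_error = 100.0 * (actual - estimated) / actual             # ~18.67 %
```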

Figure 8 Demo.exe

Figure 9 Demo.exe

Conclusion:
The algorithm successfully detected the obstacle and approximated the obstacle's center position.
The approximated center position, compared with the actual position of the obstacle,
showed small variances and differences, sufficient for use in obstacle avoidance.
Further research is required to improve the detection of obstacle contour details.
III. Follower Scheme:
Following the P3-AT we have the Traxster robots from RoboticsConnection. We plan to achieve
vision-based motion control and implement it on the following Traxsters. Due to the followers'
lack of power and intelligence compared to the P3-AT, we must manage to produce an
optimal outcome using very little input. The whole system will be dependent on the encoder
counts received from the motors, utilizing visual feedback to keep the system on track.

3.1 Follower hardware

The follower robots are driven by two 7.2-volt DC motors controlled by a dual five-amp
H-bridge. The brain of the Traxster is the Mini-DRAGON12 development board from EVBplus,
equipped with an MC9S12 microcontroller.
The Traxster chassis was chosen for its size and power. The small robot is about 9
inches long, 8 inches wide, and 3 inches high, and weighs about 2 pounds unloaded. The
Traxster, being much smaller than the P3-AT, can get around in places the P3-AT
cannot, and quickly. Each motor has about 7.2 kg-cm of torque with no extra weight and is
equipped with an encoder with an accuracy of 624 ticks per revolution.

3.2 Operation

Our goal is to have the P3-AT send coordinates to the followers telling them where to go.
For this to work, the followers must have a good idea of where they are.
We can achieve position knowledge, as well as many other things, through a simple encoder tick
count. Using interrupt vectors, we keep a clear count of encoder ticks. These ticks are used to
calculate velocity, distance traveled, and position.
A. Velocity
To calculate velocity, we take the quotient of the change in encoder count and the total
number of encoder ticks per revolution, multiply it by the wheel circumference, and divide by
time. To do this in code, we save the current encoder count in a temporary variable and use a
time delay. This gives the two encoder counts needed for the delta, with a known time between
them.
This must be done twice, once for each motor, resulting in a left velocity and a
right velocity.
diffl = countlt - templ;                 /* change in left encoder count since last sample  */
diffr = countrt - tempr;                 /* change in right encoder count since last sample */
velr = ((diffr / encod) * circ) / time;  /* right wheel velocity */
vell = ((diffl / encod) * circ) / time;  /* left wheel velocity  */
Now that we have velocities for each motor, we can create a simple proportional
controller to control speed. Given a set velocity, the proportional controller drives the
measured velocity toward the set value. This achieves movement that is as straight as
possible. We also created speed-up and slow-down commands to manage slippage.
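The proportional speed controller can be sketched as follows (Python for illustration; the gain value and the motor-command range are assumptions, not taken from the Traxster firmware):

```python
def p_control(v_set, v_meas, command, kp=0.5, cmd_min=0, cmd_max=255):
    """One proportional-control step for one motor.

    Nudges the motor command toward the set velocity; the correction is
    proportional to the velocity error, which also absorbs slippage.
    """
    error = v_set - v_meas
    command += kp * error
    # clamp to the motor driver's command range
    return max(cmd_min, min(cmd_max, command))
```

Running this step for the left and right motors separately keeps the two tread velocities matched to the same set-point, which is what keeps the robot driving straight.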

[Chart: follower velocity vs. time (Series 1).]
Figure 10 Acceleration of follower to avoid



B. Distance traveled
The most crucial quantity for determining position is distance
traveled. Determining distance is almost identical to calculating velocity, except
without the temporary encoder count and the time delay. It is as simple as taking the
quotient of the encoder count and the total number of encoder ticks per revolution, and
multiplying it by the wheel circumference.
Once again, due to the two motors, we have two distance readings.
distr = (countrt / encod) * circ;  /* right track distance traveled */
distl = (countlt / encod) * circ;  /* left track distance traveled  */
C. Position
As mentioned before, we are driving two motors. It is nearly impossible to
get the motors to operate at exactly the same speed. Because of this, a follower will not
travel exactly straight but will think it has, which throws the position readings off
completely. To avoid this we must compensate for the angular offset. Once
again we use the encoders. Using the difference between the distances traveled by the left
track and the right track, an angle can be determined geometrically.



Figure 11 Bottom view of Traxster showing the angular calculation based on the difference
between the left and right treads' linear travel.

Algebraically, the angle is calculated. Using this angle we can now accurately
determine position:

D = (d_left + d_right)/2
α = arctan(Δd/W)
x = x + D·sin(θ)
y = y + D·cos(θ)
θ = θ + α

where Δd is the difference between the left and right track distances, W is the track width,
and D is the mean distance traveled.
Using θ, constantly updated by α due to the constant changing of
direction, we can update the x and y position. Even though the followers may not land
exactly on the targeted destination, they will be relatively close. And as long as the
Traxster knows exactly where it is, it will receive the next destination from the P3-AT
and know how to get there relative to its current position.
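The position update described above can be sketched as follows (Python for illustration; the variable names and the averaging of the two track distances are our reading of the equations, so treat this as an assumption-laden sketch):

```python
import math

def update_pose(x, y, theta, d_left, d_right, track_width):
    """Dead-reckoning update from per-track distances since the last update.

    theta is measured from the y-axis, hence sin for x and cos for y,
    matching the report's equations.
    """
    D = (d_left + d_right) / 2.0                         # mean distance traveled
    alpha = math.atan((d_right - d_left) / track_width)  # heading change
    x += D * math.sin(theta)
    y += D * math.cos(theta)
    theta += alpha
    return x, y, theta
```

Calling this once per control cycle keeps (x, y, θ) current so the follower can interpret the next destination relative to where it actually is.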
D. Results

Figure 12 Encoder-calculated data of follower while running. Encoder count, difference,
velocity, and distance traveled can be seen.






Figure 13 Encoder-calculated data of follower after the run. Since the robot has stopped, the
velocity is zeroed out and the overall distance traveled can be seen.




3.3 Visual Feedback
As mentioned, the followers are not always exactly on target. In addition, the encoders,
although highly accurate, are not perfect. As a result, we decided to implement a visual feedback
system to realign the formation if the followers happen to be off.
To do this, we mount a National Instruments smart camera on the P3-AT facing the rear,
where it monitors the formation. Using the National Instruments vision acquisition software
we are able to create an algorithm to determine the positions of the Traxsters. The vision
acquisition can determine distance by calibrating the algorithm against a known value and
scaling the pixels. The algorithm then compares a shot of the follower formation
to the original, optimal formation shot. After comparing, it determines the displacement and
informs the P3-AT so that a correction signal can be sent.
When using visual feedback, we must be careful, for the environment plays a crucial role
in the outcome. For instance, if the algorithm is programmed to look for a particular color, the
color may be altered by outside light or even duplicated by the surroundings. This would result in
false readings, or even no readings at all. To compensate, we decided to track a color
rarely duplicated in the environment, require a particular shape to reject false readings, and
illuminate the tracked object so that its color stays constant and is not affected much by
surrounding light. This beacon performed well in various tests.


Figure 14 Constructed beacon test. Serves to test an algorithm that can later be
implemented on the following robots.









Figure 15 Test beacon after algorithm applied.



3.4 Follower Integration
Each individual portion is pieced together to form the whole system, including the
proportional controller, encoder feed, and visual feedback. With full system integration, the
follower scheme works together to achieve calibration. The system is constantly updated
throughout its run to stay synchronized.




[Block diagram, duplicated for each follower: a set-point enters a summing junction (+/-),
drives a gain K into the robot, with encoder feedback closing the inner loop and visual
feedback closing the outer loop.]
Figure 16 System block diagram
As the diagram shows, the input voltage drives the robot. As the Traxster drives, encoder
counts feed into the system, where velocity is read. The proportional controller keeps the velocity
in sync with the given value. As the whole system operates, the camera mounted on the P3-AT
sends feedback into the follower system, helping calibrate the followers against their true
positions. All the individual portions accomplish their jobs, resulting in the most accurate
performance possible.






IV. Path planning, Obstacle Avoidance, and Controls for Team Based UGV
4.1 Path Planning for UGV Team


[Flowchart: the lead senses an obstacle and determines the obstacle distance while planning
the trajectory for the team; if the lead will collide, the lead avoids the obstacle; if instead
the followers will collide, the lead tells the followers to line up until the obstacle is
cleared; otherwise the team continues on its planned trajectory.]
Figure 17 Path planning algorithm.
The leader has full control over the team's activities, and plans each follower's
trajectory with respect to its own. Obstacles are sensed, as well as their trajectories. If the leader
is set to collide with an upcoming obstacle, the leader corrects the team's trajectory, using
acceleration constants, so that the obstacle is avoided. After the obstacle is cleared, the team
continues towards its destination. If, however, the leader's trajectory does not collide with the
obstacle but the followers in their current formation do, the leader narrows the team's
formation, allowing the team to fit through narrow pathways (e.g. doorways).
4.2 Dynamic Obstacle Avoidance
On a real platform, the number of obstacles is unknown. Therefore, the obstacle
avoidance algorithm should operate regardless of the number of obstacles sensed. Since the UGV
team moves as a single unit, focus for obstacle avoidance is directed towards the leader
itself. The leader's velocity is initially constant. The planned position of the leader at time t can
be expressed as follows:

x(t) = x_0 + c_1 t    (1)
y(t) = a_0 + a_1 x(t)    (2)

where x_0, c_1, and a_0 are the initial x-position, the x-velocity, and the initial y-position
respectively. Because this algorithm is for path-planning purposes only, the x-velocity is kept
constant, and the y-velocity (through the slope a_1) is dependent on the x-position.
If at any time an obstacle i with position (x_{0,i}, y_{0,i}) and velocity (v_{i,x}, v_{i,y}) is
encountered, it is tested against a collision condition

[y(t) - y_{0,i} - v_{i,y} t]^2 + [x(t) - x_{0,i} - v_{i,x} t]^2 ≤ (R + r_i)^2    (3)

where R and r_i are the radii of the leader and obstacle i respectively. If this condition is
satisfied, an acceleration factor (a_2) is added to the y-trajectory.

y(t) = a_0 + a_1 x(t) + a_2 x(t)^2    (4)
The final positions can also be expressed in a similar fashion.

y_0 = a_0 + a_1 x_0 + a_2 (x_0)^2    (5)
x_f = x_0 + c_1 t_f    (6)
y_f = a_0 + a_1 x_f + a_2 (x_f)^2    (7)
The y-displacement can be expressed through subtraction.

[Diagram: leader of radius R with trajectory components c_1 t, a_1 x(t), and a_2 x(t)^2, and
an obstacle i of radius r_i moving with velocity components v_{i,x} t and v_{i,y} t.]
Figure 18 Relative trajectories of leader and an arbitrary obstacle
y_f - y_0 = a_1 (x_f - x_0) + a_2 [(x_f)^2 - (x_0)^2]    (8)

Dividing both sides by (x_f - x_0), and subtracting a_2 (x_f + x_0) from both sides, a_1 can be
expressed in terms of a_2.

a_1 = (y_f - y_0)/(x_f - x_0) - a_2 (x_f + x_0)    (9)
a_0 can also be expressed in terms of a_2 by multiplying equation (5) by x_f and (7) by x_0 and
subtracting.

y_0 x_f = a_0 x_f + a_1 x_0 x_f + a_2 (x_0)^2 x_f    (10)
y_f x_0 = a_0 x_0 + a_1 x_f x_0 + a_2 (x_f)^2 x_0    (11)
y_0 x_f - y_f x_0 = a_0 (x_f - x_0) + a_2 x_0 x_f (x_0 - x_f)    (12)
a_0 = (y_0 x_f - y_f x_0)/(x_f - x_0) + a_2 x_0 x_f    (13)

The following substitutions are made:

b_1 = (y_0 x_f - y_f x_0)/(x_f - x_0)    (14)
b_2 = (y_f - y_0)/(x_f - x_0)    (15)
b_3 = x_0 x_f    (16)
b_4 = x_f + x_0    (17)

Equations (9) and (13) can now be written as

a_0 = b_1 + a_2 b_3    (18)
a_1 = b_2 - a_2 b_4    (19)

The values can be substituted into the obstacle collision criterion (3) and steering function (4).
y(t) = b_1 + b_2 x(t) + a_2 [b_3 - b_4 x(t) + x(t)^2]    (20)

[b_1 + b_2 x(t) + a_2 (b_3 - b_4 x(t) + x(t)^2) - y_{0,i} - v_{i,y} t]^2
  + [x(t) - x_{0,i} - v_{i,x} t]^2 ≤ (R + r_i)^2    (21)

It is convenient to define some constant coefficients for (21).

h_1 = b_1 + b_2 x(t) - y_{0,i} - v_{i,y} t    (22)
h_2 = x(t) - x_{0,i} - v_{i,x} t    (23)
h_3 = b_3 - b_4 x(t) + x(t)^2    (24)

The obstacle collision criterion therefore becomes

[a_2 h_3 + h_1]^2 + [h_2]^2 ≤ (R + r_i)^2    (25)

After arranging equation (25) into polynomial form, a_2 can be solved for through a simple
application of the quadratic formula.

(h_3)^2 (a_2)^2 + (2 h_1 h_3) a_2 + [(h_1)^2 + (h_2)^2 - (R + r_i)^2] ≤ 0    (26)
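The boundary of the collision interval in (26) can be found numerically; a sketch (Python for illustration), where the returned pair brackets the a_2 values that still lead to collision, so a safe steering value would be chosen just outside it:

```python
import math

def collision_interval(h1, h2, h3, R, r_i):
    """Roots of (26): h3^2 a^2 + 2 h1 h3 a + [h1^2 + h2^2 - (R + r_i)^2] <= 0.

    Returns (a_low, a_high), the interval of a_2 values satisfying the
    collision condition, or None when no real a_2 leads to collision.
    """
    A = h3 * h3
    B = 2.0 * h1 * h3
    C = h1 * h1 + h2 * h2 - (R + r_i) ** 2
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return None  # the quadratic never reaches zero: clear for any a_2
    root = math.sqrt(disc)
    return ((-B - root) / (2.0 * A), (-B + root) / (2.0 * A))
```

Since the leading coefficient h_3^2 is positive, the quadratic is nonpositive exactly between the two roots, which is why the interval brackets the colliding accelerations.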
The dynamic obstacle avoidance was simulated in MATLAB. Values were set for the h-parameters
and the radii through the introduction of two arbitrary dynamic obstacles.












4.3 Formation Control

After the leader has finished planning its own trajectory, it proceeds to plan the follower
robots' trajectories relative to its own. These followers follow closely behind the leader in a
formation, so knowledge of the leader's orientation is required. The orientation can be determined
from the trajectory that the leader had planned for itself. Given a checkpoint on the trajectory, the
orientation is simply the unit vector from the current checkpoint to the next.
(x̂, ŷ) = (r_x, r_y)/|(r_x, r_y)|
       = (x_{lead,2} - x_{lead,1}, y_{lead,2} - y_{lead,1})
         / sqrt[(x_{lead,2} - x_{lead,1})^2 + (y_{lead,2} - y_{lead,1})^2]    (27)

θ = ∠(r_x, r_y) = arctan(r_y/r_x)    (28)
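In code, using atan2 rather than the plain arctangent of (28) avoids the quadrant ambiguity when r_x is negative or zero; a Python sketch:

```python
import math

def orientation(p1, p2):
    """Unit vector and heading angle from checkpoint p1 to checkpoint p2."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    mag = math.hypot(rx, ry)
    if mag == 0:
        raise ValueError("checkpoints coincide; orientation undefined")
    return (rx / mag, ry / mag), math.atan2(ry, rx)
```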
[Plot legend: dynamic obstacles; planned trajectory.]
Figure 19 Dynamic Obstacle Avoidance simulation with 2 dynamic obstacles
(x̂, ŷ) is the unit vector of orientation. (r_x, r_y) is the vector from the current checkpoint
(x_{lead,1}, y_{lead,1}) to the next (x_{lead,2}, y_{lead,2}). |(r_x, r_y)| is the magnitude of
(r_x, r_y), and θ is the angle of orientation.
In this study, focus was directed towards implementing a V-shaped formation.
The relative spacing between UGVs is approximated by adding their respective radii.

R_lf = R_lead + R_follow    (29)
R_ff = R_follow + R_follow    (30)
Setting the followers in formation was designed to be an iterative process so that changing the
number of followers would not require a complete revision of the algorithm. The first follower is
stationed directly behind the leader, and the tags "left" and "right" are assigned to this follower.

(x_{follow,1}, y_{follow,1}) = (x_{lead}, y_{lead}) - R_lf (x̂, ŷ)    (31)
(x_left, y_left) = (x_{follow,1}, y_{follow,1})    (32)
(x_right, y_right) = (x_{follow,1}, y_{follow,1})    (33)
The leader then proceeds to alternate between assigning followers behind the left and right wings
at angles α and β respectively. Throughout this process, "left" and "right" are retagged onto the
leftmost and rightmost followers in the formation respectively.
[Diagram: leader at (x_{lead,1}, y_{lead,1}) with radii R_lead and R_follow, and followers 1-5
at (x_{follow,n}, y_{follow,n}).]
Figure 20 Relative positions between leader and followers in a V-formation
(x_{2n}, y_{2n}) = (x_left, y_left) + (cos α, sin α) R_ff    (34)
(x_{2n+1}, y_{2n+1}) = (x_right, y_right) + (cos β, sin β) R_ff    (35)
(x_left, y_left) = (x_{2n}, y_{2n})    (36)
(x_right, y_right) = (x_{2n+1}, y_{2n+1})    (37)
n = 1, 2, 3, ...    (38)
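The iterative assignment of equations (31)-(37) can be sketched as follows (Python for illustration; symmetric wing angles are assumed since the report does not give numeric values for α and β):

```python
import math

def v_formation(lead, theta, n_followers, R_lf, R_ff, wing=math.radians(30)):
    """Positions for n_followers in a V behind a leader at `lead`, heading `theta`.

    `wing` is an assumed symmetric wing angle standing in for the report's
    alpha and beta.
    """
    back = theta + math.pi  # direction pointing behind the leader
    first = (lead[0] + R_lf * math.cos(back),
             lead[1] + R_lf * math.sin(back))
    positions = [first]      # follower 1: directly behind the leader, eq. (31)
    left = right = first     # tags, eqs. (32) and (33)
    for k in range(1, n_followers):
        if k % 2:            # extend the left wing, eqs. (34) and (36)
            left = (left[0] + R_ff * math.cos(back + wing),
                    left[1] + R_ff * math.sin(back + wing))
            positions.append(left)
        else:                # extend the right wing, eqs. (35) and (37)
            right = (right[0] + R_ff * math.cos(back - wing),
                     right[1] + R_ff * math.sin(back - wing))
            positions.append(right)
    return positions
```

Because the wings are extended from the current "left" and "right" followers rather than recomputed from scratch, adding followers never requires revising earlier assignments, which is the point of the iterative design.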

To simulate the formation, a UGV team of a leader and three followers was set up in MATLAB,
and their respective trajectories were planned.







[Four trajectory plots; legend: leader, followers.]
Figure 21 Simulation of trajectory planning of a complete UGV team consisting of 1 leader and 3
followers. Four different trajectories are shown: <+,+> (upper left), <-,+> (upper right), <-,->
(bottom left), <+,-> (bottom right)
If the team is set for collision at any point on the planned trajectory, the formation is
narrowed, avoiding the need to correct the trajectory. This maximizes the team's efficiency in
reaching its destination, and allows the team to traverse narrow doorways and spacings.
After the obstacle is cleared, the team resumes its original formation. For simplicity, in this
study the followers simply line up behind the leader in order (follow 1, follow 2, follow 3, ...).
The condition for lining up is dependent on the width of the formation and the spacing of
the doorway.

W_doorway ≤ W_s    (39)

W_s is the width of the formation.
This width is determined by the spacing between the rearmost followers (followers at point s).










W_s = s R_ff sin(α)    (40)

For the simulations, the widths of the formation and doorway are replaced with a narrow
constant. The greater W_s - W_doorway is, the greater the narrow constant.




[Diagram: formation widths W_1, W_2, and W_s relative to a doorway of width W_doorway.]
Figure 22 Widths of formation relative to a doorway obstacle
4.4 Particle Swarm Optimization Controller
Since the team's activities are planned based on the surroundings sensed by the leader's
sonar, control of the system is almost always nonlinear. Taking into account that turning is
achieved by a rotational velocity difference between the treads, efficiency is increased by
minimizing the turn angle. To address these issues, Particle Swarm Optimization (PSO) was
introduced to the control of the followers.
The PSO algorithm developed for this study is as follows:
1. Divide the interval between checkpoints into subintervals of equal length.
   d_sub = d[(x_1, y_1), (x_2, y_2)]/(n + 1)
         = sqrt[(x_2 - x_1)^2 + (y_2 - y_1)^2]/(n + 1)    (41)
2. Restrict the turn angle to a maximum and minimum based on the turning
   power of the follower.
   -θ_turnmax ≤ θ_k ≤ θ_turnmax    (42)
3. Spread an arbitrary number of random angles θ_k within the range specified in
   step 2, and calculate x and y-coordinate values based on the current
   checkpoint l and these random angles as a random function. These are the
   particles of the PSO.
   x_PSO(l, k) = x_{l-1} + d_sub cos(θ_k)    (43)
[Plots; the right-hand case is annotated "Not narrow enough".]
Figure 23 Simulations of the team's trajectory response to doorway obstacles. Left: traversing
through doorways. Right: avoiding doorways

   y_PSO(l, k) = y_{l-1} + d_sub sin(θ_k)    (44)
4. Calculate the distance from each random particle to the next checkpoint.
   These distances are the fitness functions of the particles.
   fitness(l, k) = d[(x_PSO(l, k), y_PSO(l, k)), (x_2, y_2)]    (45)
5. Based on the fitness functions calculated in step 4, specify a local best
   (x(l), y(l)).
6. Move the other particles at the sub point toward the local best,
   (x_k, y_k) → (x_update, y_update), in hopes of finding a new local best.
7. Repeat steps 1-6 for a set number of swarm cycles.
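The steps above can be sketched as follows (Python for illustration; the particle-update rule and the 0.5 move factor are assumptions filling in details the garbled equations leave open):

```python
import math, random

def pso_turn_angle(cur, nxt, n_sub=5, n_particles=50,
                   max_turn=math.pi / 16, cycles=5, seed=0):
    """Choose a turn angle steering the follower from `cur` toward `nxt`."""
    rng = random.Random(seed)
    # step 1: subinterval length, as in (41)
    d_sub = math.hypot(nxt[0] - cur[0], nxt[1] - cur[1]) / (n_sub + 1)
    base = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])

    def fitness(a):
        # step 4: distance from the candidate sub point to the next checkpoint
        x = cur[0] + d_sub * math.cos(base + a)
        y = cur[1] + d_sub * math.sin(base + a)
        return math.hypot(nxt[0] - x, nxt[1] - y)

    # step 3: particles are random turn angles within the limits of (42)
    angles = [rng.uniform(-max_turn, max_turn) for _ in range(n_particles)]
    for _ in range(cycles):                  # step 7
        best = min(angles, key=fitness)      # step 5: local best
        # step 6: move the other particles halfway toward the local best
        angles = [a + 0.5 * (best - a) for a in angles]
    return min(angles, key=fitness)
```

Because every particle starts inside the turn-angle limits and only moves toward another particle, the returned angle always respects the follower's turning power.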
Simulated output for the PSO controller is as follows:


Figure 24 PSO controller with 5 sub points, 50 random particles, π/16 max turn angle, and 5 swarm
cycles. Resulting follower trajectory outputs are shown.
Based on the output turn angles X_difference(ω), the input (the voltage difference between the
tread motors, V_difference(ω)) can easily be determined through a simple application of the
system transfer function:

V_difference(ω) = X_difference(ω)/H(ω)
V. Graphical User Interface
5.1 Introduction:
A graphical user interface (GUI) is a translation layer between human interaction and electronic
devices such as computers, cell phones, and other handheld devices. It eliminates the required
knowledge of code and programming and gives a more appealing, easy-to-use interface.
Applications of the GUI include computer operating systems, cell phones, gaming software, and
other computer-related purposes. Without GUIs we would have to use command lines, which
would make interaction much more difficult, frustrating, and error-prone. In this research, a
MATLAB GUI will be used to help demonstrate simulation outputs by changing different factors
without the need to write new code, giving the researcher more time to concentrate on algorithm
improvement and hardware preparation. Working with a MATLAB GUI requires some knowledge
of MATLAB programming and syntax in order to link different files and functions. MATLAB
makes it easy to build a GUI using GUIDE, that is, to make a GUI with a GUI. GUIDE makes
the process much easier and more efficient, thereby giving the designer more time to be creative
and make the GUI as appealing and practical as possible.
A background in MATLAB programming and its syntax is an advantage but not essential as long
as there is an interest in the subject. Even with GUIDE, which makes things much simpler, a lot
of patience is needed, as things might work at times and not at others. Still, making a GUI using
GUIDE is much simpler than in other environments such as Java, Python, or other more complex
programming languages. In this paper a GUI demonstration will be given, and then we will show
its use in our research.
Figure 25 System input and output: V_difference(ω) → H(ω) → X_difference(ω).

As mentioned before, for simplicity we will use GUIDE, even though MATLAB can create GUIs in other ways.
Making a simple GUI:
1. In the command window we type guide.


2. This will open the Quick Start window, where we can choose an existing GUI or start a new one.

3. When we select to start a new GUI, a layout editor window will pop up. This is where the fun starts: here we can build the appearance of the GUI and assign the position and type of the components we need, based on the application.


(Layout editor: component palette and layout area.)
4. From here it is just click and drag. The available components are Push Button, Slider, Radio Button, Check Box, Edit Text, Pop-up Menu, Listbox, Toggle Button, Axes, Panel, Button Group, and ActiveX Control. It is not necessary to know what all of these components do just to build a basic GUI, and that is what will be demonstrated here: in this example we will make a simple calculator, just to get a feel for GUI construction using GUIDE.

5. After the layout has been set up, we can start assigning the Tag names that will be used in the M-file and referred to by callbacks, which means they are case sensitive. This is done by double-clicking on a component to bring up the Property Inspector, where we can set the style, color, font size, and other properties, as shown in the figure for one of the buttons.
6. At this point the layout and customization of the GUI are ready. This is what it could look like; this part is entirely up to the designer, who should make it as appealing and easy to understand as possible.


(Toolbar icons: M-file Editor and Run.)
7. Now we can save the GUI in the .fig format. This generates an M-file with the same name, in which we can implement our code; this M-file contains all the functions needed for the components in the GUI layout.
8. In the M-file, code is written based on each component's intended functionality. In this example, axes1 will be used to display an image, the text inputs will be used to enter the numbers, and the Operations buttons will each perform a mathematical operation. To open the M-file we click on the M-file Editor icon at the top.

In this button's callback, an image of the UTSA logo in JPG format is read and then displayed in the axes.

The Add button gets the numbers from the inputs, performs the addition, and displays the result; the same applies to the other buttons that represent mathematical operations.
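The Add button's logic amounts to parsing the two input strings and formatting the sum for display. Paraphrased here in Python for illustration; the report's actual callback is MATLAB code:

```python
def add_callback(a_text, b_text):
    """Parse the two edit-box strings, add them, and return the string
    to display in the result box (mirrors the GUI's Add button)."""
    result = float(a_text) + float(b_text)
    # Display whole numbers without a trailing ".0", calculator-style.
    return str(int(result)) if result == int(result) else str(result)
```

The subtraction, multiplication, and division buttons differ only in the operator applied.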



The Exit button will close the GUI.
Now we can save our work and run the GUI, either with the Run icon shown previously or with F5 on the keyboard.
In this example only a few pieces of code were shown, just to demonstrate that using GUIDE is indeed simple. This is what the GUI looks like after completion.

5.2 Ground Vehicle GUI
In our research we implemented a GUI using GUIDE, saved as Simulations_2.fig. In this GUI the user can assign different points for the UGV obstacle-avoidance simulation files. The GUI will also open another GUI, just to demonstrate that it can be done; for this purpose the CalculaterTest GUI will be used. As an added challenge, a safety popup GUI asks the user whether closing the GUI is intentional, in a sense confirming the closing action. Because there was not enough time to perfect our knowledge of MATLAB GUIs, some of the GUI's functionalities were not activated but are planned for future research; these callbacks will be mentioned at the appropriate time.
As with all GUIs, it is a good habit to start with a plan of the objective and a basic rough draft of what the GUI should look like. This takes some time at the beginning, but it saves much more time later, since the designer will not need to change plans midway to accommodate the final objective. As advice to anyone working on a GUI for the first time: do not rush into things, and try some simple GUIs first. There are many examples and references online, and some PDF books are extremely helpful; one of the best resources I have used is www.blinkdagger.com, and another good resource is the MATLAB documentation. Another thing to consider as a new MATLAB GUI user is to save your work and always keep copies of known-good GUI files: on some occasions an error can make MATLAB freeze, and the only ways to fix the problem are to be a GUI master or to start over with a new GUI, which is more practical in the most extreme situations.
Now that we have some of the basics out of the way, we can start explaining the GUI used for this research. The plan is to make a GUI that shows a simulation of the obstacle-avoidance file; in the future, a second GUI will be made for the UAV, and a third will combine everything into one GUI to show the links of the entire REU project.


Figure 26 GUIDE layout before adding the components to the GUI area.


Figure 27 GUIDE layout after adding the components to the GUI area.
The GUI starts with default values shown, to give the user an idea of the convention used in entering the different values for the desired experiment. The green button titled Start Simulation is the main player here: when clicked, it activates its callback function, which collects the necessary data from the user's input, passes it to the simulation M-files, and then shows the output in the axes. To see the code used in this or any other component, simply right-click on it, select View Callbacks, and then click on Callback; this opens the GUI's M-file and highlights the function for that component. As mentioned before, the callback function skeleton is automatically generated by GUIDE when the M-file is first saved; it is then the programmer's role to make things work according to plan.

The callback function is shown at line number 72. The code from lines 77 to 89 reads the values entered by the user of the GUI and assigns them to variables. Then, at line 92, DynObsAvoidTest is the M-file that contains the dynamic obstacle-avoidance test and all its algorithms; this file is simulated in axes1, as indicated at the end of line 93. Line 96 takes the values from the first M-file at line 92 and feeds them to the M-file called follow, which is simulated on axes3, as indicated at the end of line 96. Lines 94 and 97 (hold off) clear the simulation if the user enters new values and simulates again; otherwise the entire GUI would have to be closed and restarted, which is not practical.
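The flow of this callback (read the edit-box strings, convert them to numbers, run the obstacle-avoidance file, then feed its output to the follower file) can be sketched generically in Python. The function arguments here stand in for the report's DynObsAvoidTest and follow M-files and are hypothetical:

```python
def start_simulation(inputs, dyn_obs_avoid, follow):
    """Mimic the Start Simulation callback: convert the user's text
    fields to numbers, run the obstacle-avoidance simulation, then
    pass its result to the follower simulation.

    inputs        : dict mapping field name -> string from the edit boxes
    dyn_obs_avoid : stand-in for the DynObsAvoidTest M-file (hypothetical)
    follow        : stand-in for the follow M-file (hypothetical)
    """
    params = {name: float(text) for name, text in inputs.items()}
    leader_path = dyn_obs_avoid(params)      # result drawn on axes1 in the GUI
    follower_path = follow(leader_path)      # result drawn on axes3 in the GUI
    return leader_path, follower_path
```

Keeping the parsing, the leader simulation, and the follower simulation as separate steps mirrors the structure of the M-file, so each stage can be replaced or re-run independently.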

A call to the Calculator GUI is made through the Calculator button. This is the simplest code in the GUI: only one line of code after the button's function, as seen at line number 543.


Figure 28 Code line for calling the calculator GUI.
The panel titled Leader contains the initial and final positions for the leader and the initial X velocity, and the panel titled Point Obstacles contains the parameters for the obstacles; the number of obstacles can also be set here.


Figure 29 Leader panel. Figure 30 Obstacle panel.

The panels titled Door Obstacle and Follower are not active at this time, but they do have the default functions in the M-file that were generated automatically by GUIDE; this is the part mentioned before as future research.
The panel in the lower-left corner is the main control; it contains the Calculator button, the Start Simulation button, and the Exit button. The Exit button is made to confirm closing the GUI; its code is shown after the button function at lines 104 and 105.


Axes 1 and 3 are used to display the simulations; axes 2 was deleted due to changes in the GUI plan. As titled in the GUI, each axes displays different information from the same input.


Figure 31 Obstacle avoidance simulation, showing the dynamic obstacles and the robot path from initial to final position.


Figure 32 Obstacle Avoidance simulation with follower in triangle formation.



5.3 Conclusion:
Even though the Simulations_2 GUI is not 100% complete, it is a good start and should help advance UGV research for the REU. Working with the GUI was an interesting experience; it is practical not only for this project but will have many uses in our future research.
VI. Summary
In the end, we have developed a team-based scheme utilizing the lead UGV as a pack leader to control a team of followers. As the leader, the P3-AT will detect obstacles via sonar while compensating for the followers and determining a clear path. This enables safe and successful achievement of trajectory goals. While the team operates, a user is able to monitor and test the trajectory paths determined by the P3-AT using the GUI.
