
2015 12th Latin American Robotics Symposium and 2015 3rd Brazilian Symposium on Robotics

Trajectory generation and tracking using the AR.Drone 2.0 quadcopter UAV

Pedro Velez, Novel Certad, Elvis Ruiz
Mechatronics Research Group, Simón Bolívar University (USB)
Sartenejas 89000, Miranda, Venezuela
pdvelez@ieee.org, ncertad@usb.ve, elvisruiz@usb.ve

Abstract—Trajectory generation and tracking algorithms were developed using ROS (Robot Operating System) for the AR.Drone 2.0 quadcopter. Flight paths were created with cubic polynomials and Bézier curves. A PID controller was implemented to drive the AR.Drone from its current position to the desired coordinate point for the trajectory tracking task. Additionally, a navigation method was implemented using attractive and repulsive artificial potential fields to carry out obstacle-avoidance and goal-reaching tasks. The software was tested using simulations in Gazebo and the real AR.Drone. The drone's global position was estimated using an Extended Kalman Filter and visual landmarks.

Keywords—ROS; AR.Drone; Bézier curves; Quadcopter; Artificial Potential Fields; Trajectory Generation; Trajectory Tracking; Control; AR Marker Recognition.

I. INTRODUCTION

Multirotor aerial vehicles have been widely studied in the robotics research community. These vehicles have demonstrated great stability and maneuverability in area surveillance and path-following tasks. Several well-known companies are considering the use of Unmanned Aerial Vehicles (UAVs) for video, photography, search and rescue, and package delivery services. In this study, we have used the four-rotor multirotor robot, also known as a quadrotor, quadcopter, or quadricopter. Several research groups have worked with platforms such as the X-4 Flyer Mark II [1], the OS4 [2], the Draganflyer used for Stanford and Berkeley's STARMAC [3], and the Hummingbird from Ascending Technologies used by [4].

One major challenge in robotics is to bring a mobile platform from a starting point to a goal. Bouktir et al. [5] presented a method to generate time-optimal trajectories composed of a parametric function: the vehicle path was defined, and a monotonically increasing function specified the motion along this path. Their numerical method was intended for minimum-time transfer problems. Chamseddine et al. [6] proposed a flatness-based trajectory planner to drive a quadcopter as fast as possible from an initial to a final position. They employed a Linear Quadratic Regulator (LQR) and a Sliding Mode Controller (SMC); this approach was able to limit the roll and pitch angles when the controller was based on the vehicle's linearized model. Hoffmann et al. [7] developed a tracking controller using line segments connecting sequences of waypoints at a desired velocity; the controller generated attitude reference commands. Castillo et al. [8] presented a trajectory tracking system based on model predictive control that tracked waypoints using small-scale unmanned helicopters. Their controller linearized the model to reduce the computational burden, and the predictive controller was compared to PID position and velocity tracking controllers. Mellinger et al. [9] studied feasible trajectories and controllers that drive a quadrotor to a desired state in state space. Their controller was set to follow three-dimensional trajectories with modest accelerations so that the linearization about the hover state remained acceptable. Navigation was performed by calculating the distance from the closest point on the trajectory to the current position and a unit tangent vector; the error was corrected using a PD controller. Bottasso et al. [10] presented a planning strategy applicable to high-performance unmanned aerial vehicles. Their approach took a three-dimensional sequence of waypoints connected by straight flight trim conditions and smoothed it to make it compatible with the vehicle dynamics.

In this paper we address trajectory generation and tracking for a quadcopter using software developed in ROS for Parrot's AR.Drone 2.0¹. The algorithms compiled in this study communicate with the manufacturer's operating system running in the flight controller. We used cubic polynomials and Bézier functions to generate fixed three-dimensional trajectories in real time. The vehicle was intended to fly a route similar to the one given by the generator. A PID controller was programmed to decrease the position error at each point in time. The controller's reference was published to a ROS topic by a global position estimator. This estimator operated with an Extended Kalman Filter, using the drone's odometry readings for the prediction stage and visual landmarks for the correction stage. In addition, we created a navigation algorithm that uses attractive and repulsive artificial potential fields to reach a goal while avoiding obstacles present in the environment. This algorithm also used the global position estimator. The trajectory generation and tracking system implemented in this work was tested in simulations using Gazebo² and on the real vehicle.

¹ www.ardrone2.parrot.com
² http://gazebosim.org/

II. QUADCOPTER AND ROS

A quadcopter has four propellers located at the ends of each segment of a cross-shaped frame. This is an underactuated system because it has four control inputs for its 6 degrees of freedom. There is one input that produces the total thrust provided equally by each propeller in the direction of the vehicle's Z axis. Also, there are three angular inputs that determine the roll (φ), pitch (θ), and yaw (ψ). A representation of the global and vehicle frames is shown in figure 1.
Fig. 1. Global and vehicle reference frames. A denotes the global reference frame and B the vehicle's. The distance between both frames is represented by ξ.

A. Quadcopter Mathematical Model

In this paper the mathematical model of a quadcopter is based on the equations described by Jirinec [11]. The global and vehicle frames are related by the rotation matrix:

$$R = \begin{bmatrix} c\theta c\psi & c\theta s\psi & -s\theta \\ s\phi s\theta c\psi - c\phi s\psi & s\phi s\theta s\psi + c\phi c\psi & s\phi c\theta \\ c\phi s\theta c\psi + s\phi s\psi & c\phi s\theta s\psi - s\phi c\psi & c\phi c\theta \end{bmatrix} \tag{1}$$

where $c\cdot$ and $s\cdot$ are abbreviations for $\cos(\cdot)$ and $\sin(\cdot)$ respectively. The vehicle's position, linear velocity, and angular velocity are represented by the vectors:

$$\xi = [\,x \ y \ z\,]^T \qquad V = [\,u \ v \ w\,]^T \qquad \omega = [\,p \ q \ r\,]^T$$

The equations of motion are:

$$\dot{\xi} = R^{-1} V \tag{2}$$

$$F = m\dot{V} + \omega \times mV \tag{3}$$

$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = E^{-1} \begin{bmatrix} p \\ q \\ r \end{bmatrix} \tag{4}$$

$$R \begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix} = m \begin{bmatrix} \dot{u} + qw - rv \\ \dot{v} + ru - pw \\ \dot{w} + pv - qu \end{bmatrix} \tag{5}$$

Where:
• m: vehicle's mass
• E: skew-symmetric matrix
• T: thrust equally created by the four propellers

B. AR.Drone 2.0

The AR.Drone quadcopter is a vehicle with substantial complexity and versatility; it has been used by several research groups from various universities. Despite being a relatively low-priced robot, it contains several measuring elements found in more expensive vehicles. Some of these features are:
• HD front 720p camera
• 3-axis gyroscope, 2000°/second precision
• 3-axis accelerometer, ±50 mg precision
• 3-axis magnetometer, 6° precision
• Ultrasound sensors for ground altitude measurement
• 60 FPS vertical QVGA camera for ground speed measurement

C. Robot Operating System

The official description of ROS taken from their website³ is "a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aims to simplify the task of creating complex and robust behavior across a wide variety of robotic platforms." We have taken advantage of this powerful open-source tool to create and implement packages that establish a link between a PC and the AR.Drone. In this work, we installed the Fuerte-Desktop-Full distribution on a computer running Ubuntu 12.04 LTS. The packages and nodes used will be explained later in this article.

III. SOFTWARE AND TRAJECTORY GENERATION

A. AR.Drone communication driver

We used a laptop running Ubuntu 12.04 LTS and ROS Fuerte-Desktop-Full to control the AR.Drone. The package ardrone_autonomy, created and maintained by Autonomy Lab⁴, was installed to establish the communication with the vehicle through Wi-Fi. This package has been built for ROS based on the official SDK released by the manufacturer. The sensor readings and drone's information are stored in a message of type ardrone_autonomy::Navdata and published to the topic /ardrone/navdata.

The drone's data used in the algorithms are:
• state
• rotZ
• vx
• vy
• altd

Linear and angular movement commands are stored in a geometry_msgs::Twist message and published to the topic /cmd_vel. Each component is described as:
• linear.x: percentage of the maximum tilt angle, forward to backward, ∈ [−1, 1]
• linear.y: percentage of the maximum tilt angle, left to right, ∈ [−1, 1]
• linear.z: altitude change rate in meters/second
• angular.z: yaw change rate in degrees/second

³ www.ros.org
⁴ http://wiki.ros.org/ardrone_autonomy
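To illustrate how the Navdata fields listed above can be consumed, the following is a minimal rospy sketch (ours, not part of ardrone_autonomy); the node and callback names are arbitrary:

```python
#!/usr/bin/env python
# Minimal sketch (ours, not part of ardrone_autonomy) of reading the
# Navdata fields used by our algorithms. Node/callback names are arbitrary.
import rospy
from ardrone_autonomy.msg import Navdata

def navdata_callback(msg):
    # state: flight state; rotZ: yaw angle; vx, vy: body-frame
    # velocities; altd: altitude (in the driver's native units).
    rospy.loginfo("state=%d yaw=%.1f vx=%.1f vy=%.1f altd=%d",
                  msg.state, msg.rotZ, msg.vx, msg.vy, msg.altd)

if __name__ == '__main__':
    rospy.init_node('navdata_listener')
    rospy.Subscriber('/ardrone/navdata', Navdata, navdata_callback)
    rospy.spin()
```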

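Similarly, a movement command can be sent through the /cmd_vel interface described above. A minimal sketch, with an illustrative node name and command values:

```python
#!/usr/bin/env python
# Minimal sketch of commanding the drone through /cmd_vel.
# The node name and command values are illustrative only.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('cmd_vel_example')        # hypothetical node name
pub = rospy.Publisher('/cmd_vel', Twist)  # the driver's command topic
rate = rospy.Rate(50)                     # the 50 Hz rate used in our tests
cmd = Twist()
cmd.linear.x = 0.1     # 10% of the maximum forward tilt angle
cmd.linear.z = 0.0     # hold the current altitude
cmd.angular.z = 0.0    # no yaw rotation
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```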
B. Marker recognition and camera positioning

The CCNY Computer Vision Stack⁵ developed a package, ar_pose, that is able to determine a camera's position relative to a marker, tracking it in real time. They integrated the ARToolKit libraries, mainly used for building augmented reality applications. Initially, their algorithm searches for a square shape. Once found, the pattern drawn inside the frame is matched against the database provided by the user. This feature allows for the identification of more than one marker at a time. A couple of markers that are compatible with these libraries are shown in figure 2. The ar_pose package publishes the marker's and camera's position. Figure 3 displays a screenshot of the ar_pose node identifying a single marker; the coordinate frames are drawn using RViz.

Fig. 2. Examples of AR markers compatible with the ARToolKit libraries.

Fig. 3. Screen capture of an RViz window displaying the marker's and camera's coordinate frames using the ar_pose node.

C. AR.Drone's position estimator

The Computer Vision Group⁶ from Technische Universität München developed a ROS package that estimates the drone's position using an Extended Kalman Filter. Their main node uses the ardrone_autonomy::Navdata movement readings to predict the vehicle's position and is able to correct it by using AR markers and the ar_pose node. Several tests have shown that the estimator is accurate even in an environment with few or no markers.

D. Trajectory generation

We created three different ROS nodes that generated trajectories using cubic polynomial functions, Bézier curves, and artificial potential fields. The first two published each trajectory point in real time to a position PID controller, while the third published velocity commands directly to the ardrone_autonomy driver.

1) Cubic polynomial functions: A cubic polynomial curve is defined by [12] as:

$$C(t) = a_3 t^3 + a_2 t^2 + a_1 t + a_0 \tag{6}$$

In order to determine the constants $a_3$, $a_2$, $a_1$, and $a_0$, its first and second order derivatives are found:

$$\dot{C}(t) = 3a_3 t^2 + 2a_2 t + a_1 \tag{7}$$

$$\ddot{C}(t) = 6a_3 t + 2a_2 \tag{8}$$

Assuming that $t \in [0, t_f]$ and that the first derivative is equal to zero at $t = 0$ and $t = t_f$, each constant can be expressed as:

$$a_0 = c_0 \qquad a_1 = 0 \qquad a_2 = 3\,\frac{c_f - c_0}{t_f^2} \qquad a_3 = -2\,\frac{c_f - c_0}{t_f^3}$$

where $c_0$ and $c_f$ are the starting and finishing points respectively.

2) Bézier curves: A piecewise Bézier curve of degree n is represented by [13] as:

$$B(t) = \sum_{i=0}^{n} b_{i,n}(t)\,P_i, \qquad t \in [0, 1] \tag{9}$$

where $P_i$ are control points and:

$$b_{i,n}(t) = \binom{n}{i}\, t^i (1 - t)^{n-i}, \qquad i = 0, 1, \ldots, n \tag{10}$$

3) Artificial potential fields: The vehicle can be driven by a velocity vector field created by a potential field. An attractive potential field is established for the goal, and a repulsive potential field is associated with each obstacle in the environment. The total potential field can be obtained by adding the attractive and repulsive fields. Each field is represented by [14] with the following equations:

$$U_{att}(q) = \frac{1}{2}\,\zeta\, d^2(q, q_{goal}) \tag{11}$$

The attractive force field can be obtained from the gradient:

$$F_{att}(q) = -\nabla U_{att}(q) = -\zeta\, d(q, q_{goal})\,\nabla d(q, q_{goal}) = -\zeta\,(q - q_{goal}) \tag{12}$$

The repulsive potential and force fields are:

$$U_{rep}(q) = \begin{cases} \dfrac{1}{2}\,\eta \left( \dfrac{1}{D(q)} - \dfrac{1}{Q^*} \right)^{2}, & D(q) \le Q^* \\[2mm] 0, & D(q) > Q^* \end{cases} \tag{13}$$

$$F_{rep}(q) = -\nabla U_{rep}(q) = \begin{cases} \eta \left( \dfrac{1}{D(q)} - \dfrac{1}{Q^*} \right) \dfrac{1}{D^2(q)}\,\nabla D(q), & D(q) \le Q^* \\[2mm] 0, & D(q) > Q^* \end{cases} \tag{14}$$

⁵ http://wiki.ros.org/ccny_vision
⁶ https://vision.in.tum.de/
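As a concrete sketch of equations (6)–(10) (our illustration, not the generator nodes' actual implementation), the following Python functions evaluate one axis of a rest-to-rest cubic segment and a Bézier curve; the function names and sample values are ours:

```python
# Sketch of equations (6)-(10): a rest-to-rest cubic from c0 to cf over
# [0, tf], and a Bezier curve of degree n. Sample values are illustrative.
from math import comb

def cubic_point(c0, cf, tf, t):
    """C(t) with C(0)=c0, C(tf)=cf and zero velocity at both ends."""
    a2 = 3.0 * (cf - c0) / tf**2
    a3 = -2.0 * (cf - c0) / tf**3
    return a3 * t**3 + a2 * t**2 + c0          # a1 = 0, a0 = c0

def bezier_point(points, t):
    """B(t) for control points P0..Pn (eq. 9), with t in [0, 1]."""
    n = len(points) - 1
    return sum(comb(n, i) * t**i * (1 - t)**(n - i) * p
               for i, p in enumerate(points))

# Example: one axis moving from 0 m to 2 m in 5 s, sampled at mid-time.
print(cubic_point(0.0, 2.0, 5.0, 2.5))            # 1.0, the midpoint
print(bezier_point([0.0, 0.5, 1.5, 2.0], 0.5))    # 1.0 for these points
```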

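Likewise, a minimal sketch of the force computations in equations (12) and (14) for a planar case; the gains ζ and η, the influence distance Q*, and the disc-obstacle distance model are illustrative assumptions:

```python
# Sketch of the force fields in equations (12) and (14), in the plane.
# Gains (zeta, eta) and the influence distance Q* are illustrative.
import numpy as np

def attractive_force(q, q_goal, zeta=1.0):
    """F_att(q) = -zeta * (q - q_goal), eq. (12)."""
    return -zeta * (q - q_goal)

def repulsive_force(q, q_obs, radius, q_star=2.0, eta=1.0):
    """F_rep(q) from eq. (14) for one disc obstacle of the given radius.
    D(q) is the distance to the obstacle boundary; grad D points away."""
    to_center = q - q_obs
    d = np.linalg.norm(to_center) - radius      # D(q)
    if d > q_star:                              # outside the influence zone
        return np.zeros_like(q)
    grad_d = to_center / np.linalg.norm(to_center)
    return eta * (1.0 / d - 1.0 / q_star) * (1.0 / d**2) * grad_d

# Total field at the drone's position: goal at (10, 0), obstacle at (5, 1).
q = np.array([4.0, 0.0])
f = attractive_force(q, np.array([10.0, 0.0])) + \
    repulsive_force(q, np.array([5.0, 1.0]), radius=1.0)
print(f)   # the velocity command would follow this vector's direction
```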
E. PID Controller

A PID controller node was created to communicate with the position estimator and the trajectory generator. In this work, the controller had the task of guiding the quadcopter so that it followed the trajectory published by the generator. The main node was set with proportional, integral, and derivative gains. The controller held the AR.Drone's position at the origin until the trajectory generator started publishing the new reference points. The altitude, yaw, and drone's state were monitored using the information provided by the topic /ardrone/navdata. (A minimal sketch of this correction loop is shown after figure 6.)

The nodes previously discussed were modified and programmed to run at frequencies set by the user. For our tests, the nodes were set to communicate at 50 Hz. A summary of the software execution is presented below:

1) The vehicle flight controller was started by running the ardrone_autonomy node.
2) The position estimator subscribed to the /ardrone/navdata and bottom camera topics. The drone's estimated global position was published.
3) The PID controller used the AR.Drone's sensor readings and state. With the estimated position, the controller performed a starting-point position hold. A message was published to enable the trajectory generator.
4) The trajectory generator started publishing the coordinate points that create the path to be followed.
5) The PID controller performed the real-time position correction task and landed the quadcopter when the last trajectory point had been reached.

IV. EXPERIMENTAL RESULTS

We tested the software using the simulator Gazebo⁷ to ensure adequate flying behavior. Both our simulated and real tests generated and tracked trajectories longer than 20 meters. We set an array of AR markers on the takeoff platform to generate an accurate position estimation before the new trajectory reference points were published. The platforms for both the real and simulated tests are shown in figure 4.

Fig. 4. Array of AR markers placed on the takeoff platform used in the simulations and real tests. (a) AR.Drone's simulation; (b) AR.Drone 2.0.

A. Cubic polynomial functions

We established a three-segment trajectory in which the movement along each axis was traced by a cubic polynomial function, as shown in figure 5. With this setup, the quadcopter started moving slowly, increasing to a top speed in the first half of each segment. As for the second half, the vehicle's speed slowed down progressively. This ensured continuity in each transition. The graphs using the simulator and the AR.Drone are presented in figures 6 and 7.

Fig. 5. Test trajectory using cubic polynomial functions for each axis. (a) Top view; (b) X and Y position over normalized time, showing the three segments between control points P0–P3.

Fig. 6. Test flight using the simulator and cubic polynomial function generator. Each pair of colored circles shown in (a) and (d) represents the generator's and vehicle's position at the same time. The highlighted segment in (b) is the drone's position during the generation time. (a) Top view; (b) 3D view; (c) Distance from the platform; (d) Position error (arithmetic mean: 2.3161 m).

⁷ http://gazebosim.org/
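The correction loop of Section III-E can be sketched as follows. This is our illustration rather than the actual node: the gains, the node name, and the get_position_error() helper (standing in for the estimator and generator subscriptions) are hypothetical:

```python
#!/usr/bin/env python
# Sketch (ours) of the Section III-E correction loop: a PID on the position
# error, publishing tilt commands at 50 Hz. Gains are illustrative only.
import rospy
from geometry_msgs.msg import Twist

class AxisPID(object):
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, u))   # tilt commands are bounded in [-1, 1]

def get_position_error():
    # Placeholder: in the real system this comes from subscribing to the
    # estimator's and trajectory generator's topics.
    return 0.0, 0.0

rospy.init_node('pid_tracker')           # hypothetical node name
pub = rospy.Publisher('/cmd_vel', Twist)
rate = rospy.Rate(50)                    # the 50 Hz used in our tests
pid_x = AxisPID(0.5, 0.0, 0.3, 0.02)     # illustrative gains
pid_y = AxisPID(0.5, 0.0, 0.3, 0.02)
while not rospy.is_shutdown():
    ex, ey = get_position_error()        # reference minus estimated position
    cmd = Twist()
    cmd.linear.x, cmd.linear.y = pid_x.step(ex), pid_y.step(ey)
    pub.publish(cmd)
    rate.sleep()
```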

Fig. 7. Test flight using the AR.Drone and cubic polynomial function generator. Each pair of colored circles shown in (a) and (d) represents the generator's and vehicle's position at the same point in time. The highlighted segment in (b) is the drone's position during the generation time. (a) Top view; (b) 3D view; (c) Distance from the platform; (d) Position error (mean: 3.3178 m).

B. Bézier Curve

In this section we selected a Bézier curve created with six control points. Unlike the previous method, the curve does not have a progressive movement at the starting and finishing points. The graphs using the simulator and the AR.Drone are presented in figures 8 and 9.

Fig. 8. Test flight using the simulator and Bézier curve generator. Each pair of colored circles shown in (a) and (d) represents the generator's and vehicle's position at the same point in time. The highlighted segment in (b) is the drone's position during the generation time. (a) Top view; (b) 3D view; (c) Distance from the platform; (d) Position error (mean: 1.145 m).

Fig. 9. Test flight using the AR.Drone and Bézier curve generator. Each pair of colored circles shown in (a) and (d) represents the generator's and vehicle's position at the same point in time. The highlighted segment in (b) is the drone's position during the generation time. (a) Top view; (b) 3D view; (c) Distance from the platform; (d) Position error (mean: 3.5988 m).

C. Artificial Potential Fields

In this section, we set two takeoff platforms located at [0, 0, 0] and [10, 0, 0] to test the software's ability to guide the drone to the goal. The environment contained five obstacles, each with a 1 meter radius and infinite height. Their location and geometry were predefined in the potential field generation software. An array of markers on each platform was used to set the vehicle's initial position. Unlike the two methods previously described, the software generated the movement commands from a velocity vector field. The navigation was not restricted to a fixed trajectory, allowing the ROS node to send commands based on the drone's current position. This exercise was completed using the simulator and the AR.Drone and can be seen in figures 10 and 11. A reference path was generated using a massless particle for visual purposes only (it should not be considered as the vehicle's expected behavior). In both the simulation and real tests the vehicle flew a path similar to the reference.

Fig. 10. Test flight using the simulator and artificial potential fields. The vehicle navigated in an environment with five obstacles. Two different locations were used for the takeoff platform, as shown in (a) Platform A and (b) Platform B.

Fig. 11. Test flight using the AR.Drone and artificial potential fields. The vehicle navigated in an environment with five obstacles. Two different locations were used for the takeoff platform, as shown in (a) Platform A and (b) Platform B.

V. CONCLUSION

In this article we implemented trajectory generation and tracking algorithms using polynomial functions, Bézier curves, and artificial potential fields. This software was created for ROS and used in the simulator Gazebo and on the quadcopter AR.Drone 2.0. Both the trajectory generation and the tracking were executed in real time for the first two methods, where the trajectory tracking task was performed by a PID controller. As for the third method, the system created an artificial potential field for an environment with five obstacles and one goal. In this case, the movement commands were sent by the same node since there was no trajectory to follow.

After running 40 simulations and 46 tests with the real vehicle we could verify the software's performance. As can be seen from the experimental plots, the system was able to fly the drone along a trajectory similar to the one established by the first two methods. Using the potential fields node, the AR.Drone reached the goal while evading the nearest obstacles. This task was achieved taking off from different origin points, using visual markers to set the vehicle's initial position. Once the flight data was collected, we observed that the highest mean tracking error using the third-degree polynomials was 3.73 meters (simulation) and 5.13 meters (real vehicle) in trajectories of 60 meters. In the Bézier curve tests, we observed highest mean tracking errors of 2.8 meters (simulation) and 5.06 meters (real vehicle) in trajectories of 40 meters. External factors such as wind and sensor noise, combined with the lack of visual landmarks across the whole flight area, increased the error in the tests using the real quadcopter.

From the trajectory generation methods used in this study, we can conclude that the implementation of artificial potential fields provides a strong alternative in an obstacle-populated environment when there is no time requirement for the navigation. As previously stated, this method does not provide a strict path to follow, but it can be modified to do so in future work. When defining a path to follow, we found that piecewise functions such as Bézier curves perform better than the polynomial functions, ensuring total continuity along the trajectory. We have also considered using B-splines as an alternative for future work in order to avoid using high-degree polynomials.

REFERENCES

[1] P. Pounds, R. Mahony, and P. Corke, "Modelling and control of a quad-rotor robot," in Proceedings of the Australasian Conference on Robotics and Automation 2006. Australian Robotics and Automation Association Inc., 2006.
[2] S. Bouabdallah, P. Murrieri, and R. Siegwart, "Design and control of an indoor micro quadrotor," in IEEE International Conference on Robotics and Automation (ICRA '04), 2004, pp. 4393–4398, vol. 5.
[3] G. Hoffmann, D. Rajnarayan, S. Waslander, D. Dostal, J. S. Jang, and C. Tomlin, "The Stanford testbed of autonomous rotorcraft for multi agent control (STARMAC)," in Digital Avionics Systems Conference (DASC 04), The 23rd, vol. 2, Oct. 2004, pp. 12.E.4–121–10.
[4] R. Mahony, V. Kumar, and P. Corke, "Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor," IEEE Robotics & Automation Magazine, vol. 19, no. 3, pp. 20–32, Sept. 2012.
[5] Y. Bouktir, M. Haddad, and T. Chettibi, "Trajectory planning for a quadrotor helicopter," in 2008 16th Mediterranean Conference on Control and Automation, Jun. 2008, pp. 1258–1263.
[6] A. Chamseddine, T. Li, Y. Zhang, C.-A. Rabbath, and D. Theilliol, "Flatness-based trajectory planning for a quadrotor unmanned aerial vehicle test-bed considering actuator and system constraints," in American Control Conference (ACC), June 2012, pp. 920–925.
[7] G. Hoffmann, S. Waslander, and C. Tomlin, "Quadrotor helicopter trajectory tracking control," in AIAA Guidance, Navigation and Control Conference and Exhibit, Aug. 2008, pp. 1–14.
[8] C. L. Castillo, W. Moreno, and K. P. Valavanis, "Unmanned helicopter waypoint trajectory tracking using model predictive control," in 2007 Mediterranean Conference on Control and Automation, Jun. 2007.
[9] D. Mellinger, N. Michael, and V. Kumar, "Trajectory generation and control for precise aggressive maneuvers with quadrotors," The International Journal of Robotics Research, vol. 31, no. 5, pp. 664–674, Jan. 2012.
[10] C. Bottasso, D. Leonello, and B. Savini, "Path planning for autonomous vehicles by trajectory smoothing using motion primitives," IEEE Transactions on Control Systems Technology, vol. 16, no. 6, pp. 1152–1168, Nov. 2008.
[11] T. Jirinec, "Stabilization and control of unmanned quadcopter," Master's thesis, Czech Technical University in Prague, 2011.
[12] J. H. Gallier, Curves and Surfaces in Geometric Modeling: Theory and Algorithms. Morgan Kaufmann, 2000.
[13] G. E. Farin, Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide, 4th ed. Orlando, FL, USA: Academic Press, Inc., 1996.
[14] H. Choset, K. M. Lynch, S. Hutchinson, G. A. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun, Principles of Robot Motion: Theory, Algorithms, and Implementations. Cambridge, MA: MIT Press, June 2005.