Scirobotics Abm5954
Downloaded from https://www.science.org at Indian Institute of Technology, Kharagpur on January 04, 2024
posed planner deforms trajectory shapes and adjusts time allocation synchronously based on spatial-temporal joint
optimization. A high-quality trajectory thus can be obtained after exhaustively exploiting the solution space within
only a few milliseconds, even in the most constrained environment. The planner is finally integrated into the
developed palm-sized swarm platform with onboard perception, localization, and control. Benchmark compari-
sons validate the superior performance of the planner in trajectory quality and computing time. Various real-world
field experiments demonstrate the extensibility of our system. Our approach advances aerial robotics in three aspects: the capability of cluttered-environment navigation, extensibility to diverse task requirements, and coordination as a swarm without external facilities.
Movie 1. A comprehensive presentation of the proposed swarm. This video cover is a composite image of the aerial swarm in a bamboo forest.

Last, all such capabilities should be placed into the smallest container because weight and volume are directly related to the flight time and to the narrowest acceptable space.

Unfortunately, achieving these four aspects together is internally contradictory. Higher optimality mostly comes from sophisticated modeling and more iterations or trials in the solution space, all of which are achieved at the cost of increased computing time. Higher extensibility requires the problem to be defined in a more general form at the sacrifice of potential problem-specific optimization that can improve optimality and reduce computing time. Then, between optimality and extensibility, as various user-defined objectives are imposed, the problem becomes increasingly complex, which makes it challenging to find a solution. Only satisfying some basic requirements such as safety and feasibility while minimizing time and maximizing smoothness for aerial swarms is already a difficult problem (21, 22, 23), and it is even more difficult to achieve simultaneously on a miniature platform. That is why previous studies have been unable to take the step from structured, human-made environments to the unforeseen wild.

In the real world, various aerial swarms have been deployed, including enormous, impressive drone light shows presented by Intel (24), High Great (25), and CollMot (26). Nevertheless, behind these large-scale and successful commercial usages, the swarm, positioned by the Global Navigation Satellite System, merely follows preprogrammed trajectories and therefore cannot be operated in unforeseen places with obstacles. Autonomous outdoor aerial flocking was presented in (27–29), where drones adjusted their motions according to others' states in real time using simple reactive rules such as the potential field (PF) method. However, the lack of optimality consideration during the flight resulted in actions that were not sufficiently coherent. Such inconsistency further requires large neighbor distances [>10 m on average reported in (29)] for safety and thus makes these methods unsuitable in cluttered environments. Moreover, although they successfully imitate the behaviors of large flocks of birds, accurately operating each individual is difficult because the parameters are tightly coupled with specific deployment scenarios and neighbor states. To achieve rapid and safe collective motions among dense obstacles, Soria et al. (30) incorporated model predictive control into the PF. Nevertheless, higher performance was achieved at the cost of heavy computation and a centralized organization that lacked the scalability for a large swarm size (30). In addition to navigation

ment unit (IMU) on a miniature swarm platform was first released by Loianno et al. (32), but only some trajectory planning in sparse, known environments was demonstrated, and the swarm did not enter the wild. Furthermore, purely visual-inertial odometry (VIO)–based localization in their work may drift in long-range flights. To make a swarm more efficient and robust in dense environments, our previous work, EGO-Swarm (33, 34), proposed an optimization-based method. A full-stack navigation solution for aerial swarms deployed in the forest is rare. One limitation of our previous work is that the potential for scalability lacks solid validation because we used only three drones. Moreover, the planner is unable to adjust the time profile, which would occasionally produce less optimal and even unsafe trajectories. These imperfections still leave the aerial swarm in the cluttered wild as an unsolved problem.

Observing how nature tackles this navigation challenge, two mainstream approaches have inspired robotics researchers. Insects perform short-term reactive actions, whereas birds prefer relatively long-term smooth maneuvers (35). This is because birds have a sharper sense of sight and movement, higher–degree-of-freedom motion systems, and more brain capacity compared with insects (35, 36). These two approaches have also inspired two mainstream drone navigation methods in the literature: insects for reaction-based applications and birds for trajectory planning approaches. Among the two, the former contains extremely lightweight and efficient solutions in terms of computation and memory, allowing for even lighter drones, whereas the latter shows higher optimality and flexibility. Accordingly, for higher task efficiency and extensibility in field environments, we choose the latter approach. Here, we address this TEEM contradiction by incorporating spatial-temporal optimization techniques for trajectory optimality and formulating trajectory planning as a multiobjective optimization under a goal-chasing framework for extensibility. Furthermore, the combination of the above two features renders fast convergence, therefore guaranteeing economical computing, which makes a miniature platform possible.

Proposed systematic solution
After investigating a variety of applications, we find that the key to TEEM is trajectory planning, which not only deforms trajectory shapes but also adjusts the time profiles to exhaustively exploit the solution space and squeeze the capability of drones. If only spatial deformation is performed (33, 37), as compared in the "Benchmark comparisons" section, drones tend to circumnavigate to wait for others while passing through a narrow passage, which hinders later drones and results in inferior and even unsafe trajectories. Therefore,
Fig. 1. Overview of the proposed aerial swarm. (A) Static closeup. (B) Comparison with the swarm gradient bug algorithm (SGBA) (8), flocking (29), and nonlinear model predictive control (NMPC)–Swarm (30). The ticks of each axis, from the graph center outward, are as follows. Optimality: handcrafted rules, optimized rules, spatial optimization, and spatial-temporal optimization; size: arm-sized and palm-sized; computing: offboard and onboard; weight: below 100 g, above 100 g, and above 1 kg; extensibility: task specific, tasks with specific formulations, and tasks that can be analytically modeled in terms of decision variables.
simultaneously planning the shape and time profile of a trajectory, also called spatial-temporal trajectory planning, is crucial for safe and efficient drone flights. Despite this, such joint optimization has been a historically difficult problem for multicopters because the spatial and temporal parameters that together determine the trajectory are highly coupled (38, 39), which, for example, can result in ~40 min of computation for a single time-optimal trajectory (1). In the proposed approach, we achieve real-time spatial-temporal optimization by decoupling the spatial and temporal parameters in objective function computation and by achieving a linear-complexity mapping between the optimized variables and the intermediate variables that represent a trajectory.

Under the trajectory-planning framework, the task-specific requirements of generating a trajectory can always be formulated as goals to reach; multiple objectives, such as shorter flight time, higher smoothness, and closeness to a given path; and constraints, such as collision avoidance and dynamical feasibility. For the first requirement, we build our planner under the goal-chasing scheme, which receives users' goals continuously and keeps chasing the latest one. For the second and third requirements, the nonconvexity among them makes the optimization problem difficult to solve. To achieve high compatibility, we adopt the constraint transcription method (40), which converts all objectives and constraints to weighted penalties. Specifically, penalties derived from constraints are assigned weights orders of magnitude higher than those of other objectives. Note that here, the terms "objectives" and "constraints" refer to task requirements, whereas "penalties" are their respective mathematical formulations forming the final cost function. The trajectory-planning problem can then be solved quickly by standard solvers leveraging sparse parametric optimization and constraint transcription. To simplify the situation, we provide detailed examples of adding task-specific objectives and constraints intuitively with preformulated general-purpose penalties (GPPs). GPPs consist of time minimization, smoothness maximization, collision avoidance, and dynamical feasibility, which are defined in Materials and Methods. This trajectory-planning framework is illustrated in Fig. 2D.

Apart from the proposed trajectory planning, we adopt visual-inertial odometry running on each drone independently for aerial swarm localization. However, accumulated odometry drift may result in drone collisions even while the drones continue to report that they are maintaining a safe distance, so we develop a decentralized drift-correction algorithm that minimizes the relative distance error measured from onboard ultra-wideband (UWB) sensors.

As shown in Fig. 2 (A and B), each drone is equipped with full perception, localization, planning, and control functionalities and is loosely coupled with the others by a broadcast network sharing trajectories. Coincidentally but reasonably, the proposed system is similar to birds capable of flying freely through the forest while avoiding obstacles and other moving creatures. For example, in short-range navigation, birds mainly rely on eyes and their vestibular system (41), and we, accordingly, develop improved visual-inertial odometry. Furthermore, birds adjust path and speed simultaneously to avoid collision while considering flight time and smoothness to save energy (35), and we thus propose joint optimization of spatial-temporal trajectories with multiple objectives. Beyond the capability of small birds, we further use the advantages of our electrically powered artificial system, characterized by high-fidelity wireless communication for trajectory sharing and high-speed computing for fast planning. Furthermore, decentralized coordination concerning both individual and swarm intelligence is met naturally by our solution, which improves robustness. As Murphy (42) pointed out, a weakly centralized, distributed organization of the swarm shows higher robustness and resilience and can even retain actions when communication and Global Positioning System (GPS) data are lost.

We propose a versatile multirobot navigation solution, allowing users to incorporate various task-specific requirements while producing locally spatial-temporal optimal motions in real time. The proposed solution is embodied on drones that are only the size of a
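The weighted-penalty construction described above can be sketched in a few lines. The following Python toy is illustrative only, not the authors' implementation: the specific penalty terms, sample spacing, obstacle, and weight values are all assumptions, with the constraint-derived terms (collision, feasibility) weighted orders of magnitude above the soft objectives, as the text prescribes.

```python
import numpy as np

def total_cost(traj, penalties, weights):
    """Weighted sum of penalty terms; constraint-derived terms carry
    weights orders of magnitude larger than the soft objectives."""
    return sum(weights[name] * fn(traj) for name, fn in penalties.items())

# Illustrative penalty terms on a sampled trajectory (N x 3 positions,
# uniform sample spacing dt). These stand in for the paper's GPPs.
dt = 0.1

def time_penalty(traj):            # shorter flight time
    return len(traj) * dt

def smoothness_penalty(traj):      # squared third difference ~ jerk
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return float(np.sum(jerk**2)) * dt

def collision_penalty(traj, obstacle=np.zeros(3), clearance=0.3):
    d = np.linalg.norm(traj - obstacle, axis=1)
    return float(np.sum(np.maximum(clearance - d, 0.0)**3))  # cubic hinge

def feasibility_penalty(traj, v_max=2.0):
    v = np.linalg.norm(np.diff(traj, axis=0) / dt, axis=1)
    return float(np.sum(np.maximum(v - v_max, 0.0)**3))

penalties = {"time": time_penalty, "smooth": smoothness_penalty,
             "collide": collision_penalty, "feasible": feasibility_penalty}
weights = {"time": 1.0, "smooth": 1.0, "collide": 1e4, "feasible": 1e4}

traj = np.linspace([0, 0, 1], [5, 0, 1], 50)  # straight 5-m line, 1 m high
print(total_cost(traj, penalties, weights))   # ≈ 5.0: only the time term is active
```

For this slow, straight, obstacle-free trajectory, the heavily weighted constraint penalties are exactly zero, so the cost reduces to the flight-time term alone; any violation would dominate the sum and drive the optimizer back to feasibility.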
Fig. 2. Hardware and system architecture specifics. (A) Hardware components of our flight platform. See the “Palm-sized drone hardware” section for more details.
(B) The system architecture. Visual-Inertial State Estimator (5) and probabilistic occupancy grid (7) are adopted for localization and mapping, respectively. (C) Computation
and memory usage. Planning and mapping run in the same thread to reduce latency. (D) The planning framework.
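The drift-correction idea, minimizing relative-distance error against UWB range measurements, can be illustrated with a toy least-squares sketch. This is not the authors' algorithm: the gradient-descent scheme, neighbor layout, and step size below are assumptions for illustration only.

```python
import numpy as np

def correct_drift(p_self, neighbors, uwb_ranges, iters=200, lr=0.1):
    """Estimate a position offset for this drone so that distances to
    neighbor positions match UWB range measurements (least squares).
    p_self: (3,) VIO position; neighbors: (K, 3); uwb_ranges: (K,)."""
    offset = np.zeros(3)
    for _ in range(iters):
        diff = p_self + offset - neighbors            # (K, 3)
        d = np.linalg.norm(diff, axis=1)              # predicted ranges
        # gradient of sum_k (d_k - r_k)^2 with respect to the offset
        grad = 2 * ((d - uwb_ranges) / np.maximum(d, 1e-9)) @ diff
        offset -= lr * grad
    return offset

# The drone's VIO has drifted +0.5 m in x; UWB ranges reflect the true position.
true_pos = np.array([1.0, 0.0, 1.0])
vio_pos = true_pos + np.array([0.5, 0.0, 0.0])
neighbors = np.array([[0, 0, 1], [2, 2, 1], [0, 3, 1], [3, -1, 2.0]])
ranges = np.linalg.norm(true_pos - neighbors, axis=1)
print(vio_pos + correct_drift(vio_pos, neighbors, ranges))  # ≈ true_pos
```

With enough well-spread neighbors, the squared range residuals pin down the drift; in the decentralized setting each drone would run such a correction against its own neighbors' broadcast positions.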
performance and potential. In simulations, quantitative evaluations of several common metrics are conducted against various state-of-the-art aerial swarm planners.

Fly through dense forest
This experiment is designed to demonstrate swarm navigation with full autonomy in a highly dense wild environment, i.e., a bamboo forest, without the drones harming themselves or the plants. In this experiment, the penalty function contains only GPPs, and the goal is set 65 m ahead, outside the forest. The paths in Fig. 3I reveal a notable advantage of trajectory planning: The planned trajectories always connect one gap to the next directly and smoothly. With reaction-based methods, by comparison, the drones always show an explicit and nonsmooth turn-away pattern (8) directly in front of an obstacle or before hitting other drones. From the photos, the available space between two bamboos may be less than 30 cm wide, and therefore, only miniature-sized drones can pass. More severely, these narrow gaps further limit the solution space, especially for drones that have neighbors on both the left- and right-hand sides. The constraints become even tighter when only one available gap exists for multiple drones to pass through together. To achieve safety and efficiency, naive handcrafted strategies, such as altering altitude to avoid collisions, are undesirable because of downwash disturbance and energy waste. Under such situations, our spatial-temporal trajectory planner
Fig. 3. Challenging wild navigation with bamboos and various other obstacles. (A) Ten drones fly through highly dense bamboos. (B) A drone flies through a narrow gap. (C and D) Collision avoidance. (E) A trunk. (F) Messy branches. (G) Tilted bamboos. (H) Uneven ground. (I) Trajectories and the combined map recorded from each drone. Circled letters with white arrows indicate the places shown in the panels above. Experimental visualizations in this paper share a legend identical to the one here.
implicitly finds solutions in a general problem formulation by adjusting time profiles so that multiple drones change only the necessary velocity smoothly and then pass the gap in a queue, which produces lower cost from the planner's perspective compared with handcrafted rules. A quantitative analysis of this procedure can be found in section S4 of the Supplementary Materials.

Besides the dense, vertically growing bamboos, other kinds of obstacles exist, including tilted bamboos, trunks, low bushes, weedy ditches, uneven ground, and blown leaves blocking cameras (Fig. 3, A to H), which necessitate planning the trajectory in three dimensions. Such an unstructured environment, composed of obstacles with irregular shapes and dense distributions, validates the capability of navigating in most cluttered places, such as disaster scenarios, let alone in the regular artificial world. The video of this experiment is Movie 2.

Formation navigation in the wild
This experiment demonstrates the extensibility of the proposed unified trajectory planning by adding a formation penalty to the GPPs. Formation flight is widely used in drone light shows and has been demonstrated in cooperative transportation (43, 32), but all these demonstrations are presented in empty or manually controlled environments. The proposed system brings the formation into a previously unknown wild environment. Here, the formation is defined as maintaining a preferred moving shape, which means that the drones translate with fixed relative positions. Meanwhile, each drone also independently navigates around obstacles. The formation penalty is a regulation term toward the desired positions that maintain the formation. The obstacle density is reduced in this experiment compared with that in the "Fly through dense forest" section to make the formation distinguishable, but standing bushes, low-to-high trees, and two human-made iron pillars still exist, as shown in Movie 3.

Following the planned trajectories, the swarm flies through the woods while staying in formation. From the deformation curve and velocity profile in Fig. 4 (A and D), we conclude that the swarm maintains a formation, although sometimes drones must deviate to avoid previously unknown obstacles and then gather speed to catch up with the formation. Note that at timestamps 12 and 25 s, the average velocity decreases automatically as drones avoid trees and increases when they are wholly back in open space. In this case, the velocity change of some individuals is propagated to the entire formation without explicit preprogramming. This phenomenon shows the implicit balance between safety and flight time, where the slowdown near obstacles reserves more reaction time for potential collisions and the whenever-possible speedup reduces flight time. The video of this experiment is Movie 3.

Intensive reciprocal avoidance evaluation
Unlike other tests, in which drones move along a similar direction and the actions of avoiding others are not apparent, the experiment shown in Fig. 5 demonstrates random-direction flight in a confined space, therefore maximizing the necessity of inter-robot collision avoidance. This setting mimics the most fundamental requirements for dense air traffic among skyscrapers: navigating safely, efficiently, and individually. To validate such capability for 10 drones, goals on a 3-m-radius circle are assigned randomly to drones that have arrived at their previous goals. Only the basic GPPs in trajectory planning are adopted. In addition to a thick tree trunk and a camera-mounted tripod in the flight area, to better imitate real-world situations during the flight, we gradually place cuboid and cylindrical obstacles to mimic newly built buildings, walk through the area as large moving obstacles, and hold and move a drone as a natural disturbance. Next, we shut down all the ground localization anchors (used only in this experiment) to imitate the loss of global positioning.

Because safety and efficiency are two main concerns of transportation systems, we evaluate the minimum distance to collision and the total number of completed deliveries (total reached goals) during the 3-min flight. As shown in Fig. 5D, during the entire flight, each drone is modeled as a sphere with a 7-cm radius. The drones manage to keep safe distances from both obstacles and other drones, despite unpredictable events. The number of reached goals increases linearly as time increases. Thus, a near-constant transporting rate under different obstacle densities is achieved owing to the local optimality of the planned trajectories. The video of this experiment is Movie 4.

Multidrone tracking with target occlusion
This experiment demonstrates the potential of adding high data-load hardware and running more computationally expensive software on the proposed miniature platform with extra user-defined objectives. Swarm tracking can be used in multiview aerial photography and videography, which have been attracting interest recently (44, 45), as they can take comprehensive recordings of the participant and provide more material for post-editing. To track and record, the drones are equipped with extra RGB (red-green-blue) cameras that not only capture vivid videos but also act as representative "high data-load" hardware, simultaneously running video compression, data storing, and object detection to validate the platform's extensibility in computationally expensive tasks. In our experiment, the focus is a human participant moving in the woods. To catch up with the human while avoiding obstacles and other drones, we design a tracking penalty along with the GPPs to plan the desired trajectory.

Movie 2. Flying through dense forest.
Movie 3. Formation navigation in the wild.
Fig. 4. Swarm navigation in formation with prior-unknown obstacles. (A) Deformation is defined as the deviation between the current drone position and the assigned position that forms the desired formation. (B) Maps, trajectories, and drones recorded during the flight. (C) Snapshots of adaptive deformation and reformation to pass through the given area. (D) The velocity profile. In (A) and (D), solid curves are the average values over all drones, while the top and bottom bounds of the transparent regions indicate the maximum and minimum values. Note that (A) to (D) are aligned to provide a clearer evaluation. (E) A photograph of the formation of real drones, which are about 1 m apart.
Furthermore, multidrone tracking improves the robustness to occlusion, as shown in Fig. 6D, because the object position can be acquired by multiple drones through communication. Each drone tries to receive and fuse as many observations as possible, from itself and from others, to improve occlusion resistance. From the results in Fig. 6, the human can move forward without worrying about drone collisions or losing the target. The video of this experiment is Movie 5.

Benchmark comparisons
We compare the proposed approach against two state-of-the-art planners, i.e., MADER (37) and our previous work, EGO-Swarm (33). Both planners belong to the category of decentralized, asynchronous trajectory planning methods for drone swarms. Here, MADER shows impressive collision avoidance with both densely placed static and dynamic obstacles, whereas EGO-Swarm, validated in the wild, is a lower-complexity systematic solution. The simulation platform is a personal computer with an Intel Core i7-10700K CPU (central processing unit) running at 4.8 GHz and 24-GB RAM (random access memory) at 3200 MHz, on which the drones run in independent threads in parallel to maintain consistency with the real-world, decentralized system architecture.

Figure 7A visualizes the planned trajectories in two challenging scenarios, i.e., flying through a narrow gate and through an obstacle-rich area at a velocity of 2 m/s. The negative effect of lacking temporal trajectory optimization is revealed by both MADER and EGO-Swarm generating detours for later drones to wait for their priors. The proposed planner shows the smoothest trajectories because the drones can adjust time profiles to achieve spatial collision avoidance. We assess the trajectory-planning performance considering four different metrics under various desired velocities and average
Fig. 5. Evaluation of intensive reciprocal collision avoidance with unexpected events. (A) Overview of flying to randomly given goals and adding obstacles during the flight. (B) Top pictures show drones avoiding other drones and obstacles. Bottom pictures show a sequence of snapshots recording inter-robot collision avoidance. (C) Photos from left to right show a researcher adding obstacles, interfering with a drone, and shutting down global localization anchors during an entire flight. (D) Minimum relative distance of each drone to the others. The solid line records the lower bound of the minimum, i.e., the closest distance among all agents. The collision distance equals the drone diameter. Safety is guaranteed during the entire flight. (E) The trajectories during the 3-min flight. (F) Number of goals reached. Blue bars indicate the time slots when new obstacles are added; the red bar means that the global localization anchors are shut down. Note that (A) and (B) to (F) belong to two different flights.
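The safety metric plotted in Fig. 5D, the closest pairwise distance among all agents at each timestamp, can be computed from logged positions as follows; the short trajectory log here is synthetic, for illustration only.

```python
import numpy as np

def min_relative_distance(positions):
    """positions: (T, N, 3) array of N drone positions over T timestamps.
    Returns, per timestamp, the closest distance among all drone pairs
    (the solid lower-bound line of Fig. 5D)."""
    T, N, _ = positions.shape
    diffs = positions[:, :, None, :] - positions[:, None, :, :]  # (T, N, N, 3)
    dists = np.linalg.norm(diffs, axis=-1)                       # (T, N, N)
    dists[:, np.arange(N), np.arange(N)] = np.inf                # ignore self-distance
    return dists.min(axis=(1, 2))

# Synthetic log: two drones approach and then separate along x.
t = np.linspace(0, 1, 5)[:, None]
drone_a = np.hstack([0 * t, 0 * t, 0 * t])                 # fixed at the origin
drone_b = np.hstack([1.0 - 0.6 * np.sin(np.pi * t), 0 * t, 0 * t])
log = np.stack([drone_a, drone_b], axis=1)                 # (5, 2, 3)
print(min_relative_distance(log))  # dips to 0.4 m at mid-flight
```

Comparing this curve against the collision distance (sphere diameter) over the whole flight is exactly the check that certifies safety in the experiment above.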
Fig. 6. Multidrone tracking with target occlusion. (A) Tracking using four drones equipped with RGB cameras running computationally expensive tasks, including video compression and neural networks. (B) Trajectories of the four drones. Rhombi with black edges are drone positions at four timestamps. (C) Heatmaps of target distributions in camera views. Yellow regions indicate more frequent target appearance. (D) A time sequence demonstrating the system's resistance to target occlusion.
Movie 5. Multidrone tracking with target occlusion.

Swarm playground
To encourage more involvement and further development, all the code is included in the Supplementary Materials, packaged for user-friendly running and interaction; we name this package the swarm playground. In this playground, users can watch a swarm of 40 drones fly freely to given goals (Fig. 8A), watch seven drones form a centered hexagon (Fig. 8B), or watch drones swap positions under either predefined
Fig. 7. Benchmark comparisons. (A) Trajectory shape visualization. Top: Six drones fly through a narrow gate. Bottom: Ten drones fly across obstacles. All the drones start together at an identical desired velocity, and the start and goal positions are reversed to enforce reciprocal collision avoidance. The edge length of the grid on the ground is 1 m. (B) Metric comparisons visualized using violin plots and bar graphs. The computing time is the time used for planning. The minimum clearance records the minimum distance to other drones or obstacles minus the 0.2-m drone radius, where a negative value indicates a crash. The control effort (which evaluates smoothness) Sm ≔ ∫ j(t)² dt measures the time integral of squared jerk j during the entire flight, because jerk is directly related to the body turn rate, and a large jerk makes the drone flight shaky. The flight time measures the time spent reaching the given goals. Bar graphs with or without SD are used. Violin plots record the median and quartiles, along with the distribution of the data. The specific values can be found in tables S8 to S13. Plots and graphs can be recreated using the released dataset (66).
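The control-effort metric Sm ≔ ∫ j(t)² dt defined in the Fig. 7 caption can be approximated numerically from sampled jerk values. A minimal sketch (a Riemann sum, with a cubic test trajectory whose jerk is constant so the answer is known in closed form):

```python
import numpy as np

def control_effort(jerk_samples, dt):
    """Sm = ∫ ||j(t)||^2 dt, approximated by a Riemann sum over
    uniformly sampled 3D jerk vectors of shape (K, 3)."""
    return float(np.sum(np.sum(jerk_samples**2, axis=1)) * dt)

# Example: the 1D cubic p(t) = t^3 on [0, 1] has constant jerk j = 6,
# so Sm = ∫ 6^2 dt = 36 over the unit interval.
dt = 1e-3
ts = np.arange(0, 1, dt)
jerk = np.stack([6.0 + 0 * ts, 0 * ts, 0 * ts], axis=1)
print(control_effort(jerk, dt))  # ≈ 36.0
```

A smoother trajectory has smaller jerk magnitudes throughout and therefore a smaller Sm, which is why the metric serves as a proxy for flight shakiness.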
(Fig. 8C) or endless random goals (Fig. 8D) on a circle. Goals can also be given by users in a select-and-set way, as in the video game Command & Conquer: Red Alert 2 (2000). Furthermore, users can take part in the planning process more deeply by acting as a tracked object (Fig. 8E) or as dynamic obstacles that drones must avoid (Fig. 8F), using one or multiple Microsoft Xbox controllers (50). All the static obstacles in the playground are randomly generated. The system parameters, including drone numbers, flight velocity, and start and goal positions, can be reconfigured following the tutorials. In addition, if new objectives are added, users are encouraged to evaluate the correctness of the problem formulation and parameter settings before real-world deployment. The code can be effortlessly deployed on Ubuntu 16.04, 18.04, and 20.04 with the Robot Operating System being the only dependency installed. The video of the swarm playground is Movie 6.

DISCUSSION
In this work, a modular and hierarchical system achieving swarm intelligence based on high-level individual intelligence is proposed and validated, in which all drones are developed with the capability of sensing the environment and planning a locally optimal trajectory. This framework is adopted because we focus on fully autonomous navigation in real-world unstructured fields without prior knowledge while satisfying TEEM (as discussed in Introduction).
Fig. 8. Pictures generated by the swarm playground software package. (A) A swarm flight of 40 drones. (B) Formation flight. (C) A circle position swap. (D) Endless random goals selected on a circle. (E) Swarm tracking with the target manually controlled. (F) Avoiding multiple dynamic obstacles that are manually controlled. (G) Legend. (H) Gamepads used to control the tracking target and dynamic obstacles.
triggered planning, and a high degree of freedom in our trajectory representation. Similarly, deadlock is also reported to hardly occur in other trajectory planning methods (53) in normal situations, unless under conditions with special environment geometry, such as being inside a narrow and long tube.

In summary, our proposed planner follows a goal-chasing scheme in which users can give goals at any time during the tasks. Such a scheme can seamlessly connect to high-level decision-making and task-assignment modules whose outputs are always preferred goals for each robot (54, 55). Furthermore, the proposed multiobjective trajectory planner allows high-level modules to focus on task abstraction without having to worry about common requirements such as safety and dynamical feasibility.

MATERIALS AND METHODS
System architecture
Following the single-to-swarm approach, a decentralized scheme is naturally constructed, in which each drone is equipped with full autonomy for maximal navigation quality. The system architecture is depicted in Fig. 2B. The trajectory-broadcasting network is the only connection between individuals. Therefore, the system is less coupled than in previous work (33), which requires a stable chain connection. The mapping module is based on probabilistic mapping (7), which shows robustness and efficiency. The drone removal module removes the pixels of other witnessed drones from the depth image so that they do not interfere with mapping. A VIO-based localization module, along with the proposed drift-correction algorithm, computes the six–degree-of-freedom drone states. The controller module commands the drone to precisely track planned trajectories. The planning module that generates high-quality trajectories is the core of achieving TEEM and is therefore further detailed in the "Trajectory representation" to "Dynamic obstacle avoidance" sections and in the Supplementary Materials. Hardware modules are introduced in the "Palm-sized drone hardware" section. The drone removal module, which takes other drones' trajectories to determine three-dimensional bounding boxes as trust regions, is detailed in section S8 of the Supplementary Materials.

Trajectory representation
Here, spatial-temporal trajectory planning is achieved using a newly developed trajectory representation named MINCO (minimum control) (56), which is designed for differentially flat systems like multicopters (4). The most advanced aspect of MINCO is that it decouples the space and time parameters of a trajectory for users, on which linear-complexity operations are designed for convenient spatial-temporal deformation. The parameters of a MINCO piecewise trajectory are (i) the time durations T ∈ ℝ^M of each piece and (ii) the adjacent waypoints q ∈ ℝ^(3×(M−1)) between each pair of connected pieces, where M is the piece number. Then, a three-dimensional point p(t) ∈ ℝ³ at time t on the MINCO trajectory is defined by an

boundaries and minimum control effort given {q, T}. Its control effort optimization is given by

    min_{p(t)} ∫_{t0}^{tM} ‖p^(s)(t)‖₂² dt    (2)

where t ∈ [t0, tM] is the domain of the current trajectory. Note that smoothness maximization is done through control effort minimization because we use a jerk-control system model.

Furthermore, MINCO is advanced in converting the given parameters {q, T} to the polynomial coefficients c and time profile Tp with linear complexity O(M). A more specific correspondence can be expressed as

    M(T) c = b(q),  Tp = T    (3)

where b(q) ∈ ℝ^(2Ms×3) and M(T) ∈ ℝ^(2Ms×2Ms) is a nonsingular banded matrix for any T ≻ 0, as shown in (56). Recovering a trajectory enjoys linear complexity via banded PLU factorization. The gradients for the polynomial coefficients are also propagated to the MINCO parameters in linear time. This means that once partial gradients of objectives on {c, Tp} are acquired, they can be efficiently propagated onto {q, T}, and then optimization can be applied directly to MINCO. Details of M, b, and more characteristics of MINCO are given in the Supplementary Materials. Specifically, the computing time from given {q, T} to {c, Tp} by Eq. 3 is approximately 1 μs per polynomial piece on a desktop computer.

Constraints transcription
The differential flatness of multicopters (4) implies that their motion planning can be performed on low-dimensional smooth trajectories, such as MINCO. To achieve smooth motions and efficient flights, we define two metrics for smoothness and time, respectively, and then minimize their weighted sum. The decision variables are the MINCO parameters q and T. Start and terminal states are fixed to ensure continuity. Feasibility requires trajectories to fulfill the vehicle's dynamical constraints and to avoid obstacles. The flatness makes it possible to enforce dynamical constraints by restricting the magnitudes of trajectory velocity, acceleration, and jerk. Obstacle avoidance is achieved by deforming the trajectory shape.

Continuous-time constraints along the trajectory consist of infinitely many inequalities. To handle this difficulty, we propose a two-step procedure for constraint transcription. First, inspired by (40), constraints are enforced via integrals of penalty functions with large enough penalty weights. Second, every integral is evaluated by a finite sum of equally spaced samples along the timeline. The problem finally becomes an unconstrained one that can be solved more efficiently (57). For an optimization of time-dependent objectives J under equality constraints H and inequality constraints G within time t ∈ [t0, tM], tM − t0 = sum(T), this two-step conversion can be written as
operation M t M
min ∫t 0 J(q, T, t ) dt (4)
q,T
p(t ) = M q,T(t) (1)
s . t . H(q, T, t ) = 0, G(q, T, t ) ≤ 0
(5)
According to the “Optimality conditions” section in (56), for an
s-integrator chain dynamics (s = 3 in this work), MINCO trajectory
s−1polynomial spline with constant
is by default a 2s − 1 degree C ⇓
( )
t M t M
min ∫t 0 Jdt + ∫t 0 ‖ H ⋅ H‖22 + max ( G ⋅ G, 0) 3 dt (6) This integral can be analytically calculated because the MINCO
q,T
trajectory can be represented as piece-wise polynomials according
to Eq. 3.
⇓ Minimizing total time
A shorter flight time is desirable (1) in most cases, so we also minimize
min ∑ i ⋅ (J(t i ) + ‖ H ⋅ H(t i ) ‖22 + max ( G ⋅ G(t i ) , 0) 3) (7) the weighted total flight time, which gives the total time penalty Jt as
q,T i=0
J t = sum(T) (10)
We omit arguments q, T, and t in Eqs. 6 and 7 for simplicity.
In these equations, H and G are user-defined weights with appro- Dynamical feasibility
priately large entries. ti = t0 + (tM − t0)i/ indicates a finite number For differentially flat multicopters, dynamical feasibility is guaranteed
of sampled timestamps, where + 1 equals the sample number, and by restricting magnitudes of the trajectory derivatives. In our work,
i is the interval value for integral evaluation. If any integral in J we limit the amplitude of velocity, acceleration, and jerk by adding
has a closed-form expression, like smoothness and total time, a penalty if these derivatives exceed the physical thresholds, which are
Downloaded from https://www.science.org at Indian Institute of Technology, Kharagpur on January 04, 2024
analytical results should be used. The Eq. 7 can be written in a
user-friendly form ax {(ṗ (t2i ) − v2m
J d,v = ∑m ) , 0}
3
(11)
i=0
min ∑ x Jx (8) 3
q,T x ax {(p¨ (t2i ) − a2m
J d,a = ∑m ) , 0} (12)
i=0
3
where Jx are various terms of penalties, i.e., task specifications, and ax {(p⃛ (t i) 2 − j2m
J d,j = ∑m ) , 0} (13)
x are relative weights. Subscripts x = {s, t, d, o, w} denote smooth- i=0
ness (s), total time (t), dynamical feasibility (d), obstacle avoidance
J d = J d,v + J d,a + J d,j (14)
(o), swarm collision avoidance (w), etc.
Solving Eq. 7 suffers from high complexity if a raw piece-wise
polynomial trajectory is used because dense matrix inversion is in- where vm, am, and jm are the maximum allowed magnitudes of
evitable. In contrast, linear-complexity operations of MINCO in the velocity, acceleration, and jerk, respectively; ti here and in later sec-
“Trajectory representation” section greatly reduce the computation tions follows the same definition in Eq. 7. We directly sum Jd,v, Jd,a,
overhead within each iteration. Along with the compact parameter and Jd,j because they share similar magnitudes although with dif-
representation, the total convergence speed is accelerated by orders ferent units.
of magnitude. In another aspect, because the constraints are con- Obstacle avoidance
verted into objectives, the feasibility is guaranteed by postchecking Obstacle avoidance is performed on the unordered obstacle map
as described in the “Hierarchical safety guarantee” section. built from the cluttered real world. In our work, we model obstacles
as planes (x − s)Tv = 0, x ∈ ℝ3, for which we treat the obstacle side
Trajectory planning procedure of the plane as occupied and the other side as free (14). Here, s ∈ ℝ3
The proposed trajectory planner runs as follows. Step 1. A user or is a point on the plane, and v ∈ ℝ3 is a normal vector pointing to the
software gives a global goal position. Step 2. The planner selects a free side. Then, for any point p ∈ ℝ3, its distance do to obstacles is
local target within the predefined local planning distance along the defined as
direction to the goal and then starts the iteration with an initial
guess. Step 3. Within each iteration, the solver returns a solution d o = (p − s) T v (15)
trajectory of the warm-start optimization. Step 4. The total penalty
J and the gradients are calculated and then sent back to the solver This is a highly simplified obstacle representation, but it shows
before returning to step 3. Step 5. A local trajectory from the current an appropriate balance between fidelity and computation overhead
position to the local target satisfying task requirements is returned in our previous work for single drone navigation (14). A detailed
and executed. Step 6. After a given period (always either one or description of {s, v} generation can be found in the Supplementary
several seconds) or whenever the trajectory has collided with newly Materials. Specifically, generating a plane typically takes 0.1 ms on a
sensed obstacles, the planning is reactivated by returning to step 2. desktop computer and thus is sufficient to be executed onboard.
This procedure is repeated until the drone reaches the goal. In this Following the definition of d0, we penalize if d 0 < C o with C o > 0
work, we use an open-source L-BFGS solver (58), which belongs to an obstacle clearance. Then, the obstacle avoidance penalty Jo is for-
the category of quasi-Newton methods of optimization for the next mulated as
candidate trajectory at the next iteration.
ax {(C o − d o(p(t i ))) , 0} 3
J o = ∑m (16)
General purpose penalties i=0
Downloaded from https://www.science.org at Indian Institute of Technology, Kharagpur on January 04, 2024
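To make the sampled transcription concrete, the following minimal sketch evaluates two of the penalty terms, the velocity term of Eq. 11 and the obstacle clearance term of Eq. 16, for a single polynomial piece. This is an illustrative sketch, not the authors' onboard implementation; the sample count kappa, the limit v_max, the clearance C_o, and the plane parameters {s, v} below are assumed values.

```python
import numpy as np

# Sketch of the sampled penalties in Eqs. 11 and 16 for one polynomial
# piece p(t) of the trajectory. kappa, v_max, C_o, and the plane {s, v}
# are illustrative assumptions, not the parameters of the real system.

def sampled_penalties(coeffs, T, kappa=20, v_max=1.5, C_o=0.3,
                      s=np.array([1.0, 0.0, 0.0]),
                      v=np.array([0.0, 1.0, 0.0])):
    """coeffs: (3, n) polynomial coefficients for x, y, z, highest degree first."""
    t = np.linspace(0.0, T, kappa + 1)                # t_i = t0 + (tM - t0) i / kappa
    p = np.array([np.polyval(c, t) for c in coeffs])  # positions, shape (3, kappa+1)
    dp = np.array([np.polyval(np.polyder(c), t) for c in coeffs])  # velocities
    # Eq. 11: cubic penalty on the squared speed exceeding v_max^2.
    J_dv = np.sum(np.maximum(np.sum(dp**2, axis=0) - v_max**2, 0.0)**3)
    # Eqs. 15 and 16: signed distance to the separating plane, clearance penalty.
    d_o = v @ (p - s[:, None])
    J_o = np.sum(np.maximum(C_o - d_o, 0.0)**3)
    return J_dv, J_o
```

For a slow straight-line piece on the free side of the plane, both penalties vanish; once the speed limit is violated, J_{d,v} grows cubically with the violation, which is what lets the unconstrained solver trade feasibility off smoothly against the other objectives.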
For swarm collision avoidance, the distance between the uth and the kth drone is evaluated as an ellipsoidal distance

  d_w^k(t_i) = d_w( p_u(t_i), p_k(τ) ) = ‖ E^{1/2} ( p_u(t_i) − p_k(τ) ) ‖    (19)

where p_u(t_i) and p_k(τ) are the trajectories of the uth and kth drones, respectively. An offset between t_i and τ aligns them to the same global time. C_w, named swarm clearance, is the minimum safety clearance between two drones. The matrix E ≔ diag(1, 1, 1/c) with c > 1 transforms a Euclidean distance into an ellipsoidal distance with the minor axis at the z axis to relieve the downwash risk from rotors. Next, the optimization problem remains unconstrained and hence can be solved efficiently.

Formation expectation
The formation is defined as fixed vertexes in a local frame F, which moves and rotates with respect to the world frame. To stay in formation, each drone assigned to a vertex plans its trajectories according to others' movements calculated from others' trajectories. When the flight starts after receiving a long-term goal, the formation is required to move strictly along the straight line l connecting the current position and the goal. The x axis of F is parallel to l. Then, the frame origin is determined by fitting the formation shape to the current swarm distribution. This formation inference can retrieve the formation position in the near future, which gives a guiding path g(t) to each drone for trajectory planning. Then, the formation penalty J_f is defined as

  J_f = Σ_{i=0}^{κ} ‖ p(t_i) − g(t_i) ‖₂²    (20)

To enforce continuity, we assume a uniform motion beyond the domain of trajectory definition.

Multiview tracking and videoing
To record a participant while avoiding obstacles, an RGB camera must point at the participant, and the depth camera must point in the direction that the drone flies. To avoid obstacles, we align the depth camera with the drone velocity. Therefore, the constraints to adjust the view of the depth camera are defined as (i) aligning the trajectory velocity ṗ(t) with the predicted participant velocity v_p, which gives

  J_v(t) = ‖ ṗ(t) − v_p ‖₂²    (21)

To keep the participant in the picture with an appropriate size, we (ii) enforce a preferred drone position ^S p_prf in the participant frame S, which gives the corresponding penalties (Eqs. 22 and 23). Images are compressed following the JPEG (Joint Photographic Experts Group) standard (59), implemented by OpenCV (60), and stored in a stream. The object detection relies on the YOLOv5 (61) neural network with model depth multiple set to 0.33 and layer channel multiple set to 0.50, in addition to other default parameters. It is accelerated using NVIDIA TensorRT (62). All of the aforementioned software demands substantial computing resources and high-bandwidth input-output. The object's z coordinate in the camera frame is estimated using its a priori known height, and its global position is further filtered through a Kalman filter (63).

Dynamic obstacle avoidance
Dynamic obstacles with predicted trajectories are treated in the same way as other moving drones from the perspective of decentralized trajectory planning. Therefore, avoiding dynamic obstacles still follows the formulation presented in the "From single to swarm" section, except with different E and C_w values according to obstacle shape and volume. In the "Swarm playground" section, a standard bicycle model is used to simulate and predict the movements of dynamic obstacles.

Localization and drift correction
Regarding real-world implementations, in which high accuracy and robustness are always desirable, we use VIO to obtain precise and high-frequency state estimations. However, without external positioning facilities, accumulative drift is unavoidable, which may cause inter-robot collisions during a long-range compact flight. To estimate and correct localization drift separately, we leverage relative distance measurements between drones along with their positions calculated from received trajectories. The relative distances are measured by onboard UWB sensors, as also adopted in (64). For the uth drone in a swarm containing U drones, its range to the kth drone at position p_k is measured as r_{u,k}, and then we minimize the total distance measurement error

  min_{p_u} ‖ p_u − p_{u0} ‖₂² + Σ_{k=1,k≠u}^{U} ( ‖ p_u − p_k ‖₂² − r_{u,k}² )²    (24)

to acquire the uth drone's position p_u, where p_{u0} is the latest odometry corrected by the last drift estimation. Note that the regularization term on p_{u0} is added to avoid a nonunique or unstable solution; e.g., a whole spherical surface satisfies the minimization when U = 2. The problem is solved using numerical optimization, and p_{u0} is also taken as the initial value. To further smooth the odometry while improving accuracy, we estimate the slowly changing drift and apply a low-pass filter to it, instead of directly using the optimized p_u. The drift is then added to the latest odometry from VIO to produce corrected localization.

Furthermore, stationed facilities with ground-truth positions can also be incorporated into the optimization to ensure global consistency, as in the "Intensive reciprocal avoidance evaluation" section. Note that global frames are required to be roughly consistent initially; otherwise, the nonlinear optimization lacks reliable initialization. Beyond improved localization accuracy, this approach brings almost no extra communication burden because other drone positions are calculated from trajectories, and UWB shares different radio frequencies with the trajectory-broadcasting network. In our systematic solution, we use VINS (visual-inertial navigation system) (5) as the VIO and the Ceres Solver (65) for optimization. Evaluations of effectiveness from our real-world experiments and a corresponding block diagram are presented in the Supplementary Materials.

Hierarchical safety guarantee
Because the safety constraints are transferred to penalties, the output trajectory from the solver can still be infeasible, so a postcheck after trajectory planning is required. If a safety constraint is violated, the planner increases its weight and then makes another trial to improve the possibility of finding a satisfactory solution. If it is still infeasible after several trials, the current planning is terminated, and the planner waits for 10 ms before activating the next replanning. However, the postcheck after each planning only guarantees feasibility at that time point because the map is changing, so a safety-checking process continuously checks for collisions in the background. Once an unsafe state is detected, this process activates a replanning immediately. If this trial fails and the predicted time to collision is under a threshold, an emergency stop trajectory is generated. After halting, the planning tries to start up again. Such a fallback guarantees safety in the most severe case and recovers the mission afterward.

Palm-sized drone hardware
All the experiments are performed on a 114-mm wheelbase microplatform that we designed, assembled, and released (66), with the hardware listed here. The total weight of the platform is less than 300 g, including a 100-g battery providing an 11-min flight time. The drone is made up of the following five subsystems, as shown in Fig. 2A.
1) Power and movement suite. Two LiPo batteries of 3000-mAh capacity and 7.4-V voltage are connected in series, and four 6000-KV brushless motors (model 1404) with 3-inch, three-blade propellers are used, which give a thrust-to-weight ratio of 2.4. A four-in-one electronic speed controller with a 15-A maximum current is used. The propellers are mounted at the bottom of the airframe, and therefore, a strong downwash flow will not blow directly onto the body. Such a design improves flight time according to our experiments, and the average power consumption in hovering is around 120 W.
2) Low-level control unit. A nanoscale flight control unit (FCU) of size 16 mm by 32 mm by 8 mm running PX4 Autopilot (67) is built. The hardware, following the PX4 standard, is composed of an STM32H7 MCU (68) and a BMI088 IMU (69) with an 8-GB memory card for logging. Here, we omit all the sensors except for the IMU because we run localization using VIO rather than relying on barometers, magnetometers, or GPS. This unit is responsible for low-level angle control and for sending IMU data to the high-level navigation unit.
3) High-level navigation unit. This unit runs all the localization, planning, high-level control, and other task-specific code and therefore requires sufficient computing performance. In our platform, we use an NVIDIA Xavier NX (70), a powerful computer for embedded and edge systems with a six-core CPU, 384-core GPU, and 8 GB of RAM. In our experiments, except for multiview videoing, CPU and GPU usages are all below 40%, which allows considerable computing reserves for extra potential usage.
4) Sensors. We address the basic sensor settings to miniaturize the drone size while retaining high accuracy. We use a grayscale and depth camera, the Intel RealSense D430 (71, 72), and an IMU from the FCU. The D430 camera outputs depth images for mapping and stereo grayscale images for localization. The UWB module is a Nooploop LTPS (73) with a DW1000 radio chip (74) inside.
5) Wireless communication modules. We test and implement two topological structures: (i) a star shape using a single-access-point TP-LINK TL-XVR6000L router with an EDIMAX EW-7822ULC (75) USB WiFi adapter and (ii) a decentralized peer-to-peer ad hoc network using an AzureWave AW-CB375NF (76) via a PCIe (peripheral component interconnect express) interface. The first structure shows higher bandwidth, whereas the second is more suitable for a large swarm scale, as assessed in the Supplementary Materials.

SUPPLEMENTARY MATERIALS
www.science.org/doi/10.1126/scirobotics.abm5954
Sections S1 to S10
Figs. S1 to S17
Tables S1 to S13

REFERENCES AND NOTES
1. P. Foehn, A. Romero, D. Scaramuzza, Time-optimal planning for quadrotor waypoint flight. Sci. Robot. 6, eabh1221 (2021).
2. Nikkei, Teardown of DJI drone reveals secrets of its competitive pricing (2021); https://asia.nikkei.com/Business/China-tech/Teardown-of-DJI-drone-reveals-secrets-of-its-competitive-pricing.
3. Grand View Research, Commercial drone market size, share & trends analysis report by product (fixed-wing, rotary blade, hybrid), by application, by end-use, by region, and segment forecasts, 2021–2028 (2021); www.grandviewresearch.com/industry-analysis/global-commercial-drones-market.
4. D. Mellinger, V. Kumar, Minimum snap trajectory generation and control for quadrotors, in Proceedings of the 2011 IEEE International Conference on Robotics and Automation (IEEE, 2011), pp. 2520–2525.
5. T. Qin, P. Li, S. Shen, VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 34, 1004–1020 (2018).
6. K. Sun, K. Mohta, B. Pfrommer, M. Watterson, S. Liu, Y. Mulgaonkar, C. J. Taylor, V. Kumar, Robust stereo visual inertial odometry for fast autonomous flight. IEEE Robot. Autom. Lett. 3, 965–972 (2018).
7. H. Moravec, A. Elfes, High resolution maps from wide angle sonar, IEEE International Conference on Robotics and Automation (IEEE, 1985), vol. 2, pp. 116–121.
8. K. McGuire, C. De Wagter, K. Tuyls, H. Kappen, G. C. de Croon, Minimal navigation solution for a swarm of tiny flying robots to explore an unknown environment. Sci. Robot. 4, eaaw9710 (2019).
9. M. Müller, S. Lupashin, R. D'Andrea, Quadrocopter ball juggling, in Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2011), pp. 5113–5120.
10. H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, E. Galceran, Continuous-time trajectory optimization for online UAV replanning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2016), pp. 5332–5339.
11. Skydio, Skydio Autonomy, a new generation of drone intelligence, www.skydio.com/skydio-autonomy (2021).
12. DJI, Mavic series, powerful and foldable for aerial adventure, www.dji.com/products/mavic (2021).
13. J. Tordesillas, B. T. Lopez, J. P. How, FASTER: Fast and safe trajectory planner for flights in unknown environments, IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2019), pp. 1934–1940.
14. X. Zhou, Z. Wang, H. Ye, C. Xu, F. Gao, EGO-Planner: An ESDF-free gradient-based local planner for quadrotors. IEEE Robot. Autom. Lett. 6, 478–485 (2020).
15. B. Rabta, C. Wankmüller, G. Reiner, A drone fleet model for last-mile distribution in disaster relief operations. Int. J. Disaster Risk Reduct. 28, 107–112 (2018).
16. CAL FIRE, California Department of Forestry and Fire Protection, www.fire.ca.gov/incidents/ (2021).
17. D. Mackenzie, It's a bird, it's a plane, it's a ... spy? Science 335, 1433 (2012).
18. A. Kamagaew, J. Stenzel, A. Nettsträter, M. ten Hompel, Concept of cellular transport systems in facility logistics, in Proceedings of the 2011 International Conference on Automation, Robotics and Applications (IEEE, 2011), pp. 40–45.
19. J. Alonso-Mora, S. Baker, D. Rus, Multi-robot formation control and object transport in dynamic environments via constrained optimization. Int. J. Rob. Res. 36, 1000–1021 (2017).
20. D. Falanga, S. Kim, D. Scaramuzza, How fast is too fast? The role of perception latency in high-speed sense and avoid. IEEE Robot. Autom. Lett. 4, 1884–1891 (2019).
21. F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, S. Shen, Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments. IEEE Trans. Robot. 36, 1526–1545 (2020).
22. W. Hönig, J. A. Preiss, T. S. Kumar, G. S. Sukhatme, N. Ayanian, Trajectory planning for quadrotor swarms. IEEE Trans. Robot. 34, 856–869 (2018).
23. J. Alonso-Mora, P. Beardsley, R. Siegwart, Cooperative collision avoidance for nonholonomic robots. IEEE Trans. Robot. 34, 404–420 (2018).
24. Intel, Intel Drone Light Shows, https://inteldronelightshows.com/ (2021).
25. High Great, High Great Drone Light Shows, www.hg-fly.com/en/ (2021).
26. CollMot Entertainment, CollMot Entertainment Drone Light Shows, https://collmot.com/ (2021).
27. C. W. Reynolds, Flocks, herds and schools: A distributed behavioral model, Annual Conference on Computer Graphics and Interactive Techniques (1987), pp. 25–34.
28. S. Hauert, S. Leven, M. Varga, F. Ruini, A. Cangelosi, J.-C. Zufferey, D. Floreano, Reynolds flocking in reality with fixed-wing robots: Communication range vs. maximum turning rate, in Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2011), pp. 5015–5020.
29. G. Vásárhelyi, C. Virágh, G. Somorjai, T. Nepusz, A. E. Eiben, T. Vicsek, Optimized flocking of autonomous drones in confined environments. Sci. Robot. 3, eaat3536 (2018).
30. E. Soria, F. Schiano, D. Floreano, Predictive control of aerial swarms in cluttered environments. Nat. Mach. Intell. 3, 545–554 (2021).
31. Bitcraze, Crazyflie flying development platform, www.bitcraze.io/ (2021).
32. G. Loianno, C. Brunner, G. McGrath, V. Kumar, Estimation, control, and planning for aggressive flight with a small quadrotor with a single camera and IMU. IEEE Robot. Autom. Lett. 2, 404–411 (2016).
33. X. Zhou, X. Wen, J. Zhu, H. Zhou, C. Xu, F. Gao, EGO-Swarm: A fully autonomous and decentralized quadrotor swarm system in cluttered environments, in Proceedings of the 2021 IEEE International Conference on Robotics and Automation (IEEE, 2021).
34. E. Gent, Watch a swarm of drones fly through heavy forest—while staying in formation, www.science.org/news/2020/12/watch-swarm-drones-fly-through-heavy-forest-while-staying-formation (2021).
35. D. L. Altshuler, M. V. Srinivasan, Comparison of visually guided flight in insects and birds. Front. Neurosci. 12, 157 (2018).
36. C. Ellington, Insects versus birds: The great divide, AIAA Aerospace Sciences Meeting and Exhibit (2006), p. 35.
37. J. Tordesillas, J. P. How, MADER: Trajectory planner in multiagent and dynamic environments. IEEE Trans. Robot. (2022).
38. C. Richter, A. Bry, N. Roy, Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments, Robotics Research (2016), pp. 649–666.
39. V. Usenko, L. Von Stumberg, A. Pangercic, D. Cremers, Real-time trajectory replanning for MAVs using uniform B-splines and a 3D circular buffer, IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2017), pp. 215–222.
40. L. S. Jennings, K. L. Teo, A computational algorithm for functional inequality constrained optimization problems. Automatica 26, 371–375 (1990).
41. R. B. Benson, E. Starmer-Jones, R. A. Close, S. A. Walsh, Comparative analysis of vestibular ecomorphology in birds. J. Anat. 231, 990–1018 (2017).
42. R. R. Murphy, Swarm robots in science fiction. Sci. Robot. 6, eabk0451 (2021).
43. D. Mellinger, M. Shomin, N. Michael, V. Kumar, Cooperative grasping and transport using multiple quadrotors, Distributed Autonomous Robotic Systems (2013), pp. 545–558.
44. I. Mademlis, V. Mygdalis, N. Nikolaidis, M. Montagnuolo, F. Negro, A. Messina, I. Pitas, High-level multiple-UAV cinematography tools for covering outdoor events. IEEE Trans. Broadcast. 65, 627–635 (2019).
45. A. Bucker, R. Bonatti, S. Scherer, Do you see what I see? Coordinating multiple aerial cameras for robot cinematography, 2021 IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2021), pp. 7972–7979.
46. J. Tordesillas, J. P. How, MINVO basis: Finding simplexes with minimum volume enclosing polynomial curves, arXiv preprint arXiv:2010.10726 (2020).
47. C. de Boor, A Practical Guide to Splines, vol. 27 (Springer-Verlag, New York, 1978).
48. F. Augugliaro, A. P. Schoellig, R. D'Andrea, Generation of collision-free trajectories for a quadrocopter fleet: A sequential convex programming approach, IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2012), pp. 1917–1922.
49. J. Park, J. Kim, I. Jang, H. J. Kim, Efficient multi-agent trajectory planning with feasibility guarantee using relative Bernstein polynomial, IEEE International Conference on Robotics and Automation (IEEE, 2020), pp. 434–440.
50. Microsoft, Xbox Wireless Controller, www.xbox.com/en-US/accessories/controllers/xbox-wireless-controller (2021).
51. J. A. DeCastro, J. Alonso-Mora, V. Raman, D. Rus, H. Kress-Gazit, Collision-free reactive mission and motion planning for multi-robot systems. Robot. Res. 459–476 (2018).
52. J. Van Den Berg, S. J. Guy, M. Lin, D. Manocha, Reciprocal n-body collision avoidance. Robot. Res. 3–19 (2011).
53. J. Alonso-Mora, T. Naegeli, R. Siegwart, P. Beardsley, Collision avoidance for aerial vehicles in multi-agent scenarios. Auton. Robot. 39, 101–121 (2015).
54. M. Coppola, K. N. McGuire, C. De Wagter, G. C. de Croon, A survey on swarming with micro air vehicles: Fundamental challenges and constraints. Front. Robot. AI 7, 18 (2020).
55. J. Alonso-Mora, S. Samaranayake, A. Wallar, E. Frazzoli, D. Rus, On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment. Proc. Natl. Acad. Sci. U.S.A. 114, 462–467 (2017).
56. Z. Wang, X. Zhou, C. Xu, F. Gao, Geometrically constrained trajectory optimization for multicopters [v2], arXiv preprint arXiv:2103.00190 (2021).
57. Y. Nesterov, Lectures on Convex Optimization, vol. 137 (Springer, 2018).
58. FAST Lab, An open-source L-BFGS solver, https://github.com/ZJU-FAST-Lab/LBFGS-Lite (2021).
59. G. K. Wallace, The JPEG still picture compression standard. IEEE Trans. Consum. Electron. 38, xviii–xxxiv (1992).
60. OpenCV team, OpenCV, https://opencv.org/ (2021).
61. YOLOv5 Contributors, YOLOv5, https://github.com/ultralytics/yolov5 (2021).
62. NVIDIA, NVIDIA TensorRT, https://developer.nvidia.com/tensorrt (2021).
63. G. Bishop, G. Welch, An introduction to the Kalman filter, Proc. of SIGGRAPH, Course 8, 41 (2001).
64. H. Xu, Y. Zhang, B. Zhou, L. Wang, X. Yao, G. Meng, S. Shen, Omni-Swarm: A decentralized omnidirectional visual-inertial-UWB state estimation system for aerial swarm, arXiv preprint arXiv:2103.04131 (2021).
65. S. Agarwal, K. Mierle, Ceres Solver, http://ceres-solver.org (2021).
66. X. Zhou, X. Wen, Z. Wang, Y. Gao, H. Li, Q. Wang, T. Yang, H. Lu, Y. Cao, C. Xu, F. Gao, Dataset - Swarm of micro flying robots in the wild (version 1) [data set], Zenodo, https://doi.org/10.5281/zenodo.5804079 (2021).
67. L. Meier, D. Honegger, M. Pollefeys, PX4: A node-based multithreaded open source robotics framework for deeply embedded platforms, 2015 IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2015), pp. 6235–6240.
68. STMicroelectronics, STM32H743, www.st.com/content/st_com/en/products/microcontrollers-microprocessors/stm32-32-bit-arm-cortex-mcus/stm32-high-performance-mcus/stm32h7-series.html (2021).
69. BOSCH, BMI088, www.bosch-sensortec.com/products/motion-sensors/imus/bmi088/ (2021).
70. NVIDIA, Jetson Xavier NX, www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-nx/ (2021).
71. Intel, Intel RealSense depth modules and processors, www.intelrealsense.com/stereo-depth-modules-and-processors/ (2021).
72. L. Keselman, J. Iselin Woodfill, A. Grunnet-Jepsen, A. Bhowmik, Intel RealSense stereoscopic depth cameras, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017), pp. 1–10.
73. NoopLoop, LinkTrack, www.nooploop.com/en/ (2021).
74. Decawave, DW1000, www.decawave.com/product/dw1000-radio-ic/ (2021).
75. EDIMAX, EW-7822ULC, www.edimax.com/edimax/merchandise/merchandise_detail/data/edimax/global/wireless_adapters_ac1200_dual-band/ew-7822ulc/ (2021).
76. AzureWave, AW-CB375NF, www.azurewave.com/wireless-modules-nvidia.html (2021).

Acknowledgments: We thank H. Ye, L. Quan, and L. Yin, who offered valuable suggestions on the manuscript, and R. Jin for photography and video recording. We sincerely appreciate the work of H. Yin, Y. Li, Z. Wang, J. Guo, X. Zhu, and J. Wang for help on field experiments. Furthermore, we are truly grateful for J. Zhu's help with benchmark comparisons. Funding: This work was supported by the National Natural Science Foundation of China under grant nos. 62003299 and 62088101. Author contributions: X.Z. contributed to the hardware and software design, wild experiments, and manuscript writing. X.W. contributed to FMU design, power suite testing, communication evaluation, simulation benchmark, and wild experiments. Z.W. contributed the idea of spatial-temporal optimization with the software of trajectory parameterization and numerical optimization; he also gave advisory suggestions and edited the manuscript. Y.G. contributed to data analysis and artwork. H. Li wrote reliable swarm communication software and did some early work on localization correction. Q.W. wrote an early version of swarm tracking. T.Y. designed the power management module of the drone and edited an early version of the videos. H. Lu polished the manuscript. Y.C. gave several suggestions about UWB usage and helped with several tests. C.X. provided people and sites for drone testing. F.G. directed the research, provided the primary idea and funding with some key suggestions about software and hardware debugging, and revised the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to support the conclusions of this manuscript are included in the main text or Supplementary Materials and in an online dataset (66).

Submitted 5 October 2021
Accepted 22 February 2022
Published 4 May 2022
10.1126/scirobotics.abm5954
View the article online
https://www.science.org/doi/10.1126/scirobotics.abm5954
Permissions
https://www.science.org/help/reprints-and-permissions
Science Robotics (ISSN 2470-9476) is published by the American Association for the Advancement of Science, 1200 New York Avenue NW, Washington, DC 20005. The title Science Robotics is a registered trademark of AAAS.
Copyright © 2022 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.