
SCIENCE ROBOTICS | RESEARCH ARTICLE

PLANETARY ROBOTS

Scientific exploration of challenging planetary analog environments with a team of legged robots

Philip Arm1†*, Gabriel Waibel1†, Jan Preisig1, Turcan Tuna1, Ruyi Zhou1,2, Valentin Bickel3,4, Gabriela Ligeza5, Takahiro Miki1, Florian Kehl6,7,8, Hendrik Kolvenbach1, Marco Hutter1

Copyright © 2023 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works

The interest in exploring planetary bodies for scientific investigation and in situ resource utilization is ever-rising. Yet, many sites of interest are inaccessible to state-of-the-art planetary exploration robots because of the robots’ inability to traverse steep slopes, unstructured terrain, and loose soil. In addition, current single-robot approaches only allow a limited exploration speed and a single set of skills. Here, we present a team of legged robots with complementary skills for exploration missions in challenging planetary analog environments. We equipped the robots with an efficient locomotion controller, a mapping pipeline for online and postmission visualization, instance segmentation to highlight scientific targets, and scientific instruments for remote and in situ investigation. Furthermore, we integrated a robotic arm on one of the robots to enable high-precision measurements. Legged robots can swiftly navigate representative terrains, such as granular slopes beyond 25°, loose soil, and unstructured terrain, highlighting their advantages compared with wheeled rover systems. We successfully verified the approach in analog deployments at the Beyond Gravity ExoMars rover test bed, in a quarry in Switzerland, and at the Space Resources Challenge in Luxembourg. Our results show that a team of legged robots with advanced locomotion, perception, and measurement skills, as well as task-level autonomy, can conduct successful, effective missions in a short time. Our approach enables the scientific exploration of planetary target sites that are currently out of human and robotic reach.

Downloaded from https://www.science.org on July 29, 2023

INTRODUCTION

Robotic planetary exploration is invaluable for advancing our understanding of the solar system and enabling the prospection of potential resources. The recent commitment of national and commercial entities to return to the Moon—targeting a sustainable, long-term human presence—boosted the development of robotic exploration technologies.

Many science-, exploration-, and resource extraction–relevant targets across the lunar surface lie in hard-to-reach areas or areas with substantial potential to host unknown physical surface properties. Examples include pyroclastic vents, volcanic rilles, caves, irregular mare patches, and fresh impact craters (1, 2). Consequently, developing robotic exploration systems that can efficiently traverse challenging terrain without compromising their explorative, scientific, and resource prospection capabilities remains a top priority.

Several lunar exploration efforts revolve around the National Aeronautics and Space Administration’s (NASA) Artemis program, which focuses on robotic and crewed science and exploration at the lunar south pole (3). One of the first Artemis program missions, the Volatiles Investigating Polar Exploration Rover (VIPER) (4), will venture into several permanently shadowed regions—cold, volatile-rich topographic depressions that have not been illuminated for millions of years (5–7). Ultimately, Artemis 3 will lead humans to the lunar south pole in 2025 (8). All of those missions will need to navigate the challenging south-polar terrain, including steep slopes, impact ejecta and boulder fields, and potentially anomalous physical regolith properties (4, 9, 10).

One promising way to foster the development of lunar exploration and prospection technologies is challenge-driven innovation. The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) established the Space Resources Challenge (SRC) in 2021 to evaluate and advance the state of the art of robotic lunar prospecting technologies. The main technical goal of the challenge was prospecting a lunar analog environment for resource-enriched areas (REAs), meaning areas that contain minerals suitable for in situ resource utilization, such as ilmenite, rutile, and titanium dioxide. The two rounds of the challenge took place in 2021 and 2022, with the final competition in a lunar analog terrain in Luxembourg. The competition involved adverse conditions found at the lunar south pole, including previously unknown terrain, loose granular soil, high solar incidence angle illumination creating long, high-contrast shadows, and network communications with a high-latency 5.0-s round-trip time (RTT) and intermittent complete loss of signal (LoS). The SRC was an important inspiration in this work and was one of the two major field deployments of our robotic exploration team.

Here, we propose a team of legged robots for quick, efficient, and safe exploration and prospection of challenging planetary analog environments. The team approach allows us to cover a larger area, deploy a wider variety of scientific payloads, investigate more scientific targets, and gain more in-depth knowledge per target than is possible with a single-robot approach or non-teamed multirobot

1Robotic Systems Lab, ETH Zurich, Leonhardstrasse 21, Zurich 8092, Switzerland. 2State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150080, China. 3Laboratory of Hydraulics, Hydrology, and Glaciology, ETH Zurich, Hönggerbergring 26, Zurich 8093, Switzerland. 4Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, Bern 3012, Switzerland. 5Department of Environmental Sciences, University of Basel, Basel 4056, Switzerland. 6Innovation Cluster Space and Aviation (UZH Space Hub), Air Force Center, University of Zurich, Dübendorf 8600, Switzerland. 7Center for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Winterthurerstrasse 190, Zurich 8057, Switzerland. 8Institute of Medical Engineering, Space Biology Group, Lucerne University of Applied Sciences and Arts, Hergiswil 6052, Switzerland.
*Corresponding author. Email: parm@ethz.ch
†These authors contributed equally to this work.

Arm et al., Sci. Robot. 8, eade9548 (2023) 12 July 2023 1 of 18



approaches. In addition, the increased redundancy allows mission completion even if multiple robots fail. To control the robotic team, the operators send high-level navigation, remote measurement, and in situ measurement tasks to the robots. The robots execute these tasks autonomously using their state-of-the-art mobility and navigation systems, as well as a complementary and redundant set of payloads. The level of autonomy allows continued scientific data collection, even if communication becomes unreliable or a complete LoS occurs. Simultaneously, the scientists in the operations team can select and prioritize scientific targets during the mission.

Until now, most planetary exploration robots relied on wheeled locomotion. Their locomotion system did not fundamentally change since the first rover, Lunokhod 1, touched down on the surface of the Moon in 1970 (11). Other prominent examples are the Lunar Roving Vehicle (LRV), Lunokhod 2, and Yutu 2 (11–14) and martian rovers such as Sojourner (15), Spirit, Opportunity (16), Curiosity (17), and Perseverance (18). Although these systems can build on well-tested heritage technology and provide robustness in relatively flat terrain, wheeled rovers reach their limitations on steep slopes, on loose granular terrain, and in unstructured environments. On Mars, the Spirit rover was lost in anomalously loose soil (19), and Opportunity got temporarily stuck in a dune (20). On the Moon, the Apollo 15 LRV was trapped in loose regolith and had to be manually retrieved by the astronauts (12). Similarly, Lunokhod 2 encountered excessive wheel sinkage (>20 cm) near Le Monnier crater (13). The Yutu-2 team reported that entering craters would be of great scientific interest. However, they do not target craters because of the increased probability of locomotion failure (14). This locomotion limitation prevents current missions from investigating high-priority targets (1, 2, 21–23).

Meanwhile, terrestrial legged robots have reached a high level of robustness in exploring unknown environments. Their robust locomotion system allows them to traverse unstructured, challenging natural terrains, including mud, gravel, snow, vegetation, and sand (24, 25).

Several researchers have developed legged robots with the intent to use them in space in the past (26–29). Until now, we have focused on the usage of dynamically walking legged robots on steep, planetary soil analogs (30) and low-gravity environments (31–33), showcasing the potential of the technology. However, for these robots to be useful in real-world scenarios, they need to be advanced beyond locomotion tasks. They have to interact with their environment in realistic analog missions, for example, by deploying scientific instruments or taking samples. We advanced in this direction at the first field trial of the SRC, where we deployed a legged robot with base-mounted instruments (34).

Heterogeneous robotic teams have been used as a viable solution in terrestrial real-world missions. All top-ranking teams in the Defense Advanced Research Projects Agency (DARPA) Subterranean Challenge 2021 used heterogeneous robotic teams with diverse skills (35–37). To succeed in the challenge, the teams developed robust solutions for locomotion, localization, multirobot mapping, local planning, and exploration planning. In this work, we built upon these advances—specifically on the systems of our team CERBERUS (35)—and addressed the unique challenges presented by analog space missions, including instrument deployment, efficient and robust locomotion with robotic arms, redundancy to component or system failures, and validation in realistic missions with high-latency communication. In addition, although the robotic teams in the Subterranean Challenge were diverse in their locomotion skills, we considered diversity in scientific investigation capabilities in this work.

Several robotic teams for planetary exploration have been developed and tested in analog environments. For example, the German Aerospace Center (DLR) deployed a drone and two-wheeled robots autonomously to set up a distributed radio telescope and perform geological exploration on Mt. Etna (38). Although they showed a high level of autonomy, the wheeled rovers were limited in their locomotion capabilities. The German Research Center for Artificial Intelligence (DFKI) developed a heterogeneous robotic team of a wheeled and a legged robot for sample collection in a lunar analog environment (39). In (40), the authors demonstrated a teleoperated sample return mission using a robotic team in a martian analog environment. The operations team operated the robots via

Movie 1. A team of legged robots for planetary exploration.

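The task-level autonomy described above (robots continue executing their queued tasks during a complete LoS, while new commands only arrive when the link is up) can be sketched as follows. This is a minimal illustration; the class and task names are ours and do not come from the authors' software:

```python
from collections import deque

class TaskQueue:
    """Robot-side task queue: execution continues regardless of link state,
    while new commands are only delivered when the link is up."""

    def __init__(self):
        self.pending = deque()
        self.completed = []

    def receive(self, task, link_up):
        # Commands from mission control only arrive while the link is up.
        if link_up:
            self.pending.append(task)
            return True
        return False  # lost during LoS

    def step(self):
        # One control cycle: keep working through the queue (task-level autonomy).
        if self.pending:
            self.completed.append(self.pending.popleft())

queue = TaskQueue()
queue.receive("navigate_to_boulder_4", link_up=True)
queue.receive("acquire_ctx_image", link_up=True)
queue.receive("acquire_mira_spectrum", link_up=False)  # dropped during LoS
for _ in range(3):
    queue.step()  # the robot keeps collecting data through the LoS
print(queue.completed)  # → ['navigate_to_boulder_4', 'acquire_ctx_image']
```

The point of the sketch is that a link outage only delays new commands; it never stalls tasks already accepted by the robot.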



waypoints and tested capabilities such as sampling in isolated tests instead of the full mission deployment. More recently, NASA Jet Propulsion Laboratory (JPL) built on their NEBULA solution to explore analog martian caves with multiple Spot robots in the NASA BRAILLE project (41). One of the robots was equipped with a robotic arm to take close-up images and swab samples. However, the details of this work are not yet published. Last, the first heterogeneous robotic team for planetary exploration is currently operating on Mars: The Ingenuity helicopter supports the Mars 2020 mission by scouting potential targets for the Perseverance rover and inspecting targets the rover cannot access (42). This actual Mars mission is a remarkable example of a heterogeneous robotic team for planetary exploration.

We present our teamed exploration approach with dynamically walking robots for planetary environments (Movie 1). We designed a team of legged robots with a diverse set of scientific investigation skills and redundancy measures and validated our system in three challenging analog environments: the ExoMars locomotion test facility, a quarry site, and the competition site of the SRC. We report our results and lessons learned from these deployments and identify opportunities for future developments.

To achieve effective analog mission deployments, we developed, improved, and validated critical subsystems: We validated the baseline locomotion policy (25) on planetary analog terrain and developed a locomotion policy with a focus on efficiency. Our policy includes arm observations, making it robust for legged robots with robotic arms for scientific investigation in challenging environments. In addition, we built a two-pronged mapping approach for lightweight real-time online mapping and high-resolution, realistic postmission visualization. Our instance segmentation pipeline highlights potential scientific targets to support online mission planning in previously unknown environments. By distributing a balanced set of remote and close-up scientific instruments, we achieved effective and safe exploration missions. We showcase the capabilities of our legged robots in martian and lunar analog environments, demonstrating that our technology can enable robots to investigate scientifically transformative targets on the Moon and Mars that are unreachable at present using wheeled rover systems.

RESULTS

A team of legged robots for planetary exploration
The core of our system comprises a team of three four-legged robots. ANYmal (43) served as the base platform for all robots. Each of the three robots has a dedicated role and a unique set of payloads for exploration and scientific data collection. Figure 1A presents our team of legged robots: the Scout, the Hybrid, and the Scientist. The Scout’s primary task is rapidly exploring the environment using its additional light detection and ranging (LiDAR) sensor and RGB (red, green, blue) cameras. It provides the operations team, consisting of the robot operators and planetary scientists, with an overview of the previously unknown area and allows the team to identify potential scientific targets. Its secondary task is to capture images of potential scientific targets in various spectral bands using a pan-tilt context imager augmented with a custom-built filter wheel (CTX-FW) (fig. S1). The Hybrid’s main task is collecting scientific data of numerous targets using a pan-tilt context imager with an additional thermal camera (CTX-TH). Moreover, it is equipped with a base-mounted Metrohm Instant Raman Analyzer XTR DS (MIRA) with a zoom lens to acquire Raman spectra of targets of interest. The Scientist performs an in-depth scientific analysis of previously identified targets. It features a custom 6–degree of freedom (DoF) robotic arm with a MIRA on the forearm and a custom microscopic imager (MICRO) on the wrist.

Figure 1B shows a system overview. Two operators on two mission control stations sent high-level navigation, remote measurement, and in situ measurement goals to the robots. The “Scientific payload integration and deployment” section describes scientific tasks that the robot can conduct remotely using the CTX-FW and CTX-TH payloads and close-up investigations, namely, MIRA and MICRO measurements. All data packets between mission control and the robots were delayed with an RTT of 5.0 s to simulate lunar operation. Navigation goals were handled by the same module on each robot. Three-dimensional (3D) remote measurement goals were only used by the CTX imagers on the Scout and the Hybrid. The 6D in situ measurement targets were processed by the Hybrid and the Scientist. The robots sent feedback about their state, navigation images, a sparse map representation, and scientific data of targets of interest to the mission control stations.

Although all robots have their designated role, they shared many exploration capabilities and payloads, enabling a high redundancy level. If a robot failed, the operations team could reallocate tasks between the robots. Figure 2 visualizes how tasks could be reallocated for examples of exploration and measurement tasks because of the payload redundancy concept. Table S1 summarizes which scientific measurement tasks could be executed under which failure conditions.

Experimental setup on analog sites
We conducted end-to-end missions in two lunar analog environments and a locomotion validation test in a martian analog test bed. In both end-to-end analog missions, the operations team consisted of five people: A team of one robot operator and one planetary scientist each operated one of the two mission control stations. The robot operators sent tasks to the robots, and the planetary scientists selected and prioritized targets on the basis of the data received from the robots. In addition, one supervisor oversaw the mission and ensured communication between the two control teams. Both mission control stations could be used to interact with any of the robots. However, the team ensured that each robot only received commands from one control team at any given time to prevent conflicting commands, where the newer command would supersede the previous one. In the presented deployments, we used one mission control station to control the Scout and the Hybrid and the second control station to control the Scientist.

SRC, Luxembourg
One field campaign occurred at the ESA/ESRIC Space Resources Challenge in Esch-sur-Alzette, Luxembourg (September 2022). The competition area measured 1800 m², and the terrain was unknown before the challenge. The ground was covered in coarse, granular basalt with a substantial fine fraction (clay to silt). The light conditions closely resembled those at the lunar south pole because of a powerful illumination source at a high incidence angle in the corner of the area (Fig. 3A). The analog scenario contained several locations of interest, such as REAs, boulders, craters, and a
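The command-arbitration convention described above (each robot accepts commands from a single assigned control station, and a newer command supersedes the one currently held) can be sketched as follows; the interface and names are illustrative, not the authors' implementation:

```python
class CommandArbiter:
    """Sketch of the single-controller convention: one control station per
    robot, with newer commands superseding older ones."""

    def __init__(self, assignments):
        # Assignments mirror the deployments described in the text:
        # one station for the Scout and the Hybrid, one for the Scientist.
        self.assignments = assignments
        self.active_command = {robot: None for robot in assignments}

    def send(self, station, robot, command):
        if self.assignments.get(robot) != station:
            return False  # by convention, the other station does not command this robot
        self.active_command[robot] = command  # newer command supersedes the previous one
        return True

arbiter = CommandArbiter(
    {"scout": "station_1", "hybrid": "station_1", "scientist": "station_2"}
)
arbiter.send("station_1", "scout", "explore_area_A")
arbiter.send("station_2", "scout", "explore_area_B")   # rejected: wrong station
arbiter.send("station_1", "scout", "image_boulder_3")  # supersedes exploration goal
print(arbiter.active_command["scout"])  # → image_boulder_3
```

The assignment table, rather than any locking in the robots, is what prevents conflicting commands in this sketch, matching the organizational convention the text describes.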



Fig. 1. System architecture of our team of legged robots. (A) Robotic and scientific payloads on the Scout, Scientist, and Hybrid. Robotic payloads and scientific
payloads are labeled in orange and white, respectively. (B) High-level overview of the software architecture of our system. With a balanced combination of shared
and specialized modules per robot, we designed a safe yet efficient multirobot system.





Fig. 2. Example of task allocations in our system. Task allocations are shown as solid lines. Alternative allocation paths are shown as dashed lines. When a task allo-
cation becomes invalid, for example, because a payload or robot malfunctions (red lines), tasks can be reallocated according to our redundancy concept.
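The reallocation logic of Fig. 2 can be sketched as a capability lookup with fallback: when the robot normally assigned to a measurement fails, the task moves to another robot carrying a capable payload. The capability table below is illustrative and does not reproduce the authors' exact payload matrix (table S1):

```python
# Hypothetical capability table: which robot carries a payload able to
# execute which measurement task (illustrative only).
CAPABILITIES = {
    "scout":     {"ctx"},
    "hybrid":    {"ctx", "thermal", "mira"},
    "scientist": {"mira", "micro"},
}

def allocate(task, preferred, failed_robots):
    """Return a robot that can execute `task`, preferring `preferred`;
    fall back to any healthy robot with a capable payload."""
    candidates = [preferred] + [r for r in CAPABILITIES if r != preferred]
    for robot in candidates:
        if robot not in failed_robots and task in CAPABILITIES[robot]:
            return robot
    return None  # no healthy robot carries a capable payload

print(allocate("ctx", "scout", failed_robots={"scout"}))            # → hybrid
print(allocate("mira", "hybrid", failed_robots={"hybrid"}))         # → scientist
print(allocate("micro", "scientist", failed_robots={"scientist"}))  # → None
```

The last case illustrates the limit of the redundancy concept: a task tied to a payload carried by only one robot cannot be reallocated if that robot fails.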

lunar habitat prototype. The mission control room was separated from the competition area, and all communication with systems in the competition area was subject to a 5.0-s RTT. We summarize the most important objectives and rules of the SRC in table S2.

Quarry site, Switzerland
The quarry site was an active gravel quarry operated by KIBAG, situated in Neuheim, Switzerland. The quarry consists of poorly sorted fine and coarse sediments, including meter-sized boulders, leading to locomotion challenges such as sinkage and slippage, especially on steep inclines. The site includes a headwall with a maximum slope of about 20°. We simulated realistic lighting conditions in a lunar south pole scenario. To this end, we conducted the test at night to minimize the influence of naturally occurring light and illuminated the test site with a 180-W light-emitting diode (LED) lamp (Aputure LS 120D II) at a high illumination angle of roughly 87°. As shown in Fig. 4A, the illumination led to characteristic long, high-contrast shadows as expected in the vicinity of the lunar south pole. Furthermore, all communication between mission control and the robots passed through a delay simulator, which created an RTT of 5.0 s.

We selected distinct boulders on the site to simulate scientific targets of interest. Furthermore, we spread patches of basalt, ilmenite, rutile, and titanium dioxide in different mass fractions on the terrain to create realistic REAs for a lunar prospecting mission.

Locomotion validation test bed, Beyond Gravity
We used the planetary soil test bed at Beyond Gravity (fig. S2) to validate the Scout’s locomotion controller (25) on steep, granular soil analogs. The test bed was initially designed to test the locomotion subsystem of the ExoMars rover Rosalind Franklin. It features a 6 m–by–6 m tiltable container that can be filled with various analog soil simulants. We used ESA’s ES-4 martian soil simulant (44) and a row of Jurassic limestone plates as martian bedrock analogs for our tests. The test bed is tiltable up to 25° at a 0.1° resolution.

Analog mission results
This section provides an overview of our analog deployments: the end-to-end deployments at the SRC and in the quarry as well as the locomotion validation tests.

SRC mission overview
During the SRC, the challenge’s core objectives provided by the organizers were mapping the competition area, locating boulders and REAs, and characterizing them. On that basis, we derived goals for the robotic system: These comprised mapping the entire competition area, locating REA candidates and all boulders, and providing scientific data of the boulders and potential REAs to enable trained geologists to characterize them.

Figure 3F shows the mission overview of the SRC deployment. We first deployed the Scout to map the area and used the navigation cameras and CTX-FW to help the operations team prioritize the targets of interest. After the first LoS, we decided that we had enough targets of interest to deploy the Scientist. The Scientist’s
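The delay simulator used in the quarry mission (and the 5.0-s RTT at the SRC) can be approximated by buffering each packet for half the RTT per direction. A minimal sketch, assuming a simple time-stamped queue rather than the authors' actual tooling:

```python
import heapq

class DelaySimulator:
    """One-way packet delay queue emulating a 5.0-s round-trip time
    (2.5 s per direction). Interface names are illustrative."""

    def __init__(self, one_way_delay=2.5):
        self.delay = one_way_delay
        self.in_flight = []  # min-heap of (delivery_time, packet)

    def send(self, packet, now):
        # Each packet becomes visible to the receiver one-way-delay later.
        heapq.heappush(self.in_flight, (now + self.delay, packet))

    def deliver(self, now):
        # Release every packet whose delivery time has passed.
        delivered = []
        while self.in_flight and self.in_flight[0][0] <= now:
            delivered.append(heapq.heappop(self.in_flight)[1])
        return delivered

uplink = DelaySimulator()
uplink.send("goal: image boulder 2", now=0.0)
print(uplink.deliver(now=1.0))  # → [] (still in flight)
print(uplink.deliver(now=2.5))  # → ['goal: image boulder 2']
```

Running one such queue per direction yields the full 5.0-s RTT: a command sent at t = 0 reaches the robot at t = 2.5 s, and its acknowledgment reaches mission control no earlier than t = 5.0 s.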



Fig. 3. Performance of our team of legged robots during the SRC. (A) Overview of the competition area. (B) Postprocessed high-resolution map of the exploration area
and the associated path of the Scout. The region in the blue dotted box corresponds to (C). (C) Online mesh map of the Scout for target identification. (D) Postprocessed
high-resolution height map and scientific acquisition paths of the Hybrid and the Scientist. (a and b) Thermal images acquired during the SRC. (c) Example of the rock
instance segmentation of a Navcam image. (d) Example of the rock instance segmentation of a CTX image. (E) Example of a panorama image with rock instance seg-
mentation acquired during the SRC. (F) Mission summary of the SRC.




Fig. 4. Performance of our team of legged robots during the analog mission on the Neuheim quarry site. (A) Experimental test yard with a variety of scientific
targets. (B) Postprocessed and shaded high-resolution map of the Neuheim quarry site with the path of the Scout. (C) Rock segmentation results under low-illumination
conditions. (D) Postprocessed high-resolution elevation map of the Neuheim quarry test site. (E) Mission summary of the end-to-end analog deployments at the quarry.
The Scout, the Hybrid, and the Scientist were all deployed sequentially with an overlap of the Hybrid and the Scientist. The whole mission duration was 68 min.




task was mainly to focus on potential REAs. After 76 min, we deployed the Hybrid to support the boulder characterization with the thermal imager and the MIRA.

We mapped 95% of the competition area, located seven of eight boulders, and identified 18 potential REAs. We prioritized five boulders for closer investigation, taking CTX-FW images of all of them and MIRA measurements of three boulders. In addition, we investigated 6 of the 18 potential REAs with MICRO and MIRA, taking several measurements per target when the data quality was insufficient.

The SRC deployment illustrates several advantages of the teamed exploration approach. Three robots operated simultaneously for 20 min because of the two mission control stations that could be used interchangeably to interact with all robots. During this time, the Scout was completing the map in yet unexplored areas, and the Hybrid and the Scientist were collecting scientific data to characterize the targets of interest. Our autonomy features and intuitive graphical user interface (GUI) further supported deploying three robots with two operator stations. In addition, because the environment was already known after the Scout deployment, the Scientist could deploy MICRO and MIRA every 3 to 5 min, which would be unfeasible with an approach where the same robot has to map and identify targets.

Quarry mission overview
To test all payloads on a number of targets with variability, we defined a minimum of five targets to investigate with each instrument. Furthermore, we intended to map a large area to test our localization and mapping systems on a relevant scale. We accordingly defined the following mission goals for the deployment at the quarry: exploring and mapping an area of at least 1000 m² and identifying at least five targets of interest, such as boulders or terrain patches. To test our instruments, we set the goal to acquire measurements of at least five targets with each instrument (CTX, filter wheel, thermal imager, MIRA, and MICRO).

Figure 4E shows the mission overview of the quarry deployment. We did not use the CTX-FW on the Scout and limited the use of the MIRA on the Scientist because the hardware malfunctioned. Thus, according to our redundancy concept, we deployed the Hybrid early to take over the CTX tasks of the Scout and the MIRA tasks of the Scientist. The Scientist deployed once the operations team had identified and prioritized enough targets of interest to maximize the payload utilization on the Scientist.

In a total mission time of 68 min, we mapped an area of 1375 m² and identified 12 targets of interest (10 boulders and 2 area patches). We collected 16 CTX and thermal images of seven rocks, MIRA spectra of six rocks, and MICRO images of three rocks and two area patches (two MICRO datasets of the same area patch). Using our two control stations, we could efficiently control the Hybrid and the Scientist in parallel during one-third of the mission. Because of the redundancy concept in our robotic team approach, we fulfilled all mission objectives except the acquisition of filter wheel images, despite the inactive payloads. A single-robot system or a system without a redundancy concept could not have achieved the objectives related to the CTX and the MIRA measurements under these circumstances.

Fig. 5. Locomotion capability of our customized ANYmal over different challenging planetary analog terrains. (A) ANYmal climbs a steep sandy slope at around 20° on the Neuheim quarry site. (B) ANYmal escapes from the analog lunar impact crater rim during the SRC. (C) ANYmal escapes from deep foot sinkage in the locomotion validation test bed. (D) ANYmal walks over a high bedrock step.



Locomotion results
In the validation test campaign at Beyond Gravity, the Scout could climb and descend slopes of up to 25°—the maximum of the test bed—on ES-4 and bedrock using our existing locomotion control policy (25). On both grounds, we conducted three tests, both in ascent and descent, without a locomotion failure, which we defined as either a fall or a failure to continue the traverse, for example, due to slippage. Even on the maximum slope of the test bed, the robot reached a top speed of 0.7 m/s, outperforming state-of-the-art systems that can climb such inclinations (table S3). We conducted additional tests at a 10° slope on ES-4 with steps of bedrock and very loose hills in all directions. The robot could negotiate the hills despite the high sinkage (Fig. 5C) and seamlessly transitioned over high steps between ES-4 and bedrock (Fig. 5D).

Our developed controller for the Hybrid and the Scientist enabled robustness similar to the existing baseline controller (25), even with added heavy payloads such as the robotic arm. The moving robotic arm did not impede the Scientist while walking on flat terrain (fig. S3 and movie S1). In addition, with a static arm, the robot consumed 15% less power in a mock mission on flat ground when using our controller compared with the baseline controller (see Supplementary Results).

During the mission deployments in the quarry and the SRC, the robots traversed steep granular slopes up to 20° (Fig. 5A) and a crater rim (Fig. 5B). At the SRC, the robots covered a total distance of 358 m of granular terrain (Fig. 3, B and D). The Scientist, carrying the robotic arm, showed the same level of robustness as the Scout and the Hybrid. During all pretests and mission deployments, not a single locomotion failure occurred on any of the robots.

Mapping and target identification
We used a lightweight mesh representation for online operations and a dense mapping framework for mission postprocessing in both analog missions (Fig. 6). Furthermore, we ran an instance segmentation pipeline on our navigation cameras and CTX images to identify and highlight boulders as potential targets of interest for the operations team, substantially reducing the operational overhead. Figures 3 (C to E) and 4 (C to E) show the mapping and perception results at the SRC and in the quarry, respectively.

The robot sent a downsampled point cloud in a 9-m radius around the robot to mission control. The mission control PC generated the mesh and fused single mesh instances automatically to gradually build a mesh map of the covered area. Offloading the meshing operation to the mission control PC allowed us to transmit a small point cloud instead of a heavy map. Figure 3C shows a fraction of the mesh map of the SRC built with three mesh instances. Boulders 4 and 5 and impact craters 4 and 5 are distinguishable in the mesh, supporting target identification. In addition, we segmented selected panoramic images (Fig. 3E).

Aside from the online mesh to guide operations, all robots maintained a high-resolution point cloud map in both missions for postmission visualization. Figure 3B shows the postprocessed and shaded dense map of the complete SRC competition environment, with the path of the Scout overlaid. We explored 95% of the area with the Scout within 96 min, where most of the environment and the scientific targets were already identified after 50 min. Similarly, Fig. 4B shows the dense map created during the quarry mission. In addition, we colored the dense maps by elevation (Figs. 3D and 4D) to create a topographic overview of the environment. For example, the headwall in the quarry is visible in the top left part of the map. Furthermore, we created colored maps (figs. S4 and S5) by projecting the navigation camera colors onto the point cloud.

Scientific data of targets of interest
Once the operations team identified and prioritized targets of interest, they could send remote measurement and in situ measurement tasks to all robots (Fig. 7). Figure 8 shows an example of the scientific data that we could gather during the SRC. We acquired images in five visual spectral ranges at two zoom levels using CTX-FW (Fig. 8A). The low-zoom image shows the target’s size, shape, and geomorphic context. The high-zoom image provides information about the target’s surface texture, including the presence of millimeter- to centimeter-scale vesicles, the rock’s lithology, and the distribution of minerals. Both low-zoom and high-zoom images indicate that all boulders in the SRC were porous basalts with aphanitic (fine-grained) textures. The high-zoom images with the filter wheel enabled us to study the relative reflectance of each target and compute five-point spectra. We color-corrected selected RGB images with images from a robot-mounted color calibration card.

Figure 8B shows in situ measurement results of a potential REA during the SRC. We collected microscopic images in six different spectral bands [white, red, green, blue, ultraviolet (UV), and infrared (IR)], allowing us to compute the target’s five-point spectrum. The images show the coarse granular basalt regolith, which covered the whole competition area at the SRC. MICRO images enabled us to investigate the grain size distribution and the presence of potential resources. The MIRA Raman spectrum of the basaltic regolith in Fig. 8B shows, for example, a prominent peak at 952 cm⁻¹.

DISCUSSION
Using a team of legged robots with powerful locomotion capabilities, a mapping pipeline for online and offline visualization, and segmentation tools for target identification allowed us to collect a
the mesh. Together with the navigation cameras, the resolution of substantial amount of scientific data in planetary analog missions
the mesh map allowed the operations team to identify and mark with limited mission time. Compared with the single-robot ap-
these targets of interest for further investigation. Therefore, the res- proach that we deployed at the first SRC field trial (34), we could
olution was high enough to understand the robot’s environment, drastically increase the quality and quantity of scientific data prod-
select targets on the mesh map, and make mission scheduling deci- ucts. The specialization of our robots allowed a high payload utili-
sions while still allowing the map to be transferred over the network. zation, as demonstrated at the SRC, where the Scientist robot
Figures 3D (c and d) and 4C show the instance segmentation deployed the MIRA and MICRO instruments every 3 to 5 min. In
output on the navigation cameras in the SRC and quarry missions, a multirobot approach without specialization, for example, without
respectively. The images show that, even under difficult lighting a dedicated scout robot, every robot would invest a substantial part
conditions, the pipeline could identify and highlight boulders in of the mission time in mapping and target identification, limiting
the image, supporting the operations team in the target the high-return instruments’ payload utilization.
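The five-point reflectance spectra mentioned above can be illustrated with a short numerical sketch. This is not the paper's processing code: the normalization against a reference (such as the robot-mounted calibration card) is an assumption, and all image data here are synthetic.

```python
import numpy as np

# CTX-FW filter-wheel bands (nm), as listed in Materials and Methods.
BANDS_NM = (390, 470, 530, 620, 940)

def five_point_spectrum(target_imgs, reference_imgs):
    """Relative reflectance per band: mean target intensity normalized by the
    mean intensity of a reference image in the same band. The normalization
    scheme is illustrative; the paper reports spectra on an arbitrary scale."""
    return [float(np.mean(t) / np.mean(r))
            for t, r in zip(target_imgs, reference_imgs)]

rng = np.random.default_rng(1)
# Synthetic stand-ins for per-band grayscale crops of the target and reference.
targets = [s * rng.uniform(0.0, 255.0, (64, 64)) for s in (0.3, 0.5, 0.6, 0.7, 0.4)]
references = [rng.uniform(200.0, 255.0, (64, 64)) for _ in BANDS_NM]
spectrum = five_point_spectrum(targets, references)
print(dict(zip(BANDS_NM, np.round(spectrum, 3))))
```

A geologist could then compare such band ratios across targets, which is the kind of relative (not absolute) comparison the arbitrary-scale spectra in Fig. 8 support.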

Arm et al., Sci. Robot. 8, eade9548 (2023) 12 July 2023 9 of 18


Fig. 6. Mission support mapping and localization modules. The blue area represents the postmission operations, whereas the other modules are active during
the mission.
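Before transmission in the online part of this pipeline, the robots reduced the dense map to a coarse point cloud (150-mm voxels; see "Lightweight mesh representation"). A minimal numpy sketch of such voxel-grid downsampling, for illustration only; the deployed system additionally applied Draco compression:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()  # guard against numpy shape quirks with axis=0
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)  # sum the points per voxel
    return centroids / counts[:, None]     # divide by voxel occupancy

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 9.0, size=(100_000, 3))   # synthetic dense cloud
sparse = voxel_downsample(cloud, voxel_size=0.15)  # 150-mm voxels, as in the paper
print(cloud.shape[0], "->", sparse.shape[0])
```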

The quarry deployment showed the importance of the redundancy concept. We could still fulfill six of seven mission goals despite two malfunctioning payloads. Although we did not experience a robot failure in the deployments, our redundancy concept would allow us to still accomplish most mission goals.

Our payload selection features a balanced mix of remote and in situ science, but it was subject to budget and time constraints. Other instruments, for example, x-ray fluorescence (XRF) spectrometers, as used by other teams at the SRC, performed better at identifying REAs. Further expanding our scientific instrument suite, for example, with an XRF or laser-induced breakdown spectrometer (LIBS), would increase the quality and diversity of the mission's science outputs. In addition, adapting our system to allow sample return to a lander with more capable scientific instruments would be a valuable expansion to increase the mission's scientific output.

One lesson from our field deployments is that an RTT of 5 s in the communication between mission control and the robot notably affected operations. Standard, reliable protocols such as the Transmission Control Protocol (TCP) are not suitable in this setting. When using the User Datagram Protocol (UDP), we had to reduce the data size of products, such as point clouds and images, to decrease the probability of a single packet loss occurring within the transfer of a data product. Furthermore, every operator interaction cost valuable mission time. We tackled this issue by providing


Fig. 7. In situ measurement task workflow. (A) The operator selects the desired target in the depth camera interface to define the instrument pose. The pose can be
adjusted using a 6-DoF interactive marker. The robot receives the 6-DoF goal and uses it to deploy the scientific instruments at the desired location. (B) Boulder mea-
surement using the MIRA Raman spectrometer and associated data products. (C) Boulder measurement using MICRO and associated data products. (D) Ground patch
measurement using MICRO and associated data products.
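Task-level autonomy behind workflows like this one was coordinated with behavior trees (see the "Autonomy" section in Materials and Methods). The following is a minimal, self-contained Python sketch of a sequence node ticking hypothetical measurement actions; the node names and tree structure are illustrative, not the deployed BT:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"
    RUNNING = "running"

class Sequence:
    """Composite node: ticks children in order and fails fast."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status is not Status.SUCCESS:
                return status  # propagate FAILURE or RUNNING
        return Status.SUCCESS

class Action:
    """Leaf node wrapping a callable that returns True on success."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return Status.SUCCESS if self.fn() else Status.FAILURE

# Hypothetical in situ measurement task (names are illustrative).
log = []
task = Sequence([
    Action("approach_target", lambda: log.append("approach") or True),
    Action("deploy_arm", lambda: log.append("deploy") or True),
    Action("trigger_measurement", lambda: log.append("measure") or True),
])
result = task.tick()
print(result, log)
```

Because composites like `Sequence` are agnostic to what their children do, the same modules can be reused across robots, which is the modularity benefit the paper attributes to BTs.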


Fig. 8. Examples of scientific data products at the SRC. (A) Multispectral visual data acquired by the CTX-FW of boulder #5. The images show a vesicular basaltic
boulder. (B) MICRO and Raman data products of a REA candidate. The images and spectra show the basaltic soil of the competition area. The relative reflectance has
an arbitrary scale.
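The MIRA matched each measured spectrum against a library of 355 reference spectra (see Materials and Methods). A toy version of such library matching using cosine similarity over synthetic Gaussian bands; the band positions, the tiny library, and the similarity metric are illustrative assumptions, not the instrument's actual algorithm:

```python
import numpy as np

wavenumbers = np.linspace(200, 1800, 400)  # Raman shift grid in cm^-1

def gaussian_band(center, width=20.0):
    """Synthetic Raman band (Gaussian) on the wavenumber grid."""
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

def best_match(measured, library):
    """Return the library entry with the highest cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda name: cos(measured, library[name]))

# Tiny stand-in library; the real MIRA library held 355 reference spectra.
# Band positions are approximate literature values, for illustration only.
library = {
    "olivine": gaussian_band(825) + gaussian_band(855),
    "pyroxene": gaussian_band(665) + gaussian_band(1010),
    "apatite": gaussian_band(960),
}

rng = np.random.default_rng(2)
measured = gaussian_band(952) + 0.05 * rng.normal(size=wavenumbers.size)
match = best_match(measured, library)
print(match)
```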

lightweight, expressive data to the operators, such as the mesh map, and using task-level autonomy. Hence, every task execution only required one operator interaction. However, a more in-depth analysis of suitable protocols and compression technologies could further alleviate this problem.

A key finding in our deployments was the importance of the human aspect: The robots could execute single tasks autonomously, but the human operators still had to prioritize and allocate tasks. This decision-making can be time-consuming and difficult, especially because the operators must balance the time required for the decision-making process with the mission time. An algorithmic
approach to decision-making and task allocation could improve the mission return. The algorithm could either aid the operations team or, ideally, be implemented in an autonomy module on the robots to allow longer-term or even full-mission autonomy. However, besides semantic scene understanding, this requires automatic interpretation of scientific data products such as images and spectra, which is an open challenge. Furthermore, more robot-to-robot communication is needed to build collaborative maps, allocate tasks without mission control in the loop, and allow for more involved interaction and collaboration between multiple robots.

A higher level of autonomy will additionally improve the system's scalability to applications with even more challenging communication, such as Mars exploration. Moreover, it will allow scaling of the approach to a higher number of robots without increasing the workload on the operations team. Required modules to increase the level of autonomy are a safe, multi-goal planner to visit identified targets, automatic target identification and prioritization as stated above, and automatic task allocation to the robots based on the robot-specific skillset.

In this work, we used legged robots with different scientific investigation skills but almost identical locomotion skills. In future deployments, a more heterogeneous approach could be advantageous: Drones would enable fast terrain mapping; wheeled robots, with limited mobility but higher battery life, could investigate easy-to-reach targets; and the legged robots could focus on hard-to-access areas.

Last, a relevant research stream is the scalability of such prototypes from analog deployments to actual space missions (45). Challenges specific to legged robots mainly concern the power concept and thermal management. Preliminary results show that these aspects are feasible (45, 46). However, there are currently no powerful space-grade processors to run the mapping, navigation, and locomotion algorithms in real time. In addition, our localization and mapping pipelines heavily rely on LiDARs, which are currently not space-qualified. However, solid-state LiDARs could pave the way for LiDAR-based localization and mapping in space applications. Although there remains substantial room for future research and several open challenges in scaling the presented system to actual space missions, this work is an important step toward teams of legged robots for planetary exploration that provide value in both scientific and commercial missions.

MATERIALS AND METHODS
Hardware description
All robots described here are based on ANYmal by ANYbotics. The Scout is an ANYmal C (2019), whereas the Hybrid and the Scientist are based on the ANYmal D (2021). In addition, the Scientist carries a DynaArm, a custom 6-DoF robotic arm. ANYmal weighs 50 kg and has a payload capacity of 15 kg and a nominal operation time of 90 min while continuously walking. We show an overview of the onboard computers in table S4.

Robotic payloads
In the factory configuration, the ANYmals are equipped with a VLP16 Puck LITE LiDAR by Velodyne for mapping and localization, four (ANYmal C) or six (ANYmal D) RealSense D435 active stereo sensors by Intel (front, left, right, and rear) for elevation mapping ("Elevation mapping" section), and two FLIR Blackfly wide-angle cameras (front and rear) to provide RGB image streams. We customized all robots with two high-power, air-cooled LEDs at the front and rear to illuminate dark environments. Because the Scout is the designated mapping robot, we replaced its four RealSense D435 with two Robosense RS-Bpearl LiDARs (front and rear) for more accurate elevation mapping ("Elevation mapping" section) and dense point cloud mapping ("Dense point cloud mapping" section). In addition, we added Sevensense's Alphasense Core visual-inertial sensor as a high-performance navigation camera array on the Scout. It features three monochrome and four color cameras (front, left, right, and top). Each camera has 0.4 MP and a field of view of 126° by 92.4° (horizontal by vertical). In addition, four high-power, air-cooled LEDs are encapsulated in the same housing. The Alphasense Core allows the operator to quickly gain a first overview in an unknown environment and enables RGB-colored mapping ("Dense point cloud mapping" section).

Science payloads
The Scout carries an ANYbotics inspection payload comprising a 10× optical zoom camera and a spotlight on a pan-tilt unit. We augmented the zoom camera with a spectral filter wheel. The filter wheel contains five narrow band-pass filters (390, 470, 530, 620, and 940 nm) and one position with no filter. In this configuration, we refer to the payload as CTX-FW. We selected CTX-FW to obtain high-resolution images of the targets and their surroundings as well as multispectral information because lunar minerals are known to exhibit distinct reflective features at different wavelengths. On the back of the robot, we mounted a color calibration target with 24 color squares to color-calibrate CTX images.

The Hybrid carries a newer ANYbotics inspection payload for the ANYmal D. It consists of a 20× optical zoom camera, a thermal camera, and a spotlight on a pan-tilt unit. We refer to this payload as CTX-TH. Aside from high-resolution images, CTX-TH enables the thermophysical analysis of targets because rocks and regolith have different thermal signatures depending on their composition. In addition, we equipped the Hybrid with MIRA, a ruggedized, portable Raman spectrometer (47). MIRA is equipped with an autofocus attachment that enables measurements of targets from a distance of up to 2 m. The MIRA allows compositional analysis of the target's mineralogy, which provides a more in-depth investigation than the CTX-FW and CTX-TH imaging techniques.

The Scientist uses the DynaArm to deploy the instruments for in situ measurements. The DynaArm has a total weight of 10 kg, including the 2.3 kg of the scientific payloads. It has a reach of 0.9 m without any tools attached and a nominal payload capacity of 8 kg at half of the reach, with a maximum tool speed of up to 13.5 m/s. The forearm of the DynaArm carries a MIRA. In addition, we mounted our custom-built UV–visible–near IR (UV-VIS-NIR) MICRO as the end effector (fig. S1B). It contains a USB microscope (Dino-Lite AD4113T-I2V) mounted on a linear actuator mechanism, a ring of 48 RGB LEDs, a time-of-flight (ToF) sensor, and control electronics. The front of the casing is equipped with a foam ring, which ensures that no stray light enters the case when MICRO is in contact with a target. The internal UV and NIR LEDs of the Dino-Lite microscope and the RGB LED ring allow for the acquisition of multispectral microscopic images in the entire spectral range from UV to NIR at 395, 470, 525, 620, and 940 nm. The ToF sensor determines the distance between the microscope and the target. The distance feedback allows the microscope, in
combination with the linear actuator and an autofocus routine, to acquire sharp images of the samples, independent of the precise placement of the instrument. Using MICRO's multispectral images, trained geologists can perform petrographic assessments that are more detailed than the analyses based on CTX-FW.

Legged locomotion in planetary environments
We used a reinforcement learning (RL) approach to design our locomotion policy because RL approaches have shown robust performance in challenging environments (24, 25, 48). We based our work on the perceptive locomotion pipeline originally published in (25), which has already been used successfully in other field deployments such as the DARPA Subterranean Challenge (49). In this pipeline, the control policy is trained in two stages: First, we trained a teacher policy with access to all ground truth information. Then, we trained a student policy to output the same action as the teacher from noisy and limited information.

We used the existing controller from (25) on the Scout. On the Hybrid and the Scientist, however, the scientific payloads and the robotic arm can lead to large disturbances for the controller, resulting in reduced robustness and inefficient motion. Therefore, we developed an updated locomotion policy with explicit payload measurements: First, we added payload mass randomization during training and explicitly gave the mass information to the teacher policy. Second, we added arm position and velocity observations to allow the policy to actively counteract the wrench on the base caused by the arm's motion. In addition, we increased the joint torque reward to further improve energy efficiency. During training, we gave a random arm initial position and a random arm target position and controlled each arm joint via proportional-derivative (PD) control to simulate the arm movements. We present more detail on our locomotion setup in Supplementary Methods.

Localization, mapping, and perception
In this section, we describe our localization and mapping system, designed to globally localize the robots and generate an accurate representation of the environment during the mission. Furthermore, we describe the instance segmentation algorithm that helps the operations team understand the environment.

Robust LiDAR-based localization
We relied on LiDAR-based simultaneous localization and mapping (SLAM) because of the long range and large field of view of LiDARs. A core challenge of LiDAR-based SLAM in planetary analog missions is LiDAR degeneracy. To cope with degenerate environments, we used a modified version of CompSLAM (50), which has been shown to perform well under degeneracy through numerous deployments (35). CompSLAM is a complementary multimodal SLAM system based on the Iterative Closest Point algorithm (51). CompSLAM can use visual, thermal, or inertial data as a robust prior for the LiDAR mapping module. In this work, we relied only on the inertial data as a prior. If the environment degenerates, CompSLAM identifies the degenerate directions and directly uses the prior to integrate the point cloud scan into the map.

Dense point cloud mapping
CompSLAM maintains only a sparse map to keep the SLAM optimization problem tractable. We therefore required a separate module for dense mapping to create a high-resolution environment representation. To this end, we maintained an octree-based map with a voxel size of 30 mm on the robots.

First, we merged and filtered the point clouds of all LiDAR sensors on the robot. After registering the point cloud into the dense map using the pose estimate of the SLAM system, we clipped the map at a 2-m height to remove unnecessary data. Furthermore, we applied an outlier filter by removing points that did not have at least five neighbors within a 200-mm radius. Last, we applied a multirobot crop filter to reject points that lie on the other robots. The mapping pipeline additionally maintained a colorized map by projecting the RGB information from the navigation cameras onto a copy of the point cloud.

After the mission, we fetched the high-resolution point cloud map from the robot, simulating long-term data transmission. First, a GPU-based preprocessor calculated the surface normals at each point using the nearest 40 points. Subsequently, we used a statistical noise filter and a radius filter to filter the map. The statistical noise filter considered the nearest 10 points and filtered out-of-distribution points (SD > 2σ). After the preprocessing, we reconstructed a triangle mesh with a tree depth of 12 using the Poisson surface reconstruction method (52) included in the Open3D library (53) to generate the high-resolution, continuous mesh representation. This process took up to 5 min on a computer with an Intel i9-12700K CPU, depending on the terrain complexity. Later, the generated high-resolution mesh was visualized with an eye dome lighting shader (54).

Lightweight mesh representation
Because an unreliable, high-latency network cannot transport the dense map in real time, we periodically sent a downsampled version of the dense map (voxel size of 150 mm) in a 9-m radius around the robot to mission control. Before transmission, we compressed the point cloud using the Draco point cloud compression library (55). The mission control PC used this downsampled and compressed point cloud to create a lightweight mesh representation (fig. S6). With downsampling and compression, we reduced the size of the point cloud that was transported over the high-latency network from 3 MB to 250 kB. Furthermore, the robot only sent the point cloud if it contained substantial new information, for example, when the robot moved or when the user requested a mesh of a certain area in the user interface. The meshing operation on the mission control PC was limited to a maximum of 0.25 Hz. The mission control PC calculated a triangle mesh using the Poisson surface reconstruction method (52) with a depth parameter of eight. We recalculated the surface normals of the mesh to correct the orientation and smoothness of the mesh surfaces. In addition, we filtered out vertices and triangles with fewer than 10 support points. This mesh was then merged into the mesh map by fusing the nonoverlapping vertices and triangles and averaging the overlapping areas.

Elevation mapping
The elevation map was a local robot-centric 2.5D grid representation in which each cell indicated the terrain height. We used it as a terrain representation with a high update rate for the local planning module ("Autonomous local navigation" section). The elevation map is 8 m by 8 m with a resolution of 40 mm. We used an elevation mapping pipeline running on a GPU (56) to integrate the point
cloud data into the elevation map in parallel based on the robot's odometry.

Instance segmentation
We applied a boulder instance segmentation network to autonomously identify each boulder instance in the RGB images and contextualize the scientific data. We built our approach on Mask R-CNN (57). The network predicts each boulder instance's bounding box, outline, and confidence. We fine-tuned the original model on a custom-built dataset containing hundreds of instance-labeled images collected in the first field trial of the SRC. In this custom-built dataset, ANYmal acquired the images in a dark analog lunar terrain, similar to the setting in the final challenge. We deployed our rock instance segmentation network on the mission control PC to segment navigation and CTX images. Figure S7 shows several prediction results, indicating the network's robust performance under variable light conditions and different input cameras.

Scientific payload integration and deployment
The operator could send 3D remote measurement targets to request images from CTX-FW or CTX-TH. For CTX-FW, the operator could additionally specify which spectral filters to use. The operator marked a target on the map or used an already marked target to send a remote measurement task. When requesting RGB images, the operator could select between a high and a low zoom level, with an image width of 0.3 m and 3 m in the focal plane, respectively. The robot sent the images to mission control, which automatically saved them by the target identifier. In addition, the operator could request panoramic images. The robot then took nine images at a fixed zoom level, which were stitched in postprocessing.

The MIRA communicated with the onboard PC of the robot through a USB interface. It was fully integrated into the software stack through a custom Robot Operating System (ROS) wrapper that provided an ROS action interface to trigger a measurement. The measurement procedure was then fully automated: The autofocus attachment measured the distance to the target and focused at this distance, and then the MIRA initiated a measurement. The MIRA matched the received spectroscopic response against the custom-built library of 355 different spectra of relevant lunar minerals (for example, oxides, olivines, pyroxenes, and feldspars) (table S5). The robot sent the spectroscopic response and the corresponding match to mission control for the operations team to evaluate.

The MICRO had a USB interface and communicated with the onboard PC of the robot through rosserial. As for the MIRA, its measurement sequence was fully automated. On a measurement request, the device focused on the target automatically, recorded an image sequence at all available spectral bands (UV, red, green, blue, and NIR), and returned a complete measurement data package.

Payload deployment with the robotic arm
To deploy the MIRA, the arm aligned the Raman laser with the target at the desired distance of 0.7 m. The deployment of MICRO happened in two stages: First, the arm aligned the MICRO with the target pose at a predefined offset of 100 mm in the inverse heading direction of the target pose. In the second stage, a proportional-integral-derivative (PID) controller moved MICRO along the heading axis of the target pose to bring the front foam piece (fig. S1) in contact with the target surface, using the ToF distance measurement as feedback. The two-stage approach allowed MICRO to accurately make contact with the target surface, even if the original 6D in situ measurement target was not precisely on the surface. We provide information on the arm controller in Supplementary Methods.

Autonomy
We used behavior trees (BTs) (58) to handle the autonomous task execution on the robots. The modularity of BTs allowed us to reuse modules across the different robots. The BT on the robot processed all operator interactions and started, monitored, and stopped the requested task. In this way, the operator set high-level objectives while retaining the ability to stop or change the task easily. We show more details of our BTs in Supplementary Methods.

Autonomous local navigation
The operators guided the robots via high-level navigation goals. We specifically avoided a fully autonomous approach because we preferred that the operations team select and prioritize scientific targets during the mission in unknown environments.

To allow the operator to safely operate the robots via waypoints, we used a sampling-based local planner (59) that operated on the local elevation map. Unlike most state-of-the-art navigation planners, it did not assign traversability values to discrete terrain patches. Instead, the robot morphology was approximated by reachability volumes for the feet and a collision volume for the torso, which made the planner more suitable for legged robots. Using a PID controller, a path-tracking module converted the traversable path to twist commands for the locomotion controller.

Autonomous in situ measurement acquisition
Once the Scientist was close to a target, the operator could select the in situ measurement point on the grayscale IR image stream of the RealSense camera. We used the grayscale IR image because the RealSense camera did not provide an RGB image stream, and the IR image provided good output in bad lighting conditions. The user interface spawned a 6-DoF interactive marker, which could be adjusted if necessary. The operator could specify the exact measurement task, that is, a MIRA measurement, a MICRO measurement, or both.

Once the robot received the measurement task, it approached the target and aligned the base at a predefined offset of 1 m to the target. Then, the robot deployed the instruments and triggered a measurement automatically. Depending on the task definition, the robot took a MIRA measurement, a MICRO measurement, or both. If an error occurred during the measurement process, the arm moved back to the default position, and the robot notified the operator.

LoS operation
If an LoS occurred, the robots finished their currently allocated task and then switched to a dedicated LoS behavior. The Scout and the Hybrid performed an autonomous measurement routine. First, they recorded a panoramic image of their environment. Second, they imaged their own footprints, which provided information about soil mechanics. Third, the robots acquired images of each other and the Scientist to assess the status of the hardware. Fourth, the Scout took an image of the color calibration card. This image was used in postprocessing to perform color correction.
The robots sent the acquired data to mission control once commu- D. Korycansky, D. Landis, L. Sollitt, Detection of water in the LCROSS ejecta plume. Science
330, 463–468 (2010).
nication was restored. The Scientist remained in a nominal standing
6. I. G. Mitrofanov, A. B. Sanin, W. V. Boynton, G. Chin, J. B. Garvin, D. Golovin, L. G. Evans,
configuration because the system currently does not autonomously K. Harshman, A. S. Kozyrev, M. L. Litvak, A. Malakhov, E. Mazarico, T. McClanahan, G. Milikh,
detect new measurement targets. M. Mokrousov, G. Nandikotkur, G. A. Neumann, I. Nuzhdin, R. Sagdeev, V. Shevchenko,
V. Shvetsov, D. E. Smith, R. Starr, V. I. Tretyakov, J. Trombka, D. Usikov, A. Varenikov,
Resilient communication for high-latency networks A. Vostrukhin, M. T. Zuber, Hydrogen mapping of the lunar south pole using the LRO
neutron detector experiment LEND. Science 330, 483–486 (2010).
Resilient communication in high-latency networks is crucial to
7. S. Li, P. G. Lucey, R. E. Milliken, P. O. Hayne, E. Fisher, J.-P. Williams, D. M. Hurley, R. C. Elphic,
ensure data transmission between mission control and the robots. Direct evidence of surface exposed water ice in the lunar polar regions. Proc. Natl. Acad. Sci.
We used commercial off-the-shelf radio devices by Rajant (60) to U.S.A. 115, 8907–8912 (2018).
create a reliable mesh network. The mesh setup included the base 8. Z. C. Scoville, Artemis III EVA mission capability for de Gerlache-Shackleton ridge, in Lunar
station and the robots, each acting as a mesh node. The robots were and Planetary Science Conference (Lunar and Planetary Institute, 2022).
equipped with a BreadCrumb DX2, and the base station consisted of 9. J. Flahaut, J. Carpenter, J.-P. Williams, M. Anand, I. Crawford, W. van Westrenen, E. Füri,
a BreadCrumb ES1 with a panel antenna. All radios operated at 5.8 GHz. The base station and the mission control PCs were connected by wire via a delay simulator that enforced a 5.0-s round-trip time (RTT) delay.

We used ROS1 (61) on all robots and mission control stations to manage onboard communications. Each robot and mission control station ran a separate rosmaster. However, ROS1 was not designed to communicate over high-latency networks: a TCP handshake is required to establish a connection, even if the data are subsequently transmitted over UDP. Therefore, we used Nimbro Network to establish communication between the different rosmasters (62, 63). Nimbro Network was specifically designed for unreliable, high-latency networks and was used in the SpaceBot Cup, a robot competition for lunar exploration (64).

Using TCP, the predominant internet transport protocol, is problematic because of the RTT delay and the limited bandwidth. Consequently, we transmitted all data via UDP. UDP provides no congestion control, and there is no guarantee that data will arrive. Therefore, mission control and the robots exchanged only essential data. We show the network usage of mission control 1 at the SRC in fig. S8. The robot-to-robot communication, however, ran on TCP because the delay there was in the low millisecond range.
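The transport trade-off above (TCP's connection handshake versus UDP's best-effort datagrams) can be sketched with a minimal Python example. This is an illustrative sketch, not the Nimbro Network implementation used on the robots; the telemetry fields and names (`send_telemetry`, `scout_1`) are hypothetical.

```python
import json
import socket

def send_telemetry(sock, addr, record):
    """Fire-and-forget: one datagram, no handshake, no delivery guarantee."""
    sock.sendto(json.dumps(record).encode(), addr)

# Hypothetical "essential data" record (illustrative, not the paper's schema).
pose = {"robot": "scout_1", "x": 12.4, "y": -3.1, "battery_pct": 87}

# Receiver (mission control) and sender (robot), on localhost for the demo.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# With a 5.0-s RTT, a TCP connect() alone would stall ~5 s for the
# SYN/SYN-ACK/ACK exchange before any payload could flow; UDP's sendto()
# returns immediately because there is no connection setup.
send_telemetry(tx, rx.getsockname(), pose)

data, _ = rx.recvfrom(4096)
received = json.loads(data)

tx.close()
rx.close()
```

Because the datagram may silently vanish on a real lossy link, an application layered on this pattern must tolerate gaps, which is why only essential, periodically refreshed data (poses, battery state) suit this channel.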
Supplementary Materials
This PDF file includes:
Results
Methods
Tables S1 to S6
Figs. S1 to S13
References (65–83)

Other Supplementary Material for this manuscript includes the following:
Movies S1 and S2

REFERENCES AND NOTES
1. L. Qiao, J. W. Head, L. Wilson, Z. Ling, Ina lunar irregular mare patch mission concepts: Distinguishing between ancient and modern volcanism models. Planet. Sci. J. 2, 66 (2021).
2. T. D. Glotch, E. R. Jawin, B. T. Greenhagen, J. T. Cahill, D. J. Lawrence, R. N. Watkins, D. P. Moriarty, N. Kumari, S. Li, P. G. Lucey, M. A. Siegler, J. Feng, L. Breitenfeld, C. C. Allen, H. Nekvasil, D. A. Paige, The scientific value of a sustained exploration program at the Aristarchus Plateau. Planet. Sci. J. 2, 136 (2021).
3. M. Smith, D. Craig, N. Herrmann, E. Mahoney, J. Krezel, N. McIntyre, K. Goodliff, The Artemis program: An overview of NASA's activities to return humans to the moon, in 2020 IEEE Aerospace Conference (IEEE, 2020), pp. 1–10.
4. A. Colaprete, "Volatiles Investigating Polar Exploration Rover (VIPER)" (NASA Technical Reports, 2021).
5. A. Colaprete, P. Schultz, J. Heldmann, D. Wooden, M. Shirley, K. Ennico, B. Hermalyn, W. Marshall, A. Ricco, R. C. Elphic, D. Goldstein, D. Summy, G. D. Bart, E. Asphaug,
9. L. Xiao, S. Zhao, Regions of interest (ROI) for future exploration missions to the lunar south pole. Planet. Space Sci. 180, 104750 (2020).
10. P. D. Spudis, B. Bussey, J. Plescia, J.-L. Josset, S. Beauvivre, Geology of Shackleton crater and the south pole of the moon. Geophys. Res. Lett. 35, L14201 (2008).
11. W. D. Carrier III, G. R. Olhoeft, W. Mendell, Physical properties of the lunar surface, in Lunar Sourcebook, A User's Guide to the Moon (Cambridge Univ. Press, 1991), pp. 475–594.
12. N. C. Costes, J. E. Farmer, E. B. George, Mobility Performance of the Lunar Roving Vehicle: Terrestrial Studies, Apollo 15 Results (NASA, 1972), vol. 401.
13. C. Florenskii, A. Basilevskii, N. Bobina, G. Burba, N. Grebennik, R. Kuzmin, B. Polosukhin, V. Popovich, A. Pronin, L. Ronca, The floor of crater Le Monier—A study of Lunokhod 2 data, in Lunar and Planetary Science Conference Proceedings (Lunar and Planetary Institute, 1978), pp. 1449–1458.
14. L. Ding, R. Zhou, Y. Yuan, H. Yang, J. Li, T. Yu, C. Liu, J. Wang, S. Li, H. Gao, Z. Deng, N. Li, Z. Wang, Z. Gong, G. Liu, J. Xie, S. Wang, Z. Rong, D. Deng, X. Wang, S. Han, W. Wan, L. Richter, L. Huang, S. Gou, Z. Liu, H. Yu, Y. Jia, B. Chen, Z. Dang, K. Zhang, L. Li, X. He, S. Liu, K. Di, A 2-year locomotive exploration and scientific investigation of the lunar farside by the Yutu-2 rover. Sci. Robot. 7, eabj6660 (2022).
15. M. Jones, What really happened on Mars Rover Pathfinder, The RISKS Digest (1997), pp. 1–2.
16. R. A. Lindemann, D. B. Bickler, B. D. Harrington, G. M. Ortiz, C. J. Voothees, Mars exploration rover mobility development. IEEE Robot. Autom. Mag. 13, 19–26 (2006).
17. J. P. Grotzinger, J. Crisp, A. R. Vasavada, R. C. Anderson, C. J. Baker, R. Barry, D. F. Blake, P. Conrad, K. S. Edgett, B. Ferdowski, R. Gellert, J. B. Gilbert, M. Golombek, J. Gómez-Elvira, D. M. Hassler, L. Jandura, M. Litvak, P. Mahaffy, J. Maki, M. Meyer, M. C. Malin, I. Mitrofanov, J. J. Simmonds, D. Vaniman, R. V. Welch, R. C. Wiens, Mars science laboratory mission and science investigation. Space Sci. Rev. 170, 5–56 (2012).
18. K. A. Farley, K. H. Williford, K. M. Stack, R. Bhartia, A. Chen, M. de la Torre, K. Hand, Y. Goreva, C. D. K. Herd, R. Hueso, Y. Liu, J. N. Maki, G. Martinez, R. C. Moeller, A. Nelessen, C. E. Newman, D. Nunes, A. Ponce, N. Spanovich, P. A. Willis, L. W. Beegle, J. F. Bell III, A. J. Brown, S.-E. Hamran, R. C. Wiens, Mars 2020 mission overview. Space Sci. Rev. 216, 142 (2020).
19. G. Webster, V. McGregor, NASA's Mars Rover has uncertain future as sixth anniversary nears (2009); https://mars.nasa.gov/mer/newsroom/pressreleases/20091231a.html [accessed 27 January 2023].
20. L. David, Opportunity Mars Rover stuck in sand (2005); https://space.com/1019-opportunity-mars-rover-stuck-sand.html [accessed 27 January 2023].
21. N. Potts, A. Gullikson, N. Curran, J. Dhaliwal, M. Leader, R. Rege, K. Klaus, D. Kring, Robotic traverse and sample return strategies for a lunar farside mission to the Schrödinger basin. Adv. Space Res. 55, 1241–1254 (2015).
22. E. S. Steenstra, D. J. Martin, F. E. McDonald, S. Paisarnsombat, C. Venturino, S. O'Hara, A. Calzada-Diaz, S. Bottoms, M. K. Leader, K. K. Klaus, W. van Westrenen, D. H. Needham, D. A. Kring, Analyses of robotic traverses and sample sites in the Schrödinger basin for the HERACLES human-assisted sample return mission concept. Adv. Space Res. 58, 1050–1065 (2016).
23. A. Seeni, B. Schäfer, G. Hirzinger, Robot mobility systems for planetary surface exploration: State-of-the-art and future outlook: A literature survey, in Aerospace Technologies Advancements (InTech, 2010), chap. 10, pp. 189–208.
24. J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, M. Hutter, Learning quadrupedal locomotion over challenging terrain. Sci. Robot. 5, eabc5986 (2020).
25. T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, M. Hutter, Learning robust perceptive locomotion for quadrupedal robots in the wild. Sci. Robot. 7, eabk2822 (2022).
26. A. Roennau, G. Heppner, M. Nowicki, R. Dillmann, LAURON V: A versatile six-legged walking robot with advanced maneuverability, in 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (IEEE, 2014), pp. 82–87.
27. S. Dirk, K. Frank, The bio-inspired scorpion robot: Design, control & lessons learned, in Climbing and Walking Robots: Towards New Applications (InTech, 2007).

Arm et al., Sci. Robot. 8, eade9548 (2023) 12 July 2023 16 of 18


28. S. Bartsch, T. Birnschein, M. Römmermann, J. Hilljegerdes, D. Kühn, F. Kirchner, Development of the six-legged walking and climbing robot SpaceClimber. J. Field Robot. 29, 506–532 (2012).
29. A. Roennau, G. Heppner, M. Nowicki, J. M. Zöllner, R. Dillmann, Reactive posture behaviors for stable legged locomotion over steep inclines and large obstacles, in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2014), pp. 4888–4894.
30. H. Kolvenbach, P. Arm, E. Hampp, A. Dietsche, V. Bickel, B. Sun, C. Meyer, M. Hutter, Traversing steep and granular martian analog slopes with a dynamic quadrupedal robot. Field Robot. 2, 910–939 (2022).
31. H. Kolvenbach, D. Bellicoso, F. Jenelten, L. Wellhausen, M. Hutter, Efficient gait selection for quadrupedal robots on the moon and Mars, in International Symposium on Artificial Intelligence, Robotics and Automation in Space (I-SAIRAS) (ESA, 2018).
32. H. Kolvenbach, E. Hampp, P. Barton, R. Zenkl, M. Hutter, Towards jumping locomotion for quadruped robots on the moon, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2019), pp. 5459–5466.
33. N. Rudin, H. Kolvenbach, V. Tsounis, M. Hutter, Cat-like jumping and landing of legged robots in low-gravity using deep reinforcement learning. IEEE Trans. Robot. 38, 317–328 (2022).
34. P. Arm, G. Waibel, G. Ligeza, V. Bickel, M. Tranzatto, S. Zimmermann, T. Homberger, L. Horvath, H. Umbers, F. Kehl, H. Kolvenbach, M. Hutter, Results and lessons learned from the first field trial of the ESA-ESRIC space resources challenge of team GLIMPSE, in 16th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA 2022) (ESA, 2022).
35. M. Tranzatto, T. Miki, M. Dharmadhikari, L. Bernreiter, M. Kulkarni, F. Mascarich, O. Andersson, S. Khattak, M. Hutter, R. Siegwart, K. Alexis, Cerberus in the DARPA subterranean challenge. Sci. Robot. 7, eabp9742 (2022).
36. N. Hudson, F. Talbot, M. Cox, J. Williams, T. Hines, A. Pitt, B. Wood, D. Frousheger, K. L. Surdo, T. Molnar, R. Steindl, M. Wildie, I. Sa, N. Kottege, K. Stepanas, E. Hernandez, G. Catt, W. Docherty, B. Tidd, B. Tam, S. Murrell, M. Bessell, L. Hanson, L. Tychsen-Smith, H. Suzuki, L. Overs, F. Kendoul, G. Wagner, D. Palmer, P. Milani, M. O'Brien, S. Jiang, S. Chen, R. C. Arkin, Heterogeneous ground and air platforms, homogeneous sensing: Team CSIRO Data61's approach to the DARPA subterranean challenge. arXiv:2104.09053 [cs.RO] (19 April 2021).
37. A. Agha, K. Otsu, B. Morrell, D. D. Fan, R. Thakker, A. Santamaria-Navarro, S.-K. Kim, A. Bouman, X. Lei, J. Edlund, M. F. Ginting, K. Ebadi, M. Anderson, T. Pailevanian, E. Terry, M. Wolf, A. Tagliabue, T. S. Vaquero, M. Palieri, S. Tepsuporn, Y. Chang, A. Kalantari, F. Chavez, B. Lopez, N. Funabiki, G. Miles, T. Touma, A. Buscicchio, J. Tordesillas, N. Alatur, J. Nash, W. Walsh, S. Jung, H. Lee, C. Kanellakis, J. Mayo, S. Harper, M. Kaufmann, A. Dixit, G. Correa, C. Lee, J. Gao, G. Merewether, J. Maldonado-Contreras, G. Salhotra, M. Saboia Da Silva, B. Ramtoula, Y. Kubo, S. Fakoorian, A. Hatteland, T. Kim, T. Bartlett, A. Stephens, L. Kim, C. Bergh, E. Heiden, T. Lew, A. Cauligi, T. Heywood, A. Kramer, H. A. Leopold, C. Choi, S. Daftry, O. Toupet, I. Wee, A. Thakur, M. Feras, G. Beltrame, G. Nikolakopoulos, D. Shim, L. Carlone, J. Burdick, Nebula: Quest for robotic autonomy in challenging environments; TEAM CoSTAR at the DARPA subterranean challenge. arXiv:2103.11470 [cs.RO] (21 March 2021).
38. M. J. Schuster, M. Müller, S. G. Brunner, H. Lehner, P. Lehner, R. Sakagami, A. Dömel, L. Meyer, B. Vodermayer, R. Giubilato, M. Vayugundla, J. Reill, F. Steidle, I. von Bargen, K. Bussmann, R. Belder, P. Lutz, W. Stürzl, M. Smísek, M. Maier, S. Stoneman, A. Prince, B. Rebele, M. Durner, E. Staudinger, S. Zhang, R. Pöhlmann, E. Bischoff, C. Braun, S. Schröder, E. Dietz, S. Frohmann, A. Börner, H. Hübers, B. Foing, R. Triebel, A. Albu-Schäffer, A. Wedler, The arches space-analogue demonstration mission: Towards heterogeneous teams of autonomous robots for collaborative scientific sampling in planetary exploration. IEEE Robot. Autom. Lett. 5, 5315–5322 (2020).
39. F. Cordes, I. Ahrns, S. Bartsch, T. Birnschein, A. Dettmann, S. Estable, S. Haase, J. Hilljegerdes, D. Koebel, S. Planthaber, Lunares: Lunar crater exploration with heterogeneous multi robot systems. Intell. Serv. Robot. 4, 61–89 (2011).
40. R. Sonsalla, F. Cordes, L. Christensen, T. M. Roehr, T. Stark, S. Planthaber, M. Maurus, M. Mallwitz, E. A. Kirchner, Field testing of a cooperative multi-robot sample return mission in Mars analogue environment, in 14th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA) (ESA, 2017).
41. Boston Dynamics, Search for life: NASA JPL explores martian-like caves (2022); https://youtube.com/watch?v=qTW-dbZr4U8 [accessed 23 January 2023].
42. T. Tzanetos, M. Aung, J. Balaram, H. F. Grip, J. T. Karras, T. K. Canham, G. Kubiak, J. Anderson, G. Merewether, M. Starch, M. Pauken, S. Cappucci, M. Chase, M. Golombek, O. Toupet, M. C. Smart, E. B. Ramirez, N. Chahat, R. Hogg, B. Pipenberg, M. Keennon, K. H. Williford, Ingenuity Mars helicopter: From technology demonstration to extraterrestrial scout, in 2022 IEEE Aerospace Conference (AERO) (IEEE, 2022), pp. 1–19.
43. M. Hutter, C. Gehring, A. Lauber, F. Gunther, C. D. Bellicoso, V. Tsounis, P. Fankhauser, R. Diethelm, S. Bachmann, M. Bloesch, H. Kolvenbach, M. Bjelonic, L. Isler, K. Meyer, ANYmal—Toward legged robots for harsh environments. Adv. Robot. 31, 918–931 (2017).
44. H. A. Oravec, V. M. Asnani, C. M. Creage, S. J. Moreland, Geotechnical review of existing mars soil simulants for surface mobility. Earth Space 2021, 157–170 (2021).
45. H. Kolvenbach, M. Breitenstein, C. Gehring, M. Hutter, Scalability analysis of legged robots for space exploration, in 68th International Astronautical Congress (IAC 2017) (Curran, 2018), pp. 10399–10413.
46. G. Valsecchi, D. Liconti, F. Tischhauser, H. Kolvenbach, M. Hutter, Preliminary design of actuators for walking robot on the moon, in 16th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA 2022) (ESA, 2022).
47. Metrohm, Metrohm MIRA XTR DS (2023); https://metrohm.com/en/products/raman-spectroscopy/mira-ds-mira-xtr-ds.html [accessed 23 January 2023].
48. X. B. Peng, E. Coumans, T. Zhang, T.-W. Lee, J. Tan, S. Levine, Learning agile robotic locomotion skills by imitating animals. arXiv:2004.00784 [cs.RO] (2 April 2020).
49. M. Tranzatto, M. Dharmadhikari, L. Bernreiter, M. Camurri, S. Khattak, F. Mascarich, P. Pfreundschuh, D. Wisth, S. Zimmermann, M. Kulkarni, V. Reijgwart, B. Casseau, T. Homberger, P. De Petris, L. Ott, W. Tubby, G. Waibel, H. Nguyen, C. Cadena, R. Buchanan, L. Wellhausen, N. Khedekar, O. Andersson, L. Zhang, T. Miki, T. Dang, M. Mattamala, M. Montenegro, K. Meyer, X. Wu, A. Briod, M. Mueller, M. Fallon, R. Siegwart, M. Hutter, K. Alexis, Team CERBERUS wins the DARPA subterranean challenge: Technical overview and lessons learned. arXiv:2207.04914 [cs.RO] (11 July 2022).
50. S. Khattak, H. Nguyen, F. Mascarich, T. Dang, K. Alexis, Complementary multi-modal sensor fusion for resilient robot pose estimation in subterranean environments, in 2020 International Conference on Unmanned Aircraft Systems (ICUAS) (IEEE, 2020), pp. 1024–1029.
51. K.-L. Low, Linear least-squares optimization for point-to-plane ICP surface registration (University of North Carolina, 2004), pp. 1–3.
52. M. Kazhdan, M. Bolitho, H. Hoppe, Poisson surface reconstruction, in Eurographics Symposium on Geometry Processing, K. Polthier, A. Sheffer, Eds. (Eurographics, 2006), vol. 7.
53. Q.-Y. Zhou, J. Park, V. Koltun, Open3D: A modern library for 3D data processing. arXiv:1801.09847 [cs.CV] (30 January 2018).
54. C. Boucheny, "Interactive scientific visualization of large datasets: Towards a perceptive-based approach," thesis, Université Joseph Fourier Grenoble, France (2009).
55. Google, Draco 3D graphics compression (2017); https://google.github.io/draco/ [accessed 27 January 2023].
56. T. Miki, L. Wellhausen, R. Grandia, F. Jenelten, T. Homberger, M. Hutter, Elevation mapping for locomotion and navigation using GPU, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2022), pp. 2273–2280.
57. K. He, G. Gkioxari, P. Dollár, R. Girshick, Mask R-CNN, in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2017), pp. 2961–2969.
58. D. Faconti, "Mood2be: Models and tools to design robotic behaviors" (Technical Report, Eurecat Centre Tecnologic, 2019), vol. 4.
59. L. Wellhausen, M. Hutter, Rough terrain navigation for legged robots using reachability planning and template learning, in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2021), pp. 6914–6921.
60. Rajant, https://rajant.com/products/breadcrumb-wireless-nodes/dx-series/ [accessed 9 January 2023].
61. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, ROS: An open-source robot operating system, in ICRA Workshop on Open Source Software (IEEE, 2009), p. 5.
62. J. Stückler, M. Schwarz, M. Schadler, A. Topalidou-Kyniazopoulou, S. Behnke, Nimbro explorer: Semiautonomous exploration and mobile manipulation in rough terrain. J. Field Robot. 33, 411–430 (2016).
63. M. Schwarz, T. Rodehutskors, D. Droeschel, M. Beul, M. Schreiber, N. Araslanov, I. Ivanov, C. Lenz, J. Razlaw, S. Schüller, D. Schwarz, A. Topalidou-Kyniazopoulou, S. Behnke, Nimbro rescue: Solving disaster-response tasks with the mobile manipulation robot Momaro. J. Field Robot. 34, 400–425 (2017).
64. T. Kaupisch, D. Noelke, A. Arghir, DLR SpaceBot Cup 2013: A space robotics competition, in Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA) (ESA, 2015).
65. S. Lee, S. Jeon, J. Hwangbo, Learning legged mobile manipulation using reinforcement learning, in Robot Intelligence Technology and Applications 7: Results from the 10th International Conference on Robot Intelligence Technology and Applications (Springer, 2023), pp. 310–317.
66. Z. Fu, X. Cheng, D. Pathak, Deep whole-body control: Learning a unified policy for manipulation and locomotion, in Conference on Robot Learning (CoRL) (OpenReview, 2022).
67. Y. Ma, F. Farshidian, T. Miki, J. Lee, M. Hutter, Combining learning-based locomotion policy with model-based manipulation for legged mobile manipulators. IEEE Robot. Autom. Lett. 7, 2377–2384 (2022).
68. F. Farshidian, R. Grandia, M. Spieler, J. Carius, J.-P. Sleiman, OCS2 (2022); https://leggedrobotics.github.io/ocs2/ [accessed 27 September 2022].



69. J.-R. Chiu, J.-P. Sleiman, M. Mittal, F. Farshidian, M. Hutter, A collision-free MPC for whole-body dynamic locomotion and manipulation, in 2022 International Conference on Robotics and Automation (ICRA) (IEEE, 2022), pp. 4686–4693.
70. J. Sleiman, F. Farshidian, M. V. Minniti, M. Hutter, A unified MPC framework for whole-body dynamic locomotion and manipulation. IEEE Robot. Autom. Lett. 6, 4688–4695 (2021).
71. J. L. Blanco-Claraco, A tutorial on SE(3) transformation parameterizations and on-manifold optimization. arXiv:2103.15980 [cs.RO] (29 March 2021).
72. F. Abi-Farraj, N. Pedemonte, P. Robuffo Giordano, A visual-based shared control architecture for remote telemanipulation, in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2016), pp. 4266–4273.
73. D. Faconti, Behaviortree.cpp (2022); https://behaviortree.dev/ [accessed 5 October 2022].
74. JPL, A description of the rover Sojourner; https://mars.nasa.gov/MPF/rover/descrip.html [accessed 27 January 2023].
75. NASA, Mars exploration rovers overview; https://mars.nasa.gov/mer/mission/rover/wheels-and-legs/ [accessed 27 January 2023].
76. L. Ding, R. Zhou, T. Yu, H. Gao, H. Yang, J. Li, Y. Yuan, C. Liu, J. Wang, Y. Zhao, Surface characteristics of the Zhurong Mars rover traverse at Utopia Planitia. Nat. Geosci. 15, 171–176 (2022).
77. M. Heverly, J. Matthews, M. Frost, C. Quin, Development of the tri-athlete lunar vehicle prototype, in Proceedings of the 40th Aerospace Mechanisms Symposium (NASA, 2010).
78. DFKI, CREX: Crater Explorer; https://robotik.dfki-bremen.de/en/research/robot-systems/crex/ [accessed 27 January 2023].
79. P. Arm, R. Zenkl, P. Barton, L. Beglinger, A. Dietsche, L. Ferrazzini, E. Hampp, J. Hinder, C. Huber, D. Schaufelberger, F. Schmitt, B. Sun, B. Stolz, H. Kolvenbach, M. Hutter, SpaceBok: A dynamic legged robot for space exploration, in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2019).
80. R. Playter, M. Buehler, M. Raibert, BigDog, in Unmanned Systems Technology VIII (SPIE, 2006), pp. 896–901.
81. Boston Dynamics, Spot specifications; https://support.bostondynamics.com/s/article/Robot-specifications [accessed 27 January 2023].
82. ANYbotics, ANYmal specifications; https://anybotics.com/anymal-autonomous-legged-robot/ [accessed 27 January 2023].
83. B. Katz, J. Di Carlo, S. Kim, Mini cheetah: A platform for pushing the limits of dynamic quadruped control, in 2019 International Conference on Robotics and Automation (ICRA) (IEEE, 2019), pp. 6295–6301.

Acknowledgments: We would like to thank the implementation partners, namely, the Lucerne University of Applied Sciences and Arts (HSLU), ANYbotics AG, and the maxon SpaceLab. A special thanks to S. Tenisch, A. Brandes, G. Székely, and N. Steinert for developing the filter wheel; F. Mast, L. Horvath, H. Umbers, and M. Trentini for supporting the MICRO development; and D. Larcher and J. Bernasconi for supporting the scientific data analysis. We would like to thank Metrohm Schweiz AG for providing the Raman spectrometer. We acknowledge the ETH Earth Sciences Collections and the University of Basel for supporting and providing rock and mineral samples for our experiments. We thank KIBAG Kies Neuheim AG for making the quarry available. This work has been conducted as part of ANYmal Research, a community to advance legged robotics. Funding: This research was supported by the Swiss National Science Foundation (SNF) through the National Centre of Competence in Research Robotics (NCCR Robotics), through the National Centre of Competence in Digital Fabrication (NCCR dfab), and by ETH Zurich Research grant no. 21-1 ETH-27. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement nos. 852044 and 101016970. This project has received funding through ESA contract nos. 4000137333/22/NL/AT and 4000135310/21/NL/PA/pt. Author contributions: All authors contributed to the system design and wrote the paper. P.A., G.W., J.P., T.T., R.Z., V.B., G.L., F.K., and H.K. participated in the field deployments and evaluated the respective data. P.A., G.W., J.P., and T.M. conducted the additional locomotion tests. T.T. and R.Z. developed the mapping and perception pipelines. J.P. developed the arm controller. T.M. developed the locomotion controller. V.B., G.L., and F.K. selected and designed the scientific payloads and evaluated the respective data. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials. Other materials are available at Zenodo (DOI: 10.5281/zenodo.8019960).

Submitted 14 October 2022
Accepted 12 June 2023
Published 12 July 2023
10.1126/scirobotics.ade9548


