Space debris is on the rise due to the increasing demand for spacecraft for communication, navigation, and other applications. The Space Surveillance Network (SSN) tracks over 27,000 large pieces of debris and estimates the number of small, un-trackable fragments at over 100,000. To control the growth of debris, the formation of further debris must be reduced. Some solutions include deorbiting larger non-cooperative resident space objects (RSOs) or servicing satellites in orbit. Both require rendezvous with RSOs, and the scale of the problem calls for autonomous missions. This paper introduces the Multipurpose Autonomous Rendezvous Vision-Integrated Navigation system (MARVIN), developed and tested at the ORION facility at the Florida Institute of Technology. MARVIN consists of two sub-systems: a machine vision-aided navigation system and an artificial potential field (APF) guidance algorithm, which work together to command a swarm of chasers to safely rendezvous with the RSO. We present the MARVIN architecture and hardware-in-the-loop experiments demonstrating autonomous, collaborative swarm satellite operations, successfully guiding three drones to rendezvous with a physical mockup of a non-cooperative satellite in motion.
INTRODUCTION
With the increasing number of near-Earth spacecraft on orbit, the amount of space debris has also grown. Over 27,000 space debris objects are currently tracked by the Department of Defense's global Space Surveillance Network (SSN); some are old, decommissioned, non-cooperative spacecraft, while others range from fragments of spacecraft or rocket boosters to chips of paint traveling at very high speeds [1]. Collision with any of these non-cooperative Resident Space Objects (RSOs) could cause catastrophic damage to a mission.
The most efficient way to reduce the amount of space debris is to prevent the formation of future debris. One effective approach is on-orbit servicing (OOS) to extend the life of a retired spacecraft, as recently demonstrated by Northrop Grumman's Mission Extension Vehicles [2, 3]. Doing so minimizes the need to launch new spacecraft to replace existing functionality
and reduces the further formation of debris. Another effective approach is to deorbit existing non-functional spacecraft whose orbital positions can be estimated from SSN data but whose structure, functionality, and capabilities are unknown.
Some prior OOS/inspection missions that are similar to the research discussed in this paper
include ELSA-d [4], XSS-10/11 [5, 6], ETS-VII [7], and ANGELS [8]. The typical approach for
these missions has been to use a single chaser spacecraft to rendezvous and dock with a familiar
RSO whose geometry and functionality are fully known. However, active space debris removal
(ADR) and OOS as studied in this work involve a large non-cooperative spacecraft whose structure and functionality may be unknown in advance. To complicate matters, the spacecraft could have a significant tumbling rate and unknown safe capture interfaces, increasing the complexity of safe approach trajectories.
Additionally, using a single chaser to capture and stabilize such an RSO would create large forces and moments on the capture interfaces. As an alternative, swarm satellites that autonomously identify capture interfaces on the RSO and navigate to dock with it to reduce the tumbling rates would significantly lower the resultant forces and moments associated with docking. This concept is depicted in Figure 1.
RELATED WORK
This work draws inspiration from developments in both autonomous navigation and deep learning-based computer vision, both fast-evolving fields.
Artificial Potential Field Guidance
APF path planning is widely used in robotics, originally developed to guide robots around static objects [11] and for unmanned air vehicle operations [12]. The method has recently been gaining interest in the field of swarm satellite operations. Several articles have leveraged APF methods for swarm satellite operations: [13] used APF for swarm satellites with sliding mode controllers, [14, 15] used APF for formation flying around flexible spacecraft, and [16, 17, 18] used APF to perform close-proximity operations around non-rotating targets.
Of particular interest, APF guidance laws have been shown to enable a swarm of small chaser spacecraft to approach and capture a non-cooperative RSO collaboratively and safely. In prior work [19], a virtual potential field was constructed by locating virtual repulsive field sources at collision hazards and other keep-out areas and attractive field sources at capture locations or inspection positions, with chasers simply following potential gradients to the target. In that work, four chasers approached a rotating target object from arbitrary starting positions and were shown in simulation to safely capture the target with realistic limitations on Δv, required thrust, and velocity at capture contact.
Deep Learning-based Machine Vision
The fast-paced advances in deep learning algorithms and computing hardware over the past 10 years have made highly accurate computer vision algorithms available on sufficiently low compute and power budgets for on-board applications in space. Specifically, deep convolutional neural networks [20], accelerated by massive parallelization of tensor arithmetic on graphics processing units (GPUs) [21], led to tremendous accuracy gains in vision benchmarks for image classification [22], object detection [23], and image segmentation, among other tasks, with the algorithms at times even outperforming humans.
While these feats were initially restricted to conventional computing hardware, the joint development of GPU-accelerated edge computers and computer vision algorithms designed or optimized for them [24, 25] has opened the door to edge deployment applications (e.g., in space, robots, autonomous vehicles, and sensor networks). These developments allow the present work to develop and test a system that performs object detection on low-compute, low-energy edge hardware approximating the capabilities of spaceflight computers.
Important for this work is the object detection task in computer vision. Object detection requires identifying and localizing objects in a camera frame, that is, predicting a pixel-level bounding box tightly surrounding each object from a pre-specified set of classes and classifying each object. State-of-the-art object detection algorithms fall into three categories: vision transformers, multi-stage object detectors, and single-stage object detectors.
Vision transformers [26] are the most accurate but have computing requirements that generally cannot be satisfied by current edge hardware. Multi-stage detectors work by breaking the object detection task into multiple sub-problems. For example, the multi-stage object detector Faster R-CNN [27] has two networks: one that proposes bounding boxes that might surround objects and one that determines whether each box surrounds an object and classifies it. In contrast, single-stage object detectors perform object detection with a single network predicting both bounding boxes and object classes simultaneously. The You Only Look Once (YOLO) algorithm [25] and its successors are the best performing single-stage detectors.
Both single-stage and multi-stage object detectors can run on the edge GPUs chosen for this work. However, prior research [28] demonstrated that Faster R-CNN can only achieve framerates too low for real-time use in rendezvous operations. The YOLOv5 [29] single-stage object detector, in contrast, was shown to be effective for real-time satellite feature recognition (including solar panels, bodies, antennas, and thrusters) in the lab under a variety of lighting and motion conditions for the observer and target RSO [9], even with significant variation of solar panels [30], and with much better framerates.
Fusing YOLOv5 with APF Guidance
Our prior work [10] demonstrated that YOLOv5 detections with predicted class labels, localized in 3D with a stereographic camera in the lab, could feed an APF flightpath planning algorithm for successful rendezvous with a rotating target by a single chaser simulated in MATLAB/Simulink. The present work extends these ideas and presents the complete MARVIN hardware-software system, demonstrated in end-to-end hardware-in-the-loop experiments in which real-life chaser drones are guided by an APF guidance algorithm. In the experiments presented below, this approach is shown to safely avoid fragile solar panels and dock with the body of a target RSO.
THE MARVIN SYSTEM
The MARVIN system includes two integrated subsystems, machine vision and APF, which work together to autonomously navigate and guide chaser drones to an unfamiliar non-cooperative RSO. The system-level architecture of MARVIN is shown in Figure 2. Each subsystem is equipped with off-the-shelf hardware. The machine vision subsystem has a Raspberry Pi 4B (8 GB RAM), an Intel RealSense D435i stereographic camera, and an Intel Neural Compute Stick (NCS) 2. The APF subsystem is equipped with another Raspberry Pi 4B (8 GB).
The requirements that governed the development of MARVIN are listed below; a sketch of the machine vision-to-APF handoff (requirements 2 and 3) follows the list:
1. The machine vision subsystem must detect and localize all visible solar panels and the body
of the target RSO in a camera feed with no prior knowledge of the RSO.
2. The machine vision subsystem must localize all RSO components in 3D using its stereographic camera.
3. The machine vision subsystem must pass the 3D positions of visible components to the APF
subsystem via ROS 2.
4. The APF subsystem must accept 3D positions of each component from the machine vision
subsystem via ROS 2.
5. The APF subsystem must receive current positions of the chaser drones via UDP.
6. The APF subsystem must generate safe approach trajectories and corresponding chaser accelerations to the RSO body within the limits of a collision-avoidance radius.
7. The APF subsystem must communicate guidance accelerations to the chasers in real time.
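Requirements 2 and 3 amount to deprojecting each detection to a 3D point and handing it to the APF subsystem over ROS 2. The sketch below shows one way this could look using the RealSense SDK and rclpy; the node name, topic, message type, and frame id are illustrative assumptions, not the actual MARVIN interfaces.

```python
# Sketch of requirements 2-3: deproject a detection's pixel location to a 3D
# point with the RealSense SDK and publish it over ROS 2.
import pyrealsense2 as rs
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PointStamped


class ComponentPublisher(Node):
    def __init__(self):
        super().__init__("mv_component_publisher")
        self.pub = self.create_publisher(PointStamped, "/rso/component_position", 10)

    def publish_component(self, u, v, depth_frame, intrinsics):
        """u, v: bounding-box centre in pixels; depth_frame: aligned depth frame."""
        depth_m = depth_frame.get_distance(int(u), int(v))
        x, y, z = rs.rs2_deproject_pixel_to_point(intrinsics, [float(u), float(v)], depth_m)
        msg = PointStamped()
        msg.header.frame_id = "camera"
        msg.point.x, msg.point.y, msg.point.z = float(x), float(y), float(z)
        self.pub.publish(msg)


if __name__ == "__main__":
    rclpy.init()
    node = ComponentPublisher()   # publish_component() is called from the detection loop
    rclpy.spin(node)
```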
Figure 2: MARVIN System Architecture
Figure 3: Picture of machine vision detection
Figure 4: Bounding Box Points
relative to each node, and $c$ is the field gain scalar. The scalar acceleration contribution is finally multiplied by the relative position unit vector, $\hat{\boldsymbol{\rho}}_{i_k}$ or $\hat{\boldsymbol{\rho}}_{j_l}$, for direction.
The final acceleration contribution in Equation 2 is due to chaser-chaser interactions, $\boldsymbol{\gamma}_{cc}$, where $m$ is the total number of chasers, $\mu_c$ is the chaser-chaser field gain scalar, $\rho_c$ is the distance between chasers, and $X$ is a conditional variable that determines whether a chaser is within the collision-avoidance radius of another chaser. The sum runs over only $m-1$ terms because each chaser is repelled by the other chasers, excluding itself. $\mu_c$ is always negative to produce a repulsive acceleration between chasers. The repulsion also scales with distance, being greatest when chasers are closest together. Finally, the conditional $X$ adds chaser-chaser repulsion only when a chaser is within a preset distance of another chaser; if a chaser is too far away, $X$ is set to zero. $X$ is meant to prevent chasers from influencing each other when on opposing sides of the target spacecraft.
$$ \boldsymbol{\gamma}_{cc} = -\sum_{n=1}^{m-1} \frac{\mu_c}{e^{\rho_c}}\, X \qquad (2) $$
The field acceleration vector 𝜸 is fed into Hill’s equations represented in Equation 3 to calculate
the final swarm chaser acceleration vectors which are commanded to each chaser to execute the
mission.
$$ \ddot{\boldsymbol{r}}_C = \begin{bmatrix} 2\omega\dot{z}_C + \dot{\omega} z_C \\ -\omega^2 y_C \\ 3\omega^2 z_C - 2\omega\dot{x}_C - \dot{\omega} x_C \end{bmatrix} + \boldsymbol{\gamma} \qquad (3) $$
The target RSO's orbital rate is represented by $\omega$; $x_C$, $y_C$, $z_C$ are the positions of the chasers with respect to the RSO; and $\dot{x}_C$, $\dot{y}_C$, $\dot{z}_C$ are the relative velocities of the chasers with respect to the RSO in the LVLH frame (referred to as the APF frame in this paper). In the LVLH frame, x is along the velocity direction of the RSO, z is in the nadir direction pointing toward the Earth, and y is out of plane, the cross product of the x and z directions.
EXPERIMENTAL SETUP
The experiments discussed in this paper were conducted at the ORION facility at the Florida Institute of Technology. The ORION facility is equipped with a hardware-in-the-loop formation flight simulator and a docking simulator [31]. The past work that formed the foundation of this research was also tested at the ORION facility.
MARVIN Experimental Specifications
To produce an effective and responsive guidance system, the Cartesian coordinates provided by machine vision are converted into nodes that construct an artificial potential field (APF). The APF algorithm converts the different coordinate frames from machine vision and OptiTrack into its own reference frame. The coordinate frames for the experiment are shown in Figure 5. The APF coordinate system is centered on the target. The frame was kept constant whether the target satellite was tumbling or holding a steady position.
Figure 5: Coordinate System
To construct the field, the APF algorithm assigns each node an attractive index potential. Nodes at undesirable locations, such as solar panels, are assigned negative values, whereas desirable positions, such as potential docking sites, are assigned positive values. Additionally, repulsive nodes are assigned to the live positions of active chasers to prevent collisions. Chaser-chaser repulsion, however, only contributes to the potential field acting on a chaser when the collision-avoidance radius is breached. The field gain values for each of the nodes were initially based on simulations; however, when the scale of the experiment was changed and errors were introduced, the simulation values did not perform well. As a result, the field gain values were experimentally derived. The relative position of each node to a chaser and the node's corresponding attractive index are weighted to build an acceleration vector. The summation of all the acceleration contributions from the nodes builds a path acceleration vector for a chaser to follow; see Equation 1 and Equation 2.
The resultant acceleration is continuously updated and exported to the chasers. As a result of the rapid refresh rate of the APF algorithm, chasers were able to react quickly to dynamic changes such as avoiding chaser collisions and docking with a moving target. Moreover, the resultant potential field had conditional components. To prevent chasers from flying too far away from the target, an "out-of-bounds" region, the R-Switch, was put into the system: in the event a chaser is pushed too far, all negative nodes flip positive to pull the chaser back into range, see Equation 1. Also, to prevent chaser-target collisions, a velocity dampening acceleration slows down a chaser making its final approach, see Equation 1.
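Since Equation 1 is not reproduced in this excerpt, the sketch below only illustrates the conditional behavior just described: the R-Switch sign flip when a chaser leaves the out-of-bounds radius, and a velocity dampening term on approach. The exponential field shape mirrors Equation 2 and all gains are placeholders rather than the experimentally derived values.

```python
# Illustrative combination of node contributions with the R-Switch and
# velocity dampening conditions; field shape and gains are assumptions.
import numpy as np

R_SWITCH = 2.0   # out-of-bounds radius [m], value reported later in the text
K_DAMP = 0.08    # velocity dampening scalar, value reported later in the text


def field_acceleration(chaser_pos, chaser_vel, nodes, target_pos):
    """nodes: list of (position, gain); positive gain attracts, negative repels."""
    gamma = np.zeros(3)
    out_of_bounds = np.linalg.norm(chaser_pos - target_pos) > R_SWITCH
    for node_pos, gain in nodes:
        if out_of_bounds:
            gain = abs(gain)            # R-Switch: every node pulls the chaser back
        rel = node_pos - chaser_pos
        rho = np.linalg.norm(rel)
        if rho > 0.0:
            gamma += (gain / np.exp(rho)) * (rel / rho)
    # Velocity dampening on approach (applied unconditionally in this sketch).
    gamma -= K_DAMP * chaser_vel
    return gamma
```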
For this experiment, both subsystems are integrated over a network card across which all communications are passed. Figure 6 shows an overview of all the systems working together in the experiment. A notable difference between Figure 6 and Figure 2 is that, for this setup, the APF algorithm passes future position and velocity data to the drones instead of accelerations, due to software limitations of the drones.
Figure 6: Experiment Setup - Flow Diagram
OptiTrack
While the machine vision subsystem streams the live position of the target satellite to the APF, OptiTrack cameras, coupled with the Motive software, provide the algorithm with the live positions of the chasers. A separate algorithm retrieves the rigid body data from OptiTrack and publishes the data over a LAN using ROS 2.
Chasers
The chaser drones used in this experiment (Tello quadcopters) are set up to automatically connect to a LAN, where they are auto-assigned a specific IP address based on their MAC address. The APF algorithm is then able to export individual drone commands by sending them over UDP to each drone's unique IP address. The drones perform translation movements in their own Cartesian coordinate frame when given a position (cm) and speed (cm/s). The drones are limited in the distance they can move in any given command: X/Y/Z movements must be within -500 and 500 cm and cannot be between -20 and 20 cm, and speeds must be within 10 and 100 cm/s. If a command outside these bounds is received, an "out of bounds" error occurs and the drone ignores the command. Also, if another command is sent before the drone has completed the previous command, the drone ignores the new command and accepts the next command sent after its movement is completed. Each drone has a maximum flight time of seven minutes and a built-in infrared camera and IMU used to stop and stabilize itself after performing a movement.
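As a concrete illustration of this command path, the sketch below sends a bounded relative move to a drone at its assigned IP over UDP. The "go x y z speed" text command and port 8889 follow the DJI Tello SDK and, like the example IP address, are assumptions here rather than the exact MARVIN interface.

```python
# Sketch of the UDP drone command path with the movement limits described above.
import socket

TELLO_PORT = 8889
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)


def send_move(drone_ip, x, y, z, speed):
    """Send a relative move in cm, enforcing the movement limits described above."""
    if any(not -500 <= a <= 500 for a in (x, y, z)):
        raise ValueError("axis values must be within -500..500 cm")
    if all(-20 < a < 20 for a in (x, y, z)):
        raise ValueError("at least one axis must move by 20 cm or more")
    if not 10 <= speed <= 100:
        raise ValueError("speed must be within 10..100 cm/s")
    sock.sendto(f"go {x} {y} {z} {speed}".encode("ascii"), (drone_ip, TELLO_PORT))


# Example: put the drone into SDK mode, then step 20 cm forward at full speed.
sock.sendto(b"command", ("192.168.10.61", TELLO_PORT))
send_move("192.168.10.61", 20, 0, 0, 100)
```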
Integration
Using the existing locations of the known nodes, two more attractive nodes are added to the body of the target on the unseen, adjacent face. The target image is rebuilt in the algorithm by calculating the target's centroid and setting up the nodes around the centroid. The target node positions are then inflated using a safety scale of 1.75 to prevent collisions from drones overshooting and to account for the space between the nodes and the bounding box generated by the machine vision subsystem. Since the chasers require a position and speed to react, the acceleration is integrated into commands the drones can understand; the drones also have their own coordinate system, with Y and Z opposite to the APF frame. To increase accuracy, the drone position commands were rounded down to the minimum movement (-20 or 20 cm) and the speed was set to its maximum (100 cm/s). Minimizing each drone's movement and maximizing its speed allowed the algorithm to refresh drone commands more quickly and helped prevent chaser-chaser collisions. Higher chaser-chaser repulsion accuracy also greatly reduced errors from drones blowing each other off course and reduced error due to integration.
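A minimal sketch of this conversion is shown below, assuming a simple double integration of the APF acceleration over one cycle; only the Y/Z axis flip, the rounding to the 20 cm minimum movement, and the maximum-speed setting follow the description above, while the integration scheme itself is an assumption.

```python
# Sketch of the acceleration-to-command conversion described above.
import numpy as np

MIN_STEP_CM = 20
MAX_SPEED_CMPS = 100
DT = 0.25   # APF cycle time [s], from the docking discussion below


def accel_to_command(gamma_mps2, vel_mps):
    """Convert an APF-frame acceleration into a (dx, dy, dz, speed) drone command."""
    # Displacement over one cycle in metres (assumed integration scheme).
    disp_m = vel_mps * DT + 0.5 * gamma_mps2 * DT**2
    # APF -> drone frame: the Y and Z axes are opposite.
    disp_cm = np.array([disp_m[0], -disp_m[1], -disp_m[2]]) * 100.0
    # Round each commanded axis to the minimum movement, keeping its sign.
    step = np.where(disp_cm >= 0.0, MIN_STEP_CM, -MIN_STEP_CM).astype(int)
    return int(step[0]), int(step[1]), int(step[2]), MAX_SPEED_CMPS
```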
A checker evaluates the position of each chaser against the attractive nodes to determine whether a chaser is within docking range. Once a chaser is within the predetermined range, the algorithm freezes all chaser movement to complete the docking and avoid collisions or interference from chaser-chaser repulsion. After the chaser remains in range for a set amount of time, the chaser is removed from the experiment and lands itself. After all the drones have completed the simulation, the function ends by graphing the detected positions, taken directly from OptiTrack, to show the path taken by each chaser and the last position of the target. Finally, the Hill acceleration was excluded during these tests due to the small scale of the experiment and because the experiment was conducted at sea level.
Figure 8: Major pieces of the ORION experiment setup
These values were experimentally derived. An R-Switch value of 2 meters was chosen due to the size of the lab setup. A velocity dampening scalar of 0.08 was chosen because of drone limitations: since the drones do not accept acceleration commands, velocity dampening reduces the likelihood of a drone continuing to move as it approaches the target, and the velocity direction is not very reliable because the drones' built-in stabilization feature can result in a reversed-velocity detection.
A total of 13 experiments were performed with various initial takeoff positions of the drones on the motion platform. Table 2 summarizes the test cases and results of each experiment. The initial drone positions are referred to as scattered, R-bar, or V-bar based on their placement on the motion platform. The initial attitude rate of the RSO is given in °/s, and the RSO maintained that initial spin throughout the duration of the respective test case. The status of each of the three drones at the end of the experiment is documented under the Drone 1, Drone 2, and Drone 3 columns. Finally, if any of the drones failed to dock, the reason for failure is given in the Failure Reasoning column and discussed in detail in the Chaser Mission Failure subsection.
Table 2: Experimental Test Results

Test Name | Drone Positions | Yaw [°/s] | Pitch [°/s] | Roll [°/s] | Drone 1 | Drone 2 | Drone 3 | Failure Reasoning
Test 1 | Scattered | 0 | 0 | 0 | Docked | Docked | Docked | -
Test 2 | R-bar and V-bar | 0 | 0 | 0 | Docked | Docked | Inspection Orbit | -
Test 3 | Scattered | 0 | 0 | 0 | Docked | Docked | Failed | IMU failed
Test 4 | Scattered | 0 | 0 | 0 | Docked | Docked | Docked | -
Test 5 | Extreme positions and Scattered | 0 | 0 | 0 | Docked | Failed | Docked | RealSense error
Test 6 | Scattered | 1 | 0 | 0 | Docked | Docked | Docked | -
Test 7 | Scattered | 1 | 0 | 0 | Docked | Docked | Failed | IMU failed
Test 8 | R-bar | 1 | 0 | 0 | Docked | Docked | Docked | -
Test 9 | V-bar | 1 | 0 | 0 | Inspection Orbit | Docked | Docked | -
Test 10 | Extreme positions and Scattered | 1 | 0 | 0 | Docked | Docked | Docked | -
Test 11 | Scattered | 5 | 0 | 0 | Inspection Orbit | Docked | Failed | OptiTrack error
Test 12 | Scattered | 5 | 0 | 0 | Docked | Docked | Docked | -
Test 13 | R-bar | 5 | 0 | 0 | Docked | Docked | Docked | -
Docked
A "Docked" condition is satisfied when a chaser remains within a predetermined range of the two primary attractive nodes for a predetermined amount of time. For the first thirteen tests, the range was half a meter. This value was chosen because half of that range lies within the physical target spacecraft and because of the visual limitations of OptiTrack: when chasers get too close to the target, the chaser becomes visible to fewer cameras, increasing error in its live position. The primary attractive nodes are the centroids of the target spacecraft body identified by the machine vision camera; in a real case, these faces would be desirable locations for chasers to either weld themselves to or grab onto. The time required to dock is based on the number of cycles of the APF algorithm. The algorithm has a set delay of 0.25 seconds per cycle, and the docking condition was set to two cycles. Future iterations of the algorithm reduce the docking range to 0.3 meters and increase the cycles to dock to ten. Additionally, two docking ports on the other faces of the target body will be added; these additional docking ports represent the arm extenders for the solar panels, a rigid structure that chasers can grab onto.
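A minimal sketch of such a docking checker is shown below, using the half-meter range and two-cycle dwell described for these tests; the data structure and function names are illustrative, not the MARVIN implementation.

```python
# Sketch of a docking checker: a chaser is declared docked once it stays
# within range of a primary attractive node for consecutive APF cycles.
import numpy as np

DOCK_RANGE_M = 0.5
DOCK_CYCLES = 2      # at 0.25 s per APF cycle


class DockChecker:
    def __init__(self):
        self.cycles_in_range = {}      # chaser id -> consecutive cycles in range

    def update(self, chaser_id, chaser_pos, primary_nodes):
        """Return True once the chaser has stayed in range for enough cycles."""
        in_range = any(
            np.linalg.norm(np.asarray(chaser_pos) - np.asarray(node)) <= DOCK_RANGE_M
            for node in primary_nodes
        )
        count = self.cycles_in_range.get(chaser_id, 0)
        self.cycles_in_range[chaser_id] = count + 1 if in_range else 0
        return self.cycles_in_range[chaser_id] >= DOCK_CYCLES
```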
Inspection Orbit
In the orbital scenario, an inspection orbit would occur after all the available docking ports are occupied or if a chaser becomes locked in the equivalent of a Lagrange point in the APF. These mock Lagrange points can exist because of the limitations on movement: if the APF produces a resultant acceleration smaller than what the chaser is able to produce, the chaser will remain stuck if nothing is done; the Tello drones are limited to movements greater than 20 cm. Inspection orbits primarily occur with the last chaser, since chaser-chaser repulsion is typically enough to push an idle chaser back into action. However, an inspection orbit is not a disadvantageous event. In orbital scenarios, the chasers would be outfitted with the computer vision technology, and predetermined chasers would act as cameras maintaining relative positions to one another. If an approaching chaser settled into an inspection orbit, it would become an additional camera strengthening the reliability of the target image. The algorithm has since been updated to recognize inspection orbits on its own; it doubles the time differential used for calculating the acceleration to boost the position change.
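The sketch below illustrates one way this updated stuck-chaser handling could work: when the field acceleration alone cannot produce the drone's minimum 20 cm movement within one cycle, the time differential is doubled. The displacement estimate and thresholds are simplifying assumptions.

```python
# Sketch of boosting the time differential when a chaser is effectively stuck.
import numpy as np

MIN_STEP_M = 0.20   # Tello minimum commanded movement
BASE_DT = 0.25      # nominal APF cycle time [s]


def boost_dt_if_stuck(gamma_mps2, dt=BASE_DT):
    """Return 2*dt when the implied displacement falls below the minimum step."""
    implied_step = 0.5 * np.linalg.norm(gamma_mps2) * dt**2
    return 2.0 * dt if implied_step < MIN_STEP_M else dt
```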
Discussion
The research had a 70% success rate across the 13 tests. Though the failure rate seems high, the failed test cases were primarily the result of hardware failures rather than implementation failures, and they could have been mitigated with better and more reliable hardware. The experiments showed that the machine vision and APF subsystems can work together to command the drones to successfully navigate to potential docking sites on the RSO, both tumbling and steady. In every experiment, at least two of the chasers successfully docked with the target or remained in an inspection orbit.
OptiTrack Error: ... the Motive software misinterprets the foil as the chaser, and the APF algorithm adjusts accordingly.

RealSense Error: It was noticed that the RealSense camera caused the drones to collide due to its uncertainty in depth. This issue can be resolved if the chasers/drones are also equipped with an additional depth sensor or vision-based navigation while they continue to rely on the mothership (in this case, OptiTrack) for guidance.
Drone Overshoot
Since the drones are not equipped with a robust control system, they tend to overshoot when moving to a specific location in the trajectory, causing instability that leads to oscillatory motion and even collision with the target satellite. This limitation can be resolved with a robust control system.
Overhead lights are always needed because the drones use IR sensors to calibrate their IMUs. However, since the lab is painted black to simulate a space-like environment, insufficient lighting causes the IR sensor, and therefore the IMU, to fail. This issue required all the tests to be performed with overhead lighting. In contrast, the current machine vision algorithm works to its potential under darker lighting conditions, as studied in [9, 28, 30]; this lighting requirement resulted in missed detections in some to most of the rotating spacecraft tests.
The target satellite has attractive nodes very close to the solar panels, less than one drone length away. During earlier tests, this caused drones trying to dock with those attractive nodes to collide with the solar panels, which is part of the reason those nodes were not considered docking ports in early iterations of MARVIN. The issue was not entirely due to sizing, but also to uncertainty in machine vision when the target was rotating. However, in a real-life scenario this would not be an issue if the chasers are smaller than the minimum distance between the nodes. Additionally, if the chasers are also equipped with depth sensors alongside the vision system, chaser-target collisions can be prevented every time.
To address this drawback, our future research focuses on generating time series of machine vision detections to predict the dynamics of the RSO. Additionally, with the upcoming advancements in spacecraft hardware [34], it would be reasonable to deploy the algorithms on processors more powerful than the Raspberry Pi 4, such as the NVIDIA Jetson Nano or Orin. Doing so would enhance the speed and performance of the algorithms, since larger and more reliable neural networks could be trained and deployed to perform detection.
CONCLUSION
The hardware-in-the-loop testing conducted at the ORION facility at the Florida Institute of Technology showed that APF guidance and machine vision can work together to enable a swarm of three chaser drones to autonomously execute a safe trajectory to a non-cooperative yawing RSO. Though MARVIN has limitations, its overall performance can be improved by upgrading to better and more reliable hardware, adding time series prediction of the dynamics of the non-cooperative RSO, and adding multiple camera feeds to cover the entirety of the non-cooperative RSO so that future experiments are not limited to a yawing-only RSO. Overall, proving this concept enables a novel approach to solving numerous problems ranging from OOS and ADR to preserving national security.
ACKNOWLEDGMENTS
This project was supported by the AFWERX STTR Phase II contract FA864921P1506 and the
NVIDIA Applied Research Accelerator Program.
REFERENCES
[1] The European Space Agency, "Space debris by the numbers," 11 August 2022. [Online]. Available:
https://www.esa.int/Space_Safety/Space_Debris/Space_debris_by_the_numbers. [Accessed 19 September 2022].
[2] INTELSAT, "MEV-1: A look back at Intelsat’s groundbreaking journey," 17 April 2020. [Online]. Available:
https://www.intelsat.com/resources/blog/mev-1-a-look-back-at-intelsats-groundbreaking-journey/. [Accessed
2022 September 19].
[3] Space News, "MEV-2 servicer successfully docks to live Intelsat satellite," 12 April 2021. [Online]. Available:
https://spacenews.com/mev-2-servicer-successfully-docks-to-live-intelsat-satellite/. [Accessed 19 September
2022].
[4] Astroscale, "Astroscale's ELSA-d Successfully Demonstrates Repeated Magnetic Capture," 25 August 2021.
[Online]. Available: https://astroscale.com/astroscales-elsa-d-successfully-demonstrates-repeated-magnetic-
capture/. [Accessed 19 September 2022].
[5] T. M. Davis and D. Melanson, "XSS-10 Micro-Satellite Flight Demonstration," Proc. of SPIE, vol. 5419 (Spacecraft Platforms and Infrastructure), pp. 15-25, 2004, doi: 10.1117/12.544316.
[7] K. Yoshida, "Engineering Test Satellite VII Flight Experiments For Space Robot Dynamics and Control: Theories
on Laboratory Test Beds Ten Years Ago, Now in Orbit," The International Journal of Robotics Research, vol. 22,
no. 5, pp. 321-335, 2003.
[8] AFRL, "Automated Navigation and Guidance Experiment for Local Space (ANGELS)," July 2014. [Online].
Available: https://www.kirtland.af.mil/Portals/52/documents/AFD-131204-039.pdf?ver=2016-06-28-105617-
297. [Accessed 17 July 2020].
[9] T. Mahendrakar, R. T. White, M. Wilde, B. Kish and I. Silver, "Real-time Satellite Component Recognition with
YOLO-V5," in 35th Annual Small Satellite Conference, 2021.
[10] T. Mahendrakar, J. Cutler, N. Fischer, A. Rivkin, A. Ekblad, K. Watkins, M. Wilde, R. White and B. Kish, "Use
of Artificial Intelligence for Feature Recognition and Flightpath Planning Around Non-Cooperative Resident
Space Objects," in ASCEND 2021, Las Vegas, 2021.
[11] C. W. Warren, "Global path planning using artificial potential fields," in 1989 IEEE International Conference on
Robotics and Automation, Scottsdale, AZ, 1989, doi: 10.1109/ROBOT.1989.100007.
[12] Y.-B. Chen, G.-C. Luo, Y.-S. Mei, J.-Q. Yo and X.-L. Su, "UAV path planning using artificial potential field
method updated by optimal control theory," International Journal of Systems Science, vol. 47, no. 6, pp. 1407-
1420, 2016, doi: 10.1080/00207721.2014.929191.
[13] C. M. Saaj, V. Lappas and V. Gazi, "Spacecraft Swarm Navigation and Control Using Artificial Potential Field
and Sliding Mode Control," in 2006 IEEE International Conference on Industrial Technology, Mumbai, India,
2006, doi: 10.1109/ICIT.2006.372712.
[14] J. Sun and H. Chen, "A Decentralized and Autonomous Control Architecture for Large-Scale Spacecraft Swarm
Using Artificial Potential Field and Bifurcation Dynamics," in AIAA SciTech Forum, Kissimmee, FL, 2018, doi:
10.2514/6.2018-1860 .
[15] T. Chen, H. Wen, H. Hu and D. Jin, "On-orbit assembly of a team of flexible spacecraft using potential field based method," Acta Astronautica, vol. 133, pp. 221-232, 2017, doi: 10.1016/j.actaastro.2017.01.021.
[16] Q. Li, J. Yuan, B. Zhang and H. Wang, "Artificial potential field based robust adaptive control for spacecraft rendezvous and docking under motion constraint," ISA Transactions, vol. 95, pp. 173-184, 2019, doi: 10.1016/j.isatra.2019.05.018.
[17] R. I. Zappulla, H. Park, J. Virgili-Llop and M. Romano, "Real-Time Autonomous Spacecraft Proximity Maneuvers and Docking Using an Adaptive Artificial Potential Field Approach," IEEE Transactions on Control Systems Technology, vol. 27, no. 6, pp. 2598-2605, 2019, doi: 10.1109/TCST.2018.2866963.
[18] M. Mancini, N. Bloise, E. Capello and E. Punta, "Sliding Mode Control Techniques and Artificial Potential Field
for Dynamic Collision Avoidance in Rendezvous Maneuvers," IEEE Control Systems Letters, vol. 4, no. 2, pp.
313-318, 2020, doi: 10.1109/LCSYS.2019.2926053.
[19] J. Cutler, M. Wilde, A. Rivkin, B. Kish and I. Silver, "Artificial Potential Field Guidance for Capture of Non-
Cooperative Target Objects by Chaser Swarms," in ASCEND 2021, 2021.
[20] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[21] A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural
Networks," in Advances in Neural Information Processing Systems 25 (NeurIPS 2012), 2012.
[22] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, "ImageNet: A Large-scale Hierarchical Image
Database," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[23] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick, "Microsoft COCO:
Common Objects in Context," in European Conference on Computer Vision (ECCV), 2014.
[24] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam,
"Mobilenets: Efficient convolutional neural networks for mobile vision applications," in arXiv preprint
arXiv:1704.04861, 2017.
[25] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-time Object Detection,"
in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[26] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, F. Wei and B. Guo, "Swin
Transformer V2: Scaling Up Capacity and Resolution," in Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), 2022.
[27] S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN," in Advances in neural information processing systems 28
(NeurIPS 2015), 2015.
[28] T. Mahendrakar, A. Ekblad, N. Fischer, R. White, M. Wilde, B. Kish and I. Silver, "Performance Study of YOLOv5
and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets," in 2022 IEEE Aerospace
Conference (AERO), Montana, 2022.
[29] G. Jocher, A. Chaurasia, A. Stoken, J. Borovec et al., "ultralytics/yolov5: v6.2 - YOLOv5 Classification Models, Apple M1, Reproducibility, ClearML and Deci.ai integrations," 2020.
[30] T. Mahendrakar, M. N. Attzs, J. Duarte, A. Tisaranni, R. White and M. Wilde, "Impact of Intra-class Variance on
YOLOv5 Model Performance for Autonomous Navigation around Non-Cooperative Targets," in AIAA SciTech
2023 Forum (accepted), 2023.
[31] S. Macenski, T. Foote, B. Gerkey, C. Lalancette and W. Woodall, "Robot Operating System 2: Design, architecture,
and uses in the wild," Science Robotics, 2022.
[32] M. Wilde, B. Kaplinger, T. Go, H. Gutierrez and D. Kirk, "ORION: A Simulation Environment for Spacecraft Formation Flight, Capture, and Orbital Robotics," in Proceedings of the 2016 IEEE Aerospace Conference, Big Sky, MT, 2016.
[34] N. Alarcon, "Technical Blog," Nvidia Developer, 07 August 2020. [Online]. Available:
https://developer.nvidia.com/blog/lockheed-martin-usc-jetson-nanosatellite/. [Accessed 30 September 2022].