Summary of the ICML-2003 Workshop on Machine Learning
Technologies for Autonomous Space Applications
Where We Were and Where To Next
The ICML-2003 workshop on machine learning technologies for autonomous
space applications, held on August 21, 2003, brought together researchers
from machine learning, space science, and mission development for a
day of rich presentations, discussion, and brainstorming on the role
of machine learning in autonomous space exploration.
Workshop Goals and What We Expected
When the organizers conceived of this workshop, we expected (and hoped
for) a variety of ideas, issues, and a diverse set of attendees. In
general, it seems that our hopes were fulfilled, but the day also
held some surprises.
The organizing committee's initial thoughts concerned the technical
and social issues that need to be solved to make machine learning (ML)
a fieldable technology for autonomous space applications. On the
technical side, we envisioned the types of autonomy challenges that
arise for different mission profiles -- planetary orbiters, deep space
probes, rovers, in-atmosphere flying/gliding/ballooning explorers, etc.
Some of these require very little autonomy for long periods, followed
by short periods of quick decision-making, while others operate
continuously in a complex and uncertain environment. We recognized
that bandwidth and latency are key problems, leading us to identify
robust and efficient communication as the "killer application" for ML
in spacecraft. As missions get more complex, more instrumented,
longer duration, and further from Earth, our relative degree of direct
control and return of data will diminish. We also identified
verification/validation of ML algorithms as a key area for performance
and acceptance. Finally, we were interested in the gaps between the
current theory/practice of ML and the kinds of high-reliability
requirements that are necessary for space technologies. We hoped for
input from other communities where high-reliability is essential.
From the social perspective, we realized that overcoming
skepticism is often one of the largest barriers to fielding a new
technology, especially in a high cost-of-failure situation like space
exploration. We were interested in issues of how to gain acceptance
for ML technologies in space missions, including methods for
verification/validation, levels of assurance, and, perhaps most
importantly, ideas for bridging communities and feedback on the key
concerns of mission planners.
Finally, our greatest hope was for an involved and highly
participatory workshop with plenty of discussion and good feedback.
Lessons Learned: Consensus and Surprises
Many of our expectations were amply fulfilled on the day of the
workshop itself. A diverse group of attendees brought excellent
ideas and discussion to the workshop. The
brainstorming activity on ML technologies for the Mars rover seemed to
generate excitement and produced a number of good insights. Workshop
attendees contributed great background information on current
technology and its capabilities and weaknesses (much of which is
discussed in the contributed papers, also available on this web site).
Far more important than the fulfilled expectations, though, were the
insights and revelations that the organizers had not expected. In particular:
- In spite of the diversity of mission profiles, they largely have the
same concerns: science return, spacecraft safety, efficient use of
bandwidth, and resource constraints.
- Resource constraints, in fact, were a leading issue throughout the
day. Not only the bandwidth/latency constraints that we had
foreseen, but also severe limitations on power consumption,
processing cycles, movement, and sensory system bandwidth.
- Risk aversion. While bandwidth/latency may be the killer
application for ML, risk is the killer barrier.
Not only are mission engineers unwilling to risk a
hundred-million-dollar-plus spacecraft to an adaptive control
mechanism, but mission scientists are even more unwilling to risk
photo imagery and sensor data to an on-board decision process. When
you're limited to less than 10 megapixels of data transmission per
day, you become very concerned about losing even a fraction of that!
Given that the primary purpose of virtually all interplanetary and
deep-space missions is science return, it is not surprising that the
data transmission needs are foremost.
It was repeatedly emphasized that overcoming this risk aversion
comes down to establishing dialog between the ML designers and the
mission planners/scientists and that, above all, the ML designers
must meet the mission scientists' needs and not vice versa. It is
not adequate to present the kinds of results that are traditionally
taken as proof in ML -- theoretical bounds on convergence coupled
with empirical learning curves on (often synthetic) data. ML will
have to prove itself under much more stringent conditions.
- Workshop participants generally agreed that ML algorithms are almost
certainly not ready for the prime-time of spacecraft control. It
may be possible, now, to learn control strategies offline (e.g.,
in simulation and in terrestrial analog hardware) and to embed a
fixed, final learned policy in the spacecraft. But our
reinforcement learning algorithms are not robust and stable enough
to be relied upon in an online learning context. Much work remains
to be done on theoretical and empirical assurances of learning rate,
performance, and safety.
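The offline-then-freeze approach mentioned above can be illustrated with a deliberately tiny sketch. Everything here is hypothetical (a toy one-dimensional traverse task, invented rewards, no real spacecraft interface): a tabular Q-function is trained entirely in "simulation," and only the fixed greedy policy would ever fly.

```python
import random

# Hypothetical toy task: a 1-D traverse with 6 cells; the craft starts
# at cell 0 and must reach cell 5. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 6, 5
ACTIONS = (0, 1)

def step(state, action):
    """Deterministic toy dynamics: small move cost, reward on reaching goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

def train_offline(episodes=500, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Learn a tabular Q-function entirely offline, in 'simulation'."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)          # explore
            else:
                a = max(ACTIONS, key=lambda a: q[s][a])  # exploit
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    # Freeze: extract a fixed greedy policy; no on-board learning occurs.
    return [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]

policy = train_offline()
```

The point of the sketch is the last line: only the frozen `policy` table would be embedded on the spacecraft, sidestepping the robustness concerns of online learning.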
- The general consensus was that the key to successful ML application
in this area is a phased approach and a careful choice of
applications. ML designers should strive for their algorithms to
not be mission enabling (i.e., on the critical path for mission success).
- Initial applications, at least, should focus on being "value
added" -- allow the mission to do more than it could otherwise,
with no threat to mission success, science return, or spacecraft
safety. Anomaly and error detection and recovery seemed to be
good candidates, as are ground-based tasks such as data analysis.
- Adaptive user interfaces also seem like a useful and relevant,
albeit not actively space-borne, opportunity for ML. Controlling
craft and interpreting returned data is challenging and could
benefit from some of our techniques.
- Given the degree of risk aversion, the absolute necessity for data
filtering, and the difficulty that scientists have in formally
expressing their science return preferences ("Would you prefer to
get back the picture of the large, red, smooth rock, or the small,
jagged, black rock?"), it seems that preference elicitation is
actually a key area. ML may be profitably used for eliciting and
encoding scientists' preferences and learning an optimal tradeoff
between the potentially competing preferences. The encoded
version can then be embedded in a fixed, on-board data analysis and
filtering system.
- In the medium-term, the extended mission phase (after the primary
mission is successfully completed, but before the end of the
useful life of the craft) is a plausible place to argue for
ML-based autonomy. With the primary work of the mission
accomplished, some of the risk aversion is assuaged. There is,
however, a great deal of competition for extended missions, so ML
will still have to prove itself to a high standard to earn a
place. Successful application in non-mission-critical areas will
be good background for making this argument.
- Another medium-term possibility is focusing on low-budget
commercial craft (e.g., low-budget Earth sensing satellites)
rather than high-budget science and exploration craft. With total
budgets below $10 million per craft, these markets are both less
risk averse and could benefit proportionally more from improved
autonomy. Autonomy can augment or supplant expensive ground-based
human operators, who constitute a much larger fraction of such
low-budget projects. The cost savings in this case may offset the
increased risk incurred in employing ML control technology.
- Only in the very long term, after a great deal of proving in
lower-risk environments, is it reasonable to tackle getting
autonomy into the core control of exploratory science spacecraft.
This is disappointing, as a number of the mission concepts that
are currently under consideration at NASA could greatly benefit
from ML-based adaptive autonomous control, if we could deliver
it safely. For example, the Europa Ocean Explorer mission will be
exploring a totally unknown environment (that may or may not even
be oceanic!) with a need for realtime control and effectively no
communication with Earth. This is an incredibly demanding
environment for ML, but an attractive one.
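The anomaly detection mentioned earlier as a "value added" candidate can be made concrete with a minimal sketch. The class name, thresholds, and use of Welford's online variance update are illustrative assumptions, not a description of any flown system; the key property is that the monitor only flags telemetry and never touches control.

```python
import math

class ZScoreMonitor:
    """Hypothetical single-channel telemetry monitor: flags samples far
    from the running mean. Purely advisory -- it raises a flag for the
    downlink report and takes no control action."""

    def __init__(self, threshold=4.0, warmup=20):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0           # sum of squared deviations (Welford)
        self.threshold = threshold
        self.warmup = warmup    # samples before flagging begins

    def update(self, x):
        """Return True if x looks anomalous, then fold it into the stats."""
        anomalous = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

Because the monitor costs a handful of arithmetic operations per sample, it fits the severe power and processing constraints discussed above while posing no threat to spacecraft safety.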
- Finally, there was discussion about the respective roles of theory,
"pure" simulation (virtual environments), "physical" simulation
(real hardware in Earth-bound environments), and flight tests. All
attendees agreed that each has its role to play and that ensuring
that each stage is bug-free and actually relevant to the next
stage is challenging. No real answer to that challenge was
presented, but all agreed that these stages are necessary to
development of ML for space autonomy (not to mention other related
fields). In the short term, a number of attendees described and/or
offered data and simulation environments that would be of use to
researchers in this area.
- Key areas for ML research include, but are not limited to:
- Learning under resource constraints.
- Data filtering.
- Learning control policies (e.g., reinforcement learning) under
strong safety constraints.
- Methods of verification/validation.
- Methods for establishing the accuracy of a simulation and its
relevance to real flight conditions.
- Preference elicitation and optimization of on-board decisions to
reflect the preference model.
- Data filtering or adaptive lossy compression in response to
available downlink bandwidth.
- Status anomaly/error detection and correction.
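The preference-elicitation item above can be sketched as a Bradley-Terry-style logistic model fit to scientists' pairwise choices ("image A or image B?"). The feature names, data, and learning-rate settings below are invented purely for illustration; the frozen weight vector is what an on-board ranker would use.

```python
import math

# Hypothetical image descriptors; a real system would use whatever
# features the mission scientists consider meaningful.
FEATURES = ("size", "redness", "jaggedness")

def fit_preferences(comparisons, lr=0.5, epochs=200):
    """Fit linear scoring weights from pairwise choices.

    comparisons: list of (winner_features, loser_features) tuples,
    where each element is a tuple aligned with FEATURES.
    """
    w = [0.0] * len(FEATURES)
    for _ in range(epochs):
        for win, lose in comparisons:
            diff = [a - b for a, b in zip(win, lose)]
            # Bradley-Terry probability that the winner beats the loser.
            p = 1.0 / (1.0 + math.exp(-sum(wi * d for wi, d in zip(w, diff))))
            # Gradient ascent on the log-likelihood of the observed choice.
            for i, d in enumerate(diff):
                w[i] += lr * (1.0 - p) * d
    return w

def rank(images, w):
    """Order candidate images by learned score, best first."""
    return sorted(images, key=lambda f: -sum(wi * x for wi, x in zip(w, f)))
```

For example, if scientists consistently prefer redder rocks in the elicitation session, the learned redness weight comes out positive, and `rank` then prioritizes red-rock imagery for the limited downlink.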
Overall, we felt that the workshop was a great success and are
grateful to those who contributed their expertise. Thank you all! We
look forward to working with you in the future!
Last changed Mon, Nov 24, 2003, 13:09:51.