
Autonomous systems, from driverless cars to robots, increasingly operate in open-ended, evolving environments that are difficult to fully specify in advance. Traditional safety certification, which evaluates a system once against a fixed set of assumptions, is poorly matched to this reality.
In response, researchers at The University of Texas at Austin and collaborating institutions are exploring an alternative to traditional, static safety certification, which can quickly become outdated as systems learn, update and encounter new situations. Lifelong, or dynamic, safety certification treats safety assurance not as a one-time gate but as an ongoing process that adapts as systems, environments and expectations change.
The concept builds on recent work that frames certification as the iterative refinement of what uses and operating contexts are acceptable for an autonomous system, informed by modeling, testing, and evolving human judgments about risk and safety. The goal is not to declare systems “safe” in an absolute sense, but to continuously assess, bound, and manage risk as new evidence and experience accumulate.
“Autonomous systems are designed to operate in environments we cannot fully predict,” said Ufuk Topcu, professor in the Cockrell School of Engineering’s Department of Aerospace Engineering and Engineering Mechanics and core faculty at the Oden Institute for Computational Engineering and Sciences. “They evolve through learning and updates, and the contexts in which they are used evolve as well. Certification needs to account for that evolution rather than ignore it.”
The Research

Through a Multidisciplinary University Research Initiative project led by UT Austin, researchers are examining the foundations needed to make dynamic certification practical. The work brings together expertise in controls, formal methods, machine learning, human factors, robotics, and systems engineering from six universities.
Rather than proposing a single tool or standard, the research focuses on three interconnected directions:
- Specification and alignment: Developing methods to capture safety expectations from multiple stakeholders, including designers, operators, and users, and to reason about how those expectations change over time.
- Verification and learning: Creating verification techniques that interact with learning-based components, enabling systems to adapt while maintaining quantifiable safety margins.
- Extrapolation and adaptation: Understanding how safety guarantees degrade outside previously tested conditions and how systems can reason about and respond to unforeseen situations.
A central theme across these directions is managing the co-evolution of autonomous behavior, operational context, and human expectations of safety.
The effort also examines how developers, regulators, and operators might interact more continuously to ensure safety. One motivating analogy, offered without presupposing any specific regulatory framework, is the staged evaluation used in clinical trials: a system would initially be evaluated in limited contexts, and its approved scope would be gradually expanded as evidence accumulates.
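To make the staged-evaluation idea concrete, the sketch below shows one possible shape such a process could take: a toy review loop that widens an approved operating envelope only when accumulated evidence keeps the estimated failure rate below an agreed threshold. It is an illustration only, not software or a method from the project, and the envelope parameters, thresholds, and review rule are all hypothetical.

```python
# Illustrative sketch only: a toy "dynamic certification" loop that widens an
# autonomous system's approved operating envelope in stages, echoing a staged,
# clinical-trial-style evaluation. All names, numbers, and thresholds are
# hypothetical and chosen purely for illustration.

from dataclasses import dataclass

@dataclass
class Envelope:
    max_speed_mps: float   # widest operating conditions currently approved
    max_wind_mps: float

# Candidate stages, ordered from most restricted to least restricted.
STAGES = [
    Envelope(max_speed_mps=2.0, max_wind_mps=3.0),
    Envelope(max_speed_mps=5.0, max_wind_mps=6.0),
    Envelope(max_speed_mps=10.0, max_wind_mps=10.0),
]

RISK_THRESHOLD = 0.01   # acceptable estimated failure rate (hypothetical)
MIN_TRIALS = 200        # minimum evidence before considering expansion

def estimated_risk(failures: int, trials: int) -> float:
    """Crude point estimate of the failure rate from field evidence."""
    return failures / trials if trials else 1.0

def review(stage: int, failures: int, trials: int) -> int:
    """Return the next approved stage given evidence from the current stage.

    The envelope is widened only when enough evidence exists and estimated
    risk stays below the agreed threshold; if risk is too high, the envelope
    is rolled back a stage; otherwise it is held while evidence accumulates.
    """
    risk = estimated_risk(failures, trials)
    if trials >= MIN_TRIALS and risk <= RISK_THRESHOLD and stage < len(STAGES) - 1:
        return stage + 1          # widen the approved envelope
    if risk > RISK_THRESHOLD:
        return max(stage - 1, 0)  # restrict it again
    return stage                  # keep collecting evidence

if __name__ == "__main__":
    stage = 0
    # Hypothetical review points: (observed failures, trials since last review)
    for failures, trials in [(0, 250), (1, 300), (9, 150)]:
        stage = review(stage, failures, trials)
        print(f"approved envelope: {STAGES[stage]}")
```

What matters in the sketch is not the numbers but the structure: approval is a revisable decision tied to evidence, expanded or rolled back as experience accumulates rather than issued once.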
Why It Matters
Autonomous systems already influence daily life, from transportation and logistics to infrastructure monitoring and emergency response. As these systems take on greater responsibility in uncertain and safety-critical settings, it becomes increasingly clear that both the systems and our understanding of what it means for them to be safe are changing over time.
Treating certification as a lifelong process rather than a final checkpoint enables more realistic reasoning about risk, learning, and trust in autonomous systems. The researchers will explore these ideas through representative domains, including shipboard firefighting and underwater mine countermeasures, where environments are highly dynamic and the consequences of failure are severe.
The Team
The collaboration brings together researchers with expertise in controls, formal methods, machine learning, systems engineering, robotics and human factors.
Team members include Topcu, Elias Bareinboim (Columbia University), Matthew Bolton (University of Virginia), Cody Fleming (Iowa State University), Dorsa Sadigh (Stanford University) and Matthias Scheutz (Tufts University).
