This is the second part of an opinion essay by Erin Rivera and Anna Dietrich on the path from automation to autonomy in aviation. Read the first part here.
In many respects, developing a safe autonomous ground vehicle is a far greater challenge than developing a safe autonomous aircraft. An autonomous vehicle must not only identify and interpret the actions of other nearby vehicles, traffic lights, and roadway markings, which often vary between localities, but must also understand and be able to respond to the unpredictable actions of pedestrians and wildlife crossing the vehicle’s pathway.
There are fewer objects to identify and track in the air, since there are no pedestrians, cyclists, traffic signs, or roadway markings in the sky, though birds remain a hazard. Autonomous aircraft will use an array of sensors, radar, and computer vision to detect airborne objects — from birds to drones and other aircraft — with far greater reliability than human sight. The speeds and distances inherent to aviation are a challenge both for detection and processing capabilities, however.
Adding a third dimension — altitude — provides more flexibility and predictability in the environment in which autonomous aircraft can operate. This, too, is both a blessing and a curse; if a terrestrial vehicle is uncertain of its environment, it has the option to simply stop or pull over. In the sky, landing isn’t as simple a task, and the inefficiency of many eVTOL aircraft in hover will mean “stopping” is not ideal either.
The far greater challenge for autonomous aircraft is the certification of autonomous airborne systems. The software or programming in any aircraft computer system must be developed and certified according to the impact of a system failure on aircraft, crew, and passenger safety. There are five levels, from A to E, of certification based on RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification. Level A, the highest level of safety certification, is required for systems designated as “flight-critical,” which, for example, applies to the aircraft autopilot, navigation, and all fly-by-wire systems in all type-certificated aircraft. Level E applies to airborne systems with no or minimal impact on safety, such as entertainment consoles. Under the current certification standards, autonomous systems and their programming would be deemed flight-critical, requiring the highest safety certification level — though FAA and EASA requirement levels for eVTOL aircraft have not yet been finalized.
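The severity-to-level assignment described above can be sketched as a simple lookup. This is an illustrative simplification with hypothetical names: in practice, assigning a software level follows a full system safety assessment, not a dictionary.

```python
# Illustrative sketch of the DO-178C software levels described above,
# keyed by the worst-case effect of a software failure on the aircraft.
# Hypothetical names; real level assignment follows a full system
# safety assessment, not a table lookup.

DO178C_LEVELS = {
    "catastrophic": "A",  # e.g. fly-by-wire, autopilot: flight-critical
    "hazardous":    "B",
    "major":        "C",
    "minor":        "D",
    "no_effect":    "E",  # e.g. entertainment consoles
}

def required_software_level(worst_case_effect: str) -> str:
    """Return the DO-178C software level for a failure-condition category."""
    return DO178C_LEVELS[worst_case_effect]

# A flight-critical system whose failure could be catastrophic needs Level A:
assert required_software_level("catastrophic") == "A"
assert required_software_level("no_effect") == "E"
```

Under the current framework, this is the table on which autonomous systems land at the top row, with all the verification burden that implies.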
The current FAA software certification framework poses significant challenges for the certification of autonomous systems that are nondeterministic. Current standards require verification of every system output to ensure that the system will not generate a command that will jeopardize safety of flight. By design, the output of a nondeterministic system cannot be fully enumerated in advance, since the system can take an effectively unlimited number of pathways to produce its output. Thus, it is impossible to test and verify that every system output complies with the current certification safety and assurance standards.
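The verification gap can be illustrated with a toy comparison (hypothetical controllers, not real avionics code): a deterministic rule can be checked exhaustively over a bounded input domain, while a stochastic stand-in for a learned policy can only be sampled, so testing yields statistical evidence rather than proof.

```python
import random

def deterministic_command(altitude_error: int) -> str:
    # A deterministic rule: the same input always yields the same output.
    if altitude_error < 0:
        return "climb"
    return "descend" if altitude_error > 0 else "hold"

def stochastic_command(altitude_error: int, rng: random.Random) -> str:
    # Stand-in for a nondeterministic policy: the same input
    # may yield different outputs on different runs.
    options = ["climb", "hold"] if altitude_error < 0 else ["descend", "hold"]
    return rng.choice(options)

# The deterministic rule can be verified exhaustively over a bounded domain:
assert all(deterministic_command(e) in {"climb", "descend", "hold"}
           for e in range(-100, 101))

# The stochastic policy can only be sampled; no finite test suite
# enumerates every possible output sequence it might produce.
rng = random.Random(0)
samples = {stochastic_command(-10, rng) for _ in range(1000)}
assert samples <= {"climb", "hold"}
```

The point of the sketch is the asymmetry: the first `assert` is a proof over the whole input domain, while the second only describes the samples that happened to be drawn.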
To deal with nondeterministic systems, most software developers using machine learning for autonomy purposes intend to “freeze” a version of their system software that meets requirements, test it relentlessly, and then put it through the certification process to ensure it is safe for passenger aircraft. This presents the challenge of updating the aircraft software as new learning is accomplished: any change in aircraft software requires recertification of the software before use. Instead of real-time learning, system updates are generated by collecting flight data from test and operational aircraft which can be used to retrain and develop an improved version of the system software. Incorporating that learning then requires a great deal of time and effort to recertify the updated system, which is why aircraft software developers don’t frequently make programming updates — even when doing so would increase system efficiencies and performance.
The speed at which autonomy systems can advance is closely linked to how rapidly developers will be able to incorporate new data, improve the system, and push an updated version out to the global fleet to restart the process. Current certification systems won’t allow for this to be done rapidly, since the certification process is often measured in years and millions of dollars. This is in stark contrast to automotive software, which in some models is updated remotely every couple of weeks. Providing the ability to make safety-enhancing updates to advanced aviation software in a timely and cost-effective way will be critical to the safe rollout of this technology. One of the main advantages of autonomy over human pilots is that once the software has been fixed, the autonomy won’t make the same mistake twice — but we can only realize that benefit if we have a mechanism to efficiently certify updated software that has incorporated that learning.
Because of these challenges surrounding autonomous and highly automated systems, the current certification requirements and guidance standards for aircraft systems and software are undergoing significant revisions by standards groups ASTM, SAE, and RTCA. One design and certification approach, Autonomy Design and Operations in Aviation: Terminology and Requirements Framework, developed by ASTM (AC377), evaluates the risks and benefits of automating individual aircraft systems or pilot functions rather than evaluating the vehicle by “Levels of Autonomy” — a common classification approach used for ground vehicles. The framework proposes evaluating the added safety benefit and reliability of automating a system or function, even if that system is less than perfect, provided it increases safety overall compared to the available human alternatives. This approach to certification allows the flexibility to increase aircraft automation/autonomy over time, leading to the certification of aircraft where specific systems or functions are autonomously performed. For instance, the pilot could control an aircraft’s heading, speed, and altitude, while the aircraft automatically performs takeoffs and landings.
Implementing this approach, however, requires developing standards against which to measure an autonomous system’s performance. Measuring a system’s performance against human standards is more complicated than one might think, because human pilots’ performance varies over a wide spectrum of abilities, raising the question of what performance bar the system must meet. Like anything else, the better the system, the more expensive it is to develop and certify. If every autonomous system is certified to the highest design assurance level, then developing and building the aircraft will be too expensive for its intended use.
Fully Autonomous Operations
Traditionally, advanced aircraft systems such as autopilots have relied on human pilots to be their backup in case of a system failure. If an error occurs, the system reverts control of the aircraft back to the pilots. This is not a robust approach to safety since humans are not always reliable backups — as discussed in our earlier column — and often may be the cause of the problem in the first place. Additionally, using humans as backups increases the training burden on human pilots, who require periodic retraining since training goes stale over time. Finally, building autonomous systems to revert control back to a human when something fails defeats the end goal of autonomy in the first place — and undercuts its safety case.
Instead, pathways must be found that allow an autonomous or highly automated system to stand on its own through high reliability and fail-functional architectures. In an off-nominal situation, such as a mechanical malfunction or degraded system performance, a fail-functional autonomous system must be capable of recognizing that it is in a degraded state and activating a limp or safe mode that safely operates or lands the aircraft.
One backup solution is to “bound” the autonomous or highly automated system with a much simpler safety monitor, which ensures that the performance and the commands generated by the autonomous system are reliable (see ASTM F3269). In the event the safety monitor detects degraded system performance or failure, the monitor takes control and reverts to a simpler, less adaptable but more deterministic mode of operation. This is commonly referred to as runtime assurance. In the event that a computer vision system fails and is unable to verify the safety of an unimproved landing area, for example, instead of the pilot taking over, a backup system will use instrument landing procedures and navigation aids to land at a known helipad or runway.
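The monitor-and-revert pattern above can be sketched in a few lines. This is a minimal illustration in the spirit of runtime assurance, not an implementation of any certified architecture; the function names and the pitch-limit envelope are hypothetical.

```python
# Hedged sketch of a runtime-assurance arrangement: a simple monitor
# bounds a complex controller and switches to a deterministic backup
# whenever the complex system's command leaves a safe envelope.
# All names and limits here are hypothetical, for illustration only.

def complex_controller(state: dict) -> float:
    # Stand-in for an advanced/learned system proposing a pitch command.
    return state["suggested_pitch"]

def backup_controller(state: dict) -> float:
    # Simple, deterministic fallback: command level flight.
    return 0.0

def within_safe_envelope(command: float, limit_deg: float = 15.0) -> bool:
    # The monitor checks only that the command stays inside the envelope;
    # it does not need to understand how the command was produced.
    return abs(command) <= limit_deg

def select_command(state: dict) -> tuple[str, float]:
    cmd = complex_controller(state)
    if within_safe_envelope(cmd):
        return ("complex", cmd)
    return ("backup", backup_controller(state))

# Inside the envelope, the complex system flies the aircraft:
assert select_command({"suggested_pitch": 5.0}) == ("complex", 5.0)
# An out-of-envelope command triggers reversion to the backup:
assert select_command({"suggested_pitch": 40.0}) == ("backup", 0.0)
```

The design point the sketch captures is that only the small monitor and backup need the most rigorous assurance, since together they bound what the complex system can do.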
Collaborative efforts are underway between autonomy developers and regulators to define requirements and procedures that will allow autonomy to become an ever-greater part of aviation. As with any new technology, it is crucial to proceed deliberately and with caution to ensure that our certification approaches are beneficial for aircraft safety. If we are too restrictive in our approach, the industry will be hobbled. Alternatively, if we are too permissive, we will likely see preventable accidents, which may set the industry back even further.