The European Union Aviation Safety Agency (EASA) and the Swiss startup Daedalean — which is developing an artificial intelligence (AI)-based autopilot for eVTOL and other aircraft — have released the results of their collaborative project on the use of neural networks in aviation.
Their public report (which redacts some confidential information) is the product of an EASA innovation partnership contract called “Concepts of Design Assurance for Neural Networks” (CoDANN), which ran from June 2019 to February 2020. The goal of the partnership was to explore the challenges associated with using neural networks in aviation, with an eye toward eventually allowing machine learning algorithms and other forms of AI in safety-critical applications.
Although many of its concepts apply to machine learning algorithms in general, the report focuses primarily on deep neural networks for computer vision systems — the basis of Daedalean’s autopilot system.
“Machine learning . . . provides major opportunities for the aviation industry, yet the trustworthiness of such systems needs to be guaranteed,” the report states, noting that traditional development assurance frameworks are not well adapted to complex machine learning algorithms, which are not predictable or explainable in the same way as conventional software algorithms.
According to the report, the joint undertaking between EASA and Daedalean made progress on several essential aspects of “learning assurance,” which is one of four building blocks that structure the AI trustworthiness framework in EASA’s AI Roadmap. That document, released earlier this year, describes learning assurance as a way to “open the ‘AI black box’ as much as practicable” by gaining confidence that a machine learning application supports the intended functionality.
Notably, the project with Daedalean resulted in the development of a W-shaped learning assurance life cycle, which EASA says “will serve as a key enabler for the certification and approval of machine learning applications in safety-critical applications.”
Crucially, the report assumes a non-adaptive system architecture — in other words, one that is frozen at a certain stage of development and does not continue to learn during operation. This “creates boundaries which are easily compatible with the current aviation regulatory frameworks,” the report states.
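To illustrate what “non-adaptive” means in practice, here is a minimal, hypothetical Python sketch — not Daedalean’s actual software, and greatly simplified from a real neural network — showing a model whose parameters are fixed at deployment, so its in-service behavior is a pure function of its inputs and can be characterized before entry into service:

```python
# Hypothetical illustration of a "frozen" (non-adaptive) model: trained
# parameters are fixed at construction and the deployed object exposes
# inference only, with no mechanism for in-service learning.

class FrozenModel:
    """A model whose weights are set once and never updated in operation."""

    def __init__(self, weights):
        # Copy and freeze the trained parameters (a tuple is immutable);
        # there is no optimizer and no update path.
        self._weights = tuple(weights)

    def predict(self, features):
        # Inference only: a fixed weighted sum stands in for a real network's
        # forward pass. Same inputs always produce the same output.
        return sum(w * x for w, x in zip(self._weights, features))

    # Deliberately no train() or update() method: the operational system
    # cannot modify its own behavior, which keeps it bounded and testable.


model = FrozenModel([0.5, -1.0, 2.0])
print(model.predict([1.0, 1.0, 1.0]))  # -> 1.5, every time
```

Because the deployed object has no update path, its input–output behavior is stable across the fleet and over time — the property that, per the report, makes such systems easier to reconcile with existing certification frameworks.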
“Our collaboration with EASA has created a solid foundation that has a realistic chance of paving the way for future use of [machine learning] in safety-critical applications in aviation and beyond,” said David Haber, head of machine learning at Daedalean, in a press release.
“We have considered non-trivial problems, yet more work is required to bring neural networks to full certification,” he continued. “We look forward to continuing our work with EASA.”
According to EASA, its next step will be to “generalize, abstract, and complement these initial guidelines, so as to outline a first set of applicable guidance for safety-critical machine learning applications.”