By Brian Garrett-Glaser

As managing editor, Brian covers the ecosystem emerging around eVTOLs and urban air mobility. Follow him on Twitter @bgarrettglaser.


Talking autonomy with Daedalean’s Luuk van Dijk

Luuk van Dijk, a former software engineer at Google and SpaceX, is the founder and CEO of Swiss startup Daedalean. His company is pushing the limits of autonomy, developing products for use on eVTOLs and general aviation aircraft while simultaneously working with regulators on standards to govern the safe incorporation of artificial intelligence into safety-critical avionics systems.

Daedalean Luuk
Daedalean cofounders Anna Chernova and Luuk van Dijk. Daedalean Image

Daedalean has received investment from Honeywell Ventures and will soon release detect-and-avoid and navigation pilot aids with hardware partner Avidyne. But that’s just the beginning of van Dijk’s plan to gradually bring autonomy into aviation in a way that is certifiable, improves safety outcomes and allows denser use of the airspace, a necessary component for urban air mobility.

Here is our conversation with van Dijk, edited and condensed for clarity.

How were eVTOLs part of your inspiration for jumping into autonomy systems for aviation? What do you think of the urban air mobility market?

Luuk van Dijk: Yes, part of my inspiration was actually the first eVTOLs that started popping up in 2015 and 2016. I thought, okay, it is going to take them seven years to get a certified aircraft to market, and the people in this urban air mobility space all know that autonomy is going to be a requirement for long-term economic success. So while the Liliums and Volocopters and Kitty Hawks are all working on getting the airframe in the air and their avionics departments worry about control of the aircraft and radar and radio, we can work on this autonomy thing and be ready at the same time.

And I’m very confident about this whole UAM thing for one main reason: electric is simpler. And because it’s simpler, it can be safer and it can be cheaper. And once it’s physically feasible, it will soon be economically feasible, because enough people will jump on it. And then nothing is going to stop it.

You have said that there is no fundamental issue preventing the industry from certifying autonomy software powered by artificial intelligence or machine learning systems. Can you explain your view?

Luuk van Dijk: In practice, the term “artificial intelligence” seems to mean whatever computer science doesn’t really know how to do yet; it’s at the edge of what we need to do. And the state of the discussion in much of the industry seems to be that AI is a black box, it’s hard to understand, and so it can’t safely be used on aircraft.

We respectfully disagree, which is why we continue to work with the European Union Aviation Safety Agency (EASA) to develop design assurance and means of compliance frameworks for the use of AI and machine learning applications.

Algorithms that machine learning comes up with are not fundamentally different from analog algorithms, or from an analog circuit inside a radar that does probabilistic things to figure out the most likely place where something is. So there’s no reason to doubt a priori that they can be certified. The only reason to say “this can never be certified” is that you haven’t written down what “this” is. Sure, magic AI can never be certified, because no one knows how that works. But that doesn’t mean nobody knows how neural networks work. It’s true that you cannot point to a single point in a neural network to see why it missed the pedestrian, just like you can’t point to a single letter of a poem to explain why it evokes a vision of beauty. That’s looking at the wrong level.
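
The "right level" of analysis is a property of the whole function, not of any individual weight. As a toy illustration, unrelated to Daedalean's actual methods: for a linear classifier you can certify, from the weights alone, that no small perturbation of the input can flip the decision.

```python
def certified_robust(weights, bias, x, eps):
    """Toy certified-robustness check for a linear score s(x) = w.x + b.

    Any perturbation with every |delta_i| <= eps can change the score by
    at most eps * sum(|w_i|). If the margin |s(x)| exceeds that worst
    case, the sign of the decision provably cannot flip.
    """
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    worst_case_change = eps * sum(abs(w) for w in weights)
    return abs(score) > worst_case_change

# Margin 0.5 versus a worst-case change of 0.1: provably robust.
print(certified_robust([0.5, -0.3, 0.2], 0.0, [1.0, 0.0, 0.0], 0.1))  # True
```

The same idea, bounding how much the output can move when the input moves, is what interval-bound and Lipschitz analyses do for full neural networks, at the level of the emergent function rather than single parameters.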

Machine learning, which is largely based on applying statistics to big data, can actually provide more statistical guarantees than any process involving humans writing traditional software can. You may not be able to point to the single piece of code that caused the system to miss the pedestrian, but what you can see is that this emergent behavior is proven on these big data sets, and that this data is uniformly drawn from reality, and you can put some statistical bounds on it.
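
A minimal sketch of the kind of bound van Dijk is gesturing at, with hypothetical numbers: if a fixed classifier is evaluated on independently drawn test samples, a one-sided Hoeffding inequality turns the measured error rate into a high-confidence upper bound on the true error rate.

```python
import math

def error_upper_bound(test_error, n, delta):
    """One-sided Hoeffding bound: with probability at least 1 - delta,
    the true error rate of a fixed classifier exceeds its measured error
    on n i.i.d. test samples by at most sqrt(ln(1/delta) / (2n))."""
    return test_error + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Hypothetical: 0.1% measured error on one million independently drawn
# test images, bounded at the 1e-5 confidence level.
bound = error_upper_bound(0.001, 1_000_000, 1e-5)  # about 0.0034
```

The guarantee holds only under the assumption the interview stresses: that the test data is genuinely drawn from the operational reality the system will face.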

There’s an academic sub-field of statistics called learning theory, which predates neural networks, and we can draw on theorems from the ’60s and ’70s to get generalization bounds: a good handle on what you need to prove about your data set, and on how results on the test set generalize to guarantees on how the system is going to behave in the wild. Yes, there is also work going on to try and bound the behavior of neural networks directly, but it’s extremely limited and the bounds are not very useful. But you do have the complete differential structure between input and output in a neural network, and that gives you some very strong mathematical handles to show that if you vary the input a little, the output should not vary wildly. So you can actually set bounds to prove that some of these failure properties are absent, or sufficiently rare that the wings will probably fall off the airplane before this becomes a problem.

Can you talk a little bit about applying DO-178C standards for avionics software to machine learning systems?

Luuk van Dijk: In classical software, the process really involves writing down three times what you’re going to do, and then checking that three times. So you start with your system requirements, you write your high-level requirements, and you write your low-level requirements and your design, you write the software, and you write some tests and then you verify that all these things are done — nothing in there guarantees correctness in any way. Every step is a bunch of humans looking at some code and saying, yeah, that looks about right. You only get the license to do that as an organization if you prove that you have knowledgeable people, but there’s nothing in the process that guarantees even remotely that the software only fails once per x hours of flight.

But that is for computer systems that are relatively simple; just a few bits of information go in. Is there weight on wheels, are we flying; AND or AND NOT; put up the wheels, or put out the thrust reverser.
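
As an illustrative sketch only (the predicate names are hypothetical, not real avionics logic), the few-bits-in, one-bit-out decisions van Dijk describes look like this, and are exhaustively testable over all input combinations:

```python
# The kind of simple discrete logic classical avionics software handles:
# a handful of boolean inputs, a boolean decision out.

def thrust_reverser_permitted(weight_on_wheels: bool, airborne: bool) -> bool:
    # Deploy reversers only on the ground.
    return weight_on_wheels and not airborne

def gear_retraction_permitted(weight_on_wheels: bool, airborne: bool) -> bool:
    # Retract the gear only once airborne.
    return airborne and not weight_on_wheels
```

With two boolean inputs there are only four cases per function, so every behavior can be enumerated and reviewed, which is exactly what breaks down when the input becomes a whole camera image.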

W-shaped learning assurance cycle
This W-shaped learning assurance life cycle for machine learning applications was a key outcome of the collaboration between EASA and Daedalean. EASA Image

For our sort of thing, we’re clearly going into the realm where it’s not just a couple of bits but a whole image: situational awareness of the whole airspace, and then making the right call; can I land here, yes or no? That is where this process breaks down. In that respect, DO-178C was not made for machine learning.

But the intention you can definitely capture. And then what you see in a machine learning system is that many of the assurances you would want to put on the code, you actually want to put on the data. You have to prove that you have a data set that, if we’re talking about landing, for example, covers runways as they occur in reality. There are famous examples where testers put a few noisy things in the pictures used to train a neural network and all of a sudden it thinks the gorilla is a kangaroo, or the stop sign is a 50 speed limit sign. The regulator has the right and the duty to ask how you know that your data set is complete and doesn’t have these problems; how you’re going to prove that is your problem.

With self-driving cars, there are numerous edge cases that won’t appear until millions upon millions of test miles have been conducted. How should this be accounted for by the designers of autonomy systems and certification requirements?

Luuk van Dijk: It’s a bit of a red herring, expecting AI to magically deal with situations you haven’t thought of as a designer. Because if these cases are so rare that they don’t come up in extensive testing, then we’re already targeting the small epsilon of safety cases. Of course the designer should try to think of every situation possible, and whenever a new incident occurs, we take it into our collective knowledge and design the next system to account for it; but expecting magic beyond that is a false pretense. When a human pilot encounters a situation they aren’t prepared for, they improvise. Sometimes it works, sometimes it doesn’t, but we are satisfied with knowing they tried. If Sully had tried really hard to land the plane on the Hudson and failed, he still would have been the hero.

One of the real advantages of autonomy systems is that we can teach a system to prepare for more circumstances by pulling from millions and millions of hours of data, rather than relying on a pilot with maybe 10,000 hours; it’s the collective of all the systems that can reach the point where you have clearly covered more cases than any individual driver or pilot can ever deal with. That’s why we need a wide install base, and we need to install these things doing maybe not yet super-critical stuff, but gathering the data. That’s actually why I think Tesla is going to win the self-driving race; they have all this data that they can use to make their systems better at an exponential scale.

What kind of data do we need to properly compare the safety of autonomy systems to human pilots?

Luuk van Dijk: If I were the regulator, I would demand that these things at the edge of safety critical, to prove their continued airworthiness, should log all the exceptions. So it’s not just the flight recorder, a box that has the voices and what happened to the airframe, but a system where all the near incidents make it back into the data set so they remain near incidents.

Actually, [EASA] did something similar for the eVTOL special condition, which includes mandatory recording of everything that goes on with all the new technologies and types of operations that are going on, so that the whole industry can get to the next level.

What is missing to make the case for autonomy, for or against, is what doesn’t systematically make it into these data sets: near misses that didn’t become incidents because the pilot heroically saved the day. Because for every anecdote where Sully landed on the Hudson, we have a controlled flight into terrain with no apparent justification. So, to have an objective data set to argue that for the collective public health it would be better if humans did not have control of the aircraft, we need more data. And I think eventually that’s going to be driven by the insurance companies.

With that argument in mind, and back to Tesla, given that lots of data is needed to create an exceptionally safe autonomy system: should regulators require that flight data be shared amongst different companies developing autonomy systems?

Luuk van Dijk: I think the regulator should demand that it is safe . . . making it mandatory that data is shared has benefited the aviation industry in the past. You have to be careful because there needs to be a level playing field, but the regulator should defend the public interest in this. And if you’re going to subject the public, on the plane and on the ground beneath, to this technology, sharing data could stimulate public adoption and trust. So my instinct is to say that public data is good.

Daedalean visual positioning system
Visualization of Daedalean’s visual positioning system. Daedalean Image

How do we move toward integrating autonomy into the loop in a way that is safe, draws on the strengths of both humans and AI, and accounts for their weaknesses?

Luuk van Dijk: That is actually a tough one, and I think that the mixed era will be harder than the fully autonomous era. First of all, all the other actors you meet in traffic or in the sky have unpredictable behavior, and then there is a lot of room for misunderstanding. Having an effective human-machine interface will be critical as well; you don’t want a system to say — I’m going to land there — you want it to show a picture of its camera view and say — I’ve drawn a box around this thing because I think it’s a runway — and then as the pilot you can say, yeah, that’s not the runway.

In cars, complacency has definitely been a problem. If you ride in a Tesla for long, you’re going to take your eyes off the road for more time than your passengers are comfortable with. Currently the human is delegated to supervising the machine. I think a symmetric situation where they both monitor each other is better, where the machine determines whether the person is actually watching the road, and if you’re disengaged, then the system disengages as well. We have this problem in aviation too. Something unexpected happens, and because you’ve been so pampered by automation as a pilot — the boring bits of the flight are the longest — all of a sudden a warning goes off and you completely ignore it.

I don’t have a good answer to this. You have to design with the whole system in mind; the human should be part of your systems analysis rather than an external factor who will magically save the day, which is often how it’s treated. That’s an example of the huge loopholes in certification: if you delegate it to the human, then it’s all fine.

I still think that eventually statistics will show that Teslas with their distracted humans are net safer than people who try to keep their 20-year-old BMW on the road.

Can you describe Daedalean’s approach to packaging these machine learning systems we’re talking about into products that can slowly add more autonomy to aviation?

Luuk van Dijk: So we have a system for knowing where you are and for finding a place to land, with different systems for vertical landing and fixed-wing landing on runways. Fixed-wing was actually harder because it’s the one case where you’re close and things are fast, so with computer vision everything becomes a blur. Next year, we are putting our algorithms in a box made by Avidyne and selling it to the world.

detect-and-avoid software
Visualization of Daedalean’s detect-and-avoid software in action. Daedalean Image

Next, we want to expand the operational situations. We thought starting with visual flight rules (VFR) was a good thing to do because that’s clearly what humans can do, and if you can outperform the human on that playing field then we can talk about bad weather conditions later. So we clearly want to go beyond VFR. If you’re flying in instrument meteorological conditions (IMC), under instrument flight rules (IFR), it’s actually a much more complex system with more humans involved, because you have to talk to other humans over a voice channel.

I think first we want to roll this out as non-essential safety-enhancing systems, then as strong pilot advisory, and then go further from there. Garmin did us a huge favor by introducing their GPS-based auto-land. One way they argued it was a net positive safety case is that it’s clearly better than a dead pilot. So your pilot is dead, the passenger says — help, the pilot is dead — they push the button, and then out pop the emergency flaps, and the thing makes a best effort to land on GPS. And you know, they’re very good at GPS, so I’m sure it will mostly work.

But it would be nice to get to a system that is clearly better than a live pilot. And I think with our vision system, we can get there in the next couple of years, in a certified way, where you push the button and you say — I’m the pilot, I’m alive and well, I would like to try a manual landing — and the computer says — okay, we’ll have envelope protection, so I won’t let you fall out of the sky, and I’ll keep an eye on your airspeed and flaps, but okay, you go try to hit the center line and I’ll nudge you when needed. And I think that’s a system that is within reach.

Will you make a prediction on when the first approved, fully autonomous aircraft operations will begin?

Luuk van Dijk: I would say the first approved operations, 2026 . . . regular operations, 2028!
