Is This The Safest Car In The World?
A crash causation expert breaks down the real problems in human driving behavior — and how technology could stop crashes for good.
Just before the close of 2023 we asked a simple question: What does the future of the family car look like? We know what’s coming in the next year, and a lot of it looks electric. But dig a little deeper and you’ll find carmakers looking far beyond the inevitable electrification of America’s fleet. Volvo, for instance, announced in 2014 that by 2020 nobody would be killed or severely injured in a Volvo car. They’ve come close to hitting that target, though there are still a few deaths each year. “People are actually very good drivers,” says Mikael Ljung Aust, a cognitive scientist for Volvo who holds a PhD in crash causation and countermeasure evaluation and has been studying accidents for over two decades. “But we have so much traffic, it ends up being a significant number of dead people anyway.”

So how do you solve the problem of … driving in traffic? In our conversations with Aust, the answer lies in a marriage of technology and behavioral psychology. Like the technology found in the forthcoming EX90, Volvo’s new flagship seven-passenger electric SUV, which debuts later this year.
That car will be the first-ever Volvo with a LiDAR (and other sensor) array housed in an “eyebrow” above the windshield, and it will include a system that monitors the driver’s eye movement and body language to be sure you’re equipped to handle the constant hazards of driving. The EX90 can take over in an emergency, preventing an accident, and can even pull the car over if a physical event leaves you unable to drive. Volvo calls this package an “invisible shield of safety.” But even with all this advanced tech, Aust says none of it matters if we don’t trust it to save ourselves and our families.
You mention trust, but we have a tendency to say we “love” our cars, which you’d think means we also trust them.
There’s a French philosopher, Maurice Merleau-Ponty, who identified what he called “cognitive artifacts,” the idea that people identify with certain objects. We develop a very emotional and personal relationship with them, which is why people care about how their cell phone looks, for example. When you analyze it, that can seem weird, but the extension is part of “me”: it does good things for me, and it forms part of my identity. If you listen to descriptions of how people feel “connected” to their cars, it’s complete nonsense physiologically, because you do not have neurons extending into the bumpers. But psychologically it is real.
Ok. But we also get mad at our cars, or our phones. How have carmakers misunderstood this trust, and how we behave as drivers?
Early on, to deal with visual distraction, we thought, “Ah, let’s do voice interaction with everything, because then you never have to take your eyes off the road.” And we tried a bunch of voice interaction systems. We found exactly the same off-road glance times for people using voice as for people using the equivalent touchscreen. And when you watch the experiments, you see that people talking to the car always look at the center screen, to make “eye contact,” as if it would be impolite to look elsewhere.
So it’s like with a human being, we have an instinct to make eye contact with whoever—or whatever—we’re talking to?
Right. You’d never not look a person in the eye while talking to them! We talk with our eyes, and this just wasn’t well understood.
Which leads us to the car of the future. If a high-tech car is supposed to help me be safer but maybe doesn’t always work well, what happens?
In psychology, there is a concept called marginal trust, which is really a fancy way of saying that in every interaction, your trust level can go up, stay the same, or go down. When people feel like their cognitive artifacts are failing them, because they’ve invested emotion in them, it’s like their friends are bailing on them. If you’re right 10 times out of 10, that’s fantastic, and Volvo would love that. But that’s not going to happen. And we don’t need to be “right.” What the car tells you based on its sensing, and what it wants to do, has to be perceived by you as intelligent and meaningful. If the car screams at me for some reason and I look out the window and there is an object there, that makes sense: I would have slammed into it. But if that input is not meaningful, then it’s noise, and we tune noise out really quickly.
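To make the asymmetry Aust describes concrete, here is a toy sketch of a marginal-trust update, in which a false alarm costs more trust than a justified warning earns. The update rule and every number in it are our own illustration, not anything Volvo has published.

```python
# Toy illustration of "marginal trust": every alert either builds or erodes
# trust, depending on whether the driver perceives it as meaningful.
# The update rule and numbers are invented for illustration only.

def update_trust(trust: float, alert_was_meaningful: bool) -> float:
    """One interaction: trust climbs a little or drops a lot."""
    if alert_was_meaningful:
        return min(1.0, trust + 0.05)  # a justified warning builds trust slowly
    return max(0.0, trust - 0.20)      # a false alarm erodes it much faster

# A driver who gets a few false positives mixed in with good warnings
# soon tunes the whole system out as "noise."
trust = 0.5
for meaningful in [True, True, False, True, False, False]:
    trust = update_trust(trust, meaningful)
print(f"Trust after six alerts: {trust:.2f}")  # 0.05
```

Under these made-up numbers, three false alarms wipe out the credit from three good warnings, which is the dynamic behind drivers switching such systems off.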
This is where some of the sensors in the EX90 play a role, right? By learning you very specifically: how you drive, what your body language is?
So there’s a window of time, about one and a half seconds before a crash. The assumption used to be that you wouldn’t want to be there. But as it turns out, one and a half seconds is plenty of time; some people are actually in control. So we designed the system so that someone who’s looking somewhere else would still be able to react in time. If the system can assess that you already have your eyes on the potential problem, we can actually wait to warn you. But if you’re still not sorting it out, so to speak, then you probably need the warning. The result is that we alert you at the same level of criticality a distracted driver would get, and we know you’ll appreciate that. People actually think the system saves them a lot more than it actually does, which is good for us, and for trust.
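As a rough sketch of the gating logic Aust describes, the decision might look something like the Python below: within the final ~1.5 seconds before a predicted crash, warn only if the driver does not already appear to be handling the hazard. All names and thresholds here are hypothetical illustrations; Volvo’s actual implementation is not public.

```python
# Minimal sketch of gaze-aware warning logic: stay quiet for an attentive
# driver, alert a distracted one while there is still time to react.
# Hypothetical names and thresholds; not Volvo's actual implementation.

from dataclasses import dataclass

TIME_TO_CRASH_WINDOW = 1.5  # seconds, per the window Aust describes

@dataclass
class DriverState:
    eyes_on_hazard: bool  # cabin camera: is the driver's gaze on the threat?
    responding: bool      # is the driver already braking or steering away?

def should_warn(time_to_crash: float, driver: DriverState) -> bool:
    """Fire the alert only when the driver seems not to be in control."""
    if time_to_crash > TIME_TO_CRASH_WINDOW:
        return False  # too early; a warning here is a false-positive "pest"
    if driver.eyes_on_hazard and driver.responding:
        return False  # driver is sorting it out; stay quiet, preserve trust
    # Distracted or unresponsive: alert at full criticality while there is
    # still time to react.
    return True
```

The point of the gate is the trust economy from the previous answer: an attentive driver never hears an alarm they would resent, while a distracted one still gets it in time.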
Okay, so trust is partially built around a car that’s not a pest? It helps me when I need help but isn’t a backseat driver?
Right. We don’t want to dilute alerts with false positives, where you’re in perfect control and don’t really need us slapping you with a warning.
But right now a lot of safety systems, like lane keeping and adaptive cruise control, are all blaring lights and bings. We’re more apt to turn them off.
Well, we have to come back to what we’re trying to solve. Accidents are actually few and far between. We have a traffic-volume problem more than a driving problem. People are actually very good drivers. But we have so much traffic, it ends up being a significant number of dead people anyway.
Okay, but how then do we solve the trust issue with tech that can prevent accidents?
This is a conversation we have to get right over the next few years. Imagine yourself walking into a five-star hotel; there’s a certain level of service. That kind of place seems to know your needs before you even know them. What we want is our sensors doing that for you, so that you equate “premium” with a system that anticipates your needs by seeing farther down the road than you ever could, so you’re liberated from the worry of getting into serious trouble. And also by “learning” you, which we can do now. The idea is like a very nice robot assistant in the passenger seat. There has to be a conversation between the tech and the driver that lets that robot earn its seat beside you, that lets that trust form, because if you decide to kick it out, you get none of the benefits.