Written by Caroline Calomme, Technolawgeeks’ co-founder and product manager for connected car services at Be-Mobile. This blog post is based on a speech given at ‘L’intelligence artificielle au coeur de l’entreprise’ organized by CMS Belgium in October 2017 and what I’ve learned from my wonderful new colleagues.
We’ve always feared disruptive inventions in the field of transport. That’s nothing new. The first trains? People believed that the journey would melt their bodies. The passengers wouldn’t be able to breathe at such a high speed and their eyes would be damaged. The train rides could cause instant insanity. Even worse, the trains would make women’s uteruses fly out. It’s not the only mode of transport fueling the public’s anxiety. In the United Kingdom, the first cars were banned from travelling faster than 2 mph (3.2 km/h) in the city. Even bicycles were considered extremely dangerous. Those who dared to try this engine of death ran the risk of suffering from a terrible medical condition: the bicycle face. The speed and “the unconscious effort to maintain one’s balance” would leave you disfigured and scarred for life. In comparison to this, our reactions to autonomous cars seem almost reasonable.
Do we really understand the technology though? To many, artificial intelligence and mobility are synonymous with self-driving vehicles. It’s the first picture that comes to mind. Yet we’re only at the very start, with not-so-glamorous – although very practical – applications such as parking assistance, speed adaptation or lane centering. Autonomous vehicles fascinate us, but we shouldn’t confuse the potential of this technology with the reality. Just because they’re featured in movies and TV shows doesn’t mean we’ve gotten that far. You’ll still have to wait a while before taking a nap in your car after a long day at work. If you don’t believe me, have a look at those articles in Wired, Forbes, TechCrunch or the Huffington Post.
Of course, this doesn’t mean that we shouldn’t start reflecting on the policy implications (see also Doryane’s post). When they’re introduced on the market, self-driving cars will disrupt insurance schemes as we know them and raise serious ethical concerns. Do we want to build cars which obey every single rule on the road? Or shall we program them to know when it’s best not to follow rules to the letter? In the first scenario, we’ll need to clarify the hierarchy between obligations, to avoid situations where it’s impossible to obey one rule without breaking another (yes, this is an actual possibility, since the traffic code is still written by fallible humans). In the second scenario, where the vehicles are taught to think like us, there’s a chance that they’ll also do the math: cost of the fine < benefits of driving faster… There’s still a lot to think about!
Nonetheless, we tend to focus so much on the vehicles that we forget the infrastructure. That’s unfortunate, because that’s where we’ll see technological advances happening in the near future. Cars don’t interact only with one another, but also with road signs, traffic lights and much more (check out the European Commission’s website for more information on vehicle-to-infrastructure policies). Here’s a very concrete example. Today, we have dynamic boards on the road. Sometimes, they indicate a new maximum speed, due to road works for instance. Instead of only sending information to the cars, the boards can also receive information from them (for the techies among you, it’s of course a figure of speech). Imagine the next step: displaying the ideal speed at which cars should be driving based on the current traffic flow. This decreases the probability of accidents caused by hurried commuters and ensures that drivers do not slow down for no reason.
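For the techies among you, the idea of deriving a displayed speed from the measured traffic can be sketched with the classic Greenshields speed-density model from traffic engineering: speed falls linearly as the road fills up. This is only a toy illustration – the free-flow speed, jam density and rounding rule below are invented assumptions, not values from any real dynamic board.

```python
# Toy sketch: an advisory speed for a dynamic road board, derived from
# the current traffic density with the Greenshields linear model.
# All parameter values are illustrative assumptions.

def advisory_speed_kmh(vehicles_per_km: float,
                       free_flow_kmh: float = 120.0,
                       jam_density: float = 150.0) -> int:
    """Greenshields model: speed drops linearly as density nears jam density."""
    density_ratio = min(vehicles_per_km / jam_density, 1.0)
    speed = free_flow_kmh * (1.0 - density_ratio)
    # Round down to the nearest 10 km/h, as a board would display it.
    return max(10 * int(speed // 10), 0)

print(advisory_speed_kmh(30))   # light traffic  → 90
print(advisory_speed_kmh(100))  # heavy traffic  → 40
```

In a real deployment the density input would come from loop detectors or from the vehicles themselves, and the model would be far more sophisticated, but the principle – infrastructure computing advice from live data – is the same.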
True, we don’t need the infrastructure: we have apps. But let’s not forget that almost 20% of the population is over 65. True, we can’t always rely on the drivers following the advice. It’s a fact that until we remove the drivers from the equation by sending the information directly to an intelligent vehicle, we’ll unfortunately need to count on common sense (this video on how ghost jams start illustrates why it doesn’t work). On the bright side, there’s already a lot of data to learn from: the correlation between the number of trucks and the decrease in speed, the exits and times where traffic slows down the most, the impact of the weather, etc. If we took a step back, we’d realize that artificial intelligence can also help us reduce traffic jams even before controlling the vehicles (some inspiration here). While a vehicle parking itself in a crowded city when you go shopping has its perks, this tangible progress would already have a great impact, as anyone who needs to drive to work can probably attest.
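To make the “correlation between the number of trucks and the decrease in speed” concrete, here is a minimal sketch of the kind of analysis meant. The figures are entirely made up for illustration; real traffic data would involve millions of measurements, not five.

```python
# Toy sketch with invented data: how strongly does the truck count on a
# road segment correlate with the average speed observed there?

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

trucks    = [5, 12, 20, 35, 50]        # trucks counted per hour (invented)
avg_speed = [115, 108, 95, 80, 62]     # average speed in km/h (invented)

r = pearson(trucks, avg_speed)
print(round(r, 2))  # strongly negative: more trucks, lower speeds
```

Patterns like this one, learned from historical data, are what lets a traffic system anticipate slowdowns instead of merely reacting to them.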
But let’s get back to the key role of the infrastructure. Have you ever waited in front of a red traffic light at a crossroads where all the other traffic lights also happen to be red? And where the light is green for pedestrians although they’re nowhere near the crossing? If only you could let the traffic light know this makes no sense… That wouldn’t be very practical, because every driver would send requests and ask for priority. But what about vehicles transporting dangerous goods? Ambulances? Public buses which are already 10 minutes behind schedule? Even better, the traffic light could detect that 10 vehicles are waiting in one direction while there’s only one vehicle in the other direction and could take this into account. It could also recognize an elderly or disabled person who needs a little bit more time to cross (on that note, ‘there’s an app for that’).
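The decision logic of such a traffic light can be sketched very simply: each approach to the crossroads gets a score from its queue length plus a bonus for special vehicles, and the highest score wins the green. The priority classes and weights below are my own illustrative assumptions, not any real signalling standard.

```python
# Toy sketch: a smart traffic light ranking the approaches of a crossroads.
# Priority classes and their weights are illustrative assumptions.

from dataclasses import dataclass, field

PRIORITY_WEIGHT = {"ambulance": 1000, "dangerous_goods": 100, "late_bus": 50}

@dataclass
class Approach:
    name: str
    waiting_vehicles: int
    special: list = field(default_factory=list)  # e.g. ["ambulance"]

    def score(self) -> int:
        # Queue length plus bonuses for any special vehicles present.
        return self.waiting_vehicles + sum(
            PRIORITY_WEIGHT.get(s, 0) for s in self.special)

def next_green(approaches):
    """Give the green light to the approach with the highest score."""
    return max(approaches, key=Approach.score).name

north = Approach("north", waiting_vehicles=10)
east = Approach("east", waiting_vehicles=1, special=["ambulance"])
print(next_green([north, east]))  # the ambulance wins despite the shorter queue
```

A real system would also handle fairness (no approach starved forever), pedestrian phases and safety interlocks, but the core trade-off – counting vehicles and weighing priorities – is exactly the one described above.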
It’s time to shift the policy and legal debate to the real world. While I admire the willingness not to be outrun by technology once again, policymakers and legal experts might be overlooking fundamental advances that are a lot easier to implement than self-driving vehicles and also raise questions of liability, cybersecurity, public procurement, intellectual property, competition law, data protection, etc.