Our last post discussed the explosion of artificial intelligence (AI) and how advertisers might use it to drive consumer loyalty.
In this post, we’ll focus on one specific area of AI: autonomous vehicles.
To the delight of many, self-driving vehicles are becoming a reality. Unfortunately for those advertising them, consumer trust is relatively low. Let’s explore the reality of that mistrust and look at the reasons behind it, as well as some ideas for building trust and interest in these exciting products.
Most Consumers Losing Trust in Self-Driving Vehicle Technology
In April, J.D. Power released its 2017 U.S. Tech Choice Study, which looks at consumer awareness, interest and price elasticity (by demographic) when it comes to the emerging technologies of certain vehicles.
One key finding was that trust in automated driving technology, which relies on AI, declined across nearly every generation. Compared with 2016, 9 percent more Pre-Boomers and 11 percent more Gen Z members claim to have a definite mistrust of the technology. Forty percent of Baby Boomers see no benefits of self-driving vehicles. Only Millennials (which the study refers to as “Gen Y”) haven’t lost trust since last year.
J.D. Power is clear that the reason for hesitation is not price. Younger generations are showing greater interest in autonomous vehicles, even though Boomers are more likely to express interest before knowing a price. Further, more consumers are willing to pay for advanced versions of emergency braking and steering (before they become standard) than for less expensive features like a dash camera or digital key.
The implications of this are real for advertisers in the automobile industry. Usually, when a technological concept is nearing reality, people become more interested in, and accepting of, it. For self-driving vehicles, however, consumer trust is going down, and that’s hindering general interest.
An upside is that consumers across the board continue to show interest in certain AI features, like collision protection and driving assistance, as they did in 2016. This includes:
- Smart headlights
- Camera rear-view mirror
- Emergency braking and steering system
- Lane change assist
- Camera side-view mirrors
- Advanced windshield display
The bottom line is that, despite self-driving vehicles’ ability to reduce collisions and provide mobility to those unable to drive, people simply aren’t comfortable with them – yet. Kristin Kolodge, executive director of driver interaction and HMI research at J.D. Power, says:
“Automated driving is a new and complex concept for many consumers; they’ll have to experience it firsthand to fully understand it. As features like adaptive cruise control, automatic braking and blind-spot warning systems become mainstream, car buyers will gain more confidence in taking their hands off the steering wheel and allowing their vehicles to step in to prevent human error.”
Why Is Trust in Some AI So Low?
In an article for the Harvard Business Review (HBR), a researcher and an analyst discuss why people are so willing to trust algorithms (as elements of AI) in some situations and so unwilling in others.
For example, few would harbor deep mistrust of the algorithms Amazon uses to suggest products, or even the use of auto-pilot on airplanes. However, as we know, such automation is questioned when it comes to self-driving vehicles.
The reason is selective trust in algorithms. We depend on algorithms more than ever, even for decisions historically made by human experts (such as financial investments), yet trust them at varying levels.
Part of what determines that variation is whether the outcome is subjective or objective. Confidence is high in algorithms that determine verifiable, objective things, like a person’s weight or which movies will top the box office. Humans are more trusted in subjective situations, and driving may well be perceived as a subjective, and personal, task. Humans also retain trust better than algorithms when the two make the same mistake.
A glimmer of hope for advertisers of autonomous vehicles is that people with higher numerical literacy are more trusting of self-driving and AI-using vehicles than the general population. One successful tactic might be targeting your ads to people who are comfortable with math and science and, presumably, technology.
The article finishes by reminding us that good technology alone isn’t enough to spur the necessary interest in products featuring artificial intelligence. AI must be introduced in ways that foster trust in potential users.
Helping Consumers Trust AI in Your Advertising
In a separate article, Harvard Business Review shares ideas on how to help consumers trust AI in products like vehicles and medical equipment – spaces where trust is essential for the success of the product.
According to the HBR piece, to better understand the level of trust consumers have in AI, we must consider trust in two dimensions:
- Trust in technology
- Trust in technology innovators
In the case of autonomous vehicles, trust is more easily destroyed than created, especially because the potential outcomes are matters of life and death. For consumers to maintain trust while giving complete control to a machine, that machine must offer predictability, demonstrate dependability, and appeal to faith:
- Predictability comes when the product performs as expected.
- Dependability comes from understanding the underlying logic and process of the technology.
- Faith refers to the product’s purpose – faith in the designer’s intentions.
For advertisers, it’s about establishing trust in self-driving vehicles and the AI involved by communicating the benefits of the products while concentrating on the three factors above. These efforts will make consumers more likely to adopt the technology.
It might be an uphill battle at the moment, but when advertisers focus on the good aspects of AI in vehicles, the technology can become something consumers feel confident using, even at top speeds.