An Interactions/Harris Poll reveals what people find creepy about AI
FRANKLIN, MA – With every click, download or voice command, AI has another data point to slip into its back pocket – ready and waiting to help inform business decisions, marketing strategies and campaign targeting. To date, companies have been experimenting with how to use this data to dazzle customers. From alerting people when their milk is low, to pre-selecting their online shopping carts, to helping them book a vacation, brands have cast the deciding vote on how and when consumer data should be used. But without insight from customers, they’ve been operating in the dark—blindly walking the line between helpful and creepy.
That’s why Intelligent Virtual Assistant leader Interactions commissioned The Harris Poll to conduct an online survey of more than 2,000 American adults in August to pinpoint exactly where the “creepy” line is, and when AI crosses it. In the process, we identified consumers’ comfort levels with AI’s use of their personal information, and what tips the scale from helpful to creepy. Here are the top consumer concerns that crossed the creepy line:
Just where did you get that information?
What’s most off-putting to consumers is when an AI system knows information they didn’t directly provide it, or information that involves other people in their social networks. The survey found that up to half of Americans think it’s more creepy than helpful when:
- AI knows other household members’ past interactions with a company (52 percent);
- It uses social media data to make suggestions (50 percent);
- It knows past purchase history from a different company (42 percent).
Regardless of a company’s intentions, people find it more creepy than helpful when AI uses data from unknown sources. Consumers like to feel in control of how businesses use their information, or at least informed about it. When companies lose sight of this, AI can quickly cross the line. This underscores how important it is for AI systems to make clear to consumers how their data is gathered and used.
Bot or Not? Consumers want to know
While creating a persona for AI-powered virtual assistants can make conversations with customers more engaging, these systems should never mislead consumers into believing they are speaking with an actual person. Amid growing criticism of this issue, sounding human without disclosing that the assistant is not can absolutely cross the creepy line. The survey found the following characteristics to be beneficial in creating a positive customer experience:
- AI-powered voice assistants have a human-like voice/personality as opposed to a computer-generated voice (70 percent);
- The voice assistant has an accent similar to their own (63 percent).
However, 2 in 5 Americans find it more creepy than helpful when AI sounds or interacts like a human without notifying the caller that it’s a virtual assistant (42 percent). As AI technology advances, mistaking an AI-powered virtual assistant for a human will only get easier, making it even more important for AI systems to disclose this information up front.
The Good Side of AI: Helpfulness and Transparency
Monitoring consumer behavior and purchase history is one of the most effective ways for companies to personalize and enhance the customer experience. The Interactions/Harris Poll shows that consumers often have no qualms with this strategy. In fact, about two in five Americans find it more helpful than creepy when AI:
- Knows their past interactions with the company (42 percent);
- Uses past order history to make suggestions (42 percent);
- Proactively reaches out with important information such as bill pay reminders or sales (41 percent);
- Uses past order history to determine why they are contacting them (40 percent).
What’s more, almost three-quarters of consumers (72 percent) will tolerate “invasive” AI for certain reasons, such as alerting them to a potential issue (40 percent) or helping them resolve a problem or complex issue quickly (36 percent), among others.
Although it’s beneficial for AI to have some degree of knowledge about consumers, the AI creepy line does exist, and companies should be aware of it. For more detail on the study’s findings, including a breakdown of generational differences in preferences and comfort with AI, see here. To review the full Trust in AI Solutions report for further insight around preference for voice-based AI solutions in enterprise customer care, see here.
Survey Method
This survey was conducted online within the United States by The Harris Poll on behalf of Interactions from August 14-16, 2018 among 2,022 U.S. adults ages 18 and older. This online survey is not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated. For complete survey methodology, including weighting variables and subgroup sample sizes, please contact interactions@launchsquad.com.