Accenture Surprised by AI Findings

Consulting firm Accenture has been running its digital consumer survey for about a decade, but the 2017 edition was the first to include questions about artificial intelligence. And the results surprised the company.

“There’s pretty intense interest in AI this year, and a year ago there wasn’t anyone talking about it,” said Charles Hartley, Accenture’s global media and analyst relations manager for communications, media and high-tech businesses. “Consumers aren’t intimidated by it at all, apparently. It’s a sobering message to human beings.”

The survey, which polled just under 26,000 consumers across 26 countries, found that 62% of people are comfortable with AI apps such as Amazon’s Alexa responding to a voice query, even though only 4% actually owned a standalone, voice-enabled digital device (like the Amazon Echo or Google Home) as of the end of 2016.

Nearly 90% of respondents said that artificial intelligence simply makes it easier to do things, and a third said they’re interested in using voice-enabled digital assistants available in smartphones.

Hartley made special note of the 68% who deemed AI “less biased” than humans and the 64% who said AI “communicates more politely.” More than half said AI is “less likely to make a mistake.”

Meanwhile, on Jan. 11, LinkedIn founder Reid Hoffman, the journalism-centric Knight Foundation and others announced they’ve created a $27 million fund to research AI applications for the public, with MIT’s Media Lab and the Berkman Klein Center for Internet & Society at Harvard University serving as academic research stations.

The hope is to bring a wide range of technology and academic voices into shaping the future of AI applications, the groups said in a statement.

“Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that,” said Alberto Ibargüen, president of the Knight Foundation. “Since even algorithms have parents and those parents have values that they instill in their algorithmic progeny, we want to influence the outcome by ensuring ethical behavior, and governance that includes the interests of the diverse communities that will be affected.”