Do Consumers Have Too Much Trust for Generative AI?

According to a new consumer survey from the Capgemini Research Institute, 73 percent of consumers globally trust content created by generative artificial intelligence, reporting they will use it to assist with financial planning, medical diagnosis and relationship advice. Within the institute’s report, called “Why Consumers Love Generative AI,” researchers explore how consumers globally are using generative AI applications and how it could be the key to accelerating society’s digital future.

The report takes into account research from data aggregator tools and a quantitative survey of 10,000 consumers over the age of 18 across the U.K., the U.S., Australia, Canada, France, Germany, Italy, Japan, the Netherlands, Norway, Singapore, Spain and Sweden.

Just more than half (51 percent) of consumers say they are aware of the latest trends in generative AI and have already explored the tools. At the same time, the authors of the report said adoption of first-wave generative AI tools has been notably consistent across age groups and geographies. Consumers who report using generative AI frequently say they are most satisfied with chatbot, gaming and search use cases.

Many consumers have also explored using generative AI platforms for personal, everyday activities, and 53 percent say they "trust generative AI to assist with financial planning." Some 67 percent of consumers say they believe they could benefit from receiving medical diagnoses and advice from generative AI, while 63 percent told the company they are excited by the potential for the technology to aid more accurate and efficient drug discovery. Meanwhile, two-thirds of consumers said they would seek advice from generative AI for personal relationships or life and career plans, with Baby Boomers the most likely to do so at 70 percent.

However, while trust in the technology is high, the authors of the report also found that consumer awareness of the ethical concerns and potential misuse of generative AI is low.

Almost half (49 percent) of survey respondents reported being “unconcerned” by the risks of generative AI being able to create fake news stories and only 34 percent voiced concern about phishing attacks. When asked about ethical issues surrounding the technology, only 33 percent of respondents said they are worried about copyright issues and 27 percent said they are worried about the use of generative AI algorithms to copy competitors’ product designs or formulas.

To this end, Niraj Parihar, chief executive officer of the Insights & Data Global Business Line and member of the Group Executive Committee at Capgemini, said that while regulation is critical, “business and technology partners also have an important role to play in providing education and enforcing the safeguards that address concerns around the ethics and misuse of generative AI.”

Moreover, at Capgemini, Parihar said the goal is to help clients "cut through the hype and leverage the most relevant use cases for their specific business needs, within an ethical framework." He added that the key to successfully using generative AI lies in the safeguards humans build to guarantee the quality of its output, noting that generative AI is not intelligent in itself; its value comes from the human experts whom these tools will assist and support.

The authors of the Capgemini Research Institute's report advise that the growing availability of generative AI tools presents a significant opportunity for businesses, especially in serving consumers who are seeking recommendations for new products and services. Currently, 70 percent of consumers seek these kinds of recommendations, and 43 percent are "keen for organizations to implement generative AI throughout customer interactions."