Artificial intelligence (AI) offers tremendous promise to solve problems and improve the quality of life across the globe. It is a transformative, general-purpose technology with the potential to influence entire economies and fundamentally change society.
From predicting the structure of proteins to controlling nuclear fusion reactions, the potential benefits are vast. This even extends to the creation of robots to pollinate plants. AI also improves a myriad of more mundane tasks, including medical record keeping, customer service chatbots, and supply chain optimization.
AI has accelerated drug discovery, enabled more accurate climate modeling, and aided in the detection of astronomical phenomena. Moreover, AI-powered language models are revolutionizing how we interact with computers, making it easier to access information, generate content, and automate repetitive tasks. While the full extent of AI's impact is still to be seen, its transformative potential across various domains is undeniable. Management consulting firm McKinsey estimated that generative AI – the latest AI breakthrough – could add up to $4.4 trillion annually to the global economy. Amy Webb, CEO of Future Today Institute, described at SXSW 2024 how AI is the driver of a “potent and pervasive” technology “supercycle.”
While AI’s benefits are many, people are losing trust in the technology and the companies that produce it. This creates a paradox of progress, in which AI’s vast potential is shadowed by diminishing trust. The path to a brighter future lies in collective efforts to rebuild that trust.
The AI trust tightrope
We are also seeing that as the flywheel of innovation speeds up, trust in innovation declines. This is particularly noticeable with AI innovation and the companies that build the technology. That is one conclusion that can be drawn from the 2024 Edelman Trust Barometer deep dive on the technology sector and AI.
The results show a clear decline of trust in AI companies since 2019, falling from 62 percent who trusted AI companies in 2019 to only 54 percent in 2024, as measured across 24 countries around the world. That is an 8-point fall in five years, from clearly trusted to neutral. It is even worse when looking only at the U.S., where trust in AI companies fell from 50 percent to 35 percent over the same period, a 15-point decline into clearly mistrusted territory.
Why the decline of trust in AI companies?
There are many causes for this slippage. The COVID-19 pandemic certainly did not help, as it contributed to declines in trust in science, technology, and institutions of authority. On top of that, the five years from 2019 until now have seen dramatic advances in AI capabilities, but also heightened awareness of the risks that come with the technology.
For example, during this time our ability to trust what we see and hear has been continually eroded. Deepfakes – synthetically altered or generated images – first appeared in late 2017. Initially, these fakes were difficult to create and their quality was poor, but producing them has since become dramatically easier, and the output is now photorealistic. Tests have shown that people have a tough time distinguishing real faces from those that are AI-generated; what is more, they respond more positively to the generated images. Five months after the release of OpenAI’s DALL-E 2 image generator in 2022, 1.5 million people were generating 2 million images a day.
This was followed by Midjourney, a similar tool, which was used to produce the winning entry in an art competition in 2022. Even though the entrant disclosed that his submission was created with an AI tool, the judges found it convincing enough to award it first prize. From that point forward, the distinction between what is real and what is fake has become ever more difficult to draw. This was also the moment that started a backlash against AI and the companies that develop the technology, a through line that can be traced to the actors’ and writers’ strikes in 2023.
It is not only AI images that erode trust. In recent elections, social media platforms were flooded with AI-produced misinformation, generated using the output of large language models such as ChatGPT. In 2023, the Washington Post reported that the use of AI by some “is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.”
It is difficult to trust what you see, and with the rise of AI-powered speech generation, it is now not always easy to trust what you hear either. There have been scams that clone children’s voices to extort money from concerned loved ones, and similar schemes to extract money from banks. Even more sophisticated was the deepfake scam that cost a global financial services firm $25 million.
Fortunately, policymakers are now aware of the deepfake challenges posed by AI. For example, in the U.S. the National Institute of Standards and Technology (NIST) has taken on the challenge of identifying AI-generated content and tracking its origin. AI developers are increasingly doing their part to counteract deepfakes through watermarking and provenance tracking.
There are still other trust challenges, exemplified by recent news headlines such as “U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI” and “Intelligence officials warn pace of innovation in AI threatens US.” These headlines are effective at catching readers’ attention, and they contribute to the decline of trust in innovation, in AI, and in the companies that develop the technology.
These concerns come on top of ongoing issues of bias. There are the biases of the companies that build AI technologies, evident in their guardrails: the guidance developers provide about what text or images their products can and cannot produce.
An even bigger problem is bias in the underlying data used to train the models. Much of this data is scraped from across the internet and reflects all human biases, which are baked into the models once they are trained. In response to bias complaints, companies have changed their models and are still trying to align the output appropriately with ethical standards and societal expectations, underscoring how advances in AI can inadvertently magnify existing societal issues. Demonstrating genuine interest in public welfare will improve trust.
Companies building and using AI must work to rebuild trust, especially as the technology becomes even more advanced and pervasive. Many companies have made positive efforts, including establishing codes of AI ethics that cover transparency, accountability, fairness, privacy, and security. These are the basics of responsible AI. Trust can only be built by embedding these ethical principles into AI development processes and applications, and by communicating openly with the public.
Yet the pace of advancement and the nature of competition create pressures to release products before they are thoroughly evaluated and explained. Ideally, companies would slow the pace of product introductions until testing is complete. Short of that, they should be as transparent as possible about how they assess and minimize harmful uses. Working closely with regulators and policymakers to develop sensible AI governance frameworks would help too, as individual privacy and public safety should be at least as important as profits.
A greater collaborative ethos among all stakeholders – developers, policymakers, businesses, and the public – combined with a commitment to responsible AI and an unwavering dedication to public engagement, will lead to improved trust in the transformative power of AI. In doing so, we can create a world that reflects our highest aspirations.
Gary Grossman is Senior Vice President, Global Lead of Edelman AI Center of Excellence.