Trust Conference 2024: six things we learnt about the impact of AI on misinformation and the news business
On 22 October, hundreds of delegates congregated once more in central London for the Trust Conference, an annual event organised by the Thomson Reuters Foundation. Speakers included journalists, technologists and human rights advocates such as Maria Ressa, Kara Swisher, Jane Barrett, Ritu Kapur and our former director Rasmus Nielsen. Here are six takeaways from the conference on journalism worldwide.
1. The impact of AI on disinformation is still unclear
Despite the many warnings about the potential of AI to ‘supercharge’ disinformation, several speakers said it’s too early to know for sure if this will be the case. They also pointed to other factors that are driving misinformation and disinformation around the world.
Ritu Kapur, co-founder and CEO of Quint Digital Limited, said that AI-generated images and videos circulating during this year’s Indian elections were mostly for satire and entertainment, and there were no widespread cases of AI-generated disinformation.
“In the Indian elections, we didn’t need AI for disinformation because there was plenty happening without it. In fact, AI brought entertainment to the elections [for example, videos of Modi dancing etc.]. Largely, the threats to information in India were the same old. Most legacy media in India is owned by three or four corporations. Social media continues to be social media. As far as AI is concerned, we didn’t see anything dangerous in terms of disinformation,” she said.
Disinformation is by no means a new phenomenon. It was already an issue before the advent of generative AI, and even before the Internet. Even today, “the most consequential misinformation is authentic, it's from the most powerful people, and it's not necessarily online,” said Rasmus Kleis Nielsen, Professor at the University of Copenhagen and former director of the Reuters Institute.
Claire Leibowicz, head of AI and media integrity at the non-profit Partnership on AI, pointed to the so-called ‘liar’s dividend’, the idea that bad actors can convincingly dismiss real information by claiming it’s AI-generated.
“This idea that real imagery can be disregarded as fake as another tool in the toolbox of a public figure is actually something I am more worried about.”
— Claire Leibowicz (@CLeibowicz) of the Partnership on AI, via @trustconf, 22 October 2024
But this is not a new development either, she acknowledged, pointing to Donald Trump’s claim that the recording of him on the set of the TV programme Access Hollywood, which caused controversy in 2016, was fake. “He didn’t need AI to do that,” Leibowicz said.
Nevertheless, some recent trends have the potential to exacerbate the issue of identifying and removing disinformation from social platforms. Jeff Allen, co-founder and chief research officer of the think tank Integrity Institute, mentioned that tech industry layoffs have gutted election-related teams around the world and decreased cooperation between governments and social platforms.
Leibowicz also warned about the confluence of generative AI tools and sharing platforms, using X as an example. Subscribers can generate AI images with Grok, an AI tool, and share them on their feeds within seconds. “This is the extension of an existing problem, but the personalisation, the conflation of tools and a social platform, and speed and ease can exacerbate the problem,” she said.
2. Trust in news is facing many challenges
Many speakers highlighted a perception of ever-lower trust in news and journalists. The Edelman Trust Barometer 2024 found that business is trusted above both government and the media. Our own Digital News Report 2024 survey found that 40% of respondents around the world trust the news, a figure unchanged from last year but still four points lower than at the height of the coronavirus pandemic.
“Why is business the most trusted right now? Because they’ve been the least attacked. If you want to stop civil society and engagement, you make people doubt reality. Information warfare makes you distrust everything so society can’t move on,” said Maria Ressa, co-founder and CEO of Philippine outlet Rappler and winner of the Nobel Peace Prize.
Steve Hasker, president and CEO of Thomson Reuters, said that the proliferation of partisan and opinion-based journalism also affects trust. “There is a lot of opinion devoid of fact that is being portrayed as news. Consumers like to hear what reinforces their existing beliefs and biases, and if that is not properly labelled as opinion, it just perpetuates this problem. It started with cable news, then social media and now AI, which will be a continuation of the problem,” he said.
🗣 "Trust is a competitive advantage."
— trust conference (@trustconf) October 22, 2024
At the Trust Conference, @sjhasker, President and CEO of @thomsonreuters, discussed the role of businesses in ethical AI adoption. ⬇️ #TC2024 pic.twitter.com/nNZzHPB93V
Leibowicz, who spoke at a different panel, also touched on trust in her remarks, warning that transparency policies put in place by media outlets can sometimes have the opposite effect to what is intended.
As our research has also suggested, labelling content that has been produced with the help of AI, for example, can be misinterpreted and lead some people to distrust the outlet more. “There's a fine threshold between media literacy and scepticism that doesn't allow you to see anything as true. We need more reporting from tech companies on how they’re addressing these issues. Greater transparency is good if it's having the intended effect on the audiences we are trying to inform,” she said.
Graham Brookie, vice president and senior director at the Atlantic Council Technology Programs, who spoke at a later panel, said that “just the fact of generative AI itself contributes to the erosion of trust. Just the advent of it, not its impact or mechanics, erodes trust, and I don’t think we have a good answer to that yet.”
3. Regulation is not an easy fix
Speakers throughout the day had different views on the role of regulation of Big Tech platforms. Some argued strongly in favour of it, while others warned of the risks of governments becoming more involved in the information ecosystem.
Influential US tech journalist Kara Swisher was in the first camp, saying: “There's so much concentration of power, with platforms in the hands of people who are either incompetent or actively trying to shift an election, and neither Democrats nor Republicans are willing to regulate them.”
Kapur made the case for caution. “India is a bad country for regulation of tech companies. The Indian government has tried to control what’s shared online and taken down content without giving creators the chance to appeal. There’s a real risk of absolute censorship,” she said. “The problem lies in the government being the regulatory body. The regulatory bodies cannot be people in power.”
The significant role of messaging apps in the spread of news and misinformation in some countries poses an additional problem that is not easily solved through regulation. “It doesn’t matter if a piece of misinformation is labelled on Facebook or Instagram because there is no labelling on WhatsApp, where millions of Indians consume news,” Kapur said.
Richard Gingras, vice president of news at Google, also warned against overzealous regulation. “Are we aware of the dangers of going down the road of deciding what we can say and not? We need to be very thoughtful. When we’re seeing laws around the world against fake news, we have to be very careful because the next people in power could use them against the media,” he said.
Our Senior Research Associate Rasmus Nielsen made a similar point: “Conservatives and liberals used to agree that the foundations of our liberty are fundamental rights and institutions, not a group of powerful people deciding what is and isn’t true.”
4. AI tools are already embedded within newsrooms
Speakers across panels highlighted several AI initiatives they’ve implemented in their newsrooms: from chatbots to AI-assisted investigations to having AI handle tedious administrative tasks. Swisher urged newsrooms not to shy away from AI but to embrace the technology. “These are amazing tools. It’s like hating on electricity,” she said. “AI is everything. Start using it and understand what it can do for your businesses.”
While the implementation of AI in journalism seems inescapable, newsrooms are cautious and methodical about the way these technologies are applied. Both Jane Barrett, Head of AI Strategy at Reuters, and Glenda Gloria, Rappler’s Executive Editor, highlighted the need for guidelines to ensure that AI serves the newsroom’s mission rather than the other way around. “There are three guiding questions we ask. Does it enrich our journalism? Does it help us engage our public better? And does it improve the business?” said Gloria, describing how Rappler decides whether to adopt an AI tool.
🗣️ "AI sifts through the hundreds of press releases we receive daily and gets out the key facts for our journalists."
— trust conference (@trustconf) October 22, 2024
Jane Barrett (@NewsEdJane) of @Reuters examined how to integrate AI without compromising journalistic ethics. #TC2024 ⬇️ pic.twitter.com/6z7JjKPJnV
5. News publishers are concerned about their intellectual property
With several media organisations either embroiled in legal battles with AI companies or striking deals with them, the conversation about the use of news content and news publishers’ intellectual property picked up steam.
Several panellists voiced discomfort with their content being scraped by these companies without any compensation, permission or credit. “We don't want to create a situation where all the risk is on the content producers and all the profit is on Big Tech,” said Roman Anin, founder and publisher of the Russian independent outlet iStories.
Steve Hasker from Thomson Reuters called attention to the fact that “tech companies are extremely good at not paying for the content but keeping the eyeballs.”
His colleague Jane Barrett said that regulation of AI across borders is tricky for an international news organisation like Reuters. However, they can still use existing laws to protect their content from being used without permission by AI companies.
“What I come back to is that we already have laws around copyright and IP. Let's implement those properly first before getting into regulations,” she said.
6. AI will have an impact on business models – for better or for worse
“AI is a set of tools that can help us serve our mission, engage with the public, and maybe help us make money,” said Vivian Schiller from Aspen Digital while moderating the panel on producing news in an AI-driven world. The question of how AI might affect journalism’s business models loomed large throughout the day, particularly the financial viability of the news industry amid the rise of generative AI.
Ressa suggested that news organisations are facing an increasingly difficult environment. “Why didn’t journalists manipulate the worst of humanity to get more money? Because we have standards and ethics,” she said. “The dilemma of today isn’t a journalism problem: we have to compete in an environment that rewards the worst journalism.”
Nielsen pointed out that generative AI has the potential to speed up the production of low-quality journalism to get more eyeballs and, thus, more money. He also highlighted that it could threaten the financial viability of smaller, independent news outlets. “If generative AI becomes more essential to how people access info, it will reinforce the winner-takes-most dynamics we see, where a small number of rich, English-language outlets will retain the attention of well-educated, richer audiences,” he said.