

Climate clarity, conflict, and global disinformation frontlines


Dear Disinfo update reader,

Welcome to the October edition of the Disinfo update! As we prepare to welcome over 600 members of the counter-disinformation community at our annual conference next week in Ljubljana, please expect slightly fewer website updates and webinars than usual from us this month.

This edition offers an abundance of the latest important developments, fresh insights, and opportunities to connect. As we close out the year, our Community is facing ever-growing challenges: from the broader geopolitical climate to the continued spread of Russian disinformation. We’re now seeing platforms like Mastodon and Bluesky being abused to serve propaganda campaigns targeting Europe.

At the same time, inspiring efforts are also happening at the national level: in Spain, with initiatives standing up to Meta’s advertising practices, and in the Netherlands, with a legal win that ensures users have access by design to a chronological feed, a step that helps reduce the risks of harmful algorithms and recommender systems.

These challenges also remind us of our strength: when we act together, we can find democratic, practical solutions to push back against disinformation. This is one of the reasons why our team spent time updating our newly designed Conflict hub, gathering common insights from information warfare incidents in Gaza and in Ukraine, as well as from other conflicts and crises worldwide, including health, environmental, and humanitarian emergencies. This is also why we updated our impact methodologies to support the effort to quantify the reach and influence of disinformation. 
We’re especially excited about #Disinfo2025, which takes place next week, on 15-16 October. With very limited space available, now is your last chance to register and secure your spot. It will be a special moment to connect, exchange ideas, and reflect on how we can move forward as a Community. We are looking forward to seeing you all in Ljubljana in a few days! For those who cannot be with us, our webinars will keep the conversations open and the community connected.

Get reading and enjoy!
 

Our Webinars
UPCOMING – REGISTER NOW!

23 October: Are AI Detection Tools Effective? Introducing TRIED, a WITNESS Benchmark | With the rapid development of generative AI, AI detection tools have become a key resource for information actors to verify the authenticity of content and combat disinformation. How do we ensure AI detection tools truly serve the people who need them most, and strengthen the work of fact-checkers, journalists, and civil society groups? In this webinar, Zuzanna Wojciak will present TRIED: the Truly Innovative and Effective AI Detection Benchmark, a practical framework developed by WITNESS to evaluate whether detection tools are genuinely useful and effective, while also guiding AI developers and policymakers in designing and promoting inclusive, sustainable, and innovative detection solutions.
20 November: How ANO Dialogue redefined Russia’s system of information control | This talk presents ANO Dialogue as one of Russia’s key tools for digital information control. ANO Dialogue illustrates how these practices have been centralised and bureaucratised. Unlike earlier covert troll farms, it is a large, openly operating organisation tied directly to the Presidential Administration. It employs thousands across Russia, coordinates state-run social media groups (“GosPublics”), and has already been sanctioned by the EU, UK, and US. Using CV-based analysis, we show how its recruitment, structures, and workforce mark a decisive shift in Russia’s system of information control.
PAST – WATCH THE RECORDINGS!

Operation Overload: smarter, bolder, powered by AI | The Russian propaganda operation targeted at media organisations and fact-checkers is stronger than ever. Operation Overload, first documented in June 2024, is now expanding to new platforms and utilising AI tools to target fact-checkers, media outlets, and international audiences with Kremlin propaganda. Aleksandra Atanasova and Guillaume Kuster discussed how the campaign has evolved, what makes it increasingly sophisticated, and how the community can respond.
Discover and watch our full webinar series recordings here.

Welcome to the Climate Clarity Corner, a special feature in this EU DisinfoLab newsletter. Here, we highlight the most crucial insights on climate change disinformation, spotlighting both the latest developments and resources that remain highly relevant.


Explore more through our Climate Clarity Hub, developed with the support of the Heinrich-Böll-Stiftung European Union, and join our efforts in building a more sustainable, disinformation-resilient future.

HEATED CLIMATE DISINFORMATION

Before COP30, a fresh coat of green. Petrobras’ influencer campaign raises greenwashing alarms ahead of COP30. As COP30 approaches, Brazil’s state oil company is using Gen-Z science and climate influencers to rebrand itself as a sustainability leader, despite plans to expand oil and gas production by 20% by 2030. Popular Instagram and TikTok creators are promoting “green” projects like algae-based biofuels, even as critics denounce the effort as major greenwashing.

Inside the Trump Climate Report: The Architects of Disinformation. The Union of Concerned Scientists reveals how a secret Climate Working Group produced a fossil fuel–backed disinformation report to justify rolling back the EPA’s authority to regulate greenhouse gases. The report’s authors, tied to industry-funded think tanks, used cherry-picked and debunked data to undermine climate science, a move now facing scientific and legal challenges for promoting politically driven “junk science.” In the same vein, Trump’s recent UN speech doubled down on climate denial, calling climate change a “hoax” and “the greatest con job ever perpetrated on the world.”

Forecast: disinformation. Australia’s first national climate risk assessment confirms that climate change is already affecting the country’s economy and communities. Yet Nationals senator Matt Canavan dismissed the findings as an attempt to “spread fear and panic,” while Sky News presenter Chris Kenny accused the report of “pushing the [climate] alarmists’ buttons.” Such claims echo a long-running conspiracy theory among climate deniers that Australia’s Bureau of Meteorology manipulates temperature data to exaggerate global warming.

“Greenlash” myth. Public support for corporate climate accountability remains strong. Despite claims of a so-called “greenlash,” new polling by Global Witness and Amnesty International shows solid public backing for corporate responsibility on climate.
Killing EU anti-greenwashing rules. The European Commission plans to abandon its flagship Green Claims Directive, a law intended to stop companies from misleading consumers with false environmental claims. The move follows pressure from the centre-right European People’s Party (EPP), which threatened to block the law during ongoing negotiations with Parliament and EU countries. While the Commission cites simplification as a motive, critics say the withdrawal undermines responsible companies and weakens climate accountability.
NEXT EVENTS
9 October. Event in Ottawa (Canada) & online. Climate at a Crossroads. Join leading experts in climate and economic policy, media, and civil society to examine how disinformation undermines environmental action and democratic governance. The event explores evolving tactics and highlights strategies to build resilience against climate disinformation. Register here.

13 October. Event in Brussels (Belgium). Decoding the climate & information crisis. On the occasion of the launch of The RePlaybook: A Field Guide to the Climate and Information Crisis, the Heinrich-Böll-Stiftung European Union organises a hands-on masterclass for climate campaigners, journalists, researchers, and policymakers. Register here.

15 October. Event in Ljubljana (Slovenia). Session: The usual suspects: Climate edition. This session explores how the “usual suspects” of climate disinformation adapt their tactics, and how new tools can help us counter them at a pivotal moment for the planet. Whistleblower insights expose how financial manipulation and greenwashing within the fossil fuel industry fuel mistrust and delay the energy transition. As COP30 approaches, discussions reveal how lobbying, spin, and false narratives shape global climate negotiations. The event also features the launch of The RePlaybook: A field guide to the climate and information crisis. Speakers: Lindsey Gulden – Research scientist and whistleblower, formerly at ExxonMobil; Henry Peck – Senior Campaigner, Global Witness; and Stephanie Hankey – Designer, technologist, and strategy advisor, Tactical Tech. To attend, don’t forget to register for the EU DisinfoLab Annual Conference.

22 October. Webinar: Trust Under Threat: Findings from the 2025 IPIE Expert Survey. The International Panel on the Information Environment (IPIE) invites participants to a webinar presenting its third annual Expert Survey. This year’s results reveal deepening concern: 72% of experts foresee a worsening information environment, the third consecutive rise in pessimism. The survey spotlights three urgent trends: mounting worries over disinformation, polarisation, and systemic fragility; alarm over the power, opacity, and unaccountability of major tech platforms; and growing self-censorship, harassment, and funding strain among researchers.

RESOURCES
The RePlaybook: A field guide to the climate and information crisis. Drawing on insights from 30 leading organisations and experts, The RePlaybook explores how today’s information systems, from attention-hungry algorithms to AI-generated clutter, shape public opinion on climate. Blending thought leadership with hands-on strategies, it helps communicators, journalists, campaigners, and researchers decode digital disorder, challenge tech paradigms, and strengthen climate discourse amid growing polarisation.

EDMO Training Series to counter Climate Disinformation. Module 1, with Paula Gori (EDMO) and Jos Delbeke (EIB). Module 2, with Stephan Lewandowsky (University of Bristol and University of Potsdam), John Cook (University of Melbourne), Simone Fontana (Facta).

Tool to track climate disinfo: Hot Air, by Tortoise. Use the Hot Air explore tool to see which topics are driving the conversation about climate change, from scepticism to outright mis-disinformation. Here is the core of the story of how the Hot Air project was born: How a dubious claim about whales went from fringe argument to presidential policy.

Training by AFP: Verifying climate claims. Don’t miss this free online course from AFP on tackling climate misinformation. In 45 minutes, it provides practical guidance on verifying climate-related content and claims, identifying ‘greenwashing’, recognising non-verifiable content and common types of misinformation, and selecting reliable sources.
Global Initiative for Information Integrity on Climate Change: The first group of projects was approved on 23 September under the Global Fund, with an initial USD 1 million. The current call remains open through May 2027.
AUDIOVISUAL CORNER
The media as a tool of climate obstruction. Podcast hosted by Drilled.

How you can use a ‘truth sandwich’ to fight climate misinformation. Podcast hosted by The Yale Center for Climate Communication.

2025: Climate disinfo reload, webinar with Ava Lee (Global Witness) and Ira Pragnya Senapati (Ripple Research). Hosted by EU DisinfoLab and the Heinrich Böll Foundation.

Silencing science: Trump’s war on our climate, webinar with Adam Levy. Hosted by ClimateAdam.

Heat is rising: Harmful environmental agendas & tactics in France, Germany, and the Netherlands, webinar with Logically analysts and an EU DisinfoLab policy expert. Hosted by EU DisinfoLab.
Understanding & countering climate change misinformation and disinformation, webinar with Philip Newell (CAAD), Cristina López (Graphika), and Dr. Sander van der Linden (University of Cambridge). Hosted by the Yale Program on Climate Change Communication.
ESSENTIAL READS
AI ‘slop’ websites are publishing climate science denial, by DeSmog.

Why climate disinformation thrives online and how to fight it at scale, by Zora Siebert for Tech Policy.

Antiscience is an existential threat, by Michael E. Mann and Peter J. Hotez for Time.

Fossil fuel billionaires are quietly bankrolling the anti-trans movement, by Atmos.

Information pollution. The fossil fuel industry’s favorite narratives, by Drilled.

Climate disinformation, peace and security: Good news, bad news, and key questions, by Council on Strategic Risks.

Climate Clarity hub has been developed by EU DisinfoLab with the contribution of the Heinrich Böll Foundation.

Disinfo news & updates

 

Moldova chronicles

Russian FIMI in Moldova. Using an undercover reporter, the BBC discovered a secret Russian-funded network which paid Moldovans to spread pro-Russian propaganda and disinformation to undermine Moldova's pro-EU ruling party ahead of the elections. The operation trained participants to post fabricated allegations against the government and President Maia Sandu. Through this investigation, the BBC found that payments would be made to volunteers through PSB bank, based in Moscow and directly linked to fugitive oligarch Ilan Shor. Furthermore, an article by DFRLab outlines how REST, a new pro-Russian media outlet, is targeting Moldovans ahead of elections. Forensic evidence links REST to the Kremlin-aligned Rybar operation through shared VK Cloud infrastructure, identical FTP setups, and security lapses exposing “Rybar” metadata. These campaigns seek to erode trust in Moldova’s pro-EU leadership by spreading anti-Western narratives across several social media outlets, leading up to an important moment in Moldovan elections.

AI bots flood Moldova’s Telegram. An Open Minds analysis has exposed a pro-Kremlin bot network on Telegram generating over 62,000 posts targeting Maia Sandu and her pro-European PAS party. The bots, many also active in Russian and Ukrainian channels, called for protests and sought to implant doubt in the integrity of Moldova’s elections, pointing to a wider Russian influence operation. With anti-government, pro-Kremlin content circulating on Telegram in Moldova and vastly outpacing pro-PAS material, researchers warn of a serious threat to electoral integrity.

Russians claim fraud in Moldovan elections. An analysis from NewsGuard’s Reality Check reveals how Kremlin-linked outlets spread false allegations of voter fraud after Moldova’s September 2025 election, won by the pro-EU party. Despite OSCE monitors confirming the vote was free of major irregularities, Russian media circulated fabricated stories and reused videos that were actually from past elections in Azerbaijan, depicting supposed ballot stuffing and burning. These baseless claims are a mere extension of the Russian disinformation campaign to destabilise Moldova’s pro-Western government.          

Politics, power, and polarisation

Musk’s political firestorm. At a recent far-right anti-immigration rally in London, former advisor to Donald Trump, billionaire Elon Musk, called for British citizens to “take charge” and “reform the government.” This follows his recent attempts to involve himself in UK politics, including claims that British Prime Minister Keir Starmer should be imprisoned over previously uncovered scandals, as well as a now-fragmented alliance with Nigel Farage, leader of the populist party Reform UK. Recent inflammatory remarks made by Musk have placed him at the centre of a debate between freedom of speech and the law. Musk’s message aimed at the “reasonable centre” warned that “violence will come to you” and declared “either you fight back or die.” These statements have attracted scrutiny from some who claim he may have broken the law by encouraging or inciting violence. As a result, the Liberal Democrat leader, Ed Davey, has proposed that any future government contracts with Tesla be blocked.

Far-right Facebook networks. A year-long Guardian investigation has uncovered how everyday Facebook groups have functioned as an engine of far-right radicalisation in the UK. The analysis of these groups, which had a combined 600,000 members, showed that these networks are rife with distrust of institutions and scapegoating of immigrants, often using dehumanising language. Experts warn they normalise extremist narratives like nativism and conspiracies such as the “Great Reset,” with posts serving as gateways to deeper radicalisation. The report links this toxic online environment, facilitated by algorithmic amplification, to real-world unrest, including the 2024 UK riots.

Social media ban hacks. Recent social media bans in Australia aimed at blocking children under 16 from accessing sites have stirred up conversations about age checks, the use of VPNs, and the use of AI. Studies conducted by the University of Melbourne found that most age detection tools can easily be bypassed using cheap disguises and even video game characters. Many of these loopholes were discovered and tested in July, when the UK introduced similar online age checks.

We say you want a revolution. Citizen Lab’s PRISONBREAK report uncovers an AI-driven influence campaign using over 50 fake X accounts to stoke unrest against Iran’s regime. The operation intensified in 2025, coinciding with Israeli military actions like the “Twelve Day War” and the bombing of Evin Prison. Investigators traced coordinated posting and AI-generated objects to what appears to be an Israeli government-linked entity or contractor. The network spread deepfakes, impersonated outlets such as BBC Persian, and recycled false narratives of chaos and collapse to push for regime change.

Public health under siege

Trump’s vaccine-autism claim. False claims linking vaccines to autism have resurged following a 22 September White House press conference. US President Donald Trump cited the Amish as supposedly autism-free because they “don’t take vaccines or pills”, a claim debunked by several studies. His remarks sparked a surge of disinformation on X and Instagram, amplified by conservative influencers and fringe sites. Adding fuel, US Health and Human Services Secretary Robert F. Kennedy Jr. said his agency would examine vaccine-autism links despite the CDC’s clear stance that no connection exists.

Russia’s expanding playbook

Russian, Chinese, and Iranian state media exploit Kirk assassination. A new NewsGuard report shows how Russian, Chinese, and Iranian state media seized on the September 2025 assassination of conservative US activist Charlie Kirk to push false narratives undermining their Western rivals and ultimately serving their own agendas. Russian media appeared to blame the assassination on Ukraine, Iran called it a Mossad operation by Israel, and China used the situation to mock the US as a deeply divided nation. Despite these organised foreign efforts to incite disinformation and encourage violence, investigators confirmed that the suspect was a US citizen who acted alone.

Russian-linked online content surge following Kirk assassination. An examination by the Institute for Strategic Dialogue (ISD) and the Centre for Internet Security (CIS) found that there was a surge of online conversations about a perceived threat of increased violence in the US, with several posts blaming the “radical left” for this. Moreover, it was found that these posts were not circulated by American users, but mostly by Russian-backed groups, one of them being an extension of the Russian disinformation campaign Operation Overload.

FIMI in Czechia. Pro-Russian disinformation is surging around the Czech election, with Prime Minister Petr Fiala describing it as a battle for the country’s geopolitical future. This campaign coincides with the rise of populist Andrej Babiš, who pledges to end military aid to Ukraine and seek “compromise” with Moscow. Since the publication of this article, billionaire populist Babiš has won the election, raising concerns over his alignment with Putin-friendly EU leaders like Orbán and Fico.  

Disinfo by design: Systems & industry

Russia’s global fact-checking front. This investigation, published by Lupa, reports on the establishment of the so-called Global Fact-Checking Network (GFCN) in Russia in late 2024. Presented as a global alliance for better information, the GFCN is used by Russian officials to frame fact-checking as a matter of national sovereignty, and its outputs overwhelmingly target Western media and institutions like the EU, often flipping narratives into counter-propaganda rather than promoting positive Russian messages. While European outlets denounce it as disinformation in disguise, GFCN members stand by their work, claiming it meets high editorial standards. Unsurprisingly, behind this initiative stands Dialog Region, a Russian interministerial centre in charge of monitoring the Russian internet. Don’t miss our webinar on this topic with Serge Poliakoff in November.

China’s surveillance and propaganda industries. This article by The Diplomat covers the discovery of leaks from two firms, Geedge Networks and GoLaxy, which reveal how surveillance and propaganda have become lucrative industries serving the CCP. The report details how Geedge builds and sells censorship and surveillance tools to authoritarian regimes, while GoLaxy runs AI-driven propaganda campaigns both at home and abroad.

Google ad archive. This excerpt warns of a major setback for election transparency after Google erased its EU political ad archive, which contained seven years of data on spending, messaging, and targeting across 27 countries. The Google Ad Archive, originally created amid concerns over interference in Brexit and the 2016 US election, was a crucial tool for accountability. While Google frames the move as part of a new ban on political ads, supporters argue that deleting the historical record undermines democratic oversight and erases important evidence of past campaigning.

Dutch courts try Meta. A Dutch court has found that Meta breached the EU’s Digital Services Act by defaulting users to personalised, data-driven recommendation feeds. The ruling, in a case brought by the nonprofit Bits of Freedom, requires Meta to let users easily choose and keep a non-profiled feed, with a €100,000 fine for every day beyond a two-week deadline that it fails to comply. The case highlights the DSA’s goal of giving users real control over the information presented to them, challenging Meta’s ad-optimising algorithm.

Political Adszheimer. Last week, as an extension of its decision to ban political ads in the EU, Google removed its political ad library for all EU member states, including the archive and all associated data (spending, targeting) going back to 2018. With Meta set to stop all political and societal ads from 6 October, concerns are piling up that it might do the same with its archive. The UK organisation WhoTargetsMe contacted the platform about this issue, but has received no reply so far. As a result, they’re looking to put together a collaborative effort to a) identify any existing copies (whole or partial) of EU political ad data, b) make copies to fill any gaps (again, whole or partial), and c) ultimately make these available via some form of permanent storage. If you think you can help, please let them know here.

AI Disinfo updates

Rolling Stone publisher sues Google over AI summaries. The Wall Street Journal: As AI-driven search and content tools expand, media and reference publishers are increasingly turning to the courts to defend their work. The publisher of Rolling Stone and The Hollywood Reporter, Penske Media, has sued Google, alleging that its new AI-generated search summaries (“AI Overviews”) illegally use journalistic content and divert traffic from publishers, marking the first major U.S. media lawsuit against the tech giant. In Europe, according to Mind Media and Press Gazette, several German and EU media have also filed a complaint under the Digital Services Act (DSA), arguing that Google’s AI Overviews breach transparency obligations and deprive outlets of revenue by creating a competing product from their content. Meanwhile, Reuters reports that Encyclopaedia Britannica and Merriam-Webster have launched a separate lawsuit against Perplexity AI, accusing the company of scraping and reproducing their material without permission.

More powerful than lies: Taiwan's 2025 recall campaign and the rise of AI-generated mini clips. Fact Link: A new study reveals that during Taiwan’s 2025 recall elections, hundreds of AI-generated “mini-clips” flooded Facebook and Instagram, spreading emotionally charged propaganda. Many were produced using Chinese AI tools and blended real footage with fabricated scenes mocking politicians and citizens involved in the recall. The clips also used demeaning symbols such as “frogs” to ridicule recall supporters, provoking strong emotional reactions and deepening social divisions. Researchers warn that such AI-driven mini-clips pose a growing threat to democratic debate, as they spread faster than fact-checkers can respond and replace dialogue with algorithmically amplified stereotypes and outrage.

Inside Russia’s AI-driven disinformation machine shaping Moldova’s election. EuroNews: Russia deployed an AI-powered disinformation machine ahead of Moldova’s parliamentary vote, held on 28 September: spoof sites mimicking Western media, paid “engagement farms” in Africa, and AI bots flooding comment sections to deride the pro-EU Party of Action and Solidarity (PAS) and to discredit the EU. Platforms reported takedowns (e.g. over 1,000 YouTube channels since 2024), while Chisinau set up a new anti-disinfo centre.

Meta launches super PAC to fight AI regulation. Axios: Meta fights AI regulation while expanding the use of that technology: As part of its growing push to shape the future of AI, Meta has launched a new super Political Action Committee (PAC) in the United States, called the American Technology Excellence Project, to fight what it describes as excessive state-level AI and tech regulations. The company plans to invest “tens of millions” to support candidates from both parties who back innovation-friendly policies, amid a surge of over 1,100 tech bills across state legislatures. Meanwhile, Meta will begin showing ads based on users’ AI chat interactions across Facebook and Instagram, as CNBC reports. The update, set to roll out in December, will integrate data from its one-billion-user Meta AI assistant into ad targeting, a move that ties its AI investments more closely to revenue but raises new privacy concerns.

The Indicator guide to AI labels: We’ve collected in one place how and when major platforms label AI content. Indicator: A new Guide to AI Labels developed by Indicator maps how major platforms disclose synthetic content amid rising regulatory pressure and technical uncertainty. Under the EU AI Act, developers must watermark AI outputs, while companies like Google (with SynthID) and OpenAI are expanding transparency tools. Yet detection remains weak: simple edits like cropping or re-uploading can erase metadata, leaving platforms to rely on voluntary user disclosure. As new standards emerge, from Spotify’s AI song labelling to Pinterest’s “see fewer AI pins” option, the landscape remains fragmented, exposing the limits of current provenance and labelling systems.

Want to stay on top of the latest in AI and disinformation? Our AI Disinfo Hub has just been updated. Take a look!

Brussels Corner


The future of cookie rules

The European Commission will present a digital omnibus package in late 2025 as part of its simplification agenda. The digital omnibus is an EU initiative aimed at simplifying and harmonising digital regulation by ensuring consistent application of rules and providing greater legal clarity. It addresses areas such as data governance, cookie and tracking rules, cybersecurity breach reporting, AI Act compliance, and digital identity services, while aligning with upcoming proposals, including the EU Business Wallet.

Pertaining to this, the Commission points to “outdated cookie consent rules” as contributing to widespread “consent fatigue” among users. Since users are so frequently asked for consent, they may not read the details carefully. Because cookies enable tracking that fuels hyperpersonalised content, they also shape what users see online, often narrowing exposure to shared information spaces and amplifying fragmented narratives.

In its consultation document, the Commission proposes a central cookie management mechanism for reducing unnecessary consent requests. This could mean that users will be able to adjust their cookie preferences in browser settings rather than on each individual website. However, at the September 22 European Parliament Civil Liberties committee meeting, responding to a question from MEP Markéta Gregorová, Yvo Volman, the Director of Data for the European Commission Directorate-General for Communications Networks, Content and Technology, stated that requiring users to actively opt in to tracking is not the Commission’s preferred solution under the ePrivacy Directive. MEP Birgit Sippel chimed in, highlighting potential manipulative banner design, but Volman provided no further insight into the Commission's plans.  

This lack of clarity raises concerns that simplification could lead to substantive weakening of other elements of the omnibus package. The current law only allows exceptions for cookies that are strictly necessary for a service people want (like keeping items in a shopping cart). Broadening this to include other types of cookies and oversimplifying the process risks creating a loophole for hidden tracking. This does not seem to bode well for other "burdensome" obligations on industry addressed by the Omnibus, such as cybersecurity incident reporting.

 


Reading & resources

Why Europe should stay strong in Trump’s culture war. This policy brief by the European Council on Foreign Relations (ECFR) warns that Europe is caught in a culture war with Trump’s America, both over values and over its role as a global actor. Using the Truman Show movie as a metaphor, the author, Pawel Zerka, contends that European leaders risk living in a reality scripted by Trumpists, who fuel polarisation and seek to undermine the EU’s standing. Despite strong public support for the EU and its values, Zerka argues that a leadership deficit leaves Europe exposed to this imposed reality. He urges European leaders to abandon appeasement, defend liberal principles, and pursue strategic autonomy to reclaim true agency.

New report on CopyCop (Storm-1516). A new Insikt Group report exposes the rapid growth of CopyCop (Storm-1516), a Russian covert influence network working to erode support for Ukraine and provoke political fragmentation within Western nations. Since early 2025, the operation has created over 300 new fake websites targeting a wide range of regions, including Canada and Armenia, and introducing new languages like Turkish and Swahili. Most notably, CopyCop now leverages self-hosted Large Language Models (LLMs), likely based on Meta’s Llama 3, to mass-generate biased pro-Russian and anti-Ukrainian content, thus increasing the difficulty of any attempt to disrupt the campaign.

The GoLaxy Papers. This podcast and article detail the discovery of an alleged Chinese information warfare operation that employs an army of AI personas. Following the leak of internal documents from the Beijing-based company, GoLaxy, researchers Brett Goldstein and Brett Benson revealed a new frontier in information warfare: AI-generated personas designed to infiltrate social media, gain trust, and manipulate narratives. Unlike your typical troll farms, this operation deploys highly realistic digital doubles to further Chinese national security interests in sensitive areas like Hong Kong and Taiwan.

Tracking EU 2024 political ads. A new comparative study led by EDMO BELUX with ten EDMO hubs examined political advertising across 15 EU countries during the 2024 European elections. The report finds that Meta and Google, two Very Large Online Platforms (VLOPs), only partially complied with the transparency and targeting rules under Regulation (EU) 2024/900. The findings highlight both gaps in platform accountability and the diverse ways political parties used online ads, offering context for Meta and Google’s recent withdrawal from EU political advertising.

New report by Hybrid CoE. Countering disinformation in the Euro-Atlantic: Strengths and gaps presents the results of a questionnaire sent to Hybrid CoE’s Participating States, the EU, and NATO to map their counter-disinformation tools and policies, along with qualitative insights drawn from the discussions of an annual workshop organised by Hybrid CoE.

Disarming disinformation research. A new case study by ICFJ highlights the crucial role of ethnic and Indigenous media in countering falsehoods during the 2024 US presidential election. Using interviews and survey data, the report finds that Donald Trump was a leading source of election-related disinformation, often amplified in closed spaces like WhatsApp groups. The study also highlights how the exclusion of Indigenous voices in mainstream media fuels disinformation and notes that, while trust in the press is generally declining, communities of color expressed less distrust in journalism. Ultimately, the study offers 22 recommendations for newsrooms and civil society to address these issues, especially concerning the targeted harassment of journalists.
This week's recommended read

This week’s recommended read is provided by our Community coordinator Ana Romero Vicente.  

If you’re following how disinformation seeps into everyday life, this BBC investigation is a powerful and heartbreaking case study. It tells the story of Paloma Shemirani, a 23-year-old woman who died after rejecting chemotherapy, deeply influenced by her mother, Kate Shemirani, a former nurse turned conspiracy figure known for spreading COVID-19 and anti-vaccine misinformation online.

Paloma had been told that conventional treatment offered a high chance of survival, but she instead turned to alternative “natural” therapies, including juices and coffee enemas.

The story becomes much larger than one family: it’s about how conspiracy thinking reshapes trust, how medical disinformation spreads within households, and how society struggles to protect young people when misinformation is framed as parental care. Since the investigation, journalist Marianna Spring has been flooded with messages from families facing similar fears, including grandparents worried that misinformation is stopping their grandchildren from being vaccinated.

It’s a striking example of the human cost of health disinformation, showing how online narratives move offline and shape real medical decisions. The piece also raises urgent policy and ethical questions about child protection, trust, and the fine line between parental responsibility and public safety.

The latest from EU DisinfoLab

Updated Impact-Risk Index. In collaboration with Amaury Lesplingart (CheckFirst), EU DisinfoLab’s Senior Researcher Raquel Miguel Serrano and Research Manager Maria Giovanna Sessa updated our Impact-Risk Index, which estimates the impact risk of individual hoaxes, to reflect the latest advances in AI and coordination techniques.

Thanks to Amaury, we are also releasing an updated automated Impact Calculator, based on the index, to streamline and improve assessments via data standardisation.  

We have also just shared a concise overview of a report, prepared for the veraAI project, that maps how different frameworks approach impact assessment and where gaps remain.

Conflict & Crisis hub launch. We’re proud to announce the launch of our new Conflict & Crisis Hub, a space to help our community understand how disinformation shapes and spreads during conflicts and crises worldwide. The Hub brings together the most relevant and insightful materials on disinformation and its tactics across multiple conflict settings, as well as during health, environmental, and other emergencies. It’s a carefully curated collection of news, research, and tools, all selected for their relevance, insight, and reliability. A single, trusted platform to explore what’s most worth reading, watching, and learning from.

Events & announcements

9 October: AFP webinar on investigating disinformation networks.

15-16 October: Our annual conference #Disinfo2025 takes place in Ljubljana, Slovenia. The perfect time to get your ticket is now – we bet you won’t want to miss it.

17 October: MLA4MedLit online conference. Stay tuned; the event description and list of speakers will be announced shortly.

23-25 October: The IPI World Congress and Media Innovation Festival will gather participants in Vienna to discuss topics under the theme “Defending the Future of Free Media.”

24-31 October: Global Media and Information Literacy Week 2025, under the theme “Minds Over AI - MIL in Digital Spaces”. Stakeholders around the world organise events, and UNESCO co-hosts the conference with a Member State in Cartagena de Indias, Colombia.

25 or 26 October: Researchers and practitioners working on trustworthy AI are invited to submit papers to TRUST-AI, the European Workshop on Trustworthy AI, organised as part of the 28th European Conference on Artificial Intelligence (ECAI 2025) in Bologna, Italy.

29-30 October: The 2nd European Congress on Disinformation and Fact-Checking, organised by UC3M MediaLab, will take place under the subtitle “Beyond 2025: Emerging threats and solutions in the global information ecosystem” in Madrid, Spain, with the possibility to join remotely.

19 November: Media & Learning Webinar: Understanding and responding to Health Disinformation. The session will offer practical tools for educators, librarians, NGOs, and civil society to counter pseudoscience and conspiracy narratives and to support informed health communication.

20-24 November: The 2025 Global Investigative Journalism Conference will be held in Kuala Lumpur, Malaysia.

17 December: Media & Learning Wednesday Webinar on “Lines of speech: hate, harm and the laws across borders”.

Jobs

EU DisinfoLab internships for 2026

Communication and Community internships

Policy and Advocacy internships

Global Media Registry is looking for a Project Lead & Analyst focusing on EU Media Regulation.

ProPublica is now hiring for the following positions:
Reporter.
News Applications Developer.

Did you find a job thanks to the listing in this newsletter? We’d love to know – please drop us a message!