Description: In October 2021, a coordinated network of more than 317 fake Twitter accounts exploited AI-driven social media algorithms to amplify disinformation about the Honduran presidential election, targeting opposition candidate Xiomara Castro. The campaign spread false narratives intended to suppress voter turnout and undermine the election's integrity. Twitter and Facebook removed the accounts only after being alerted, raising concerns about inadequate platform moderation.
Editor Notes: Reconstructing the timeline of events: (1) October 7, 2021: A coordinated network of 19 Twitter accounts posts identical disinformation about opposition candidate Xiomara Castro, falsely suggesting an alliance with Yani Rosenthal. The accounts use profile photos linked to uninvolved Peruvians. (2) October 6–14, 2021: Over 317 fake Twitter accounts amplify disinformation, creating feedback loops with a fake news website designed to resemble a legitimate outlet, spreading false claims about Castro and discouraging voter participation. (3) Early November 2021: Cybersecurity firm Nisos identifies the coordinated disinformation campaign and reports it to Twitter. (4) Early November 2021: Twitter removes the fake accounts after receiving the analysis from Nisos. (5) November 15, 2021: TIME publishes an article detailing the disinformation campaign and the role of AI-driven social media algorithms in amplifying the false narratives.
Entities
Alleged: X (Twitter), Meta, and Facebook developed an AI system deployed by National Party of Honduras supporters, Juan Orlando Hernández supporters, Unknown Twitter users, and Unknown Facebook users, which harmed Xiomara Castro, Libertad y Refundación (LIBRE) supporters, the Honduran electorate, Honduras, Democracy, and Electoral integrity.
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
At 10:16pm on October 7, a cluster of nineteen Twitter accounts shared identical opinions about the upcoming presidential election in Honduras at the exact same second. Claiming to be supporters of opposition candidate Xiomara Castro, they …
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents

Biased Google Image Results
· 18 reports

2010 Market Flash Crash
· 30 reports

Fake LinkedIn Profiles Created Using GAN Photos
· 4 reports