Description: 143 deepfake ads, over 100 of which reportedly impersonated then-Prime Minister Rishi Sunak, were promoted on Meta's platform to advertise the fraudulent investment scheme "Quantum AI." Funding for the ads reportedly originated from 23 countries. Up to 462,000 users may have been exposed to the false content. The campaign used generative AI tools to create high-quality misinformation, including spoofed BBC news clips for added legitimacy.
Editor Notes: Reconstructing the timeline of events: (1) August 12, 2023: Reported start of the deepfake ad campaign on Meta; (2) August 1, 2024: End of the one-month investigation period during which Fenimore Harper identified 143 deepfake ads; (3) January 13, 2024: Fenimore Harper publishes its findings. Read the full report here: https://www.fenimoreharper.com/s/FENIMORE-HARPER-REPORT_-DEEP-FAKED-POLITICAL-ADS-V2.pdf. The full list of 143 identified ads by Fenimore Harper can be accessed here: https://www.fenimoreharper.com/s/Deepfake-Finance-Scam_-Full-List-of-Ads.xlsx.
Entities
Alleged: Unknown deepfake technology developers developed an AI system deployed by Quantum AI scammers, which harmed Rishi Sunak, Quantum AI victims, Meta users, and BBC News presenters.
Alleged implicated AI system: Unknown deepfake technology apps
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline

According to research that has raised the alarm over the risk AI poses ahead of the general election, more than 100 deepfake video ads impersonating Rishi Sunak were paid for in the last month alone (https://www.theguard…
EXECUTIVE SUMMARY
- In the last month, more than 100 deepfake video advertisements impersonating Prime Minister Rishi Sunak were paid to be promoted on Meta's platform.
- These ads may have reached more than 400…
Variants
Una "Variante" es un incidente que comparte los mismos factores causales, produce daños similares e involucra los mismos sistemas inteligentes que un incidente de IA conocido. En lugar de indexar las variantes como incidentes completamente separados, enumeramos las variaciones de los incidentes bajo el primer incidente similar enviado a la base de datos. A diferencia de otros tipos de envío a la base de datos de incidentes, no se requiere que las variantes tengan informes como evidencia externa a la base de datos de incidentes. Obtenga más información del trabajo de investigación.
Similar Incidents

Don’t Believe the Algorithm
· 4 reports

Fake Obama created using AI tool to make phoney speeches
· 29 reports