Perplexity
Incidents involved as both Developer and Deployer
Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
2024-07-22
During a week of back-to-back major breaking political news stories, including the Trump rally shooting and Biden’s campaign withdrawal, AI chatbots reportedly failed to provide accurate real-time updates. Most chatbots gave incorrect or outdated information, demonstrating their current limitations in handling fast-paced news. These incidents suggest a continuing need for improved AI capabilities and for caution in deploying such systems for real-time news dissemination.
Incidents involved as Developer
Incident 968 · 18 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 964 · 1 Report
AI-Powered 'Insights' Feature for the Los Angeles Times Allegedly Justifies Ku Klux Klan’s History
2025-03-04
The Los Angeles Times removed its AI-generated “insights” feature after it allegedly produced a defense of the Ku Klux Klan. The AI reportedly framed the hate group as a product of societal change rather than as an extremist movement. The tool, developed by Perplexity and promoted by owner Patrick Soon-Shiong, was designed to provide “different views” on opinion pieces.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
You.com
Incidents involved as both Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as both Developer and Deployer
Incidents involved as Developer
OpenAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Mistral
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Microsoft
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Meta
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Inflection
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News