ChatGPT
Incidents involved as Developer
Incident 625 (5 Reports)
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
2024-01-12
Products with names drawn from ChatGPT error messages, including lawn chairs and religious texts, are proliferating on Amazon. These names, which resemble AI-generated refusal text, indicate a lack of human editing and undermine the perceived authenticity and reliability of product listings.
Incident 615 (4 Reports)
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
2023-06-13
A Colorado Springs attorney, Zachariah Crabill, mistakenly cited nonexistent legal cases hallucinated by ChatGPT in court documents. The AI software provided false case citations, leading to the denial of a motion and legal repercussions for Crabill and highlighting the risks of using AI for legal research.
Incident 680 (3 Reports)
Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
2024-03-01
In early March 2024, a network named CopyCop began publishing news stories modified using AI, altering content taken from legitimate sources to inject partisan bias and disinformation. The articles were manipulated by AI models, possibly developed by OpenAI, to disseminate Russian propaganda. Over 19,000 articles were published, targeting divisive political issues and creating false narratives.
Incident 855 (3 Reports)
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
2024-11-30
ChatGPT has reportedly experienced errors and service disruptions caused by hard-coded filters that block prompts containing certain names, likely post-training interventions intended to prevent the model from producing potentially harmful or defamatory content about those individuals. The reported names are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, David Mayer, and Guido Scorza.
Incidents involved as Deployer
Incident 622 (6 Reports)
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
2023-12-18
A Chevrolet dealer's AI chatbot, powered by ChatGPT, humorously agreed to sell a 2024 Chevy Tahoe for just $1, following a user's crafted prompt. The chatbot's response, "That's a deal, and that's a legally binding offer – no takesies backsies," was the result of the user manipulating the chatbot's objective to agree with any statement. The incident highlights the susceptibility of AI technologies to manipulation and the importance of human oversight.
Incident 677 (1 Report)
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a TikTok trend in which users regularly manipulate ChatGPT into adopting boyfriend personas, breaching content policies. ChatGPT 3.5 reportedly produces explicitly sexual content on a regular basis, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations; although they showed more resistance to breaches, some prompts were still reported to break their guidelines.
Incident 678 (1 Report)
ChatGPT Factual Errors Lead to Filing of Complaint of GDPR Privacy Violation
2024-04-29
The activist organization noyb, founded by Max Schrems, filed a complaint in Europe against OpenAI alleging that ChatGPT violates the General Data Protection Regulation (GDPR) by providing inaccurate personal information such as birthdates about individuals.
Incidents implicated systems
Incident 968 (18 Reports)
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 939 (6 Reports)
AI-Powered Chinese Surveillance Campaign 'Peer Review' Used for Real-Time Monitoring of Anti-State Speech on Western Social Media
2025-02-21
OpenAI reportedly uncovered evidence of a Chinese state-linked AI-powered surveillance campaign, dubbed "Peer Review," designed to monitor and report anti-state speech on Western social media in real time. The system, believed to be built on Meta’s open-source Llama model, was detected when a developer allegedly used OpenAI’s technology to debug its code. OpenAI also reportedly identified disinformation efforts targeting Chinese dissidents and spreading propaganda in Latin America.
Incident 734 (4 Reports)
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 886 (3 Reports)
ChatGPT Reportedly Referenced During Las Vegas Cybertruck Explosion Planning
2024-12-27
Matthew Livelsberger, the suspect in the 2025 Las Vegas Cybertruck explosion, reportedly used ChatGPT to search for publicly available information on explosives, ammunition, and fireworks regulations. ChatGPT is alleged to have played a role in the planning of the explosion outside the Trump International Hotel in Las Vegas, though the information it provided was reportedly general and available through other public sources.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as Developer and Deployer
- Incident 734 (4 Reports)
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 855 (3 Reports)
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
Affected by Incidents
Incidents involved as Developer
General Motors
Incidents involved as Developer and Deployer
Affected by Incidents
Incidents involved as Developer and Deployer
- Incident 734 (4 Reports)
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 609 (2 Reports)
Flawed AI in Google Search Reportedly Misinforms about Geography
Affected by Incidents
Incidents involved as Developer
You.com
Incidents involved as Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as Developer and Deployer
Incidents involved as Developer
Perplexity
Incidents involved as Developer and Deployer
Incidents involved as Developer
Mistral
Incidents involved as Developer and Deployer
Incidents involved as Developer
Microsoft
Incidents involved as Developer and Deployer
- Incident 734 (4 Reports)
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 956 (1 Report)
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks