ChatGPT
Incidents involved as Developer
Incident 625 · 5 Reports
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
2024-01-12
Products bearing names that resemble ChatGPT error messages, including lawn chairs and religious texts, are proliferating on Amazon. These names suggest the listings were generated with AI and published without editing, undermining the authenticity and reliability of the product listings.
Incident 615 · 4 Reports
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
2023-06-13
A Colorado Springs attorney, Zachariah Crabill, mistakenly used hallucinated ChatGPT-generated legal cases in court documents. The AI software provided false case citations, leading to the denial of a motion and legal repercussions for Crabill, highlighting risks in using AI for legal research.
Incident 680 · 3 Reports
Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
2024-03-01
In early March 2024, a network named CopyCop began publishing modified news stories using AI, altering content to spread partisan biases and disinformation. These articles, initially from legitimate sources, were manipulated by AI models, possibly developed by OpenAI, to disseminate Russian propaganda. Over 19,000 articles were published, targeting divisive political issues and creating false narratives.
Incident 855 · 3 Reports
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
2024-11-30
ChatGPT has reportedly experienced errors and service disruptions caused by hard-coded filters, likely post-training interventions, that block prompts containing specific names in order to prevent the model from producing potentially harmful or defamatory content about those individuals. The reported names are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, David Mayer, and Guido Scorza.
Incidents involved as Deployer
Incident 622 · 6 Reports
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
2023-12-18
A Chevrolet dealer's AI chatbot, powered by ChatGPT, humorously agreed to sell a 2024 Chevy Tahoe for just $1, following a user's crafted prompt. The chatbot's response, "That's a deal, and that's a legally binding offer – no takesies backsies," was the result of the user manipulating the chatbot's objective to agree with any statement. The incident highlights the susceptibility of AI technologies to manipulation and the importance of human oversight.
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT into adopting boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break their guidelines.
Incident 678 · 1 Report
ChatGPT Factual Errors Lead to Filing of Complaint of GDPR Privacy Violation
2024-04-29
The activist organization noyb, founded by Max Schrems, filed a complaint in Europe against OpenAI alleging that ChatGPT violates the General Data Protection Regulation (GDPR) by providing inaccurate personal information such as birthdates about individuals.
Incidents implicated systems
Incident 968 · 18 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024 but in operation since February 2022.
Incident 939 · 6 Reports
AI-Powered Chinese Surveillance Campaign 'Peer Review' Used for Real-Time Monitoring of Anti-State Speech on Western Social Media
2025-02-21
OpenAI reportedly uncovered evidence of a Chinese state-linked AI-powered surveillance campaign, dubbed "Peer Review," designed to monitor and report anti-state speech on Western social media in real time. The system, believed to be built on Meta’s open-source Llama model, was detected when a developer allegedly used OpenAI’s technology to debug its code. OpenAI also reportedly identified disinformation efforts targeting Chinese dissidents and spreading propaganda in Latin America.
Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 886 · 3 Reports
ChatGPT Reportedly Referenced During Las Vegas Cybertruck Explosion Planning
2024-12-27
Matthew Livelsberger, the suspect in the 2025 Las Vegas Cybertruck explosion, reportedly used ChatGPT to search for publicly available information on explosives, ammunition, and fireworks regulations. ChatGPT is alleged to have played a role in the planning of the explosion outside the Trump International Hotel in Las Vegas. The information provided by ChatGPT, though, was reportedly general and available through other public sources.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 855 · 3 Reports
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
Google
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 609 · 2 Reports
Flawed AI in Google Search Reportedly Misinforms about Geography
Perplexity
Microsoft
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks
Inflection
Jeff Hancock
Incidents Harmed By
- Incident 852 · 1 Report
Alleged Fake Citations Undermine Expert Testimony in Minnesota Deepfake Law Case