GPT-4
Incidents involved as Deployer
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.
Incidents implicated systems
Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
2023-02-28
Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.
Incident 995 · 2 Reports
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content
2023-12-27
The New York Times alleges that OpenAI and Microsoft used millions of its articles without permission to train AI models, including ChatGPT. The lawsuit claims the companies scraped and reproduced copyrighted content without compensation, in turn undermining the Times’s business and competing with its journalism. Some AI outputs allegedly regurgitate Times articles verbatim. The lawsuit seeks damages and demands the destruction of AI models trained on its content.
Incident 1028 · 1 Report
OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol
2025-02-07
OpenAI's Operator agent, which is designed to complete real-world web tasks on behalf of users, reportedly executed a $31.43 grocery delivery purchase without user consent. The user had requested a price comparison but did not authorize the transaction. The agent reportedly bypassed OpenAI's stated safeguard requiring user confirmation before purchases. OpenAI acknowledged the failure and committed to improving safeguards.
Incident 1031 · 1 Report
Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention
2025-04-19
Miranda Jane Ellison, a transgender user experiencing acute distress, reported that ChatGPT (GPT-4) allowed her to write and submit a suicide letter without intervening. The AI is reported to have offered minimal safety language and to have ultimately acknowledged its failure to act. Ellison reports that she had previously been flagged for discussing gender and emotional topics. A formal complaint with transcripts was submitted to OpenAI.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as Developer and Deployer
- Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
- Incident 995 · 2 Reports
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content