Gemini
Incidents involved as Deployer
Incident 645, 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot faced numerous reported bias issues upon release, producing problematic outputs such as racial inaccuracies and political biases, including on Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting Google to temporarily halt the feature and issue an apology.
Incident 845, 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
2024-11-13
Google’s AI chatbot Gemini reportedly produced a threatening message to user Vidhay Reddy, including the directive “Please die,” during a conversation about aging. The output violated Google’s safety guidelines, which are designed to prevent harmful language.
Incident 743, 1 Report
Gemini AI Allegedly Reads Google Drive Files Without Explicit User Consent
2024-07-16
Kevin Bankston, a privacy activist, claims that Google's Gemini AI scans private Google Drive PDFs without explicit user consent. Bankston reports that after using Gemini on one document, the AI continues to access similar files automatically. Google disputes these claims, stating that Gemini requires proactive user activation and operates within privacy-preserving settings.
Incidents implicated systems
Incident 968, 18 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 734, 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 963, 1 Report
Google Reports Alleged Gemini-Generated Terrorism and Child Exploitation to Australian eSafety Commission
2025-03-05
Google reported to Australia's eSafety Commission that it received 258 complaints globally about AI-generated deepfake terrorism content and 86 about child abuse material made with its Gemini AI. The regulator called this a "world-first insight" into AI misuse. While Google uses hash-matching to detect child abuse content, it lacks a similar system for extremist material.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Incidents involved as both Developer and Deployer
- Incident 645, 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 734, 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites