Incidents involved as both Developer and Deployer
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot reportedly exhibited numerous bias issues upon release, producing problematic outputs such as racial inaccuracies and political bias, including on Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt of the feature and an apology from Google.
Incident 45 · 29 Reports
Defamation via AutoComplete
2011-04-05
Google's autocomplete feature, alongside its image search results, reportedly defamed people and businesses.
Incident 71 · 28 Reports
Google admits its self driving car got it wrong: Bus crash was caused by software
2016-09-26
On February 14, 2016, a Google autonomous test vehicle was partially responsible for a low-speed collision with a bus on El Camino Real in Google’s hometown of Mountain View, CA.
Incident 19 · 27 Reports
Sexist and Racist Google AdSense Advertisements
2013-01-23
Advertisements chosen by Google AdSense were reported to produce sexist and racist results.
Incidents Harmed By
Incident 467 · 14 Reports
Google's Bard Shared Factually Inaccurate Info in Promo Video
2023-02-07
Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which telescope first took pictures of a planet outside the Earth's solar system, reportedly causing Alphabet's shares to temporarily plummet.
Incident 567 · 1 Report
Deepfake Voice Exploit Compromises Retool's Cloud Services
2023-08-27
In August 2023, a hacker reportedly breached Retool, an IT company specializing in business software solutions, affecting 27 of its cloud customers. The attacker appears to have initiated the breach by sending phishing SMS messages to employees and later used an AI-generated deepfake voice in a phone call to obtain multi-factor authentication codes. The breach seems to have exposed a vulnerability in Google's Authenticator app, specifically its cloud-syncing function, which further enabled unauthorized access to internal systems.
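The cloud-syncing detail matters because a time-based one-time password is derived entirely from a shared seed plus the clock: whoever holds the seed can mint valid codes. Below is a minimal sketch of standard TOTP generation (RFC 6238, Python standard library only); it is illustrative, not Google Authenticator's actual implementation, and the seed is a throwaway example.

```python
# Minimal sketch of RFC 6238 TOTP code generation (standard library only).
# Illustrative only: it shows why anyone holding a synced authenticator seed
# can compute valid MFA codes; it is not Google Authenticator's implementation.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a base32-encoded shared seed."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Throwaway example seed: an attacker who exfiltrates the seed (e.g., from a
# compromised cloud-sync account) computes the same code as the victim's phone.
print(totp("JBSWY3DPEHPK3PXP"))
```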
Incident 791 · 1 Report
Google AI Error Prompts Parents to Use Fecal Matter in Child Training Exercise
2024-09-09
Google's AI Overview feature mistakenly advised parents to use human feces in a potty training exercise, misinterpreting a method that uses shaving cream or peanut butter as a substitute. The incident is another example of an AI system failing to grasp contextual nuance, leading to potentially harmful, and in this case unsanitary, recommendations. Google has acknowledged the error.
Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks
2025-02-28
A dataset used to train large language models allegedly contained 12,000 live API keys and authentication credentials. Some of these were reportedly still active and allowed unauthorized access. Truffle Security found these secrets in a December 2024 Common Crawl archive, which spans 250 billion web pages. The affected credentials could have been exploited for unauthorized data access, service disruptions, financial fraud, and a variety of other malicious uses.
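Truffle Security's finding reflects a general technique: many credential formats have published, recognizable prefixes, so crawled text can be scanned with patterns and candidate matches then checked for liveness against the issuing service. The sketch below shows only the pattern-matching idea; the rules and sample string are illustrative assumptions, far simpler than the audit's actual tooling.

```python
# Minimal sketch of pattern-based secret scanning over a text corpus, in the
# spirit of (but much simpler than) production scanners. The patterns and the
# sample text are illustrative assumptions, not the audit's actual ruleset.
import re

# Well-documented credential shapes (these prefixes are published formats).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "slack_token": re.compile(r"\bxox[abpr]-[0-9A-Za-z-]{10,}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key['\"]?\s*[:=]\s*['\"][0-9A-Za-z_\-]{16,}['\"]"
    ),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a chunk of crawled text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, m) for m in pattern.findall(text))
    return hits

# A scraped page that embedded credentials verbatim would surface here;
# real scanners then verify liveness against the issuing service.
sample = 'config = {"api_key": "sk_test_abcdefghijklmnop"}'
print(scan(sample))
```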
Incidents involved as Developer
Incident 968 · 18 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 623 · 12 Reports
Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case
2023-12-12
Michael Cohen, former lawyer for Donald Trump, claims to have used Google Bard, an AI chatbot, to generate legal case citations. These false citations were unknowingly included in a court motion by Cohen's attorney, David M. Schwartz. The AI's misuse highlights emerging risks in legal technology, as AI-generated content increasingly infiltrates professional domains.
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006-02-25
Automated content moderation tools used to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of content that did not break platform policies.
Incident 845 · 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
2024-11-13
Google’s AI chatbot Gemini reportedly produced a threatening message to user Vidhay Reddy, including the directive “Please die,” during a conversation about aging. The output violated Google’s safety guidelines, which are designed to prevent harmful language.
Incidents implicated systems
Incident 839 · 20 Reports
AI-Driven Phishing Scam Uses Spoofed Google Call to Attempt Gmail Breach of Security Expert
2024-10-07
Scammers used an AI-generated voice to impersonate a Google representative in an attempt to steal Gmail account credentials from security expert Sam Mitrovic. The AI-driven phishing call used a spoofed Google phone number and a fabricated email, making the scam appear legitimate. Mitrovic noted that the caller’s professional demeanor, coupled with AI-generated speech and a Google-related number, could easily deceive unsuspecting users.
Related Entities
Other entities that are related to the same incidents. For example, if the developer of an incident is this entity but the deployer is another entity, the other entity is marked as a related entity.
Microsoft
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
Amazon
Incidents involved as both Developer and Deployer
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
- Incident 587 · 1 Report
Apparent Failure to Accurately Label Primates in Image Recognition Software Due to Alleged Fear of Racial Bias
Meta
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
OpenAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 367 · 1 Report
iGPT, SimCLR Learned Biased Associations from Internet Training Data
members of racial and ethnic minorities who risk being stereotyped or misrepresented
Gemini
Incidents involved as Deployer
- Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 845 · 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
Perplexity
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Mistral
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Inflection
Anthropic
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
YouTube
Incidents involved as both Developer and Deployer
- Incident 873 · 1 Report
YouTube Algorithms Allegedly Amplify Eating Disorder Content to Adolescent Girls