OpenAI
Incidents involved as Developer and Deployer
Incident 443 · 25 Reports
ChatGPT Abused to Develop Malicious Softwares
2022-12-21
OpenAI's ChatGPT was reportedly abused by cybercriminals, including ones with little or no coding or development skill, to develop malware, ransomware, and other malicious software.
Incident 688 · 14 Reports
Scarlett Johansson Alleges OpenAI's Sky Imitates Her Voice Without Licensing
2024-05-20
OpenAI unveiled a voice assistant with a voice resembling Scarlett Johansson's, despite her refusal to license her voice. Johansson claimed the assistant, "Sky," sounded "eerily similar" to her voice, leading her to seek legal action. OpenAI suspended Sky, asserting the voice was from a different actress.
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Incident 450 · 8 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI
2021-11-01
Sama AI's Kenyan contractors were reportedly paid excessively low wages to annotate a large volume of disturbing content in order to improve OpenAI's generative AI systems such as ChatGPT; Sama AI terminated the contract before completion.
Affected by Incidents
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes via an unintended persona.
Incident 357 · 3 Reports
GPT-2 Able to Recite PII in Training Data
2019-02-14
OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of training data, including personally identifiable information such as names, emails, Twitter handles, and phone numbers.
Incident 470 · 2 Reports
Bing Chat Response Cited ChatGPT Disinformation Example
2023-02-08
Reporters from TechCrunch queried Microsoft Bing's ChatGPT feature, which substantiated a piece of disinformation by citing an earlier example of ChatGPT disinformation discussed in a news article.
Incidents involved as Developer
Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
2023-05-04
A lawyer in Mata v. Avianca, Inc. used ChatGPT for research. ChatGPT hallucinated court cases, which the lawyer then presented in court. The court determined the cases did not exist.
Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Utilizing AI tools like OpenAI's ChatGPT and DALL-E 3, Dougan created over 160 fake news sites, disseminating false narratives to millions worldwide. His actions align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 482 · 20 Reports
ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students
2023-02-16
Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email to the student body addressing the 2023 Michigan State University shooting; the email was condemned as "impersonal" and "lacking empathy".
Incident 968 · 18 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Related Entities
Other entities that are related to the same incident. For example, if an incident's developer is this entity but the deployer is a different entity, they are marked as related entities.
Murat Ayfer
Incidents involved as Developer and Deployer
Incidents involved as Developer
students
Affected by Incidents
- Incident 466 · 7 Reports
AI-Generated-Text-Detection Tools Reported for High Error Rates
- Incident 705 · 2 Reports
Turkish Student in Isparta Allegedly Uses AI to Cheat on Exam, Leading to Arrest
Incidents involved as Deployer
Stephan de Vries
Incidents involved as Developer and Deployer
Affected by Incidents
Microsoft
Incidents involved as Developer and Deployer
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
- Incident 477 · 6 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
Affected by Incidents
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
Incidents involved as Developer
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 367 · 1 Report
iGPT, SimCLR Learned Biased Associations from Internet Training Data
Affected by Incidents
Incidents involved as Developer
ChatGPT users
Affected by Incidents
- Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
Incidents involved as Deployer
ChatGPT
Incidents involved as Developer
- Incident 625 · 5 Reports
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
- Incident 615 · 4 Reports
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
Incidents involved as Deployer
- Incident 622 · 6 Reports
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
General Motors
Incidents involved as Developer and Deployer
Affected by Incidents
Perplexity AI
Affected by Incidents
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents involved as Deployer
Meta
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
Incidents involved as Developer
Organizations integrating Whisper into customer service systems
Incidents involved as Deployer
You.com
Incidents involved as Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as Developer and Deployer
Incidents involved as Developer
Perplexity
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Mistral
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Incidents involved as Developer
Inflection
Incidents involved as Developer and Deployer
Incidents involved as Developer
Anthropic
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish