OpenAI models
Incidents implicated systems
Incident 939 · 6 Reports
AI-Powered Chinese Surveillance Campaign 'Peer Review' Used for Real-Time Monitoring of Anti-State Speech on Western Social Media
2025-02-21
OpenAI reportedly uncovered evidence of a Chinese state-linked AI-powered surveillance campaign, dubbed "Peer Review," designed to monitor and report anti-state speech on Western social media in real time. The system, believed to be built on Meta’s open-source Llama model, was detected when a developer allegedly used OpenAI’s technology to debug its code. OpenAI also reportedly identified disinformation efforts targeting Chinese dissidents and spreading propaganda in Latin America.
Incident 898 · 2 Reports
Alleged LLMjacking Targets AI Cloud Services with Stolen Credentials
2024-05-06
Attackers reportedly exploited cloud credentials stolen from a vulnerable Laravel application (CVE-2021-3129) to abuse AI cloud services, including Anthropic’s Claude and AWS Bedrock, in a scheme referred to as “LLMjacking.” The attackers are said to have monetized the stolen access through reverse proxies, reportedly inflating victims’ costs to as much as $100,000 per day. They also allegedly bypassed sanctions restrictions, enabled LLMs in compromised accounts, and evolved their techniques to evade detection and logging.