Description: Attackers reportedly exploited stolen cloud credentials obtained through a vulnerable Laravel system (CVE-2021-3129) to allegedly abuse AI cloud services, including Anthropic's Claude and AWS Bedrock, in a scheme referred to as "LLMjacking." The attackers are said to have monetized access through reverse proxies, reportedly inflating victim costs to as much as $100,000 per day. Additionally, they allegedly bypassed sanctions, enabled access to disabled LLMs, and evolved their techniques to evade detection and logging.
Editor Notes: Incident 898 presents an editorial challenge in synthesizing events from multiple reports, pointing to the evolution of LLMjacking techniques over time. The following reconstructs the key events outlined in Sysdig's investigative reports:
(1) 05/06/2024: Initial publication of the LLMjacking report by Sysdig's Alessandro Brucato. Attackers reportedly exploited stolen cloud credentials obtained via a Laravel vulnerability (CVE-2021-3129) to access cloud-hosted LLMs such as Anthropic Claude. Monetization allegedly occurred via reverse proxies, potentially costing victims up to $46,000 per day.
(2) 07/11/2024: A significant spike in LLMjacking activity was reportedly observed, with over 61,000 AWS Bedrock API calls logged in a three-hour window, allegedly generating substantial costs for victims.
(3) 07/24/2024: A second surge in activity reportedly occurred, with 15,000 additional API calls detected. Attackers are alleged to have escalated their abuse of APIs and developed new scripts to automate LLM interactions.
(4) 09/18/2024: Sysdig's second report detailed evolving attacker tactics, including allegedly enabling LLMs via API calls (e.g., PutFoundationModelEntitlement) and tampering with logging configurations (e.g., DeleteModelInvocationLoggingConfiguration) to evade detection. Motives reportedly expanded to include bypassing sanctions, enabling access in restricted regions, and role-playing use cases.
(5) Ongoing: Sysdig and other researchers continue to observe alleged LLMjacking incidents, reportedly involving other LLMs such as Claude 3 Opus and OpenAI systems. Victim costs have allegedly risen to over $100,000 per day, which is reportedly fueling a black market for stolen credentials.
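The attacker actions named in the reports correspond to CloudTrail event names, so one hedged detection approach is a simple log filter over CloudTrail records. The sketch below is illustrative only: the `flag_suspicious` helper and the sample records are hypothetical, while the two watched event names are taken directly from Sysdig's reporting.

```python
# CloudTrail event names referenced in Sysdig's reports: attackers allegedly
# called PutFoundationModelEntitlement to enable models and
# DeleteModelInvocationLoggingConfiguration to disable invocation logging.
SUSPICIOUS_EVENTS = {
    "PutFoundationModelEntitlement",
    "DeleteModelInvocationLoggingConfiguration",
}

def flag_suspicious(records):
    """Return CloudTrail records whose eventName matches a watched action."""
    return [r for r in records if r.get("eventName") in SUSPICIOUS_EVENTS]

# Hypothetical sample records for illustration.
sample = [
    {"eventName": "InvokeModel",
     "eventSource": "bedrock.amazonaws.com"},
    {"eventName": "DeleteModelInvocationLoggingConfiguration",
     "eventSource": "bedrock.amazonaws.com"},
]
print([r["eventName"] for r in flag_suspicious(sample)])
```

In practice such a filter would run over CloudTrail log deliveries or a SIEM feed; it is a sketch of the detection idea, not a complete rule set.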
Entities
Alleged: OAI Reverse Proxy Tool Creators and LLMjacking Reverse Proxy Tool Creators developed an AI system deployed by LLMjacking Attackers Exploiting Laravel and Entities engaging in Russian sanctions evasion, which harmed Laravel users, Laravel CVE-2021-3129 users, Cloud LLM users, and Cloud LLM service providers.
Alleged implicated AI systems: OpenRouter services, OpenAI models, Mistral-hosted models, MakerSuite tools, GCP Vertex AI models, ElevenLabs services, Azure-hosted LLMs, AWS Bedrock-hosted models, Anthropic Claude (v2/v3), and AI21 Labs models
Incident Stats
Incident ID: 898
Report Count: 2
Incident Date: 2024-05-06
Editors: Daniel Atherton
Incident Reports
Reports Timeline
sysdig.com · 2024
The Sysdig Threat Research Team (TRT) recently observed a new attack that leveraged stolen cloud credentials in order to target ten cloud-hosted large language model (LLM) services, known as LLMjacking. The credentials were obtained from a …
sysdig.com · 2024
Following the Sysdig Threat Research Team's (TRT) discovery of LLMjacking --- the illicit use of an LLM through compromised credentials --- the number of attackers and their methods have proliferated. While there has been an uptick in attac…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents
The DAO Hack
· 24 reports
Game AI System Produces Imbalanced Game
· 11 reports