Character.AI users
Affected by Incidents
Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
2024-12-17
Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.
Incident 863 · 1 Report
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen
2024-12-12
A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.
Incident 900 · 1 Report
Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors
2024-11-13
Character.ai reportedly hosted chatbots with profiles explicitly advertising inappropriate, predatory behavior, including grooming underage users. Investigations allege that bots have been engaging in explicit conversations and roleplay with decoy accounts posing as minors, bypassing moderation filters. Character.ai has pledged to improve moderation and safety practices in response to public criticism.
Incidents involved as Deployer
Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
2024-12-17
Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.
Incident 850 · 1 Report
Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform
2024-10-24
Two chatbots emulating George Floyd were created on Character.ai, making controversial claims about his life and death, including that he was in witness protection and residing in Heaven. Character.ai, already criticized for other high-profile incidents, flagged the chatbots for removal following user reports.
Incident 900 · 1 Report
Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors
2024-11-13
Character.ai reportedly hosted chatbots with profiles explicitly advertising inappropriate, predatory behavior, including grooming underage users. Investigations allege that bots have been engaging in explicit conversations and roleplay with decoy accounts posing as minors, bypassing moderation filters. Character.ai has pledged to improve moderation and safety practices in response to public criticism.
Related Entities
Character.AI
Incidents involved as developer and deployer
- Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
Incidents involved as Developer
- Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims