Description: AI writing detection tools have reportedly continued to falsely flag genuine student work as AI-generated, disproportionately impacting ESL and neurodivergent students. Specific cases include Moira Olmsted, Ken Sahib, and Marley Stevens, who were penalized despite writing their work independently. Such tools reportedly exhibit biases, leading to academic penalties, probation, and strained teacher-student relationships.
Editor Notes: Reconstructing the timeline of events: (1) Sometime in 2023: Central Methodist University is reported to have used Turnitin to analyze assignments for AI usage. Moira Olmsted’s writing is flagged as AI-generated, leading to her receiving a zero and a warning. (2) Sometime in 2023: Ken Sahib, an ESL student at Berkeley College, is reported to have been penalized after AI detection tools flagged his assignment as AI-generated. (3) Sometime in late 2023 or early 2024: Marley Stevens is reported to have been placed on academic probation after Turnitin falsely identifies her work as AI-generated, though she maintains she used Grammarly only for minor edits. (4) October 18, 2024: Bloomberg publishes findings that leading AI detectors falsely flag 1%-2% of essays as AI-generated, with higher error rates for ESL students. (This date is set as the incident date for convenience.)
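The Bloomberg figure is a rate, not a count, so its practical impact depends on how many genuine essays are screened. The minimal sketch below illustrates that scale under assumed submission volumes; the 1%-2% false positive range comes from the reported findings, while the essay counts are illustrative assumptions, not from the source.

```python
# Rough expected-count sketch for AI-detector false positives.
# The 1%-2% false positive rates come from the Bloomberg findings cited above;
# the submission counts below are illustrative assumptions, not from the source.

def expected_false_flags(genuine_essays: int, false_positive_rate: float) -> float:
    """Expected number of genuine essays wrongly flagged as AI-generated."""
    return genuine_essays * false_positive_rate

for essays in (300, 10_000):       # e.g., one large course vs. an institution (assumed sizes)
    for rate in (0.01, 0.02):      # 1%-2% reported false positive range
        flagged = expected_false_flags(essays, rate)
        print(f"{essays:>6} essays at {rate:.0%} FPR -> ~{flagged:.0f} wrongly flagged")
```

Even at the low end of the reported range, a detector applied to thousands of genuine submissions would be expected to wrongly flag dozens of students, which is the dynamic the individual cases above illustrate.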
Entities
Alleged: Turnitin, GPTZero, and Copyleaks developed an AI system deployed by Central Methodist University, Berkeley College, Universities, and Colleges, which harmed students, Neurodivergent students, ESL students, Moira Olmsted, Ken Sahib, and Marley Stevens.
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
After taking time off from college at the start of the pandemic to start a family, Moira Olmsted was eager to return to school. For months, she juggled a full-time job and a young child…
Variants
Una "Variante" es un incidente que comparte los mismos factores causales, produce daños similares e involucra los mismos sistemas inteligentes que un incidente de IA conocido. En lugar de indexar las variantes como incidentes completamente separados, enumeramos las variaciones de los incidentes bajo el primer incidente similar enviado a la base de datos. A diferencia de otros tipos de envío a la base de datos de incidentes, no se requiere que las variantes tengan informes como evidencia externa a la base de datos de incidentes. Obtenga más información del trabajo de investigación.
Similar Incidents

Working Anything but 9 to 5
· 10 reports

Tempe police release report, audio, photo
· 25 reports