Description: In December 2023, two Hingham High School students ("RNH" and another, unnamed student) reportedly used Grammarly to create a script for an AP U.S. History project. The AI-generated text included fabricated citations to nonexistent books, which the students copied and pasted without verification or any acknowledgment of AI use. This violated the school's academic integrity policies, leading to disciplinary action. RNH's parents later sued the school district, but a federal court ruled in favor of the school.
Editor Notes: The incident itself occurred sometime in December 2023. The court ruling was published on November 20, 2024. It can be read here: https://fingfx.thomsonreuters.com/gfx/legaldocs/lbvgjjqnkpq/11212024ai_ma.pdf.
Entities
Alleged: Grammarly developed an AI system deployed by Hingham High School students and Hingham High School student RNH, which harmed Hingham High School students, Hingham High School student RNH, Hingham High School, and Academic integrity.
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
5.1. Overreliance and unsafe use
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Human-Computer Interaction
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline

The parents of a Massachusetts high school senior who used artificial intelligence for a social studies project filed a lawsuit against his teachers and the school after their son was punish…

A federal court ruled yesterday against the parents who sued a Massachusetts school district for punishing their son, who used an artificial intelligence tool to complete an assignment.
Dale and Jennifer Harris su…
Variants
A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reports as evidence external to the incident database. Learn more from the research paper.
Similar Incidents
Wrongfully Accused by an Algorithm
· 11 reports