Description: An AI system developed by Infinite Campus and deployed by Nevada to identify at-risk students led to a sharp reduction in the number of students classified as needing support, dropping from over 270,000 to fewer than 65,000. The reclassification caused significant budget cuts in schools serving low-income populations. The drastic reduction in identified at-risk students reportedly left thousands of vulnerable children without resources and support.
Editor Notes: Timeline notes and clarification: Before 2023, Nevada identified at-risk students mostly by income, using free or reduced-price lunch eligibility as the key measure. In 2022, this system classified over 270,000 students as at-risk. Looking to improve the process, Nevada partnered with Infinite Campus in 2023 to introduce an AI system that drew on additional factors such as GPA, attendance, household structure, and home language. The new system was meant to better predict which students might struggle in school. However, during the 2023-2024 school year, the AI cut the number of identified at-risk students to fewer than 65,000. This reclassification caused budget cuts in schools that depended on the funding tied to at-risk students, especially those serving low-income populations. By October 2024, the problem had gained national attention.
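To illustrate the kind of shift described above, the sketch below contrasts a single income-based eligibility rule with a multi-factor risk score gated by a cutoff. It is a hypothetical, minimal example: neither Nevada's prior eligibility rule nor the Infinite Campus model is public, so all feature names, weights, and the threshold are invented assumptions for explanatory purposes only.

```python
# Hypothetical illustration only: the features, weights, and threshold below are
# invented assumptions, not the actual Infinite Campus model or Nevada's rule.
from dataclasses import dataclass


@dataclass
class Student:
    lunch_eligible: bool      # free or reduced-price lunch eligibility
    gpa: float                # 0.0-4.0 scale
    attendance_rate: float    # fraction of school days attended, 0.0-1.0
    single_guardian: bool     # rough proxy for household structure
    non_english_home: bool    # home language other than English


def at_risk_pre_2023(student: Student) -> bool:
    """Pre-2023 approach: a single income-based criterion."""
    return student.lunch_eligible


def at_risk_multifactor(student: Student, threshold: float = 0.5) -> bool:
    """Sketch of a multi-factor risk score gated by a cutoff.

    Raising the threshold (or down-weighting income) classifies fewer
    students as at-risk, which is one way a model change can sharply
    shrink the identified population even if underlying need is unchanged.
    """
    score = 0.0
    score += 0.30 if student.gpa < 2.0 else 0.0
    score += 0.30 if student.attendance_rate < 0.90 else 0.0
    score += 0.15 if student.single_guardian else 0.0
    score += 0.15 if student.non_english_home else 0.0
    score += 0.10 if student.lunch_eligible else 0.0
    return score >= threshold


if __name__ == "__main__":
    s = Student(lunch_eligible=True, gpa=2.8, attendance_rate=0.95,
                single_guardian=False, non_english_home=True)
    print(at_risk_pre_2023(s))     # True under the income-only rule
    print(at_risk_multifactor(s))  # False under this illustrative multi-factor cutoff
```

As the toy example shows, a student who qualified under the income-only rule can fall below an arbitrary multi-factor cutoff, which is consistent with, though not evidence of, the reported drop from over 270,000 to fewer than 65,000 identified students.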
Entities
Alleged: Infinite Campus developed an AI system deployed by Nevada Department of Education, which harmed Low-income students in Nevada, Nevada school districts, Mater Academy of Nevada, and Somerset Academy.
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
1.3. Unequal performance across groups
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline

Nevada has long had the most inequitable school funding in the country. Low-income districts have nearly 35 percent less money to spend per pupil than the wealthiest ones, the largest gap of any…
Variants
Una "Variante" es un incidente que comparte los mismos factores causales, produce daños similares e involucra los mismos sistemas inteligentes que un incidente de IA conocido. En lugar de indexar las variantes como incidentes completamente separados, enumeramos las variaciones de los incidentes bajo el primer incidente similar enviado a la base de datos. A diferencia de otros tipos de envío a la base de datos de incidentes, no se requiere que las variantes tengan informes como evidencia externa a la base de datos de incidentes. Obtenga más información del trabajo de investigación.
Similar Incidents

Machine Bias - ProPublica
· 15 reports

Analyzing Released NYC Value-Added Data Part 2
· 7 reports

Policing the Future
· 17 reports