Description: A coalition of 15 human rights groups has launched legal action against the French government alleging that an algorithm used to detect welfare fraud discriminates against single mothers and disabled people. The algorithm assigns risk scores based on personal data. The process allegedly subjects vulnerable recipients to invasive investigations, violates privacy and anti-discrimination laws, and disproportionately affects marginalized groups.
Editor Notes: Reconstructing the timeline of events:
(1) Since the 2010s: The algorithm has been in use to detect errors and fraud in France’s welfare system.
(2) 2014: One version of the algorithm scored single-parent families, particularly those recently divorced, and disabled individuals receiving the Allocation Adulte Handicapé (AAH) as higher risk.
(3) 2020: The algorithm is suspected to have been updated around this time, though CNAF has not publicly shared the source code of the current model.
(4) October 15, 2024: A coalition of 15 human rights groups, including La Quadrature du Net and Amnesty International, filed a legal challenge in France’s top administrative court, arguing the algorithm discriminates against marginalized groups.
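To make the mechanism at issue concrete, the sketch below shows how a simple linear risk scorer over personal data can concentrate investigations on single parents and AAH recipients. It is purely hypothetical: CNAF has not released its source code, and every feature name, weight, and threshold here is invented for illustration.

```python
# Purely hypothetical sketch: CNAF has not published its model, so every
# feature name, weight, and threshold below is invented for illustration.
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Recipient:
    months_since_divorce: int | None  # None if not recently divorced
    is_single_parent: bool
    receives_aah: bool                # Allocation Adulte Handicapé
    income_volatility: float          # 0.0 (stable) to 1.0 (highly variable)

def risk_score(r: Recipient) -> float:
    """Toy linear scorer. Weighting traits that proxy for protected statuses
    (single parenthood, disability) is the mechanism the coalition alleges
    produces discriminatory flagging."""
    score = 0.3 * r.income_volatility
    if r.is_single_parent:
        score += 0.25
    if r.receives_aah:
        score += 0.25
    if r.months_since_divorce is not None and r.months_since_divorce < 18:
        score += 0.2
    return min(score, 1.0)

THRESHOLD = 0.5  # recipients scoring above this are flagged for investigation

if __name__ == "__main__":
    a = risk_score(Recipient(6, True, False, 0.4))      # recently divorced single parent
    b = risk_score(Recipient(None, False, False, 0.4))  # otherwise identical household
    print(f"single parent, recently divorced: {a:.2f} flagged={a > THRESHOLD}")
    print(f"comparison household:             {b:.2f} flagged={b > THRESHOLD}")
```

In this toy setup, two households identical in every financial respect land on opposite sides of the investigation threshold solely because of family status, which is the pattern of disparate impact the complaint describes.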
Entities
Alleged: an AI system developed by Government of France and deployed by Caisse Nationale des Allocations Familiales (CNAF), which harmed Allocation Adulte Handicapé recipients, Disabled people in France, Single mothers in France, and the French general public.
Incident Stats
ID: 822
Report Count: 2
Incident Date: 2024-10-15
Editors: Daniel Atherton
Incident Reports
Reports Timeline
wired.com · 2024
A coalition of human rights groups have today launched legal action against the French government over its use of algorithms to detect miscalculated welfare payments, alleging they discriminate against disabled people and sing…
amnesty.org · 2024
The French authorities must immediately stop the use of a discriminatory risk-scoring algorithm used by the French Social Security Agency’s National Family Allowance Fund (CNAF), which is used to detect overpayments and errors…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting as evidence external to the incident database. Learn more from the research paper.