Description: A coalition of 15 human rights groups has launched legal action against the French government alleging that an algorithm used to detect welfare fraud discriminates against single mothers and disabled people. The algorithm assigns risk scores based on personal data. The process allegedly subjects vulnerable recipients to invasive investigations, violates privacy and anti-discrimination laws, and disproportionately affects marginalized groups.
Editor Notes: Reconstructing the timeline of events: (1) Since the 2010s: The algorithm has been in use to detect errors and fraud in France’s welfare system. (2) 2014: One version of the algorithm scored single-parent families, particularly those recently divorced, and disabled individuals receiving the Allocation Adulte Handicapé (AAH) as higher risk. (3) 2020: A suspected update to the algorithm took place, though the CNAF has not publicly shared the source code of the current model. (4) October 15, 2024: A coalition of 15 human rights groups, including La Quadrature du Net and Amnesty International, filed a legal challenge in France’s top administrative court, arguing the algorithm discriminates against marginalized groups.
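The CNAF has not published the source code of its current model, so the mechanics of its scoring are not publicly known. As a purely illustrative sketch of how a rule-based risk-scoring system over personal attributes can concentrate investigations on specific groups, consider the following minimal example; every feature name, weight, and threshold here is a hypothetical assumption and does not come from CNAF:

```python
# Hypothetical risk-scoring sketch. All features, weights, and the
# threshold are illustrative assumptions, not CNAF's actual model
# (which has not been publicly released).

from dataclasses import dataclass


@dataclass
class Recipient:
    single_parent: bool
    recently_divorced: bool
    receives_aah: bool         # Allocation Adulte Handicapé
    income_variability: float  # 0.0 (stable) to 1.0 (highly variable)


# Any nonzero weight on a protected attribute (or a close proxy)
# builds disparate impact directly into the score.
WEIGHTS = {
    "single_parent": 0.30,
    "recently_divorced": 0.20,
    "receives_aah": 0.25,
    "income_variability": 0.25,
}

INVESTIGATION_THRESHOLD = 0.5  # hypothetical cutoff


def risk_score(r: Recipient) -> float:
    """Weighted sum of personal attributes; higher means 'riskier'."""
    return (
        WEIGHTS["single_parent"] * r.single_parent
        + WEIGHTS["recently_divorced"] * r.recently_divorced
        + WEIGHTS["receives_aah"] * r.receives_aah
        + WEIGHTS["income_variability"] * r.income_variability
    )


def flag_for_investigation(r: Recipient) -> bool:
    return risk_score(r) >= INVESTIGATION_THRESHOLD


if __name__ == "__main__":
    divorced_single_mother = Recipient(True, True, False, 0.4)   # score 0.60
    two_parent_household = Recipient(False, False, False, 0.4)   # score 0.10
    print(flag_for_investigation(divorced_single_mother))  # True
    print(flag_for_investigation(two_parent_household))    # False
```

Under these assumed weights, a recently divorced single mother is flagged while an otherwise identical two-parent household is not, which is the kind of structural disparity the coalition's legal challenge alleges.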
Entities
Alleged: an AI system developed by Government of France and deployed by Caisse Nationale des Allocations Familiales (CNAF), which harmed Allocation Adulte Handicapé recipients, Disabled people in France, Single mothers in France, and the French general public.
Incident Stats
ID: 822
Report Count: 2
Incident Date: 2024-10-15
Editors: Daniel Atherton
Incident Reports
Report Timeline
wired.com · 2024
A coalition of human rights groups has today launched legal action against the French government over its use of algorithms to detect miscalculated welfare payments, alleging they discriminate against disabled people and sing…
amnesty.org · 2024
The French authorities must immediately stop the use of a discriminatory risk-scoring algorithm used by the French Social Security Agency’s National Family Allowance Fund (CNAF), which is used to detect overpayments and errors…
Variants
A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reports in evidence external to the incident database. Learn more from the research paper.