Description: Researchers from Boston University and Microsoft Research, New England demonstrated gender bias in the most common techniques used to embed words for natural language processing (NLP).
Entities
Alleged: Microsoft Research, Boston University, and Google developed an AI system deployed by Microsoft Research and Boston University, which harmed Women and Minority Groups.
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
12
CSETv0 Taxonomy Classifications
Taxonomy Details
Public Sector Deployment
"Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).
No
Lives Lost
Were human lives lost as a result of the incident?
No
Intent
Was the incident an accident, intentional, or is the intent unclear?
Unclear
Near Miss
Was harm caused, or was it a near miss?
Unclear/unknown
Ending Date
The date the incident ended.
2016-01-01
Beginning Date
The date the incident began.
2016-01-01
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
1.1. Unfair discrimination and misrepresentation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk.
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional
Incident Reports
Reports Timeline
The blind application of machine learning risks amplifying the biases present in the data. We face such a danger with word embedding, a popular framework for representing text data as…
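The embedding bias the researchers describe is typically probed with vector arithmetic of the form "man is to king as woman is to ?". A minimal sketch of that analogy test, using hand-made toy vectors (the values are hypothetical, chosen only to make the arithmetic visible; real embeddings such as word2vec are learned from large text corpora and have hundreds of dimensions):

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical values for illustration only;
# real models learn these vectors from large corpora).
vecs = {
    "man":        [ 1.0,  0.0, 0.2],
    "woman":      [-1.0,  0.0, 0.2],
    "king":       [ 1.0,  1.0, 0.1],
    "queen":      [-1.0,  1.0, 0.1],
    "programmer": [ 0.6, -0.5, 0.9],
    "homemaker":  [-0.6, -0.5, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by finding the vocabulary word
    closest to vec(b) - vec(a) + vec(c), excluding the query words."""
    target = [vecs[b][i] - vecs[a][i] + vecs[c][i] for i in range(3)]
    candidates = {w: cosine(target, v)
                  for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print(analogy("man", "king", "woman"))        # queen
print(analogy("man", "programmer", "woman"))  # homemaker
```

With these toy vectors the second query reproduces the biased association the paper reports ("man is to computer programmer as woman is to homemaker"): the gender direction baked into the vectors, not the profession, drives the answer.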
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reports as evidence external to the incident database. Learn more from the research paper.
Similar Incidents