We are building a team of AJL Agents and community partners who will work together to learn from people directly harmed by algorithmic systems, and from the successes and failures of bug bounty and coordinated disclosure programs, in order to create new ways to hold companies accountable, help build better AI systems, and open new opportunities for people in this field. The Community Reporting of Algorithmic System Harms (CRASH) Project brings together key stakeholders for discovery, scoping, and iterative prototyping of tools that enable broader participation in the creation of more accountable, equitable, and less harmful AI systems.
How might companies and organizations build algorithmic systems that are more equitable and accountable? The CRASH Project will explore the feasibility of mechanisms such as bug bounties, coordinated bias disclosure, intersectional AI audits, and education to help build better systems and reduce harm.
A teacher is fired after an automated assessment tool gives them a low rating. A Black man is arrested at his own home after his face is incorrectly matched to security camera footage. How can these people report the harms they've experienced? How do they know whether the harm was due to algorithmic bias, and how can they prove it? The CRASH Project will explore reporting systems, research methods, and tools to help expose algorithmic bias and the harms that can result.
A Latinx trans woman awaiting trial is held in prison after a biased risk assessment tool gives her a high-risk score. A tenant is locked out of their own home by facial recognition software that performs poorly on Black faces. How can these harms be redressed? How can we route each harm report to the appropriate person or team: a journalist, a lawyer, a community organizer, a public interest technologist? The CRASH Project will prototype and iterate on different systems for ensuring that people who experience algorithmic harms have recourse.
We're creating a community of AJL Agents and building a movement towards equitable and accountable AI. Sign up below to join us!
Joy Buolamwini, AJL Founder & Co-Lead
Sasha Costanza-Chock, AJL Senior Research Fellow & Co-Lead
Camille François, Co-Lead
Deborah Raji, Algorithmic Harms Research Fellow
Josh Kenway, Bug Bounty Research Fellow
Eshe Shukura, Community Organizer Lead
Dana Tzegaegbe, Operations Lead
Julia Rhodes Davis, AJL Senior Advisor & Experiential Lead