We now live in a world where AI governs access to information, opportunity, and freedom. Yet AI systems can perpetuate racism, sexism, ableism, and other harmful forms of discrimination, presenting significant threats to our society, from healthcare to economic opportunity to our criminal justice system.
The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence.
AJL’s mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.
AJL is a fiscally sponsored project of Code for Science & Society.
We believe in the power of storytelling for social change. We tell stories that galvanize action with both research and art. We follow a scientific approach to our research, experiments and policy recommendations. We rely on art, freedom and creativity to spread the word, generate awareness about the harms in AI, and amplify the voice of marginalized communities in today’s AI ecosystem. Most importantly, we know making change is a team effort. Fighting for algorithmic justice takes all of us.
A poet of code and AI researcher motivated by personal experiences of algorithmic discrimination, Dr. Joy launched the Algorithmic Justice League in 2016 and shared her story in a TED Featured Talk that now has over 1.7 million views. She is the author of the national bestseller Unmasking AI.
Rachel Fagen is an organizer and strategist, leading AJL’s organizational development and strengthening connections between partners, supporters, and global decision makers working toward equitable and accountable AI. For 20 years, she has directed and supported flourishing organizations in tech, democracy, civil society, research, arts, and activism.
Dr. Williams is an educator, tinkerer, and change agent with a passion for using technology to uplift communities. Over the past 10 years, she has created programs that helped thousands of teachers and students around the world understand and use AI. Now at AJL, her work equips researchers, policymakers and everyday changemakers to responsibly use AI for positive social change.
Aurum is a researcher and technologist who builds bridges between technologists and digital rights litigators to strengthen human rights in the digital ecosystem. They believe that how machines make decisions is a human rights issue, and that technology is a tool that cannot be separated from the social and political context in which it is designed, developed, and deployed.
A researcher, designer, and troublemaker, Sasha led AJL's inaugural research and product design teams. Read Sasha's latest book, Design Justice, freely available here.
A learning scientist and multiple Webby Award winner, Dr. Lizárraga provides AJL with expertise on creative and innovative approaches to mass communications and public pedagogy.
A mother, social justice organizer, poet and author, Tawana represents AJL in national and international processes shaping AI governance. She has served as a program committee member for ACM FAccT as well as an Ethics Reviewer for NeurIPS.
Founder of Lady Who Productions, Adele is an author, social media expert, and filmmaker. She and her team manage social media and special events for AJL.
Professor of African American Studies at Princeton University, founding director of the Ida B. Wells JUST Data Lab, author of Viral Justice (2022), Race After Technology (2019), and People’s Science (2013), and editor of Captivating Technology (2019), Dr. Benjamin advises AJL on the relationship between innovation and inequity, knowledge and power, race and citizenship, and health and justice.
Dr. Timnit Gebru, Dr. Margaret Mitchell, and Inioluwa Deborah Raji.
Ford Foundation, MacArthur Foundation, and individual donors and supporters like YOU.
Casa Blue, Yancey Consulting, and Bocoup.
Dr. Joy Buolamwini, Megan Smith, and Brenda Darden Wilkerson.
Everyone should have a real choice in how and whether they interact with AI systems.
It is of vital public interest that people are able to meaningfully understand how AI is created and deployed, and that we fully understand what AI can and cannot do.
Politicians and policymakers need to create robust mechanisms that protect people from the harms of AI and related systems, both by continuously monitoring and limiting the worst abuses and by holding companies and other institutions accountable when harms occur. Everyone, especially those who are most impacted, must have access to redress for AI harms. Moreover, institutions and decision makers that utilize AI technologies must be subject to accountability that goes beyond self-regulation.
We aim to end harmful practices in AI, rather than name and shame. We do this by conducting research and translating what we’ve learned into principles, best practices and recommendations that we use as the basis for our advocacy, education and awareness-building efforts. We are focused on shifting industry practices among those creating and commercializing today’s systems.
Dr. Joy Buolamwini, Founder of the Algorithmic Justice League, came face to face with discrimination. From a machine. It may sound like a scene from a sci-fi movie, but it carries meaningful real-world consequences.
While Dr. Joy was working on a graduate school project, facial analysis software struggled to detect her face. She suspected this was more than a technical blunder, and rather than surrender, she responded with curiosity. Her MIT peers with lighter skin didn’t have the same issue, so Joy tried drawing a face on the palm of her hand. The machine recognized it immediately. Still, her real face went undetected, so she had to finish coding her project wearing a white mask in order to be seen. The questions that surfaced gave Joy the motivation and insights to start the Algorithmic Justice League.
In the early days, Joy committed her research to “unmasking bias” in facial recognition technologies. As a graduate student at MIT, she discovered large gender and skin-type bias in commercially sold products from reputable companies including IBM and Microsoft. She then co-authored the highly influential Gender Shades paper with Dr. Timnit Gebru, and the follow-up Actionable Auditing paper with Agent Deb Raji that put Amazon on notice. As an artist, she began creating pieces to humanize AI harms, with her award-winning visual spoken word poem "AI, Ain't I A Woman?" shown in exhibitions around the world. This combination of art and research won the support of hundreds of other researchers advocating for more equitable and accountable technology. Exclusion and discrimination extend well beyond facial recognition technologies, affecting everything from healthcare and financial services to employment and criminal justice.
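At its core, an audit in the spirit of Gender Shades compares a model's error rates across intersectional subgroups on a benchmark labeled by gender and skin type. The Python sketch below is a minimal illustration of that idea, not the actual study's pipeline; the Face records, the toy benchmark, and the stub classify_gender function are all hypothetical stand-ins for a real labeled dataset and a commercial classifier under test.

```python
from collections import defaultdict
from dataclasses import dataclass

# Toy stand-ins: a real audit would load a balanced, labeled benchmark
# and call the commercial classifier being audited.
@dataclass
class Face:
    image: object     # placeholder for pixel data
    gender: str       # ground-truth label from the benchmark
    skin_type: str    # e.g., "lighter" (Fitzpatrick I-III) or "darker" (IV-VI)

def classify_gender(image) -> str:
    """Stub for the model under audit; always answers 'male' here."""
    return "male"

benchmark = [
    Face(None, "male", "lighter"), Face(None, "female", "lighter"),
    Face(None, "male", "darker"),  Face(None, "female", "darker"),
]

errors, totals = defaultdict(int), defaultdict(int)
for face in benchmark:
    group = (face.skin_type, face.gender)        # intersectional subgroup
    totals[group] += 1
    if classify_gender(face.image) != face.gender:
        errors[group] += 1

# Large gaps between subgroups (e.g., darker-skinned women vs.
# lighter-skinned men) are the signal an audit like this surfaces.
for group in sorted(totals):
    print(f"{group}: {errors[group] / totals[group]:.0%} error "
          f"({totals[group]} faces)")
```

The key design point is disaggregation: a single overall accuracy number can look excellent while one subgroup bears nearly all of the errors.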
The deeper we dig, the more remnants of prejudice we will find in our technology. We cannot afford to look away this time because the stakes are simply too high. We risk losing the gains made with the civil rights movement and other movements for equality under the false assumption of machine neutrality.
In the US, a widely used healthcare algorithm falsely concludes that Black patients are healthier than equally sick white patients. AI used to inform hiring decisions has been shown to amplify existing gender discrimination. Law enforcement agencies are rapidly adopting predictive policing and risk assessment technologies that reinforce patterns of unjust racial discrimination in the criminal justice system. AI systems shape the information we see on social media feeds and can perpetuate disinformation when they are optimized to prioritize attention-grabbing content. The examples are endless.
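To make the healthcare example concrete: that finding (Obermeyer et al., Science, 2019) was uncovered by comparing a direct measure of illness across racial groups among patients the algorithm had scored identically; the score turned out to track healthcare spending rather than health need. The sketch below shows the shape of that check with made-up records and a fixed illustrative score of 7; none of the numbers are real data.

```python
from statistics import mean

# Hypothetical records: (group, algorithm risk score, count of active
# chronic conditions as a direct measure of how sick the patient is).
patients = [
    ("white", 7, 2), ("white", 7, 2), ("white", 7, 3),
    ("Black", 7, 4), ("Black", 7, 4), ("Black", 7, 5),
]

# Among patients the algorithm scored identically, a direct health
# measure should also be roughly equal. A persistent gap means the
# score is tracking something other than health.
by_group = {}
for group, score, conditions in patients:
    by_group.setdefault(group, []).append(conditions)

for group, conditions in by_group.items():
    print(f"{group} patients at risk score 7: "
          f"{mean(conditions):.1f} chronic conditions on average")
```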
We cannot take these systems for granted. While technology can give us connectivity, convenience, and access, we need to retain the power to make our own decisions. Are we trading convenience for shackles? People must have a voice and a choice in how AI is used.
In the U.S., the teams designing these systems are not inclusive. Less than 20% of people in technology are women, and less than 2% are people of color. Meanwhile, one in two adults (that’s more than 130 million people) has their face in a facial recognition network. Those databases can be searched and analyzed by unaudited algorithms without any oversight, and the implications are massive.
Beyond inclusive and ethical practices in designing and building algorithms, we demand more transparency when the systems are being used. We need to know what the inputs are and how they were sourced, how performance is measured, the guidelines for testing, and the potential implications, risks, and flaws when applying them to real-life situations. This isn’t a matter of privacy preference; it’s a violation of our civil liberties when corporations make money off of people’s faces and people’s lives are put at risk without their consent.
Sometimes respecting people means making sure your systems are inclusive, such as in the case of using AI for precision medicine. At times it means respecting people’s privacy by not collecting any data. And it always means respecting the dignity of an individual.
As an organization highlighting critical issues in commercial systems, we constantly face the risk of retaliation and attempts at silencing. While some companies react positively to our findings, others do not. Thankfully, we’ve grown as a movement and have the support of the most respected AI researchers, organizations, and thousands of “Agents of Change” who believe in our mission.
We saw the power of collaboration in a face-off with Amazon after it tried to discredit peer-reviewed research. Following Dr. Joy Buolamwini's rebuttals (published here and here), more than 70 researchers defended the work, and the National Institute of Standards and Technology released a comprehensive study showing extensive racial, gender, and age bias in facial recognition algorithms, validating the concerns the research had raised.
If you believe we all deserve equitable and accountable AI, then you can become an agent of change too. Whether you’re an enthusiast, engineer, journalist, or policymaker, we need you. Contact us or act now.