Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023) by Dr. Joy Buolamwini details AI harms in emerging technologies. The book provides examples of deepfake harms (See Chapter 9, Pages 106-112).
The #FreedomFlyers campaign raises awareness about the TSA's expanding use of facial recognition at airports. Make your voice heard by filling out a TSA Scorecard.
In Design Justice, Dr. Sasha Costanza-Chock, Senior Research Advisor to AJL, talks about designing technology that works for everyone. The book discusses how airport security scanners are biased against transgender people because their design encodes outdated, binary ideas about gender.
The Gender Shades Justice Award recognizes individuals who experienced an AI harm, spoke out about their experiences, and worked to prevent future harm. The inaugural award was presented to Robert Williams for his efforts in addressing Detroit PD’s use of facial recognition technologies.
The Coded Bias film explores the fallout of AJL founder Dr. Joy Buolamwini’s discovery that facial recognition technologies don’t always work well for darker skin tones or female-appearing faces. Through the story of a Houston teacher who was almost fired, it warns about the risks of over-relying on automated tools to make important decisions about education.
The Coded Bias documentary, released in 2020, tells the story of Dr. Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately. The film highlights the story of a building management company in Brooklyn that planned to implement facial recognition technology to allow tenants to enter their homes.
In 2019, algorithmic bias researchers, including Dr. Joy Buolamwini, Dr. Timnit Gebru, and Inioluwa Deborah Raji, submitted an amicus letter in support of the Brooklyn tenants who were pushing back against the use of facial recognition technology in their building.
In 2024, AJL launched the #MyWorkMyRights Campaign to advocate for Consent, Compensation, Control, and Credit for writers in the age of generative AI. Share your story with AJL, post on social media, or add your name to the Authors Guild Open Letter.
Coded Bias explores the fallout of AJL founder Dr. Joy Buolamwini’s discovery that facial recognition struggles to see dark-skinned faces accurately. The film underscores the dangers of relying on AI to make employment decisions through the story of an award-winning teacher who nearly loses his job because of a poor assessment from an automated tool.
In 2021, the EEOC launched an initiative to examine the effects of employment-related AI tools and offer guidance on how to ensure algorithmic fairness. Their joint statement outlines how federal agencies will ensure employers use these tools fairly and responsibly.
Upturn is a nonprofit that drives policy change to advance equity and protect people’s opportunities in the design, governance, and use of technology. Their 2018 report on fairness in hiring algorithms is a key resource for understanding the landscape of different tools.
Georgetown Law’s Center on Privacy and Technology drafted the Worker Privacy Act bill, which outlines protections against the invasive collection of employees’ data.
Have I Been Trained allows users to discover if their work has been used to train AI. Users can then opt out of future training by adding their work to the Do Not Train registry.
The Worker Info Exchange helps gig workers access data related to their employment at companies such as Uber, Amazon Flex, Bolt, and others. Their published research on tech and the gig economy provides insights and recommendations for advocacy.
Coworker.org published a framework, Little Tech is Coming for Workers, for reclaiming and building worker power. Their Bossware and Employment Tech database compiles more than 500 tech products impacting employees.
The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act allows victims of deepfake abuse to sue the creators of nonconsensual deepfakes. Passed by the Senate in 2024, it would become the first federal law protecting against harmful deepfakes if enacted.
The No Fakes Act, introduced in the U.S. Senate in Fall 2023, sets new standards for image rights, protecting people from having deepfake digital twins made of them without consent.
The National Security Agency, Federal Bureau of Investigation, and Cybersecurity and Infrastructure Security Agency released a Cybersecurity Information Sheet providing information about synthetic media threats and how deepfake technology can be used for malicious purposes.
Detect Fakes, a collaboration between the MIT Media Lab and Northwestern University researchers, and Media Literacy in the Age of Deepfakes, from the MIT Center for Advanced Virtuality, are two projects that help people better understand and identify AI-created content.
ControlAI’s Ban Deepfakes Campaign calls for making deepfakes illegal and holding creators accountable. Individuals can sign an open letter to urge lawmakers to take steps to protect people from deepfake harms.
The film Another Body tells the story of Taylor Klein (a pseudonym), a college student who discovers that a classmate created an explicit deepfake using her image. Taylor’s story led to the creation of the #MyImageMyChoice movement, which raises awareness about explicit deepfakes and supports victims of online image abuse.
In Predictive Inequity in Object Detection, Georgia Tech researchers showed that the pedestrian-detection systems used in self-driving cars perform less accurately for people with darker skin tones.
In Bias Behind the Wheel, researchers based in China, the United Kingdom, and Singapore analyzed how self-driving cars’ pedestrian-detection systems can perform less reliably depending on attributes like age and gender.
In 2024, Alabama passed a law to regulate the use of self-driving cars, requiring companies and drivers to register vehicles with automated driving systems.
The EFF has compiled a list of resources that help individuals figure out what data their cars are tracking and how to opt out of sharing where possible.
This project tracks the privacy practices of car manufacturers, rating companies on how they handle data so consumers can make informed choices.
The Electronic Frontier Foundation (EFF) studied eight days of Automated License Plate Reader (ALPR) data to show how ALPRs work and push for more police accountability.
White Collar Crime Risk Zones is a machine learning-enabled map that predicts where financial crimes are likely to happen in the U.S. It was created in response to predictive policing systems that unfairly target communities of color.
StopSpying.org is a project with Amnesty International’s Ban the Scan Campaign that informs people about global surveillance tech. You can sign their petition against mass surveillance worldwide.
The Project for Privacy and Surveillance Accountability works to protect privacy and civil rights. Their Scorecard rates members of Congress on their privacy and surveillance policy.
The Electronic Frontier Foundation is a nonprofit helping individuals defend their online privacy with tools like the Privacy Badger browser add-on to block online trackers, the Spot the Surveillance AR tool that teaches you to identify surveillance technologies, and the Atlas of Surveillance, a database of surveillance technologies used by U.S. law enforcement.
Fight for the Future is a group of activists and technologists who work on AI and data privacy projects like Stop Data Broker Abuse, Cancel Ring Nation, Stop Endangering Abortion Seekers, and Cancel Amazon + Police Partnerships.
The American Dragnet report from Georgetown Law’s Center on Privacy and Technology investigates how U.S. Immigration and Customs Enforcement (ICE) uses surveillance data.
The NYU School of Law’s Policing Project promotes fairness in law enforcement by working on legal and policy solutions to help regulate the use of AI by the police.
In 2020, the Brennan Center released a predictive policing report explaining what the technology is and outlining major concerns about its increased use.
The FTC published a statement on EdTech and the Children’s Online Privacy Protection Act, making it clear that it’s illegal for companies to compromise children’s privacy rights when using educational technology.
The Center for Democracy and Technology has published numerous resources on protecting student privacy.
Defend Digital Me is a nonprofit organization that publishes research on student privacy and the use of AI. In 2022, they released “The State of Biometrics 2022: A Review of Policy and Practice in UK Education” report.
Digital Promise, a global nonprofit, created the AI Digital Equity Framework to help schools make informed decisions about using AI technology responsibly.
This project, organized by parent advocates, provides templates parents can use to ask their children’s schools for information on how data is being used by educational technology.
The EdTech Equity Project offers toolkits to help schools, tech developers, and community members make sure AI technology is fair and works for everyone.
The Electronic Frontier Foundation’s Red Flag Machine quiz and accompanying research report show how student monitoring software, like GoGuardian, can make significant errors in the online content it flags.
After the release of Biden’s Executive Order on Artificial Intelligence, the U.S. Department of Education began publishing guidance to help schools use AI technology to benefit all students while also protecting their privacy.
The NDLSA’s Report on Concerns Regarding Online Administration of Bar Exams highlights the challenges disabled students faced with e-proctoring tools when taking bar exams remotely during the COVID-19 pandemic. It focuses on concerns like AI bias and privacy.
In 2023, the National Low Income Housing Coalition (NLIHC) submitted a report on unjust automated screening processes to the Consumer Financial Protection Bureau and the Federal Trade Commission. The report outlines the various ways these systems discriminate against renters, particularly those with low incomes.
The Countering Tenant Screening Initiative collects tenant screening reports to hold tenant screening algorithms accountable and to teach individuals about how tenant screening works.
The National Fair Housing Alliance published Method for Improving Mortgage Fairness, a report on improving the fairness of mortgage underwriting models through methods like Distribution Matching.
Unmasking AI: My Mission to Protect What Is Human in a World of Machines (2023) by Dr. Joy Buolamwini details AI harms and oppression. The book provides examples of healthcare biases (See Chapter 5, Pages 46-55).
The Coalition for Health AI (CHAI) is a nonprofit focused on the appropriate creation, evaluation, and use of AI in healthcare, particularly for health equity. They publish reports about how to drive high-quality healthcare by developing credible, fair, and transparent health AI systems.
The Department of Health and Human Services recently shared its plan for Promoting the Responsible Use of Artificial Intelligence in the Administration of Public Benefits, as well as its Guiding Principles to Address the Impact of Algorithmic Bias on Racial and Ethnic Disparities in Health and Health Care.
The Department of Veterans Affairs has established the National Artificial Intelligence Institute to advance AI research and development aimed at improving the health of veterans.
The World Privacy Forum has published several reports related to health privacy, healthcare, and biometrics, including Risky Analysis: Assessing and Improving AI Governance Tools and Covid-19 and HIPAA: HHS’s Troubled Approach to Waiving Privacy and Security Rules for the Pandemic.
In 2023, AJL launched the No Face, No Case campaign to challenge the IRS’s use of ID.me for identity verification. Using ID.me requires you to waive legal rights and give up personal data. However, refusing the service could keep you from accessing critical services and benefits.
In a 2023 statement, the FTC addressed privacy, data security, and bias concerns with new machine learning systems. They warned that unproven claims, lack of accountability, privacy violations, and other bad business practices can violate the FTC Act and can be reported at ReportFraud.ftc.gov.
In a series of studies on disparities in access to insurance, the Consumer Federation of America (CFA) found that state-mandated auto coverage was more expensive for some drivers depending on their income, race, and geographic location.
The CFPB provides protections for borrowers, requiring creditors to give specific and accurate reasons for adverse actions, like denying credit. This transparency requirement offers some protection against the use of complex and opaque credit-scoring algorithms.