AI surveillance affects many parts of everyday life, from jobs to healthcare. Surveillance technologies like trackers and biometrics can be built on top of existing cameras and online tools. Companies and governments that create and implement these technologies often avoid being held responsible for negative effects including misidentification and privacy violations.
"To the extent that you do not trust your government, you do not want your government to build these systems." — Wired
If you believe you’ve been harmed by Artificial Intelligence, please fill out our harms report form. We will get back to you within 48 hours on weekdays and 72 hours over the weekend.
If you are seeking legal advice or representation, consider reaching out to the ACLU office in your state.
The Gender Shades Justice Award recognizes individuals who experienced an AI harm, spoke out about their experiences, and worked to prevent future harm. The inaugural award was presented to Robert Williams for his efforts to address the Detroit Police Department’s use of facial recognition technology.
White Collar Crime Risk Zones is a machine learning-enabled map that predicts where financial crimes are likely to happen in the U.S. It was created in response to predictive policing systems that unfairly target communities of color.
StopSpying.org, a project associated with Amnesty International’s Ban the Scan campaign, informs people about global surveillance technologies. You can sign their petition against mass surveillance worldwide.
The Project for Privacy and Surveillance Accountability works to protect privacy and civil rights. Their Scorecard rates members of Congress on their privacy and surveillance policy.
The Electronic Frontier Foundation is a nonprofit helping individuals defend their online privacy with tools like the Privacy Badger browser add-on to block online trackers, the Spot the Surveillance AR tool that teaches you to identify surveillance technologies, and the Atlas of Surveillance.
Fight for the Future is a group of activists and technologists who work on AI and data privacy projects like Stop Data Broker Abuse, Cancel Ring Nation, Stop Endangering Abortion Seekers, and Cancel Amazon + Police Partnerships.
The American Dragnet report from Georgetown Law’s Center on Privacy and Technology investigates how U.S. Immigration and Customs Enforcement (ICE) uses surveillance data.
The NYU School of Law’s Policing Project promotes fairness in law enforcement by working on legal and policy solutions to help regulate the use of AI by the police.
In 2020, the Brennan Center released a predictive policing report explaining what the technology is and major concerns with its increased use.