Through a combination of art, research, policy guidance and media advocacy, the Algorithmic Justice League is leading a cultural movement towards equitable and accountable AI. This requires us to look at how AI systems are developed and to actively prevent their harmful use. We aim to empower communities and galvanize decision makers to take action that mitigates the harms and biases of AI.
Equitable AI requires that people have agency and control over how they interact with an AI system. To have agency, people must first be aware of how these systems are used all around them — for example, at airports, stadiums, schools, hospitals and in hiring and housing — of who is involved in creating them — from business, government and academia — and of the risks and potential harms they pose.
Equitable AI requires securing affirmative consent from people on how or whether they interact with an AI system. The idea is that people understand exactly how their data will be used and that, if consent is given, their data is used only for that permitted purpose. The default for affirmative consent is “opt in,” and people who elect not to opt in must not suffer any penalty or denial of access to platforms or services as a result. Unlike the terms of service that tech companies require people to click through to use their platforms, affirmative consent for AI cannot be coerced.
In addition to providing agency, equitable AI respects human life, dignity and rights. Upholding these values requires prohibiting certain government and commercial uses of AI. This includes AI that would identify targets where the use of lethal force is an option (for example, when someone is stopped by the police) and the use of AI in government and citizen-led surveillance. Justice requires that we prevent AI from being used by those with power to increase their absolute level of control, particularly where it would automate long-standing patterns of injustice such as racial profiling in law enforcement, gender bias in hiring and overpolicing of immigrant communities. Justice means protecting those who would be targeted by these systems from their misuse.
For an AI system to demonstrate meaningful transparency, it must provide an explanation of how it works, how it was designed, and for what specific purpose. Critically, meaningful transparency allows people to clearly understand the intended capabilities and known limitations of the AI.
To meet this standard, companies and governments must share information about how AI is used in their own decision-making processes and how it is sold to others. The goal is for people to understand, every time they encounter an AI system, the societal risks it poses and how their data is being used by people in power to make decisions that affect them. Sharing this information may be supported by reporting requirements mandated through law or agreed to through codes of conduct.
AI systems are constantly evolving. As a result, accountable AI requires continuous oversight by independent third parties. To support continuous oversight, there must be laws that require companies and government agencies deploying AI to meet minimum requirements, for example: maintaining ongoing documentation, submitting to audits, and allowing civil society organizations access for assessment and review.
Accountable AI provides people who have been harmed with access to remedy, meaning there is a working pathway for people to contest and correct a harmful decision made by artificial intelligence. For example, if an AI tool incorrectly denied a welfare benefits check, remedy would entail an easy way for the recipient to call attention to the error and receive payment plus interest for the lost time. If an AI system were suspected of disqualifying a job applicant based on gender or race, remedy would allow the applicant to discover how the decision was made and provide a basis for challenging that decision in court.
Many different terms have been used to describe policy approaches to AI. We want to be clear about what we mean by equitable and accountable AI and how it differs from these other approaches.
The notion of “Ethical AI” has been leveraged by big tech companies — investors and executives strategically aligned with academia and government — to push for voluntary principles over government regulation. Using ethics is not problematic in itself, but it has led to a proliferation of “AI Principles” with limited means for translating those principles into practice. A system of AI ethics allows companies to be accountable only to rules they have set for themselves; the ball is in their court from beginning to end. Appeals to ethical AI can also be leveraged by government to justify questionable policies that have not been settled in law. From our perspective this is a limited approach because it creates no mandatory requirements and bans no uses of AI. Our focus is instead on action that bridges the gap from principles to practice.
While calls for Inclusive AI may be well-intended, inclusion alone does not ensure progress towards mitigating harm. At times, respecting life, dignity and rights may require that a system gather more data with affirmative consent, for example, to support accuracy in precision medicine across diverse groups. At other times, including more data can simply make a system more effective at unfairly subjecting vulnerable populations to additional targeted scrutiny. In service of “inclusion,” data may also be collected in violation of privacy and without consent.
We have an interest in creating robust AI systems where appropriate, but it is critical to evaluate inclusion — how the data is collected and the purpose for which it is being used. On the one hand, we may seek to improve AI to limit the very serious consequences of bias and discrimination — for example, a self-driving car that fails to detect certain pedestrians or a greater likelihood that people with darker skin are misidentified as criminal suspects by the police. At the same time, we must continue to call into question whether a given use is supported by our values and should therefore be permitted at all. If we focus only on improving datasets and computational processes, we risk creating systems that are technically more accurate but also more capable of being used for mass surveillance and to enhance discriminatory policing practices that allow for the use of lethal force.
Technical standards are not enough to ensure that AI systems will not be deployed to threaten civil liberties and amplify existing patterns of discrimination. The use of facial recognition in surveillance threatens the civil liberties of all citizens, including freedom of expression, freedom of association and due process.
All of our recommendations supporting equitable and accountable AI focus on process, embedded at every level of operations, rather than on any individual product. An effective process provides a reliable standard for outside evaluation and can be applied no matter what specific technology is in question. It also allows the voice and choice of the people who will be impacted to be incorporated throughout the lifecycle of an AI system and in the ultimate decisions regarding its use. Companies and governments must evaluate their practices against these priorities.