Companies use proprietary algorithms in finance for risk assessment, creditworthiness determinations, and identity verification. However, these uses carry the risk of exposing consumers to identity theft, furthering discriminatory lending practices, and driving fraud.
Discriminatory lending can disproportionately impact people who are already vulnerable, including low-income borrowers, women, and people of color. Additionally, bad actors can use increasingly prevalent generative AI-powered scams to manipulate and defraud consumers.
The use of education data in credit decisions is particularly troublesome given the continuing pattern of disparate access to education.
Consumer Reports
Where you attend college could be costing you more to borrow and refinance education loans, report says
If you believe you’ve been harmed by Artificial Intelligence, please fill out our harms report form. We will get back to you within 48 hours on weekdays and 72 hours over the weekend.
If you are seeking legal advice or representation, consider reaching out to the ACLU office in your state.
In 2023, AJL launched the No Face, No Case campaign to challenge the IRS’s use of ID.me for identity verification. Using ID.me requires you to waive legal rights and give up personal data. However, refusing the service could keep you from accessing critical services and benefits.
In a 2023 statement, the FTC addressed privacy, data security, and bias concerns with new machine learning systems. It warned that unproven claims, lack of accountability, privacy violations, and other bad business practices can violate the FTC Act and can be reported at ReportFraud.ftc.gov.
In a series of studies on disparities in access to insurance, the CFA found that state-mandated auto coverage was more expensive for some drivers depending on their income, race, and geographic location.
The CFPB provides protections for borrowers, requiring creditors to have specific and accurate reasons for adverse actions, like denying credit. This transparency requirement provides some protections against the use of complex and opaque crediting algorithms.
Stay up to date with the movement towards equitable and accountable AI.
SIGN UP