The cover of the bug bounties report has stylized artwork depicting digital ‘bug hunting,’ with bugs made out of circuits crawling and flying around a virtual grid landscape, electro-organic plants, cables snaking through the background, and multiple eyes floating in the sky. The title reads “Algorithmic Justice League - Bug Bounties for Algorithmic Harms?: Lessons from cybersecurity vulnerability disclosure for algorithmic harms discovery, disclosure, and redress - January 2022.”
new report from AJL

BUG BOUNTIES FOR ALGORITHMIC HARMS?

download the paper
social media toolkit

Paying hackers to disclose bugs was once considered radical; now, it’s common.

‘Bug bounty’ programs (BBPs) for cybersecurity vulnerabilities, in which participants are rewarded for identifying exploitable flaws (security ‘bugs’) in software or hardware, are increasingly popular. Google, the Department of Defense, Starbucks, and hundreds of other companies and organizations regularly use BBPs to buy security flaws from hackers, and a growing number of people participate, most often via platforms such as HackerOne or Bugcrowd.

Recently, some companies have adopted BBPs to address issues beyond security bugs. For example, Rockstar Games, Twitter, and others have begun to use BBPs to address various kinds of socio-technical problems, including algorithmic harm. Yet the conditions under which BBPs might be useful for finding, exposing, and stopping algorithmic harm remain relatively unexamined. In Bug Bounties For Algorithmic Harms? Lessons from Cybersecurity Vulnerability Disclosure for Algorithmic Harms Discovery, Disclosure, and Redress, AJL researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Dr. Joy Buolamwini dive deep into the question "How might we apply BBPs to areas beyond cybersecurity, including algorithmic harm?"

inside the report

The authors draw insights from interviews with field leaders, a literature review, and an in-depth investigation of BBPs. The report includes:

five key takeaways

with recommendations for how to more effectively foster the discovery, disclosure, and mitigation of harm from algorithmic systems

summaries of interviews

with top practitioners in the field, including the CEO and CTO of the platform HackerOne and the security researchers who ran the Hack the Pentagon program, among others

a design framework

for vulnerability disclosure mechanisms

a case study of Twitter’s recent bias bounty pilot

that scrutinizes the project's promises and limitations

25 concrete design lessons

to improve BBPs in the future


cover art and illustrations by clarote

KEY TAKEAWAYS

We synthesized five high-level findings, summarized below; the corresponding recommendations can be found in the full report.

#1 PREPARE TO INCLUDE SOCIO-TECHNICAL CONCERNS.

A handful of players in the BBP ecosystem have been slowly expanding their programs to include socio-technical issues. For example, BBPs and related programs have been set up to identify instances of data abuse (Google and Facebook), systematic errors in video game cheat-flagging algorithms (Rockstar Games), bias in image cropping algorithms for social feeds (Twitter), and deficient privacy protections in mobile software relative to vendors’ claims (Corellium). This progression hasn’t happened in a structured way, and no clear best practices have emerged yet, but the trend is likely to continue accelerating. We provide a set of design levers and recommendations for shaping BBPs for algorithmic harm discovery and mitigation.

#2 LOOK ACROSS THE LIFECYCLE.

The allure of ‘bug bounties’ can obscure the fact that they are just one mechanism for enhancing cybersecurity, one that fits within a broader, ongoing arc of field maturation and organizational development. A ‘technology product lifecycle’ lens, analogous to the ‘secure development lifecycle’ in security, should be applied to algorithmic harms: to be responsive to reports of algorithmic harm, organizations need the ‘digestive systems’ to prioritize, assess, and act on those findings.

#3 NURTURE THE COMMUNITY OF PRACTICE.

Bug bounty platforms play both a direct and indirect role in nurturing communities of practice. For example, they may provide educational materials and tools, and motivate community members to independently develop and share resources. We believe there is a strong need for similar, well-curated, accessible resources to help nurture the algorithmic harms research community. We caution against community-building approaches that exclude those from fields outside of computer science, including community advocates, while unduly elevating technical skills and techno-solutionism.

#4 INTENTIONALLY DEVELOP A DIVERSE, INCLUSIVE COMMUNITY.

A small number of bug bounty platforms dominate the vulnerability disclosure ecosystem, leading to concerns about the commodification of security research and the lack of diversity among these platforms’ contributors. Many vulnerability discovery mechanisms can be understood as a form of outsourcing, with compensation arrangements that are often nonexistent, uncertain, or incomplete. Deploying bug bounties successfully for algorithmic harms will require serious effort to recruit and retain diverse communities of researchers and community advocates, and to ensure fair compensation for their work.

#5 FOSTER AND PROTECT PARTICIPATORY, ADVERSARIAL RESEARCH, AND GUARANTEE SOME FORM OF PUBLIC DISCLOSURE.

Vulnerability disclosure continues to struggle with models for compensated, safe, third-party or adversarial research. Platforms that operate BBPs and act as intermediaries between hackers and target organizations play an outsized role in the current BBP ecosystem, with target organizations in turn typically afforded a right of non-disclosure. The business models and funding sources of BBP platforms tend to bend their decision-making towards the interests of target organizations, for example, in determining whether vulnerability reports can be publicly released. Greater protection for third-party algorithmic harms research is sorely needed.


