‘Bug bounty’ programs (BBPs) for cybersecurity vulnerabilities, wherein participants are rewarded for identifying exploitable flaws (or security ‘bugs’) in software or hardware, are increasingly popular. Google, the Department of Defense, Starbucks, and hundreds of other companies and organizations regularly use BBPs to buy security flaws from hackers, and a growing number of people participate, most often via platforms such as HackerOne or Bugcrowd.
Recently, some companies have extended BBPs beyond security bugs. Rockstar Games, Twitter, and others have begun to use them to address various kinds of socio-technical problems, including algorithmic harm. Yet the conditions under which BBPs might be useful for finding, exposing, and stopping algorithmic harm remain relatively unexamined. In Bug Bounties For Algorithmic Harms? Lessons from Cybersecurity Vulnerability Disclosure for Algorithmic Harms Discovery, Disclosure, and Redress, AJL researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Dr. Joy Buolamwini dive deep into the question: "How might we apply BBPs to areas beyond cybersecurity, including algorithmic harm?"
The authors draw insights from interviews with field leaders, a literature review, and an in-depth investigation of BBPs. The report includes:

- Five high-level findings, with recommendations for how to more effectively foster the discovery, disclosure, and mitigation of harm from algorithmic systems
- Interviews with top practitioners in the field, including the CEO and CTO of the platform HackerOne and the security researchers who ran the Hack the Pentagon program, among others
- A design framework for vulnerability disclosure mechanisms
- A case study of Twitter's algorithmic bias bounty that scrutinizes the project's promises and limitations
- Recommendations to improve BBPs in the future
We synthesized five high-level findings, summarized below; the corresponding recommendations can be found in the full report.
1. A handful of players in the BBP ecosystem have slowly been expanding their programs to include socio-technical issues. For example, BBPs and related programs have been set up to identify instances of data abuse (Google and Facebook), systematic errors in video game cheat-flagging algorithms (Rockstar Games), bias in image-cropping algorithms for social feeds (Twitter), and deficient privacy protections in mobile software relative to vendors’ claims (Corellium). This expansion has not happened in a structured way, and no clear best practices have yet emerged, but the trend is likely to accelerate. We provide a set of design levers and recommendations for shaping BBPs for algorithmic harm discovery and mitigation.
2. The allure of ‘bug bounties’ can obscure the fact that they are just one mechanism for enhancing cybersecurity, one that sits within a broader, ongoing arc of field maturation and organizational development. A lifecycle lens like security’s ‘secure development lifecycle’ should be applied to algorithmic harms across the technology product lifecycle: to be responsive to reports of algorithmic harm, organizations need the ‘digestive systems’ to prioritize, assess, and act on those findings.
3. Bug bounty platforms play both a direct and an indirect role in nurturing communities of practice: they provide educational materials and tools, and they motivate community members to independently develop and share resources. We believe there is a strong need for similarly well-curated, accessible resources to help nurture the algorithmic harms research community. We caution against community-building approaches that unduly elevate technical skills and techno-solutionism while excluding people from fields outside computer science, including community advocates.
4. A small number of bug bounty platforms dominate the vulnerability disclosure ecosystem, raising concerns about the commodification of security research and the lack of diversity among these platforms’ contributors. Many vulnerability discovery mechanisms can be understood as a form of outsourcing, with compensation that is often uncertain, incomplete, or absent altogether. Deploying bug bounties successfully for algorithmic harms will require serious effort to recruit and retain diverse communities of researchers and community advocates, and to ensure fair compensation for their work.
5. Vulnerability disclosure continues to struggle with models for compensated, safe, third-party or adversarial research. Platforms that operate BBPs and act as intermediaries between hackers and target organizations play an outsized role in the current ecosystem, and target organizations are typically afforded a right of non-disclosure. The business models and funding sources of BBP platforms tend to bend their decision-making toward the interests of target organizations, for example in determining whether vulnerability reports can be publicly released. Greater protection for third-party algorithmic harms research is sorely needed.