
Radical Proposal: Third-Party Auditor Access for AI Accountability

A scholar proposes legal protections and regulatory involvement to support organizations that uncover algorithmic harm.


Deb Raji | Art created using code by Sergio Albiac

Algorithmic failures have serious consequences: A well-respected teacher is fired when an automated assessment tool gives her a low rating; a Black man is arrested after being misidentified by a police department’s facial recognition tool; a Latinx businessperson is denied credit by an AI system that relies on information about where she lives rather than her individual creditworthiness.

These sorts of algorithmic abuses are often uncovered and publicized by third-party algorithmic auditors, says Deb Raji, a fellow at the Mozilla Foundation and the Algorithmic Justice League and a PhD student at UC Berkeley who studies algorithmic accountability and evaluation. These auditors, who scrutinize AI systems from the outside, include civil society groups, law firms, investigative journalists, and academic researchers.

Unfortunately, Raji says, “despite the important impact these third-party auditors have had on deployed AI systems, they are not well supported in their work and are not afforded any legal protections.”

Indeed, many companies have become adept at dodging the poking and prodding of such outsiders, who need access to their AI systems in order to determine how they work, Raji says. Some have even resorted to legal action, invoking anti-hacking laws to threaten criminal prosecution or filing civil suits to stop auditors from gathering data.

To support the important work done by third-party auditors, Raji proposes a series of policy interventions that could make rigorous third-party algorithmic audits a reality in the U.S. The proposal involves three key components that would enable and support third-party auditor access and protection: a national incident reporting system to prioritize audits; an independent audit oversight board to certify auditors, set audit standards, and oversee the audit process; and mandated, regulator-facilitated data access for certified third-party auditors.
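To make the first component concrete, here is a minimal sketch, in Python, of what one record in such a national incident reporting system might look like, along with a toy triage rule for prioritizing audits. The proposal does not specify a schema; the class, field names, and scoring rule here (IncidentReport, substantiating_reports, audit_priority) are hypothetical illustrations only.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class HarmCategory(Enum):
    # Harm types within the scope Raji describes: bias plus
    # ecological, safety, privacy, and transparency failures.
    BIAS = "bias"
    SAFETY = "safety"
    PRIVACY = "privacy"
    ECOLOGICAL = "ecological"
    TRANSPARENCY = "transparency"

@dataclass
class IncidentReport:
    """One entry in a hypothetical national incident registry."""
    system_name: str        # the deployed AI system involved
    deployer: str           # organization operating the system
    harm: HarmCategory
    affected_group: str     # community reporting the harm
    description: str
    reported_on: date
    substantiating_reports: int = 1  # similar reports filed so far

def audit_priority(report: IncidentReport) -> int:
    """Toy triage score: more corroborating reports means higher
    priority for assignment to a certified third-party auditor."""
    return report.substantiating_reports

# Example: sort a queue of incidents so an oversight board
# sees the most-corroborated harms first.
queue = [
    IncidentReport("FaceMatch", "City PD", HarmCategory.BIAS,
                   "Black residents", "misidentification arrest",
                   date(2021, 11, 9), substantiating_reports=14),
    IncidentReport("CreditScorer", "LendCo", HarmCategory.BIAS,
                   "Latinx applicants", "location-based denial",
                   date(2021, 11, 10), substantiating_reports=3),
]
for r in sorted(queue, key=audit_priority, reverse=True):
    print(r.system_name, r.harm.value, audit_priority(r))
```

In practice an oversight board would presumably weigh much more than report counts, such as severity and the vulnerability of the affected group; the point is only that a structured registry makes audit prioritization itself transparent and reviewable.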

Raji presented the proposal at Stanford HAI’s “Policy and AI: Four Radical Proposals for a Better Society” conference, held Nov. 9-10, 2021.

Third-Party Access for Algorithmic Accountability: How It Works

Companies often use employees or consultants to perform internal audits called algorithmic impact assessments. But such audits are typically done before an algorithm is deployed in the wild, Raji says. And they tend to focus on meeting the needs of the intended users of the system – a police department, for example – rather than the needs of potentially impacted communities. Moreover, internal audits are rarely publicized, and companies involved in this space have often provided misleading information. “They have not been reliable sources of information about the effectiveness of their own systems,” she says.

 


By contrast, third-party audits are conducted by independent entities that often represent an impacted group and have no contractual relationship with the company. They are directed at a specific evaluation question whose answer carries real consequences for the audited organization. And they can address harms that go beyond bias, including ecological, safety, or privacy impacts, as well as a system’s failure to live up to appropriate standards for transparency, explainability, and accountability.

“It’s really important to have these kinds of audits because they provide concrete evidence focused on the concerns of an affected population,” Raji says. 
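As one concrete illustration of what that evidence can look like, here is a minimal sketch assuming an auditor has only black-box access: observed inputs and outputs from a deployed system, plus independently collected ground truth. The function name and the synthetic numbers are hypothetical; a real audit would involve careful sampling and far larger datasets.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute per-group false positive rates from observed
    (group, predicted_positive, actually_positive) outcomes."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Synthetic outcomes an auditor might collect from a deployed
# face-matching system (numbers invented for illustration).
observations = (
    [("group_a", True, False)] * 8 + [("group_a", False, False)] * 92
    + [("group_b", True, False)] * 1 + [("group_b", False, False)] * 99
)
rates = false_positive_rates(observations)
disparity = max(rates.values()) / min(rates.values())
print(rates, f"disparity ratio: {disparity:.1f}x")
```

A disparity ratio like the 8x gap in this toy example is the kind of concrete, population-focused evidence Raji describes, and producing it requires exactly the system access her proposal would guarantee.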

How to Make Third-Party Auditor Protections a Reality

The Federal Trade Commission could play a key role in implementing this proposal, Raji says. “They’re an agency that has a lot of access granted to them already through the FTC Act, and there’s an opportunity for them to share that access with qualified third-party auditors.” In addition, in its consumer protection role, the FTC already maintains an incident database and a vetting process for third-party auditors, as well as the legal infrastructure to act as an enforcement agency. “They are positioned well to execute on this proposal in the next couple of years,” she says.

Raji concedes that algorithmic auditing is a nascent field with no professional codes of conduct or standards for what constitutes a thorough audit. Nevertheless, she says, many affected populations feel an urgency to address the ways AI is harming them right now, so it’s important that her proposal be implemented quickly. 

“I think it’s step zero to allow qualified representatives an opportunity to advocate on behalf of affected communities – to ask questions about the technology that’s impacting them; to collect evidence of that impact; to try to stop the inappropriate use of that technology; and to protect themselves from retaliation when they raise issues of algorithmic harm,” she says.


