Algorithmic Accountability

Background

Algorithmic decision tools powered by artificial intelligence (AI) are being used for consequential decision-making in a variety of contexts. These decisions include who is offered a job; who receives access to credit, insurance, and other financial products and at what price; who receives various health services; and who is eligible for government benefits. AI can bring significant benefits to many sectors. It can improve mobility, increase quality and efficiency in health care, and expand access to financial services. It can also reduce complexity and inefficiency in consumer interactions. But without safeguards, these decision-making tools can produce results that are biased, reflecting historic and ongoing societal prejudices, including against older adults. 

The challenge is how to support the potential benefits of AI while ensuring fairness, transparency, and accountability for consumers. Existing consumer protection laws may need to be updated to address the algorithmic context. 

AI encompasses a broad range of technologies and capabilities such as facial recognition, natural language processing, machine learning, and robotics. At their most fundamental level, all applications of AI rely on the analysis of reams of data to detect patterns and make inferences and predictions. Algorithms are the sets of instructions used to analyze that data.
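To make the idea concrete, the following is a minimal, illustrative sketch (in Python) of an algorithmic decision tool: a fixed set of instructions that converts applicant data into a score and a decision. All feature names, weights, and the threshold here are hypothetical, not drawn from any actual system.

```python
# A toy illustration of an algorithmic decision tool: a fixed set of
# instructions that turns applicant data into a score and a decision.
# All feature names, weights, and the threshold are hypothetical.

def credit_score(applicant: dict) -> float:
    """Combine applicant attributes into a single score (illustrative weights)."""
    return (
        0.5 * applicant["payment_history"]      # fraction of on-time payments, 0-1
        + 0.3 * (1 - applicant["debt_ratio"])   # lower debt-to-income is better
        + 0.2 * min(applicant["years_of_credit"] / 10, 1.0)
    )

def decide(applicant: dict, threshold: float = 0.6) -> str:
    """Apply the algorithm's decision rule to one applicant."""
    return "approve" if credit_score(applicant) >= threshold else "deny"

print(decide({"payment_history": 0.95, "debt_ratio": 0.30, "years_of_credit": 8}))
```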

Algorithmic decision tools can be tested to assess whether they produce unbiased, accurate, and reliable results. Such audits can be conducted in-house, but third-party audits can help ensure objectivity and confidence in the results.
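One common form such an audit can take is a disparate-impact check. The sketch below compares approval rates across demographic groups and applies the "four-fifths" guideline drawn from employment-discrimination analysis; the group labels and decision records are hypothetical, and real audits typically involve additional statistical tests.

```python
# A minimal sketch of one common audit check: comparing approval rates
# across demographic groups and flagging disparities under the
# "four-fifths" guideline. The decision records below are hypothetical.

from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose approval rate is below 80% of the highest group's rate."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = approval_rates(records)
print(rates)                     # {'A': 0.67, 'B': 0.33}
print(four_fifths_check(rates))  # B fails: 0.33 / 0.67 = 0.5 < 0.8
```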

For consumer protections to be meaningful, there must be transparency and accountability. That is difficult given the complexity and opacity of algorithmic decision-making. 

Transparency can include providing clear, meaningful, and readily accessible notice to people when they are interacting with an algorithm, along with any relevant implications. For example, the use of some digital financial tools can negate the protections people have under the Electronic Funds Transfer Act to mitigate losses in the event of erroneous or fraudulent transactions. In addition, transparency is served when people have access to easy-to-understand explanations of the factors that contributed to the decision and the logic behind it.
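For simple scoring models, a factor-level explanation of this kind can be generated directly, as in the sketch below: each feature's contribution to the score is reported so a consumer can see what drove the result. The weights and feature names are hypothetical, and more complex models generally require model-specific explanation techniques.

```python
# A minimal sketch of a factor-level explanation for a linear scoring
# model: each feature's contribution (weight x value) is reported so a
# consumer can see what drove the result. Weights and feature names are
# hypothetical; opaque models need model-specific explanation methods.

WEIGHTS = {"payment_history": 0.5, "low_debt": 0.3, "credit_age": 0.2}

def explain(features: dict) -> list[tuple[str, float]]:
    """Return each factor's contribution to the score, largest first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

for factor, contribution in explain(
    {"payment_history": 0.95, "low_debt": 0.70, "credit_age": 0.80}
):
    print(f"{factor}: {contribution:+.2f}")
```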

Meaningful accountability can be achieved by allowing people to challenge the results of a decision made using algorithmic tools through a fair and meaningful process. For such a process to be effective, those responsible for the algorithmic tools used for consequential decisions must retain enough data and documentation for the decision to be meaningfully reviewed. This could include documentation of the intent of the algorithm, the results of any testing conducted to detect unjustifiable bias against people protected by civil rights statutes, and actions taken to address any biases uncovered through testing.
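The sketch below illustrates the kind of record an operator might retain so that an individual decision can be reviewed later. The specific fields are illustrative assumptions, not a legal or regulatory standard.

```python
# A minimal sketch of the kind of record an operator might retain so
# that an algorithmic decision can be meaningfully reviewed later.
# The fields shown are illustrative, not a legal or regulatory standard.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str           # which algorithm/version produced the result
    stated_purpose: str          # documented intent of the tool
    inputs: dict                 # data the decision was based on
    outcome: str                 # e.g., "approve" / "deny"
    bias_test_results: dict      # results of any disparate-impact testing
    remediation_notes: str = ""  # actions taken to address biases found
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    decision_id="D-1001",
    model_version="credit-model-2.3",
    stated_purpose="Screen applications for a small-dollar loan product",
    inputs={"payment_history": 0.95, "debt_ratio": 0.30},
    outcome="deny",
    bias_test_results={"four_fifths_check": {"A": True, "B": False}},
)
print(json.dumps(asdict(record), indent=2))  # archive for later review
```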

The use of algorithms poses challenges in terms of privacy as well (see also the Data Privacy section of this chapter). 

ALGORITHMIC ACCOUNTABILITY: Policy

Fairness, transparency, and accountability

Leaders in the private and public sectors should ensure fairness, transparency, and accountability in algorithmic tools used to inform consequential decisions regarding health and financial well-being, such as determinations of whether a person is qualified for a job, is creditworthy, or is eligible for government benefits. Algorithmic tools should be: 

  • fair, reliable, and accurate, and not produce unjustifiable disparate impacts on people protected by civil rights statutes; 
  • transparent, informing users when they are interacting with such a tool and providing explanations of the results; and 
  • accountable to individuals adversely affected by a decision informed by an algorithmic decision tool, through a fair and meaningful process that allows them to challenge adverse outcomes. 

The degree of government regulation should be commensurate with the potential risk of harm to individuals and focus on outcomes and performance standards rather than the technology used. 

When governments use algorithmic tools to inform consequential decisions, these tools should be evaluated by a qualified third party for reliability, accuracy, and possible biases against people protected by civil rights statutes. The results of these evaluations should be made public.