Algorithmic decision tools


Consumer protection and antidiscrimination laws that apply in the context of human decisions should be adapted as necessary to ensure they effectively apply to algorithmic decision tools (see also Algorithmic Accountability). 

Employers or other entities that rely on algorithms to make employment decisions should be allowed to assert a business-necessity defense only when: 

  • a reasonable effort has been made to evaluate and eliminate the tool's potential discriminatory impact, and 

  • the algorithmic tool has been evaluated for accuracy, reliability, and fairness prior to deployment and routinely thereafter. 

If it is not reasonably possible to calculate the impact of each input or factor in an algorithm on a decision, the algorithm as a whole must be treated as the specific practice causing any unjustifiable disparate impact on a protected class. 
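Treating the algorithm as a whole as the practice under scrutiny is compatible with standard outcome-level audits, which compare the tool's overall selection rates across groups without decomposing individual inputs. As one minimal sketch (group labels and data are illustrative assumptions, not from the source), the "four-fifths" adverse-impact rule from the EEOC's Uniform Guidelines can be applied to the algorithm's final decisions:

```python
# Illustrative sketch of an outcome-level disparate-impact check on an
# algorithm's final decisions, using the EEOC "four-fifths" (80%) rule.
# Group names and data below are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> per-group selection rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 suggest disparate impact under the four-fifths rule."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A selected at 50%, group B at 25%.
outcomes = [("A", True), ("A", False), ("B", True),
            ("B", False), ("B", False), ("B", False)]
print(round(adverse_impact_ratio(outcomes), 2))  # 0.25 / 0.50 = 0.5, below 0.8
```

Note that this audit needs only the tool's inputs and final outputs, which is why it remains feasible even when the contribution of each internal factor cannot be isolated.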

Policymakers should amend the key federal civil rights laws that prohibit discrimination in public accommodations, private contracting, and housing to state explicitly that they bar age discrimination. These include the Fair Housing Act, 42 U.S.C. § 1981, and Title II of the Civil Rights Act of 1964. 

Civil rights laws—particularly those addressing employment (e.g., hiring selections, terminations, compensation), housing, and credit—should include “aiding and abetting” provisions. Any entity that assists an employer, employment agency, creditor, or another covered entity in violating those laws should itself be considered a covered entity under the laws and subject to suit for any civil rights violations. 

Policymakers should protect consumers from the discriminatory use of information developed from data-driven inferences about health in employment, health insurance, and other appropriate contexts. 

Job applicants should receive meaningful notification about how they will be assessed so that they can determine whether to seek reasonable accommodations.