Background
Artificial intelligence (AI) involves programming computers to complete tasks that typically require human intelligence, such as comprehending language, making decisions, and solving problems. This broad term encompasses a number of technologies, including predictive AI, generative AI, large language models, and machine learning. AI is evolving at an exponential rate and is thus transforming a range of industries, from healthcare to finance to transportation.
AI has the potential to create meaningful innovations that improve the lives of older adults. For instance, AI powers automated vehicle technology that will eventually lead to fully self-driving cars. This can expand access and reduce isolation for people who do not drive, including some older adults (see also Vehicle Automation and Fully Self-Driving Cars). AI also powers digital assistants. And it is transforming health care, financial services, and other sectors of the economy (see also Privacy, Confidentiality, and Security of Health Information).
Consequential decision-making: AI systems are being used to make or inform consequential decisions. Consequential decisions are those that have a significant legal, material, or similar impact on an individual's life. For example, AI can be used to make recommendations about whether someone:
- is offered a job;
- is approved for housing;
- receives access to credit, insurance, and other financial products—and at what price;
- receives various health services; and
- is eligible for government benefits.
AI may be trained on data that reflects biased human decision-making, which can produce biased AI decisions. The risk of adverse outcomes from the use of AI is disproportionately high for groups that already face discrimination.
For example, if AI is trained using health data sets of mostly white individuals, people from communities of color may not receive appropriate health care (see also Health Information Technology). Another example is the use of AI in hiring, in which the AI is trained using data from a time when people age 60 and older were routinely excluded from the hiring process. This could lead to applicants age 60 and older being left out of the hiring pool, inadvertently perpetuating ageism in hiring practices (see also Strengthening Laws and Practices Against Age Discrimination).
Safeguards are needed to ensure that the use of AI does not perpetuate historic and ongoing societal prejudices, including against older adults. With such safeguards in place, AI could actually produce less biased decisions than human decision-making.
Policymakers and the private sector also have a key role to play in preventing the misuse of AI. For example, generative AI can create very convincing deepfakes that can be used to commit fraud against older adults (see also Scams and Fraud). Similarly, predatory lenders, such as those offering payday loan products, could use AI to better target consumers who are having difficulty making ends meet. The high fees and annual percentage rates of these loans could make consumers worse off (see also Alternative Financial Services).
Policy framework: Issues such as discrimination, fraud, disinformation, and harmful financial practices are not new. However, they are manifesting in novel, more targeted, and more expansive ways because of AI. Moreover, given how inexpensive and easy it is to use, AI can cause damage at a scale that was previously difficult to achieve. The challenge, then, is how to support and encourage positive uses of AI while protecting against its potential negative effects and applications.
One difficulty is recruiting and retaining government staff with the technical knowledge to provide effective oversight of the industry. This can be overcome by allocating more resources to agencies, enabling them to build their expertise. Another challenge is that AI is evolving much faster than the slow-moving legislative and regulatory processes. To address this, policymakers, consumer groups, industry groups, and other leaders can work in partnership to create dynamic and flexible standards that promote both innovation and consumer protection. This approach can lead to a regulatory system that remains relevant as AI evolves.
ARTIFICIAL INTELLIGENCE: Policy
Consequential decision-making
Leaders in the private and public sectors should ensure fairness, transparency, and accountability when artificial intelligence (AI) tools are used to make or inform consequential decisions, such as those regarding health and financial well-being.
- Fairness: AI tools should be fair, reliable, and accurate, without disparate impacts on people protected by civil rights statutes.
- Transparency: There should be transparency when an AI tool is used, including an explanation about the results.
- Accountability: Individuals should have access to a fair and meaningful process for challenging adverse outcomes.
The level of government regulation, required human oversight (sometimes referred to as human-in-the-loop), and penalties should be commensurate with the risk of harm to individuals.
A qualified third party should be required to evaluate AI tools used to inform consequential decisions for reliability, accuracy, and fairness before their deployment and routinely thereafter. The results of these evaluations should be made public without revealing personal or proprietary information.
AI benefits and risks
Policymakers and the private sector should harness the potential benefits of AI while actively protecting against its potential harms.
- They should better prioritize the use of AI to benefit consumers, including piloting and evaluating AI tools to address consumers’ needs and improve consumer protections. They should also proactively employ fair and accurate AI tools to identify and address discriminatory patterns and fraud.
- They should protect against the use of AI to commit fraud or otherwise misuse data (see also Misuse of AI and Scams and Fraud).
Legal framework and process
Policymakers should collaborate with a variety of stakeholders to establish an AI regulatory framework. Regulations and guidance should be dynamic and flexible to allow for innovation while protecting consumers.
Policymakers should ensure that agencies have sufficient expertise and investigative authority to regulate AI effectively.
Consumer protections
Policymakers should require that AI tools embed consumer protections (see also Privacy by design and Security by design). Until these consumer protections are fully integrated, policymakers and the private sector should ensure that consumers are aware of the risks of AI tools, including fraud, and the steps they can take to minimize those risks (see also Scams and Fraud).
Consumer protection and antidiscrimination laws that apply in the context of human decisions must continue to apply when decisions are informed by AI tools (see also Civil Rights). Policymakers should update these laws as necessary to ensure their effectiveness and eliminate any ambiguity regarding this issue.
When AI tools are used to provide customer service related to consequential decisions or essential goods and services, consumers should be able to escalate errors and complex issues to a human support team in a timely manner.
Misuse of AI
Policymakers should protect against the misuse of AI, including the misuse of AI-generated images, voiceprints, and other likenesses. Protections should be in place to stop AI from being used to facilitate:
- stalking and harassment;
- scams and fraud, including identity fraud (see also Scams and Fraud and Identity Theft and Fraud);
- unlawful discrimination;
- erroneous or discriminatory inferences, profiles, and AI outcomes; and
- the targeting of consumers for products and services that are predatory, do not add value, or are fraudulent. This includes payday loans, car title loans, and home improvement scams.
Data should not be used to aid and abet illegal practices.