Background
Virtually all companies now collect, use, share, or sell personal information, meaning information that is linked or reasonably linkable to an identified or identifiable individual. Data-driven businesses have proliferated, collecting, using, sharing, and selling unprecedented amounts of personal information. This trend is expected to accelerate as companies become increasingly reliant on data analysis to develop and market their products and services.
With the proper safeguards in place, the proliferation of large data sets of personal information brings significant innovations that can benefit individuals, groups, and the broader society. It also brings challenges, including ensuring that individuals and entire populations can benefit from these innovations while preventing the misuse of personal information and protecting consumer privacy and security.
Ensuring benefits for all: The collection and use of personal information benefit individuals and society. Individuals may enjoy a more personalized consumer experience and receive recommendations for goods and services tailored to their needs.
At the group level, analysis of personal information can be used to identify and respond to unfair or discriminatory treatment. For example, the availability of large data sets of personal information, and the ability to process that information, may uncover intentional or unintentional discrimination against specific demographic groups within society. Data sets may also help provide insights into how to address the effects of mistreatment or discrimination through targeted interventions. In this respect, the availability of more personal information could help identify policies and practices that harm older adults, even if unintentionally. It could also inform data-driven responses to unfair or discriminatory policies or practices more broadly.
At the societal level, the increase in the collection and use of personal information brings many benefits. For example, data-driven policymaking can lead to better government programs that more effectively address key societal problems. It can also help distribute resources more equitably and efficiently. Analysis of personal information can identify misallocations of resources or discriminatory action. Likewise, companies can analyze data to create products and services that bring enormous consumer benefits.
In particular, the application of artificial intelligence (AI) is leading to innovative new products and services. For instance, AI is powering automated vehicle technology that will eventually lead to fully self-driving cars that do not require human oversight to operate. This can expand access and reduce isolation for people who do not drive, including some older adults (see also Vehicle Automation and Fully Self-Driving Cars). AI also powers digital assistants and is transforming health care, financial services, and other sectors of the economy (for more information on health care privacy issues, see Data Privacy and Security).
However, unless these innovations are deployed intentionally, some people will not benefit from the positive uses of personal information. In addition, if certain populations are less willing to share their personal information with governments and the private sector for research purposes, their perspectives are unlikely to inform the design of government programs and private-sector goods and services. This distrust can, in part, be addressed by explaining the benefits of sharing personal information, the privacy protections in place (such as protections against sharing and selling personal information), and how personal information will be secured against unauthorized access, including hacks.
Another challenge is ensuring that all populations are able to use the latest innovations that come from access to personal information. For example, fully self-driving cars could leave people with low and moderate incomes behind if they are priced out of reach. Likewise, people with disabilities who use mobility devices, such as wheelchairs, might not be able to use self-driving cars if manufacturers do not incorporate universal design principles into the design process.
Preventing discriminatory outcomes from misuse of personal information: Another concern is that the algorithms that power AI may be infused with bias, which could lead to discriminatory outcomes. For example, an algorithm that uses per capita income to decide where to build grocery stores could leave low-income communities without fresh food, unintentionally creating food deserts: communities with limited access to affordable and nutritious food. Intentional action can ensure fairness, reliability, accuracy, transparency, and accountability in algorithmic decision-making.
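To make the mechanism concrete, here is a minimal sketch (in Python, with invented community data and an invented income threshold) of how a site-selection rule built on a single proxy feature can systematically exclude low-income communities, and how a simple audit of who is actually served can surface the problem.

```python
# Hypothetical sketch: how a single proxy feature (per capita income) in a
# site-selection rule can systematically exclude low-income communities.
# All data and thresholds here are invented for illustration.

communities = [
    # (name, per_capita_income, population)
    ("Northside", 58_000, 40_000),
    ("Riverton", 52_000, 35_000),
    ("Eastgate", 24_000, 55_000),   # low-income, high-need community
    ("Southview", 21_000, 60_000),  # low-income, high-need community
]

INCOME_THRESHOLD = 40_000  # the biased rule: build only where income is high

def select_sites(communities):
    """Return communities chosen for a new store under the income-only rule."""
    return [c for c in communities if c[1] >= INCOME_THRESHOLD]

selected = select_sites(communities)
excluded = [c for c in communities if c not in selected]

print("Stores built in:   ", [name for name, *_ in selected])
print("Left without access:", [name for name, *_ in excluded])

# A simple fairness audit: what share of the total population is served?
served = sum(pop for _, _, pop in selected)
total = sum(pop for _, _, pop in communities)
print(f"Share of population served: {served / total:.0%}")
```

In this toy example the rule serves well under half of all residents; auditing outcomes against the full population, rather than the decision criterion alone, is what reveals the bias.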
Policymakers and the private sector also have a key role to play in preventing the misuse of personal information. For example, the proliferation of personal information might allow predatory lenders, such as those offering payday loan products, to better target consumers who are having difficulty making ends meet (see also Alternative financial services). The high fees and annual percentage rates of these loans could make consumers worse off.
The potential for adverse outcomes resulting from the use of AI is disproportionately high for groups that are discriminated against. For example, if AI is trained using health data sets of mostly White individuals, people from communities of color may not receive appropriate health care. Another example is the use of AI in hiring, in which the AI is trained using data from a time when people age 60 and older were routinely excluded from the hiring process. This could lead to applicants age 60 and older being left out of the hiring pool, inadvertently perpetuating ageism in hiring practices.
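One way to catch this kind of training-data bias before it reaches applicants is a representation audit. The sketch below is a hypothetical illustration (all ages and the underrepresentation threshold are invented): it compares the age distribution of a historical hiring data set against the current applicant pool and flags age brackets the training data barely contains.

```python
# Hypothetical sketch: auditing a historical hiring data set for age
# representation before using it to train an AI screening model.
# All figures are invented for illustration.

from collections import Counter

# Ages of candidates hired in the historical training data (when older
# applicants were routinely screened out) vs. the current applicant pool.
training_ages = [28, 31, 26, 34, 29, 38, 41, 27, 33, 30, 36, 25]
applicant_ages = [28, 45, 62, 31, 57, 38, 64, 27, 51, 60, 36, 48]

def bracket(age):
    return "60+" if age >= 60 else "40-59" if age >= 40 else "under 40"

def shares(ages):
    counts = Counter(bracket(a) for a in ages)
    return {b: counts[b] / len(ages) for b in ("under 40", "40-59", "60+")}

train_shares = shares(training_ages)
pool_shares = shares(applicant_ages)

for b in ("under 40", "40-59", "60+"):
    flag = " <-- underrepresented" if train_shares[b] < 0.5 * pool_shares[b] else ""
    print(f"{b:>8}: training {train_shares[b]:.0%} vs applicants {pool_shares[b]:.0%}{flag}")
```

A model trained on data like this would learn that "hires look young" and could reproduce the historical exclusion of applicants age 60 and older.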
Moreover, consumers may not be aware that their behavioral personal information, such as online tracking data, can be combined with other personal information to create detailed profiles, which are then sold to companies that want to target individual consumers. Such profiles allow companies to charge customers different prices based on their willingness to pay for a good or service, a practice known as price discrimination; some people pay substantially more because their browsing habits indicate a higher willingness to pay. Profiles also allow companies to target certain people for housing and employment opportunities while others are left out, potentially leading to biased outcomes and outright discrimination. Sometimes, profiles are also used to target misinformation to certain consumers.
Safeguarding privacy and security: Another challenge is ensuring privacy and security protections while allowing positive uses of personal information to spur innovation and bring lasting consumer benefits. Privacy refers to the rights that consumers have (or do not have) to control their personal information. It addresses who or what entity can collect, store, share, sell, and analyze a particular consumer’s personal information. The strongest privacy protections allow consumers to control their personal information and decide how it may be used.
The increased collection, use, sharing, and sale of personal information can result in a lack of privacy protections, particularly with free products that require users to give up their privacy to use them. This may limit options for people with low and moderate incomes who cannot pay for more expensive products and services that protect privacy. This pay-for-privacy model means that privacy is a commodity that can be bought and sold, rather than a fundamental right. Privacy as a fundamental right is expressly stated in the United Nations' Universal Declaration of Human Rights as well as in several state constitutions. The Supreme Court has also found that there are subject areas in which a general right to privacy can be derived from the Bill of Rights of the Constitution.
Security refers to the need to safeguard personal information from unauthorized access, including hacking. The information age has made personal information more valuable to criminals, and the need to protect it has never been greater.
Data-driven policymaking: Collecting, disaggregating, and analyzing high-quality personal information enables policymakers to identify pressing policy challenges. It also allows them to tailor the most effective solutions to address these challenges. Data collection and disaggregated analysis are especially important for identifying and addressing disparities affecting groups who experience discrimination, such as older adults, communities of color, people who identify as LGBTQ+, and people with disabilities. However, these groups are often underrepresented in data sets. As such, researchers need to enhance the collection and analysis of personal information among these groups. This can be difficult to do, particularly when requesting sensitive personal information.
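A brief illustration of why disaggregation matters: in the hypothetical sketch below (all enrollment figures invented), a program's overall participation rate looks healthy, while breaking the same data out by age reveals that older adults are being reached far less often.

```python
# Hypothetical sketch: an aggregate statistic can hide disparities that a
# disaggregated view exposes. Enrollment figures are invented for illustration.

records = [
    # (group, eligible, enrolled) for a hypothetical benefits program
    ("under 50", 8_000, 6_400),
    ("50-64",    3_000, 2_100),
    ("65+",      4_000, 1_200),  # older adults enrolled far less often
]

eligible_total = sum(e for _, e, _ in records)
enrolled_total = sum(n for _, _, n in records)
print(f"Overall enrollment rate: {enrolled_total / eligible_total:.0%}")

# Disaggregating by age reveals the gap the aggregate number hides.
for group, eligible, enrolled in records:
    print(f"{group:>9}: {enrolled / eligible:.0%}")
```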
Best practices for expanding data collection for people from groups that experience discrimination include:
- offering strong privacy and security protections, including anonymizing personal information whenever possible (a code sketch follows this list) and refraining from selling or sharing this information with third parties;
- explaining why demographic information is being requested and how it will be utilized;
- providing surveys and data in multiple languages;
- emphasizing that providing personal information is voluntary; and
- using inclusive research methods. This includes targeted outreach to and oversampling of groups who are discriminated against.
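On the first practice above, a minimal sketch of what "anonymizing whenever possible" can look like in code follows, under assumed requirements. Note that salted one-way hashing of identifiers is strictly pseudonymization rather than full anonymization, since records can still be linked within the data set, so it complements rather than replaces the other safeguards.

```python
# Hypothetical sketch: pseudonymizing survey records before analysis.
# Salted one-way hashing is pseudonymization, not full anonymization:
# records can still be linked within the data set, so it should be
# combined with the other safeguards listed above.

import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret and never stored with the data

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

def scrub(record: dict) -> dict:
    """Keep only fields needed for analysis; hash the identifier."""
    return {
        "id": pseudonymize(record["email"]),   # direct identifier removed
        "age_bracket": record["age_bracket"],  # keep coarse demographics only
        "response": record["response"],
    }

raw = {"email": "resident@example.com", "name": "Jane Q. Public",
       "age_bracket": "65+", "response": "yes"}
print(scrub(raw))  # name and email never appear in the scrubbed output
```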
CONSUMER DATA ISSUES: Policy
Data collection and analysis
Government agencies should collect and report data for groups that are discriminated against, while ensuring consumer choice and control over what data they provide. This could include, as appropriate, data on age, race, ethnicity, geographic location, sex, gender identity, sexual orientation, socioeconomic status, and disability status. When requesting data that may be sensitive to individuals, policymakers should follow best practices, including posting clear nondiscrimination policies and emphasizing that disclosure is voluntary.