Artificial Intelligence: Risks to Privacy and Democracy

Author: Karl Manheim
 
INTRODUCTION
I. A BRIEF INTRODUCTION TO AI
II. THREATS TO PRIVACY
   A. Forms of Privacy
   B. Data Collection, Analytics, and Use
      1. The Internet of Things
      2. The Surveillance Ecosystem
      3. Government Surveillance
      4. Anonymity
   C. Decisional Privacy (Autonomy)
      1. Subverting Free Will--Online Behavioral Advertising
      2. Consumer Acquiescence
III. THREATS TO ELECTIONS AND DEMOCRATIC INSTITUTIONS
   A. Self-Governance and Political Participation
      1. Hacking the Vote--Cyberthreats to Elections
      2. Hacking the Mind--Psychographic Profiling and Other Influencers
      3. Fake News
      4. Demise of Trusted Institutions
   B. Equality and Fairness
      1. Opacity: Unexplained AI
      2. Algorithmic Bias
IV. REGULATION IN THE AGE OF AI
   A. Patchwork of Privacy Protections in the United States
      1. State Privacy Laws
      2. Self-Regulation & Industry Practices
   B. European Privacy Law
      1. Control and Consent
      2. Transparency and Accountability
      3. Privacy by Design
      4. Competition Law
   C. Regulating Robots and AI
      1. Law of the Horse
      2. Proposed EU Laws on Robotics
      3. Asilomar Principles
      4. Recommendations
CONCLUSION

INTRODUCTION

Artificial intelligence (AI) is the most disruptive technology of the modern era. Its impact is likely to dwarf even the development of the internet as it enters every corner of our lives. Many AI applications are already familiar, such as voice recognition, natural language processing, and self-driving cars. Other implementations are less well known but increasingly deployed, such as content analysis, medical robots, and autonomous warriors. What these have in common is their ability to extract intelligence from unstructured data. Millions of terabytes of data about the real world and its inhabitants are generated each day. Much of that is noise with little apparent meaning. The goal of AI is to filter the noise, find meaning, and act upon it, ultimately with greater precision and better outcomes than humans can achieve on their own. The emerging intelligence of machines is a powerful tool to solve problems and to create new ones.

Advances in AI herald not just a new age in computing, but also present new dangers to social values and constitutional rights. The threat to privacy from social media algorithms and the Internet of Things is well known. What is less appreciated is the even greater threat that AI poses to democracy itself. (1) Recent events illustrate how AI can be "weaponized" to corrupt elections and poison people's faith in democratic institutions. Yet, as with many disruptive technologies, the law is slow to catch up. Indeed, the first ever Congressional hearing focusing on AI was held in late 2016, (2) more than a half-century after the military and scientific communities began serious research. (3)

The digital age has upended many social norms and structures that evolved over centuries. Principal among these are core values such as personal privacy, autonomy, and democracy. These are the foundations of liberal democracy, the power of which during the late 20th century was unmatched in human history. Technological achievements toward the end of the century promised a bright future in human well-being. But then, danger signs began to appear. The internet gave rise to social media, whose devaluation of privacy has been profound and seemingly irreversible. The Internet of Things (IoT) has beneficially automated many functions while resulting in ubiquitous monitoring and control over our daily lives. One product of the internet and IoT has been the rise of "Big Data" and data analytics. These tools enable sophisticated and covert behavior modification of consumers, viewers, and voters. The resulting loss of autonomy in personal decision-making has been no less serious than the loss of privacy.

Perhaps the biggest social cost of the new technological era of AI is the erosion of trust in and control over our democratic institutions. (4) "Psychographic profiling" of Facebook users by Cambridge Analytica during the 2016 elections in Britain and the United States is a case in point. But those instances of voter manipulation are hardly the only threats that AI poses to democracy. As more and more public functions are privatized, the scope of constitutional rights diminishes. Further relegating these functions to artificial intelligence allows for hidden decision-making, immune from public scrutiny and control. For instance, predictive policing and AI sentencing in criminal cases can reinforce discriminatory societal practices, but in a way that pretends to be objective. Similar algorithmic biases appear in other areas including credit, employment, and insurance determinations. "Machines are already being given the power to make life-altering, everyday decisions about people." (5) And they do so without transparency or accountability.

Sophisticated manipulation technologies have progressed to the point where individuals perceive that decisions they make are their own, when they are instead often "guided" by algorithm. A robust example is "big nudging," a form of "persuasive computing" that "allows one to govern the masses efficiently, without having to involve citizens in democratic processes." (6) Discouraged political participation (7) is one of the aims of those who abuse AI to manipulate and control us. (8)

Collectively and individually, the threats to privacy and democracy degrade human values. Unfortunately, monitoring of these existential developments, at least in the United States, has been mostly left to industry self-regulation. At the national level, little has been done to preserve our democratic institutions and values. There is little oversight of AI development, leaving technology giants free to roam through our data and undermine our rights at will. (9) We seem to find ourselves in a situation where Mark Zuckerberg and Sundar Pichai, CEOs of Facebook and Google, have more control over Americans' lives and futures than do the representatives we elect. The power of these technology giants to act as "Emergent Transnational Sovereigns" (10) stems in part from the ability of AI software ("West Coast Code") to subvert or displace regulatory law ("East Coast Code"). (11) Some have described the emerging AI landscape as "digital authoritarianism" (12) or "algocracy"--rule by algorithm. (13)

This article explores present and predicted dangers that AI poses to core democratic principles of privacy, autonomy, equality, the political process, and the rule of law. Some of these dangers predate the advent of AI, such as covert manipulation of consumer and voter preferences, but are made all the more effective with the vast processing power that AI provides. More concerning, however, are AI's sui generis risks. These include, for instance, AI's ability to generate comprehensive behavioral profiles from diverse datasets and to reidentify anonymized data. Such profiles expose our most intimate personal details to advertisers, governments, and strangers. The biggest dangers here are from social media, which rely on AI to fuel their growth and revenue models. Other novel features that have generated controversy include "algorithmic bias" and "unexplained AI." The former describes AI's tendency to amplify social biases, but covertly and with the pretense of objectivity. The latter describes AI's lack of transparency. AI results are often based on reasoning and processing that are unknown and unknowable to humans. The opacity of AI "black box" decision-making (14) is the antithesis of democratic self-governance and due process in that it precludes AI outputs from being tested against constitutional norms.

We do not underestimate the productive benefits of AI, and its inevitable trajectory, but feel it necessary to highlight its risks as well. This is not a vision of a dystopian future, as found in many dire warnings about artificial intelligence. (15) Humans may not be at risk as a species, but we are surely at risk in terms of our democratic institutions and values.

Part II gives a brief introduction to key aspects of artificial intelligence, such that a lay reader can appreciate how AI is deployed in the several domains we discuss. At its most basic level, AI emulates human information sensing, processing, and response--what we may incompletely call "intelligence"--but at vastly higher speeds and scale--yielding outputs unachievable by humans. (16)

Part III focuses on privacy rights and the forces arrayed against them. It includes a discussion of the data gathering and processing features of AI, including IoT and Big Data Analytics. AI requires data to function properly; that means vast amounts of personal data. In the process, AI will likely erode our rights in both decisional and informational privacy.

Part IV discusses AI's threats to democratic controls and institutions. This includes not just the electoral process, but also other ingredients of democracy such as equality and the rule of law. The ability of AI to covertly manipulate public opinion is already having a destabilizing effect in the United States and around the world.

Part V examines the current regulatory landscape in the United States and Europe, and civil society's efforts to call attention to the risks of AI. We conclude this section by proposing a series of responses that Congress might take to mediate those risks. Regulating AI while promoting its beneficial development requires careful balancing. But that must be done by public bodies and not simply AI developers and social media and technology companies, as is mostly the case now. (17) It also requires AI-specific regulation and not just extension of existing law. The European Parliament has recently proposed one regulatory model and set of laws. We draw on that as well as ethical and democracy-reinforcing principles developed by the AI community itself. We are all stakeholders in this matter and need to correct the asymmetry of power that currently exists in the regulation and deployment of AI.
