Saving Facebook

James Grimmelmann

Associate Professor of Law, New York Law School. Aislinn Black, Robert Blecker, Elise Boddie, Tai-Heng Cheng, Stephen Ellmann, Diane Fahey, Lauren Gelman, Doni Gewirtzman, Chris Hoofnagle, H. Brian Holland, Molly Beutz Land, Jan Lewis, William McGeveran, Rebecca Roiphe, and Clay Shirky provided helpful comments. Earlier versions of this Article were presented at the Social Media and the Commodification of Community workshop at the University of Haifa in May 2008 and at a DIMACS/DyDAn Workshop on Internet Privacy in September 2008. After January 1, 2010, this Article will be available for reuse under the Creative Commons Attribution 3.0 United States license, http://creativecommons.org/licenses/by/3.0/us/. All otherwise-undated websites in footnotes were last visited on March 17, 2009. The description of Facebook's activities is current as of March 17, 2009. Internet citations are formatted according to conventions suggested by the author, which may be found at http://james.grimmelmann.net/essays/CitationPrinciples.pdf.


I. Introduction

The first task of technology law is always to understand how people actually use the technology. Consider the phenomenon called "ghost riding the whip." The Facebook page of the "Ghost Riding the Whip Association" links to a video of two young men jumping out of a moving car and dancing around on it as it rolls on, now driverless. If this sounds horribly dangerous, that's because it is. At least two people have been killed ghost riding,1 and the best-known of the hundreds of ghost-riding videos posted online shows a ghost rider being run over by his own car.2

Policymakers could respond to such obviously risky behavior in two ways. One way-the wrong way-would treat ghost riders as passive victims. Surely, sane people would never voluntarily dance around on the hood of a moving car. There must be something wrong with the car that induces them to ghost ride on it. Maybe cars should come with a "NEVER EXIT A MOVING CAR" sticker on the driver-side window. If drivers ignore the stickers, maybe any car with doors and windows that open should be declared unreasonably dangerous. And so on. The problem with this entire way of thinking is that it sees only the car, and not the driver who lets go of the wheel. Cars don't ghost ride the whip; people ghost ride the whip.

To protect drivers from the dangers of ghost riding, policymakers would be better off focusing on the ghost riders themselves. What motivates them? Why do they underestimate the risks? When they get hurt, what went wrong? Engaging with ghost riders' worldviews would suggest modest, incremental policies appropriate to the ways in which ghost riders use automotive technology. Sensible responses would include reducing ghost riding's allure, helping its practitioners appreciate the dangers, and tweaking car design to help drivers regain control quickly.3 The key principle is to understand the social dynamics of technology use, and tailor policy interventions to fit.

This Article applies this principle to a different problem of risky technology use: privacy on Facebook. Think again about the Ghost Riding the Whip Association. Anyone with a Facebook account, including police and potential employers, can easily identify the two ghost riders by name. Not only did these men misunderstand the physical risks of ghost riding, they also misunderstood the privacy risks of Facebook. They're not alone. Over a hundred million people have uploaded personally sensitive information to Facebook, and many of them have been badly burnt as a result. Jobs have been lost, reputations smeared, embarrassing secrets broadcast to the world.

It's temptingly easy to pin the blame for these problems entirely on Facebook. Easy-but wrong. Facebook isn't a privacy carjacker, forcing its victims into compromising situations. It's a carmaker, offering its users a flexible, valuable, socially compelling tool. Its users are the ones ghost riding the privacy whip, dancing around on the roof as they expose their personal information to the world.

Thus, if we seek laws and policies that mitigate the privacy risks of Facebook and other social network sites, we need to go through the same social and psychological analysis. What motivates Facebook users? Why do they underestimate the privacy risks? When their privacy is violated, what went wrong? Responses that don't engage with the answers to these questions can easily make matters worse.

Consider, for example, technical controls: switches that users can flip to keep certain details from being shared in certain ways. Facebook is Exhibit A for the surprising ineffectiveness of technical controls. It has both severe privacy problems and an admirably comprehensive privacy-protection architecture. The problem is that it's extraordinarily hard-indeed, often impossible-to translate ambiguous and contested user norms of information-sharing into hard-edged software rules. As soon as the technical controls get in the way of socializing, users disable and misuse them. This story is typical; other seemingly attractive privacy "protections" miss essential social dynamics.

Thus, this Article provides the first careful and comprehensive analysis of the law and policy of privacy on social network sites, using Facebook as its principal example. The rest of Part I provides the necessary background. After clearing up terminology, it offers a brief history and technical overview of Facebook. Part II then presents a rich, factually grounded description of the social dynamics of privacy on Facebook. Part II.A explains how social network sites allow people to express themselves, form meaningful relationships, and see themselves as valued members of a community. Part II.B shows how these social motivations are closely bound up with the heuristics that people use to evaluate privacy risks, heuristics that often lead them to think that Facebook activities are more private than they actually are. Part II.C finishes by examining the privacy harms that result. The message of Part II is that most of Facebook's privacy problems are the result of neither incompetence nor malice; instead, they're natural consequences of the ways that people enthusiastically use Facebook.

Having described the social dynamics of privacy on Facebook, the Article applies that description in Parts III and IV, distinguishing helpful from unhelpful policy responses. Part III is negative; it shows how policy prescriptions can go badly wrong when they don't pay attention to these social dynamics:

* Leaving matters up to "the market" doesn't produce an optimal outcome; users' social and cognitive misunderstandings of the privacy risks of Facebook won't disappear anytime soon.

* "Better" privacy policies are irrelevant; users don't pay attention to them when making decisions about their behavior on Facebook.

* "Better" technical controls make matters worse; they cram subtle and complicated human judgments into ill-fitting digital boxes.

* Treating Facebook as a commercial data collector misconstrues the problem; users are voluntarily, even enthusiastically, asking the site to share their personal information widely.

* Trying to restrict access to Facebook is a Sisyphean task; it has passionate, engaged users who will fight back against restrictions.

* Giving users "ownership" over the information that they enter on Facebook is the worst idea of all; it empowers them to run roughshod over others' privacy.

Part IV, on the other hand, is positive; it shows how proposals that do engage with Facebook's social dynamics can sometimes do some good. Each of these proposals seeks to reduce the gap between what users expect to happen to their personal information and what actually happens to it:

* Not everything posted on Facebook is public. Users shouldn't automatically lose their rights of privacy in information solely because it's been put on Facebook somewhere.

* Users' good names are valuable. There's a commercial reputational interest in one's Facebook persona, and using that persona for marketing purposes without consent should be actionable.

* Opt-outs need to be meaningful. People who don't sign up for Facebook, or who sign up but then decide to quit, deserve to have their choices not to participate respected.

* Unpredictable changes are dangerous. Changes that pull the rug out from under users' expectations about privacy should be considered unfair trade practices.

* Strip-mining social networks is bad for the social environment. Bribing users to use a social network site-for example, by giving them rewards when more of their friends sign up-creates unhealthy chain-letter dynamics that subvert relationships.

* Education needs to reach the right audiences. Targeted efforts to explain a few key facts about social-network-site privacy in culturally appropriate ways could help head off some of the more common privacy goofs users make.

Finally, Part V concludes with a brief message of optimism.

A. Definitions

I'll refer to Facebook and its competitors as "social network sites." This phrase captures the idea that these are websites designed to be used by people connected in "a social network," a term that...
