
Reining in Big Data’s Robber Barons

Saul Loeb/AFP/Getty Images

Facebook’s Mark Zuckerberg appearing before a Senate hearing in Washington, D.C., April 10, 2018

The use of Facebook by Cambridge Analytica to gather data on tens of millions of users is just one of the troubling things to have come to light about Facebook and its effect on social and political life. Yet that story is also, in some respects, a distraction from the bigger issues that stem from the practices of the Internet giants: Google, Facebook, Amazon, and their peers have constructed the most extensive and intrusive surveillance apparatus the world has ever seen. And we are the target.

Surveillance capitalism—so named in 2015 by the Harvard academic Shoshana Zuboff—is the business model of the Internet. Built on techniques of information capture and behavior modification, surveillance capitalism came into being when Google’s engineers realized that by tracking what users were typing into their search engine, they could learn to predict what those users wanted. Once they could anticipate what users wanted, they could target them with ads designed to influence those users’ behavior in ways that maximized Google’s revenue.

These days, virtually every aspect of day-to-day life is fed into corporate databases and used to predict and influence all kinds of behavior. Surveillance corporations don’t just respond to consumer wants; they also shape and drive those wants toward their own ends. Usually, that means a click on an advertisement, a visit to a website, or, ultimately, a purchase. To do this, they attempt to take advantage of known shortcuts and biases in human decision-making, called “heuristics.” Often, this means presenting links and other content in such a way as to generate interest, but sometimes, as in the case of so-called “dark patterns”—misselling techniques and tricks to game attention or gain private data—it involves a choice architecture that is patently deceptive.

Continual experimentation helps them refine their ads and prompts. As of 2014, Google, for example, was running roughly 10,000 experiments per year in its search and ads business, with around 1,000 in progress at any one time. These experiments test user interfaces, algorithms, and other elements of the service in order to determine which combinations are most effective at driving user engagement. And the resulting ads can follow you from page to page, with subtle differences each time, as companies try to figure out from your actions and responses which variation is most likely to persuade you to click.
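To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of randomized split test described above. The variant names and click probabilities are invented for illustration; real systems run thousands of such tests concurrently and apply proper statistical analysis before declaring a winner.

```python
import random
from collections import defaultdict

# Hypothetical ad variants under test; the names and copy are invented.
VARIANTS = ["headline_a", "headline_b", "headline_c"]

impressions = defaultdict(int)
clicks = defaultdict(int)

def serve_ad(user_id: int) -> str:
    """Deterministically bucket each user into one variant (a simple A/B/n split)."""
    variant = VARIANTS[hash(user_id) % len(VARIANTS)]
    impressions[variant] += 1
    return variant

def record_click(variant: str) -> None:
    clicks[variant] += 1

# Simulate traffic: each variant has an assumed underlying click probability.
true_ctr = {"headline_a": 0.02, "headline_b": 0.035, "headline_c": 0.025}
for user_id in range(100_000):
    variant = serve_ad(user_id)
    if random.random() < true_ctr[variant]:
        record_click(variant)

# The "winning" variant is simply the one with the highest observed click-through rate.
for variant in VARIANTS:
    ctr = clicks[variant] / impressions[variant]
    print(f"{variant}: {ctr:.3%} CTR over {impressions[variant]} impressions")
```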

As a result, every time you use a web browser or an app, you are almost certainly the unwitting subject of dozens of psychological experiments that seek to profile your habits and vulnerabilities for the benefit of corporations. This personalized and dynamic form of behavioral nudging gives surveillance corporations repeated opportunities to manipulate user behavior, in ways that would be impossible in the offline world. And because these companies use their knowledge of your vulnerabilities to learn how to target other users, in using their services you are rendered complicit not just in your own manipulation, but in the manipulation of your friends and your family, your neighbors and colleagues, and every one of these companies’ billions of users.

Most people don’t realize the extent to which predictive analytics can reveal otherwise unknown information about them from relatively impersonal behavioral data. One 2013 study by Cambridge University’s Psychometrics Centre showed that, without having any factual information about you, analysis of what you’ve “liked” on Facebook can accurately predict your sexual orientation, your ethnicity, your happiness, your political and religious views, whether your parents are separated, and whether you use drugs. A follow-up study in 2015 found that by analyzing your likes, a computer can be a better judge of your personality traits—such as how artistic, shy, or cooperative you are—than your friends and family are. Consider what personal information, even information you would assume was personal and confidential, could be determined from the troves of other data that surveillance corporations gather on you and every other user. And then consider how the quantity of this data will increase exponentially as the Internet of Things—in effect, a network of sensors, eyes, and ears lurking in our homes, our offices, and our public spaces that feed data back into surveillance capitalism’s databases and algorithms—takes ever-greater hold.
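The underlying approach, representing each user as a row of likes and fitting an ordinary statistical model, is not exotic. Here is a toy sketch, assuming scikit-learn is available; the pages, users, and trait labels are fabricated, and the published research worked with vastly larger matrices and more careful methods.

```python
# A toy illustration of trait prediction from "likes" (assumes scikit-learn is installed).
# The pages, users, and labels are fabricated; only the general method resembles the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["page_a", "page_b", "page_c", "page_d", "page_e"]

# Rows are users, columns are pages; 1 means the user "liked" that page.
likes = np.array([
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])

# A binary trait to predict (for example, a survey response), one label per user.
trait = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(likes, trait)

# Given a new user's likes, the model outputs a probability for the trait.
new_user = np.array([[1, 0, 1, 0, 1]])
print(model.predict_proba(new_user)[0, 1])
```

The point is not the particular model but the scale: signals that seem trivial one at a time become revealing in aggregate.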

Those following Mark Zuckerberg’s testimony before Congress this week should not assume that by not using Facebook, they can escape its reach. The company’s tracking techniques—including the Facebook Pixel tool for advertisers, and the “like” and “share” buttons—are now woven into many corners of the web. These allow the company to follow individuals across the Internet, build up a picture of their interests, and target them with relevant advertising. As Zeynep Tufekci, a professor of sociology at the University of North Carolina at Chapel Hill, wrote in The New York Times recently, even if you don’t have an account, Facebook can infer information about you from what it knows about your friends who do. So even if you aren’t a member, it’s probable that Facebook has compiled a “shadow profile” on you, in much the same way it does for its users. In the US, at least, there is no opting out of this. Whether you use Facebook or not, Facebook is watching you.


It’s often said that handing over your data is simply the price for using these services. But this isn’t quite right. Privacy is the true cost of using Google or Facebook. Giving up your privacy allows surveillance corporations to figure out your personal psychological susceptibilities and then charge advertisers to exploit them. To justify this, these corporations hide behind privacy policies that are often long, convoluted, and framed in obfuscatory legalese. A decade ago, a study found that, even in that less connected world, it would take the average person about 25 days (and nights) a year to read all the privacy policies with which they are confronted. Who has the time or inclination to read all of these? Invoking privacy policies as a justification for these practices seems fraudulent—indeed, the Federal Trade Commission has previously condemned Google’s privacy policies as deceptive—especially given that surveillance corporations have used heuristics to determine how to present the privacy policies in order to gain your consent to them.

You might feel that this is all a reasonable price to pay to keep in touch with friends and relatives. But what you get in return is not a true picture of your social circle; it’s algorithmically curated by Facebook. Your news feed is not reality; it is Facebook-mediated reality. Within this curated reality are encoded various judgments about who and what Facebook considers worthy of showing you. The news feed’s algorithm, for example, reduces the visibility of those who don’t interact enough for Facebook’s liking—showing fewer of their posts to others—and boosts the visibility of those who do. This is a new and insidious form of control: do what Facebook wants and be rewarded; fail to perform to its satisfaction and be punished with social invisibility. This can lead to dark places. Imagine turning to Facebook in a moment of serious illness, reaching out to friends for comfort and support that doesn’t come, and dying with the belief that no one cared that you were sick—all because you hadn’t used Facebook enough for its algorithm to deem your illness worthy of your friends’ attention. This is not a dystopian imaginary; it is a scenario that reportedly played out last year.
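The ranking logic at issue can be caricatured in a few lines. The scoring formula below is invented for illustration, since Facebook’s actual ranking is proprietary and far more elaborate, but it captures the structural point: a friend with whom you have no recorded interactions can be scored into invisibility, whatever she has to say.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float  # how long ago the post was made

# Hypothetical counts of one viewer's past interactions (likes, comments, clicks) with each friend.
interactions = {"alice": 42, "bob": 3, "carol": 0}

def score(post: Post) -> float:
    """Toy ranking: friends you interact with more score higher; older posts decay."""
    affinity = interactions.get(post.author, 0)
    return affinity / (1.0 + post.age_hours)

feed = [Post("alice", 2.0), Post("bob", 1.0), Post("carol", 0.5)]
for post in sorted(feed, key=score, reverse=True):
    print(post.author, round(score(post), 2))
```

In this toy version, carol’s post scores zero no matter how recent or how urgent it is, simply because the viewer has never clicked on her before.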

When Mark Zuckerberg talks about connecting people, as he often does, he leaves two words unsaid: “to Facebook.” For all his talk of community and connectedness, Facebook doesn’t seek to breed social connection and genuine friendship. Facebook makes its money from advertising, so what it wants is for you to keep browsing, scrolling, and clicking on Facebook—and surrender the minutiae of your life to its databases and its algorithms. To achieve this, it has to position itself as the hub of your social life, the mediator of your reality.

And Facebook-mediated reality has given one company, Zuckerberg’s, significant influence over the flow of information worldwide. With this influence comes responsibility. Yet, thanks to recently leaked internal memos, we know that Facebook doesn’t particularly care about consequences—allowing fake news and disinformation to flourish globally, and worse. In Myanmar, for example, where Facebook dominates the Internet to the extent that it is considered by many to be the Internet, it has deleted content that drew attention to the ethnic cleansing of the Rohingya and banned content by certain Rohingya groups, while leaving unchecked the rampant incitement to violence against Rohingya people. Indeed, such is Facebook’s influence in Myanmar that it stands accused by both the United Nations and human rights groups of aiding genocide.

To be sure, this behavior is not unique to Facebook; it’s endemic to surveillance capitalism. Google has also been known to recommend links to conspiracy theories; as the Times has reported, it has even pushed fake news ads on fact-checking websites. And YouTube, one of Google’s services, in directing users to increasingly extreme videos in the pursuit of engagement and revenue, may have become the greatest radicalization engine of the twenty-first century.

As the world is increasingly coming to realize, Facebook-mediated reality also assists voter surveillance by political parties and campaigns. Facebook’s Custom Audience and Lookalike Audience tools allow advertisers, including political organizations, to upload lists of people they want to target and match them with their Facebook profiles. Advertisers can then filter for similar people who aren’t on their lists and target them all, giving political parties and campaigns a vastly extended reach. Using microtargeting tools like these, campaigns can precisely deliver different ads to different groups of voters. According to Wired, during the 2016 US presidential election, the Trump campaign ran 40,000–50,000 variations of its ads on any one day, all carefully crafted to resonate with small groups of people and honed through large-scale experimentation, with some 175,000 variations on the day of the third debate. According to Trump’s digital director, Facebook and Twitter together were the reason Trump won.
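Stripped of branding, list matching and “lookalike” expansion amount to a simple pipeline: hash the uploaded contact details, match them against the platform’s own users, then search for other users whose profiles resemble the matched set. The sketch below illustrates that general technique only; the hashed-email matching, the interest vectors, and the similarity threshold are assumptions made for the example, not Facebook’s implementation.

```python
# Illustrative sketch of custom-list matching and "lookalike" expansion.
# Not Facebook's implementation: the hashed-email matching, the per-user
# interest vectors, and the similarity threshold are assumptions for the example.
import hashlib
import numpy as np

def normalize(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The advertiser uploads hashed emails; the platform matches them against its own users.
uploaded = {normalize(e) for e in ["voter1@example.com", "voter2@example.com"]}
platform_users = {
    normalize("voter1@example.com"): np.array([0.9, 0.1, 0.8]),   # per-user interest vectors
    normalize("voter3@example.com"): np.array([0.85, 0.2, 0.75]),
    normalize("voter4@example.com"): np.array([0.1, 0.9, 0.2]),
}

seed = [v for k, v in platform_users.items() if k in uploaded]  # the matched "custom audience"
centroid = np.mean(seed, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Lookalikes": users not on the uploaded list whose profiles resemble the seed audience.
lookalikes = [k for k, v in platform_users.items()
              if k not in uploaded and cosine(v, centroid) > 0.95]
print(len(seed), "matched;", len(lookalikes), "lookalike(s) found")
```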


In the 2010 US midterm elections, Facebook conducted its own research into the effectiveness of online political messaging. It found that it could increase users’ likelihood of voting by around 0.4 percent by telling them that their friends had voted and encouraging them to do the same (a slightly different experiment in 2012 saw similar results). That percentage doesn’t sound like much, but on a national scale it translated into around 340,000 extra votes. George W. Bush won the 2000 election by a few hundred votes in Florida; Donald Trump won in 2016 because he amassed about 80,000 more votes across three states. Carefully designed “get out the vote” microtargeting campaigns could therefore have a significant impact on closely contested elections. Conversely, in 2016 the Trump campaign also used Custom Audiences to run three voter suppression operations during the general election, aimed at undermining support for Clinton: targeting Sanders-supporting Democrats with messages about her ties to the financial industry; targeting young women with Clinton’s support for her husband despite allegations against him of sexually inappropriate behavior; and targeting African Americans over her 1996 “superpredator” comments. There is nothing new about negative campaigning, but microtargeting may make it more effective than ever—at the cost of undermining trust in politicians and faith in the democratic process even further.

As Professor Tufekci has observed, “If the twentieth-century engineers of consent had magnifying glasses and baseball bats, those of the twenty-first century have acquired telescopes, microscopes and scalpels in the shape of algorithms and analytics.” No two people may ever see the same set of ads, and conflicting arguments and disinformation can be precisely aimed at different groups of voters without the others knowing. Where, in the past, political campaigns took place in full view and candidates’ arguments and claims were subjected to public examination, microtargeting means that the electoral process becomes a more private, personalized affair, with little cross-examination and challenge to disinformation. In the words of digital media scholars Justin Hendrix and David Carroll, this may prove to be “a nightmare for democracy.”

As a society, we are slowly waking up to the problems with Facebook. Respect for privacy has never been a part of Facebook’s mission. As Zuckerberg reminded senators repeatedly this week, Facebook originated as a group of college buddies’ idea for a website—he did not mention that its purpose was to allow them to rate the “hotness” of female students, without their consent. For the first decade of the site’s existence, the motto of its developers was “move fast and break things.” With this ethos, a pattern to Facebook’s scandals has become familiar.

At some point, people notice a questionable practice and kick up a fuss. As we’ve seen in these past few days, Zuckerberg emerges, eventually, to say how sorry he is, claim that he couldn’t have seen the problem coming (despite warnings from academics and privacy advocates), talk about “community” and “doing better,” and promise that Facebook will change. But for all the mea culpas and the promises to think more carefully and give users more control over their privacy—and there have been many over the years—some stubborn facts remain: surveillance is woven into Facebook’s DNA, and surveillance capitalism is its raison d’être. Thus its solution to problems, for the most part, is predictable and flawed: to give even more power to Facebook.

Zuckerberg’s testimony to Congress this week has made a few things clear. One is that Facebook is in denial about the extent to which its users are aware of what it is doing. While Zuckerberg says that most users know about and are comfortable with Facebook’s surveillance activities, research shows that a majority of its users would not consent to many of these practices. Another is that, to justify its business model, Facebook appears to feel the need to argue that its users prefer targeted advertising, despite research showing that 41 percent prefer traditional advertising, compared with just 21 percent who prefer targeting (and overall, 63 percent would like to see less targeted advertising on Facebook altogether). And we now know that tens of thousands of apps had access to large volumes of user data before 2015, in the same way as Cambridge Analytica did, and that Facebook is only now, under the spotlight three years later, attempting to review them.

Ultimately, neither Facebook, nor Google, nor any other surveillance corporation can reform itself in any meaningful way so long as it is addicted to our data. And Facebook’s latest raft of patents provides little comfort. Surveillance capitalism is often presented as though it’s the natural order of things online, but it is the product of choices made by people in pursuit of profit. As a business model, it is neither inevitable nor unalterable.

One positive move would be to switch to a contextual advertising model. This would mean advertisements based on the contents of the page that you’re viewing, rather than on analysis of data gathered through surveillance of user behavior. And the forthcoming General Data Protection Regulation (GDPR) in the European Union, which strengthens users’ data rights, will require new approaches. Both Facebook and Google have struggled to comply with existing, more limited European data protection laws. Courts in Belgium and Germany have recently declared some of Facebook’s practices (including the tracking of non-users) unlawful, for example, and Google has faced similar criticism, and hefty fines, for gathering data without consent and failing to tell users what it was doing with their data.
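The contrast with behavioral targeting is easy to make concrete: a contextual system needs to know nothing about the reader, only the words on the page being viewed. A minimal sketch, with an invented ad inventory:

```python
# Minimal sketch of contextual ad selection: ads are matched to the words on the
# page being viewed, with no user profile involved. Keywords and ads are invented.
import re
from collections import Counter

AD_INVENTORY = {
    "hiking boots": {"hiking", "trail", "mountain", "outdoors"},
    "coffee grinder": {"coffee", "espresso", "roast", "brew"},
    "laptop stand": {"desk", "laptop", "ergonomic", "office"},
}

def pick_ad(page_text: str) -> str:
    words = Counter(re.findall(r"[a-z]+", page_text.lower()))
    # Score each ad by how many of its keywords appear on the page, weighted by frequency.
    scores = {ad: sum(words[kw] for kw in keywords) for ad, keywords in AD_INVENTORY.items()}
    return max(scores, key=scores.get)

article = "A guide to the best mountain trail routes for weekend hikers heading outdoors."
print(pick_ad(article))  # -> "hiking boots"
```

Nothing about the reader is stored or inferred; when the page changes, so does the ad.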

The surveillance corporations will likely find meeting their obligations under the GDPR a stern challenge. But with potential fines of up to 4 percent of global turnover for serious violations of the EU rules, tech companies will need to take their data protection responsibilities seriously and introduce reforms. Since they have to implement these rules to cover their operations in Europe, they could—in theory, relatively easily—apply the same rules and protections to users based in North America and elsewhere around the world. Indeed, Facebook is under mounting pressure to do so—and Zuckerberg has tentatively indicated that Facebook may move in that direction. That would be a significant step forward; even better, as the consumer advocate Jessica Rich has argued in Wired, would be for the US to adopt its own laws, similar to those in Europe.

The questionable practices of surveillance corporations and their refusal to act responsibly have brought us to a turning point. We can reject a level of corporate surveillance we would never have accepted in the pre-Internet age by putting pressure on Facebook and lawmakers for change, and by using alternative services with different business models. And we can demand greater accountability from the Internet oligopoly and better legal protections for our privacy and our data. This is a moment of decision: Will it be our Internet, or theirs?
