The company wants to shut down an academic study of political ad targeting, just as it prepares to reinstate targeted political ads.
FACEBOOK HAS BROUGHT its might down upon a small but scrappy academic team that has done brilliant work in exposing the company's failures to contain scams, rip-offs, and political disinformation. If the team doesn't fully dismantle its public-interest research project and delete its data by November 30, Facebook says, it "may be subject to additional enforcement action." Why? Because the $775 billion company wants to protect our privacy.
For political dirty tricksters, Facebook's self-serve ad platform is a juicy target: If you want to spread disinformation, the platform will help you narrow down the people who'll see it. A canny political actor can use Facebook ads to show lies and vile incitements to people who might act on them, and, just as important, not show those ads to the rest of the world, which would reveal the way politicos talk when they think there's nobody here but us chickens.
Facebook's been fined over this, its execs raked over the coals in Congress and the British Parliament, and it says it has learned its lesson, putting in place measures that, it claims, will prevent a recurrence.
Enter Ad Observer and the Ad Observatory, a project of NYU's Tandon School of Engineering. Ad Observer is a browser plug-in that Facebook users voluntarily install. The plug-in scrapes (makes a copy of) every ad that a user sees and sends it to Ad Observatory, a public database of Facebook ads that scholars and accountability journalists mine to analyze what's really happening on the platform. Time and again, they've discovered gross failures in Facebook's ability to enforce its own policies and live up to its promises.
Facebook has threatened legal action against the Ad Observatory team, claiming that the Ad Observer plug-in violates its terms of service, and it wants the plug-in gone by the Monday after Thanksgiving, or else. In other words, Facebook wants independent, third-party scrutiny of its ad policy enforcement to end at the very moment that its enforcement failures are allowing false claims about the outcome of the 2020 election to spread, challenging the legitimacy of American democracy itself. The deadline also roughly coincides with Facebook's reinstatement of political advertising: the company is opening the door to far more paid political disinformation at the very same moment that it is shutting out the independent watchdogs who monitor this stuff.
The company swears this action is not driven by a desire to silence its critics. Rather, it says it is acting on its well-known commitment to preserving its users' privacy.
Both of these arguments are (to use a technical term) rank bullshit. Facebook's claims that it can enforce its terms of service as though they were laws that had been passed by Congress are based on an anti-competitive suit it brought against a (now defunct) startup called Power Ventures more than a decade ago. In that suit, the company argued that allowing Facebook users to read their messages without logging into Facebook was a crime.
The Power Ventures decision was bonkers, but that's because the law it invoked is even worse. The 1986 Computer Fraud and Abuse Act was rushed into law after Ronald Reagan saw Matthew Broderick in the movie WarGames and panicked (no, really). It's so broadly worded that if you squint right and read every third word, the Power Ventures decision makes a kind of topsy-turvy sense.
But Facebook's legal theories have a serious problem. Over the past decade, the courts have substantially narrowed the precedent from Power Ventures, thanks to a pair of suits: Sandvig v. Barr and HiQ v. LinkedIn. These modern precedents make Facebook's legal arguments a hard stretch.
Even more of a stretch: Facebook's claims that it is only acting to protect its users' privacy. Set aside for a moment the absurdity of the 21st century's worst privacy invaders positioning themselves as privacy champions. Stipulate that Facebook has found privacy religion and is really here to defend its users' privacy.
Facebook does not protect its users' privacy by vetoing their explicit choice to share whatever ads they see with Ad Observatory. Privacy, after all, is not the situation in which no one knows anything about you. (That's secrecy.) Privacy is when you decide who gets to know stuff about you, and what stuff they get to know. As Facebook elegantly puts it in its own policy documents: "What you share and who you share it with should be your decision."
But Facebook says that it's not concerned with the privacy of users who chose to install the Ad Observer plug-in. It says it is acting to protect the privacy of other users whose data is captured along with those ads, such as information about who else has seen a given ad. If Ad Observer captured that information, it would certainly be worrisome!
But it doesn't. Ad Observer doesn't scrape that data. Ad Observer can't scrape that data. When you see an ad on Facebook, the service doesn't tell you which of your friends also saw that ad. That would be terrible, even by Facebook's standards. (Facebook might show you your friends' "engagements" with the ad, but Ad Observer doesn't scrape those.)
The fact that some journalists believed Facebook's straight-up disinformation on this score tells you everything you need to know, really: It's completely believable that Facebook would be so terrible at privacy that it would tell you which of your friends saw a given ad.
Facebook has unloaded both barrels—legal threats and a disinformation blitz—at the NYU team. That's even worse than it seems at first. The Ad Observer method is really the best hope we have for doing privacy-preserving digital research on social networks. By putting users in control of what they share, it’s a vast improvement over the traditional method of crawling whole social networks and sucking in whatever you can get.
Facebook's alternative is for researchers to confine their research to those ads and other data that the company chooses to share with them directly, while promising that this data will be comprehensive and reliable. It's not. We know it's not, because Ad Observer has found its many errors and omissions.
Facebook promised it would clean up its act. It didn't. And when the Ad Observatory caught the company breaking its promises, Facebook sought to shut the project down. This may be par for the course with Facebook, but it's not something we as a society can afford to tolerate any longer.
Cory Doctorow, Journalist.
This article was originally published on Wired. Views in this article are author’s own and do not necessarily reflect CGS policy.