Technology and Society

What Kind of Free Is Speech Online?

March 30, 2017

The Pew Research Center has just published a fascinating in-depth report titled “The Future of Free Speech Online.” (The PDF version of the study is 75 pages – there’s a lot to it.) Lee Rainie, Janna Anderson, and Jonathan Albright surveyed a number of tech experts to get their predictions about where online discourse is headed. And while nearly 20 percent of the experts are optimistic, most of them think the climate for online discourse will either stay the same or get worse.

The framing of the study seems . . . odd, though. The implication is that we can either design online platforms that control behavior (by doing things like prohibiting anonymity, developing reputation systems, or using artificial intelligence to moderate contributions) or we can have freedom. This is where parts of internet culture seem to intersect with libertarianism: any attempt to shape the overall tenor of a group conversation is treated as a restriction on individuals’ right of free expression. Or, to put it differently, the power to shape the tone of a social interaction is liable to be misused by the powerful.

Part of the issue is that the platforms with the greatest reach (Facebook, Google, Twitter) all operate within an economic framework that rewards us for clicking and sharing. Those clicks and shares build up their capacity to place advertisements targeted to our interests, or so goes the theory – the more detailed the profile of every individual consumer, the more likely we are to buy stuff as a result of digitally placed advertising.

I’ve been waiting for that bubble to burst, and there are signs it may be imminent. The “Sleeping Giants” campaign to shame companies that inadvertently advertise on Breitbart has been fairly successful in highlighting the problems that arise from handing ad companies a wad of cash to auction space on random websites; it doesn’t do a brand any good if its ads show up in places that might offend a portion of its audience. More recently, Google has been pressed to prevent the ads it places on YouTube from turning up next to terrorist recruitment videos or whackadoodle conspiracy films. Advertisers suddenly want some control over ad buys, and this threatens the click-click-click economy that has grown up like an invasive species taking over the internet.

But it’s not just that ads appearing on unappealing sites are bad for the brand. The New York Times just reported that Chase cut back from advertising on 400,000 sites a month to just 5,000 and got the same results. This seems to suggest that the finely targeted ad placement on which Google and Facebook make their money doesn’t actually work as advertised.

What does this mean? So much of our online experience has been shaped, or rather deformed, around the idea that advertising is the lifeblood of modern-day communications. It has led traditional news organizations to resort to more clickbait and less fact-checking. It has given people incentives to invent exciting but completely made-up “news” stories. It has encouraged mass corporate surveillance without actually showing evidence that it works – and along the way has created vast pools of personal data that can be exploited by hackers, state actors, and people like hedge-fund billionaire Robert Mercer who are in a position to buy political influence. (Among other political investments, Mercer has invested in Cambridge Analytica, which purports to combine personal data with psychological profiling to create scarily effective political campaign messaging. It’s unclear whether its exuberant claims of effectiveness are supported by evidence or are just marketing hoohah.)

Building a community that can develop and sustain the kind of social norms that define a civil community space takes a great deal of sustained effort, and that effort can’t be all top-down and (at this point, at least) can’t be purely algorithmic. James Grimmelmann’s fascinating study of online community moderation concludes, “No community is ever perfectly open or perfectly closed; moderation always takes place somewhere in between.” And it must always take place, even if it’s invisible and outsourced across the planet.

Several experts who participated in the Pew study thought anonymity was the root of trouble online, but sometimes there are valid reasons why someone wants to be anonymous or pseudonymous online, not just a desire to hit people and run away. The chief reason Facebook and Google want a “real names” policy isn’t to curb misbehavior; it’s to make it easier to build detailed profiles to sell more advertising. Recent experience suggests provocateurs can make trolling into a profitable business or political campaign, real names and all.

In the end I don’t see this as a free speech versus control issue so much as a flaw in the economic model that underwrites the companies that dominate our interactions online – an economic model that incentivizes bad behavior while trashing our privacy. But this is not the internet; it’s just the ad-supported one we’ve grown used to in recent years (one the Republicans in Congress just invited internet service providers to join). If the personalized ad bubble bursts, and I think it will, we may have to think hard about how online speech can be free – including free as in beer (provided without a monetary cost) or free as in kittens (requiring ongoing care). As one of the Pew experts (Stephen Downes) points out, there are plenty of positive interactions happening online and plenty of good information being shared. Generating revenue or gaining political power aren’t the only motivators for social interaction; maybe what we need are platforms that don’t start with that assumption – or a way to support those that start with a different end in mind.

In any case, read the Pew report – it’s thought-provoking and timely.

License


Babel Fish Bouillabaisse II Copyright © 2019 by Barbara Fister is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.