Technology and Society
May 19, 2019
We’re just wrapping up the semester for our new short course with a long title, “Clickbait, Bias, and Propaganda in Information Networks.” It was inspired by a history professor, who asked a couple of us librarians to talk to her first-term seminar about what we were innocently calling “fake news” before that term was co-opted to mean the established press. She thought students needed more, so we designed a seven-week course to introduce Mike Caulfield’s “four moves” heuristic and explore how the information networks we use daily circulate and shape the information we encounter.
One of the problems in trying to teach this stuff is that it’s constantly morphing as people figure out new ways to use social media to persuade and confuse audiences. Alex Kasprak, a reporter for Snopes, detailed the ways one self-described evangelical has created a network of Facebook groups purporting to represent a variety of constituencies – Blacks for Trump, Catholics for Trump, Seniors for Trump, Veterans for Trump, Teachers for Trump – all of them pushing nearly identical Islamophobic propaganda. Her work is connected to some shady PACs and non-profits, and a fair amount of money is involved, but nobody (including Facebook) wanted to talk to Kasprak about it. Another report, from Citizen Lab, examines how a disinformation network, possibly based in Iran, has found ways to mimic news outlets to spread false narratives that look like legitimate news and are sometimes picked up by real news sites; then they vanish, redirecting visitors to the authentic sites, making verification difficult. (They give this network the evocative name “Endless Mayfly.”) A lot of work goes into setting up these networks. It takes even more work to track down their origins. Meanwhile, their messages get spread through social media channels and compete for our divided attention. Platforms designed for advertising are ripe for abuse.
The Internet has arguably become a “disinformation laboratory,” where everything from tools to messaging can be tested with an eye to maximize impact and elude detection. Because some of the most widely used online spaces have become data-driven platforms for digital marketing, we suspect that many disinformation operations now mimic online marketing campaigns and tweak their content to maximize views, clicks, or shares. Disinformation operations can therefore quickly adopt and discard tools and tactics, and regularly tweak their operations to maximize impact and reach.
In the meantime, the White House has sent confusing signals. On the one hand, Trump is leading the charge to tell tech companies they must stop enforcing their rules against conservatives. This claim leads to ridiculous situations, like Facebook bringing a site run by The Daily Caller aboard for fact-checking, apparently to prove they aren’t suppressing alternative facts. It also led Twitter to invent two standards of conduct – one for ordinary users and another for “newsworthy” figures, like the president, who can violate the terms of service with impunity. Meanwhile, the administration has rejected an initiative led by New Zealand Prime Minister Jacinda Ardern to find ways to reduce extremism online. This doesn’t entirely surprise me. Any administration would have to thread a tricky constitutional needle if promising to suppress speech. But it seems inconsistent to hold hearings in Congress and collect the email addresses of aggrieved social media users, using ALL CAPS OUTRAGE to push tech companies around, while stepping back and washing its hands of taking action against violent extremists. As Tarleton Gillespie, who literally wrote the book on internet moderation, said in an interesting Twitter thread, “it’s easier to cry foul, that you’re being targeted for your political beliefs, than to wonder if maybe your political tactics are now beyond the pale of a culture that’s trying to balance robust free speech with some semblance of citizenship, obligation, and consensus.”
Silicon Valley started building its castles in the air with one simple ethic: free speech is good, the more the merrier. In reality, free speech is complicated, as the tech companies are discovering. They have always moderated content, especially when it comes to copyright, but the difficulty of moderating speech on platforms designed to optimize the spread of attention-seeking messages has landed them with culturally and politically difficult work that can’t be solved by tweaking algorithms. The Christchurch Call went out not just to governments but to tech providers. Some of them have responded with an elaborate shrug: it’s not what happens on the internet that’s the problem, it’s what happens in society – not our problem. That denies the complex relationship between people and organizations and the technologies they use to find converts, spread the word, and create confusion.
It’s interesting that this tangled intersection of technology and culture first took root in the United States, where we have enshrined the idea of freedom of speech in our constitution. We should know, given a couple of centuries of case law, that it’s complicated. Having exported these tools of communication globally, we have a responsibility to figure this out in a way that allows for both freedom of expression and social responsibility, with a framework for balancing them wisely. Too much is at stake.
Whatever happens, we won’t have any shortage of things to talk about if we offer this course again.