This past Sunday’s bombing attacks in Sri Lanka, which left at least 250 dead as of the latest reporting, were followed by the government taking a step we sometimes see in the wake of tragedy: blocking social media services like Facebook and WhatsApp.
Why and how governments around the world take these steps is a question for anyone concerned with freedom of the press and freedom of speech. As it turns out, the action itself isn’t hard to take, even if it isn’t as effective as those who want to control the flow of information would like.
Why They Do It
There are lots of reasons why a government might want to cut off the free flow of information, and it’s not uncommon around the world to see people’s access to all kinds of sites be restricted. In some countries it isn’t even possible to access Facebook in the first place.
Information, after all, is power. And so too is disinformation. In the wake of a terrorist attack or natural disaster, a lack of confirmed facts leaves the door open for people to fill the gaps in bad faith. It’s not unusual to see out-of-context photos or footage circulating in the minutes and hours after events unfold.
Sometimes people jump to conclusions before they have all the facts, putting others in danger. In the wake of the Boston Marathon bombing, Reddit infamously got to sleuthing, with users of the site thinking they could use online tools to vigilante ends. Suffice it to say, they got the facts wrong and caused innocent people truly unnecessary grief. In Sri Lanka, the Guardian reports that some residents saw reports on WhatsApp identifying two suicide bombers. Whether or not those reports were remotely accurate, those kinds of unverified rumors have led to vigilante killings in the past.
In essence, the post-event reason to block social media is to stop a panic and curb disinformation. But with disinformation being in the eye of the beholder, the potential for abuse is immense.
How They Do It
This part is simple: by blocking the addresses of the services in question, whether their domain names (via DNS filtering) or their IP addresses. While it is more complicated than blocking someone’s phone number, at least in terms of what is required, the basic idea is the same. The government in question turns to internet service providers and asks (or orders) them to keep traffic to and from those addresses from getting through.
This is pretty much what happens with a work/college firewall when the IT department blocks Netflix, YouTube, or… um, other video sites from being viewed. Only on a national scale.
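The filtering described above can be sketched in a few lines. This is a toy model, not any ISP’s actual implementation: the blocked domains are examples from the article, and the IP range is a placeholder reserved for documentation.

```python
# Toy sketch of an address-based block: every outbound request is checked
# against a blocklist of domain names and IP networks before it's allowed
# through. The entries below are illustrative, not a real blocklist.
from ipaddress import ip_address, ip_network

BLOCKED_DOMAINS = {"facebook.com", "whatsapp.com"}   # example domains
BLOCKED_NETWORKS = [ip_network("203.0.113.0/24")]    # RFC 5737 placeholder range

def is_blocked(host: str) -> bool:
    """Return True if traffic to this destination should be dropped."""
    try:
        addr = ip_address(host)
        return any(addr in net for net in BLOCKED_NETWORKS)
    except ValueError:
        # Not an IP literal, so treat it as a domain name.
        # Match the domain itself and any subdomain (e.g. www.facebook.com).
        host = host.lower().rstrip(".")
        return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("www.facebook.com"))  # True
print(is_blocked("203.0.113.7"))       # True
print(is_blocked("example.org"))       # False
```

A real deployment works at the DNS resolver or router level rather than in application code, but the decision being made is the same lookup against a list.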
Now there can be ways to get around this, like using a VPN, or virtual private network. A VPN routes a user’s traffic through an encrypted tunnel to a remote server, so the ISP sees only the connection to the VPN server rather than the blocked destination. But VPNs can themselves be blocked if a provider knows their addresses. How far a government is willing to go determines how effective a block is.
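The cat-and-mouse dynamic here can be shown with a toy model. All addresses below are made up (drawn from the RFC 5737 documentation ranges); the point is only that the ISP sees the immediate destination of each packet, nothing deeper.

```python
# Toy model of why a VPN can defeat an address-based block, and why
# blocking the VPN's own address defeats the VPN. Addresses are
# placeholders from the RFC 5737 documentation ranges.
BLOCKED = {"192.0.2.1"}  # pretend this is a blocked social network's IP

def isp_allows(destination: str) -> bool:
    """The ISP can only check the immediate destination of the traffic."""
    return destination not in BLOCKED

# Direct connection: the ISP sees the blocked address and drops it.
print(isp_allows("192.0.2.1"))     # False

# Through a VPN: the ISP sees only encrypted traffic to the VPN server,
# which forwards it to the real destination on the user's behalf.
VPN_SERVER = "198.51.100.10"
print(isp_allows(VPN_SERVER))      # True

# ...until the government adds the VPN server itself to the blocklist.
BLOCKED.add(VPN_SERVER)
print(isp_allows(VPN_SERVER))      # False
```

This is why the effectiveness of a block scales with how aggressively the blocklist is maintained, not with any technical sophistication in the block itself.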
Something You Don’t See, Something You Do
In the United States we haven’t seen government-level blocks, because the First Amendment bars the government from restricting speech and the press that way.
What we do see is social media companies — like Facebook, whose products are often at the center of these firestorms — taking active steps to combat misinformation on their own platforms. The idea there is that the health of a social network depends in large part on trustworthy information. If users can’t trust what they see in their feeds, they won’t rely on the network. Which means engagement goes down. Which leads to financial impacts.
So Facebook shuts down accounts it deems “inauthentic,” and that has political consequences as those who benefited from those “inauthentic” accounts cry foul. Or Twitter cleans out bots, and suddenly the President of the United States is reportedly complaining about his lower follower count to the CEO of Twitter in a closed-door Oval Office meeting. Because everything is normal.
The thing is: trust in social media services does matter for these companies’ bottom lines, but for some users, trust means sticking to the story they want told, while others just want the facts, no matter how ugly.
The Common Thread
The common thread between these stories is us: the end users of news and information. It turns out that as a whole, we’re pretty gullible. Our knack for taking what we’re told at face value — that benevolent trust we have in other people — is super exploitable. It is a trait that can be hacked over and over again. Even as tech companies try to come up with solutions to flag questionable content, this aspect of human nature might be too much to overcome.