Facebook can't sidestep responsibility
Along with its Silicon Valley brethren, Facebook is scrambling to respond to pressure from Congress about the flood of fake news and bogus political ads on its site. It's worth pointing out, though, that the deluge is also doing great damage far from Washington.
Facebook's fastest-growing markets are in the developing world, where the problem of fake news is even more devilishly complicated and dangerous, if that's possible, than in the West. In rapidly changing countries such as Myanmar, Facebook has become a platform for hate speech and incendiary rumors targeted at vulnerable minority groups. Elsewhere, shadowy political actors and authoritarian regimes have used the site to smear opponents and tighten their grip on power.
In such countries, users are often new to the web and digital literacy is low. In many cases, people have been subjected for decades to laughably inaccurate state propaganda and lack independent media alternatives; what appears on Facebook is widely accepted as news. Where legitimate press outlets do exist, they rarely have the resources to fact-check and combat misinformation on their own.
And, as serious as the undermining of elections and erosion of Western institutions is, the consequences of letting these lies circulate unchecked in the developing world may be even more frightening. In India, Facebook's second-biggest market, fake stories spread on its WhatsApp messaging service have led to lynchings; in Myanmar, Facebook posts and fake images have contributed to the toxic hatred of Rohingya Muslims, more than 500,000 of whom have been driven from their homes since late August. Religious tensions in Indonesia are running higher than they have in years, in part due to a fake-news campaign cynically used to demonize and oust Jakarta's Christian governor.
Facebook may disavow direct responsibility for these trends, and it may even be correct. Yet its platform exacerbates the potential for violence and social breakdown.
None of this is to say that Facebook has any obvious solutions. It doesn't: Algorithms are a crude tool, and scanning posts for offensive language risks censoring some fair speech, while hate-mongers will find ways around the filters. Greatly expanding the ranks of human monitors would be slow and expensive.
It's not completely hopeless. Civil society groups can help to build up digital literacy, inoculate readers against the most obvious hoaxes and challenge inflammatory appeals. Media organizations can pool resources to fact-check claims and debunk falsehoods. Governments can draft clearer laws against hate speech and force Facebook and other social-media companies to abide by them—though there is a danger that some will use Facebook's struggles as a rationale for censorship.
At the end of the day, however, Facebook can't sidestep its own responsibility. Yes, algorithms will have to be tweaked constantly. At the same time, whatever the cost, the company will have to hire more human beings to monitor how its platforms are being used in these countries. Facebook cannot contribute to building a global community if it's also giving its users the means to tear societies apart.