If Trump Can Be Banned, What About Other World Leaders Who Incite Violence?

Members of the United Nations Security Council sit at a semi-circular table at UN headquarters in New York

Andrew Burton / Getty Images

Twitter finally banned Donald Trump on January 8th, 12 days before the end of his term as president, and two days after he incited a violent mob that stormed the Capitol as Congress met to certify the results of the 2020 election. Facebook and Instagram made the decision one day earlier, citing the threat of future violence inspired by the president’s inflammatory and baseless claims of election fraud.


De-platforming Trump was the right thing to do, and activists have been urging social media companies to take similar action against neo-Nazis and white supremacists for years. But Trump isn’t the first world leader to inspire violence with hateful rhetoric and disinformation. What about the rest of the world, especially places where Facebook effectively is the internet, and Twitter is an essential source of news?

American exceptionalism has led social media platforms to devote the most resources to the United States, whether in tackling misinformation or in consistently applying content moderation policies. It has allowed these companies to treat what they see as “emerging markets” in other countries with less caution. At the same time, it has allowed platforms to believe the lie that “it won’t happen here”—“it” being politicians engaging in fascist and nationalist rhetoric that not only harms democracy but also costs lives.

In reality, the political environment in the US itself is now far more like the so-called “global south” than anyone wants to admit—even after watching armed insurrectionists storm the Capitol. The US isn’t exceptional, and it’s time for tech companies to stop acting like it is.

The reactions to Trump’s de-platforming have poured in over the last week, but there’s been a glaring absence of analysis from a global viewpoint. From content moderation experts to actor Sacha Baron Cohen, everyone has an opinion. The Twitter threads and articles are of a few flavors: This suspension has dangerous First Amendment implications; platforms should have done this a long time ago and it was obviously the right choice; this ban sets a new precedent.


As an afterthought, some of the commentary on Trump’s de-platforming notes that this is neither the first time that a government official has been banned, nor should it be the last. It’s true that dangerous speech is leading to violence against marginalized people around the world. That’s why the global implications of platforms’ decisions can no longer be an afterthought. 

Trump is not the first politician to incite violence, interfere with elections, or spread dangerous misinformation on social media, and he certainly won’t be the last. Many politicians seem to operate by the same playbook. The more well-known and loved they are, the less direct their calls for violence need to be. They can simply legitimize vigilante violence against marginalized communities through spreading dis- and misinformation. 

The Hindu nationalist Bharatiya Janata Party (BJP) is a prime example. Many official accounts of BJP politicians have spread inflammatory content against the backdrop of a march towards genocide of Indian Muslims. BJP legislator T. Raja Singh has said Rohingya Muslim immigrants should be shot and mosques razed. He wasn’t banned from Facebook until more than two weeks after the Wall Street Journal highlighted his comments in a damning report about Facebook India’s relationship with the BJP. Similarly, BJP legislator Anantkumar Hegde “posted essays and cartoons to his Facebook page alleging that Muslims are spreading Covid-19 in the country in a conspiracy to wage ‘Corona Jihad.’” He still has verified Twitter and Facebook accounts, though Twitter did temporarily suspend his account in April. And BJP politician Sangheet Som still has a verified Facebook page despite the fact that he uploaded a mislabeled video that was instrumental in stoking the 2013 Muzaffarnagar riots, in which 62 people were killed.


Similarly, Brazilian President Jair Bolsonaro still has a verified Facebook page and Twitter account despite spreading anti-LGBTQ+ disinformation amidst a surge of violence against LGBTQ+ people in the country (a quick look at his Facebook page pulls up multiple claims about LGBTQ+ “ideology” in schools). He has also spread dangerous COVID-19-related misinformation and supported massive disinformation campaigns designed to discredit his political opponents.

And of course, if politicians are really organized, they don’t have to do any of the dirty work themselves. Like Philippine President Rodrigo Duterte, they can maintain their verified accounts or even ignore social media while employing troll armies or “IT cells” to do their dirty work for them. 

So what needs to happen? First and foremost, any discussion about responses by social media platforms needs to remember the fact that these are businesses, and that extremism has undoubtedly generated ad revenue by keeping people engaged. We can’t just hope they’re going to do the right thing, especially when there doesn’t seem to be clear agreement about what “the right thing” is. 

It is beyond time for a meaningful debate about this in the United States. That meaningful debate isn’t “repeal Section 230 of the Communications Decency Act” (which protects companies from liability for content posted by users). That kind of reactionary discussion will have results that are disastrous for freedom of expression, such as the European Union’s nearly finalized “Terrorist content online regulation,” which will lead to mass deletion of documentation of human rights abuses and other important content. But the answer is also not “these companies have no responsibility to society; they’re protected by the First Amendment and can therefore do whatever they want.” Instead, this is the time to have a meaningful discussion about social media companies’ responsibilities, one that takes into account the global impact of these companies and asks what needs to happen to stop them from being used as tools of mass violence.

It’s completely logical to expect that that meaningful discussion won’t take place. In the meantime, if Facebook, Twitter, and YouTube care at all about the potentially fatal consequences of their products, they need to create stronger global policies and procedures on incitement to violence and mis- and disinformation. These policies need to take into account offline realities such as India’s march towards genocide and the US’s quick slide towards fascism. Similarly, platforms need to prioritize resources based on the severity of consequences in various countries. They can look to various qualitative and quantitative tools, such as the Early Warning Project of the United States Holocaust Memorial Museum and Dartmouth College.

Ultimately, while Trump’s comments on social media were not the only cause of the storming of the Capitol, they were a key ingredient. Many present felt that an invitation from the President of the United States justified their actions. It’s quite clear that social media platforms can no longer allow world leaders to say whatever they want.