By Colette Davidson
In the Philippines, the influence of social media is alive and well. In July, President Rodrigo Duterte, who has held the country’s highest office since June 2016, admitted at a press conference that his campaign had used an army of social media trolls to influence the outcome of the elections.
“It was the first time where the top official in the land was essentially elected with the help of social media,” says Maria Ressa, CEO of Filipino social news network Rappler.
Like candidates in recent elections in France, the UK and the Netherlands, and like Donald Trump’s campaign in the United States, Duterte drew on populist rhetoric and the gap between rich and poor – as well as social media savvy – to gather votes. Soon after his election, he boycotted the media for a month, and anyone critical of him or the country’s drug war was immediately attacked with strong language online, with journalists among the first targets.
“The campaign accounts of then-mayor Duterte became weaponized after he won office,” continues Ressa, who has been viciously targeted by the trolls on numerous occasions. “With this propaganda machine, you can see the line between freedom of expression and hate speech. You can’t tell what’s real and what’s not… when you incite to hate, you don’t know if and when it will lead to real violence.”
The power of social media today is undeniable. It is strong enough to influence everything from our choice of friends and music to who we elect as our next president. Its hold over the media – and how the public responds to that media – is also inarguable.
Despite an unbalanced and often fractious relationship, traditional media cannot avoid the powerful platforms that social media giants operate. Reaching their audiences is key – Facebook has approximately two billion users worldwide, and a large share of them treat the site as a news source. A 2016 report by the Pew Research Center found that 62 percent of American adults get news from social media sites like Facebook, Twitter and Reddit.
Professional journalistic content from traditional news outlets therefore rubs shoulders with everything else online, from fake news and disinformation to violent and extreme speech, all in a largely unfettered – and unfiltered – arena. As politics, media and social networks like Facebook and Twitter become increasingly intertwined, journalists face the ever more difficult task of drawing the line between freedom of expression and hate speech – of deciding what to publish and what not to.
“We’re very much encouraging the media to step up and deal with this issue to promote the ethics of journalism,” says Tom Law, Director of Campaigns and Communications at the UK-based Ethical Journalism Network (EJN). “When journalism isn’t held accountable, it encourages governments to control the press.”
In Germany, lawmakers have moved to assert their power over social media, recently passing a controversial bill under which Facebook, Twitter and other social media companies can be fined for failing to remove hate speech. Traditional media companies are largely held accountable for content – and commentary – published on their own platforms, whether in print, broadcast or online. But so far, social media companies have avoided being designated as content publishers.
Critics of the German bill, which goes into effect in October, say it infringes on free speech. The law could, in turn, affect how language is allowed or restricted in other online forums, such as news websites.
Part of the confusion for the media on how to handle hate speech is that, according to British human rights organization Article 19, there is no uniform definition of hate speech under international law. This has left news organizations to their own devices when it comes to dealing with the issue online.
In the UK, news organizations can refer to the country’s Editors' Code in cases of possible hate speech. While none of its 16 clauses specifically addresses hate speech, there is a clause on discrimination that can be useful when dealing with how readers interact with the media online.
“There is a code of ethics in the UK and if someone feels that a person has breached those ethics, they can come at them through the Editors' Code and you can certainly be kicked off a news website,” says Chris Elliott, former readers’ editor of The Guardian.
In the Philippines, Rappler recently created a tool that helps determine whether a bot or a real person is running a social media account.
“People are being misled,” says Ressa. “We as journalists need to shine the light [on the important issues] but how do you know whether you’re being manipulated as well in this new world?”
Because the line between hate speech and freedom of speech has become so blurred, organizations are working to create tools to help. In 2015, Article 19 created a toolkit explaining how hate speech is defined and how groups and individuals can effectively counter it while protecting freedom of expression.
The toolkit offers policy measures to undertake in order to foster equality and lists the exceptional circumstances where a state can intervene to prohibit severe forms of hate speech.
Meanwhile, the EJN has created a five-point test for identifying hate speech, which journalists can download and use in their newsrooms. The test defines hate speech as speech that makes a direct call to action or incites violence against others, and it takes into account the social, economic and political climate in which the speech is made, as well as the status of the speaker.
“The EJN hate speech test encourages journalists to question what is motivating the speech and whether it is deliberately intended to cause harm,” says Law. “Asking the right questions in the reporting and editing process is essential to make the right editorial choices and expose dangerous speech for what it is.”
Another way to tackle hate speech is by looking at language itself. Last year, the American University in Cairo, with the help of its journalism graduate students, created a list of hate speech vocabulary in Arabic that the media can use as a guide.
Experts warn that while glossaries of hate speech can be useful, they must be more than simply lists of banned words; they should also capture the intention and motive behind a word’s use and explain why it might be dangerous.
Whatever attempts are made to rein it in, social media is here to stay, meaning news organizations have no choice but to face the challenges it poses to journalists and readers alike. Rappler’s Maria Ressa says she works to make sure her team of editors sticks to consistent journalistic guidelines rather than slipping into the kind of angry, vitriolic prose that some readers allow themselves to post.
The Guardian’s Chris Elliott says journalists must be mindful of what they publish and why – and remain objective in their reporting.
“There is definitely something about a mutual reinforcement [between readers and the media] when it comes to hate speech,” says Elliott. “You have to really think about how you report a delicate issue, the language you use, so you don’t adopt the story or topic but just report it neutrally.”
Overall, having mutual understanding between the media and its readers is the way out of the woods, says the EJN’s Law.
“One of the most important issues in journalism is trust. If we don’t take steps as an industry, as media companies and as journalists to improve trust and help our audiences understand the framework in which we work, then trust in journalism will continue to decrease and the role of journalism in democracy and society will be even harder.”