How French law fails to protect users from hate speech and misinformation on TikTok

The French election is around the corner and it’s playing out on TikTok. How well are France’s legislative efforts to combat disinformation and hate speech working?

As a young, soon-to-be high school graduate, Mattéo Ishak-Boushaki, 17, never spends less than an hour per day on TikTok. 

While Mattéo is among TikTok’s key demographic—of the 14.9 million active users in France, 37% are around his age—he is not just on TikTok as a spectator looking to be entertained. Rather, he is a content creator with a very specific passion: spreading information about the French election.

“I saw a lot of confusion among other young people about the election, from accounts that presented themselves as factual accounts but had some really misleading information,” he explained. “I present the programs of the candidates and decipher the goals of different political parties.”

As he scrolls TikTok, Mattéo encounters, in equal measure, videos breaking down the broad aims of each candidate and videos that disparage female candidates for their physical appearance or call for the deportation of Muslims from France.

For Mattéo, TikTok is a place where he can educate fellow youth, encouraging them to vote. “I know there are those banners on TikTok videos giving resources on how to vote, but I don’t know how many young people are looking at those,” he said.

Mattéo has been talking politics on TikTok since he created his account two years ago. But it was early this year, when more people started talking about the election, that the algorithm began pushing his content; his follower count doubled, then tripled.

“I received a lot of encouraging messages and people telling me that they didn’t know anything or not much about politics and that now, thanks to my videos, they feel more informed.”

With the French elections approaching, creators like Mattéo have moved from a niche space into mainstream feeds, appearing alongside comedy, commentary, and general information. The most popular hashtag used in videos about the election, #presidentielles2022, had 32 million views as of 20 March, nearly double its count a month earlier.

The emergence of TikTok as an arena for disseminating political information has brought new actors to the platform, including politicians and presidential hopefuls themselves. But it has also opened the door to actors who propagate hate speech and “fake news,” a term that first entered the political sphere in 2016 to describe the deliberate spread of false information.

Some actors create accounts impersonating candidates, which violates TikTok’s community guidelines, and post information intended to mislead and misinform young demographics. In some instances, accounts post hate speech targeting gender, ethnic, and racial minorities. TikTok does remove such content, but previous reports from social media watchdogs have found that it often reaches hundreds of thousands of viewers before being taken down. TikTok did not respond to requests for comment.

Since his 2017 presidential victory, French President Emmanuel Macron has been open about wanting to combat both hate speech and misinformation online. Two key pieces of legislation have since been passed: the first, in 2018, known as the “fake news” law, is considered by some an attempt to ban false information outright. The second, passed in 2020, was originally an ambitious bill aimed at forcing social media companies to take offending content down in less than 24 hours, but it was ultimately watered down, leaving only modest provisions.

To understand the efficacy of France’s laws in fighting hate speech and disinformation online, I attempted to track both on TikTok by replicating the experience of an individual who uses the platform to follow the French election. I created a TikTok account and interacted with election-related content: following the accounts of candidates and of creators who posted about the election, and liking all election-related content I saw.

I logged on twice a week over the period of several weeks. I navigated the personalized landing page and tracked candidates’ hashtags, political slogans, and the most popular hashtags of the election to determine if hate speech and fake news existed within the app.  

I found:

  1. Hate speech and ‘fake news’ are easily pushed by the algorithm.

Within the first two hours of the creation of my account, three videos containing misogynistic and racist hate speech appeared on my landing page. All three videos were taken down nearly 48 hours after they were posted.

One video, from an account devoted to supporting the far-right presidential candidate Eric Zemmour, claimed that Marine Le Pen, leader of France’s far-right National Rally party (formerly the National Front), would deport all Muslims living in France. Another claimed that voting for Macron is the equivalent of supporting sexual violence against French women and girls, and that voting for Le Pen would protect them. A third video, from an account impersonating Eric Zemmour, questioned whether there are “good Muslims” in France.

  2. Multiple accounts impersonate candidates.

Not all candidates on TikTok are verified, which allows multiple impersonation accounts to flourish. These accounts not only violate TikTok’s own community guidelines but also post false and sometimes offensive content under the guise of being the candidate. As of 20 April 2022, Macron and Mélenchon had verified accounts, with all other candidates remaining unverified, including Marine Le Pen, who came in second in the first round of the election.

Image: various accounts impersonating Le Pen; the first is her real account, which is not verified.
  3. Taking down videos from accounts flagged for community violations can take time.

TikTok flags accounts with multiple community violations before you can follow them but does not inform you of what guidelines were violated. Untrue and hateful videos spread by such accounts eventually get removed, but the process can take days, and by then the videos have already gotten significant traction within the algorithm.

Thibault, a journalist who shares factual content about the election on TikTok, has seen such content firsthand, in particular sexist videos about Brigitte Macron.

“It takes the app ages to take those videos down, and by then, the damage is done.” 

  4. The election information banner is inconsistently applied.

Not all election-related videos carry TikTok’s banner linking to election information. This was a noticeable pattern on accounts that have been flagged for multiple community violations, such as @leprez, an account with nearly 130,000 followers.

Banner reads: “This account has been reported for multiple community guideline violations,” with the option to cancel the follow or to follow the account anyway.

Although the account posts multiple election-related videos a day, none of them carry the election information banner. This is especially notable given that the account has been reported multiple times for community violations.

Image from @leprez showing that no election banner is being used

While the app bears responsibility for community violations, Macron has also made promises both to ban fake news and to better regulate hate speech online.

——

In November 2018, the French parliament passed the country’s “fake news” law. The law concerns both foreign television media outlets and digital platforms, including social networks.

In the three months before a national election, the law allows any citizen to bring a case before a judge, who must rule within 48 hours on whether to halt the dissemination of false information that could alter the “sincerity” of the election. During the campaign period, the law also obliges digital platforms—in particular social networks—to set up a system for reporting false information.

But François-Bernard Huyghe, research director at l’Institut de relations internationales et stratégiques and an expert on fake news, is not convinced the law is necessary.

“I find it at best useless, and at worst, liberticide,” he said bluntly. (Liberticide is a French term for anything that destroys or restricts liberty.)

“We already have plenty of laws against fake news. This law is just a show,” he said. 

Huyghe believes that the 2018 law is a less effective version of France’s 1881 press law, which punishes the act of deliberately disseminating false news with the aim of seriously disturbing public order. But that law was never imagined for a world where fake news can be spread in the form of altered videos or mislabeled images.

Despite this, Huyghe believes that fact-checking by the media, along with technological tools that help confirm whether videos and images are fake, is a better solution than a seemingly redundant law. To him, this is “much more effective than establishing more state censorship with big cops.”

France also passed a law in 2020 to counter hate speech on social media. But the law looks far different from what Laetitia Avia, a member of parliament for La République En Marche, the French president’s political party, intended when she proposed the bill. The law was subjected to major amendments that weakened its original aim of fining companies 250,000 euros for failing to take down offending content within a 24-hour window.

Avia wrote in a statement that the law as it stands should be “a roadmap to improve a system that we knew was new and therefore needed to be perfected.”

Although the law was significantly watered down, one provision that survived established an online watchdog broadly responsible for fighting online hate speech, known as L’Observatoire de la haine en ligne. According to the Conseil Supérieur de l’Audiovisuel website, the body is responsible for analysing and qualifying content related to online hate, improving the understanding of the phenomenon by tracking its evolution, and sharing information among the public and private actors concerned.

Yet no reports have been published since the Observatoire’s creation two years ago.

Some members of the Observatoire are not convinced of the body’s efficacy. Jerome Ferret, a researcher at the Observatoire and professor of sociology, quit the body in 2021 after attending just two meetings. Not only was it a significant time commitment, but he also quickly became aware “of the political display of such an unproductive device,” he wrote in an e-mail. In other words, Ferret believes the Observatoire was unproductive and existed merely as a political statement against hate speech rather than a vehicle for concrete action.

Hasna Hussein, a researcher at the Observatoire and expert on terrorism and online radicalisation, also expressed frustration with the body. “There is no budget, no human resources, and no funding for the Observatoire. It relies on the good will of every one of us, and unfortunately, that was a mistake.”

According to Hussein, the Observatoire’s lack of public support, coupled with disagreements among members over details such as how to define hate speech, made it difficult to rely on each member’s cooperation and prevented the body from producing reports or making progress in tracking the evolution of hate speech in France.

The 2016 EU hate speech code of conduct narrowly defines hate speech as the public incitement of violence or hatred against a person or group on account of their race, colour, religion, descent, or national or ethnic origin. The Observatoire adopted this definition despite the fact that it does not include “misogynist hate speech or homophobic hate speech,” said Rachel Griffin, a legal expert on social media regulation in Europe. Hussein agrees the definition is limited.

“There has been work on methodology, but no drafts, no reports. Will it lead to something more concrete? I don’t know. We are a little bit in the dark,” she said. For Hussein, the French public authorities’ lack of assistance and the politicization of fighting hate speech leave France ill-equipped to tackle the problem. “We are facing a serious lack of skills. To say it bluntly, they [French public authorities] are incompetent but it’s them who have the solutions.”

—-

Some defenders of the 2020 law maintain that its ineffectiveness is a result of major provisions being cut from it, while others point not to the law itself but to the lack of training and enforcement mechanisms. Griffin has researched the German equivalent of the law. “The more robust version of this law in Germany has been found to do little to actually stop hate speech and misinformation, and risks carelessly censoring free speech,” she explained.

Germany’s law contains harsh provisions that were cut in France, in particular a requirement that social media companies take down offending content within 24 hours or face a fine.

But according to Griffin’s research, the law has not led to more content being taken down in Germany. On top of that, the transparency reports published by social media companies, which detail how much content has been removed, do not explain why it was removed, making it difficult to assess the law’s impact.

This is concerning for some, who believe that it is important to understand why content is taken down.

“Both the 2018 law and the 2020 law want to make the procedures to take content down faster, but it is not necessarily more effective,” said Marie Mescam, the head of SOS Racisme, an anti-racism association in France and a listed member of the Observatoire.

Mescam explained that judges responsible for handling cases of hate speech and fake news in France are not trained on such issues, in particular online hate speech, making it difficult for faster procedures to translate into effective rulings.

Huyghe shared a similar hesitation about the 2018 fake news law’s push for faster trials. He explained that the process of getting a judge to rule on a case of fake news, for instance, can take a long time.

“In the meantime, the newspapers in Europe, through the fact checking devices, will have reported on it,” he said.  

Not only has the effectiveness of the 2018 and 2020 laws been called into question, but even in their most robust, idealistic forms, experts remain skeptical that they would protect users.

—-

Civil society has taken a prominent role in fighting back against hate speech and fake news on social media, particularly TikTok, whether it is Mattéo with his TikTok account or Hussein with her de-radicalisation work on the ground.

Mattéo has himself been a victim of hate speech online. “When I showed my interest in Christiane Taubira, I received racist messages about her. When I posted about being Algerian, some people posted content about me accusing me of only supporting certain candidates or ideas because of my origins.”

Despite being targeted by hate speech, to the point where other accounts make videos trying to discredit him, he remains optimistic about posting on the app, especially given the amount of positive feedback he’s received from his followers.

Outside the Observatoire, Hussein works on the ground in Strasbourg, running de-radicalisation awareness seminars with young people and their families to combat the threat of hate speech. But for her, civil society working alone is never sufficient.

That is why, Hussein maintains, it is important for France to step up its legislative efforts to support the fight against hate speech and fake news.

“If we are not supported by the public authorities, it slows us down. It is a hindrance to our work.”

Article by Greta BAXTER

Header image: © Canva.com
