A new study shows that Meta and X approved advertisements featuring hate speech and incitements to violence ahead of the federal elections in Germany.
A recent study by a German corporate responsibility organization has found that the social media platforms Meta (Facebook) and X (formerly Twitter) approved advertisements containing anti-Semitic and anti-Muslim messages in the run-up to Germany's federal elections.
As part of their research, the investigators submitted 20 ads, 10 to each platform, containing violent language and hate speech aimed at minority groups.
The findings indicated that X approved all 10 of the ads submitted to its platform, while Meta cleared 5 out of 10. The ads included calls for violence against Jews and Muslims, derogatory comparisons of Muslim refugees to 'viruses' and 'rodents,' and demands for their extermination or sterilization.
One advertisement even promoted setting synagogues ablaze to 'stop the Jewish globalist agenda.' The researchers noted that although the ads were removed before publication, the approvals raise serious concerns about the platforms' content moderation practices.
The organization behind the research has presented its findings to the European Commission, which is expected to open an investigation into potential breaches of the EU Digital Services Act by Meta and X. The timing is particularly significant, as the findings come just before Germany's federal elections, raising concerns about the potential impact of hate speech on the democratic process.
Previously, Facebook faced backlash during the Cambridge Analytica scandal, in which a data analytics firm was found to have used similar tactics to manipulate elections around the world, resulting in a $5 billion fine for the company.
Additionally, Elon Musk, the owner of X, has been accused of directly interfering in the German elections, including by endorsing the far-right AfD party.
It remains unclear whether the approval of such advertisements stems from Musk's political leanings or from his broader commitment to 'free speech' on X. Musk has dismantled much of X's content moderation framework, relying instead on a 'Community Notes' system in which users add context to posts to offer different perspectives.
Mark Zuckerberg, Meta's CEO, announced a comparable system for Facebook, although he indicated that AI-driven content moderation would still be used to combat hate speech and unlawful content.
Nonetheless, this transition has raised alarms, particularly given reports that extremist right-wing content is increasingly being promoted on platforms like X and TikTok, influencing public sentiment.
An economic decline and a recent surge in violent attacks linked to Muslim migrants have exacerbated these tensions.
It is unclear whether the rise in extremist content reflects real-world events or whether social media algorithms are amplifying such messages to boost user engagement.
Regardless, both Musk and Zuckerberg have shown a readiness to reduce content moderation despite pressure from the European Union and German officials.
Whether this investigation will prompt the EU to tighten regulations on X, Facebook, and TikTok remains to be seen, but it underscores the ongoing challenge of balancing free speech with the need to curb extremist content.
The study illustrates the broader concern that hate speech frequently serves political ends, complicating the role social media platforms play in content moderation.
While discussions on regulatory measures continue, the question of who should regulate digital speech—private companies or governmental bodies—remains unanswered.
Like traditional media outlets, social media platforms may face growing scrutiny regarding how they manage user-generated content.