In the rapidly evolving landscape of social media, X (formerly Twitter) has found itself at the center of a contentious debate over content moderation and the handling of propaganda. Under Elon Musk's ownership, X's approach to content moderation has diverged sharply from its peers', raising concerns among disinformation researchers and law enforcement about its effectiveness in combating foreign influence campaigns aimed at swaying US politics.
The Shift in Strategy: X’s Departure from Industry Alliances
Following Elon Musk's acquisition, X withdrew from a joint effort among the most prominent social media platforms to combat propaganda and misinformation.
The alliance, which comprised Meta, Google, and the platform formerly known as Twitter, was formed in response to Russian-backed influence operations during the 2016 US presidential election.
After a series of internal changes at X, including the removal of its election integrity team, the company's representatives were no longer invited to the collaborative sessions intended to facilitate the exchange of information about propaganda networks.
The Persistence of Propaganda: A Comparative Analysis
Recent investigations have revealed a troubling pattern: accounts flagged for spreading propaganda on other platforms remain active on X, apparently escaping its notice.
The Washington Post, working with the Stanford Internet Observatory (SIO), has identified a sizeable number of China-based accounts that continue to operate without restriction on the platform. These accounts, previously detected by Meta, frequently pose as Americans and join conversations about the political situation in the United States and China, using images and narratives to subtly shape public perception.
X’s Relaxed Moderation: A Door Left Open for Influence Campaigns
Critics contend that the company's current content moderation standards are excessively lax, allowing propaganda to flourish. Even accounts linked to criminal investigations have remained active, weighing in on hot-button issues such as the opioid crisis and police brutality.
As a result, analysts are uneasy. They worry that unchecked misinformation could substantially influence future elections, especially now that advances in artificial intelligence have made disinformation operations more sophisticated and harder to detect.
“Misinformation, propaganda, and graphic footage of the abductions and military operations in Israel and Gaza are spreading like wildfire on social media, especially X, formerly known as Twitter, where content moderation is almost non-existent anymore” https://t.co/K5RWEEjRX5
— Tara Lemieux, Writer (@Tara_Writer) October 10, 2023
Looking Ahead: The Fight Against AI-Driven Propaganda
As the 2024 elections approach, the challenge of mitigating AI-driven propaganda looms large. Recent incidents, such as AI-generated robocalls impersonating public figures, underscore the urgency of developing effective countermeasures.
In response, X and other major tech firms have committed to an "AI Elections Accord," aiming to enhance the detection and mitigation of deceptive AI use in electoral contexts.
This agreement represents a crucial step towards safeguarding the integrity of future elections.
The Critical Role of Content Moderation in Upholding Election Integrity
The situation on the platform underscores the complex interplay between content moderation, free speech, and the protection of democratic processes. As social media platforms continue to grapple with the challenges posed by foreign influence and AI-generated misinformation, the effectiveness of their content moderation policies will be closely scrutinized.
For X, the path forward involves balancing the platform’s commitment to open discourse with the imperative to prevent malicious actors from undermining election integrity.