Billed as an open and transparent process, Community Notes are depicted as a tool that does not represent Twitter’s official viewpoint.
Fake news spreads faster than accurate news on Elon Musk’s rebranded social media platform, X, where avid social media users have taken it upon themselves to boost the expansion of Community Notes. However, rights activists say the feature is not always safe or accurate and, in some cases, spearheads harassment, abuse and disinformation.
The fact-checking programme, once known as Birdwatch, allows certain users to add helpful context to posts that may be misleading or missing important information. In November, Musk stated that the implementation “has incredible potential for improving information accuracy on Twitter.”
While the feature relies on contributors to rate each other’s notes, note-writing privileges are not open to all users. As per X guidelines, access is granted to those whose ratings of other notes (‘Helpful’ or ‘Not Helpful’) fall in line with a broader consensus among other users.
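X’s actual scoring model is more involved than a simple tally (the open-source repository describes a matrix-factorisation approach), but as a rough, purely illustrative sketch of the idea that contributors earn note-writing access by rating in line with consensus, the Python snippet below counts a hypothetical ‘rating impact’ score; the field names and threshold are assumptions, not X’s real parameters.

```python
# Purely illustrative sketch -- NOT X's real scoring model, which is an
# open-source matrix-factorisation algorithm. Names and the threshold are
# hypothetical, chosen only to show the "rate in line with consensus" idea.
from dataclasses import dataclass


@dataclass
class Rating:
    note_id: str
    helpful: bool            # this contributor's vote on the note
    consensus_helpful: bool  # how the note was ultimately rated overall


def rating_impact(ratings: list[Rating]) -> int:
    """Count how many of a contributor's ratings matched the consensus."""
    return sum(1 for r in ratings if r.helpful == r.consensus_helpful)


def can_write_notes(ratings: list[Rating], threshold: int = 5) -> bool:
    """Unlock note-writing once enough ratings have aligned with consensus."""
    return rating_impact(ratings) >= threshold
```

In practice, X also weighs whether raters who normally disagree with each other agree on a given note, which a simple tally like this does not capture.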
Accusations of bias
However, despite being billed as a transparency tool, the Community Notes feature has come under fire in recent weeks for supporting the spread of harassment, abuse, and disinformation.
Last week, Palestinian-American journalist Noor Wazwaz was on the receiving end of harassment by pro-Israel supporters after she documented her struggle travelling to her homeland, detailing the racism she faced from Israeli airport staff.
However, the Community Note below her post accused the Palestinian journalist of “condoning and encouraging terrorism against Israelis”.
“Her difficulties at the border are to be expected. Her saga is, clearly, documented here, in context,” the Community Note added. The note triggered uproar among rights activists, who described it as part of an “escalating weaponisation” of the feature.
“I’ve spoken a lot about the escalating weaponisation of this Community Notes. This is unconscionable,” one user wrote.
Others questioned how the platform could cite “israellycool” as a legitimate source on a post documenting the realities of Palestinians living under Israeli occupation.
“Not taking a website called “israellycool” at their word about israeli apartheid,” one user wrote, while another posted, “Why is @elonmusk allowing people to use Community notes to spread misinformation and use biased sources???”
Palestinian policy analyst Marwa Fatafta warned: “Please pay close attention to the ‘Readers added context’ section below the tweet. I’ve been off this platform for a while, and it’s pretty disturbing to come back and see new infrastructure added to support the spread of harassment, abuse, and disinformation.”
Unreliable platform
Commenting on the subject, Marc Owen Jones, an associate professor of Middle East Studies at Hamad bin Khalifa University in Qatar, said there are several significant issues with the Community Notes approach.
“In principle, community notes aim to encourage bipartisan discussions on contested topics. However, there are several significant issues with this approach. Firstly, relying on crowd-sourced fact-checking shifts the responsibility away from the company and may lead to an unreliable platform,” Jones told Doha News.
The associate professor said bad actors could manipulate the tool and that fact-checking often comes too late, as in the case of Palestinian-American journalist Noor Wazwaz.
“Additionally, the process behind selecting and prioritising comments is not transparent, raising concerns about bias. There have been historical cases of manipulation on social media platforms, and there are worries that bad actors could exploit the Community Notes feature,” Jones added.
“Furthermore, the nature of fact-checking means that it often comes too late, as problematic content has already been shared. The solution is not keeping up with the problem.”
X claims Community Notes do not represent the company and “cannot be labelled, removed, or addressed by Twitter unless it is found to be violating the Twitter Rules, Terms of Service, or our Privacy Policy.”
The Community Notes algorithm is open source and can be accessed on GitHub, along with the data used to power the feature, allowing anyone to audit, analyse or suggest improvements.
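For readers who want to examine that data themselves, something along the lines of the sketch below would work once an export has been downloaded locally; the file name, tab-separated format and column names here are assumptions, so the repository’s documentation should be checked for the current schema.

```python
# Minimal sketch for inspecting a locally downloaded Community Notes export.
# The file name, tab-separated format and column names are assumptions --
# check the GitHub repository's documentation for the actual schema.
import pandas as pd

notes = pd.read_csv("notes-00000.tsv", sep="\t")  # hypothetical local file

print(notes.shape)             # how many notes and columns the export holds
print(notes.columns.tolist())  # the fields available for analysis

# Example: tally notes by classification, if the export includes that column
if "classification" in notes.columns:
    print(notes["classification"].value_counts())
```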
X’s Community Notes policies also state that “anyone can report notes they believe aren’t in accordance with those rules.”
Far-right media personalities and politicians have flocked to X following a range of changes triggered under Musk’s ownership.
In November, Musk reinstated Donald Trump’s account nearly two years after the former United States president was banned for inciting violence – a violation of the platform’s guidelines. Trump also used his platform to spew hatred against minorities, including Muslims, leading to an increase in Islamophobic attacks around the world.
Despite the magnitude of the issue, Musk left the decision to users. More than 15 million users voted in a social media poll set up by Musk, with 51.8% in favour of reinstatement. “The people have spoken. Trump will be reinstated,” Musk posted.
Eventually, Musk acknowledged that some of the votes were cast by automated bots.