While AI has the potential to be a transformative force, its current limitations, particularly around bias, need to be addressed.
Artificial Intelligence (AI) has indisputably become a dominant force in our lives, influencing everything from online shopping to medical diagnoses.
At its core, AI is designed to learn from data and make decisions or predictions based on it. This objective, albeit straightforward, can become convoluted when the data is skewed or inherently biased. Case in point: the purported bias against Palestinians in AI systems, including ChatGPT, the popular language model developed by OpenAI.
Palestinian academic Nadi Abusaada asked ChatGPT, OpenAI's tool, whether Palestinians deserve to be free and whether Israelis do, and the answers he received to the two questions were markedly different.

The AI tool's answers portray Israeli freedom as an objective fact and Palestinian freedom as a matter of subjective opinion.

ChatGPT is a major concern in academic circles around the world, which is what first prompted Abusaada to experiment with the tool on questions in his own academic field. The experiment took a more curious turn when he began probing the tool's ethical and political responses.
“As a Palestinian, I am unfortunately used to seeing biases about my people and country in mainstream media, so I thought I would see how this supposedly intelligent tool would respond to questions regarding Palestinian rights,” said Abusaada.
ChatGPT’s answer did not strike him as a surprise.
“My feelings are the feelings of every Palestinian when we see the immense amount of misinformation and bias when it comes to the question of Palestine in Western discourse and mainstream media. For us, this is far from a single incident,” Abusaada told Doha News.
“It speaks to a systematic process of dehumanisation of Palestinians in these platforms. Even if this individual AI tool corrects its answer, we are still far from addressing this serious concern at the systematic level.”
Where does AI bias stem from?
To understand the nature of bias in AI, it is crucial to recognise that AI systems learn from the data they’re trained on.
In the case of ChatGPT, it is trained on a diverse range of internet text. However, the model itself does not know which specific documents were in its training set or whether it was directly trained on any particular dataset.
As such, if the data the AI was trained on contains bias — conscious or unconscious — the AI can unknowingly perpetuate this bias.
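To illustrate that mechanism in the simplest possible terms, the sketch below trains a toy text classifier on a deliberately skewed, invented dataset. The group names, sentences and labels are hypothetical and have no connection to ChatGPT's actual training pipeline; the point is only that a model trained on skewed data reproduces the skew.

```python
# Illustrative sketch only: a toy classifier trained on deliberately skewed,
# invented data. It has no relation to ChatGPT's real training pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data in which one group is mostly paired with
# negative framing and the other with positive framing.
texts = [
    "group_a citizens celebrate festival",        # positive framing
    "group_a leaders praised for cooperation",    # positive framing
    "group_b protest turns violent",              # negative framing
    "group_b blamed for unrest",                  # negative framing
]
labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)

# Identical neutral sentences, differing only in the group mentioned:
# the model inherits the skew that was baked into its training data.
for sentence in ["group_a holds a public gathering",
                 "group_b holds a public gathering"]:
    print(sentence, "->", model.predict(vectorizer.transform([sentence]))[0])
```

The two test sentences are neutral and identical except for the group name, yet the toy model rates one positively and the other negatively, because that association is all it has ever seen.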
“If the data is biased, then the end result of the product is going to be biased against Palestinians. We can’t start talking about the harm that AI is causing for Palestinians, but rather the biased data and how the biased data is causing harm in the Palestinian narrative and the Palestinian cause in the short and the long term,” Mona Shtaya, Advocacy and Communications Manager at 7amleh, told Doha News.
As it pertains to the Palestinian context, it is necessary to consider the global political landscape and the manner in which it influences the creation and distribution of information.
The illegal Israeli occupation of Palestine has been the subject of media coverage and internet content for decades.
However, since the inception of the internet, the representation of Palestinians in this content has often been skewed or incomplete, reflecting perspectives that frequently lean pro-Israeli.
“These tools are fed what is available online including smearing and disinformation. In the English language, western and Israeli accounts and perceptions of Palestine and Palestinians are unfortunately still prevalent,” Inès Abdel Razek, Executive Director of Rabet, told Doha News.
A study by the Association for Computational Linguistics (ACL) highlighted that machine learning algorithms trained on news articles were more likely to associate positive words with Israel and negative words with Palestine.
This kind of bias is subtle and often goes unnoticed, but it significantly shapes the portrayal of Palestinians, reinforcing negative stereotypes and affecting overall understanding and empathy.
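An association skew of this kind can, in principle, be surfaced with a simple co-occurrence audit. The hedged sketch below uses an invented mini corpus, invented entity names and invented word lists purely for illustration; it is not the methodology of the study cited above.

```python
# Hedged illustration of a co-occurrence audit, not the cited study's method.
# The corpus, entity names and word lists below are invented for this sketch.
from collections import Counter

corpus = [
    "entity_x security forces praised after successful operation",
    "entity_y militants blamed for deadly attack",
    "entity_x celebrates peaceful election result",
    "entity_y crisis worsens amid violent clashes",
]

positive_words = {"praised", "successful", "celebrates", "peaceful"}
negative_words = {"blamed", "deadly", "crisis", "violent"}

scores = {"entity_x": Counter(), "entity_y": Counter()}
for sentence in corpus:
    tokens = set(sentence.split())
    for entity in scores:
        if entity in tokens:
            scores[entity]["positive"] += len(tokens & positive_words)
            scores[entity]["negative"] += len(tokens & negative_words)

# A persistent imbalance in these counts across a large news corpus is the
# kind of association skew the study described.
for entity, counts in scores.items():
    print(entity, dict(counts))
```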
Another layer of bias comes from content moderation and data selection practices by tech companies.
The algorithms used for these processes often favour content from certain regions, languages or perspectives. This could inadvertently lead to an underrepresentation of Palestinian voices and experiences in the data used for training AI models like ChatGPT.
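The following minimal sketch shows how a seemingly neutral selection rule can produce that kind of underrepresentation. The source list, language tags, "source_rank" field and threshold are all hypothetical; real pipelines are far more complex, but the selection effect is the same in principle.

```python
# Hypothetical sketch of how a data-selection filter can skew representation.
# The documents, language tags and threshold below are invented for illustration.
from collections import Counter

documents = [
    {"lang": "en", "source_rank": 0.9, "text": "..."},
    {"lang": "en", "source_rank": 0.8, "text": "..."},
    {"lang": "he", "source_rank": 0.7, "text": "..."},
    {"lang": "ar", "source_rank": 0.3, "text": "..."},  # ranked low, e.g. fewer inbound links
    {"lang": "ar", "source_rank": 0.2, "text": "..."},
]

# A seemingly neutral quality threshold applied uniformly...
selected = [d for d in documents if d["source_rank"] >= 0.5]

# ...quietly drops one language almost entirely from the training pool.
print("before:", Counter(d["lang"] for d in documents))
print("after: ", Counter(d["lang"] for d in selected))
```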
The example of Meta
A report issued in September 2022 confirmed what civil society organisations and human rights defenders have been saying and documenting for years about Meta’s discriminatory policies towards Palestinians and their supporters.
“We should go back again to the source of the data and to the amount of requests for the takedowns that the Israeli Cyber unit are sending to tech companies annually, versus the Palestinian escalated cases when it comes to the Israeli hate speech and violent speech against Arabs and Palestinians,” added Shtaya.
The report by Business for Social Responsibility (BSR), an independent consulting firm commissioned by Meta, acknowledged bias in the platforms’ moderation practices, with significantly disproportionate consequences for Palestinian and Arabic-speaking users’ digital rights.
One of its main conclusions is that Meta’s actions contributed to violations of Palestinians’ rights to free expression and assembly, political participation and non-discrimination.
“There is over-enforcement on Palestinian Arabic content and under-enforcement on Israeli Hebrew content,” the report found.
According to Shtaya, in 2019 Israelis sent over 20,000 requests to social media platforms to take down Palestinian content.
“When I say request, I don’t mean like a piece of content or a post or a tweet, I mean a communication letter that might have hundreds or thousands of posts and tweets.”
7amleh’s monitoring of digital rights violations, and of social media companies’ responses during Israel’s attempts to forcibly evict Palestinian families from their homes in East Jerusalem during that period, documented the removal of Arabic-language posts and accounts recording Israel’s violations of Palestinian rights, as well as the spread of incitement against Palestinians in Hebrew, among other transgressions.
Acknowledging bias is the first step
“If unregulated, these AI tools can become another frontier for dehumanising Palestinians, where our fundamental rights are deemed a matter of opinion or ‘sensitive’ rather than a fact,” the executive director of Rabet told Doha News.
Addressing these biases is no simple task, given that they stem from systemic issues that permeate beyond the realm of AI. However, many believe that recognising the problem is the first step towards meaningful action.
OpenAI, for instance, has acknowledged potential bias in AI and is committed to reducing both glaring and subtle biases in how ChatGPT responds to different inputs.
“There should be work on the algorithms themselves but also on informing the public about the limits and dangers of these tools for truth-seeking or fact-checking,” added Abdel Razek.
To make progress, experts believe that AI models must be trained on more balanced and diverse datasets that represent a range of perspectives and experiences. This is easier said than done, as it calls for a radical change in how data is collected and processed.
It also necessitates the inclusion of more diverse voices in the development and decision-making processes within the tech industry.
Moreover, there’s a need for more transparency in how AI models are trained and deployed.
OpenAI has taken steps towards this with initiatives like AI Watch, which allows external audits of its safety and policy efforts. However, the tech industry as a whole needs to follow suit.
The Israeli occupation of Palestine serves as a stark reminder of how the intersection of geopolitics, media representation and AI can perpetuate bias, spread misinformation and reinforce harmful stereotypes.