The United States’ unfaltering support for decades-long Israeli crimes against the Palestinian people has found its way into the field of Artificial Intelligence, with Meta-owned WhatsApp introducing an AI feature that blatantly discriminates against Palestinians relative to Israelis.
The recently unveiled feature allows users of the US-based messaging platform to generate cartoon-like images of people and objects from text prompts and to turn them into stickers to send in chats.
The AI feature, which has not yet been rolled out to all users, has been found to produce images of gun-wielding boys and men when given Palestine-related prompts.
In an investigation reported by The Guardian, the prompt “Muslim boy Palestine” generated four images of children, one of which showed a boy holding an AK-47-like firearm and wearing a hat commonly worn by Muslim men and boys. Another search for “Palestine” generated an image of a hand holding a gun.
When prompted with “Israel”, however, the feature returned the Israeli flag and a man dancing. A search for “Israeli boy” returned images of children smiling and playing football, and none of the stickers featured guns.
Even explicitly militarized prompts such as “Israel army” or “Israeli defense forces” did not result in images with guns, the daily paper noted.
“Prompts for ‘Israeli boy’ generated cartoons of children playing soccer and reading,” The Guardian said. “In response to a prompt for ‘Israel army,’ the AI created drawings of soldiers smiling and praying, no guns involved.”
The paper also reported that the prompt “Hamas” brought up the message “Couldn’t generate AI stickers. Please try again.”
In response to The Guardian’s reporting on the AI-generated stickers, the Australian senator Mehreen Faruqi, deputy leader of the Greens party, called on the country’s eSafety commissioner to investigate “the racist and Islamophobic imagery being produced by Meta.”
“The AI imagery of Palestinian children being depicted with guns on WhatsApp is a terrifying insight into the racist and Islamophobic criteria being fed into the algorithm,” Faruqi said in an emailed statement.
“How many more racist ‘bugs’ need to be exposed before serious action is taken? The damage is already being done. Meta must be held accountable.”
Meta has come under fire from many Instagram and Facebook users posting content supportive of Palestinians amid Israel’s brutal onslaught against the besieged Gaza Strip, which has claimed more than 9,500 innocent lives since October 7. More than 26,000 people, mostly women, children and the elderly, have also been wounded.
The users say Meta is enforcing its moderation policies in a biased way, a practice they argue amounts to censorship.
A September 2022 study commissioned by the company found that Facebook and Instagram’s content policies violated Palestinian human rights during Israeli attacks on the Gaza Strip in May 2021.