Antisemitism has taken on alarming new forms in the age of artificial intelligence, according to a report by Julia Senkfor of the American Security Fund (ASF).

This finds expression in multiple ways, including bias embedded in mainstream systems and deliberate weaponization by adversarial actors, who contaminate both the data AI trains on and the processes through which it is trained.

According to research by the Anti-Defamation League (ADL), antisemitism surged by 316% following Hamas’s October 7, 2023, massacre, and AI played a significant role in this uptick.

The ADL also found that bad actors deploy coordinated antisemitic campaigns to corrupt information sources such as Wikipedia, knowing that AI developers rely heavily on these sources when building their systems.

Antisemitic bias is pervasive across AI systems, distorting both popular Large Language Models (LLMs) and specialized platforms, Senkfor’s report found.

Artificial intelligence (credit: INGIMAGE)


Senkfor cited the example of AE Studio, an AI firm that researched hate speech in OpenAI’s GPT-4. After fine-tuning the model on insecure code that contained “zero hate speech,” AE Studio asked it neutral questions about its vision for different demographic groups, such as Jews, Christians, Hispanics, Asians, and Arabs, and found that the model systematically produced biased and hateful answers.


Of all the groups tested, the model was found to target Jews the most, consistently producing “severely antisemitic content,” including conspiracy theories and even calls for violence.

When four major LLMs – GPT, Claude, Gemini, and Llama – were asked to indicate their level of agreement with 86 statements across six categories related to antisemitism and anti-Israel bias, all four gave “concerning” answers. Llama, the only open-source model among the four, performed worst, displaying “profound bias” regarding Jews and Israel.

The same study found that the four LLMs were unable to accurately reject antisemitic tropes and conspiracy theories, and every LLM except GPT showed more bias when answering questions about Jewish conspiracy theories than about non-Jewish ones.

The report also cited the July 2025 “antisemitic meltdown” of Grok, the chatbot developed by Elon Musk’s xAI, during which the model began praising Hitler and accusing Jews of running Hollywood. The Ask Grok feature in particular was found to produce alarming responses to antisemitism-related questions.

While some of these biases emerge unintentionally, as in the insecure-code experiment described above, other AI models explicitly promote antisemitic content, Senkfor reported. For example, Gab “maintains an ecosystem of AI tools that actively promote antisemitism,” including multiple AI chatbots that push Jewish conspiracy theories. It even hosts an Adolf Hitler chatbot that denies the Holocaust.

How does this happen?

On a more technical level, Senkfor noted that these phenomena are not accidental: much of the antisemitic bias reflects a systematic effort by bad actors to deliberately corrupt the very data that AI trains on.

As she pointed out, one “deliberately corrupted drop can spoil an entire well of machine knowledge.”

An October 2025 study by Anthropic, the UK AI Security Institute, and the Alan Turing Institute found that as few as 250 malicious documents can create “backdoor” vulnerabilities in a model. The effect is disproportionate: corrupting 1% of training data does not merely taint 1% of model outputs – “it poisons the foundational reference points to which LLMs repeatedly return for factual validation.”

Wikipedia is a key source for LLM training

Wikipedia is a key source for LLM training and is almost always given more weight than other datasets. In one analysis of 30 million citations across GPT responses, Wikipedia appeared in 7.8% of them.

In March 2025, the ADL exposed a coordinated effort by Wikipedia editors to systematically skew the website’s narrative against Israel. In a separate effort, an 8,000-member Discord group called Tech for Palestine launched a coordinated editing campaign, altering over 100 articles to push an anti-Israel slant.

“By systematically corrupting Wikipedia – and by extension, AI training data – bad actors are weaponizing the open architecture of our digital ecosystem, transforming a crowd-sourced knowledge platform into a vehicle for embedding antisemitic propaganda into AI systems,” Senkfor said.

Extremist groups

The ASF report also found that, beyond poisoning training data, bad actors are using AI to evade online content moderation, create and disseminate antisemitic propaganda, and even plan attacks.

For instance, groups including al-Qaeda, the Islamic State (ISIS), Hezbollah, and Hamas’s military wing, the Izzadin al-Qassam Brigades, have employed AI to create more sophisticated content, for example by manipulating AI-generated images of their militants and of Israeli army targets after October 7.

Extremist groups have also been using AI to recruit members and create memes that can then be disseminated across multiple channels.

“AI’s antisemitic biases, as well as bad actors’ purposeful manipulation of their vulnerabilities, are a persistent, significant problem requiring immediate, concrete attention,” said Senkfor.
This is particularly pressing given the general public’s growing dependence on AI and its increasing confidence in the technology’s outputs.

“Addressing AI-enabled antisemitism will require understanding it not as an isolated technical problem, but as a complex socio-technical issue reflecting deep historical patterns of prejudice and, at the same time, novel vectors to amplify and spread it at enormous scale and speed,” she concluded.