In 2025, the landscape of antisemitic narratives on social media platforms, particularly X and those owned by Meta (Facebook, Instagram, and Threads), continues to be a significant concern. Following the October 7, 2023, attack, there has been an alarming surge in antisemitic content online, which has set a new baseline for anti-Jewish hatred across the internet. This increase reflects a complex interplay of geopolitical events, evolving digital communication strategies, and the inherent challenges of content moderation on massive global platforms.
The period following October 7, 2023, marked a critical inflection point for online antisemitism. Reports from organizations like the Anti-Defamation League (ADL) and the World Zionist Organization indicate a dramatic increase in antisemitic incidents globally. For instance, the ADL reported a record-breaking number of antisemitic incidents in the U.S. in 2024, continuing a four-year trend of escalating anti-Jewish hate crimes. The World Zionist Organization and the Jewish Agency for Israel noted a staggering 340% increase in total antisemitic incidents worldwide compared to 2022, with 2024 being a "peak year." This heightened activity has undeniably spilled over into 2025, shaping the prevalent narratives.
A significant portion of these incidents originates in the digital space. In 2023, 65% of antisemitic incidents reported by DAIA (Delegación de Asociaciones Israelitas Argentinas) occurred online. This demonstrates how social media platforms have become primary vectors for the dissemination of anti-Jewish hatred.
Prevalence of Antisemitic Narratives on X and Meta Platforms, illustrating varying emphasis on different tropes.
This radar chart provides a conceptual overview of the relative prevalence and intensity of various antisemitic narratives on X and Meta-owned platforms in 2025. It illustrates how different platforms might emphasize certain types of hate speech. For instance, X might show higher scores in areas like "Holocaust & October 7th Denial" and "Conspiracy Theories," reflecting its reputation for more overt and less moderated content. Meta platforms, while still grappling with significant issues, might have slightly lower scores in these areas due to different moderation approaches, though still exhibiting challenges with "Anti-Zionism Equated with Antisemitism" and "Incitement to Violence." The scores are illustrative of general trends and the ongoing struggle to curb diverse forms of antisemitism online.
A critical factor contributing to the spread of antisemitism in 2025 is the perceived inadequacy of content moderation by major tech companies. The U.S. government, through its special envoy Deborah Lipstadt, has actively pressed companies like TikTok, Meta, and X to crack down on antisemitic posts. Requests include designating policy team members to address the issue, training staff to identify implicit antisemitism, and publicly reporting trends in anti-Jewish content.
Despite these calls, studies suggest a significant failure in enforcement. Research indicates that social media companies often fail to act on a high percentage of reported antisemitic content. For example, some reports show that platforms took no action on 84% of antisemitic posts overall, with inaction reaching 89% for conspiracy theories and 80% for Holocaust denial. Meta's recent shift towards relying more on user reports and Community Notes, rather than proactive scanning, raises concerns that hate speech, including antisemitism, might reach a wider audience. This approach, similar to changes seen on X, could lead to longer and wider exposure for antisemitic narratives online.
Many classic antisemitic tropes continue to circulate, but often with a modern twist:
The conspiracy theory that Jews exert vast, hidden control over global affairs persists. This narrative adapts to current events, alleging Jewish manipulation of politics, finance, and even climate change. An alarming instance highlighted in 2025 involved X and Meta approving ads in Germany that called for action against a "Jewish globalist agenda," sometimes accompanied by AI-generated imagery depicting shadowy figures surrounded by gold bars, reinforcing age-old stereotypes of Jewish wealth and power. This illustrates how traditional tropes are amplified and given new visual life through digital tools.
Beyond outright Holocaust denial, there's a growing trend of Holocaust distortion, which minimizes, distorts, or misrepresents the historical facts of the Holocaust. Coupled with this is the emerging phenomenon of October 7th denial, which dismisses or trivializes the atrocities committed by Hamas on that day. Platforms like TikTok have been noted as a "hotbed for violent and extremist content," including the promotion of such denials, making it challenging to counter these narratives among younger audiences who primarily consume news there.
Increase in Antisemitic Content Online After October 7, 2023
One of the most complex and prevalent narratives involves the conflation of legitimate criticism of Israeli policies with antisemitism. While criticism of a government is distinct from hatred toward a people, online discourse frequently blurs these lines. The "new antisemitism" often portrays Jews as oppressors—imperialists or colonialists—thereby updating the trope of vast Jewish power to contemporary concerns about justice and power dynamics. Meta's expanded definition of "tier 1 hate speech" to include instances where "Zionist" is used to degrade Jews or incite violence against them reflects the increasing recognition of this issue.
This dynamic is particularly visible in discussions surrounding the Israel-Hamas conflict, where anti-Israel rhetoric can quickly devolve into antisemitic attacks. The ADL's 2024 report highlighted that more American Jews now view the extreme political left as a serious antisemitic threat, alongside white supremacists on the far right, specifically due to anti-Israel extremism and its overlap with antisemitism.
In 2025, antisemitism is also adapting to new technological and social trends:
The rise of generative AI poses a significant threat, enabling the creation of highly persuasive and viral content that can amplify disinformation and hate. Companies are grappling with how to regulate AI to prevent its misuse in spreading antisemitic narratives, with less than half of American Jews trusting AI companies to protect against misinformation about Jews.
Extremist groups are exploring alternative platforms and technologies, such as cryptocurrency, to bypass traditional financial systems and content moderation. Reports of Holocaust deniers launching cryptocurrencies like "JPROOF" illustrate attempts to create parallel economies to fund and spread antisemitic ideologies, especially when mainstream platforms attempt to curb their activities. When groups are banned from one platform, they often migrate to another, creating a persistent challenge for comprehensive mitigation.
Disturbingly, X and Meta have been found to have approved antisemitic and anti-Muslim ads targeting German voters ahead of elections. These ads, containing calls for violence against Jews and Muslims and promoting narratives like a "Jewish globalist agenda," underscore how social media platforms can be weaponized for political extremism, even when policies are ostensibly in place to prevent such content.
Discussion on how social media platforms can combat antisemitism, focusing on Meta's policy changes.
The video above delves into Meta's policy announcements regarding antisemitism and discusses the broader question of how social media platforms can effectively combat the spread of hate. It highlights the complexities of content moderation, particularly concerning the nuanced distinction between criticism of Israel and antisemitism, and Meta's efforts to refine its policies. This is highly relevant as Meta is one of the key platforms under scrutiny for the proliferation of antisemitic narratives in 2025, and understanding its evolving moderation strategies is crucial to addressing the problem.
Antisemitism manifests differently depending on the geographical context. In Western countries, online antisemitism often involves incitement to violence and the adaptation of traditional tropes to contemporary political issues. In Central and Eastern European countries, narratives tend to focus more on conspiracy theories depicting Jews as hidden manipulators of global power. The global nature of social media means these varied forms of antisemitism can spread across borders, necessitating international cooperation to mitigate the spread of hate.
Antisemitic incidents have increased across multiple regions.
Governments are increasingly acknowledging the severity of online antisemitism. The U.S. Department of Homeland Security (DHS) announced in April 2025 that it would begin screening the social media activity of non-citizens for antisemitic content as grounds for denying immigration benefits. This measure, consistent with executive orders on combating antisemitism, aims to protect the homeland from extremists and terrorist aliens. While intended to combat hate, this policy also raises concerns among free speech and immigration advocacy groups about its definition of antisemitic activity and potential implications for legitimate criticism of Israel.
Distinguishing between legitimate criticism of Israel and antisemitism remains a significant challenge for content moderation. Relying purely on keyword searches often fails to capture the nuance, especially when implicit or coded meanings are used to evade detection. This highlights the need for sophisticated hate speech classifiers combined with greater human oversight.
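As a toy illustration of the limitation described above (the blocklist entries here are neutral placeholders, not real moderation data or any platform's actual system), an exact-match keyword filter catches only literal matches and misses both simple obfuscations and implicit tropes that carry no flagged term at all:

```python
# Toy sketch: why exact-match keyword filtering is easy to evade.
# "badword" is a neutral placeholder standing in for a blocklisted term.

BLOCKLIST = {"badword"}

def keyword_flag(post: str) -> bool:
    """Flag a post only when it contains an exact blocklisted token."""
    tokens = post.lower().split()
    return any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

print(keyword_flag("This post contains badword."))  # → True (exact match)
print(keyword_flag("This post contains b4dword."))  # → False (obfuscation slips through)
print(keyword_flag("They secretly control everything"))  # → False (implicit trope, no keyword)
```

The second and third cases are exactly the gap that trained classifiers and human reviewers are meant to close: coded spellings and insinuations that a literal string match cannot see.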
| Challenge Area | Description of Challenge | Potential Solutions/Strategies |
|---|---|---|
| Content Moderation Inadequacies | Platforms struggle to effectively remove or flag antisemitic content, often due to scale, algorithmic biases, or insufficient human review. | Increased investment in human moderators, advanced AI detection (with human oversight), transparent policy enforcement, and proactive scanning. |
| Evolving Narratives & Obfuscation | Antisemitic narratives adapt, using coded language, symbols, and implicit meanings to evade detection, making it hard to distinguish from legitimate discourse. | Continuous research into new antisemitic trends, training of moderators on evolving tropes, and clear definitions of contemporary antisemitism. |
| Algorithmic Amplification | Algorithms, designed for engagement, can inadvertently promote hateful content, creating echo chambers and increasing exposure to antisemitism. | Re-evaluation of algorithmic incentives, de-prioritization of hate speech, and promoting authoritative content. |
| Cross-Platform Migration | When banned from one platform, extremist groups often migrate to others, including fringe platforms, making comprehensive tracking difficult. | International cooperation, intelligence sharing among platforms, and coordinated enforcement across the digital ecosystem. |
| Free Speech vs. Hate Speech | Balancing the protection of free speech with the need to prevent harm from hate speech, especially when policies might be perceived as overreaching. | Clear and consistent application of hate speech policies, public education on the distinction, and judicial oversight where necessary. |
| Misinformation & Disinformation | Antisemitic content often thrives within broader misinformation campaigns, distorting historical events and current affairs. | Fact-checking initiatives, media literacy programs, and collaboration with academic institutions and NGOs. |
Despite its challenges, social media can also be a powerful tool in combating antisemitism. Individuals and organizations can use their platforms to raise awareness, share factual information, and show solidarity with victims of hate speech. Initiatives like "Digital Defenders" focus on empowering content creators to combat antisemitism online. There is a strong emphasis on the need for social media, gaming, and AI companies to affirm that antisemitism will not be permitted on their platforms and to utilize a standard definition of contemporary antisemitism to ensure consistent enforcement.