Tal-Or Cohen Montemayor on the Surge in Online Anti-Semitism and Its National Security Implications


Social media is a significant part of everyone’s life in the twenty-first century, especially for the young. From X, formerly Twitter, to Instagram and Facebook, there is a platform for every age group. Unfortunately, this influence and connectivity do not come without liabilities. One in particular is anti-Semitism, which has risen at an alarming rate since Hamas’s October 7, 2023, attacks on Israel. Jews from all over the world have been targeted with misinformation and vile slurs online. This surge of rampant anti-Semitism not only hurts Jews and their allies emotionally; it also harms American and Western societies from a national security standpoint.

I had the opportunity to speak with Tal-Or Cohen Montemayor, the CEO of an Israeli startup named CyberWell, launched in May 2022 for the purpose of using technology, specifically data sets, to help social media companies curb this concerning sentiment online. As she describes it, “CyberWell is the first ever open database to monitor online anti-Semitism across social media platforms, major social media platforms like Facebook, Instagram, TikTok, YouTube, and what was once known as Twitter, now X. We monitor those platforms using AI and open source intelligence techniques and technology. We monitor for online anti-Semitism and hate in English and Arabic.”

I interviewed Tal-Or in early December 2023. Here is what she had to say.

How has anti-Semitism increased since October 7?

Since October 7, we’ve been called on by social media partners to help out with content that isn’t just online anti-Semitism, but has also been connected to these very real surges in violent Jew hatred that we’re seeing online. That includes graphic content, pro-Hamas and pro-terrorist content, and misinformation and disinformation. If we look at what happened post-10/7, we have a very clear surge in online anti-Semitic content, and specifically violent anti-Semitic content, graphic content that actually celebrates the death of Jewish people in one of the largest attacks against Jews since the Holocaust. If we compare the online anti-Semitism CyberWell monitored in a parallel time period with what happened post-10/7, we see an increase of about 86 percent across the board. That’s nearly a doubling in the amount of online anti-Semitism across these major social media platforms. The baseline level of anti-Semitism on X, formerly Twitter, is typically higher than on other platforms, and it increased by 86 percent, but our highest increase in anti-Semitic content was actually on Facebook, with a 193 percent increase in anti-Semitic data coming through CyberWell’s systems from that platform. That is almost a tripling of the online anti-Semitism we see on a regular day on Facebook. This is particularly alarming because CyberWell monitors in English and in Arabic, and the sharpest increases we saw were in violent anti-Semitic content in Arabic specifically. Content calling for the harming or killing of Jews, or justifying their murder, rose from really low numbers in previous time periods to up to 61 percent of the Arabic data that CyberWell was tracking and vetting post-October 7.

What makes CyberWell unique is that we are dedicated to monitoring this issue in real time. That’s what started CyberWell, because I saw that when it came to the world of online anti-Semitism, a lot of the other organizations were doing one-off reports, meaning once or twice a year at best. [They weren’t] dedicating the best available technology or really a tool that was meant to monitor this issue at full scale. Prior to 10/7, there was research showing that the average user who reports online anti-Semitism has a 20 to 25 percent chance at best of getting that content removed by social media platforms. Having a report that comes out once in a while isn’t going to create the critical mass or the pressure for a really tech-based solution for these platforms to actually address issues at scale…[especially since] people, especially those under 30, report spending an average of at least 40 hours a week on these platforms. These specific applications that you go to bed with at night, that you wake up with in the morning, were hijacked by Hamas and literally used to incite violence against Jews and to publicize the events that happened on October 7. It really highlighted the lack of investment on the part of the social media platforms to prevent violent anti-Semitic content from spreading, and I would say specifically in Arabic. That was a complete failure, and we really saw that the social media companies didn’t have the infrastructure in place to prevent it. The reason that we want to highlight this is because it really has become an issue of national security. The way that Hamas leveraged social media platforms post-10/7 was unprecedented; it terrorized not only Israelis for weeks after the attacks but also Jews around the world.
Every Western democracy should pay attention to the way that these platforms were leveraged by a terrorist group—and other terrorist groups are paying attention to that tactic as well. Two or three weeks after the 10/7 attacks, a jihadi group slaughtered 800 tribesmen in Darfur. Right away, the group uploaded video to social media, because they were taking notes from Hamas’s strategy of using social media as a weapon of psychological warfare.

What is the best way to protect yourself from hate online?

As an individual who is online, I would first recommend making your profile private—for anyone who doesn’t live a public life. If you are somebody who is very active in digital spaces, it’s also a matter of security, and I would recommend changing your passwords often. And I also recommend that the way you show up in digital activism should be specific to you. Everyone finds their own calling in digital activism. Maybe that’s being proud of your Jewish identity online, which I absolutely encourage, or you can do something that’s very similar and anonymous—you can report online anti-Semitism when you see it. I would specifically recommend [reporting online anti-Semitism] in the comments section, because social media platforms are particularly bad at tracking anti-Semitism there. When you report online anti-Semitism, it’s anonymized—no one gets notified that you reported it—so that’s one of the reasons I encourage it. It’s a way to make your digital spaces safer, on your own behalf and your friends’ behalf. And I also wanted to say that you’re able to report that online anti-Semitism to CyberWell. We’re opening up different channels for people to report directly to us, including a Google Chrome extension or via our database. If you want to spend five minutes a day reporting online hate just to feel like you’re making a difference, you can go to app.cyberwell.org, literally click on the content that we’ve already vetted as anti-Semitic, and report that content with us.

What happens when you report anti-Semitic content or misinformation or disinformation?

When you report hate speech or anti-Semitic content on a social media platform, typically speaking what happens is that an AI filter assesses whether something is blatant hate speech. That filter is meant to deal with 80 to 90 percent of the content that’s flagged, and it often results in things like anti-Semitic speech not being removed on the first pass. A lot of social media companies will then give you the option to appeal that decision and actually get it reviewed by a human. That’s why it’s important, when you’ve reported something, not to get disheartened if you get an answer that you don’t like, but to go ahead and push further to appeal to the content moderation teams. The content moderation teams, the human reviewers at the social media platforms, are also training the platforms’ internal algorithms for decisions in the future. So it is really important to escalate your report all the way up. Instead of just reporting things on an individual basis, CyberWell’s technology is meant to track these surges and changes in narratives, hashtags, and accounts online and then create very focused data sets and reports that unpack these surges for the social media platforms. We give them the data points they need in order to address them at scale, to go into their systems and see how far these trends have spread, and potentially to make the decision to do more effective and systematic interventions.

One of the challenges with mis/disinformation in the online space in general is that it spreads at a rapid rate, but the process to check it is rooted in fact-checking. Social media companies literally partner with third-party fact checkers, often news agencies, to determine whether something is actually mis/disinformation. They don’t make that call independently. We run into issues primarily of speed when that’s the process, and when they’re looking at the current war in Israel, there are a lot of issues with the statistics coming out of Gaza, because it’s run by Hamas. That has influenced some of the decisions about mis/disinformation coming out of the region, which has significantly impacted the way the public is getting the picture of what’s happening on the ground…I also think that this generation hasn’t really dealt with state-sponsored propaganda or terrorist or radical-ideology propaganda. But this issue of dis/misinformation is a huge challenge when people are actually forming their opinions. Young people are uniquely challenged, [because] most people under the age of 30 are using these platforms as a news source—and social media is not a news source. It’s not a news source. I can’t say it enough! It’s really important to encourage people to get information from multiple different news sources, because social media platforms are really meant to reinforce your bias. If you engage with content that has an anti-Israel slant, the platform’s algorithm is primed to show you more content along those lines and reinforce that bias.

Are any social media companies doing anything about this? 

The biggest problem that we’ve seen since 10/7 is that there’s only readiness to act on these issues during times of crisis, when it’s already too late. And that was the reason that I launched CyberWell, to drive enforcement and improvement of these policies in regular times. There is an anti-Semitism problem on social media platforms on a regular basis. What we’re seeing now is this violent onslaught of anti-Jewish hatred that is also calling for violent outcomes that should be worrying to all of us. But the fact of the matter remains that there’s a lot to do in non-war situations. 

So what are they doing right now? Social media platforms, most of them, dedicated resources and staff to monitor the issue post-10/7 on a regular basis. I think this is very, very valuable. And there were some key policy decisions made. CyberWell has very specifically worked with TikTok in this space to make more effective decisions on monitoring online anti-Semitism and violent content at scale. But I think it’s important to maintain the pressure and to not let go after this specific failure on the part of social media companies. The fact of the matter remains that this is an issue. Online anti-Semitism affects the Jewish community disproportionately, and is now causing people to hide their Jewish identity online, and even to leave online spaces. Over half of worldwide internet activity is on these social media spaces and platforms, so if we think about what that means for young Jews, Jews in general, Jewish organizations—being erased from digital spaces, it’s effectively being erased from a very significant part of the world. 

What is your hope for the future? 

The best-case scenario coming directly out of 10/7 [is] to take the outpouring of online anti-Semitism and violent content, learn from those data sets, and actually implement those lessons in automated ways so that we prevent online anti-Semitism in the future. That’s on the platform side. On the government side: it’s high time the government realizes that the lack of enforcement of digital policies on social media platforms has become an issue of national security—both in terms of reinforcing hate speech and, unfortunately, as we saw on 10/7, in the pornographization of the death of Jews. Literally creating snuff videos and projecting the horrific things that actually happened. Social media was leveraged as a tool of psychological warfare; the government can no longer accept the idea that it’s a free-for-all on these platforms. It is an issue of national security. It is a threat to any Western democracy. So what I expect them to do, and what the best-case scenario would be, is to pass laws that actually require social media platforms to disclose the data: data on hate speech, data on violent content, and not only the steps the companies took but all the reported content that was not actioned. And they must make that information available not only to the government but also to academics and to organizations like CyberWell that are experts in the field. We’ll generate solutions for these platforms moving forward. The U.S. is behind on this issue. The EU already passed the Digital Services Act, the most advanced legislation on the books, which allows the EU to fine social media platforms up to six percent of global turnover if they fail to remove illegal hate speech, and that comes with a host of disclosure requirements. In the U.S., we have a unique challenge around the issue of freedom of speech and where the line is drawn. I think the U.S. government needs to start thinking of social media platforms the same way it does highly regulated industries like banking and food and drug development. All of those companies are required to disclose their processes and the way they handle data and information, and we need the same disclosure requirements for social media companies.

America’s freedom of speech is upheld until there is incitement to violence, and hate speech is not a general exception to First Amendment protection. What would you tell people who say that they have the right to say whatever they want online? (Not extreme incitement to violence, but still blatant anti-Semitism.)

In general, freedom of speech does not apply to paid speech. What I mean by paid speech: social media platforms are private companies, and 90 percent of their revenue comes from advertising. Those are spaces of paid speech. So you’re right to say that as an American, you can say whatever you want in a public space against your government or against your friends, but that doesn’t extend to a private corporation. There’s a really big difference between defending Nazis marching in Skokie, Illinois, who are exercising their First Amendment rights, versus those Nazis coming to Disneyland—private property filled with children—and claiming the right to march there. Free speech doesn’t extend to private property, and I think we have failed to extend that concept to these private corporations. They are not your government and they don’t owe you constitutional rights. If they owed you constitutional rights, there wouldn’t be digital policies to begin with. So freedom of speech doesn’t equal paid speech, and when you are creating content on a user-generated content platform, i.e., social media, you’re participating in paid speech. And nobody owes you the right to promote hate speech with a commercially empowered algorithm.

Things that are very attention-grabbing reward the companies because they get more views and advertisers, so why wouldn’t anti-Semitism be amplified?

100 percent. Social media platforms across the board are powered by algorithms that are meant to grab your attention and keep it. The natural outgrowth of that is that the algorithm learns that the more extreme and divisive content is, the more likely it is to keep other users on the platform. Such content is therefore often rewarded, promoted, and amplified. This is something that CyberWell works on actively with the social media platforms: we help them better identify where anti-Semitic speech violates their own digital policies. It should be best practice that content likely to violate those policies is deamplified, meaning it’s excluded from the extreme algorithmic attention-grabbing mechanism. But there is no transparency on just where they draw the line, and that is one of the biggest issues when we talk about actually improving these digital platforms so they no longer act as reinforcement mechanisms that encourage people to hate.

Portions of this interview were edited for length and clarity.
