In recent weeks, public discourse has revolved around two social networks: Moltbook and TikTok. Moltbook drew attention because it is the first social network whose users are not human beings but exclusively AI agents; TikTok because it signed an agreement to change its operating model and ownership structure in the United States. The changes to TikTok’s operations directly affected another social network, UpScrolled, which until then had been largely unknown and had only a small user base. In the wake of the shifts at the giant platform, UpScrolled surged to 3.2 million downloads, an increase of roughly 9,000% in monthly downloads.
UpScrolled, founded by the Australian-Palestinian entrepreneur Issam Hijazi, branded itself as a platform with no censorship whatsoever, and it therefore attracted users who spread hate speech and whose activity on other social networks had been restricted. Alongside the growing popularity of this “no-limits” platform, however, came revelations of severe antisemitism flourishing on it: from Holocaust denial, to claims that “the Zionists” (a code word for Jews) control the world, to explicit calls for physical harm against Jews. Users and viewers have also claimed that even when antisemitic content is reported to the platform, complaints are handled improperly or not at all. The company, for its part, has admitted that it is unable to moderate posts adequately because of the enormous volume.
The platform’s founder has faced similar allegations, in part because of a speech he delivered at a conference in Qatar in which he said he would not accept “Zionist money,” a phrase that echoes antisemitic stereotypes about Jews using money to buy control of the world.
UpScrolled is not alone. Other networks that claim to enable unrestricted freedom of expression, such as Elon Musk’s X, have also seen a significant rise in antisemitic posts. Therefore, there is no alternative but to apply uniform standards to all social networks and demand accountability for their insufficient handling of hateful content disseminated on their platforms.