Strategic Assessment
In recent months, Meta announced a series of policy changes, including making the number of “shares” the main criterion for further exposure, eliminating the fact-checking system on Facebook and Instagram (as had already happened on X), and raising the filtering threshold of its content-moderation algorithms. These changes, combined with the visible way in which leading social media players are drawing closer to the Trump administration, pose a challenge to Israel’s national security at both the domestic and international levels. Israel should respond in several ways, including international cooperation, regulatory activity, and educational and informational measures, to address the dangers inherent in these developments.
Key words: social media, sharing, division, antisemitism, community notes, fact-checking, misinformation, foreign influence
Introduction
Social media has considerable influence on the public discourse and is increasingly functioning as a source of information and a substitute for established media channels. A report on media and social media usage in Israel, published by the Israel Democracy Institute in May 2024, shows that most respondents (63%) get updates from internet news sites on a daily basis, while the percentage who get their information from social media is not much less—59.5% of respondents. These figures are even higher among younger respondents. According to the report, Facebook is the most used social network for news, with Instagram in second place. However, the report only surveyed adult users, which explains the relatively low rate of study participants who gave TikTok as their primary source of information. We can assume that among younger users, TikTok is most frequently the primary source of information. According to a survey carried out in the United States, for example, some 40% of the 18-29 age group stated that TikTok is their source for news.
Given the prominent role social media plays as a source of information, it is important to remember that the platforms’ providers are commercial entities with financial interests that do not always match those of their users or of the state, both of which would prefer a responsible, authentic virtual space. It was these interests that, in recent months, led to a number of significant policy changes at the companies that operate social media. In this article, we describe these changes, discuss their impact on Israel’s national security, and present policy recommendations for addressing them so as to limit their inherent dangers as far as possible.
Shares, Truths and Community Notes
Meta is a technology company that operates three of the most popular social media networks in Israel: Facebook, Instagram, and WhatsApp. The company recently made a number of changes that affect some or, in some cases, all of these networks.
One change was making the number of “shares” the main criterion of worth for further exposure. In May 2024, the head of Instagram, Adam Mosseri, announced that the company’s algorithm would expose posts with numerous shares to a larger audience. The purpose of the move is to encourage community engagement, whether by expressing support for a post’s content or objecting to it. As part of this move, a count of Likes, Replies, and Shares appears next to the Share button under each post, creating wide resonance for posts, even misleading ones.
Another important change is the elimination of fact-checking on Facebook and Instagram and its replacement with a Community Notes system, like the one already operating on X (formerly Twitter), in which users can append corrections and supporting evidence to a post. The fact-checking process was established following two prominent incidents in 2016: the Cambridge Analytica affair, in which it was revealed that a private company had used social media data, without users’ consent, to build precise voter profiles for the purpose of emotional manipulation; and the accusations that the spread of false information and foreign interference had skewed the results of that year’s United States presidential election, won by Donald Trump. Against this background, a fact-checking system was set up, comprising over one hundred organizations checking content in more than sixty languages.
The process of fact-checking and labeling was orderly and comprehensive. First, potentially problematic information was identified by the company’s tracking technologies, by user reports, or by the fact-checkers themselves, and where appropriate, the content’s exposure could be reduced. Next, the content was examined by cross-referencing data and verifying the authenticity of pictures and video clips. It was then rated as misleading, altered, partly false, missing context, satire, or true, based on clear definitions accessible to the public. False information was labeled as such on the network and users were offered credible information on the subject, while its exposure was restricted and adverts appearing alongside it were removed. Profiles or pages that systematically spread misleading information were penalized by removal from the network’s recommendations feature (reducing their exposure and thus their ability to earn money from adverts) and by being barred from registering as news pages.
However, on January 7, 2025, Meta announced that it had decided to cancel the fact-checking system and to let the community of users determine the framing of content, as happens on X. It should be noted that on X this move led to a massive increase in the number of harmful posts, and the effect on Meta is expected to be similar. At the same time, Meta announced that it was raising the filtering threshold of its algorithms, which is expected to mean that fewer posts will be removed automatically, whether they are problematic or not. Under the current policy, a post can be reported, and if the complaint is found justified, the post is removed; however, the post remains accessible until the process is completed and may gain resonance and influence public opinion in the interim. According to Meta CEO Mark Zuckerberg, the way fact-checking had been conducted until then damaged freedom of expression more than expected, and it was therefore necessary to change the system and the content-filtering threshold. These changes, he argued, are intended to rebalance the discourse and promote freedom of expression, the original purpose of social media.
The Internet and Politics
These changes to the fact-checking system and the filtering threshold are occurring alongside political and social developments, above all the close relationships forming between politicians and technology leaders. The most striking example of these processes is the relationship between US President Donald Trump and the CEOs of large technology companies. The closest relationship is between Trump and Elon Musk, the owner of X, who donated almost 290 million dollars to Trump’s election campaign, expressed his support in tweets calling on his followers to vote for Trump, and even participated in election rallies as a speaker or observer. Trump, for his part, appointed Musk to take charge of streamlining the work of the federal administration, in addition to Musk’s existing links to the US government through numerous business deals, including building and launching spacecraft for NASA and supplying satellite internet in many parts of the world via Starlink.
Other technology companies are trying to catch up with Musk. According to reports, Meta donated a million dollars to President Trump’s inaugural fund, appointed senior personnel identified with the Republican Party to key positions in the company, and transferred its trust and safety teams from California to Texas, all with the aim of bolstering its relationship with the incoming president.
The social network that aroused the widest public discussion in the days before the second Trump administration entered the White House was TikTok. A day before the new-old president took office, a bill supported by both parties came into effect, stipulating that apps controlled by rivals of the United States (China, Russia, Iran, and North Korea) cannot obtain cloud-storage services or be available for download in American app stores. In fact, this bill began its journey toward the end of Trump’s first term and became law during the Biden presidency, but this did not prevent Trump, on his first day in office, from signing an executive order delaying enforcement of the act by 75 days.
This series of events would not have been perceived as problematic had it not aroused suspicions that Trump’s motives in changing his mind about TikTok were not pure. In the months before the election, candidate Trump opened an account on the Chinese-owned network and quickly gained some 15 million followers, who he claimed helped him reach potential voters. TikTok’s CEO also visited Trump at his Florida estate shortly before he took office and attended the swearing-in ceremony, and the company, which claims it does not permit paid political content, funded a party for conservative influencers who helped Trump win the election. In its announcement to users about the app’s return to service after the executive order was signed, the company thanked Trump by name.
These developments raise real concerns about the independence of social media and its links with elements in the administration. The competition between the networks, which could force them to relinquish independence and yield to political dictates, reinforces this fear. Moreover, the current atmosphere could encourage politicians to try to influence the networks to a far greater extent, in return for regulatory relief or the provision of assistance the companies require. Links between administration figures and heads of industry are not new, but lately these links have become more open and blatant, and the changes the companies introduce to please politicians have become more extreme.
Implications for National Security
The extent of the use of social media as a source of information, particularly in view of the changes specified above, offers many benefits but also poses significant challenges and risks to several aspects of Israel’s security. The first aspect, which is not noticeably affected by the changes mentioned above and has been discussed extensively in previous INSS articles and on other platforms, concerns the harm to privacy when the networks collect information about users. For example, the United States has claimed that TikTok collects data on American users’ political opinions and sexual orientation. Such a wealth of information about Israeli citizens in the hands of social media companies could endanger Israel’s national security, because Israel would have no influence or control over who has access to the stored data.
Another risk arising from the extensive use of social networks as a source of information concerns the internal discourse within Israel and its effect on the polarization and divisions that already exist in Israeli society, and consequently on national resilience. One of the main drawbacks of consuming information from social media is that most platforms operate as a kind of echo chamber, in which consumers generally follow people whose opinions are similar to their own. The information these people share, and the way it is framed, mostly reflects consumers’ existing perceptions rather than challenging or undermining them. When so many people’s understanding of the world is mediated through these channels, they are less likely to be exposed to the views of the other side. Social polarization and division grow stronger when there is no common ground for discussion. Moreover, once an opinion is perceived as the “consensus view,” people who disagree often feel uncomfortable expressing their own views in public, which only entrenches the prevailing view and further radicalizes the discourse.
The prominence of Facebook as a source of information is especially significant in the context of social polarization, given Meta’s recent changes to the role of the Share button: when a post’s success is measured by the number of shares, contributors have an interest in writing shocking posts that provoke reactions and generate multiple shares, whether as a sign of support or of criticism. Once again, the opinions expressed become more polarized. Meta’s other changes (canceling fact-checking in favor of Community Notes and raising the automatic filtering threshold) are problematic in this context as well, since they increase the probability that harmful posts based on lies will spread. Moreover, unlike the fact-checking system, which was not perfect but did in many cases limit the exposure of false posts and their effects, experience with X shows that Community Notes do not limit a post’s exposure and sometimes even increase exposure to, or support for, the tweets on which notes are written. The total impact of the recent changes is therefore expected to be greater polarization and division in society.
To these processes should be added the interference of foreign actors interested in deepening the divisions in Israeli society. In recent years, and particularly during the Iron Swords War, there has been an observable rise in the spread of false information and in attempts to exert outside influence via the internet and social media. The purpose of these attempts is to shape public opinion and the way people act and vote, to encourage instability, and to undermine public trust in state institutions. There is some public awareness of this issue: in a January 2025 survey by the Institute for National Security Studies, 69% of Jewish respondents said they were worried or very worried about foreign intervention in social media (for example, by Iran or Russia) aimed, inter alia, at undermining social unity in Israel.
Most people are unable to identify campaigns of foreign influence and cognitive warfare or distinguish between them and legitimate posts. Meta’s policy changes make it significantly easier for foreign influence campaigns and internal campaigns to intensify polarization, because it is relatively simple for them to create a false representation of numerous shares, using bots that increase exposure to their posts, and also because the removal of fact-checking and the raised filtering threshold for content moderation allow the spread of false information much more easily. As a result, these campaigns are expected to achieve rapid exposure, and it will be more difficult to stop them. For that reason, it seems likely that we will see more polarization and division in society.
Greater social polarization and division have a perceptibly negative effect on the resilience of Israeli society. First and foremost, they damage social solidarity, which is one of the main components of social resilience, as it encourages people to unite and work cooperatively. Not only that, solidarity affects other components of social resilience, particularly indices of optimism and hope, by influencing how a society perceives itself.
As well as widening social rifts, Meta’s changes will have another dangerous effect on the domestic discourse in Israel: the possible undermining of trust in state institutions. The ease of spreading false information on social media allows unfounded theories, whose purpose is to undermine public trust in public bodies in general and the IDF in particular, to achieve extensive exposure and influence. For example, according to a study by the Agam Institute, in December 2023 only 12% of respondents said they believed there was a conspiracy by security personnel to bring down Prime Minister Benjamin Netanyahu, but by January 2025, more than 20% believed it. Moreover, 32% of respondents in January 2025 said they believed that Israeli elements knew about or permitted the surprise Hamas attack on October 7, 25% believed that Israeli elements were involved in the attack, 22% thought that elements in the army had done this in order to damage Netanyahu, and 17% believed the recently circulated conspiracy theory that Yair Golan spied for Hamas, collected information, and helped it plan the attack on Israel. Although it is impossible to isolate social media’s influence in this context and demonstrate a clear causal link, the prominence of these conspiracy theories on social media and their absence from traditional media reinforce the assumption of a link. In addition, a study conducted in December 2023 found that frequent exposure to social media made respondents 1.3 times more likely to believe conspiracies, a finding that further supports the assumption.
The damage to trust in state institutions and the IDF intensifies the damage to social resilience, but it also has other serious effects on national security. The IDF is the people’s army, and it relies on soldiers serving in the reserves and the regular forces. These soldiers are drawn from precisely the age groups most influenced by information found on social media, and they are therefore at higher risk of exposure to information that could undermine their trust in the army. This could affect their willingness to report for duty and perform their tasks, particularly in a sensitive period like the present. In addition, public trust in the army is essential to ensure the public’s willingness to follow the army’s instructions. When the public lacks faith in the army and does not believe it is acting purely on security considerations, it may distrust its instructions, for example regarding the protective measures required during attacks from various directions, or the timing of a safe return to evacuated homes.
A third important risk concerns the effect on international discourse and, as a consequence, on the international legitimacy of the State of Israel and the fight against antisemitism. Under Meta’s new policy, which abandons the use of fact-checkers, antisemitic and anti-Israel information disseminated on the networks will no longer undergo thorough verification, as it did until now, but will instead be subject to a battle of versions between supporters and opponents of the information. For example, a user can post a conspiracy theory that will be neither checked nor removed; this already happens on X, which uses Community Notes instead of fact-checkers, as shown by a tweet from influencer Dan Bilzerian, who wrote that six million Jews were not murdered in the Holocaust, hinting that Jews exaggerate the number of victims. Instead of a fact-check that would label the post and limit its exposure, X is satisfied with the reactions of community members, and as of the time of writing, the tweet had attracted more than a million views, 13,000 likes, and more than a thousand shares. Because social media functions as a kind of echo chamber, those exposed to this antisemitic theory will form their opinion of it based on whom they follow: if the people they follow are antisemitic, they will receive information supporting the claim, and if not, they will receive information contradicting it. More concretely, the burden of disproof moves from the writer of the post to the users, who must provide proof and persuade other users that this is a conspiracy theory. This policy change significantly strengthens the ever-growing ranks of spreaders of false information.
Moreover, because the number of shares is now the main measure of worthiness for exposure, and because Jewish and pro-Israel users are a numerical minority for demographic reasons, anti-Israel and sometimes even antisemitic positions will gain far more exposure than posts supporting Israel, objecting to antisemitism, or at least setting the record straight. The combined effect of these two decisions significantly threatens Israel’s international legitimacy and, at times, its political and military freedom of action.
Conclusions and Recommendations
The changes introduced by the social media companies, together with their deepening ties with elements in the US administration, present challenges for Israel’s policymakers. A systemic Israeli response is required at several levels, particularly because, despite many recommendations, Israel currently places no liability on digital platforms for the content they distribute. To balance the protection of freedom of expression and the liberal values that underpin both social media and the State of Israel against the need for defense from the potential risks, particularly in view of the recent changes, we propose acting at several levels simultaneously. Israel should:
- Join forces with other countries and bodies: As a small country, Israel has limited leverage over social media companies; for them, Israel is a small market, and if it imposes restrictions on their activity, it may be more worthwhile for them to abandon it than to accede to its demands. It is therefore logical to join with other countries to press the companies to remove certain content and restore balance, particularly in view of the latest changes, which threaten not only Israel but other countries as well through the increased spread of false information and the greater ease of conducting foreign influence campaigns. The European Union, for example, is already doing this, and Israel can provide it and other groups of countries with information about problematic posts or policies that will help them in their dialogue with the companies, even if Israel is not itself a member of these groups. Moreover, Israel belongs to several international organizations and could raise the subject in wider forums, building ties with other countries on this basis.
- Regulatory moves: Given that the operators of the large social media platforms are not neutral players in the marketplace of views and ideas, it is justified to impose some legal liability on them in order to ensure their platforms are safe for Israeli users. In this context, we propose establishing a “notice and action” mechanism, as proposed by the Advisory Committee to the Minister of Communications in 2022, which would enable Israeli users to report illegal and harmful content and would require the networks to deal with such content by removing it or, in gray areas, by limiting its exposure; a network that fails to act on justified reports would face sanctions. Both ordinary citizens and trusted reporters could use this mechanism. Trusted reporters, defined as such by the legislator or by the platform operators, would have to meet clear criteria (independence from the platform operators, a proven ability to locate problematic content, and a sustained presence on the platform), and their reports would be given priority. The mechanism would be accessible from the platforms themselves, and the platforms would report to an independent committee representing the legislator on the quality and handling time of complaints, information that would be fully transparent to the general public. If harmful content is brought to the platforms’ attention and they fail to deal with it in a reasonable and proportionate way, legal liability would be enforced. This recommendation is particularly important in view of the removal of the fact-checking function and the need for an alternative response to false information.
A framework that could inspire legislation in Israel is the European Union’s Digital Services Act (DSA), which came into force in 2022 and redefines the relationship between the platforms, the public, and the government. To enforce legal liability, a court would issue an order, to be discussed before a judge in the presence of a legal representative of the platform, examining how the platform dealt with the specific content. If the handling is found deficient, the platform’s operators could be fined. The court would have the power to decide whether the content should be removed immediately, given limited exposure in order to preserve freedom of expression, or left in place, rejecting the claim.
- Educational activity and strengthening digital literacy: Since the changes described above are expected to increase the amount of false information on social media, digital literacy is more important than ever. The programs currently offered by the Israeli Ministry of Education are essentially voluntary and not binding on schools. Digital literacy programs are widely used around the world, tailored to users of various ages and backgrounds. Since even young children have access to social media, these programs should begin in the first years of primary school, before children start using social media, to prepare them in advance. Teachers and educators must also be trained to deal with cases of false information and abusive posts, advising their pupils as necessary. For situations with potential for especially damaging false information, special training programs should be developed for citizens; the Taiwanese Ministry of Digital Affairs, for example, prepared its citizens to identify false information and network manipulation ahead of the country’s presidential elections.
- Providing information: Israel must set itself targets for tackling disinformation, particularly because of the anti-Israel campaign it faces and the danger of this campaign gaining even greater momentum now that the number of shares is the measure of worth for exposure. Like Israel, Taiwan must contend with external influence on its social media, and it has therefore established the 2-2-2 principle: the state must respond to every item of false information within two hours, with two pictures and 200 words. Implementing this principle in Israel would help counter the spread of harmful disinformation quickly. If the decision is made to import this model, responsibility must first be assigned: in Taiwan it naturally falls to the Ministry of Digital Affairs, but in Israel it is not clear whether it should rest with the Foreign Ministry, the National Public Diplomacy Directorate, the Ministry for the Diaspora and the Fight Against Antisemitism, or some other entity. Our recommendation is to place the responsibility with the Digital Department of the Foreign Ministry.
Taiwanese civil society is also active on this issue: civil organizations there have set up a chatbot that lets citizens know whether the information they encounter on social media has been verified. This model should be fairly easy to adopt, particularly since Israel already has civil society organizations engaged in this field, such as FakeReporter.