A new study by DFRLab has uncovered a vast foreign influence operation by the Russian propaganda network Pravda. The network established dozens of so-called “news” websites that shared pro-Russian content in multiple languages. Later, members of the network embedded these sites into Wikipedia articles in 44 languages, in order to give them legitimacy and credibility.
The network members also directed users to their websites through the Community Notes feature on the X platform (formerly Twitter), which allows any user to add context to content, effectively bypassing professional fact-checkers. Ultimately, the network managed to influence the outputs of popular chatbots like ChatGPT by "poisoning" the data sources underpinning their models. Because the chatbots treat Wikipedia as a relatively credible source, they retrieved the very sources the influence network had planted, enabling the widespread dissemination of pro-Russian narratives.
Two key lessons emerged from the exposure of this operation. First, as millions of users worldwide increasingly rely on chatbots for information, these tools are becoming a significant means of shaping narratives and perceptions. Accordingly, manipulation tactics targeting these models are being discovered more frequently, and chatbots are expected to become a central battleground in the struggle between influence groups.
Second, the battle over narratives isn’t confined to a single arena but permeates the entire information ecosystem. Therefore, it’s not enough to identify specific influence actors; it’s also necessary to understand the processes designed to “contaminate” different information environments and exploit various vulnerabilities to sway public opinion. Effective countermeasures will require a comprehensive response and a combination of technological, regulatory, and educational efforts.