A social media post featuring a fake photo purporting to show a fire at the Pentagon briefly drove down the US stock market, which soon recovered.
It turned out that the image had been created using artificial intelligence (AI), and that even an untrained but sharp eye could have identified it as a forgery, either through flaws in the image itself or by checking its location data. Still, the image circulated relatively widely, both on social networks and through some traditional media channels – without being verified.
In a study conducted at INSS about two years ago, we investigated in depth the use of AI to create fakes, including deepfake videos as well as voice and photo forgeries. Our main goal was to determine whether such forgeries could pose a real threat to national security. The results were surprising. Financial entities such as banks proved the most vulnerable, mainly to voice fakes: banks could be deceived into transferring funds, as actually happened in at least two cases worldwide in which millions of dollars were stolen. Journalists and private individuals are also at risk, as they are susceptible to blackmail with fake videos. Through simulation, we found that such fakes have the potential to affect national security, and that one of the most vulnerable periods is around elections.
Nonetheless, a fake video needs a substantial web of supporting deception for its plot to succeed. Perhaps this is why the fake Russian video purporting to show Ukraine’s President Zelensky calling on his forces to lay down their arms failed. Still, generative AI technologies – which have recently become accessible, cheap, and simple to use – combined with people’s tendency to share information quickly without checking it, fueled by the fear of missing out, increase the likelihood that we will see more forgeries in the future. The public, and journalists even more so, need better training to assess content critically and filter it.