On 22 May, social media found itself abuzz with reports that there had been an explosion next to the Pentagon, the headquarters building of the United States Department of Defense.
Multiple accounts shared an image of a tall, dark plume of smoke emanating from the ground next to a building that had been fenced off. The image can be seen below:
The social media posts caused a minor dip in the stock market, with the S&P 500, an index of the 500 largest companies listed on US stock exchanges, dropping 0.3% as the posts spread soon after the market opened for trading at 9.30am. The stock indexes would, however, regain their losses within about ten minutes.
The reports were subsequently investigated and debunked by a number of media outlets, including The Guardian, NPR, CNN, the Washington Post and AP News. The image was found to have been generated by Generative Artificial Intelligence (GAI) tools; Midjourney, Stable Diffusion and DALL-E are three such popular GAI image-creation tools.
The Arlington County Fire Department and the Department of Defense’s Pentagon Force Protection Agency later made a joint statement on Twitter that ‘There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public’.
The reports of the explosion proliferated widely on Twitter, including from a plethora of blue-ticked accounts. These blue ticks had until recently been affixed to accounts whose identities were verified and whose holders were usually public figures of some import, and as such often served as an indicator of credibility. In the last month, however, Twitter has rolled out a new subscription service called Twitter Blue, through which any user can obtain the blue tick by paying $8 a month.
The original reports on Twitter appear to have been relayed by users mimicking official Bloomberg accounts despite being unaffiliated with the organisation. The users, named ‘Bloomberg Feed’ and ‘Walter Bloomberg’, each carried a blue tick. The accounts have since been suspended. The image below shows what one of the tweets would have looked like:
While Twitter served as fertile ground for the dissemination of the disinformation and misinformation, it was not the only social media network involved in this episode: (the official) Bloomberg reported that the faked image had originated on Facebook.
The Washington Post also reported that the verified Twitter page OSINTdefender, which reports on military conflicts and has 336,500 followers, shared a post about the ‘explosion’ after learning about it on Discord, an online messaging and social platform. It was also reposted by Zero Hedge, a far-right finance blog with 1.6 million followers on Twitter. Misinformation from duped, verified accounts therefore merged with the original disinformation to multiply its impressions.
Most worryingly, the Russian state-linked outlet RT appears to have shared the fake reports on its official Twitter account with its three million followers, before later deleting the post. By then, the tweet had been picked up by Republic TV, a major Indian television network, which reported on the ‘explosion’ and displayed the image on air. The network later retracted the report.
Key Takeaways
- GAI tools possess a worrying degree of potential for sowing discord in communities.
The stock market dip is a clear indication of the real-world ramifications that may arise from the use of GAI tools. While the identity of those who created the original image is at present unknown, it is entirely plausible that this was a case of Russian state-sponsored disinformation, as both RT and multiple Russian-aligned Twitter commentators and bots reposted the reports.
The ‘explosion’ follows hot on the heels of Russia warning the US of the ‘enormous risks’ of approving the provision of F-16 fighter jets to Ukraine in its war against Russia. Russia has also on several occasions previously alluded to threats to unleash its nuclear arsenal or otherwise retaliate should Ukraine’s allies step up their support measures. While the link to the Russian disinformation machine is not certain, this example serves as an indication of how GAI tools can furnish hostile state actors with new capabilities in information warfare.
The risk that disinformation poses for social stability is not in itself a new problem; for instance, a fake tweet posted in 2013 by hackers who had compromised the authentic Twitter feed of AP News, claiming explosions at the White House, sent stocks tumbling. An expert on online disinformation at the Stanford Internet Observatory also points out that anyone skilled in Photoshop could have created a picture that was likely to be even more convincing. The distinction, however, is that GAI has delivered a simplified version of these tools to anyone who has internet access.
Compounding this is the fact that stock markets, which are often considered a key indicator of societal stability, are now themselves regularly subject to the influence of algorithmic trading, an automated process in which software makes trading decisions based on news headlines in real time.
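To illustrate the mechanism described above, the sketch below shows a deliberately simplified, hypothetical headline-driven trading rule in Python. Everything in it (the keyword list, the signals, the function name) is invented for illustration; real trading systems use far more sophisticated language models. The point it demonstrates is that such software reacts to the words in a headline, not to the credibility of its source.

```python
# A minimal, hypothetical sketch of a headline-driven trading rule.
# Not any real trading system: keywords and signals are invented purely
# to illustrate how such software can react to unverified headlines.

NEGATIVE_KEYWORDS = {"explosion", "attack", "crash", "bombing"}

def headline_signal(headline: str) -> str:
    """Return a crude trade signal from keyword matching alone.

    Note what is missing: no check of the source's credibility and no
    wait for corroborating reports. A fabricated headline containing a
    trigger word produces the same signal as a genuine one.
    """
    words = set(headline.lower().replace(",", " ").split())
    if words & NEGATIVE_KEYWORDS:
        return "SELL"
    return "HOLD"

# A fake report and a routine one are treated identically by the rule.
print(headline_signal("Explosion reported near the Pentagon"))    # SELL
print(headline_signal("Markets open flat ahead of Fed minutes"))  # HOLD
```

Read this way, the brief dip and rapid recovery of the S&P 500 would be consistent with automated sell triggers firing on the fake headline and reversing once corroboration failed to materialise.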
The question of how to effectively regulate the use of GAI to create content is as such likely to become ever more pertinent as countries attempt to balance their needs of social order against freedoms of expression. Earlier this month, China made its first arrest of an individual over AI-related disinformation, after a man used ChatGPT to generate a false story about a train crash and fatalities.
- GAI tools are still imperfect, but this may not matter in the fight against disinformation.
Hany Farid, a computer science professor and digital forensics specialist at the University of California, Berkeley, who was interviewed by several media outlets, points out that there are a number of artefacts and inconsistencies in the picture that reveal it to be a fake.
‘The grass and concrete fade into each other, the fence is irregular, there is a strange black pole that is protruding out of the front of the sidewalk but is also part of the fence… The windows in the building are inconsistent with photos of the Pentagon that you can find online,’ he told AP News.
Nick Waters of the open-source investigative group Bellingcat also pointed out on Twitter that there were ‘no other images, videos or people posting as first-hand witnesses’.
Members of the Arlington County Fire Department, who would be expected to respond to incidents at the Pentagon, also pointed out that the building featured in the picture looked nothing like the Pentagon, a fact that could be verified with a simple search on the internet.
Greater awareness of these limitations of GAI can assist factcheckers and the wider public in identifying deceptive content. Despite such markers, however, the false information reverberated across the internet for several hours. The stock markets, where traders have access to complex algorithms able to synthesise large amounts of information, recovered their losses quickly. Other individuals and networks without such tools, however, took much longer to gain a clearer picture.
Part of this is due to the time it takes to factcheck false information. The media outlets were able to publish their debunking reports within hours of the original social media posts, but those hours provided sufficient time for the bogus reports to spread.
The Arlington Fire Department’s tweet at 10.27am represented a timely, authoritative response, coming within half an hour of the original source of disinformation. However, it was hindered by a lack of reach: the tweet received about a tenth of the views of the fake report tweeted by ‘Walter Bloomberg’.
The persisting fallibility of GAI tools may therefore help factcheckers to identify disinformation quickly, but this advantage pales in comparison to the tendency of society today to readily embrace virality. A more concerted and holistic effort is required to prime the broader audience for such threats.
- A robust media and factchecking landscape remains vital in combatting disinformation, but this episode also highlights the immense power of social media in both creating and addressing controversies.
The reporting of the Pentagon ‘explosion’ highlights the asymmetry in reliability among media organisations: not all major outlets considered ‘mainstream’ are of equal pedigree, as RT and Republic TV demonstrated. The challenge remains to educate the broader audience to distinguish between sources based on their quality rather than their format.
While the media outlets’ debunking of the original claims proved the most definitive in defusing the tense atmosphere, the most agile actors were the factcheckers who identified the image as AI-generated soon after it was produced.
Nick Waters of Bellingcat, for example, was able to publish his findings eight minutes before the official statement released by the Arlington Fire Department and the Department of Defense (the time zone in the image below is adjusted for Singapore). His tweet also received a far greater number of views than the official statement.
The speed of response and the expertise of independent factcheckers, combined with the reach of social media, were likely primary factors in averting a more profound downturn in the stock market and preventing a further contagion of panic in society.
Nevertheless, both the AI-empowered agents of disinformation and their adversaries remain highly reliant on social media networks to propagate their respective messages, giving these companies considerable authority over the impact of, and response to, disinformation.
Facebook and Twitter removed the offending posts in the hours after they were first created, but only after they had garnered millions of impressions. Recent US Supreme Court rulings have left intact the protections that shield social media networks from liability over the content hosted on their platforms, which suggests that falsehoods supercharged by social media are likely to remain a prevalent phenomenon. Individual companies are likely to pursue their own approaches to moderation while attempting to stave off regulation that constricts them, and their decision-making may often remain opaque to the public.
In addition, several capabilities are available to social media companies that further complicate the information environment. For one, Twitter’s example indicates how the predilections of an owner can shape access to a platform, with several high-profile proponents of untruths, including the former US president Donald Trump, reinstated in the last few months.
Twitter’s changes to its blue tick have also been accompanied by changes to its designations for user accounts. In April, NPR quit Twitter after being labelled ‘state-affiliated media’, a designation that equated it with state media outlets such as RT and China’s Global Times, which often serve as propaganda mouthpieces. Twitter would later drop such labels entirely. The platform’s frequent changes in approach demonstrate how arbitrary decisions made by social media companies may influence public perceptions of the reliability of media organisations.
To conclude
The Pentagon ‘explosion’ thus encapsulates the present realities of GAI-created disinformation, as well as the considerations in the fight against disinformation more broadly.
The false attack captures perhaps one of the earliest uses of GAI to have produced a noticeable, if neither major nor prolonged, effect in two areas of society considered central to stability: the US stock market and the nexus of US defence decision-making.
While the limitations of GAI are still clearly evident in the image that was used, these are counterbalanced by the inevitability of viral trends and the widening of access to disinformation-creation tools for potentially malicious actors. Moreover, it is highly likely that GAI tools will be further refined in the future.
While the bogus ‘explosion’ represents an early foray by GAI into the disinformation space, it once again reinforces what has been a consistent theme in the fight against disinformation: the reliance on both traditional ‘mainstream’ media and social media to counter false information.
The authoritative voice of traditional media organisations remains at the forefront of definitively debunking disinformation, but social media platforms offer agility to independent factcheckers and public organisations, which, as the stock market blip suggests, may prove vital in limiting the damage.
The answer to GAI-generated disinformation therefore appears to lie not in new lessons in identifying GAI, but in the established wisdom of expanding media literacy, updated to address contemporary trends.
As social media companies develop and reshape their networks, markers of reliability corrode, and new platforms enter the information space, learning to navigate the information landscape and identify credible voices appears to be the best strategy for societal resilience against disinformation.