The use of deepfakes and AI-generated media in the Russo-Ukrainian war has been a hot topic in the past year, from deepfake videos of Zelensky announcing Ukraine’s surrender to AI-generated images of casualties.
While we have noticed a relatively lower proportion of deepfakes and sophisticated AI-generated content during the past weeks of the Israel-Hamas conflict, BDR has gathered a few examples of such content being used to push and support a range of different ideas and positions.
Generative AI has been used in several cases to create emotion-stirring and sometimes graphic images. We have seen this particularly on social media platforms such as X, sometimes mixed in with real images, other times used to accompany a specific claim.
While numerous authentic images showing both adults and children injured in Israeli airstrikes on Gaza have been circulating, a number of AI-generated images have muddied the waters.
For instance, the image below, which has been widely shared, shows a Palestinian man holding multiple children and walking away from a destroyed building. A closer look at the children’s limbs, and their toes in particular, suggests that the image is likely AI-generated. Another image of three Palestinian children covered in ash and looking directly at the camera is also clearly AI-generated, with no clear source or attribution to a photographer or journalist (which other, authentic images have).

Another debunked set of AI-generated images shared across social media shows crowds in Israel waving flags and cheering for the Israeli military. These images are also clearly AI-generated, with overly long limbs, unrealistic numbers of people standing on balconies, and some flags with distorted details. Although the original poster on Facebook clarified (in Hebrew) that the images were AI-generated to imagine Israel’s “victory picture”, they have been shared widely without this clarification.

In response to these examples (among others) of AI-generated images being used, some people have begun using free image checkers such as the website AI or Not, which purports to identify AI-generated images. However, despite sometimes correctly identifying AI-generated images, these free checkers are prone to giving inconsistent results and should not be used as the sole determining factor in whether an image is AI-generated. Instead, alongside looking closely for visual indicators in the image itself, searching carefully for sources, provenance, and publication history is an important step if one is unsure about any piece of media.
Deepfake technology uses generative AI to manipulate likenesses digitally. Since its inception, the technology has only become more sophisticated, with deepfake video in particular becoming less easily identifiable.

A deepfake video of U.S. President Joe Biden from February has been re-circulated in light of American involvement in the Israel-Hamas conflict. In the video, Biden’s voice and features have been altered to suggest that he is calling for a military draft for both men and women. The video was cut (the original clip included the creator explaining that it was AI-generated) and re-posted on TikTok and other platforms last week, along with the suggestion that this imminent military draft is directly related to the Israel-Hamas conflict.
As Black Dot Research has fact-checked in full, the clip below of Palestinian-American model Bella Hadid has been widely circulated recently, likely sparked by her recent Instagram post calling for a ceasefire in Gaza. In the clip, Hadid voices support for Israel in a speech and apologises for her “past mistakes.” The video has been seen by millions, with pro-Israel accounts on social media claiming that Hadid has changed her views.

However, this is a deepfake video, made using an old award acceptance speech Hadid gave in 2016 as its base. The deepfake replicates Hadid’s voice and mouth movements and, on first watch, appears passable as a real video. Hadid has been vocal about her support for Palestinians and about her own Palestinian heritage for several years.
Whether used to provoke heightened emotional responses or to add visual aids to an existing claim (both true and false), AI-generated images and deepfake videos only serve to add confusion, even more so after they are debunked.
While some creators of this AI-generated content do not appear to be intentionally creating disinformation, and even disclose their use of AI, the content itself can travel beyond its initial purpose and spread quickly as misinformation. Other examples, however, seem to be intentionally created to sow confusion and stoke anger, fear, or outrage. While some AI-generated media is successfully debunked, other examples slip through the cracks, and some debunking corrections go unseen by many who continue to spread mis/disinformation.
The added dimension of generative AI amidst existing forms of misinformation (e.g., falsely labelled content or misleading phrasing) makes the ongoing waves of new information harder to parse by the week, leading to increasingly divided, polarised stances. Being conscious of the presence and impact of AI-generated media is therefore vital to combating false assumptions and to working to establish accuracy over immediate, emotion- or urgency-driven posting and sharing.