As the Israel Defense Forces (IDF) continue their operation in Rafah, social media platforms have been blanketed with artificially generated images imploring users to pay attention to the situation in Gaza. These images distort fact-based reporting on Gaza into synthetic content that affirms various narratives about October 7th and the IDF’s campaign in Gaza. The rapid proliferation of artificial intelligence as a tool for propaganda presents a concerning future for online discourse.

Social media has become increasingly influential, shaping not only other forms of media but also how we process the competing narratives we encounter about the world. The danger of this trend is that social media users will internalize these artificial images, analyzing reality through schemas untethered from objective facts on the ground. This development is likely only to deepen polarization and stifle productive discussion, as the speed and scale of artificial intelligence widen the chasm between opposing perceptions of reality.

The most prominent example of this trend is an image calling for “All Eyes on Rafah”, with the text emerging from the roofs of an endless refugee camp. Similar to the black square that dominated Instagram in the aftermath of George Floyd’s death, this image has quickly become another morality test for social media activists. Failure to spread this artificial propaganda may be perceived as apathy towards Palestinians, or worse. It is difficult to explain how this trend exploded so rapidly among so many ordinary people across social media platforms other than by the social pressure to share this image.

Most of the 47 million social media users sharing this image are probably not closely following the situation in Gaza. Like many young people, they likely receive much of their information about the conflict on social media. Anger from a few headlines, without deeper investigation, leads to viral posts like these, which, further amplified by algorithms designed to be as psychologically affecting as possible, spread a narrative of sprawling refugee camps of displaced Palestinians trapped in Gaza. This is a serious misrepresentation of reality, and everyone would benefit from a more accurate understanding of the situation of Palestinians based on the facts on the ground.

Another artificially generated image was created to counter this campaign, asking “Where were your eyes on October 7th?” with an armed Hamas terrorist standing before a bloodied baby and a burning Israeli flag. As a response to the “All Eyes on Rafah” campaign, it is less a test of morality, but it is still another example of unproductive virtue signaling, enabled by AI-generated imagery, whereby social media users flex their perceived moral superiority over opponents. While the symbolism of this image is more faithful to the realities of the horrific attacks of October 7th, it does nothing to address the concerns of the initial “All Eyes on Rafah” post or contribute to a wider discussion about the IDF operation in Rafah. At worst, it may be perceived as a justification for the tragic situation of Palestinian civilians in Gaza.

Both sides are deliberately using emotion-based, reality-blurring appeals to make broad claims rather than engaging in evidence-based argument. While such non-factual appeals are hardly new, this is the first successful large-scale campaign to weaponize artificial intelligence for such propagandistic purposes. Unlike deepfakes of presidential candidates, which are relatively easy to discern and designed to convey false information, these images create a false reality, affirming whatever the desired narrative is. Social media users will likely recognize these images as fake but still share them in order to convey their own perceptions of the “truth” of October 7th and its aftermath.

The situation in Gaza certainly merits public attention and discussion. Competing narratives and misinformation deserve investigation. The obstacles to productive debate created by social media platforms remain an issue, but the use of artificially generated images is not the correct response. Social media users on either side of the conflict who are eager to raise awareness about Gaza ought to share actual facts and real images and engage in substantive debate.

With the abundance of documentation available from Palestinians, Israelis, and civilians abroad, there are many resources besides the constant media coverage of the conflict from virtually every viewpoint. Real images backed by evidence are the most productive way to persuade others and engage in effective dialogue.

The normalization of artificial intelligence as a tool for creating propaganda will seriously damage our public discourse. As important and passionate debates surrounding October 7th and its aftermath continue, real images supported by evidence-based reporting, together with thorough investigation of opposing viewpoints, will be the only way to hold productive discussions that achieve consensus, compromise, and understanding. Retreating further into divergent perceptions of the conflict and the wider world will only worsen political strife and lead to inaction on the ground. Whatever their position on the conflict, everyone engaged in the ongoing debate should strive for better communication with opposing viewpoints and avoid furthering the divide by sharing manipulative content.