Looking Beyond The Deepfakes Moral Panic

Published on Fri Jul 05 2024 · Tarunima Prabhakar - DAU

The 2014 election ensconced social media as a critical tool in election campaigning in India. By 2016, when the US elected Donald Trump as president and the UK voted to leave the EU, concerns about falsehoods and polarization on social media deepened. Governments across the world called for social media to be regulated. The seemingly obvious solution to the problem of ‘bad’ content on the internet is to reduce its circulation or remove it altogether. Yet a few seconds of reflection suffices to realize that this simple-sounding ask is complicated in practice, because ‘bad content’ is difficult to define. In a polarized environment, it is especially difficult to decouple politics from governance, and to trust that the cudgel of ‘bad content’ will be wielded for public interest rather than political gain. Some categories of bad content violate well-established social mores and are reprehensible to most people regardless of political position; child sexual abuse material and content promoting terrorism are two such categories. In democratic societies, content interfering with election integrity is a third category that provokes widespread moral panic.

Consequently, every election cycle, civil society and political parties alike intensify their demands for action on bad content online. Machiavellian motivations intersperse with democratic fervor to produce a din of proposals to address online falsehoods that interfere with electoral processes. In the months preceding the 2019 election, dubbed the “WhatsApp election”, the then Minister of Electronics and Information Technology pressed WhatsApp to reveal to the government where messages on the platform originated. WhatsApp declined, stating that doing so would violate end-to-end encryption and user privacy. After the 2019 election, the requests turned into diktats, culminating in the IT Rules (2021), which made it legally binding for WhatsApp to trace the origin of messages. WhatsApp sued the Indian government in the Delhi High Court, where the case is still ongoing.

By 2024 the threat had morphed: instead of the platforms where bad content circulated, we became concerned with the technology that could be used to create more believable bad content. While “deepfakes” had been doing the rounds since 2017 and were first used in Indian election campaigning in 2020, the rise of models like ChatGPT and Stable Diffusion pushed the term into popular parlance. In November 2023, actress Rashmika Mandanna used social media to call attention to a deepfake video in which her face was superimposed on another video. While this incident had little to do with political campaigning, campaigners capitalized on the moral panic it created to win political brownie points and discredit opposition candidates. In January 2024, a Rajasthan MLA complained that a pornographic deepfake video of her was being circulated. While the video attributed to her was pornographic in nature, there was no clear evidence that it was a deepfake. In April 2024, an MLA from UP, attempting to distance himself from his own statements caught on video, called the original video a deepfake. In the five months between Rashmika Mandanna’s original post calling attention to deepfakes and India going to vote, the government issued multiple advisories requiring platforms to identify, label and remove AI-generated content and deepfakes. Separately, various civil society actors called on the Election Commission of India to clarify its policy on the use of deepfakes in election campaigning. The warning from the ECI came three weeks after polling began, by which time two phases had concluded. By then, a doctored video showing Amit Shah advocating the abolition of reservation quotas had gone viral. Most mainstream media channels and newspapers misreported this video as a deepfake. In fact, the video relied on the most basic and oldest of editing techniques: selectively cutting out sections of a video to convey a message different from the one intended.

With the elections behind us, and the new government firmly in place, we can finally afford to loosen some of the moral panic that gripped us over the last few months and understand the nature of the beast of 'deepfakes' better. Read together, the numerous explainers on deepfakes reveal that there is no agreed-upon definition of the term. It was coined on Reddit: in 2017, a user by the name of “deepfakes” started sharing pornographic videos created using deep learning, a class of algorithms in Artificial Intelligence (AI). Since then, the scope of the term has expanded to include any use of AI in creating videos, audio or images. With this widened scope, it has become increasingly hard to label content under a binary of deepfake or not. Should social media posts enhanced with platform-provided AI filters for virtual makeup be labeled as deepfake or AI-generated? Media items now combine older techniques, such as selectively splicing sections of a video, with newer ones, such as AI-generated audio. AI has provided new techniques for manipulating content, but it doesn't change how manipulated content thrives online. Content is manipulated to exploit and foment existing psycho-social vulnerabilities and inter-group divisions, and it is amplified by reckless 'forwards' and social media algorithms. Addressing these underlying realities of content distribution will also address many of our concerns about the latest technology used for content generation. We should be wary of efforts to rebrand an existing problem into a new category of 'bad content' that merits unprecedented regulation.

Given the precedent, it is reasonable to expect that the soft policy issued before the elections will be codified as formal regulation in the months to come. Yet a true commitment to protecting public interest online requires a sober analysis of the problem at hand. Targeted deepfakes, such as pornographic content about an individual or impersonations used to carry out financial fraud, are indeed a cause for concern. But the actors behind such directed attacks would circumvent any crude attempt at traceability applied to the wider population. Tackling pornographic and sexualized content, specifically, needs a whole-of-society approach and a resetting of norms. News reporting needs to stop describing women featured in deepfake videos as 'victims' or 'shikaar'. If anything, the prevalence of pornographic deepfakes shows that violence against women has little to do with women's own actions: mere public presence online opens women to appropriation of their images. As for the use of AI by social media content creators, be they political actors or artists, the line between creative use and harmful use will remain hard to draw, and proposals to remove 'bad content' will appear partisan. Instead, addressing the structural issues that let polarizing and false content thrive online will serve the public interest.
