Mitigating Harms of Synthetic Media on the Information Landscape

Published on Mon Jun 12 2023 by Denny George

This is an attempt to think through mitigating Generative AI's impact on the information landscape by centering the safety and concerns of the people whose images are manipulated.

There is a proliferation of Generative AI tools. Their accessibility has gone from source code you had to run yourself, to APIs you had to call, to features baked into media manipulation apps on smartphones. In the competition to ride the gen-AI wave, every consumer tool now brings the latest trick to a simple interface for its users.

We agree with Karen Rebelo in this podcast that AI might not create a new abuse vector, but it will make current trends far worse. One such trend is the use of manipulated images for spreading misinformation and disinformation. If you look at the examples of media-manipulation-related fact-checks done in India, they often feature very simple Photoshop tricks. The fear, of course, is that if such basic manipulation did so well in our country, the visceral impact of realistic-looking synthetic media, at scale, will only accelerate the impact of misinformation.

While there are distinctions between deepfakes, shallowfakes, images manufactured by generative AI, manually photoshopped images, etc., for the sake of this post we will use the term "synthetic media" for all such manipulated media.

The proposed solutions to synthetic media related attacks largely fall into these categories:

  1. The Advocacy Solutions

In these, you pressure companies to build guardrails into their tools or to stop offering these services. The boat on this might have long sailed. Even if the behemoths in this domain were to implement these safety features to save face, there will always be open-source tools and unsigned APKs floating around that miscreants will grab hold of.

  2. The Cat and Mouse solutions

Building watermarking tech into generative AI systems so that it is possible to detect images generated via AI, building synthetic media detectors, and training people to spot synthetic media. These solutions all have a cat-and-mouse characteristic: one will be constantly busy building reactive tech or skills to catch up with the new ways generative AI systems become capable of evading detection, as the sketch below illustrates.
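To make the cat-and-mouse dynamic concrete, here is a minimal toy sketch (in Python, assuming Pillow and NumPy) of the weakest form of image watermarking: hiding a fixed bit pattern in pixel least-significant bits. The `SIGNATURE`, function names, and parameters are all illustrative, not any real system's scheme; production watermarks are far more robust. Even so, the same sketch shows the "mouse" move: a single lossy re-encode typically erases the mark.

```python
import io

import numpy as np
from PIL import Image

# Hypothetical 64-bit signature a generator might embed in its outputs;
# real schemes use far more robust (e.g. frequency-domain) watermarks.
SIGNATURE = np.unpackbits(np.frombuffer(b"GENAI-WM", dtype=np.uint8))


def embed_watermark(img: Image.Image) -> Image.Image:
    """Hide the signature in the least significant bits of the red channel."""
    pixels = np.array(img.convert("RGB"))
    red = pixels[..., 0].flatten()
    red[: SIGNATURE.size] = (red[: SIGNATURE.size] & 0xFE) | SIGNATURE
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)


def detect_watermark(img: Image.Image) -> bool:
    """Check whether the signature survives in the red-channel LSBs."""
    bits = np.array(img.convert("RGB"))[..., 0].flatten()[: SIGNATURE.size] & 1
    return bool(np.array_equal(bits, SIGNATURE))


if __name__ == "__main__":
    marked = embed_watermark(Image.new("RGB", (128, 128), (200, 150, 100)))
    print(detect_watermark(marked))  # True
    # One lossy re-encode (the "mouse" move) typically erases the mark:
    buffer = io.BytesIO()
    marked.save(buffer, format="JPEG", quality=80)
    buffer.seek(0)
    print(detect_watermark(Image.open(buffer)))  # typically False
```

Every hardening of the "cat" side (sturdier watermarks, better detectors) invites a corresponding evasion, which is what makes these solutions perpetually reactive.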

In addition to these measures, we propose looking at this problem by centering the safety of the person on the receiving end of an attack using manipulated media. We think it might open up new ways of thinking. While the discourse largely centers on hypothetical future scenarios, there are people affected by this today. What support system exists for them? Once we center the discussion on this, community-based solutions like the Stop NCII project emerge.
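Part of what makes a project like Stop NCII person-centering is that matching can run on perceptual hashes rather than the images themselves: the affected person hashes the image on their own device and shares only the fingerprint. The sketch below (a toy average-hash in Python with Pillow and NumPy) illustrates the idea; the real project uses more robust hashes such as PDQ, and `SHARED_HASHES`, `check_upload`, and the threshold here are illustrative names, not the project's actual API.

```python
import numpy as np
from PIL import Image


def average_hash(img: Image.Image, size: int = 8) -> np.ndarray:
    """A simple perceptual hash: downscale, grayscale, threshold at the mean."""
    small = np.array(img.convert("L").resize((size, size)))
    return (small > small.mean()).flatten()


def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))


# Hashes submitted by affected people; the images themselves never leave
# their devices.
SHARED_HASHES: list[np.ndarray] = []


def check_upload(img: Image.Image, threshold: int = 5) -> bool:
    """Flag an upload if its hash is near any hash in the shared list."""
    h = average_hash(img)
    return any(hamming_distance(h, known) <= threshold for known in SHARED_HASHES)


if __name__ == "__main__":
    # Stand-in for an image hashed on the affected person's device.
    reported = Image.radial_gradient("L").convert("RGB")
    SHARED_HASHES.append(average_hash(reported))
    # A resized copy still matches, because perceptual hashes tolerate
    # minor transformations (unlike cryptographic hashes).
    print(check_upload(reported.resize((300, 300))))  # True
```

The design choice matters here: because only hashes travel, the person never has to re-share the harmful image with anyone in order to get it blocked.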

In the past, platforms have drawn a line at disallowing pornographic deepfakes (although such policies can't be enforced on closed platforms like WhatsApp), but from the point of view of misinformation, non-pornographic synthetic media is worrisome too.

It might be hard for platforms to take a hard-line stance of "no synthetic media allowed". But once an image or video has been flagged to the platform as "synthetic", "causing harm to an individual", or "intended to target an individual", there could be immediate takedowns or the addition of something like "contextual notes".
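As a rough sketch of what such graduated responses might look like inside a platform's moderation pipeline (all names and the mapping below are hypothetical, not any platform's real API or policy), flag reasons could map to different actions:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class FlagReason(Enum):
    SYNTHETIC = auto()           # flagged as synthetic media
    HARMS_INDIVIDUAL = auto()    # causing harm to an individual
    TARGETS_INDIVIDUAL = auto()  # intended to target an individual


@dataclass
class ModerationAction:
    take_down: bool
    contextual_note: Optional[str]


def respond_to_flag(reason: FlagReason) -> ModerationAction:
    """Map a flag to a graduated response rather than a one-off decision."""
    if reason in (FlagReason.HARMS_INDIVIDUAL, FlagReason.TARGETS_INDIVIDUAL):
        # Harm to a specific person warrants an immediate takedown.
        return ModerationAction(take_down=True, contextual_note=None)
    # Synthetic but not (yet) shown to target anyone: label rather than remove.
    return ModerationAction(
        take_down=False,
        contextual_note="This media has been identified as synthetic.",
    )
```

The point of the sketch is the shape of the policy, not its specifics: harm to an individual triggers removal, while synthetic media in general gets context attached instead of a blanket ban.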

Treating each instance of synthetic media-assisted abuse or misinformation as a one-off case, and moving attention from one case to the next, misses the fact that we need to evolve mechanisms to deal with a new reality in which this might be the norm. Recently, the ethics of watching deepfakes of your friends was a cause of controversy among Twitch users. The mental health impact of even consensual image manipulation on a user is being studied. The experience of online trolling and harassment campaigns is often described as alienating: it feels like one person against a million people out to get you.

What support systems are in place to ensure the physical and mental safety of people at the receiving end of a viral synthetic image (even a non-pornographic one) used against them? What solutions compensate them? We need to look at this more holistically and center the needs of people who face a disorienting loss of agency as a synthetic media item featuring them goes viral on social media. This approach also emphasizes that this is a problem happening now, not something to build for in a hypothetical future.
