Stolen Simulacra: The Rise of Deepfakes

An old adage states that “the camera never lies”. Now, in the era of Photoshop and Snapchat filters, we all know this is no longer true.

It is doubtful the adage was ever true to begin with; the Cottingley Fairies are but one early example. We have reached a tipping point where the majority of the photos we see daily, in advertisements and across social media, are manipulated in one form or another. In the world of images, there exists a state of hyperreality in which it is increasingly difficult to determine what is real and what is not. Throughout this process, however, video remained the gold standard of authentic communication. It is the medium through which politicians speak and news is delivered. With video, seeing is still largely believing. That is about to change.

In May 2019, a furious row erupted over a video which appeared to show Democratic Speaker of the House Nancy Pelosi slurring her words and acting drunkenly during an address. It later transpired that the video had been altered, slowed to 75% of its original speed and presented as unedited. In 2018, CNN’s Jim Acosta was likewise the subject of a doctored-footage row when, during a heated exchange with President Trump, a video surfaced purporting to show him manhandling a White House aide as she attempted to remove his microphone. CNN is now suing the White House for endorsing and spreading this video. The authenticity of video is therefore not simply an academic question; it has entered the political scene as a point of contention.

The mainstream media has reacted to these developments with alarmist headlines warning that deepfakes will cause an information apocalypse and the collapse of evidential consensus. In reality, neither of these incidents, strictly speaking, involved true deepfakes at all; they were crude ‘cheapfakes’, or just plain ‘fake news’, created using conventional editing methods: splicing, slowing, speeding up and removing frames, techniques which have existed for decades.
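The Pelosi-style slowdown illustrates just how little technology a ‘cheapfake’ requires: no machine learning at all, merely a rescaling of the video’s frame timestamps. A minimal sketch of the arithmetic (the function and values here are purely illustrative):

```python
def retime(timestamps, speed):
    """Rescale frame timestamps; speed < 1.0 stretches playback (slows it down)."""
    return [t / speed for t in timestamps]

# Frames of a four-second clip, one per second:
original = [0.0, 1.0, 2.0, 3.0]

# The manipulation alleged in the Pelosi video: playback at 75% speed.
slowed = retime(original, 0.75)
# The final frame now plays at 3.0 / 0.75 = 4.0 seconds: the same footage,
# stretched over a third more time, with not a single frame altered.
```

No pixels are touched, which is precisely why such edits evade any detector that looks only for image tampering.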
While it can be argued that the social media age allows doctored videos to be disseminated far faster to a mass audience, the threat of doctored video is not in itself new. What is new is that, with the help of neural networks, face-detection technology and forthcoming ‘audioshop’ software such as Adobe Voco, almost entirely synthetic videos can be created by anyone. With a piece of original footage, a body double, and a freely accessible neural network fed a few hundred photos of an individual, it is now possible to superimpose that person’s face into a scene and even to generate new audio, synced with accurate mouth movements, to go along with it. The University of Washington demonstrated the technique in a 2017 viral video, using an AI model of precisely how Obama’s mouth moves when he speaks to generate realistic lip-synced footage; a 2018 viral video took the idea further, showing Obama apparently speaking lines voiced by comedian Jordan Peele. These were proofs of concept that anyone with the resources can now build such a model and match it to chopped snippets of pre-recorded footage of a politician speaking, or to an impersonator’s dialogue.

This kind of face synthesis has been in use in Hollywood for years. A digitally recreated Princess Leia appeared in Star Wars: Rogue One, and Oliver Reed posthumously completed his role in Gladiator nearly two decades ago. Yet the technologies that made this possible had until recently remained slow, hardware-intensive and prohibitively expensive (Oliver Reed’s two minutes of digital screen time are rumoured to have cost $3.2 million). As with almost all technologies, these have now been democratised, and a skilled individual can achieve an effect comparable to a major studio’s, as Ian Hislop demonstrated in his recent BBC documentary on fake news. Equally, as is often the case with new technologies, their development has been spurred on by malicious applications.
The driving force behind perfecting deepfake technology at the individual level has been the underground industry in fake celebrity pornography, in which celebrity faces are superimposed onto the bodies of adult performers. While dubious, this practice is at present not illegal. Beyond concerns about consent and invasion of privacy, the manufacture of deepfaked sexual content has had more sinister uses, as Indian journalist Rana Ayyub discovered when she was blackmailed with pornographic content made with a body double of herself, in an effort to silence her criticisms of the Indian government. This incident raises a troubling prospect: while many people worry that politicians will be the primary targets of malicious deepfaking, it is just as likely that private individuals will become targets of blackmail and extortion, since they lack the recourse that major celebrities and politicians have to debunk such material.

Furthermore, the threat of deepfakes may in fact prove more harmful than the reality. US politicians are already moving to curtail net neutrality and online anonymity so that the source of any doctored content can be traced, at the cost of the basic freedoms of the internet. It is also true that once the existence of widely accessible deepfaking technology becomes an accepted fact of life, politicians will be able to deny incriminating video and audio with impunity. In an already politically polarised era, with few undisputed ‘facts’, this could spell disaster for political accountability and democratic discourse as politicians cast off unfavourable video as fake news. The effects, however, may not be as apocalyptic as predicted. Technological problems demand technological solutions, and several efforts are already underway to create software that can spot and debunk deepfakes.
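Real forensic systems are far more sophisticated, but one underlying idea, fingerprinting authentic footage so that later copies can be checked against it, can be sketched with a toy perceptual hash. (The functions, pixel values and threshold below are illustrative only, not any real forensic tool’s API.)

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, above or below mean brightness."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Count differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Brightness values of a (tiny, illustrative) authenticated reference frame,
# and the same frame with one region altered:
reference = [10, 200, 30, 180, 50, 160, 70, 140]
tampered  = [10, 200, 30, 180, 50, 160, 250, 140]  # one region brightened

distance = hamming(average_hash(reference), average_hash(tampered))
# distance == 1: one bit of the fingerprint flipped, flagging the frame
# for closer inspection, while an untouched copy would score 0.
```

A production system would hash heavily downsampled frames, tolerate re-encoding noise via a distance threshold, and combine many such signals, but the principle of comparing compact fingerprints rather than raw pixels is the same.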
The issue is serious enough that the US Defense Advanced Research Projects Agency (DARPA) has funded a Media Forensics programme (MediFor), stating its aim thus: “DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.” Other efforts to combat deepfakes focus on source verification, with the start-up TruePic using blockchain technology to verify press-captured videos and images. Whether these efforts will succeed remains to be seen, but it is likely that an industry in deepfakes will give rise to a corresponding industry in combating them.

As Data & Society’s report points out, evidence has never been value-free; it is context-specific. It is worth remembering that new technologies have always brought with them the potential for deception and fakery: photographic manipulation was once touted as the end of truth, and was exploited by unscrupulous parties until the public wised up. It is thus likely that the threat of deepfakes will have more impact on the short-term political milieu than the use of the technology itself. Nevertheless, a well-executed deepfake operation, launched before the defensive networks and public awareness to counter it are in place, could have disastrous and unforeseen consequences. As always, awareness and scepticism are key. Seeing may no longer be believing…

If you’d like to understand more, or to prepare your business against cyber-attacks and other forms of modern digital warfare, please contact us now.