On a merry ramble across the internet, one may find many great wonders to behold: a clip compilation of cats making TikToks; babbling rants from every side of the political sphere about whether it’s PC to say ‘mud’; and beloved cartoon characters in scenarios you wish they were far from. But it was upon a recent sojourn that I discovered a new type of oddity: a screen full of small boxes in which every US president gleefully performed Witch Doctor by Cartoons. At first it was amusing to see a perennially sallow Richard Nixon smirk while harmonising amongst his presidential peers. Equally fascinating was seeing presidents of old, whose images are committed to grainy photographs and paintings, joyously move. This is an example of a so-called ‘deep fake’ found online, one that has raised alarm among some experts who believe the practice could pose a great risk to society. While entranced by what is for all intents and purposes a meme, I considered with frank amazement the possibilities such technology could hold. While forcing a dead Abraham Lincoln to perform Baby Shark is primitive, there is equal scope to do much harm in this double world of online trickery and wizardry.
A deep fake is a simulated image created by feeding pre-existing images and sounds into deep learning algorithms. These images are then mapped over different bodies and faces in order to manipulate likeness. It is a simulation verging on the hyper-real, where using as many source images as possible aids in creating a more lifelike outcome. Celebrities are often the focus of such exercises, offering a wealth of images for creators to choose from. Some of the most successful and viral deep fakes are of the actor Tom Cruise, shown acting unusually off-brand from his carefully constructed persona. Cruise’s digital puppeteer is Miles Fisher, who not only bears a striking resemblance to the megastar but can also provide an uncanny voice impression, making for a convincing copy. While Fisher has made this a career, most videos are made as fun sketches. The increase in their appearance is due to the variety of apps and free software available, the most popular being FaceApp. Branded as an AI face video editor, FaceApp rose to popularity with its ability to morph photos of youthful faces into seemingly accurate elderly ones. Now the app can process video, allowing users to create convincing deep fakes with the faces of the rich and famous. As the technology advances and becomes more accessible to the public, there will be a wellspring of new and fun ways to employ deep fakes. There may come a time when the technology isn’t used solely for memetic purposes but for identity building. It is possible that deep fakes will become avatars for online personas, a Frankenstein concoction of faces creating the ultimate guise for a user. Imagine the many media personalities we will tune into without knowing what they truly look like. Only recently, a video emerged from China in which a local man used the technology to merge his face with that of tech entrepreneur Elon Musk. ‘Elong Musk’ provided another viral moment that left many users scratching their heads and wondering: is this real?
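For the curious, the architecture behind the first wave of hobbyist face-swap tools was surprisingly simple: one shared encoder learns features common to both faces, a separate decoder is trained per identity, and the swap happens by routing one person’s encoding through the other person’s decoder. The PyTorch sketch below is a minimal illustration under those assumptions; the layer sizes are placeholders, the training loop and face-alignment preprocessing are omitted, and it does not reflect any particular app’s actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the classic face-swap autoencoder: a shared
# encoder plus one decoder per identity. Swapping happens at inference
# time by crossing the wires. Layer sizes here are placeholders.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct faces of person A
decoder_b = Decoder()   # trained to reconstruct faces of person B

# Training (not shown) minimises reconstruction loss per identity:
# decoder_a(encoder(face_a)) should approximate face_a, and likewise for B.
# The 'fake' is produced by crossing the wires:
face_a = torch.rand(1, 3, 64, 64)      # stand-in for an aligned face crop of A
swapped = decoder_b(encoder(face_a))   # face A rendered with B's appearance
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

In real tools, thousands of aligned face crops per identity and long training runs go into making that crossed reconstruction convincing; the principle, though, is no more exotic than the above.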

Of course, Hollywood has already adopted deep fake technology, as in the Star Wars series, where the deceased actors Carrie Fisher and Peter Cushing had their roles reprised from the grave, sparking controversy and discussion about the ethics of casting digitally animated dead people. Other uses of deep fakes have extended not only to reincarnating movie stars but also to deceased loved ones. Popular software such as MyHeritage has found a way to capitalise on one of people’s greatest, yet impossible, wishes: using AI, it takes images of the deceased and creates moving, smiling videos to an uncannily creepy degree. But it isn’t just the dead who are digitally summoned, as seen recently in the largely pedestrian sitcom The Goldbergs, which used CGI to clumsily retrofit ex-cast member Jeff Garlin’s face onto a body double. Following accusations of inappropriate on-set behaviour, the production suspended the already estranged star and announced it would use old footage and sound recordings to keep his character alive. With smarter and more accurate use of deep fakes, the possibility of creating performances from dead, ill or cancelled stars may change the industry. A growing reliance on this technology raises interesting questions about the nature of movie acting: what if it’s just the face that sells, rather than the performance? Who knows, maybe Miles Fisher will catch his big break in Mission: Impossible XX? Nor is the use of CGI and deep fake technology in films reserved for actors, with studios discussing the possibility of reaping advertising revenue from their archives by digitally inserting updated product placement. However, not all uses of this technology have to be so morbidly money-driven. Think of the historical figures that could be reconstructed, or of the ways motion capture and CGI could become lifelike and compelling, as if a new style of makeup in which artists apply their brush with a mouse rather than by hand.

But aside from the novelty deep fakes bring to an industry built on fiction, they have deeper consequences for so-called reality. Already, there have been instances of the technology being weaponised, with people, largely female celebrities, stitched onto the bodies of porn stars without consent. Deep fakes can also be used as a full-scale weapon of disinformation and propaganda. As the current Ukraine crisis rages on, the Kremlin has taken its offensive not only to country borders but to online frontiers. This isn’t the first attack in a long and dangerous information war between Russia and its enemies, one that also features military-funded troll farms and disinformation linked with the Brexit vote and the Trump election. In March, a video emerged of Ukrainian president Volodymyr Zelenskyy announcing his decision to concede the Donbas region of Ukraine to Russian forces and pleading with Ukrainian soldiers to lay down their arms. The video is crude, with Zelenskyy’s head noticeably larger than and discoloured against his body, accompanied by shaky voice work. After appearing on a Ukrainian website, it was immediately debunked by experts, the press and Zelenskyy himself, who responded via social media. It wasn’t long before online platforms cottoned on and removed the video. Strangely enough, in that same week an entirely different video circulated, this time of the Russian president Vladimir Putin making his own declaration of peace. Watching the videos, they are crude enough for even the least discerning of individuals to smell a rat. But what do they say about the future of such tactics? While these videos were easily seen through by savvy technology journalists and modern readers, what happens when they aren’t? When ideological factions have become so divided, so zealous, even something that merely resembles reality may be enough to supplant it. As we notice each year, the capabilities of new technologies are ever expanding, and with increasing time spent in online spaces and a plurality of news sources, it may get to the point where deep fake checks become an intrinsic part of consuming information.

While it is easy to lambast aggressive nations for using such tactics, it is more shocking that supposedly free and liberal governments adopt them too. It was only in 2019 that the UK Conservative party used what is known as a ‘shallow fake’ to edit and manipulate video of Labour’s Keir Starmer, a tactic that could well germinate into full deep fake propaganda. In a period where distrust in experts, media and politicians is growing, deep fakes will only fuel the fire: a constant state in which people cannot truly trust what they see, second-guessing every broadcast, left to choose between their own assertions and a fiction algorithmically chosen for them. It’s not only videos and photos that can be manipulated but sound too; before long, even voices cannot be trusted, as was the case when scam artists cloned a German CEO’s voice to steal almost £200,000. A single misleading tweet, video or photo can just as easily spark outrage and even send stock prices plummeting. To spot and isolate deep fakes, governments and tech companies will need to turn the same AI tools against them to prevent havoc. However, when such technology can just as well serve slippery parties, will they bother? If there is not a serious effort to educate people and remain vigilant about deep fakes, reality may become even more difficult to discern.
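As a rough illustration of what ‘turning the same AI tools against them’ means in practice, here is a minimal, hypothetical PyTorch sketch of one common approach: a frame-level detector, a small network trained on labelled real and manipulated face crops that scores each video frame for tampering. The architecture, sizes and threshold below are illustrative assumptions, not any platform’s actual system; production detectors use far deeper backbones, temporal cues across frames and artefact-specific features.

```python
import torch
import torch.nn as nn

# Illustrative frame-level deep fake detector: a tiny CNN that outputs the
# probability a given face crop has been manipulated. Assumes training on
# a labelled real/fake dataset (not shown).

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # P(frame is fake)

detector = FrameDetector()
frames = torch.rand(8, 3, 224, 224)   # stand-in batch of video frames
scores = detector(frames)

# Flag the clip if any frame scores above a chosen threshold.
print((scores > 0.5).any().item())
```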
Laurence Smither is a content writer for Networthpick and a screenwriter.