You may have heard the word “deepfake” floating around the internet recently, but what exactly are deepfakes, and just how harmful can they be?
The ‘deep’ in “deepfake” is derived from “deep learning” - deepfakes are AI-generated images and videos that often depict celebrities, but can also present completely non-existent people. One infamous deepfake showed Mark Zuckerberg supposedly bragging about “having total control over billions of people’s stolen data”; another, of Tom Cruise, went viral on TikTok. Possible deepfake profiles have also been found on sites such as LinkedIn - one was even thought to be the work of a foreign spying operation!
You may wonder how to discern a genuine video from a deepfaked one. As deepfake and AI technology develops, fakes will only get harder to spot: they are becoming more realistic, with fewer glitches. Shockingly, even people with little tech knowledge can create deepfakes using publicly available apps, and the problem will only grow as the technology becomes more accessible. Even the president of Microsoft has said that “deepfakes are getting harder to spot”.
Another problem has developed recently - AI is learning. In 2018, researchers published findings that deepfakes didn’t blink normally (at the time, many deepfakes never blinked, or blinked in unnatural patterns). Soon after the research was published, many deepfakes started blinking more naturally. Thankfully, there are still some clues: deepfakes may have uneven or patchy skin tone, strange lip-syncing, and badly rendered hair, jewelry, and teeth. Lighting effects and reflections in the irises can also be a dead giveaway.
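To make the blinking cue a little more concrete, here is a minimal sketch of how a blink counter could work, using the eye aspect ratio (EAR) heuristic from computer-vision research together with Google’s MediaPipe face-mesh library. The landmark indices and the 0.2 “closed eye” threshold below are common conventions rather than anything from the 2018 study or any specific detection tool, and a real detector would be far more sophisticated than this.

```python
import math

import cv2
import mediapipe as mp

# Landmark indices for the right eye in MediaPipe's 468-point face mesh,
# in the p1..p6 order used by the eye-aspect-ratio (EAR) formula.
# These indices and the 0.2 threshold below are common conventions,
# not values taken from any published deepfake detector.
RIGHT_EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = pts
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def count_blinks(video_path, threshold=0.2):
    """Count blink events in a video by watching the EAR cross the threshold."""
    blinks, eye_closed = 0, False
    mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face found in this frame
        landmarks = result.multi_face_landmarks[0].landmark
        pts = [(landmarks[i].x * w, landmarks[i].y * h) for i in RIGHT_EYE]
        if eye_aspect_ratio(pts) < threshold:
            eye_closed = True
        elif eye_closed:
            blinks += 1       # eye just reopened: one complete blink
            eye_closed = False
    cap.release()
    return blinks
```

People blink roughly 15-20 times a minute on average, so a multi-minute clip where a counter like this stays near zero would be a red flag - although, as noted above, newer deepfakes have largely learned to blink.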
A company called TrueMedia recently released a deepfake detection tool that will aid the process of shutting down deepfaked content, as well as spread awareness of deepfakes and AI-generated media. This is especially important during times of political interest, such as presidential elections. Microsoft’s president called TrueMedia’s software a “great example of using good AI to combat bad AI,” and Microsoft will even be teaming up with TrueMedia to further develop the technology needed to identify deepfakes. So far, TrueMedia has detected 41 AI content farms that had accumulated over 380 million views collectively. Content farms can be detrimental: they may spread propaganda and fake news, or urge viewers to click affiliate links for quick, effortless money.
As technology advances, we will hopefully see more detection tools to help combat the misinformation, scams, and hoaxes spread through deepfakes.