What are Deepfakes?
Deepfakes are a concerning form of digital media manipulation. Recent technological developments have made it possible to create realistic-looking audio and video clips of people saying and doing things they never actually said or did. Most people are aware of digital manipulation in the form of photo editing and filters, but this is a whole new ball game.
How Do Deepfakes Work?
The technology most commonly used to create deepfakes is the generative adversarial network (GAN). A GAN pits two AI models against each other: a generator that produces synthetic media, and a discriminator that tries to tell the fakes apart from real photos, video, or audio. By training on existing media of a person, the generator learns to produce increasingly convincing output until the discriminator can no longer reliably tell the difference. With this technology you can synthesise a remarkably good facsimile of a real person, voice, mannerisms, and all.
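The adversarial loop behind a GAN can be sketched in miniature. The snippet below is an illustrative toy, not a production deepfake system: a one-parameter "generator" learns to match a simple one-dimensional data distribution while a logistic-regression "discriminator" tries to tell real samples from fakes. All names, learning rates, and distributions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a normal distribution centred at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Generator: maps noise z to a sample x = w_g * z + b_g.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression, p(real) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push p(real) up and p(fake) down ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    p_real = sigmoid(w_d * x_real + b_d)
    p_fake = sigmoid(w_d * x_fake + b_d)
    # Cross-entropy gradients with respect to the discriminator's logits.
    g_w = ((p_real - 1.0) * x_real + p_fake * x_fake).mean()
    g_b = ((p_real - 1.0) + p_fake).mean()
    w_d -= lr * g_w
    b_d -= lr * g_b

    # --- Generator update: push the discriminator's p(fake) up ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    p_fake = sigmoid(w_d * x_fake + b_d)
    dlogit = p_fake - 1.0  # gradient of -log p(fake) w.r.t. the logit
    w_g -= lr * (dlogit * w_d * z).mean()
    b_g -= lr * (dlogit * w_d).mean()

# After training, the generator's samples drift toward the real data.
fakes = w_g * rng.normal(0.0, 1.0, 200) + b_g
```

In a real deepfake pipeline both players are deep neural networks trained on images or audio rather than single numbers, but the tug-of-war is the same: the generator's only training signal is the discriminator's verdict.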
Why Are Deepfakes Dangerous?
Image editing and filtering aren’t new; however, this particular kind of manipulation can be used to spread false or misleading information, sway public opinion, and even power phishing scams. It’s important that security professionals and the public alike understand deepfakes and remember that not everything you see online is real.
Uses for Deepfakes
Deepfakes have become popular for a variety of purposes, from entertainment to security circumvention and political manipulation. Deepfakes (or at least their underlying technology) have been used in movies and television for some time to create realistic-looking scenes and characters: recreating beloved characters whose actors have passed away, such as Princess Leia in Star Wars; de-aging actors like Robert De Niro in The Irishman; and creating more engaging visuals in advertising. On the other hand, deepfakes have also been used to spread false information and manipulate public opinion, for example by making political figures appear to say the opposite of what they usually stand for, or to take an inflammatory position.
While there are legitimate uses for GAN technology in mainstream film, ethical questions remain about recreating people who have passed away, or depicting younger or older versions of living people. Impersonation for comedy on social media is well known, and while the intent seems innocent, a real person’s likeness is still being used: with permission it may later be regretted, and without permission it may be illegal in some countries and territories. One of the most immediately concerning criminal uses is the sexual exploitation of individuals without their consent or knowledge, along with romance-bot scams. Beyond being deeply defamatory, such abuse is mentally damaging, and the social and economic consequences for the victim can be far-reaching and difficult to repair.
Deepfakes can also be used to circumvent security measures such as facial recognition and biometric authentication. Because faces and voices are used to verify identity, it’s concerning that a deepfake may be enough to fool some consumer-level security apps. The same underlying technology can be used to guard against this: detection algorithms can look for traces of GANs or other digital manipulation in a video. However, this research is still in its infancy and isn’t yet reliable on videos from unknown sources or of unknown subjects.
Identity theft and fraud
Cyber criminals can use deepfake technology to impersonate individuals by creating realistic images, videos, or audio recordings. This can lead to various types of fraud, such as unauthorised access to sensitive information or financial accounts.
Social engineering attacks
Deepfakes can be used to enhance social engineering attacks, like phishing or spear phishing, by creating more convincing and personalised content to deceive victims.
National security threats and espionage
State-sponsored actors can use deepfakes to create propaganda, manipulate public opinion, or even impersonate government officials for strategic purposes, potentially disrupting international relations and national security.
How to Identify a Deepfake
Look For Flaws in the Image
When viewing a suspected deepfake, the first thing to check for is any distortion or unnatural movement in the video. Deepfake technology isn’t perfect yet and can still produce noticeable blurring or artifacts in the image. Unexplained changes in playback speed can also be a sign of manipulation. Look for similar images or videos of the individual and compare them: many deepfakes are built from a source video that already exists online, or from an image used as a data point for a new video. Pay attention to any inconsistencies, such as unnatural movements or expressions, as well as any blurriness around the edges of the image or video.
A few main things to watch for:
- Eye direction
- Ear placement, compared against genuine images of the person
- Forehead height, width of face
- ‘Loss’ of expression at moments during the video
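As a rough illustration of the blurriness cue above, image-forensics tools often measure local sharpness; one classic heuristic is the variance of the Laplacian, which drops in regions that have been smoothed or blended. The snippet below is a simplified sketch on a synthetic image, not a real deepfake detector; the function names and the 3x3 blur standing in for face-swap smoothing are assumptions for illustration.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the discrete Laplacian over the image interior.

    High values indicate sharp edges; smoothed (blended) regions score low.
    """
    lap = (img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())

def box_blur(img):
    """3x3 mean filter, a stand-in for the smoothing a face-swap can leave."""
    h, w = img.shape
    return sum(img[i:i + h - 2, j:j + w - 2]
               for i in range(3) for j in range(3)) / 9.0

# A sharp synthetic "image": a checkerboard made of hard edges.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurred = box_blur(sharp)

sharp_score = laplacian_variance(sharp)      # high: hard edges everywhere
blurred_score = laplacian_variance(blurred)  # low: edges have been smoothed
```

Real detectors combine many such signals (frequency artefacts, blending boundaries, physiological cues) and, as noted above, still struggle on videos of unknown provenance.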
Listen for Unnatural Speech
Deepfake technology can create convincing audio, but it’s not perfect. Listen closely for unnatural inflection or pauses, as well as speech with little emotion or range. The natural flow of genuine speech versus the abrupt patterns of a deepfake should be easy to hear side by side; however, as the models improve, this will become harder to differentiate.
Pause the Video
It’s often much easier to detect things that are out of place in still frames, so try pausing the video and seeing if anything stands out. Check the ears, the placement of the hairline, and the jaw; there will be subtle differences that are more obvious when paused.
Check the Source
The source of the piece can often be a tell-tale sign of whether it’s a deepfake. If the source is unreliable or unknown, be sceptical that the clip is what it purports to be. Likewise, if the subject is clearly acting out of character or against their usual ethics and values, it’s likely fake, or at least a scripted piece.
This isn’t always a reliable method. In an era of unconfirmed news sources and the rush to break stories, it’s highly likely that deepfakes will be circulated even by reputable news agencies before the truth becomes known and a retraction is issued.
Think twice before sharing
You can’t always believe what you see on TV (or online)! Once upon a time the expression was ‘the camera never lies’, but that hasn’t been true for many years. If you see a video online that seems out of place, such as a politician speaking out of context or against the country’s best interests, an actor doing something off-brand, amazing physical feats, or anything else hard to believe, think twice before you share or believe it. Check the source of the video and its context, and look for the tell-tale signs of a deepfake.
While there isn’t a lot you can do about bad actors taking publicly posted images of you and manipulating them, ensure you report them to the authorities in your country as soon as you are aware. Stay safe online.
For more information about our award-winning training platform, and how to help secure your organisation from cybercrime, contact us today for a personalised demo.