How Deepfakes Will Influence the Evolution of Cybercrime

When it comes to deepfakes and how they tie into the evolution of cybercrime, there is a lot to discuss and consider. Below, we’ll delve into the background of deepfakes and look at how they can combine audio and video to execute the perfect cybercrime.

In the last five years, a plethora of events and advancements have taken place in science and IT. From 2015 to the present day in 2020, we have seen an especially critical period for advancements (and security concerns) in cloud technology, artificial intelligence, machine learning, neural networks and IT legislation. All of these fields have skyrocketed in activity.

It is no surprise that events have unfolded this way. For example, Google’s DeepMind project and think tanks devoted to transhumanism have been around since before 2015, and have set a course for ever-closer interaction between humans and computers.

This evolution has, of course, run in parallel with exponentially increasing network speeds, processor and GPU technology, and the demand for an ever more IoT- and device-driven world. Naturally, the corresponding negative forces have appeared: everything from hacking to the chaos of social media and fake news. Deepfakes are one of the ripples in this pond of issues for connected humanity.

To understand the seriousness of deepfakes and how they can boost cybercrime, it is important to get some perspective, especially on these past few critical years. The aspect that goes hand in hand with the evolution of cybercrime, and the backbone of this entire discussion, is artificial intelligence.

What Is a ‘Deepfake’?

‘Deepfake’ is, in essence, a portmanteau of ‘deep learning’ and ‘fake’. Deep learning, a branch of machine learning built on neural network architectures, is what drives the software. It is the result of the IT world’s work on creating neural networks for AI that mirror functions of the human brain.

We have been familiar with face manipulation in still images for a long time. What a deepfake model does is learn from video and composite new 3D textures and poses that are almost indistinguishable from reality. Replacing (swapping) a face in a video sequence used to be something only film studios or video professionals could do. Now, there are even free public apps that let you see what ‘deepfaking’ yourself is like. A common architecture behind these tools is sketched below.
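
Many of the early face-swap tools were built around a shared-encoder, two-decoder autoencoder. The following is a minimal sketch of that idea in PyTorch, with hypothetical layer sizes, and is not the code of any particular app: one encoder learns a face representation common to both people, each person gets their own decoder, and the ‘swap’ is encoding person A’s face and decoding it with person B’s decoder.

```python
# Minimal sketch (PyTorch) of the shared-encoder, two-decoder autoencoder
# behind many face-swap tools. Layer sizes are hypothetical, not taken
# from any real app. Both identities share one encoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),        # shared face representation
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),                                        # pixels in [0, 1]
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training (sketch): each person's faces are reconstructed through the
# shared encoder, so it learns features common to both people.
faces_a = torch.rand(8, 3, 64, 64)            # stand-in for aligned face crops
loss_a = nn.MSELoss()(decoder_a(encoder(faces_a)), faces_a)

# The swap: person A's pose and expression, rendered as person B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key design choice is the shared encoder: because it has to reconstruct both faces, it learns identity-agnostic features such as pose, expression and lighting, which is exactly what lets the swapped result track the original performance so convincingly.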

Deepfakes are referred to as ‘synthetic media’ because the technology relies on visual and movement information taken from existing videos (e.g., any video that includes a person’s face).

Let’s not forget that, along with video manipulation, it is easy to sync the video with someone’s voice as well. Deepfake AI can learn from audio, match the mouth’s movements to it, and create a convincing deception. When audio and video are both faked, creating a completely fraudulent persona, that is when things get dangerous. A rough sketch of the lip-sync idea follows.
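
On the audio side, lip-sync models typically map audio features (such as MFCCs) to mouth shapes frame by frame, after which a renderer warps the target face to match. Here is a hedged conceptual sketch; the tiny regressor, the landmark count, and the synthetic tone standing in for speech are all hypothetical.

```python
# Conceptual sketch of audio-driven lip sync: map audio features to mouth
# landmarks, one mouth pose per audio frame. All sizes are hypothetical.
import numpy as np
import librosa
import torch
import torch.nn as nn

# A synthetic 1-second tone stands in for recorded speech.
sr = 16000
audio = np.sin(2 * np.pi * 440 * np.linspace(0, 1, sr)).astype(np.float32)
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T   # (frames, 13)

# A small regressor from 13 MFCCs to 20 (x, y) mouth-landmark coordinates.
regressor = nn.Sequential(
    nn.Linear(13, 64),
    nn.ReLU(),
    nn.Linear(64, 20 * 2),
)
features = torch.tensor(mfcc)
mouth_landmarks = regressor(features).view(-1, 20, 2)  # one pose per frame
# A renderer (not shown) would warp the target face to these landmarks,
# making the mouth move in time with the audio.
```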

Deepfake And Cybercrime

In 2017, a Reddit user aptly named ‘deepfakes’ manipulated some adult content and essentially coined the term itself. That was the same year the ‘Synthesizing Obama’ research came out. Since then, multiple deepfakes of people, from actors to Donald Trump, have been released, and there are now genuine concerns about how disruptive, deceptive and criminal deepfakes can be. As for the audio deepfakes mentioned above, there have already been instances of attempted fraud via voice-imitation software as well.

Last July, the Center for an Informed Public and the Defending Democracy Program at Microsoft sat down with experts from the tech industry and government organizations. The discussion concerned deepfakes, the U.S. elections, and how media is being manipulated by AI. The concern was that deepfakes could disrupt the upcoming elections. Deepfakes aren’t just a joke anymore.

Another opportunity for deepfakes is to exploit the pandemic we find ourselves in during 2020. Security software companies such as Avast expect deepfake disinformation campaigns and other AI-driven malicious activity to be aimed at an isolated, at-home public. Conspiracy videos exploiting key influential figures such as politicians could easily be created to spread false information during the pandemic.

Facebook banned manipulated videos this year and is working with Google to improve detection of deceptive synthetic media and the disruptions that come with it. According to experts at the Harvard Kennedy School, ‘cheapfakes’ (crude manipulations made with ordinary editing tools) are an even wider threat than deepfakes.

The Threat of AI-Based Software in the Future

The danger of deepfakes is that they have become very realistic, which can trick people into believing things that aren’t true. The future of deepfakes could also include blackmail and extortion. Alongside the already existing slew of spyware, ransomware and other malware, deepfakes are another dangerous addition to cybercrime, in a world where governments and economies are already strained and fragile. Because this is the online world, even more problems arise: media can be shared so quickly, as well as bought and sold.

A UCL study ranked fake audio and video content as potentially the most dangerous use of AI because of its possibilities for terrorism and crime. The study also identified 20 ways AI could be connected to cybercrime in the future. Among the concerns noted were:

  • Extortion or blackmail via fake content
  • Discrediting a public figure
  • Advanced phishing messages
  • Fake news created by AI
  • AI-based fake advertising

Deepfakes (fake content) could cause the following in society:

  • Psychological harm
  • Increased distrust of the internet
  • Societal harm and distrust between people

It is critical that we are able to recognize deepfakes and stop them. This means looking into detection algorithms to overcome the problem; a simple baseline is sketched below. As far as legislation is concerned, the DEEPFAKES Accountability Act was introduced in Congress in 2019 to defend people from ‘false appearances’. Passing such laws is important for all cybersecurity issues, so at least in terms of the law stepping in, there is reason to be hopeful for the time being.
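
To illustrate where detection work starts, here is a hedged baseline (not any production detector): a frame-level real-versus-fake classifier fine-tuned from a pretrained CNN, with per-frame scores averaged over a clip. The random tensors stand in for real face crops and labels.

```python
# Baseline detection sketch: a binary (real vs. fake) frame classifier
# fine-tuned from an ImageNet-pretrained CNN. The data is a stand-in.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)        # start from ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training step on a (hypothetical) batch of face crops and labels.
frames = torch.rand(16, 3, 224, 224)            # extracted video frames
labels = torch.randint(0, 2, (16,))             # 0 = real, 1 = fake
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()

# At inference time, average per-frame fake scores over a whole clip.
with torch.no_grad():
    probs = torch.softmax(model(frames), dim=1)[:, 1]
    clip_fake_score = probs.mean().item()
```

Production detectors, such as those built for Facebook’s Deepfake Detection Challenge, are far more elaborate, but this frame-classifier baseline captures the basic approach.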

Although there will be many positive uses for AI technology, it is always important to report deepfakes when you notice them. Keep yourself anonymous and secure online, and be wary of any media that doesn’t make sense to you. It could be a deepfake.