Deepfakes and the Spread of Misinformation: A Growing Concern

29 May 2024

“We are living in a dubious world, one of ‘deep fakes’, where seeing is no longer believing.”

In the past few years, a new kind of AI has emerged, and it has been changing everything since. Deep fake AI, as we know it, creates audio, images or videos of people doing or saying things they never did or said. The synthetic media it generates is so neatly, and often flawlessly, fabricated that it becomes difficult to draw a line between real and fake. For these obvious reasons, deep fakes have become a nagging concern more than they have been put to good use. But what are deep fakes? And why is there so much anxiety and turmoil over this offspring of technology? Let us find out.

What are deep fakes?

If you asked me, I would define ‘deep fakes’ by splitting the name into its two parts, which makes it easier to understand how they work. The term joins the words ‘deep’ and ‘fake’: the technology uses advanced machine learning and deep learning algorithms, specifically ‘deep’ neural networks, to produce ‘fake’ yet deceptively authentic-looking images and videos. This pairing of deep neural networks with fake media production is what gave rise to the dubious term ‘deep fake’.

From funny to felonious, deep fakes go rogue

Deep fake technology has caught the attention of people from all walks of life. This controversial offspring of technology has hit the headlines in recent years for a variety of reasons, many of them unpleasant. The unethical use of deep fakes has raised persistent concerns that must be addressed as a priority.

The paradigm of deep fakes is not confined to fake images; it has extended to creating fake pornographic material, such as videos, of non-consenting victims in order to harass them emotionally, extort money, and profit from their distress.

Moreover, scammers have used AI-generated voice recordings to steal money, making phone calls that convince people to transfer funds. Case in point: more than $240,000 was stolen by criminals who deep faked the voice of an executive at a British energy company. The voice was lifelike. The event would not seem out of the ordinary, except that the voice on the line was not a real person at all. Thieves used voice-mimicking software to impersonate the real executive, and they got away with it.

What's scary about voice deep fakes is the assertion made by an expert at a company that tracks the audio traffic of the largest US banks, in a New York Times piece titled “Voice deep fakes are coming for your bank balance”. He maintains that he has seen a jump in the prevalence of voice deep fakes this year, and in the sophistication of scammers’ voice fraud attempts.

The allure of deep fakes

The developers of this apparently scandalous technology assert that it has positive implications and that deep fakes can play more constructive roles in the times to come: they can help humanize automated phone systems and give a voice to people who are unable to speak. People with diseases like Parkinson’s or multiple sclerosis, which can impair the ability to speak, can also look to deep fake technology to overcome their speech disorders.

Deep fake technology could also be used in filmmaking, video creation, television shows, and other forms of media to deliver more realistic visual effects. A Forbes article considers deep fake technology a friend to humans in that it can generate realistic simulations for good. Whether it is recreating long-dead artists in museums or editing video without the need for a reshoot, the article asserts, deep fake technology has the potential to let us experience things that no longer exist, or that never existed.

Another positive outcome is that it can be intelligently used to create educational videos and simulations that are more engaging and interactive for students, thereby facilitating AI-based learning. The application of AI can also be seen in user personalization, chatbots, and automated self-service technologies that make the customer experience more seamless and increase customer retention for businesses.

What makes deep fakes look more constructive and promising is that they can also be harnessed to create closer-to-reality simulations for training purposes in the military, aviation, and healthcare industries, among others.

But there is always more than meets the eye, and deep fakes are no exception. This brings to light their potential malicious use for impersonation, manipulation, and the spread of fake news. It also brings to the fore the urgency of educating ourselves about the ways AI can be used to someone's detriment. Ignorance, in the blooming world of deep fakes, can leave us paying exorbitant bills for something we never bought, or scrambling to disown a doctored video of bullying that we never made or even imagined.

But before we do that, we ought to know the dangers that come with it. Without learning how AI can be misused and deep fakes generated, we, the hooked humans of the twenty-first century, are sitting on a powder keg of technological growth driven by deep fakes, where a single incident can tarnish our image, drive us into penury, or ruin a life to the point of driving a victim to suicide. The potential consequences are grave, I would say.

Dangers associated with deep fakes

Deep fakes have lately been part of the global discourse because of the dangers and risks they pose to victimized and vulnerable people. They have raised concerns due to their potential for misuse: the technology is categorized as controversial because it affects people negatively in more ways than one. Its impact has been far-reaching and alarming, and the risks it raises have long been a subject of thoughtful discourse in a civilized and concerned society.

The fears looming over deep fake technology are not restricted to mere entertainment; they have risen as a nagging concern in the cybersecurity space. Recent despicable advancements in deep fakes have fueled growing anxieties among people from all walks of life. Politicians, celebrities, public figures, and sportspersons have all been at the receiving end of this villainous child of technology. Some have been ‘deep faked’ into obscene videos, while others have been subjected to fake commercials and speeches that they never endorsed or made. It sounds eerie, and it could happen to any one of us.

The dangers of deep fakes show up in the following ways:

  • Face swapping in images/videos
  • Spreading misinformation
  • Inflicting reputational harm
  • Impersonation
  • Malicious intent
  • Identity theft, and
  • False propaganda

This highlights the need for increased awareness and detection mechanisms to mitigate these harmful effects. So what is the escape route, the counterstrike, against this looming threat? Does regulating AI necessarily mean stifling the innovation AI inherently brings with it? Or are there softer ways of doing it? Let’s figure that out.

What’s the way out?

One promising solution is to fight technology with technology: countering this polemical child of technology with cutting-edge, robust technical solutions. In other words, the key to combating deep fakes may lie in turning technology against itself. By training software to identify subtle inconsistencies, the unnatural blink, the gait that's just a touch off, we can develop a powerful tool to distinguish genuine footage from the manipulated. Such software could act as a discerning eye, spotting the anomalies humans easily miss.
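To make the idea concrete, here is a toy sketch of one such inconsistency check: real people blink every few seconds, while some deep fakes blink rarely or not at all. The function names, thresholds, and the eye-aspect-ratio (EAR) input below are illustrative assumptions, not a real product; an actual system would extract EAR per frame with a face-landmark library and feed many such signals into a trained classifier rather than a fixed cutoff.

```python
def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks: contiguous runs of frames where the eye-aspect
    ratio (EAR) drops below `closed_thresh`, meaning the eye is closed."""
    blinks, eye_closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not eye_closed:
            blinks += 1          # eye just closed: start of a new blink
            eye_closed = True
        elif ear >= closed_thresh:
            eye_closed = False   # eye reopened
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=5):
    """Flag footage whose blink rate is implausibly low for a real person."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes if minutes else 0.0
    return rate < min_blinks_per_minute

# Synthetic demo: a "real" clip blinks ~17 times per minute at 30 fps,
# while a "fake" clip's eyes never close for a full minute.
real_clip = ([0.3] * 100 + [0.1] * 5) * 17   # 17 blinks in ~60 seconds
fake_clip = [0.3] * 1800                     # 60 seconds, no blinks

print(looks_suspicious(real_clip))  # False: plausible blink rate
print(looks_suspicious(fake_clip))  # True: no blinks at all
```

A single heuristic like this is easy for forgers to defeat once it is known, which is why real detectors combine dozens of cues and retrain as generation techniques evolve.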

Additionally, cultivating an understanding of how convincingly fake material can look and circulate in the public domain can make us wary of the media we consume and the sources it comes from. This means adopting multimedia literacy and realizing that not everything in the public domain is to be trusted. Taking AI and its outputs with a pinch of salt holds the key.

Moreover, emerging AI regulations can help resolve many of the problems AI brings with it. If the privacy of individuals can be ensured through stringent legal compliance, the time ahead for businesses and stakeholders can plausibly be free of deep fake-induced scandals and allied fabricated practices in the AI domain.

Lawmakers must also, and primarily, focus on how they might craft and enforce stringent copyright, defamation, and harassment laws to curb and penalize the most sinister deep fakes in circulation today.

Wrapping up

We are entering a completely new chapter of how humans consume and relate to information. As AI takes hold of the digital world, the once-solid ground of truth and transparency seemingly crumbles beneath our feet. We hurtle toward a hazy landscape of manipulated media and manufactured realities, where deception reigns supreme.

This fogginess will create confusion and distraction for unaware and poorly educated users of AI, with serious repercussions for how societies and states use technology.

Technology enthusiasts like us are about to enter a brave new world. Buckle up, because in the coming years, "seeing is believing" goes out the window. We'll need to be digital detectives, digging into the details and scrutinizing every image: who made it, why, and can we trust it? The urge to believe will be strong, but the line between real and fabricated is blurring. This isn't a dystopian future; it's the here and now.