Imagine you are in a routine Monday morning meeting. You see your CEO’s familiar face. You hear your CFO’s voice. They’re discussing a confidential project, a transfer of ₹200 crores that needs to happen now.
Everything looks real. Everything sounds real. Everyone agrees.
You click “Approve.” You close the laptop, happy with the meeting.
Only later do you realise.
Not a single person on that call was real. The faces, the voices, the subtle nods of agreement were all generated by an algorithm. This isn’t a script from a science-fiction movie; it is the new frontier of white-collar crime.
And before you dismiss this as far-fetched: in 2024, a finance employee at a multinational firm in Hong Kong was tricked into transferring HK$200 million (approximately ₹225 crore) after attending a deepfake video call in which every single participant, including the CFO, was AI-generated. This happened in a real office. To a real person. On a real Monday.
This shows that deepfakes have moved far beyond “face-swap” jokes to become precise, scalable weapons. They don’t just bypass our passwords; they bypass our human judgment. When we can no longer trust our eyes and ears, our grip on reality begins to crumble.
As we move deeper into this “synthetic reality,” a heavy question looms: how can we protect ourselves, and what protection does the law give us against these crimes?

What Are Deepfakes & AI-Generated Content?

We often think of “fake” content as something that has been edited: a photoshopped image, a video with a clumsy cut. But deepfakes represent a wholly new shift. They aren’t just altered; they are built from scratch.
A deepfake is a synthetic reality. It is made with AI tools that use deep learning to replicate the way someone’s eyes crinkle when they laugh, the specific rhythm of their speech, even the subtle tilt of their head. Once the AI “learns” these patterns, it can make that person appear to say or do anything, with a precision that feels unsettlingly real. Most deepfakes are built on Generative Adversarial Networks (GANs) or diffusion models. To understand a GAN in simple language, imagine a digital team of two artists:

  • The Creator (The Generator): This part of the AI tries to paint a piece of video or voice that looks and sounds real.
  • The Critic (The Discriminator): This part acts as a detective, trying to find the flaws.

They go back and forth millions of times. The Creator produces a fake, the Critic points out why it looks fake, and the Creator tries again. This loop continues until the “fake” is so convincing that even the Critic can’t tell the difference, as the sketch below illustrates.
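For the technically curious, here is a minimal, illustrative sketch of that Creator-versus-Critic loop in Python (using PyTorch). It trains on toy numbers rather than faces; the dimensions, layer sizes, and learning rates are arbitrary choices made for illustration, not the settings of any real deepfake model.

```python
# A toy GAN loop: the Creator (generator) learns to produce samples
# that the Critic (discriminator) cannot distinguish from "real" data.
# Illustrative only; real deepfake models are vastly larger.
import torch
import torch.nn as nn

latent_dim = 16   # size of the random "seed" the Creator starts from
data_dim = 8      # stand-in for a frame of video or a slice of audio

# The Creator: turns random noise into a candidate fake.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# The Critic: outputs a score for how "real" a sample looks.
critic = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # toy "real" data
    fake = generator(torch.randn(64, latent_dim))

    # Step 1: the Critic learns to tell real from fake.
    c_loss = (loss_fn(critic(real), torch.ones(64, 1)) +
              loss_fn(critic(fake.detach()), torch.zeros(64, 1)))
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()

    # Step 2: the Creator learns to fool the updated Critic.
    g_loss = loss_fn(critic(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each pass through the loop is one round of the argument: the Critic gets better at spotting fakes, which forces the Creator to get better at making them.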
Deepfakes are engineered to manipulate a person’s senses, and deception follows. They generally fall into four categories:

  • Video Deepfakes: Where a person’s face or body is digitally replaced, placing them in situations that never happened.
  • Audio Deepfakes: Where a voice is cloned to mimic speech. These are increasingly used in “urgent” phone scams because our ears are often easier to fool than our eyes.
  • Image Deepfakes: Entirely AI-generated photos of people or events that look like high-resolution journalism but are purely mathematical creations.
  • Text-based AI Content: Convincingly generated messages or speeches that mirror a specific person’s tone and vocabulary perfectly.

And in case you think these categories operate one at a time, they don’t. The Hong Kong fraud combined video, audio, and real-time interaction simultaneously. That was not four separate threats but one seamless, coordinated attack.
Modern deepfakes are dangerous not only because they look real, but because they are extremely easy to make.
A few years ago, creating a good deepfake required a big budget and a team of data scientists. Today, that has changed. Powerful AI tools are now available to anyone with a smartphone. In minutes, anyone can generate content that can bypass human recognition and high-level security. This is the critical turning point. The challenge is no longer identifying what is fake; we have entered an era where we are forced to question even what looks undeniably real.

[Image: Deepfake technology representation]

How Deepfakes Became a Threat

What began as a fascinating milestone in artificial intelligence has transformed into a tool capable of manipulating reality itself. Deepfakes didn’t start as a threat; they started as a wonder. But somewhere between wonder and mischief, they shifted towards exploitation. And that shift has consequences far beyond what we can imagine.

  1. The most dangerous thing about deepfakes isn’t the technology; it’s how easy it has become to use. Between 2017 and 2019, creating a convincing deepfake required a high-end GPU, machine-learning expertise, and days of work. Today? A single photo. Three seconds of audio. One free app can produce the same result, or better.
    The power has shifted from labs to users: anyone with an internet connection can now type a prompt and get what would have taken extraordinary resources just five years ago.
  2. A single social-media photo is now enough to generate non-consensual explicit content. Public figures have had their faces used to endorse scams they never knew existed. The damage, once done, is almost irreversible: even debunked content lives on in digital archives and in emotional memory. But the real danger of deepfakes is not just what they do; it is what they make us believe.
  3. At the personal level, you have to second-guess whether the voice on the phone is really your family member’s. At the social level, misinformation spreads faster than verification can catch up, and genuine content is routinely questioned. At the institutional level, even courts now face a “crisis of evidence,” as video and audio recordings can no longer be assumed authentic. Standard digital evidence is no longer standard.
  4. Deepfakes don’t break your firewall; they break your judgment. Human trust is built on three things: faces, voices, and familiarity. Deepfakes replicate all three with precision. They’re not breaching the technology; they’re breaching human instinct. The result? We don’t even know we’re being deceived, because everything feels right.
  5. The real threat isn’t that we’ll believe lies. It’s that we’ll stop believing the truth. This is the “Liar’s Dividend,” a concept coined by legal scholars Bobby Chesney and Danielle Citron. It works like this: once the public knows deepfakes exist, people can dismiss real evidence as “probably AI-generated.” Authentic footage of misconduct? “That’s a deepfake.” Genuine audio recording? “Manipulated.”
    We’ve already seen it in courtrooms. We’ve seen it in politics. We’re watching the baseline of proof shift until accountability becomes nearly impossible.
    When both truth and falsehood look the same, belief becomes fragile.

Deepfakes and the Law: India’s Evolving Regulatory Response

Most of our laws were built for a world where deception had a human face. A physical signature. A real voice. Today, the law has to catch up with a world where the crime runs on an algorithm and the “criminal” might not even be a person.
So, where does Indian law stand?

  1. India has never had a specific “deepfake law.” For a long time, we made do with what we had. The IT Act, 2000 gave us Section 66C (identity theft) and Section 66D (cheating by personation) for fake profiles and digital fraud, while Section 67 covered obscene content, the closest thing we had to a shield against non-consensual deepfakes. But none of it was built for this.
  2. The Bharatiya Nyaya Sanhita, 2023, which replaced the IPC, brought in provisions on forgery, cheating, and criminal intimidation, which prosecutors now stretch to cover AI-driven fraud.
  3. The DPDP Act, 2023 introduced something genuinely important: consent. If an AI system uses your face, voice, or data without permission, it isn’t just ethically wrong; it’s a violation of your digital rights.
    All of these, however, are reactive. They kick in after the video goes viral, after the money is transferred, after the damage is done. None of them regulates the creation of synthetic deception itself.

There are signs of change in how lawmakers and judges think about this. The legal system is slowly moving from responding to harm to recognising its source.
There’s now a growing acknowledgment that an AI-generated lie is different from a traditional rumour. It’s more convincing, and it’s nearly impossible to erase once it’s out. Recognising that this is a different kind of problem is the first step toward building a legal framework that actually fits the issue.

In the last few years, the approach has shifted from chasing criminals after the fact to holding platforms accountable in real time. The key changes include:

  • Mandatory AI disclosure: AI-generated media is increasingly required to carry clear labels or watermarks. If it’s not real, you have a right to know.
  • Rapid takedowns: platforms are now expected to remove reported deepfakes within 24 to 36 hours. In a world where viral damage happens in minutes, speed isn’t just good practice; it’s the only meaningful intervention.
  • Real liability: social media companies can no longer hide behind the “passive host” defence.
  • Traceability: there is an active push to trace AI-generated content back to its “first originator” (a rough sketch of what such labelling could look like follows this list).

The goal is simple: to make it harder for creators to hide behind a digital mask.
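None of these rules prescribes a specific technical format, but a small, hypothetical sketch can make “labelling” and “first originator” concrete. Everything below is invented for illustration: the field names, the function, and the record layout are assumptions, not any mandated Indian standard (real-world efforts such as the C2PA content-credentials framework are far more elaborate).

```python
# Hypothetical sketch: a provenance record a platform might attach to
# an upload, combining an AI-disclosure label with a traceable
# fingerprint of the file and its first uploader. Illustrative only.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_label(media_bytes: bytes, uploader_id: str,
                          ai_generated: bool) -> str:
    """Build a JSON label meant to travel with the media file."""
    return json.dumps({
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "first_originator": uploader_id,   # traceability hook
        "ai_generated": ai_generated,      # mandatory disclosure flag
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

# Example: labelling an (imaginary) AI-generated clip at upload time.
print(make_provenance_label(b"<video bytes>", "user-42", ai_generated=True))
```

The hash lets anyone later check whether a circulating copy matches the originally labelled file, and the originator field is the kind of hook the traceability requirement contemplates.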

Perhaps the most legally significant development in this space is the recognition of what we might call the “digital twin”: the idea that your voice, face, and persona belong to you, even in digital form.

In Arijit Singh v. Codible Ventures LLP, the Bombay High Court recognised that a person’s voice and personality attributes are theirs to control. You cannot clone a celebrity’s voice for an AI-generated song without consent. Full stop.
This matters beyond celebrities. It signals that the law is beginning to treat our digital identities as extensions of our physical selves; unauthorised replication is no longer a mere mistake, it’s a rights violation.

[Image: Law and AI concept]

What Lies Ahead?

Can we still believe our eyes? If deepfakes are already changing how we see reality today, the bigger question is what happens when this technology gets even more powerful. We are moving from “seeing is believing” to “nothing is certain.” And no law, platform, or algorithm is fully ready for that.

  1. The biggest risk isn’t just that deepfakes are getting harder to detect. The real danger is what they do to our minds. When you can no longer tell the real from the fake, you don’t just stop believing lies; you stop believing the truth too. If any video can be dismissed as “probably AI,” then reality loses its weight. And that’s a far scarier problem.
  2. As a law student, I’ll admit that the legal system has real limits here. Law reacts. Technology moves first. And right now, the gap between the two is significant. Three specific problems stand out:
    • The Detection Gap: If authorities themselves can’t differentiate between real and fake, how will enforcement even begin? You cannot point a finger at what you cannot prove.
    • The Jurisdiction Gap: The creator of a deepfake could be sitting in another country, using anonymous tools and multiple VPNs. Laws stop at the border; deepfakes don’t.
    • The Free Speech Gap: Anti-deepfake laws, if drafted carelessly, could silence satire, dissenting voices, and legitimate criticism. The difference between a deepfake scam and a deepfake joke matters, and that line isn’t clear yet.

If law alone can’t fix this, the responsibility comes back to all of us.
Individuals should develop a critical eye. Verify before you share. Use a second channel to confirm anything sensitive. Don’t let urgency override your judgment.

Platforms should optimise for more than engagement: invest in tracing the origin of content, and make verification a feature.

Institutions should make digital literacy a priority. In 2026, knowing how to spot synthetic media is as important as knowing how to read.
The solution isn’t just legal. It’s cultural.

Courts will catch up. New laws will be passed. Precedents will be set. But the deeper challenge isn’t in the laws; it’s in how we, as individuals and as a society, decide what to believe and whom to hold accountable. Because when truth itself becomes uncertain, the problem isn’t just legal.
It’s deeply, uncomfortably human.
