Deepfakes Didn’t Invent Cybercrime, They Just Perfected It
Fraud is nothing new. It is a reality that some people will take advantage of the trusting and even the minimally naïve. Last year, in a moment of high stress and low sleep, even I, a 25-year cybersecurity veteran, was nearly duped by a phone call from the “FBI” claiming I was involved in identity theft! Luckily, the FBI agents working in “Virginia” sounded like a call center in Pakistan! I did not completely fall for the ruse, but this was a simple, old-school phone fraud scheme with cleverly skinned websites to feign legitimacy. There was no deepfake video or voice on the other side to convince me further.
In addition, I've never been one to think that laws specifically written against computer crimes are necessary. Theft is theft; fraud is fraud; breaking and entering is breaking and entering. Does it really matter what tools you use to accomplish said crime?
For years, the cybersecurity industry has framed threats in terms of tools: malware families, exploit kits, zero-days, phishing frameworks. Each new technique arrives with a new name, a new category, and often a call for new laws to address it. Deepfakes are the latest example. Their rise has triggered urgent debates about regulation, attribution, and whether existing legal frameworks are “enough.”
Coming back to that question: “Does it really matter what tools you use to accomplish said crime?” While it may feel glib, that question is not dismissive; it’s clarifying. And in the context of deepfakes, it gets to the heart of what changed, and what didn’t.
What Exactly Is a Deepfake?
A deepfake is synthetic media (typically audio, video, or images) generated or altered using machine learning (AI) models to convincingly imitate a real person, object, or event. Most modern deepfakes rely on generative models trained on large volumes of real media to reproduce facial movements, vocal characteristics, or visual context with increasingly disturbing realism. The defining feature is not that the content is fake but that the output can now pass as authentic, to both humans and automated systems, at scale.
Anyone spending time on social media is already being negatively impacted by this regularly, as videos are presented as authentic when they aren’t. I am constantly left wondering whether a video of a small child talking to their parents or a time-lapse video of a backyard revamp is real or fake. It’s easy to disregard those cases as mere entertainment, but deep down, this is eroding a fundamental component of how we deal with the world: trust. The old phrases “I’ll believe it when I see it” and “pictures or it didn’t happen” become meaningless, and that is a very unnerving feeling.
What matters from a security and legal perspective is that deepfakes undermine long-standing shortcuts to trust, a key security factor. A familiar voice, a face on a video call, or a recorded message, once treated as implicit verification, can now be fabricated cheaply and rapidly. Deepfakes do not introduce new criminal intent; they dramatically reduce the cost and friction of impersonation, deception, and social engineering. In that sense, they function less like a new class of crime and more like a force multiplier for crimes that already exist.
Deepfakes as an Evolution, Not a Revolution, Targeting Trust
Deepfakes did not create new criminal intent. They did not invent fraud, impersonation, or social engineering. What they did was dramatically reduce friction. Historically, impersonation attacks required tradecraft:
- A convincing spoofed email
- Insider knowledge of organizational processes
- Time spent grooming a target
- A willingness to risk exposure through repeated interaction
Deepfakes compress that effort. With minutes of publicly available audio or video, an attacker can now synthesize a CEO’s voice delivering urgency, a familiar face appearing on a video call, or a “trusted” authority overriding hesitation. The crime is the same. The efficiency is new. This distinction matters because it reframes deepfakes not as a novel legal problem, but as a force multiplier for crimes we already understand.
Most deepfake-enabled cybercrime succeeds without exploiting a single software vulnerability. No buffer overflow. No privilege escalation. No lateral movement. Instead, the attack path looks like this:
- Assume a trusted identity
- Create urgency or authority
- Trigger a legitimate human action
- Let the system do exactly what it was designed to do
In other words, deepfakes don’t break systems. They convince systems to be used correctly, for the wrong outcome. That’s why many of the largest losses attributed to deepfakes never involved compromised infrastructure. Funds were transferred using approved processes. Credentials were shared willingly. Access was granted by authorized users.
From a defender’s standpoint, this is uncomfortable. Our controls are optimized to detect anomalous behavior, not plausible human decisions.
Deepfakes Expose an Old Weakness We Ignored!
For years, enterprises treated certain signals as inherently trustworthy:
- A familiar voice on the phone
- A face on a video call
- An urgent request from the “right” title
Deepfakes didn’t weaken those signals. They proved they were always weak. We built financial approval workflows, helpdesk processes, and executive exceptions on social assumptions rather than verifiable identity. Deepfakes simply automated the exploitation of that gap. This is why so many deepfake incidents feel less like “hacks” and more like insider mistakes, because functionally, that’s what they are. The system behaved exactly as designed. The human was persuaded to act.
Why New Laws Feel Appealing (and Why They’re a Distraction)
Calls for “deepfake-specific” legislation often come from a good place. As stated before, deepfakes feel deeply unsettling (pun intended) because they undermine one of our oldest security assumptions: that seeing and hearing are reliable forms of verification. It is therefore not surprising that people in power feel the cause of such discomfort should be regulated in some way.
We already see this impulse at work in laws like the EU AI Act’s requirement to label deepfake audio and video, China’s Deep Synthesis regulations that mandate identity verification and watermarking for synthetic media, and proposed U.S. bills such as the DEEPFAKES Accountability Act, which focus on disclosure rather than outcomes. Each treats the existence of synthetic media as the problem to be managed, rather than the underlying fraud, impersonation, or deception. The result is regulation that chases artifacts and labels while the crime itself remains unchanged.
Focusing on the medium risks missing the message. Fraud statutes already cover:
- Impersonation
- Financial deception
- Unauthorized access
- Material misrepresentation
The presence of AI-generated audio or video does not change the underlying offense; it changes the believability threshold. Writing laws that chase tools rather than outcomes creates two problems:
- They age poorly - The next technique will arrive faster than the statute can be updated.
- They shift accountability away from controls - If the law becomes the primary defense, organizations delay hard conversations about process, verification, and identity assurance.
The better question isn’t “Do we need new laws?” It’s “Why were we still relying on trust signals that were never designed to be cryptographically strong?”
The Strategic Shift Defenders Must Make
If deepfakes are a force multiplier for social engineering, then the defensive response isn’t just better detection, it’s better design. Organizations that are adapting successfully tend to shift in three ways.
From identity by familiarity to identity by proof: Who you sound like matters less than what you can cryptographically demonstrate.
- Replace “recognizable voice or face” approval with cryptographic or protocol-based verification for sensitive actions (e.g., signed approval tokens for wire transfers or access changes).
- Require out-of-band verification via pre-established channels when authority is invoked (e.g., finance approval must be confirmed via an authenticated workflow tool, not by a call or video).
- Treat biometric signals (voice, face, video) as inputs, not authenticators. Use them only in combination with device trust, session integrity, and user behavior.
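To make the contrast concrete, here is a minimal sketch of what “identity by proof” could look like for a sensitive action: an approval token bound to a secret the approver holds rather than to a recognizable voice or face. The key store, function names, and shared-secret HMAC scheme are illustrative assumptions, not a prescribed implementation; a real deployment would more likely use hardware-backed asymmetric keys or WebAuthn assertions.

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-approver secrets; in practice these would live in an HSM or
# be replaced by asymmetric keys bound to a hardware token.
APPROVER_KEYS = {"cfo": b"cfo-secret", "director_of_finance": b"dirfin-secret"}

def sign_approval(approver_id: str, action: dict) -> str:
    """Produce a token proving this approver signed this exact action recently."""
    payload = json.dumps(
        {"approver": approver_id, "action": action, "ts": int(time.time())},
        sort_keys=True,
    )
    tag = hmac.new(APPROVER_KEYS[approver_id], payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{tag}"

def verify_approval(token: str, approver_id: str, max_age_s: int = 300) -> bool:
    """Accept the approval only if the signature matches and the token is fresh."""
    payload, _, tag = token.rpartition(".")
    expected = hmac.new(APPROVER_KEYS[approver_id], payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    data = json.loads(payload)
    return data["approver"] == approver_id and time.time() - data["ts"] <= max_age_s
```

Whether the proof is an HMAC, a hardware-key assertion, or a signed workflow token matters less than the principle: the approval cannot be produced by imitating a person.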
From exception-based trust to invariant controls: “Urgent” should never mean “unverifiable.”
- Eliminate “executive exceptions” for urgent requests involving money, credentials, or access. If the process can be bypassed, it will be.
- Encode non-bypassable rules: no fund transfers without dual approval, no MFA resets without ticket correlation, and no vendor changes without reconciliation.
- Treat urgency as a risk signal, not a justification. Requests framed as “time-sensitive” automatically trigger additional verification, not fewer checks.
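Below is a minimal sketch of what encoding such invariants might look like, with the rules living in code rather than in a bypassable procedure. The request fields and rule set are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    kind: str                                    # e.g., "fund_transfer", "mfa_reset", "vendor_change"
    approvals: set = field(default_factory=set)  # IDs of independently verified approvers
    ticket_id: str | None = None                 # correlated helpdesk ticket, if any
    claimed_urgent: bool = False                 # requester framed this as time-sensitive

def policy_allows(req: Request) -> bool:
    """Invariant checks that no rank, urgency, or familiarity can waive."""
    if req.claimed_urgent:
        return False  # urgency is a risk signal: route to extra verification, never fast-track
    if req.kind == "fund_transfer" and len(req.approvals) < 2:
        return False  # dual approval, always
    if req.kind == "mfa_reset" and req.ticket_id is None:
        return False  # no MFA reset without ticket correlation
    return True
```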
From awareness training to decision architecture: Training helps, but systems must make the right action easier than the wrong one.
- Design workflows with one-click escalation, built-in verification prompts, and clear “pause and confirm” paths.
- Embed contextual warnings directly into tools (e.g., “This request originated outside normal approval channels” rather than generic phishing banners).
- Instrument approvals so employees are rewarded for slowing down, escalating, or declining suspicious requests, not penalized for friction.
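As a small illustration, the workflow tool itself could generate the contextual warning rather than relying on the employee to notice. The channel names and message text below are assumptions made for the sketch.

```python
# Hypothetical set of channels the workflow tool treats as routine.
APPROVED_CHANNELS = {"finance_portal", "ticketing_system"}

def contextual_banner(request_channel: str) -> str | None:
    """Return a warning to render inline with the request, or None if routine."""
    if request_channel in APPROVED_CHANNELS:
        return None
    return (
        f"This request arrived via '{request_channel}', outside normal approval channels. "
        "Pause and confirm through the finance portal; escalating will not be penalized."
    )
```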
Deepfakes don’t require us to rethink crime. They require us to rethink trust as a control surface. Consider a deepfake-enabled attack targeting a finance team:
An employee receives a video call from what appears to be the CFO requesting an urgent wire transfer. Under a traditional model, the realism of the call creates pressure to comply. Under a re-architected model, the request cannot proceed via voice or video at all. The employee must initiate the transfer through a finance system that requires approval from both the Director of Finance and the CFO. Both must log into the finance system and approve using their YubiKeys. Then, an MFA push approval is sent to both via the finance system’s secure mobile app. Lastly, the urgency of the request automatically increases scrutiny by sending an alert to the on-call financial auditor. Even if the deepfake is perfect, the system absorbs the deception without relying on the employee to identify it.
The attack fails not because the deepfake was detected, but because trust was never granted based on appearance alone.
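For illustration, the re-architected flow above could be reduced to something like the following sketch, where voice and video simply have no entry point and urgency widens scrutiny. The role names and the verify_hardware_key, send_mfa_push, and alert_auditor callables are hypothetical placeholders for real integrations, not an actual finance-system API.

```python
REQUIRED_APPROVERS = {"cfo", "director_of_finance"}

def execute_wire_transfer(transfer, approvals: dict, urgent: bool,
                          verify_hardware_key, send_mfa_push, alert_auditor) -> bool:
    """Proceed only on in-system, hardware-backed, dual-approved requests."""
    if urgent:
        alert_auditor(transfer)                       # urgency increases scrutiny automatically
    if set(approvals) != REQUIRED_APPROVERS:
        return False                                  # both named roles must approve in-system
    for role, assertion in approvals.items():
        if not verify_hardware_key(role, assertion):  # e.g., a YubiKey-backed login assertion
            return False
        if not send_mfa_push(role, transfer):         # push approval via the secure mobile app
            return False
    return True                                       # only now does the transfer execute
```

Even a flawless deepfake has nothing to present to this flow; the convincing face on the call is simply not an input.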
The Counterargument
There are some cases where acknowledging deepfakes explicitly improves enforcement. FinCEN’s 2024 guidance to financial institutions, for example, doesn’t invent a new category of crime; it treats deepfake-enabled identity attacks as a signal that existing AML and KYC controls are being bypassed and need reinforcement, not reinterpretation. Likewise, the FTC’s proposed expansion of impersonation rules focuses less on synthetic media itself and more on commercial actors who knowingly provide the means for large-scale impersonation fraud, extending long-standing consumer-protection logic rather than redefining deception. These are not arguments that deepfakes are a new crime, but that, in a few domains (identity, finance, and consumer protection), calling out the tool can help harden systems without confusing novelty for harm.
Closing Thought: The Tool Is Not the Crime
Deepfakes are alarming because they blur reality. But in cybersecurity, reality has always been negotiable. Attackers negotiate it with deception, pressure, and timing. So yes, it does matter what tools attackers use, but not in the way we often think. The tool doesn’t redefine the crime. It exposes whether our defenses were built on verification or assumption. Those who redesign trust as a structural property, rather than a human judgment, will be far more resilient. Deepfakes have made one thing painfully clear: Assumptions do not scale, and trust must be earned, even when it looks and sounds familiar.