Deepfakes - A Red Teamer's Perspective

Deepfakes are everywhere right now. Clients are requesting deepfake social engineering engagements. Vendors are building AI agents to perform vishing calls. Companies are creating blog posts highlighting the threat of this “highly sophisticated” attack vector.

This all makes sense - deepfakes are getting to the point that they’re nearly indistinguishable from reality. But with so much media hype, it’s easy to be skeptical. Remember blockchain? Still waiting for that revolution to happen.

So naturally, the question comes up: are deepfakes actually effective in social engineering engagements?

For this discussion, assume that all parties involved have consented to having their likeness used in deepfakes, and that ethical security practitioners are using internal models - no uploading client faces or voices to third-party sites. There are plenty of ethical and legal concerns beyond that, but I won’t be discussing them in this post.

Why You Should Listen — Or Ignore Me

I’ve done this work. I’ve run social engineering engagements with deepfake video and audio components. I’ve tinkered with tools, built models, crafted pretexts, and debriefed with clients afterwards.

But I’m also just a single practitioner. My engagements are personal anecdotes, not a census. My opinions were shaped by what I’ve seen and by the peers I work with, and they might not hold in every environment.

So take this for what it is: field notes from someone who’s been doing the work, not gospel.

The Case FOR Deepfakes

We must stay ahead of the curve. Security practitioners should stay current with emerging attack techniques. Video and voice verification, previously strong controls, can now be thwarted with deepfakes.

Deepfakes are easy to generate. All a threat actor really needs is an ElevenLabs account and a 10-second audio clip to clone someone’s voice. Source material can be trivially obtained from CEO marketing posts. The barrier to entry is low, so what’s stopping threat actors from making deepfakes?
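To make that low barrier concrete, here is a minimal voice-cloning sketch. In keeping with the ground rules above (internal models, nothing uploaded to third-party services), it uses the open-source Coqui TTS library’s XTTS v2 model, which runs entirely locally, rather than a hosted service; the reference clip, file names, and script text are hypothetical illustrations, not material from any engagement.

```python
# Minimal local voice-cloning sketch using the open-source Coqui TTS library.
# XTTS v2 runs entirely on your own hardware once downloaded - no client
# audio leaves the machine. Assumes `pip install TTS` and a consenting subject.
from TTS.api import TTS

# Load the multilingual XTTS v2 model (fetched once, then cached locally).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice from a short reference recording and synthesize new speech.
tts.tts_to_file(
    text="Hi, it’s the CFO. I need you to process a payment this afternoon.",
    speaker_wav="consented_reference_clip.wav",  # hypothetical ~10-second clip
    language="en",
    file_path="cloned_voice.wav",
)
```

A few lines of Python and a clip scraped from a marketing video is genuinely all it takes, which is exactly the point: the tooling is not the hard part.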

Fear, uncertainty, and doubt. This technology is advancing faster than we think, and clients have every right to be worried about deepfakes. That is exactly why we, as security practitioners, need to take these concerns seriously.

The Case AGAINST Deepfakes

Limited confirmed cases of deepfake attacks. There are some notable reports, but how many of them have actually been confirmed? Attackers may not be leveraging deepfakes nearly as often as the media implies.

Traditional attack vectors still work. Why would a threat actor spend extra resources on deepfakes when users are still falling for basic phishing emails? If the basics still work, maybe companies are not ready for deepfake testing yet.

Hype-driven demand. Clients may be asking for the wrong reasons. When they request deepfake testing based on news headlines rather than their threat model, we are only addressing their fear, not evaluating their risk.

Conclusion

The threats that deepfakes pose are real and too significant to ignore. Like it or not, deepfakes are now part of the social engineering landscape.

That said, deepfakes should be used only when and where they make sense. The scope must be surgical. All parties involved must consent. All models must run internally, without uploading data to third-party sites, to protect the client’s privacy. And clients must be clear about exactly what they want to assess with a deepfake.

But be cautious — just because a deepfake attack succeeded doesn’t mean the deepfake was the reason for its success. If an employee followed a deepfake CFO’s instructions and processed an unauthorized financial payment, was it the deepfake that tricked them, or were insufficient payment controls the real finding all along?

So, are deepfakes a credible threat? Or just an industry buzzword?

Yes.