Is the evolving threat of impersonation killing visual trust?

The 2025 Phishing by Industry Benchmarking Report reveals a rapid decline in phishing click rates (globally, an 86% reduction) after 12 months of security training. But while you successfully minimise email threats, cyber criminals are capitalising on sophisticated generative artificial intelligence (GenAI) technologies to shift to vishing (voice phishing) attacks.

These hyper-personalised, identity-focused social engineering attacks let threat actors use real-time voice cloning to bypass standard security measures. They are so effective that in 2024, Entrust detected a deepfake attack attempt every five minutes. Fast forward to 2025, and deepfake scams have nearly doubled in the UK, impacting 94% of the users polled by Sumsub.

The threat has grown even more tangible since a multinational business fell victim to an elaborate spearphishing and vishing campaign. Fraudsters utilised real-time AI voice cloning and deepfake video to impersonate a Chief Financial Officer (CFO) and trick an unsuspecting employee into transferring $25.6 million to the attacker’s bank accounts.

It’s evident that, as AI tools become widely accessible and continue to improve, our core human senses of sight and hearing are no longer enough on their own to establish trust, regardless of how skilled your employees are.

Vishing and AI-based social engineering attacks can catch anyone off guard. Is your business’s user identity verification strategy ready?

Your video meetings may no longer be secure. Here is why

A 2025 study demonstrated that humans can only spot 49% of synthetic audiovisuals. So, how safe do you believe your videoconference calls are when attackers:

  • Utilise publicly available footage from corporate websites, social media profiles and other sources to create a perfectly credible double of your Chief Executive Officer (CEO) or CFO
  • Need only three seconds of audio to generate an 85% accurate voice clone?

With GenAI, cyber criminals have pushed the boundaries of deception beyond leaving fake voicemails. They now infiltrate live video meetings. The challenge isn’t only detecting phoney video or audio; it’s understanding what, and who, to trust.

Thus, while no business likes to admit it could fall victim to a deepfake scam, advanced GenAI tools have changed the game. AI has industrialised deception to the point where failing to recognise a deepfake is a more common risk than you may realise.

The psychology behind deepfake scams

At the heart of this manipulation is a psychological game that preys on our inherent trust in authority. When faced with a figure who appears to be your trusted leader, scepticism can easily fade away. Moreover, the combination of seeing a familiar face and hearing a well-known voice makes you even more susceptible to manipulation by creating a false sense of security.

That is a dangerous mix, especially when large sums of money or sensitive/confidential information are at stake.

When it comes to matters of trust, traditional security measures are no longer enough

When your employee enters a conference call, the video conferencing platform validates their account by ensuring the username and password they typed are correct. However, it doesn’t verify that the individual behind the camera is really the flesh-and-blood owner of those credentials.

Malicious actors in possession of valid usernames and passwords can exploit this critical blind spot. They know that by masquerading as an executive or a manager, they can log in without triggering any additional identity verification checks.

That is what makes identity verification protocols that go beyond simple credentials and visual checks essential, especially during video conference calls involving the finance department or discussing sensitive transactions, data, or confidential projects.

The impacts of deepfake scams on finance departments

In 2025, over 53% of U.S. and UK-based finance professionals have fallen victim to a deepfake fraud attempt. In 2026, an employee of a business based in Birmingham transferred £340,000 after a call with a fraudster who used AI to mimic the business director’s voice.

Advanced deepfake technologies don’t just exploit vulnerabilities in your “human firewall” (i.e., sight and hearing), particularly when targeting finance departments. Threat actors thrive by preying on outdated internal processes and checkpoints, often by creating a sense of urgency and leveraging perceived authority.

As a result, additional financial controls based on a tangible form of identification, such as a follow-up email or a call-back protocol, fall short against attackers who can create voice and video requests that realistically imitate a boss or executive.

That also extends to the reliance on single-person purchase authorisations, another easy target for fraudsters.

Navigating the new frontier: The shift to identity verification

Recent Penn State University research revealed that even some face-liveness identity verification apps are vulnerable to deepfakes. These applications verify a user’s identity through static or video images, audio, or by validating a specifically requested action.

So, when human senses can be fooled, and some critical applications’ advanced identity verification frameworks can be bypassed, what options are left for businesses aiming to verify identities securely?

Get the basics right and improve identity verification

To shield your business from deepfake threats and ensure that only authorised users access your systems and data, implement a multi-layered defence approach. Start with the basics: combine technology with robust verification protocols and comprehensive employee training.

  • Train your users. A Santander study shows that 53% of users in the UK have never heard of the term deepfake or have misunderstood its meaning. Raise basic user awareness through engaging training methods, such as gamified learning or simulations, which improve retention. For example, run a vishing simulation where employees receive a call from someone impersonating your CFO requesting sensitive information, and award points or rewards for recognising the threat.
  • Implement clear processes. Create and share well-documented processes to ensure consistent and continuous checks. For instance, when a user requests a password reset, the IT department should call them back on a previously verified phone number.
  • Set up multiple verification steps. Create different channels and approval steps to validate sensitive requests, such as funding transfers or access to sensitive information. It will add an extra layer of security. So, if an employee requests access to a database containing sensitive customer data, confirm the user’s identity with a callback and require approval from their second-level manager via email.
  • Know your users. Maintain a centralised record of key personnel, such as VIPs, administrators, and anyone with access to sensitive data. A single authoritative source will increase clarity and minimise security risks by ensuring that appropriate checks are applied consistently.
  • Take extra care for high-risk users. Privileged accounts such as administrators, finance employees, or managers are often the most targeted by cyber criminals and malicious insiders. Develop strict verification policies for all those users with higher access levels.
  • Use secure authenticator apps. Applications such as Google Authenticator and Microsoft Authenticator generate unique login codes offline, directly on the user’s device. These codes are valid for a limited time. As a result, even if an attacker perfectly imitates your business’s executive during a video call, they will not have the required device to approve the transaction.
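The offline codes those authenticator apps produce follow the time-based one-time password (TOTP) standard, RFC 6238: an HMAC over the current 30-second time window, keyed with a secret shared only between the server and the user’s enrolled device. A minimal Python sketch (the secret value here is purely illustrative):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238)."""
    counter = int(at // step)                 # current 30-second time window
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret is provisioned at enrolment and never leaves the device.
# An attacker impersonating an executive on a video call does not hold
# it, so they cannot produce a valid code for the current window.
secret = b"example-secret-provisioned-at-enrolment"
print(totp(secret, time.time()))
```

Because the code depends on both the shared secret and the current time window, it expires within seconds and cannot be replayed later, which is what makes it a useful second factor against real-time impersonation.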

By leveraging these methods and treating identity verification as your primary security perimeter, you can reduce the risk of deepfakes across all departments and ensure trust in online interactions.

Strengthen your defences against social engineering attacks: A proactive approach

According to the Unit 42 Global Incident Response Report, social engineering is the most reliable and impactful intrusion method. In fact, it was the top initial access vector in 36% of all cases analysed between May 2024 and May 2025.

This becomes particularly dangerous when social engineering attacks target human-based financial transaction processes in which a single person can authorise a transfer of funds. Implement mandatory multi-person authorisation protocols for transactions of significant amounts. This ensures that no single employee bears the responsibility of releasing funds without oversight. For instance:

  1. Enforce multiple approvals. Require that any transaction exceeding a specified threshold must get approval from multiple individuals. These simple steps ensure that no single employee will be able to authorise significant financial decisions.
  2. Establish challenge-response protocols. Develop pre-arranged safe words or phrases for finance teams to verify the authenticity of requests. It will add another layer of verification, prompting employees to confirm requests verbally before proceeding with the transaction.
  3. Set up out-of-band verification. Implement a secondary validation method for voice or video-instructed payments. For example, if a CEO requests a wire transfer during a video conference, the finance director should confirm it through a separate secure channel (e.g., a dedicated approval app on a trusted mobile device).
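The three controls above can be combined into a single release rule: below a threshold, one approval suffices; above it, a transfer needs multiple distinct approvers plus an out-of-band confirmation before funds move. The threshold, role names, and channel in this sketch are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000   # illustrative: above this, stricter rules apply
REQUIRED_APPROVERS = 2

@dataclass
class Transfer:
    amount: float
    requested_by: str
    approvals: set = field(default_factory=set)
    out_of_band_confirmed: bool = False  # e.g. via a trusted approval app

    def approve(self, approver: str) -> None:
        # Shared responsibility: the requester can never self-approve.
        if approver == self.requested_by:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_release(self) -> bool:
        if self.amount <= APPROVAL_THRESHOLD:
            return len(self.approvals) >= 1
        # High-value: multiple approvers AND out-of-band confirmation,
        # so a convincing voice or video alone can never move the money.
        return (len(self.approvals) >= REQUIRED_APPROVERS
                and self.out_of_band_confirmed)

t = Transfer(amount=25_000, requested_by="ceo")
t.approve("finance_director")
assert not t.can_release()        # one approval is not enough above threshold
t.approve("cfo")
t.out_of_band_confirmed = True    # confirmed on a separate secure channel
assert t.can_release()
```

The key design choice is that the out-of-band flag is set from a channel the attacker does not control, so even a flawless deepfake on the original call cannot satisfy the release condition on its own.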

By mandating shared responsibility in financial decision-making and reinforcing identity verification processes, you will create a more secure operating environment.

Building a resilient defence strategy with Acora

To withstand the threats posed by deepfake scams and AI-driven social engineering, businesses must adopt a cohesive and resilient defence strategy.

Adopt a zero-trust approach. Treat every request, whether internal or external, with scepticism. Conduct thorough identity verification to prevent potential threats.

Leverage advanced technical solutions. Implement cutting-edge cryptographic identity verification for secure access. Leverage hardware-based security methods to bolster transaction safety and protect sensitive data.

Revamp your human-based procedure. Enforce multi-person authorisation for high-risk transactions. Establish challenge-response protocols to verify requests effectively.

Acora’s Cyber Incident Baseline provides you with tailored audits of your workflows, helping you to pinpoint vulnerabilities before they become costly mistakes.

Our experts will help you get the basics right, focusing on process-driven identity verification bespoke to your specific needs, and revise your governance policies so you can confidently navigate the evolving landscape of social engineering attacks.

Don’t let your business become the next statistic. Let’s join forces now.