Beware! Voice deepfake fraud is on the rise, and traditional defenses won’t help

Discover which types of organizations are most vulnerable and what kind of approach ensures resilience

 

In our last blog post, we talked about how generative AI-powered deepfakes are making identity theft alarmingly easy for fraudsters, and why voice deepfake fraud is of particular concern.

This time, we’re going to go a little deeper into:

- Which individuals are most often impersonated during a voice deepfake fraud attack

- The types of organizations being targeted

- The damages incurred from such attacks

- Why typical defenses aren’t enough

And we’ll introduce the kind of approach to protection that gets the job done.

 

The individuals most often impersonated

Criminals who commit voice deepfake fraud in the business sector most commonly leverage the technology to impersonate:

- A real customer, to gain an employee’s trust and trick them into transferring money to a fraudulent account.

- A company executive, to gain an employee’s trust and divert funds.

- A customer-facing employee, to trick customers into authorizing fraudulent payments.

In the government sector, a cloned voice impersonating a citizen is used to mislead police officers into taking action that may put innocent people in danger.

Similarly, it can be used to impersonate a police officer and deceive citizens into following instructions that aid a crime.

 

Companies under attack and the damage

The organizations most vulnerable to voice deepfake attacks are those that handle sensitive data and personally identifiable information (PII). This includes operations that involve accessing accounts, credit lines, or financial assets, or engaging with large numbers of consumers or citizens, for example:

- Financial institutions and banks

- Healthcare providers

- Media and telecoms

- Ecommerce and online retailers

- Government, police, and defense

In addition, any company that uses a video conferencing platform to communicate internally with employees, as well as externally with customers, partners, suppliers, investors, or others, is at risk.

In a recent case, for example, a finance worker at a global company was tricked into transferring $25 million to fraudsters who used deepfake technology to impersonate the company’s CFO during a video conference call.

With video being a lot easier to fake than voice, detecting the voice in a deepfake video is critical to ensuring robust protection.

If an organization fails to detect the voice deepfake, the damages can be severe:

- $243K was lost by a German energy company that was targeted by a voice deepfake fraudster

- $35 million was lost by a Hong Kong-based company to fraudsters who cloned a director’s voice

 

Voice deepfake is the hardest to detect

Among the main reasons why voice deepfake is difficult to detect are:

- Deepfake voices are particularly hard to detect in poor-quality audio conditions, such as noisy environments.

- Voice files have less data than videos, so suspicious signals are fewer, making synthetic voices more difficult to detect.

- The technology for creating synthetic voices is more advanced than the detection technology currently available.

- At lower resolutions, a cloned voice is much more convincing than a cloned video, as it requires fewer details to be generated.

 

Why typical defenses aren’t enough

Organizations turn to several defenses to combat voice deepfake fraud.

These include enhancing deepfake detection with AI, but this can be expensive.

They also design programs to raise awareness among employees and customers and reduce risky behaviors. However, ensuring cooperation and enforcing awareness is very challenging.

And some even opt to avoid the use of voice biometrics for identity theft prevention altogether until the technology matures.

 

Conclusion

With voice deepfake fraud on the rise and current efforts failing to ensure sufficient protection, organizations are left at risk.

What they need today is a new, multi-layered approach.

In our next blog post, we’ll get into exactly what this means, what each of these layers is, and how their combination is the only way to maximize protection against the rising threat of voice deepfake fraud.

In the meantime, to learn more about why and how you can stop voice deepfake fraudsters in their tracks, we invite you to download our new whitepaper – How to prevent identity fraud with complete voice deepfake protection.
