AI-generated deepfakes have evolved from tools of political manipulation into instruments of corporate exploitation: one such attack against the engineering firm Arup cost the company $25.5 million, stolen through a single fake video call. This marks a significant shift, with sophisticated attackers impersonating CEOs and CFOs with deceptive precision and undermining the trust networks on which modern business depends.
How deepfake criminals can steal millions from a single company
According to the World Economic Forum, the finance worker in Hong Kong saw nothing suspicious about the video call: the employee had been contacted by someone appearing to be the firm's UK-based chief financial officer, who needed approval for a strictly confidential transaction. Several recognizable colleagues appeared on the call to confirm the details, and after an in-depth discussion the employee approved 15 transfers totaling $25.5 million.
Only a few weeks later, the disastrous truth emerged: every participant on the call except the victim had been an AI-generated deepfake. The January 2024 attack on engineering firm Arup is far more than a high-tech form of fraud; it signals a paradigm shift in how AI jeopardizes the trust on which the modern business framework is built.
Beyond Arup, a growing record of documented attacks shows increasingly sophisticated targeting of high-profile executives across industries. In one case, fraudsters impersonated Ferrari CEO Benedetto Vigna in voice calls, using AI to convincingly recreate his southern Italian accent. The scheme unraveled when an executive asked a question to which only the real Vigna could have known the answer, at which point the call was cut off.
Explosive growth turns deepfakes into a corporate weapon
In North America, the scale of deepfake-enabled fraud is staggering: World Economic Forum statistics show a 1,740 percent increase in cases between 2022 and 2023. Losses exceeded $200 million in a single quarter of 2025, and openly available software has democratized the technology, putting convincing deepfakes within reach of ordinary fraudsters.
Voice cloning now requires only 20-30 seconds of audio, and a believable video deepfake can be produced within 45 minutes using widely available tools. Similar attempts have targeted WPP CEO Mark Read and many other corporate leaders, and the Financial Services Information Sharing and Analysis Center has warned that such attacks represent a fundamental change in the corporate threat landscape.
Defensive tactics evolve in response to synthetic-media attacks
"These technologies are playing on the fact that audio and visual cues are essential to us as humans," says Rob Greig, Arup's Chief Information Officer, reflecting on the $25 million fraud. "We have to begin to question what we see." The basic issue is an asymmetric arms race between generation and detection technologies.
Nonetheless, new technological breakthroughs hold promise: real-time multimodal detection systems that combine voice, video, and behavioral signals achieve detection accuracy of 94-96 percent under ideal conditions.
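The idea behind multimodal detection can be illustrated with a minimal sketch: each modality produces a suspicion score, and the scores are fused into a single decision. The function name, weights, and threshold below are illustrative assumptions, not the API of any real detection product.

```python
# Hypothetical sketch of multimodal score fusion for deepfake detection.
# Weights, threshold, and score values are illustrative assumptions.

def fuse_scores(voice: float, video: float, behavior: float,
                weights=(0.4, 0.4, 0.2), threshold=0.5) -> bool:
    """Return True if the fused score flags the call as likely synthetic.

    Each input is a per-modality probability in [0, 1] that the
    corresponding signal (audio, video frames, interaction patterns)
    is AI-generated.
    """
    fused = (weights[0] * voice
             + weights[1] * video
             + weights[2] * behavior)
    return fused >= threshold

# Example: strong audio and video suspicion, neutral behavioral signal.
print(fuse_scores(voice=0.9, video=0.8, behavior=0.5))  # True
```

Real systems are far more elaborate, but the design point stands: combining independent signals makes it harder for a forgery that fools one modality to fool them all.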
With Deloitte estimating that AI-enabled fraud could reach $40 billion by 2027, the risk extends beyond direct financial losses to the very fabric of business reputation. Organisations should introduce effective verification processes, invest in ongoing detection capabilities, and shift their security culture toward "never trust, always verify."
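One concrete form of "never trust, always verify" is an out-of-band callback policy: high-value transfer requests must be confirmed over a channel the company already controls, never one supplied by the requester. The sketch below is a hypothetical illustration; the directory entries, threshold, and function name are assumptions, not a prescribed standard.

```python
# Hypothetical sketch of an out-of-band verification policy for transfers.
# The directory, threshold, and names are illustrative assumptions.

# Contact details verified independently in advance,
# never taken from the incoming request itself.
KNOWN_DIRECTORY = {
    "cfo@example.com": "+44-20-0000-0000",
}

CALLBACK_THRESHOLD = 10_000  # confirm any transfer above this amount

def requires_callback(requester: str, amount: float) -> bool:
    """A transfer needs confirmation via a known, independent channel
    when it exceeds the threshold or the requester is unknown."""
    return amount > CALLBACK_THRESHOLD or requester not in KNOWN_DIRECTORY

# The Arup-scale request would have been flagged under this policy.
print(requires_callback("cfo@example.com", 25_500_000))  # True
```

The strength of the policy lies not in the code but in the channel: a deepfake can hijack a video call, but it cannot answer a phone number the attacker does not control.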