Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March, in what cybercrime experts described as an unusual case of AI being used in hacking, according to the Wall Street Journal.
The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company’s insurance firm, Euler Hermes Group.
Law enforcement authorities and AI experts have predicted that criminals would use AI to automate cyberattacks. Whoever was behind this incident appears to have used AI-based software to successfully mimic the German executive’s voice by phone.
The U.K. CEO recognized his boss’ slight German accent and the melody of his voice on the phone, said Rudiger Kirsch, a fraud expert at Euler Hermes, a subsidiary of Munich-based financial services company Allianz.
But the money was never reimbursed, and the fraudsters, again posing as the German CEO, asked for another urgent money transfer. This time, however, the British CEO refused to make the payment.
As it turns out, the funds the CEO transferred to Hungary were subsequently moved to Mexico and other locations. Authorities have yet to identify the culprits behind the operation.
The firm was insured by Euler Hermes Group, which covered the full cost of the payment. The names of the company and the parties involved have not been disclosed, citing the ongoing investigation.
AI-based impersonation attacks are just the beginning of what could be major headaches for businesses and organizations in the future.
In this case, the voice-generation software was able to successfully imitate the German CEO’s voice. But it’s unlikely to remain an isolated case of a crime perpetrated using AI.
On the contrary, such crimes are bound to increase in frequency if social engineering attacks of this nature prove successful.
As the tools for mimicking voices become more realistic, so does the likelihood of criminals using them to their advantage. By feigning an identity over the phone, a threat actor can easily extract otherwise private information and exploit it for ulterior motives.
Back in July, the Israel National Cyber Directorate issued a warning about a “new type of cyber attack” that leverages AI technology to impersonate senior enterprise executives, instructing employees to perform transactions such as money transfers and other malicious activity on the network.
The fact that an AI-related crime of this precise nature has already claimed its first victim in the wild should be cause for concern, as it complicates matters for businesses that are ill-equipped to detect such attacks.
Last year, Pindrop — a cybersecurity firm that designs anti-fraud voice software — reported a 350 percent jump in voice fraud between 2013 and 2017, with 1 in 638 calls found to be synthetically created.
To safeguard companies from the economic and reputational fallout, it’s crucial that instructions received by voice are verified via a follow-up email or another alternative channel.
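One way such out-of-band verification can be enforced in practice is to require that any high-value payment request be confirmed on a channel other than the one it arrived on before it can be executed. The following is a minimal, hypothetical sketch of that policy — the class names, threshold, and channel labels are illustrative assumptions, not part of any reported control used by the firms in this story:

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount_eur: float
    beneficiary: str
    received_via: str                      # channel the request arrived on, e.g. "phone"
    confirmations: set = field(default_factory=set)

def confirm(request: PaymentRequest, channel: str) -> None:
    """Record a confirmation received on the given channel."""
    request.confirmations.add(channel)

def may_execute(request: PaymentRequest, threshold_eur: float = 10_000) -> bool:
    """Allow a high-value payment only after it has been confirmed on at
    least one channel different from the one the request arrived on."""
    if request.amount_eur <= threshold_eur:
        return True
    return any(ch != request.received_via for ch in request.confirmations)

# A request phoned in, as in the case above, would be blocked until an
# independent confirmation (e.g. a follow-up email) arrives:
req = PaymentRequest(220_000.0, "Hungarian supplier", "phone")
print(may_execute(req))        # blocked: no out-of-band confirmation yet
confirm(req, "email")
print(may_execute(req))        # allowed after email confirmation
```

The key design point is that a confirmation on the same channel as the request (another phone call, which the fraudsters could also fake) does not count; only an independent channel unlocks the payment.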