Hackers are using generative AI tools like ChatGPT or FraudGPT to create more realistic fake IDs and identities, fraudulent company statements, and deepfake videos of company executives that mimic their image and voice.
“There is so much information available online that criminals can use to create very realistic phishing emails. Large language models are trained on the internet and know about the company, its CEO and its CFO,” said Cyril Noel-Tagoe, principal security researcher at Netacea.
In Hong Kong, a finance employee made a $25.6 million transfer after receiving a message he thought was from the company’s overseas CFO. His initial doubts were assuaged by a video call with the CFO and other colleagues he knew by sight. However, the call was deepfaked.
Noel-Tagoe advises companies to have specific procedures for verifying requests for money transfers. For example, if such requests usually come through an invoicing platform, the employee should use a different channel to contact the sender and confirm the request. Companies should also employ more detailed authentication steps, such as asking for a real-time selfie or asking the person on the video call to state their name or blink at a specific moment.
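To illustrate the kind of checks Noel-Tagoe describes, here is a minimal, hypothetical sketch of two of them: an unpredictable liveness challenge (an instruction a pre-recorded or replayed deepfake cannot anticipate) and a rule that a transfer request must be confirmed on a different channel than the one it arrived on. The function names, challenge list, and structure are illustrative assumptions, not any company's actual procedure or a real library API.

```python
import secrets
import time

# Illustrative challenge prompts; a real deployment would use its own set.
CHALLENGES = [
    "blink twice",
    "turn your head to the left",
    "hold up {n} fingers",
    "say today's date out loud",
]

def issue_liveness_challenge() -> dict:
    """Pick a random, unpredictable instruction and the moment to perform it."""
    template = secrets.choice(CHALLENGES)
    instruction = template.format(n=secrets.randbelow(4) + 1)
    perform_at = time.time() + 5 + secrets.randbelow(10)  # 5-15 seconds from now
    return {"instruction": instruction, "perform_at": perform_at}

def requires_out_of_band_confirmation(request_channel: str, confirmation_channel: str) -> bool:
    """A transfer request is only considered verified if it was confirmed on a
    different channel than the one it arrived on (e.g. an emailed invoice is
    confirmed by a phone call to a number already on file)."""
    return request_channel != confirmation_channel
```

In this sketch, a request that arrives via the invoicing platform and is “confirmed” on the same platform would fail the out-of-band check, while the random, time-bound challenge is meant to be hard for a pre-generated deepfake to satisfy on cue.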