Fraudulent financial attacks are reaching new levels of frequency and sophistication, affecting businesses across multiple sectors. With the rise of generative AI and ever-advancing deep learning models, scammers can reach and defraud businesses at a larger scale and with greater efficiency and sophistication.
Businesses have been targeted by phishing attacks for decades now. Phishing attacks are a type of cyber attack that uses deception tactics to manipulate individuals into disclosing personal and sensitive information to be used for fraudulent activities. The most common types of phishing scams include:
Email: Mass emails that pose as trustworthy parties to trick large numbers of people into releasing sensitive information.
Spear: A more targeted email attack aimed at specific individuals and businesses. Scammers gather publicly available information, and because the details are accurate, victims are tricked into believing the emails are legitimate and trustworthy.
Whaling: The targeting of high-level executives using open-source information plus further research into the company and its specific practices.
Business Email Compromise: The targeting of high-level executives and companies by impersonating them, made convincing by the accuracy and detail of the information gathered.
Voice “Vishing”: Scammers call your phone posing as a trusted source on caller ID and use urgency to appear trustworthy enough to gain access to sensitive information.1
Today, scammers are increasing their success rates by using dark web versions of ChatGPT to increase their output in less time. WormGPT, for example, is specifically designed to generate human-like text and malicious content for scamming and hacking campaigns. The severity of these phishing scams has escalated to critical levels: in 2022, the FBI reported over 21,000 incidents of business email fraud, with losses nearing $2.7 billion. Furthermore, the Deloitte Center for Financial Services estimates that, under its most aggressive projection, losses from generative AI email fraud could reach $11.5 billion by 2027.2
How are businesses dealing with this?
In response, companies have implemented AI-based systems to help their staff identify suspicious messages that could be part of phishing scams. This software uses machine learning to analyze a message's context and content and determine whether it is legitimate or should be flagged. Much of the messaging behind these attacks relies on urgent wording. By training on enormous volumes of data points, this type of software learns to discern between real and fraudulent communications more effectively over time. While these methods help against current and past levels of online scams, they haven't kept pace with the new wave of deepfake technology now on the rise.
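To make the idea concrete, the urgency analysis described above can be sketched as a simple keyword scorer. This is a hypothetical, heavily simplified stand-in for a real machine learning classifier: the cue phrases and weights below are illustrative assumptions, whereas production systems learn such signals from large labeled datasets.

```python
# Hypothetical urgency cues and weights; real anti-phishing software
# learns these signals from labeled data rather than hard-coding them.
URGENCY_CUES = {
    "urgent": 2.0,
    "immediately": 2.0,
    "act now": 2.5,
    "wire transfer": 3.0,
    "verify your account": 3.0,
}

def phishing_score(message: str) -> float:
    """Sum the weights of all urgency cues found in the message."""
    text = message.lower()
    return sum(weight for cue, weight in URGENCY_CUES.items() if cue in text)

def should_flag(message: str, threshold: float = 3.0) -> bool:
    """Flag a message for review when its cumulative score passes the threshold."""
    return phishing_score(message) >= threshold
```

A message like "URGENT: complete this wire transfer immediately" would accumulate several cue weights and be flagged, while routine correspondence would score near zero. Real systems also weigh sender reputation, links, and attachments, not just wording.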
Deepfakes: The next level of scams
A deepfake is an image, video, or audio recording of a person that has been altered so that they appear to do or say something they have not, or to look or sound like someone or something else. The underlying technology is only getting better, and deepfakes are becoming increasingly difficult to distinguish from real media. Just as phishing scams use open-source data and advanced technology to create realistic messages and documents, deepfakes use similar technology to create videos, images, and audio that appear real. In some instances, scammers have used deepfakes on real-time video calls, posing as high-level professionals. If businesses are already losing billions to conventional phishing methods, how are they supposed to combat this new level of sophistication?
A recent instance of a deepfake attack happened in January 2024. Arup, a British multinational engineering and design firm, was the target of a deepfake scam in Hong Kong that cost it millions. An employee joined a video call believing they were meeting the company's CFO and other staff members. It turned out they were speaking to scammers using deepfake technology to look and sound like the CFO in real time. Eventually, the scammers convinced the employee to carry out multiple transfers to various bank accounts totaling $25 million.3
This example shows just how unprepared businesses are to combat the level of sophistication deepfake technology has reached. The problem is that scammers aren't relying on a single, static attack vector that could eventually be neutralized; their tools employ machine and deep learning that is constantly evolving and improving. If these scammers were able to trick a major company and its financial professionals out of millions, then who couldn't they trick? Without a viable solution, this problem is only going to get worse.
How can you protect your business?
As this issue becomes more prevalent, consider educating your employees about deepfakes and how to identify them. Keeping your company up to date with the latest deepfake news will better equip your team to deal with these threats as they arise. Establish strong routine protocols for how financial transactions and the sharing of sensitive information should be handled. When everyone follows the same routine, you limit an attack's ability to impact your company. These protocols could include a dual-verification system that requires two parties to authenticate transactions, preventing a scammer from acting as a single authorized party. Finally, consider working with third parties who provide solutions that combat deepfake scams.
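The dual-verification protocol above can be sketched as a simple approval workflow. This is a minimal, hypothetical illustration (the class and names below are assumptions, not a real product's API): a transfer only becomes executable after two distinct authorized parties approve it, so one deceived employee cannot complete it alone.

```python
class TransferRequest:
    """Hypothetical dual-verification workflow: a transfer may execute
    only after two distinct authorized parties approve it."""

    def __init__(self, amount: float, destination: str, authorized: set):
        self.amount = amount
        self.destination = destination
        self.authorized = authorized      # employees allowed to approve
        self.approvals = set()            # who has approved so far

    def approve(self, employee: str) -> None:
        """Record an approval; reject anyone outside the authorized list."""
        if employee not in self.authorized:
            raise PermissionError(f"{employee} is not an authorized approver")
        self.approvals.add(employee)      # a set ignores duplicate approvals

    def is_executable(self) -> bool:
        # Two *distinct* approvers are required, so a scammer who deceives
        # a single employee still cannot trigger the transfer.
        return len(self.approvals) >= 2
```

Because approvals are stored in a set, the same employee approving twice still counts once; the transfer stays blocked until a second, independent person signs off.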
Chess Solutions offers a SaaS solution that identifies, analyzes, and attributes deepfakes. Visit our website to learn more about the latest services Chess Solutions offers to combat deepfakes and how they can protect your business.
Sources: