Check Mate

The Rising Threat of AI-Driven Deepfake Scams

Written by The Chess Team | Sep 30, 2024 10:36:27 PM

During the highly anticipated "Glowtime" unveiling of the new iPhone 16 on September 9, 2024, a series of fraudulent live streams featuring deepfakes of CEO Tim Cook flooded YouTube, giving the world a firsthand look at the current state of technology-enabled deception. What was meant to be a product launch instead became a vehicle for scams promoting a deceptive cryptocurrency scheme that urged viewers to send Bitcoin, Ether, Tether, or Dogecoin, falsely promising their investments would be doubled. The incident highlights the growing sophistication of scammers, who were able to hijack interest in a live event in real time, and underscores the urgent need for awareness and action against deepfake threats.

As deepfake scams grow in frequency and sophistication, CISOs are increasingly turning to Chess Solutions to help attribute these attacks: to understand who carried them out and how, so they can decide what to do next. Sign up here for a free consultation with one of our experts to learn how Chess can support your deepfake incident response planning.

The Escalating Cost of Investment Scams

The Federal Trade Commission (FTC) reported that investment scams cost Americans over $10 billion in 2023—a staggering 21% increase from the previous year and the highest amount recorded to date. This spike highlights the shifting tactics scammers use as they increasingly leverage AI to enhance their operations. Investigators have noted that AI enables scammers to automate their schemes, target vulnerable individuals more effectively, and fabricate convincing financial entities with greater accuracy.

These scams are not isolated incidents. A study by Stanford University revealed that the misuse of AI surged by 32.3% from 2022 to 2023. The ease with which AI can create realistic audio and video content complicates detection and investigation efforts, making it critical for stakeholders to understand both the mechanics of these scams and the tools needed to combat them. Furthermore, these scams are becoming increasingly convincing due to the evolving capabilities of deepfake creation technology.

The Mechanics of AI Scams

  1. Lurking: Scammers initiate operations by gathering personal information through web scraping from social media, forums, and public databases. AI tools analyze this data to build profiles based on potential victims' financial statuses and interests.
  2. Alluring: Once targets are identified, scammers use generative AI tools to create professional-looking websites, emails, promotional materials, and social media accounts that mimic reputable organizations.
  3. Catching: In this phase, scammers deploy sophisticated AI-driven chatbots and automated messaging systems to interact with potential victims. These interactions build rapport and trust, employing psychological tactics such as urgency, exclusivity, and fabricated success stories to entice individuals into investing.
  4. Executing: After establishing trust, scammers steer their victims onto unfamiliar investment platforms. They often use AI to create realistic trading environments and simulate profitable trades, misleading victims into believing their investments are yielding returns.
  5. Vanishing: When victims attempt to withdraw their investments, scammers disappear, often using AI to generate excuses for delays. They may deploy automated responses to justify inaction while simultaneously shutting down communication channels. By utilizing untraceable methods, such as offshore accounts and cryptocurrency transactions, they make recovery nearly impossible.

Corporate Executives Under Attack

Deepfake technology has been used in several high-profile scams beyond the Apple incident. WPP, the world's largest advertising agency, was targeted by a deepfake attack in May 2024. Scammers impersonated Mark Read, WPP's CEO, creating a fake WhatsApp account from publicly sourced information and using it to set up a meeting with other executives. During the meeting, they used voice-cloning technology and footage gathered from YouTube to impersonate Read, but the attempt ultimately failed.

Following the incident, Read sent an email to the company addressing what had happened. His message highlighted the importance of vigilance during these times as deepfake scams become more prevalent and effective. He pointed out some red flags for his employees to watch for: “requests for money transfers and any mention of a secret acquisition, transaction, or payment that no one else knows about.” Luckily, WPP managed to escape the incident unscathed.

Many corporate executives are being targeted by these scams and struggle to recognize them as they unfold. CISOs already have a full plate, yet they are increasingly being handed responsibility for deepfake detection and incident response.

CISOs have shown the most interest in applying the Chess Solutions Authentica product to incident response. After an attempted attack, the immediate concern is attribution: who was behind the intrusion, and how was it carried out? Chess Solutions combines analysis of the media itself with traditional cyber-hunt techniques to help profile attack attempts. Join the CISOs who are looking to Authentica to support their deepfake incident response plans.

Rethinking Trust in a Deepfake World

The rise of deepfake technology poses a significant challenge for businesses and consumers alike. As Oliver Tavakoli, CTO of Vectra AI, suggests, the current landscape necessitates a reevaluation of how we establish trust in digital interactions. With the ability to manipulate video and audio so convincingly, organizations must cultivate a context around trust that is difficult for adversaries to replicate.

Strategies to enhance trust include:

  • Creating Pre-set Rules: Establish consistent communication protocols within organizations. Employees should be trained to recognize deviations from standard practices that may indicate potential scams.
  • Utilizing Internal Jargon: Develop and maintain internal terminology and cues that are familiar to employees but opaque to outsiders. This can help identify unauthorized communications.
  • Implementing Multi-layered Security Measures: Employ step-up authentication for sensitive transactions, and ensure that employees receive regular training on recognizing deepfake technology and social engineering tactics. A brief sketch of how step-up authentication might work in practice follows this list.
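
To make the step-up idea concrete, here is a minimal sketch of how a transfer-approval workflow could require out-of-band verification when a request looks risky, such as an unusually large transfer, a request arriving over an unverified channel, or one that invokes secrecy. The thresholds, channel names, and function names are assumptions made purely for illustration; they are not part of any particular product or standard.

    from dataclasses import dataclass

    # Hypothetical thresholds and channel list, chosen only for illustration.
    LARGE_TRANSFER_USD = 50_000
    VERIFIED_CHANNELS = {"corporate_email", "approved_erp_workflow"}

    @dataclass
    class TransferRequest:
        amount_usd: float
        channel: str            # where the request arrived, e.g. "whatsapp" or "corporate_email"
        requester: str          # claimed identity of the person asking
        mentions_secrecy: bool  # the "secret acquisition" red flag called out in the WPP incident

    def requires_step_up(req: TransferRequest) -> bool:
        """Return True when the request should trigger additional verification."""
        if req.amount_usd >= LARGE_TRANSFER_USD:
            return True
        if req.channel not in VERIFIED_CHANNELS:
            return True
        if req.mentions_secrecy:
            return True
        return False

    def approve_transfer(req: TransferRequest, callback_confirmed: bool) -> bool:
        """Approve low-risk requests outright; risky ones need an out-of-band callback."""
        if not requires_step_up(req):
            return True
        # Step-up: confirmation must come over a separately established channel,
        # such as a phone call to a number already on file, never the channel
        # the request itself arrived on.
        return callback_confirmed

    if __name__ == "__main__":
        suspicious = TransferRequest(
            amount_usd=250_000,
            channel="whatsapp",
            requester="CEO",
            mentions_secrecy=True,
        )
        print(requires_step_up(suspicious))                            # True: escalate
        print(approve_transfer(suspicious, callback_confirmed=False))  # False: blocked

The design point is that confirmation must travel over a channel the organization already controls, for example a callback to a number on file, so even a convincing deepfake on the requesting channel cannot satisfy the check on its own.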

Preparing for the Future

The combination of AI and deepfake technology has transformed the landscape of digital scams, making them more deceptive and challenging to combat. The recent Apple event serves as a stark reminder of the vulnerabilities that exist in a world where misinformation can spread rapidly and convincingly.

As scammers continue to refine their techniques, organizations must adopt advanced tools and methodologies to fight back. This ongoing battle involves not just technology but also fostering a culture of trust and awareness among employees and consumers alike.

In an age where trust is paramount, staying informed and proactive is essential. The collective responsibility lies in educating individuals on recognizing and responding to evolving threats. By investing in advanced technologies, organizations can fortify their defenses against the growing threat of AI-driven scams.

Corporate leaders are increasingly turning to Chess Solutions for our deepfake detection, analysis, and attribution capabilities. Authentica, our advanced deepfake analysis platform, utilizes various plugins to help not only identify but also explain how pieces of media were fabricated. Authentica doesn’t just provide probabilities that a video, image, or audio is fake; it provides the analysis needed to determine “Who” and “How,” enabling our customers to answer the question “What Now.” We support CISOs in developing incident response plans for the new deepfake threat landscape. Sign up here for a free consultation regarding your deepfake incident response plan.

Sources:

Chou, Benjamin. “Council Post: The Dual Role of AI in Fueling and Fighting Investment Scams.” Forbes, 26 Aug. 2024, www.forbes.com/councils/forbestechcouncil/2024/08/26/the-dual-role-of-ai-in-fueling-and-fighting-investment-scams/.

“Crypto Scammers Hijack Apple’s iPhone 16 Unveiling with Deepfake of CEO Tim Cook: Guest Post.” CoinMarketCap, 2024, coinmarketcap.com/community/articles/66e03dafea90934ef3d049fb/. Accessed 25 Sept. 2024.

Puutio, Alexander. “The Rise of Deepfakes Means CEOs Need to Rethink Trust.” Forbes, 9 Sept. 2024, www.forbes.com/sites/alexanderpuutio/2024/09/07/the-rise-of-deepfakes-means-ceos-need-to-rethink-trust/.

Robins-Early, Nick. “CEO of World’s Biggest Ad Firm Targeted by Deepfake Scam.” The Guardian, 10 May 2024, www.theguardian.com/technology/article/2024/may/10/ceo-wpp-deepfake-scam.

Thompson, Stuart A. “How ‘Deepfake Elon Musk’ Became the Internet’s Biggest Scammer.” The New York Times, 14 Aug. 2024, www.nytimes.com/interactive/2024/08/14/technology/elon-musk-ai-deepfake-scam.html.