AI's Role in Modern Phishing

October 2, 2025

Written by

Tam Doan

TAGS

#Phishing, #ArtificialIntelligence, #SocialEngineering, #CyberSecurity, #Deepfakes

Summary

The rise of artificial intelligence has changed cybercrime by enabling threat actors to automate target profiling, forge fake identities, and craft convincing phishing attacks at scale. Using AI-powered web development platforms, adversaries can create deceptive phishing sites masked by fake CAPTCHA pages to steal credentials undetected. Voice cloning technology further enhances these attacks by enabling deepfake vishing calls that impersonate trusted individuals to obtain sensitive information. Together, these examples show how AI has become a core tool in modern phishing operations.

Analysis

Artificial intelligence, particularly generative AI, is transforming the threat landscape at a rapid pace. While these technologies offer benefits in automation, personalization, and productivity, they also equip adversaries with powerful new tools that increase the scale and efficiency of their attacks. From phishing to deepfake fraud and identity theft, AI has become an active component of threat actors' offensive capabilities.

By leveraging AI, threat actors can conduct detailed research on their targets, developing profiles based on publicly available information to use in social engineering attacks. One example involves adversaries using photos from social media to generate fake government IDs. These fraudulent IDs can then be used to bypass Know Your Customer (KYC) systems designed for identity verification. The creation of fake IDs using publicly available images already poses a risk to individuals. However, the threat becomes more severe when additional personal information such as birthdays, email addresses, job titles, or phone numbers is also accessible online. This information can be exploited to commit identity theft and launch more convincing social engineering campaigns.

AI also enables threat actors to imitate trusted brands and build malicious websites. In one case, Okta Threat Intelligence observed the misuse of Vercel’s v0.dev, a generative AI tool that allows users to create fully functional web interfaces from simple prompts. The tool was exploited to build phishing websites that closely replicate login pages for services such as Okta and Microsoft 365, illustrating how generative AI both scales phishing operations and lowers the barrier to entry for less skilled actors.

Another concerning trend involves the abuse of AI-powered web development platforms such as Lovable and Netlify, which have been used to host fake CAPTCHA pages. These platforms, originally designed to simplify website creation, are now being used by threat actors to support phishing operations. According to Trend Micro, victims of these campaigns receive emails disguised as password reset requests or change-of-address notifications from USPS. When users click the link in the email, they are taken to a fake CAPTCHA page that appears to be a routine security step, which reduces suspicion. Once the CAPTCHA is completed, users are redirected to a phishing site designed to steal login credentials and other sensitive information.

This method is effective because it borrows the credibility of well-known web development platforms while imitating a verification process users encounter every day; a CAPTCHA is exactly the kind of harmless-looking step few people think to question.
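For defenders, one practical way to triage links like these is to follow the redirect chain and check where a message's link actually lands. The sketch below is a minimal illustration, assuming Python 3 with the requests library; the link, expected domain, and platform list are placeholders rather than indicators from any real campaign. Note that a redirect triggered after the CAPTCHA is completed happens client side, so observing the final phishing page would require a headless browser rather than a plain HTTP client.

```python
# Minimal triage sketch: follow a suspicious link and report where it lands.
# Assumes Python 3 with the third-party "requests" package installed.
import requests

# Consumer web-development hosts noted above as having been abused;
# illustrative substrings only, not a complete or authoritative block list.
ABUSED_PLATFORM_HINTS = ("lovable.app", "netlify.app", "vercel.app")

def inspect_link(url: str, expected_domain: str) -> None:
    """Follow HTTP redirects from an emailed link and flag suspicious landings."""
    response = requests.get(url, allow_redirects=True, timeout=10)

    # response.history lists each intermediate redirect response in order.
    for hop in response.history:
        print(f"redirect: {hop.url} -> {hop.headers.get('Location')}")
    print(f"final destination: {response.url}")

    if expected_domain not in response.url:
        print("WARNING: final page is not on the expected brand domain")
    if any(hint in response.url for hint in ABUSED_PLATFORM_HINTS):
        print("WARNING: final page is hosted on a consumer web-development platform")

# Placeholder usage: a link claiming to be a USPS change-of-address notice.
inspect_link("https://suspicious-link.example/usps-address-change", "usps.com")
```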

AI-driven phishing threats are also expanding beyond websites. Threat actors are now using AI-generated voice clones in deepfake vishing campaigns. These attacks replicate the voice of a trusted person, such as an executive, colleague, or family member, and use it in fraudulent phone calls. Unlike traditional voice phishing, which depends on a human caller's acting and emotional manipulation, deepfake vishing uses machine learning to synthesize audio that is difficult to distinguish from the real speaker.

These attacks begin with collecting voice samples from sources such as social media videos, virtual meetings, or short phone conversations. The recordings are then processed with AI voice synthesis tools to create realistic clones. In some cases, the calls are combined with caller ID spoofing so they appear to come from trusted institutions, such as banks or technical support services. Victims are told their accounts have been compromised and are asked to provide sensitive information, including one-time passcodes or two-factor authentication codes. Once attackers obtain this information, they can access personal accounts and carry out fraudulent transactions.

These examples show how artificial intelligence has become integrated into the offensive strategies of modern phishing campaigns. By automating target research, imitating trusted digital environments and voices, and using tools to build credible attacks at scale, threat actors are now able to launch more believable operations than ever before.

Conclusion

As artificial intelligence continues to advance, so too does the potential for its exploitation in unintended and malicious ways. From highly realistic counterfeit identities to cloned voices and convincing phishing websites, these AI-enabled threats demand increased awareness and adaptive defenses. A thorough understanding of these evolving tactics is essential for both individuals and organizations to remain proactive in safeguarding sensitive information in an increasingly complex digital environment.

Recommendations

  • Conduct phishing simulations that reflect AI-enhanced attack techniques, including fake CAPTCHA pages, brand impersonation, and deepfake voice calls, to ensure employees can recognize and respond to modern threats rather than only traditional lures.
  • Establish strict verification procedures for all sensitive communications, particularly those involving financial transactions, system access, or the sharing of credentials. This should include practices such as confirming requests through call-backs using known contact numbers.
  • Implement multi-factor authentication (MFA) and biometric verification methods, such as facial recognition or fingerprint scanning, within authentication processes to guard against AI-driven spoofing; a minimal example of one MFA factor follows this list.
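
As a concrete illustration of the MFA recommendation above, the sketch below enrolls and verifies a time-based one-time password (TOTP) factor with the open-source pyotp library. The account name, issuer, and surrounding logic are placeholders; production deployments also need encrypted secret storage, rate limiting, and recovery flows.

```python
# Minimal TOTP (time-based one-time password) sketch using the pyotp library.
# Names and secrets here are placeholders for illustration only.
import pyotp

# Enrollment: generate a per-user secret and a provisioning URI that standard
# authenticator apps can import (typically rendered as a QR code).
secret = pyotp.random_base32()
provisioning_uri = pyotp.TOTP(secret).provisioning_uri(
    name="user@example.com", issuer_name="ExampleCorp"
)
print(provisioning_uri)

def verify_mfa_code(user_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift between
    # the server and the user's authenticator app.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)
```

Because the vishing scenario described earlier involves persuading victims to read out one-time codes, phishing-resistant factors such as FIDO2/WebAuthn security keys offer stronger protection where available; TOTP is shown here only as a minimal baseline.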

References