AI Phishing Attacks: North Korean Group Uses ChatGPT for Fake IDs

North Korean hackers launched sophisticated AI phishing attacks, using ChatGPT to create a fake South Korean military ID and deploying the forgery in phishing emails to gather intelligence for the regime.

Hackers Exploit ChatGPT for Fake IDs

The hacking group, known as Kimsuky, bypassed ChatGPT’s safeguards by framing its prompts as requests for “sample designs for legitimate purposes.” The forged IDs were then used in phishing emails that impersonated a South Korean defense institution responsible for issuing credentials.

According to the U.S. Department of Homeland Security, Kimsuky was “most likely tasked by the North Korean regime with a global intelligence-gathering mission.” The group has a history of conducting espionage campaigns against South Korea, Japan, and the United States.

Escalating Generative AI Security Risks

This incident highlights growing generative AI security risks as state-sponsored actors from North Korea and China increasingly use tools like ChatGPT, Claude, and Gemini for cyberattacks. The technology significantly lowers the barrier to entry for creating convincing fraudulent materials and malicious code.

“Generative AI has lowered the barrier to entry for sophisticated attacks,” said Sandy Kronenberg, CEO and founder of Netarx. “The real concern is not a single fake document, but how these tools are used in combination.”

Why AI Phishing Attacks Defy Traditional Detection

The effectiveness of these attacks undercuts traditional security advice. For years, employees were trained to identify phishing attempts by spotting typos or formatting errors, but AI tools now produce clean, professional text that closely mimics legitimate communication.

“News that North Korean hackers used generative AI to forge deepfake military IDs is a wake-up call: The rules of the phishing game have changed, and the old signals we relied on are gone,” stated Clyde Williamson, senior product security architect at Protegrity. In this case, the hackers produced a convincing fake military ID simply by asking ChatGPT for sample templates.

“The usual red flags — typos, odd formatting, broken English — weren’t there,” Williamson explained. “AI scrubbed all that out.”

Experts Call for Updated Phishing Awareness Training

In response, security professionals are calling for a fundamental shift in security education. They stress that organizations must update phishing awareness training to focus on context and verification rather than spotting simple mistakes.

“Security training needs a reset. We need to teach people to focus on context, intent and verification,” Williamson urged.

He recommends encouraging teams to slow down, independently check sender information, and confirm unusual requests through a different communication channel.
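That advice can be made concrete. Even when a message's body text is flawless, its headers still carry checkable signals, such as a Reply-To address that does not match the sender's domain or failing authentication results added by the receiving mail server. The following sketch uses only Python's standard library to surface those signals; the message, addresses, and domains are fabricated for illustration.

```python
# Minimal sketch, stdlib only: inspect the headers a reader should check
# before trusting a message. The sample message below is fabricated.
from email import message_from_string
from email.utils import parseaddr

raw = """\
From: "Defense Institution" <admin@defense-example.kr>
Reply-To: attacker@lookalike-example.com
Authentication-Results: mx.example.com; spf=fail; dkim=none; dmarc=fail
Subject: Updated ID badge attached

Please review the attached credential.
"""

msg = message_from_string(raw)
_, from_addr = parseaddr(msg["From"])
_, reply_addr = parseaddr(msg.get("Reply-To", ""))

# A mismatched Reply-To domain or failing authentication results are the
# contextual signals training should emphasize now that the text itself is clean.
if reply_addr and reply_addr.split("@")[-1] != from_addr.split("@")[-1]:
    print("Warning: Reply-To domain differs from From domain")
print("Auth results:", msg.get("Authentication-Results", "not present"))
```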

Williamson also advised that companies should “invest in email authentication, phishing-resistant MFA and real-time monitoring.” He concluded, “The threats are faster, smarter and more convincing. Our defenses need to be too.”
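Email authentication here refers to standards such as SPF, DKIM, and DMARC, which let receiving mail servers verify that a message claiming to come from a domain was actually authorized by it. As a minimal sketch of the DNS records those checks rest on, the snippet below (assuming the third-party dnspython library is installed; example.com is a placeholder domain) looks up a domain's published SPF and DMARC policies:

```python
# Minimal sketch: query a domain's SPF and DMARC policies with dnspython
# (install via `pip install dnspython`). "example.com" is a placeholder.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself; DMARC under _dmarc.<domain>.
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"SPF:   {spf[0] if spf else 'MISSING'}")
    print(f"DMARC: {dmarc[0] if dmarc else 'MISSING'}")

if __name__ == "__main__":
    check_email_auth("example.com")
```

A domain with no DMARC record, or a DMARC policy of `p=none`, leaves recipients with far weaker grounds to reject spoofed mail, which is why publishing and enforcing these records is a baseline defense against impersonation campaigns like Kimsuky's.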
