North Korean Hackers Use ChatGPT to Forge Deepfake Military IDs in Sophisticated Phishing Campaign

A suspected North Korean state-backed hacking group, Kimsuky, has been caught using ChatGPT to generate fake South Korean military identification cards in a phishing campaign aimed at defence agencies, civil society groups, journalists, and human rights organizations.

Fake IDs as Phishing Bait

Cybersecurity researchers in Seoul discovered that the attackers sent phishing emails disguised as requests to review “sample” ID card designs for military-affiliated civilian employees. The attached images were AI-generated deepfakes, designed to mimic real military IDs and trick recipients into believing the emails were legitimate.

The emails also contained malicious links and attachments that installed data-stealing malware on victims’ devices once opened. These phishing attempts came from domains crafted to look like official South Korean defence addresses, including fake addresses ending in “.mli.kr” instead of the real “.mil.kr.”
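As a rough illustration of how defenders can screen for this kind of lookalike domain, the sketch below checks whether a sender's address actually falls under the genuine ".mil.kr" suffix. The helper name and sample addresses are hypothetical and not taken from the reported campaign; only the ".mli.kr" versus ".mil.kr" spoof comes from the report.

```python
# Minimal sketch of a lookalike-domain check for inbound mail.
# The function name, sample addresses, and suffix constant are illustrative
# assumptions; only the ".mli.kr" vs ".mil.kr" spoof is from the report.

LEGITIMATE_SUFFIX = ".mil.kr"

def sender_domain_is_genuine(address: str) -> bool:
    """Return True only if the sender's domain sits under the real military suffix."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain == LEGITIMATE_SUFFIX.lstrip(".") or domain.endswith(LEGITIMATE_SUFFIX)

print(sender_domain_is_genuine("officer@army.mil.kr"))  # True
print(sender_domain_is_genuine("officer@army.mli.kr"))  # False: transposed lookalike
```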

How ChatGPT Was Misused

The hackers appear to have bypassed ChatGPT’s safeguards by framing their requests as harmless mock-ups or draft designs rather than explicitly asking for military IDs. Through this prompt manipulation, they convinced the model to generate images resembling genuine credentials, which were then embedded in phishing messages.

Forensic analysis of the images confirmed that generative AI had been used to produce the visuals, making the phishing attempts more persuasive and harder to detect.

Technical Attack Chain

The campaign used a multi-layered malware delivery process, often relying on compressed ZIP archives, Windows shortcut (.lnk) files, and batch scripts (.bat). Some payloads were disguised as routine software updates, such as fake “Hancom Office” updates, to trick victims into running them.
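For illustration only, here is a minimal triage sketch that lists the contents of a ZIP attachment and flags member files with extensions commonly abused in this kind of delivery chain. The extension watchlist and the example archive name are assumptions, not artefacts recovered from the actual campaign.

```python
import zipfile

# Hypothetical triage helper: list the members of a ZIP attachment and flag
# extensions commonly abused in delivery chains like the one described above.
# The extension watchlist and example file name are assumptions.

SUSPICIOUS_EXTENSIONS = (".lnk", ".bat", ".cmd", ".vbs", ".ps1")

def flag_suspicious_members(archive_path: str) -> list[str]:
    """Return archive members on the watchlist without extracting anything."""
    with zipfile.ZipFile(archive_path) as archive:
        return [name for name in archive.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]

# Usage (hypothetical attachment name):
# print(flag_suspicious_members("sample_id_designs.zip"))
```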

Scripts were also heavily obfuscated, using environment variable slicing and delayed execution via PowerShell to bypass security tools.
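The snippet below is a simplistic, hypothetical heuristic for spotting that kind of obfuscation: it counts batch-style environment-variable slicing expressions (e.g. %COMSPEC:~0,1%), which rebuild a command one character at a time. The regex and threshold are illustrative assumptions, not detection rules from the researchers.

```python
import re

# Simplistic heuristic for the obfuscation described above: count batch-style
# environment-variable slicing expressions such as %COMSPEC:~0,1%, which
# rebuild a command one character at a time. Regex and threshold are assumptions.

ENV_SLICE = re.compile(r"%[A-Za-z_][A-Za-z0-9_]*:~\s*-?\d+\s*,\s*-?\d+%")

def looks_obfuscated(script_text: str, threshold: int = 5) -> bool:
    """Flag scripts whose slicing-expression count meets the threshold."""
    return len(ENV_SLICE.findall(script_text)) >= threshold

benign_sample = "set x=%COMSPEC:~0,1%%COMSPEC:~5,1%"  # harmless demo of the syntax
print(len(ENV_SLICE.findall(benign_sample)))  # 2
print(looks_obfuscated(benign_sample))        # False: below the threshold
```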

Growing AI Role in Cyber Operations

The incident reflects North Korea’s broader strategy of exploiting AI and deepfake technologies for espionage and financial gain. In August, another case revealed that North Korean hackers had used AI tools to generate fake résumés, cover letters, and coding samples to infiltrate overseas IT companies. Once employed, they allegedly used AI both for technical tasks and to collect intelligence.

Cybersecurity experts warn that AI now enables attackers to automate almost every stage of an operation—from planning and malware development to impersonation and deception.

Longstanding Espionage Network

Kimsuky, long tracked by US and South Korean authorities, is described as one of Pyongyang’s key intelligence-gathering cyber units. Its operations often include phishing, cryptocurrency theft, and covert IT contracting to fund North Korea’s heavily sanctioned nuclear weapons program.

The number of victims in this latest phishing campaign remains unknown, but officials caution that the use of AI in such attacks significantly raises the national security risks for South Korea and its allies.
