Case Study – When a new employee turns out to be a North Korean cybercriminal
- By Paweł
Imagine you’ve just hired a new IT specialist. Their CV looks great, their references are impeccable, and the interview was professional. You send them a laptop, grant them access to your systems, and… a few days later, your SOC (or external IT provider) notices strange software being installed. This isn’t a scene from an action movie. One of our clients, a technology company that is a leader in its industry, experienced exactly this.
Why does North Korea's "IT worker factory" work?
Let’s start from the beginning to better understand the situation. We constantly hear about hacks, 0-days, phishing attacks, and so on. So why would anyone get themselves hired by a company instead of simply trying to break into it? For several years, the Pyongyang regime has been conducting a well-organized operation with two goals:
- Gaining foreign currency – operatives employed full-time at Western companies hand over a significant portion of their salaries to the state.
- Gaining access to systems – if the organization proves to be “valuable,” such an employee could become a gateway to more serious attacks.
At a time when remote work is commonplace, IT workers are worth their weight in gold, and artificial intelligence makes it possible to impersonate almost anyone on Earth… this attack method is a natural choice.
We know why, so let's discuss how:
- Stolen or made-up identities – criminals impersonate real candidates from the US or Europe, or invent entirely new personas.
- Deepfakes and AI – used for video calls and for creating fake documents.
- “Laptop farms” – equipment shipped by companies is redirected to intermediary addresses and then accessed remotely, en masse, by operators in other countries.

Our story
So what exactly happened? An employee of one of our clients contacted us and asked us to investigate a colleague – let’s call him John. He had noticed that despite receiving a company laptop, John refused to use it, claiming he preferred to log in remotely via a virtual machine. This seemed suspicious, but it wasn’t prohibited. It also turned out that John, for unknown reasons, had a VPN installed on his virtual machine.
These two factors raised our contact’s concern, and he asked us to look into John. John had performed impeccably up to that point; no one had any complaints about him. The HR department considered him the best candidate – he “smashed” the interview. In conversations with his colleagues, manager, and HR, it turned out that “for unknown reasons” John always had problems with his webcam and never turned it on. At first it seemed strange, but everyone got used to it over time. No one had ever seen him, either in person or on camera in Teams. John operated quietly and calmly, never triggering any security alerts.
A thorough investigation showed that he had been stealing data slowly and methodically, staying below the radar. We checked his social media, and at first glance everything looked fine: an impressive career history on LinkedIn, a profile photo, and detailed descriptions. But a closer examination of the profiles revealed that the photo was a stock image and the descriptions had been generated by AI.
Our story had a relatively positive ending: John was detected and removed from the organization. His fate remains unknown (we only detected him; further action is left to law enforcement). Could the client have handled things better? Yes! Is our client the only victim? No!
Why does it work?

We have already partially answered this question, but let’s gather the information in one place:
- Lack of physical verification – in the era of remote work, candidates can live “anywhere,” and the camera won’t reveal that the person being interviewed is on another continent.
- Recruiting pressure – there’s a shortage of IT specialists, so companies hire someone who “knows how to write code” without conducting any further verification.
- Fake references and LinkedIn profiles – entire networks of “fictitious” employees validate their experience.
- Lack of proper training for non-technical employees – HR departments and hiring managers are rarely taught to spot deepfakes, fake documents, or other warning signs of a fraudulent candidate.
Conclusions
This isn’t “just” an IT problem. It’s a strategic and financial problem:
- An impersonator could steal source code or customer data, with consequences running into millions in losses.
- The salary you pay could be funding North Korea’s nuclear program, potentially resulting in legal sanctions against your company.
- Even if you have a great SOC, response time is crucial – without additional controls, you risk serious damage before anyone raises the alarm.
How to protect yourself?
- Candidate identity verification – not only document scans, but also confirmation of address, references, and employment history.
- A limited-trust policy for new employees – grant access gradually and monitor activity during the first few weeks.
- Strict equipment control – shipments only to verified addresses, blocking the possibility of “redirecting” packages.
- EDR and SOC – tools for detecting abnormal activity such as mass file dumps (see the sketch after this list).
- Legal awareness – knowledge of OFAC and FBI guidelines to avoid inadvertent sanctions violations.
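To make the “abnormal activity” point concrete, here is a minimal sketch of the kind of heuristic an EDR or SOC team might run over exported file-access logs. It is an illustration under assumptions, not a product recommendation: the input file name, column names, and thresholds are hypothetical, and a real deployment would use your EDR’s own telemetry and tuning.

```python
# Minimal sketch (assumed log format, not tied to any specific EDR product):
# flag users whose daily file-read volume is far above their own recent
# baseline. "Slow and methodical" exfiltration like John's only stands out
# when activity is compared against the user's history, not a fixed limit.
# Assumed input: a CSV export with columns "timestamp,user,bytes_read".

import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

BASELINE_DAYS = 14      # how many previous days form the baseline
SIGMA_THRESHOLD = 3.0   # how far above the baseline counts as anomalous


def daily_volumes(path):
    """Aggregate bytes read per user per calendar day."""
    totals = defaultdict(lambda: defaultdict(int))
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            day = datetime.fromisoformat(row["timestamp"]).date().isoformat()
            totals[row["user"]][day] += int(row["bytes_read"])
    return totals


def flag_anomalies(totals):
    """Return (user, day, bytes) tuples that exceed the user's own baseline."""
    alerts = []
    for user, per_day in totals.items():
        days = sorted(per_day)
        for i, day in enumerate(days):
            history = [per_day[d] for d in days[max(0, i - BASELINE_DAYS):i]]
            if len(history) < 3:  # not enough history yet to judge
                continue
            baseline, spread = mean(history), stdev(history)
            if per_day[day] > baseline + SIGMA_THRESHOLD * max(spread, 1.0):
                alerts.append((user, day, per_day[day]))
    return alerts


if __name__ == "__main__":
    for user, day, volume in flag_anomalies(daily_volumes("file_access.csv")):
        print(f"[ALERT] {user} read {volume} bytes on {day} - review manually")
```

The baseline comparison is exactly the lesson from the case study: John never tripped a fixed threshold, so detection has to lean on deviations from each employee’s normal behaviour, ideally fed directly by your EDR rather than a flat CSV export.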
Summary
Hiring an employee who turns out to be a cybercriminal isn’t a theory. It’s reality.
North Korean operations prove that an “inside man” can be far more dangerous than an outside attack. And while our client reacted in a timely manner, not every company would be so fortunate.
For a manager, the most important question is: is my organization ready for the possibility that a new IT specialist’s first day on the job could be the beginning of a cyberattack?
If you want to be sure that your company will not accidentally hire a "fraudulent employee", please contact us.
- We can help you analyze your recruitment and security processes,
- and if you prefer, we’ll send your HR/IT department a free, detailed checklist with points to pay attention to during remote job interviews.
This way, your organization will be better prepared to avoid falling victim to this type of scheme.