In a recent incident, a prominent cyber security company discovered it had inadvertently hired a North Korean operative posing as an IT professional. Using various AI tools, the individual joined the company as an employee, accessed its systems and attempted to plant malware. The incident has brought to light the increasing sophistication of cyber threats during the recruitment process. If a major cyber security firm can fall victim, less security-savvy organisations face even greater risks, which underscores the necessity for robust verification processes and heightened vigilance in recruitment.
Artificial Intelligence (AI) tools, including deepfakes, are increasingly being leveraged by threat actors to bypass traditional security measures. These tools can create highly convincing fake identities, making it challenging for employers to verify the authenticity of candidates during the recruitment process. Deepfakes can also manipulate video and audio to produce realistic portrayals of non-existent individuals, fooling even the most cautious hiring managers.
Deepfakes represent a significant threat in recruitment, as they can be used to create fraudulent credentials and impersonate legitimate candidates. For example, during virtual interviews, deepfake technology can simulate a real-time interaction with a fabricated person, complete with fake backgrounds, voices and facial expressions. This can lead to the hiring of malicious actors who can then exploit their positions to launch cyber attacks from within the organisation.
In the case of the aforementioned cyber security company, the HR team conducted four video conference interviews on separate occasions, confirming that the individual matched the photo provided in the application. A background check and all other standard pre-hiring checks were also performed, and all came back clear because the attacker was a real person using a valid but stolen identity. The photo supplied was AI-enhanced, and so convincing that none of the interviewers saw any reason to question it.
To mitigate these risks, companies must implement robust verification processes. Strategies worth considering include:

- Verifying identity documents through trusted providers and cross-referencing them with the details supplied in the application.
- Introducing liveness checks during video interviews, such as asking candidates to perform unscripted actions that are difficult for deepfake tools to reproduce in real time.
- Conducting background and reference checks using independently sourced contact details rather than those provided by the candidate.
- Restricting and monitoring a new hire's access to sensitive systems until their identity and conduct have been validated.
- Training HR and hiring teams to recognise the warning signs of AI-generated imagery, audio and video.

A simple sketch of how these layers can be combined is shown below.
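As a purely illustrative sketch, the layered approach above can be modelled as a set of independent checks that must all pass before an offer is made. The field names and checks below are hypothetical assumptions for the example, not a reference to any specific vendor, tool or Integrity360 service:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical candidate record -- field names are illustrative only.
@dataclass
class Candidate:
    name: str
    document_verified: bool = False       # ID checked against a trusted identity provider
    liveness_confirmed: bool = False      # live, unscripted challenge passed during the video interview
    background_check_clear: bool = False  # background check cross-referenced with the application
    references_confirmed: bool = False    # referees contacted via independently sourced details

# Each layer is a (label, predicate) pair; every layer must pass before an offer is made.
VERIFICATION_CHECKS: list[tuple[str, Callable[[Candidate], bool]]] = [
    ("Identity document verification", lambda c: c.document_verified),
    ("Liveness check during interview", lambda c: c.liveness_confirmed),
    ("Background check cross-reference", lambda c: c.background_check_clear),
    ("Independent reference confirmation", lambda c: c.references_confirmed),
]

def verify_candidate(candidate: Candidate) -> bool:
    """Return True only if every layered check passes; flag any that fail."""
    failures = [label for label, check in VERIFICATION_CHECKS if not check(candidate)]
    for label in failures:
        print(f"[FLAG] {candidate.name}: {label} not satisfied")
    return not failures

if __name__ == "__main__":
    applicant = Candidate("Example Applicant",
                          document_verified=True,
                          liveness_confirmed=False,
                          background_check_clear=True,
                          references_confirmed=True)
    print("Proceed to offer:", verify_candidate(applicant))
```

In practice each check would be backed by a human process or a vetted identity-verification service; the point of the structure is that no single convincing artefact, such as an AI-enhanced photo, is ever enough on its own.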
As AI technology continues to evolve, so too do the tactics of cybercriminals. Companies must remain vigilant and proactive, integrating advanced security measures into their recruitment processes to protect against the infiltration of threat actors. By adopting a multi-layered approach to candidate verification and staying informed about emerging threats, organisations can safeguard their operations from the growing danger of AI-driven cyber attacks.
If you would like assistance with your organisation's cyber security, get in touch with the experts at Integrity360.