In a recent incident, a prominent cyber security company discovered it had inadvertently hired a North Korean operative posing as an IT professional. Using various AI tools, this individual managed to join the company as an employee, accessed its systems and attempted to plant malware. The incident highlights the increasing sophistication of cyber threats during the recruitment process. If a major cyber security firm can fall victim, other less security-savvy organisations face even greater risks. The case underscores the necessity for robust verification processes and heightened vigilance in recruitment.
The Role of AI in Cyber Infiltration
Artificial Intelligence (AI) tools, including deepfakes, are increasingly being leveraged by threat actors to bypass traditional security measures. These tools can create highly convincing fake identities, making it challenging for employers to verify the authenticity of candidates during the recruitment process. Deepfakes can also manipulate video and audio to produce realistic portrayals of non-existent individuals, fooling even the most cautious hiring managers.
The Risk of Deepfakes in Recruitment
Deepfakes represent a significant threat in recruitment, as they can be used to create fraudulent credentials and impersonate legitimate candidates. For example, during virtual interviews, deepfake technology can simulate a real-time interaction with a fabricated person, complete with fake backgrounds, voices, and facial expressions. This can lead to the hiring of malicious actors who can then exploit their positions to launch cyber attacks from within the organisation.
In the case of the aforementioned cyber security company, the HR team conducted four video conference interviews on separate occasions, confirming that the individual matched the photo provided in their application. A background check and all other standard pre-hiring checks were also performed, and all came back clear because the applicant was a real person using a valid but stolen identity. The photo submitted was AI-enhanced, and it was so convincing that none of the interviewers saw any reason to question it.
Strengthening Recruitment Processes
To mitigate these risks, companies must implement robust verification processes. Here are some strategies:
- Enhanced Background Checks: Utilise advanced background check services that can detect inconsistencies and verify the authenticity of candidates' credentials and employment history. If recruitment agencies are used, due diligence should also be carried out on them: in one recent case, a recruiter in the USA was found to be working with North Korean operatives to place their agents within companies across the country and the wider world.
- Multi-Factor Authentication: During virtual interviews, use multi-factor authentication methods to verify the candidate's identity, such as biometric verification or secure video conferencing tools with built-in security features. For images and videos, deepfakes can still often be identified by closely examining participants' facial expressions and body movements. In many cases, there are inconsistencies in a person's likeness that AI cannot yet overcome; however, with the rapid pace of advancement, deepfakes will become increasingly sophisticated and harder to spot.
- Verification Tools: Employ tools to cross-check candidate information against various databases, looking for discrepancies that could indicate fraudulent activity (see the sketch after this list).
- Improved Coordination: HR, IT, and security teams need to work more closely together in order to protect against advanced persistent threats.
- Training and Awareness: Educate HR and recruitment teams about the risks of deepfakes and other AI-driven threats. Regular training sessions can help staff identify potential red flags during the hiring process.
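To illustrate the kind of automated cross-checking a verification tool might perform, the short Python sketch below compares candidate-supplied details against an independently sourced record and flags any discrepancies. The record structure, field names and data are purely hypothetical; real checks would draw on background-check providers, identity registries and former employers rather than hard-coded values.

```python
# A minimal, illustrative sketch of automated discrepancy checking.
# The CandidateRecord fields and the sample data below are hypothetical.

from dataclasses import dataclass

@dataclass
class CandidateRecord:
    full_name: str
    date_of_birth: str      # ISO format, e.g. "1990-04-12"
    claimed_employer: str
    claimed_start_year: int
    claimed_end_year: int

def find_discrepancies(application: CandidateRecord,
                       verified: CandidateRecord) -> list[str]:
    """Compare candidate-supplied details against independently verified
    records and return a list of human-readable discrepancy flags."""
    flags = []
    if application.full_name.strip().lower() != verified.full_name.strip().lower():
        flags.append("Name does not match verified identity record")
    if application.date_of_birth != verified.date_of_birth:
        flags.append("Date of birth mismatch")
    if application.claimed_employer.lower() != verified.claimed_employer.lower():
        flags.append("Previous employer could not be confirmed")
    if (application.claimed_start_year, application.claimed_end_year) != \
       (verified.claimed_start_year, verified.claimed_end_year):
        flags.append("Employment dates differ from employer's records")
    return flags

# Example usage with fabricated data: the application claims a longer
# tenure than the former employer's records support.
application = CandidateRecord("Jane Doe", "1990-04-12", "Acme Corp", 2018, 2023)
verified    = CandidateRecord("Jane Doe", "1990-04-12", "Acme Corp", 2020, 2023)

for flag in find_discrepancies(application, verified):
    print("FLAG:", flag)
```

Even a simple check like this, run consistently across every application, can surface the kind of inconsistencies that a busy hiring team might otherwise overlook.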
As AI technology continues to evolve, so too do the tactics of cybercriminals. Companies must remain vigilant and proactive, integrating advanced security measures into their recruitment processes to protect against the infiltration of threat actors. By adopting a multi-layered approach to candidate verification and staying informed about emerging threats, organisations can safeguard their operations from the growing danger of AI-driven cyber attacks.
If you would like assistance with your organisation's cyber security, get in touch with the experts at Integrity360.