By Matthew Olney on March 05, 2024

How is AI changing social engineering attacks?


As AI technologies become more sophisticated, they are introducing complex challenges for safeguarding against social engineering attacks. This blog looks at the risks AI poses to cyber security, and in particular at how it is changing the way criminals carry out social engineering attacks.


Friend or Foe?

AI's capabilities in enhancing cyber security efforts cannot be overstated. From automating threat detection to analysing vast datasets for suspicious activity, AI tools are proving to be indispensable allies. Yet they are also empowering adversaries, equipping them with the means to craft more convincing and targeted social engineering campaigns.

The Rise of Deepfakes

Deepfake technology, powered by AI, exemplifies this dual nature. Deepfakes can create highly convincing fake audio and video, making it possible to impersonate individuals with high accuracy. This technology is being utilised in scams and misinformation campaigns, and to bypass biometric security measures, posing significant threats to both individuals and organisations.

Real-world examples

In February 2024, in a first-of-its-kind AI heist, a finance worker at a multinational firm was tricked into paying out $25 million to fraudsters. The attackers used deepfake technology to pose as the company's chief financial officer on a video conference call; the worker believed he was attending a call with several other members of staff, all of whom were in fact deepfake recreations.

The worker had initially grown suspicious after receiving a message purportedly from the company's UK-based chief financial officer, and suspected a phishing email, as it spoke of the need for a secret transaction to be carried out.

However, the worker put aside his early doubts after the video call, because the other people in attendance had looked and sounded just like colleagues he recognised.

Elsewhere, deepfakes have been used to spread disinformation via social media channels. The Russia-Ukraine war has seen both sides use them for propaganda gains and to sow dissension.

"In this conflict, there's no necessity for loss of life. I encourage you to choose survival," echoed the grave tones of Ukrainian leader Volodymyr Zelensky in a widely circulated deepfake video from March 2022, following Russia's assault on Ukraine.

Subsequently, a video emerged featuring Russia's Vladimir Putin discussing the option of a peaceful capitulation. Despite their poor resolution, these videos spread rapidly, sowing confusion and pushing a misleading narrative.

AI-Driven Phishing Attacks

Phishing attacks, traditionally reliant on human creativity and research, are now supercharged by AI. Generative AI tools like ChatGPT can craft personalised, compelling messages that mimic legitimate communications from trusted entities. These AI-driven phishing attempts significantly increase the likelihood of deceiving recipients and undermine the effectiveness of traditional security awareness training.

Deepfake Phishing

Deepfake phishing employs the fundamental tactic of social engineering to deceive users, leveraging their trust to sidestep conventional security defences. Attackers harness deepfakes in various phishing schemes, such as:

Emails or Messages: The danger of business email compromise (BEC) attacks, costing businesses billions annually, escalates with deepfakes. Attackers can craft more believable identities, for example, by creating fake executive profiles on LinkedIn to ensnare employees.

Video Calls: Utilising deepfake technology, fraudsters can convincingly impersonate others in video conferences, persuading victims to divulge sensitive information or execute unauthorised financial transactions. A notable scam involved a Chinese fraudster who swindled $622,000 using face-swapping technology.

Voice Messages: With technology that can clone a voice from just a three-second sample, attackers can create voicemails or engage in real-time conversations, making it challenging to distinguish between real and fake.

Why is deepfake phishing alarming?

Rapid Growth: Deepfake phishing saw a staggering 3,000% increase in 2023, fuelled by the advancement and accessibility of generative AI.

Highly Personalised Attacks: Deepfakes enable attackers to tailor their schemes, exploiting individual and organisational vulnerabilities.

Detection Difficulty: AI’s ability to mimic writing styles, clone voices, and generate lifelike faces makes these attacks hard to detect.

The Challenges

The growing sophistication of AI-driven social engineering attacks makes detection ever more challenging. Traditional security measures and training are designed to recognise the patterns and inconsistencies typical of human-crafted scams. However, AI's ability to learn and adapt means it can continuously refine its approach, reducing detectable anomalies and mimicking human behaviour more closely.

Evolving AI Algorithms

AI algorithms, especially those based on machine learning, evolve through interaction with data. This continuous learning process means that AI-driven attacks can become more refined and less detectable over time. Security systems that rely on static detection methods quickly become obsolete, requiring constant updates and adaptations to keep pace with AI’s evolution.
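To make the contrast concrete, here is a minimal sketch, assuming Python with scikit-learn and a toy inline dataset (both are illustrative choices, not a prescribed stack). It compares a static keyword rule, which attackers can sidestep simply by rephrasing, with a classifier that can be periodically retrained as newly labelled attack samples arrive:

```python
# A minimal sketch contrasting static detection with a retrainable model.
# Assumes Python with scikit-learn installed; the tiny inline dataset is
# purely illustrative, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

STATIC_KEYWORDS = {"urgent", "wire transfer", "password"}

def static_rule(message: str) -> bool:
    """Fixed keyword rule: cheap to write, but attackers simply rephrase."""
    text = message.lower()
    return any(keyword in text for keyword in STATIC_KEYWORDS)

# A learned model, by contrast, can be refreshed as new samples are labelled.
messages = [
    "URGENT: wire transfer needed before close of business",     # phishing
    "Please review the attached Q3 budget when you have time",   # benign
    "Your account password expires today, click here to renew",  # phishing
    "Lunch at 1pm? The usual place.",                            # benign
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

def retrain(new_messages: list[str], new_labels: list[int]) -> None:
    """Periodic retraining is what keeps pace with evolving attack phrasing."""
    messages.extend(new_messages)
    labels.extend(new_labels)
    model.fit(messages, labels)
```

The point is not the specific model: any detection logic frozen at deployment time ages quickly, whereas a pipeline built around regular retraining has at least a chance of tracking how attack phrasing drifts.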

The Human Factor

At the heart of social engineering attacks is the exploitation of human psychology. AI exacerbates this vulnerability by enabling attackers to analyse and understand human behaviour at scale. This deep understanding allows for the crafting of highly targeted attacks that exploit specific vulnerabilities, such as authority bias, urgency, or fear, making traditional cyber security training less effective.

Training and Awareness Challenges

Raising awareness and training individuals to recognise and resist AI-driven social engineering attacks is more challenging than ever. The realistic nature of deepfakes and the personalisation of phishing emails can bypass the sceptical scrutiny trained into employees. This necessitates a new approach to cyber security education, one that accounts for the sophistication of AI-driven threats.

Ethical and Regulatory Implications

The use of AI in social engineering attacks also raises complex ethical and regulatory questions. The ability of AI to impersonate individuals and create convincing fake content challenges existing legal frameworks around consent, privacy, and freedom of expression. It also raises the question of what should happen when an employee is deepfaked, or falls for such an attack.


Securing the Future

Defending against AI-driven social engineering attacks requires a multifaceted approach that combines technological solutions with human insight. Implementing advanced AI and machine learning tools to detect and respond to threats in real time is crucial. However, equally important is cultivating a culture of cyber security awareness that empowers individuals to question and verify, even when faced with highly convincing fakes.
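As a rough illustration of what real-time detection can look like, the sketch below, again assuming Python with scikit-learn, scores incoming finance events against a model of normal behaviour and escalates outliers for human verification. The features, values, and payee flag are invented for illustration:

```python
# A minimal sketch of real-time anomaly scoring on finance events.
# Assumes Python with scikit-learn and numpy; the features and values
# are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: [hour_of_day, amount_usd, new_payee_flag]
historical_events = np.array([
    [10,  1_200.0, 0],
    [11,    950.0, 0],
    [14,  2_100.0, 0],
    [9,   1_800.0, 1],
])

# Fit a model of "normal" behaviour from past, verified transactions.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(historical_events)

def looks_anomalous(event: list[float]) -> bool:
    """Score one incoming event; -1 from predict() means it is an outlier."""
    return detector.predict(np.array([event]))[0] == -1

# A huge out-of-hours transfer to a new payee is likely to stand out and
# should be escalated for human verification, never auto-approved.
if looks_anomalous([2, 25_000_000.0, 1]):
    print("Escalate: verify via a known-good channel before paying")
```

Note that the model only flags the event; the actual control is the human step of verifying the request over a known-good channel, which is exactly what failed in the $25 million deepfake case described above.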

By understanding these challenges and adopting a proactive, AI-informed approach to cyber security, organisations can navigate the maze of digital threats. If you're concerned by the cyber threats covered in this blog, get in touch with the experts at Integrity360.

