Blog

Four Artificial Intelligence Threats That Will Challenge the Cybersecurity Industry

Written by SecureOps Team | Jun 12, 2023 4:00:00 AM

Artificial intelligence (AI) and machine learning (ML) systems are becoming increasingly popular, especially with the advancements of OpenAI’s ChatGPT and other large language models (LLMs). Unfortunately, that popularity has also made these technologies attractive for nefarious purposes, turning AI and ML into an imminent threat to security operations centers (SOCs). SOC teams need to start preparing and building threat models now to stay ahead of these emerging risks.

In this blog post, we’ll cover four major AI threats your security operations team should plan and budget for to stay ahead of the game.

Understanding AI Today

According to a report published by Statista, the volume of data generated worldwide is growing at a rate of roughly 40% and is projected to reach 394 zettabytes by 2028. One of the primary outcomes of this explosion of data is the emergence of an artificial intelligence (AI) ecosystem. The term “AI ecosystem” refers to machines or systems with high computing power that mimic human intelligence. The current AI ecosystem features technologies such as machine learning (ML), artificial neural networks (ANNs), and robotics.

Artificial intelligence (AI) and machine learning (ML) are two terms often used interchangeably, but the distinction matters. AI is the broader goal of building systems that, like humans, can adopt new behaviors without explicit instruction. ML is a subset of AI that applies predefined algorithms to specific types of data to produce expected outputs; the algorithms themselves are fixed, but the models they produce learn from data and make predictions.

Types of Attacks Leveraging AI

AI-Powered Malware

AI-powered malware can identify security gaps within an organization’s systems and quickly exploit them. This kind of attack can cause a network to shut down entirely or allow attackers to surreptitiously exfiltrate sensitive information.

Malware developers are increasingly using AI to evade detection by traditional antivirus tools. Threat actors can use deep learning techniques to create new malware variants that can evade traditional signature-based detection methods. SOC teams need to invest in advanced AI-based malware detection solutions that can identify and thwart these sophisticated attacks.

Figure 1 – A common attack chain following 14 tactics defined by the MITRE ATT&CK Matrix for Enterprise
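To make the defensive side a little more concrete, here is a minimal sketch of what “behavior-based” detection can look like compared with static signatures: a classifier trained on behavioral telemetry rather than file hashes. The feature set (API-call counts, payload entropy, connection and registry activity) and the synthetic data are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch: behavior-based malware classification instead of static signatures.
# The feature names and synthetic data below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical per-sample features:
# [suspicious API calls, payload entropy, outbound connections, registry writes]
benign = rng.normal(loc=[2, 4.5, 3, 1], scale=1.0, size=(500, 4))
malicious = rng.normal(loc=[9, 7.5, 12, 6], scale=1.5, size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "malicious"]))
```

In practice the value comes from the features, not the model: behavioral signals generalize to new malware variants in a way that fixed signatures cannot.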

Social Engineering Using AI

Phishing scams and social engineering attacks are commonplace; however, AI-enabled social engineering attacks can be far more potent. Bad actors can use AI systems to create voice bots or chatbots that mimic real people and use them to manipulate targets into sharing information that can then be used to gain access to systems.

AI makes it easier for cybercriminals to launch complex social engineering attacks in which the attacker takes on the identity of someone the target trusts, such as a senior executive or a business partner. AI uses natural language processing to generate content, such as emails or chat messages, that sounds authentic. SOC teams should build AI models that can identify machine-generated content and distinguish between genuine messages and those that could be malicious.
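One hedged starting point, and only an assumption rather than a production detector, is to score inbound messages with a language model and flag text whose perplexity is suspiciously low, since LLM-generated text tends to be statistically “smooth.” The sketch below uses the open GPT-2 model from Hugging Face Transformers purely as an illustration; the threshold is an invented placeholder that would need tuning on real mail data.

```python
# Sketch: flag messages whose language-model perplexity is suspiciously low,
# a rough heuristic for machine-generated text. GPT-2 is used only as an
# illustration; the threshold below is an assumed placeholder, not a real value.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the language-model perplexity of a piece of text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

SUSPICION_THRESHOLD = 25.0  # assumed cutoff; lower perplexity -> more "machine-like"

msg = "Please process the attached invoice today; the CFO has approved the transfer."
score = perplexity(msg)
print(f"perplexity={score:.1f}",
      "-> flag for review" if score < SUSPICION_THRESHOLD else "-> likely fine")
```

On its own this heuristic produces false positives and negatives, so it is best treated as one weak signal among many (sender reputation, header anomalies, payment-change context), not a verdict.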

Using AI, attackers can generate massive business email compromise (BEC) scams. The process for doing so is remarkably easy:

  1. Scrape data from a trove of employee LinkedIn profiles 
  2. Map out the products, projects, and groups those employees work on 
  3. Feed that information into an LLM
  4. Generate social engineering content

The result is extremely convincing emails that look as if they come from the employees’ bosses or CFOs, and they can even include precise details about the projects those employees are working on. If an attacker managed to compromise internal company data and feed that into the LLM as well, they could make the attack look even more authentic.

Data Poisoning

According to a Gartner article, data poisoning attacks will be a considerable cybersecurity threat in the coming years. The goal of these attacks is to intentionally introduce false data into an organization’s data stores, thereby skewing the results of any predictive modeling or machine learning algorithms. A form of adversarial attack, data poisoning involves manipulating training datasets by injecting poisoned or polluted data to control the behavior of the trained ML model and make it deliver false results.
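The mechanics are easy to demonstrate on a toy example. The sketch below uses synthetic data and is not a claim about any particular system: an attacker flips the labels on a fraction of the training set, and the accuracy of the resulting model degrades even though nothing about the pipeline visibly changed.

```python
# Toy label-flipping demo on synthetic data: poisoning a fraction of training
# labels degrades the trained model. All data and numbers are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print("clean model accuracy:   ", round(train_and_score(y_train), 3))

# Attacker flips 20% of the training labels.
rng = np.random.default_rng(1)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("poisoned model accuracy:", round(train_and_score(poisoned), 3))
```

The defensive implication is that training data needs the same integrity controls as production data: provenance tracking, validation of new samples, and monitoring for sudden shifts in label distributions.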

The potential damage of backdoor attacks on machine learning models cannot be overstated. Such attacks are not only more sophisticated than straightforward injection attacks but also more perilous. 

By planting undetected corrupt data in an ML model’s training set, adversaries can slip in a backdoor. This hidden pathway allows them to manipulate the model’s behavior without the knowledge of its creators. The malicious intent behind a backdoor can go unnoticed for extended periods: the model functions as intended until certain preconditions are met, triggering the adversarial behavior. Taking measures to prevent backdoor attacks on your machine learning models is therefore crucial.
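A rough sketch of how such a backdoor can be planted is shown below, again on purely synthetic data. The “trigger” (a specific value in one feature), the poisoning rate, and the target class are all arbitrary assumptions chosen for illustration: the poisoned model still scores well on clean test data, which is exactly why this class of attack is hard to spot.

```python
# Sketch of a training-set backdoor: samples stamped with a "trigger" feature are
# relabeled to the attacker's target class. The model behaves normally on clean
# inputs but tends to obey the trigger. Purely synthetic and illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

TRIGGER_COL, TRIGGER_VAL, TARGET_CLASS = 0, 10.0, 1  # assumed trigger definition

# Poison 5% of training samples: stamp the trigger and force the target label.
rng = np.random.default_rng(2)
idx = rng.choice(len(X_train), size=int(0.05 * len(X_train)), replace=False)
X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx, TRIGGER_COL] = TRIGGER_VAL
y_poisoned[idx] = TARGET_CLASS

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_poisoned, y_poisoned)

# Clean accuracy looks normal, so the backdoor is easy to miss...
print("clean test accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))

# ...but inputs carrying the trigger are pushed toward the attacker's class.
X_triggered = X_test.copy()
X_triggered[:, TRIGGER_COL] = TRIGGER_VAL
hit_rate = (model.predict(X_triggered) == TARGET_CLASS).mean()
print("fraction of triggered inputs classified as target class:", round(hit_rate, 3))
```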

Generating Deepfakes with AI

Since AI can create convincing imitations of human activities, such as writing, speech, and images, generative AI can be used in fraudulent activities such as identity theft, financial fraud, and disinformation. AI-generated deepfakes produce content that appears authentic while containing false information. For instance, a deepfake video can depict someone saying or doing something they never did, leading to reputational damage, loss of credibility, and other types of harm.

Deepfake techniques have been developing for years; recently, however, deepfakes have become far more accessible and advanced thanks to robust and versatile generative models such as autoencoders and generative adversarial networks. This only makes it harder for security professionals to distinguish between what is real and what is fake. Deepfake technology uses machine learning algorithms to analyze and learn from real data, such as photos, videos, and voices of people, and then generate new data that resembles the original with some changes.

Deepfake technology relies on artificial neural networks (ANNs) that learn from data and carry out tasks that would otherwise require human intelligence. Developers use two ANNs to create deepfakes: one generates fake data, and the other judges how convincing that data looks. The generator uses this feedback to improve its output until it deceives the discriminator; together, the two networks form a generative adversarial network (GAN).
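For readers who want to see the generator/discriminator loop described above in code, here is a minimal sketch in PyTorch on toy one-dimensional data rather than images. The layer sizes, learning rates, and target distribution are arbitrary assumptions; real deepfake models are vastly larger, but the adversarial training loop has the same shape.

```python
# Minimal GAN sketch on toy 1-D data (not images): a generator learns to mimic a
# target distribution while a discriminator learns to tell real from fake.
# Layer sizes, learning rates, and the target distribution are illustrative.
import torch
import torch.nn as nn

def real_samples(n):
    return torch.randn(n, 1) * 0.5 + 3.0  # "real" data drawn from N(3, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_samples(64)
    fake = G(torch.randn(64, 8))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 for generated samples.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generated samples should drift toward the real mean of 3.0.
print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())
```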

Real-World Examples of Deepfake Technology:

  1. A deepfake video of former US president Barack Obama giving a speech that he never gave, created by comedian Jordan Peele.
  2. A deepfake video of Facebook CEO Mark Zuckerberg boasting about having billions of people’s data, created by artists Bill Posters and Daniel Howe.
  3. Deepfake videos of Tom Cruise performing various stunts and jokes, created by TikTok user @deeptomcruise.
  4. A deepfake app called Reface that allows users to swap their faces with celebrities in videos and GIFs.
  5. A deepfake app called Wombo allows users to make themselves or others sing and dance in videos.

Take Proactive Steps to Protect Against AI Threats

AI technologies and machine learning systems are developing rapidly, and their applications in cybersecurity are continually evolving. Security operations center teams must actively prepare for the growing security threats AI poses: building threat models, upskilling analysts, and procuring the right tools to meet emerging challenges. In this rapidly changing technological landscape, SOC teams that proactively build their resilience to AI-based threats will be better positioned to protect their organizations.