Blog

The Critical Need to Secure AI Deployments Against Cyber Risk

Written by SecureOps Team | Sep 29, 2025 3:32:09 PM

Generative AI use is nearly ubiquitous in companies of all types. Agentic AI is close behind. A recent survey found 82% of responding companies now use AI agents. More than half of AI agents access sensitive data daily. And 80% of those companies’ report experiencing unintended actions from their AI agents. Twenty-three percent reported their AI agents tricked into revealing access credentials.

While 96% of the 353 IT professionals surveyed viewed AI agents as a growing security risk, it’s alarming that only 54% say they have full visibility to the data agents can access and only 44% have any governance policies in place to keep them in check.

Yet, the race to incorporate AI—both generative and agentic—into operational processes races on. The Institute of Business Value’s 2025 COO study finds 70% of executives say Agentic AI is crucial to organizational future. The same number say agentic AI is already market ready. Fifty-nine percent of COOs say the potential productivity gains from automation and AI are so great they must accept significant risks to stay competitive. But, even given technological advancements, 59% of COOs say effectively integrating AI into existing business processes remains challenging.

An Ernst & Young Technology Pulse Poll published in May found that 49% of tech company executives identified data privacy and security breaches as their biggest concern for deploying agentic AI (19 percentage points higher than it was in 2024). Yet 92% expect to increase AI spending over the next year and half of tech executives say more than 50% of AI deployments will be autonomous in their company in the next 24 months.

The growing push for AI demands that companies align their AI strategies with their cyber resilience goals and risk tolerance while in pursuit of an AI-driven competitive advantage. Without a determined approach to securing AI, the potential for damage is just too high.

Gain Clarity on the AI Risk vs. The Hype

Given all the hype and excitement about AI, two things can be true at the same time. AI offers new opportunities and innovation potential. AI also brings new risks to cybersecurity, which is critical for business resilience and enabling the pursuit of those coveted business innovations. This is not new. Speed to market versus resiliency has been an issue with every disruptive technology to date…internet…ecommerce…cloud…mobile.

However, this inexhaustible hype leads to confusion and the potential to focus on the wrong risks depending on which model you’re securing—Generative AI or Agentic AI. While some risks are inherent to both approaches, there are important divergences between them when it comes to cybersecurity.

Most of the common risk frameworks and standards from ISO, NIST, and those shared by ISF, focus on traditional AI (GenAI).

GenAI is a traditional, reactive system. It behaves like a product. It acts in response to a specific human prompt. It doesn’t take independent action or pursue multi-step goals on its own.

Agentic AI is a proactive system. It acts as an end user. It’s designed to define and achieve a specific high-level goal with minimum human oversight. It can break a complex task into smaller steps, interact with various external tools, and adapt its strategy based on real-time feedback.

Unpacking AI Risk for Cybersecurity

While new AI security tools are rolling out just as fast as new AI applications, it’s unwise to silo AI security as a standalone thing. Instead, consider AI security as an extension to your overall cybersecurity strategy grounded on the foundations of visibility, context, and relentless improvement with resilience as the goal.

Risks Specific to Reactive GenAI:

The most common risks identified by current standards include:

  • Model evasion or subversion: exfiltrating data by tricking GenAI to bypass model gating or using prompts and/or data to self-train the model which accidently reveals the data.
  • Hallucination: GenAI generates false, nonsensical, or unfaithful information.
  • Prompt injection: an attacker injects prompts to manipulate the model by overriding its original programming.
  • Data poisoning: an attacker introduces malicious data into the training dataset of a GenAI model to produce biased or incorrect outputs.

Risks Specific to Proactive Agentic AI:

The autonomous nature of AI agents introduces more risks, including:

  • Overwhelming the HITL: Attackers exploit humans by posing too many decision requests for the human in the loop to respond to properly, allowing malicious requests to slip past them.
  • Cascading hallucinations: Hallucinations can affect the deployment of the orchestration layer or introduce conclusions that create a downstream effect that cascades into a systemic, growing issue.
  • Intent breaking and goal manipulation: Agents determine goals based on original intent. Subtle manipulation with prompts can cause diversion from the original goal.
  • Identity Spoofing: AI agents spawn other autonomous agents. Attackers can exploit the complex mess this makes for access control and management.

Our SVP of Security Services and Technology, Erik Montcalm, is diving into this growing threat vector and cyber risks in his session, Protecting the Next Phase of AI, at the ISF World Congress in October, including recommendations for risk mitigation.

Proceed with Caution in Deploying Agentic AI

AI is not just a risk to cybersecurity. An article from MIT Sloan, Agentic AI at Scale: Redefining Management for a Superhuman Workforce, shares experts debating about whether implementing agentic AI demands new management approaches to address accountability. The consensus is, because AI agents are not legal persons, humans and the companies that deploy them are accountable for the outcomes created by an AI agent.

Most relevant to the discussion here is the disruptive nature of AI due to speed and scale, as explained by Shelley McKinley, chief legal officer at GitHub: “Today’s workflows were not built with the speed and scale of AI in mind, so addressing gaps will require new governance models, clearer decision pathways, and redesigned processes that make it possible to trace, audit, and intervene in AI-driven decisions.

It’s critical to remember that if you’re touching an AI, you’re touching an attack surface. You can’t wait to think about AI security until after deployment.