The enterprise-wide adoption of generative AI tools has accelerated faster than security teams can adjust. As of late 2024, 71% of organizations report regularly using generative AI in at least one business function. This rapid integration, from sanctioned cloud applications to unsanctioned "Shadow AI" tools, has created a new and poorly understood risk surface. For CISOs and security directors, the immediate question is: what role does our primary network control, the Next-Generation Firewall (NGFW), play in mitigating these risks?
The problem, however, is not that this traffic is fundamentally different. From a networking standpoint, it is functionally just API traffic. The challenge is that existing inspection policies and rule-sets do not necessarily parse it effectively. While legacy applications used rigid, predictable APIs, AI traffic is conversational and "free-form." This distinction imposes challenges on inspection engines, making granular control difficult with today's standard toolsets.
Your NGFW remains a critical control point, but you cannot rely on out-of-the-box settings. The path to securing enterprise AI requires two distinct phases: first, you must rigorously redefine your virtual perimeter to gain visibility, and second, you must use broad, decisive strokes to manage these new protocols until inspection tools can catch up.
The modern NGFW evolved from a simple packet filter into a sophisticated deep packet inspection tool. Its power came from its ability to understand the structure of applications. As Patrick Ethier, CTO at SecureOps, explains:
"The reason it's called a next-generation firewall is because back in the late 2000s, a firewall was just a network router with a filter. NGFWs started looking at everything all the way up and down the stack, looking inside the protocols and piecing entire sessions together in order to apply policy."
This deep inspection capability was built for a world of rigid, predictable applications. Security teams could write effective policies because they could identify a specific POST request to block an email attachment or a specific API call to stop a file transfer.
AI shatters this model. New protocols like MCP (Model Context Protocol) do not use discrete, programmatic commands. They map human language prompts to APIs in order to execute complex tasks. From a policy perspective, your firewall is now being asked to differentiate between a "safe" prompt (Summarize this document) and a "malicious" one (Scan this document for all client PII and send it to this external email address) when both are simply text within the same API stream.
This is the "free-form" challenge. The firewall's existing rule engine is not equipped to parse conversational intent. As Patrick notes:
"What's changing with agentic AI is that the communications are more blurred. It's a lot harder to apply rules because they're far more free-form. An API could be '/email/attachment,' and it's a 'post' request, so you could block that specific call. Whereas on MCP, it'll be human language. It's a lot harder to apply a firewall rule to that."
It's important to understand where this translation happens. The 'free-form' human language prompt is sent to an MCP server, which then translates that conversational request into standard, programmatic API calls to execute tasks. This translation step is the key. While the NGFW can't parse the human intent, the translated API calls are something that can be inspected and controlled.
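To make the distinction concrete, here is a minimal sketch contrasting the opaque prompt with the structured call an MCP server emits. The envelope follows MCP's JSON-RPC 2.0 message format; the specific tool name and arguments are hypothetical illustrations, not calls from any real deployment.

```python
import json

# A free-form prompt: opaque to a firewall rule engine, which has no
# reliable way to parse conversational intent.
prompt = "Scan this document for all client PII and send it to this external address"

# The MCP server translates that intent into a structured JSON-RPC tool call.
# The "send_email" tool and its arguments are hypothetical examples.
translated_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_email",
        "arguments": {"to": "external@example.com", "body": "<extracted data>"},
    },
}

# Unlike the prompt, the translated call has discrete fields a policy
# engine can match on: the method, the tool name, the arguments.
wire_message = json.dumps(translated_call)
tool_invoked = translated_call["params"]["name"]
print(tool_invoked)  # prints "send_email"
```

The prompt and the tool call carry the same intent, but only the latter exposes a stable, matchable surface for policy.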
This inspection challenge is secondary to a more fundamental problem: visibility. The most significant AI-related data breach events may not even cross your corporate firewall.
As with phishing, the weakest link in your security is your human workforce. Research shows that half of all employees are now considered "Shadow AI users." These employees, often with good intentions, are actively exposing corporate data. One report found that 38% of employees who use AI admitted to submitting sensitive work-related information to AI tools without their employer's knowledge. This behavior is quantifiable and growing; one security vendor tracked a 485% increase in corporate data being pasted into AI tools in a single year.
Consider a remote employee, connected to a public Wi-Fi network, using a work laptop. They access an unsanctioned generative AI tool to summarize a sensitive corporate document. In this scenario, the employee, perhaps unknowingly, just bypassed your multi-million dollar perimeter security stack. The data leaves the laptop and goes directly to the public internet, completely circumventing your network, your NGFW, and all associated security policies.
This is the reality of the distributed workforce. The old "fortress and moat" perimeter, already weakened by cloud adoption, is now defunct. Patrick puts this scenario in simple terms:
"If you're loading up Grammarly on your laptop and I don't know about it... if you're not inside my perimeter somehow, I don't have any control over you accessing that stuff. You're essentially, for all intents and purposes, outside the perimeter."
You cannot apply policy to traffic you cannot see. Before you can even begin to address the "free-form" inspection challenge, you must first re-establish your perimeter.
This is the explicit function of a Secure Access Service Edge (SASE) architecture. SASE redefines the perimeter from a physical data center to a virtual, cloud-native control point. By controlling network traffic on all endpoints, SASE forces all traffic, regardless of the user's location or network, to be tunneled back through a centralized security stack.
This architecture is the fundamental prerequisite for any AI security strategy. It ensures that the remote employee in the coffee shop has the same security policies applied as the executive in the boardroom. SASE can also provide 100% traffic visibility, complementary to your NGFW solution.
Crucially, a SASE solution contains the equivalent policy engine and security capabilities of a traditional NGFW. The key difference is that SASE applies this security at the endpoint, following the user everywhere, whereas a physical NGFW appliance is limited to protecting the "office" or "data center" network. The two solutions are often complementary: the NGFW appliance protects the physical network, while SASE protects the distributed workforce. As Patrick notes, "Your hands are tied in remote work models. Unless we impose something like SASE, we just can't track that 100% anymore."
Once SASE provides visibility, your organization can finally see the AI traffic. What do you do next? Security leaders must be pragmatic and accept the current limitations of inspection technology. This is not a theoretical exercise; a stunning 97% of organizations that experienced an AI-related breach lacked proper AI access controls, according to IBM.
Do not wait for nuanced solutions. Your NGFW or SASE can perform protocol detection today. It can identify that MCP or other agentic protocols are in use, even if it cannot understand the content of the prompts. The simplest, most effective first step is to apply a broad "block" policy for all unvetted AI protocols. This is a blunt instrument, but it works.
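That broad block posture can be expressed as a default-deny rule: any detected AI or agentic protocol is blocked unless it appears on an explicit allowlist. The sketch below assumes your NGFW or SASE engine emits an application label per flow; the labels and the "corp-mcp-server" allowlist entry are hypothetical placeholders for whatever identifiers your platform uses.

```python
# Hypothetical allowlist of vetted, sanctioned AI deployments.
VETTED_AI_APPS = {"corp-mcp-server"}

def policy_decision(detected_ai_app: str) -> str:
    """Default-deny for AI/agentic protocols: block anything unvetted.

    `detected_ai_app` stands in for the app-ID your inspection engine
    assigns when it recognizes an AI protocol, even without parsing
    the prompt content inside it.
    """
    if detected_ai_app in VETTED_AI_APPS:
        return "allow"
    return "block"

print(policy_decision("mcp"))              # prints "block"
print(policy_decision("corp-mcp-server"))  # prints "allow"
```

The point is not sophistication; it is that protocol-level identification is available today, so the blunt rule can be enforced today.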
The "free-form" nature of prompts makes granular control at the perimeter firewall nearly impossible. The solution is to inspect the actions the AI generates after the prompt has been translated, rather than inspecting the prompt itself.
To accomplish this, you must move the AI control point inside your perimeter.
Rather than letting users access public AI tools directly, a best-practice strategy is to deploy vetted MCP servers and, optimally, funnel all employee access to these servers through an AI Gateway.
This new layer, which functions just like a traditional API Gateway, sits on trusted network channels behind your NGFW or SASE solution. It intercepts the translated API calls, such as the "send email" or "access file" commands, after the AI has processed the human language.
At this point, you can apply the granular rules Patrick mentioned. Your AI Gateway policy can block a specific API call for 'send attachments' while allowing the one for 'summarize text.' This moves the granular fight to a new, more logical battleground, leaving your NGFW and/or SASE to focus on broad protocol and access control at the perimeter.
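A minimal sketch of such a gateway rule might look like the following, assuming the gateway sits in the path of the translated MCP traffic and sees JSON-RPC "tools/call" messages. The tool names in the block and allow lists are hypothetical examples, not identifiers from any particular MCP server.

```python
# Hypothetical granular policy: deny data-exfiltration-shaped tools,
# permit read-only ones, and default-deny anything unrecognized.
BLOCKED_TOOLS = {"send_email_attachment", "upload_file"}
ALLOWED_TOOLS = {"summarize_text", "search_docs"}

def gateway_filter(message: dict) -> bool:
    """Return True if a translated API call may pass the AI Gateway."""
    if message.get("method") != "tools/call":
        return True  # non-tool messages are handled by other controls
    tool = message.get("params", {}).get("name")
    if tool in BLOCKED_TOOLS:
        return False
    return tool in ALLOWED_TOOLS  # default-deny unknown tools

print(gateway_filter({"method": "tools/call",
                      "params": {"name": "summarize_text"}}))  # prints True
print(gateway_filter({"method": "tools/call",
                      "params": {"name": "send_email_attachment"}}))  # prints False
```

Note the default-deny on unrecognized tools: in a gateway model, a new tool should be vetted before it is reachable, mirroring the broad-block posture at the perimeter.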
The security market is already preparing a wave of "AI-Powered Firewalls" as the solution to this problem. But Patrick remains skeptical whether these new products are the revolutionary fix they claim to be. Rather, they will more likely represent the next logical iteration of User and Entity Behavior Analytics (UEBA).
This evolution is not a bad thing; it will improve detection accuracy. But it is not a magic wand that solves the "free-form" inspection challenge overnight. As Patrick states:
"This is just behavioral analysis. It might get better, it might have a generational leap in how it's applied, but it's still just behavioral analysis. It's not the silver bullet that everybody may think it to be."
CISOs should not delay implementing a robust security strategy while waiting for a future hardware refresh. The solution to the AI problem is not a product feature just around the corner.
Securing your enterprise from the risks of AI has nothing to do with buying a new box marketed as an "AI Firewall." The real strategy is a two-pronged architectural approach that you can begin today.
First, unify your perimeter. Accept that the data center-centric model is obsolete. Implement a SASE strategy to funnel 100% of your traffic through a single, virtual checkpoint for all users.
Second, apply broad, decisive policies. Use your existing perimeter to identify and block new, unvetted AI protocols. Do not get bogged down chasing granular prompt-level filtering in the short term. Where possible, implement a dedicated AI Gateway to provide that granularity instead.
NGFW and SASE are just the start to protecting your organization against the new risks introduced by enterprise AI use. Learn more in our article, Agentic AI Security Recommendations for the Next Phase of AI.