The recent critical vulnerability in Microsoft SharePoint Server (CVE-2025-53770), actively exploited in a large-scale campaign, has sent ripples through the cybersecurity community. A variant of a previously patched flaw, this zero-day vulnerability allows unauthenticated remote code execution and has been readily leveraged by sophisticated threat actors, including state-sponsored groups.
Reports indicate that numerous SharePoint servers worldwide have been compromised, affecting multinational firms and government entities. Notably, the U.S. Department of Homeland Security, the Department of Energy, the Department of Health and Human Services, and multiple government agencies in Quebec have been affected, prompting preventive website shutdowns and raising concerns about public safety and trust in digital services.
To unpack the implications of this attack and provide actionable insights for security leaders, we sat down with Erik Montcalm, Senior VP of Security Services and Technology at SecureOps, a Canadian boutique MSSP with follow-the-sun operations in Montreal, Prague, and Manila. With decades of experience at the forefront of cybersecurity, Erik offers a unique perspective on managing such critical threats and building resilient security postures.
Erik: It’s nasty. We're learning this isn't a brand-new attack but a variation of a vulnerability Microsoft inadequately patched earlier this year. It seems they fixed the specific reported exploit but not the root cause. Now, Microsoft has its own severity scoring for vulnerabilities as well, but the CVSS score here is 9.8.
I can’t fathom why it’s not a 10. To me, this is as terrible as it gets. It’s remotely executable, it's automatable, and there's evidence of active exploitation.
It means there are toolkits out there and adversaries are actively campaigning on this. Right now there are probably tens of thousands of bots scanning for exposed SharePoint systems, exploiting them, or just putting them on a list to come back later. They may not even know what they're going to do with these SharePoint servers, but they're building a roster. If your SharePoint server is internet-facing, it's going to be found unless you've taken it offline or patched immediately.
What's worse is that attackers are bundling it with other SharePoint exploits into a toolkit they’re calling "ToolShell." So if you patched this latest flaw but missed an older one, you're still exposed. SharePoint has become a super juicy target.
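To make the scanning threat concrete, here is a minimal exposure-check sketch in Python, assuming the `requests` library; the hostname is a placeholder for your own servers. It performs the same low-effort fingerprinting those bots do, keying on the `MicrosoftSharePointTeamServices` response header that SharePoint servers emit.

```python
# Minimal sketch: does a host answer like an internet-facing SharePoint
# server? This mirrors the low-effort fingerprinting mass scanners use.
import requests

HOSTS = ["sharepoint.example.com"]  # placeholder: your own hostnames

for host in HOSTS:
    try:
        resp = requests.head(f"https://{host}/", timeout=5,
                             allow_redirects=True)
    except requests.RequestException as exc:
        print(f"{host}: unreachable ({exc})")
        continue
    # SharePoint identifies itself (and its build) via this header.
    build = resp.headers.get("MicrosoftSharePointTeamServices")
    if build:
        print(f"{host}: reachable SharePoint (build {build}) -- "
              "confirm it is patched or pulled offline")
    else:
        print(f"{host}: no SharePoint fingerprint in response headers")
```

If a host you didn't expect shows up as reachable, assume the scanners found it long before you did.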
Erik: SharePoint is everywhere, and the use cases vary widely. For some organizations, it’s just used for internal documentation. But others integrate it into products, automation, or use it as a central hub for data. So for some, this vulnerability is no big deal. They can shut down the application, and the impact may be losing access to their call center documentation for the next 24 hours. They can use hard copies out of a binder in the meantime, even if they’re a bit outdated.
Other organizations have it integrated with critical data and critical applications. We saw government pension fund websites go down in Quebec. That means people's retirement data is hosted on SharePoint and potentially stolen by cybercriminals. This is when CISOs need to make difficult, terrible decisions, because the potential impact is so massive. But the impact of doing nothing is also very high.
Erik: That's a tough call, and it really depends on your entire defense strategy. We wouldn't make a blanket recommendation. The advice hinges on your unique situation. If you have a strong defense-in-depth posture and you're confident that you can mitigate this until you patch or that you could easily detect if something happened, then you can likely keep it online. However, if you don't have that level of maturity, or that architectural and cyber defense depth, then that’s a scenario where we might recommend taking everything offline to have a look.
Erik: Exactly. The problem isn't just about information leakage from SharePoint. Attackers are very good at "living off the land" and lateral movement. They'll use that server as an entry point to do something else on your network. So you’re faced with a very hard decision: how quickly can you guarantee containment versus how much investigation do you need to do to make sure you don't have a much larger problem?
Erik: It's standard guidance, but what's not clear to me is whether these lower-tiered Microsoft tools would have prevented the attacks or just detected them. I'm not seeing a lot of detail on that. I understand why people might have them turned off; some people feel pretty safe if their SharePoint isn't directly exposed to the internet and they disable them for performance reasons. It’s always good advice to turn on the Microsoft security stack if you don’t have another EDR, but I’m just not sure I would trust these things to solve the problem, especially this soon into the exploit's life cycle.
Erik: I wouldn't be satisfied just patching, rebooting, and turning these tools on. For all I know, it gets rid of this specific vulnerability, but does it get rid of anything else the attackers could have done? Probably not. If it's just an automated tool that got in, AMSI and Defender AV probably do a good job of cleaning it up. What I'm worried about is the more advanced attackers that use this to get in, manually reconfigure things, and create intricate attacks and leave-behinds. I'm not sure those threats would be detected by AMSI and Defender AV. That said, the higher-tiered tools, like Microsoft Defender XDR, would be a better fit for this situation.
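One practical way to look for those leave-behinds: public reporting on this campaign describes web shells such as `spinstall0.aspx` dropped into the SharePoint LAYOUTS directory. Below is a minimal hunt sketch in Python, assuming the default SharePoint 2016/2019 install path; adjust the path and lookback window for your environment, and treat any hit as a starting point for forensics, not a verdict.

```python
# Sketch: flag recently written .aspx files in the SharePoint LAYOUTS
# directory, where ToolShell web shells (e.g., spinstall0.aspx) have
# reportedly been dropped. Run on the SharePoint server itself.
import time
from pathlib import Path

LAYOUTS = Path(r"C:\Program Files\Common Files\microsoft shared"
               r"\Web Server Extensions\16\TEMPLATE\LAYOUTS")
LOOKBACK_DAYS = 30

if not LAYOUTS.is_dir():
    raise SystemExit(f"LAYOUTS directory not found: {LAYOUTS}")

cutoff = time.time() - LOOKBACK_DAYS * 86400
for aspx in LAYOUTS.rglob("*.aspx"):
    mtime = aspx.stat().st_mtime
    if mtime >= cutoff:
        # Legitimate files here rarely change outside patch windows,
        # so anything recent deserves a manual review.
        print(f"REVIEW: {aspx} (modified {time.ctime(mtime)})")
```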
Erik: All the usual things will at least slow an attacker down: least privilege, application whitelisting, and ensuring you're using MFA. For an important server like this, I'd harden it according to CIS guidelines and make sure it goes through some type of hardening scanner.
I know it’s disruptive to follow those hardening guidelines, because it makes you jump through hoops to actually get the server to do anything after that. But it’s worth it, if you’re hosting anything of importance.
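For a flavor of what those hardening scanners verify, here is a small Python spot-check of two well-known Windows hardening items. The registry keys shown are illustrative examples only; a real assessment should run a full CIS benchmark tool rather than hand-picked checks like these.

```python
# Sketch: spot-check two common Windows hardening items via the
# registry (winreg is in the standard library, Windows-only).
import winreg

CHECKS = [
    # (description, registry path under HKLM, value name, expected value)
    ("SMBv1 disabled",
     r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
     "SMB1", 0),
    ("LSA protection (RunAsPPL) enabled",
     r"SYSTEM\CurrentControlSet\Control\Lsa",
     "RunAsPPL", 1),
]

for desc, path, name, expected in CHECKS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
        status = "PASS" if value == expected else f"FAIL (value={value})"
    except OSError:
        status = "FAIL (value not set)"
    print(f"{desc}: {status}")
```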
Beyond that, it's going to eventually come down to the response, right? How quickly can your team respond to this? How quickly can you detect an attack through threat intelligence?
I'd bet that there was some chatter about this vulnerability on the Dark Web way before Microsoft made a press release. Things like this don't just happen without a surge of activity. A proper threat intelligence program could have given you a few days’ or hours’ notice. That could be very useful in a situation like this.
After that, consider how quickly your team can patch and monitor for this. I don't mean just monitor the server or monitor for this attack. I mean: are you really covered for identity detection? What's going on inside your Active Directory? Or your Entra? These are all things that will raise your confidence level when it's time to recover.
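As a starting point for that identity visibility, a sketch like the one below pulls recent Entra ID sign-ins from the Microsoft Graph `auditLogs/signIns` endpoint and flags failures. It assumes you already hold an OAuth token for an app registration with the `AuditLog.Read.All` permission; token acquisition is omitted for brevity.

```python
# Sketch: list recent failed Entra ID sign-ins via Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
TOKEN = "<access-token>"  # placeholder: acquire via your app registration

resp = requests.get(
    GRAPH,
    params={"$top": "50", "$orderby": "createdDateTime desc"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for event in resp.json().get("value", []):
    # errorCode 0 means a successful sign-in; anything else failed.
    if event.get("status", {}).get("errorCode", 0) != 0:
        print(event["createdDateTime"], event["userPrincipalName"],
              event["appDisplayName"], event["status"]["errorCode"])
```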
Organizations that are super confident in their ability to respond were either down for a very small amount of time or not down at all. It's the organizations that are stuck trying to figure out what to do that are taking days or weeks to respond.
That's when it gets scary.
Erik: In the short term, the most critical move is either virtual patching on the network or a robust vulnerability management program, because that gets rid of the original problem quickly. But I would say that in general, the place I would get my confidence from is a mature SOC program.
If I have a robust SOC, I would be able to trust that I would detect other anomalies. If we missed the initial breach and attackers were having a party on my server for a week, that's one thing. But if I could be as confident as possible that I would notice other servers being wonky, or new accounts being created, or whatever else the attacker might do, then I would feel good about the situation. That requires an across-the-board, mature monitoring program, not just something focused on the perimeter SharePoint server.
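To pick one of those signals, Windows logs Security event 4720 whenever a user account is created. A minimal hunt, assuming you can run it elevated on a domain controller with the built-in `wevtutil` tool, might look like this:

```python
# Sketch: pull the 25 most recent account-creation events (ID 4720)
# from the Security log using the built-in wevtutil CLI.
import subprocess

result = subprocess.run(
    ["wevtutil", "qe", "Security",
     "/q:*[System[(EventID=4720)]]",   # XPath filter: account created
     "/c:25", "/rd:true", "/f:text"],  # newest first, human-readable
    capture_output=True, text=True, check=True,
)
print(result.stdout or "No account-creation events returned.")
```

Any account you can't immediately attribute to a ticket or an admin should be treated as suspect until proven otherwise.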
Erik: First, immediate impact analysis. What data is on that server? What's the worst-case scenario? In parallel, I'd put heightened monitoring in place across the Microsoft ecosystem, Entra ID, Active Directory, everything. You have to draw a map of the related risks and focus monitoring there.
Erik: That's a big hurdle. You need to ask your architects and network teams: What is this server connected to? What APIs does it use? That assumes you have a good partner or internal SOC that can adapt that quickly.
Erik: I think this is a very clear-cut example of why you need to have tested, pre-approved, and pre-trained plans. Sometimes these events go sideways very quickly. When that happens, CISOs need to be able to answer questions from legal, HR, the board, and even their own customers. If you don't have the information, people lose confidence.
The only way to have that information at your fingertips is maturity at every step: maturity in the SOC, maturity in your defense-in-depth and architecture, and maturity in understanding your impact, knowing where your data is and how it's protected. It also means having the operational readiness to ensure you have people on call and you're not stalled by "Joe being on vacation."
Erik: Exactly. This is prime vacation season. For a lot of midsize organizations, you don't have 50 people who know how to figure these things out; you only have a few. If you're realizing you don't have the staff for this day in and day out, you really should look at having a good outsourced partner that can step in, one that knows about you ahead of time and can act as part of the team to help answer these questions.
Erik: The specifics would depend on which services that client consumes, but in any scenario, our SOC would be central to the response. All of our SOC contracts include surge capability, and this is a classic surge scenario. We would immediately develop heightened monitoring and set up a dedicated communication channel, like a Slack channel or a Microsoft Teams room, where the client's team can ask questions. We would essentially become the clearing house for the investigation.
It doesn’t always mean we’re leading the investigation. Sometimes that’s in the contract, but our main goal is to assist the customer in gaining the confidence they so dearly require at that moment, because we know they're fielding a million questions. We would run specific threat hunts and help them figure out all the associated systems and whether any identities have been compromised.
Erik: If we're running their infrastructure, we would put virtual patching in place on the WAF. If we're in charge of their vulnerability management, this becomes part of our standard incident handling. We would be patching this inside of an hour, pretty much as soon as the patch is available. So really, depending on the core services the customer is consuming, we would either be responsible for most of the response or participate in a critical support role.
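For readers unfamiliar with virtual patching, the idea is to block the request pattern the exploit needs before it ever reaches SharePoint. Public reporting describes the ToolShell chain as a POST to `ToolPane.aspx` carrying a `SignOut.aspx` Referer, so a virtual patch keys on exactly that combination. The sketch below expresses the rule as Python WSGI middleware purely for illustration; in production you would encode the same condition in your WAF's own rule language.

```python
# Illustration only: the shape of a virtual-patch rule for the
# reported ToolShell request pattern, written as WSGI middleware.
def virtual_patch(app):
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "").lower()
        referer = environ.get("HTTP_REFERER", "").lower()
        if (environ.get("REQUEST_METHOD") == "POST"
                and "toolpane.aspx" in path
                and "signout.aspx" in referer):
            # Reject the exploit pattern before it reaches the app.
            start_response("403 Forbidden",
                           [("Content-Type", "text/plain")])
            return [b"Request blocked by virtual patch"]
        return app(environ, start_response)
    return middleware
```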
Erik: In this case, it’s a strong argument for it. This incident is a pretty good indication of where Microsoft's effort and attention are going. It just shows that they run their own cloud infrastructure differently from how the on-prem solutions are run. Microsoft is probably much better and faster at patching its own cloud compared to 99% of organizations out there. For most of the world’s organizations, moving this kind of infrastructure to the cloud is a very desirable thing.
Erik: Silos don't work anymore. This isn't just "security's problem." Cybersecurity usually doesn't run identity, the network, or the infrastructure. All these teams need to come together.
Erik: It starts with collaboration on projects, making network maps, documenting proper mitigation strategies, and so on. Generally, you need to make sure you know where the data is, but you also have to practice for a crisis. I’d recommend running a tabletop exercise using this exact scenario. It’s like playing Dungeons & Dragons, but with a security vulnerability. You role-play to find the gaps in your processes. Every few years, though, organizations need to go further and do actual live-fire attack simulations, not just sit around a room and pretend.
Erik: Take notes on what went right and what went wrong. It's easy to get tunnel vision and just focus on the fix, but this is a golden opportunity to improve your response for the next one.
The biggest thing I see is that people simply don't have the information they need to answer those urgent questions. When I ask, "What are the connected systems? What APIs are in use? What data is on this server?" many organizations don't have a good, current answer. They have a snapshot from when the system was deployed, but we all know that these things evolve. Not having an active, up-to-date view of your environment is the single biggest obstacle to assessing impact and responding effectively. That’s the most neglected area, and fixing that is where I would start.
The SharePoint vulnerability is an active, evolving story. Stay abreast of the latest changes and review these resources for further information:
Summary article from the U.S. Cybersecurity & Infrastructure Security Agency (CISA)
Reuters article on the vulnerability, reactions, and fallout
If your organization needs to improve its security maturity, contact SecureOps today to learn how we can help close vulnerabilities and protect against threats related to ToolShell.