Isolated Systems Need Love Too
Information security has changed a lot over the years. Way back in the dinosaur days, life was simple. Companies set up a firewall at the border and life was good. Bad guys stayed on one side of the fancy flashing box, and our personnel lived in the pristine, attacker-free paradise on the inside.
Well, that's how it was supposed to work, at least.
Over the years, the model changed. Increasingly porous borders and multiple entry points into the environment -- leased lines to business partners, site-to-site VPNs -- have left us in a state where the perimeter is largely irrelevant, or rapidly becoming so.
In some situations, though, trappings of the old model remain. For some of us, there are areas where the security model is still built around isolation. It's not every business, and it's not every industry, but it happens more often than we might care to admit. And when it does, it's important for security practitioners to have it on their watch list and explicitly test assumptions about whether the isolation is working. Why the scrutiny? Because without testing, there's no way to know -- and untested isolation may not be working nearly as well as you think.
Risks of Closed Networks
First of all, it's important to be clear that isolation -- as part of a larger, more comprehensive, well-analyzed and thought-through security posture -- is a perfectly legitimate control. For example, the PCI DSS specifically mentions network isolation as a core scoping consideration; as a result, it's hard (in practice) to become DSS compliant without some degree of segmentation. Concern comes into play, though, when organizations employ network isolation as the primary or sole control.
This could be done in support of legacy, purpose-built, or high-criticality environments where the risks of downtime are catastrophic, or nearly so. As an example, consider a hospital that segments the portions of the network that directly support patient care. It may decide that since operational issues can have a direct bearing on patient safety (i.e., interference with IP-connected biomedical devices or imaging modalities), those systems should be isolated. Alternatively, consider an industrial control network (power, manufacturing, etc.) that wishes to ensure uptime in the SCADA environment, or communications companies -- telephone carriers, broadcasters -- where disruption to specialized systems, even for a second or less, degrades quality of service.
The danger with this model is that a cycle gets established: Because the network is viewed as so valuable, segmentation and the avoidance of "intrusive" testing become paramount. And because the network is so controlled, layered security controls are less likely to get implemented. Allow this to run its course for a few years and you wind up with a network that relies almost entirely on one control. But no one knows whether that control is working, because operational personnel are loath to explicitly test it for fear of production impact.
So the risk is this: If the network segmentation isn't performing as advertised -- for example, if an operator installed software they shouldn't have, or someone opened a pathway accidentally -- risk is introduced. But that risk may not be visible at the technical level. It may eventually surface in an audit, through happenstance, or (worst case scenario) because an issue is actively exploited. Over time, pathways into the environment become more likely to appear, and because testing in this scenario is hard to do, the risk compounds. It's useful for security organizations to understand this dynamic: the longer the control goes untested, the riskier it becomes.
Things You Can Do About It
To understand the scope and degree of the problem, or whether there's a problem at all, a good first step is a careful evaluation of the closed network to determine if it's really isolated -- or if maybe it's only partially isolated. This can include selective vulnerability scanning or, in some cases, targeted penetration testing. What's important to note is that there are ways to minimize impact to production systems during these tests, but not every firm that offers this type of testing will know how. Select vendors whose personnel have experience testing environments similar to those in your scope, and consider contractually negotiating financial penalties should the test cause a production impact (not every vendor will agree to this).
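Even before engaging a testing vendor, a simple connectivity probe run from an "outside" vantage point can reveal obvious segmentation gaps. The sketch below is a minimal example, assuming a hypothetical list of hosts and ports inside the isolated segment that should be unreachable from where the script runs; the addresses and port numbers are illustrative, not real. Note that even a lightweight probe like this should be coordinated with the network's operators first.

```python
import socket

# Hypothetical targets: hosts inside the "isolated" network that should
# NOT be reachable from the vantage point where this script runs.
# Addresses and ports below are illustrative placeholders only.
TARGETS = [
    ("10.50.0.10", 502),   # e.g., a Modbus/TCP device
    ("10.50.0.11", 102),   # e.g., an ICS service port
    ("10.50.0.12", 3389),  # e.g., remote desktop on an engineering station
]

def probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection succeeds, i.e., isolation has a gap."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unroutable -- isolation held for this target.
        return False

def check_isolation(targets):
    """Return the subset of targets that are reachable but shouldn't be."""
    return [(h, p) for h, p in targets if probe(h, p)]

# Usage (from a workstation on the general corporate network):
#   for host, port in check_isolation(TARGETS):
#       print(f"WARNING: {host}:{port} reachable -- possible segmentation gap")
```

A clean run proves nothing by itself -- it only shows those specific paths were closed from that one vantage point at that one time -- but a single reachable target is concrete evidence that the isolation assumption deserves a closer look.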
Also recognize that an issue like this one is too big for security to take on alone; cooperation from others is required. So a solid next step is to enlist the business and technical communities closest to the network in question -- the business process owners and the engineers who work directly with it. Explain what you want to do and why, explain why the status quo may be a source of risk, and ask them to help you do better.
In situations where there are multiple closed networks (e.g., industrial networks at multiple, geographically distributed plants), a useful strategy is to engage all the relevant groups at once when you can. This gets them learning from and interacting with one another, and it establishes credibility for the effort when they see peers actively participating.
Of course, these are only first steps. Based on your own internal security strategy and architecture, there could be many ways you might move should you learn that your network isolation isn't working the way you thought it was. But you need to get the ball rolling first. Targeted testing to establish whether the isolation actually holds is the critical first step -- and to do it, you need the cooperation of the technical and business communities.