Blog

Latest blog entries from the Network Perception team

The Importance of Velocity in Cybersecurity


Part 4 of 4 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

In the first three parts of this blog series on cybersecurity for OT critical infrastructure, we discussed the elements and specific roles of verification and visibility in an effective cyber-resiliency framework. However, it is also important to note the requirement of velocity in the resilience equation: you need to achieve verification and visibility at speed to protect, monitor, and respond to an incident.

Cybersecurity frameworks and strategies all recognize the need for speed. The NIST Framework prioritizes rapid response and mitigation: “Response processes and procedures are executed and maintained, to ensure timely response to detected cybersecurity incidents. Also, activities are performed to prevent expansion of an event, mitigate its effects, and resolve the incident.” Respond | NIST. NERC’s framework CIP-008-5 mandates that “security incidents related to any critical cyber assets must be identified, classified, responded to and reported in a manner deemed appropriate by NERC.”

VELOCITY – Verification and Visibility at Speed in Protecting Digital and Physical Assets in Critical Infrastructure

The current critical infrastructure threat landscape includes sophisticated and capable hackers from state actors and organized criminal gangs, who often share the latest and most effective hacking tools and tactics with each other. A breach can have catastrophic consequences for OT industrial systems, so it is essential that security measures operate at speed to mitigate threats. This operational velocity is required for monitoring ports and services, security patch management, malicious software identification, and especially rapid incident response.

A quote from Gene Yoo of the Forbes Technology Council succinctly presents the stakes for both IT and OT operations: “In cybersecurity, speed defines the success of both the defender and the attacker. It takes an independent cybercriminal around 9.5 hours to obtain illicit access to a target’s network. Every minute a company does not use to its advantage gives hackers a chance to cause greater damage.” The Importance Of Time And Speed In Cybersecurity (forbes.com)

What is necessary to achieve verification and visibility at speed in cybersecurity and help reduce the threat of attackers? George Platsis, Senior Lead Technologist, Proactive Incident Response & Crisis Management at Booz Allen Hamilton, sees the need for a combination of three factors: resources, organizational structure, and environmental understanding. He notes that “you can have all the resources in the world, but if your organization is not structured to execute, you will have blind spots. Proper resources give you capability. Sound organizational structures give you ability. Strong environmental understanding gives you knowledge. There is your trifecta.” He sees technology as an enabler for bolstering those three factors with velocity: “well configured automation increases your resource capabilities and possibly your environmental understanding.”

Automation for velocity is also a theme articulated by Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium. He believes that getting operational and security telemetry from systems and networks, then analyzing it through tools and human review, requires a significant amount of integration. He says that making the data useful, and removing unnecessary alerts and false positives to chase down, is essential for response, and that it can probably cover as much as 70%-80% of the work. That automation allows for significantly greater speed. Patrick says that “the challenge is to automate where it makes sense, and with tested/proven process. All automated processes require independent monitoring, as well. Checks and/or tests to ensure the process is still functioning as expected (all controls intact and working) is crucial. This applies to the areas of 1) asset inventory; 2) phase out of fragile systems; 3) architecting networks and systems for defense; 4) change control and configuration management; 5) logging and monitoring; 6) reduction of complexity; 7) well-rehearsed incident response and recovery.”

According to Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, we are making headway on verification, visibility, and velocity. If something is happening on a machine, the machine knows it and is logging it. He says that “if you’re looking at your logs, and doing log reviews, and even having a machine review your logs for you, you’re going to see things very quickly. But if you wait for the phone call, or you wait for the website that goes down to be your first indication there’s a problem, you are way behind the curve.”

Emerging technologies, including artificial intelligence, are changing the game in terms of doing things faster and having the ability to monitor equipment and threats and automate incident response. The new capabilities for automation and reaction at speed are highlighted in the Congressional Research Service report “Evolving Electric Power Systems and Cybersecurity,” November 4, 2021.

The report states that “while these new components may add to the ability to control power flows and enhance the efficiency of grid operations, they also potentially increase the susceptibility of the grid to cyberattack. The potential for a major disruption or widespread damage to the nation’s power system from a large-scale cyberattack has increased focus on the cybersecurity of the Smart Grid.

The speed inherent in the Smart Grid’s enabling digital technologies may also increase the chances of a successful cyberattack, potentially exceeding the ability of the defensive system and defenders to comprehend the threat and respond appropriately. Such scenarios may become more common as machine-to-machine interfaces enabled by artificial intelligence (AI) are being integrated into cyber defenses.” R46959 (congress.gov)

In this blog series we discussed the elements of (1) Verification, (2) Visibility, and (3) Velocity for resilience in cybersecurity, particularly in OT critical infrastructure systems. Those three elements do not stand alone as pillars; they are part of a unified cybersecurity triad. It is this triad of velocity, visibility, and verification that will help critical infrastructure operators assess situational awareness, adhere to compliance mandates, align policies & training, optimize technology integration, promote information sharing, establish mitigation capabilities, maintain cyber resilience, and ultimately be more cyber secure.

The Importance of Visibility in Cybersecurity


Part 3 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

 

In its July 2021 Memo, the White House created a voluntary industrial control systems (ICS) initiative to encourage collaboration between the federal government and the critical infrastructure community. The key purpose of the initiative is “to defend the nation’s critical infrastructure community by encouraging and facilitating the deployment of technologies and systems that provide threat visibility, indications, detection, and warnings, and enabling response capabilities for cybersecurity in essential control systems and operational technology (OT) networks.” The memo further elaborated that “we cannot address threats we cannot see; therefore, deploying systems and technologies that can monitor control systems to detect malicious activity and facilitate response actions to cyber threats is central to ensuring the safe operations of these critical systems.” New cybersecurity initiative by Homeland Security, NIST to protect critical infrastructure community – Industrial Cyber

The concept of visibility described by the memo, knowing what assets you must manage and protect, is a fundamental aspect of any cybersecurity strategy, especially in regard to critical infrastructure, where the costs of a breach may have devastating implications. For this reason, identifying the digital and physical assets in your network is the first basic tenet of the NIST Framework, which integrates industry standards to mitigate cybersecurity risks.

NERC has also recognized the importance of visibility for compliance. NERC CIP-002-5.1a: Bulk Electric System (BES) Cyber System Categorization defines the industrial cyber assets that require visibility: Electronic Access Control or Monitoring Systems (intrusion detection systems, electronic access points, and authentication servers), Physical Access Control Systems (card access systems and authentication servers), and Protected Cyber Assets (networked printers, file servers, and LAN switches). What are the 10 Fundamentals of NERC CIP Compliance? | RSI Security

VISIBILITY: The Importance of Visibility in Protecting Digital and Physical Assets in Critical Infrastructure

How do we define visibility in cybersecurity? According to Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, visibility means knowing where you are and what’s going on. If you’re a believer in the NIST framework, the first step is identification of your assets: if you don’t know what you own, you can’t protect what you don’t know you have. Visibility of assets includes people; they’re not just wires and blinky light things, but also who has access to what, and visibility of files and resources. So, visibility truly starts with knowing what you have. Also, oftentimes it’s a user who detects something that’s not normal, calls the help desk, and says, “hey, I see something wrong here.” That alerts the help desk to ask, “okay, could this be a security incident? Or is it just a user problem, or some malfunctioning software?”

Visibility can also be viewed as the fuel for managing, protecting, and analyzing operations & assets.

Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium, sees visibility as getting sufficient data from target networks and systems into the analysis engine and then managing that data in such a way as to make it useful and not just “noise.” He notes that visibility is highly dependent on the organization. He believes that visibility starts with a sufficient asset inventory and that without one, the value and effectiveness of visibility goes down. He notes that tailored visibility and a solid asset inventory can be effective and enable IR teams to see what is happening to which systems.

Visibility also requires knowledge of the inventory of what may lurk in software.

Tom Alrich, Co-leader of the Energy Sector SBOM Proof of Concept at the National Telecommunications and Information Administration (NTIA), US Department of Commerce, has worked on NERC CIP issues since 2008. He is focused on the software aspects of visibility. He notes that the average software product has 135 components in it and that 90% of them are open source. Tom states that lots of products have thousands of components and that each component can develop vulnerabilities. He says that “the end user has no way of tracking those without a software bill of materials (SBOM) that provides visibility into component risks.”
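Tom’s point lends itself to automation: once an SBOM exists, component visibility reduces to a lookup against an advisory feed. The Python sketch below is illustrative only; the CycloneDX-style SBOM fragment, the advisory mapping, and the `components_at_risk` helper are assumptions for the example, not part of the NTIA proof of concept or any specific tool.

```python
import json

# Hypothetical CycloneDX-style SBOM fragment; real SBOMs are produced by
# build pipelines or tools that scan the delivered software.
sbom_json = """
{
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "openssl", "version": "1.1.1k"}
  ]
}
"""

# Hypothetical advisory feed mapping (name, version) to known CVE IDs.
advisories = {("log4j-core", "2.14.1"): ["CVE-2021-44228"]}

def components_at_risk(sbom_text, advisory_map):
    """Return (name, version, CVEs) for every SBOM component with a known advisory."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in advisory_map:
            hits.append((comp["name"], comp["version"], advisory_map[key]))
    return hits

print(components_at_risk(sbom_json, advisories))
# [('log4j-core', '2.14.1', ['CVE-2021-44228'])]
```

In practice, components are matched by package URL (purl) or CPE identifier rather than bare name and version, and the advisory data would come from a feed such as the NVD; the lookup logic stays the same.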

Visibility is a management and board issue.

Mary-Ellen Seale of The National Cybersecurity Society, and former Deputy Director of the National Cybersecurity Center at DHS, says that a key need is visibility of the risk associated with a company or organization at the board level, not just with an IT guy, an IT team, or a third party providing information to that baseline. Visibility requires actually “figuring out what are the critical activities that need to occur? What are the costs associated with that, and how do I present them to leadership to have them correct it?”

Visibility is about awareness.

Paul Ferrillo, Privacy and Cybersecurity Partner at Seyfarth Shaw LLP, brings a legal perspective with questions that pertain to operational visibility. “Do you know who is using your system? Is it just directors, officers, and employees? Is it vendors? Who’s accessing your system? How are they accessing your system? Is it through a mainframe computer? Is it through a laptop? Is it from a BYOD device? Are they who they say they are when they’re accessing the network?”

I agree with our expert commentators and with the insights provided in the White House memo, and by NIST and NERC, on the topic of visibility. Visibility is a necessary first step for cybersecurity in any vertical or industry. It is important for both operational teams and incident response teams to have transparent inventories of digital and physical assets in order to assess vulnerabilities to threats. Mapping interactions between networks, devices, and applications, along with the cyber-resilience roles of management, should be part of any risk management strategy protecting critical infrastructure.

Next blog: Part 4: The Importance of Velocity in Cybersecurity

 

The Importance of Verification in Cybersecurity


Part 2 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

Verification is the process of checking and attaining information about the ability of an individual, a company, or an organization to comply with standards. In the case of cybersecurity, verification is intertwined with compliance with regulatory standards based on industry best practices. The European Union’s General Data Protection Regulation (GDPR) is a good example of the linkage of verification and compliance, as are other regulatory initiatives in government such as CMMC and HIPAA.

The energy and utilities industry requires a strong adherence to verification and compliance in its security posture. Recently, the Federal Energy Regulatory Commission (FERC) released its recommendations to help users, owners, and operators of the bulk-power system (BPS) improve their compliance with the mandatory CIP reliability standards and their overall cybersecurity posture. Staff from FERC’s Office of Electric Reliability and Office of Enforcement conducted the audits in collaboration with staff from the North American Electric Reliability Corporation (NERC) and its regional entities.

In its 2021 Staff Report ‘Lessons Learned from Commission-Led CIP Reliability Audits,’ the agency advised “enhancing policies and procedures to include evaluation of cyber asset misuse and degradation during asset categorization, properly document and implement policies, procedures and controls for low-impact transient cyber assets, and enhance recovery and testing plans to include a sample of any offsite backup images in the representative sample of data used to test the restoration of bulk-electric system cyber systems.”

The report also proposed improving vulnerability assessments to include credential-based scans of cyber assets and boosting internal compliance and controls programs to include control documentation processes and associated procedures pertaining to compliance with the CIP reliability standards. FERC report recommends compliance with CIP reliability standards – Industrial Cyber

Utility security can be viewed as the integration of national security into the power and electricity sectors, especially to protect the power grid. The North American Electric Reliability Corporation (NERC) is the regulatory authority with responsibility for the reliability of service to more than 334 million people. NERC’s standards are directly aimed at encouraging or mandating steps for utilities in protecting their operation.

NERC’s authority has led to critical infrastructure protection (CIP) standards that guide utilities’ planning and activities to eliminate or mitigate the many internal and external threat profiles. The CIP standards have evolved over time both in the scope of their focus and in the level of their authority. Utility Security: Understanding NERC CIP 014 Requirements and Their Impact (electricenergyonline.com)

VERIFICATION: Establishing a Baseline and Validating Risk Assessment Frameworks

Building effective verification begins by defining the scope of the verification process. You start by selecting those mission-critical assets — determine where they are, how critical they are to daily operations and who or what has access to them. To help initiate a strategy for verification within a physical and cyber resiliency framework for mission-essential systems such as utilities, it is helpful to understand the role of verification and compliance.

According to Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, “compliance is, as everybody understands, the initial baseline. You’re required by law to be compliant with some framework. And NERC CIP is what we use for the bulk power system. I think most qualified engineers, and security professionals, know that is the baseline, the minimum that you meet.”

“NERC CIP is essentially the minimum security required as a Registered Entity under NERC,” agrees Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium. Like many cybersecurity experts, he believes that verification should be possible by any qualified party. Most organizations have SMEs within each business unit who handle the day-to-day operational aspect of compliance, but when it comes to guiding and validating evidence, that is usually performed by a central and authoritative compliance function.

According to Tom Alrich, Co-leader, Energy Sector SBOM Proof of Concept at National Technology & Information Administration US Department of Commerce, the biggest threats in the world are supply chain related, and SolarWinds and Kaseya demonstrated that not enough attention has been paid to those risks.

George Platsis, Senior Lead Technologist, Proactive Incident Response & Crisis Management at Booz Allen Hamilton states that “independent verification is your reality check. Even the best professional athletes have coaches. As good as you can be, you may have a blind spot, or something needs tweaking.”

The newly released FERC/NERC Staff Report on compliance and CIP reliability standards signals that verification will remain a key element of future policy. As our SMEs have noted in our discussion, the vulnerabilities and sophistication of potential security threats against critical infrastructure continue to expand. Therefore, it is important to incorporate a strategy that not only complies with best practices and standards, but also anticipates and mitigates new risks. In our next blog we will discuss how visibility is essential to the risk matrix.

Next blog: Part 3: Operational Visibility to Achieve Greater Cyber Resiliency

How to Achieve Cyber Resilience


Part 1 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

In industry and in government it is not a question of if you will be cyber-attacked and potentially breached, but when. The cyber-attack surface has grown exponentially larger in recent years with the meshing of OT and IT systems, and the greater connectivity brought by the Internet of Things. Also, the threat actors themselves, that include nation states, criminal enterprises, insider threats, and hacktivists, have become more sophisticated and capable. Their activities are increasingly being focused on critical infrastructure, including the energy and utilities industry.

The energy ecosystem includes power plants, utilities, nuclear plants, and the electric grid. Protecting the sector’s critical ICS, OT, and IT systems from cybersecurity threats is complex, as much of the energy critical infrastructure components have unique operational frameworks and access points, and they integrate a variety of legacy systems and technologies.

Because of the changing digital ecosystem, and the consequences of being breached, creating a cybersecurity framework that encompasses resiliency is a top priority for mitigating both current and future threats. There are multiple components of that framework that need to be explored. This is the first blog of a four-part series that will focus on the key elements of a cyber resiliency framework: (1) verification, (2) visibility, and (3) velocity. Another objective of this series is to connect cyber resiliency with NERC CIP compliance.

What is Cyber Resilience?

A joint DNI/DHS report sees cyber resilience as “important for mission-essential systems that support our national security, homeland security, essential government services, and the critical infrastructure that supports the nation’s economy. Cyber resiliency is that attribute of a system that assures it continues to perform its mission-essential functions even when under cyber-attack. For services that are mission-essential, or that require high or uninterrupted availability, cyber resiliency should be built into the design of systems that provide or support those services.” Cyber Resilience and Response (dni.gov)

In August of 2021, NIST updated its guide on Cybersecurity Resilience by sharing a new definition: The NIST Draft “turns the traditional perimeter defense strategy on its head and moves organizations toward a cyber resiliency strategy that facilitates defending systems from the inside out instead of from the outside in. This guidance helps organizations anticipate, withstand, recover from, and adapt to adverse conditions, stresses, or compromises on systems – including hostile and increasingly destructive cyber-attacks from nation states, criminal gangs, and disgruntled individuals.” SP 800-160 Vol. 2 Rev. 1 (Draft), Developing Cyber-Resilient Systems: SSE Approach | CSRC (nist.gov)

To initiate a strategy for verification, visibility, and velocity within a cyber resiliency framework for mission-essential systems such as utilities, you also need perspectives to build on the DNI/DHS definition of what constitutes cyber resilience from practitioners in the field. We asked leading experts to share their definition of resilience in the context of a cyber system.

According to George Platsis, Senior Lead Technologist, Proactive Incident Response & Crisis Management at Booz Allen Hamilton, utilities and individual organizations should have that candid talk and define what “cyber resilience” means to them. He notes that the Lawrence Livermore National Laboratory defines its Cyber and Infrastructure Resilience Program’s mission as enhancing the security and resilience of the nation’s critical infrastructure systems and networks to cyber, physical, and environmental hazards, and enabling their reliable and sustainable design and operation now and into the future. George interprets that as “the ability to keep the business going, regardless of hazard.”

Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, sees resilience as the “ability to recover, or the ability to endure some sort of pain” for any organization, and that includes utilities, from small distribution up to transmission and generation. “If you’re able to continue to operate in the face of an adversary, or be able to recover very, very quickly should something bad happen, that’s good resilience. Realistically, we’re going to have interruptions. So, how quickly you can recover from an interruption is a good gauge of your resiliency.”

Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium, states that “by and large, most utilities know that resilience means continuing to operate under negative, degraded or even adversarial operating conditions. They understand this from many perspectives, with a long history of response and recovery after natural disasters and other human/animal-caused outages (car/pole, backhoe, squirrels, etc.). Adding cyber to that, whether through accidental or malicious human action, is nothing outside of their world.”

Benjamin Stirling, former Manager of Generation Cybersecurity at Vistra, believes that frameworks for classifying the processes you are protecting are integral to cyber resilience. He says that the first step in risk analysis for OT and ICS cybersecurity is understanding and classifying the process. He notes that protecting a water treatment plant at a site versus a burner management system at a site may be two very different things. “Once you have this risk categorization piece done, then you can suggest how you’re going to protect those assets and begin to have a methodology. You can go down a path where you can have a reasonable risk-based approach to resilience.”

Paul Ferrillo, Privacy and Cybersecurity Partner at Seyfarth Shaw LLP, perhaps has the description of the topic that many can most relate to. He defines cyber resilience much like a boxing match: being able to take a punch right in the face, hit the canvas, and get back up again. For him, resilience is getting back on the internet, doing your backups, restoring your backup tapes, and getting back into play.

All these cybersecurity experts concur that cyber resilience is generally defined as being able to recover, go forward, and continue to operate in the event of an incident. Sometimes that is easier said than done, especially with the morphing of threats, a dearth of skilled cybersecurity workers, and the regulatory requirements of maintaining critical infrastructure that is often owned by the private sector and governed by the public sector.

Also, there is no one-size-fits-all cyber resilience framework, even within a single industry such as utilities. The ability to be cyber resilient starts with a risk management focus and the allocation of resources and training to varying threat scenarios, with the end goal of being able to recover quickly and remain operational. It also requires a customized strategy augmented by automation tools to keep systems optimally prepared and running.

In further discussions with the SME practitioners, it became clear that cyber risk management is the nexus for helping best secure cyberspace, especially in OT/ICS operating environments. This will require creating a cyber-resilience framework that will assess situational awareness, adhere to compliance mandates, align policies & training, optimize technology integration, promote information sharing, establish mitigation capabilities, and maintain cyber resilience in the event of incidents. This is where the specific elements of verification, visibility, and velocity need to be enabled to achieve cyber resilience.

Next blog: Part 2: COMPLIANCE VERIFICATION to achieve greater cyber resiliency

Preventing Lateral Movement Through Network Access Visibility


In the first five months of this year, we have already witnessed multiple cyber attacks against critical infrastructure in the US. Those events range from an individual endangering people’s lives by poisoning a water-treatment facility to large organized groups disrupting fuel delivery to a significant part of the country.

The increasing number and sophistication of such incidents have reinforced the importance of building resilient cyber infrastructure. Organizations have started identifying their critical systems and protecting them with multiple cyber-defense layers. However, many connected systems that form the perimeter of the organization’s network remain exposed. Such devices include external-facing servers and corporate workstations. Attackers often exploit the perimeter, leveraging existing networking services and unknown loopholes to reach the network’s crown jewels. That approach is termed lateral movement—a set of activities used by attackers to make their way from the initial entry point to critical assets. In such an expansion phase, attackers utilize several exploit techniques and use intermediate devices as stepping stones. Eventually, lateral movement enables attackers to launch data exfiltration or service disruption.

 

Lateral Movement in Action: the SolarWinds Incident


In the words of Brad Smith, President, Microsoft, the 2020 SolarWinds supply chain attack was an “attack on the United States and its government and other critical institutions, including security firms.” The incident that came into public view in December 2020 had occurred between March and June of that year. Sophisticated advanced persistent threat (APT) actors introduced malicious code into the vendor’s Orion platform, a network and endpoint management software. Subsequently, the download of the compromised software provided the APT with a foothold into the IT networks of more than 18,000 SolarWinds customers, including federal agencies and major private organizations.

Figure 1 illustrates how the malware virtually made it from the Internet to critical segments of a target network. First, the compromised Orion software gave attackers a backdoor into the victim system. Second, since a network management system is typically authorized to have two-way communication with all the devices, attackers could collect authentication keys and tokens. Brute-force password cracking attacks might have also helped attackers gain privileged access to critical servers. With knowledge of internal architecture and access to credentials, the malicious traffic could go undetected, giving attackers access to confidential information and important services. Due to the large number of entities affected, investigators believe that the extent of the damage from the attack will take years to unravel. Attackers may also carry out follow-on attacks using the information collected and tools deployed in victim networks.

Figure 1: Lateral movement in the SolarWinds incident utilized (1) delivery of malware through the software update mechanism, (2) internal reconnaissance and credential harvesting through trusted communications, and (3) data exfiltration or service disruption.

 

Why does Lateral Movement Need Special Attention?


Lateral movement has been an essential step in a majority of recent cyber attacks. However, since it is a precursor to the actual action on target, organizations have an excellent reason to invest more in defending against lateral movement and the steps that lead to it. Such preparedness would save them significant costs that they would otherwise spend on incident response and repair. 

Achieving resiliency against lateral movement attacks is challenging for three core reasons. First, the attack vectors and techniques that the adversaries can adopt are virtually unlimited. Next, the sophistication of attackers in utilizing benign OS and networking services is increasing. Finally, even though network access and security policies aim to segment networks effectively, unwanted access paths can easily result from misconfigurations, software bugs, and human errors. For example, misconfiguration of firewall access policies was a primary enabler of the attacker’s lateral movement in the 2013 Target Corporation data breach and 2015-16 Ukrainian power grid incident.

 

Preventing Lateral Movement


One important insight that benefits the defender is that an adversary, to move laterally, must have several interactions with the network and leverage the existing access patterns. Therefore, the awareness of network assets and access paths can be vital in measuring and reducing risk concerning lateral movement. Here, an access path refers to a possible network connection between two devices.

At a high level, a common approach to understand lateral movements and reduce risk exposure consists of the following steps:

  1. Computing risk metrics by analyzing the graph structure generated by network paths
  2. Specializing the metrics with additional context from services and vulnerabilities
  3. Changing network configuration to decrease the risk

The first step involves constructing a network access graph and selecting relevant metric(s) to quantify the risk. One commonly adopted metric is the number of (strongly) connected components. A strongly connected component is a maximal set of nodes in a directed graph such that every node in the set is reachable from every other node in the set. Because of that property, each connected component forms a single lateral movement domain. Hence, the presence of large connected components in the network access graph indicates network zones with higher risk.
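As an illustrative sketch, the metric can be computed in plain Python with Kosaraju's two-pass algorithm over a list of access paths. The subnet names and paths below are hypothetical:

```python
from collections import defaultdict

def strongly_connected_components(edges):
    """Kosaraju's algorithm: two DFS passes over the access graph."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    nodes = set()
    for src, dst in edges:
        graph[src].append(dst)
        rgraph[dst].append(src)
        nodes.update((src, dst))

    # First pass: record nodes in order of DFS completion.
    visited, order = set(), []
    def dfs(node):
        visited.add(node)
        for nxt in graph[node]:
            if nxt not in visited:
                dfs(nxt)
        order.append(node)
    for node in nodes:
        if node not in visited:
            dfs(node)

    # Second pass: DFS on the reversed graph in reverse finish order.
    assigned, components = set(), []
    def collect(node, comp):
        assigned.add(node)
        comp.append(node)
        for nxt in rgraph[node]:
            if nxt not in assigned:
                collect(nxt, comp)
    for node in reversed(order):
        if node not in assigned:
            comp = []
            collect(node, comp)
            components.append(comp)
    return components

# Hypothetical access paths: subnets A-D can reach each other in a cycle,
# while E only receives traffic and cannot initiate connections back.
paths = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D"), ("D", "A"), ("A", "E")]
sccs = strongly_connected_components(paths)
print(sorted(len(c) for c in sccs))  # [1, 4]: one large lateral movement domain
```

Here A through D form a single strongly connected component, i.e., one lateral movement domain, while E stands alone because it cannot initiate connections back.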

Figure 2 depicts a sample network segmented into subnets using a Cisco firewall. The figure summarizes network access paths in terms of a connectivity matrix between the different subnets. Such connectivity means that the entire network is one connected component. That is a state of high risk with respect to lateral movement and should be fixed.

Figure 2: With the access policies configured as shown in the table, the network becomes a single fully-connected graph.

 

It is easy to see the value of such analysis for real-world networks consisting of many firewalls and routers. In the second step of the overall process, we can further specialize access paths for specifics of the underlying network and the likely attack vectors. In that context, defenders can implement the following approaches relying on the situational awareness obtained in the previous step:

  1. Analyze paths, both inbound and outbound, for specific networks and devices
  2. Filter paths per service type (protocol-port combinations) to focus more on lateral movement vectors such as authentication, remote access, file transfer, and sharing services
  3. Correlate paths with vulnerability information to evaluate the reachability of highly vulnerable parts of the network to high-value assets
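To make these three approaches concrete, here is a minimal sketch, with hypothetical host names, services, and scan results, that filters access paths down to common lateral movement services and correlates them with vulnerable destinations:

```python
# Hypothetical end-to-end access paths discovered by path analysis.
# Each path: (source, destination, protocol, destination port).
paths = [
    ("Marketing", "Historian", "tcp", 443),
    ("Marketing", "Jump-Host", "tcp", 3389),   # remote access
    ("DMZ-Web", "File-Server", "tcp", 445),    # file sharing
    ("Corp-DC", "SCADA-HMI", "tcp", 88),       # authentication
    ("Guest-WiFi", "Printer", "tcp", 9100),
]

# Services commonly abused for lateral movement (assumed, non-exhaustive).
LATERAL_MOVEMENT_SERVICES = {
    ("tcp", 3389): "RDP", ("tcp", 22): "SSH", ("tcp", 445): "SMB",
    ("tcp", 88): "Kerberos", ("tcp", 21): "FTP", ("tcp", 5985): "WinRM",
}

# Hosts flagged by a (hypothetical) vulnerability scan.
vulnerable_hosts = {"Jump-Host", "File-Server"}

# Approach 2: keep only paths over lateral movement services.
risky = [
    (src, dst, LATERAL_MOVEMENT_SERVICES[(proto, port)])
    for src, dst, proto, port in paths
    if (proto, port) in LATERAL_MOVEMENT_SERVICES
]
# Approach 3: correlate the remaining paths with vulnerability data.
critical = [p for p in risky if p[1] in vulnerable_hosts]
print(critical)
```

In a real deployment the paths and vulnerability data would come from path analysis and scanner output rather than hand-written lists, but the filtering logic is the same.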

The final step in the risk mitigation process is to identify root causes and fix them. With the precise and actionable information collected so far, security admins can take concrete steps, including the following:

  1. Break down large connected components into many smaller ones to limit the extent of lateral movement domains
  2. Limit reachability of highly vulnerable nodes to critical assets

For instance, in the network presented previously, an admin may choose to limit direct access from ‘Marketing’ to the rest of the network. To accomplish that, as we show in Figure 3a, she can select the specific path and correlate it with the corresponding configuration entry. She can then quickly limit the connectivity and transform the network to the safer state shown in Figure 3b.

Figure 3a: Correlating network paths (shown by red arrows) with the corresponding entry in firewall configurations (highlighted by the red box).

 

Figure 3b: Modifying firewall configuration leads to segmenting the network in multiple connected components and improving the overall security posture.

 

Responding to Lateral Movement


The recent attacks against critical infrastructure have reinforced that lateral movement is an integral part of cyber threats. Therefore, as soon as an initial compromise is detected, quickly determining which other systems are endangered is the key to minimizing the damage. Subsequently, one can isolate those assets and restore them in a safe state. 

An accurate understanding of current access paths is a strong ally in reducing risk exposure. Security teams can examine outgoing network access paths from suspected compromised nodes and filter them by compromised services to limit the search space. In particular, a stepping-stone analysis is essential to determine how many hops away specific systems are from a network access standpoint. We have discussed such analyses in detail in our previous article on accelerating incident response.

 

Summary


In this article, we have discussed strategies for countering malicious lateral movement. Specifically, we have demonstrated that situational awareness of network assets and access paths is crucial for blocking lateral movement. In that context, we have illustrated the use of two graph-based risk metrics: the number of connected components and reachability.

Experts have emphasized the importance for cyber-resilient organizations to think in graphs. However, understanding the complex architecture of multi-layer networks can be extremely challenging. Network Perception’s solutions NP-View and NP-Live have been designed to address this challenge by enabling real-time visibility into network assets and access paths, making it easy to adopt the graph-thinking paradigm in practice.

Where was your Baseline when the Colonial Incident Happened?

By Blog

The Importance of Knowing your Baseline

On May 7, Joseph Blount, CEO of Colonial Pipeline, authorized a ransom payment of $4.4 million to DarkSide, a cyber criminal gang believed to be based in Eastern Europe. Executives at Colonial were forced to make decisions quickly and with incomplete information: they were unsure how badly the cyberattack had breached their systems or how long it would take to bring the pipeline back. Operators of the Colonial Pipeline learned the company was in trouble when an employee found a ransom note displayed on the screen of a control-room computer. This cyberattack underscores the growing impact of cyberthreats on industrial sectors and the fact that attackers are now specifically targeting critical infrastructure to increase their profit.

It is impossible to predict the target or nature of the next cyber attack, but all critical infrastructure industry executives should be asking themselves the same question right now: where is my baseline? Executives don’t know the who, what, how, where, or when of the next attack, but every company can raise the baseline of its cyber resilience posture. Companies that have invested in creating a higher level of cyber resiliency are working from a different baseline and have put themselves in a better position to respond quickly and effectively to reduce cost and risk. These companies will have the information they need for faster, more efficient decision making. Companies that prioritize and invest in cyber resiliency as part of their cybersecurity posture are effectively removing risk from the inevitable next cyber attack.

How to Establish Your Baseline

Establishing the initial cyber resiliency baseline is a core step of the Structured Cyber Resiliency Analysis Methodology (SCRAM) developed by MITRE. The goal is to answer the question: what can we build on? This is accomplished by reviewing current capabilities, policies and procedures already in place, cybersecurity solutions deployed, and the gaps to close in order to achieve the relevant cyber resiliency goals. As illustrated in the SCRAM document, the result of this activity can be recorded in a scorecard.

In the context of the Colonial Pipeline ransomware incident, the crucial parts of the baseline to review are:

  • The ability to visualize asset inventory, network architecture, and network access
  • The ability to verify correct privilege restriction and network segmentation
  • The speed of existing response capabilities

An efficient approach to build the initial baseline is to use the Colonial attack as a scenario to engage with relevant subject matter experts (SMEs) in your company. Once the baseline has been defined, then a gap analysis can be conducted in order to create and implement a cyber resiliency plan.

Baseline and Cyber Resiliency

The World Economic Forum published this week a guidance document on cyber resiliency that presents 10 key principles that executives in the industrial sector should understand and adopt. In particular, principle #7 states that:

The board ensures that management supports the officer accountable for cyber resilience through the creation, implementation, testing and ongoing improvement of cyber-resilience plans, which are appropriately harmonized across the business. It requires the officer in charge to monitor performance and to regularly report to the board.

Capturing the initial baseline plays a crucial role to create such plans, since it enables all stakeholders to develop a common understanding on which a path to higher cyber resiliency can be defined. This is important to build alignment among business units and across all levels of the organization.

 


Could CIP-005 have prevented the SolarWinds attack?

By Blog

It has been four months since the SolarWinds attack was discovered, and many organizations are still deep into clean-up efforts. If you have been affected by this event, excellent resources have been published to dissect the malware involved and to help with identification and remediation. We previously discussed lessons learned from the SolarWinds compromise to emphasize the importance of maintaining continuous visibility over networks and of ensuring a clear separation of duties between monitoring and control solutions. In this article, we explore the role of network segmentation through the lens of CIP-005 and the concept of Electronic Security Perimeter (ESP).

Best Practices from the Electric Industry

In the world of industrial control systems (ICS), priorities are different compared to a traditional corporate environment. Indeed, an IT server shutting down unexpectedly may frustrate users and cause financial damage, but an Operational Technology (OT) server shutting down unexpectedly may impact industrial equipment and possibly injure people. As a result, safety and reliability are top priorities for ICS and this is why the adoption of a strict risk assessment and compliance framework is paramount in the OT space.

To that end, the NERC CIP standards have significantly impacted the way electric utilities in North America are deploying and configuring the firewalls protecting their critical cyber assets. This is particularly important in the context of the SolarWinds attack since understanding trusted communication paths and data flows can directly help mitigate and prevent not only current but also future cyber attacks. It could even be said that better network segmentation could have prevented the breach of the SolarWinds build environment in the first place. To quote from Tom Alrich’s article:

The software build environment would need to be protected in a similar fashion to how the Electronic Security Perimeter (ESP) is required to be protected by the NERC CIP standards – in other words, there should be no direct connection to the internet, and any connection to the IT network should be carefully circumscribed through measures like those required by CIP-005.

At Network Perception, we know CIP-005 quite well since we designed NP-View and NP-Live with the specific goal of helping the NERC industry with the implementation and control of CIP-005 requirements. Following up on Tom’s suggestion, we provide practical guidance in the section below on how CIP-005 could be leveraged by any organization that has critical systems to protect, whether they reside in the IT or the OT space.

Hardening Network Segmentation with CIP-005

NERC CIP spans multiple reliability standards, ranging from categorizing critical cyber assets (CIP-002) to personnel and training (CIP-004), as well as incident reporting (CIP-008) and configuration change management (CIP-010). The standard that is explored in this article is CIP-005: Electronic Security Perimeter. Before listing the requirements, it is important to understand the terminology which is provided in the NERC Glossary. Here is the summarized version:

  • The Bulk Electric System (BES) is defined to identify the most critical systems to protect. In the electric industry, the BES covers transmission elements operated at 100 kV or higher. The concept of BES could be translated to other industries. For instance, the systems storing and transmitting credit card information in the payment industry. 
  • A Cyber Asset (CA) is a programmable electronic device, which includes computers, servers, and connected equipment.
  • A BES Cyber Asset (BCA) is a cyber asset that, if rendered unavailable, degraded, or misused, could impact the BES within 15 minutes. This definition is important because it allows us to separate mission-critical systems from the rest. 
  • An Electronic Security Perimeter (ESP) is the logical border surrounding a network to which BCAs are connected. 
  • An Electronic access control and monitoring system (EACMS) is a cyber asset that performs access control or monitoring—like a firewall or an intrusion prevention system.
  • An Electronic access point (EAP) is a cyber asset interface on an ESP that allows routable communication. For example, a network interface on a firewall.
  • A Protected Cyber Asset (PCA) is a cyber asset inside the ESP that is not a BCA.
  • An Interactive Remote Access (IRA) is a user-initiated remote network access that uses a routable protocol. An IRA allows us to identify trusted communication paths and separate them from non-interactive system-to-system communications. 
  • An Intermediate System (IS) is a cyber asset performing access control to restrict IRA to only authorized users. Typically, an IS is a jump host on which a user has to authenticate before accessing a critical resource.

The diagram below illustrates a network with ten nodes, among which three nodes are BCA (the crown jewels), and all communications to the BCA have to go through a firewall (the EACMS). An ESP has been defined around the BCA and also includes a non-critical node (the PCA). Since the PCA resides in the same broadcast domain as the BCA, it has to be protected at the same criticality level. Finally, an IS (jump host) enables users to connect to the ESP through an interactive session (for instance, SSH or Remote Desktop). 

 

 

Now that we understand the CIP-005 terminology, we can list the five requirement parts that electric utilities with medium and high impact cyber systems have to comply with:

  • CIP-005 R1.1: All applicable Cyber Assets connected to a network via a routable protocol shall reside within a defined ESP
  • CIP-005 R1.2: All External Routable Connectivity must be through an identified Electronic Access Point (EAP)
  • CIP-005 R1.3: Require inbound and outbound access permissions, including the reason for granting access, and deny all other access by default
  • CIP-005 R2.1: Utilize an Intermediate System such that the Cyber Asset initiating Interactive Remote Access does not directly access an applicable Cyber Asset
  • CIP-005 R2.2: Interactive Remote Access sessions must be encrypted to the Intermediate System to protect the confidentiality and integrity of the communications

In summary, utilities have to (1) identify their critical systems and the networks in which they are connected, (2) protect those networks with firewalls, (3) ensure that firewall access rules are justified and follow a principle of least privilege, (4) ensure that interactive remote connections go through a well-identified jump host, and (5) ensure that those interactive remote connections are encrypted outside of the critical networks. This means that critical systems should be in a separate network zone and not have direct access to the corporate network and the Internet.

Coming back to the SolarWinds attack and the software vendor industry at large, here is how the CIP-005 requirements could be adopted to better protect the digital supply chain:

  • Step 1: We should start by translating the concept of the Bulk Electric System (BES) to the software industry. We are suggesting the Critical Build Environment (CBE) that would cover all systems used to compile, package, and deploy a software application for production.
  • Step 2: We then identify the Critical Cyber Assets (CCA): the cyber assets that are either part of the CBE or can connect directly to the CBE.
  • Step 3: We define an ESP to segment the CCA in a clearly-defined network zone and ensure that there are firewalls to control inbound and outbound connections to the ESP. The access lists should prevent CCA from being directly accessible externally, especially from assets in the corporate network and the Internet.
  • Step 4: We deploy jump hosts to allow engineers and devops to access CCA through interactive remote sessions and we ensure that multi-factor authentication as well as encryption are correctly configured. 
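The four steps above can be sketched as a simple audit over zone-to-zone access paths. The zone names, services, and rules below are hypothetical, and the checks are deliberately simplified:

```python
# Hypothetical firewall access paths between zones, derived from a rule review.
# Each entry: (source zone, destination zone, service).
allowed_paths = [
    ("Corporate", "Jump-Host", "rdp-tls"),
    ("Jump-Host", "CBE", "ssh"),
    ("CBE", "Artifact-Repo", "https"),
]

ENCRYPTED_SERVICES = {"ssh", "rdp-tls", "https"}  # assumption for this sketch
UNTRUSTED = {"Internet", "Corporate"}

def audit(paths):
    """Flag paths that violate the CIP-005-inspired rules from steps 1-4."""
    findings = []
    for src, dst, svc in paths:
        # Step 3: no direct external access to the CBE.
        if dst == "CBE" and src in UNTRUSTED:
            findings.append(f"direct access to CBE from {src} via {svc}")
        # Step 4: interactive access through the jump host must be encrypted.
        if dst == "CBE" and src == "Jump-Host" and svc not in ENCRYPTED_SERVICES:
            findings.append(f"unencrypted interactive access to CBE via {svc}")
    return findings

print(audit(allowed_paths))  # [] -> compliant
print(audit(allowed_paths + [("Corporate", "CBE", "smb")]))
# ['direct access to CBE from Corporate via smb']
```

A real audit would operate on paths inferred from actual firewall configurations rather than a hand-written list, but the checks remain the same: default-deny toward the CBE, with only encrypted, jump-host-mediated interactive access permitted.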

There is, of course, a cost to implementing this framework, but it pales in comparison to the impact of a sophisticated supply chain attack such as the one that targeted SolarWinds. This is work-in-progress and we invite you to start a conversation with your team. If you have questions or would like to make suggestions on how this framework could be applied to different industries, please drop us a note at info@network-perception.com.

Accelerate Incident Response with Next-generation Network Access Visualization

By Blog

“If you really want to protect your network, you really have to know your network”

The advice stated by Rob Joyce (former Chief of Tailored Access Operations at the NSA) in his presentation at the USENIX Enigma conference is gaining further importance in light of the recent SolarWinds attack. For incident response teams who had to investigate the breach inside their environment, a lack of detailed knowledge of their networks and connected assets turned the investigation into month-long efforts filled with frustration. The issue of limited network visibility and unclear understanding of network access affects most organizations. Two of the top three service engagement findings published last week in the Dragos 2020 ICS Cybersecurity Year in Review are:

  • 90% of service engagements included a finding around lack of visibility across OT networks
  • 88% of service engagements included a finding about improper network segmentation

The core challenge behind these findings comes from the growing complexity of network configurations. A typical firewall configuration includes thousands of statements defining interface settings, access-control lists, and object groups, among other categories. Like programs, network device configurations can contain bugs that lead to unexpected consequences, such as network segmentation being enforced only partially.
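As a small illustration of how a configuration bug can leave segmentation only partially enforced, consider first-match rule evaluation, sketched here in Python with hypothetical addresses: a broad permit placed above a narrower deny shadows it completely.

```python
import ipaddress

# Most firewall ACLs use first-match semantics: the first rule whose fields
# match decides the action, so an ordering bug can silently undo segmentation.
# (All addresses and rules below are hypothetical.)
rules = [
    {"action": "permit", "src": "10.1.0.0/16", "dst": "10.9.9.9/32"},
    {"action": "deny",   "src": "10.1.2.0/24", "dst": "10.9.9.9/32"},  # shadowed!
]

def evaluate(rules, src_ip, dst_ip):
    """Return the action of the first matching rule (implicit default deny)."""
    for rule in rules:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"])):
            return rule["action"]
    return "deny"

# The deny for 10.1.2.0/24 never fires because the broader permit precedes it:
print(evaluate(rules, "10.1.2.5", "10.9.9.9"))  # permit -- segmentation only partial
```

Swapping the two rules restores the intended behavior, which is exactly the kind of defect that an automated configuration analysis can surface.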

Read-only Network Visualization Solutions to the Rescue

The speed at which incident response teams can answer key questions during an attack is crucial to preventing a catastrophic failure. For instance, they may need to understand which ports and services are accessible when reaching the control network from a jump host connected to the corporate network. In addition, they need to be able to answer this type of question without relying on network management toolsets that can write into the network, since those tools may be part of the issue (case in point: the SolarWinds Orion application). For these reasons, incident response teams need to be equipped with their own highly-usable solutions that can run outside of the network fabric, either offline or through an indirect and read-only connection.

This is an approach that we know well, since we have spent the last few years training network engineers and cybersecurity analysts to leverage NP-View and NP-Live to gain a clear understanding of their networks. The workflow consists of rapidly building a topology map from network device configuration files, which then serves as a foundation for communicating efficiently among different teams. The map needs to be extremely easy to navigate and understand by both technical and non-technical users. Similar to a heads-up display (HUD) in an aircraft, complex network constructs need to be presented with the correct level of abstraction in order to convey enough detail without being overwhelming. A key feature to achieve this objective is the ability to generate a stepping-stone access map.

Breakthrough Insights with Stepping-stone Access Maps

A stepping-stone access map combines end-to-end connections inferred with a path analysis into multi-hop connections. Each hop, or stepping stone, could be used by an attacker to move laterally. For example, a vulnerable web server that is accessible in a DMZ could be exploited and used as a stepping stone to penetrate further into a protected network. In the example below, a vulnerable data historian was selected at the bottom of the map (highlighted with dark circles) and NP-Live analyzed access rules and routes in all the firewalls (red-brick icons) in order to highlight:

  • Nodes that can be directly accessed from the data historian (in red)
  • Nodes that can be indirectly accessed from the data historian (in orange)

An indirect access means that coming from the data historian, an attacker would have to compromise a red node before being able to access an orange node. This type of visualization provides important insights to understand how defense-in-depth is implemented and whether access policy gaps exist. Moreover, it helps everyone to understand critical asset exposure without having to become a firewall or a network expert. For incident response teams, this means precious minutes saved getting to the information they need to take action and also explaining the situation to their colleagues and their leadership.
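The red/orange classification described above can be sketched as a breadth-first search over the inferred access paths; the host names below are hypothetical:

```python
from collections import deque

def reachability(access, start):
    """Classify nodes as directly (1 hop) or indirectly (2+ hops) reachable."""
    direct = set(access.get(start, []))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in access.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    indirect = {n for n, d in dist.items() if d >= 2 and n not in direct}
    return direct, indirect

# Hypothetical access paths computed from firewall rules and routes.
access = {
    "Historian": ["HMI", "Eng-Workstation"],
    "Eng-Workstation": ["PLC-1", "PLC-2"],
    "HMI": ["PLC-1"],
}
direct, indirect = reachability(access, "Historian")
print(sorted(direct))    # ['Eng-Workstation', 'HMI']  (red nodes)
print(sorted(indirect))  # ['PLC-1', 'PLC-2']          (orange nodes)
```

To reach an orange node from the historian, an attacker would first have to compromise one of the red nodes, which is precisely the defense-in-depth insight the map conveys.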

 

Stepping-stone access map generated by NP-Live to visualize which nodes are directly and indirectly accessible from a vulnerable host

 

Stream the SANS Webinar Recording:

How Can Critical Infrastructure Facilities Become Cyber-Resilient?

By Blog

Network Perception CEO Dr. Robin Berthier recently joined Luke Fox on The Trust Revolution to discuss cybersecurity in relation to recent attacks on several critical infrastructure industries. Berthier explains, “Utilities have modernized, and that connectivity, especially around equipment and IoT, increases the risk for disruption and attacks.” He elaborates with specific examples and provides best practices.

Berthier also cautions against a singular focus on preventing attack, as that effort is futile. To best prepare for future threats, he recommends building cyber resiliency with an emphasis on “defense in depth or multiple layers of security.” Companies must change the way they think about cybersecurity and prioritize building resiliency.

“It’s impossible to keep everything outside of the perimeter, so design a system with this in mind. Software vulnerabilities are only growing. There were 6000 in 2016 and 18,000 in 2020.”

To achieve cyber resiliency within your organization, he says, “Visibility is key. Know what you have in your network and keep it up to date. Also, follow the principle of least privilege for applications.”

Berthier also emphasized that cyber resiliency and cybersecurity must be a concern for more than just IT teams. For true resiliency, systems need to work harmoniously across a diverse set of tools, and teams need to work together to ensure business continuity.

Listen Online

Listen on Spotify

Listen on Apple Podcasts

 

Introduction to NERC CIP Vulnerability Assessment

By Blog

Compliance to cybersecurity standards, such as NERC CIP, can become an opportunity for organizations to establish standardized processes and gain efficiency. In the electric industry, this opportunity means building a culture of risk assessment and mitigation across all the parties involved with managing, regulating, and overseeing the grid, with the goal of maintaining a more secure and reliable grid in the process. CIP-010 Requirement R3 stipulates that a paper vulnerability assessment (PVA) and an active vulnerability assessment (AVA) need to be performed annually and every three years, respectively.

Vulnerability Assessment Requirements

Per CIP-010 Requirement R3, two types of Vulnerability Assessments are identified: an annual Paper Vulnerability Assessment (PVA) and an Active Vulnerability Assessment (AVA) every three years. For each assessment type, the Guidelines and Technical Basis (G&TB) strongly encourage entities to include at least the following elements, taken from NIST SP 800-115, and to review that NIST Technical Guide for guidance on approaches and methods to execute each:

  • Network Discovery
  • Network Port and Service Identification
  • Vulnerability Review/Scanning
  • Wireless Review/Scanning

Active Vulnerability Assessments vs. Paper Vulnerability Assessments

Per the G&TB in CIP-010, the following are strongly encouraged tasks for a PVA and an AVA, as well as the associated CIP-005, CIP-007, and CIP-010 Requirements and Parts for which they may provide detective controls:

Paper Vulnerability Assessment Tasks

  • Network Discovery: A review of network connectivity to identify all Electronic Access Points. (CIP-005 R1 Part 1.2)
  • Network Port and Service Identification: A review to verify that all enabled ports and services have an appropriate business justification. (CIP-007 R1 Part 1.1)
  • Vulnerability Review: A review of security rule-sets and configurations including controls for default accounts, passwords, and network management community strings. (CIP-005 R1 Part 1.3; CIP-007 R5 Parts 5.4 – 5.7)
  • Wireless Review: Identification of common types of wireless networks and a review of their controls if they are in any way used for BCS communications. (CIP-005 R1 Part 1.1)

Active Vulnerability Assessment Tasks

  • Network Discovery: Use of active discovery tools to discover active devices and identify communication paths. (CIP-005 R1 Parts 1.1 – 1.2)
  • Network Port and Service Identification: Use of active discovery tools to discover open ports and services. (CIP-007 R1 Part 1.1; CIP-010 R1 Parts 1.1.2 – 1.1.4)
  • Vulnerability Scanning: Use of a vulnerability scanning tool to identify known vulnerabilities associated with services running on open ports. (CIP-007 R2 Part 2.3; CIP-007 R5 Parts 5.2, 5.4 – 5.7)
  • Wireless Scanning: Use of a wireless scanning tool to discover wireless signals and networks in the physical perimeter of a BCS. (CIP-005 R1 Part 1.1)

While both PVA and AVA tasks are used as detective controls for complying with the above requirements, the controls provided by AVA tasks are more effective. At a high level, the review of evidence in PVA tasks simply identifies issues associated with documenting and/or maintaining that evidence. AVA tasks, however, include the collection of fresh (updated) evidence that is then reviewed and analyzed. AVA tasks can not only identify those documentation issues, they can also identify issues with the processes followed to meet the respective compliance obligations. As an example, the review of network port and service evidence in a PVA assumes that the port and service list is accurate when identifying missing or insufficient business justifications. In an AVA, the network port and service assessment adds the compilation of a fresh network port and service list to compare against existing evidence. This comparison can shine a light on issues related to the methods followed when the list of ports and services was initially collected, how dynamic port ranges associated with services were determined, or whether unaccounted-for software was installed that enabled a previously undocumented port.
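The port-and-service comparison described above reduces to a set difference between documented and observed evidence. Here is a minimal sketch with hypothetical data for a single Cyber Asset:

```python
# Hypothetical evidence for one Cyber Asset.
documented_ports = {  # from the CIP-007 R1 business justification list
    ("tcp", 22): "SSH management",
    ("tcp", 443): "HTTPS web interface",
}
scanned_ports = {("tcp", 22), ("tcp", 443), ("tcp", 8080)}  # fresh AVA scan result

# Ports observed in the scan but absent from the documented evidence.
undocumented = scanned_ports - documented_ports.keys()
# Documented ports no longer observed as open.
stale = documented_ports.keys() - scanned_ports

print(sorted(undocumented))  # [('tcp', 8080)] -> missing business justification
print(sorted(stale))         # [] -> no documented-but-closed ports
```

Each entry in `undocumented` is a finding to investigate: either a missing justification or, worse, software installed outside the change management process.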

As described above, executing PVAs and AVAs has much greater importance to an entity’s CIP compliance program than simply complying with CIP-010 Requirement R3 Parts 3.1 and 3.2. While automating PVA and AVA tasks improves the efficiency with which the tasks can be executed, that automation also eliminates instances of potential human error when executing the tasks. Thus, an automated solution such as NP-View can play an important role in assisting entities with automating a number of the tasks above. NP-View is also leveraged by NERC regional auditors for validating evidence during audits.

Reviewing network paths originating from or terminating at the ESP to verify interactive remote access

Preparation

In either a PVA or AVA, one key factor for success is a detailed VA plan, which should include:

  • Roles and responsibilities
  • Preparation, including:
    • Personal protective equipment requirements,
    • Site access requests,
    • System access requests,
    • Change request tickets, and
    • VA data storage location.
  • Data collection
  • Onsite activities
  • Data analysis

Another key success factor is entity subject matter expert (SME) engagement in the VA process. Regardless of how well versed the VA team members are in the VA process, inaccurate or incomplete data collected from the Cyber Assets guarantees an unsuccessful VA. Additionally, SMEs typically provide the VA team with a more detailed view of the networks than can be collected from network diagrams alone.

Requirements

At a minimum, the needed data inputs for conducting a NERC CIP Vulnerability Assessment include:

  • NERC CIP Cyber Asset Inventory lists, including:
    • Unique identifier, such as hostname,
    • IP addresses and subnet mask, and
    • Electronic Security Perimeter (ESP).
  • List of Intermediate Systems,
  • List of ESP networks with included network subnets and their respective Electronic Access Points (EAPs),
  • CIP-007 R1 Part 1.1 ports and services justification evidence,
  • CIP-007 R5 Parts 5.4 – 5.7 password controls evidence, and
  • Configuration files in a format readable by NP-View.

NP-View uses device configuration files from firewalls, routers, and switches to create a network diagram that allows compliance auditors and other users to understand objects, routes, permissions, and policies in a user-readable format. To input the device files in the correct format, follow the instructions in the NP Knowledge Base. If a particular hardware/software platform is not supported, please contact support@network-perception.com to start the implementation of a new configuration parser.

Next Steps

Having a thorough, efficient, and repeatable methodology for vulnerability assessments lays the groundwork for successful execution. Executing that methodology with personnel who have both expertise in the NERC CIP Reliability Standards and experience conducting vulnerability assessments with automated tools is crucial to that success. NP-View allows those executing vulnerability assessments to complete a number of the tasks more efficiently while minimizing the risk of human error during the more tedious ones. The time-saving and completeness aspects are critical as network environments become more complex and our resources remain limited. 

This introduction is part of the Better, Faster NERC CIP Vulnerability Assessments Using NP-View white paper, which includes additional information and step-by-step instructions on how to best leverage NP-View during your CVA. For any questions or feedback, please feel free to contact the Network Perception team or the Network & Security Technologies (N&ST) team who co-wrote the white paper.
