Welcome back to our Decoding NERC CIP series. In this post, we explore the critical role of NERC CIP patch sources in maintaining compliance and operational safety within OT and control system environments.
In earlier posts, we explored foundational topics such as asset identification under NERC CIP, the operational realities of patching OT systems, and risk-based approaches to patch prioritization. These posts build toward a holistic view of the NERC CIP compliance cycle, or “Virtuous Loop,” as discussed in “The Virtuous Loop & NERC CIP.”
This post builds on that foundation by addressing a critical next question: where patches should come from, and why patch source selection matters for both compliance and operational safety.
As always, you can listen to this discussion on our Decoding NERC CIP podcast, where we explore the same topics with OT and cyber security experts sharing real-world lessons.
Why do we need to identify patch sources?
Among cyber security regulations and compliance frameworks, NERC CIP is somewhat unique in explicitly requiring entities to identify their patch source. This was not always the case.
In CIP versions 1-4, the patch management requirement did not address patch sources at all. At the time, it was implicitly assumed that patches would come directly from the software developer or hardware manufacturer. In the electric power industry, however, that assumption often does not hold.
Electric utilities and Independent Power Producers (IPPs) frequently purchase “turnkey” systems that include preconfigured hardware and software. Before allowing a utility to apply a patch for software or firmware included in such a system, the systems integrator typically tests and approves the patch. Only after confirming that the patch will not affect overall system operation does the integrator release it to customers. In practice, this means that a Windows patch may come from the integrator rather than directly from Microsoft, even though the patch itself is unchanged.
This reality created compliance challenges under CIP versions 1-4. CIP-007 R3 required patches to be assessed for applicability within 30 days of “availability”. Because integrators might not approve patches until weeks or months after the original vendor release, many entities could not strictly comply. From their perspective, a patch was not truly “available” until the integrator released it.
Recognized NERC CIP Patch Sources and Requirements
When CIP version 5 was developed beginning in 2010 (and implemented in 2016), the drafting team addressed this issue directly. They allowed Responsible Entities to designate the source or sources from which they would obtain patches for specific software or hardware products. These sources could include the software developer, a systems integrator, or a qualified third-party patch provider.
Today, CIP-007-6 Requirement R2 Part 2.1 requires Responsible Entities to identify “a source or sources for the release of cyber security patches for applicable Cyber Assets.” This gives entities the flexibility to align their patching processes with how their systems are actually supported in practice.
For example, an entity operating multiple systems built by different integrators may only apply a Windows patch to a given system once that system’s integrator has approved it, even if the same operating system is used elsewhere.
This flexibility has also enabled the use of OT patch aggregation services. Many Responsible Entities now identify these organizations as patch sources for certain software products within their Electronic Security Perimeters (ESPs). Patch aggregators collect updates from multiple vendors and make them available through a single platform. In some cases, they also assist with validating patch provenance and integrity, which can support compliance with CIP-010-4 Requirement R1 Part 1.6. Examples include tools such as Foxguard’s Patchintel, which aggregates and tracks OT-relevant patches, and Foxguard Deploy, which supports controlled, auditable patch deployment.
These tools can be used independently by Responsible Entities that retain hands-on control of patch assessment and deployment, or as part of broader managed services where patch monitoring, validation, and execution are handled on the entity’s behalf. The distinction is important from both an operational and compliance perspective, as entities remain responsible for defining patch sources and maintaining evidence regardless of how the work is performed.
What can happen if we apply an unverified patch?
As discussed earlier in this series, OT environments amplify the consequences of patch-related failures due to availability and safety constraints.
A vivid example of the risks associated with unverified patches is the Dragonfly attack. In 2022, the U.S. Department of Justice indicted “three officers of Russia’s Federal Security Service (FSB)” for their involvement in Dragonfly, a supply chain attack conducted years earlier that targeted updates to ICS and SCADA systems.
According to reporting by The Register, “Legitimate updates to that software were infected with malware named ‘Havex’ that allowed the attackers to create back doors and scan networks for more targets. Over 17,000 devices were infected in the US alone. The indictment states that their efforts gave Russia the chance to ‘damage such computer systems at a future time of its choosing.’”
Dragonfly was one of the first supply chain attacks on critical infrastructure. While it did not lead to any known physical damage to ICS or SCADA systems, it could easily have done so, had it not been detected so quickly.
Importantly, Dragonfly involved tampered updates. If requirements like CIP-010-4 Requirement R1 Part 1.6 had been in effect at the time, verification of patch integrity through hash comparison likely would have revealed that the updates had been altered. From a NERC CIP perspective, incidents like this show us why patch source identification and verification are treated as formal requirements rather than best practices.
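The hash comparison that CIP-010-4 Requirement R1 Part 1.6 contemplates is straightforward in principle: compute a digest of the downloaded patch file and compare it against the value the identified patch source published. A minimal sketch in Python (file path and published hash are placeholders, not from any real vendor):

```python
import hashlib


def verify_patch_hash(patch_path: str, published_sha256: str) -> bool:
    """Compare a locally computed SHA-256 digest of a downloaded patch
    against the hash published by the identified patch source.

    A mismatch means the file was altered or corrupted in transit and
    must not be deployed.
    """
    digest = hashlib.sha256()
    with open(patch_path, "rb") as f:
        # Read in chunks so large patch files don't exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == published_sha256.strip().lower()
```

In a Dragonfly-style attack, where the delivery channel is compromised but the vendor's published hash is not, this check fails and the tampered update is caught before installation.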
Clearly, a lot of bad things (including, but not limited to, non-compliance) can happen if a NERC entity doesn’t verify the identity and integrity of software patches.
Why verification alone isn’t enough
While Dragonfly shows the risks of tampered delivery channels, SolarWinds demonstrates that even verified updates can be compromised.
In December 2020, the world learned the hard way that verifying every patch or software update you receive is not enough to protect you from a breach. The SolarWinds attack (along with Stuxnet) ranks among the most complex cyberattacks in recent history.
Approximately 18,000 organizations worldwide were compromised by this attack, including NATO, the European Parliament, the U.S. Department of Commerce, the Treasury Department, the National Nuclear Security Administration, Microsoft, and VMware. Sensitive data and emails were exfiltrated from about 200 of these organizations.
The success of the attack stemmed from the ability of the perpetrators, assumed to be Russian, to insert the Sunburst malware into seven software updates for the SolarWinds Orion platform. The insertion occurred, undetected, during the software build process, so the malware became part of the official updates. Like all Orion updates, these were digitally signed by SolarWinds, and a SolarWinds-generated hash confirmed their integrity.
As a result, customer systems automatically verified and installed the updates without suspicion. Unlike the Havex malware in the Dragonfly attack, which compromised the delivery channel, the Orion updates were authentic and untampered in transit. The identity of the source was confirmed, and the integrity of the software was intact. Since SolarWinds had no prior history of major cyber incidents and had successfully released hundreds of updates without issue, there was no reason for customers to suspect that seven updates were malicious.
In retrospect, SolarWinds’ internal security program was seriously deficient, yet there was no practical way for customers to detect the compromise without a comprehensive third-party security audit. Because updates were applied automatically after verification, customers had little ability to prevent network compromise.
Because SolarWinds themselves were unaware of the breach, customers could neither detect nor suspect it. Given the extraordinary efforts of the attackers, even the software developer may never fully know the extent of the compromise. Incidents like these reinforce a theme introduced earlier in this series: compliance-aligned patch processes reduce risk but cannot eliminate it entirely.
So, what’s Plan B?
From both a cyber security and compliance standpoint, the focus shifts from prevention alone to early detection, containment, and documented response. This approach echoes the Virtuous Loop concept, where continuous monitoring, threat detection, and timely remediation close the gap between patch deployment and operational resilience, as highlighted in “The Virtuous Loop & NERC CIP.”
While end users cannot fully prevent sophisticated supply chain attacks like SolarWinds, they can take steps to detect compromise as early as possible. These include:
- If the supplier of a software product you use announces they have been compromised and customers may be affected, follow their instructions to determine whether your organization has been impacted.
- Best practices for detecting a compromise include:
  - Monitor for anomalous outbound network traffic from your networks, especially to unknown external domains or IP addresses.
  - Monitor inbound traffic for access attempts or network activity from unexpected locations or countries.
  - Watch for unusual activity by privileged accounts, such as attempts to escalate privileges, access multiple systems in a short time, or log in from unusual locations.
  - Be alert for unusual or high volumes of DNS requests.
  - Watch for unauthorized changes to system configurations, services being disabled (especially security software), or new, unknown local user accounts.
- Steps to take if you suspect your network has been compromised:
  - Immediately power down compromised devices or disconnect them from the network.
  - Examine system and network logs to determine the extent of the compromise and to identify any lateral movement by the attackers.
  - Reset all credentials.
  - Engage your organization’s incident response team, or a qualified outside team, to manage containment and recovery efforts.
What compliance evidence do auditors expect?
For CIP-007-6 Requirement R2 Parts 2.1 and 2.2, auditors typically expect clear, dated documentation showing how patch sources are identified and monitored. Common practices include:
- Most entities use an Excel spreadsheet. All you need is a list of software from your system baselines, which should be readily available. Include operating systems, commercially available software, and any custom software. These baselines should already exist if asset identification has been performed in line with earlier CIP requirements discussed in this series.
- For each software item, identify the provider of any updates and the URL where patch information can be found. Note that not every item will necessarily have a URL, especially custom or home-grown software.
- SCADA/EMS vendors typically include the Windows or Linux operating system as part of an integrated delivery. Customers of the SCADA/EMS vendor do not get OS updates directly from the operating system provider; they wait for the SCADA/EMS vendor to test and certify compatibility with their application software.
- The SCADA/EMS vendor then packages the operating system into their product release. The 35-day clock for Requirement Part 2.2 starts when the identified provider announces the availability of the release package.
- Many NERC entities use commercially available patch management software, such as Foxguard. The patch management system identifies the applicable updates, which starts the 35-day evaluation clock in Part 2.2; once a patch is evaluated as applicable, Part 2.3 allows a further 35 days to apply it or implement a mitigation plan. As described in “Vulnerability Tools to Simplify Patching,” such tools not only track patch availability but also integrate with software inventories and vulnerability databases to streamline compliance and reduce manual effort.
- Auditors will normally request documentation of the patch sources and evidence that these sources are being monitored for updates. Evidence could include a patch management software report, vendor notices, or manual checks. All evidence must be dated.
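The dated spreadsheet described above can be produced from any software inventory. A minimal sketch of such an evidence sheet, written as CSV so it opens in Excel; the column names and sample rows are hypothetical, not a prescribed audit format:

```python
import csv
from datetime import date

# Illustrative rows: one entry per software item from the system
# baseline, with its identified patch source, the URL monitored for
# releases (blank for custom software), and the dated check.
ROWS = [
    {"software": "Windows Server (SCADA/EMS package)",
     "patch_source": "SCADA/EMS vendor",
     "source_url": "https://vendor.example.com/patches",
     "last_checked": date.today().isoformat()},
    {"software": "Custom historian interface",
     "patch_source": "In-house development team",
     "source_url": "",
     "last_checked": date.today().isoformat()},
]


def write_patch_source_log(path: str, rows=ROWS) -> None:
    """Write a dated patch-source tracking sheet as CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

The key audit property is the `last_checked` column: every monitoring pass leaves a dated record, which is exactly the kind of evidence auditors ask for under Parts 2.1 and 2.2.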
Moving forward together
Identifying and documenting your NERC CIP patch sources is a critical foundation for compliance, but it does not eliminate all risk associated with software updates. Building on the Virtuous Loop framework, organizations can integrate patch source management with asset awareness, vulnerability tracking, and compliance reporting to create a repeatable, auditable, and resilient patch management program.
Even when patch sources are well defined, Responsible Entities still face practical questions about verification, supplier practices, and how patch management fits into broader supply chain risk programs.
In the next post in the Decoding NERC CIP series, we’ll take a question-and-answer approach to these challenges. We’ll address what to do when vendors do not provide signatures or hash values, how to evaluate the practices of patch aggregation services, and how patch management responsibilities align with CIP-013 supply chain security and CIP-010 verification requirements.
Together, these discussions move the series from where patches come from to how organizations build confidence in the entire patch supply chain—from procurement through installation and audit.