Supply chain attacks used to be theoretical in most enterprise security conversations. They’re not anymore. The last several years have produced a series of incidents that have fundamentally changed how organizations need to think about the software they deploy — not just the software their own developers write, but the compilers they use to build it, the libraries it depends on, the build systems that assemble it, and the update mechanisms that keep it current.
The pattern that unites recent high-profile incidents is that attackers found the path of least resistance into their targets by compromising a trusted intermediary. They didn’t need to break through your perimeter — your organization invited the malicious code in through your standard software deployment process. That’s a category of threat that traditional perimeter and endpoint controls weren’t designed to handle, and it requires a different kind of response.
This post examines what practical supply chain security looks like for organizations that need to manage real delivery timelines, not just theoretical best practices.
What Recent Incidents Have Taught Us
The SolarWinds compromise established the template for the modern supply chain attack: compromise a trusted software vendor’s build process, inject malicious code into a signed, legitimate software update, and let the vendor’s customers do the work of deploying the payload. The attack worked at scale because the trust model of enterprise software — you trust updates from vendors whose software you’ve already deployed — was being exploited rather than circumvented.
XZ Utils demonstrated that supply chain attacks don’t require compromising an established vendor. A sophisticated attacker spent nearly two years building a reputation as a trusted contributor to an open source project before attempting to introduce a backdoor. The patience and operational sophistication of that attack were a signal: open source dependencies maintained by small communities with limited review capacity are a viable target that defenders need to take seriously.
The Codecov breach, Log4Shell exploitation, and various npm package compromises round out the picture. The common thread is that defenders who thought their software supply chain was “trusted” were operating on an assumption that was either false or far more fragile than they realized.
What these incidents teach us at the practical level is that trust must be earned and continuously verified, not granted once and assumed indefinitely. The software your organization deploys should have a documented, verifiable chain of custody — from source to build to deployment — and deviations from that chain should be detectable.
Building a Software Bill of Materials
A Software Bill of Materials (SBOM) is a structured inventory of the components in your software: the libraries, frameworks, and dependencies that a piece of software relies on and that would need to be patched or replaced if a vulnerability were discovered. SBOMs aren’t a new concept, but they’ve gained significant traction as a supply chain security control because they’re the prerequisite for everything else.
You can’t assess your exposure to a vulnerable library if you don’t know which of your applications depend on it. You can’t evaluate the risk of a compromised dependency if you don’t have a current inventory of which version you’re running and where. The SolarWinds response was made harder for many organizations by the fact that they didn’t have a complete picture of where SolarWinds software was deployed across their environment. That’s a basic inventory problem.
Building SBOMs starts with your CI/CD pipeline. Tools like Syft, Trivy, and cyclonedx-cli can generate SBOMs for container images and application builds as part of your build process. The goal is an SBOM that’s generated automatically on every build, stored alongside the build artifact, and queryable when you need to answer “which of our applications use library X?” That query capability is what turns an SBOM from a compliance artifact into an operational security tool.
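To make that query capability concrete, here is a minimal sketch in Python of the “which of our applications use library X?” lookup over stored SBOMs. The application names, component names, and the trimmed-down document structure are all illustrative — a real CycloneDX SBOM carries much more metadata per component.

```python
import json

# Minimal CycloneDX-style SBOM fragments, one per application build.
# Names and versions are hypothetical; a real SBOM has many more fields.
sboms = {
    "billing-service": json.loads("""
        {"components": [
            {"name": "log4j-core", "version": "2.14.1"},
            {"name": "jackson-databind", "version": "2.13.0"}
        ]}"""),
    "auth-service": json.loads("""
        {"components": [
            {"name": "jackson-databind", "version": "2.12.3"}
        ]}"""),
}

def apps_using(library: str) -> dict[str, str]:
    """Answer 'which of our applications use library X, at what version?'"""
    hits = {}
    for app, sbom in sboms.items():
        for component in sbom.get("components", []):
            if component["name"] == library:
                hits[app] = component["version"]
    return hits

print(apps_using("log4j-core"))
```

In practice the SBOMs would live in an artifact store or a dedicated SBOM database rather than in memory, but the shape of the query — and the fact that it can be answered in seconds rather than days — is the point.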
For organizations that also need to think about the security posture of their cloud assets alongside application dependencies, our post on security posture management and attack path mapping covers how these pieces connect at the program level.
Dependency Risk Management
Having an SBOM is a starting point. Acting on it requires a dependency risk management process that covers both vulnerability management (keeping dependencies current and patching when CVEs are published) and supply chain integrity (verifying that the dependencies you’re using are what they claim to be).
On the vulnerability side, the key discipline is knowing the difference between “this dependency has a CVE” and “this CVE is exploitable in our application.” Dependency scanning tools routinely surface vulnerabilities in transitive dependencies — libraries your direct dependencies rely on — that may not be reachable in your application’s actual execution paths. Treating every CVE as equal creates remediation fatigue and causes teams to de-prioritize even genuinely critical issues. Use reachability analysis where your tooling supports it, and contextualize findings against the actual attack surface before routing them to developers.
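A triage step along those lines can be sketched as follows. The finding records and field names here are illustrative, not any specific scanner’s schema — the `reachable` flag would come from a reachability-analysis tool and `direct` from the dependency graph.

```python
# Hypothetical scanner findings. "reachable" and "direct" are assumed inputs
# from reachability analysis and the dependency graph, respectively.
findings = [
    {"cve": "CVE-2021-44228", "package": "log4j-core", "severity": 10.0,
     "direct": False, "reachable": True},
    {"cve": "CVE-2022-0001", "package": "leftover-lib", "severity": 9.1,
     "direct": False, "reachable": False},
    {"cve": "CVE-2023-0002", "package": "web-framework", "severity": 6.5,
     "direct": True, "reachable": True},
]

def triage(findings: list[dict]) -> tuple[list[dict], list[dict]]:
    """Route reachable findings to developers; park unreachable ones."""
    actionable = [f for f in findings if f["reachable"]]
    deferred = [f for f in findings if not f["reachable"]]
    # Within the actionable set, sort by severity so critical issues lead.
    actionable.sort(key=lambda f: f["severity"], reverse=True)
    return actionable, deferred

actionable, deferred = triage(findings)
```

Note that the high-severity but unreachable transitive CVE lands in the deferred queue — that is the prioritization decision that prevents remediation fatigue.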
On the integrity side, the concerns are different: are the packages you’re installing actually published by the maintainers you trust, or have they been tampered with? Typosquatting attacks — malicious packages with names slightly similar to popular legitimate packages — have been a consistent pattern in public package registries. Controls here include pinning dependencies to specific hashes rather than version ranges, using private artifact repositories that mirror approved packages rather than pulling directly from public registries, and verifying package signatures where they’re available.
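Hash pinning reduces to a simple check at install time: the artifact you downloaded must match the digest you pinned when you approved it. A minimal sketch, with toy package names and payloads (real enforcement would come from your package manager — for example pip’s `--require-hashes` mode — or a private mirror):

```python
import hashlib

# Pinned digests for approved packages. The hash here is computed from the
# toy payload below purely so the example is self-contained.
PINNED = {
    "example-pkg-1.2.0.tar.gz": hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify_artifact(name: str, payload: bytes) -> bool:
    """Reject any artifact whose digest doesn't match the pinned hash."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # not an approved package at all
    return hashlib.sha256(payload).hexdigest() == expected
```

A tampered or typosquatted package fails this check even if its name and version look right — which is exactly the property that version ranges alone don’t give you.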
For internal packages, implement signing and verification in your build process. If your organization produces software that other teams depend on, those dependencies should be signed artifacts with a verifiable provenance chain — not just packages in an internal registry with no integrity verification.
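The sign-and-verify flow can be sketched as below. Loudly hedged: a real pipeline would use asymmetric signing (Sigstore, GPG, or Ed25519 via a library like `cryptography`) so publishers and verifiers never share a secret; stdlib HMAC stands in here only to keep the example self-contained, and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

# Illustrative only — production signing should use asymmetric keys, not a
# shared secret. HMAC is a stand-in to show the shape of the provenance flow.
SIGNING_KEY = b"build-system-secret"

def sign_artifact(artifact: bytes, builder: str) -> dict:
    """Produce a signed provenance record for a build artifact."""
    provenance = {"sha256": hashlib.sha256(artifact).hexdigest(),
                  "builder": builder}
    payload = json.dumps(provenance, sort_keys=True).encode()
    provenance["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return provenance

def verify_artifact(artifact: bytes, provenance: dict) -> bool:
    """Check the signature, and that the digest matches the actual bytes."""
    claimed = {k: v for k, v in provenance.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, provenance["signature"])
            and claimed["sha256"] == hashlib.sha256(artifact).hexdigest())
```

The structural point survives the simplification: the consumer verifies both who produced the artifact and that the bytes haven’t changed since signing.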
Securing Your Build Pipeline
Your CI/CD pipeline is itself a supply chain component. The tools used to build your software — compilers, build systems, container base images, CI/CD platform plugins — are all potential attack vectors. If a malicious actor can modify what your pipeline does, they can affect every artifact that pipeline produces.
Practical hardening for CI/CD pipelines covers several areas. Secrets management is the most immediate: build pipelines shouldn’t have long-lived credentials stored as environment variables or configuration files in the repository. Use ephemeral, short-lived credentials generated at build time, and audit which systems your pipeline can authenticate to. A compromised build pipeline that can deploy to production and push artifacts to your artifact registry has enormous blast radius.
Pipeline configuration should be reviewed with the same rigor as application code. Pull requests that modify CI/CD configuration files — .github/workflows, Jenkinsfiles, .gitlab-ci.yml — should require additional reviewers and should be scrutinized for changes that bypass security checks or add unexpected network calls. The XZ Utils attack was partially successful because the backdoor used build-time hooks to inject code that wasn’t visible in the source. Build configuration is code with security consequences.
Third-party GitHub Actions and CI/CD plugins deserve particular scrutiny. Pinning actions to specific commit hashes rather than version tags prevents a scenario where a plugin maintainer’s account is compromised and the tag is redirected to malicious code. This is a small discipline change with meaningful security benefit.
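This is also easy to enforce automatically. A toy linter, sketched in Python, that flags any `uses:` reference pinned to a mutable tag or branch rather than a full 40-character commit SHA (the workflow text, action names, and the SHA itself are illustrative):

```python
import re

# Illustrative workflow snippet: one action pinned to a tag, one to a full
# commit SHA, one to a branch. Only the SHA-pinned reference should pass.
workflow = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@1e60f620b9541d16bece96c5465dc8ee9832be0b
      - uses: some-org/deploy-action@main
"""

SHA_PINNED = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return every `uses:` reference not pinned to a full commit SHA."""
    flagged = []
    for ref in re.findall(r"uses:\s*(\S+)", workflow_text):
        _action, _, version = ref.partition("@")
        if not SHA_PINNED.match(version):
            flagged.append(ref)
    return flagged

print(unpinned_actions(workflow))
```

A check like this can run in CI against workflow files themselves, turning the pinning discipline from a review-time convention into an enforced gate.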
Vendor and Third-Party Risk
Software you procure from commercial vendors is subject to the same supply chain risks as your internal development — arguably more, because you have less visibility into their build processes. Due diligence on software vendors should include questions about their secure development lifecycle, their SBOM availability, their incident notification practices, and their history of supply chain-related incidents.
Post-SolarWinds, many large software vendors have made meaningful investments in build security, code signing, and update integrity verification. Ask about those investments when you’re evaluating vendors and when you’re renewing contracts. Contractual requirements for notification timelines when supply chain incidents affect products you use give you a basis for accountability that isn’t available if you never asked for it.
Network segmentation and endpoint controls for software update processes are a compensating control worth considering for the highest-sensitivity environments. Software that updates automatically and can execute code as part of the update process — security tools and monitoring agents are common examples — should have network access limited to the update infrastructure they need, and update events should be logged and monitorable. You’re not going to block every supply chain attack this way, but you are going to have the visibility to detect and respond to unusual behavior during update events.
Responding When a Supplier Is Compromised
The SolarWinds response forced many organizations to execute incident response plans they’d never tested against a supply chain scenario. The questions are different: instead of “what did the attacker access?” you’re asking “was this system running the affected software, and if so, during what window, and what could that software have accessed?”
Build supply chain scenarios into your IR planning and tabletops. The key capabilities are: rapid asset inventory (which systems have software X installed?), rapid isolation without bricking critical systems, and a communication process for coordinating with the affected vendor. The first 24 hours of a supply chain incident response are often consumed by the inventory question — organizations that can answer it quickly have dramatically more options than those that spend days determining their exposure scope.
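The inventory question above is, at its core, a query you should be able to run in minutes. A minimal sketch with hypothetical asset records and field names — in practice this would hit your CMDB or asset inventory system rather than an in-memory list:

```python
# Hypothetical asset inventory — the data you want queryable in the first
# hours of a supply chain incident. Hosts, versions, and segments invented.
inventory = [
    {"host": "mon-01", "software": "solarwinds-orion", "version": "2020.2.1",
     "segment": "management"},
    {"host": "mon-02", "software": "solarwinds-orion", "version": "2019.4",
     "segment": "management"},
    {"host": "app-17", "software": "nginx", "version": "1.25.3",
     "segment": "dmz"},
]

def exposed_hosts(software: str, affected_versions: set[str]) -> list[dict]:
    """Which systems run an affected version, and which segment are they in?"""
    return [asset for asset in inventory
            if asset["software"] == software
            and asset["version"] in affected_versions]

hits = exposed_hosts("solarwinds-orion", {"2020.2", "2020.2.1"})
```

Note that the query returns the network segment alongside the host: knowing not just *that* a system is exposed but *where it sits* is what makes rapid isolation decisions possible.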
The organizations that handled SolarWinds best were the ones that had current asset inventories, well-documented network segmentation, and incident response teams that had practiced large-scale isolation exercises. None of those capabilities require specific supply chain tooling — they’re foundational security operations practices that pay off across every type of incident.