Category: Cyber Resilience

What Caused Cloudflare’s Big Crash? It’s Not Rust

The Promise

Cloudflare’s outage did not just take down a fifth of the Internet. It exposed a truth we often avoid in engineering: complex systems rarely fail because of bad code. They fail because of the invisible assumptions we build into them.

This piece cuts past the memes, the Rust blame game and the instant hot takes to explain what actually broke, why the outrage misfired and what this incident really tells us about the fragility of Internet-scale systems.

If you are building distributed, AI-driven or mission-critical platforms, the key takeaways here will reset how you think about reliability and help you avoid walking away with exactly the wrong lesson from one of the year’s most revealing outages.

1. Setting the Stage: When a Fifth of the Internet Slowed to a Crawl

On 18 November, Cloudflare experienced one of its most significant incidents in recent years. Large parts of the world observed outages or degraded performance across services that underpin global traffic.
As always, the Internet reacted the way it knows best: outrage, memes, instant diagnosis delivered with absolute confidence.

Within minutes, social timelines flooded with:

  • “It must be DNS”
  • “Rust is unsafe after all”
  • “This is what happens when you rewrite everything”
  • “Even Downdetector is down because Cloudflare is down”
  • Screenshots of broken CSS on Cloudflare’s own status page
  • Accusations of over-engineering, under-engineering and everything in between

The world wanted a villain. Rust happened to be available. But the actual story is more nuanced and far more interesting. (For the record, I am still not convinced we should rewrite the Linux kernel in Rust!)

2. What Actually Happened: A Clear Summary of Cloudflare’s Report

Cloudflare’s own post-incident write-up is unusually thorough. If you have not read it, you should. In brief:

  • Cloudflare is in the middle of a major multi-year upgrade of its edge infrastructure, referred to internally as the 20 percent Internet upgrade.
  • The rollout included a new feature configuration file.
  • This file contained more than two hundred features for their FL2 component, crossing a size limit that had been assumed but never enforced through guardrails.
  • The oversized file triggered a panic in the Rust-based logic that validated these configurations.
  • That panic initiated a restart loop across a large portion of their global fleet.
  • Because the very nodes that needed to perform a rollback were themselves in a degraded state, Cloudflare could not recover the control plane easily.
  • This created a cascading, self-reinforcing failure.
  • Only isolated regions with lagged deployments remained unaffected.

The root cause was a logic-path issue interacting with operational constraints. It had nothing to do with memory safety and nothing to do with Rust’s guarantees.

In other words: the failure was architectural, not linguistic.
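To make that concrete, here is a minimal, hypothetical sketch of the failure pattern, not Cloudflare’s actual code: a limit that is assumed rather than enforced upstream, and a loader that treats the violation as fatal. The `FeatureConfig` type, the `MAX_FEATURES` constant and the loader itself are illustrative inventions.

```rust
// Hypothetical sketch of the failure pattern: a limit that is assumed,
// not enforced where the file is generated, and a loader that treats
// any violation as fatal.
const MAX_FEATURES: usize = 200; // assumed ceiling, never checked upstream

struct FeatureConfig {
    features: Vec<String>,
}

fn load_config(raw: &str) -> FeatureConfig {
    let features: Vec<String> = raw.lines().map(str::to_owned).collect();
    // The fatal step: an internal invariant expressed as a hard assertion.
    // If an upstream generator ever emits an oversized file, every node that
    // loads it panics, restarts, reloads the same file, and panics again.
    assert!(
        features.len() <= MAX_FEATURES,
        "feature count {} exceeds assumed limit {}",
        features.len(),
        MAX_FEATURES
    );
    FeatureConfig { features }
}

fn main() {
    // Simulate an oversized rollout artefact: one feature over the assumed limit.
    let oversized: String = (0..=MAX_FEATURES).map(|i| format!("feature_{i}\n")).collect();
    let cfg = load_config(&oversized); // panics here: the restart loop begins
    println!("loaded {} features", cfg.features.len());
}
```

The assertion itself is not the villain. The damage comes from coupling: when every node in a fleet applies the same artefact at roughly the same time, one bad file becomes a global restart loop.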

3.2 The “unwrap() Is Evil” Argument (I remember writing a blog titled “Eval() Is Not Evil” around 2012)

One of the most widely circulated tweets framed the presence of an unwrap() as a ticking time bomb, casting it as proof that Rust developers “trust themselves too much”. This is a caricature of the real issue.

The error did not arise because of an unwrap(), nor because Rust encourages poor error handling. It arose because:

  • an unexpected input crossed a limit,
  • guards were missing,
  • and the resulting failure propagated in a tightly coupled system.

The same failure would have occurred in Go, Java, C++, Zig, or Python.
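To see why, here is a hedged sketch of the architectural point, reusing the same hypothetical loader as above: replacing `unwrap()` with a `match` is cosmetic unless the error path actually changes the outcome, for example by falling back to the last known-good configuration instead of taking the process down.

```rust
#[derive(Debug, Clone)]
struct FeatureConfig {
    features: Vec<String>,
}

#[derive(Debug)]
enum ConfigError {
    TooManyFeatures(usize),
}

const MAX_FEATURES: usize = 200;

// Parse and validate without panicking: the caller decides what failure means.
fn parse_config(raw: &str) -> Result<FeatureConfig, ConfigError> {
    let features: Vec<String> = raw.lines().map(str::to_owned).collect();
    if features.len() > MAX_FEATURES {
        return Err(ConfigError::TooManyFeatures(features.len()));
    }
    Ok(FeatureConfig { features })
}

// The architectural guard: reject the new artefact and keep serving with the
// last known-good config instead of taking the process down.
fn apply_config(raw: &str, last_good: &FeatureConfig) -> FeatureConfig {
    match parse_config(raw) {
        Ok(cfg) => cfg,
        Err(e) => {
            eprintln!("rejecting new config ({e:?}); keeping last known-good");
            last_good.clone()
        }
    }
}

fn main() {
    let last_good = FeatureConfig { features: vec!["baseline".into()] };
    let oversized: String = (0..500).map(|i| format!("f{i}\n")).collect();
    let active = apply_config(&oversized, &last_good);
    println!("serving with {} features", active.features.len()); // 1, not a crash
}
```

Any language can express either shape. The difference is a design decision about what a bad input is allowed to do, not a property of Rust.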

3.3 Transparency Misinterpreted as Guilt

Cloudflare did something rare in our industry.
They published the exact code that failed. This was interpreted by some as:

“Here is the guilty line. Rust did it.”

In reality, Cloudflare’s openness is an example of mature engineering culture. More on that later.

4. The Internet Rage Cycle: Humour, Oversimplification and Absolute Certainty

The memes and tweets around this outage are not just entertainment. They reveal how the broader industry processes complex failure.

4.1 The ‘Everything Balances on Open Source’ Meme

Images circulated showing stacks of infrastructure teetering on boxes labelled DNS, Linux Foundation and unpaid open source developers, with Big Tech perched precariously on top.

This exaggeration contains a real truth. We live in a dependency monoculture. A few layers of open source and a handful of service providers hold up everything else.

The meme became shorthand for Internet fragility.

4.2 The ‘It Was DNS’ Routine

The classic:
“It is not DNS. It cannot be DNS. It was DNS.”

Except this time, it was not DNS.

Yet the joke resurfaces because DNS has become the folk villain for any outage. People default to the easiest mental shortcut.

4.3 The Rust Panic Narrative

Tweets claiming:

“Cloudflare rewrote in Rust, and half the Internet went down 53 days later.”

This inference is wrong, but emotionally satisfying.
People conflate correlation with causation because it creates a simple story: rewrites are dangerous.

4.4 The Irony of Downdetector Being Down

The screenshot of Downdetector depending on Cloudflare and therefore failing is both funny and revealing. This outage demonstrated how deeply intertwined modern platforms are. It is an ecosystem issue, not a Cloudflare issue.

4.5 But There Were Also Good Takes

Kelly Sommers’ observation that Cloudflare published source code is a reminder that not everyone jumped to outrage.

There were pockets of maturity. Unfortunately, they were quieter than the noise.

5. The Real Lessons for Engineering Leaders

This is the part worth reading slowly if you build distributed systems.

Lesson 1: Reliability Is an Architecture Choice, Not a Language Choice

You can build fragile systems in safe languages and robust systems in unsafe languages. Language is orthogonal to architectural resilience.

Lesson 2: Guardrails Matter More Than Guarantees

Rust gives memory safety.
It does not give correctness safety.
It does not give assumption safety.
It does not give rollout safety.

You cannot outsource judgment.
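One practical reading of this lesson, sketched under assumptions (the limit, file name and layout are hypothetical): enforce the assumption where the artefact is produced, not only where it is consumed, for example as a pre-deploy gate in CI.

```rust
use std::fs;
use std::process::ExitCode;

// Hypothetical limit; in practice it should live in one shared source of truth
// used by both the generator and every consumer.
const MAX_FEATURES: usize = 200;

// A tiny pre-deploy gate: run it in CI against the generated artefact and fail
// the pipeline before a bad file can reach any production node.
fn check_artifact(path: &str) -> Result<usize, String> {
    let raw = fs::read_to_string(path).map_err(|e| format!("cannot read {path}: {e}"))?;
    let count = raw.lines().filter(|l| !l.trim().is_empty()).count();
    if count > MAX_FEATURES {
        return Err(format!("{} features exceeds the limit of {}", count, MAX_FEATURES));
    }
    Ok(count)
}

fn main() -> ExitCode {
    match check_artifact("feature_config.txt") {
        Ok(n) => {
            println!("ok: {n} features, within limit");
            ExitCode::SUCCESS
        }
        Err(msg) => {
            eprintln!("guardrail tripped: {msg}");
            ExitCode::FAILURE
        }
    }
}
```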

Lesson 3: Blast Radius Containment Is Everything

  • Uniform rollouts are dangerous.
  • Synchronous edge updates are dangerous.
  • Large global fleets need layered fault domains.

Cloudflare knows this. This incident will accelerate their work here.
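A simplified sketch of what layered fault domains and staged rollouts look like in code; the wave names, traffic shares and 1 percent error threshold are invented for illustration and not drawn from Cloudflare’s systems.

```rust
use std::collections::HashMap;

// Hypothetical wave-based rollout: each wave is a separate fault domain, and
// a failing health signal halts promotion before the change goes global.
struct Wave {
    name: &'static str,
    traffic_share: f64, // fraction of the fleet in this wave
}

fn healthy(error_rates: &HashMap<&str, f64>, wave: &Wave) -> bool {
    // Promote only if the wave's observed error rate stays under 1%.
    error_rates.get(wave.name).copied().unwrap_or(1.0) < 0.01
}

fn main() {
    let waves = [
        Wave { name: "canary", traffic_share: 0.01 },
        Wave { name: "region-a", traffic_share: 0.10 },
        Wave { name: "global", traffic_share: 0.89 },
    ];

    // Simulated telemetry: the canary wave is already unhealthy.
    let error_rates = HashMap::from([("canary", 0.12), ("region-a", 0.004)]);

    for wave in &waves {
        if !healthy(&error_rates, wave) {
            eprintln!(
                "halting rollout at {}: blast radius capped at {:.0}% of the fleet",
                wave.name,
                wave.traffic_share * 100.0
            );
            return;
        }
        println!("wave {} promoted ({:.0}% of fleet)", wave.name, wave.traffic_share * 100.0);
    }
}
```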

Lesson 4: Control Planes Must Be Resilient Under Their Worst Conditions

The control plane was unreachable when it was needed most. This is a classic distributed systems trap: the emergency mechanism relies on the unhealthy components.

Always test:

  • rollback unavailability
  • degraded network conditions
  • inconsistent state recovery
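A toy illustration of the trap, with hypothetical names: the primary rollback path runs through the control plane, so it fails exactly when it is needed, and only a genuinely independent escape hatch, exercised in game days, breaks the loop.

```rust
#[derive(Clone, Copy)]
enum Health { Up, Down }

struct ControlPlane {
    health: Health,
}

impl ControlPlane {
    // Rollback is issued *through* the control plane: if the control plane is
    // what the bad change took down, this path is unavailable exactly when needed.
    fn rollback_via_control_plane(&self) -> Result<(), &'static str> {
        match self.health {
            Health::Up => Ok(()),
            Health::Down => Err("control plane unreachable; rollback command never lands"),
        }
    }

    // An out-of-band escape hatch that does not depend on the control plane,
    // e.g. a node-local, signed last-known-good bundle. Purely illustrative.
    fn rollback_out_of_band(&self) -> Result<(), &'static str> {
        Ok(())
    }
}

fn main() {
    for cp in [ControlPlane { health: Health::Up }, ControlPlane { health: Health::Down }] {
        match cp.rollback_via_control_plane() {
            Ok(()) => println!("rolled back via control plane"),
            Err(e) => {
                eprintln!("primary path failed: {e}");
                // Game-day exercises should prove this second path works while
                // the first one is deliberately broken.
                cp.rollback_out_of_band()
                    .expect("out-of-band rollback must not share the failure domain");
                println!("rolled back out of band");
            }
        }
    }
}
```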

Lesson 5: Complexity Fails in Complex Ways

The system behaved exactly as designed. That is the problem.
Emergent behaviour in large networks cannot be reasoned about purely through local correctness.

This is where most teams misjudge their risk.

6. Additional Lesson: Accountability and Transparency Are Strategic Advantages

This incident highlighted something deeper about Cloudflare’s culture.

They did not hide behind ambiguity.
They did not release a PR-approved statement with vague phrasing.

They published:

  • the timeline
  • the diagnosis
  • the exact code
  • the root cause
  • the systemic contributors
  • the ongoing mitigation plan

This level of transparency is uncomfortable. It puts the organisation under a microscope.
Yet it builds trust in a way no marketing claim can.

Transparency after failure is not just ethical. It is good engineering. Very few people highlighted this; my man Gergely Orosz was one of them.

Most companies will never reach this level of accountability.
Cloudflare raised the bar.

7. What This Outage Tells Us About the State of the Internet

This was not just a Cloudflare problem; it is a reminder of our shared dependency.

  • Too much global traffic flows through too few choke points.
  • Too many systems assume perfect availability from upstream.
  • Too many platforms synchronise their rollouts.
  • Too many companies run on infrastructure they did not build and cannot control.

The memes were not wrong.
They were simply incomplete.

8. Final Thoughts: Rust Did Not Fail. Our Assumptions Did.

Outages like this shape the future of engineering. The worst thing the industry can do is learn the wrong lesson.

This was not:

  • a Rust failure
  • a rewrite failure
  • an open source failure
  • a Cloudflare hubris story

This was a systems-thinking failure.
A reminder that assumptions are the most fragile part of any distributed system.
A demonstration of how tightly coupled global infrastructure has become.
A case study in why architecture always wins over language debates.

Cloudflare’s transparency deserves respect.
Their engineering culture deserves attention.
And the outrage cycle deserves better scepticism.

Because the Internet did not go down because of Rust.
It went down because the modern Internet is held together by coordination, trust, and layered assumptions that occasionally collide in surprising ways.

If we want a more resilient future, we need less blame and more understanding.
Less certainty and more curiosity.
Less language tribalism and more systems design thinking.

The Internet will fail again.
The question is whether we learn or react.

Cloudflare learned. The rest of us should too!

When Trust Cracks: The Vault Fault That Shook Identity Security

A vault exposed outside the DMZ

Opening Scene: The Unthinkable Inside Your Digital Fortress

Imagine standing before a vault that holds every secret of your organisation. It is solid, silent and built to withstand brute force. Yet, one day you discover someone walked straight in. No alarms. No credentials. No trace of a break-in. That is what the security community woke up to when researchers disclosed Vault Fault: a cluster of flaws in the very tools meant to guard our digital crown jewels.

Behind the Curtain: The Guardians of Our Secrets

Secrets management platforms such as HashiCorp Vault, CyberArk Conjur and CyberArk Secrets Manager sit at the heart of modern identity infrastructure. They store API keys, service credentials, encryption keys and more. In DevSecOps pipelines and hybrid environments, they are the trusted custodians. If a vault is compromised, it is not one system at risk. It is every connected system.

Vault Fault Unveiled: A Perfect Storm of Logic Flaws

Security firm Cyata revealed fourteen vulnerabilities spread across CyberArk and HashiCorp’s vault products. These were not just minor configuration oversights. They included:

  • CyberArk Conjur: IAM authenticator bypass by manipulating how regions are parsed. Privilege escalation by authenticating as a policy. Remote code execution by exploiting the ERB-based Policy Factory.
  • HashiCorp Vault: Nine zero-day issues including the first ever RCE in Vault. Bypasses of multi-factor authentication and account lockout logic. User enumeration through subtle timing differences. Escalation by abusing how policies are normalised.

These were chains of logic flaws that could be combined to devastating effect. Attackers could impersonate identities, escalate privileges, execute arbitrary code and exfiltrate secrets without ever providing valid credentials.
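None of the vulnerable code is reproduced here, but a generic sketch shows how one of the flaw classes listed above, user enumeration through timing differences, tends to arise and how it is usually mitigated by doing equivalent work on both paths. The functions and data are entirely hypothetical and not taken from Vault or Conjur.

```rust
use std::collections::HashMap;
use std::hint::black_box;
use std::time::Instant;

// Stand-in for an expensive password hash check (real systems use argon2/bcrypt).
fn slow_verify(stored: &str, attempt: &str) -> bool {
    let mut acc = 0u64;
    for _ in 0..5_000_000 {
        acc = acc.wrapping_add(1);
    }
    black_box(acc); // keep the busy loop from being optimised away
    stored == attempt
}

// Vulnerable shape: unknown users return immediately, so response time quietly
// reveals which usernames exist.
fn login_leaky(users: &HashMap<&str, &str>, user: &str, pass: &str) -> bool {
    match users.get(user) {
        Some(stored) => slow_verify(stored, pass),
        None => false, // fast path: an observable timing difference
    }
}

// Mitigated shape: spend the same effort whether or not the user exists.
fn login_uniform(users: &HashMap<&str, &str>, user: &str, pass: &str) -> bool {
    let stored = users.get(user).copied().unwrap_or("!dummy-value!");
    let matched = slow_verify(stored, pass);
    matched && users.contains_key(user)
}

fn main() {
    let users = HashMap::from([("alice", "correct horse battery staple")]);
    let checks: [(&str, fn(&HashMap<&str, &str>, &str, &str) -> bool); 2] =
        [("leaky", login_leaky), ("uniform", login_uniform)];
    for (label, login) in checks {
        let started = Instant::now();
        let _ = login(&users, "no_such_user", "guess");
        println!("{label}: unknown-user attempt took {:?}", started.elapsed());
    }
}
```

This is a logic-level illustration rather than a constant-time cryptographic comparison; the point is that unknown and known users should be observationally indistinguishable, which is exactly the kind of property automated scanners rarely check.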

The Fallout: When Silent Vaults Explode

Perhaps the most unnerving fact is the age of some vulnerabilities. Several had been present for up to nine years. Quiet, undetected and exploitable. Remote code execution against a secrets vault is the equivalent of giving an intruder the keys to every door in your company. Once inside, they can lock you out, leak sensitive information or weaponise access for extortion.

Response and Remedy: Patch, Shield, Reinvent

Both vendors have issued fixes:

  • CyberArk Secrets Manager and Self-Hosted versions 13.5.1 and 13.6.1.
  • CyberArk Conjur Open Source version 1.22.1.
  • HashiCorp Vault Community and Enterprise editions 1.20.2, 1.19.8, 1.18.13 and 1.16.24.

Cyata’s guidance is direct. Patch immediately. Restrict network exposure of vault instances. Audit and rotate secrets. Minimise secret lifetime and scope. Enable detailed audit logs and monitor for anomalies. CyberArk has also engaged directly with customers to support remediation efforts.

Broader Lessons: Beyond the Fault

The nature of these flaws should make us pause. They were not memory corruption or injection bugs. They were logic vulnerabilities hiding in plain sight. The kind that slip past automated scans and live through version after version.

It is like delegating your IaaS or PaaS to AWS or Azure. They may run the infrastructure, but you are still responsible for meeting your own uptime SLAs. In the same way, even if you store secrets such as credit card numbers, API tokens or encryption keys in a vault, you remain responsible for securing them. The liability for a breach still sits with you.

Startups are especially vulnerable. Many operate under relentless deadlines and tight budgets. They offload everything that is not seen as part of their “core” operations to third parties. This speeds up delivery but also widens the blast radius when those dependencies are compromised. When your vault provider fails, your customers will still hold you accountable.

This should push us to adopt more defensive architectures. Moving towards ephemeral credentials, context-aware access and reducing reliance on long-lived static secrets.
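As a rough sketch of the ephemeral-credentials idea (the token type, scope strings and TTL are assumptions, not any vendor’s API): every credential carries a short lifetime and a narrow scope, so a leaked secret has a bounded blast radius.

```rust
use std::time::{Duration, Instant};

// Hypothetical ephemeral credential: scoped, short-lived, and checked on every use.
struct EphemeralToken {
    scope: &'static str,
    issued_at: Instant,
    ttl: Duration,
}

impl EphemeralToken {
    fn issue(scope: &'static str, ttl: Duration) -> Self {
        EphemeralToken { scope, issued_at: Instant::now(), ttl }
    }

    fn is_valid_for(&self, requested_scope: &str) -> bool {
        self.scope == requested_scope && self.issued_at.elapsed() < self.ttl
    }
}

fn main() {
    // A 15-minute credential scoped to a single operation, instead of a
    // long-lived static secret shared across services.
    let token = EphemeralToken::issue("billing:read", Duration::from_secs(15 * 60));

    assert!(token.is_valid_for("billing:read"));
    assert!(!token.is_valid_for("billing:write")); // scope is enforced, not implied

    // After expiry the token is dead weight; rotation is built in, not a chore.
    println!("valid now: {}", token.is_valid_for("billing:read"));
}
```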

We also need a culture shift. Secrets vaults are not infallible. Their security must be tested continuously. This includes adversarial simulations, code audits and community scrutiny. Trust in security systems is not a one-time grant. It is a relationship that must be earned repeatedly.

Closing Reflection: Trust Must Earn Itself Again

Vault Fault is a reminder that even our most trusted systems can develop cracks. The breach is not in the brute force of an attacker but in the quiet oversight of logic and design. As defenders, we must assume nothing is beyond failure. We must watch the watchers, test the guards and challenge the fortresses we build. Because the next fault may already be there, waiting to be found.

References and Further Reading

  1. The Hacker News – CyberArk and HashiCorp Flaws Enable Secret Exfiltration Without Credentials: https://thehackernews.com/2025/08/cyberark-and-hashicorp-flaws-enable.html
  2. CSO Online – Researchers uncover RCE attack chains in popular enterprise credential vaults: https://www.csoonline.com/article/4035274/researchers-uncover-rce-attack-chains-in-popular-enterprise-credential-vaults.html
  3. Dark Reading – Critical Zero-Day Bugs in CyberArk, HashiCorp Password Vaults: https://www.darkreading.com/cybersecurity-operations/critical-zero-day-bugs-cyberark-hashicorp-password-vaults
  4. Cyata Security – Vault Fault Disclosure: https://cyata.ai/vault-fault
  5. CyberArk Official Blog – Addressing Recent Vulnerabilities and Our Commitment to Security: https://www.cyberark.com/resources/all-blog-posts/addressing-recent-vulnerabilities-and-our-commitment-to-security

Oracle Cloud Breach Is a Transitive Trust Timebomb: Here’s How to Defuse It

“One mispatched server in the cloud can ignite a wildfire of trust collapse across 140,000 tenants.”

1. The Context: Why This Matters

In March 2025, a breach at Oracle Cloud shook the enterprise SaaS world. A few hours after Rahul from CloudSEK first flagged signs of a possible compromise, I published an initial analysis titled Is Oracle Cloud Safe? Data Breach Allegations and What You Need to Do Now. That piece was an urgent response to a fast-moving situation, but this article is the reflective follow-up. Here, I break down not just the facts of what happened, but the deeper problem it reveals: the fragility of transitive trust in modern cloud ecosystems.

Threat actor rose87168 leaked nearly 6 million records tied to Oracle’s login infrastructure, affecting over 140,000 tenants. The source? A misconfigured legacy server still running an unpatched version of Oracle Access Manager (OAM) vulnerable to CVE‑2021‑35587.

Initially dismissed by Oracle as isolated and obsolete, the breach was later confirmed via datasets and a tampered page on the login domain itself, captured in archived snapshots. This breach was not just an Oracle problem. It was a supply chain problem. The moment authentication breaks upstream, every SaaS product, platform, and identity provider depending on it inherits the risk, often unknowingly.

Welcome to the age of transitive trust.

2. Anatomy of the Attack

Attack Vector

  • Exploited: CVE-2021-35587, a critical RCE in Oracle Access Manager.
  • Payload: Malformed XML allowed unauthenticated remote code execution.

Exploited Asset

  • Legacy Oracle Cloud Gen1 login endpoints still active (e.g., login.us2.oraclecloud.com).
  • These endpoints were supposedly decommissioned but remained publicly accessible.

Proof & Exfiltration

  • Uploaded artefact visible in Wayback Machine snapshots.
  • Datasets included:
    • JKS files, encrypted SSO credentials, LDAP passwords
    • Tenant metadata, PII, hashes of admin credentials

Validated by researchers from CloudSEK, ZenoX, and GoSecure.

3. How Was This Possible?

  • Infrastructure drift: Legacy systems like Gen1 login were never fully decommissioned.
  • Patch blindness: CVE‑2021‑35587 was disclosed in 2021 but remained exploitable.
  • Trust misplacement: Downstream services assumed the upstream IDP layer was hardened.
  • Lack of dependency mapping: Tenants had no visibility into Oracle’s internal infra state.

4. How This Could Have Been Prevented

Oracle’s Prevention Gaps (vector and preventive control)

  • Legacy exposure: Enforce infra retirement workflows. Remove public DNS entries for deprecated endpoints.
  • Patch gaps: Automate CVE patch enforcement across cloud services with SLA tracking.
  • IDP isolation: Decouple prod identity from test/staging legacy infra. Enforce strict perimeter controls.

What Clients Could Have Done (risk inherited and mitigation strategy)

  • Blind transitive trust: Maintain a real-time trust graph between IDPs, SaaS apps, and their dependencies (see the sketch below).
  • Credential overreach: Use scoped tokens, auto-expire shared secrets, enforce rotation.
  • Detection lag: Monitor downstream for leaked credentials or unusual login flows tied to upstream IDPs.
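To make the trust-graph item above less abstract, here is a minimal sketch that models transitive trust as a directed graph and asks what is exposed when an upstream IDP is compromised; the node names are hypothetical.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Directed edges mean "depends on / trusts": if the target is compromised,
// every source that reaches it inherits the risk, transitively.
fn blast_radius<'a>(
    trusts: &HashMap<&'a str, Vec<&'a str>>,
    compromised: &'a str,
) -> HashSet<&'a str> {
    // Invert the edges, then walk everything that directly or indirectly
    // trusts the compromised node.
    let mut dependents_of: HashMap<&'a str, Vec<&'a str>> = HashMap::new();
    for (&src, deps) in trusts {
        for &dep in deps {
            dependents_of.entry(dep).or_default().push(src);
        }
    }

    let mut impacted: HashSet<&'a str> = HashSet::new();
    let mut queue: VecDeque<&'a str> = VecDeque::from([compromised]);
    while let Some(node) = queue.pop_front() {
        if impacted.insert(node) {
            if let Some(parents) = dependents_of.get(node) {
                queue.extend(parents.iter().copied());
            }
        }
    }
    impacted
}

fn main() {
    // Hypothetical SaaS estate: everything ultimately federates through one upstream IDP.
    let trusts = HashMap::from([
        ("crm", vec!["upstream-idp"]),
        ("billing", vec!["upstream-idp", "payments-gateway"]),
        ("ci-cd", vec!["crm"]), // transitive: trusts crm, which trusts the IDP
    ]);

    let impacted = blast_radius(&trusts, "upstream-idp");
    println!("a compromise of upstream-idp touches: {impacted:?}");
}
```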

5. Your Response Plan for Upstream IDP Risk

  • Identity & Access: Enforce federated MFA, short-lived sessions, conditional access rules.
  • Secrets Management: Store all secrets in a vault, rotate frequently, avoid static tokens.
  • Vulnerability Hygiene: Integrate CVE scanners into CI/CD pipelines and runtime checks.
  • Visibility & Auditing: Maintain structured logs of identity provider access and token usage.
  • Trust Graph Mapping: Actively map third-party IDP integrations, revalidate quarterly.

6. Tools That Help You Defuse Transitive Trust Risks

  • CloudSEK XVigil (credential leaks): Monitor for exposure of tokens, admin hashes, or internal credentials in open channels.
  • Cortex Xpanse / Censys (legacy infra exposure): Surface forgotten login domains and misconfigured IDP endpoints.
  • OPA / OSQuery / Falco (policy enforcement): Detect violations of login logic, elevated access, or fallback misroutes.
  • Orca / Wiz (runtime posture): Spot residual access paths and configuration drifts post-incident.
  • Sigstore / Cosign (supply chain integrity): Protect CI/CD artefacts, though limited in identity-layer breach contexts.
  • Vault by HashiCorp (secrets lifecycle): Automate token expiration, key rotation, and zero plaintext exposure.
  • Zerberus.ai Trace-AI (transitive trust, IDP visibility): Discover hidden dependencies in SaaS trust chains and enforce control validation.

7. Lessons Learned

When I sat down to write this, these statements felt too obvious to be called lessons. Of course authentication is production infrastructure, any practitioner would agree. But then why do so few treat it that way? Why don’t we build failovers for our SSO? Why is trust still assumed, rather than validated?

These aren’t revelations. They’re reminders; hard-earned ones.

  • Transitive trust is NOT NEUTRAL; it is a silent threat multiplier. It embeds risk invisibly into every integration.
  • Legacy infrastructure never retires itself. If it’s still reachable, it’s exploitable.
  • Authentication systems deserve production-level fault tolerance. Build them like you’d build your API or Payment Gateway.
  • Trust is not a diagram to revisit once a year; it must be observable, enforced, and continuously verified.

8. Making the Invisible Visible: Why We Built Zerberus

Transitive trust is invisible until it fails. Most teams don’t realise how many of their security guarantees hinge on external identity providers, third-party SaaS integrations, and cloud-native IAM misconfigurations.

At Zerberus, we set out to answer a hard question: What if you could see the trust relationships before they became a risk?

  • We map your entire trust graph, from identity providers and cloud resources to downstream tools and cross-SaaS entitlements.
  • We continuously verify the health and configuration of your identity and access layers, including:
    • MFA enforcement
    • Secret expiration windows
    • IDP endpoint exposure
  • We bridge compliance and security by treating auth controls and access posture as observable artefacts, not static assumptions.

Your biggest security risk may not be inside your codebase, but outside your control plane. Zerberus is your lens into that blind spot.

Want to Know Who You’re Really Trusting?

Start your free Zerberus trial and discover the trust graph behind your SaaS stack—before someone else does.

InfoSec’s Big Problem: Too Much Hope in One Cyber Database

The Myth of a Single Cyber Superpower: Why Global Infosec Can’t Rely on One Nation’s Database

What the collapse of MITRE’s CVE funding reveals about fragility, sovereignty, and the silent geopolitics of vulnerability management

I. The Day the Coordination Engine Stalled

On April 16, 2025, MITRE’s CVE program—arguably the most critical coordination layer in global vulnerability management—lost its federal funding.

There was no press conference, no coordinated transition plan, no handover to an international body. Just a memo, and silence. As someone who’s worked in information security for two decades, I should have been surprised. I wasn’t. We’ve long been building on foundations we neither control nor fully understand.

The CVE database isn’t just a spreadsheet of flaws. It is the lingua franca of cybersecurity. Without it, our systems don’t just become more vulnerable—they become incomparable.

II. From Backbone to Bottleneck

Since 1999, CVEs have given us a consistent, vendor-neutral way to identify and communicate about software vulnerabilities. Nearly every scanner, SBOM generator, security bulletin, bug bounty program, and regulatory framework references CVE IDs. The system enables prioritisation, automation, and coordinated disclosure.

But what happens when that language goes silent?

“We are flying blind in a threat-rich environment.”
Jen Easterly, former Director of CISA (2025)

That threat blindness is not hypothetical. The National Vulnerability Database (NVD)—which depends on MITRE for CVE enumeration—has a backlog exceeding 10,000 unanalysed vulnerabilities. Some tools have begun timing out or flagging stale data. Security orchestration systems misclassify vulnerabilities or ignore them entirely because the CVE ID was never issued.

This is not a minor workflow inconvenience. It’s a collapse in shared context, and it hits software supply chains the hardest.

III. Three Moves That Signalled Systemic Retreat

While many are treating the CVE shutdown as an isolated budget cut, it is in fact the third move in a larger geopolitical shift:

  • January 2025: The Cyber Safety Review Board (CSRB) was disbanded—eliminating the U.S.’s central post-incident review mechanism.
  • March 2025: Offensive cyber operations against Russia were paused by the U.S. Department of Defense, halting active containment of APTs like Fancy Bear and Gamaredon.
  • April 2025: MITRE’s CVE funding expired—effectively unplugging the vulnerability coordination layer trusted worldwide.

This is not a partisan critique. These decisions were made under a democratically elected government. But their global consequences are disproportionate. And this is the crux of the issue: when the world depends on a single nation for its digital immune system, even routine political shifts create existential risks.

IV. Global Dependency and the Quiet Cost of Centralisation

MITRE’s CVE system was always open, but never shared. It was funded domestically, operated unilaterally, and yet adopted globally.

That arrangement worked well—until it didn’t.

There is a word for this in international relations: asymmetry. In tech, we often call it technical debt. Whatever we name it, the result is the same: everyone built around a single point of failure they didn’t own or influence.

“Integrate various sources of threat intelligence in addition to the various software vulnerability/weakness databases.”
NSA, 2024

Even the NSA warned us not to over-index on CVE. But across industry, CVE/NVD remains hardcoded into compliance standards, vendor SLAs, and procurement language.

And as of this month, it’s… gone!

V. What Europe Sees That We Don’t Talk About

While the U.S. quietly pulled back, the European Union has been doing the opposite. Its Cyber Resilience Act (CRA) mandates that software vendors operating in the EU must maintain secure development practices, provide SBOMs, and handle vulnerability disclosures with rigour.

Unlike CVE, the CRA assumes no single vulnerability database will dominate. It emphasises process over platform, and mandates that organisations demonstrate control, not dependency.

This distinction matters.

If the CVE system was the shared fire alarm, the CRA is a fire drill—with decentralised protocols that work even if the main siren fails.

Europe, for all its bureaucratic delays, may have been right all along: resilience requires plurality.

VI. Lessons for the Infosec Community

At Zerberus, we anticipated this fracture. That’s why our ZSBOM™ platform was designed to pull vulnerability intelligence from multiple sources, including:

  • MITRE CVE/NVD (when available)
  • Google OSV
  • GitHub Security Advisories
  • Snyk and Sonatype databases
  • Internal threat feeds

This is not a plug; it’s a plea. Whether you use Zerberus or not, stop building your supply chain security around a single feed. Your tools, your teams, and your customers deserve more than monoculture.
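A standard-library-only sketch of that plea, under stated assumptions: each vulnerability source sits behind one small interface and their answers are merged, so the loss of any single feed degrades coverage instead of blinding the pipeline. The feeds here are stubs; a real implementation would query OSV, GitHub Security Advisories, NVD mirrors and internal intelligence.

```rust
use std::collections::BTreeSet;

// One interface over many feeds: each source either answers or is marked degraded.
trait VulnSource {
    fn name(&self) -> &'static str;
    fn advisories_for(&self, package: &str) -> Result<Vec<String>, String>;
}

struct StubFeed {
    label: &'static str,
    data: Vec<(&'static str, &'static str)>, // (package, advisory id)
    online: bool,
}

impl VulnSource for StubFeed {
    fn name(&self) -> &'static str { self.label }
    fn advisories_for(&self, package: &str) -> Result<Vec<String>, String> {
        if !self.online {
            return Err("feed unavailable".to_string());
        }
        Ok(self
            .data
            .iter()
            .filter(|(p, _)| *p == package)
            .map(|(_, id)| id.to_string())
            .collect())
    }
}

fn lookup(package: &str, sources: &[Box<dyn VulnSource>]) -> BTreeSet<String> {
    let mut merged = BTreeSet::new();
    for source in sources {
        match source.advisories_for(package) {
            Ok(ids) => merged.extend(ids),
            // A dead feed is a logged degradation, not a silent blind spot.
            Err(e) => eprintln!("warning: {} is degraded: {e}", source.name()),
        }
    }
    merged
}

fn main() {
    let sources: Vec<Box<dyn VulnSource>> = vec![
        // The single feed everyone assumed would always be there.
        Box::new(StubFeed { label: "nvd-mirror", data: vec![], online: false }),
        Box::new(StubFeed { label: "osv", data: vec![("log4j-core", "CVE-2021-44228")], online: true }),
        Box::new(StubFeed { label: "internal-intel", data: vec![("log4j-core", "CVE-2021-45046")], online: true }),
    ];
    println!("log4j-core advisories: {:?}", lookup("log4j-core", &sources));
}
```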

VII. The Superpower Paradox

Here’s the uncomfortable truth:

When you’re the sole superpower, you don’t get to take a break.

The U.S. built the digital infrastructure the world relies on. CVE. DNS. NIST. Even the major cloud providers. But global dependency without shared governance leads to fragility.

And fragility, in cyberspace, gets exploited.

We must stop pretending that open-source equals open-governance, that centralisation equals efficiency, or that U.S. stability is guaranteed. The MITRE shutdown is not the end—but it should be a beginning.

A beginning of a post-unipolar cybersecurity infrastructure, where responsibility is distributed, resilience is engineered, and no single actor—however well-intentioned—is asked to carry the weight of the digital world.

References 

  1. Gatlan, S. (2025) ‘MITRE warns that funding for critical CVE program expires today’, BleepingComputer, 16 April. Available at: https://www.bleepingcomputer.com/news/security/mitre-warns-that-funding-for-critical-cve-program-expires-today/ (Accessed: 16 April 2025).
  2. Easterly, J. (2025) ‘Statement on CVE defunding’, Vocal Media, 15 April. Available at: https://vocal.media/theSwamp/jen-easterly-on-cve-defunding (Accessed: 16 April 2025).
  3. National Institute of Standards and Technology (NIST) (2025) NVD Dashboard. Available at: https://nvd.nist.gov/general/nvd-dashboard (Accessed: 16 April 2025).
  4. The White House (2021) Executive Order on Improving the Nation’s Cybersecurity, 12 May. Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/ (Accessed: 16 April 2025).
  5. U.S. National Security Agency (2024) Mitigating Software Supply Chain Risks. Available at: https://media.defense.gov/2024/Jan/30/2003370047/-1/-1/0/CSA-Mitigating-Software-Supply-Chain-Risks-2024.pdf (Accessed: 16 April 2025).
  6. European Commission (2023) Proposal for a Regulation on Cyber Resilience Act. Available at: https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act (Accessed: 16 April 2025).

Disbanding the CSRB: A Mistake for National Security

Why Ending the CSRB Puts America at Risk

Imagine dismantling your fire department just because you haven’t had a major fire recently. That’s effectively what the Trump administration has done by disbanding the Cyber Safety Review Board (CSRB), a critical entity within the Cybersecurity and Infrastructure Security Agency (CISA). In an era of escalating cyber threats—ranging from ransomware targeting hospitals to sophisticated state-sponsored attacks—this decision is a catastrophic misstep for national security.

While countries across the globe are doubling down on cybersecurity investments, the United States has chosen to retreat from a proactive posture. The CSRB’s closure sends a dangerous message: that short-term political optics can override the long-term need for resilience in the face of digital threats.

The Role of the CSRB: A Beacon of Cybersecurity Leadership

Established to investigate and recommend strategies following major cyber incidents, the CSRB functioned as a hybrid think tank and task force, capable of cutting through red tape to deliver actionable insights. Its role extended beyond the public-facing reports; the board was deeply involved in guiding responses to sensitive, behind-the-scenes threats, ensuring that risks were mitigated before they escalated into crises.

The CSRB’s disbandment leaves a dangerous void in this ecosystem, weakening not only national defenses but also the trust between public and private entities.

CSRB: Championing Accountability and Reform

One of the CSRB’s most significant contributions was its ability to hold even the most powerful corporations accountable, driving reforms that prioritized security over profit. Its achievements are best understood through the lens of its high-profile investigations:

Key Milestones

Why the CSRB’s Work Mattered

The CSRB’s ability to compel change from tech giants like Microsoft underscored its importance. Without such mechanisms, corporations are less likely to prioritise cybersecurity, leaving critical infrastructure vulnerable to attack. As cyber threats grow in complexity, dismantling accountability structures like the CSRB risks fostering an environment where profits take precedence over security—a dangerous proposition for national resilience.

Cybersecurity as Strategic Deterrence

To truly grasp the implications of the CSRB’s dissolution, one must consider the broader strategic value of cybersecurity. The European Leadership Network aptly draws parallels between cyber capabilities and nuclear deterrence. Both serve as powerful tools for preventing conflict, not through their use but through the strength of their existence.

By dismantling the CSRB, the U.S. has not only weakened its ability to deter cyber adversaries but also signalled a lack of commitment to proactive defence. This retreat emboldens adversaries, from state-sponsored actors like China’s STORM-0558 to decentralized hacking groups, and undermines the nation’s strategic posture.

Global Trends: A Stark Contrast

While the U.S. retreats, the rest of the world is surging ahead. Nations in the Indo-Pacific, as highlighted by the Royal United Services Institute, are investing heavily in cybersecurity to counter growing threats. India, Japan, and Australia are fostering regional collaborations to strengthen their collective resilience.

Similarly, the UK and continental Europe are prioritising cyber capabilities. The UK, for instance, is shifting its focus from traditional nuclear deterrence to building robust cyber defences, a move advocated by the European Leadership Network. The EU’s Cybersecurity Strategy exemplifies the importance of unified, cross-border approaches to digital security.

The U.S.’s decision to disband the CSRB stands in stark contrast to these efforts, risking not only its national security but also its leadership in global cybersecurity.

Isolationism’s Dangerous Consequences

This decision reflects a broader trend of isolationism within the Trump administration. Whether it’s withdrawing from the World Health Organization or sidelining international climate agreements, the U.S. has increasingly disengaged from global efforts. In cybersecurity, this isolationist approach is particularly perilous.

Global threats demand global solutions. Initiatives like the Five Eyes’ Secure Innovation program (Infosecurity Magazine) demonstrate the value of collaborative defence strategies. By withdrawing from structures like the CSRB, the U.S. not only risks alienating allies but also forfeits its role as a global leader in cybersecurity.

The Cost of Complacency

Cybersecurity is not a field that rewards complacency. As CSO Online warns, short-term thinking in this domain can lead to long-term vulnerabilities. The absence of the CSRB means fewer opportunities to learn from incidents, fewer recommendations for systemic improvements, and a diminished ability to adapt to evolving threats.

The cost of this decision will likely manifest in increased cyber incidents, weakened critical infrastructure, and a growing divide between the U.S. and its allies in terms of cybersecurity capabilities.

Conclusion

The disbanding of the CSRB is not just a bureaucratic reshuffle—it is a strategic blunder with far-reaching implications for national and global security. In an age where digital threats are as consequential as conventional warfare, dismantling a key pillar of cybersecurity leaves the United States exposed and isolated.

The CSRB’s legacy of transparency, accountability, and reform serves as a stark reminder of what’s at stake. Its dissolution not only weakens national defences but also risks emboldening adversaries and eroding trust among international partners. To safeguard its digital future, the U.S. must urgently rebuild mechanisms like the CSRB, reestablish its leadership in cybersecurity, and recommit to collaborative defence strategies.

References & Further Reading

  1. TechCrunch. (2025). Trump administration fires members of cybersecurity review board in horribly shortsighted decision. Available at: TechCrunch
  2. The Conversation. (2025). Trump has fired a major cybersecurity investigations body – it’s a risky move. Available at: The Conversation
  3. TechDirt. (2025). Trump disbands cybersecurity board investigating massive Chinese phone system hack. Available at: TechDirt
  4. European Leadership Network. (2024). Nuclear vs Cyber Deterrence: Why the UK Should Invest More in Its Cyber Capabilities and Less in Nuclear Deterrence. Available at: ELN
  5. Royal United Services Institute. (2024). Cyber Capabilities in the Indo-Pacific: Shared Ambitions, Different Means. Available at: RUSI
  6. Infosecurity Magazine. (2024). Five Eyes Agencies Launch Startup Security Initiative. Available at: Infosecurity Magazine
  7. CSO Online. (2024). Project 2025 Could Escalate US Cybersecurity Risks, Endanger More Americans. Available at: CSO Online