Category: Cyber Security

Defence Tech at Risk: Palantir, Anduril, and Govini in the New AI Arms Race

A Chink in Palantir and Anduril’s Armour? Govini and Others Are Unsheathing the Sword

When Silicon Valley Code Marches to War

A U.S. Army Chinook rises over Gyeonggi Province, carrying not only soldiers and equipment but streams of battlefield telemetry, encrypted packets of sight, sound and position. Below, sensors link to vehicles, commanders to drones, decisions to data. Yet a recent Army memo reveals a darker subtext: the very network binding these forces together has been declared “very high risk.”

The battlefield is now a software construct. And the architects of that code are not defence primes from the industrial era but Silicon Valley firms: Anduril and Palantir. For years, they have promised that agility, automation and machine intelligence could redefine combat efficiency. But when an internal memo brands their flagship platform “fundamentally insecure,” the question is no longer about innovation. It is about survival.

Just as the armour shows its first cracks, another company, Govini, crosses $100 million in annual recurring revenue, sharpening its own blade in the same theatre.

When velocity becomes virtue and verification an afterthought, the chink in the armour often starts in the code.

The Field Brief

  • A U.S. Army CTO memo calls Anduril–Palantir’s NGC2 communications platform “very high risk.”
  • Vulnerabilities: unrestricted access, missing logs, unvetted third-party apps, and hundreds of critical flaws.
  • Palantir’s stock slides about 7.5%; Anduril dismisses the findings as outdated.
  • Meanwhile, Govini surpasses $100 M ARR with $150 M funding from Bain Capital.
  • The new arms race is not hardware; it is assurance.

Silicon Valley’s March on the Pentagon

For over half a century, America’s defence economy was dominated by industrial giants: Lockheed Martin, Boeing, and Northrop Grumman. Their reign was measured in steel, thrust and tonnage. But the twenty-first century introduced a new class of combatant: code.

Palantir began as an analytics engine for intelligence agencies, translating oceans of data into patterns of threat. Anduril followed as the hardware-agnostic platform marrying drones, sensors and AI decision loops into one mesh of command. Both firms embodied the “move fast” ideology of Silicon Valley: speed as a substitute for bureaucracy.

The Pentagon, fatigued by procurement inertia, welcomed the disruption. Billions flowed to agile software vendors promising digital dominance. Yet agility without auditability breeds fragility. And that fragility surfaced in the Army’s own words.

Inside the Memo: The Code Beneath the Uniform

The leaked memo, authored by Army CTO Gabriele Chiulli, outlines fundamental failures in the Next-Generation Command and Control (NGC2) prototype, a joint effort by Anduril, Palantir, Microsoft and others.

“We cannot control who sees what, we cannot see what users are doing, and we cannot verify that the software itself is secure.”

– US Army Memo

The findings are stark: users at varying clearance levels could access all data; activity logging was absent; several embedded applications had not undergone Army security assessment; one revealed twenty-five high-severity vulnerabilities, while others exceeded two hundred.

Translated into security language, the platform lacks role-based access control, integrity monitoring, and cryptographic segregation of data domains. Strategically, this means command blindness: an adversary breaching one node could move laterally without a trace.

In the lexicon of cyber operations, that is not “high risk.” It is mission failure waiting for confirmation.
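
To make the gap concrete, here is a toy sketch in Python (not anything from NGC2) of the two controls the memo says are missing: role-based access checks and hash-chained activity logging. The role names, clearance levels and documents are invented for illustration.

"""Toy illustration of role-based access control plus tamper-evident audit logging.
Roles, clearance levels and data domains are hypothetical, not taken from NGC2."""
import hashlib
import json
import time

ROLE_CLEARANCE = {"analyst": 1, "commander": 2, "admin": 3}    # hypothetical roles
DOCUMENT_CLEARANCE = {"fires-plan": 2, "logistics-feed": 1}    # hypothetical data domains

audit_log: list[dict] = []
_last_hash = "0" * 64

def log_event(user: str, action: str, resource: str, allowed: bool) -> None:
    """Append a hash-chained audit record so tampering with history is detectable."""
    global _last_hash
    record = {"ts": time.time(), "user": user, "action": action,
              "resource": resource, "allowed": allowed, "prev": _last_hash}
    _last_hash = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = _last_hash
    audit_log.append(record)

def read_document(user: str, role: str, doc: str) -> bool:
    """Allow the read only if the role's clearance meets the document's clearance."""
    allowed = ROLE_CLEARANCE.get(role, 0) >= DOCUMENT_CLEARANCE.get(doc, 99)
    log_event(user, "read", doc, allowed)
    return allowed

print(read_document("pvt.jones", "analyst", "fires-plan"))    # False: insufficient clearance
print(read_document("maj.smith", "commander", "fires-plan"))  # True, and both attempts are logged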


The Doctrine of Velocity

Anduril’s rebuttal was swift. The report, they claimed, represented “an outdated snapshot.” Palantir insisted that no vulnerabilities were found within its own platform.

Their responses echo a philosophy as old as the Valley itself: innovation first, audit later. The Army’s integration of Continuous Authority to Operate (cATO) sought to balance agility with accountability, allowing updates to roll out in days rather than months. Yet cATO is only as strong as the telemetry beneath it. Without continuous evidence, continuous authorisation becomes continuous exposure.

This is the paradox of modern defence tech: DevSecOps without DevGovernance. A battlefield network built for iteration risks treating soldiers as beta testers.

Govini’s Counteroffensive: Discipline over Demos

While Palantir’s valuation trembled, Govini’s ascended. The Arlington-based startup announced $100 million in annual recurring revenue and secured $150 million from Bain Capital. Its CEO, Tara Murphy Dougherty — herself a former Palantir executive — emphasised the company’s growth trajectory and its $900 million federal contract portfolio.

Govini’s software, Ark, is less glamorous than autonomous drones or digital fire-control systems. It maps the U.S. military’s supply chain, linking procurement, logistics and readiness. Where others promise speed, Govini preaches structure. It tracks materials, suppliers and vulnerabilities across lifecycle data — from the factory floor to the frontline.

If Anduril and Palantir forged the sword of rapid innovation, Govini is perfecting its edge. Precision, not pace, has become its competitive advantage. In a field addicted to disruption, Govini’s discipline feels almost radical.

Technical Reading: From Vulnerability to Vector

The NGC2 memo can be interpreted through a simple threat-modelling lens:

  1. Privilege Creep → Data Exposure — Excessive permissions allow information spillage across clearance levels.
  2. Third-Party Applications → Supply-Chain Compromise — External code introduces unassessed attack surfaces.
  3. Absent Logging → Zero Forensics — Breaches remain undetected and untraceable.
  4. Unverified Binaries → Persistent Backdoors — Unknown components enable long-term infiltration.

These patterns mirror civilian software ecosystems: typosquatted dependencies on npm, poisoned PyPI packages, unpatched container images. The military variant merely amplifies consequences; a compromised package here could redirect an artillery feed, not a webpage.

Modern defence systems must therefore adopt commercial best practice at military scale: Software Bills of Materials (SBOMs), continuous vulnerability correlation, maintainer-anomaly detection, and cryptographic provenance tracking.
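
As a small illustration of what “SBOM plus continuous vulnerability correlation” can look like in practice, the sketch below inventories the packages installed in a Python environment and checks each against the public OSV API. Treat the endpoint usage as my reading of the OSV documentation, a sketch rather than a hardened integration.

"""Sketch: build a minimal package inventory and correlate it against OSV.
Endpoint and response fields follow the public OSV API as I understand it."""
import json
import urllib.request
from importlib import metadata

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def osv_vulns(name: str, version: str) -> list[dict]:
    """Return the list of known OSV advisories for one PyPI package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode()
    req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

# Minimal SBOM: name and version of everything installed in this environment.
sbom = sorted((d.metadata["Name"], d.version) for d in metadata.distributions())

for name, version in sbom:
    vulns = osv_vulns(name, version)
    if vulns:
        print(f"{name}=={version}: {', '.join(v['id'] for v in vulns)}")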

Metadata-only validation, verifying artefacts without exposing source, is emerging as the new battlefield armour. Security must become declarative, measurable, and independent of developer promises.
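
Metadata-only validation can be as simple as comparing an artefact’s digest against the value recorded in its provenance statement. The sketch below assumes a hypothetical provenance file and artefact name; the point is that the check needs no access to source code.

"""Sketch: verify an artefact against the digest in its provenance record.
File names and the provenance layout are hypothetical placeholders."""
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large artefacts do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical provenance record shipped alongside the artefact by the build system.
provenance = json.loads(Path("module.provenance.json").read_text())
expected = provenance["subject"][0]["digest"]["sha256"]

actual = sha256_of(Path("module.tar.gz"))
if actual != expected:
    raise SystemExit(f"Integrity check failed: expected {expected}, got {actual}")
print("Artefact digest matches recorded provenance.")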

Procurement and Policy: When Compliance Becomes Combat

The implications extend far beyond Anduril and Palantir. Procurement frameworks themselves require reform. For decades, contracts rewarded milestones — prototypes delivered, demos staged, systems deployed. Very few tied payment to verified security outcomes.

Future defence contracts must integrate technical evidence: SBOMs, audit trails, and automated compliance proofs. Continuous monitoring should be a contractual clause, not an afterthought. The Department of Defense’s push towards Zero Trust and CMMC v2 compliance is a start, but implementation must reach code level.

Governments cannot afford to purchase vulnerabilities wrapped in innovation rhetoric. The next generation of military contracting must buy assurance as deliberately as it buys ammunition.

Market Implications: Valuation Meets Validation

The markets reacted predictably: Palantir’s shares slid 7.5%, while Govini’s valuation swelled with investor confidence. Yet beneath these fluctuations lies a structural shift.

Defence technology is transitioning from narrative-driven valuation to evidence-driven validation. The metric investors increasingly prize is not just recurring revenue but recurring reliability, the ability to prove resilience under audit.

Trust capital, once intangible, is becoming quantifiable. In the next wave of defence-tech funding, startups that embed assurance pipelines will attract the same enthusiasm once reserved for speed alone.

The Lessons of the Armour — Ten Principles for Digital Fortification

For practitioners like me (old school), here are the lessons learnt, viewed through the classic lens of Saltzer and Schroeder.

Each modern principle is paired with its Saltzer & Schroeder counterpart and a practical interpretation for modern systems.

  1. Command DevSecOps – Governance must be embedded, not appended; every deployment decision is a command decision. (Economy of Mechanism: keep security mechanisms simple, auditable, and centrally enforced across CI/CD and mission environments.)
  2. Segment by Mission – Separate environments and privileges by operational need. (Least Privilege: each actor, human or machine, receives the minimum access required for the mission window; segmentation prevents lateral movement.)
  3. Log or Lose – No event should be untraceable. (Complete Mediation: every access request and data flow must be logged and verified in real time; enforce tamper-evident telemetry to maintain operational integrity.)
  4. Vet Third-Party Code – Treat every dependency as a potential adversary. (Open Design: assume no obscurity; transparency, reproducible builds and independent review are the only assurance that supply-chain code is safe.)
  5. Maintain Live SBOMs – Generate provenance at build and deployment. (Separation of Privilege: independent verification of artefacts through cryptographic attestation ensures multiple checks before code reaches production.)
  6. Embed Rollback Paths – Every deployment must have a controlled retreat. (Fail-Safe Defaults: when uncertainty arises, systems must default to a known-safe state; rollback or isolation preserves mission continuity.)
  7. Automate Anomaly Detection – Treat telemetry as perimeter. (Least Common Mechanism: shared services such as APIs or pipelines should minimise trust overlap; automated detectors isolate abnormal behaviour before propagation.)
  8. Demand Provenance – Trust only what can be verified cryptographically. (Psychological Acceptability: verification should be effortless for operators; provenance and signatures must integrate naturally into existing workflow tools.)
  9. Audit AI – Governance must evolve with autonomy. (Separation of Privilege and Economy of Mechanism: multiple models or oversight nodes should validate AI decisions; explainability should enhance, not complicate, assurance.)
  10. Measure After Assurance – Performance metrics follow proof of security, never precede it. (Least Privilege and Fail-Safe Defaults: prioritise verifiable assurance before optimisation; treat security evidence as a precondition for mission performance metrics.)

The Sword and the Shield

The codebase has become the battlefield. Every unchecked commit, every unlogged transaction, carries kinetic consequence.

Anduril and Palantir forged the sword, algorithms that react faster than human cognition. But Govini, and others of its kind, remind us that the shield matters as much as the blade. In warfare, resilience is victory’s quiet architect.

The lesson is not that speed is dangerous, but that speed divorced from verification is indistinguishable from recklessness. The future of defence technology belongs to those who master both: the velocity to innovate and the discipline to ensure that innovation survives contact with reality.

In this new theatre of code and command, it is not the flash of the sword that defines power — it is the assurance of the armour that bears it.

References & Further Reading

  • Mike Stone, Reuters (3 Oct 2025) — “Anduril and Palantir battlefield communication system ‘very high risk,’ US Army memo says.”
  • Samantha Subin, CNBC (10 Oct 2025) — “Govini hits $100 M in annual recurring revenue with Bain Capital investment.”
  • NIST SP 800-218: Secure Software Development Framework (SSDF).
  • U.S. DoD Zero-Trust Strategy (2024).
  • MITRE ATT&CK for Defence Systems.
When Trust Cracks: The Vault Fault That Shook Identity Security

A vault exposed outside the DMZ

Opening Scene: The Unthinkable Inside Your Digital Fortress

Imagine standing before a vault that holds every secret of your organisation. It is solid, silent and built to withstand brute force. Yet one day you discover someone walked straight in. No alarms. No credentials. No trace of a break-in. That is what the security community woke up to when researchers disclosed Vault Fault: a cluster of flaws in the very tools meant to guard our digital crown jewels.

Behind the Curtain: The Guardians of Our Secrets

Secrets management platforms like HashiCorp Vault, CyberArk Conjur and CyberArk Secrets Manager sit at the heart of modern identity infrastructure. They store API keys, service credentials, encryption keys and more. In DevSecOps pipelines and hybrid environments, they are the trusted custodians. If a vault is compromised, it is not one system at risk. It is every connected system.

Vault Fault Unveiled: A Perfect Storm of Logic Flaws

Security firm Cyata revealed fourteen vulnerabilities spread across CyberArk and HashiCorp’s vault products. These were not just minor configuration oversights. They included:

  • CyberArk Conjur: IAM authenticator bypass by manipulating how regions are parsed. Privilege escalation by authenticating as a policy. Remote code execution by exploiting the ERB-based Policy Factory.
  • HashiCorp Vault: Nine zero-day issues including the first ever RCE in Vault. Bypasses of multi-factor authentication and account lockout logic. User enumeration through subtle timing differences. Escalation by abusing how policies are normalised.

These were chains of logic flaws that could be combined to devastating effect. Attackers could impersonate identities, escalate privileges, execute arbitrary code and exfiltrate secrets without ever providing valid credentials.

The Fallout: When Silent Vaults Explode

Perhaps the most unnerving fact is the age of some vulnerabilities. Several had been present for up to nine years. Quiet, undetected and exploitable. Remote code execution against a secrets vault is the equivalent of giving an intruder the keys to every door in your company. Once inside, they can lock you out, leak sensitive information or weaponise access for extortion.

Response and Remedy: Patch, Shield, Reinvent

Both vendors have issued fixes:

  • CyberArk Secrets Manager and Self-Hosted versions 13.5.1 and 13.6.1.
  • CyberArk Conjur Open Source version 1.22.1.
  • HashiCorp Vault Community and Enterprise editions 1.20.2, 1.19.8, 1.18.13 and 1.16.24.

Cyata’s guidance is direct. Patch immediately. Restrict network exposure of vault instances. Audit and rotate secrets. Minimise secret lifetime and scope. Enable detailed audit logs and monitor for anomalies. CyberArk has also engaged directly with customers to support remediation efforts.
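
For teams scripting the rotation step, a rough sketch against HashiCorp Vault’s KV v2 engine via the hvac client might look like the following. The mount point, secret path and the way the replacement value is generated are assumptions to adapt; remember to update the consuming services as well.

"""Sketch: rotate a KV v2 secret in HashiCorp Vault using hvac.
Mount point, path and value generation are illustrative assumptions."""
import os
import secrets

import hvac  # pip install hvac

client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
assert client.is_authenticated(), "Vault login failed"

new_value = secrets.token_urlsafe(32)  # use a provider-issued credential where relevant

# Write a fresh version of the secret; KV v2 keeps prior versions for rollback and audit.
client.secrets.kv.v2.create_or_update_secret(
    mount_point="secret",
    path="payments/api-key",
    secret={"api_key": new_value},
)

# Confirm the rotation by reading back the latest version's metadata.
latest = client.secrets.kv.v2.read_secret_version(mount_point="secret", path="payments/api-key")
print("Rotated to version", latest["data"]["metadata"]["version"])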

Broader Lessons: Beyond the Fault

The nature of these flaws should make us pause. They were not memory corruption or injection bugs. They were logic vulnerabilities hiding in plain sight. The kind that slip past automated scans and live through version after version.

It is like delegating your IaaS or PaaS to AWS or Azure. They may run the infrastructure, but you are still responsible for meeting your own uptime SLAs. In the same way, even if you store secrets such as credit card numbers, API tokens or encryption keys in a vault, you remain responsible for securing them. The liability for a breach still sits with you.

Startups are especially vulnerable. Many operate under relentless deadlines and tight budgets. They offload everything that is not seen as part of their “core” operations to third parties. This speeds up delivery but also widens the blast radius when those dependencies are compromised. When your vault provider fails, your customers will still hold you accountable.

This should push us to adopt more defensive architectures. Moving towards ephemeral credentials, context-aware access and reducing reliance on long-lived static secrets.
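
As a stdlib-only sketch of the ephemeral-credential idea, the snippet below issues scoped, signed tokens that expire after five minutes. The signing-key handling and the TTL are illustrative choices, not a recommendation for any specific product.

"""Sketch: short-lived, scoped, HMAC-signed tokens so a leaked value ages out on its own.
Key handling and TTL are illustrative only."""
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-key"  # in practice fetched from a KMS or vault, never hardcoded

def issue_token(subject: str, scope: str, ttl_seconds: int = 300) -> str:
    claims = {"sub": subject, "scope": scope, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str, required_scope: str) -> bool:
    body, sig = token.rsplit(".", 1)
    if not hmac.compare_digest(sig, hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()):
        return False  # tampered, or signed with a different key
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = issue_token("ci-runner-42", scope="read:artifacts")
print(verify_token(token, "read:artifacts"))  # True while the five-minute window is open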

We also need a culture shift. Secrets vaults are not infallible. Their security must be tested continuously. This includes adversarial simulations, code audits and community scrutiny. Trust in security systems is not a one-time grant. It is a relationship that must be earned repeatedly.

Closing Reflection: Trust Must Earn Itself Again

Vault Fault is a reminder that even our most trusted systems can develop cracks. The breach is not in the brute force of an attacker but in the quiet oversight of logic and design. As defenders, we must assume nothing is beyond failure. We must watch the watchers, test the guards and challenge the fortresses we build. Because the next fault may already be there, waiting to be found.

References and Further Reading

  1. The Hacker News – CyberArk and HashiCorp Flaws Enable Secret Exfiltration Without Credentials: https://thehackernews.com/2025/08/cyberark-and-hashicorp-flaws-enable.html
  2. CSO Online – Researchers uncover RCE attack chains in popular enterprise credential vaults: https://www.csoonline.com/article/4035274/researchers-uncover-rce-attack-chains-in-popular-enterprise-credential-vaults.html
  3. Dark Reading – Critical Zero-Day Bugs in CyberArk, HashiCorp Password Vaults: https://www.darkreading.com/cybersecurity-operations/critical-zero-day-bugs-cyberark-hashicorp-password-vaults
  4. Cyata Security – Vault Fault Disclosure: https://cyata.ai/vault-fault
  5. CyberArk Official Blog – Addressing Recent Vulnerabilities and Our Commitment to Security: https://www.cyberark.com/resources/all-blog-posts/addressing-recent-vulnerabilities-and-our-commitment-to-security
Simple Steps to Make Your Code More Secure Using Pre-Commit

Build Smarter, Ship Faster: Engineering Efficiency and Security with Pre-Commit

In high-velocity engineering teams, the biggest bottlenecks aren’t always technical; they are organisational. Inconsistent code quality, wasted CI cycles, and preventable security leaks silently erode your delivery speed and reliability. This is where pre-commit transforms from a utility to a discipline.

This guide unpacks how to use pre-commit hooks to drastically improve engineering efficiency and development-time security, with practical tips, real-world case studies, and scalable templates.

Developer Efficiency: Cut Feedback Loops, Boost Velocity

The Problem

  • Endless nitpicks in code reviews
  • Time lost in CI failures that could have been caught locally
  • Onboarding delays due to inconsistent tooling

Pre-Commit to the Rescue

  • Automates formatting, linting, and static checks
  • Runs locally before Git commit or push
  • Ensures only clean code enters your repos

Best Practices for Engineering Velocity

  • Use lightweight, scoped hooks like black, isort, flake8, eslint, and ruff
  • Set stages: [pre-commit, pre-push] to optimise local speed
  • Enforce full project checks in CI with pre-commit run --all-files

Case Study: Engineering Efficiency in D2C SaaS (VC Due Diligence)

While consulting on behalf of a VC firm evaluating a fast-scaling D2C SaaS platform, we observed recurring issues: poor formatting hygiene, inconsistent PEP8 compliance, and prolonged PR cycles. My recommendation was to introduce pre-commit with a standardised configuration.

Within two sprints:

  • Developer velocity improved with 30% faster code merges
  • CI resource usage dropped 40% by avoiding trivial build failures
  • The platform was better positioned for future investment, thanks to a visibly stronger engineering discipline

Shift-Left Security: Prevent Leaks Before They Ship

The Problem

  • Secrets accidentally committed to Git history
  • Vulnerable code changes sneaking past reviews
  • Inconsistent security hygiene across teams

Pre-Commit as a Security Gate

  • Enforce secret scanning at commit time with tools like detect-secrets, gitleaks, and trufflehog
  • Standardise secure practices across microservices via shared config
  • Prevent common anti-patterns (e.g., print debugging, insecure dependencies)

Pre-Commit Security Toolkit

  • detect-secrets for credential scanning
  • bandit for Python security static analysis
  • Custom regex-based hooks for internal secrets
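
A custom hook can be nothing more than a small script that pre-commit invokes with the staged file names. Below is a minimal sketch; the regex patterns are hypothetical placeholders for whatever your organisation actually considers an internal secret.

#!/usr/bin/env python3
"""Custom pre-commit hook sketch: fail the commit if internal secret patterns appear in staged files.
The patterns below are hypothetical examples."""
import re
import sys

PATTERNS = [
    re.compile(r"INTERNAL_API_KEY\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),
]

def main(paths: list[str]) -> int:
    failed = False
    for path in paths:  # pre-commit passes the staged filenames as arguments
        try:
            text = open(path, "r", encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for pattern in PATTERNS:
            if pattern.search(text):
                print(f"Potential internal secret in {path}: {pattern.pattern}")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

It can then be wired in under a repo: local entry in .pre-commit-config.yaml, typically with language: script or language: system pointing at the file.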

Case Study: Security Posture for HealthTech Startup

During a technical audit for a VC exploring investment in a HealthTech startup handling patient data, I discovered credentials hardcoded in multiple branches. We immediately introduced detect-secrets and bandit via pre-commit.

Impact over the next month:

  • 100% of developers enforced local secret scanning
  • 3 previously undetected vulnerabilities were caught before merging
  • Their security maturity score, used by the VC’s internal checklist, jumped significantly—securing the next funding round

Implementation Blueprint

📄 Pre-commit Sample Config

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.0.3
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
        stages: [pre-commit]

Developer Setup

brew install pre-commit  # or pip install pre-commit
pre-commit install
pre-commit run --all-files

CI Pipeline Snippet

- name: Run pre-commit hooks
  run: |
    pip install pre-commit
    pre-commit run --all-files

Final Thoughts: Pre-Commit as Engineering Culture

Pre-commit is not just a Git tool. It’s your first line of:

  • Code Quality Defence
  • Security Posture Reinforcement
  • Operational Efficiency

Adopting it is a small effort with exponential returns.

Start small. Standardise. Automate. And let every commit carry the weight of your engineering discipline.

Stay Updated

Follow NocturnalKnight.co and my Substack for hands-on DevSecOps guides that blend efficiency, compliance, and automation.

Got feedback or want the Zerberus pre-commit kit? Ping me on LinkedIn or leave a comment.


Oracle Cloud Breach Is a Transitive Trust Timebomb: Here’s How to Defuse It

“One mispatched server in the cloud can ignite a wildfire of trust collapse across 140,000 tenants.”

1. The Context: Why This Matters

In March 2025, a breach at Oracle Cloud shook the enterprise SaaS world. A few hours after Rahul from CloudSEK first flagged signs of a possible compromise, I published an initial analysis titled Is Oracle Cloud Safe? Data Breach Allegations and What You Need to Do Now. That piece was an urgent response to a fast-moving situation, but this article is the reflective follow-up. Here, I break down not just the facts of what happened, but the deeper problem it reveals: the fragility of transitive trust in modern cloud ecosystems.

Threat actor rose87168 leaked nearly 6 million records tied to Oracle’s login infrastructure, affecting over 140,000 tenants. The source? A misconfigured legacy server still running an unpatched version of Oracle Access Manager (OAM) vulnerable to CVE‑2021‑35587.

Initially dismissed by Oracle as isolated and obsolete, the breach was later confirmed via datasets and a tampered page on the login domain itself, captured in archived snapshots. This breach was not just an Oracle problem. It was a supply chain problem. The moment authentication breaks upstream, every SaaS product, platform, and identity provider depending on it inherits the risk, often unknowingly.

Welcome to the age of transitive trust.

2. Anatomy of the Attack

Attack Vector

  • Exploited: CVE-2021-35587, a critical RCE in Oracle Access Manager.
  • Payload: Malformed XML allowed unauthenticated remote code execution.

Exploited Asset

  • Legacy Oracle Cloud Gen1 login endpoints still active (e.g., login.us2.oraclecloud.com).
  • These endpoints were supposedly decommissioned but remained publicly accessible.

Proof & Exfiltration

  • Uploaded artefact visible in Wayback Machine snapshots.
  • Datasets included:
    • JKS files, encrypted SSO credentials, LDAP passwords
    • Tenant metadata, PII, hashes of admin credentials

Validated by researchers from CloudSEK, ZenoX, and GoSecure.

3. How Was This Possible?

  • Infrastructure drift: Legacy systems like Gen1 login were never fully decommissioned.
  • Patch blindness: CVE‑2021‑35587 was disclosed in 2021 but remained exploitable.
  • Trust misplacement: Downstream services assumed the upstream IDP layer was hardened.
  • Lack of dependency mapping: Tenants had no visibility into Oracle’s internal infra state.

4. How This Could Have Been Prevented

Oracle’s Prevention Gaps
  • Legacy exposure: Enforce infra retirement workflows. Remove public DNS entries for deprecated endpoints.
  • Patch gaps: Automate CVE patch enforcement across cloud services with SLA tracking.
  • IDP isolation: Decouple prod identity from test/staging legacy infra. Enforce strict perimeter controls.

What Clients Could Have Done

  • Blind transitive trust: Maintain a real-time trust graph between IDPs, SaaS apps, and their dependencies.
  • Credential overreach: Use scoped tokens, auto-expire shared secrets, enforce rotation.
  • Detection lag: Monitor downstream for leaked credentials or unusual login flows tied to upstream IDPs.

5. Your Response Plan for Upstream IDP Risk

  • Identity & Access: Enforce federated MFA, short-lived sessions, conditional access rules
  • Secrets Management: Store all secrets in a vault, rotate frequently, avoid static tokens
  • Vulnerability Hygiene: Integrate CVE scanners into CI/CD pipelines and runtime checks
  • Visibility & Auditing: Maintain structured logs of identity provider access and token usage
  • Trust Graph Mapping: Actively map third-party IDP integrations, revalidate quarterly
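
To make trust graph mapping less abstract, here is a deliberately small sketch that models trust edges as a dictionary and walks them to find which services transitively depend on a given upstream identity provider. The service names are invented for illustration.

"""Sketch: a toy trust graph and a walk to surface transitive dependence on an upstream IDP.
All service names are hypothetical."""
from collections import deque

TRUST_GRAPH = {
    "billing-app": ["okta-idp", "stripe"],
    "analytics-app": ["okta-idp"],
    "okta-idp": ["oracle-cloud-login"],   # upstream federation, easily forgotten
    "stripe": [],
    "oracle-cloud-login": [],
}

def transitively_trusts(graph: dict[str, list[str]], service: str, target: str) -> bool:
    """Return True if `service` depends on `target` directly or through any chain."""
    seen, queue = set(), deque([service])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return False

# Which services inherit risk from a compromised upstream login service?
exposed = [s for s in TRUST_GRAPH if transitively_trusts(TRUST_GRAPH, s, "oracle-cloud-login")]
print(exposed)  # ['billing-app', 'analytics-app', 'okta-idp', 'oracle-cloud-login']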

6. Tools That Help You Defuse Transitive Trust Risks

  • CloudSEK XVigil (credential leaks): Monitor for exposure of tokens, admin hashes, or internal credentials in open channels
  • Cortex Xpanse / Censys (legacy infra exposure): Surface forgotten login domains and misconfigured IDP endpoints
  • OPA / OSQuery / Falco (policy enforcement): Detect violations of login logic, elevated access, or fallback misroutes
  • Orca / Wiz (runtime posture): Spot residual access paths and configuration drifts post-incident
  • Sigstore / Cosign (supply chain integrity): Protect CI/CD artefacts, but limited in identity-layer breach contexts
  • HashiCorp Vault (secrets lifecycle): Automate token expiration, key rotation, and zero plaintext exposure
  • Zerberus.ai Trace-AI (transitive trust, IDP visibility): Discover hidden dependencies in SaaS trust chains and enforce control validation

7. Lessons Learned

When I sat down to write this, these statements felt too obvious to be called lessons. Of course authentication is production infrastructure; any practitioner would agree. But then why do so few treat it that way? Why don’t we build failovers for our SSO? Why is trust still assumed, rather than validated?

These aren’t revelations. They’re reminders; hard-earned ones.

  • Transitive trust is NOT NEUTRAL, it’s a silent threat multiplier. It embeds risk invisibly into every integration.
  • Legacy infrastructure never retires itself. If it’s still reachable, it’s exploitable.
  • Authentication systems deserve production-level fault tolerance. Build them like you’d build your API or Payment Gateway.
  • Trust is not a diagram to revisit once a year; it must be observable, enforced, and continuously verified.

8. Making the Invisible Visible: Why We Built Zerberus

Transitive trust is invisible until it fails. Most teams don’t realise how many of their security guarantees hinge on external identity providers, third-party SaaS integrations, and cloud-native IAM misconfigurations.

At Zerberus, we set out to answer a hard question: What if you could see the trust relationships before they became a risk?

  • We map your entire trust graph, from identity providers and cloud resources to downstream tools and cross-SaaS entitlements.
  • We continuously verify the health and configuration of your identity and access layers, including:
    • MFA enforcement
    • Secret expiration windows
    • IDP endpoint exposure
  • We bridge compliance and security by treating auth controls and access posture as observable artefacts, not static assumptions.

Your biggest security risk may not be inside your codebase, but outside your control plane. Zerberus is your lens into that blind spot.


Want to Know Who You’re Really Trusting?

Start your free Zerberus trial and discover the trust graph behind your SaaS stack—before someone else does.

Trump’s Executive Order 14144 Overhaul, Part 1: Sanctions, AI, and Security at the Crossroads

I have been analysing cybersecurity legislation and policy for years — not just out of academic curiosity, but through the lens of a practitioner grounded in real-world systems and an observer tuned to the undercurrents of geopolitics. With this latest Executive Order, I took time to trace implications not only where headlines pointed, but also in the fine print. Consider this your distilled briefing: designed to help you, whether you’re in policy, security, governance, or tech. If you’re looking specifically for Post-Quantum Cryptography, hold tight — Part 2 of this series dives deep into that.

Image summarising the EO14144 Amendment

“When security becomes a moving target, resilience must become policy.” That appears to be the underlying message in the White House’s latest cybersecurity directive — a new Executive Order (June 6, 2025) that amends and updates the scope of earlier cybersecurity orders (13694 and 14144). The order introduces critical shifts in how the United States addresses digital threats, retools offensive and defensive cyber policies, and reshapes future standards for software, identity, and AI/quantum resilience.

Here’s a breakdown of the major components:

1. Recalibrating Cyber Sanctions: A Narrower Strike Zone

The Executive Order modifies EO 13694 (originally enacted under President Obama) by limiting the scope of sanctions to “foreign persons” involved in significant malicious cyber activity targeting critical infrastructure. While this aligns sanctions with diplomatic norms, it effectively removes domestic actors and certain hybrid threats from direct accountability under this framework.

More controversially, the order removes explicit provisions on election interference, which critics argue could dilute the United States’ posture against foreign influence operations in democratic processes. This omission has sparked concern among cybersecurity policy experts and election integrity advocates.

2. Digital Identity Rollback: A Missed Opportunity?

In a notable reversal, the order revokes a Biden-era initiative aimed at creating a government-backed digital identity system for securely accessing public benefits. The original programme sought to modernise digital identity verification while reducing fraud.

The administration has justified the rollback by citing concerns over entitlement fraud involving undocumented individuals, but many security professionals argue this undermines legitimate advancements in privacy-preserving, verifiable identity systems, especially as other nations accelerate national digital ID adoption.

3. AI and Quantum Security: Building Forward with Standards

In a forward-looking move, the order places renewed emphasis on AI system security and quantum-readiness. It tasks the Department of Defence (DoD), Department of Homeland Security (DHS), and Office of the Director of National Intelligence (ODNI) with establishing minimum standards and risk assessment frameworks for:

  • Artificial Intelligence (AI) system vulnerabilities in government use
  • Quantum computing risks, especially in breaking current encryption methods

A major role is assigned to NIST — to develop formal standards, update existing guidance, and expand the National Cybersecurity Centre of Excellence (NCCoE) use cases on AI threat modelling and cryptographic agility.

(We will cover the post-quantum cryptography directives in detail in Part 2 of this series.)

4. Software Security: From Documentation to Default

The Executive Order mandates a major upgrade in the federal software security lifecycle. Specifically, NIST has been directed to:

  • Expand the Secure Software Development Framework (SSDF)
  • Build an industry-led consortium for secure patching and software update mechanisms
  • Publish updates to NIST SP 800-53 to reflect stronger expectations on software supply chain controls, logging, and third-party risk visibility

This reflects a larger shift toward enforcing security-by-design in both federal software acquisitions and vendor submissions, including open-source components.

5. A Shift in Posture: From Prevention to Risk Acceptance?

Perhaps the most significant undercurrent in the EO is a philosophical pivot: moving from proactive deterrence to a model that manages exposure through layered standards and economic deterrents. Critics caution that this may downgrade national cyber defence from a proactive strategy to a posture of strategic containment.

This move seems to prioritise resilience over retaliation, but it also raises questions: what happens when deterrence is no longer a credible or immediate tool?

Final Thoughts

This Executive Order attempts to balance continuity with redirection, sustaining selective progress in software security and PQC while revoking or narrowing other key initiatives like digital identity and foreign election interference sanctions. Whether this is a strategic recalibration or a rollback in disguise remains a matter of interpretation.

As the cybersecurity landscape evolves faster than ever, one thing is clear: this is not just a policy update; it is a signal of intent. And that signal deserves close scrutiny from both allies and adversaries alike.

Further Reading

https://www.whitehouse.gov/presidential-actions/2025/06/sustaining-select-efforts-to-strengthen-the-nations-cybersecurity-and-amending-executive-order-13694-and-executive-order-14144/

AI in Security & Compliance: Why SaaS Leaders Must Act Now

We built and launched a PCI-DSS aligned, co-branded credit card platform in under 100 days. Product velocity wasn’t our problem — compliance was.

What slowed us wasn’t the tech stack. It was the context switch. Engineers losing hours stitching Jira tickets to Confluence tables to AWS configs. Screenshots instead of code. Slack threads instead of system logs. We weren’t building product anymore — we were building decks for someone else’s checklist.

Reading Jason Lemkin’s “AI Slow Roll” on SaaStr stirred something. If SaaS teams are already behind on using AI to ship products, they’re even further behind on using AI to prove trust — and that’s what compliance is. This is my wake-up call, and if you’re a CTO, Founder, or Engineering Leader, maybe it should be yours too.

The Real Cost of ‘Not Now’

Most SaaS teams postpone compliance automation until a large enterprise deal looms. That’s when panic sets in. Security questionnaires get passed around like hot potatoes. Engineers are pulled from sprints to write security policies or dig up AWS settings. Roadmaps stall. Your best developers become part-time compliance analysts.

All because of a lie we tell ourselves:
“We’ll sort compliance when we need it.”

By the time “need” shows up — in an RFP, a procurement form, or a prospect’s legal review — the damage is already done. You’ve lost the narrative. You’ve lost time. You might lose the deal.

Let’s be clear: you’re not saving time by waiting. You’re borrowing it from your product team — and with interest.

AI-Driven Compliance Is Real, and It’s Working

Today’s AI-powered compliance platforms aren’t just glorified document vaults. They actively integrate with your stack:

  • Automatically map controls across SOC 2, ISO 27001, GDPR, and more
  • Ingest real-time configuration data from AWS, GCP, Azure, GitHub, and Okta
  • Auto-generate audit evidence with metadata and logs
  • Detect misconfigurations — and in some cases, trigger remediation PRs
  • Maintain a living, customer-facing Trust Center
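
To give a flavour of what machine-collected evidence looks like, here is a rough sketch using boto3 to record the encryption-at-rest state of every S3 bucket as a timestamped evidence item. The control label and output format are illustrative only, not any particular platform’s schema.

"""Sketch: collect encryption-at-rest evidence for all S3 buckets via boto3.
Control label and evidence format are illustrative assumptions."""
import datetime
import json

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
evidence = []
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        enc = s3.get_bucket_encryption(Bucket=name)
        status, rules = "encrypted", enc["ServerSideEncryptionConfiguration"]["Rules"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            status, rules = "unencrypted", []
        else:
            raise
    evidence.append({
        "control": "encryption-at-rest",
        "resource": f"s3://{name}",
        "status": status,
        "rules": rules,
        "collected_at": datetime.datetime.utcnow().isoformat() + "Z",
    })

print(json.dumps(evidence, indent=2, default=str))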

One of our clients — a mid-stage SaaS company — reduced their audit prep from 11 weeks to 7 days. Why? They stopped relying on humans to track evidence and let their systems do the talking.

Had we done the same during our platform build, we’d have saved at least 40+ engineering hours — nearly a sprint. That’s not a hypothetical. That’s someone’s roadmap feature sacrificed to the compliance gods.

Engineering Isn’t the Problem. Bandwidth Is.

Your engineers aren’t opposed to security. They’re opposed to busywork.

They’d rather fix a real vulnerability than be asked to explain encryption-at-rest to an auditor using a screenshot from the AWS console. They’d rather write actual remediation code than generate PDF exports of Jira tickets and Git logs.

Compliance automation doesn’t replace your engineers — it amplifies them. With AI in the loop:

  • Infrastructure changes are logged and tagged for audit readiness
  • GitHub, Jira, Slack, and Confluence work as control evidence pipelines
  • Risk scoring adapts in real-time as your stack evolves

This isn’t a future trend. It’s happening now. And the companies already doing it are closing deals faster and moving on to build what’s next.

The Danger of Waiting — From an Implementer’s View

You don’t feel it yet — until your first enterprise prospect hits you with a security questionnaire. Or worse, they ghost you after asking, “Are you ISO certified?”

Without automation, here’s what the next few weeks look like:

  • You scrape offboarding logs from your HR system manually
  • You screenshot S3 config settings and paste them into a doc
  • You beg engineers to stop building features and start building compliance artefacts

You try to answer 190 questions that span encryption, vendor risk, data retention, MFA, monitoring, DR, and business continuity — and you do it reactively.

This isn’t security. This is compliance theatre.

Real security is baked into pipelines, not stitched onto decks. Real compliance is invisible until it’s needed. That’s the power of automation.

You Can’t Build Trust Later

If there’s one thing we’ve learned shipping compliance-ready infrastructure at startup speed, it’s this:

Your customers don’t care when you became compliant.
They care that you already were.

You wouldn’t dream of releasing code without CI/CD. So why are you still treating trust and compliance like an afterthought?

AI is not a luxury here. It’s a survival tool. The sooner you invest, the more it compounds:

  • Fewer security gaps
  • Faster audits
  • Cleaner infra
  • Shorter sales cycles
  • Happier engineers

Don’t build for the auditor. Build for the outcome — trust at scale.

What to Do Next:

  1. Audit your current posture: Ask your team how much of your compliance evidence is manual. If it’s more than 20%, you’re burning bandwidth.
  2. Pick your first integration: Start with GitHub or AWS. Plug in, let the system scan, and see what AI-powered control mapping looks like.
  3. Bring GRC and engineering into the same room: They’re solving the same problem — just speaking different languages. AI becomes the translator.
  4. Plan to show, not tell: Start preparing for a Trust Center page that actually connects to live control status. Don’t just tell customers you’re secure — show them.

Final Words

Waiting won’t make compliance easier. It’ll just make it costlier — in time, trust, and engineering sanity.

I’ve been on the implementation side. I’ve watched sprints evaporate into compliance debt. I’ve shipped a product at breakneck speed, only to get slowed down by a lack of visibility and control mapping. This is fixable. But only if you move now.

If Jason Lemkin’s AI Slow Roll was a warning for product velocity, then this is your warning for trust velocity.

AI in compliance isn’t a silver bullet. But it’s the only real chance you have to stay fast, stay secure, and stay in the game.

InfoSec’s Big Problem: Too Much Hope in One Cyber Database

The Myth of a Single Cyber Superpower: Why Global Infosec Can’t Rely on One Nation’s Database

What the collapse of MITRE’s CVE funding reveals about fragility, sovereignty, and the silent geopolitics of vulnerability management

I. The Day the Coordination Engine Stalled

On April 16, 2025, MITRE’s CVE program—arguably the most critical coordination layer in global vulnerability management—lost its federal funding.

There was no press conference, no coordinated transition plan, no handover to an international body. Just a memo, and silence. As someone who’s worked in information security for two decades, I should have been surprised. I wasn’t. We’ve long been building on foundations we neither control nor fully understand.

The CVE database isn’t just a spreadsheet of flaws. It is the lingua franca of cybersecurity. Without it, our systems don’t just become more vulnerable—they become incomparable.

II. From Backbone to Bottleneck

Since 1999, CVEs have given us a consistent, vendor-neutral way to identify and communicate about software vulnerabilities. Nearly every scanner, SBOM generator, security bulletin, bug bounty program, and regulatory framework references CVE IDs. The system enables prioritisation, automation, and coordinated disclosure.

But what happens when that language goes silent?

“We are flying blind in a threat-rich environment.”
Jen Easterly, former Director of CISA (2025)

That threat blindness is not hypothetical. The National Vulnerability Database (NVD)—which depends on MITRE for CVE enumeration—has a backlog exceeding 10,000 unanalysed vulnerabilities. Some tools have begun timing out or flagging stale data. Security orchestration systems misclassify vulnerabilities or ignore them entirely because the CVE ID was never issued.

This is not a minor workflow inconvenience. It’s a collapse in shared context, and it hits software supply chains the hardest.

III. Three Moves That Signalled Systemic Retreat

While many are treating the CVE shutdown as an isolated budget cut, it is in fact the third move in a larger geopolitical shift:

  • January 2025: The Cyber Safety Review Board (CSRB) was disbanded—eliminating the U.S.’s central post-incident review mechanism.
  • March 2025: Offensive cyber operations against Russia were paused by the U.S. Department of Defense, halting active containment of APTs like Fancy Bear and Gamaredon.
  • April 2025: MITRE’s CVE funding expired—effectively unplugging the vulnerability coordination layer trusted worldwide.

This is not a partisan critique. These decisions were made under a democratically elected government. But their global consequences are disproportionate. And this is the crux of the issue: when the world depends on a single nation for its digital immune system, even routine political shifts create existential risks.

IV. Global Dependency and the Quiet Cost of Centralisation

MITRE’s CVE system was always open, but never shared. It was funded domestically, operated unilaterally, and yet adopted globally.

That arrangement worked well—until it didn’t.

There is a word for this in international relations: asymmetry. In tech, we often call it technical debt. Whatever we name it, the result is the same: everyone built around a single point of failure they didn’t own or influence.

“Integrate various sources of threat intelligence in addition to the various software vulnerability/weakness databases.”
NSA, 2024

Even the NSA warned us not to over-index on CVE. But across industry, CVE/NVD remains hardcoded into compliance standards, vendor SLAs, and procurement language.

And as of this month, it’s… gone!

V. What Europe Sees That We Don’t Talk About

While the U.S. quietly pulled back, the European Union has been doing the opposite. Its Cyber Resilience Act (CRA) mandates that software vendors operating in the EU must maintain secure development practices, provide SBOMs, and handle vulnerability disclosures with rigour.

Unlike CVE, the CRA assumes no single vulnerability database will dominate. It emphasises process over platform, and mandates that organisations demonstrate control, not dependency.

This distinction matters.

If the CVE system was the shared fire alarm, the CRA is a fire drill—with decentralised protocols that work even if the main siren fails.

Europe, for all its bureaucratic delays, may have been right all along: resilience requires plurality.

VI. Lessons for the Infosec Community

At Zerberus, we anticipated this fracture. That’s why our ZSBOM™ platform was designed to pull vulnerability intelligence from multiple sources, including:

  • MITRE CVE/NVD (when available)
  • Google OSV
  • GitHub Security Advisories
  • Snyk and Sonatype databases
  • Internal threat feeds

This is not a plug; it’s a plea. Whether you use Zerberus or not, stop building your supply chain security around a single feed. Your tools, your teams, and your customers deserve more than monoculture.
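
The principle is easy to sketch: pull advisories from more than one feed and merge them on a canonical identifier, so losing any single source degrades coverage rather than erasing it. The two feed snippets below are made-up stand-ins for whatever sources you actually consume.

"""Sketch: merge advisories from multiple feeds and deduplicate on a canonical CVE identifier.
Feed contents are invented for illustration."""
from collections import defaultdict

feed_osv = [
    {"id": "GHSA-xxxx-yyyy", "aliases": ["CVE-2025-0001"], "package": "examplelib", "source": "osv"},
]
feed_vendor = [
    {"id": "CVE-2025-0001", "aliases": [], "package": "examplelib", "source": "vendor"},
    {"id": "CVE-2025-0002", "aliases": [], "package": "otherlib", "source": "vendor"},
]

def canonical_id(advisory: dict) -> str:
    """Prefer a CVE identifier when any source provides one, so records line up across feeds."""
    cves = [a for a in [advisory["id"], *advisory["aliases"]] if a.startswith("CVE-")]
    return cves[0] if cves else advisory["id"]

merged: dict[str, dict] = defaultdict(lambda: {"sources": set()})
for advisory in feed_osv + feed_vendor:
    key = canonical_id(advisory)
    merged[key]["package"] = advisory["package"]
    merged[key]["sources"].add(advisory["source"])

for key, info in merged.items():
    print(key, info["package"], sorted(info["sources"]))
# CVE-2025-0001 examplelib ['osv', 'vendor']
# CVE-2025-0002 otherlib ['vendor']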

VII. The Superpower Paradox

Here’s the uncomfortable truth:

When you’re the sole superpower, you don’t get to take a break.

The U.S. built the digital infrastructure the world relies on. CVE. DNS. NIST. Even the major cloud providers. But global dependency without shared governance leads to fragility.

And fragility, in cyberspace, gets exploited.

We must stop pretending that open-source equals open-governance, that centralisation equals efficiency, or that U.S. stability is guaranteed. The MITRE shutdown is not the end—but it should be a beginning.

A beginning of a post-unipolar cybersecurity infrastructure, where responsibility is distributed, resilience is engineered, and no single actor—however well-intentioned—is asked to carry the weight of the digital world.

References 

  1. Gatlan, S. (2025) ‘MITRE warns that funding for critical CVE program expires today’, BleepingComputer, 16 April. Available at: https://www.bleepingcomputer.com/news/security/mitre-warns-that-funding-for-critical-cve-program-expires-today/ (Accessed: 16 April 2025).
  2. Easterly, J. (2025) ‘Statement on CVE defunding’, Vocal Media, 15 April. Available at: https://vocal.media/theSwamp/jen-easterly-on-cve-defunding (Accessed: 16 April 2025).
  3. National Institute of Standards and Technology (NIST) (2025) NVD Dashboard. Available at: https://nvd.nist.gov/general/nvd-dashboard (Accessed: 16 April 2025).
  4. The White House (2021) Executive Order on Improving the Nation’s Cybersecurity, 12 May. Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/ (Accessed: 16 April 2025).
  5. U.S. National Security Agency (2024) Mitigating Software Supply Chain Risks. Available at: https://media.defense.gov/2024/Jan/30/2003370047/-1/-1/0/CSA-Mitigating-Software-Supply-Chain-Risks-2024.pdf (Accessed: 16 April 2025).
  6. European Commission (2023) Proposal for a Regulation on Cyber Resilience Act. Available at: https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act (Accessed: 16 April 2025).
Is Oracle Cloud Safe? Data Breach Allegations and What You Need to Do Now

A strange sense of déjà vu is sweeping through the cybersecurity community. A threat actor claims to have breached Oracle Cloud’s federated SSO infrastructure, making off with over 6 million records. Oracle, in response, says in no uncertain terms: nothing happened. No breach. No lost data. No story.

But is that the end of it? Not quite.

Security professionals have learned to sit up and listen when there’s smoke—especially if the fire might be buried under layers of PR denial and forensic ambiguity. One of the earliest signals came from CloudSEK, a threat intelligence firm known for early breach warnings. Its CEO, Rahul Sasi, called it out plainly on LinkedIn:

“6M Oracle cloud tenant data for sale affecting over 140k tenants. Probably the most critical hack of 2025.”

Rahul Sasi, CEO, CloudSEK

The post linked to CloudSEK’s detailed blog, laying out the threat actor’s claims and early indicators. What followed has been a storm of speculation, technical analysis, and the uneasy limbo that follows when truth hasn’t quite surfaced.

The Claims: 6 Million Records, High-Privilege Access

A threat actor using the alias rose87168 appeared on BreachForums, claiming they had breached Oracle Cloud’s SSO and LDAP servers via a vulnerability in the WebLogic interface (login.[region].oraclecloud.com). According to CloudSEK (2025) and BleepingComputer (Cimpanu, 2025), here’s what they say they stole:

  • Encrypted SSO passwords
  • Java Keystore (JKS) files
  • Enterprise Manager JPS keys
  • LDAP-related configuration data
  • Internal key files associated with Oracle Cloud tenants

They even uploaded a text file to an Oracle server as “proof”—a small act that caught the eye of researchers. The breach, they claim, occurred about 40 days ago. And now, they’re offering to remove a company’s data from their sale list—for a price, of course.

It’s the kind of extortion tactic we’ve seen grow more common: pay up, or your internal secrets become someone else’s leverage.

Oracle’s Denial: Clear, Strong, and Unyielding

Oracle has pushed back—hard. Speaking to BleepingComputer, the company stated:

“There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data.”

The message is crystal clear. But for some in the security world, perhaps too clear. The uploaded text file and detailed claim raised eyebrows. As one veteran put it, “If someone paints a map of your house’s wiring, even if they didn’t break in, you want to check the locks.”

The Uncomfortable Middle: Where Truth Often Lives

This is where things get murky. We’re left with questions that haven’t been answered:

  • How did the attacker upload a file to Oracle infrastructure?
  • Are the data samples real or stitched together from previous leaks?
  • Has Oracle engaged a third-party investigation?
  • Have any of the affected companies acknowledged a breach privately?

CloudSEK’s blog makes it clear that their findings rely on the attacker’s claims, not on validated internal evidence. Yet, when a threat actor provides partial proof—and others in the community corroborate small details—it becomes harder to simply dismiss the story.

Sometimes, truth emerges not from a single definitive statement, but from a pattern of smaller inconsistencies.

If It’s True: The Dominoes Could Be Serious

Let’s imagine the worst-case scenario for a moment. If the breach is real, here’s what’s at stake:

  • SSO Passwords and JKS Files: Could allow attackers to impersonate users and forge encrypted communications.
  • Enterprise Manager Keys: These could open backdoors into admin-level environments.
  • LDAP Info: A treasure trove for lateral movement within corporate networks.

When you’re dealing with cloud infrastructure used by over 140,000 tenants, even a tiny crack can ripple across ecosystems, affecting partners, vendors, and downstream customers.

And while we often talk about technical damage, it’s the reputational and compliance fallout that ends up costing more.

What Should Oracle Customers Do?

Until more clarity emerges, playing it safe is not just advisable—it’s essential.

  • Watch Oracle’s advisories and incident response reports
  • Review IAM logs and authentication anomalies from the past 45–60 days
  • Rotate keys, enforce MFA, audit third-party integrations
  • Enable enhanced threat monitoring for any Oracle Cloud-hosted applications
  • Coordinate internally on contingency planning—just in case this turns out to be real
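
For the log-review item in the list above, even a crude first pass helps: flag successful logins from source IPs a user has never used before in the window. The log format below is invented; map it onto whatever your IAM or SIEM actually exports.

"""Sketch: flag logins from previously unseen source IPs per user.
The event format is a hypothetical export, not a real IAM schema."""
from collections import defaultdict

login_events = [  # (timestamp, user, source_ip, success)
    ("2025-03-01T08:02:11Z", "alice", "203.0.113.10", True),
    ("2025-03-02T08:05:40Z", "alice", "203.0.113.10", True),
    ("2025-03-03T02:17:09Z", "alice", "198.51.100.77", True),   # new IP, odd hour
    ("2025-03-03T09:00:00Z", "bob", "203.0.113.22", False),
]

seen_ips: dict[str, set[str]] = defaultdict(set)
for ts, user, ip, success in sorted(login_events):
    first_time = ip not in seen_ips[user]
    seen_ips[user].add(ip)
    # Skip each user's very first IP, which only establishes the baseline.
    if first_time and success and seen_ips[user] != {ip}:
        print(f"Review: {user} logged in from previously unseen IP {ip} at {ts}")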

Security teams are already stretched thin. But this is a “better safe than sorry” situation.

Final Thoughts: Insecurity in a Time of Conflicting Truths

We may not have confirmation of a breach. But what we do have is plausibility, opportunity, and an attacker who seems to know just enough to make us pause.

Oracle’s stance is strong and confident. But confidence is not evidence. Until independent investigators, third-party customers, or a whistleblower emerges, the rest of us are left piecing the puzzle together from threat intel, subtle details, and professional instinct.

“While we cannot definitively confirm a breach at this time, the combination of the threat actor’s claims, the data samples, and the unanswered questions surrounding the incident suggest that Oracle Cloud users should remain vigilant and take proactive security measures.”

For now, the best thing the community can do is watch, verify, and prepare—until the truth becomes undeniable.

Trump and Cyber Security: Did He Make Us Safer From Russia?

U.S. Cyber Warfare Strategy Reassessed: The Risks of Ending Offensive Operations Against Russia

Introduction: A Cybersecurity Gamble or a Diplomatic Reset?

Imagine a world where cyber warfare is not just the premise of a Bond movie or an episode of Mission Impossible, but a tangible and strategic tool in global power struggles. For the past quarter-century, cyber warfare has been a key piece on the geopolitical chessboard, with nations engaging in a digital cold war—where security agencies and military forces participate in a cyber equivalent of Mutually Assured Destruction (GovInfoSecurity). From hoarding zero-day vulnerabilities to engineering precision-targeted malware like Stuxnet, offensive cyber operations have shaped modern defence strategies (Loyola University Chicago).

Now, in a significant shift, the incoming Trump administration has announced a halt to offensive cyber operations against Russia, redirecting its focus toward China and Iran—noticeably omitting North Korea (BBC News). This recalibration has sparked concerns over its long-term implications, including the cessation of military aid to Ukraine, disruptions in intelligence sharing, and the broader impact on global cybersecurity stability. Is this a calculated move towards diplomatic realignment, or does it create a strategic void that adversaries could exploit? This article critically examines the motivations behind the policy shift, its potential repercussions, and its implications within the frameworks of international relations, cybersecurity strategy, and global power dynamics.

1. Russian Cyber Warfare: A Persistent and Evolving Threat

1.1 Russia’s Strategic Cyber Playbook

Russia has seamlessly integrated cyber warfare into its broader military and intelligence strategy, leveraging it as an instrument of power projection. Their approach is built on three key pillars:

  • Persistent Engagement: Russian cyber doctrine emphasises continuous infiltration of adversary networks to gather intelligence and disrupt critical infrastructure (Huskaj, 2023).
  • Hybrid Warfare: Cyber operations are often combined with traditional military tactics, as seen in Ukraine and Georgia (Chichulin & Kopylov, 2024).
  • Psychological and Political Manipulation: The use of cyber disinformation campaigns has been instrumental in shaping political narratives globally (Rashid, Khan, & Azim, 2021).

1.2 Case Studies: The Russian Cyber Playbook in Action

Several high-profile attacks illustrate the sophistication of Russian cyber operations:

  • The SolarWinds Compromise (2020-2021): This breach, attributed to Russian intelligence, infiltrated multiple U.S. government agencies and Fortune 500 companies, highlighting vulnerabilities in software supply chains (Vaughan-Nichols, 2021).
  • Ukraine’s Power Grid Attacks (2015-2016): Russian hackers used malware such as BlackEnergy and Industroyer to disrupt Ukraine’s energy infrastructure, showcasing the potential for cyber-induced kinetic effects (Guchua & Zedelashvili, 2023).
  • Election Interference (2016 & 2020): Russian hacking groups Fancy Bear and Cozy Bear engaged in data breaches and disinformation campaigns, altering political dynamics in multiple democracies (Jamieson, 2018).

These attacks exemplify how cyber warfare has been weaponised as a tool of statecraft, reinforcing Russia’s broader geopolitical ambitions.

2. The Trump Administration’s Pivot: From Russia to China and Iran

2.1 Reframing the Cyber Threat Landscape

The administration’s new strategy became evident when Liesyl Franz, the U.S. Deputy Assistant Secretary for International Cybersecurity, conspicuously omitted Russia from a key United Nations briefing on cyber threats, instead highlighting concerns about China and Iran (The Guardian, 2025). This omission marked a clear departure from previous policies that identified Russian cyber operations as a primary national security threat.

Similarly, the Cybersecurity and Infrastructure Security Agency (CISA) has internally shifted resources toward countering Chinese cyber espionage and Iranian state-sponsored cyberattacks, despite ongoing threats from Russian groups (CNN, 2025). This strategic reprioritisation raises questions about the nature of cyber threats and whether the U.S. may be underestimating the persistent risk posed by Russian cyber actors.

2.2 The Suspension of Offensive Cyber Operations

Perhaps the most controversial decision in this policy shift is U.S. Defence Secretary Pete Hegseth’s directive to halt all offensive cyber operations against Russia (ABC News).

3. Policy Implications: Weighing the Perspectives

3.1 Statement of Facts

The decision to halt offensive cyber operations against Russia represents a significant shift in U.S. cybersecurity policy. The official rationale behind the move is a strategic pivot towards addressing cyber threats from China and Iran while reassessing the cyber engagement framework with Russia.

3.2 Perceived Detrimental Effects

Critics argue that reducing cyber engagement with Russia may embolden its intelligence agencies and cybercrime syndicates. The Cold War’s history demonstrates that strategic de-escalation, when perceived as a sign of weakness, can lead to increased adversarial aggression. For instance, the 1979 Soviet invasion of Afghanistan followed a period of perceived Western détente (GovInfoSecurity). Similarly, experts warn that easing cyber pressure on Russia may enable it to intensify hybrid warfare tactics, including disinformation campaigns and cyber-espionage.

3.3 Perceived Advantages

Proponents of the policy compare it to Boris Yeltsin’s 1994 decision to detarget Russian nuclear missiles from U.S. cities, which symbolised de-escalation without dismantlement (Greensboro News & Record). Advocates argue that this temporary halt on cyber operations against Russia could lay the groundwork for cyber diplomacy and agreements similar to Cold War-era arms control treaties, reducing the risk of uncontrolled cyber escalation.

3.4 Overall Analysis

The Trump administration’s policy shift represents a calculated risk. While it opens potential diplomatic pathways, it also carries inherent risks of creating a security vacuum. Drawing lessons from Cold War diplomacy, effective deterrence must balance engagement with strategic restraint. Whether this policy fosters improved international cyber norms or leads to unintended escalation will depend on future geopolitical developments and Russia’s response.



Disbanding the CSRB: A Mistake for National Security

Why Ending the CSRB Puts America at Risk

Imagine dismantling your fire department just because you haven’t had a major fire recently. That’s effectively what the Trump administration has done by disbanding the Cyber Safety Review Board (CSRB), a critical entity within the Cybersecurity and Infrastructure Security Agency (CISA). In an era of escalating cyber threats—ranging from ransomware targeting hospitals to sophisticated state-sponsored attacks—this decision is a catastrophic misstep for national security.

While countries across the globe are doubling down on cybersecurity investments, the United States has chosen to retreat from a proactive posture. The CSRB’s closure sends a dangerous message: that short-term political optics can override the long-term need for resilience in the face of digital threats.

The Role of the CSRB: A Beacon of Cybersecurity Leadership

Established to investigate and recommend strategies following major cyber incidents, the CSRB functioned as a hybrid think tank and task force, capable of cutting through red tape to deliver actionable insights. Its role extended beyond the public-facing reports; the board was deeply involved in guiding responses to sensitive, behind-the-scenes threats, ensuring that risks were mitigated before they escalated into crises.

The CSRB’s disbandment leaves a dangerous void in this ecosystem, weakening not only national defenses but also the trust between public and private entities.

CSRB: Championing Accountability and Reform

One of the CSRB’s most significant contributions was its ability to hold even the most powerful corporations accountable, driving reforms that prioritized security over profit. Its achievements are best understood through the lens of its high-profile investigations:

Key Milestones

  • The Log4j review (2022), which mapped the industry-wide response to one of the most widely exploited open-source vulnerabilities on record.
  • The Lapsus$ review (2023), which examined how a loosely organised extortion group repeatedly defeated identity controls at major firms.
  • The Microsoft Exchange Online intrusion review (2024), which concluded that the Storm-0558 compromise was preventable and pressed Microsoft to overhaul its security culture.

Why the CSRB’s Work Mattered

The CSRB’s ability to compel change from tech giants like Microsoft underscored its importance. Without such mechanisms, corporations are less likely to prioritise cybersecurity, leaving critical infrastructure vulnerable to attack. As cyber threats grow in complexity, dismantling accountability structures like the CSRB risks fostering an environment where profits take precedence over security—a dangerous proposition for national resilience.

Cybersecurity as Strategic Deterrence

To truly grasp the implications of the CSRB’s dissolution, one must consider the broader strategic value of cybersecurity. The European Leadership Network aptly draws parallels between cyber capabilities and nuclear deterrence. Both serve as powerful tools for preventing conflict, not through their use but through the strength of their existence.

By dismantling the CSRB, the U.S. has not only weakened its ability to deter cyber adversaries but also signalled a lack of commitment to proactive defence. This retreat emboldens adversaries, from state-linked actors such as the China-based Storm-0558 group to decentralised hacking collectives, and undermines the nation’s strategic posture.

Global Trends: A Stark Contrast

While the U.S. retreats, the rest of the world is surging ahead. Nations in the Indo-Pacific, as highlighted by the Royal United Services Institute, are investing heavily in cybersecurity to counter growing threats. India, Japan, and Australia are fostering regional collaborations to strengthen their collective resilience.

Similarly, the UK and continental Europe are prioritising cyber capabilities: the European Leadership Network, for instance, has argued that the UK should invest more in robust cyber defences and less in traditional nuclear deterrence. The EU’s Cybersecurity Strategy likewise exemplifies the importance of unified, cross-border approaches to digital security.

The U.S.’s decision to disband the CSRB stands in stark contrast to these efforts, risking not only its national security but also its leadership in global cybersecurity.

Isolationism’s Dangerous Consequences

This decision reflects a broader trend of isolationism within the Trump administration. Whether it’s withdrawing from the World Health Organization or sidelining international climate agreements, the U.S. has increasingly disengaged from global efforts. In cybersecurity, this isolationist approach is particularly perilous.

Global threats demand global solutions. Initiatives like the Five Eyes’ Secure Innovation program (Infosecurity Magazine) demonstrate the value of collaborative defence strategies. By withdrawing from structures like the CSRB, the U.S. not only risks alienating allies but also forfeits its role as a global leader in cybersecurity.

The Cost of Complacency

Cybersecurity is not a field that rewards complacency. As CSO Online warns, short-term thinking in this domain can lead to long-term vulnerabilities. The absence of the CSRB means fewer opportunities to learn from incidents, fewer recommendations for systemic improvements, and a diminished ability to adapt to evolving threats.

The cost of this decision will likely manifest in increased cyber incidents, weakened critical infrastructure, and a growing divide between the U.S. and its allies in terms of cybersecurity capabilities.

Conclusion

The disbanding of the CSRB is not just a bureaucratic reshuffle—it is a strategic blunder with far-reaching implications for national and global security. In an age where digital threats are as consequential as conventional warfare, dismantling a key pillar of cybersecurity leaves the United States exposed and isolated.

The CSRB’s legacy of transparency, accountability, and reform serves as a stark reminder of what’s at stake. Its dissolution not only weakens national defences but also risks emboldening adversaries and eroding trust among international partners. To safeguard its digital future, the U.S. must urgently rebuild mechanisms like the CSRB, reestablish its leadership in cybersecurity, and recommit to collaborative defence strategies.

References & Further Reading

  1. TechCrunch. (2025). Trump administration fires members of cybersecurity review board in horribly shortsighted decision.
  2. The Conversation. (2025). Trump has fired a major cybersecurity investigations body – it’s a risky move.
  3. TechDirt. (2025). Trump disbands cybersecurity board investigating massive Chinese phone system hack.
  4. European Leadership Network. (2024). Nuclear vs Cyber Deterrence: Why the UK Should Invest More in Its Cyber Capabilities and Less in Nuclear Deterrence.
  5. Royal United Services Institute. (2024). Cyber Capabilities in the Indo-Pacific: Shared Ambitions, Different Means.
  6. Infosecurity Magazine. (2024). Five Eyes Agencies Launch Startup Security Initiative.
  7. CSO Online. (2024). Project 2025 Could Escalate US Cybersecurity Risks, Endanger More Americans.