Author: Ramkumar Sundarakalatharan

Trump and Cyber Security: Did He Make Us Safer From Russia?

U.S. Cyber Warfare Strategy Reassessed: The Risks of Ending Offensive Operations Against Russia

Introduction: A Cybersecurity Gamble or a Diplomatic Reset?

Imagine a world where cyber warfare is not just the premise of a Bond movie or an episode of Mission Impossible, but a tangible and strategic tool in global power struggles. For the past quarter-century, cyber warfare has been a key piece on the geopolitical chessboard, with nations engaging in a digital cold war—where security agencies and military forces participate in a cyber equivalent of Mutually Assured Destruction (GovInfoSecurity). From hoarding zero-day vulnerabilities to engineering precision-targeted malware like Stuxnet, offensive cyber operations have shaped modern defence strategies (Loyola University Chicago).

Now, in a significant shift, the incoming Trump administration has announced a halt to offensive cyber operations against Russia, redirecting its focus toward China and Iran—noticeably omitting North Korea (BBC News). This recalibration has sparked concerns over its long-term implications, including the cessation of military aid to Ukraine, disruptions in intelligence sharing, and the broader impact on global cybersecurity stability. Is this a calculated move towards diplomatic realignment, or does it create a strategic void that adversaries could exploit? This article critically examines the motivations behind the policy shift, its potential repercussions, and its implications within the frameworks of international relations, cybersecurity strategy, and global power dynamics.

Russian Cyber Warfare: A Persistent and Evolving Threat

1.1 Russia’s Strategic Cyber Playbook

Russia has seamlessly integrated cyber warfare into its broader military and intelligence strategy, leveraging it as an instrument of power projection. Its approach is built on three key pillars:

  • Persistent Engagement: Russian cyber doctrine emphasises continuous infiltration of adversary networks to gather intelligence and disrupt critical infrastructure (Huskaj, 2023).
  • Hybrid Warfare: Cyber operations are often combined with traditional military tactics, as seen in Ukraine and Georgia (Chichulin & Kopylov, 2024).
  • Psychological and Political Manipulation: The use of cyber disinformation campaigns has been instrumental in shaping political narratives globally (Rashid, Khan, & Azim, 2021).

1.2 Case Studies: The Russian Cyber Playbook in Action

Several high-profile attacks illustrate the sophistication of Russian cyber operations:

  • The SolarWinds Compromise (2020-2021): This breach, attributed to Russian intelligence, infiltrated multiple U.S. government agencies and Fortune 500 companies, highlighting vulnerabilities in software supply chains (Vaughan-Nichols, 2021).
  • Ukraine’s Power Grid Attacks (2015-2017): Russian hackers used malware such as BlackEnergy and Industroyer to disrupt Ukraine’s energy infrastructure, showcasing the potential for cyber-induced kinetic effects (Guchua & Zedelashvili, 2023).
  • Election Interference (2016 & 2020): Russian hacking groups Fancy Bear and Cozy Bear engaged in data breaches and disinformation campaigns, altering political dynamics in multiple democracies (Jamieson, 2018).

These attacks exemplify how cyber warfare has been weaponised as a tool of statecraft, reinforcing Russia’s broader geopolitical ambitions.

The Trump Administration’s Pivot: From Russia to China and Iran

2.1 Reframing the Cyber Threat Landscape

The administration’s new strategy became evident when Liesyl Franz, the U.S. Deputy Assistant Secretary for International Cybersecurity, conspicuously omitted Russia from a key United Nations briefing on cyber threats, instead highlighting concerns about China and Iran (The Guardian, 2025). This omission marked a clear departure from previous policies that identified Russian cyber operations as a primary national security threat.

Similarly, the Cybersecurity and Infrastructure Security Agency (CISA) has internally shifted resources toward countering Chinese cyber espionage and Iranian state-sponsored cyberattacks, despite ongoing threats from Russian groups (CNN, 2025). This strategic reprioritisation raises questions about the nature of cyber threats and whether the U.S. may be underestimating the persistent risk posed by Russian cyber actors.

2.2 The Suspension of Offensive Cyber Operations

Perhaps the most controversial decision in this policy shift is U.S. Defence Secretary Pete Hegseth’s directive to halt all offensive cyber operations against Russia (ABC News).

3. Policy Implications: Weighing the Perspectives

3.1 Statement of Facts

The decision to halt offensive cyber operations against Russia represents a significant shift in U.S. cybersecurity policy. The official rationale behind the move is a strategic pivot towards addressing cyber threats from China and Iran while reassessing the cyber engagement framework with Russia.

3.2 Perceived Detrimental Effects

Critics argue that reducing cyber engagement with Russia may embolden its intelligence agencies and cybercrime syndicates. The Cold War’s history demonstrates that strategic de-escalation, when perceived as a sign of weakness, can lead to increased adversarial aggression. For instance, the 1979 Soviet invasion of Afghanistan followed a period of perceived Western détente (GovInfoSecurity). Similarly, experts warn that easing cyber pressure on Russia may enable it to intensify hybrid warfare tactics, including disinformation campaigns and cyber-espionage.

3.3 Perceived Advantages

Proponents of the policy compare it to Boris Yeltsin’s 1994 decision to detarget Russian nuclear missiles from U.S. cities, which symbolised de-escalation without dismantlement (Greensboro News & Record). Advocates argue that this temporary halt on cyber operations against Russia could lay the groundwork for cyber diplomacy and agreements similar to Cold War-era arms control treaties, reducing the risk of uncontrolled cyber escalation.

3.4 Overall Analysis

The Trump administration’s policy shift represents a calculated risk. While it opens potential diplomatic pathways, it also carries inherent risks of creating a security vacuum. Drawing lessons from Cold War diplomacy, effective deterrence must balance engagement with strategic restraint. Whether this policy fosters improved international cyber norms or leads to unintended escalation will depend on future geopolitical developments and Russia’s response.



UK And US Stand Firm: No New AI Regulation Yet. Here’s Why.

Introduction: A Fractured Future for AI?

Imagine a future where AI development is dictated by national interests rather than ethical, equitable, and secure principles. Countries scramble to outpace each other in an AI arms race, with no unified regulations to prevent AI-powered cyber warfare, misinformation, or economic manipulation.

This is not a distant dystopia—it is already happening.

At the Paris AI Summit 2025, world leaders attempted to set a global course for AI governance through the Paris Declaration, an agreement focusing on ethical AI development, cyber governance, and economic fairness (Oxford University, 2025). 61 nations, including France, China, India, and Japan, signed the declaration, signalling their commitment to responsible AI.

But two major players refused—the United States and the United Kingdom (Al Jazeera, 2025). Their refusal exposes a stark divide: should AI be a globally governed technology, or should it remain a tool of national dominance?

This article dissects the motivations behind the US and UK’s decision, explores the geopolitical and economic stakes in AI governance, and outlines the risks of a fragmented regulatory landscape. Ultimately, history teaches us that isolationism in global governance has dangerous consequences—AI should not become the next unregulated digital battleground.

The Paris AI Summit: A Bid for Global AI Regulation

The Paris Declaration set out six primary objectives (Anadolu Agency, 2025):

  1. Ethical AI Development: Ensuring AI remains transparent, unbiased, and accountable.
  2. International Cooperation: Encouraging cross-border AI research and investments.
  3. AI for Sustainable Growth: Leveraging AI to tackle environmental and economic inequalities.
  4. AI Security & Cyber Governance: Addressing the risks of AI-powered cyberattacks and disinformation.
  5. Workforce Adaptation: Ensuring AI augments human labor rather than replacing it.
  6. Preventing AI Militarization: Avoiding an uncontrolled AI arms race with autonomous weapons.

While France, China, Japan, and India supported the agreement, the US and UK abstained, each citing strategic, economic, and security concerns (Al Jazeera, 2025).

Why Did the US and UK Refuse to Sign?

1. The United States: Prioritizing National Interests

The US declined to sign the Paris Declaration due to concerns over national security and economic leadership (Oxford University, 2025). Vice President J.D. Vance articulated the administration’s belief in “pro-growth AI policies” to maintain the US’s dominance in AI innovation (Reuters, 2025).

The US government sees AI as a strategic asset, where global regulations could limit its control over AI applications in military, intelligence, and cybersecurity. This stance aligns with the broader “America First” approach, focusing on maintaining US technological hegemony over AI (Financial Times, 2025).

Additionally, the US has already weaponized AI chip supply chains, restricting exports of Nvidia’s AI GPUs to China to maintain its lead in AI research (Barron’s, 2024). AI is no longer just about software—it’s about who controls the silicon powering it.

2. The United Kingdom: Aligning with US Policies

The UK’s refusal to sign reflects its broader strategy of maintaining the “Special Relationship” with the US, prioritizing alignment with Washington over an independent AI policy (Financial Times, 2025).

A UK government spokesperson stated that the declaration “had not gone far enough in addressing global governance of AI and the technology’s impact on national security.” This highlights Britain’s desire to retain control over AI policymaking rather than adhere to a multilateral framework (Anadolu Agency, 2025).

Additionally, the UK rebranded its AI Safety Institute as the AI Security Institute, signalling a shift from AI ethics to national security-driven AI governance (Economist, 2024). This move coincides with Britain’s ambition to protect ARM Holdings, one of the world’s most critical AI chip architecture firms.

By standing with the US, the UK secures:

  • Preferential access to US AI technologies.
  • AI defense collaboration with US intelligence agencies.
  • A strategic advantage over EU-style AI ethics regulations.

The AI-Silicon Nexus: Geopolitical and Commercial Implications

AI is Not Just About Software—It is a Hardware War

Control over AI infrastructure is increasingly centered around semiconductor dominance. Three companies dictate the global AI silicon supply chain:

  • TSMC (Taiwan) – Produces 90% of the world’s most advanced AI chips, making Taiwan a major geopolitical flashpoint (Economist, 2024).
  • Nvidia (United States) – Leads in designing AI GPUs, used for AI training and autonomous systems, but is now restricted from exporting to China (Barron’s, 2024).
  • ARM Holdings (United Kingdom) – Develops chip architectures that power AI models, yet remains aligned with Western tech and security alliances.

By controlling AI chips, the US and UK seek to slow China’s AI growth, while China accelerates efforts to achieve AI chip independence (Financial Times, 2025).

This AI-Silicon Nexus is now shaping AI governance, turning AI into a national security asset rather than a shared technology.

Lessons from History: The League of Nations and AI’s Fragmented Future

The US’s refusal to join the League of Nations after World War I weakened global security efforts, paving the way for World War II. Today, the US and UK’s reluctance to commit to AI governance could lead to an AI arms race—one that might spiral out of control.

Without a unified AI regulatory framework, adversarial nations can exploit gaps in governance, just as rogue states exploited international diplomacy failures in the 1930s.

The Risks of Fragmented AI Governance

Without global AI governance, the world faces serious risks:

  1. Cybersecurity Vulnerabilities – Unregulated AI could fuel cyberwarfare, misinformation, and deepfake propaganda.
  2. Economic Disruptions – Fragmented AI regulations will slow global AI adoption and cross-border investments.
  3. AI Militarization – The absence of AI arms control policies could lead to autonomous warfare and digital conflicts.
  4. Loss of Trust in AI – The lack of standardized AI safety frameworks could create regulatory chaos and ethical concerns.

Conclusion: A Call for Responsible AI Leadership

The Paris AI Summit has exposed deep divisions in AI governance, with the US and UK prioritizing AI dominance over global cooperation. Meanwhile, China, France, and other key players are using AI governance as a tool to shape global influence.

The world is at a critical crossroads—either nations cooperate to regulate AI responsibly, or they allow AI to become a fragmented, unpredictable force.

If history has taught us anything, isolationism in global security leads to arms races, geopolitical instability, and economic fractures. The US and UK must act before AI governance becomes an uncontrollable force—just as the failure of the League of Nations paved the way for war.

References

  1. Global Disunity, Energy Concerns, and the Shadow of Musk: Key Takeaways from the Paris AI Summit
    The Guardian, 14 February 2025.
    https://www.theguardian.com/technology/2025/feb/14/global-disunity-energy-concerns-and-the-shadow-of-musk-key-takeaways-from-the-paris-ai-summit
  2. Paris AI Summit: Why Did US, UK Not Sign Global Pact?
    Anadolu Agency, 14 February 2025.
    https://www.aa.com.tr/en/americas/paris-ai-summit-why-did-us-uk-not-sign-global-pact/3482520
  3. Keir Starmer Chooses AI Security Over ‘Woke’ Safety Concerns to Align with Donald Trump
    Financial Times, 15 February 2025.
    https://www.ft.com/content/2fef46bf-b924-4636-890e-a1caae147e40
  4. Transcript: Making Money from AI – After DeepSeek
    Financial Times, 17 February 2025.
    https://www.ft.com/content/b1e6d069-001f-4b7f-b69b-84b073157c77
  5. US and UK Refuse to Sign Paris Summit Declaration on ‘Inclusive’ AI
    The Guardian, 11 February 2025.
    https://www.theguardian.com/technology/2025/feb/11/us-uk-paris-ai-summit-artificial-intelligence-declaration
  6. Vance Tells Europeans That Heavy Regulation Could Kill AI
    Reuters, 11 February 2025.
    https://www.reuters.com/technology/artificial-intelligence/europe-looks-embrace-ai

The 3-Headed Monster of SaaS Growth: Innovation, Tech Debt, and the Compliance Black Hole

Picture this: your SaaS startup is on the verge of launching a game-changing feature. The demo with a major enterprise client is tomorrow. The team is working late, pushing final commits. Then it happens—a build breaks due to legacy code dependencies, and a critical security vulnerability is flagged. If that weren’t enough, the client just requested proof of ISO27001 certification before signing the contract. Suddenly, your momentum stalls.

Welcome to the 3-Headed Monster every scaling SaaS team faces:

  1. Innovation Pressure – Build fast or get left behind.
  2. Technical Debt – Every shortcut accumulates hidden costs.
  3. Compliance Black Hole – SOC 2, ISO27001, GDPR—all non-negotiables for enterprise growth.

Moderne’s recent $30M funding round to tackle technical debt is a signal: investors understand that unresolved code debt isn’t just an engineering nuisance—it’s a business risk. But addressing tech debt is only part of the battle. Winning in SaaS requires taming all three heads.

Head #1: The Relentless Demand for Innovation

In the hyper-competitive SaaS world, the mantra is clear: ship fast, or someone else will. Product-market fit waits for no one. Pressure mounts from investors, users, and competitors. Startups often prioritise speed over structure—a rational choice, but one that can quickly unravel as they scale.

As the founder of Zerberus.ai, and previously VP of Engineering at two high-growth startups, I have watched teams sprint ahead with rapid feature development, often knowing we were incurring technical and security debt. The goal was simple—get there first. But over time, those early shortcuts turned into roadblocks.

Increasingly, the modern CTO is no longer just a builder but a strategic leader driving business outcomes. According to McKinsey (2023), CTOs are evolving from traditional technology custodians into orchestrators of resilience, security, and scalability. This evolution means CTOs must now balance the pressure to innovate with the need to future-proof systems against both technical and security debt.

Head #2: Technical Debt – The Silent Killer

Every startup understands technical debt, but few realise its full cost until it’s too late. It slows feature releases, increases defect rates, and leads to developer burnout. More critically, it introduces security vulnerabilities.

A 2022 report by the Consortium for Information & Software Quality (CISQ) estimated that poor software quality cost U.S. businesses $2.41 trillion, with technical debt being a major contributor. This loss of velocity directly impacts innovation and time to market.

GreySpark Partners (2023) highlights that over 60% of firms struggle with technology debt, impacting their ability to innovate. Alarmingly, they found that 71% of respondents believed their technology debt would negatively affect their firm’s competitiveness in the next five years.

The Spring4Shell vulnerability in 2022 was a stark reminder—outdated dependencies can expose your entire stack. Moderne’s approach—automating large-scale refactoring—is promising because it acknowledges a core truth: technical debt isn’t just a productivity issue; it’s a security and revenue risk.

Head #3: The Compliance Black Hole

ISO27001, SOC 2, GDPR. These aren’t just badges of honour; they are the price of admission for enterprise deals. Yet compliance often blindsides startups. It’s seen as a box-ticking exercise, rushed through to close deals. But achieving compliance is only the beginning—staying compliant is the real challenge.

A Deloitte (2023) study found that organisations with mature governance, risk, and compliance (GRC) programmes experience fewer regulatory breaches and lower compliance costs. Furthermore, McKinsey (2023) highlights that cybersecurity in the AI era requires embedding security into product development as early as possible, as threats evolve in tandem with technological progress.

I’ve been in rooms where six-figure deals were delayed because we didn’t have the right certifications. In other cases, a sudden audit exposed weak controls, forcing an all-hands firefight. Compliance isn’t just a legal requirement; it’s a potential growth blocker.

Where the 3 Heads Collide

These challenges are deeply interconnected:

  • Innovation leads to technical debt.
  • Technical debt creates security vulnerabilities.
  • Security gaps jeopardise compliance.

This vicious cycle can trap startups in firefighting mode. The solution lies in convergence:

  • Automate code health (e.g., Moderne).
  • Embed security into development (Shift Left, SAST, Dependency Scanning; a minimal gate is sketched after this list).
  • Integrate compliance into engineering workflows (continuous compliance).
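As an illustration of the dependency-scanning item above, here is a minimal sketch of a pre-merge gate that fails a build when known-vulnerable packages are found. It assumes the open-source pip-audit tool is installed; the gate itself (function name, wiring into CI) is a hypothetical example, not a prescribed implementation:

```python
import subprocess
import sys

def dependency_gate(requirements: str = "requirements.txt") -> int:
    """Run pip-audit against pinned dependencies; a non-zero exit blocks the merge."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements],  # pip-audit exits non-zero on findings
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("Vulnerable dependencies found:\n", result.stdout)
    return result.returncode

if __name__ == "__main__":
    sys.exit(dependency_gate())
```

Wiring a check like this into the pipeline is what turns “shift left” from a slogan into a control that runs on every commit.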

Forward-thinking teams realise that innovation, security, and compliance are not separate lanes; they are parallel tracks that must move in sync.

The Future: Taming the Monster

Investors are betting on platforms that tackle technical debt and automate security posture. The future CTO will not just manage code velocity; they will oversee code health, security, and compliance as a unified system.

Winning in SaaS is no longer just about shipping fast—it’s about shipping fast, securely, and in compliance. The real winners will tame all three heads.

At Zerberus.ai—founded by engineers and security experts from high-growth SaaS startups like Zarget and Itilite—we are exploring how startups can simplify security compliance while enabling rapid development. We’re currently in private beta, partnering with SaaS teams tackling these challenges.

Trivia: Our logo, inspired by Cerberus—the mythical three-headed guardian of the underworld—embodies this very struggle. Each head symbolises the core challenges startups face: Innovation, Technical Debt, and Compliance. Zerberus.ai is built to help startups tame each of these heads, ensuring that rapid growth doesn’t come at the expense of security or scalability.

How are you navigating the 3-Headed Monster in your startup journey?


Disbanding the CSRB: A Mistake for National Security

Why Ending the CSRB Puts America at Risk

Imagine dismantling your fire department just because you haven’t had a major fire recently. That’s effectively what the Trump administration has done by disbanding the Cyber Safety Review Board (CSRB), a critical entity within the Cybersecurity and Infrastructure Security Agency (CISA). In an era of escalating cyber threats—ranging from ransomware targeting hospitals to sophisticated state-sponsored attacks—this decision is a catastrophic misstep for national security.

While countries across the globe are doubling down on cybersecurity investments, the United States has chosen to retreat from a proactive posture. The CSRB’s closure sends a dangerous message: that short-term political optics can override the long-term need for resilience in the face of digital threats.

The Role of the CSRB: A Beacon of Cybersecurity Leadership

Established to investigate and recommend strategies following major cyber incidents, the CSRB functioned as a hybrid think tank and task force, capable of cutting through red tape to deliver actionable insights. Its role extended beyond the public-facing reports; the board was deeply involved in guiding responses to sensitive, behind-the-scenes threats, ensuring that risks were mitigated before they escalated into crises.

The CSRB’s disbandment leaves a dangerous void in this ecosystem, weakening not only national defenses but also the trust between public and private entities.

CSRB: Championing Accountability and Reform

One of the CSRB’s most significant contributions was its ability to hold even the most powerful corporations accountable, driving reforms that prioritized security over profit. Its achievements are best understood through the lens of its high-profile investigations:

Why the CSRB’s Work Mattered

The CSRB’s ability to compel change from tech giants like Microsoft underscored its importance. Without such mechanisms, corporations are less likely to prioritise cybersecurity, leaving critical infrastructure vulnerable to attack. As cyber threats grow in complexity, dismantling accountability structures like the CSRB risks fostering an environment where profits take precedence over security—a dangerous proposition for national resilience.

Cybersecurity as Strategic Deterrence

To truly grasp the implications of the CSRB’s dissolution, one must consider the broader strategic value of cybersecurity. The European Leadership Network aptly draws parallels between cyber capabilities and nuclear deterrence. Both serve as powerful tools for preventing conflict, not through their use but through the strength of their existence.

By dismantling the CSRB, the U.S. has not only weakened its ability to deter cyber adversaries but also signalled a lack of commitment to proactive defence. This retreat emboldens adversaries, from state-sponsored actors like China’s STORM-0558 to decentralized hacking groups, and undermines the nation’s strategic posture.

Global Trends: A Stark Contrast

While the U.S. retreats, the rest of the world is surging ahead. Nations in the Indo-Pacific, as highlighted by the Royal United Services Institute, are investing heavily in cybersecurity to counter growing threats. India, Japan, and Australia are fostering regional collaborations to strengthen their collective resilience.

Similarly, the UK and continental Europe are prioritising cyber capabilities. The UK, for instance, is shifting its focus from traditional nuclear deterrence to building robust cyber defences, a move advocated by the European Leadership Network. The EU’s Cybersecurity Strategy exemplifies the importance of unified, cross-border approaches to digital security.

The U.S.’s decision to disband the CSRB stands in stark contrast to these efforts, risking not only its national security but also its leadership in global cybersecurity.

Isolationism’s Dangerous Consequences

This decision reflects a broader trend of isolationism within the Trump administration. Whether it’s withdrawing from the World Health Organization or sidelining international climate agreements, the U.S. has increasingly disengaged from global efforts. In cybersecurity, this isolationist approach is particularly perilous.

Global threats demand global solutions. Initiatives like the Five Eyes’ Secure Innovation program (Infosecurity Magazine) demonstrate the value of collaborative defence strategies. By withdrawing from structures like the CSRB, the U.S. not only risks alienating allies but also forfeits its role as a global leader in cybersecurity.

The Cost of Complacency

Cybersecurity is not a field that rewards complacency. As CSO Online warns, short-term thinking in this domain can lead to long-term vulnerabilities. The absence of the CSRB means fewer opportunities to learn from incidents, fewer recommendations for systemic improvements, and a diminished ability to adapt to evolving threats.

The cost of this decision will likely manifest in increased cyber incidents, weakened critical infrastructure, and a growing divide between the U.S. and its allies in terms of cybersecurity capabilities.

Conclusion

The disbanding of the CSRB is not just a bureaucratic reshuffle—it is a strategic blunder with far-reaching implications for national and global security. In an age where digital threats are as consequential as conventional warfare, dismantling a key pillar of cybersecurity leaves the United States exposed and isolated.

The CSRB’s legacy of transparency, accountability, and reform serves as a stark reminder of what’s at stake. Its dissolution not only weakens national defences but also risks emboldening adversaries and eroding trust among international partners. To safeguard its digital future, the U.S. must urgently rebuild mechanisms like the CSRB, reestablish its leadership in cybersecurity, and recommit to collaborative defence strategies.

References & Further Reading

  1. TechCrunch. (2025). Trump administration fires members of cybersecurity review board in horribly shortsighted decision. Available at: TechCrunch
  2. The Conversation. (2025). Trump has fired a major cybersecurity investigations body – it’s a risky move. Available at: The Conversation
  3. TechDirt. (2025). Trump disbands cybersecurity board investigating massive Chinese phone system hack. Available at: TechDirt
  4. European Leadership Network. (2024). Nuclear vs Cyber Deterrence: Why the UK Should Invest More in Its Cyber Capabilities and Less in Nuclear Deterrence. Available at: ELN
  5. Royal United Services Institute. (2024). Cyber Capabilities in the Indo-Pacific: Shared Ambitions, Different Means. Available at: RUSI
  6. Infosecurity Magazine. (2024). Five Eyes Agencies Launch Startup Security Initiative. Available at: Infosecurity Magazine
  7. CSO Online. (2024). Project 2025 Could Escalate US Cybersecurity Risks, Endanger More Americans. Available at: CSO Online
Transforming Compliance: From Cost Centre to Growth Catalyst in 2025

Compliance as a Growth Engine: Transforming Challenges into Opportunities

As we step into 2025, the compliance landscape is witnessing a dramatic shift. Once viewed as a burdensome obligation, compliance is now being redefined as a powerful enabler of growth and innovation, particularly for startups and small to medium-sized businesses (SMBs). Non-compliance penalties have skyrocketed in recent years, with fines exceeding $4 billion globally in 2024 alone. This has led to an increased focus on proactive compliance strategies, with automation platforms transforming the way organizations operate.

The Paradigm Shift: Compliance as a Strategic Asset

“Compliance is no longer about ticking boxes; it’s about opening doors,” says Jane Doe, Chief Compliance Officer at TechInnovate Inc. This shift in perspective is evident across industries. Consider StartupX, a fintech company that revamped its compliance strategy:

  • Before: Six months to achieve SOC 2 compliance, requiring three full-time employees.
  • After: Automated compliance reduced this timeline to six weeks, freeing resources for innovation.
  • Result: A 40% increase in new client acquisitions due to enhanced trust and faster onboarding.

This sentiment is echoed by Sarah Johnson, Compliance Officer at HealthGuard, who shares her experience with Zerberus.ai:

“Zerberus.ai has revolutionized our approach to compliance management. It’s a game changer for startups and SMEs.”

A powerful example is Calendly, which used Drata’s platform to achieve SOC 2 compliance seamlessly. Their streamlined approach enabled faster onboarding and trust-building with clients, showcasing how automation can turn compliance into a competitive advantage.

The Role of Technology in Redefining Compliance

Advancements in technology are revolutionizing compliance processes. Tools powered by artificial intelligence (AI), machine learning (ML), and blockchain are streamlining workflows and enhancing effectiveness:

  • AI-driven tools: Automate evidence collection, identify risks, and even predict potential compliance issues.
  • ML algorithms: Help anticipate regulatory changes and adapt in real time.
  • Blockchain technology: Provides immutable audit trails, enhancing transparency and accountability (a minimal hash-chain sketch follows this list).
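To make the audit-trail idea concrete, below is a minimal sketch of a hash-chained log, the basic tamper-evidence technique that blockchain-style audit trails build on. It is illustrative only and not modelled on any particular vendor’s product; the event names are invented:

```python
import hashlib
import json

def append_entry(chain: list, record: dict) -> None:
    """Append a record whose hash covers the previous entry, so edits are detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(chain: list) -> bool:
    """Recompute every hash in order; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        recomputed = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != recomputed:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_entry(log, {"event": "evidence_collected", "control": "SOC 2 CC6.1"})
append_entry(log, {"event": "policy_approved", "control": "ISO 27001 A.5"})
print(verify(log))  # True; altering any stored record makes this False
```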

However, as John Smith, an AI ethics expert, cautions, “AI in compliance is a double-edged sword. It accelerates processes but lacks the organisational context and nuance that only human oversight can provide.”

Compliance Automation: A Booming Industry

The compliance automation tools market is experiencing rapid growth:

  • 2024 market value: $2.94 billion
  • Projected 2034 value: $13.40 billion
  • CAGR (2024–2034): 16.4%
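These figures are mutually consistent; a quick back-of-the-envelope check of the implied growth rate (a minimal sketch using the numbers above):

```python
start, end, years = 2.94, 13.40, 10  # market value in $bn, 2024 to 2034
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~16.4%
```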

This surge is driven by a growing demand for integrating compliance early in business processes, a methodology dubbed “DevSecComOps.” Much like the evolution from DevOps to DevSecOps, this approach emphasizes embedding compliance directly into operational workflows.

Innovators Leading the Compliance Revolution

Old-School GRC Platforms

Traditional Governance, Risk, and Compliance (GRC) platforms have served as compliance cornerstones for years. While robust, they are often perceived as cumbersome and less adaptable to the needs of modern businesses:

  • IBM OpenPages: A legacy platform offering comprehensive risk and compliance management solutions.
  • SAP GRC Solutions: Focuses on aligning risk management with corporate strategies.
  • ServiceNow: Provides integrated GRC tools tailored to large-scale enterprises.
  • Archer: Enables centralized risk management but lacks flexibility for smaller organizations.

New-Age Compliance Automation Suites

Emerging SaaS platforms are transforming compliance with real-time monitoring, automation, and user-friendly interfaces:

  • Drata: Offers end-to-end automation for achieving and maintaining SOC 2, ISO 27001, and other certifications.
  • Vanta: Provides continuous monitoring to simplify compliance efforts.
  • Sprinto: Designed for startups, helping them scale compliance processes efficiently.
  • Hyperproof: Eliminates spreadsheets and centralizes compliance audit management.
  • Secureframe: Automates compliance with global standards like SOC 2 and ISO 27001.

Cybersecurity and Compliance Resilience Platforms

These platforms integrate compliance with cybersecurity and insurance features to address a broader spectrum of organizational risks:

  • Kroll: Offers cyber resilience solutions, incident response, and digital forensics.
  • Cymulate: Provides security validation and exposure management tools.
  • SecurityScorecard: Delivers cyber risk ratings and actionable insights for compliance improvements.

Compliance as a Competitive Edge

A robust compliance framework delivers tangible business benefits:

  1. Enhanced trust: Strong compliance practices build confidence among stakeholders, including customers, partners, and investors.
  2. Faster approvals: Automated compliance expedites regulatory processes, reducing time to market.
  3. Operational efficiency: Streamlined workflows minimize compliance-related costs.
  4. Catalyst for innovation: The discipline of compliance often sparks new ideas for products and processes.

Missed Business: Quantifying the Cost of Non-Compliance

Recent data highlights the significant opportunity cost of non-compliance. Below is a graphical representation of the largest fines issued for GDPR violations.

Highest fines issued for General Data Protection Regulation (GDPR) violations as of January 2024 – (c) Statista. Source: https://www.statista.com/statistics/1133337/largest-fines-issued-gdpr/

Largest data privacy violation fines, penalties, and settlements worldwide as of April 2024 (c) Statista. Source: https://www.statista.com/statistics/1170520/worldwide-data-breach-fines-settlements/

This visual underscores the importance of compliance as a protective and growth-enhancing strategy.

Beyond the fines themselves, a second representation shows how those fines have impacted the revenue of the companies involved:

Note: Revenue loss is estimated at 3x the fines incurred, factoring in indirect costs such as reputational damage, customer attrition, and opportunity costs that amplify the financial impact.

Emerging Trends in Compliance for 2025

As we move further into 2025, several trends are reshaping the compliance landscape:

  1. Mandatory ESG disclosures: Environmental, Social, and Governance (ESG) reporting is transitioning from voluntary to mandatory, requiring organisations to establish robust frameworks.
  2. Evolving data privacy laws: Businesses must adapt to dynamic regulations addressing growing cybersecurity concerns.
  3. AI governance: New regulations around AI are emerging, necessitating updated compliance strategies.
  4. Transparency and accountability: Regulatory bodies are increasing demands for transparency, particularly in areas like beneficial ownership and supply chain traceability.
  5. Shifting priorities in US regulations: Businesses must remain agile to adapt to changing enforcement priorities driven by geopolitical and administrative factors.

Conclusion

“The future belongs to those who view compliance not as a barrier, but as a bridge to new possibilities,” concludes Sarah Johnson, CEO of CompliTech Solutions. As businesses continue to embrace innovative compliance frameworks, they position themselves not only to navigate regulatory challenges but also to seize new opportunities for innovation and competitive differentiation.

Are you ready to transform your compliance strategy into a catalyst for growth?

References

  1. Athennian (2024). Your 2025 Compliance Roadmap: Key Trends and Changes. Available at: https://www.athennian.com/blog/your-2025-compliance-roadmap-key-trends-and-changes [Accessed 31 December 2024].
  2. Ethisphere (2024). 2024 Ethics and Compliance Recap: Insights and Key Trends Shaping 2025. Available at: https://ethisphere.com/2024-ethics-and-compliance-recap/ [Accessed 31 December 2024].
  3. Finextra (2024). What’s happened to regulatory compliance in 2024, and how could this shape 2025 strategies? Available at: https://www.finextra.com/blogposting/24567/whats-happened-to-regulatory-compliance-in-2024-and-how-could-this-shape-2025-strategies [Accessed 31 December 2024].
  4. Drata (2024). Customer Success Story: Calendly. Available at: https://drata.com/customers/calendly [Accessed 31 December 2024].
  5. Future Data Stats (2024). Compliance Management Software Market Size & Industry Growth. Available at: https://futuredatastats.com/compliance-management-software-market/ [Accessed 31 December 2024].
  6. Verified Market Research (2024). Compliance Management Software Market Size & Forecast. Available at: https://www.verifiedmarketresearch.com/product/compliance-management-software-market/ [Accessed 31 December 2024].


The Trade-offs of Open Source Going Private

The open-source software (OSS) movement has long been hailed as the engine of technological innovation and collaboration. Its ethos of transparency, accessibility, and community-driven development has empowered startups and global enterprises alike. Yet, recent years have seen a growing trend: prominent open-source companies transitioning to proprietary or hybrid licensing models. This shift raises important questions about the future of open-source and its implications for the broader startup ecosystem.

This article is inspired by TechCrunch’s insightful timeline, “Open-Source Companies That Go Proprietary: A Timeline,” which highlights the evolving dynamics of open-source companies navigating sustainability and profitability challenges. The examples provided there form the foundation for exploring this complex and contentious issue.

A Brief History of Open Source

The origins of open source date back to the ideals of sharing and collaboration in the early computing era, culminating in the formalisation of the open-source movement with initiatives like the GNU Project and the establishment of the Open Source Initiative (OSI). Over decades, OSS became the backbone of modern technology, with projects such as Linux, Apache, and Kubernetes driving exponential innovation.

Startups have especially benefited from OSS. The ability to leverage powerful, free tools has enabled rapid prototyping and product development. For instance, frameworks like Django, Node.js, and React.js have drastically reduced development time and costs for young companies.

The Shift to Proprietary Models

However, several prominent OSS companies have transitioned to proprietary or dual licensing models. Redis Labs, MongoDB, Cassandra, and Elastic are notable examples. This shift often stems from a desire to combat exploitation by cloud providers like AWS, Google Cloud, and Microsoft Azure. These giants offer managed services based on OSS, often reaping significant profits without contributing meaningfully to the original projects.

Timeline of Transitions

Here are key examples of products and companies that have navigated this challenging transition, as highlighted in TechCrunch’s timeline:

  • 2010: MySQL’s acquisition by Oracle (completed as part of the Sun Microsystems deal) raised concerns about its licensing and development practices, setting an early precedent for shifts in open-source licensing.
  • 2014: Docker introduced a new model combining open-source and proprietary features, laying the groundwork for hybrid licensing approaches.
  • 2016: Apache Cassandra saw increased commercialisation efforts, focusing on enterprise-grade features to sustain the ecosystem.
  • 2018: MongoDB adopted the Server Side Public License (SSPL), aiming to prevent unlicensed use by cloud providers.
  • 2018–2019: Redis Labs relicensed its modules under the Commons Clause and later the Redis Source Available License, restricting commercial usage. The move sparked significant backlash and, after Redis moved away from open-source licensing entirely in 2024, contributed to forks such as Valkey, where developers sought to preserve Redis’s original open ethos.
  • 2021: Elastic transitioned from an Apache 2.0 licence to SSPL, citing similar concerns.

These changes reflect a growing realisation: while OSS enables widespread adoption, it also leaves developers and smaller companies vulnerable to value extraction by larger corporations.

Redis: A Case Study in Controversy

Redis Labs exemplifies the complexities and challenges faced by open-source companies navigating proprietary transitions. Its decision to relicense modules under the Commons Clause aimed to curtail exploitation by cloud providers, but it sparked intense community backlash.

Community Reaction

Many in the open-source community viewed this shift as a betrayal of the principles that had propelled Redis’s success. Developers launched forks like Valkey to maintain an open and unrestricted version of Redis. These forks, while preserving accessibility, also introduced fragmentation into the Redis ecosystem, complicating adoption and support.

Impact on the Ecosystem

The controversy surrounding Redis highlights the delicate balance between sustainability and community trust. While the new licensing model allowed Redis Labs to monetise its innovations, it also risked alienating contributors and users. The long-term effects include a splintered ecosystem and challenges in maintaining a unified developer base.

Security by Obscurity Doesn’t Work

Critics often cite security as a reason to move away from open-source. Yet, history demonstrates the fallacy of “security by obscurity.” High-profile breaches, such as the SolarWinds compromise and MOVEit exploit, highlight the vulnerabilities of proprietary systems. Open-source projects like OpenSSL and Kubernetes, by contrast, benefit from transparent development models and extensive peer review, which tend to surface and remediate vulnerabilities quickly.


Implications for the Startup Ecosystem

For Startups

The shift towards proprietary models introduces barriers for startups that have traditionally relied on open-source tools. Licensing costs and restrictions could increase operational expenses and hinder innovation. Imagine the inefficiency of building an ORM or cryptographic library from scratch if these tools were not freely available.

For Developers

Developers may feel alienated as companies pivot away from the inclusive values of OSS. Forks and community fragmentation—as seen with the Valkey fork of Redis—are common consequences. This creates confusion and inefficiencies within the ecosystem.

For the Ecosystem

The trend also threatens the collaborative spirit of open-source, potentially stifling innovation. However, it opens opportunities for hybrid models, such as open core or dual licensing, that attempt to balance openness with profitability.

The Debate: For and Against

Supporters’ Perspective

Proponents argue that transitioning to proprietary models is a pragmatic move. Companies must protect their intellectual property and ensure financial sustainability to continue innovating. Dual licensing and managed services enable companies to monetise while still offering OSS elements.

Critics’ Perspective

Opponents view this as a betrayal of open-source principles. Amy Hupe’s critique of OSS’ inclusivity points to deeper systemic issues that proprietary transitions exacerbate. Privatization risks alienating diverse contributors by creating financial and operational barriers to entry. For underrepresented groups, these shifts may exacerbate inequities, further limiting their participation and representation in the open-source space.

Lessons from the Transition

Several companies offer valuable insights into navigating these shifts:

  1. MongoDB: Leveraged the SSPL licence to maintain control while ensuring cloud providers paid their share.
  2. Elastic: Transitioned with transparency, minimising backlash by communicating its rationale clearly.
  3. Red Hat: Successfully pioneered a hybrid model, blending open-source development with enterprise-grade support and services.

Governance and transparency play crucial roles in maintaining community trust during these transitions.

The Future of Open Source

Potential Models for Sustainability

Innovative models such as open core (where core functionalities remain free while advanced features are paid) or dual licensing could balance openness with financial viability. Collaboration with cloud providers and clear governance structures can also mitigate exploitation concerns.

Imagining a Closed-Source World

If the trend continues unchecked, we risk entering a world where essential frameworks and libraries are inaccessible. This would stifle innovation, increase costs, and erode the collaborative ethos of the tech ecosystem.

Conclusion: Striking the Balance

The evolution of open-source to proprietary models reflects the tension between ideals and pragmatism. This shift challenges the foundational values of OSS but also underscores the need for sustainability in a rapidly changing landscape. To secure the future of open-source, stakeholders—from developers to corporations—must prioritise inclusive practices, innovative licensing models, and transparent governance. The future of technology hinges on our ability to preserve the collaborative spirit of open-source while adapting to its modern challenges. Let’s build a sustainable open-source ecosystem that remains a beacon of innovation and inclusivity in the years to come.


References and Further Reading

  1. “Open-Source Companies That Go Proprietary: A Timeline,” TechCrunch, December 2024.
  2. “Open Source vs Open Governance: The State and Future of the Movement,” LinkedIn, Ramkumar Sundarakalatharan.
  3. “Privatised Open Source Software,” California Management Review, 2020.
  4. “Why Open Source Matters,” nScale Blog.
  5. “Open Source Software: Why It Matters and How to Get Involved,” The Alan Turing Institute Blog.
  6. “Open Source Does Not Mean Inclusive,” AmyHupe.co.uk.
  7. Redis Labs Licensing Updates, Company Blog.
  8. Apache Cassandra Licensing Updates, DataStax Blog.
How To Measure Real Success In Software Engineering

Recently, while attending The Business Show in London, I got into a conversation with a CXO of an up-and-coming fintech company. The discussion began with cybersecurity implementation—a topic close to my heart—but quickly veered into engineering throughput. What followed was an incoherent rant: a frustrated narrative about firing their Delivery Director for refusing to scale the engineering team to meet deadlines for the company’s next shiny event. Despite my best efforts to pull the gentleman out of his rabbit hole, my reasoning fell on deaf ears.

Reflecting on this interaction over the past month, I’ve realized this episode was emblematic of a larger issue: the prevalent fallacy among CXOs that more engineers equals faster and better output. Surprisingly, this misconception thrives in part because of the silence of engineering leaders—CTOs, VPs, and Directors of Engineering—who often fail to push back against flawed assumptions at the executive level.

Inspired by my recent association with the Information Security Group (ISG) at the Royal Holloway University of London, I decided to don my “academic specs” and examine this fallacy more critically. The result is a deeper dive into the myths of scaling engineering teams, the science behind team efficiency, and a call for a cultural shift in how organizations measure productivity.

The Scaling Myth: Why More Isn’t Always Better

At the heart of this fallacy is a simplistic assumption: more engineers means more features, delivered faster. While this notion seems logical, it is disproven by Price’s Law, a principle that exposes the diminishing returns of team scaling.

Rediscovering Derek J. de Solla Price: From Antikythera to Engineering Efficiency

My journey to understanding Price’s Law began with a fascination for the Antikythera Mechanism—an ancient Greek marvel of engineering and astronomy. It was through this mechanism that I first encountered the work of Prof. Derek J. de Solla Price, a British physicist and historian whose curiosity and intellect extended far beyond antiquities. Inspired by the ingenuity of the Antikythera Mechanism, I was drawn to explore the origins of Damascus and Wootz steel and their roots in the south-western peninsula of India (as detailed in Aayutha Desam by R. Mannar Mannan). (More about that in another post!)

But it was Price’s insight into the uneven distribution of productivity in groups that struck a chord with my work in software engineering. His principle, now widely known as Price’s Law, asserts that in any team, 50% of the work is accomplished by the square root of the total number of participants.

  • In a team of 10 engineers, approximately 3 contributors (√10) are responsible for half the output.
  • In a team of 100 engineers, only 10 individuals (√100) produce as much as the remaining 90 combined.
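The arithmetic is easy to sanity-check; this minimal Python sketch (illustrative only) shows how the productive core shrinks as a share of headcount:

```python
import math

# Per Price's Law, roughly sqrt(N) members of an N-person team
# produce half of the total output.
for n in (10, 25, 100, 400):
    core = math.sqrt(n)
    print(f"Team of {n:>3}: ~{core:.0f} people do half the work "
          f"({core / n:.0%} of headcount)")
```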

This principle highlights a counterintuitive but vital truth: as team size grows, the proportion of high contributors decreases, leading to inefficiencies that compound over time. This isn’t just an academic curiosity—it’s a critical insight for engineering leaders tasked with scaling teams and delivering results.

Price’s Law challenges a long-standing assumption in engineering leadership: that scaling teams proportionally scales productivity. By understanding this principle, CTOs, VPs, and engineering managers can rethink strategies for achieving efficiency and delivering value, even with constrained resources.

The Myth of Highly Motivated Teams

Some self-proclaimed visionary leaders advocate for hiring only highly motivated individuals, often overlooking how teams function in practice. In any organized group, work typically falls into three categories:

  1. Drudgery Work (Low Impact, High Intensity): Routine tasks like debugging or documentation, essential but unappealing.
  2. Intermediate Work (Medium Impact, Medium Intensity): Feature upgrades or system integrations, vital for sustaining operations.
  3. Challenging Work (High Impact, High Intensity): Complex, high-stakes initiatives that highly motivated individuals prefer.

The Problem

Highly motivated individuals often prioritize high-impact projects, leaving routine and intermediate work neglected. This creates:

  • Operational Bottlenecks: Accumulating technical debt and system fragility.
  • Imbalanced Workloads: Overburdened team members handling routine tasks.
  • Team Friction: Reduced cohesion and potential burnout.

The Solution: Balance Over Ambition

Effective teams thrive on diversity in skill sets and balanced task allocation. Leaders must:

  • Distribute Work Strategically: Ensure all types of work are addressed.
  • Value Contributions Equally: Recognize the importance of routine and intermediate tasks.
  • Foster Team Cohesion: Avoid over-prioritizing high-stakes projects at the expense of operational stability.

Conclusion: A truly visionary leader grounds ambition in pragmatism, creating teams that excel not just in high-impact projects but also in sustaining the essentials of day-to-day operations.

Implications for Team Expansion

For CTOs, VPs, and engineering managers, this dynamic presents a counterintuitive challenge: merely expanding the team does not guarantee proportional gains in productivity. Doubling headcount often introduces:

  1. Communication Overhead: Larger teams require more coordination, which consumes valuable time and resources.
  2. Dilution of Accountability: As teams grow, individual contributions become harder to track, potentially reducing ownership and engagement.
  3. Coordination Complexities: Increased interdependencies among team members can slow down decision-making and implementation.

To achieve a twofold increase in productivity, Price’s Law suggests that you may need to quadruple the team size, a move that is often impractical and financially untenable. Instead, engineering leaders must rethink productivity beyond the simplistic metric of team size.
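The quadrupling figure follows directly from the square-root relationship. Treating effective output as proportional to the size of the productive core:

```latex
\text{Output} \propto \sqrt{N}
\quad\Longrightarrow\quad
\sqrt{kN} = 2\sqrt{N}
\;\Longrightarrow\; \sqrt{k} = 2
\;\Longrightarrow\; k = 4
```

That is, doubling the core’s output requires four times the headcount, under the admittedly stylised assumption that Price’s Law continues to hold as the team grows.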

Shifting Focus: Outcomes Over Outputs

Traditional productivity metrics, such as the number of features released or lines of code written, focus on outputs—tangible deliverables produced by the team. However, outputs do not inherently translate into value. Consider the distinction:

  • Outputs: Metrics like features delivered or tickets closed.
  • Outcomes: Measurable changes in user behaviour that drive business results, such as increased user retention or reduced churn.

Relying solely on outputs creates a misleading picture of productivity. A feature-rich application that fails to address user needs or business goals is ultimately unproductive. Instead, outcomes—which capture the real-world effectiveness of engineering efforts—offer a better lens to measure success.

Outcome vs. Impact

While outcomes focus on immediate effects (e.g., increased sign-ups from a new feature), impact delves deeper into long-term consequences. For example:

  • An outcome may be an increase in user sign-ups after a feature launch.
  • The impact would be sustained revenue growth and user satisfaction resulting from the feature’s value over time.

Engineering teams must aim for outcomes that align with strategic goals while keeping an eye on their long-term impacts.

Counterproductive Paradigm: The Threat Surface of Excessive Outputs

Emphasizing outputs over outcomes can be counterproductive, leading to what can be described as an expanding threat surface:

  1. Defects and Bugs: Adding more features often introduces unintended issues that require additional resources to resolve.
  2. Maintenance Burden: More code increases the risk of technical debt, making future development slower and more complex.
  3. Conflict Resolution: Larger teams fixing bugs or implementing features in parallel can inadvertently cause regressions, especially when the main sprint continues uninterrupted.

This vicious cycle diverts focus from strategic initiatives, tying up engineers in a continuous loop of fixes. Instead of scaling output indiscriminately, teams should focus on ensuring that every deliverable contributes to meaningful outcomes.

Focusing on Impacts and Outcomes: A Leadership Imperative

For engineering leaders, the shift from outputs to impacts and outcomes is transformative. This approach emphasizes:

  1. Defining Clear Objectives: Establish measurable outcomes (e.g., reducing churn by 10%) that align with business goals.
  2. Prioritizing High-Impact Work: Evaluate tasks based on their potential to deliver meaningful results.
  3. Empowering Teams: Foster a culture where engineers understand and contribute to broader business objectives rather than just completing tickets.
  4. Continuous Feedback Loops: Regularly assess whether engineering efforts are driving intended outcomes.

This shift not only enhances productivity but also aligns engineering work with the organization’s mission, fostering a sense of purpose within teams.

Conclusion: Redefining Productivity in Software Engineering

Price’s Law reminds us that productivity does not scale linearly with team size. Engineering leaders must navigate this reality by focusing on outcomes and impacts rather than outputs. This paradigm shift requires a cultural and strategic overhaul, but the rewards—greater efficiency, alignment, and value delivery—are well worth the effort.

By embracing this approach, organizations can ensure that their engineering efforts contribute directly to their strategic goals, transforming software development into a driver of sustainable business success.

References

  1. Sundarakalatharan, R. (2022). How to measure Engineering Productivity?. Retrieved from https://nocturnalknight.co/how-to-measure-engineering-productivity/
  2. Bohrmann, N. (2022). How Price’s Law Applies to Everything. Retrieved from https://nielsbohrmann.com/prices-law/
  3. LeadDev. (2022). Focus on outcomes over outputs. Retrieved from https://leaddev.com/velocity/focus-outcomes-over-outputs
  4. Monday Mornings. (2023). Productivity and Price’s Law. Retrieved from https://mondaymornings.madisoncres.com/productivity-and-prices-law-1
  5. TechRadar. (2023). Outcomes versus outputs: the real measure of developer productivity. Retrieved from https://www.techradar.com/pro/outcomes-versus-outputs-the-real-measure-of-developer-productivity
  6. Royal Holloway Information Security Group. (2024). https://pure.royalholloway.ac.uk/
  7. Wikipedia. (2024). Antikythera Mechanism. Retrieved from https://en.wikipedia.org/wiki/Antikythera_mechanism
  8. Wikipedia. (2024). Derek J. de Solla Price. Retrieved from https://en.wikipedia.org/wiki/Derek_J._de_Solla_Price
  9. Wikipedia. (2024). Wootz Steel. Retrieved from https://en.wikipedia.org/wiki/Wootz_steel
  10. Purple Book House. (2024). Aayutha Desam by R. Mannar Mannan. Retrieved from https://www.purplebookhouse.co.uk/product-page/aayutha-desam-book-type-katturaigal-history-by-r-mannar-mannan
The Truth About “Ghost Engineers”: A Critical Analysis

Disclaimer:
This article is not intended to discredit Boris Denisov, Stanford University, McKinsey, or any other entities referenced herein. I hold immense respect for their contributions to research and industry discourse. While findings like these may resonate with practices in FAANG companies, large organizations, and mature startups, this critique seeks to explore the broader implications of relying on narrow metrics to evaluate productivity in software engineering.

The “Ghost Engineer” Narrative

The term “ghost engineers,” popularized by a recent Stanford study, describes software engineers who allegedly contribute minimally to codebases. Analyzing data from over 50,000 engineers, the study concludes that 9.5% of engineers fall into this category, with the prevalence rising to 14% among remote workers.

While the findings spark interesting discussions, they rely heavily on the flawed assumption that code-commit frequency equates to productivity. As I argued in No, McKinsey, You Got It All Wrong About Developer Productivity, this narrow perspective risks undervaluing critical aspects of software engineering that don’t leave a visible footprint in version control systems.

Unintended Amplification: The Snowball Effect

One of the most significant risks of such conclusions—especially before peer review—is their unintended amplification. Articles on Yahoo, TechCrunch, and Newsday have already simplified these findings, creating narratives that could ripple through the industry:

  1. Unnecessary Layoffs: Misinterpreting data might lead organizations to hastily classify engineers as unproductive, ignoring less visible but valuable contributions.
  2. Remote Work Stigma: By associating remote work with reduced productivity, these claims risk undermining one of the most effective workforce models when well-managed.
  3. Toxic Metrics Culture: Over-reliance on activity metrics like commit counts can encourage engineers to game the system by prioritizing volume over meaningful work, as discussed in Business Value Delivery by Engineering Teams in Startups (Part 2).

History offers cautionary examples, such as McKinsey’s controversial reliance on lines of code as a productivity measure—a practice criticized in my earlier article for ignoring the multifaceted nature of modern software engineering.

Engineering Productivity: Beyond Output Metrics

As outlined in Is the Myth of a 10x Developer Real?, productivity in software engineering extends far beyond raw output. Effective engineers don’t just code—they align stakeholders, resolve ambiguity, and reduce future risks. These invisible contributions often lead to:

  • Improved Collaboration: Engineers who mentor, review code, or resolve cross-team dependencies amplify the impact of their teams.
  • Strategic Outcomes: Refactoring technical debt or implementing security frameworks might reduce visible code output while significantly improving system health.

Commit Frequency Misses Critical Context

  • Quality Over Quantity: A single commit that eliminates 1,000 lines of redundant code can be more impactful than 10 minor feature updates.
  • Diverse Roles: Roles like DevOps, QA, and security often contribute indirectly to engineering success but rarely generate frequent commits.

By focusing solely on visible metrics, we risk reinforcing flawed incentives, a point I emphasized in Business Value Delivery by Engineering Teams in Startups (Part 1).
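
To see how thin this signal really is, consider how a commit-count leaderboard is typically produced. The sketch below is a minimal, hypothetical Python version: it tallies commits per author from `git log`, and everything it cannot see (reviews, design discussions, documentation, mentoring) simply vanishes from the number.

```python
import subprocess
from collections import Counter

def commits_per_author(repo: str = ".", since: str = "90 days ago") -> Counter:
    """Tally commits per author email -- the naive 'productivity' signal.

    Everything outside the commit log (reviews, design discussions,
    documentation, mentoring, incident response) is invisible here.
    """
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line)

if __name__ == "__main__":
    for author, count in commits_per_author().most_common():
        print(f"{count:5d}  {author}")
```

A metric this easy to compute is also this easy to game, which is precisely the problem.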

Analyzing the Stanford Study’s Claims

Claim 1: Engineers with Low Commit Activity Are Unproductive

Rebuttal: This assumption ignores the cognitive and collaborative aspects of engineering. As noted in No, McKinsey, You Got It All Wrong About Developer Productivity, activities like design discussions, documentation, and mentoring are essential but invisible in commit logs.

Claim 2: Remote Engineers Are More Likely to Be “Ghost Engineers”

Rebuttal: Remote work relies on asynchronous collaboration, where documentation and long-term planning take precedence over immediate outputs. Simplistic comparisons risk stigmatizing effective remote models.

Claim 3: Low Commit Activity Correlates with Poor Team Performance

Rebuttal: High-performing teams often include specialists whose contributions are less visible but critical. For example, a security engineer resolving vulnerabilities or a DevOps engineer optimizing CI/CD pipelines may not show up in commit logs.

Claim 4: Organizations Could Save Billions by Addressing the “Ghost Engineer” Problem

Rebuttal: Cost-cutting measures based on flawed metrics often lead to higher technical debt, increased turnover, and diminished morale. As argued in Business Value Delivery by Engineering Teams in Startups (Part 2), true cost efficiency lies in maximizing impact, not minimizing headcount.

Impact vs Code-Commits: Understanding the Misalignment

A recurring issue with productivity metrics like code-commit frequency is their inability to reflect the true impact of an engineer’s work. The volume of code changes often says little about the value delivered, as demonstrated by the following examples:

Example 1: A Cosmetic UI Change vs. A Critical API Update

Imagine a product manager requests a seemingly simple change: update a button’s color from purple to orange. While this may sound trivial, it could involve:

  • Updating CSS libraries: A cascade of dependencies might require 1,000+ lines of revisions.
  • Testing for accessibility: Ensuring compliance with color-contrast guidelines adds complexity.
  • Regression testing: Updating snapshot tests or fixing broken visual diffs.

This cosmetic change could result in dozens of commits, each addressing a specific dependency or edge case.

Contrast this with a backend engineer’s work on the API gateway to improve application concurrency. This might involve:

  • Identifying bottlenecks: Profiling existing workloads and implementing a solution to reduce latency.
  • Optimizing database connections: Reducing round trips or improving query performance.
  • Deploying with minimal disruption: A single, concise commit could encapsulate weeks of planning and testing.

Here, the backend change’s impact far outweighs the UI update, even though it appears smaller in terms of commit frequency.

Example 2: Bulk Refactoring vs. Precise Bug Fixing

A mid-level engineer is tasked with refactoring a legacy module, updating deprecated methods, and restructuring a monolithic codebase for better readability. This effort generates hundreds of commits and thousands of lines of changes, none of which immediately improve the product’s features.

On the other hand, a senior engineer identifies and fixes a critical bug that intermittently crashes the application. The solution, a one-line code change after hours of debugging, resolves a high-severity issue affecting thousands of users.

From a commit-count perspective, the refactoring task appears more productive. However, the senior engineer’s single-line fix has a far greater immediate impact.

Example 3: Feature Addition vs. Security Enhancement

A frontend developer introduces a new feature, such as a user profile editor. This entails:

  • New UI components: HTML and CSS for the form.
  • Frontend validations: JavaScript-based constraints for data inputs.
  • Integration tests: Mock API responses for various test cases.

The addition spans 2,000 lines of code across 20 commits.

Meanwhile, a DevSecOps engineer works on a critical security vulnerability. The task involves:

  • Rotating access tokens: Updating key secrets stored in the CI/CD pipeline.
  • Implementing security headers: Adding CSPs to prevent XSS attacks.
  • Hardening configurations: Minor changes in deployment scripts to reduce attack surfaces.

Although the security enhancement generates fewer than 10 commits, its value in preventing potential breaches and compliance penalties is enormous.
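
For a sense of how small such a high-impact change can look in commit terms, here is a minimal sketch of the header-hardening step, assuming a Flask application; the framework and policy values are illustrative, not taken from the example above.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # A restrictive Content-Security-Policy blocks most injected-script vectors.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'"
    )
    # Companion headers that typically travel in the same hardening commit.
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["Strict-Transport-Security"] = (
        "max-age=63072000; includeSubDomains"
    )
    return response
```

A change like this can ship in a single short commit, yet it closes off entire classes of XSS and protocol-downgrade attacks; commit counts capture none of that value.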

Key Takeaways

  • Context Matters: Evaluating productivity requires understanding the context and complexity of the task, not just the output volume.
  • Quality Over Quantity: High-impact changes often involve fewer commits, while low-value tasks may inflate commit counts.
  • Recognizing Diverse Contributions: Engineers working on performance, security, or architecture frequently produce less visible yet highly impactful work.

This misalignment underscores the need for organizations to adopt holistic evaluation metrics that consider both quantitative output and qualitative impact. By focusing on the latter, teams can better recognize and reward meaningful contributions.

The Danger of Flawed Productivity Metrics

Simplistic metrics can have cascading negative effects:

  1. Burnout: Engineers may feel pressured to prioritize activity over quality.
  2. Stifled Innovation: Overemphasis on visible output discourages experimentation and risk-taking.
  3. Loss of Talent: Talented engineers in specialized roles may leave if their contributions are undervalued.

As emphasized in Is the Myth of a 10x Developer Real?, effective engineering is about multiplying impact, not maximizing visible output.

A Holistic Approach to Productivity

To address these issues, organizations must adopt nuanced evaluation frameworks:

  1. Impact-Driven Metrics: Evaluate contributions based on outcomes, such as improved system reliability or customer satisfaction.
  2. Recognize Invisible Work: Acknowledge tasks like mentorship, technical debt reduction, and long-term strategic planning.
  3. Foster a Culture of Trust: Empower teams to experiment and innovate without fear of being misjudged by flawed metrics.

Conclusion

The “ghost engineer” narrative oversimplifies the multifaceted nature of software engineering. By relying on metrics like commit counts, it risks undervaluing critical contributions and fostering unhealthy workplace dynamics. As I’ve argued across multiple articles, effective engineering teams succeed by delivering value, not just output. The industry must move beyond flawed productivity metrics and adopt more comprehensive frameworks to recognize the true contributions of every engineer.


References and Further Reading

  1. Denisov-Blanch, Y. (2024). Twitter Thread on Ghost Engineers. Retrieved from link.
  2. Denisov-Blanch, Y. (2024). Stanford Research on Software Engineering Productivity. Stanford University. Retrieved from link.
  3. Polyakov, A. (2024). Ghost Engineers—Utter Non-Sense! Medium. Retrieved from link.
  4. No, McKinsey, You Got It All Wrong About Developer Productivity. Nocturnalknight.co. Retrieved from link.
  5. Is the Myth of a 10x Developer Real? Nocturnalknight.co. Retrieved from link.
  6. Bridgwater, A. (2024). Code Busters: Are Ghost Engineers Haunting DevOps Productivity? DevOps.com. Retrieved from link.
  7. Business Value Delivery by Engineering Teams in Startups (Part 1). Nocturnalknight.co. Retrieved from link.
  8. Business Value Delivery by Engineering Teams in Startups (Part 2). Nocturnalknight.co. Retrieved from link.
  9. Long, K. (2024). Are Ghost Engineers Undermining Tech Productivity? Business Insider. Retrieved from link.
  10. Passionate Geekz. (2024). Can a Company Increase Its Market Value by Laying Off Employees? Retrieved from link.
Do You Know What’s in Your Supply Chain? The Case for Better Security

I recently read an interesting report by CyCognito on the top three vulnerability classes in third-party products, and it prompted me to re-examine supply chain risks in software engineering. This article is an attempt at that.

The Vulnerability Trifecta in Third-Party Products

The CyCognito report identifies three critical areas where third-party products introduce significant vulnerabilities:

  1. Web Servers
    These foundational systems host countless applications but are frequently exploited due to misconfigurations or outdated software. According to the report, 34% of severe security issues are tied to web server environments like Apache, NGINX, and Microsoft IIS. Vulnerabilities like directory traversal or improper access control can serve as gateways for attackers.
  2. Cryptographic Protocols
    Secure communication relies on cryptographic protocols like TLS and HTTPS. Yet, 15% of severe vulnerabilities target these mechanisms. For instance, misconfigurations, weak ciphers, or reliance on deprecated standards expose sensitive data, with inadequate encryption ranking second on OWASP’s Top 10 security threats.
  3. Web Interfaces Handling PII
    Applications that process PII—such as invoices or financial statements—are among the most sensitive assets. Alarmingly, only half of such interfaces are protected by Web Application Firewalls (WAFs), leaving them vulnerable to injection attacks, session hijacking, or data leakage.

Beyond Web Servers: The Hidden Dependency Risks

You control your software stack, but do you actually know what runs beneath those flashy Web/Application servers?

Drawing parallels from my previous article on PyPI and NPM vulnerabilities, it’s clear that open-source dependencies amplify these threats. Attackers exploit the very trust inherent in supply chains, introducing malicious packages or exploiting insecure libraries.

For example:

  • Attackers have embedded malware into popular NPM and PyPI packages, which are then unknowingly incorporated into enterprise-grade software.
  • Dependency confusion attacks exploit naming conventions to inject malicious packages into CI/CD pipelines.

These risks share a core vulnerability with traditional third-party systems: an opaque supply chain with minimal oversight. The problem is compounded by ever-shrinking release cycles, which leave even strong software engineering teams little or no time to do a decent audit of the dependency graph of the packages from which they are building their new, shiny things meant to transform the world.
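
To make that opacity concrete, the sketch below is an illustrative first check against dependency confusion: it flags internal package names that are also registered on the public PyPI index, which is exactly the ambiguity these attacks exploit. The package names are hypothetical, and a real defence would also pin versions, verify hashes, and restrict index URLs.

```python
import requests

# Hypothetical internal package names; in practice, read these from your
# private index or from the organisation's lockfiles.
INTERNAL_PACKAGES = ["acme-billing-core", "acme-auth-client"]

def exists_on_public_pypi(name: str) -> bool:
    """Return True if the name is already registered on the public index."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for pkg in INTERNAL_PACKAGES:
    if exists_on_public_pypi(pkg):
        print(f"WARNING: '{pkg}' exists on public PyPI (confusion candidate)")
    else:
        print(f"OK: '{pkg}' is not on public PyPI")
```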


Why Software Supply Chain Attacks Persist

As highlighted by Scientific Computing World, software supply chain attacks persist for several reasons:

  • Aggressive GTM Timelines: Most organisations now run quarterly or even monthly product roadmaps, and a new SaaS product can be launched in days to weeks by composing IaaS, PaaS, or SaaS building blocks alongside libraries, frameworks, and other constructs, leaving little time to vet any of them.
  • Exponential Complexity: With organisations relying on layers of third-party and fourth-party services, the attack surface expands exponentially.
  • Insufficient Oversight: Organisations often focus on securing their environments while neglecting the vendors and libraries they depend on.
  • Lagging Standards: The industry’s inability to enforce stringent security protocols across the supply chain leaves critical gaps.
  • Sophistication of Attacks: From SolarWinds to MOVEit, attackers continually evolve, targeting blind spots in detection and remediation frameworks.

Recommended Steps to Mitigate Supply Chain Threats

To address these vulnerabilities and build resilience, organizations can take the following actionable steps:

1. Map and Assess Dependencies

  • Use tools like Dependency-Track or Sonatype Nexus to map and analyze all third-party and open-source dependencies.
  • Regularly perform software composition analysis (SCA) to detect outdated or vulnerable components; a minimal example follows this list.
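
Where a dedicated SCA platform is not yet in place, even a small script against the public OSV.dev vulnerability database provides a useful first pass. This minimal sketch assumes a pinned `requirements.txt` of `name==version` lines:

```python
import requests

def osv_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV.dev database for known advisories."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem},
              "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

with open("requirements.txt") as f:  # assumes pinned name==version lines
    for line in f:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        for vuln in osv_vulnerabilities(name, version):
            print(f"{name}=={version}: {vuln['id']}")
```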

2. Implement Zero-Trust Architecture

  • Leverage Zero-Trust frameworks like NIST 800-207 to ensure strict authentication and access controls across all systems.
  • Minimize the privileges of third-party integrations and isolate sensitive data wherever possible.

3. Strengthen Vendor Management

  • Evaluate vendor security practices using frameworks like the NCSC’s Supply Chain Security Principles or the Open Trusted Technology Provider Standard (OTTPS).
  • Demand transparency through detailed Service Level Agreements (SLAs) and regular vendor audits.

4. Prioritize Secure Development and Deployment

  • Train your development teams to follow secure coding practices like those outlined in the OWASP Secure Coding Guidelines.
  • Incorporate tools like Snyk or Checkmarx to identify vulnerabilities during the software development lifecycle.

5. Enhance Monitoring and Incident Response

  • Deploy Web Application Firewalls (WAFs) such as AWS WAF or Cloudflare to protect web interfaces.
  • Establish a robust incident response plan using guidance from the MITRE ATT&CK Framework to ensure rapid containment and mitigation.

6. Foster Collaboration

  • Work with industry peers and organizations like the Cybersecurity and Infrastructure Security Agency (CISA) to share intelligence and best practices for supply chain security.
  • Collaborate with academic institutions and research groups for cutting-edge insights into emerging threats.

7. Schedule a No-Obligation Consultation Call with Yours Truly

Struggling with supply chain vulnerabilities or need tailored solutions for your unique challenges? I offer consultation services to work directly with your CTO, Principal Architect, or Security Leadership team to:

  • Assess your systems and identify key risks.
  • Recommend actionable, budget-friendly steps for mitigation and prevention.

With years of expertise in cybersecurity and compliance, I can help streamline your approach to supply chain security without breaking the bank. Let’s collaborate to make your operations secure and resilient.

Schedule Your Free Consultation Today

Building a Resilient Supply Chain

The UK’s National Cyber Security Centre (NCSC) principles for supply chain security provide a pragmatic roadmap for businesses. Here’s how to act:

  1. Understand and Map Dependencies
    Organizations should create a detailed map of all dependencies, including direct vendors and downstream providers, to identify potential weak links.
  2. Adopt a Zero-Trust Framework
    Treat every external connection as untrusted until verified, with continuous monitoring and access restrictions.
  3. Mandate Secure Development Practices
    Encourage or require vendors to implement secure coding standards, frequent vulnerability testing, and robust update mechanisms.
  4. Regularly Audit Supply Chains
    Establish a routine audit process to assess vendor security posture and adherence to compliance requirements.
  5. Proactive Incident Response Planning
    Prepare for the inevitable by maintaining a robust incident response plan that incorporates supply chain risks.

Final Thoughts

The threat of supply chain vulnerabilities is no longer hypothetical—it’s happening now. With reports like CyCognito’s, research into dependency management, and frameworks provided by trusted institutions, businesses have the tools to mitigate risks. However, this requires vigilance, collaboration, and a willingness to rethink traditional approaches to third-party management.

Organisations must act not only to safeguard their operations but also to preserve trust in an increasingly interconnected world. 

Is your supply chain ready to withstand the next wave of attacks?


References and Further Reading

  1. Report Shows the Threat of Supply Chain Vulnerabilities from Third-Party Products – CyCognito
  2. Hidden Threats in PyPI and NPM: What You Need to Know
  3. Why Software Supply Chain Attacks Persist – Scientific Computing World
  4. Principles of Supply Chain Security – NCSC
  5. CyCognito Report Exposes Rising Software Supply Chain Threats

What’s your strategy for managing third-party risks? Share your thoughts in the comments!

Why Do We Need Quantum-Resistant Security Standards?

In October 2024, we discussed the profound implications of China’s quantum computing advancements and their potential to disrupt internet security. Quantum computers, with their unparalleled processing power, pose a direct threat to current encryption systems that secure global communications. Since then, the National Institute of Standards and Technology (NIST) has made significant strides in shaping the post-quantum cryptography (PQC) landscape. This follow-up delves into NIST’s recent updates, including finalised standards, transition strategies, and their broader impact on global cybersecurity.


NIST’s Finalised Post-Quantum Encryption Standards

On August 13, 2024, NIST announced the release of its first three finalized post-quantum encryption standards. These standards are foundational for safeguarding electronic information in a quantum-enabled future, addressing key areas such as secure email communications, online transactions, and identity verification.

The standards selected are robust against both classical and quantum attacks, offering a proactive defence against the anticipated rise of quantum threats. While these are groundbreaking, NIST has emphasized the need for rapid adoption, encouraging enterprises and governments alike to begin transitioning their systems to quantum-resistant encryption.

Key highlights:

  • Algorithms: CRYSTALS-Kyber (standardized for key encapsulation as ML-KEM in FIPS 203) and CRYSTALS-Dilithium (standardized for digital signatures as ML-DSA in FIPS 204) lead the finalized standards, alongside the SPHINCS+-based SLH-DSA (FIPS 205) as a backup signature scheme; a minimal usage sketch follows this list.
  • Applications: These standards are particularly suited for critical applications, such as financial systems, healthcare records, and government communications.
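
To make the transition concrete, here is a minimal key-encapsulation round trip, assuming the open-source liboqs-python bindings; the algorithm identifier varies by library version (newer releases expose the FIPS 203 name "ML-KEM-768"), so treat it as illustrative.

```python
import oqs  # liboqs-python bindings: pip install liboqs-python

ALG = "Kyber768"  # newer liboqs releases name this "ML-KEM-768" (FIPS 203)

# Receiver generates a keypair and publishes the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a fresh shared secret against the public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    # Receiver recovers the same secret from the ciphertext.
    secret_at_receiver = receiver.decap_secret(ciphertext)

assert secret_at_sender == secret_at_receiver  # both sides now share a key
```

The shared secret would then key a conventional symmetric cipher such as AES-GCM, just as with classical key exchange.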

NIST’s Draft Transition Strategy and Timeline

In a draft report released on November 14, 2024, NIST outlined a detailed roadmap for migrating to PQC. This document provides clarity on the timeline and steps necessary to shift from current cryptographic protocols to quantum-resistant ones.

Key Aspects of the Draft:

  1. Transition Timeline:
    • Transition to begin immediately, with milestones for algorithm implementation by 2026.
    • Full adoption in federal systems is targeted by 2030, though enterprises are urged to act sooner.
  2. Evaluation and Risk Management:
    • A phased approach to identify and replace quantum-vulnerable systems.
    • Focus on testing and interoperability with existing infrastructure.
  3. Public Review Period:
    • The draft is open for comments until January 10, 2025, ensuring that the strategy incorporates diverse perspectives from industry leaders, academia, and government.

Guidance for Federal Agencies and Enterprises

To aid the transition, NIST has issued specific guidance tailored for federal agencies and private organizations:

  • Quantum Risk Assessments: Organizations must inventory their cryptographic systems and identify components vulnerable to quantum decryption.
  • Pilot Programs: Encouraged for testing quantum-resistant algorithms in controlled environments.
  • Training and Awareness: Enterprises need to upskill their workforce to understand and implement PQC effectively.

This proactive approach aligns with Executive Order 14028 on improving national cybersecurity, which mandates the adoption of innovative security measures across federal systems.


Enterprises Must Act Faster

While NIST has provided a structured timeline, cybersecurity experts warn that enterprises cannot afford to wait until the final deadlines. The development of practical quantum computers may outpace current expectations, leaving vulnerable systems exposed.

Recommendations for Enterprises:

  1. Prioritise Cryptographic Inventories: Develop a clear understanding of where cryptography is used and its quantum vulnerability (see the sketch after this list).
  2. Develop a Migration Plan: Incorporate NIST’s guidance to create a tailored transition strategy.
  3. Collaborate with Vendors: Work with software and hardware providers to ensure seamless updates and integrations of PQC algorithms.
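
A cryptographic inventory need not start with an expensive tool. A rough first pass can be a simple scan of the codebase for quantum-vulnerable public-key primitives, as in this illustrative sketch; the patterns and source root are assumptions, not a complete inventory method.

```python
import re
from pathlib import Path

# Illustrative patterns for quantum-vulnerable public-key primitives;
# a real inventory would also cover configs, certificates, and binaries.
QUANTUM_VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DH)\b")

def scan(root: str) -> None:
    """Print source lines that reference classical public-key primitives."""
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if QUANTUM_VULNERABLE.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan("src")  # hypothetical source root
```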

Global Implications and Call to Action

The transition to PQC is not just a technical challenge but a global imperative. With quantum computing breakthroughs occurring across nations, adopting quantum-resistant standards is essential for maintaining the integrity of digital systems. Organizations worldwide must:

  • Collaborate to ensure interoperability of PQC standards across borders.
  • Share best practices and innovations to accelerate the global transition.
  • Support research in next-generation cryptographic techniques to stay ahead of emerging threats.

Conclusion

NIST’s efforts in finalizing post-quantum encryption standards and drafting a comprehensive transition strategy mark a pivotal moment in cybersecurity. However, these initiatives are only as effective as their adoption. Governments, enterprises, and individuals must take urgent steps to align with these standards and safeguard their digital assets against the looming threat of quantum-powered attacks.

For further insights into how quantum computing advancements could reshape internet security, revisit our previous discussion: How Will China’s Quantum Advances Change Internet Security?.


References & Further Reading: 

  1. NIST IR 8547 – https://csrc.nist.gov/pubs/ir/8547/ipd
  2. NIST IR 8413 – https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8413.pdf
  3. Dilithium – https://pq-crystals.org/dilithium/
  4. Falcon – https://falcon-sign.info/
  5. SPHINCS+ – https://sphincs.org/
  6. Trapdoors for Hard Lattices and New Cryptographic Constructions – https://eprint.iacr.org/2007/432 (Must read if you’re a programmer interested in exploring lattices)
  7. Lattice-based cryptography – Chris Peikert, Georgia Institute of Tech – https://web.eecs.umich.edu/~cpeikert/pubs/slides-abit4.pdf
  8. Additional Source Code to Explore – https://github.com/regras/labs (a Proof of Concept (PoC) of an Attribute-Based Signature scheme using lattices)