Category: Engineering Leadership

How To Measure Real Success In Software Engineering

Recently, while attending The Business Show in London, I engaged in a conversation with a CXO of an upcoming Fintech company. The discussion began with cybersecurity implementation—a topic close to my heart—but quickly veered into the realm of engineering throughput. What followed was an incoherent rant: a frustrating narrative about firing their Delivery Director for refusing to scale the engineering team to meet deadlines for the company’s next shiny event. Despite my best efforts to pull this gentleman out of his rabbit hole, my reasoning fell on deaf ears.

Reflecting on this interaction over the past month, I’ve realized this episode was emblematic of a larger issue: the prevalent fallacy among CXOs that more engineers equals faster and better output. Surprisingly, this misconception thrives in part because of the silence of engineering leaders—CTOs, VPs, and Directors of Engineering—who often fail to push back against flawed assumptions at the executive level.

Inspired by my recent association with the Information Security Group (ISG) at the Royal Holloway University of London, I decided to don my “academic specs” and examine this fallacy more critically. The result is a deeper dive into the myths of scaling engineering teams, the science behind team efficiency, and a call for a cultural shift in how organizations measure productivity.

The Scaling Myth: Why More Isn’t Always Better

At the heart of this fallacy is a simplistic assumption: more engineers means more features, delivered faster. The notion seems logical, but it collides with Price’s Law, a principle that exposes the diminishing returns of team scaling.

Rediscovering Derek J. de Solla Price: From Antikythera to Engineering Efficiency

My journey to understanding Price’s Law began with a fascination for the Antikythera Mechanism, an ancient Greek marvel of engineering and astronomy. It was through this mechanism that I first encountered the work of Prof. Derek J. de Solla Price, a British physicist and historian whose curiosity and intellect extended far beyond antiquities. Inspired by the ingenuity of the Antikythera Mechanism, I was drawn to explore the origins of Damascus and Wootz steel and their roots in the south-western peninsula of India (as detailed in Aayutha Desam by R. Mannar Mannan). (More about that in another post!)

But it was Price’s insight into the uneven distribution of productivity in groups that struck a chord with my work in software engineering. His principle, now widely known as Price’s Law, asserts that in any team, 50% of the work is accomplished by the square root of the total number of participants.

  • In a team of 10 engineers, approximately 3 contributors (√10) are responsible for half the output.
  • In a team of 100 engineers, only 10 individuals (√100) produce as much as the remaining 90 combined.
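To see how quickly the productive share shrinks, here is a minimal Python sketch (the function name is mine, purely illustrative) that tabulates the Price’s Law cohort across team sizes:

```python
import math

def prices_law_cohort(team_size: int) -> int:
    """People who, per Price's Law, produce ~50% of the output."""
    return round(math.sqrt(team_size))

for team_size in (10, 25, 100, 400):
    cohort = prices_law_cohort(team_size)
    share = cohort / team_size * 100
    print(f"Team of {team_size:>3}: ~{cohort} people do half the work "
          f"({share:.0f}% of headcount)")
```

At 10 people the productive cohort is roughly a third of the team; at 400 it is five percent.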

This principle highlights a counterintuitive but vital truth: as team size grows, the proportion of high contributors decreases, leading to inefficiencies that compound over time. This isn’t just an academic curiosity—it’s a critical insight for engineering leaders tasked with scaling teams and delivering results.

Price’s Law challenges a long-standing assumption in engineering leadership: that scaling teams proportionally scales productivity. By understanding this principle, CTOs, VPs, and engineering managers can rethink strategies for achieving efficiency and delivering value, even with constrained resources.

The Myth of Highly Motivated Teams

Some self-proclaimed visionary leaders advocate for hiring only highly motivated individuals, often overlooking how teams function in practice. In any organized group, work typically falls into three categories:

  1. Drudgery Work (Low Impact, High Intensity): Routine tasks like debugging or documentation, essential but unappealing.
  2. Intermediate Work (Medium Impact, Medium Intensity): Feature upgrades or system integrations, vital for sustaining operations.
  3. Challenging Work (High Impact, High Intensity): Complex, high-stakes initiatives that highly motivated individuals prefer.

The Problem

Highly motivated individuals often prioritize high-impact projects, leaving routine and intermediate work neglected. This creates:

  • Operational Bottlenecks: Accumulating technical debt and system fragility.
  • Imbalanced Workloads: Overburdened team members handling routine tasks.
  • Team Friction: Reduced cohesion and potential burnout.

The Solution: Balance Over Ambition

Effective teams thrive on diversity in skill sets and balanced task allocation. Leaders must:

  • Distribute Work Strategically: Ensure all types of work are addressed.
  • Value Contributions Equally: Recognize the importance of routine and intermediate tasks.
  • Foster Team Cohesion: Avoid over-prioritizing high-stakes projects at the expense of operational stability.

Conclusion: A truly visionary leader grounds ambition in pragmatism, creating teams that excel not just in high-impact projects but also in sustaining the essentials of day-to-day operations.

Implications for Team Expansion

For CTOs, VPs, and engineering managers, this dynamic presents a counterintuitive challenge: merely expanding the team does not guarantee proportional gains in productivity. Doubling headcount often introduces:

  1. Communication Overhead: Larger teams require more coordination, which consumes valuable time and resources.
  2. Dilution of Accountability: As teams grow, individual contributions become harder to track, potentially reducing ownership and engagement.
  3. Coordination Complexities: Increased interdependencies among team members can slow down decision-making and implementation.

To achieve a twofold increase in productivity, Price’s Law suggests that you may need to quadruple the team size: the productive cohort scales as √n, so doubling it from √n to √(4n) = 2√n requires four times the headcount. That move is often impractical and financially untenable. Instead, engineering leaders must rethink productivity beyond the simplistic metric of team size.

Shifting Focus: Outcomes Over Outputs

Traditional productivity metrics, such as the number of features released or lines of code written, focus on outputs—tangible deliverables produced by the team. However, outputs do not inherently translate into value. Consider the distinction:

  • Outputs: Metrics like features delivered or tickets closed.
  • Outcomes: Measurable changes in user behaviour that drive business results, such as increased user retention or reduced churn.

Relying solely on outputs creates a misleading picture of productivity. A feature-rich application that fails to address user needs or business goals is ultimately unproductive. Instead, outcomes—which capture the real-world effectiveness of engineering efforts—offer a better lens to measure success.

Outcome vs. Impact

While outcomes focus on immediate effects (e.g., increased sign-ups from a new feature), impact delves deeper into long-term consequences. For example:

  • An outcome may be an increase in user sign-ups after a feature launch.
  • The impact would be sustained revenue growth and user satisfaction resulting from the feature’s value over time.

Engineering teams must aim for outcomes that align with strategic goals while keeping an eye on their long-term impacts.

Counterproductive Paradigm: The Threat Surface of Excessive Outputs

Emphasizing outputs over outcomes can be counterproductive, leading to what can be described as an expanding threat surface:

  1. Defects and Bugs: Adding more features often introduces unintended issues that require additional resources to resolve.
  2. Maintenance Burden: More code increases the risk of technical debt, making future development slower and more complex.
  3. Conflict Resolution: Larger teams fixing bugs or implementing features in parallel can inadvertently cause regressions, especially when the main sprint continues uninterrupted.

This vicious cycle diverts focus from strategic initiatives, tying up engineers in a continuous loop of fixes. Instead of scaling output indiscriminately, teams should focus on ensuring that every deliverable contributes to meaningful outcomes.

Focusing on Impacts and Outcomes: A Leadership Imperative

For engineering leaders, the shift from outputs to impacts and outcomes is transformative. This approach emphasizes:

  1. Defining Clear Objectives: Establish measurable outcomes (e.g., reducing churn by 10%) that align with business goals.
  2. Prioritizing High-Impact Work: Evaluate tasks based on their potential to deliver meaningful results.
  3. Empowering Teams: Foster a culture where engineers understand and contribute to broader business objectives rather than just completing tickets.
  4. Continuous Feedback Loops: Regularly assess whether engineering efforts are driving intended outcomes.

This shift not only enhances productivity but also aligns engineering work with the organization’s mission, fostering a sense of purpose within teams.
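As a rough illustration of the first point, here is a minimal Python sketch of an outcome check for a goal like “reducing churn by 10%”. The data, window, baseline, and helper name are all invented for the example:

```python
from datetime import date

# Hypothetical records: user id -> last active date (purely illustrative).
users = {
    "u1": date(2024, 11, 2),
    "u2": date(2024, 8, 15),
    "u3": date(2024, 10, 30),
    "u4": date(2024, 6, 1),
}

def churn_rate(users: dict, as_of: date, inactive_days: int = 90) -> float:
    """Share of users with no activity in the last `inactive_days` days."""
    churned = sum(
        1 for last_seen in users.values()
        if (as_of - last_seen).days > inactive_days
    )
    return churned / len(users)

baseline = 0.40                 # assumed churn before the initiative
current = churn_rate(users, as_of=date(2024, 11, 30))
target = baseline * 0.90        # "reduce churn by 10%" relative to baseline
print(f"churn={current:.0%}, target<={target:.0%}, met={current <= target}")
```

The point is not the arithmetic but the contract: the team commits to moving a user-behaviour number, not to shipping a feature count.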

Conclusion: Redefining Productivity in Software Engineering

Price’s Law reminds us that productivity does not scale linearly with team size. Engineering leaders must navigate this reality by focusing on outcomes and impacts rather than outputs. This paradigm shift requires a cultural and strategic overhaul, but the rewards—greater efficiency, alignment, and value delivery—are well worth the effort.

By embracing this approach, organizations can ensure that their engineering efforts contribute directly to their strategic goals, transforming software development into a driver of sustainable business success.

References

  1. Sundarakalatharan, R. (2022). How to measure Engineering Productivity?. Retrieved from https://nocturnalknight.co/how-to-measure-engineering-productivity/
  2. Bohrmann, N. (2022). How Price’s Law Applies to Everything. Retrieved from https://nielsbohrmann.com/prices-law/
  3. LeadDev. (2022). Focus on outcomes over outputs. Retrieved from https://leaddev.com/velocity/focus-outcomes-over-outputs
  4. Monday Mornings. (2023). Productivity and Price’s Law. Retrieved from https://mondaymornings.madisoncres.com/productivity-and-prices-law-1
  5. TechRadar. (2023). Outcomes versus outputs: the real measure of developer productivity. Retrieved from https://www.techradar.com/pro/outcomes-versus-outputs-the-real-measure-of-developer-productivity
  6. Royal Holloway Information Security Group. (2024). https://pure.royalholloway.ac.uk/
  7. Wikipedia. (2024). Antikythera Mechanism. Retrieved from https://en.wikipedia.org/wiki/Antikythera_mechanism
  8. Wikipedia. (2024). Derek J. de Solla Price. Retrieved from https://en.wikipedia.org/wiki/Derek_J._de_Solla_Price
  9. Wikipedia. (2024). Wootz Steel. Retrieved from https://en.wikipedia.org/wiki/Wootz_steel
  10. Purple Book House. (2024). Aayutha Desam by R. Mannar Mannan. Retrieved from https://www.purplebookhouse.co.uk/product-page/aayutha-desam-book-type-katturaigal-history-by-r-mannar-mannan

The Truth About “Ghost Engineers”: A Critical Analysis

Disclaimer:
This article is not intended to discredit Boris Denisov, Stanford University, McKinsey, or any other entities referenced herein. I hold immense respect for their contributions to research and industry discourse. While findings like these may resonate with practices in FAANG companies, large organizations, and mature startups, this critique seeks to explore the broader implications of relying on narrow metrics to evaluate productivity in software engineering.

The “Ghost Engineer” Narrative

The term “ghost engineers,” popularized by a recent Stanford study, describes software engineers who allegedly contribute minimally to codebases. The study, which analyzed data from over 50,000 engineers, concludes that 9.5% of engineers fall into this category, with the prevalence rising to 14% among remote workers.

While the findings spark interesting discussions, they rely heavily on the flawed assumption that code commit frequency equates to productivity. As I argued in No, McKinsey, You Got It All Wrong About Developer Productivity, this narrow perspective risks undervaluing critical aspects of software engineering that don’t leave a visible footprint in version control systems.

Unintended Amplification: The Snowball Effect

One of the most significant risks of such conclusions—especially before peer review—is their unintended amplification. Articles on Yahoo, TechCrunch, and Newsday have already simplified these findings, creating narratives that could ripple through the industry:

  1. Unnecessary Layoffs: Misinterpreting data might lead organizations to hastily classify engineers as unproductive, ignoring less visible but valuable contributions.
  2. Remote Work Stigma: By associating remote work with reduced productivity, these claims risk undermining one of the most effective workforce models when well-managed.
  3. Toxic Metrics Culture: Over-reliance on activity metrics like commit counts can encourage engineers to game the system by prioritizing volume over meaningful work, as discussed in Business Value Delivery by Engineering Teams in Startups (Part 2).

History offers cautionary examples, such as McKinsey’s controversial reliance on lines of code as a productivity measure—a practice criticized in my earlier article for ignoring the multifaceted nature of modern software engineering.

Engineering Productivity: Beyond Output Metrics

As outlined in Is the Myth of a 10x Developer Real?, productivity in software engineering extends far beyond raw output. Effective engineers don’t just code—they align stakeholders, resolve ambiguity, and reduce future risks. These invisible contributions often lead to:

  • Improved Collaboration: Engineers who mentor, review code, or resolve cross-team dependencies amplify the impact of their teams.
  • Strategic Outcomes: Refactoring technical debt or implementing security frameworks might reduce visible code output while significantly improving system health.

Commit Frequency Misses Critical Context

  • Quality Over Quantity: A single commit that eliminates 1,000 lines of redundant code can be more impactful than 10 minor feature updates.
  • Diverse Roles: Roles like DevOps, QA, and security often contribute indirectly to engineering success but rarely generate frequent commits.

By focusing solely on visible metrics, we risk reinforcing flawed incentives, a point I emphasized in Business Value Delivery by Engineering Teams in Startups (Part 1).
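Part of the problem is that the flawed metric is trivially easy to compute, which is one reason it keeps getting used. A minimal sketch of a commit-count “leaderboard” (runnable inside any Git checkout) makes the blind spots obvious:

```python
import subprocess
from collections import Counter

# Count commits per author in the current repository -- exactly the kind
# of activity metric critiqued above.
log = subprocess.run(
    ["git", "log", "--pretty=format:%an"],
    capture_output=True, text=True, check=True,
)
commits_per_author = Counter(log.stdout.splitlines())

for author, count in commits_per_author.most_common(10):
    print(f"{count:>5}  {author}")
# Invisible to this table: code review, design documents, mentoring,
# and the one-line fix that took a week of debugging.
```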

Analyzing the Stanford Study’s Claims

Claim 1: Engineers with Low Commit Activity Are Unproductive

Rebuttal: This assumption ignores the cognitive and collaborative aspects of engineering. As noted in No, McKinsey, You Got It All Wrong About Developer Productivity, activities like design discussions, documentation, and mentoring are essential but invisible in commit logs.

Claim 2: Remote Engineers Are More Likely to Be “Ghost Engineers”

Rebuttal: Remote work relies on asynchronous collaboration, where documentation and long-term planning take precedence over immediate outputs. Simplistic comparisons risk stigmatizing effective remote models.

Claim 3: Low Commit Activity Correlates with Poor Team Performance

Rebuttal: High-performing teams often include specialists whose contributions are less visible but critical. For example, a security engineer resolving vulnerabilities or a DevOps engineer optimizing CI/CD pipelines may not show up in commit logs.

Claim 4: Organizations Could Save Billions by Addressing the “Ghost Engineer” Problem

Rebuttal: Cost-cutting measures based on flawed metrics often lead to higher technical debt, increased turnover, and diminished morale. As argued in Business Value Delivery by Engineering Teams in Startups (Part 2), true cost efficiency lies in maximizing impact, not minimizing headcount.

Impact vs Code-Commits: Understanding the Misalignment

A recurring issue with productivity metrics like code-commit frequency is their inability to reflect the true impact of an engineer’s work. The volume of code changes often says little about the value delivered, as demonstrated by the following examples:

Example 1: A Cosmetic UI Change vs. A Critical API Update

Imagine a product manager requests a seemingly simple change: update a button’s color from purple to orange. While this may sound trivial, it could involve:

  • Updating CSS libraries: A cascade of dependencies might require 1,000+ lines of revisions.
  • Testing for accessibility: Ensuring compliance with color-contrast guidelines adds complexity.
  • Regression testing: Updating snapshot tests or fixing broken visual diffs.

This cosmetic change could result in dozens of commits, each addressing a specific dependency or edge case.

Contrast this with a backend engineer’s work on the API gateway to improve application concurrency. This might involve:

  • Identifying bottlenecks: Profiling existing workloads and implementing a solution to reduce latency.
  • Optimizing database connections: Reducing round trips or improving query performance.
  • Deploying with minimal disruption: A single, concise commit could encapsulate weeks of planning and testing.

Here, the backend change’s impact far outweighs the UI update, even though it appears smaller in terms of commit frequency.

Example 2: Bulk Refactoring vs. Precise Bug Fixing

A mid-level engineer is tasked with refactoring a legacy module, updating deprecated methods, and restructuring a monolithic codebase for better readability. This effort generates hundreds of commits and thousands of lines of changes, none of which immediately improve the product’s features.

On the other hand, a senior engineer identifies and fixes a critical bug that intermittently crashes the application. The solution, a one-line code change after hours of debugging, resolves a high-severity issue affecting thousands of users.

From a commit-count perspective, the refactoring task appears more productive. However, the senior engineer’s single-line fix has a far greater immediate impact.

Example 3: Feature Addition vs. Security Enhancement

A frontend developer introduces a new feature, such as a user profile editor. This entails:

  • New UI components: HTML and CSS for the form.
  • Frontend validations: JavaScript-based constraints for data inputs.
  • Integration tests: Mock API responses for various test cases.

The addition spans 2,000 lines of code across 20 commits.

Meanwhile, a DevSecOps engineer works on a critical security vulnerability. The task involves:

  • Rotating access tokens: Updating key secrets stored in the CI/CD pipeline.
  • Implementing security headers: Adding CSPs to prevent XSS attacks.
  • Hardening configurations: Minor changes in deployment scripts to reduce attack surfaces.

Although the security enhancement generates fewer than 10 commits, its value in preventing potential breaches and compliance penalties is enormous.

Key Takeaways

  • Context Matters: Evaluating productivity requires understanding the context and complexity of the task, not just the output volume.
  • Quality Over Quantity: High-impact changes often involve fewer commits, while low-value tasks may inflate commit counts.
  • Recognizing Diverse Contributions: Engineers working on performance, security, or architecture frequently produce less visible yet highly impactful work.

This misalignment underscores the need for organizations to adopt holistic evaluation metrics that consider both quantitative output and qualitative impact. By focusing on the latter, teams can better recognize and reward meaningful contributions.

The Danger of Flawed Productivity Metrics

Simplistic metrics can have cascading negative effects:

  1. Burnout: Engineers may feel pressured to prioritize activity over quality.
  2. Stifled Innovation: Overemphasis on visible output discourages experimentation and risk-taking.
  3. Loss of Talent: Talented engineers in specialized roles may leave if their contributions are undervalued.

As emphasized in Is the Myth of a 10x Developer Real?, effective engineering is about multiplying impact, not maximizing visible output.

A Holistic Approach to Productivity

To address these issues, organizations must adopt nuanced evaluation frameworks:

  1. Impact-Driven Metrics: Evaluate contributions based on outcomes, such as improved system reliability or customer satisfaction.
  2. Recognize Invisible Work: Acknowledge tasks like mentorship, technical debt reduction, and long-term strategic planning.
  3. Foster a Culture of Trust: Empower teams to experiment and innovate without fear of being misjudged by flawed metrics.

Conclusion

The “ghost engineer” narrative oversimplifies the multifaceted nature of software engineering. By relying on metrics like commit counts, it risks undervaluing critical contributions and fostering unhealthy workplace dynamics. As I’ve argued across multiple articles, effective engineering teams succeed by delivering value, not just output. The industry must move beyond flawed productivity metrics and adopt more comprehensive frameworks to recognize the true contributions of every engineer.


References and Further Reading

  1. Denisov-Blanch, Y. (2024). Twitter Thread on Ghost Engineers. Retrieved from link.
  2. Denisov-Blanch, Y. (2024). Stanford Research on Software Engineering Productivity. Stanford University. Retrieved from link.
  3. Polyakov, A. (2024). Ghost Engineers—Utter Non-Sense! Medium. Retrieved from link.
  4. No, McKinsey, You Got It All Wrong About Developer Productivity. Nocturnalknight.co. Retrieved from link.
  5. Is the Myth of a 10x Developer Real? Nocturnalknight.co. Retrieved from link.
  6. Bridgwater, A. (2024). Code Busters: Are Ghost Engineers Haunting DevOps Productivity? DevOps.com. Retrieved from link.
  7. Business Value Delivery by Engineering Teams in Startups (Part 1). Nocturnalknight.co. Retrieved from link.
  8. Business Value Delivery by Engineering Teams in Startups (Part 2). Nocturnalknight.co. Retrieved from link.
  9. Long, K. (2024). Are Ghost Engineers Undermining Tech Productivity? Business Insider. Retrieved from link.
  10. Passionate Geekz. (2024). Can a Company Increase Its Market Value by Laying Off Employees? Retrieved from link.

Do You Know What’s in Your Supply Chain? The Case for Better Security

I recently read an interesting report by CyCognito on the top three vulnerabilities in third-party products, and it sparked my interest in reexamining the supply chain risks in software engineering. This article is an attempt at that.

The Vulnerability Trifecta in Third-Party Products

The CyCognito report identifies three critical areas where third-party products introduce significant vulnerabilities:

  1. Web Servers
    These foundational systems host countless applications but are frequently exploited due to misconfigurations or outdated software. According to the report, 34% of severe security issues are tied to web server environments like Apache, NGINX, and Microsoft IIS. Vulnerabilities like directory traversal or improper access control can serve as gateways for attackers.
  2. Cryptographic Protocols
    Secure communication relies on cryptographic protocols like TLS and HTTPS. Yet, 15% of severe vulnerabilities target these mechanisms. For instance, misconfigurations, weak ciphers, or reliance on deprecated standards expose sensitive data, with inadequate encryption ranking second on OWASP’s Top 10 security threats.
  3. Web Interfaces Handling PII
    Applications that process PII—such as invoices or financial statements—are among the most sensitive assets. Alarmingly, only half of such interfaces are protected by Web Application Firewalls (WAFs), leaving them vulnerable to injection attacks, session hijacking, or data leakage.
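On the second point, a first-pass check is easy to script. The sketch below (hostname illustrative; probe only endpoints you are authorized to test) reports the TLS version and cipher a server actually negotiates:

```python
import socket
import ssl

HOST = "example.com"  # illustrative target

# Negotiate a TLS session with default (reasonably strict) client settings
# and report what the server agreed to. Anything below TLS 1.2 is a red flag.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated:", tls.version())
        print("Cipher suite:", tls.cipher())
```

A full assessment would also enumerate every protocol version and cipher the server supports, not just its preferred one; dedicated scanners do that far better.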

Beyond Web Servers: The Hidden Dependency Risks

You control your software stack, but do you actually know what runs beneath those flashy Web/Application servers?

Drawing parallels from my previous article on PyPI and NPM vulnerabilities, it’s clear that open-source dependencies amplify these threats. Attackers exploit the very trust inherent in supply chains, introducing malicious packages or exploiting insecure libraries.

For example:

  • Attackers have embedded malware into popular NPM and PyPI packages, which are then unknowingly incorporated into enterprise-grade software.
  • Dependency confusion attacks exploit naming conventions to inject malicious packages into CI/CD pipelines.
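For illustration, a crude dependency-confusion smoke test takes only a few lines: check whether your internal package names already exist on public PyPI. The package names below are invented:

```python
import json
import urllib.error
import urllib.request

INTERNAL_PACKAGES = ["acme-billing-core", "acme-auth-client"]  # illustrative

for name in INTERNAL_PACKAGES:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp)["info"]
        # A public package with the same name means a squatter could publish
        # a higher version that some resolvers would prefer over yours.
        print(f"WARNING: '{name}' exists on PyPI (latest: {info['version']})")
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"OK: '{name}' not on public PyPI (consider reserving it)")
        else:
            raise
```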

These risks share a core vulnerability with traditional third-party systems: an opaque supply chain with minimal oversight. The problem is compounded by ever-shrinking release cycles, which leave even excellent software engineering teams little to no time to audit the dependency graph of the packages underpinning the new, shiny things they are building to transform the world.


Why Software Supply Chain Attacks Persist

As highlighted by Scientific Computing World, software supply chain attacks persist for several reasons:

  • Aggressive GTM Timelines: Most organisations now run quarterly or even monthly product roadmaps, making it possible to launch a new SaaS product in days to weeks by leveraging other IaaS, PaaS, or SaaS systems, in addition to any libraries, frameworks, and other constructs.
  • Exponential Complexity: With organisations relying on layers of third-party and fourth-party services, the attack surface expands exponentially.
  • Insufficient Oversight: Organisations often focus on securing their environments while neglecting the vendors and libraries they depend on.
  • Lagging Standards: The industry’s inability to enforce stringent security protocols across the supply chain leaves critical gaps.
  • Sophistication of Attacks: From SolarWinds to MOVEit, attackers continually evolve, targeting blind spots in detection and remediation frameworks.

Recommended Steps to Mitigate Supply Chain Threats

To address these vulnerabilities and build resilience, organizations can take the following actionable steps:

1. Map and Assess Dependencies

  • Use tools like Dependency-Track or Sonatype Nexus to map and analyze all third-party and open-source dependencies.
  • Regularly perform software composition analysis (SCA) to detect outdated or vulnerable components.
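As a minimal illustration, the sketch below walks a CycloneDX SBOM (the JSON format consumed by Dependency-Track and produced by tools such as Syft) and flags components on an internal deny-list. The file name and deny-list entries are assumptions:

```python
import json

# Hypothetical deny-list of (name, version) pairs your organisation has banned.
DENY_LIST = {("lodash", "4.17.20"), ("log4j-core", "2.14.1")}

with open("sbom.json") as f:          # SBOM exported from your build pipeline
    sbom = json.load(f)

for component in sbom.get("components", []):
    name, version = component.get("name"), component.get("version")
    flag = "  <-- DENY-LISTED" if (name, version) in DENY_LIST else ""
    print(f"{name}=={version}{flag}")
```

In practice you would feed the same SBOM to an SCA service for CVE matching; the value here is seeing that the dependency graph is plain data you can inspect and gate on.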

2. Implement Zero-Trust Architecture

  • Leverage Zero-Trust frameworks like NIST 800-207 to ensure strict authentication and access controls across all systems.
  • Minimize the privileges of third-party integrations and isolate sensitive data wherever possible.

3. Strengthen Vendor Management

  • Evaluate vendor security practices using frameworks like the NCSC’s Supply Chain Security Principles or the Open Trusted Technology Provider Standard (OTTPS).
  • Demand transparency through detailed Service Level Agreements (SLAs) and regular vendor audits.

4. Prioritize Secure Development and Deployment

  • Train your development teams to follow secure coding practices like those outlined in the OWASP Secure Coding Guidelines.
  • Incorporate tools like Snyk or Checkmarx to identify vulnerabilities during the software development lifecycle.

5. Enhance Monitoring and Incident Response

  • Deploy Web Application Firewalls (WAFs) such as AWS WAF or Cloudflare to protect web interfaces.
  • Establish a robust incident response plan using guidance from the MITRE ATT&CK Framework to ensure rapid containment and mitigation.

6. Foster Collaboration

  • Work with industry peers and organizations like the Cybersecurity and Infrastructure Security Agency (CISA) to share intelligence and best practices for supply chain security.
  • Collaborate with academic institutions and research groups for cutting-edge insights into emerging threats.

7. Schedule a No-Obligation Consultation Call with Yours Truly

Struggling with supply chain vulnerabilities or need tailored solutions for your unique challenges? I offer consultation services to work directly with your CTO, Principal Architect, or Security Leadership team to:

  • Assess your systems and identify key risks.
  • Recommend actionable, budget-friendly steps for mitigation and prevention.

With years of expertise in cybersecurity and compliance, I can help streamline your approach to supply chain security without breaking the bank. Let’s collaborate to make your operations secure and resilient.

Schedule Your Free Consultation Today

Building a Resilient Supply Chain

The UK’s National Cyber Security Centre (NCSC) principles for supply chain security provide a pragmatic roadmap for businesses. Here’s how to act:

  1. Understand and Map Dependencies
    Organizations should create a detailed map of all dependencies, including direct vendors and downstream providers, to identify potential weak links.
  2. Adopt a Zero-Trust Framework
    Treat every external connection as untrusted until verified, with continuous monitoring and access restrictions.
  3. Mandate Secure Development Practices
    Encourage or require vendors to implement secure coding standards, frequent vulnerability testing, and robust update mechanisms.
  4. Regularly Audit Supply Chains
    Establish a routine audit process to assess vendor security posture and adherence to compliance requirements.
  5. Proactive Incident Response Planning
    Prepare for the inevitable by maintaining a robust incident response plan that incorporates supply chain risks.

Final Thoughts

The threat of supply chain vulnerabilities is no longer hypothetical—it’s happening now. With reports like CyCognito’s, research into dependency management, and frameworks provided by trusted institutions, businesses have the tools to mitigate risks. However, this requires vigilance, collaboration, and a willingness to rethink traditional approaches to third-party management.

Organisations must act not only to safeguard their operations but also to preserve trust in an increasingly interconnected world. 

Is your supply chain ready to withstand the next wave of attacks?


References and Further Reading

  1. Report Shows the Threat of Supply Chain Vulnerabilities from Third-Party Products – CyCognito
  2. Hidden Threats in PyPI and NPM: What You Need to Know
  3. Why Software Supply Chain Attacks Persist – Scientific Computing World
  4. Principles of Supply Chain Security – NCSC
  5. CyCognito Report Exposes Rising Software Supply Chain Threats

What’s your strategy for managing third-party risks? Share your thoughts in the comments!

Starling Bank’s Penalty: How to Strengthen Your Compliance Efforts

Introduction

The rapid growth of the fintech industry has brought with it immense opportunities for innovation, but also significant risks in terms of regulatory compliance and real security. Starling Bank, one of the UK’s prominent digital banks, was fined £29 million in October 2024 by the Financial Conduct Authority (FCA) for serious lapses in its anti-money laundering (AML) and sanctions screening processes. This fine is part of a broader trend of fintechs grappling with regulatory pressures as they scale quickly. Failures in compliance not only lead to financial penalties but also damage reputation and customer trust; in most cases, they also cause revenue loss or significant business impact.

In this article, we explore what went wrong at Starling Bank, examine similar compliance issues faced by other major financial institutions like Paytm, Monzo, HDFC, Axis Bank, and Robinhood, and propose practical solutions to help fintech companies strengthen their compliance frameworks. These cybersecurity and compliance lapses are not restricted to any one geography; they are prevalent in the US, UK, India, and many other regions. Additionally, we dive into how vulnerabilities manifest in growing fintechs and the increasing importance of adopting zero-trust architectures and AI-powered AML systems to safeguard against financial crime.

Background

In October 2024, Starling Bank was fined £29 million by the Financial Conduct Authority (FCA) for significant lapses in its anti-money laundering (AML) controls and sanctions screening. The penalty highlights the increasing pressure on fintech firms to build robust compliance frameworks that evolve with their rapid growth. Starling’s case, although high-profile, is just one in a series of incidents where compliance failures have attracted regulatory action. This article will explore what went wrong at Starling, examine similar compliance failures across the global fintech landscape, and provide recommendations on how fintechs can enhance their security and compliance controls.

What Went Wrong and How the Vulnerability Manifested

The FCA investigation into Starling Bank uncovered two major compliance gaps between 2019 and 2023, which exposed the bank to financial crime risks:

  1. Failure to Onboard and Monitor High-Risk Clients: Starling’s systems for onboarding new clients, particularly high-risk individuals, were not sufficiently rigorous. The bank’s AML mechanisms did not scale in line with the rapid increase in customers, leaving gaps where sanctioned or suspicious individuals could go undetected. Despite the bank’s growth, the compliance framework remained stagnant, resulting in breaches of Principle 3 of the FCA’s Principles for Businesses (Crowdfund Insider; FinTech Futures).
  2. Inadequate Sanctions Screening: Starling’s sanctions screening systems failed to adequately identify transactions from sanctioned entities, a critical vulnerability that persisted for several years. With insufficient real-time monitoring capabilities, the bank did not screen many transactions against the latest sanctions lists, leaving it exposed to potentially illegal activity (FinTech Futures). This is especially concerning in a financial ecosystem where transactions are frequent and high in volume, requiring robust systems to ensure compliance at all times.

These vulnerabilities manifested in Starling’s inability to effectively prevent financial crime, culminating in the FCA’s action in October 2024.

Learning from Similar Failures in the Fintech Industry

  1. Paytm’s Cybersecurity Breach Reporting Delays (October 2024): In India, Paytm was fined for failing to report cybersecurity breaches in a timely manner to the Reserve Bank of India (RBI). This non-compliance exposed vulnerabilities in Paytm’s internal governance structures, particularly in their failure to adapt to rapid business expansion and manage cybersecurity threats (Reuters).
  2. HDFC and Axis Banks’ Regulatory Breaches (September 2024): The RBI fined HDFC Bank and Axis Bank in September 2024 for failing to comply with regulatory guidelines, emphasizing how traditional banks, like fintechs, can face compliance challenges as they scale. The fines were related to lapses in governance and risk management frameworks (Economic Times).
  3. Monzo’s PIN Security Breach (2023): In 2023, UK-based challenger bank Monzo experienced a breach where customer PINs were accidentally exposed due to an internal vulnerability. Although Monzo responded swiftly to mitigate the damage, the breach illustrated the need for fintechs to prioritize backend security and implement zero-trust security architectures that can prevent such incidents (Wired).
  4. LockBit Ransomware Attack (2024): The LockBit ransomware attack on a major financial institution in 2024 demonstrated the growing cyber threats that fintechs face. This attack exposed the weaknesses in traditional cybersecurity models, underscoring the necessity of adopting zero-trust architectures for fintech companies to protect sensitive data and transactions from malicious actors (NCSC).
  5. Robinhood’s Regulatory Scrutiny (2021-2022): In June 2021, Robinhood was fined $70 million by FINRA for misleading customers, causing harm through platform outages, and failing to manage operational risks during the GameStop trading frenzy. Robinhood’s systems were not equipped to handle the surge in trading volumes, leading to severe service disruptions and a failure to communicate risks to customers.
  6. Robinhood Crypto’s Cybersecurity Failure (2022): In August 2022, Robinhood was fined $30 million by the New York State Department of Financial Services (NYDFS) for failing to comply with anti-money laundering (AML) regulations and cybersecurity obligations related to its cryptocurrency trading operations. The fine was issued due to inadequate staffing, compliance failures, and improper handling of regulatory oversight within its crypto business. Much like Starling, Robinhood’s compliance systems lagged behind its rapid business growth (Compliance Week).

Key Statistics in the Fintech Compliance Landscape

  • 65% of organizations in the financial sector had more than 500 sensitive files open to every employee in 2023, making them highly vulnerable to insider threats.
  • The average cost of a data breach in financial services was $5.85 million in 2023, a significant figure that shows the financial impact of security vulnerabilities.
  • 27% of ransomware attacks targeted financial institutions in 2022, with the number of attacks continuing to rise in 2024, further highlighting the importance of robust cybersecurity frameworks.
  • 81% of financial institutions reported a rise in phishing and social engineering attacks in 2023, emphasizing the need for employee awareness and strong access controls.
  • By 2025, the global cost of cybercrime is projected to exceed $10.5 trillion annually, a figure that will disproportionately impact fintech companies that fail to implement strong security protocols.

Recommendations for Strengthening Compliance and Security Controls

To prevent future compliance breaches, fintech firms should prioritise scalable, technology-enabled compliance solutions. This requires empowering Compliance Heads, Information Security Teams, CISOs, and CTOs with the necessary budgets and authority to develop secure-by-design environments, teams, infrastructure, and products.

  1. AI-Powered AML Systems: Leverage artificial intelligence (AI) and machine learning to enhance AML systems. These technologies can dynamically adjust to new threats and process high volumes of transactions to detect suspicious patterns in real time. This approach will ensure that fintechs can comply with evolving regulatory requirements while scaling.
  2. Zero-Trust Security Models: As the LockBit ransomware attack showed in 2024, fintechs must adopt zero-trust architectures, where every user and device interacting with the system is continuously authenticated and verified. This reduces the risk of internal breaches and external attacks (Cloudflare).
  3. Real-Time Auditing and Blockchain for Transparency: Real-time auditing, combined with blockchain technology, provides an immutable and transparent record of all financial transactions. This would help fintechs like Starling avoid the pitfalls of delayed sanctions screening, as blockchain ensures immediate and traceable compliance checks (EY).
  4. Multi-Layered Sanctions Screening: Implement a multi-layered sanctions screening system that combines automated transaction monitoring with manual oversight for high-risk accounts. This dual approach ensures that fintechs can monitor suspicious activities while maintaining compliance with global regulatory frameworks (Exiger; FinTech Futures). A toy illustration of such a screening check follows this list.
  5. Continuous Employee Training and Governance: Strong governance structures and regular compliance training for employees will ensure that fintechs remain agile and responsive to regulatory changes. This prepares the organization to adapt as new regulations emerge and customer bases expand.
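To make the screening idea concrete, here is a deliberately toy name-similarity check. Real systems use curated lists (e.g., OFAC, UK OFSI), transliteration handling, and entity resolution; every name and threshold below is invented:

```python
from difflib import SequenceMatcher

SANCTIONS_LIST = ["Acme Shell Holdings", "Ivan Petrovich Example"]
THRESHOLD = 0.85  # similarity above which a payment is held for manual review

def screen(payee: str) -> list[tuple[str, float]]:
    """Return sanctions entries whose similarity to `payee` exceeds THRESHOLD."""
    hits = []
    for entry in SANCTIONS_LIST:
        score = SequenceMatcher(None, payee.lower(), entry.lower()).ratio()
        if score >= THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits

for payee in ["ACME Shell Holdings Ltd", "Jane Smith"]:
    print(payee, "->", screen(payee) or "clear")
```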

Conclusion

The £29 million fine imposed on Starling Bank in October 2024 serves as a crucial reminder for fintech companies to integrate robust compliance and security frameworks as they grow. In an industry where regulatory scrutiny is intensifying, the fintech players that prioritize compliance will not only avoid costly fines but also position themselves as trusted institutions in the financial services world.


Further Reading and References

  1. RBI Fines HDFC, Axis Bank for Non-Compliance with Regulations (September 2024)
  2. RBI Fines Paytm for Not Reporting Cybersecurity Breaches on Time (October 2024)
  3. LockBit’s Latest Attack Shows Why Fintech Needs More Zero Trust (2024)
  4. Monzo PIN Security Breach Explained (2023)
  5. Varonis Cybersecurity Statistics (2023)

Scholarly Papers & References

  1. Barr, M.S.; Jackson, H.E.; Tahyar, M. Financial Regulation: Law and Policy. SSRN Scholarly Paper No. 3576506, 2020. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3576506
  2. Suryono, R.R.; Budi, I.; Purwandari, B. Challenges and Trends of Financial Technology (Fintech): A Systematic Literature Review. Information 2020, 11, 590. https://doi.org/10.3390/info11120590
  3. AlBenJasim, S., Dargahi, T., Takruri, H., & Al-Zaidi, R. (2023). FinTech Cybersecurity Challenges and Regulations: Bahrain Case Study. Journal of Computer Information Systems, 1–17. https://doi.org/10.1080/08874417.2023.2251455

By learning from past failures and adopting stronger controls, fintechs can mitigate the risks of financial crime, protect customer data, and ensure compliance in an increasingly regulated industry.

Why Did Elastic Decide to Go Open Source Again?

Elastic’s Return to Open Source: The Knight is back to the Pavilion

Elastic, the company behind Elasticsearch, recently decided to revert to an open-source licensing model after more than three years of operating under a proprietary license. This decision reflects a shift in strategy that emphasizes community-driven innovation and collaboration. In early 2021, Elastic adopted a proprietary model to protect its intellectual property from cloud providers like Amazon Web Services (AWS), which were benefiting from Elasticsearch without contributing to its development. However, the move away from open source posed its own challenges, including alienating the developer community that had helped build Elasticsearch into a widely used tool.

In 2024, Elastic CEO Shay Banon announced the company’s return to an open-source framework. He explained that this decision stems from the belief that open collaboration fosters innovation and better serves the long-term interests of both the company and its user base. “We believe that the best products are built together,” Banon stated, emphasizing the value of community engagement in product development.

Recent Changes in Open-Source Licensing Models

Elastic’s decision is not an isolated incident. Over the past few years, several other technology companies have reconsidered their licensing models in response to the changing dynamics of software development and cloud service providers. These companies have struggled with how to balance open-source principles with the need to protect their commercial interests.

  1. Redis Labs
    Redis Labs initially licensed Redis under a permissive open-source license, but in 2018, the company adopted the Commons Clause to prevent cloud providers from offering Redis as a service without contributing to its development. However, after facing backlash from the developer community, Redis Labs adjusted its approach by introducing Redis Stack under more community-friendly terms, highlighting the difficulty of maintaining open-source integrity while ensuring business protection.
  2. HashiCorp
    In 2023, HashiCorp, known for popular tools like Terraform, adopted a Business Source License (BSL), which restricts the usage of its software in certain commercial contexts. HashiCorp’s move was driven by concerns over cloud providers monetizing its tools without contributing back to the open-source community. While BSL is not a traditional open-source license, HashiCorp continues to maintain a balance between openness and protecting its intellectual property, showing how companies are navigating complex market dynamics.
  3. MongoDB
    MongoDB’s shift to the Server Side Public License (SSPL) in 2018 was another major development in the open-source licensing debate. The SSPL aims to prevent cloud service providers from exploiting MongoDB’s open-source code without contributing back. While the SSPL is more restrictive than traditional open-source licenses, MongoDB’s goal was to retain the open-source ethos while ensuring that cloud vendors could not commercialize the software without contributing to its development.
  4. Chef Software
    Chef, an automation tool provider, switched all of its products to open-source in 2019 after years of operating under a mixed licensing model. This shift was largely a response to the growing demand for transparency and community collaboration. Chef’s decision allowed it to rebuild trust within its user base and align its business strategy with the broader trends in software development.

Impact on the Average Software Developer

For the average software developer, these licensing model changes can profoundly impact their work, career growth, and day-to-day development practices.

  1. Access to Cutting-Edge Tools
    When companies like Elastic and MongoDB return to open-source models, developers gain unrestricted access to powerful tools and frameworks. This democratizes the technology, allowing developers from small companies, startups, and even personal projects to leverage the same tools that major enterprises use, without the barrier of expensive proprietary licenses. For many developers, open-source provides not just tools, but an entire ecosystem for experimentation, learning, and rapid prototyping.
  2. Contributing to Open-Source Communities
    Open-source contributions are an essential career-building tool for many developers. By contributing to open-source projects, developers can gain real-world experience, build portfolios, and even influence the direction of widely-used technologies. When companies like HashiCorp and Redis Labs shift their focus back to open-source, it increases opportunities for developers to become part of a larger, global development community.
  3. Career and Learning Opportunities
    Exposure to open-source projects allows developers to work with cutting-edge technology and methodologies. This can accelerate learning, as open-source projects are often evolving quickly with input from diverse and global teams. Additionally, contributing to popular open-source projects like Elastic or Kubernetes can greatly enhance a developer’s resume and open doors to career opportunities, including job offers and consulting roles.
  4. Navigating Licensing Restrictions
    Developers must also become more adept at navigating the complexities of new licenses like SSPL and BSL. These licenses place restrictions on how open-source software can be used, especially in cloud environments. Understanding the fine print is crucial for developers working in enterprise environments or launching their own SaaS products, as improper use of open-source software can lead to legal complications. This makes legal and compliance knowledge increasingly important in modern software development roles.

Open Source vs. Open Governance: A Crucial Distinction

Elastic’s journey highlights a key debate in the software development world: the difference between open source and open governance. While many companies have embraced open-source models, few have transitioned to open governance frameworks, which involve community-driven decision-making for the project’s future direction.

As highlighted in my previous article, “Open Source vs. Open Governance: The State and Future of the Movement,” the distinction lies in control. In open-source projects, the code is freely available, but decisions regarding the project’s roadmap and key developments may still be controlled by a single entity, such as a company. In contrast, open governance ensures that decision-making is decentralized, often involving multiple stakeholders, including developers, users, and companies that contribute to the project.

For Elastic and others, returning to open-source doesn’t necessarily mean embracing open governance. Although Elastic’s code will be open for contributions, the strategic direction will still be managed by the company. This is a common approach in many high-profile open-source projects. By contrast, Kubernetes, which originated at Google, operates under an open-source model but is governed by a diverse group of stakeholders through the CNCF, ensuring the project’s direction isn’t controlled by a single entity. Projects like OpenStack follow an even more open governance approach, with broader community involvement in decision-making.

Understanding the difference between open-source and open governance is critical as the software industry evolves. Companies are beginning to realize that open-source alone doesn’t always translate into the collaborative, community-driven development they seek. Open governance provides a framework for more inclusive decision-making, but it also presents challenges in terms of efficiency and control.

Looking Ahead: Open Source as a Business Strategy

The return of Elastic and other companies to more open models indicates a growing recognition of the importance of open-source in the software industry. For Elastic, this decision is about more than just licensing; it’s about reconnecting with a developer community that thrives on transparency and collaboration. By embracing open-source again, Elastic hopes to accelerate product development and foster stronger relationships with users.

This broader trend shows that while companies are still cautious about cloud providers exploiting their software, they are increasingly finding ways to leverage open-source models as a business strategy. These recent changes to licensing frameworks highlight the evolving nature of software development and the role open-source plays in it.

For organizations navigating the complex decision between proprietary and open-source models, the key lesson from Elastic’s experience is that the long-term benefits of community-driven development and innovation can outweigh the short-term protection of proprietary models. As more companies follow suit, it’s clear that open-source is not just a technical choice—it’s a business strategy.

Further Reading:

  1. Why Open Source Matters for Innovation – Alan Turing Institute
  2. The Future of Open Source: What to Expect in 2024 and Beyond – MIT Technology Review
  3. Why Every Company Should be Open-Source Aligned – Forbes

Key Reasons Founding CTOs Move Sideways in Tech Startups

In the world of startups, it’s not uncommon to hear about founding CTOs being ousted or sidelined within a few years of the company’s inception. For many, this seems paradoxical—after all, these are often individuals who are not only experts in their fields but also the technical visionaries who brought the company to life. Yet, within 3–5 years, many of them find themselves either pushed out of their executive roles or relegated to a more visionary or peripheral position in the organization.

But why does this happen?

The Curious Case of the Founding CTO

About six or seven years ago, while assisting a couple of VC firms with technical due diligence on their investments, I noticed a pattern: founding CTOs who had built groundbreaking technology and secured millions in funding were being removed from their positions. These were not just “any” technologists—they were often world-class experts with pedigrees from prestigious institutions like Cambridge, Stanford, Oxford, MIT, the Technion (Israel), and the IITs (India). Their technical competence was beyond question, so what was causing this rapid turnover?

The Business Acumen Gap

After numerous conversations with both the displaced CTOs and the investors who backed their companies, a common theme emerged: there was a significant gap in business acumen between the CTOs and the boards of directors. As the companies grew, this gap widened, eventually becoming a chasm too large to bridge.

The Perception of Arrogance

One of the most frequently cited issues was the perception of arrogance. Many founding CTOs, steeped in deep technical knowledge, would express disdain or impatience towards board members and executive leadership team (ELT) members who lacked a technical background. This disdain often manifested in meetings, where CTOs would engage in “geek speak,” using highly technical language that alienated non-technical stakeholders. Such an attitude made boards feel undervalued and disconnected from the technology’s impact on the business, creating friction between the CTO and other executives.

Failure to Translate Technology into Business Outcomes

Another critical issue was the inability—or unwillingness—to translate technical initiatives into tangible business outcomes. CTOs would present technology roadmaps without tying them to the company’s broader business objectives (and, in extreme cases, without tying them even to product roadmaps). This disconnect led to frustration among board members who wanted to understand how technology investments would drive revenue, reduce costs, or create competitive advantages. According to an article in Harvard Business Review, this lack of alignment between technical leadership and business strategy often results in a loss of confidence from investors and executive leadership, who come to see the CTO as out of sync with the company’s growth trajectory.

Lack of Proactive Communication and Risk Management

Founding CTOs were also often criticized for failing to communicate proactively. When projects fell behind schedule or technical challenges arose, many CTOs would either remain silent or offer vague assurances such as, “You have to trust me,” without explaining the underlying problems. This lack of transparency, and the absence of a clear, proactive plan to mitigate risks, eroded the board’s confidence in their leadership. As noted by TechCrunch, this lack of foresight and communication can lead to the CTO being perceived as “dead weight” on the cap table, ultimately leading to their removal or sidelining.

The Statistics Behind the Trend

Research supports the observation that founding CTOs often struggle to maintain their roles as companies scale. According to a study by Harvard Business Review, more than 50% of founding CTOs in high-growth startups are replaced within the first 5 years. The reasons cited align with the issues mentioned above—poor communication, lack of business alignment, and a failure to scale leadership skills as the company grows.

Additionally, a survey by the Startup Leadership Journal revealed that 70% of venture capitalists have replaced a founding CTO at least once in their careers. This statistic underscores the importance of not only possessing technical expertise but also developing the necessary business acumen to maintain a leadership role in a rapidly growing company.

Real-World Examples: CTOs Who Fell from Grace

Several high-profile cases illustrate this trend. For instance, at Uber, founding CTO Oscar Salazar eventually took a step back from his leadership role as the company’s growth demanded a different set of skills. Similarly, at Twitter, co-founder Noah Glass was famously sidelined during the company’s early years, despite his pivotal role in its creation.

In another notable case, at Zenefits, founding CTO Laks Srini was moved to a less central role as the company faced regulatory challenges and rapid growth. The decision to shift his role was driven by the need for a leadership team that could navigate the complexities of a scaling business.

The full list is too long, so here are eight names that are bound to elicit a reaction.

| Name | Company | Fired/Left (Year) | Most Likely Reason |
| --- | --- | --- | --- |
| Scott Forstall | Apple | 2012 | Abrasive management style and failure of Apple Maps |
| Kevin Lynch | Adobe | 2013 | Contention over Flash technology; departure to join Apple |
| Tony Fadell | Apple | 2008 | Internal conflicts over strategic directions |
| Amit Singhal | Google/Uber | 2017 | Dismissed from Uber due to harassment allegations |
| Balaji Srinivasan | Coinbase | 2019 | Strategic shifts away from decentralization |
| Alex Stamos | Facebook | 2018 | Disagreements over handling misinformation and security issues |
| Michael Abbott | Twitter | 2011 | Executive reshuffle during strategic redirection |
| Shiva Rajaraman | WeWork | 2018 | Departure during company instability and failed IPO |

The Path Forward for Aspiring CTOs

For current and aspiring CTOs, the lessons are clear: technical expertise is essential, but it must be complemented by strong business acumen, communication skills, and a proactive approach to leadership. As a company scales, so too must the CTO’s ability to align technology with business objectives, communicate effectively with non-technical stakeholders, and manage both risks and expectations.

CTOs who can bridge the gap between technology and business are far more likely to maintain their executive roles and continue to drive their companies forward. For those who fail to adapt, the fate of being sidelined or replaced is an all-too-common outcome.

Conclusion

The role of the CTO is critical, especially in the early stages of a startup. However, as the company grows, the demands on the CTO evolve. Those who can develop the necessary business acumen, communicate effectively with a diverse range of stakeholders, and maintain a strategic focus will thrive. For others, the writing may be on the wall well before the 3–5 year mark.

What other reasons have you found that got the founding CTO fired? Share your thoughts in the comments.


Tech Founder to CTO: The Hidden Challenges of Managing Growth in Startups

The role of the Chief Technology Officer (CTO) in a startup is dynamic and challenging, particularly for first-time technical cofounders. While the early stages of a startup demand intense technical involvement and innovation, the role evolves significantly as the company grows. This evolution often highlights stark differences in the required skill sets at different stages, posing challenges for first-time technical cofounders but offering opportunities for serial entrepreneurs.

The Initial Phase: Technical Mastery and Hands-On Development

In a startup’s early days, the technical cofounder, often assuming the CTO role, is deeply immersed in product development’s intricacies. This period is characterized by rapid prototyping, extensive coding, and constant iteration based on user feedback. The technical cofounder’s primary focus is to bring the product vision to life, often working with limited resources and under significant time pressure. This phase requires not just technical expertise but also a high degree of creativity and problem-solving prowess.

The Transition: From Builder to Leader

As the startup scales, the demands on the CTO change dramatically. The focus shifts from hands-on development to strategic leadership. This transition involves managing larger teams, setting long-term technical directions, and ensuring that the technology strategy aligns with the overall business goals. First-time technical cofounders often find this shift challenging because it demands skills they may not have developed. The ability to code and build is no longer enough; the role now requires people management, strategic planning, and the capacity to handle complex organizational dynamics.

The Skill Set Gap

For first-time technical cofounders, this transition can be particularly daunting. Their expertise lies in building and innovating, but scaling a technology team and managing a growing organization are entirely different challenges. These new responsibilities demand leadership, communication, and strategic thinking—areas where first-time founders often have little experience. The result is a skill set gap that can lead to frustration and inefficiency, both for the individual and the organization.

Serial Entrepreneurs: Experience Matters

In contrast, serial entrepreneurs often handle this transition more effectively. Having navigated the startup journey multiple times, they possess a broader range of skills and experiences. They are familiar with the different phases of growth and the changing demands of the CTO role. Serial entrepreneurs are better equipped to balance hands-on technical work with strategic leadership. They have likely experienced the pitfalls and challenges of scaling a company before and have developed the necessary skills to manage them.

Learning from Experience

Serial entrepreneurs and seasoned engineering leaders bring a wealth of knowledge from their previous ventures, allowing them to anticipate challenges and implement solutions proactively. Their past experiences help them build robust management structures, delegate effectively, and maintain strategic focus. This adaptability and foresight are crucial for a scaling startup, where the ability to pivot and adjust is often the difference between success and failure.

The Burnout Factor

Another critical difference is how first-time technical cofounders and serial entrepreneurs handle burnout. The relentless pace and high stakes of a startup can lead to significant stress and fatigue. First-time founders, driven by their passion and vision, might find it hard to step back and delegate, leading to burnout. On the other hand, serial entrepreneurs, having experienced this before, are often more adept at recognizing the signs of burnout and taking steps to mitigate it. They understand the importance of work-life balance and are better at creating a sustainable work environment for themselves and their teams.

Strategic Decisions and Stakeholder Management

As startups grow, they attract more investors and stakeholders whose interests need to be managed. Serial entrepreneurs typically have more experience dealing with investors and understanding their expectations. They are skilled at navigating the complex landscape of stakeholder management, making strategic decisions that align with the broader goals of the company while maintaining the confidence of their investors.

Conclusion: The Path Forward

For startups, recognizing the strengths and limitations of their technical cofounders is crucial. While first-time technical cofounders bring passion and technical prowess, they may struggle with the strategic and managerial aspects as the company scales. In contrast, serial entrepreneurs, with their diverse experiences and refined skills, are often better suited to handle the evolving demands of the CTO role.

Startups should consider these dynamics when planning their leadership strategies. Providing support, mentorship, and training to first-time technical cofounders can help bridge the skill set gap. Alternatively, involving experienced leaders who can complement the technical cofounder’s strengths can create a balanced leadership team capable of steering the company through its growth phases.

Ultimately, the journey from a technical cofounder to a successful CTO is complex and challenging. Recognizing the unique contributions and potential limitations of first-time technical cofounders, while leveraging the experience of serial entrepreneurs, can significantly enhance a startup’s chances of success.

The Future is Now: How Mojo🔥 is Outpacing Python at 90000X Speed

Calling all AI wizards and machine learning mavericks! Get ready to be blown away by Mojo, a revolutionary new programming language designed specifically to conquer the ever-evolving realm of artificial intelligence.

Just last year, Modular Inc. unveiled Mojo, and it’s already making waves. But here’s the real kicker: Mojo isn’t just another language; it’s a “hypersonic” language on a mission to leave the competition in the dust. We’re talking about a staggering 90,000 times faster than the ever-popular Python! A minor disclaimer: this is not an official benchmark from The Computer Language Benchmarks Game or any other institution; it comes entirely from Modular’s internal benchmarking.

That’s right, say goodbye to hours of agonizing wait times while your AI models train. With Mojo, you’ll be churning out cutting-edge algorithms at lightning speed. Imagine the possibilities! Faster development cycles, quicker iterations, and the ability to tackle even more complex AI projects – the future is wide open.

Mind-Blowing Speed and an Engaged Community

But speed isn’t the only thing Mojo boasts about. Launched in August 2023, this open-source language (open-sourced just last month, on March 29th, 2024!) has already amassed a loyal following, surpassing a whopping 17,000 stars on its GitHub repository. That’s a serious testament to the developer community’s excitement about Mojo’s potential.

The momentum continues to build. As of today, there are over 2,500 active projects on GitHub utilizing Mojo, showcasing its rapid adoption within the AI development space.

Unveiling the Magic Behind Mojo

So, what’s the secret sauce behind Mojo’s mind-blowing performance? The folks at Modular Inc. are keeping some of the details close to their chest, but we do know that Mojo is built from the ground up for AI applications. This means it leverages advancements in compiler technology and hardware acceleration, specifically targeting the types of tasks that AI developers face every day (SIMD, vectorisation, and parallelisation).
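
To get a feel for why that matters, consider a toy experiment. The sketch below is plain Python with NumPy, not Mojo, and has nothing to do with Modular’s benchmarks; it simply contrasts an interpreted loop with a vectorised kernel on the same reduction, which is exactly the class of optimisation Mojo bakes into the language itself.

```python
# Toy comparison: interpreted loop vs. vectorised kernel (Python, not Mojo).
import time

import numpy as np

N = 1_000_000

# Pure-Python loop: the interpreter touches one element at a time.
xs = list(range(N))
start = time.perf_counter()
total = 0
for x in xs:
    total += x * x
loop_secs = time.perf_counter() - start

# NumPy: the same sum of squares dispatched to vectorised native code.
arr = np.arange(N, dtype=np.int64)
start = time.perf_counter()
total_vec = int(np.dot(arr, arr))
vec_secs = time.perf_counter() - start

assert total == total_vec
print(f"loop: {loop_secs:.3f}s, vectorised: {vec_secs:.4f}s, "
      f"speed-up: ~{loop_secs / max(vec_secs, 1e-9):.0f}x")
```

On a typical laptop the vectorised call wins by one to two orders of magnitude; Mojo’s pitch is to deliver that style of native, SIMD-friendly code generation without leaving Python-like syntax.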

Here’s a sneak peek at some of the advantages:

  • Multi-Paradigm Muscle: Mojo is a multi-paradigm language, offering the flexibility of imperative, functional, and generic programming styles. This allows developers to choose the most efficient approach for each specific task within their AI project.
  • Seamless Python Integration: Don’t worry about throwing away your existing Python code. Mojo plays nicely with the vast Python ecosystem, allowing you to leverage existing libraries and seamlessly integrate them into your Mojo projects.
  • Expressive Syntax: If you’re familiar with Python, you’ll feel right at home with Mojo’s syntax. It builds upon the familiar Python base, making the learning curve much smoother for experienced developers.

The Future of AI Development is Here

If you’re looking to push the boundaries of AI and machine learning, then Mojo is a game-changer you can’t afford to miss. With several versions already released, including the most recent update in March 2024 (version 0.7.2), the language is constantly evolving and incorporating valuable community feedback.

Dive into the open-source community, explore the comprehensive documentation, and unleash the power of Mojo on your next groundbreaking project. The future of AI is here, and it’s moving at breakneck speed with Mojo leading the charge! Go ahead and get it here.

One-Trick Pony

Just be warned that Mojo is not general-purpose in nature, and Python will win hands down on generic computational tasks for the following reasons:

  • Libraries –
    • Python boasts an extensive ecosystem of libraries and frameworks, such as TensorFlow, NumPy, Pandas, and PyTorch, with over 137,000 libraries.
    • Mojo has a developing library ecosystem but significantly lags behind Python in this regard.
  • Compatibility and Integration –
    • Python is known for its compatibility and integration with various programming languages and third-party packages, making it flexible for projects with complex dependencies.
    • Mojo, while generally interoperable with Python, falls short in terms of integration and compatibility with other tools and languages.
  • Popularity (availability of developers) –
    • Python is a highly popular programming language with a large community of developers and data scientists.
    • Mojo, introduced in 2023, has a much smaller community.
    • It has only just been open-sourced, has limited documentation, and is targeted at developers with systems programming experience.
    • According to the TIOBE Programming Community Index, a measure of programming language popularity, Python consistently holds the top position.
    • In contrast, Mojo is currently ranked 174th and has a long way to go.

Mastering Cyber Defense: The Impact Of AI & ML On Security Strategies

The cybersecurity landscape is a relentless battlefield. Attackers are constantly innovating, churning out new threats at an alarming rate. Traditional security solutions are struggling to keep pace. But fear not, weary defenders! Artificial Intelligence (AI) and Machine Learning (ML) are emerging as powerful weapons in our arsenal, offering the potential to revolutionize cybersecurity.

The Numbers Don’t Lie: Why AI/ML Matters

  • Security Incidents on the Rise: According to the IBM Security X-Force Threat Intelligence Index 2023 https://www.ibm.com/reports/threat-intelligence, the average organization experienced 270 cyberattacks in 2022, a staggering 13% increase from the previous year.
  • Alert Fatigue is Real: Security analysts are bombarded with a constant stream of alerts, often leading to “alert fatigue” and missed critical threats. A study by the Ponemon Institute found that it takes an average of 280 days to identify and contain a security breach https://www.ponemon.org/.

AI/ML to the Rescue: Current Applications

AI and ML are already making a significant impact on cybersecurity:

  • Reverse Engineering Malware with Speed: AI can disassemble and analyze malicious code at lightning speed, uncovering its functionalities and vulnerabilities much faster than traditional methods. This allows defenders to understand attacker tactics and develop effective countermeasures before widespread damage occurs.
  • Prioritizing the Vulnerability Avalanche: Legacy vulnerability scanners often generate overwhelming lists of potential weaknesses. AI can prioritize these vulnerabilities based on exploitability and potential impact, allowing security teams to focus their efforts on the most critical issues first. A study by McAfee found that organizations can reduce the time to patch critical vulnerabilities by up to 70% using AI https://www.mcafee.com/blogs/internet-security/the-what-why-and-how-of-ai-and-threat-detection/.
  • Security SIEMs Get Smarter: Security Information and Event Management (SIEM) systems ingest vast amounts of security data. AI can analyze this data in real-time, correlating events and identifying potential threats with an accuracy far exceeding human capabilities. This significantly improves threat detection accuracy and reduces the time attackers have to operate undetected within a network.
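
To make that concrete, here is a minimal, hypothetical Python sketch (assuming scikit-learn and an invented three-feature representation of login events, not any particular SIEM’s implementation) of the unsupervised scoring such systems build on: train on a baseline of normal events, then rank new events by how anomalous they look.

```python
# Toy anomaly scoring for synthetic "login events"; not a real SIEM pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented features per event: [hour of day, failed logins, MB transferred].
normal_events = np.column_stack([
    rng.normal(13, 3, 500),   # activity clustered around business hours
    rng.poisson(1, 500),      # a couple of failed logins is routine
    rng.normal(20, 5, 500),   # modest outbound data transfer
])
suspicious_event = np.array([[3.0, 40.0, 500.0]])  # 3 a.m., 40 failures, 500 MB

# Fit on the baseline; lower decision scores mean "more anomalous".
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

print("typical event score:   ", model.decision_function(normal_events[:1])[0])
print("suspicious event score:", model.decision_function(suspicious_event)[0])
```

In production the hard part is feature engineering and keeping the false-positive rate low enough to reduce alert fatigue rather than add to it; the model itself is the easy part.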

The Future of AI/ML in Cybersecurity: A Glimpse Beyond

As AI and ML technologies mature, we can expect even more transformative applications:

  • Context is King: AI can be trained to understand the context of security events, considering user behaviour, network activity, and system configurations. This will enable highly sophisticated threat detection and prevention capabilities, automatically adapting to new situations and attacker tactics.
  • Automating Security Tasks: Imagine a future where AI automates not just vulnerability scanning, but also incident response, patch management, and even threat hunting. This would free up security teams to focus on more strategic initiatives and significantly improve overall security posture.

Challenges and Considerations: No Silver Bullet

While AI/ML offers immense potential, it’s important to acknowledge the challenges:

  • Explainability and Transparency: AI models can sometimes make decisions that are difficult for humans to understand. This lack of explainability can make it challenging to trust and audit AI-powered security systems. Security teams need to ensure they understand how AI systems reach conclusions and that these conclusions are aligned with overall security goals.
  • Data Quality and Bias: The effectiveness of AI/ML models heavily relies on the quality of the data they are trained on. Biased data can lead to biased models that might miss certain threats or flag legitimate activity as malicious. Security teams need to ensure their training data is diverse and unbiased to avoid perpetuating security blind spots.

The Takeaway: Embrace the Future

Security practitioners and engineers are at the forefront of adopting and shaping AI/ML solutions. By understanding the current applications, future potential, and the associated challenges, you can ensure that AI becomes a powerful ally in your cybersecurity arsenal. Embrace AI/ML, and together we can build a more secure future!

#AI #MachineLearning #Cybersecurity #ThreatDetection #SecurityAutomation

P.S. Check out these resources to learn more:

NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0): https://www.nist.gov/itl/ai-risk-management-framework by the National Institute of Standards and Technology (NIST)

The Fork in the Road: The Curveball that Redis Pitched

In a move announced on March 20th, 2024, Redis, the ubiquitous in-memory data store, sent shockwaves through the tech world with a significant shift in its licensing model. Previously boasting a permissive BSD license, Redis transitioned to a dual-license approach, combining the Redis Source Available License (RSAL) and the Server Side Public License (SSPL). This move, while strategic for Redis Labs, has created ripples of concern in the SaaS ecosystem and the open-source community at large.

The Split: From Open to Source-Available

At its core, the change restricts how users, particularly cloud providers offering managed Redis services, can leverage the software commercially. The SSPL, outlined in the March 20th press release, stipulates that any derivative work offering the “same functionality as Redis” as a service must also be open-sourced. This directly impacts companies like Amazon (ElastiCache) and DigitalOcean, forcing them to potentially alter their service models or acquire commercial licenses from Redis Labs.

A History of Licensing Shifts

This isn’t the first time Redis Labs has ruffled feathers with licensing changes. As a 2019 TechCrunch article [1] highlights, Redis Labs has a history of tweaking its open-source license, sparking similar controversies. Back then, the company argued that cloud providers were profiting from Redis without giving back to the open-source community. The new SSPL appears to be an extension of this philosophy, aiming to compel greater contribution from commercial users.

SaaS Providers in a Squeeze

For SaaS providers, the new licensing throws a wrench into established business models. Modifying core functionality to comply with the SSPL might not be feasible, and open-sourcing their entire platform could expose proprietary code. This could lead to increased costs for SaaS companies, potentially impacting end-user pricing.

Open Source Community Divided

The open-source world is also grappling with the implications. While the core Redis functionality remains open-source under RSAL, the philosophical shift towards a more restrictive model has some worried. The Linux Foundation even announced a fork, Valkey, as an alternative, backed by tech giants like Google and Oracle. This fragmentation could create confusion and slow down innovation within the open-source Redis ecosystem.

The Road Ahead: Uncertainty and Innovation

The long-term effects of Redis’s licensing change remain to be seen. It might pave the way for a new model for open-source software sustainability, where companies can balance community development with commercial viability. However, it also raises concerns about control and potential fragmentation within open-source projects.

In conclusion, Redis’s licensing shift presents a complex scenario. While it aims to secure Redis Labs’ financial future, it disrupts the SaaS landscape and creates uncertainty in the open-source world. Only time will tell if this is a necessary evolution or a roadblock to future innovation.

References & Further Reading:

Bitnami