Author: Ramkumar Sundarakalatharan

The Future is Now: How MojođŸ”„ is Outpacing Python at 90000X Speed

Calling all AI wizards and machine learning mavericks! Get ready to be blown away by Mojo, a revolutionary new programming language designed specifically to conquer the ever-evolving realm of artificial intelligence.

Just last year, Modular Inc. unveiled Mojo, and it’s already making waves. But here’s the real kicker: Mojo isn’t just another language; it’s a “hypersonic” language on a mission to leave the competition in the dust. We’re talking about a staggering 90,000 times faster than the ever-popular Python! A minor disclaimer: this is not an “official” benchmark from The Computer Language Benchmarks Game or any other institution; it is Modular’s own internal benchmarking.

That’s right, say goodbye to hours of agonizing wait times while your AI models train. With Mojo, you’ll be churning out cutting-edge algorithms at lightning speed. Imagine the possibilities! Faster development cycles, quicker iterations, and the ability to tackle even more complex AI projects – the future is wide open.

Mind-Blowing Speed and an Engaged Community

But speed isn’t the only thing Mojo has to boast about. Launched in August 2023, the language (whose standard library was open-sourced just last month, on March 29th, 2024) has already amassed a loyal following, surpassing a whopping 17,000 stars on its GitHub repository. That’s a serious testament to the developer community’s excitement about Mojo’s potential.

The momentum continues to build. As of today, there are over 2,500 active projects on GitHub utilizing Mojo, showcasing its rapid adoption within the AI development space.

Unveiling the Magic Behind Mojo

So, what’s the secret sauce behind Mojo’s mind-blowing performance? The folks at Modular Inc. are keeping some of the details close to their chest, but we do know that Mojo is built from the ground up for AI applications. This means it leverages advancements in compiler technology and hardware acceleration, specifically targeting the kinds of work AI developers face every day (SIMD, vectorisation, and parallelisation).

Here’s a sneak peek at some of the advantages:

  • Multi-Paradigm Muscle: Mojo is a multi-paradigm language, offering the flexibility of imperative, functional, and generic programming styles. This allows developers to choose the most efficient approach for each specific task within their AI project.
  • Seamless Python Integration: Don’t worry about throwing away your existing Python code. Mojo plays nicely with the vast Python ecosystem, allowing you to leverage existing libraries and seamlessly integrate them into your Mojo projects.
  • Expressive Syntax: If you’re familiar with Python, you’ll feel right at home with Mojo’s syntax. It builds upon the familiar Python base, making the learning curve much smoother for experienced developers.

The Future of AI Development is Here

If you’re looking to push the boundaries of AI and machine learning, then Mojo is a game-changer you can’t afford to miss. With several versions already released, including the most recent update in March 2024 (version 0.7.2), the language is constantly evolving and incorporating valuable community feedback.

Dive into the open-source community, explore the comprehensive documentation, and unleash the power of Mojo on your next groundbreaking project. The future of AI is here, and it’s moving at breakneck speed with Mojo leading the charge! Go ahead and get it here

One Trick Pony

Just be warned that Mojo is not general-purpose in nature, and Python will win hands down on generic computational tasks for the following reasons:

  • Libraries –
    • Python boasts an extensive ecosystem of libraries and frameworks, such as TensorFlow, NumPy, Pandas, and PyTorch, with over 137,000 libraries.
    • Mojo has a developing library ecosystem but significantly lags behind Python in this regard.
  • Compatibility and Integration –
    • Python is known for its compatibility and integration with various programming languages and third-party packages, making it flexible for projects with complex dependencies.
    • Mojo, while generally interoperable with Python, falls short in terms of integration and compatibility with other tools and languages.
  ‱ Popularity (availability of developers) –
    • Python is a highly popular programming language with a large community of developers and data scientists.
    • Mojo, being introduced in 2023, has a much smaller community and popularity compared to Python.
    ‱ It has only just been open-sourced, has limited documentation, and is targeted at developers with systems programming experience.
    • According to the TIOBE Programming Community Index, a programming language popularity index, Python consistently holds the top position.
    • In contrast, Mojo is currently ranked 174th and has a long way to go.

Mastering Cyber Defense: The Impact Of AI & ML On Security Strategies

The cybersecurity landscape is a relentless battlefield. Attackers are constantly innovating, churning out new threats at an alarming rate. Traditional security solutions are struggling to keep pace. But fear not, weary defenders! Artificial Intelligence (AI) and Machine Learning (ML) are emerging as powerful weapons in our arsenal, offering the potential to revolutionize cybersecurity.

The Numbers Don’t Lie: Why AI/ML Matters

  • Security Incidents on the Rise: According to the IBM Security X-Force Threat Intelligence Index 2023 https://www.ibm.com/reports/threat-intelligence, the average organization experienced 270 data breaches in 2022, a staggering 13% increase from the previous year.
  • Alert Fatigue is Real: Security analysts are bombarded with a constant stream of alerts, often leading to “alert fatigue” and missed critical threats. A study by the Ponemon Institute found that it takes an average of 280 days to identify and contain a security breach https://www.ponemon.org/.

AI/ML to the Rescue: Current Applications

AI and ML are already making a significant impact on cybersecurity:

  • Reverse Engineering Malware with Speed: AI can disassemble and analyze malicious code at lightning speed, uncovering its functionalities and vulnerabilities much faster than traditional methods. This allows defenders to understand attacker tactics and develop effective countermeasures before widespread damage occurs.
  ‱ Prioritizing the Vulnerability Avalanche: Legacy vulnerability scanners often generate overwhelming lists of potential weaknesses. AI can prioritize these vulnerabilities based on exploitability and potential impact, allowing security teams to focus their efforts on the most critical issues first (a toy prioritisation sketch follows this list). A study by McAfee found that organizations can reduce the time to patch critical vulnerabilities by up to 70% using AI https://www.mcafee.com/blogs/internet-security/the-what-why-and-how-of-ai-and-threat-detection/.
  • Security SIEMs Get Smarter: Security Information and Event Management (SIEM) systems ingest vast amounts of security data. AI can analyze this data in real-time, correlating events and identifying potential threats with an accuracy far exceeding human capabilities. This significantly improves threat detection accuracy and reduces the time attackers have to operate undetected within a network.
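
To make the prioritisation idea concrete, here is a toy sketch in Go of the kind of scoring such a tool might apply. The field names and weights are illustrative assumptions only; a real ML-based prioritiser would learn its weighting from historical exploit and asset data rather than hard-coding it.

package main

import (
	"fmt"
	"sort"
)

// Vulnerability is an illustrative record; real scanners expose far richer data.
type Vulnerability struct {
	ID               string
	CVSS             float64 // base severity, 0-10
	ExploitObserved  bool    // exploitation seen in the wild
	AssetCriticality float64 // 0-1, how important the affected asset is
}

// score combines severity, exploitability and asset impact into one number.
// The weights here are assumptions purely for illustration; an ML model would
// learn them from historical exploit data instead.
func score(v Vulnerability) float64 {
	s := v.CVSS * v.AssetCriticality
	if v.ExploitObserved {
		s *= 2 // actively exploited issues jump the queue
	}
	return s
}

func main() {
	vulns := []Vulnerability{
		{ID: "CVE-A", CVSS: 9.8, ExploitObserved: false, AssetCriticality: 0.2},
		{ID: "CVE-B", CVSS: 7.5, ExploitObserved: true, AssetCriticality: 0.9},
		{ID: "CVE-C", CVSS: 5.3, ExploitObserved: false, AssetCriticality: 0.5},
	}
	// Sort descending by score so the riskiest items are patched first.
	sort.Slice(vulns, func(i, j int) bool { return score(vulns[i]) > score(vulns[j]) })
	for _, v := range vulns {
		fmt.Printf("%s  score=%.1f\n", v.ID, score(v))
	}
}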

The Future of AI/ML in Cybersecurity: A Glimpse Beyond

As AI and ML technologies mature, we can expect even more transformative applications:

  • Context is King: AI can be trained to understand the context of security events, considering user behaviour, network activity, and system configurations. This will enable highly sophisticated threat detection and prevention capabilities, automatically adapting to new situations and attacker tactics.
  • Automating Security Tasks: Imagine a future where AI automates not just vulnerability scanning, but also incident response, patch management, and even threat hunting. This would free up security teams to focus on more strategic initiatives and significantly improve overall security posture.

Challenges and Considerations: No Silver Bullet

While AI/ML offers immense potential, it’s important to acknowledge the challenges:

  • Explainability and Transparency: AI models can sometimes make decisions that are difficult for humans to understand. This lack of explainability can make it challenging to trust and audit AI-powered security systems. Security teams need to ensure they understand how AI systems reach conclusions and that these conclusions are aligned with overall security goals.
  • Data Quality and Bias: The effectiveness of AI/ML models heavily relies on the quality of the data they are trained on. Biased data can lead to biased models that might miss certain threats or flag legitimate activity as malicious. Security teams need to ensure their training data is diverse and unbiased to avoid perpetuating security blind spots.

The Takeaway: Embrace the Future

Security practitioners and engineers are at the forefront of adopting and shaping AI/ML solutions. By understanding the current applications, future potential, and the associated challenges, you can ensure that AI becomes a powerful ally in your cybersecurity arsenal. Embrace AI/ML, and together we can build a more secure future!

#AI #MachineLearning #Cybersecurity #ThreatDetection #SecurityAutomation

P.S. Check out these resources to learn more:

NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0): https://www.nist.gov/itl/ai-risk-management-framework by the National Institute of Standards and Technology (NIST)

The Fork in the Road: The Curveball that Redis Pitched

In a move announced on March 20th, 2024, Redis, the ubiquitous in-memory data store, sent shockwaves through the tech world with a significant shift in its licensing model. Previously boasting a permissive BSD license, Redis transitioned to a dual-license approach, combining the Redis Source Available License (RSAL) and the Server Side Public License (SSPL). This move, while strategic for Redis Labs, has created ripples of concern in the SAAS ecosystem and the open-source community at large.

The Split: From Open to Source-Available

At its core, the change restricts how users, particularly cloud providers offering managed Redis services, can leverage the software commercially. The SSPL, outlined in the March 20th press release, stipulates that any derivative work offering the “same functionality as Redis” as a service must also be open-sourced. This directly impacts companies like Amazon (ElastiCache) and DigitalOcean, forcing them to potentially alter their service models or acquire commercial licenses from Redis Labs.

A History of Licensing Shifts

This isn’t the first time Redis Labs has ruffled feathers with licensing changes. As a 2019 TechCrunch article [1] highlights, Redis Labs has a history of tweaking its open-source license, sparking similar controversies. Back then, the company argued that cloud providers were profiting from Redis without giving back to the open-source community. The new SSPL appears to be an extension of this philosophy, aiming to compel greater contribution from commercial users.

SAAS Providers in a Squeeze

For SAAS providers, the new licensing throws a wrench into established business models. Modifying core functionality to comply with the SSPL might not be feasible, and open-sourcing their entire platform could expose proprietary code. This could lead to increased costs for SAAS companies, potentially impacting end-user pricing.

Open Source Community Divided

The open-source world is also grappling with the implications. While the core Redis functionality remains open-source under RSAL, the philosophical shift towards a more restrictive model has some worried. The Linux Foundation even announced a fork, Valkey, as an alternative, backed by tech giants like Google and Oracle. This fragmentation could create confusion and slow down innovation within the open-source Redis ecosystem.

The Road Ahead: Uncertainty and Innovation

The long-term effects of Redis’s licensing change remain to be seen. It might pave the way for a new model for open-source software sustainability, where companies can balance community development with commercial viability. However, it also raises concerns about control and potential fragmentation within open-source projects.

In conclusion, Redis’s licensing shift presents a complex scenario. While it aims to secure Redis Labs’ financial future, it disrupts the SAAS landscape and creates uncertainty in the open-source world. Only time will tell if this is a necessary evolution or a roadblock to future innovation.

References & Further Reading:

From Soaring High to Stalling Out: How Boeing Lost Its Engineering Edge

The world’s largest aerospace conglomerate turns 108 this year. Boeing’s first plane, the Boeing Model 1, officially took off on 15 July 1916, when Wong Tsu (a Chinese graduate of MIT) completed its construction at the Heath Shipyard. As of September 2023, a total of 78,000 aircraft have rolled out of Boeing factories (excluding license-produced models built elsewhere), spanning 500+ unique aircraft designs across civilian, military, concept, prototype, and experimental categories. Boeing really used to be a powerhouse of aviation technology.

Boeing, once synonymous with aviation innovation, has hit turbulence in recent years. The company’s gradual decline can be traced to a shift in focus, prioritising short-term profits over the long-term commitment to hardcore engineering excellence that built its reputation.

P&L trend of Boeing over the last 10 years (infographic source: Statista)

A Legacy of Innovation Tarnished

Boeing’s history is a testament to American ingenuity. From the iconic 747 “Jumbo Jet” revolutionising passenger travel (over 1,500 delivered) to the technologically advanced 787 Dreamliner boasting superior fuel efficiency (over 1,700 ordered) [1], the company consistently pushed the boundaries of aerospace engineering. However, a gradual cultural shift began prioritising financial goals over engineering rigour. A Harvard Business Review article [2] highlights the pressure placed on engineers to meet aggressive deadlines and cost-cutting measures, potentially contributing to the tragic crashes of the 737 MAX aircraft. IMHO, the Boeing engineering disaster has its roots in Jack Welch’s deeply flawed management doctrines, which his acolytes spread across American industry.

Lost Market Share and a Bleak Future

This shift in priorities has had significant financial consequences. The 737 MAX grounding, coupled with production delays of the 787 Dreamliner, significantly eroded Boeing’s market share. In the single-aisle passenger jet market, the crown jewel of commercial aviation, Airbus, Boeing’s main competitor, now holds a commanding lead of over 60% [3]. While Boeing struggles with a backlog of unfulfilled orders (around 4,000), Airbus boasts a healthier backlog exceeding 7,000 aircraft [4]. This translates to a stark difference in profitability. In 2023, Airbus reported a net profit of €4.2 billion ($4.5 billion) compared to Boeing’s net loss of $3.7 billion.

Examples of Lost Focus:

  • 737 MAX: The faulty design and subsequent crashes of the 737 MAX (over 100 undelivered orders due to grounding) exposed a culture that prioritised speed to market over thorough engineering review.
  • 787 Dreamliner: Production problems with the Dreamliner, including issues with electrical wiring and fuselage construction (hundreds of delayed deliveries), further eroded trust in Boeing’s manufacturing capabilities.
  ‱ X-32 JSF: The loss of the JSF contract to Lockheed Martin in 2001 was a major blow to Boeing, as it represented the most important international fighter aircraft project since the Lightweight Fighter program competition of the 1960s.

Can Boeing Recover?

The road to recovery for Boeing will be long and arduous. Rebuilding trust with airlines and passengers will require a renewed commitment to safety and engineering excellence. This may involve significant changes in leadership and corporate culture, prioritizing long-term sustainability over short-term gains.

Boeing’s story serves as a cautionary tale for any company. While financial goals are important, sacrificing core values and engineering expertise can lead to devastating consequences. The future of this aviation giant remains uncertain, but one thing is clear: regaining its former glory will require a return to the principles that made it great in the first place.

References and Further Reading:

Understanding The Implications Of The Data Breaches At Microsoft.

Note: I started this article last weekend to try and explain the attack path “Midnight Blizzard” used and what Azure admins should do to protect themselves from a similar attack. Unfortunately, I couldn’t complete and publish it in time, and now there is another breach at Microsoft. (đŸ€ŠđŸż) So I have had to completely redraft it, changing the focus to a summary of data breaches at Microsoft and a walkthrough of the current breach. I will publish the Midnight Blizzard defence later this week.
Microsoft Data Breach

The Timeline of the Breaches

  • 20th-25th September 2023: 60k State Department Emails Stolen in Microsoft Breach
  • 12th-25th January 2024: Microsoft breached by “Nation-State Actors”
  • 11th-14th February 2024: State-backed APTs are weaponising OpenAI models 
  • 16th-19th February 2024: Microsoft admits to security issues with Azure and Exchange servers.
| Date/Month | Breach Type | Affected Service/Area | Source |
| --- | --- | --- | --- |
| February 2024 | Zero-day vulnerabilities in Exchange servers | Exchange servers | Microsoft Security Response Center blog |
| January 2024 | Nation-state-sponsored attack (Russia) | Email accounts | Microsoft Security Response Center blog |
| February 2024 | State-backed APTs weaponising OpenAI models | Not directly impacting MS services | – |
| July 2023 | Chinese hackers breach U.S. agencies via Microsoft Cloud | Azure | The New York Times, Microsoft Security Response Center blog |
| October 2022 | BlueBleed data leak, 0.5 million users’ data leaked | User data | – |
| March 2022 | Lapsus$ intrusion | Source code (Bing, Cortana) | The Guardian, Reuters |
| March 2021 | Hafnium attacks on Exchange servers | Exchange servers | Microsoft Security Response Center blog |
| December 2020 | SolarWinds supply chain attack | Various Microsoft products (indirectly affected) | The New York Times, Reuters |
| January 2020 | Misconfigured customer support database | Customer data (names, email addresses) | ZDNet |

This is a high-level summary of breaches and successful hacks that were reported in the public domain and picked up by tier-1 publications. There are at least a dozen more in this period; some are of negligible impact, and others are less well substantiated.

Introduction:

Today, the digital landscape is a battlefield, and even tech giants like Microsoft aren’t immune to cyberattacks. Understanding recent breaches and incidents, their root causes, and effective defence strategies is crucial for infosec, IT, and DevSecOps teams navigating this ever-evolving threat landscape. This blog post dives into the security incidents affecting Microsoft, analyzes potential attack paths, and equips you with actionable defence plans to fortify your infrastructure and network.

Selected Breaches:

  ‱ January 2024: State actors, purported to be affiliated with Russia, leveraged password spraying and compromised email accounts, including those of senior leadership. This highlights the vulnerability of weak passwords and the critical need for multi-factor authentication (MFA).
  ‱ February 2024: Zero-day vulnerabilities in Exchange servers allowed attackers to escalate privileges. This emphasizes the importance of regular patching and prompt updates to address vulnerabilities before they’re exploited.
  ‱ March 2022: The Lapsus$ group gained access to source code due to misconfigured access controls. This underscores the importance of least-privilege access and regularly reviewed security configurations.
  ‱ Other incidents: Supply chain attacks (SolarWinds, December 2020) and data leaks (customer support database, January 2020) demonstrate the diverse threats organizations face.

Attack Paths:

Understanding attacker motivations and methods is key to building effective defences. Here are common attack paths:

  • Social Engineering: Phishing emails and deceptive tactics trick users into revealing sensitive information or clicking malicious links.
  • Software Vulnerabilities: Unpatched software with known vulnerabilities offers attackers an easy entry point.
  ‱ Weak Passwords: Simple passwords are easily cracked, granting access to accounts and systems; password spraying, in particular, is illustrated in the sketch after this list.
  • Misconfigured Access Controls: Overly permissive access rules give attackers more power than necessary to escalate privileges and cause damage.
  • Supply Chain Attacks: Compromising a vendor or partner can grant attackers access to multiple organizations within the supply chain.
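
To illustrate the password-spraying path named above, here is a minimal detection sketch in Go. The log format and threshold are assumptions for illustration; production detectors work over SIEM data with far richer context.

package main

import "fmt"

// FailedLogin is an illustrative, simplified auth-log event.
type FailedLogin struct {
	SourceIP string
	Account  string
}

// detectSpray flags source IPs that fail logins against many *distinct*
// accounts - the signature of password spraying (few attempts per account,
// spread across many accounts). The threshold is an assumption.
func detectSpray(events []FailedLogin, threshold int) []string {
	accountsPerIP := map[string]map[string]bool{}
	for _, e := range events {
		if accountsPerIP[e.SourceIP] == nil {
			accountsPerIP[e.SourceIP] = map[string]bool{}
		}
		accountsPerIP[e.SourceIP][e.Account] = true
	}
	var suspects []string
	for ip, accounts := range accountsPerIP {
		if len(accounts) >= threshold {
			suspects = append(suspects, ip)
		}
	}
	return suspects
}

func main() {
	events := []FailedLogin{
		{"203.0.113.7", "alice"}, {"203.0.113.7", "bob"},
		{"203.0.113.7", "carol"}, {"203.0.113.7", "dave"},
		{"198.51.100.2", "alice"}, // a single mistyped password, not a spray
	}
	fmt.Println("suspected spray sources:", detectSpray(events, 3))
}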

Defence Plans:

Building a robust defense requires a multi-layered approach:

  • Patch Management: Prioritize timely patching of vulnerabilities across all systems and software.
  • Strong Passwords & MFA: Implement strong password policies and enforce MFA for all accounts.
  • Access Control Management: Implement least privilege access and regularly review configurations.
  • Security Awareness Training: Educate employees on phishing, social engineering, and secure password practices.
  • Threat Detection & Response: Deploy security tools to monitor systems for suspicious activity and respond promptly to incidents.
  • Incident Response Planning: Develop and test a plan to mitigate damage, contain breaches, and recover quickly.
  • Penetration Testing: Regularly test your defenses by simulating real-world attacks to identify and fix vulnerabilities before attackers do.
  • Network Segmentation: Segment your network to limit the potential impact of a breach by restricting access to critical systems.
  • Data Backups & Disaster Recovery: Regularly back up data and have a plan to restore it in case of an attack or outage.
  • Stay Informed: Keep up-to-date on the latest security threats and vulnerabilities by subscribing to security advisories and attending industry conferences.

Conclusion:

Cybersecurity is an ongoing battle, but by understanding the tactics employed by attackers and implementing these defence strategies, IT/DevOps admins can significantly reduce the risk of breaches and protect their networks and data. Remember, vigilance and continuous improvement are key to staying ahead of the curve in the ever-evolving cybersecurity landscape.

Disclaimer: This blog post is for informational purposes only and should not be considered professional security advice. Please consult with a qualified security professional for guidance specific to your organization or mail me for an obligation free consultation call.

References and Further Reading:

What Makes SM2 Encryption Special? China’s Recommended Algorithm

This article is intended for security enthusiasts, or otherwise for people with an advanced understanding of cryptography and some programming. I have tried to give some background theory and a very basic implementation.

Are there backdoors in AES and what is China’s response to it?

The US NIST has been pushing AES as the standard for symmetric-key encryption. However, many luminaries in cryptographic research, as well as industry observers, suspect it of possibly pushing a cipher with an NSA/GCHQ backdoor. For Chinese entities (government or commercial), the ShāngMÏ (SM) series of ciphers provides alternatives. The SM standards provide a family of algorithms covering the entire gamut of things that RSA or AES is expected to do. They include SM2 (elliptic-curve public-key cryptography), SM3 (a cryptographic hash function), SM4 (a block cipher), and SM9 (identity-based cryptography).

SM4 was developed by LĂŒ Shuwang in 2007 and became a national standard (GB/T 32907–2016) in 2016 [RFC 8998].

Elliptic Curve Cryptography (ECC)

ECC is one of the most prevalent approaches to public-key cryptography, along with Diffie–Hellman, RSA, and YAK.

Public-key Cryptography

Public-key cryptography relies on the generation of two keys:

  • one private key which must remain private
  • one public key which can be shared with the world

It is computationally infeasible to derive a private key from its public key (it would take more than centuries to compute, assuming a workable quantum computer remains out of reach with existing materials science). It is, however, possible to prove possession of a private key without disclosing it. This proof can be verified using the corresponding public key, and it is called a digital signature.

High-level Functions

ECC can perform signature and verification of messages (authenticity). ECC can also perform encryption and decryption (confidentiality), though not directly: for encryption/decryption it needs the help of a shared secret, i.e. a key.

It achieves the same level of security as RSA (Rivest-Shamir-Adleman), the traditional public-key algorithm, using substantially shorter key sizes. This reduction translates into lower processing requirements and reduced storage demands. For instance, an ECC 256-bit key provides comparable security to an RSA 3072-bit key.

For brevity’s sake, I’d refer you to Hans Knutson’s very well-explained article on Hacker Noon.

Theory Summary: A Look Inside SM2 Key Generation

This section aims to offer a simplified understanding of different parameters found in SM2 libraries and their corresponding meanings, drawing inspiration from the insightful guides by Hans Knutson on Hacker Noon and Svetlin Nakov’s CryptoBook. (links in the reference section)

Comparing RSA and ECC Key Generation:

  • RSA: Based on prime number factorization.
    • Private key: Composed of two large prime numbers (p and q).
    • Public key: Modulus (m) obtained by multiplying p and q (m = p * q).
    • Key size: Determined by the number of bits in modulus (m).
    • Difficulty: Decomposing m back into p and q is computationally intensive.
  • ECC: Leverages the discrete logarithm of elliptic curve elements.
    • Elliptic curve: Defined as the set of points (x, y) satisfying the equation y^2 = x^3 + ax + b.
    • Example: Bitcoin uses the curve secp256k1 with the equation y^2 = x^3 + 7.
    • Point addition: Defined operation on points of the curve.

Key Generation in SM2:

  1. Domain parameters:
    • A prime field p of 256 bits.
    • An elliptic curve E defined within the field p.
    • A base point G on the curve E.
    • Order n of G, representing the number of points in the subgroup generated by G.
  2. Private key:
    • Randomly chosen integer d (1 < d < n).
  3. Public key:
    • Point Q = d * G.

Understanding Parameters:

  • Prime field p: Defines the mathematical space where the curve operates.
  • Elliptic curve E: Provides a structure for performing cryptographic operations.
  • Base point G: Serves as a starting point for generating other points on the curve.
  • Order n: Represents the number of points in the subgroup generated by G, which dictates the security level of the scheme.
  • Private key d: Secret integer randomly chosen within a specific range.
  • Public key Q: Point obtained by multiplying the private key d with the base point G.

Visualization:

Imagine a garden with flowers planted on specific points (x, y) satisfying a unique equation. This garden represents the elliptic curve E. You have a special key (d) that allows you to move around the garden and reach a specific flower (Q) using a defined path. Each step on this path is determined by the base point G. While anyone can see the flower (Q), only you have the knowledge of the path (d) leading to it, thus maintaining confidentiality.

This analogy provides a simplified picture of key generation in SM2, illustrating the interplay between different parameters and their cryptographic significance.
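
To make the Q = d * G relationship concrete, here is a minimal Go sketch of SM2-style key generation using the SM2 curve from the tjfoc/gmsm library that appears later in this article. Treat it as an illustration of the maths; for real keys you would use the library’s own key-generation helpers.

package main

import (
	"crypto/rand"
	"fmt"
	"math/big"

	"github.com/tjfoc/gmsm/sm2"
)

func main() {
	curve := sm2.P256Sm2() // the SM2 recommended 256-bit curve
	n := curve.Params().N  // order n of the base point G

	// Private key: a random integer d in [1, n-1].
	d, err := rand.Int(rand.Reader, new(big.Int).Sub(n, big.NewInt(1)))
	if err != nil {
		panic(err)
	}
	d.Add(d, big.NewInt(1))

	// Public key: the point Q = d * G (scalar multiplication of the base point).
	qx, qy := curve.ScalarBaseMult(d.Bytes())

	fmt.Printf("private d: %x\n", d)
	fmt.Printf("public Q:  (%x, %x)\n", qx, qy)
}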

Diving Deeper into SM2/SM3/SM4 Integration with Golang

This section focuses on the integration of the Chinese cryptographic standards SM2, SM3, and SM4 into Golang applications. It details the process of porting Java code to Golang and the specific challenges encountered.

Open-Source Implementations:

  • GmSSL: Main open-source implementation of SM2/SM3/SM4, stands for “Guomi.”
  • Other implementations: gmsm (Golang), gmssl (Python), CFCA SADK (Java).

Porting Java Code to Golang:

  • Goal: Reverse-engineer the usage of CFCA SADK in Java code and adapt the corresponding functionality in Golang using gmsm.
  • Approach:
    • Hashing (SM3) and encryption (SM4) algorithms were directly ported using equivalent functions across languages.
    • Security operations added to a classic REST API POST required specific attention.
    • Step 1:
      • Original parameters are concatenated in alphabetical order.
      • API key is appended.
      • The combined string is hashed using SM3.
      ‱ The resulting hash is added as an additional POST parameter (see the sketch after this list).
    • Step 2:
      • Original parameters are concatenated in alphabetical order.
      • The signature is generated using SM2.
      • Challenge: Golang library lacked PKCS7 formatting support for signatures, only supporting American standards.
      • Solution: Modification of the Golang library to support PKCS7 formatting for SM2 signatures.
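
As a sketch of Step 1 in Go (using the gmsm sm3 package), the request-hashing logic might look like the following; the exact concatenation format (“key=value” pairs joined with “&”) is an assumption, since the upstream API specification is not reproduced here.

package main

import (
	"encoding/hex"
	"fmt"
	"sort"
	"strings"

	"github.com/tjfoc/gmsm/sm3"
)

// signParams reproduces Step 1: concatenate the parameters in alphabetical
// order of their keys, append the API key, and SM3-hash the result. The
// concatenation format used here is an assumption for illustration.
func signParams(params map[string]string, apiKey string) string {
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var parts []string
	for _, k := range keys {
		parts = append(parts, k+"="+params[k])
	}
	payload := strings.Join(parts, "&") + apiKey

	return hex.EncodeToString(sm3.Sm3Sum([]byte(payload)))
}

func main() {
	params := map[string]string{"amount": "100", "currency": "CNY", "orderId": "42"}
	fmt.Println("hash param:", signParams(params, "my-api-key"))
}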

Response Processing:

  ‱ Response body is encrypted using SM4 with a key derived from the API key (a decryption sketch follows this list).
  • Response body includes both an SM3 hash and SM2 signature for verification.
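
A hedged Go sketch of the response-decryption step, using the gmsm sm4 package, is shown below; the CBC mode, the derivation of the SM4 key from the API key, and the padding handling are assumptions for illustration rather than the exact production scheme.

package main

import (
	"crypto/cipher"
	"fmt"

	"github.com/tjfoc/gmsm/sm3"
	"github.com/tjfoc/gmsm/sm4"
)

// decryptResponse decrypts an SM4-CBC ciphertext. Deriving the key by hashing
// the API key with SM3 and taking the first 16 bytes is an assumption here.
func decryptResponse(apiKey string, iv, ciphertext []byte) ([]byte, error) {
	key := sm3.Sm3Sum([]byte(apiKey))[:16] // SM4 uses a 128-bit key
	block, err := sm4.NewCipher(key)
	if err != nil {
		return nil, err
	}
	plaintext := make([]byte, len(ciphertext))
	cipher.NewCBCDecrypter(block, iv).CryptBlocks(plaintext, ciphertext)
	// PKCS#7 padding removal would follow here; verification of the SM3 hash
	// and SM2 signature on the decrypted body happens afterwards.
	return plaintext, nil
}

func main() {
	fmt.Println(decryptResponse("api-key", make([]byte, 16), make([]byte, 32)))
}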

Key Takeaways:

  • Porting cryptographic algorithms across languages requires careful consideration of specific functionalities.
  • Lack of standard support for specific formats (PKCS7 in this case) might necessitate library modification.
  • Integrating SM2/SM3/SM4 in Golang requires utilizing libraries like gmsm and potentially adapting them for specific needs.

Getting your Hands Dirty

Go to https://github.com/guanzhi/GmSSL/releases, download the version for your OS, and move it to your working directory.

1 - $ unzip GmSSL-master.zip    # or: tar -xvf GmSSL-master.tar.gz
2 - $ mkdir build
    $ cd build
    $ cmake ..
    $ make
    $ make test
    $ sudo make install
3 - $ gmssl version
    GmSSL 3.1.0 Dev
4 -
$ KEY=11223344556677881122334455667788
$ IV=11223344556677881122334455667788

$ echo hello | gmssl sm4 -cbc -encrypt -key $KEY -iv $IV -out sm4.cbc
$ gmssl sm4 -cbc -decrypt -key $KEY -iv $IV -in sm4.cbc

$ echo hello | gmssl sm4 -ctr -encrypt -key $KEY -iv $IV -out sm4.ctr
$ gmssl sm4 -ctr -decrypt -key $KEY -iv $IV -in sm4.ctr

$ echo -n abc | gmssl sm3
$ gmssl sm2keygen -pass 1234 -out sm2.pem -pubout sm2pub.pem
$ echo -n abc | gmssl sm3 -pubkey sm2pub.pem -id 1234567812345678
$ echo -n abc | gmssl sm3hmac -key 11223344556677881122334455667788

$ gmssl sm2keygen -pass 1234 -out sm2.pem -pubout sm2pub.pem

$ echo hello | gmssl sm2sign -key sm2.pem -pass 1234 -out sm2.sig #-id 1234567812345678
$ echo hello | gmssl sm2verify -pubkey sm2pub.pem -sig sm2.sig -id 1234567812345678

$ echo hello | gmssl sm2encrypt -pubkey sm2pub.pem -out sm2.der
$ gmssl sm2decrypt -key sm2.pem -pass 1234 -in sm2.der

$ gmssl sm2keygen -pass 1234 -out rootcakey.pem
$ gmssl certgen -C CN -ST Beijing -L Haidian -O PKU -OU CS -CN ROOTCA -days 3650 -key rootcakey.pem -pass 1234 -out rootcacert.pem -key_usage keyCertSign -key_usage cRLSign
$ gmssl certparse -in rootcacert.pem

How to Get Keys

The private key used for SM2 signing was provided to us, along with a passphrase for testing purposes. Of course, in production systems, the private key is generated and kept private. The file extension is .sm2; the first step was to make use of it.

It can be parsed with:

$ openssl asn1parse -in file.sm2

    0:d=0  hl=4 l= 802 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :01
    7:d=1  hl=2 l=  71 cons: SEQUENCE
    9:d=2  hl=2 l=  10 prim: OBJECT            :1.2.156.10197.6.1.4.2.1
   21:d=2  hl=2 l=   7 prim: OBJECT            :1.2.156.10197.1.104
   30:d=2  hl=2 l=  48 prim: OCTET STRING      [HEX DUMP]:8[redacted]7
   80:d=1  hl=4 l= 722 cons: SEQUENCE
   84:d=2  hl=2 l=  10 prim: OBJECT            :1.2.156.10197.6.1.4.2.1
   96:d=2  hl=4 l= 706 prim: OCTET STRING      [HEX DUMP]:308[redacted]249

The OID 1.2.156.10197.1.104 means SM4 Block Cipher. The OID 1.2.156.10197.6.1.4.2.1 simply means data.

.sm2 files are an ASN.1 structure encoded in DER and base64-ed. The ASN.1 structure contains (int, seq1, seq2). Seq1 contains the SM4-encrypted SM2 private key x. Seq2 contains the x509 cert of the corresponding SM2 public key (ECC coordinates (x,y) of the point X). From the private key x, it is also possible to get X=x‱P.

The x509 certificate is signed by CFCA, and the signature algorithm 1.2.156.10197.1.501 means SM2 Signing with SM3.

How to Sign with SM2

Now that the private key x is known, it is possible to use it to sign the concatenation of parameters and return the PKCS7 format expected.

As a reminder, the elliptic-curve Digital Signature Algorithm takes a random number k. This is why it is important to pass a random generator to the signing function. It also makes troubleshooting difficult: signing the same message twice will produce different outputs.

The signature will return two integers, r and s, as defined previously.

The format returned is PKCS7, which is structured with ASN.1. The asn1js tool is perfect for reading and comparing ASN.1 structures. For maximum privacy, it should be cloned and used locally.

The ASN.1 structure of the signature will follow:

  • The algorithm used as hash, namely 1.2.156.10197.1.401 (sm3Hash)
  • The data that is signed, with OID 1.2.156.10197.6.1.4.2.1 (data)
  • A sequence of the x509 certificates corresponding to the private keys used to sign (we can sign with multiple keys)
  ‱ A set of the digital signatures for all the keys/certificates signing. Each signature is a sequence of the corresponding certificate information (countryName, organizationName, commonName) and finally the two integers r and s, in hexadecimal representation

To generate such a signature, the Golang equivalent is:

import (
	"math/big"
	"encoding/hex"
	"encoding/base64"
	"crypto/rand"
	"github.com/tjfoc/gmsm/sm2"
	"github.com/pgaulon/gmsm/x509" // modified PKCS7
)

[...]

	// Hex-encoded private scalar d and public point (X, Y) extracted from the .sm2 file.
	PRIVATE, _ := hex.DecodeString("somehexhere")
	PUBLICX, _ := hex.DecodeString("6de24a97f67c0c8424d993f42854f9003bde6997ed8726335f8d300c34be8321")
	PUBLICY, _ := hex.DecodeString("b177aeb12930141f02aed9f97b70b5a7c82a63d294787a15a6944b591ae74469")

	// Rebuild the SM2 private key structure on the SM2 curve.
	priv := new(sm2.PrivateKey)
	priv.D = new(big.Int).SetBytes(PRIVATE)
	priv.PublicKey.X = new(big.Int).SetBytes(PUBLICX)
	priv.PublicKey.Y = new(big.Int).SetBytes(PUBLICY)
	priv.PublicKey.Curve = sm2.P256Sm2()

	cert := getCertFromSM2(sm2CertPath) // utility to provision a x509 object from the .sm2 file data
	// Raw SM2 signature (r, s) over the payload; rand.Reader supplies the per-signature random k.
	sign, _ := priv.Sign(rand.Reader, []byte(toSign), nil)
	// Wrap the signature and certificate into a PKCS7 SignedData structure (modified x509 package).
	signedData, _ := x509.NewSignedData([]byte(toSign))
	signerInfoConf := x509.SignerInfoConfig{}
	signedData.AddSigner(cert, priv, signerInfoConf, sign)
	pkcs7SignedBytes, _ := signedData.Finish()
	// Return the PKCS7 blob base64-encoded, as expected by the API.
	return base64.StdEncoding.EncodeToString(pkcs7SignedBytes)

Key Takeaways: Demystifying SM2 Cryptography

  1. SM2 relies on Elliptic Curve Cryptography (ECC): this mathematical approach provides security comparable to traditional RSA algorithms with substantially smaller keys.
  2. ECC keys are unique: The public key is a point reached by repeatedly adding the base point to itself a specific number of times. This number acts as the private key and remains secret.
  3. ECC signatures are dynamic: Unlike static signatures, ECC signatures use a random element, ensuring they vary even for the same message. Each signature consists of two unique values (r and s).
  4. Troubleshooting tools: ASN.1 issues can be tackled with asn1js, while Java problems can be identified using jdb and jd-gui.
  5. Cryptography requires expertise: Understanding and implementing cryptographic algorithms like SM2 demands specialized knowledge and careful attention.

References & Further Reading:

  1. Elliptic Curve Cryptography (ECC) 
  2. What is the math behind elliptic curve cryptography? | HackerNoon 
  3. Releases · guanzhi/GmSSL

Why Startups Need To Architect Cloud Agnostic Products

Nobody plans to leave AWS in the startup world, but as they say, “sh** happens.”

As engineers, when we write software, we’re taught to keep it elegant by never depending directly on external systems. We write wrappers for external resources, we encapsulate data and behaviour and standardise functions with libraries. 

But when it comes to the cloud
 “eerie silence”.

Companies have died because they needed to move off AWS or GCP but couldn’t do it in a reasonable and cost-effective timeline.

We (at Itilite) had a close call with GCP, which served as our brush with fire. Google had arguably one of the best Distance Matrix capabilities out there. It was used in our core logic and ML models. And on one fine Monday afternoon, I had to set up a meeting with my CEO to communicate that we would have to spend ~250% more on our cloud service bill in about 60 days.

Actually, Google increased the pricing by 1,400% and gave us 60 days to rewrite, migrate, move out, or perish!

The closest competitor in terms of capability was DistanceMatrix, and a reliable “large” player was Bing. But both left a lot to be desired on the “accuracy” front. So, for us, the business decision was simple: make the entire product work in a “reduced functionality” mode for everyone, or start differential pricing for better accuracy. In either case, those APIs had to be rewritten behind a new adaptor.
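
For what it’s worth, the adaptor approach looks roughly like this. The following Go sketch is a hypothetical illustration; the interface, provider names, and method signatures are not our production code:

package main

import "fmt"

// DistanceMatrixProvider is the abstraction the rest of the product codes
// against; swapping Google for Bing or DistanceMatrix means adding one more
// implementation, not rewriting core logic or ML feature pipelines.
type DistanceMatrixProvider interface {
	// DriveMinutes returns the estimated driving time between two places.
	DriveMinutes(origin, destination string) (float64, error)
}

// googleAdapter and bingAdapter are stand-ins; real adapters would wrap the
// respective HTTP APIs and normalise units, errors, and rate limits.
type googleAdapter struct{}

func (googleAdapter) DriveMinutes(o, d string) (float64, error) { return 42, nil }

type bingAdapter struct{}

func (bingAdapter) DriveMinutes(o, d string) (float64, error) { return 45, nil }

func pickProvider(name string) DistanceMatrixProvider {
	if name == "bing" {
		return bingAdapter{}
	}
	return googleAdapter{}
}

func main() {
	provider := pickProvider("bing") // driven by config, pricing, or accuracy needs
	mins, _ := provider.DriveMinutes("BLR Airport", "Indiranagar")
	fmt.Printf("estimated drive: %.0f minutes\n", mins)
}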

It is no enigma why we do this. It’s simple: there are no alternatives, and there is no time to get to market. But maybe there is. I’ll explain why you should take cloud-agnostic architecture seriously and then show you what I do to keep my projects cloud-agnostic.

Cloud Service Rationalisation

The prime reason you should consider the ability to switch clouds and cloud services is so you can choose to use the cloud service that is price and performance-optimized for your use case.

When I first got into serverless, we wrote a transformative API on Oracle Cloud (because we were part of their accelerator program and had a huge credit), but it fed part of the data that the customer-facing API relied on.

No prize for guessing what happened?

It was a horrible mistake. Our API had an insane latency problem. Cold start requests added additional latency of at least 2 seconds per request. The AWS team has worked hard to build a service that can do things that GCP’s Cloud Functions simply can’t, specifically around cold starts and latency.

I had to move my infrastructure to a different service and a revised network topology.

You would think we had learned our lesson by then, but as we will find out, we had not.

This time it was the combination of Kafka and AWS Lambda that created an issue. We had relied on Confluent’s connectors for much of the workload interfacing and had to shell out almost $1,000 per month per connector!

Avoiding the Cloud Provider Killswitch

Protect Your Business from Unexpected Termination

As a CXO, you may not be aware that cloud providers like AWS, GCP, and Azure reserve the right to terminate your account and destroy your infrastructure at any time, effectively shutting down your business operations. While this may seem like an extreme measure, it’s important to understand that cloud providers have strict terms of service that can lead to account termination for a variety of reasons, even if you’re not engaged in illegal or harmful activities.

A Chilling Example

I recently spoke with a friend who is the founder of a fintech platform. He shared a chilling incident that highlights the risks of relying on cloud providers. His team was using GCP’s Cloud Run, a container service, to host their API. They had a unique use case that required them to call back to their own API to trigger additional work and keep the service active. Unfortunately, GCP monitors this type of behaviour and flags it as potential crypto-mining activity.

On an ordinary Sunday, their infrastructure vanished, and their account was locked. It took them six days of nonstop effort to migrate to AWS.

Protect Your Business

This incident serves as a stark reminder that any business operating on cloud infrastructure is vulnerable to unexpected termination. While you may not be intentionally engaging in activities that violate cloud provider terms of service, it’s crucial to build your infrastructure with the possibility of termination in mind.

Here are some key steps you can take to protect your business from the cloud provider killswitch:

  1. Read and understand the terms of service for each cloud provider you use.
  2. Choose a cloud provider that aligns with your industry and business model.
  3. Avoid relying on a single cloud provider.
  4. Have a backup plan in place.
  5. Regularly review your cloud usage and ensure compliance with cloud provider terms of service.

By taking these proactive measures, you can significantly reduce the risk of your business being disrupted by cloud provider termination and ensure the continuity of your operations.

Unleash the Power of Free Cloud Credits

For early-stage startups operating on a shoestring budget, free cloud credits can be a lifeline, shielding your runway from the scorching heat of cloud infrastructure costs. Acquiring these credits is a breeze, but the way most startups build their infrastructure – akin to an unbreakable blood oath with their cloud provider – restricts them to the credits granted by that single provider.

Why limit yourself to the generosity of one cloud provider when you could seamlessly switch between them to optimize your resource allocation? Imagine the possibilities:

  • AWS to GCP: Upon depleting your AWS credits, you could effortlessly migrate your infrastructure to GCP, taking advantage of their generous $200,000 credit offer.
  • Y Combinator: As a Y Combinator startup, you’re entitled to a staggering $150,000 in AWS credits and a mind-boggling $200,000 on GCP.
  • AI-Powered Startups: If you’re developing AI solutions, Azure welcomes you with open arms, offering $300,000 in free credits to fuel your AI models on their cloud.

By embracing cloud-agnostic architecture, you unlock the freedom to switch between cloud providers, potentially saving you a significant $200,000 upfront. Why constrain yourself to a single cloud provider when cloud-agnosticism empowers you to navigate the cloud landscape with flexibility and cost-efficiency?

Building Resilience: The Importance of Cloud Redundancy

In the ever-evolving world of technology, no system is immune to failure. Even industry giants like Silicon Valley Bank can disappear outright over a weekend, or AWS’s main data centre can go offline due to a power fluctuation, highlighting the importance of proactively safeguarding your business operations.

Imagine the potential financial impact of a 12-hour outage on AWS for your company. The costs could be staggering, not only in lost revenue but also in reputational damage and customer dissatisfaction or even potential churn.

This is where cloud redundancy comes into play. By running parallel segments of your platform on multiple cloud providers, such as AWS and GCP, you’re essentially creating a fail-safe mechanism.

In the event of an outage on one cloud platform, the other can seamlessly pick up the slack, ensuring uninterrupted service for your customers and minimizing the impact on your business. Cloud redundancy is not just about disaster preparedness; it’s also about optimizing performance and scalability. By distributing your workload across multiple cloud providers, you can tap into the unique strengths and resources of each platform, maximizing efficiency and responsiveness.

In our case, we run the OCR packages, SAML, and accounts services on Azure, and our core “Recommendation Engine” and “Booking Engine” on AWS. Yes, a multi-cloud setup involves initial costs that might seem prohibitive, but in the long run the benefits far outweigh them.

Cloud Cost Negotiation: A Matter of Leverage

In the realm of business negotiations, the ultimate power lies in the ability to walk away. If the other party senses your lack of alternatives, they gain a significant advantage, effectively holding you hostage. Cloud cost negotiations are no exception.

Imagine you’ve built a substantial $10 million infrastructure on AWS, heavily reliant on their proprietary APIs like S3, Cognito, and SQS. In such a scenario, walking away from AWS becomes an unrealistic option. You’re essentially at their mercy, accepting whatever cloud costs they dictate.

While negotiating cloud costs may seem insignificant to a small company, for an organization with $10 million of AWS infrastructure, even a 3% discount translates into substantial savings.

To gain leverage in cloud cost negotiations, you need to establish a credible threat of walking away. This requires careful planning and strategic implementation of cloud-agnostic architecture, enabling you to seamlessly switch between cloud providers without disrupting your operations.

Cloud Agnosticism: Your Negotiating Edge

Cloud-agnostic architecture empowers you to:

  1. Diversify your infrastructure: Run your applications on multiple cloud platforms, reducing reliance on a single provider.
  2. Reduce switching costs: Design your infrastructure to minimize the effort and cost of migrating to a new cloud provider.
  3. Strengthen your negotiating position: Demonstrate to cloud providers that you have alternative options, giving you more bargaining power.

By embracing cloud-agnosticism, you transform from a captive customer to a savvy negotiator, capable of securing favorable cloud cost terms.

Unforeseen Challenges: The Importance of Cloud Agnosticism

In the dynamic world of business, unforeseen challenges (and opportunities) can arise at any moment. We often operate with limited visibility, unable to predict every possible scenario that could impact our success. Here’s an actual scenario that highlights the importance of cloud-agnostic architecture:

Acquisition Deal Goes Through

This happened at one of my previous organisations; we had tirelessly built the company from the ground up. Our hard work and dedication paid off when a large SaaS unicorn approached us with an acquisition proposal.

However, during due diligence, a critical issue emerged: our company’s infrastructure was entirely reliant on AWS. The acquiring company had a multi-year, multi-million-dollar deal with Azure, and the M&A team made it clear that unless our platform could operate on Azure, the deal was off the table!

Our team faced the daunting task of migrating the entire infrastructure to Azure within a limited timeframe and budget. Unfortunately, the complexities of the migration proved time-consuming: the merger took five months to complete, and the offer was reduced by $2 million!

The Power of Cloud Agnosticism

This story serves as a stark reminder of the risks associated with a single-cloud strategy. Had our company embraced cloud-agnostic architecture, we would have possessed the flexibility to seamlessly switch between cloud providers, potentially leading to a bigger exit for all of us!

Cloud-agnostic architecture offers several benefits:

  • Reduced Vendor Lock-in: Avoids dependence on a single cloud provider, empowering you to switch to more favourable options based on your needs.
  • Improved Negotiation Power: Gains leverage in cloud cost negotiations by demonstrating the ability to switch providers.
  • Increased Resilience: Protects your business from disruptions caused by cloud provider outages or policy changes.
  • Enhanced Scalability: Enables seamless expansion of your infrastructure across multiple cloud platforms as your business grows.

Embrace Cloud Agnosticism for Business Continuity

In today’s ever-changing technological landscape, cloud-agnostic architecture is not just a benefit; it’s a necessity for businesses seeking long-term success and resilience. By adopting a cloud-agnostic approach, you empower your company to navigate the complexities of the cloud landscape with agility, adaptability, and cost-efficiency, ensuring that unforeseen challenges don’t derail your journey.

My Solution

Here’s what I do about it now, after the lessons learnt: I use Multy. Multy is an open-source tool that simplifies cloud infrastructure management by providing a cloud-agnostic API. This means that developers can define their infrastructure configurations once and deploy them to any cloud provider without having to worry about the specific syntax or nuances of each cloud platform. While Multy provides an abstraction layer for deploying cross-cloud environments, you will also need to incorporate cloud-agnostic libraries in your application code to really make a difference.
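
One example of such a library is the Go Cloud Development Kit (gocloud.dev), which exposes portable APIs over provider-specific services. A minimal object-storage sketch, assuming the bucket URL comes from configuration, might look like this:

package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	// Blank-import the drivers you want available; the URL scheme picks one at runtime.
	_ "gocloud.dev/blob/azureblob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	ctx := context.Background()

	// "s3://my-bucket" today, "gs://my-bucket" or "azblob://my-container" tomorrow -
	// application code stays identical, only configuration changes.
	bucket, err := blob.OpenBucket(ctx, "s3://my-bucket?region=us-east-1")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	if err := bucket.WriteAll(ctx, "hello.txt", []byte("hello, any cloud"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := bucket.ReadAll(ctx, "hello.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}

The same pattern exists for pub/sub, secrets, and SQL connections, so the provider choice becomes a deployment detail rather than an architectural commitment.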

References & Further Reading: 

  1. https://kobedigital.com/google-maps-api-changes/
  2. https://www.reddit.com/r/geoguessr/comments/cslpja/causes_of_google_api_price_increase_suggestion/ 
  3. https://multy.dev/
  4. https://github.com/multycloud/multy
  5. https://github.com/serverless/multicloud 
  6. https://aws.amazon.com/startups/credits

Achieve Peak Performance: AI Tools for Developers to Unlock Their Potential

You were scrolling through Twitter or your favourite subreddit on the latest tech trend, and a sudden feeling of FOMO crept in. You’re not alone.

While the notion of a “10x developer” has traditionally been considered aspirational, the emergence of AI-powered tools is levelling the playing field, empowering developers to achieve remarkable productivity gains. While there might be thousands of possible “AI tools”, I’ll restrict this list to tools that could yield a direct productivity boost to a developer’s day-to-day work as well as the outcome.

1. AI Pair-Developer / Code Assistants

Sourcegraph Cody & Github Copilot — Read, write and understand code

If you have used GitHub Copilot, think of Cody as a turbocharger for Copilot. If you have not used Copilot, you should try it first. Either of these can understand your entire codebase, code graphs, and documentation, and help you write efficient code, write unit tests, and document the codebase for you.

While the claim of a 10x speed increase is not substantiated, it shows clear intent to improve productivity drastically. However, it’s in beta, and the tool acknowledges that it’s not always correct, though they’re making rapid improvements. Yes, GitHub Copilot X is there — but then, your organisation needs to be on the Enterprise plan or you might have to add an additional $10-20 per user per month, and Cody is already here.

2. AI Code reviews – Offload the often mundane task of code reviews

While CodeRabbit and DeepCode (now acquired by Snyk) are some of the trailblazers in this space, I have not had the opportunity to work with either of them for any stretch of time. If you know about their relative strengths or benefits, please add a comment, and I will incorporate it.
The tool I use most regularly is called Robin-AI-Reviewer, from the good folks at Integral Healthcare (funded by Haystack). My reasoning is two-fold: it is open-source, and if it is good enough for HIPAA-compliant app development and certification assessment, it’s a good starting point.

3. AI Test writing – Delegate the task of writing tests to AI- CodiumAI

CodiumAI serves as an AI test-writing assistant. It analyses your code, docstrings, and comments to suggest tests intelligently. CodiumAI addresses a critical aspect of software development that often consumes valuable time: testing. While numerous tools prioritize code writing and optimization, ensuring code functionality is equally vital. CodiumAI seamlessly fills this gap, and its intelligent test generation capability can substantially enhance development efficiency and maintain superior code quality.

4. AI Documentation Assistant — Get AI to write docs for you

This is a no-brainer: who loves writing code walkthroughs and docs? No one? Didn’t think so! Mintlify serves as your team’s technical writer. It reads and interprets your code, turning it into clear, readable documentation. By all accounts, it is a definite must.

Disclaimer: I have not personally used this and have mostly been able to get this done with Cody itself. Then again, primary documentation is no longer my main responsibility.

5. AI Comment Assistant – Readable AI — Never write comments again

Readable AI automates the process of generating comments for your source code. It’s compatible with several popular IDEs, like VSCode, Visual Studio, IntelliJ, and PyCharm, and it can read most languages.

6. AI Tech Debt Assistant – Grit.io

Grit.io is an automated technical debt management tool. Its prime function is auto-generating pull requests that manage code migrations and dependency upgrades. Grit is in beta and available for free until the beta moves to RC1. It already has about 50+ pattern libraries, and the list is growing.

I absolutely love it, and Grit alleviates a significant portion of the manual work involved in managing migrations and dependency upgrades. They say it 10x’s the refactoring and migration process. I’d say even at 33% of that claim, it would still be a 300% productivity increase, and that is a considerable gain. If you’re an engineering leader and you have a “budget” for one tool only, it should be this!

7. AI Pull Request Assistant – An “AI”-powered diff tool

What The Diff AI is an AI-powered code review tool. It writes pull request descriptions, scrutinises pull requests, identifies potential risks, and more. What The Diff claims to be able to significantly speed up development timelines and improve code quality in the long run. It could take a great deal of pain out of the process.

Disclaimer: I have not personally used this

8. AI-Driven Codebase Wizard – Adrenaline AI — Explain it to me

Adrenaline AI helps you understand your codebase. The tool leverages static analysis, vector search, and advanced language models to clarify how features function and explain anything about the code to you. What I like most about this tool is that it can be leveraged to automate the “how-tos” for your software engineering teams!

9. AI collaboration companion for software projects

Stepsize AI by Stepsize is an AI companion for software projects. It seamlessly integrates with tools like Slack, Jira, and GitHub, providing insightful overviews of your activities and offering strategic suggestions.

The tool uses a complex AI agent architecture, providing long-term “memory” and a deep understanding of the context of your projects.

10. AI-Driven Dev Metrics Collection – Hivel.ai

While it is not, strictly speaking, an AI-driven “assistant” for the average developer, I feel it is nevertheless a good tool for the engineering org and engineering leaders to keep track of things and make course corrections. It provides a cockpit/dashboard of all the metrics that matter.

Hivel is built by an awesome team of devs and led by Sudheer

How to Manage Technical Debt in 2023: A Guide for Leadership

In this article, I will summarise effective strategies and best practices to tackle tech debt head-on.

Technical debt is an inevitable reality in software development. But, if managed properly, it can be leveraged just like a financial loan to help you achieve your goals. It can be used to drive competitive advantage by allowing companies to launch new products and features faster, experiment with new technologies, and improve the scalability and performance of their systems. However, like all loans, it needs to be repaid properly and at the right time; failing to do so will create a downward spiral.

If you’re not careful, technical debt can quickly become a major burden that slows down development and makes it difficult to add new features or even fix bugs in a timely manner.

We will discuss how to identify technical debt and the signs of poorly managed debt, and then provide a strategy for reducing it. We will also discuss what a healthy level of technical debt looks like and how leaders can use it to their advantage.

Good Tech Debt Vs Bad Tech Debt

Robert Kiyosaki, the author of Rich Dad Poor Dad, famously said:

Bad debt takes money out of your pocket, while good debt puts money in your pocket.

– Robert Kiyosaki

The same is true of tech debt.

Technical debt is the cost of not doing things the right way the first time. Good technical debt is accrued when you make trade-offs to meet deadlines or deliver new features quickly. Bad technical debt is accrued when you make poor decisions or cut corners.

Bad tech debt will probably make your PMs, Sales and CEO happy for a quarter or two. But after that, they will be asking why everything is behind schedule and dealing with customer complaints because things aren’t working properly.

Now that I have presented the obvious in a familiar “Quadrant”, you can actually skip the terminologies and definitions part of this article! 😀

For my verbal brethren: which tech debt do you need to ruthlessly hunt down to extinction? Obviously, the untracked, undocumented kind, as well as anything that is dragging your team into a downward spiral (regardless of whether it is tracked or not).

Why does your Tech Debt keep accumulating?

Before we can think about building a strategy to solve tech debt, we need to understand how it gets out of control in the first place.

It’s called “impact visibility”.

Fixing code debt issues is impossible if:

1, You’ve no record of what technical debt issues you have

2, You’ve got a backlog, but you can’t see which issues are related to what code

In both cases, you can’t prioritise tech debt over shipping new features.

We need to get more granular about what impacts these two tech debt cases above.

  • Issue invisibility — There’s no source of shared knowledge. Codebase health info is locked in (few) engineers’ heads.
  • No code quality culture — Shipping fast, whatever the cost, like it’s going out of fashion.
  • Poor process — Tech debt work sucks. Nobody likes creating Jira tickets. “Jira” has become a dirty word.
  • Low-time investment — Justifying the time to fix tech debt or to refactor is a constant uphill battle. After a point, engineers become silent!

  • Lack of context — Issues in Jira are a world away from the hard reality of the codebase. They’re not related in any way.

So what’s the source of this? Let’s talk strategy.

Spoiler
 It’s about changing organisational culture and developer behaviour to track issues properly.

Creating a strategy to reduce technical debt

Track. Issues. Properly.

Good tech debt management starts with team-wide excellence at tracking issues.

You can’t have a tech debt strategy without tracking.

The engineering leader’s job is to make that “issue tracking” easy for your team. There is supposed to be a software for that – Jira, Asana, Rally or something of that sort.

The problem is, I’ve never believed these tools really get to the bottom of the problem, and after speaking with scores of engineers and leaders about it, they usually don’t believe it either. My personal belief is that most companies’ velocity actually suffers after their Jira rollout! It is a bit like the claim:

No two countries that both have a McDonald’s have ever fought a war against each other.

Thomas L. Friedman – in The Lexus and the Olive Tree!

As a leader, you need to find a way to:


  • Show engineers when they’re working on code that carries tech debt, without them having to jump through three hoops.
  • Make it really easy for team members to report tech debt.
  • Create a natural way to discuss codebase issues.
  • Integrate tech debt work into your workflows and involve PMs if required.

There are multiple ways to achieve this. The easiest is to not address it as a separate initiative at all; instead, just tweak your existing pipeline. This can be done with the items below (a minimal hook sketch follows the list):

  • A very robust linting setup, integrated into the IDE
  • Tighter Git rules for commits
  • SAST that runs in the pipeline and can feed results back into the IDE
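To illustrate the “tweak your existing pipeline” idea for a Python codebase, here is a minimal pre-commit hook sketch that lints staged files with ruff and runs a bandit SAST scan, blocking the commit if either reports a problem. The tool choices are my assumption; swap in whichever linter and SAST scanner your stack already uses, and run the same checks in CI and in the IDE via the tools’ editor plugins.

```python
#!/usr/bin/env python3
"""Minimal .git/hooks/pre-commit sketch: lint + SAST on staged Python files."""
import subprocess
import sys

def staged_python_files() -> list[str]:
    """List staged (added/copied/modified) .py files in this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    files = staged_python_files()
    if not files:
        return 0
    for cmd in (["ruff", "check", *files], ["bandit", "-q", *files]):
        if subprocess.run(cmd).returncode != 0:
            print(f"Commit blocked: {cmd[0]} reported issues above.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```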

Prioritising impactful tech debt

At this point it should be obvious, but prioritising the right issues is only possible if you’re tracking the impact of these issues, whether that impact is direct or indirect (dependency, sequencing, rework avoidance, etc.).

Once you’ve got them, you should regularly and consistently use them to decide what to address. This usually happens during the backlog grooming or sprint planning sessions. But, this decision-making process needs to be strategic. Not at all tactical, ie: DO NOT delegate it to the whims and whimsicals of your TL/PM or even EM.

You, or someone with context on the organisation’s position on sales, clients, revenue, etc., should be doing this.

A good way to start is by choosing a theme each time you prioritise issues. For example, you could prioritise issues that


  • Are impacting a specific feature you need to work on in the next quarter
  • Are impacting the customer’s UX
  • Are affecting efficiency/morale on the team
  • Are impacting the security posture

This is often straightforward if you’ve got high-quality issues that are traceable to code and tagged as such.
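As a sketch of what theme-based prioritisation can look like once issues are tracked and tagged, here is a small Python example that filters a tech-debt backlog by the quarter’s theme and ranks it by a simple impact-over-effort score. The field names, scales, and scoring rule are assumptions for illustration, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class DebtIssue:
    key: str
    themes: set[str]   # e.g. {"security", "checkout-feature", "team-morale"}
    impact: int        # 1-5: how much it hurts the chosen theme
    effort: int        # 1-5: rough cost to fix

backlog = [
    DebtIssue("DEBT-101", {"security"}, impact=5, effort=2),
    DebtIssue("DEBT-117", {"checkout-feature"}, impact=4, effort=3),
    DebtIssue("DEBT-123", {"team-morale"}, impact=3, effort=1),
    DebtIssue("DEBT-130", {"security", "checkout-feature"}, impact=4, effort=5),
]

def prioritise(issues: list[DebtIssue], theme: str) -> list[DebtIssue]:
    """Keep only issues tagged with this quarter's theme, highest impact-per-effort first."""
    relevant = [i for i in issues if theme in i.themes]
    return sorted(relevant, key=lambda i: i.impact / i.effort, reverse=True)

for issue in prioritise(backlog, theme="security"):
    print(issue.key, f"score={issue.impact / issue.effort:.1f}")
```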

Most people wonder how to get the time for these “Tasks”. I have two recommendations.

  • Take an entire sprint every quarter to repay tech debt (will need high-level buy-in; it is slightly harder to align your CXOs)
  • Allocate 15-20% of bandwidth in every sprint (easier to achieve buy-in from CXOs, harder to drive with engineers)
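For a sense of scale: assuming two-week sprints, a quarter has roughly six of them, so one full sprint per quarter works out to about 1/6, i.e. 16-17% of capacity, which sits right inside the 15-20% band of the second option. The two approaches cost roughly the same; the real difference is the cadence and whose buy-in you need.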

Engineers generally won’t prioritise tech debt work by themselves because of the conflict of interest/pressure of shipping fast. This was evident from multiple high velocity/impact software engineering teams including ones at AirBnb, Netflix and Spotify. A commitment to code refactoring and maintenance work should be endorsed and supported from the top and reinforced regularly.

How much Tech Debt can you take on?

Managing technical debt is like managing financial debt. You can use it to your advantage, but you need to be careful not to let it get out of control.

Your technical debt budget is the amount of technical debt that you are willing to take on in order to achieve your business goals. You should not try to solve all of your technical debt at once, but instead focus on the most important items.

Prudent technical debt is debt that you take on deliberately and knowingly, in order to achieve a specific goal. For example, you might take on technical debt to launch a new product quickly, or to add a new feature that is in high demand by your customers.

If you manage your technical debt properly, it can be a powerful tool for gaining a competitive advantage. However, if you let your technical debt get out of control, it can lead to serious problems, such as increased costs, delays, and security vulnerabilities.

Concluding remarks:

Technical debt is one of the most neglected areas of software development. It is often only given priority when it is too late and has already caused serious problems.

However, when leaders work together and develop a consistent and process-driven strategy, technical debt can be effectively managed.

The best engineering teams are constantly thinking about how to use their technical debt budget to their advantage.


No McKinsey, You got it all wrong about developer productivity!

Disclaimer: I have been an enormous proponent of developer productivity and have tried to implement automated metrics collection in 3 orgs with varied success. In my mentoring sessions with early-stage startup leaders as well, I reinforce the importance of being aware of dev productivity; so much so that I have written a 2-part article on the same here, here and here. I have also been a huge fan of McKinsey and how they seem to find answers that eluded the attention and resources of mega-corporations and governments alike. However, this article is written to communicate an entirely different perspective. In my opinion, McKinsey has got this entire “framework” thing about “dev productivity” wrong.

Introduction:

About a month back, McKinsey published an article claiming to have developed a framework to measure developer productivity. They also acknowledged that they were simply rehashing some existing metrics (like DORA and SPACE) already used by engineering leaders, simplifying them (without the context), and pitching the result to their traditional buyers: the C-suite executives in mega corporations. Some of these metrics can actually be useful tools if used correctly; one example is hand-offs. But the main reason I have chosen to write this article is that their central thesis seems to be “coders should code”. The framework also appears to have either A) missed the context of every metric, or B) omitted the context so as not to burden their target audience.

Finally, there is a mish-mash of things to track, metrics to monitor, and “opportunities to focus”, which looks a bit like this scene:

Captain Ramius pointing out to a young Jack Ryan that Admiral Halsey was reckless!

The legendary Kent Beck has written a deep two-part piece countering the conjectures presented by McKinsey and elaborating on the gaps that engineering orgs traditionally tend to manifest. It is very well written and covers almost everything. A number of other eminent software engineers have also written on this, and I have tried to give a quick list at the bottom of this article.

What Was I concerned about?

Focus On Activities

I was primarily concerned about the proposed framework’s lack of focus on outcomes and impact, and its focus on “activities” instead!

Any engineering leader or manager will tell you that Code Review Velocity and Deployment Frequency have nothing to do with measuring outcomes. While I will not discount Cycle Time or MTTR (I take pride in having built multiple teams with some of the lowest MTTR and cycle times in the ecosystem), they are indicators of process elements/activities that could lead to outcomes, not outcomes themselves. If we want to measure something, it should be outcomes, not activities!

Focus on Optimisation of Irrelevant Metrics

Code Review Velocity:

If you time-motion the code review process across the entire value stream map, you’ll find that async code review is killing your productivity. Pairing improves that dramatically. Instead of trying to sub-optimise for code review, measure the thing we actually want to improve, which is cycle time.

Story Points Completed:

Let’s agree on a basic fact. A “story point” is a made-up number. It was conceived as yet another way to obfuscate estimates for thought work that is difficult to estimate. As originally conceived, it represented the number of mythical “ideal days” of effort. There’s so much time wasted on getting better at “story pointing,” arguing about the Fibonacci sequence, “planning poker,” and other story point nonsense. Frankly, it is one of the “Bad” elements of Scrum! As a leader, you should find and remove handoffs and wait times. Story points are useless for anything and even more useless for this goal. Track throughput instead. 

Handoffs:

This is a good one. Good job, McKinsey. You got something right. Stop using testing teams, use pairing instead of code review, operate what you build, and don’t have any people doing anything manual to the right of development.

Contribution Analysis and Opportunities focus

In the other focus areas, they have listed metrics at the individual level. Some of these can be useful, but “developer satisfaction”, “retention”, and “interruptions” should not be measured at the individual level; they should only be measured in aggregate to prevent cognitive bias. IMO, things start getting really toxic in the “Opportunities focus” section, though.

I have been part of organisations and processes where there was a focus on tracking and measuring the outcomes of individuals. It did not play out well, ever. My conclusion after reading the article a second time is that McKinsey thinks their intended audience (CEOs and CFOs) cannot understand “systems thinking.” Now, if you roll out this or a similar framework and announce it, what do you think will happen?

You have a group of people all working on the same backlog but not acting as a team. Code review suffers, mentoring suffers, pairing is hard, work breakdown suffers, etc. Anything that requires more than one person to conduct or conclude, including helping someone get unstuck, will get deprioritised!

Overall, the inferences seem to be based on hard facts, but the conjectures are all flawed.

Why This Now?

At this point, I want to highlight what “triggered” me to write this; read the following:

For example, one company found that its most talented developers were spending excessive time on noncoding activities such as design sessions or managing interdependencies across teams. In response, the company changed its operating model and clarified roles and responsibilities to enable those highest-value developers to do what they do best: code.

McKinsey’s Article on the purported Framework

Wow. I pray for that company.

So, I assume that after McKinsey pointed out that developers were involved in “irrelevant” things like design and architecture, the company created separate towers of responsibility for design. In that case, I am puzzled about who will be responsible for minor things like dependency management, prerequisites, versioning, capacity planning, concurrency, scalability, etc.

Did they get anything Right?

Yes. There are tonnes, but they are buried at the bottom. Their focus on hand-offs and cycle times is really worth tracking in any engineering org. To the authors’ credit, they have also identified some of the core issues with measuring developer productivity. But someone higher up in the firm seems to have suggested softening the blow, so those sections have been diluted and buried. I will share two gems here.

To truly benefit from measuring productivity, leaders and developers alike need to move past the outdated notion that leaders “cannot” understand the intricacies of software engineering, or that engineering is too complex to measure.

The real problem is that in many large organisations, “The Management” doesn’t understand the work they manage. Management can understand the intricacies of software engineering if they become leaders and study the work they manage. In a large behemoth, not all managers are leaders. They want a framework and will enforce it with an iron fist. Now, McKinsey has delivered them a framework!

Learn the basics. All C-suite leaders who are not engineers or who have been in management for a long time will need a primer on the software development process and how it is evolving.

This one nailed it! The primary reason “Management” finds it difficult to measure the right thing is that they sometimes do not understand the work they want to measure. Leaders who do understand measure the right things. My primary concern is that, in trying to solve this, McKinsey’s framework has made the problem worse!

Just google “McKinsey developer productivity” and you’ll find more articles on how this framework is flawed than the original article link!

Anto’s Response to the Article and the purported Framework.

References & Further Reading/Watching:

1. McKinsey Article – https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity
2. Kent Beck’s rebuttal – https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity
3. Reddit – https://www.reddit.com/r/programming/comments/1650595/measuring_developer_productivity_a_response_to/
4. Level Up Coding – https://levelup.gitconnected.com/the-developers-productivity-can-t-be-measured-in-mckinsey-s-way-an-analysis-4d81924279ae
5. Measuring Developers’ Productivity – McKinsey, what’s the point? – https://www.youtube.com/watch?v=wjQn8nnkXTs
6. Can We Measure Developer Productivity? A Reaction to McKinsey’s Article – https://www.youtube.com/watch?v=ETa24ErdcwQ
7. How to Measure Engineering Productivity? – https://nocturnalknight.co/2022/11/how-to-measure-engineering-productivity/
8. Business Value Delivery by Engineering Teams in StartUps – Part 1 – https://nocturnalknight.co/2021/10/business-value-delivery-by-engineering-teams-in-startups-part-1/#comment-773
9. Business Value Delivery by Engineering Teams in StartUps – Part 2 – https://nocturnalknight.co/2021/10/business-value-delivery-by-engineering-teams-in-startups-part-2/
10. Space Metrics – https://www.harness.io/blog/space-metrics-get-started
11. DORA Metrics – https://www.leanix.net/en/wiki/vsm/dora-metrics
12. Dave Farley’s Response To The NONSENSE McKinsey Article On Developer Productivity – https://www.youtube.com/watch?v=yuUBZ1pByzM
