Month: June 2023

Is the myth of a “10X Developer” Real?

If you’re a software engineer, manager, or leader, I am sure you have heard the term ‘10x developer’ used in discussions. It refers to developers who are purportedly 10 times more productive, or capable, than their peers, and it is a hotly contested category: some use the term very liberally, others deny that such developers even exist. Over the last 40+ years, the ‘10x developer’ has become the ‘Loch Ness Monster’ of the tech world, fueled by the hype associated with Silicon Valley.

I’m not about to delude myself into thinking that writing a blog post to pass my verdict will put these theories to rest, but the question has gained enough traction that it deserves a little articulation.

Do 10x developers really exist, and if so, how would we distinguish them?

Framing the Issue

All of us can acknowledge that the range of skills in most human activities can be extensive. A marathon runner can cover roughly 10 times the distance that an untrained person could, while a professional chef can cook a 5-course meal in 1/5th the time it takes an average person to do a 2-course meal.

Coding, a hugely complex field unencumbered by physical limitations, should naturally show differences in skill that vary by orders of magnitude. Thus, if by ‘10x developer’ we simply mean a person whose skill level is in a different league compared to someone else’s, then clearly such developers exist.

How could anyone argue otherwise?

Here’s the rub, though: in the data-driven and lexically precise world of modern tech, that’s not what ‘10x developer’ means. Instead, a 10x developer is supposed to be someone who genuinely outperforms others by 10 times or more on some quantifiable scale. That ‘quantifiable scale’ is where the problems start.

Where did the term actually come from? Enter Coding War Games.

Coding War Games

Tom DeMarco and Tim Lister have run the “Coding War Games” since 1977: a public productivity survey in which teams of software implementors from different organizations compete to complete a series of benchmarks in minimal time and with minimal defects. Over 600 developers have participated, and the results, which are publicly available, are very informative, to say the least. Jeff Lester published a wonderful piece on the origins of the 10x developer (linked in the references below).

The top findings from the study are:

1. Get your working environment right

The overriding observation from the study is that a quiet, private, dedicated working space with fewer interruptions led to teams that performed significantly better.

2. Remove the net negative producing programmer

Some developers are “net negative producing programmers” (NNPPs): they introduce so many defects that removing them from the team increases productivity. This is the opposite of a 10x developer; these are the people who take team productivity from bad to worse.

The Problem with Measuring ‘Skill’

Even where skill can vary wildly, the differences will not necessarily be quantifiable. A talented artist may create a painting that teleports you to a whole other world, while an average artist can merely transport you to the scene.

But the question is, can you attach numbers to that painting’s beauty?

The work of a developer isn’t nearly as abstract, but not all of it can be reduced to metrics either, and definitely not the programming skill itself.

A less glamorous approach may be to judge a 10x dev not in terms of skill but in terms of productivity: someone who can write 500 lines of code in the time it takes others to write 50 would then fall into that category.

If you know anything about programming, however, you’ve probably already spotted the problem with this line of thinking. More code isn’t necessarily better code, and for most people there tends to be a positive correlation between how quickly one works and how many bugs one creates.

This is not to say that some programmers can’t produce bug-free code much faster than their peers. Where the statement proves fallacious, though, is in trying to peg that difference to a single metric. There are myriad factors at play that affect a developer’s productivity beyond their skill, including their team and the environment they find themselves in. In fact, depending on the situation, the 10x tag may be inaccurate because a developer could be programming well over 10 times as much as another and still produce 1/10th of the ‘outcomes’!

What should be our conclusion? There can be no doubt that the field of programming has its own Mozarts and Vincent Van Goghs, and few would object if these people were described as being ‘orders of magnitude’ better than the rest. But it is important to recognize that this is only a figure of speech, and not something meant to be used according to its precise quantitative meaning.

I can’t presume to speak for the tech industry as a whole, but I for one have noticed a worrying tendency to read the expression ‘10x developer’ literally.

Ultimately this does more harm than good, as it spreads the myth that there is some universal metric whereby every programmer’s value can always be quantified. 

Important qualities like creativity, client focus and teamwork are entirely omitted in this way of thinking, which is why my final suggestion is to stop worrying about lofty 10x developers and whether you are, aren’t, or may or may not become one.

Simply focus on being the best developer you can be. That will always be enough.

References & Further Reading

Origin of a 10X developer

https://medium.com/ingeniouslysimple/the-origins-of-the-10x-developer-2e0177ecef60 

https://gwern.net/doc/cs/algorithm/2001-demarco-peopleware-whymeasureperformance.pdf

https://news.ycombinator.com/item?id=22349531

Are Pedigree and the Old Boy Network really relevant in the 21st Century?

I’m not sure how to categorise this post; it’s an amalgamation of a critique of the social stratification we witness today, with a fair amount of my own experiences interspersed, viewed through the lens of a book, Pedigree. I have always held that pedigree and an education from elite institutions are a bit overrated and are not necessarily an indication of capability or skill, but rather of discipline and/or perseverance.

Prior to my arrival in the United Kingdom eleven years ago, I had a very different picture of effort and success in the world. I assumed that professional success was not just a possibility but a certainty if you were skilled and worked hard, especially after reading things like the 10,000-hour rule (Outliers) or the 67 principles (The Success Principles). Nothing seemed able to stop the onslaught of hard-working, smart people’s success. My personal yardstick was, of course, me. After all, here I was in the heart of London, building the very first meta-search engine for betting odds (in the UK) and leading a team across 3 continents.

Imagine a tortoise roll, aka the flashback scenes from masala movies:

I did not have “blue blood” in me; I graduated with an electronics degree from a Tier 2 university in South India. To make it abundantly clear: the first time I went to the state capital, Madras (now Chennai), was to enrol in the IEEE student chapter, and the second was to apply for a passport! So you can guess my “exposure”; the fact that I even got to know that something called the IITs exists is a testament to the wonderful teachers I had during my school days. Anyway, from there it took me 3 different jobs (one of which was at PayPal), my own entrepreneurship journey and a successful exit to land this stint with the EasyOdds/MarCol group.

Going through this, and seeing peers with a similar trajectory, I could be forgiven for thinking that this is what success looks like: a slow march of perseverance, dedication and “years under the belt”. It was not just me; most people I knew thought the recipe for success was the long game. This couldn’t be further from the truth. We had been oblivious to the fact that two classes exist in any workforce: the elites and the others! I started noticing that people with degrees from “fancy institutions” climb the ladder much faster, or even start from a higher step. Rivera opens her book by saying,

“Most Americans believe that hard work - not blue blood - is the key to success.”

Pedigree: How Elite Students Get Elite Jobs.

Rivera is a professor at Northwestern University’s Kellogg School of Management and received her Ph.D. in sociology from Harvard University. She spent around a year researching her book, working in the HR department of a major New York City consulting firm.

Rivera starts her book by pinpointing the macroeconomic environment that sets “elite” students apart from “other” students. She uses a lot of data analysis throughout the book (some of which went right over my head). The book is categorized as “vocational guidance” on Amazon! However, it doesn’t actually guide you to securing an “elite” job, so do not bother. Instead, the book is a critique of the hiring process and reveals the actions of those responsible for hiring. Its thesis is “that the way in which elite employers define and evaluate ‘merit’ in hiring strongly tilts the playing field for America’s highest paying jobs toward children from socioeconomically privileged backgrounds.”

In her last chapter, however, she makes sure to show that the hiring process isn’t completely rigged: some candidates from less affluent backgrounds were able to break the code and get hired, while other candidates from affluent backgrounds failed to get hired. But, as stated earlier, this is rare and not the norm. The author does a really good job of chronologically taking the reader through the steps of the hiring process, “from the initial decision (of firms) of where to post job advertisements to the final step, when the hiring committee meets to make final offer and rejection decisions.”

Rivera does a good job of explaining, earlier in the book, how the reproduction of elites starts at a very young age, during college: “Today, the transition of economic privilege from one generation to the next tends to be indirect. It operates largely through the education system” (3). She follows up with the sociological research of Alexandra Radford, which shows that many top-achieving high school valedictorians “from lower-income families do not apply to prestigious, private, four year universities because of the high price tags associated with these schools. Illustrating how money and cultural know-how work together, some who would have qualified for generous financial aid packages from these institutions did not apply because they were unaware of such opportunities. Others had difficulty obtaining the extensive documentation required for financial aid applications.” (5)

Since these students are unable to apply to “elite” schools, they are also unable to apply to the “elite” professional service (EPS) firms that hire only from a select number of Ivy League schools. As one of the attorneys says in the opening of the second chapter, “There are many smart people out there. We just refuse to look at them.” Why? Because these firms primarily hire from a select number of schools which are accessible to a select few. The universities are therefore the “engines of inequality”, as she says in her book.

Rivera points out that there are two methods of allocating high-status career opportunities. One is the contest system, in which competition is open to all and success is based on competence. The other is the sponsored system, in which existing elites select the winners, either directly or through third parties. The system in the U.S. is a combination of both models, according to Rivera. But which is better for a society’s institutions: a combination of both models, or strictly a contest system? It seems quite clear that a contest system would drive the most deserving and competent students to the jobs that suit them. Unfortunately, as the attorney’s remark shows, that is not what happens. Doesn’t this seem ironic in an age where shareholder value maximization is treated as the first commandment of firms? It certainly does. Where is the efficiency? It’s traded off for the sake of job “fit” and “polish”. According to Rivera’s study, more than half of the evaluators regarded fit “as the most important criterion at the job interview stage, rating it above analytical skills and polish.” But what is fit?

Rivera’s evaluators defined and measured fit as a similarity in “play styles - how applicants preferred to conduct themselves outside the office - rather than in their work styles or job skills. In particular, they looked for matches in leisure pursuits, backgrounds, and self-presentation styles between candidates and firm employees (including themselves)” (137). This definition of fit unfortunately tilts the hiring process in favor of the already dominant elite while rejecting competent and hardworking yet “unfit” candidates. Furthermore, it results in a monoculture where homophily is the norm and widely practiced. Yes, fit might generate stronger cohesion among employees, but a diverse pool of competent employees from diverse backgrounds can certainly be healthier for society, motivating employees to work harder while increasing efficiency.

The other metric Rivera mentions is polish. What is polish? According to Rivera, “interviewers in my study initially had difficulty explaining to me how they recognized and assessed polish during job interviews” (171). One banker went so far as to liken polish to pornography, laughingly saying, “You kind of know it when you see it”. The general idea of polish is that firms want to recruit employees who can maintain the reputable, luxurious and elite image of the firm they represent. One consultant, Natalie, said, “In an ideal world, you have people who are folks that you want to throw in front of a client, that you feel are professional and mature. People that you know can walk into a room full of people who are twice their age and be able to command it with self-confidence, but not too much self-confidence.” (170) Although this might be a good thing for the hiring firm, it also biases the process toward candidates from families with executives in them, who taught their children how to deal with executives and clients growing up. So besides being very arbitrary and conducive to monoculture, polish can certainly lead to inequality as well.

In conclusion, I found this book to be great at illustrating all the shortcomings of the interviewing process at elite professional service firms, and how that process unfortunately leads to more inequality in society. Furthermore, as Rivera suggests, firms need to widen their interviewing scope, not just for the sake of candidates, but for the sake of hiring smarter and more competent students: they should weigh grades over institutional prestige and extracurriculars, and hand the interviewing process to professional interviewers who can structure the interviews and detach themselves from the arbitrary metrics currently used.

References & Further Reading:

Old Boys Network

https://en.wikipedia.org/wiki/Old_boy_network

Pedigree in Tech

https://news.ycombinator.com/item?id=25486065

Insight: In the Silicon Valley start-up world, pedigree counts – https://www.reuters.com/article/us-usa-startup-connections-insight-idUSBRE98B15U20130912

Does the startup world have a Pedigree problem – https://qz.com/work/1695042/does-the-startup-world-have-a-pedigree-problem

The 10,000-hour rule

https://www.theguardian.com/science/2019/aug/21/practice-does-not-always-make-perfect-violinists-10000-hour-rule

https://www.vox.com/science-and-health/2019/8/23/20828597/the-10000-hour-rule-debunked

Can ChatGPT accelerate No-Ops to Replace DevOps in Early-Stage Startups?

Background: I work with multiple CTOs and Heads of Engineering at early-stage startups to help them set up their engineering orgs, review their product architecture, prioritise their hiring, etc. I also help multiple engineering leaders via Plato. Recently, there have been a lot of questions about whether ChatGPT can be used for this or that, and the most interesting one among them is “Can ChatGPT accelerate No-Ops to replace DevOps?”. I have answered it, almost verbatim, multiple times, but I thought that writing it down would help me with two things: I can point people to the URL, and I can clearly structure my thoughts on the topic. So this is an attempt at that.

Glossary First:

DevOps:

For the uninitiated, DevOps is a software development methodology that emphasizes collaboration and communication between developers and operations teams. The goal of DevOps is to increase the speed and quality of software delivery while reducing errors and downtime. DevOps engineers are responsible for the design, development, and delivery of software applications, as well as the management and maintenance of the underlying infrastructure, including pipelines, quality assurance automation, etc.

No-ops:

No-ops, on the other hand, is a philosophy that aims to automate and simplify the operations side of software development. The goal of no-ops is to eliminate the need for manual intervention in the deployment and maintenance of applications, freeing up time for developers to focus on creating new features and fixing bugs.

ChatGPT:

ChatGPT is a language model developed by OpenAI that has the potential to revolutionize the way we interact with technology. It can perform a wide range of tasks, from answering questions to generating text and even a rudimentary bit of coding. In the context of DevOps, ChatGPT could be used to automate many of the manual tasks that DevOps engineers currently perform, such as infrastructure management, deployment, and monitoring.

Now, the question: Can ChatGPT accelerate No-Ops to replace DevOps?

ChatGPT (or any of the other generative AIs) has the potential to automate many of the manual tasks performed by DevOps teams, and it could take over the most mundane of those tasks pretty soon. So, before writing this piece, I wanted to actually put my understanding to the test.

I had to use Bard for this experiment, but I do not think there would be much of a difference in the outcomes.

Experiment 1: Beginner-Level Task

Writing a simple Autoscaling script to scale as per CPU and memory utilisation.

Simple AutoScaling Script, written by Bard
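
Since the screenshot of Bard’s output isn’t reproduced here, the sketch below shows roughly what such a script can look like, written in Python with boto3 rather than copied from Bard. The Auto Scaling group name, the target values and the CWAgent memory metric are illustrative assumptions, not Bard’s actual output.

import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

ASG_NAME = "web-tier-asg"  # hypothetical Auto Scaling group name

# Scale on the group's average CPU utilisation (a built-in metric).
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)

# Memory is not a built-in EC2 metric; this assumes the CloudWatch agent
# publishes mem_used_percent under the CWAgent namespace.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="memory-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",
            "Namespace": "CWAgent",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
            "Statistic": "Average",
        },
        "TargetValue": 70.0,
    },
)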

Experiment 2: Intermediate-Level Task

Creation of a new VPC with 3 autoscaling groups of EC2 instances, 1 NAT gateway and VPC peering.

Results were mixed.

Though Bard did note that the output could be tweaked, there was one major gap between the ask and the outcome: the NAT gateway was supposed to be the single point of ingress/egress, whereas the generated script did something entirely different.
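
To make the gap concrete, here is a hedged sketch of the routing piece that was missing: every private subnet’s default route should point at the NAT gateway so it really is the single egress point. All IDs are placeholders, and this assumes the VPC and subnets already exist.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

VPC_ID = "vpc-placeholder"
PUBLIC_SUBNET_ID = "subnet-public-placeholder"
PRIVATE_SUBNET_IDS = ["subnet-private-a", "subnet-private-b", "subnet-private-c"]

# The NAT gateway lives in a public subnet and needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID,
                             AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point each private subnet's default route at the NAT gateway so all
# outbound traffic funnels through it.
for subnet_id in PRIVATE_SUBNET_IDS:
    rt = ec2.create_route_table(VpcId=VPC_ID)
    rt_id = rt["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id,
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)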

Now assume an early-stage startup that, encouraged by early successes like these with generative AI, decides to skip building a DevOps culture altogether. At some point the AI won’t have the context of everything that is in your infrastructure, and you could end up misfiring changes.

My submission is that ChatGPT, Bard or any generative AI can be super helpful for a good DevOps engineer, but it cannot replace her/him anytime soon. In military parlance, certain materiel is termed a force multiplier, but a force multiplier cannot be the force itself! (Aircraft carriers or tanker aircraft are prime examples.)

Why do I believe so?

There are several reasons for this:

  1. Human creativity: Despite its advanced capabilities, ChatGPT is just another AI model and lacks the creativity and innovation that a human DevOps engineer brings to the table. DevOps engineers can think outside the box and find new and innovative solutions to “Business” problems, whereas ChatGPT operates within the constraints of its programming and is simply solving the “Constraints” for that technical problem. 
  2. Human oversight: While ChatGPT can automate many tasks, it still requires human oversight to ensure that everything is running smoothly. DevOps engineers play a crucial role in monitoring and troubleshooting any issues that may arise during the deployment and maintenance of applications.
  3. Complexity: Many DevOps tasks are complex (or at the very least, the DevOps teams would want us to believe that) and require a deep understanding of the underlying infrastructure and applications. ChatGPT does not yet have the capability to perform these tasks at the same level of expertise as a human.
  4. Customization: Every organization has unique requirements for its development and deployment process. ChatGPT may not be able to accommodate these specific needs, whereas DevOps engineers can tailor the process to meet unstated requirements and the organisational and platform context.
  5. Responsibility: DevOps engineers are ultimately responsible for ensuring the success of the development and deployment process. While ChatGPT can assist in automating tasks, it is not capable of assuming full responsibility for the outcome.

In conclusion, while ChatGPT has the potential to automate many manual tasks performed by DevOps engineers, it is unlikely to replace DevOps entirely in the near future. The role of DevOps will continue to be important in ensuring the smooth deployment and maintenance of software applications, while ChatGPT can assist in automating certain tasks and increasing efficiency. Ultimately, the goal of both DevOps and no-ops is to increase the speed and quality of software delivery, and the use of ChatGPT in DevOps can play a significant role in achieving this goal.

References:

https://humanitec.com/whitepapers/devops-benchmarking-study-2023

Is NoOps the End of DevOps?

Some say that NoOps is the end of DevOps. Is that really true? To answer this question, you must first understand NoOps better.

Things are moving at warp speed in the field of software development. You can subscribe to almost anything “as a service”, be it storage, networking, computing or security. Cloud providers are also increasingly investing in their automation ecosystems. This leads us to NoOps, where you wouldn’t require an operations team to manage the lifecycle of your apps, because everything would be automated.

You can use automation templates to provision your app components and automate component management, including provisioning, orchestration, deployments, maintenance, upgrades, patching and everything in between, meaning significantly less overhead for you and minimal to no human interference. Does this sound wonderful?
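
As a rough illustration of what “automation templates” mean in practice, the sketch below hands a hypothetical CloudFormation template to the cloud provider with a single call; the stack name and template URL are made up, and any other template-driven tool would play the same role.

import boto3

cfn = boto3.client("cloudformation", region_name="eu-west-1")

# One declarative template describes the app's components; the provider
# provisions and manages them from here on.
cfn.create_stack(
    StackName="my-app-stack",  # hypothetical stack name
    TemplateURL="https://example-bucket.s3.amazonaws.com/app-template.yaml",
    Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM roles
)

# Updates, drift detection and teardown are then driven through the same
# template rather than by an operations team.
cfn.get_waiter("stack_create_complete").wait(StackName="my-app-stack")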

But is this a wise choice, and what are some advantages and challenges to implementing it?

Find out the answers to these questions, including whether NoOps is DevOps’s end in this article.

NoOps — Is It a Wise Choice?

You already know that DevOps aims to make app deployments faster and smoother, focusing on continuous improvement. NoOps — no operations — a term coined by Mike Gualtieri at Forrester, has the same goal at its core but without operations professionals!

In an ideal NoOps scenario, a developer never has to collaborate with a member of the operations team. Instead, developers use serverless and PaaS offerings to get the resources they need when they need them. This means you can use a set of services and tools to securely deploy the required cloud components (including the infrastructure and code), and NoOps leverages a CI/CD pipeline for deployment. One caveat: Ops teams are incredibly effective with data-related tasks, treating data collection, analysis and storage as a crucial part of their function. You can automate most of your data collection, but you can’t always get the same level of insight by automating the analysis.
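
To show how small the developer-owned surface becomes in this model, here is a minimal sketch of a serverless function, assuming an AWS Lambda-style handler behind an API Gateway proxy; the event shape and names are illustrative, and everything underneath the function (servers, scaling, patching) is the provider’s problem.

import json

def handler(event, context):
    # Parse the (assumed JSON) request body and return a greeting.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }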

Essentially, NoOps can act as a self-service model where a cloud provider becomes your ops department, automating the underlying infrastructure layer and removing the need for a team to manage it.

Many argue that a completely automated IT environment requiring zero human involvement — true NoOps — is unwise, or even impossible.

Maybe people are afraid of Skynet becoming self-aware!

NoOps vs. DevOps — Pros and Cons

DevOps emphasizes the collaboration between developers and the operations team, while NoOps emphasizes complete automation. Yet, they both try to achieve the same thing — accelerated GTM and a better software deployment process. However, there are both advantages and challenges when considering a DevOps vs. a true NoOps approach.

Pros

More automation, less maintenance

By automating everything using code, NoOps aims to eliminate the additional effort required to support your code’s ecosystem. This means that there will be no need for manual intervention, and every component will be more maintainable in the long run because it’ll be deployed as part of the code. But does this affect DevOps jobs?

Uses the full power of the cloud

There are a lot of new technologies that support extreme automation, including Container as a Service (CaaS) and Function as a Service (FaaS) in addition to plain serverless, and most big cloud service providers can help you kickstart NoOps adoption. This is excellent news, because Ops can ramp up cloud resources as much as necessary, enabling better capacity, performance and availability planning compared to DevOps (where Dev and Ops work together to decide where the app can run).

Rapid Deployment Cycles

NoOps focuses on business outcomes by shifting focus to priority tasks that deliver value to customers and eliminating the dependency on the operations team, further reducing time-to-market.

Cons

You still need Ops!

In theory, not relying on an operations team to take care of your underlying infrastructure sounds like a dream. In practice, you may still need one to monitor outcomes or take care of exceptions. Expecting developers to handle these responsibilities exclusively would take their focus away from delivering business outcomes and would cancel out the benefits NoOps promises.

It also wouldn’t be in your best interest to rely solely on developers, as their skill sets don’t necessarily include addressing operational issues. Plus, you don’t want to further overwhelm devs with even more tasks.

Security, Compliance, Privacy

You can abide by security best practices and align them with automated deployments all you want, but that won’t completely eliminate the need to take delicate care of security. Attack methods evolve and change every day; therefore, so should your cloud security controls.

For example, you could introduce the wrong rules for your AI or automate flawed processes, inviting errors in your automation or creating flawed scripts for hundreds or thousands of infrastructure components or servers. If you completely remove your Ops team, you may want to consider investing additional funds into a security team to ensure you’re instilling the best security and compliance methods for your environments.
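
As a hedged illustration of how automation amplifies a single flawed rule, imagine a hypothetical helper that an automated pipeline applies to every component; the security group IDs and the helper itself are made up.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def open_ssh(security_group_id: str) -> None:
    # Flawed rule: SSH is opened to the whole internet instead of a
    # trusted office/VPN CIDR range.
    ec2.authorize_security_group_ingress(
        GroupId=security_group_id,
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp="0.0.0.0/0",
    )

# Run once per component by the automation, the one mistake is now everywhere.
for sg_id in ["sg-app-placeholder", "sg-db-placeholder", "sg-cache-placeholder"]:
    open_ssh(sg_id)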

Consider your environment

Since NoOps relies on serverless and PaaS to get resources, this can become a limiting factor, especially during a refactor or transformation. Automation is still possible with legacy infrastructure and hybrid deployments, but you can’t entirely eliminate human intervention in those cases. So remember that not all environments can transition to NoOps, and you must carefully evaluate the pros and cons of switching.

So Is NoOps Really the End of DevOps?

TL;DR: No!

In detail: NoOps is not a panacea. It is limited to apps that fit into existing #serverless and #PaaS solutions. As someone who builds B2B SaaS applications for a living, I know that most enterprises still run on monolithic legacy apps, and even some of the new-gen unicorns are in the middle of refactoring/migration efforts that will require total rewrites or massive updates to work in a PaaS environment. You would still need someone to take care of operations even if there is a single legacy system left behind.

In this sense, NoOps is still a long way from handling long-running apps that run specialized processes, or production environments with demanding applications. With DevOps, by contrast, a great deal of operations work happens before code ever goes to production: releases include monitoring, testing, bug fixes, and security and policy checks on every commit.

You must have everyone on the team (including key stakeholders) involved from the beginning to enable fast feedback and to ensure automated controls and tasks are effective and correct. Continuous learning and improvement (a pillar of DevOps teams) shouldn’t only happen when things go wrong; members must work collaboratively to solve problems and improve systems and processes.

The Upside

Thankfully, NoOps fits within many of the DevOps ways of working. It is focused on learning and improvement, it uses new tools, ideas and techniques developed through continuous and open collaboration, and NoOps solutions remove friction to increase the flow of valuable features through the pipeline. In that sense, NoOps is a successful extension of DevOps.

In other words, DevOps is forever, and NoOps is just the beginning of the innovations that can take place together with DevOps, so to say that NoOps is the end of DevOps would mean that there isn’t anything new to learn or improve.

Destination: NoOps

There’s quite a lot of groundwork involved for true NoOps — you need to choose between serverless or PaaS, and take configuration, component management, and security controls into consideration to get started. Even then, you may still have some loose ends — like legacy systems — that would take more time to transition (or that you can’t transition at all).

One thing is certain, though, DevOps isn’t going anywhere and automation won’t make Ops obsolete. However, as serverless automation evolves, you may have to consider a new approach for development and operations at some point. Thankfully, you have a lot of help, like automation tools and EaaS, to make your transition easier should you choose to switch.
