
8 Cognitive Biases in Software Development


Bad news: you’re in the meeting room.

You’re listening to your team leader as she proposes the solution she has in mind for the next feature or product you’re working on. Dave, your fellow developer, will have to implement it.

“Great solution! Pretty easy to implement!”, he suddenly claims. “I only need to change two fields in the database, modify two or three existing features, plug everything back together, and that’s all!”

He gives his estimate: two hours will be enough. “I’ve done similar changes often enough!”, he adds.

Unfortunately, Dave might have fallen into multiple cognitive biases in only three minutes. Have you ever created bugs or written nonsense because your certainties and self-confidence led you to wrong assumptions? I surely have.

This article is about the mistakes caused by the most common and most discussed cognitive biases in software development. We’ll talk about:

  • What cognitive biases are and how they can pop up in our work.
  • Different techniques to prevent these biases from clouding our judgment.
  • The optimistic bias.
  • The overconfidence bias.
  • The confirmation bias.
  • Wishful thinking.
  • The anchoring bias.
  • The bandwagon effect.
  • The cargo cult.
  • The correspondence bias.

The biases I chose to write about are mainly based on this study and my own experience.

Why is it important? Development is not about languages and technobabble; it’s about problem solving. When we solve problems, our assumptions can mislead us, and cognitive biases will bring us plenty of wrong ones on a platter. The result: we come up with wrong solutions and get lost on the wrong path. The only condition needed to fall for these biases: being human.

Depending on the project, the impact of these biases spans a whole spectrum, from insignificant to highly detrimental.

Let’s not wait any longer: we’ll now venture into the Land of the Biases, to see how they appear in our lives and what we can do to avoid them.

What’s a Cognitive Bias?

Let’s open the sacrosanct Cambridge Dictionary to find out what these weird beasts are:

The way a particular person understands events, facts, and other people, which is based on their own particular set of beliefs and experiences and may not be reasonable or accurate.

As I explain in my article about the expert blind spot, as well as in my article about abstraction, it’s easier for our brain to reason about things by cutting details out. But it’s not easy to throw away only the insignificant details; sometimes, our past beliefs and experiences are not adequate for the situation at hand.

Everyone takes the limit of his own vision for the limits of the world.

Schopenhauer

There is no definitive codex of cognitive biases, and you can’t catch them with Pokéballs. Looking at psychology, sociology, and management studies, we can roughly count more than 200 of them.

That said, it’s important to keep in mind that there is a lack of studies about interconnected biases; some cognitive biases are similar to others, and some might only appear when another one is already in action.

Be aware as well that there is a serious lack of consistency across research papers: depending on the study, the same biases are called by different names, even when they refer to the same ideas.


How can we avoid these biases? We could think that pointing out a bias to somebody will solve the problem, but it has been shown that blaming people for their biases (or even showing them their misconceptions) won’t help. Instead, we should link a bias to a specific context rather than to a specific human being.

Debiasing techniques are no silver bullets either. There is no strong evidence of their effectiveness: we lack studies about them. But they are still useful guidelines pointing us in the right direction. We can treat them as experiments and see if they improve our decision-making processes.

The Optimistic Bias and the Overconfidence Bias

The optimistic bias is the tendency to be unrealistically optimistic about events happening around us. It’s a common and well-documented bias in software development. It can be linked to the overconfidence bias, which makes us overly confident in our own skills and talents.

What Does Optimism Look Like?

Have you ever heard, during your career, a fellow developer claim, without even checking the codebase, that some new feature should be easy to implement and won’t take much time? If you have a bit of experience as a developer, I’m sure you have. How many of these estimates have proven to be wrong? How many times did the work take longer than estimated? Often. Very often.

We have a tendency to be very optimistic when estimating tasks. In fact, some studies show that technical people, including developers, fall into the optimistic bias trap even more than anybody else. The cherry on the cake: we even tend to be optimistic about hard tasks and pessimistic about easy ones. This is known as the hard-easy bias.

Optimistic biases will also reveal themselves more easily when the task is abstract and far in the future. After all, we all have difficulties projecting ourselves far into the future. Even when we try, we’re often wrong.

This bias can be detrimental to our estimations as well as to our understanding of a task’s requirements. The malicious bias can whisper in our ears that we understand everything, even if we never looked at the details involved in carrying out the task.

This can lead to an incorrect understanding of the feature to implement and, as a result, to a wrong implementation.

Debiasing the Optimistic Bias

Asking Directed Questions

To find out if an estimate is too optimistic or to be sure we really understand everything, we can ask directed questions to our colleagues:

  • Do you see anything which might cause a problem?
  • Do you see any reason why your solution might be wrong?
  • Did you think about dependent features coupled to the one we’re modifying?
  • Can we look at the codebase before giving our estimate?

These questions turn optimistic affirmations on their heads. They force people to reverse their way of thinking, allowing everybody to consider the difficult parts of the problem instead of only the easy ones.

Optimism is an occupational hazard of programming; feedback is the treatment.

Kent Beck

Double Loop Learning

Questioning our initial assumptions, or our colleagues’, is always a good thing to do, as long as we do it for the right reasons. The goal is not to prove ourselves right, but to avoid biases.

The double loop learning approach does exactly that. Basically, you try to frame the problem differently, using another mental model than the primary one.

Let’s take an example. One of your colleagues tells you that changing the name of every table in your wonderful database is an easy task which will take one hour. After all, it’s only changing names! But instead of thinking about changing names, let’s think about the database itself. How many tables do we have? Where are these tables used in the source code? Are their names hard coded?

We shifted our mental model: we were thinking about a bunch of names; now we think about the important details, namely where the database is used and how. It can drastically change the difficulty of the task and its estimation!
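
To make the shift concrete, here is a minimal sketch of the kind of check we could run before committing to the one-hour estimate. Everything in it is hypothetical: the table names, the source layout, and the naive matching.

    import pathlib
    import re

    TABLES = ["users", "orders", "invoices"]  # hypothetical table names

    def count_hardcoded_references(root="src"):
        """Count string literals containing each table name under root."""
        counts = {table: 0 for table in TABLES}
        for path in pathlib.Path(root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            for table in TABLES:
                # Naive match: the table name between quotes.
                counts[table] += len(re.findall(rf"['\"]{table}['\"]", text))
        return counts

    print(count_hardcoded_references())

If the counts come back high, “it’s only changing names” suddenly looks very different.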

Smaller Tasks Are Easier

Estimating future features is hard. Estimating huge features full of details and complexity is harder. I’ll never stress it enough: if your features are big, break them down and push these small chunks to production as often as you can. You’ll see if everything still works as expected. If the users shouldn’t see tiny bits of features yet, just hide these changes from them, for example behind a feature flag, as sketched below.
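
A minimal sketch of such a feature flag, with hypothetical names, could look like this:

    FLAGS = {"new_invoice_layout": False}  # flip to True when the chunk is ready

    def render_legacy_layout(invoice):
        return f"Invoice #{invoice['id']} (legacy layout)"

    def render_new_layout(invoice):
        return f"Invoice #{invoice['id']} (new layout)"

    def render_invoice(invoice):
        # The half-finished chunk ships to production but stays hidden.
        if FLAGS["new_invoice_layout"]:
            return render_new_layout(invoice)
        return render_legacy_layout(invoice)

    print(render_invoice({"id": 42}))  # users still see the legacy layout

The small chunks reach production early, while the users only see the change when the flag is flipped.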

Estimations will become way easier, you’ll learn the joys of continuous deployment, you’ll get quick feedback on your possibly over-optimistic estimations and features, and everybody will live happily until the end of time.

Logging Estimations

I tried an experiment not long ago: logging my estimations. Each time I estimate a task, I write it down in my development journal. I did the same for my colleagues’ estimates, too.

As expected, we were all very optimistic. It led me to always add 50% on top of my first estimation: two days of work becomes three days.
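
A minimal sketch of such a log, with made-up numbers, and the correction factor you can derive from it:

    # Each entry: what I estimated versus what the task actually took.
    log = [
        {"task": "rename tables", "estimated_h": 2, "actual_h": 5},
        {"task": "new endpoint", "estimated_h": 8, "actual_h": 11},
        {"task": "fix CSV export", "estimated_h": 3, "actual_h": 4},
    ]

    ratio = sum(e["actual_h"] for e in log) / sum(e["estimated_h"] for e in log)
    print(f"Tasks took {ratio:.2f}x their estimates on average")
    # With a factor around 1.5, two days of work becomes three days.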

I would encourage you to log your estimations as well, and see for yourself if your team is overly optimistic too.

Another idea: each time your estimations are wrong, ask yourself why. Did you forget some other features tied to the one you modified? Were you interrupted too much over the last few days? Finding the reasons can help you; you can then try to fix the problems you might have faced.

Confirmation Bias and Wishful Thinking

Once you start looking for the confirmation bias, you see it everywhere.

The confirmation bias is another well-known and well-studied bias, very present in our modern lives. We have the tendency to only pay attention to information confirming our existing beliefs and to ignore information challenging them. Need I say that it’s not the best approach to improve our thinking?

This bias is similar to wishful thinking, which pushes you to favor pleasing information instead of confronting reality.

The Confirmation Bias in Action

Let’s say you firmly believe that inheritance has been a pillar of OOP from the very beginning. One of your colleagues argues that it’s not the case: inheritance wasn’t accepted that easily, and it’s still a source of debate today.

To prove to your colleague that you’re right, you Google “inheritance pillar of OOP” and find the first results confirming your existing belief. You’re right! You win the argument! Congratulations. Inheritance has always been a pillar of OOP, and always will be!

But your colleague is right: Alan Kay, considered one of the fathers of OOP, didn’t want to implement inheritance in the first version of Smalltalk. He didn’t trust the concept. Inheritance has been the subject of hot debates ever since.

You’re not totally wrong either: inheritance is still considered by many as a pillar of OOP, mainly because the concept has been dumbed down over the years. Yet, doing a quick search only to confirm your opinion prevents you from seeing the whole picture.

You fell into the trap of the confirmation bias.

Another example: if you look at the unit tests of a given project, they will often test how the code should work instead of trying to catch possible mistakes and failures. Confirming that the code does what it was intended to do is good enough for us, most of the time. But it’s not enough.

Debiasing the Confirmation Bias

Testing For Failures

You already guessed it: we should try to find the problems which could pop up instead of only testing the happy path. Again, we should flip our way of thinking. For the folks who don’t like automated tests: testing manually is even worse, since you’ll bring a lot of biases each time you do it. And if you think everybody writes automated tests nowadays, you’re unfortunately wrong.
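
Here is what flipping our thinking can look like in a test suite. A minimal sketch using pytest; parse_age and the tests are hypothetical:

    import pytest

    def parse_age(value):
        age = int(value)  # raises ValueError on non-numeric input
        if age < 0:
            raise ValueError("age cannot be negative")
        return age

    def test_happy_path():
        # Confirms what we already believe the code does.
        assert parse_age("42") == 42

    def test_rejects_negative_age():
        # Flipped thinking: actively hunt for ways the code can fail.
        with pytest.raises(ValueError):
            parse_age("-1")

    def test_rejects_non_numeric_input():
        with pytest.raises(ValueError):
            parse_age("forty-two")

The happy path only confirms our beliefs; the last two tests look for the failures our confirmation bias would rather ignore.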

Traceability

We can try to trace the bugs in our application using a development journal. Then we can look for patterns and see if multiple bugs are the result of more general, broader assumptions created by our confirmation bias. After all, when we have misconceptions, we’re likely to repeat them in multiple parts of our code.

Let’s take an extreme example: your colleague thinks that users are always honest and well-meaning, and he managed to find on the Internet that yes, most of them won’t try to break your software. He only searched for information confirming his opinion. As a result, you see an input in your application which is not sanitized. Knowing that this can be the result of a bias, likely the confirmation bias, you search for more. Bingo! Not one but two, three inputs are potential back doors.
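
To make the back door concrete, here is a minimal sketch (with a hypothetical schema) of why an unsanitized input matters, and how a parameterized query closes the hole:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"

    # Trusting the user: the input is spliced into the SQL (injectable).
    unsafe = conn.execute(
        f"SELECT * FROM users WHERE name = '{user_input}'"
    ).fetchall()

    # Distrusting the user: the driver escapes the parameter.
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print(unsafe)  # every row comes back: the injection worked
    print(safe)    # nothing comes back: no user has that literal name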

At that point, you should speak with your colleague. Try to find out why he did what he did, and teach him what you know. Explain why it’s bad in this context, not that he’s bad for doing it that way. You’ll fix the root problem instead of fixing the symptoms.

No More Assumptions

In general, we should strive to find good, provable justifications for every assumption we make. First, we need to recognize that something is possibly an assumption; often, we take assumptions for granted, without asking ourselves more questions.

Then, try to find what could make the assumption wrong.

If you do it for yourself, you’ll do it for others too, bringing good arguments backed by enough research and data. This might improve code and processes dramatically.

The Anchoring Bias

The anchoring bias appears when our opinions are only based on the initial information we get, no matter if everything else proves us wrong. It’s the last of the three most studied biases in software development.

A Developer Anchored

You are in a feature meeting. Again. Don’t worry, it will be over soon. Your project manager shows you and your colleagues a pretty complex feature to implement, and throws the usual question to the audience:

“How long will it take to implement? Two weeks?”

Whatever the task and its complexity, studies show that the final estimate everybody is likely to agree on will be two weeks. This is a textbook case of the anchoring bias: we’re influenced by the first piece of information our brain receives, without asking ourselves if analysing the task a bit more could change it.

Let’s take another example, in a codebase this time. You might follow practices and patterns everybody uses in your codebase, even if you know they are mediocre, messy, or plainly wrong. You’ll reuse bad queries and copy-paste poorly designed code, pushing your beloved application into the Damned World of Legacy Code.

Coherence is only good if what’s coherent makes sense. Even as the requirements and the codebase change, your mental model can stay full of old and outdated ideas when you’ve been involved in a legacy project for too long.

Debiasing the Anchoring

Model-Based Forecasting

Estimation, when you think about it, is a special case of forecasting. There are two types:

  1. Expert-based forecasting (based on human judgments)
  2. Model-based forecasting (based on mathematical models)

Since biases are a human problem, mathematics can help debias them. That’s why model-based forecasting can be considered a valid tool to go beyond our biases. But this kind of forecasting requires a lot of past data, which is often not available in software projects.
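
When the data does exist, though, the model can be embarrassingly simple. A minimal sketch with made-up numbers: fit a line through past tasks and predict the next one, instead of relying on gut feeling.

    # Past tasks: size in story points versus hours actually spent.
    past_points = [1, 2, 3, 5, 8]
    past_hours = [3, 5, 9, 16, 27]

    # Ordinary least squares by hand: hours = slope * points + intercept.
    n = len(past_points)
    mean_x = sum(past_points) / n
    mean_y = sum(past_hours) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(past_points, past_hours))
    var = sum((x - mean_x) ** 2 for x in past_points)
    slope = cov / var
    intercept = mean_y - slope * mean_x

    new_task_points = 5
    print(f"Forecast: {slope * new_task_points + intercept:.1f} hours")

No anchor, no optimism: just the historical data speaking.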

Questions Are Not Made to Influence People

Another way to debias anchoring in estimations: ask your fellow developers to focus not on the estimate itself but on the task. Instead of asking “How much of X can you do in two weeks?”, we should ask “How much time would it take to make X?”. We should be aware that the way we communicate and phrase our questions can influence the people around us.

As an aside, the anchoring bias is often used intentionally in surveys. That’s one of the reasons many don’t consider surveys a reliable source of information, even though well-made surveys are. For example, in some political surveys, questions will try to influence your point of view by implying some truth which might not be so true.

In short, trying not to influence people when asking questions is always a good idea. Point the bias out if somebody tries to influence you; again, point at the situation rather than the person. For example, we shouldn’t say “you always try to influence us!”, but rather “I think saying X will influence us. Let’s try to be neutral. What do you think?”.

After all, if you try to influence people when asking a question, you’re only trying to confirm your own opinion. The Return of the Confirmation Bias™.

Multiple Opinions Are Better Than One

Anchoring can be debiased when multiple opinions are given at the same time. That’s exactly the purpose of good old planning poker: everybody gives their estimate at the same time so as not to influence the others.
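
The core of the exercise fits in a few lines. A minimal sketch, with hypothetical names and numbers: collect the estimates independently, reveal them together, and discuss the spread before anyone anchors on a number.

    # Estimates in days, written down before anyone speaks.
    estimates = {"Alice": 3, "Bob": 8, "Dave": 2}

    revealed = sorted(estimates.items(), key=lambda item: item[1])
    (low_name, low), (high_name, high) = revealed[0], revealed[-1]
    print(f"Spread: {low_name} says {low}d, {high_name} says {high}d")
    # A large spread signals hidden assumptions: discuss them before
    # agreeing on a final number.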

In the codebase itself, you can try to think of several solutions for implementing or fixing something. Then you can ask yourself:

  • Which solution looks the best?
  • Is your solution merely following the bad practices you saw earlier?

Different people can also ask those responsible for new features why they think the feature will bring value to the end user. In general, sharing opinions prevents a single person from using the first (or last) piece of information or experience as an anchor.

The Bandwagon Effect and the Cargo Cult

The bandwagon effect and the cargo cult are biases which have been less studied in the software development world. Nonetheless, I’ve seen them pretty often, so I include them here. The bandwagon effect pushes us to align our opinions with the majority’s opinion, or what we perceive as the majority’s. It’s similar to the cargo cult, which pushes us to blindly follow what others do.

Jump Off The Bandwagon!

You are again in a meeting.

Your team leader is arguing for the benefits of replacing the REST APIs used by your fancy microservices with GraphQL. She’s demonstrating at length the potential technical benefits for the whole company. You feel the warmth of her passion coming into your heart. Your colleagues seem to feel the same. She’s so right! What a charismatic leader you have!

Unfortunately, it’s likely that you’re a victim of the bandwagon effect. Indeed, your team leader didn’t really prove the business value of her idea, only its technical benefits. Will the customers notice, or even care? Will it bring more time, customers, or money?

Additionally, the fact that your team leader wants to use GraphQL is attractive in itself: it’s a new technology, and we like to experiment with new toys.

Have you been in this situation before? I certainly have. Many times.

Debiasing The Bandwagon Effect

Value is Everything

Let’s be clear on this one: we develop software to support a business, most of the time. There is no point in using the new trendy technology if it doesn’t bring any value to the business itself.

That’s why we should always ask:

  • What is the value of this idea?
  • How will it bring new customers, time, or any other benefit?
  • Do the benefits outweigh the costs?

“Senior” Developers Are Not Better Than You

Whatever the titles of the people proposing what seems like the best idea ever, they’re not better than you.

They might have more experience, more knowledge, and a bigger paycheck, but you still have some qualities they don’t. If you’re a total beginner, for example, you won’t look at problems through the misconceptions accumulated over years of experience. That’s not a small quality.

Don’t trust people’s opinions because of their background. Always try to separate the solution from the person proposing it, and, as honestly as you can, assess the good, the bad, and the ugly. If these people are leaders, this might be even more true: managers don’t necessarily have time to assess technical solutions. They often have many other responsibilities and things to do.

Again, trying to invert the ideas can be beneficial here too. Ask yourself what would happen if you didn’t follow the majority’s opinion. If you find good arguments that the solution is not worth implementing, go ahead and defend your own idea with solid arguments and, if you can, concrete data.

Even if you’re proven wrong, you’ll always learn something. That’s always a positive result.

The Correspondence Bias

The correspondence bias pushes us to explain behaviors and results by personality traits, even when these behaviors are mostly the result of the environment.

Blaming Others, Excusing Ourselves

You’re not in a meeting anymore! How great!

Instead, you’re at your favorite computer, coding in your little bubble. Suddenly, you see some ugly, crappy code. Obviously, you run the good old git blame everybody loves, and the name of the culprit appears on your screen.

It’s Dave, your colleague developer.

Of course it was! Dave is the worst. He’s not careful, doesn’t care, and doesn’t think about what he’s doing. You would have done way better than him!

You take a deep breath, try to remember the precious advice of your favorite trendy yoga-meditation teacher to calm your anger, and you continue coding. Unfortunately, you bump into another weird piece of code: a horrible hack made to avoid the pain of refactoring a big chunk of code.

“Dave again!” you whisper like a snake, full of hate. But this time, git blame tells a different story. You wrote this code. You’re the one responsible.

Shame begins to fill your soul. Questions begin to fly around in your mind: “Why did I do that? Am I a bad developer? Am I like Dave?”

Without warning, it strikes you: of course it wasn’t really your fault! The deadline was tight, you didn’t have enough time, you felt sick, your partner yelled at you this morning, and your dog was at the vet. On these reassuring thoughts, you continue your work as if nothing happened. You write a note on a little sheet of paper to speak with Dave; his ugly code needs a fix!

This was a showcase of the correspondence bias: you explained Dave’s crappy code by the way you perceive him, while you excused your own using the context you were in when writing it.

Debiasing the Correspondence Bias

The environment very often explains the mistakes and wrong decisions people make. It doesn’t matter whether it’s you, Dave, another colleague, or your grandmother. If you see some bad code written by one of your colleagues, simply talk to them. Why did they do that? What context were they in at that point in time?

Blame never helped anybody or solved anything. Try to assess the problem and work on finding a solution. Does Dave lack some knowledge? Try to teach him what he doesn’t know. Was the context too stressful? Why? Can somebody prevent stress from creeping in again?

At least, reassure poor Dave that he’s not a failure. Try to refactor his code with him. You might learn something along the way.

Paranoia Is Not The Solution

Biases pop into our lives all the time, and we often fall into their traps. Quickly assess your ideas and opinions with a cool head, try to find the mistakes brought by some “common wisdom”, be very wary when people justify their opinions as “normal” or “obvious”, and try to find the solutions which still seem correct after further analysis.

In short, always be aware of and careful about what you think you know, what you don’t know, the context you’re in, and the context others are in. That being said, there is no need to become paranoid and fearful. Sometimes, it’s not worth discussing ideas and decisions in infinite meetings.

Knowing how and when to discuss your ideas is important, but at some point you need to start coding or, more generally, acting instead of speaking. You can still prove (or disprove) your point along the way. Before pushing your code to production would be nice.

It’s time to summarize. What did we learn in this article?

  • We all fall into the traps of cognitive biases. Being aware of the most common ones will give you an edge in avoiding them.
  • Don’t blame people for their biases; speak about a bias in a precise context without blaming anybody.
  • The most studied biases in software development are the optimistic bias, the confirmation bias, and the anchoring bias.
  • The bandwagon effect, the cargo cult and the correspondence bias are also frequent. They can have disastrous effects on software projects.
  • You can use these powerful techniques to debunk your biases or your colleagues’:
    • Be careful when you make quick decisions, and take time to analyse the problem further. A couple more minutes can be enough.
    • Play Devil’s advocate for important decisions and actions.
    • Ask yourself who can influence your judgments easily.

In general, don’t hesitate to experiment with different ways of thinking. Try to look at ideas from different angles. Trying to improve our thinking is important for us knowledge workers.
