Bad news: you’re in the meeting room.
You can distinctly see your team leader proposing the solution he has in mind for the next feature of your startup’s disruptive product.
“Great solution! Pretty easy to implement!”, you think. You only need to change two fields in the database, modify two or three existing features, and that’s all!
You give your estimate. Only two hours. After all, you’ve done that so often. This feature is no match for your impressive skills! You will crush it, and everybody will praise you for generations.
Not so fast! I’m sorry, but you might have fallen into multiple cognitive biases, in only three minutes.
Have you ever created bugs or written nonsense because your certainties and your self-confidence gave you wrong assumptions and wrong solutions? Don’t lie to me. I know you have. I have, too.
If not, you might not have a lot of experience yet. That’s fine. Be patient. It will happen.
This article is about our mistakes, caused by the most common and discussed cognitive biases in software development.
On today’s menu:
- We’ll learn what cognitive biases are and how they can pop up in our work as software developers.
- We’ll study in more detail:
- The optimistic bias
- The overconfidence bias
- The confirmation bias
- Wishful thinking
- The anchoring bias
- The bandwagon effect
- The cargo cult
- The correspondence bias
- We’ll see different techniques to prevent these biases, to make more logical decisions.
Why is it important? If you think that programming is about languages and technobabble, you’re wrong. It’s about problem solving. When you solve problems, assumptions will mislead you. Cognitive biases will bring you a lot of them, on a platter.
The result: you’ll come up with wrong solutions. You’ll get lost on the wrong path. It’s not only about you, but about everybody else, too. You only need to be human.
Depending on the project, the impact of biases can be completely different, from insignificant to dangerous for the survival of the project itself.
Let’s not wait any longer. Come with me, I will show you what our enemy looks like, and how to bring it down with a sharp mind. Bim.
What’s a Cognitive Bias?
Let’s open the sacrosanct Cambridge dictionary to find out:
The way a particular person understands events, facts, and other people, which is based on their own particular set of beliefs and experiences and may not be reasonable or accurate.
As I explain briefly in my article about the expert blind spot, it’s easier for our cognitive system to reason about things by cutting details out, using what we already know. However, these details might be important, and our past beliefs and experience might not be adequate for the situation at hand.
Everyone takes the limits of his own vision for the limits of the world.
There is no definitive codex of cognitive biases, but we can roughly count more than 200 of them in psychology, sociology, and management research.
Keep in mind, however, that there is a lack of studies about interconnected biases, which means that some cognitive biases will be similar to one another. Plus, some of these biases might be more frequent when another one is already in action.
Be aware as well that there is a serious lack of consistency in the research papers. Depending on the study, biases are called by different names, even if they refer to more or less the same ideas.
The biases I chose to write about are mainly based on this study and my own experience. For each of them, we’ll see:
- The biases themselves
- In what contexts they can appear
- Some techniques to fight them (debiasing techniques).
The most evident of these debiasing techniques is to point out the biases when somebody makes a mistake; however, it has been shown that blaming people for their biases, or showing them that they, personally, are victims of biases, won’t help.
Instead, try to show the biases creeping into specific tasks or situations.
This article can be useful as well for you to learn how to fight your own biases.
Debiasing techniques are no silver bullets. There is no strong evidence of their effectiveness: we lack studies about them. However, they are still useful guidelines pointing you in the right direction. See them as experiments you can try, to make better, more logical decisions.
The Optimistic Bias and the Overconfidence Bias
The optimistic bias is the tendency to be unrealistically optimistic about events happening around us. It’s a common and well-documented bias in software development.
It can be linked to the overconfidence bias, which makes us utterly optimistic about our own skills and talents.
What Does Optimism Look Like?
During your career, did you ever hear one of your fellow developers claim, without even checking the codebase, that some random new feature should be easy to implement and won’t take much time?
If you have a bit of experience as a developer, I’m sure you did. Developers have a tendency to be optimistic when asked to estimate tasks.
In fact, studies show that technical people, including developers, fall into the optimistic bias trap even more than anybody else!
How many of these estimates have proven to be wrong? How many times did the work take more time than estimated? Yep. Too many.
The cherry on the cake: people even have a tendency to be optimistic about hard tasks, and pessimistic about easy ones. This is known as the hard-easy effect.
The optimistic bias will also kick in more easily if the task is abstract and far in the future. It makes sense: we all have difficulties projecting ourselves too far into the future. Even when we try, we’re often wrong.
This bias can be detrimental to estimations and to understanding the requirements of a task. The optimistic bias can whisper to you that you understood everything, even if you didn’t spend enough time to really assess all the nuances of the task.
This can lead to a superficial, or even incorrect, understanding of a situation, which will bring wrong implementations.
Debiasing the Optimistic Bias
Ask Directed Questions
To find out if an estimate is too optimistic, or to be sure you really understood everything, you can ask directed questions to your colleagues:
- Can you see any reason why your solution might be wrong?
- Do you see anything which might cause problems?
- Can you think about dependent features which are coupled to the one we modify?
- Can we look at the codebase before giving our estimation?
These questions turn the optimistic affirmations on their heads. They force people to reverse their way of thinking, and to think about the difficult parts instead of the easy ones.
Optimism is an occupational hazard of programming; feedback is the treatment.
The Double Loop Learning
Questioning your initial assumptions, or the ones from your colleagues, is always good, if it’s done for the right reasons. The goal is not to be right whatever the cost, but to avoid biases.
The double loop learning approach does exactly that. Basically, you try to frame the problem differently, with another mental model than the primary one used.
Let’s take an example. One of your colleagues tells you that changing the name of every table in your wonderful database is an easy task which takes one hour. After all, it’s only changing names!
You could easily agree that it will be done quickly. You’re a senior developer, you know a lot about programming, and you want people to know that, for you, changing a bunch of names is easy.
However, instead of thinking about simply changing names, let’s think about the database itself. How many tables does it have? Where are these tables queried in the source code, with their names hard-coded?
Changing the mental model, from thinking about a bunch of names to thinking about the database itself and where it’s queried, can drastically change the difficulty of the task, and its estimation!
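Before answering “one hour”, you could even let the codebase do the talking. Here is a minimal sketch of that check, assuming hypothetical table names and a Python codebase; the function names are made up for illustration:

```python
import re
from pathlib import Path

# Hypothetical table names we plan to rename.
TABLES = ["users", "orders", "invoices"]

def find_hardcoded_tables(src_dir: str) -> dict:
    """Map each table name to the (file, line) pairs mentioning it."""
    pattern = re.compile(r"\b(" + "|".join(TABLES) + r")\b")
    hits: dict = {}
    for path in Path(src_dir).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            match = pattern.search(line)
            if match:
                hits.setdefault(match.group(1), []).append((str(path), lineno))
    return hits
```

If this little scan returns two hundred hits spread over fifty files, the “one hour” estimate debiases itself.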
Smaller Tasks Are Easier
It’s true that estimating future features is hard. Estimating features which have 13647 sub-tasks and 82372 nuances is harder.
I will never stress it enough: if your features are big, break them down into the smallest units you can, and push these units to production as often as you can. Even if the customers shouldn’t see a tiny bit of a feature yet, you can put a switch on it and hide it from their eyes.
Estimations will become way easier, you’ll learn the joys of continuous deployment, you’ll get quick feedback on your possibly over-optimistic estimations, and everybody will live happily for the rest of their lives.
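The “switch” mentioned above can be as simple as a feature flag. Here is a minimal sketch; the flag name and in-memory store are hypothetical, and real projects often read flags from configuration or a dedicated service:

```python
# Hypothetical flag store: the half-finished feature ships to production,
# but stays hidden until the flag is flipped.
FLAGS = {"new_invoice_layout": False}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def render_invoice(invoice_id: int) -> str:
    if is_enabled("new_invoice_layout"):
        return f"new layout for invoice {invoice_id}"
    return f"old layout for invoice {invoice_id}"
```

Flipping the flag in a staging environment lets you test the unfinished unit without the customers ever seeing it.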
Logging the Estimations
I tried an experiment not long ago: logging my estimations. Each time I estimated a task, I would write it down in my development journal. I did the same for my colleagues’ estimations, too.
As expected, in the company I was working for, we were all very optimistic. It led me to always add 50% on top of my first estimation. Two days of work became three days.
I would encourage you to log your estimations too, and see for yourself if you are overly optimistic.
Another good idea: each time your estimations are wrong, ask yourself why. Did you forget some other features tied to the one you modified? Were you interrupted too much during the last few days?
Finding the reasons can help you fix, next time, the efficiency problems you might have faced.
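Such a journal doesn’t need to be fancy. Here is a sketch of how I compute my personal correction factor from it; the tasks and numbers are made up for illustration:

```python
# A tiny estimation log, as you could keep in a development journal.
# Entries are (task, estimated_hours, actual_hours).
LOG = [
    ("rename tables", 2, 6),
    ("add CSV export", 8, 10),
    ("fix login bug", 1, 3),
]

def optimism_factor(entries) -> float:
    """Average ratio of actual time over estimated time."""
    return sum(actual / estimate for _, estimate, actual in entries) / len(entries)
```

With the numbers above the factor is about 2.4, meaning gut estimates should roughly be doubled before being announced in a meeting.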
Confirmation Bias and Wishful Thinking
Once you start looking for confirmation bias you see it everywhere.
The confirmation bias is another well-known and well-studied bias, very present in our lives in general. Because of it, you’ll pay attention to sources of information which confirm your existing beliefs, and ignore the ones which challenge them.
This bias is close to wishful thinking, which pushes you to favor pleasing information instead of confronting reality.
The Confirmation Bias in Action
Imagine you firmly believe that inheritance has always been a pillar of OOP, from the very beginning. One of your colleagues argues that it’s not the case: inheritance wasn’t accepted de facto as something which should be implemented at the beginning of OOP.
Easy peasy lemon squeezy! You Google “inheritance pillar of OOP” and find that the first results go your way.
Your conclusion: you were right! You won the argument. Inheritance has always been a pillar of OOP, and will always be! Nobody’s contesting it! You’re the King of the Objects!
However, your colleague is right too: Alan Kay, considered the father of OOP, didn’t want to implement inheritance in the first version of Smalltalk. He simply didn’t like the concept.
You’re not totally wrong either: inheritance is still considered by many as a pillar of OOP. Yet, doing a quick search only to confirm your opinion prevents you from seeing the whole reality.
You fell into the confirmation bias’ trap.
Another example: if you look at the unit tests of a given project, they will often verify that everything goes as expected, instead of trying to catch mistakes and possible failures. Again, confirming that the code does what it was intended to do was good enough for the developers. They only wanted to confirm that their code is right.
Manual tests, which are adulated by many developers too scared to write any kind of automated test, are even more subject to the confirmation bias. Let’s face it: nobody likes to test manually, even less so for failing scenarios. It takes time, especially since you need to retest everything each time a piece of code changes.
That’s why manual tests often only cover the best possible scenarios, not the ones which break your application.
Debiasing the Confirmation Bias
Tests For Failures
You already guessed it. Ask yourself, or ask your colleagues, to find evidence of problems instead of evidence that everything works. Again, it’s about flipping your way of reasoning.
Regarding tests, you need to write automated tests, test the complex parts of your code, and the moving parts which might break or create errors. Test for failures, not only the way the code is intended to work.
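Concretely, this is the difference between the two halves of the sketch below. The `parse_age` function and its bounds are hypothetical examples; the point is the second group of assertions, the ones the confirmation bias tempts us to skip:

```python
def parse_age(raw: str) -> int:
    """Parse a user-supplied age, rejecting nonsense values."""
    age = int(raw)  # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

def rejects(raw: str) -> bool:
    """True if parse_age refuses the input."""
    try:
        parse_age(raw)
        return False
    except ValueError:
        return True

# The happy-path check most of us write first:
assert parse_age("42") == 42

# The failure checks that actually try to break the code:
assert rejects("not a number")
assert rejects("-3")
assert rejects("9000")
```

If your test suite contains only the first kind of assertion, you’re confirming your beliefs, not testing your code.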
Using your development journal again, you can try to trace the bugs in your application. Look for patterns, and see if multiple bugs are the result of more general, broader assumptions created by the confirmation bias.
After all, when you have a misconception, you’re likely to follow it in multiple parts of your code.
Let’s take an extreme example: your colleague thinks that users are always honest and well-meaning, and he managed to find on the Internet that yes, most of them won’t try to break your software. He only searched for information confirming his opinion.
Later, you see an input in your application which is not sanitized. Knowing that it can be the result of a bias, likely the confirmation bias, you search for more. Bingo! Not one, but two, three inputs are potential back doors.
Because of your colleague’s assumption, you should definitely speak with him. Try to find out why he did what he did, and teach him what you now know. Instead of artificially fixing some bugs, you’ll fix the root problem.
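As a side note, the classic fix for such an unsanitized input is a parameterized query. A minimal sketch with Python’s built-in sqlite3 module; the schema and the hostile string are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user(name: str) -> None:
    # The "?" placeholder makes the driver treat the value as plain data,
    # so hostile input cannot change the meaning of the query.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

# A user who is not so honest and well-meaning:
add_user("Robert'); DROP TABLE users; --")
```

The table survives, and the hostile string is stored as an ordinary value, which is exactly what string concatenation into the SQL would not have guaranteed.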
No More Assumptions
In general, always try to find good, provable justifications for every assumption you make.
First, you need to notice that something is possibly an assumption. Assumptions are often beliefs you can’t really justify. We often take them for granted, without asking ourselves questions about them.
Then, try to find what could make the assumption wrong.
If you do it for yourself, you’ll do it for others too, bringing good arguments backed by enough research and data. This might improve code and processes dramatically.
The Anchoring Bias
The anchoring bias occurs when you fix your opinion on the initial information you get, again without further thinking or research.
The anchoring bias is the third and last most studied bias in software development.
A Developer Anchored
You are in a feature meeting. Again. Don’t worry, it will be over soon.
Your project managers show you and your colleagues a pretty complex feature to implement, and throw the usual question to the audience:
“How long will it take to implement? Two weeks?”
Whatever the complexity, studies show that the estimations are likely to be close to two weeks.
This is a textbook case of the anchoring bias: you’re influenced by the first bit of information your brain receives.
Let’s take another example, in the code this time. You might be influenced to blindly follow the practices you find in your codebase, even if you know that they are mediocre, messy, or plainly wrong.
You’ll reuse bad queries and copy-paste poorly designed code, pushing your beloved application into the Damned World of Legacy Code.
Coherence, in that case, won’t do any good. The software’s entropy will grow, pushing every possible developer to curse your name for five generations, after casting the git blame spell.
Even if the requirements and the codebase change, your mental model can stay full of old and outdated ideas. This might happen when you are involved in a project for a long time.
This will lead to wrong modifications, creating bugs and possibly dooming the whole world.
Debiasing the Anchoring
Model Based Forecasting
Estimation, when you think about it, is a special case of forecasting. There are two types of forecasting:
- Expert-based forecasting (based on human judgments)
- Model-based forecasting (based on mathematical models)
Since biases are a human problem, mathematics is always a good way to debias. That’s why model-based forecasting is considered a valid way to debias estimations.
However, this kind of forecasting requires past data, which is often not available in software projects.
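When you do have past data, even a crude model beats an anchored gut feeling. A minimal sketch of this idea, where the task categories and durations are entirely made up for illustration:

```python
import statistics

# Recorded durations (in hours) of past tasks, grouped by kind.
PAST_DURATIONS_HOURS = {
    "api endpoint": [6, 9, 7],
    "db migration": [12, 20, 16],
}

def forecast(task_kind: str) -> float:
    """Median of past durations for this kind of task."""
    return statistics.median(PAST_DURATIONS_HOURS[task_kind])
```

The median of similar past tasks carries no anchor from the meeting room: nobody’s “two weeks?” can move it.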
Questions Are Not Made to Influence People
Another way to debias anchors on estimations is simply to ask people to focus not on the estimation itself, but on the task.
Instead of “How much of X can you do in two weeks?”, one should ask “How much time would it take to make X?”.
As an aside, the anchoring bias is often used intentionally in surveys. That’s one of the reasons many don’t consider surveys a reliable source of information, even if well-made surveys are. For example, in some political surveys, questions will try to influence your point of view, implying some truth which might not be so true.
In short, trying not to influence people when asking questions is always a good idea. Point the bias out if somebody tries to influence you.
Again, don’t point at the person doing it (“You always try to influence us!”) but rather at the situation (“I think saying X will influence us. Let’s try to stay neutral. What do you think?”).
After all, if you try to influence people when asking questions, you’re only trying to confirm your own opinion. Welcome back, confirmation bias!
Multiple Opinions are Better Than One
Anchoring can be debiased when multiple opinions are given at the same time.
It’s exactly the purpose of planning poker: everybody gives their estimate at the same time, in order not to influence the others.
In the codebase itself, you can try to think about numerous solutions for implementing or fixing something. Which solution looks the best? Is your solution following the bad practices you saw earlier?
At the feature level, different people can ask the project managers why they think the feature will bring value. In general, sharing opinions prevents a single person from using their very last experience as the only anchor, favoring a wider array of experiences instead.
The Bandwagon Effect and the Cargo Cult
Contrary to the optimistic bias, the confirmation bias, and the anchoring bias, the bandwagon effect and the cargo cult are less studied in the software development world. Nonetheless, I’ve seen these biases pretty often.
The bandwagon effect aligns your opinion with the majority’s opinion, or what you perceive as the majority’s. It’s similar to the cargo cult: blindly following what the others do.
Jump Off The Bandwagon!
You are again in a meeting.
Your team leader is arguing rhetorically about the benefits of replacing the REST APIs, used by your dozen fancy microservices, with GraphQL. He’s demonstrating at length the potential technical benefits for the whole company.
He’s on fire, and you feel the warmth of passion entering your heart. Your colleagues seem to feel the same. He’s so right! What a beautiful leader you have!
Unfortunately, it’s likely that you’re the victim of the bandwagon effect. Indeed, your team leader didn’t really prove the business value of his idea, only the technical benefits. Will the customers notice, or even care? Will it bring more time, customers, or money?
Moreover, your team leader wanting to use GraphQL is attractive to you: it’s a new technology, and developers like to experiment with new toys.
This is the cargo cult operating.
Have you been in this situation before? I certainly have. Many, many times.
Debiasing The Bandwagon Effect
Value is Everything
Let’s be clear on this one: we develop software to support a business, most of the time. There is no point using the new technology-built-on-old-principles-everybody-speaks-about-now-and-sells-as-new if it doesn’t bring any value to that business.
That’s why you should always ask:
- What is the value of this idea?
- How will it bring new customers, save time, or make our software easier to scale?
- Will we ever need to scale in this direction, anyway?
- Do the benefits outweigh the costs?
“Senior” Developers Are Not Better Than You
Whatever the title of the people proposing the idea of the century, they are not better than you.
They might have more experience, more knowledge, and a bigger paycheck, but you still have some qualities they don’t. If you’re a total beginner, for example, you won’t look at problems with the misconceptions accumulated over years of experience. That’s not a small quality.
Don’t trust somebody’s opinion because of their background. Always try to separate the solution they speak about from the one proposing it, and, as honestly as you can, assess the good, the bad, and the ugly.
If they’re a team leader, this is even more true. Managers don’t necessarily have time to assess technical solutions. They often have many other responsibilities and things to do.
You might have this time. Don’t be afraid to go against the status quo and the common wisdom.
Again, inversion can be beneficial here too. Ask yourself what would happen if you didn’t follow the majority’s opinion. If you find good arguments that the solution is not worth implementing, go ahead and defend your new idea.
Even if it’s wrong, you’ll always learn something. That’s always a positive result.
The Correspondence Bias
The correspondence bias pushes you to explain behaviors and results by personality traits, even when these behaviors are mostly the result of the environment.
Blaming Others, Excusing Yourself
You’re not in a meeting anymore! How great!
Instead, you’re on your favorite computer, coding in your little bubble.
Suddenly, you see some ugly, crappy code. Obviously, you run the good old git blame everybody loves, and the name of the culprit appears on your screen.
It’s Dave, your colleague developer.
Of course, it was! Dave is the worst. He’s not careful, doesn’t care, and doesn’t think about what he’s doing. You would have done way better than him!
You take a deep breath, try to remember the precious advice of your favorite trendy yoga-meditation teacher to calm your anger, and you continue your coding.
Unfortunately, you bump into another weird piece of code. A disgusting shortcut somebody made to avoid the pain of refactoring a chunk of code.
“Dave again!” you whisper like a snake, full of hate.
However, this time, git blame tells a different story. You wrote this code. You are responsible.
Shame begins to fill your soul. “Why did I do that? Am I a bad developer? Am I like Dave?”, you think.
Without warning, it strikes you: of course it wasn’t really your fault! The deadline was tight, you didn’t have enough time, you felt sick, your girlfriend (or boyfriend) yelled at you this morning, and your dog was at the vet.
On these reassuring thoughts, you continue your work as if nothing happened. You’ll remember to speak to Dave, though; his ugly code needs a fix!
This was a showcase of the correspondence bias: you explained Dave’s crappy code by some of his personality traits, while your own crappy code was only the result of the context you were in.
Debiasing the Correspondence Bias
Most of the time, the environment plays a big role in the mistakes and wrong decisions people make. It doesn’t matter if it’s you, Dave, another colleague, or your grandmother.
If you see some bad code written by one of your colleagues, simply talk to them. Why did they do that? What context were they in, at that point in time?
Blame never helped anybody or solved anything. Try to assess the problem, and find a solution. Does Dave lack some knowledge? Try to teach him what he doesn’t know. Was the context too stressful? Why? Can somebody prevent stress from creeping in again?
At least, reassure poor Dave that he’s not a failure. Try to refactor his code with him. You might learn something along the way.
Paranoia Is Not The Solution
Biases are a reality, and we might often fall into their traps. Quickly assess your ideas and opinions with a cool head, try to find the mistakes brought by some common wisdom, and propose your solutions.
In short, always be aware of, and careful about, what’s said and decided.
That being said, there is no need to become paranoid and fearful. Sometimes, it’s not worth discussing ideas and decisions in infinite meetings.
Knowing how and when to discuss your ideas is important, but at some point, you need to start coding or, in general, acting, instead of speaking. You can still prove (or disprove) your point along the way.
It’s time to summarize. What did we learn in this article?
- We all fall into cognitive biases’ traps. Being aware of the most common ones will give you an edge to avoid them.
- Don’t blame the person responsible for the bias; instead, fix the situations the bias is acting on.
- The most studied biases in software development are the optimistic bias, the confirmation bias, and the anchoring bias.
- The bandwagon effect, the cargo cult and the correspondence bias are frequent, too. They can have disastrous effects on software projects.
- Debiasing can be summarized as follows:
- Be careful with quick decisions and take time to search for more information.
- Play Devil’s advocate for important decisions and actions.
- Ask yourself what and who can influence your judgment unreasonably.
Don’t hesitate to experiment with the debiasing techniques! You can share your observations in the comment section.