How confident would you feel about the quality of the decisions you make and the actions you take on a daily basis if I told you that one of the most renowned psychologists, a Nobel laureate in economics who has been studying the human brain and behavior for over 40 years, admits that his brain still regularly plays tricks on him - and that this hasn't improved over the past decades?
I felt a bit downhearted, to be honest - but also spurred to try to understand the most common tricks that our brain plays on us. In this article I want to share some of them with you.
The Halo-Effect is a type of cognitive bias in which our overall impression of a person influences how we feel and think about their character. Essentially, your overall impression of a person ("He is nice!") impacts your evaluations of that person's specific traits ("He is also smart!").
This effect manipulates our judgement quite a bit, not only with regard to people, but also with regard to past experiences. Imagine you are asked to interview a candidate for a management position in your company. Research shows that if this person is attractive and nice - and you are not conducting a structured, standardized interview that prevents you from falling for the Halo-Effect - you will pass a better impression of this person on to your HR department than if the person were less attractive.
A key aspect of the Halo-Effect is that our brain tends to generalize in evaluations like this. Having a good chat with someone at a party might lead you to think of that person when you are looking for someone who would donate to a charity. We generalize from "being nice" to "being charitable", even though the two are not the same.
A good way to circumvent the Halo-Effect while making important decisions is to ask yourself: "Do I really know this? Or am I just projecting this attribute onto the person or the situation?"
The Fundamental Attribution Error...
Another interesting error that we tend to make also revolves around our judgement of other people. It is one of the basic errors that are frequently researched in different fields of psychology.
In social psychology, the fundamental attribution error (FAE), also known as correspondence bias or attribution effect, is the tendency for people to under-emphasize situational (external) explanations for an individual's observed behavior while over-emphasizing dispositional (internal) and personality-based explanations for their behavior.
In general, if we look at a situation, e.g. a person's behavior in a restaurant, we have different ways and options of judging their behavior. Imagine this person is angry at the waiter in a restaurant because the food is bad... We now have different options to judge this situation:
Is this person only angry here and now (or also in other restaurants)? - yes (only here): external attribution, no (also in other places): internal attribution
Are there other people that are angry as well? - no: internal attribution, yes: external attribution
Is he always angry at this same place (if you are able to observe this)? - yes: consistent behavior, which is compatible with either internal or external attribution, depending on the answers to the other two questions
However, the general tendency is to attribute the person's behavior to internal factors: "the person is the grumpy type", "the person already looked angry when he turned up", etc. Knowing this should help us look at the circumstances and external factors more often before making "quick and dirty" judgements - especially since we tend to rely on heuristics, stereotypes and intuition more often than we should...
The "Less-is-More" Effect or "Conjunction-Fallacy"...
Ok, this one is a bit longer - but it's also one of the most famous and most dangerous ones, because it neglects base rates, a factor that everyone should take into consideration with every decision they make.
The less-is-more effect refers to the finding that heuristic decision strategies can yield more accurate judgments than alternative strategies that use more pieces of information. Understanding these effects is part of the study of ecological rationality.
There's one famous study by Kahneman and Tversky around the case of "Linda". This case involves the representativeness heuristic (judging the probability of something in terms of how similar that something is to a stereotype), and our tendency to ignore base rates when making these judgments.
Here's a description of a hypothetical woman named Linda: Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Subjects are given a list of eight possible scenarios for Linda and asked to rank them in terms of which scenarios are more or less likely to be true for Linda. Here they are:
... is a teacher in an elementary school.
... works in a bookstore and takes yoga classes.
... is active in the feminist movement.
... is a psychiatric social worker.
... is a member of the League of Women Voters.
... is a bank teller.
... is an insurance salesperson.
... is a bank teller and is active in the feminist movement.
The participants of the study tend to agree that Linda is a very good fit for an active feminist, a good fit for someone who works in a bookstore and takes yoga classes - and a poor fit for a bank teller or an insurance salesperson.
In the second question they were asked whether it is more likely that Linda is "a bank teller" or "a feminist bank teller". Most of the participants suggested the latter - which is logically impossible: a conjunction ("bank teller AND feminist") can never be more probable than one of its parts ("bank teller") alone.
There are lots of factors that can influence how likely people are to commit the conjunction fallacy. One of the most well-known is that when people are asked the questions in terms of natural frequencies ("how many?", "what fraction?", etc.) rather than in terms of probabilities or percentages ("what is more probable?", "estimate the percentage"), they're less prone to the error. This is probably because this language naturally invokes a spatial representation of the problem, where set and subset relationships are clear. The language of probability and percentages is more abstract, it seems, and fails to evoke a spatial picture of the problem.
This is one of the most frequently discussed "de-biasing" techniques when it comes to reasoning with probabilities. If you give people the information in terms of natural frequencies and proportions rather than in terms of probabilities or percentages, their probability judgments are more reliable - in some cases, significantly so.
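The conjunction rule behind the Linda problem can be sketched in a few lines of code. The population counts below are entirely made up for illustration; only the subset relationship matters:

```python
# A minimal sketch of why "feminist bank teller" can never be more likely
# than "bank teller": every feminist bank teller is also a bank teller.
# The counts below are hypothetical, chosen only to illustrate the rule.

population = 100_000                # hypothetical population size
bank_tellers = 1_000                # made-up count of bank tellers
feminist_bank_tellers = 50          # a subset of the bank tellers

p_teller = bank_tellers / population
p_feminist_teller = feminist_bank_tellers / population

# The conjunction rule: P(A and B) <= P(A), always.
assert p_feminist_teller <= p_teller

# The natural-frequency framing ("50 out of 1,000 bank tellers are feminists")
# makes the subset relationship obvious - which is why it de-biases.
print(f"P(bank teller)          = {p_teller:.3%}")
print(f"P(feminist bank teller) = {p_feminist_teller:.3%}")
```

Phrased as frequencies, the subset structure is hard to miss; phrased as "which is more probable?", most of us miss it.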
The Primacy & Recency Effect... (also: Serial-Position Effect)
The next few effects, errors and biases revolve around the mistakes we make while dealing with information given to us, looking at data, etc.
The first one is called the Primacy & Recency Effect. When asked to recall a list of items in any order (free recall), people tend to begin recall with the end of the list, recalling those items best (the recency effect). Among earlier list items, the first few items are recalled more frequently than the middle items (the primacy effect).
The effects described above can help us a great deal if we are conscious about them and use them to our advantage. Understanding the Primacy and Recency effects helps - not only in learning, but in understanding why we respond to certain situations as we do. Some interesting applications of these effects are:
Advertisers use primacy and recency to make sure the first and last portions of their promotions create a desire to purchase
Lawyers will call their strongest witnesses either first or last
Speakers at conferences are scheduled with the strongest first and last
Teachers use these effects to determine the sequence of lectures within a course of instruction
A related application of the serial position effect is the beginning-ending list bias. Given a large number of choices, people who don’t read the entire list tend to pick items at the beginning or at the end of the list. Even if they do read the whole list, people tend to remember the first or last choice they read and are therefore more likely to choose the first or last options.
The Central Tendency Bias...

This one we usually encounter whenever we are asked to fill out a questionnaire or a customer satisfaction form...
The central tendency bias (sometimes called central tendency error) is a tendency for a rater to place most items in the middle of a rating scale. For example, on a 10-point scale, a manager might place most of his employees in the middle (4-7), with only a few people receiving high (8-10) or low (1-3) performance ratings.
Here are some examples of how to avoid this bias:
Making questions clear. If the rater isn’t clear on what the question is asking for, they are more likely to answer in the middle.
Not requiring justification for higher ratings. Some employee performance scales require a manager to provide written justification for placing an employee higher on the scale. Requiring this has been shown to increase the bias.
Having raters rank items from highest to lowest. If no two items can have the same rank, this avoids the rater placing items in the middle.
Leaving out the center items. For example, use the numbers 0-1-5-9 on a customer satisfaction scale instead of 1-2-3-4-5. This is common practice in the so-called "house of quality" framework when weighing product features by their impact on product quality.
The Peak-End Rule and Duration Neglect...
Last but not least, I want to share with you how our brain deals with evaluating our own experiences and the mistakes we make with remembering and recalling past experiences.
Do you generally feel like you judge past experiences accurately, including the duration and quality of the whole event? Think again.
Kahneman presents two selves in his book "Thinking, Fast and Slow":
The experiencing self: the self that feels pleasure and pain, moment to moment. This experienced utility would best be assessed by measuring happiness over time, then summing the total happiness felt over time.
The remembering self: the self that reflects on past experiences and evaluates them overall.
The remembering self factors heavily in our thinking. After a moment has passed, only the remembering self exists when thinking about our past lives. The remembering self is often the one making future decisions.
But the remembering self evaluates differently from the experiencing self in two critical ways:
Duration neglect: The duration of the experience has little effect on the memory of the event.
Peak-end rule: The overall rating is determined by the peak intensity of the experience and the end of the experience. It does not care much about the averages throughout the experience.
Both effects operate in classic "intuitive thinking" style: by averages and norms, not by sums. This leads to preferences that the experiencing self would find odd, and shows that we cannot trust our preferences to reflect our interests.
Reading about this dazzled me at first, but after thinking about it for a while, I realized how true it was and how often it happened to me.
I want to share one of the experiments conducted by Kahneman and his associates to provide empirical evidence for both duration neglect and the peak-end rule:
In the so-called "Ice Water Experiment", participants were asked to stick their hand in cold water and then to evaluate their experience. Each participant went through two episodes:
1) A short episode: 60 seconds in 14°C water, and
2) A long episode: 60 seconds in 14°C, plus an additional 30 seconds, during which the temperature increased by one degree.
They were then asked which they would repeat for a third trial. The experiencing self would clearly consider the long episode worse - you’re suffering for more time. But the longer episode had a more pleasant end.
Counter-intuitively, 80% of participants preferred the long episode, thereby choosing 30 seconds of needless pain. They picked the option they remembered more favorably. Oddly, people would prescribe the shorter episode for others, since they care about the experiencing self of others. But when thinking about themselves, they care more about the remembering self.
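The ice-water result can be made concrete with a toy model. The discomfort values below are invented for illustration; only the structure - summing moment-by-moment experience versus averaging peak and end - follows Kahneman's account:

```python
# A toy model of the ice-water experiment. Discomfort is rated per second
# on a made-up 0-10 scale; the specific numbers are assumptions.

short_episode = [8.0] * 60                  # 60 s at 14°C: constant discomfort
long_episode = [8.0] * 60 + [6.0] * 30      # same 60 s, plus 30 milder seconds

def experienced_total(episode):
    """Experiencing self: total discomfort, summed moment by moment."""
    return sum(episode)

def remembered_score(episode):
    """Remembering self (peak-end rule): average of peak and final moment."""
    return (max(episode) + episode[-1]) / 2

# The long episode contains strictly more suffering in total...
assert experienced_total(long_episode) > experienced_total(short_episode)
# ...yet the peak-end rule remembers it as *less* unpleasant,
# because its final moments were milder.
assert remembered_score(long_episode) < remembered_score(short_episode)
```

Here the long episode totals 660 "discomfort units" against the short episode's 480, yet its remembered score is 7.0 versus 8.0 - exactly the inversion the participants showed.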
To close this chapter, here are a couple of interesting phenomena that are worth mentioning.
Regression to the Mean...
In statistics, regression to the mean is the phenomenon whereby, if a sample point of a random variable is extreme (nearly an outlier), a future sample point is likely to be closer to the mean on further measurements.
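A quick simulation makes the phenomenon visible. All numbers here are made up: each "performance" is modeled as a stable skill level plus independent day-to-day noise, with no feedback of any kind involved:

```python
import random

random.seed(42)

# Hypothetical workforce: stable skill plus independent daily noise.
N = 10_000
skill = [random.gauss(100, 10) for _ in range(N)]   # stable ability
day1 = [s + random.gauss(0, 10) for s in skill]     # observed performance, day 1
day2 = [s + random.gauss(0, 10) for s in skill]     # observed performance, day 2

# Select the people with extreme (top 5%) day-1 performances...
cutoff = sorted(day1)[int(0.95 * N)]
top = [i for i in range(N) if day1[i] >= cutoff]

avg_day1 = sum(day1[i] for i in top) / len(top)
avg_day2 = sum(day2[i] for i in top) / len(top)

# ...and their day-2 average falls back toward the mean of 100,
# simply because their day-1 extremes were partly luck.
print(f"Top performers, day 1: {avg_day1:.1f}")
print(f"Same people,    day 2: {avg_day2:.1f}")
assert avg_day2 < avg_day1
```

No praise or criticism enters the model at all - the "decline" of the top performers is pure statistics, which is exactly what the managers in the next paragraph misread.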
In real life, a lot of managers misinterpret this statistical phenomenon. Whenever they give positive feedback to an employee for an outstanding performance, they see the same person regress to the mean on the next occasion. This leads them to think that people perform worse when receiving positive feedback - and to stop giving good feedback altogether.
Oftentimes the opposite happens, which is even worse: they start shaming and blaming whenever they get the opportunity. And after their employees regress to the mean - e.g. by improving their performance after having a bad day - they start thinking that criticizing, lecturing or even punishing their employees leads to better performance.
The Mere-Exposure Effect...
The mere-exposure effect means that people tend to develop a preference for things merely because they are familiar with them.
It is also called the familiarity principle, because it's built on the establishment of familiarity.
Remember the days when you were in school? Who were you friends with? The first friends you made were most probably the ones you sat next to in the first couple of weeks (you are not necessarily friends with them now, but try to remember.) This is a good example of the mere-exposure effect.
Another example: while shopping for groceries, we tend to pick the items we have been familiarized with over a longer period of time. Whenever I recognize an item that my mother has been using in the kitchen, I think about buying it instead of the cheaper or "better" option...
On these occasions the mere-exposure effect goes hand in hand with other biases and heuristics, such as the availability bias (judging things that come readily to mind as more frequent or better than is actually the case) and the representativeness heuristic (estimating the likelihood of an event by comparing it to a prototype that already exists in our minds).
What you see is all there is (WYSIATI)...
WYSIATI is the acronym for "What You See Is All There Is", a cognitive bias which explains how irrational we are when making decisions and how little it matters to us. I wrote a whole article about this bias and you can find it here:
This is just an extract of all the tricks that our brain plays on us. I might return to them in the future or deep-dive into one of them in upcoming articles. Which one of these have you experienced lately? Are you more or less prone to these errors and biases? Let's discuss!
And if you want to find out more about this topic, please consider buying Daniel Kahneman's book "Thinking, Fast and Slow"!