2/ Thinking About Thinking
Default policy options (opt-in vs opt-out) make a huge difference in outcomes (e.g. organ donations). Such defaults shouldn’t be confused with people’s voluntary choices. Understanding why people make the choices they make is an important task.
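The outsized effect of a default can be sketched with a toy calculation (all numbers below are made up for illustration): if only a minority of people make an active choice, the default effectively decides for everyone else.

```python
# Toy model of a default effect (hypothetical numbers, not real donation data):
# suppose only 20% of people actively choose, and half of those consent.
# Everyone else silently inherits the default.

def consent_rate(default_consents: bool,
                 active_share: float = 0.2,
                 active_consent: float = 0.5) -> float:
    """Overall consent rate given a default and a share of active choosers."""
    passive_share = 1.0 - active_share
    chosen = active_share * active_consent
    return chosen + (passive_share if default_consents else 0.0)

print(consent_rate(default_consents=True))   # opt-out regime
print(consent_rate(default_consents=False))  # opt-in regime
```

With these toy numbers, flipping the default moves overall consent from roughly 10% to roughly 90% even though nobody’s stated preferences changed – which is the point the chapter is making.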
“Rational choice” is a mythical concept, much like unicorns, but it is still used to drive policy decisions. The concept ignores social norms and conventions, reputation, and social capital.
One way to sugarcoat “rational choice” is to fold political and social behaviour into the expected economic behaviour. This widens the space of options, but the two-step approach remains: develop options —> pick the best one. Only the selection step becomes context-dependent.
The core assumption here is that all human behaviour can be understood in terms of individuals’ attempts to satisfy their preferences. [MK: I’d say “known and formulated preferences”.]
A lot of people’s seemingly irrational behaviour is in fact very rational – an observer simply isn’t aware of the full context. For many sociologists, “rationalizing” the behaviour is equivalent to “understanding” it. It seems almost everyone is rational, if you look at the context closely enough.
Still, rationalizing human behaviour is subjective, as it requires the observer to stand in the shoes of the person they’re trying to understand. And that’s where the fun starts.
A core (and incorrect) underlying assumption is that people respond only to consciously accessible factors – motivations, preferences and beliefs – while societal defaults (opt-in vs opt-out) and outright manipulations are ignored.
There are countless books on how to manipulate people into buying one brand over another and how to inject seemingly irrelevant information into the decision-making flow. (Add in all the psychological experiments, too.) Even in science there’s a strong motivation to question evidence instead of conclusions (thank you, confirmation bias and motivated reasoning).
The key takeaway: many potentially relevant factors affecting behaviour lie outside conscious awareness and thus can’t be identified when reconstructing context. A single factor can probably be identified, but the combined effect of several factors can’t.
Purchase decisions are based on a number of factors drawn from comparable past purchase decisions. But knowing whether a situation is comparable requires knowing which features are relevant (or predicting which will be relevant at a future date). The list of potentially relevant factors is painfully long, yet many of them can safely be ignored – the trick is knowing which. (This is the “frame problem”.) It’s why straightforward AI (learning by reasoning) can’t possibly work.
Machine Learning (learning by observing) models are more promising, although not without their own limitations, of course.
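The frame problem’s combinatorics can be sketched with a toy calculation (the factor counts below are arbitrary): if each of n candidate factors may or may not be relevant to a decision, a rule-based reasoner faces 2^n possible “relevance” subsets to consider.

```python
# Toy illustration of the frame problem's combinatorial blow-up
# (factor counts are arbitrary, chosen only to show the growth rate).

def relevance_subsets(n_factors: int) -> int:
    """Number of distinct subsets of n candidate factors: 2**n."""
    return 2 ** n_factors

for n in (10, 50, 300):
    print(f"{n} candidate factors -> {relevance_subsets(n):.3e} relevance subsets")
```

Even at a few hundred candidate factors the number of subsets dwarfs anything enumerable, which is one intuition for why learning relevance from observed behaviour (the machine-learning route) looks more tractable than reasoning it out from rules.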
We Don’t Think the Way We Think We Think
The less info we’re given, the more we tend to fill the gaps with snippets of past memories, images, experiences, cultural norms and imagined outcomes. [MK: that’s why I have a life rule that the more I get to know someone, the less I like them.]
Decision making involves consciously and subconsciously picking certain relevant facts and ignoring others. Reasoning ex post (i.e., after the fact) when the outcome is clear leads humans to reassess the relevancy of the chosen factors, creating a cohesive, logical and most likely wrong explanation of the events and factors leading to said outcome.
This tendency has a huge effect on exec compensation packages linked to performance and targets – and there’s no convincing argument that they actually work as intended. [MK: hence my hypothesis that ESG-linked performance targets are a lesser evil than financial KPIs.] And what’s really funny: no matter how much people are paid, they always think they deserve more, which partly negates the effect of purely financial motivation. Financial rewards can also lead to risk-averse behaviour (why risk a sizeable bonus?).
A well-known negative effect of KPI-driven motivation is that, in complex projects, employees tend to focus disproportionately on the measured tasks, leaving seemingly less important tasks behind. (In schools it’s called “teaching to the test”.)
Refusing to challenge the conclusion and instead scrutinizing the assumptions leads to the incorrect belief that incentive systems actually work – it’s just that inexperienced HR got the incentives wrong. This is yet another example of “common sense” failing to predict the outcome.
We can never know all the facts that could be relevant to a situation, and much of what’s relevant is subconscious. These two uncomfortable facts lead to an obvious conclusion: common sense is terrible at predicting outcomes and behaviours, because it rests on the “rationalization” of prior events.
And the problem gets much worse when it comes to predicting group behaviours.