Calling Bullshit 3/x
Carl T. Bergstrom, Jevin D. West
5/ Numbers and Nonsense
(let’s get the obvious out of the way) Not everything can be reliably quantified by direct counting (think of the number of stars in our galaxy). Sampling helps estimate a number or a range, provided the sample is representative: don’t estimate the average height of the population by taking measurements only on a basketball court. Data never lies, but it often misleads.
The choice of how to present a numerical value (“framing”) sets a context for that value. MK: framing is one of the many things a skilled Board Director should look for when reading management presentations.
An example is cocoa being advertised as 99.9% caffeine-free. Well, the strongest coffee (surprise!) is also 99.9% caffeine-free, because caffeine is so potent that a tiny dose is enough.
MK: Most numbers in advertising and marketing presentations can be thrown away as they mislead (or set the wrong context) more than guide.
Percentages (absolute and relative) are the #1 tool for misleading. 1% and 2% are 1 percentage point apart in absolute terms, but in relative terms 1% is 50% smaller than 2% (and 2% is 100% larger than 1%). Using percentages and percentage points carelessly opens room for lots of manipulation.
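A quick arithmetic sketch of this (the drug side-effect rates are invented for illustration): the same gap can be framed as a modest “1 percentage point” or a scary “100% increase”, depending on what the speaker wants you to feel.

```python
# Hypothetical side-effect rates for two drugs (illustrative numbers only).
rate_a = 0.01  # 1%
rate_b = 0.02  # 2%

# Absolute difference, in percentage points:
absolute_diff_pp = (rate_b - rate_a) * 100

# Relative differences: B is 100% higher than A,
# or equivalently A is 50% lower than B.
relative_increase = (rate_b - rate_a) / rate_a * 100
relative_decrease = (rate_b - rate_a) / rate_b * 100

print(f"Gap: {absolute_diff_pp:.0f} percentage point")
print(f"B is {relative_increase:.0f}% higher than A")
print(f"A is {relative_decrease:.0f}% lower than B")
```

All three statements describe exactly the same pair of numbers; only the framing differs.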
It’s helpful to keep Goodhart’s law in mind: when a measure becomes a target, it ceases to be a good measure. [MK: this partially explains why most KPIs don’t work.] Measuring things alters people’s behaviour, and not always in the intended ways.
Mathiness – a smart-looking mathematical formula whose variables can’t actually be quantified (despite claims to the contrary) and whose relationships to each other are hard to establish. Taste = (Satisfaction from oranges + Colour of a lemon) / dollars spent on fruit last year. Another issue with these presumably quantifiable things is dimensional: in what units do we measure taste, satisfaction or colour?
One can argue that mathiness is just a metaphor, but math is precise by nature and should not be used for something that isn’t.
Zombie statistics are numbers that are cited badly out of context, severely outdated or simply made up – yet still being quoted today. Think of the “10 000 hours to become a pro”, “walking 10 000 steps a day” or “drinking 2 litres of water a day”. Simple figures and statistics spread easily and never die.
6/ Selection Bias
What you see depends on where you look. Samples must be random with respect to the feature we’re interested in. Results may also differ because of how the data is gathered (actual behaviour and beliefs vs stated behaviour and beliefs).
The sentiment on Facebook is not neutral: people self-censor and beautify their posts to look happy and successful – one needs to take this into account. Also, “100% of internet users said they used the internet”.
A majority of studies in social psychology are conducted on WEIRD (Western, Educated, Industrialised, Rich, Democratic) populations, and the cheapest studies are run on students. This bias must not be discounted.
Different populations vary widely in their perception of the Müller-Lyer illusion, with American undergraduates being disproportionately affected by it.
Car insurance companies run ads claiming that an average person switching to their policy will save $500 p/a. The trick is that they count only those who actually switched, and most of them had been with an insurer who charged more. If everyone shopped around and bought the cheapest suitable insurance, nobody would switch and dilute the statistical savings. In other words, those who switch are NOT a random sample of drivers. Most people don’t shop for insurance every year.
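A minimal simulation of this trick (premium ranges are invented, and the insurer’s quotes are no cheaper than anyone else’s – they’re drawn from the same distribution as what drivers already pay):

```python
import random

random.seed(42)

N = 100_000
switch_savings = []
all_savings = []

for _ in range(N):
    current = random.uniform(800, 1600)  # what the driver pays now
    quote = random.uniform(800, 1600)    # the insurer's quote, same distribution
    saving = current - quote
    all_savings.append(saving)
    if saving > 0:                       # only drivers who'd save actually switch
        switch_savings.append(saving)

avg_switcher = sum(switch_savings) / len(switch_savings)
avg_everyone = sum(all_savings) / N
print(f"Average saving among switchers:   ${avg_switcher:.0f}")
print(f"Average saving across all drivers: ${avg_everyone:.0f}")
```

Switchers “save” hundreds of dollars on average while the insurer offers no advantage at all: the average saving across all drivers is roughly zero. The headline number is pure selection.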
Universities score higher if their average class size is 20 or under, yet classes of 140 students are not uncommon. This statistic can be gamed by offering lots of small (boring and useless) classes, which dilute the average class size. Gaming the system is not that hard!
There’s a mathematical paradox that, most likely, my friends have more friends than I do. (No, I’m not a bad person.) This is because some people have disproportionately more friends / followers than average, and those popular people show up in many friend lists – including mine. Hugely popular outliers skew the average.
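The friendship paradox can be checked with a toy simulation (a made-up network with a heavy-tailed degree distribution, no real data): the average friend count of people’s friends comes out higher than the average friend count overall, because popular nodes are counted once per friendship.

```python
import random

random.seed(1)

# Build a random friendship network with a heavy-tailed degree distribution:
# most people have a few friends, a handful are extremely popular.
n = 2000
friends = {i: set() for i in range(n)}
for i in range(n):
    k = min(int(random.paretovariate(1.5)), n - 1)  # heavy-tailed friend count
    for j in random.sample(range(n), k):
        if j != i:
            friends[i].add(j)
            friends[j].add(i)

degrees = [len(friends[i]) for i in range(n)]
avg_friends = sum(degrees) / n

# Average friend count of a *friend*: popular people are counted
# once for every friendship they appear in.
friend_degrees = [len(friends[j]) for i in range(n) for j in friends[i]]
avg_friends_of_friends = sum(friend_degrees) / len(friend_degrees)

print(f"Average friend count:              {avg_friends:.1f}")
print(f"Average friend count of a friend:  {avg_friends_of_friends:.1f}")
```

The inequality holds for any network where friend counts vary at all, not just this toy one.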
Observation selection effects occur when the circumstances of observing bias what gets observed. Drivers spend more time in the slow lane than in the fast lane, because the slow lane is denser (cars in the fast lane keep bigger distances between them) – so at any given moment, most drivers see the other lane moving faster. Likewise, a rider arriving at a random time is more likely to land in a long gap between buses, so the average experienced wait is longer than half the scheduled interval, even if the buses depart at even average intervals.
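The bus example is known as the inspection paradox; here is a back-of-envelope sketch with invented gap lengths (buses alternate between 2- and 18-minute gaps, so the average gap is still 10 minutes):

```python
import random

random.seed(7)

# Buses are *scheduled* every 10 minutes, but traffic makes the actual
# gaps uneven while keeping the same average gap.
gaps = [random.choice([2, 18]) for _ in range(10_000)]
avg_gap = sum(gaps) / len(gaps)

# A rider arriving at a uniformly random moment lands in a gap of
# length g with probability proportional to g, then waits g/2 on average.
total_time = sum(gaps)
expected_wait = sum(g * (g / 2) for g in gaps) / total_time

print(f"Average gap between buses:        {avg_gap:.1f} min")
print(f"Expected wait for a random rider: {expected_wait:.1f} min")  # noticeably > 5
```

With perfectly even 10-minute gaps the expected wait would be exactly 5 minutes; variability alone pushes it up, because long gaps soak up more arriving riders than short ones.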
Why are hot guys such jerks? Not necessarily because their looks grant them status and shield them from the consequences of bad behaviour. Berkson’s Paradox offers another explanation: if you only date people who clear some combined bar of attractiveness and niceness, while people far above your own level won’t date you, your dating pool ends up in a narrow band where the two traits trade off against each other. Even if attractiveness and niceness are uncorrelated in the general population, they become negatively correlated within the selected sample. The key is selecting on the combination of both traits. The same works for talented developers being unbearable.
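A sketch of Berkson’s Paradox in a few lines (the two traits are drawn independently; the dating “band” thresholds are invented): selecting on the sum of two independent traits manufactures a negative correlation out of nothing.

```python
import random

random.seed(3)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# In the whole population the two traits are independent.
people = [(random.random(), random.random()) for _ in range(50_000)]

# You only date people whose combined score clears your bar (> 1.0),
# and people far above your level (> 1.4) won't date you.
dated = [(a, n) for a, n in people if 1.0 < a + n < 1.4]

pop_corr = corr(*zip(*people))
dated_corr = corr(*zip(*dated))
print(f"Correlation in the whole population: {pop_corr:+.2f}")
print(f"Correlation among people you date:   {dated_corr:+.2f}")
```

Within the band, the more attractive someone is, the less nice they must be to still land in your pool – even though no such trade-off exists in the population.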
Data censoring is a phenomenon closely related to selection bias. Even if the original sample is selected at random, part of it doesn’t make it into the final analysis (e.g., clinical trial patients with side effects drop out). Is it true that rock and rap musicians die at half the age of blues, jazz and country musicians (30–35 vs 60–70 y/o)? The data suggests so, but the data covers ONLY dead people; most rock and rap musicians are still alive and well and haven’t made it into the statistics. Yet. These genres are relatively young (30–50 years), so their death statistics so far mostly contain the indulgent and the unfortunate.
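A toy simulation of this censoring effect (founding years, career-start ages and lifespans are all invented, and crucially *everyone shares the same mortality*): the younger genre still shows a much lower average age at death, simply because only its early deaths have had time to happen.

```python
import random

random.seed(5)

CURRENT_YEAR = 2020

def observed_age_at_death(genre_start):
    """Average age at death among musicians who have *already* died."""
    ages = []
    for _ in range(20_000):
        # musicians start at ~20, any time since the genre was founded
        birth = random.randint(genre_start - 20, CURRENT_YEAR - 20)
        lifespan = random.gauss(75, 12)        # identical mortality for everyone
        if birth + lifespan <= CURRENT_YEAR:   # only the dead enter the statistics
            ages.append(lifespan)
    return sum(ages) / len(ages)

# Same underlying lifespans; only the genre's founding year differs.
jazz_avg = observed_age_at_death(1920)
rap_avg = observed_age_at_death(1980)
print(f"Jazz (started ~1920): dead at {jazz_avg:.0f} on average")
print(f"Rap  (started ~1980): dead at {rap_avg:.0f} on average")
```

A rap musician who will die peacefully at 80 cannot yet appear in the death statistics, so the “rap kills you young” gap emerges with zero difference in actual mortality.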
Wellness programs in corporations are aimed at preventing diseases, health conditions and complications. Corporations try to encourage people to do more of a good thing (moving, playing sports, eating healthily, sleeping well, etc.) and less of a bad thing (smoking, drinking, drugs, etc.). The stated aim is, of course, employee well-being; the actual one is insurance premium reduction (a valid business case with a thick layer of S/ESG bullshit on top).
Do these programs work? The selection bias here is obvious: since participation is voluntary, the statistically healthier people opt in. A proper analysis with control groups showed no meaningful impact on health, absenteeism, retention or health care costs.
But why such a difference between the expected and observed outcomes? The people who participated were already healthier and already less likely to leave the firm. [MK: an important distinction: the program was introduced for existing employees; well-being programs are also offered as part of the benefits package to would-be employees, even if they’ll never use them.]