Flaws and Fallacies in Statistical Thinking 1/5
Source: Flaws and Fallacies in Statistical Thinking by Stephen K. Campbell (book)
1/ Dangers of Statistical Ignorance
It’s fair to assume that the more data a person is exposed to, the more fallacies they encounter as well; being well-informed doesn’t imply knowing only correct information (see the book on “common sense”).
The author quotes a wonderful example from Volvo car ads: an average owner trades their car every 3.25 years and will own about 15 cars in a lifetime; with Volvos lasting 11 years, that supposedly shrinks to 4.5 cars per lifetime. The fallacy, of course, is that a single Volvo passes through roughly three owners over its own lifetime. (And the 11-year figure was equally arbitrary.) This is an example of how marketers try to sway consumers’ purchasing decisions with not-so-clean manipulations.
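The ad’s arithmetic is easy to replay. A minimal sketch, using only the figures quoted in the book (the ~49-year ownership horizon is simply implied by multiplying them out):

```python
# Replaying the Volvo-ad arithmetic with the figures quoted in the book.
trade_in_years = 3.25            # average time an owner keeps a car
cars_per_lifetime = 15           # cars owned over a lifetime, per the ad
horizon = trade_in_years * cars_per_lifetime   # ~48.75 years of car ownership

volvo_lifespan = 11              # the ad's (arbitrary) lifespan figure
volvos_needed = horizon / volvo_lifespan       # ~4.4; the ad rounds to 4.5

# The hidden flaw: a Volvo that lasts 11 years passes through ~3 owners,
# so the number of cars each *owner* buys doesn't shrink the way the ad implies.
owners_per_volvo = volvo_lifespan / trade_in_years  # ~3.4

print(round(horizon, 2), round(volvos_needed, 1), round(owners_per_volvo, 1))
```

The trick is a unit confusion: the ad compares cars per owner-lifetime against cars per car-lifetime as if they were the same quantity.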
Measuring business metrics is trickier because the average consumer of the data is somewhat smarter than the general population. Nonetheless, boards have to be sharp enough to catch often-deliberate attempts by management to massage the data, or to change the way it is measured, in order to earn bonuses.
Comparing numbers is quite easy; however, many statistical fallacies are really about differences of opinion (i.e., what the numbers mean or imply).
People’s attitude towards statistics changes as they become older. Younger people are more prone to take statistics-driven conclusions as the truth; older people tend to insist that with the carefully selected data, statistics can prove any possible point. People grow to be grumpy contrarians – or not.
2/ Some Basic Measurement and Definition Problems
Many things are objectively quantifiable (length, weight), but some are not (unemployment, poverty, popularity, mental health, etc.). Complex terms require universal definitions, or they become open to interpretation and manipulation. [MK: look no further than EBITDA calculations or their derivatives like EBITDAM or Adjusted EBITDA, don’t get me started on this one.]
The problem of a good definition is that it is a compromise between trying to capture all aspects of a term and the practical feasibility and cost of capturing the data for measurement. [MK: think of the number of stars in our galaxy.] Such “Friendly Definitions” are an implicit contract between the supplier and the user of the data.
Not all data suppliers agree to adhere to the Friendly Definitions, either out of arrogance or because their own definitions let them show better numbers.
The same term can mean different things to professionals in different trades: “overhead cost” is a fixed cost to an economist, but effectively a variable one to an accountant who allocates it per unit of output.
The precise definitions of “industry” and, god forbid, “market” are far more elusive. But the definition of “poverty” beats them both. It’s very tempting to redefine the term or some of its components to serve a theory or a proposed policy. Any numeric criterion in a definition (e.g., a family living on under $3000 p/a is poor) needs a compelling explanation of why exactly that number was chosen and not another, equally arbitrary one.
Every simple definition is misleading, so an additional asset test exists in most jurisdictions: if someone is starving on less than $3000 p/a but lives in a mansion they could sell, it is their decision not to downsize and use the difference to cover living expenses over time.
Poverty can also be transitory: when the main breadwinner loses a job for a time, the family becomes poor – but only until they find another one.
I loved the author’s term Statistical Leverage: how much a change in the definition of something changes the measurement of that something. Example: changing the definition of poverty (as of 1964) could increase the number of people counted as poor from 30 million up to 80 million.
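The mechanism behind statistical leverage is easy to demonstrate. A hypothetical sketch – the incomes and thresholds below are made up for illustration; only the mechanism mirrors the book’s point:

```python
# Hypothetical illustration of "statistical leverage": the measured poverty
# count depends heavily on where the definitional threshold is drawn.
# The incomes below are invented, not from the book.
incomes = [1800, 2400, 2900, 3100, 3600, 4200, 4800, 5500, 7000, 9000]

def poor_count(threshold):
    """Count families falling below a given poverty-line definition."""
    return sum(1 for income in incomes if income < threshold)

for threshold in (3000, 4000, 5000):
    print(f"poverty line ${threshold}: {poor_count(threshold)} of {len(incomes)} families poor")
```

Moving the line from $3000 to $5000 more than doubles the count in this toy data without a single family’s circumstances changing – which is exactly why every interested party shops for the definition that suits them.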
Every interested party finds the definition that suits their agenda the most.
Comparing data across countries is often far from comparing apples to apples, because definitions, data-collection methodologies, and capabilities differ between countries.
Even a very good definition can’t save the data from being omitted or incorrectly counted (deliberately or otherwise). Aggregated data is often a mix of fact, estimate, and judgement.
Spurious accuracy is providing an overly precise number without justification, implying a rigor of estimation that is often unattainable (e.g., the age of a mountain accurate to 1000 years). Marketers never fail to count their customers (95 745 per year) or ingredients (15.4 tons of steel, 18 998 oranges, 1 890 576 grains of salt). The more precise the number, the more trust the source commands.
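A small sketch of why such precision is spurious – the ±10% error band here is an assumption for illustration, not a figure from the book:

```python
# If a count is really an estimate with, say, ±10% uncertainty (assumed here),
# most of its quoted digits carry no information.
estimate = 95_745            # the ad-style "exact" customer count
margin = estimate // 10      # assumed 10% uncertainty: ~9 574 either way

# With that margin, only the leading digit or so is meaningful; rounding to
# the nearest ten thousand states the same knowledge honestly.
honest = round(estimate, -4)

print(f"quoted: {estimate}, margin: ~{margin}, honest figure: ~{honest}")
```

The five-significant-digit version and the rounded one carry the same real information; the extra digits exist only to borrow unearned trust.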
Valid measures can be used inappropriately: 9 out of 10 dentists don’t in fact recommend a certain brand of toothpaste, but rather the ingredients in it, which happen to be present in other toothpastes, too. Proxy measures are only usable for comparison if they are applied consistently across multiple data sources.