Flaws and Fallacies in Statistical Thinking 3/5
Stephen K. Campbell
5/ Cheating Charts
It’s going to be quite hard to write about charts without actually drawing them, but I’ll try.
Charts are the #1 tool for fooling readers. The grid makes an enormous difference in how the data reads. The vertical axis can be scaled arithmetically (the usual way) or logarithmically (showing relative, i.e. percentage, changes in the dependent variable). If one company grows faster than the other, the best way to emphasize it is a logarithmic grid: on it, equal slopes mean equal growth rates.
The first sin of massaging a horizontal axis is the “clever” choice of the range of the independent variable (which, as we know, is reflected by the horizontal axis): it can legitimately make the chart look more attractive or less attractive at the presenter’s will.
Unmarked axes are evil and are designed to deceive. No exceptions.
Bar charts: if the widths of the bars are the same, the bars can easily be compared. Introducing variable widths makes the brain compare the bars on two dimensions at once, which is a) much harder and b) misleading.
Using pictograms instead of bars creates a visual mess and can lead to unintended, wrong conclusions. First, pictograms have two or three dimensions, which automatically makes comparing them confusing. Second, instead of comparing the numbers the pictograms are supposed to represent, people start comparing the actual properties of the pictograms themselves.
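A minimal sketch of the area distortion, with hypothetical numbers: if a pictogram's height is scaled to track the value while its proportions are kept, doubling the value quadruples the ink on the page.

```python
# Hypothetical pictogram: height grows linearly with the value, and the
# width is scaled along with it to keep the drawing's proportions.
def pictogram_area(value, base_width=1.0, base_height=1.0):
    """Area of a proportionally scaled pictogram representing `value`."""
    return (base_width * value) * (base_height * value)

small = pictogram_area(1)  # area 1.0
large = pictogram_area(2)  # the value doubled...
print(large / small)       # -> 4.0: ...but the ink quadrupled
```

The reader's eye compares areas, so the larger value looks four times bigger instead of two.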
6/ Accommodating Averages
There is no such thing as an “average person”, “average weather”, etc. Comparisons can be made against one specific quantifiable property, but not several of them at once. What people usually mean by calling someone “average” is that the subject is not outstanding (i.e., above average) in ANY property people usually get compared against.
Statisticians know many kinds of averages, each suitable for describing certain situations. Things like the arithmetic mean, the median and the mode are taught in middle school; it makes sense to re-read one’s old notes, because this is where the manipulation lives.
The “averages” quoted by uninformed, or informed but malicious, sources are likely to be useless or misleading. Freely using the mean instead of the median (or vice versa) on distributions other than the normal one can seriously distort the picture.
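A toy illustration of the mean/median distortion on a skewed distribution (the income figures are invented):

```python
import statistics

# Four typical incomes plus one outlier: the distribution is far from normal.
incomes = [30_000, 35_000, 40_000, 45_000, 1_000_000]

print(statistics.mean(incomes))    # 230000: "the average person earns $230k"
print(statistics.median(incomes))  # 40000: the typical person earns $40k
```

Both numbers are legitimate "averages"; which one a source quotes tells you what it wants you to believe.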
7/ Ignoring Dispersion
Dispersion is “the extent to which a distribution (a function giving probabilities of occurrences of different possible outcomes) is stretched or squeezed” (Wikipedia). Individual observations may differ substantially from each other, and that difference may be exactly what matters.
An average by itself doesn’t provide enough information for rational decision making. On a practical level, averaging growth across multiple reporting periods is misleading, because it implies straight-line growth, and in business there is no such thing as straight-line anything, other than depreciation.
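A sketch of why averaging period-over-period growth rates misleads (the revenue figures are invented): the arithmetic mean of the rates implies a straight line that overstates what actually happened, unlike the compound (geometric) rate.

```python
revenues = [100, 150, 90, 160]  # four reporting periods

# Period-over-period growth rates and their arithmetic mean.
growth_rates = [b / a - 1 for a, b in zip(revenues, revenues[1:])]
arithmetic_mean = sum(growth_rates) / len(growth_rates)

# Compound rate: the single per-period rate that links first and last values.
periods = len(revenues) - 1
compound = (revenues[-1] / revenues[0]) ** (1 / periods) - 1

print(f"mean of rates: {arithmetic_mean:.1%}")  # ~29.3%, overstated
print(f"compound rate: {compound:.1%}")         # ~17.0%
```

Whenever the series is volatile, the arithmetic mean of the rates exceeds the compound rate, so quoting it flatters the growth story.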
Without knowing the dispersion, it makes no sense to compare a single observation (data point) with the average: the comparison is uninformative. It won’t tell whether the value falls into an acceptable range or is an anomaly.
8/ Puffing Up a Point with Percentages
Percentages (especially ones with decimals) look very believable to the observer. Below is some very simple math that is still worth understanding (feel free to skip it).
Percent change is the percent difference between the “before” and the “after” number. It’s not reversible: adding 20% to a number and then subtracting 20% from the result will not reproduce the original number. That’s the effect of the new base.
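The new-base effect in two lines (the starting number is arbitrary):

```python
x = 100.0
up = x * 1.20      # +20% -> 120.0
down = up * 0.80   # -20% of the NEW base -> 96.0, not 100.0
print(down)
```

The 20% subtracted is 20% of 120, not of 100, so the round trip loses 4.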
Also, if the original figures are all positive, a decrease of more than 100% is impossible. Remember this when a journalist writes something like “the CEO reduced her salary by 150%”.
A change in percentage points is the algebraic difference between two percentages (p2 − p1). The percent change between them is relative: (p2 / p1 − 1). Confusing the two can make a large difference in decision making: a move from 10% to 15% is 5 percentage points, but a 50% increase.
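A hypothetical market-share move from 10% to 15%, showing the two readings side by side:

```python
p1, p2 = 0.10, 0.15

points = p2 - p1        # 0.05 -> "up 5 percentage points"
relative = p2 / p1 - 1  # 0.50 -> "up 50%"

print(f"up {points * 100:.0f} points, or up {relative:.0%}")
```

A headline writer picks whichever number sounds more dramatic.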
It is embarrassingly easy to accidentally switch the base when calculating a percentage. For instance: “our towels absorb 50% more moisture because they have 2 layers instead of 1”. Assuming moisture absorption scales linearly with layers, the statement should have said “100% more”.
If a percentage of something (say, state spending vs GDP) is declining, it doesn’t necessarily mean that something is getting worse: the total (against which the percentage is calculated) may simply be growing faster. However, when the two metrics are tightly linked (state spending and GDP are linked only loosely), a decline, say, of company sales vs the market size, is a sign of trouble.
Comparing plan vs outcome (the completion percentage) should be done by comparing the actual change against the planned change, not the actual result against the planned result (the result being the original number plus the change). Comparing results instead of changes is a wonderful way of manipulating KPIs.
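A sketch with invented KPI numbers: baseline 100, planned result 110, actual result 105.

```python
baseline, planned, actual = 100, 110, 105

result_vs_result = actual / planned  # ~95.5%: "nearly hit the plan!"
change_vs_change = (actual - baseline) / (planned - baseline)  # 50%

print(f"result vs result: {result_vs_result:.1%}")
print(f"change vs change: {change_vs_change:.1%}")
```

Only half of the planned improvement was delivered, but dividing the results makes it look like a near miss.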
Some percentages are not additive. Increasing two different salaries, one by 10% and the other by 20%, results in neither a 30% combined increase nor (10% + 20%) / 2 = 15%. One needs a weighted mean to find out: (S1 × 110% + S2 × 120%) / (S1 + S2).
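The same formula in Python, with made-up salaries; the combined raise lands between 10% and 20%, pulled toward the raise of the larger salary:

```python
s1, s2 = 40_000, 80_000  # hypothetical salaries
r1, r2 = 0.10, 0.20      # their respective raises

combined = (s1 * (1 + r1) + s2 * (1 + r2)) / (s1 + s2) - 1
print(f"{combined:.2%}")  # ~16.67%, not 30% and not 15%
```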
Percentages need to relate to the same base figure to be comparable. If the unit cost is $1 and the selling price is $2, then a 12% cost increase and an 8% price increase: a) should be compared on an absolute basis (12¢ vs 16¢); b) shouldn’t be compared as percentages at all.
Low base values should be looked at with care: even small numerical increases will produce large percent increases (making for headline stories). So any sizeable percent increase should be accompanied by the base number, to provide context.
If one wants to show a big percentage increase or decrease, they will choose a small base number; and vice versa, diminishing things works better via comparison with something big. In many arguments both sides use opposite bases to emphasize their points, making the discussion much harder to follow. The only reliable way to handle this clash of “supported opinions” is to ignore the percentages altogether and evaluate the arguments on their own merits.
The free choice of the base period (time range) has been mentioned before, but it remains a powerful manipulation technique. It’s prudent to recompute the data against an alternative base period; if the conclusions turn out substantially different, the original conclusion must be rejected.
Reduced ratios. Let’s get back to the claim that “9 out of 10 dentists recommend this particular toothpaste”. It doesn’t tell how many dentists were in the sample, how the sample was selected, or whether all the dentists provided a clear response, or any response at all. 9/10 is far less reliable than 900/1000, simply due to the sample size.
Ignoring the sample size destroys valuable information. If there is no way to find out the sampling method, the conclusion should be ignored.
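A rough sketch of why the sample size matters, using the normal-approximation standard error of a proportion (sampling-method problems aside):

```python
import math

def std_error(successes, n):
    """Normal-approximation standard error of an observed proportion."""
    p = successes / n
    return math.sqrt(p * (1 - p) / n)

print(std_error(9, 10))      # ~0.095: 90% give or take ~9.5 points
print(std_error(900, 1000))  # ~0.0095: the same 90%, ten times tighter
```

With n = 10, the "90%" could plausibly be anywhere from roughly 70% to 100%; with n = 1000, it is pinned down.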
This very much resembles budget juggling at project inception, when forecasting capex with contingency pockets. Embedding contingency at several levels gives a better safety net than taking the bare estimate and applying a single total contingency over the top, and the PMs who know this use it as leverage. For example, if the original vendor cost is $100k, adding a 15% vendor variance and then a 15% project-level variance takes the maximum cost to about $132k, whereas a flat 30% over the original $100k gives $130k. With multiple levels, the compounded variance can become significant. Logarithmic graphs are commonly used in decision making to magnify the difference between options, but for regular reporting and controls only the regular axis should be used.
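The contingency figures above as arithmetic (a sketch; the "$132k" is the rounded result): nested contingencies compound multiplicatively, while a flat one doesn't.

```python
vendor = 100_000

nested = vendor * 1.15 * 1.15  # 15% vendor variance, then 15% project variance
flat = vendor * 1.30           # a single 30% contingency over the top

print(round(nested))  # 132250 -> the "$132k" in the note
print(round(flat))    # 130000
```

Each extra layer multiplies rather than adds, so the gap widens quickly as more levels of contingency are stacked.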