Weapons of Math Destruction 6/6

Cathy O'Neil

Part 5.

9/ No Safe Zone

  • Insurance works by pooling individual risks collectively, so it also uses buckets. Smaller buckets allow for personalised pricing that looks almost like individual pricing (it isn't). The core assumption is that people who share a number of similar traits have similar risk profiles (what could go wrong?).
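A minimal sketch of the bucketing idea (all traits and numbers below are invented for illustration, not from the book): the insurer groups customers by shared traits and prices each bucket at its average expected loss, so the "personalised" premium actually reflects the bucket, not the person.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical customers: (age_band, area, past claim cost) -- invented data.
customers = [
    ("18-25", "urban", 1200), ("18-25", "urban", 900),
    ("18-25", "rural", 400),
    ("40-60", "urban", 300), ("40-60", "urban", 500),
]

# Pool losses per bucket: everyone in a bucket pays the bucket's average.
buckets = defaultdict(list)
for age, area, cost in customers:
    buckets[(age, area)].append(cost)

premiums = {key: mean(costs) for key, costs in buckets.items()}

# Two different 18-25 urban drivers get the identical "personalised" price,
# however differently they actually drive.
urban_young_premium = premiums[("18-25", "urban")]
```

The smaller the buckets get, the more this looks like individual pricing, while still only ever describing a group.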

  • One of the statistically significant inputs into insurance models is the ubiquitous credit score – if that's not an instance of discrimination, what is? Meanwhile, drunk-driving records (at least according to the book) are given much lower weight in the overall score, which sounds like nonsense.

  • There’s a chance that a revenue-maximising algorithm actually preys on vulnerable people with low credit scores not because they’re bad drivers, but because it’s possible to extract higher insurance premiums from them. So it’s opportunistic pricing, not a reflection of the risk of that particular individual.

  • Predicting consumer behaviour opens up another opportunity to price discriminate: if the algorithm determines that the “mark” will not shop around and will take whatever price is presented to them, it will spit out the highest price that the person is predicted to accept without looking elsewhere. [MK: an example from our industry would be online travel agencies showing a higher price to people who come to their web sites directly rather than through price comparison web sites like Aviasales.]
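The two pricing ideas above can be combined into one toy formula (entirely invented, not any real insurer's model): quote the actuarial cost of the risk, plus a markup that grows when the model predicts the customer won't comparison-shop.

```python
def quote_premium(expected_loss: float, p_shops_around: float) -> float:
    """Toy revenue-maximising quote (invented formula for illustration).

    expected_loss   -- actuarial cost of insuring this person
    p_shops_around  -- model's predicted probability the customer compares prices
    """
    base_margin = 0.10                            # margin charged to everyone
    opportunistic = 0.40 * (1 - p_shops_around)   # extra for predicted non-shoppers
    return round(expected_loss * (1 + base_margin + opportunistic), 2)

# Identical risk, different predicted behaviour -> different price.
savvy_shopper = quote_premium(500, p_shops_around=0.9)   # 570.0
loyal_mark = quote_premium(500, p_shops_around=0.1)      # 730.0
```

Note that `expected_loss` is the same for both customers: the entire price gap comes from predicted behaviour, which is exactly the opportunism described above.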

  • Then there’s the trucking industry, where owners install cameras and telematics sensors recording every truck acceleration and every blink of the driver’s eye. Initially implemented to reduce insurance payments (under the noble idea of calculating premiums based on actual driving habits rather than a suitable bucket), monitoring turned out to have many more cost-saving and efficiency-increasing benefits: saving fuel, choosing better routes, and comparing drivers’ performance. If this sounds slightly dystopian, it’s OK because it is.

  • What could go wrong? Using GPS data still allows for discrimination: a person who lives in a poor neighbourhood (where the chance of an accident is statistically higher) and has to drive to a job with a chaotic schedule (i.e., arriving home at 12am is a possibility) will still be penalised by the algorithm. And taking a certain road littered with pubs may even suggest to the algorithm that the driver in question is part of the drinking public (based on location, time, and age), making things very unpleasantly complicated.

  • What is now opt-in (trackers in cars) will quite likely become mandatory eventually, with those who refuse having to pay extra. That’s the nature of the data economy. Privacy is a luxury.

  • The insurance industry by definition tries to reduce the information asymmetry between the insured (who wants to disclose only as much as won’t get their premiums increased) and the insurer (who needs to know as much as possible to better calculate risks). It’s not always legal to deny insurance (since this, among many other things in 2021, can divide society into haves and have-nots), but making it prohibitively expensive leads to the same outcome. The profit motives of the insurance industry outweigh the social benefits.

  • Many AI algorithms resemble black boxes more than anything; the question is what happens when they start feeding each other (i.e., one algorithm’s output gets fed directly into a second algorithm). Is it already time to start worrying or not?

  • Let’s talk about health and the “wellness” programs many companies have introduced in recent years, and especially in the dark COVID age. The idea is simple: companies invest in their employees’ well-being while enjoying lower insurance premiums. Win-win?

  • First, there’s the issue of privacy. It’s one thing that insurance companies know the history of one’s doctor visits and the medical procedures administered. It’s quite another to ask employees to fill in health diaries and penalise them with higher premiums for not doing so. [MK: from a purely economic standpoint, that extra insurance premium is precisely the price of one’s medical privacy.] Employers can get access to this information, too, and use it for whatever reasons they choose, as long as the discrimination is not based on the single factor of an employee’s health. Tying insurance premiums to the bogus measure called BMI is another example of pseudoscience affecting lives and pockets.

  • Even with the best intentions, tying one’s BMI score to anything work-related borders on discrimination (and what’s worse, it’s another proxy metric that specifically targets Black women). [MK: another example of good intentions and poor implementation is mandatory COVID vaccination.]
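For reference, BMI is just weight in kilograms divided by height in metres squared. A quick sketch (example figures invented) of why it's such a crude proxy: it sees only mass and height, so it cannot tell muscle from fat, and two people with completely different bodies and health get the identical label.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight divided by height squared."""
    return round(weight_kg / height_m ** 2, 1)

# A muscular 95 kg athlete and a sedentary 95 kg person of the same height
# get the same score -- 29.3, "overweight" by the usual cut-offs -- even
# though their actual health may differ enormously.
muscular_athlete = bmi(95, 1.80)   # 29.3
sedentary_person = bmi(95, 1.80)   # 29.3
```

Any premium or penalty keyed to this number inherits exactly this blindness.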

  • But wait, there’s more. Evidence that corporate wellness programs create a long-lasting positive effect on employees’ blood pressure, cholesterol, or blood sugar is simply inconclusive. The exception is quitting smoking. There’s also no conclusive evidence that smokers and obese people cost companies more in medical costs during their employment [MK: it looks like a very carefully crafted assertion because it ignores the factor of lost productivity in these two groups]. It’s true, though, that they are likely to incur higher medical costs after retirement, and many of those costs are funded by taxpayers.

10/ The Targeted Citizen

  • The book describes the influence Facebook’s algorithms have on people’s voting behaviour as well as on their moods (more positive updates – better mood, and vice versa). Algorithms have the potential to change our moods without our awareness.

  • It’s easy to accuse Facebook’s news feed or Google’s search results of opinion manipulation, but there’s no substantial proof other than the political affiliations of their top management. One has to decide whether to rely on these platforms for news or not.

  • Targeted marketing allows distributing political messages to the audience most receptive to them. This works thanks to confirmation bias. At the same time, it’s not clear whether microtargeting actually works or delivers what was promised.

  • MK: Politicians become two-sided marketplaces: they sell their tailored messages to voters and buy audiences for top dollar. Since there’s not much accountability for the outcome, this game can be played as inefficiently as one wants.

  • Not all votes are created equal: if a state is virtually guaranteed to vote red or blue, the value of a swing voter there is much lower than the value of a swing voter in a key state where the outcome is far from predetermined. [MK: and as any marketer knows, there’s a temptation to sell cheap swing voters and charge the client as if those voters came from a swing state. I wonder what sort of arbitrage potential exists.]

  • The dynamic with donors is even more interesting: donors are only useful if they donate, but if they donate everything in one go, they don’t need any attention anymore. Hence, maximising the LTV of a donor requires a series of targeted communications, and this creates a two-way street: politicians’ teams become trained dogs who perform tricks (i.e., send the right messages) when fed donations.
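The drip incentive can be sketched with a toy lifetime-value model (all parameters invented): a single big ask yields at most one donation, while repeated smaller asks with a decaying response rate can add up to more.

```python
def donor_ltv(ask_amount: float, n_asks: int, base_response: float = 0.6,
              decay: float = 0.7) -> float:
    """Toy expected lifetime value of a donor (all parameters invented).

    Each successive ask i is answered with probability base_response * decay**i,
    so enthusiasm fades but never quite reaches zero.
    """
    return sum(ask_amount * base_response * decay ** i for i in range(n_asks))

# One big ask vs. a drip campaign of ten smaller, targeted asks.
one_big_ask = donor_ltv(ask_amount=100, n_asks=1)    # expected 60.0
drip_campaign = donor_ltv(ask_amount=50, n_asks=10)  # expected ~97
```

Under these made-up numbers the drip campaign extracts more in total, which is why the communications never stop.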

  • Many campaigns never surface publicly: they target specific audiences with shock content to manipulate them.

  • What’s interesting about political campaigns is the rule of the minority: voters who are very likely to vote one way or another are much less interesting than swing voters. Sad but true.

  • Non-voters look expensive compared to swing voters, so they often get completely disregarded, which in effect cements their aversion to voting.

  • The author believes that changing the objective from targeting people to identifying people who need help (and eventually helping them) would be much more useful to society as a whole.

/the end