4/ Propaganda Machine
Targeted advertising promised to give people what they want; in reality, its goal is to maximize the value extracted from each customer (one-off or recurring), and targeting the vulnerable (e.g., the poor or uneducated) easily turns predatory: either promising something of value that isn't, or simply overcharging people for something they wouldn't think to comparison-shop.
Predatory ads work "best" where great need meets ignorance. Vulnerability is gold: vulnerable people (let's call them "marks") self-select by responding to the ads, so finding a pain point is half the marketer's job.
Targeted ads reinforce social stratification rather than erode it. Anyone can be a victim; the rich are not as immune to predatory ads as one would expect.
A well-known example of a WMD is the misleading promise that a university education (with for-profit universities being the worst offenders) is the path to a prosperous future. The promise alone is enough to sell overpriced education funded by student loans.
5/ Civilian Casualties
Crime prevention software predicts the geographical areas where certain types of crime are likely to occur. Send a police squad to patrol the area, and it will likely deter crimes that would otherwise have gone ahead. Win-win?
A model is only as good as the data fed into it and the people who interpret its conclusions. For instance, not all crimes get reported: the more minor the crime, the less likely it is to be reported. The model therefore spits out probabilities for serious, high-profile crimes (burglaries, car theft, assault, etc., most of which do get reported), and these get prioritized over the "nuisance" crimes that don't get reported and/or recorded.
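A toy sketch of that reporting bias (all counts and rates are invented for illustration, not taken from the book): the model never sees actual crime, only reported crime, and reporting rates fall with severity.

```python
# Hypothetical incident counts and reporting rates by crime type.
actual_crimes = {"burglary": 100, "car theft": 80, "assault": 60,
                 "vandalism": 300, "loitering": 500}
report_rate = {"burglary": 0.8, "car theft": 0.9, "assault": 0.6,
               "vandalism": 0.2, "loitering": 0.05}

# What the model actually trains on: reported crimes only.
reported = {c: round(n * report_rate[c]) for c, n in actual_crimes.items()}
print(reported)
```

Nuisance crimes (vandalism, loitering) make up roughly three quarters of the invented "actual" incidents, but only about a third of the reported data set, so a model trained on reports will systematically understate them.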
Adding "nuisance" crimes to the model creates a vicious feedback loop: more and more people get charged, which justifies more policing to catch even more "criminals". Why do this? Because of a belief in the "broken windows" theory: the presence of broken windows (or minor crimes committed) leads to overall disorder (more serious crimes), so it's worth fixing the windows now to establish a sense of order and deter further violations.
Adding nuisance crimes to the model feeds a self-fulfilling prophecy: there's no better way to portray a society as lawless than to chart all crimes without regard to their seriousness or the presence of a victim.
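The feedback loop can be sketched in a few lines (a toy simulation with invented numbers, not the book's model): two districts have identical underlying crime, but district A starts with slightly more recorded crime. Hotspot-style policing sends patrols to whichever district's record looks worse, and each patrol books some nuisance offences, which inflate that district's record for the next round.

```python
# Toy hotspot-policing loop; all parameters are made up.
recorded = {"A": 55, "B": 50}   # initial recorded crimes per district
PATROLS = 20                    # patrols dispatched each round
BOOKINGS_PER_PATROL = 1         # nuisance bookings each patrol generates

for _ in range(10):
    # The model flags the district with the worst record as the hotspot...
    hotspot = max(recorded, key=recorded.get)
    # ...and the patrols sent there add bookings to that same record.
    recorded[hotspot] += PATROLS * BOOKINGS_PER_PATROL

print(recorded)  # {'A': 255, 'B': 50}
```

District A's record balloons while B's never changes, even though the underlying crime was identical: the initial five-crime gap, not reality, determined where all the data got generated.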
Worse, this hands the media a more damning conclusion: since the nuisance-crime map more or less coincides with the map of poor areas, it's easy to jump to the conclusion that poor people cause most crime and are therefore responsible for their own poverty.
Sadly, police crime models don't and can't account for (let alone prevent) white-collar crime, which has far more impact on society than even mass shootings. So predictive models are useful, but they have the unintended consequence of targeting the poor.
MK: this isn't what the book says per se, but my take is that if policing quality is judged by the numbers, policing will concentrate in areas where the chance of booking someone for a nuisance crime is higher (guess the colour of the skin), but KPIs are KPIs. Add recidivism to this, and we're helping create one deeply unhappy chunk of the population.
But wait, there's more. As an ex-policeman myself, I know all too well that the police mentality is "a person is guilty unless there's evidence to the contrary". This viewpoint is more efficient than any other. Crime prevention models are built with the same mentality in mind (the developer's, of course). The fact that, legally, a person is innocent until proven guilty can't be accommodated in efficient models.
So, the question to society as a whole is: which matters more, efficiency or fairness? (No, "they should be balanced" doesn't cut it as an answer.)
To reiterate, more data leads to a skewed impact on minorities and the less fortunate. It's unlikely any system would voluntarily drop data to keep the model from chasing certain outputs. If anything, modern society is obsessed with collecting more.
The most straightforward conclusion: if the police started stopping people on the streets of wealthy suburbs, the stops would create data points that feed back into the model, and all of a sudden those suburbs would be perceived as crime-ridden.
Assuming the models produce a long-lasting status quo, it's fair to assume that the outputs of the policing-plus-incarceration models (i.e., prisoners) become the inputs into other profitable models run by prison operators. The higher the "load factor", the merrier.
So far, privacy seems to be the only working brake on efficiency (widespread facial recognition is not yet common), but the insatiable thirst for more data, and budgets that buy ever greater storage and processing capacity, will no doubt change this sooner rather than later.
Pre-emptive or preventive models pose a danger to individual freedoms. This may sound dystopian, and it probably is: step one is preventing a crime, and step two is charging a person with a crime they may not even have contemplated yet. Even if the person does commit that particular crime, or a lesser one, the model may suggest a more severe punishment.
Lastly, as models tend to simplify things, it's very tempting to use arrests as a proxy for safety. (In reality, more arrests don't mean more safety.)