The Loop. How Technology Is Creating a World Without Choices and How to Fight Back 5/5
Jacob Ward
Part 4.
10/ Mission Critical
In the military, machines don’t make decisions; they are used to generate well-developed options. Machines can’t (yet) make the final decision to pull the trigger on someone – a live human is needed to give the final go-ahead. But who knows how this will play out in the future?
Anti-missile (i.e., defensive) systems can and should be fully automated: there’s no time to consult a human, it simply takes too long. Cyber warfare is another case where a fast automated response is best – every second counts.
Fully automated weaponry is disposable, even an automated fighter jet. This removes some challenges (no pilot to black out during extreme manoeuvres) but creates others, such as who makes the decision to shoot. Maybe ethics will shift towards making it acceptable, especially when one machine is fighting another machine without human involvement.
Crime fighting via predictive systems like PredPol (now Geolitica) is only as good as the data they’re fed, and there are many accounts of data manipulation to improve the scores of individuals and departments: crimes go unreported or get misrepresented to make the performance numbers look better.
Another major source of systematic bias in predictive policing systems is that people from certain backgrounds and locations get arrested and convicted more frequently and sentenced for longer (since they can’t afford a good lawyer), and the loop feeds itself.
There’s an argument that the mere presence of a police car in an area prone to certain kinds of crime will reduce the number of such crimes (a good thing) and make residents feel safer (also a good thing). This seems to work; however, the long-term effects of such constant presence are not yet clear.
The approach is still biased: certain crimes are disproportionately committed by certain demographics, and while the policing systems don’t take obvious things like race into account (it’s simply not needed anymore), it’s hard to deny that it’s easier to catch ten drug dealers than one white-collar criminal. Coupled with simplistic KPIs, this changes societal outcomes, too.
Also, the more data there is, the better the model is trained. Low-level offenses are way more frequent than complex cases, so it’s natural to expect that they will be predicted and addressed better. More data creates a feedback loop targeting certain areas and demographics, and again, the bias is unavoidable.
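A minimal sketch of that feedback loop (my own toy model, not from the book): two districts with identical true offence rates, but patrols allocated each year in proportion to previously recorded incidents. The model can never tell “more crime” from “more looking”, so the initial allocation bias is preserved forever and comes back dressed up as data.

```python
# Toy model (illustration only, not from the book): two districts with the
# SAME true rate of low-level offences, but an uneven initial patrol split.
true_rate = {"A": 0.5, "B": 0.5}   # expected incidents recorded per patrol shift
patrols = {"A": 60.0, "B": 40.0}   # initial allocation of 100 patrol shifts
recorded = {"A": 0.0, "B": 0.0}

for year in range(10):
    # Patrols can only record what they are present to see.
    for district in recorded:
        recorded[district] += patrols[district] * true_rate[district]
    # "Data-driven" reallocation: next year's patrols follow recorded incidents.
    total = sum(recorded.values())
    patrols = {d: 100.0 * recorded[d] / total for d in recorded}

print(patrols)   # {'A': 60.0, 'B': 40.0}: the initial bias comes back as "data"
```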
Social network in action: one’s chances of getting shot increase dramatically if they know someone who has been shot before.
The profile of a police officer more likely to pull the trigger on another person: well-connected, more likely to be young, and likely to have had civilian complaints filed against them along with other officers (i.e., not a lone wolf). Race and gender don’t matter.
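A tiny sketch of the network framing behind the two points above (an invented example, not the book’s model): the risk signal comes from the structure of who knows whom, not from individual attributes like race or gender.

```python
# Hypothetical illustration: flag people whose immediate contacts include a
# prior shooting victim. Names and ties are made up; this is a sketch of the
# idea, not any real system.
ties = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob", "erin"},
    "erin":  {"dave"},
}
prior_victims = {"dave"}

# Exposure here is simply "knows a prior victim"; real network studies weight
# this by closeness, recency, co-arrests, and so on.
elevated_risk = {person for person, contacts in ties.items()
                 if contacts & prior_victims and person not in prior_victims}

print(elevated_risk)   # bob and erin: one hop away from a prior victim
```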
The business case for deploying AI in, say, policing is reducing the number of payouts to innocent victims’ families, as well as reducing insurance premiums and deductibles. Cynical, but a practical starting point.
11/ Weak Perfection
The underlying assumption behind using AI in ever more areas of our lives is that it can improve anything. There are, however, instances where human inefficiency is a hidden safeguard.
A plea in court is one of these safeguards: even if we imagine that it could be entered remotely (it’s done in person), and even if all the calculations of statistical probabilities say that a certain decision needs to be made, such decisions may have disastrous consequences – so humans must take their time to think everything through.
Weak Perfection means that even a perfected system still does little to help people make better decisions.
AI makes managing people easier (in governments and companies) but doesn’t really help the people affected by its recommendations. Or it makes our choices look simpler than they really are.
Transparency of AI may not be transparent to everyone: even if the inner workings of the system have to be explained, the assessor often has to sign NDAs so that the “secret sauce” is not spilled.
RATs (Risk Assessment Tools) take lots of variables into account before producing a score indicating how likely a defendant is to flee or commit another crime while awaiting trial. They take all the obvious parameters into account: access to housing, job history, criminal history, etc., but can be biased by ignoring the conditions of the local area or the specifics of the local court. It’s comparing specifics to generalities, and when it comes to human lives – that’s bad. Use of such tools is likely to increase inequity rather than decrease it.
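A toy version of such a score (features and weights are invented for illustration; real tools are more elaborate): a linear score over general, nationally derived features, with no column for the local specifics the paragraph above worries about.

```python
# Invented weights over general features, in the spirit of the RATs described
# above. This is a sketch of the mechanism, not any real tool.
WEIGHTS = {
    "prior_convictions": 0.6,
    "unstable_housing": 0.8,
    "unemployed": 0.5,
    "prior_failure_to_appear": 1.0,
}

def risk_score(defendant: dict) -> float:
    """Linear score over nationally derived features.

    Note what is NOT here: local transit access, the local court's scheduling
    practices, the local job market. Specifics get compared to generalities.
    """
    return sum(w * defendant.get(f, 0) for f, w in WEIGHTS.items())

# Two defendants identical on the general features get identical scores,
# even if one lives two blocks from the courthouse and the other is a
# three-hour bus ride away with no car.
d1 = {"prior_convictions": 1, "unemployed": 1}
d2 = {"prior_convictions": 1, "unemployed": 1}
print(risk_score(d1), risk_score(d2))   # 1.1 1.1
```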
Outsourcing to AI the decision of who is eligible for benefits and isn’t lying in their application (i.e., doesn’t raise red flags) is also dangerous because of the false negatives it produces. It’s infinitely faster than humans – but what are the effects on society?
12/ Higher Math
Overgeneralising in AI models is bad, as we’ve already discovered. When dealing with risk, more precise models that go above and beyond the outdated ones mainstream insurers use make it possible to offer products to people who would otherwise not have qualified, for various reasons.
An example is fire insurance: traditionally, homeowners who put extra effort into fireproofing their houses were placed in the same bucket as those who didn’t – and both were denied insurance. The use of AI, aerial imaging, and access to home-improvement databases – all kinds of signals that can convincingly demonstrate the difference between the two homeowners – allows: a) writing an insurance policy for the first homeowner (a social component), and b) even lowering their premium thanks to the extra investment they have put into fireproofing their place (an economic component).
Even better, such models allow people to proactively assess the steps they need to take to get insured and to reduce their premiums. (In practice this almost never happens, but that’s another story.) What’s important is that predictive models shouldn’t be based solely on past data, as innovations and best practices may otherwise be ignored.
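A hypothetical sketch of the mechanism behind the fire-insurance example (all rates, signals, and discounts are made up): extra verified signals separate the two homeowners the old bucket lumped together, and also show a homeowner which steps would lower their premium.

```python
# Invented numbers for illustration of the mechanism, not any insurer's model.
BASE_PREMIUM = 4000          # $/year for a home in a high-wildfire-risk area
DISCOUNTS = {                # each verified mitigation signal reduces the premium
    "ember_resistant_vents": 0.05,
    "cleared_defensible_space": 0.15,   # e.g. confirmed via aerial imaging
    "fire_resistant_roof": 0.10,        # e.g. from a home-improvement database
}

def premium(signals):
    """Return an annual premium, or None if the risk is still declinable."""
    discount = sum(DISCOUNTS.get(s, 0.0) for s in signals)
    if discount < 0.10:      # too little mitigation: declined, as in the old model
        return None
    return round(BASE_PREMIUM * (1 - discount), 2)

print(premium(set()))                                                 # None -> declined
print(premium({"cleared_defensible_space", "fire_resistant_roof"}))   # 3000.0
```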
Predictive systems can recommend that people build a house in a certain location to reduce risk, or avoid buying a place altogether because the risk is unacceptably high.
However, it’s not hard to imagine a world where people consuming common resources (“public goods”) can’t opt out of the system: it’s part of being a citizen. There’s no exiting, just accepting and living with it.
The author touches on my favourite topic of VSL – the Value of a Statistical [human] Life. Adding $9.6m per human life saved (the US figure) into a model can have peculiar side effects.
A trickier example is vaccines (COVID vaccines come to mind): if severe side effects are present in 1 in 1 million cases (or so we’re led to believe), it’s up to humans to decide whether they agree to take on this risk or not.
What about self-driving cars? If they can substantially reduce the number of injuries and deaths on the road, is it worth allowing them to drive on public roads, with the manufacturers covering the legal bills? Not so fast: it may be emotionally and culturally unacceptable to agree to a system where someone dies because of a self-driving car.
Mandatory backup cameras, installed on all new cars in the US since 2018, prevent about 100 children’s deaths per year. The extra cost to consumers exceeds 100 * $10m per life = $1B, but no one wants to be the parent who kills their own child in a horrific accident. So people agree to pay more than the cold economic calculations would suggest.
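A back-of-the-envelope check of that trade-off, using the figures quoted above plus assumed values for annual US car sales and per-car camera cost (the assumed numbers are mine, for illustration only):

```python
# Back-of-the-envelope version of the backup-camera trade-off, using the
# figures quoted in the text (~$10m VSL, ~100 lives/year).
VSL = 10_000_000                # value of a statistical life, $
lives_saved_per_year = 100
statistical_benefit = VSL * lives_saved_per_year          # $1.0B per year

# Assumed numbers for illustration only: roughly 17M new cars sold in the US
# per year, and a per-car cost for the camera, screen, and wiring.
new_cars_per_year = 17_000_000
cost_per_car = 140
total_cost = new_cars_per_year * cost_per_car             # ~$2.4B per year

print(f"benefit ~ ${statistical_benefit/1e9:.1f}B, cost ~ ${total_cost/1e9:.1f}B")
# The cold calculation says the mandate "overpays" for the lives saved,
# and yet, as the text notes, people accept the cost anyway.
```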
In public policy there is still huge untapped potential for AI deployment to improve human lives, but it doesn’t come for free (as any budget-constrained municipality will appreciate), and social services often don’t have a compelling business case. (Cost savings, unfortunately, are a poor business case.)
the end