
The Loop. How Technology Is Creating a World Without Choices and How to Fight Back 3/5
Jacob Ward
Part 2.
6/ Life on the Rails
When humans are subjected to a system they don’t understand, they assume it arrives at outcomes better and more fairly than they could. They feel less pressure to fight it – and more pressure to comply. The same applies to the people who carry out orders produced by a process or suggested by an AI – there’s no second-guessing, just following the procedure.
There are many examples of people going with their gut, relying on powerful intuition – a form of magical thinking. Even when people recognise that this is irrational, they often choose not to correct it (e.g., knocking on wood while knowing full well it’s a superstition).
People can form relationships with tools, cars, etc., but also (!) with chatbots that keep asking questions: conversations (interactions) may turn really personal.
We are surrounded by systems driving our behaviour in different ways, be it a fire alarm or a car’s “change oil” indicator – and whatever causes that call to action to fire is often beyond our understanding. Individually these calls to action seem innocent, but collectively they shape us in ways we probably never intended.
Human behaviour is, to a large degree, a collection of recognisable patterns, and what can be recognised with confidence can be predicted – and often manipulated. What’s bothersome here is that AI builds only on past behaviour, so all its recommendations rest on what it already “knows” and has recognised in statistically significant quantities.
No one wants to build a system that empowers critical thinking (System 2). There are systems for “augmenting” it, “improving” it, “facilitating” it or even “replacing” it. But too much critical thinking is bad for everyone – except the individual doing the thinking.
MK: The goal of AI is to exploit our built-in weaknesses and shortcuts (tribalism, bias, gut feeling, etc.) for some gain – financial (buy something), political (vote for the right candidate) or compliance (perform a certain act).
7/ What AI Isn’t
MK: Parts of this chapter are very basic, but they are needed for those who confuse the terms.
Machine Learning (ML) – algorithms that get better at a task through experience. They build on past patterns to forecast the future; new predictions need new data.
Supervised Learning – systems learn from large sets of labelled data (this is a cat, this is a dog) to pick up patterns in the data. Trained to spot specific outcomes, these models find the patterns correlated with those outcomes.
Unsupervised Learning – systems are given unlabelled data sets and are asked to sort them into arbitrary clusters; the clusters become the sorting mechanism.
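A minimal sketch of the difference, using scikit-learn (the toy data and model choices below are my illustration, not the book’s): a supervised classifier learns from labelled examples, while an unsupervised clusterer gets the same points without labels and invents its own groups.

```python
# Illustrative sketch: supervised vs unsupervised learning on toy 2-D points.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cats = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))   # "cat" feature vectors
dogs = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))   # "dog" feature vectors
X = np.vstack([cats, dogs])

# Supervised: we provide the labels ("this is a cat, this is a dog").
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.2, 0.1], [2.9, 3.2]]))   # -> [0 1], learned from the labels

# Unsupervised: same data, no labels; the algorithm invents two clusters on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters[:5], clusters[-5:])             # cluster ids; their meaning is assigned by us afterwards
```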
Reinforcement Learning – processing raw, unlabelled data through reward and punishment. The algorithm infers what it needs from the data, and the researcher then votes down the wrong answers and votes up the right ones. The model then learns to search for the right answers while avoiding the wrong ones.
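A toy illustration of the reward-and-punishment idea (made up for these notes, not the book’s example): the “agent” repeatedly picks one of two actions, gets rewarded or punished, and drifts toward the action that pays off.

```python
# Toy reinforcement-style loop: learn which of two actions earns reward.
import random

values = [0.0, 0.0]          # the agent's current estimate of each action's worth
counts = [0, 0]

def reward(action):
    # Hidden "environment": action 1 is right 80% of the time, action 0 only 20%.
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else -1.0

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore the other one.
    if random.random() > 0.1:
        action = max(range(2), key=lambda a: values[a])
    else:
        action = random.randrange(2)
    r = reward(action)                                         # "vote up" (+1) or "vote down" (-1)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]    # running average of observed rewards

print(values)   # the estimate for action 1 ends up clearly higher
```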
There are two major forces at work that guide the outcome of the process. The first is the Objective Function – the purpose of the AI project: park a car between two other cars, or cook a cheeseburger according to the recipe. If important constraints are omitted, the outcome will come with unintended side effects. The second is Ruthless Efficiency – the shortest pathway to the right answer.
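A contrived sketch of how an omitted constraint changes what “optimal” means (the parking numbers are invented for illustration): the same ruthless optimisation, run over the same candidate spots, picks a “better” spot once we forget to say it mustn’t box the neighbours in.

```python
# Contrived example: an objective function missing an important constraint.
# Candidate parking positions (metres from the ideal line) and whether they block a neighbour.
candidates = [
    {"pos": 0.10, "blocks_neighbour": False},
    {"pos": 0.02, "blocks_neighbour": True},   # closest to ideal, but boxes someone in
    {"pos": 0.25, "blocks_neighbour": False},
]

def objective(spot):
    return abs(spot["pos"])                    # "park as close to the ideal line as possible"

# Naive objective: ruthlessly efficient, happily picks the blocking spot.
print(min(candidates, key=objective))

# Same objective plus the constraint we forgot the first time round.
legal = [c for c in candidates if not c["blocks_neighbour"]]
print(min(legal, key=objective))
```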
The combination of the Objective Function and Ruthless Efficiency creates a black box whose workings are hidden even from the model’s developer. Hence the Explainability movement, which asks that all major models be transparent in order to be ethical. Transparency, though, is incredibly complicated: the models draw inferences in numerous ways that may not be traceable to one another – otherwise the model wouldn’t be as sophisticated or accurate.
Systems can be designed for transparency, with the constraints and variables clear from the start; to stay interpretable, the model must rely only on them and be bound by them. Such interpretable systems can be combined into larger systems that can still be decomposed and understood. But systems that weren’t built with transparency in mind can’t be reliably decomposed and explained.
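A small sketch of what “interpretable by construction” can look like (the feature names and the scikit-learn choice are my assumptions, not the author’s): a linear model over explicitly named variables can be read off coefficient by coefficient, which is exactly the readout a black box doesn’t offer.

```python
# Interpretable-by-design sketch: named inputs, readable coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression

feature_names = ["income", "debt", "years_employed"]      # variables fixed up front
X = np.array([
    [50_000, 10_000, 2],
    [80_000,  5_000, 8],
    [30_000, 20_000, 1],
    [65_000,  2_000, 5],
], dtype=float)
y = np.array([0.3, 0.9, 0.1, 0.8])                         # e.g. an approval score

model = LinearRegression().fit(X, y)

# The whole decision rule fits in one readable line per variable.
for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: {coef:+.6f} per unit")
print("intercept:", model.intercept_)
```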
Black boxes are faster to build and harder for competitors to imitate. They are usually more accurate than humans, but (and this is important) they carry no liability for wrong decisions, and those decisions can’t be contested.
An objective function is suitable for certain simple situations but is completely blind in cases where social concerns are involved: there are no universal truths or morals, and values aren’t shared by everyone. One person’s benefit may be another’s loss, and the same model output may be of different use to different people.
Pattern recognition creates a “universal” benchmark, which is entirely artificial. For most serious topics, universal decisions that satisfy everyone even minimally don’t exist. A sense that the decision is legitimate (and that the process is transparent) won’t change the outcome, but it may ease the pain by offering an explanation to those affected. (This is the theory of “social choice”.)
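One classic social-choice illustration of why no universally satisfying decision may exist (my example, not the book’s): with three voters and three options, majority preferences can form a cycle, so every possible “winner” loses to something by a majority.

```python
# Condorcet-style cycle: pairwise majorities can contradict each other.
from itertools import permutations

# Each voter ranks options A, B, C from most to least preferred.
voters = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    # True if most voters rank x above y.
    wins = sum(ballot.index(x) < ballot.index(y) for ballot in voters)
    return wins > len(voters) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"a majority prefers {x} over {y}")
# Prints: A beats B, B beats C, C beats A - no option survives every majority.
```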
Building an AI for every custom task (rather than reusing what’s already available) is prohibitively expensive. So the same models end up being reused, making an increasing number of mistakes because the model only loosely fits the new task. Think of using a cooking-recipe AI to sort through résumés. Finding such mistakes may also take a prohibitively long time.
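A rough sketch of why a reused model degrades away from its home task (the data is synthetic and the effect deliberately exaggerated): a classifier trained on one distribution keeps making confident calls as the inputs drift, it just gets more of them wrong.

```python
# Synthetic illustration of reusing a model outside its training domain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(shift):
    # Two classes; "shift" moves the whole population away from the training domain.
    a = rng.normal(loc=[0 + shift, 0], scale=1.0, size=(200, 2))
    b = rng.normal(loc=[3 + shift, 3], scale=1.0, size=(200, 2))
    X = np.vstack([a, b])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

X_train, y_train = make_data(shift=0.0)          # the task the model was built for
model = LogisticRegression().fit(X_train, y_train)

for shift in (0.0, 2.0, 4.0):                    # progressively less suitable tasks
    X_new, y_new = make_data(shift)
    print(f"shift={shift}: accuracy={model.score(X_new, y_new):.2f}")
```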
Philosophically, the inbuilt hope that things are always getting better causes us to believe that advances in technology are mostly for the good and will make human lives better.
Practical applications require AI to be predictable (on a battlefield, say, a soldier needs to know their robotic helper is where it needs to be). We humans believe things are simpler than they are (try explaining them to a robot) – they’re not. We also believe our goals are more complex and that we are not so predictable – they aren’t, and we are. However, since most of our decisions are unconscious and we rationalise our actions ex post, such decision-making can’t be automated.
Morality (think of the Trolley Problem) is impossible to automate. Human values are not consistent, so it’s not possible to properly train AI to provide a universally acceptable response to a given situation. We forgive people for being imperfect but may not tolerate the same mistakes in AI systems.
Autonomous systems (since they learn from pre-existing scenarios) must be trained on a wide range of scenarios covering situations that might occur – even only remotely.
Part 4.