The Loop. How Technology Is Creating a World Without Choices and How to Fight Back 4/5
Jacob Ward
Part 3.
8/ Collapsing Spiral
The final ring of the Loop (after which the book is named) is the convergence of pattern-recognition technology and unconscious human behaviour. This hasn't happened yet, but the trajectory points firmly in that direction.
Components that drive certain decisions already exist; it's only a matter of time before they get aggregated so that they interact with and serve as inputs for one another. If that happens before a legal foundation is built (one making platform owners responsible for the decisions of their AI), human options will be severely limited.
The interoperability of machine learning means that algorithms built for one group of tasks get reused for other, similar but distinct, tasks. I liked the phrase "metastasise into other areas".
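A minimal sketch of that reuse (transfer learning), assuming PyTorch and torchvision are available; the "new task" here is an invented binary classifier swapped onto a backbone trained for generic object recognition:

```python
# A sketch of algorithm reuse ("metastasis"): a model pretrained for
# generic image classification is repurposed for a different task by
# swapping its final layer. The binary task is purely illustrative.
import torch
import torch.nn as nn
from torchvision import models

# Reuse a network trained on ImageNet (object recognition)...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...for a new, unrelated binary task by replacing the classifier head.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new head needs training; the learned features carry over.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc")

dummy = torch.randn(1, 3, 224, 224)  # one fake 224x224 RGB image
print(backbone(dummy).shape)         # torch.Size([1, 2])
```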
Surveillance is the most obvious example of using (and abusing) AI. COVID contact tracing is one application that is seemingly in the public interest. Do people like it? Not one bit. In law enforcement, the combination of eliminating mundane tasks and shifting responsibility from an officer to an algorithm is a godsend.
Recommendation systems choose the next music track to play or the best route that minimises time wasted in traffic. It's not unreasonable to say that these systems have already started making decisions for us.
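Not the book's code: a toy "next track" chooser with made-up two-dimensional taste embeddings, just to show that the pick is a mechanical argmax over similarity scores.

```python
# A toy sketch of how a recommender picks "the next track": score every
# candidate by similarity to what was just consumed and take the top.
# The embeddings below are invented; real systems learn them from behaviour.
import numpy as np

tracks = {"track_a": [0.9, 0.1], "track_b": [0.8, 0.3], "track_c": [0.1, 0.9]}

def next_track(last_played: str) -> str:
    u = np.array(tracks[last_played])
    scores = {
        name: float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
        for name, v in tracks.items()
        if name != last_played
    }
    return max(scores, key=scores.get)  # the system decides for us

print(next_track("track_a"))  # -> "track_b" (closest in taste-space)
```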
Facial recognition systems now have huge data sets, allowing them to match faces from various sources and identify the people we see quite effectively. Facial recognition alone can't be used as proof that will stand up in court, but marking an individual as a "lead" and following/monitoring them will, sooner rather than later, create enough evidence on its own. (This is called parallel construction.) Ethics? What ethics?
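Again my sketch, not the book's: the mechanics of face matching reduce to comparing embedding vectors against a threshold. The 128-dimensional vectors, the 0.6 cutoff, and the random gallery are all invented.

```python
# A hedged sketch of face matching under the hood: faces are mapped to
# embedding vectors, and two faces "match" when the vectors are close
# enough. Real systems learn the embeddings; these are random stand-ins.
import numpy as np

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Cosine-distance match: below the threshold counts as a hit."""
    cos = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return (1.0 - cos) < threshold

# Compare one "probe" face against a large gallery scraped from many sources.
gallery = {f"person_{i}": np.random.randn(128) for i in range(10_000)}
probe = np.random.randn(128)
leads = [name for name, emb in gallery.items() if is_same_person(probe, emb)]
print(f"{len(leads)} possible leads to follow up on")
```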
Ethics gets thrown out of the window when efficiency is involved; KPIs are set based on what the technology can deliver, not on what the humans being replaced by it can do. People who publish their photos on social media (especially photos proving their location at the time of a major event) are doing themselves a huge disservice by handing the face-matching algorithms all the data they need.
Critical AI systems exist in multi-layered darkness. The actual developers don't (or shouldn't) know how the system will be used, or are deliberately pointed at ethical applications (e.g., disaster relief). Such systems are built with dual use in mind, with the main use being far more sinister.
The book notes that the level of ethical restraint varies between countries: as with medical research involving live humans, whoever can go the farthest will build the better model. Models will inevitably improve, so it's a matter of industry and government policy to decide whether certain parts of the technology simply must not be used despite the benefits, with, needless to say, dire consequences for non-compliance. Another concern is that detecting non-compliance is a nontrivial task in itself.
We will probably never become slaves of the machines, but we're already becoming slaves of the behaviours amplified by the algorithms. The most visible (and probably most dangerous) example is kids' use of technology: a compulsion forming around it.
Devices with screens are not mere toys: they offer interactions and types of play that other toys can't. And deep involvement in those interactions can override the natural instinct, an evolutionary trait, to monitor one's surroundings.
The compulsion-forming potential of apps is infinitely higher than that of books and toys. "Educational" apps (in reality, highly addictive entertainment) are still built around engagement and retention metrics, which work even better on children.
Parents teach these behaviours to their kids, who embrace them with open arms and don't let go. It also shapes what those kids will later consider appropriate for their own children (the iPhone is 15 years old, and teenagers who got its early versions now have children of their own playing with iPads). The circle of life and technology.
9/ The Loop
MK: The book talks about survivorship bias, which has been written about extensively. The author's take on this bias is that humans may perceive their (and others') success as the result of their genius or vision, and not as the work of the algorithm that prioritised their output over everyone else's. In online services, gaming the SEO (search engine optimisation) algorithms was an essential (and underappreciated by others) piece of growth strategy.
Algorithms don't just take into account the properties of each content item; they also look at the metainformation, i.e., the data about the content. Again, it's the engagement set of metrics within product management. The risk is that the diversity of content may (and will) suffer, producing an echo chamber, because algorithms optimise recommendations to meet certain engagement and retention thresholds. A toy simulation of that narrowing is sketched below.
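Not from the book: a toy simulation of how optimising purely for engagement narrows what a user is shown. The topic names, the click model, and the 1.05 boost are all invented.

```python
# A toy echo-chamber loop: every round, the algorithm promotes whatever
# got clicked, so the pool the user actually sees keeps narrowing.
import random
from collections import Counter

topics = ["politics", "sports", "cats", "science", "cooking"]
weights = {t: 1.0 for t in topics}           # initial, even exposure
user_favourite = "cats"                      # this user clicks cats more often

for _ in range(1_000):
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    clicked = shown == user_favourite or random.random() < 0.05
    if clicked:
        weights[shown] *= 1.05               # engagement feeds exposure

exposure = Counter({t: round(w, 1) for t, w in weights.items()})
print(exposure.most_common())                # "cats" dominates the feed
```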
The content created by AI can’t be great by design; it may only be great by accident.
The author then explains what NFTs are and why people believe they're buying not just pixels but experiences.
Generating music is one of the simplest applications of AI. [MK: I fondly remember when, back in 2005, Wolfram Alpha approached me about integrating our mobile ringtone generation technology into their product.] And it's far cheaper: a unique track with all the rights can be made by AI for under $1,000, while a "proper" track would cost at least 10x that.
All algorithms are biased to an extent. At a bare minimum, they create the illusion that anyone can become a star, which is very deceiving. They can't be decomposed (see above), and their inner workings can't be deduced from their outcomes.
It's easy to think that algorithms can be made unbiased by simply removing the offending inputs. But it's not always clear how to find those inputs or what weight they carry. Even more importantly, the model itself may be to blame: even when sensitive data (like race, or ZIP code, which often is not allowed as an input) is removed, the model can reconstruct it from correlated features and keep using it.
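A minimal sketch of this failure mode (entirely synthetic data, not the book's example), assuming scikit-learn and NumPy: the protected column is dropped, yet a correlated proxy lets the model reproduce the biased outcome anyway.

```python
# Why dropping a sensitive column doesn't de-bias a model: a correlated
# proxy (here, "zip_code") lets the model recover the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, n)             # sensitive attribute (dropped)
zip_code = protected + rng.normal(0, 0.3, n)  # proxy strongly tied to it
income = rng.normal(0, 1, n)                  # a legitimate feature
# Historical labels systematically favour one group:
label = ((income + 1.5 * protected + rng.normal(0, 0.5, n)) > 0.75).astype(int)

# Train WITHOUT the protected column: only income and the zip proxy.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, label)

rates = [model.predict(X[protected == g]).mean() for g in (0, 1)]
print(f"approval rate, group 0: {rates[0]:.2f}, group 1: {rates[1]:.2f}")
# The gap persists: the model rebuilt "protected" out of "zip_code".
```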
The book talks about Lindy's law (and I'm a subscriber of its namesake Revue blog): the expected longevity of an event or a person's career grows by two extra years for every additional year that event or person stays relevant. Having AI extend, via its recommendations, the useful lifetime of a person or an event will ensure we get even more of the same. Lindy's law ends up being actively enforced instead of merely observed.
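One common formalisation of the Lindy effect (my framing, not the book's): conditional on having stayed relevant for time t, the expected remaining lifetime is proportional to t.

```latex
% The Lindy effect, formalised: an item that has stayed relevant for
% time t has an expected remaining lifetime proportional to t. The
% book's "two extra years per additional year" corresponds to c = 2.
\[
  \mathbb{E}[\,T - t \mid T > t\,] = c\,t, \qquad c \approx 2 .
\]
% A recommender that keeps an item relevant grows t itself, so the
% "law" stops being an observation and becomes a mechanism.
```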
Performativity: supervised learning tends to make predictions that wind up influencing the very thing being predicted. An interesting method to counter this bias is to perform the evaluation offline, i.e., not via another algorithm downvoting the unfavourable results; leave that to live humans.
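A toy illustration of that feedback loop (mine, not the author's): each round, the model's prediction shifts the behaviour it will next be trained on. All numbers are invented.

```python
# Performativity in five steps: the prediction feeds back into the data
# the model will see next, manufacturing its own accuracy.
popularity = 0.10            # true share of users who'd pick an item unprompted

for step in range(5):
    predicted = popularity                   # model predicts from observed data
    # Recommending the item makes more people pick it (the feedback):
    observed = min(1.0, popularity + 0.5 * predicted)
    popularity = observed                    # next training set sees new "truth"
    print(f"step {step}: model says {predicted:.2f}, world becomes {observed:.2f}")

# Evaluating offline, on data the model never influenced, is one way
# to notice that the prediction created the outcome it "foresaw".
```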
AI, however, can spot patterns in human communications and make such interactions more constructive, completely replacing face-to-face communication between people who can't stand each other. The same algorithm may work to manage employee conflict or to achieve performance improvements.
Reading people’s faces via AI is error-prone, although the developers of such systems claim otherwise.
The author then mentions everyone's "favourite", programmatic advertising, plausibly claiming that a typical American sees 10,000 such ads every day. This, too, is a manifestation of the Loop.
Brand safety (brand owners' refusal to have their ads published alongside "unsafe" content) is understandable from the brand owner's point of view, but it effectively "defunds" the news: 20-50% of articles, even at major news outlets, are deemed "brand unsafe" for advertising purposes. Oops. It's not hard to conclude that unless a news outlet has an alternative monetisation mechanism (native ads or a paywall), it won't invest much in unmonetizable content, unless that content also attracts visitors to other, monetisable, pages. And an article may be flagged not even because of its topic: certain things simply can't be discussed without a specific dictionary that is itself deemed "brand unsafe".
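To make the "specific dictionary" point concrete, here's a deliberately naive sketch of keyword-based brand-safety filtering; the blocklist and the article are invented, and real vendors use larger lists plus ML scoring.

```python
# A naive brand-safety filter: a keyword blocklist flags whole articles
# regardless of context, so sober reporting that merely uses the "wrong"
# vocabulary becomes unmonetisable.
BLOCKLIST = {"war", "shooting", "death", "pandemic", "crash"}

def brand_safe(article_text: str) -> bool:
    """True if no blocklisted word appears anywhere in the article."""
    words = {w.strip(".,!?;:").lower() for w in article_text.split()}
    return not (words & BLOCKLIST)

report = "Markets recovered today after last week's crash, analysts say."
print(brand_safe(report))  # False: a financial story, defunded anyway
```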