AI Leadership for Boards – The Future of Corporate Governance 2/3
F. Torre, R. Teigland, L. Engstam
4/ Supervising AI Governance Capability
Implementing AI has benefits, but Boards need to understand and be able to mitigate the risks that come with it.
4.1. Supervising Data Management, Ethics and Black-Box Decision-Making
MK: I bought this book because of the name of this chapter.
Most companies collect data, but few have a data strategy that, among other things, includes mechanisms to ensure data consistency and completeness; as a result, the quality of decision making degrades along with the quality of the data.
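The book stays at the governance level here, but to give "mechanisms to ensure data consistency and completeness" some substance, below is a minimal sketch of an automated completeness check (pandas assumed; the column names and the 1% tolerance are illustrative, not from the book).

```python
import pandas as pd

def completeness_check(df: pd.DataFrame, max_missing: float = 0.01) -> dict:
    """Report the share of missing values per column and flag columns
    exceeding the tolerated threshold."""
    missing = df.isna().mean()  # Series: column -> fraction of missing values
    return {col: {"missing_share": round(float(share), 4),
                  "ok": float(share) <= max_missing}
            for col, share in missing.items()}

# Example: "revenue" is 25% missing, well above the 1% tolerance.
df = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                   "revenue": [100.0, None, 250.0, 90.0]})
print(completeness_check(df))
```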
Model developers can introduce their own biases, making algorithms even less trustworthy than the processes they've replaced.
Dataset bias occurs when algorithms are trained on data with low diversity, i.e., poor sample selection leaves the data not fully representative of all segments.
Association bias is when the data set used reinforces and amplifies cultural bias.
Automation bias arises when established interactions between humans are ignored and replaced with a loosely compatible algorithm.
Interaction bias is introduced deliberately by humans who tamper with the model through their interactions with it (chatbots taught offensive language by users are the classic example).
Confirmation bias is when the model looks correct if its outcomes match the expected (often biased) conclusions and preconceptions.
Some data is easier to obtain than other data; data about humans is not evenly distributed, so relying on just the readily available data inevitably reinforces discrimination and bias.
Making AI as unbiased as possible is a very complex task, and anything less may lead to ethical failures and harm.
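None of these biases yields to a single automated test, but dataset bias at least admits crude early-warning checks on segment representation. A minimal sketch (pandas assumed; the "segment" column and the 5% threshold are illustrative):

```python
import pandas as pd

def representation_report(df: pd.DataFrame, column: str,
                          min_share: float = 0.05) -> pd.DataFrame:
    """Compute each segment's share of the data and flag segments below
    `min_share` as potential sources of dataset bias."""
    report = df[column].value_counts(normalize=True).rename("share").to_frame()
    report["under_represented"] = report["share"] < min_share
    return report

# Example: segment "C" makes up only 2% of this toy training set.
df = pd.DataFrame({"segment": ["A"] * 90 + ["B"] * 8 + ["C"] * 2})
print(representation_report(df, "segment"))
```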
Black-box decision making has the following features:
Its outcomes can’t be explained by a step-by-step algorithm
It makes model developers and users unaccountable for the outcomes
The source of error can’t be reliably pinpointed
Re-training the model often becomes harder than training it in the first place
It reinforces its conclusions based on the feedback it’s been given; incomplete or biased feedback leads to the indoctrination of the model (MK: I’ve just come up with this term, do you like it?).
Thus, many regulators are unwilling to approve such models without questioning them. This poses an interesting ethical dilemma of "explainability" vs "understanding": is it enough to explain to others how the model works, or do Boards need to actually understand how it works? (The latter is much harder.)
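The book doesn't prescribe tooling, but to give the "explainability" side of the dilemma some substance: post-hoc techniques such as permutation feature importance can at least reveal which inputs a black-box model leans on. A minimal sketch with scikit-learn on synthetic data (the model and data are illustrative, not from the book):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an opaque model on synthetic data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the black box leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

Note that such an explanation tells Boards what the model is sensitive to, not how it reasons, which is exactly the gap between "explainability" and "understanding".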
Ideally, Boards should be aware of the use of both internally developed and off-the-shelf AI solutions / applications and have a broad understanding of the implications of said solutions. [MK: I would suggest at least putting them into the risk register and reviewing their applications quarterly.]
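To make that suggestion concrete, here is a hypothetical sketch of a per-solution risk-register entry with a quarterly review cadence (all names and fields are mine, not the book's):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    """One AI solution tracked in the risk register (illustrative)."""
    solution: str                  # e.g. "churn-prediction model"
    origin: str                    # "internal" or "off-the-shelf"
    risks: list = field(default_factory=list)
    last_reviewed: date = None

    def review_due(self, today: date) -> bool:
        # Quarterly cadence: due if never reviewed or ~90 days have passed.
        return (self.last_reviewed is None
                or (today - self.last_reviewed).days >= 90)

entry = AIRiskEntry("churn-prediction model", "internal",
                    risks=["dataset bias", "black-box decisions"])
print(entry.review_due(date.today()))  # True: never reviewed yet
```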
Large companies already include AI risks (brand reputation, competitive harm, loss of revenue or legal liability) in their SEC filings. [MK: ethical concerns are a face-saving way of refusing to sell an AI technology when selling it could lead to reputational damage.]
The book proposes that Boards should also be capable of supervising the creation and monitoring of a firm's data governance framework. How to do this in practice, and how much information otherwise busy Directors would be able to absorb and process, remains a rhetorical question.
4.2. Supervising AI Cybersecurity
Hacking and ransomware attacks are risks that must be present in the risk register and reviewed periodically. While it's tempting to beef up IT security and call it a day, there's a social engineering component too, and addressing it requires company-wide training and refresher courses.
Outsourced operations introduce an extra layer of complexity, as ensuring data security becomes far more challenging. Security awareness training is helpful but not a replacement for vigilance.
AI systems that rely on other systems can be manipulated, either by amplifying the scale of an error or by re-training on biased input data fed to them by an attacker. In image recognition, for example, there are subtly manipulated images that humans still interpret correctly but AI systems misclassify.
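Such manipulated inputs are known as adversarial examples. The book doesn't go into mechanics, but the best-known recipe, the Fast Gradient Sign Method (FGSM), fits in a few lines; a minimal sketch assuming a trained PyTorch classifier (the function name and epsilon value are illustrative):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge each pixel by at most `epsilon` in the direction that
    increases the classifier's loss; the change is nearly invisible
    to a human but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```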
The book suggests that Boards be aware of such attacks and have a mitigation and response plan. [MK: to me this is an incredibly naïve suggestion; my approach is to address these concerns via a risk register and have professionals look into the issue. Yes, cybersecurity is the most important risk at the moment, but that doesn't mean Boards must turn into security experts this very instant.]
Where the book is right (and it's a very good suggestion) is in having a war plan for when a breach has already occurred. Everyone involved must have clear responsibilities, the PR machine has to be well-oiled and prepared, and the firm's managers need to know how to run their operations in a temporarily degraded state.
4.3. Supervising Business Ecosystem Participation and Leadership
Business ecosystems don't work well with hierarchical structures, hence it's important to move decision making to lower levels of the firms involved and to ramp up information exchange and collaboration. "Community-driven organizations" are a dream of most modern Boards.
Talent acquisition and retention is one thing; ethical and legal data acquisition is another. As data flows between different ecosystem participants, it becomes essential to ensure all of them play by the same rules (think of ESG).
Choosing an AI vendor also needs to be assessed from a power-balance standpoint, because choosing the wrong supplier can mean the difference between long-term performance and failure.
Implementing AI, with the externalities it may cause, forces Boards to pay additional attention to non-shareholder stakeholders; AI can also turn the strategy planning process upside down, so the old "plan and execute" controlling approach won't work.