Part 3.
6/ Ineligible to Serve
The chapter opens with a case in which a personality test blackballed a person from getting even a minimum-wage job. It’s legally unclear whether such tests constitute a medical exam (which in many cases would be prohibited). The use of IQ tests for hiring in the US is illegal.
Applicant testing tools are supposed to be proxies for human screening of candidates. In many companies the cost of a “bad” hire is higher than the opportunity cost of missing a “good” one, so tools that cut down the screening effort on HR’s part seem justified.
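To make that asymmetry concrete, here’s a minimal sketch in Python; all cost figures are invented for illustration:

```python
# Toy expected-cost comparison behind aggressive screening.
# All figures are hypothetical.

COST_BAD_HIRE = 50_000    # training, lost output, severance, re-hiring
COST_MISSED_GOOD = 4_000  # opportunity cost: the role stays open longer

def screening_cost(false_negatives: int, false_positives: int) -> int:
    """Total cost of a screener's mistakes on one applicant batch."""
    return false_negatives * COST_MISSED_GOOD + false_positives * COST_BAD_HIRE

# Lenient screener: 1 good candidate rejected, 3 bad hires let through.
print(screening_cost(false_negatives=1, false_positives=3))   # 154000
# Aggressive screener: 10 good candidates rejected, 0 bad hires.
print(screening_cost(false_negatives=10, false_positives=0))  # 40000
```

Under this cost structure, a screener that throws away ten good candidates to avoid one bad hire still looks rational, which is exactly why false negatives go unexamined.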
Personality tests are poor predictors of future performance compared to aptitude tests and job references. They may be useful for team building and personal development – just not for making a hiring decision.
Certain questions may be illegal because they’re borderline discriminatory; at the same time, asking two indirect questions instead of one direct question may do the trick. It’s the pattern of answers that makes candidates dig their own holes.
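A toy sketch of how that works; the questions and the flagging rule below are entirely invented:

```python
# Invented example: neither question alone asks about a protected
# condition, but the combination of answers acts as a proxy for it.

def flags_candidate(answers: dict[str, bool]) -> bool:
    # Direct question (illegal to ask): "Do you have condition X?"
    # Indirect stand-ins:
    q1 = answers["I prefer working alone to working in groups"]
    q2 = answers["My mood changes noticeably from week to week"]
    # Individually each answer is common; jointly they reproduce
    # much of the signal of the forbidden direct question.
    return q1 and q2

print(flags_candidate({
    "I prefer working alone to working in groups": True,
    "My mood changes noticeably from week to week": True,
}))  # True -- the pattern, not any single answer, digs the hole
```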
Applicant weeding models, according to the book, receive little feedback, which reinforces the biases introduced by their developers. MK: I’d add to the status quo argument that while it may be cheap to mistakenly weed out a good candidate (a false negative), collecting feedback on candidates the system greenlighted and getting it back to the model developers may be far more time-consuming. So neither false positives nor true positives routinely get fed back into the model to retrain it.
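A minimal sketch of this selective-feedback problem, with invented scores and threshold: outcome labels only ever exist for candidates the model approved, so rejections, right or wrong, never produce training data:

```python
import random
random.seed(0)

# Invented applicant pool: "skill" is ground truth the company only
# observes for people it actually hires; "score" is the model's output.
applicants = [{"skill": random.random(), "score": random.random()}
              for _ in range(1000)]

THRESHOLD = 0.7
training_data = []
for person in applicants:
    if person["score"] > THRESHOLD:            # model greenlights
        label = person["skill"]                # performance observed on the job
        training_data.append((person, label))  # fed back to the developers
    # else: no label is ever collected -- false negatives simply vanish

print(f"{len(training_data)} of {len(applicants)} applicants generate feedback")
```

Only the greenlighted minority ever generates labels, so the model can never learn how wrong its rejections were.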
If such a model carries an implicit or explicit bias against a certain trait or mental condition, it makes the group of people who have that trait or condition effectively unemployable.
It’s no secret that white-sounding names get more callbacks than black-sounding names. While candidate tracking tools are supposed to reduce racial bias, they can reinforce it just as easily.
Resume parsing and analysis tools automatically screen for the right keywords, positions, and achievements. Deviations from machine-readable formats are not welcome, as the goal is to weed out as many resumes as possible and focus on the top matches.
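A minimal sketch of such a screener, with invented keywords; the point is that anything the parser can’t read scores zero, regardless of the candidate’s actual qualifications:

```python
# Keyword-based resume scoring (keywords invented for illustration).

REQUIRED_KEYWORDS = {"python", "sql", "etl", "airflow"}

def score_resume(text: str) -> float:
    words = set(text.lower().split())
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

parsed_ok = "Built Python ETL pipelines on Airflow with SQL warehouses"
parsed_badly = ""   # e.g., a two-column PDF the parser couldn't extract

print(score_resume(parsed_ok))    # 1.0 -> "top match"
print(score_resume(parsed_badly)) # 0.0 -> silently weeded out
```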
A more equitable way to interpret model outputs: instead of asking whether an otherwise capable person is ineligible to be hired, assess whether the person’s shortcoming is fixable (such as their command of English) rather than rejecting them outright. Models have objectives, and objectives determine behaviours.
Another job of HR is finding future stars, and resumes alone are bad predictors. Retaining existing stars is a challenge, too. This has a lot to do with social capital, which in this case is a measure of a person’s status in, and contribution to, the professional community. Hard to measure for offline jobs, though.
7/ Sweating Bullets
Scheduling is another example of where models can go wrong. Schedule optimization can maximize efficiency while putting unnecessary strain on already overworked staff if their interests and requests are ignored.
The benefits to the business are clear: employee productivity is maximized because there’s minimal downtime; employees can’t busy themselves with their own activities (reading, studying, slacking), as that’s money straight out of the employer’s pocket. Hours can also be kept just under the threshold that would make employees eligible for benefits.
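A sketch of that objective as the book describes the incentives; the threshold, demand numbers, and greedy rule below are all hypothetical:

```python
# Scheduler that tracks forecast demand but never lets one employee
# cross the benefits-eligibility threshold. Numbers are invented.

BENEFITS_THRESHOLD = 30          # weekly hours at which benefits kick in
demand = [4, 9, 3, 8, 2, 7, 5]   # forecast staff-hours needed each day

def assign_hours(weekly_demand: list[int]) -> list[int]:
    hours, total = [], 0
    for need in weekly_demand:
        # Never schedule past the point where benefits become owed.
        grant = max(min(need, BENEFITS_THRESHOLD - 1 - total), 0)
        hours.append(grant)
        total += grant
    return hours

schedule = assign_hours(demand)
print(schedule, sum(schedule))   # [4, 9, 3, 8, 2, 3, 0] 29 -- just under 30
```

The employee gets an erratic week (9 hours one day, zero another) and still lands one hour short of benefits, and that’s the optimizer working exactly as designed.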
If efficiency (including scheduling efficiency) is built into managers’ KPIs, there’s no way on Earth they’ll loosen their grip, regardless of what corporate says.
People who need money or are otherwise disadvantaged are the ones subjected to the models that control them. Their lives are kept miserable, stopping just short of the point of resignation; pay is raised slightly only when absolutely necessary to keep them from leaving. At the same time, if the supply of low-level workers is high, existing employees have virtually no bargaining power. Chaotic schedules also make holding a second job close to impossible, further reducing employees’ earning power.
Chaotic schedules take a toll not just on physical health, but also on family routines and on children’s educational and emotional outcomes. In other words, children who lack established routines and mentally stable parents (i.e., parents who don’t work chaotic schedules) do worse in school and have trouble socializing.
A similar example is software deciding which employees to fire during downsizing: the ones who tend to be less sociable and leave fewer public artifacts are usually the most obvious candidates. Algorithms make tough decisions easy. However doubtful, the model’s conclusion creates its own reality.
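A toy version of such a ranking; the “visibility” metric and its weights are invented, and the quiet contributor lands first on the layoff list by construction:

```python
# Layoff ranking driven by visible artifacts (numbers invented).

employees = {
    "quiet expert":   {"commits": 40, "docs": 1, "mentions": 2},
    "visible junior": {"commits": 15, "docs": 9, "mentions": 30},
}

def visibility_score(e: dict) -> float:
    # Arbitrary weights favoring social visibility over quiet output.
    return 0.2 * e["commits"] + 1.0 * e["docs"] + 0.5 * e["mentions"]

layoff_order = sorted(employees, key=lambda name: visibility_score(employees[name]))
print(layoff_order)  # ['quiet expert', 'visible junior'] -- expert goes first
```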
Part 5.
So the model developers rarely get feedback on what their models are doing, which kind of undermines the whole AI paradigm... Gross. I didn’t know that.