You might not have heard of algorithms or know exactly what they are, but they play a major role in your life. On the more innocuous side, algorithms are behind the prevalent recommendation engines that deliver custom content to you constantly: Netflix’s “what to watch next”, Amazon’s “you might also like”, Google’s “related searches”, Facebook and LinkedIn’s “recommended people”, and so many others.
But on the more dangerous side, algorithms are playing an increasingly important role in some very human decisions. It’s up for debate whether we want these decisions to be automated. How much should we allow technology to do for us? That is the question of our time.
Amazon got into a lot of hot water when it introduced its hiring algorithm. The algorithm itself was a great idea in theory – utilize the vast amount of HR data the company had to pre-screen candidates for success. The problem wasn’t with the data or the algorithm’s implementation; the problem was with the algorithm’s outcomes.
Despite Amazon’s best efforts to remove gender indicators from the data sets used, the algorithm was able to figure out which candidates were men and which were women, and it overwhelmingly favored the former.
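To see how that can happen, here’s a minimal sketch – fabricated data, and nothing like Amazon’s actual model – of how a classifier can reconstruct a scrubbed attribute from correlated proxy features:

```python
# Toy illustration, NOT Amazon's system: even with the "gender" column
# removed, a model can recover it from correlated proxies, e.g. the word
# "women's" appearing somewhere on a resume. All data here is fabricated.
from sklearn.linear_model import LogisticRegression

# Features per candidate: [mentions_womens_org, attended_all_male_school]
X = [[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 0]]
y = [1, 1, 1, 0, 0, 0]  # 1 = woman, 0 = man: the label that was "removed"

proxy_model = LogisticRegression().fit(X, y)
print(proxy_model.predict([[1, 0]]))  # -> [1]: gender recovered from proxies
```

Any downstream score trained on historical hiring data can then penalize that recovered attribute, even though no one ever fed it a “gender” column.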
Now, before we get into too much hot water ourselves, it does need to be said that perhaps the algorithm was working perfectly and men are simply better employees than women at Amazon. Any number of factors might play into that, but that’s not really the point of this discussion. The point is whether we as a society should choose to use algorithms if they lead to outcomes that we don’t desire.
And that is, for better or worse, how the world works now. Start with an end and figure out the best means to get there. That might not be optimal for performance, that might not be optimal for quality, that might not be optimal for any number of reasons. But that generally seems to be how we want to structure our organizations, be they places of business, government agencies, or any other collection of people.
So when an algorithm goes awry, is it the algorithm’s fault? Not really, but it is a disqualifying factor when evaluating an algorithm’s efficacy, and a reason to be wary of algorithms’ increasing use, especially by companies and governments that are less transparent about their mechanisms and results. If we want more gender diversity in hiring and the algorithm tells us to hire essentially only men, the fact that the algorithm is subjectively wrong (disregarding whether it is objectively right or not) means that it shouldn’t be used.
But it’s not just subjective outcomes that show where algorithms fail; the algorithms themselves also often reward negligent behavior. Consider the case of the DC school district where teacher performance was tied to a hiring-and-firing algorithm. What do you think happened? The worst teachers, naturally, helped their students game the system to artificially inflate their performance metrics. And the best teachers, sadly, were often stuck the next year with kids who were so far behind, thanks to their previous teacher’s cheating, that even the best teachers performed horribly. It’s hard enough to teach kids at the appropriate level for their grade; try teaching kids who are functionally a year behind thanks to awful teaching.
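To make that mechanism concrete, here’s a toy sketch – all numbers invented, with a simple score-gain metric standing in for whatever DC actually used – of how a value-added-style rating rewards the cheater and punishes the honest teacher who inherits the inflated scores:

```python
# Hypothetical "value-added" metric: the average test-score gain a
# teacher's students show year over year. Every number below is invented.

def value_added(prior_scores, current_scores):
    """Average score gain attributed to this year's teacher."""
    gains = [cur - prev for prev, cur in zip(prior_scores, current_scores)]
    return sum(gains) / len(gains)

# Year 1: a cheating teacher inflates students scoring around 60 up to ~85.
true_level = [58, 61, 63, 59]
inflated   = [84, 86, 85, 83]
print(value_added(true_level, inflated))   # +24.25 -> bonus and a raise

# Year 2: an honest teacher inherits those students, who now test at their
# real level (~65 after a year of genuine progress).
honest = [64, 66, 67, 63]
print(value_added(inflated, honest))       # -19.50 -> flagged for firing
```

The honest teacher actually moved the students forward from their true level, but the metric only sees the drop from the faked baseline.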
And of course the true tragedy is that the best teachers were fired and the worst teachers were given raises and bonuses. If you put all your faith in data, you have to believe it. But the stark reality is that numbers can be so skewed by devious human behavior that they tell complete falsehoods about reality. The worst teachers look like gurus and the best teachers look like hacks, when the exact opposite is the truth. This is the danger of algorithms.
And the danger of algorithms becomes even more pressing when the issues are literally matters of life and death. Google has created AI that can predict with 95% accuracy – a percentage that will surely only go higher – the result of someone’s hospital visit. “That’s great!” you might think. But the reality is a bit muddier than that.
Consider, for example, a hospital that knows that an extremely expensive treatment has an extremely low probability of saving someone’s life. Should the hospital not perform the treatment from a strictly cost-benefit analysis perspective? (This might be a good point to highlight the argument that healthcare shouldn’t be monetized, but that’s a discussion for another time.) If we trust the algorithm enough, we might condemn someone to death who might otherwise have lived simply because the algorithm calculated the odds and didn’t like them.
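For concreteness, here is that strict cost-benefit arithmetic in miniature, with entirely hypothetical figures rather than anything from a real hospital or algorithm:

```python
# Hypothetical cost-benefit arithmetic; the figures are illustrative only.

def expected_value(p_success, value_if_saved, treatment_cost):
    """Expected monetary value of performing the treatment."""
    return p_success * value_if_saved - treatment_cost

# A 1%-odds treatment costing $500k, with a life "valued" at $1M:
print(expected_value(0.01, 1_000_000, 500_000))  # -490000.0 -> "don't treat"
```

Everything turns on what number gets plugged in for a life – which is precisely the judgment an algorithm cannot make for us.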
The real danger with using algorithms to predict healthcare outcomes is that these predictions seem to have a self-fulfilling-prophecy aspect to them. If you tell someone they’re going to die in a few weeks, that alone seems to cause them to die sooner. And, vice versa, if you predict extremely positive outcomes, the brighter outlook seems to make the patient live longer. And this isn’t just a case of sampling error – if you give patients with similarly dire circumstances completely different prognoses (one, the truth, that the outlook is terrible, and the other, a lie, that the outlook is great), the latter patients do better.
So if we have algorithms telling us baldly about our health, that might not be the best thing for us. And if we have algorithms telling us to skip an expensive surgery because it only has a 1% chance of succeeding, that’s a risk most humans would be willing to take. That 1% chance is a parent or a sibling or a best friend. Algorithms are cold, calculating, heartless processes by design. But humans aren’t, and it’s human lives that matter, not algorithmic efficiency.
Do you want humans making decisions for you, or algorithms? That’s the real question. And surely there is a middle ground where humans interpret the results generated by algorithms, but the momentum seems to be shifting towards the purely algorithmic side. And yes, there is something to be said about the fickleness and arbitrariness of human decision making, but that’s what makes us human!
As we turn more and more of our lives over to technology, do we not run the risk of eroding the very qualities that make us human in the first place? Whether it’s determining who to hire or how to teach children or what procedures to perform on a patient, the goal should be to humanize that interaction as much as possible. What algorithms do, by definition, is dehumanize the process. The last thing we need right now is to dehumanize the world any further.