Tech’s Ethics: AI

Published 09/30/2019

Both the benefits and the dangers of AI are numerous. The problem is that you can't simply take the benefits without also getting the dangers. If you could, no one would worry about the rise of artificial intelligence. But the sad reality is that AI is already having incredibly negative impacts on our world and is poised to produce even more disastrous outcomes in the near future.

It’s difficult to see the danger of AI when the wonders of AI are so prevalent. It seems like every week a new gadget comes out that automates a part of your daily life that you probably weren’t too fond of doing. But the same AI that powers your Roomba or your Tesla can also be put to some pretty awful uses.

Practical Considerations

Speaking of Tesla, one of the major issues with AI is liability. By now, several Tesla drivers have suffered severe injuries or even death when the car’s AI-powered automated driving functionality failed. One driver’s Tesla drove him directly into a median at high speed because an old, faded lane marker was interpreted by the car as an active lane. That driver died. Or consider the woman in Arizona who was struck and killed by a self-driving Uber car. Who’s at fault there? The courts ruled that Uber and its technology weren’t liable at all, which sets a fairly terrifying precedent.

Or consider the current military practices being utilized around the globe by the United States, the world’s military superpower and the greatest force of potential destruction this planet has ever seen. (That’s simply an objective evaluation based on the amount of firepower in our arsenal, not a commentary on the military’s propensity to use said force.) As early as 2008, the autonomous military vehicles (drones, robots, etc.) deployed in Iraq outnumbered the ground troops of all of our allies combined. What happens if military technology advances to the point where power is granted wholly to automated systems? What if human involvement becomes a limiting factor in the push for a swift outcome? The decision to strike a target, along with all of its aftereffects and the potential for collateral damage, seems to be the most human of all decisions. Relegating such decisions to AI seems rash.

And largely that is due to the potential for AI to malfunction. Consider the stock market flash crash of 2010. Bugs in algorithms designed to time the stock market led to an absolute fire sale and the erosion of nearly a trillion dollars of market capital. The market eventually recovered, but not before serious damage was done. The most reasonable explanation is that the programmers of various high-frequency trading platforms simply screwed up. Unfortunately, their little coding errors caused a nearly 10% drop in the market in a little over half an hour. That’s a record that will never be broken. (Unless an even worse trading algorithm makes its way onto the scene.)

Offloading responsibilities onto AI makes our lives faster, easier, and more efficient. But maybe our lives shouldn’t be so fast or easy or efficient. Maybe we need to slow down, make things more difficult, and struggle with stuff to ensure that small mistakes don’t turn into huge problems. But that’s certainly not the goal of AI.

Philosophical Considerations

There are even deeper issues at hand when one considers the non-practical, more philosophical impacts of AI. Because while AI can easily replace the more mundane aspects of our day-to-day lives, it can also replace those deeply important parts of our lives that keep us human.

Joseph Weizenbaum is famous for developing ELIZA in 1966 at MIT. It was an incredibly simple program that did some basic natural language processing and more or less replicated the behavior of a bad psychologist (stuff like digging for information – “Men are all alike.” “In what way?” – or repeating back a statement as a question – “My boyfriend made me come here.” “Your boyfriend made you come here?”). The results shocked Weizenbaum. People felt a strong emotional attachment to ELIZA and considered it a great listener. He would spend the latter half of his life decrying AI and its increasing prevalence in society.
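It’s worth seeing just how simple this trick is. The sketch below is not Weizenbaum’s actual code (ELIZA was written in MAD-SLIP, and its “DOCTOR” script had far more rules); it’s a minimal, invented illustration of the same pattern-match-and-echo technique, where a canned template turns the speaker’s statement back into a question:

```python
import re

# A few illustrative rules in the spirit of ELIZA's "DOCTOR" script.
# These patterns are invented for this example, not Weizenbaum's originals.
RULES = [
    (r"my (.+) made me come here", r"Your \1 made you come here?"),
    (r"i am (.+)", r"How long have you been \1?"),
    (r"(.+) all alike", r"In what way?"),
]

def respond(statement: str) -> str:
    """Echo the statement back as a question using the first matching rule."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return match.expand(template)
    return "Please go on."  # stock deflection when no rule matches

print(respond("My boyfriend made me come here."))
print(respond("Men are all alike."))
```

There is no understanding anywhere in that loop, just string substitution, which is exactly why the emotional attachments people formed to ELIZA so disturbed Weizenbaum.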

One hugely important question with AI is what should be turned over to it. Another key question is who gets to decide. The problem today is that no one is bothering to ask either question: every aspect of everyday life is gladly turned over to AI, and the decision makers are the vested interests involved in AI development itself. We might, for example, question the ethics of an automated assistant making a dinner reservation with a live human being, but Google doesn’t care and has developed technology to do just that, and no one seems to want to question whether that’s right in any meaningful sense. But there’s a very real case to be made that any human interaction that requires respect or care or even love should be limited to humans and humans alone. Because the more we utilize AI, the more we humanize it, and the more we dehumanize ourselves in the process.

A humanized AI raises all sorts of ethical conundrums. What rights does AI have, for example? If we’re considering mandatory kill switches on all AI – and we, of course, should implement a kill switch for what is increasingly dangerous technology – would flipping a kill switch be killing a sentient being? Would we rewrite what it means to be conscious and wrap AI under our human umbrella, protecting it from “murder” and all other crimes? That seems to be what technologists desire. They see AI as sympathetic. We should see AI as what it is, a tool with no rights whatsoever. AI deserves as much respect as a microwave.


Joseph Weizenbaum once said, “The computer has almost since its beginning been basically a solution looking for a problem.” The struggle of our time lies in the fact that the technology has advanced so far, and its solutions have become so robust, that there is almost no problem technology can’t tackle. With AI in particular, nearly every arena of human experience will someday be replaceable. Countless jobs have already been replaced, and countless more are on the way out.

We must remember what it means to be human and elevate the human experience above all else. Otherwise we run the risk of turning our lives over to machines and becoming something of a machine ourselves in the process. Our relationships, our history, and our culture are the very real human aspects of our lives. AI doesn’t get this, and so it seeks to trivialize these crucial aspects of our lives into oblivion. As Weizenbaum also wondered, “How long will it take before what counts as fact is determined by the system, before all other knowledge, all memory, is simply declared illegitimate?” Our human heritage is infinitely more meaningful than our current technological obsession.

© 2020 Gunner Technology