Tech’s Ethics: Surveillance

Published 09/19/2019

Our lives are increasingly being watched. Everything we do is being monitored. There are cameras nearly everywhere tracking our every move. There are so many GPS-enabled devices on our person that our location is known down to the meter for large parts of our day. Our internet activity, especially, is being monitored (and monetized), and our browsing patterns are scrutinized constantly by automated processes.

Is all of this surveillance good for us? What effect is it having on our psyches? Is privacy simply a legacy concept no longer applicable to our modern age?

Panopticon

The origins of our surveillance state go back hundreds of years, to the writings of Jeremy Bentham in the 1700s. Bentham, the founder of Utilitarianism (a favorite philosophy of Silicon Valley), dreamed up what he considered the perfect design for a prison. In the Panopticon, cells would be arranged in a circle, their doors facing inwards toward a central guard tower. The guards could see into every cell, but, most importantly, the prisoners could never tell whether a guard was watching them at any given moment. (Imagine a one-way mirror all around the guard tower.) Because they were always at risk of being observed, Bentham theorized, the prisoners would be far more obedient. (You could even get away with having no guards in the tower at all – the mere illusion of oversight would suffice.)

George Orwell, in his novel “1984”, seemed to take this idea to its logical conclusion. Why not watch over the entire population if it so vastly improves behavior? As he writes, “there was of course no way of knowing whether you were being watched at any given moment… you had to live… in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinised.”

As more technology companies (and, more worryingly, more governments) move toward this model of pervasive surveillance, we find ourselves living in a world that increasingly resembles the dystopian nightmares of authors like Orwell, where holding on to even a shred of privacy becomes a losing battle.

Social Scoring

Obviously, as with so much surveillance activity today, China leads the way with social scoring. Its social credit system has rightly been described as Orwellian. (Media reports downplaying the draconian nature of the Chinese system should be taken with a massive grain of salt, since so many of them rely on arguments that begin “China says…”, as if the Chinese government could be trusted to accurately describe its system of constant monitoring of its citizens.) Under the social credit system, everything could potentially be monitored, from your internet and cell phone usage to your manners as a pedestrian. (No jaywalking!)

Even the European Union has toyed with the idea of social scoring for its citizenry, but there, thank goodness, they still seem to respect autonomy and privacy. Numerous pieces of legislation have been introduced in various member countries to ban such oversight by EU governments. (Industry, not surprisingly, has pushed back saying that any regulations would stifle innovation. As if stifling the innovation of constant monitoring shouldn’t be an explicit goal.)

It’s not just countries that have social scoring, however. Offices are increasingly using monitoring software to oversee the behavior of employees. Internet traffic, breaks, movement around the office – everything is watched to ensure the company is maximizing its return on investment in your human capital. (What seems to be lost in these scenarios, as with so much in the social scoring space, is how valuable downtime is in producing better work during one’s uptime. If a 15-minute break means a worker will be 1.5x as effective when they return, you certainly end up net positive. But these simplistic oversight operations whittle people down to nothing more than cogs in a machine.)
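The break-even arithmetic above is easy to check. Here is a tiny sketch using purely illustrative numbers (the 1.5x boost and 15-minute break are the hypothetical figures from the paragraph, not measurements):

```python
# Toy model: compare one hour of uninterrupted work against a
# 15-minute break followed by refreshed work at 1.5x effectiveness.
# All numbers are illustrative assumptions, not real productivity data.

def effective_output(window_minutes, break_minutes, boost):
    """Effective work produced in a window that begins with a break."""
    working_minutes = window_minutes - break_minutes
    return working_minutes * boost

no_break = effective_output(60, 0, 1.0)     # 60 minutes at baseline pace
with_break = effective_output(60, 15, 1.5)  # 45 minutes at 1.5x pace

print(no_break, with_break)  # 60.0 67.5: the rested worker comes out ahead
```

Even after "losing" a quarter of the hour, the rested worker produces 67.5 effective minutes of output versus 60 – exactly the net positive the paragraph describes, and exactly what a dashboard counting only minutes-at-desk would miss.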

Social scoring is an excellent resource for producing a bland, obedient citizenry. A nation of automata, striving to conform perfectly to the goals and mandates of an all-seeing, all-knowing government bureaucracy. That is plainly China’s goal, since undercurrents of dissidence and dissent could prove fatal to its Communist cause. As populist uprisings surge throughout the world, social scoring systems might become more prevalent. If people are rewarded for following the rules, it becomes that much harder to stand up to your government and its representatives, the rule makers themselves.

Constant Monitoring

Beyond social scoring, there is, of course, the simple act of constant monitoring. Social scoring is, in a sense, the softer of the two: it nudges daily behavior rather than flagging people as potential threats à la “Minority Report”. But constant monitoring is not without its problems.

For one thing, facial recognition software designed to catch criminals often misidentifies minorities, especially black people. (That’s what led San Francisco, among other cities, to ban outright the use of monitoring software that relies on facial recognition.) The idea is to use the huge and ever-expanding network of cameras to constantly surveil neighborhoods for potential threats or criminals at large. Devices like Ring, the video doorbell that millions of people apparently didn’t know they couldn’t live without, are connected with police networks and other law enforcement agencies, providing a wealth of visual data and allowing those agencies to monitor huge areas at once. We’ve somehow devised a way to monitor ourselves, and we pay companies for the privilege of doing so via their monitoring devices.

And it’s not just our homes and neighborhoods that are constantly under watch now, but even spaces like our schools. In the wake of the numerous school shootings that have occurred in recent years, monitoring devices and services are now being installed to detect raised voices and other signifiers of potential impending violence. (Of course, one of the early false alarms occurred in the theater department as kids yelled at each other as part of a performance. Oh well, technology isn’t perfect.) But what does this mean for legitimate arguments? Are kids never supposed to yell at each other? What sort of generation of kids will we raise if we never allow them to get angry?

Google

Finally, we come to Google, surveillance company extraordinaire. Google’s entire business model, as with so many Silicon Valley companies today, is based on the constant surveillance of its users. User data is what it’s all about. Google knows more about you today than ever, and it has plans for even more massive surveillance. We’re not just talking simple stuff like storing your browser history via Chrome or scanning your entire Google Drive for certain keywords. We’re talking surveillance and user profiling at a much deeper level than that. (And all to target ads better. Who would’ve thought it could be so lucrative?)

Sidewalk Labs is described as an “urban innovation organization”, and you can’t disagree with that description. Constantly monitoring the entire population of an area and gathering massive amounts of data on them is certainly urban and innovative – nobody had thought to encroach on privacy quite like that before. And while Sidewalk Labs certainly has noble goals in terms of the sustainability of its neighborhood (an abandoned shipyard in Toronto), one must ask, “At what cost?” Because the environmental benefits only come as a consequence of constantly monitoring the neighborhood’s population. Knowing everything about everyone who lives there. And if the carrot proves enticing enough, who’s to stop this surveillance operation from expanding to other cities? Is the benefit of a reduced environmental footprint really sufficient to justify the harms of massive data collection and user profiling?

But Sidewalk Labs isn’t Google’s first surveillance rodeo. Not by a long shot. Many will certainly remember Google Glass, a pair of spectacles that could digitally record every moment of its wearer’s life. It was a terrible idea and died a swift death, but it was probably just ahead of its time. In a few years, as privacy and control of data erode even further, lots of people will probably feel comfortable wearing devices that monitor and record everything. Google is already working on its next iteration of monitoring devices with Google Clips, which is like an always-on, always-recording pocket protector. And if someone doesn’t want to be recorded, too bad. In this strange, dystopian surveillance future, the ones willing to abandon their data and their privacy decide for the rest of us. Strange that the ones who benefit most are the massive tech conglomerates creating the technology that powers that abandonment of privacy and control of data. Strange, indeed.

Conclusion

Surveillance is the new reality. In the not-too-distant past, you could disappear for a while. You could be unreachable. You could be untraceable. You could have true privacy. You still can today, but the combination of surveillance technology and the pariah status that comes with actually valuing privacy makes it increasingly difficult. A constant presence in the public eye is becoming the new normal. And that’s a terrifying thought.

So much of human greatness is born out of periods of solitude and the freedom to push boundaries, both artistic and otherwise. If everyone is always on, will these periods where we can turn off disappear? And if those off times go away, what will be lost? These are questions technology companies have a vested interest in your not answering, or even asking. So much of the value Silicon Valley derives today comes from gulping up and digesting unimaginable amounts of user data. If that supply of data dries up, the companies will starve. So they constantly need more and more user data. And as the surveillance state grows, we seem more than happy to give it to them. Potentially at our peril.