Home

Tech’s Ethics: Fake News

Published 06/06/2019

We begin, not unlike so much fake news these days, with a tweet. A now-deleted tweet that topped out at almost 20,000 retweets and a nearly equal number of likes. One of those viral tweets that strikes a nerve. A Dallas man snapped a picture of some buses and claimed they were being used to bus in fake protestors to raise a ruckus at a Donald Trump rally. It was huge news, but it had one small problem. It was completely false.

The buses were there, of course. We can still believe in photographs as visual evidence. But they weren’t there to shuttle legions of anti-Trumpers. They were there for a tech conference for a piece of software. (The most shocking aspect of this story is that 20,000 people show up to a conference for niche software, but more power to them.) The author eventually apologized – although he stood by the belief that Democrats were attempting to sabotage Donald Trump – but the damage from the tweet had been done. It had been seen by millions of people and spread to numerous other sites. On Facebook alone it had nearly 150,000 shares. This is what fake news looks like, and the impact it’s having on the world is too negative to ignore it any longer.

What is fake news?

But what is fake news? We need to define the problem before we try to understand it (and how to fight it). According to our President, there appear to be two definitions. First, fake news is anything that is unflattering to Donald Trump. Obviously that’s just textbook narcissism, and it goes without saying that Donald Trump isn’t the only person in the world who denies a piece of information that paints him in a negative light. He just happens to hold the most powerful office in the world. His second definition is any news that contains any inaccuracies at all. Just as one bad apple spoils the bunch, a single slip-up in an otherwise pristine article makes the entire piece false. Is there an incorrect date? A number that’s slightly too low or too high? A person who was supposedly present, but wasn’t there the whole time? Minor infractions, to be sure, but in the president’s mind they’re damning.

That’s not what we’re talking about when we talk about fake news. The fake news we mean is news that is intentionally misleading with malicious intent. The intents themselves are myriad: character assassination, political swaying, increasing paranoia, increasing polarization, or even just, like the Joker, a desire to watch the world burn. Whatever the reason, fake news is incredibly destructive despite often being easily recognizable.

So what?

It’s a fair question. We might admit fake news exists, but who cares? (Plus maybe all the news about fake news is itself fake news in a sort of M.C. Escher self-reflexive situation.) Lots of people have been saying lots of nonsense for a very long time. What makes it different now? The major difference in terms of the communication technology itself is reach. Fake news, if allowed, travels far and fast. When fake news was contained to the local kook, it usually didn’t make it past the town tavern. But now any insanity posted on the internet can travel globally, and travel instantaneously, making the problem that much worse.

And that has a real effect on our world. Political polarization has been increasing for decades, and the internet and fake news are at least partially responsible for this rise. Not only that, but fake news can have real impacts on political outcomes, which lead to real policy decisions, which have a very real, very direct effect on day-to-day life. It’s a problem that can’t safely be ignored any longer if we want to have the same kind of political discourse as that of yesteryear, which led to so many improvements around the world.

The Businesses

Why is a solution challenging from a business perspective? First off, you have older media companies like CNN and much of print media that desperately need to grow their audiences. And the sad reality is that stories about fake news get folks to watch or read like nothing else. As much as people get riled up in sharing conspiratorial pieces of news, people get just as riled up (if not more so) lambasting these examples of seemingly mass insanity. So CNN, for example, will pick up on a piece of fake news about Nancy Pelosi being drunk and stir the pot as much as they can, claiming that this is the kind of thing that conservatives share and hosting numerous debates about what that means for the party and the political system at large.

Not only that, but the rise of fake news has coincided with another major change in old media: the shift from hard reporting to opinions and analysis. This has proved to be a major shift in the way news organizations are run, and it hasn’t been one for the better. Well, it’s certainly been better for the bottom line, and that’s really the point of it. It used to be that when something was happening on the other side of the world, news organizations would send a team there to report from the scene. They wanted boots on the ground. But that’s expensive. It’s much cheaper to have someone, often a single person, sitting at their computer typing out their opinion on the news in question. (And you don’t really need to fact-check opinions and analyses since they’re substantially more subjective than the hard-facts reporting of the past.) Plus, opinions and analyses get people incredibly riled up, just like stories about fake news. So not only are you producing content cheaply, but you’re producing content that people are going to have sharp reactions to. It’s the best of both worlds! (Ignoring the fact that it’s debasing the effectiveness of old media.)

The Audiences

Why is a solution challenging from an audience perspective? Well, first, you can’t fix stupid. We don’t mean to be derogatory, but the painful reality is that the global populace is becoming less intelligent (with a few exceptions like the Nordic countries and much of Asia). We certainly know more factual information than ever before, but the kind of wisdom that people used to possess – the ability to analyze assertions and test the logical reasoning behind an argument – is fading fast. And, unfortunately, we’re not doing enough to combat that. The place to start would be schools, but for a long time now, the curriculum has been moving toward tests based on simple knowledge and away from tests based on critical reasoning.

Beyond a simple lack of general knowledge and the ability to reason, there are very real trust issues in society today. The pillars of society have lost the trust of the citizens. Nobody trusts old media. Nobody trusts the government. It’s difficult to call yourself real news when nobody believes you.

And then there is a further compounding factor called the backfire effect. Basically, what has been observed is that when people are presented with information that contradicts a strongly held belief, they in fact believe in that belief more strongly, not less. People want to believe what they believe and they will happily ignore evidence to the contrary. Or, in fact, take that contradictory evidence and paint it as some sort of conspiracy to make their viewpoint seem outlandish (which it probably is). There doesn’t seem to be a way to win.

The Content

Why is a solution challenging from a content perspective? Well, it has to do with the content itself and the platform it appears on. First, as far as the content goes, how do you prove that something is intended as fake news and not as comedy or satire? The Onion, for example, is entirely fake news (based on real news, of course). But nobody is up in arms about The Onion, even as it has fooled people into thinking it was real on numerous occasions. So how can you tell if content is meant as fake news? (The only real solution there seems to be to say, to quote Justice Potter Stewart of the Supreme Court, “I know it when I see it.”)

Plus, the problem is deeper than simply identifying fake news. The problem is also who is going to be delegated the task of identifying fake news. Do we really want social media companies, whom the public at large completely distrusts, to decide when something is or isn’t fake news? That seems to be a solution that’s dead on arrival. But if not Facebook or Google or Twitter, then who decides? You could say to leave the content to the people and let them decide, but that’s what we’ve been doing, and it’s not working.

There is another problem with the content itself: both how much of it is out there and how good it is at directing people in a certain direction. Content creators have gotten incredibly good at convincing other people to believe in what they’re saying. And with fake news, this has led to internet death spirals whereby someone starts by drifting from reasonable news, to iffier news, to extremely suspect news, to outright fake, malicious, and conspiratorial news. Someone goes, for example (and we’re only using the conservative track as an example; it happens to people of every political persuasion), from the Wall Street Journal to the Washington Examiner to Fox News and The Daily Wire to, finally, sites like Infowars and Newsmax. The content takes control of people and leads them down rabbit holes they don’t even know they’re in.

Solutions I: Things That Won’t Work

So as we start to imagine some solutions to the problem, let’s first consider ones that are doomed to fail. One thing we could try to do is to revert technological decisions and changes that have happened over the last couple decades. The reason this solution would fail is because with anything tech-related, you can never put the toothpaste back in the tube. Once a technology is out there, it’s out there for good. You can modify it, certainly, but can never be rid of it. (Plus the companies that build this technology have a vested interest in sticking around, and have a ton of cash and a whole host of lobbyists ready to make their case.)

So maybe we should educate people before the fake news takes its hold. This is somewhat akin to what YouTube is doing with its displaying of information related to certain channels associated with fake news, letting users know that the content might not be 100% trustworthy. But we’ve already seen in the backlash to that policy decision why this solution is doomed to fail. Any such attempt to fight the problem, legislative or otherwise, will immediately be targeted as partisanship, and yet another attempt to silence one particular side of the debate (usually conservative). Leveraging these perceived attacks on the free speech rights of the conservative base by California liberals, Republicans might end up winning key political victories. And with those additional seats, they could eventually overturn the policies themselves, possibly even swinging the pendulum farther to their side.

Solutions II: Things That Might/Could Work

So then what are some possible solutions that might work? Twitter has already implemented a partial solution with its shadowbanning system. What Twitter does is track a user’s tweets, and if too many get reported or automatically identified as fake news content, Twitter will hide all of that user’s tweets from the community at large. The user still imagines they have an audience, but nobody actually sees their content. We would go a step further and create fake engagement with the content itself from fake users. If someone posts an abhorrent conspiracy theory on their Twitter account, retweet it dozens of times and like it in similar numbers with fake accounts. That way the spammer will feel they’re actually reaching an audience and getting real engagement when, in fact, they’re shouting into an abyss with no one around.
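To make the idea concrete, here is a minimal sketch of that shadowban-plus-fake-engagement scheme. Everything in it is an assumption for illustration – the report threshold, the class and method names, and the fabricated like/retweet counts are invented; Twitter’s actual system is not public.

```python
import random

# Assumed threshold: reports before a user is shadowbanned.
REPORT_THRESHOLD = 5

class Moderator:
    """Hypothetical moderation service implementing the scheme above."""

    def __init__(self):
        self.reports = {}        # user -> number of fake-news reports
        self.shadowbanned = set()

    def report(self, user):
        # Count a report; past the threshold, the user is quietly hidden.
        self.reports[user] = self.reports.get(user, 0) + 1
        if self.reports[user] >= REPORT_THRESHOLD:
            self.shadowbanned.add(user)

    def visible_to_public(self, user):
        # Shadowbanned users' tweets are hidden from everyone else...
        return user not in self.shadowbanned

    def engagement_for(self, user, rng=None):
        # ...but they still see fabricated likes and retweets, so the
        # account feels like it still has a real audience.
        rng = rng or random.Random(0)
        if user in self.shadowbanned:
            return {"likes": rng.randint(20, 60),
                    "retweets": rng.randint(10, 40)}
        # Real accounts would get real counts; zero here for the sketch.
        return {"likes": 0, "retweets": 0}

mod = Moderator()
for _ in range(5):
    mod.report("conspiracy_account")

print(mod.visible_to_public("conspiracy_account"))   # hidden from the public
print(mod.engagement_for("conspiracy_account"))      # but sees fake engagement
```

The design point is that the two halves reinforce each other: hiding alone tips the user off when engagement drops to zero, while the fabricated counts keep them shouting into the abyss without noticing.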

Furthermore, a user’s spam score (which is used as the basis of shadowbanning) could be publicly visible to everyone. If a user had a score of 40 out of 100, for example, you might be hesitant to believe anything they say. Or you might be encouraged to go to direct sources and decide for yourself whether the author is right. And that’s great! That’s what people used to do when presented with seemingly wild news – make sure that it’s actually true.

Finally, one solution that sadly could work is simply removing the source from whatever article is posted. Whether something is from MSNBC or Fox News, The Palmer Report or Red State, the user should have no idea. Then their biases would at least be somewhat mitigated. You could even go a step further and attach a random media company to each piece of news to completely screw with people’s heads. Imagine seeing PizzaGate news coming from Mother Jones or news about Trump’s tax records coming from The Drudge Report. It would definitely make people pause and question the actual veracity of a piece of news instead of just blindly following their trusted sources, many of whom might not be trustworthy in the slightest.
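The source-stripping and source-scrambling ideas can also be sketched in a few lines. This is purely illustrative: the outlet list, the article shape, and the function names are assumptions, not any real platform’s API.

```python
import random

# Assumed outlet list for illustration.
OUTLETS = ["MSNBC", "Fox News", "The Palmer Report", "Red State"]

def strip_source(article):
    """Present the story with no attribution at all."""
    return {"headline": article["headline"], "source": None}

def scramble_source(article, rng=None):
    """Attach a random outlet that isn't the real one, to jolt readers
    out of trusting (or dismissing) a story by its masthead alone."""
    rng = rng or random.Random()
    others = [o for o in OUTLETS if o != article["source"]]
    return {"headline": article["headline"], "source": rng.choice(others)}

story = {"headline": "Candidate X accused of scandal", "source": "Fox News"}
print(strip_source(story))     # headline with source set to None
print(scramble_source(story))  # headline attributed to some other outlet
```

Either variant forces the reader to judge the claim itself rather than the masthead, which is exactly the pause the paragraph above is asking for.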

Conclusion

Fake news seems to be here to stay. Many social media companies, notably Facebook, have committed to combating fake news however they can. (And it might actually be having a positive effect – enough time simply hasn’t passed to know for sure one way or the other.) But as we’ve pointed out, there are so many factors that make fake news so powerful. Beyond the backfire effect, where people believe in something more, not less, when presented with conflicting evidence, there is just so much content out there and so many gullible people willing to consume it that the problem might seem unbeatable. And it very well may be unstoppable. But certainly ideas like spam scores and anonymized sources could be tried to see what positive effect, if any, they have. What we certainly can’t afford to do, both for the sake of our political institutions and for the health of our civic communities, is to ignore the problem of fake news any longer. Something has to be done, and it’s high time we did something.