How fake news is changing the internet

The tools we all use for knowledge, communication and business are being re-engineered to stop disinformation. Is our loss of control worth it?

Fake news has been linked to extremist politics, social division, mob violence and crime. Who’s to blame?

Old people, apparently.

A new study found that Facebook users over the age of 65 are far more likely to share fake news than younger users. The reasons for this include a lack of digital media literacy among people who didn’t grow up with the internet, as well as age-related cognitive decline.

China’s WeChat found similar results on its own network, and also concluded that country folk are more likely to share fake news than city slickers.

But it hardly matters. The truth is that fake news is becoming big business. And like cybercriminals, fake news publishers are evolving their methods faster than the public’s ability to avoid being duped.

And so the task of saving the world from disinformation inevitably falls to Silicon Valley and the tech industry.

Where does fake news come from?

Fake news isn’t the same as disagreeable opinions, bad reporting, erroneous journalism or divisive speech.

Fake news requires that the people who write or broadcast the news know they’re delivering false information.

The Russian government has become the poster child for political misinformation and disinformation because of the mountains of (real) news reports about its role in fake news leading up to the 2016 U.S. presidential election. More recent reports show Russian efforts to spread fake news not only in the U.S. but in many countries around the world. Some of that fake news even seeks to debunk the idea that Russia spreads fake news.

The Russian government, widely considered the largest and most sophisticated state sponsor of fake news and disinformation, this month passed a bill banning what it calls fake news. The new law, which punishes violators with fines or prison time, lumps together fake news and any “disrespect” of government leaders or state symbols.

Russian state-sponsored disinformation campaigns are driven by politics. But fake news is mostly spread for profit.

Dutch and Belgian researchers have shown that North Macedonian fake-news creators are often middle-aged and work as families. It’s a growing type of family business there.

And everywhere. Fake news attracts eyeballs, which in turn sells advertising. It’s a growing industry around the world.

In search of the viral news vaccination

A disproportionate share of the world’s fake news spreads on networks owned by Facebook, including the eponymous social network, Instagram, WhatsApp and Facebook Messenger, simply because that’s where most of the internet’s users are.

WhatsApp has a big fake news problem. For example, fake news about child abductions on WhatsApp in India has been blamed for driving mob lynchings.

One of Facebook’s challenges in curbing fake news on WhatsApp is that it’s an end-to-end encryption service, so the company has no access to the content shared.

That’s why WhatsApp this week announced a new limitation on forwarding. Users worldwide can now forward any given message just five times. The aim is to slow the viral spread of misinformation on the network.

WhatsApp, which has 1.5 billion users, had previously added a feature that auto-labels forwarded messages, so recipients don’t mistake them for the sender’s own words.
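To make the mechanics concrete, here is a minimal Python sketch of how a client-side forward cap and auto-label could work. It is purely illustrative, not WhatsApp’s actual code: the names (Message, MAX_FORWARDS, forward) are invented, and because the service is end-to-end encrypted, enforcement like this would have to happen in each user’s client, based on metadata carried with the message.

# Illustrative sketch only -- not WhatsApp's implementation. It models the two
# anti-virality mechanics described above: a per-message forward cap and an
# automatic "forwarded" label. All names are invented for this example.
from dataclasses import dataclass

MAX_FORWARDS = 5  # the five-forward cap described above (assumed constant name)

@dataclass
class Message:
    text: str
    forward_count: int = 0           # forwards so far, carried as message metadata
    labeled_forwarded: bool = False  # the auto-label shown to recipients

def forward(message: Message) -> Message:
    """Return a forwarded copy of the message, or refuse if the cap is spent."""
    if message.forward_count >= MAX_FORWARDS:
        raise PermissionError("Forward limit reached for this message")
    return Message(
        text=message.text,
        forward_count=message.forward_count + 1,
        labeled_forwarded=True,  # recipients can see the sender didn't write it
    )

# Usage: the sixth forward of the same message is rejected.
msg = Message("breaking news!")
for _ in range(5):
    msg = forward(msg)  # five forwards succeed
try:
    forward(msg)        # the sixth attempt fails
except PermissionError as err:
    print(err)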

Facebook also recently removed accounts, pages, groups and Instagram profiles connected to the Russian state-owned Sputnik news and disinformation network. After the accounts built large audiences by posting legitimate news, they started adding Russian disinformation from Sputnik.

Facebook now says it reserves the right to remove pages and ban groups merely “affiliated” with Facebook community standards violators, even if they have not broken any rules.

In an apparently unrelated move, The New York Times reported Friday that Facebook plans to integrate Facebook Messenger, Instagram and WhatsApp so that encrypted messages can be sent between users on the different platforms.

This change should increase user privacy, but it could also give fake news publishers more options for distributing disinformation covertly. Because all those messages would be encrypted end to end, potentially connecting billions of users, we would have to rely solely on Facebook to find and take action on viral fake news campaigns.

Other social and messaging platforms are scrambling to stop fake news.

Twitter is testing an icon designed to label tweets that start a thread. Called an “Original Tweeter” icon, the label is intended to help users spot fake accounts impersonating the thread’s author during a conversation.

China’s WeChat, which is owned by Tencent and has more than a billion users, recently partnered with 774 third-party organizations to provide users with more than 4,000 articles that debunk fake news reports. WeChat also posts a top-ten list of the most popular false rumors. It flags fake news articles. It also bans content and blocks links on the service.

Critics and competitors say that, lumped into its campaign against fake news, WeChat also helps the government censor political and other banned speech, and suppresses links to competing social services.

WeChat is even blamed for spreading fake news in Canada among Chinese-Canadians.

Microsoft added new features to its Edge browser that integrate a third-party anti-disinformation tool called NewsGuard; the integration is part of Microsoft’s Defending Democracy Program. NewsGuard uses a five-point color system to indicate the quality of a news source. A green check means that the news source upholds “basic standards of accuracy and accountability.” A red exclamation point means it’s a purveyor of fake or unreliable news. Clicking on the badge reveals basic information about the news source.

NewsGuard, which rates news sites based on accuracy, transparency, sourcing policy and other factors, is run by entrepreneurial journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, who are co-CEOs.

NewsGuard is not turned on by default, and has to be enabled by each user.

NewsGuard browser extensions also exist for Chrome, Firefox and Safari; all of them display ratings next to links on Facebook and Twitter, in search results on Google and Bing, and on other sites.
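As a rough illustration of what such an extension does, here is a hypothetical Python sketch that maps a page’s domain to a badge. NewsGuard’s real ratings database and criteria are far richer and not public in this form; the RATINGS table, the domains and the badge_for function are all invented for this example. Only the badge meanings come from the description above.

# Hypothetical sketch of a NewsGuard-style badge lookup. The ratings table,
# domains and function names are invented for illustration.
from urllib.parse import urlparse

# Invented ratings table: domain -> "green" (meets basic standards of accuracy
# and accountability) or "red" (purveyor of fake or unreliable news).
RATINGS = {
    "example-news.com": "green",
    "fake-news-mill.example": "red",
}

def badge_for(url: str) -> str:
    """Return the badge an extension like this might display for a URL."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    rating = RATINGS.get(domain)
    if rating == "green":
        return "green check: upholds basic standards of accuracy and accountability"
    if rating == "red":
        return "red exclamation point: fake or unreliable news"
    return "no badge: source not yet rated"

print(badge_for("https://www.example-news.com/story"))   # green check
print(badge_for("https://fake-news-mill.example/shock")) # red exclamation point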

This is just the beginning

All these changes to how the internet works, could work or will work happened in the past week or two. They’ll likely help, but they won’t solve the growing fake-news problem.

That’s why I’m predicting increasingly radical changes in the way everything works online.

For starters, I believe NewsGuard and NewsGuard-like services will stop being options, and instead become defaults or even requirements. Our browsers will judge the quality of our news sources so we won’t have to.

And if WhatsApp’s five-forward limitation is any indication, the “solution” for messaging platforms is to build in limitations, slow them down and make them less powerful.

Here’s what these current and possible future changes all have in common: They take control away from users and businesses and give more control and discretion to the Facebooks, Twitters and WeChats of the world.

They’ll tell us what we should and shouldn’t read (or, at least, anoint the arbiter of these decisions). They’ll choose which businesses to ban, even if those businesses haven’t violated the published rules.

Is the reduction of fake news worth it?
