Most of us have stumbled upon news stories on social media or mobile apps that made us wonder whether what we read was real or fake. That is because fake news is serious business: not only does it require less effort than truth-seeking, it is also easier to propagate and profit from.
Fake news is engaging – a quality that advertisers love to pay for. It nags at people who love to, or feel obliged to, dispel myths, and gratifies those who like to, or want to, believe it.
What can we do to prevent fake news from undermining credible and real news? I propose the following measures recommended by tech engineers and media professionals.
Solving the Fake News Problem
Solving the fake news problem is a difficult task. So far, there is no comprehensive solution. We have only a series of small solutions, which can together lead to larger systemic changes that foster a more credible and transparent news culture.
The build-up to this stage must involve two types of solutions – one that fosters credibility and trust by proving the truthfulness of news stories and another that does the same by actively and legitimately disproving the definitely-fake ones.
Focusing on Real News
Social media platforms like Facebook and Twitter are built on democratic principles, giving all users the freedom of expression. The problem of fake news arises when this freedom is misused.
Nonetheless, curtailing that freedom out of paranoia about fake news isn’t the solution. Instead, we must improve existing media structures to keep them democratized – thereby preserving the freedom of expression – and credible, by verifying real news and promoting it.
Determining the trustworthiness of news items is a collaborative process involving three equal participants: the engineers who run social media platforms and applications, traditional gatekeepers like journalists and news editors, and the users of social media. All three have a role to play in both proving true news stories and disproving false ones. We need a credibility rating system to regulate the actions of these participants.
Rome was not built in a day, so why should trustworthiness and credibility be? The guiding principles of digital media platforms that are “for the people” (not advertisers), like Medium, ask this very question. Authors on Medium and other information-sharing platforms like Quora establish trust among readers by regularly engaging with the community and providing insightful, sincere opinions with a credible basis.
Threat of Misusing Credibility
There is a concern that anyone with substantial credibility may misuse it in a system that values their opinions more than those of others. This is a genuine concern. Opinion leaders in a credible system must remain accountable for their opinions despite their high credibility rating.
To ensure they are, the system must grant the privilege of being a “credible” source in exchange for the duty and responsibility of verifying the authenticity of news pieces. This privilege should never be a given; participants must constantly earn it.
Any suspicious activity from credible sources must not be treated lightly. It must result in a substantial loss of credibility to discourage malpractices and maintain credibility of the system.
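The earn-and-lose dynamic described above can be sketched in code. This is a minimal illustration, not a proposed implementation: the score values, the threshold, and the penalty multiplier are all assumptions made up for the example.

```python
class CredibilitySystem:
    """Toy sketch of a credibility score that must be continually earned.

    All thresholds and rewards below are illustrative assumptions,
    not values proposed in the article.
    """

    CREDIBLE_THRESHOLD = 50    # score needed for "credible source" privileges
    VERIFY_REWARD = 2          # small gain for each verified contribution
    SUSPICIOUS_PENALTY = 0.5   # suspicious activity halves the score

    def __init__(self):
        self.scores = {}       # user id -> current credibility score

    def record_verified_report(self, user):
        # Credibility is earned gradually through verified contributions.
        self.scores[user] = self.scores.get(user, 0) + self.VERIFY_REWARD

    def record_suspicious_activity(self, user):
        # A substantial, multiplicative loss discourages misuse.
        self.scores[user] = self.scores.get(user, 0) * self.SUSPICIOUS_PENALTY

    def is_credible(self, user):
        # The privilege is never permanent; it depends on the current score.
        return self.scores.get(user, 0) >= self.CREDIBLE_THRESHOLD
```

Because the penalty is multiplicative while the reward is additive, a single suspicious act costs far more than a single honest contribution earns – the asymmetry the text calls for.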
The second purpose of the news-filtering process must be to directly combat fake news by actively disproving the stories that are false before they have a significant impact.
With machine learning algorithms, it is now possible to identify fake news stories by checking the credibility ratings of those who report them. In a credible media system, we can expect credible sources to report important matters that they can verify. Any news that cannot be verified and is spread mainly by less credible sources can be flagged and stopped from spreading widely in real time.
Credible sources may then verify these stories before they appear in more places. If a story turns out to be true, according to a considerable number of credible sources, the participants who reported it first may gain credibility for being quick to report on important matters. If it turns out to be fake, that must be revealed to all participants, resulting in a loss of credibility for its propagators and a gain for its disprovers.
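The flag-verify-reward loop just described can be expressed as a single review function. This is a hedged sketch of one possible realization: the function name, the quorum of three credible reviewers, and the +5/-20/+2 score adjustments are all invented for illustration.

```python
def review_story(reporters, scores, verdicts, threshold=50, quorum=3):
    """Sketch of the verification loop described above (illustrative only).

    reporters: users who first spread the story
    scores: dict of user -> credibility score (updated in place)
    verdicts: list of (user, verdict) pairs, verdict True = real, False = fake
    """
    # Only reviewers above the credibility threshold count toward the decision.
    credible_votes = [v for u, v in verdicts if scores.get(u, 0) >= threshold]
    if len(credible_votes) < quorum:
        return "flagged"                       # hold until enough credible reviews
    if sum(credible_votes) > len(credible_votes) / 2:
        for u in reporters:                    # reward early, accurate reporters
            scores[u] = scores.get(u, 0) + 5
        return "verified"
    for u in reporters:                        # penalize propagators of fake news
        scores[u] = scores.get(u, 0) - 20
    for u, v in verdicts:
        if not v and scores.get(u, 0) >= threshold:
            scores[u] += 2                     # reward credible disprovers
    return "fake"
```

A story stays merely "flagged" until a quorum of credible reviewers weighs in, which is what keeps an unverified rumor from spreading widely in the meantime.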
Between the factually real and the entirely fictional lies a huge set of grey-area news stories that threaten to discredit credible media. Machine learning techniques like natural language processing offer ways of dealing with them. The hope is that, with time, such programs can extract enough features of real news to classify even the stories that fall into the grey areas of human perception, because machines can learn from data at a scale that humans cannot.
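To make the idea of learning textual features concrete, here is a toy Naive Bayes text classifier in plain Python. It is a teaching sketch, far simpler than the NLP systems the article alludes to; the example phrases and the "real"/"fake" labels are assumptions for illustration.

```python
import math
from collections import Counter

def train(examples):
    """Count word frequencies per label from (text, label) pairs,
    where label is "real" or "fake". A toy sketch, not production NLP."""
    word_counts = {"real": Counter(), "fake": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest Naive Bayes log-probability."""
    vocab = set(word_counts["real"]) | set(word_counts["fake"])
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label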
The Challenge Going Forward
It is difficult to estimate how feasible it is to build credible media structures and maintain them democratically. Still, advancements in machine learning and in trust-fostering technologies like blockchain provide hope for a future without fake news.
Nonetheless, fake news is easy to create and spread, and more profitable too, so some media channels have a vested interest in promoting it. With the proliferation of credible media, however, pressure will mount on social media platforms and applications to be more accountable.
In the long run, whether fake news or artificial intelligence will win remains debatable, as improving machine learning algorithms and techniques to combat fake news is an ongoing challenge. Any media platform that takes it on must keep the details of its countermeasures private, because creators of fake news are always on the lookout for ways to game existing news systems.