Fake news continues to spread around us, often as clickbait that sensationalizes, or even outright falsifies, information. These articles and stories are created to mislead people into believing narratives that otherwise hold no merit.
As the coronavirus – or, to be precise, Covid-19 – spreads around the world, getting accurate information about the disease becomes ever more important. In the social media age, this places a heavy responsibility on the likes of Facebook and Twitter.
These are the platforms that billions now rely on for news, and they are already being exploited by those wanting to spread misinformation. With advances in technology, the rumor and propaganda mills have been handed over to artificial intelligence (AI) algorithms designed to push such content as widely as possible.
Google, Microsoft and Facebook have been using AI in an attempt to counter the threat of fake news by automatically assessing the truth of articles, effectively leaving AI on both sides of the battle – a possible cause of, and hopefully a solution to, the growing problem.
In Facebook groups, it is evident that people who believe in one conspiracy theory are often prone to finding others credible. In groups set up by campaigners who believe the new 5G mobile phone networks are a conspiracy against the public, you can now find posts from people who believe 5G has weakened people's immune systems, making them vulnerable to the virus.
While this may seem relatively harmless, Paul Hunter, an epidemiologist and professor of health protection at the University of East Anglia, says misinformation can be dangerous.
He points to the lessons learned from the Ebola virus in West Africa in 2016. "People who believed conspiracy theories about Ebola were less likely to adopt safe practices. And so they were putting themselves at an increased risk of getting the infection and ultimately increased risk of dying."
As the world continues the battle against fake news, AI is now looked upon as a cornerstone in separating the genuine from the fabricated. That is because AI systems can learn behaviors through pattern recognition. Harnessing that power, fake news can be identified by learning from articles that people flagged as inaccurate in the past.
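The idea of learning from previously flagged articles can be sketched with a toy classifier. The sketch below is illustrative, not any platform's actual system: it trains a minimal naive Bayes model on a hypothetical, hand-made set of headlines that human fact-checkers have supposedly labeled (1 = flagged as false, 0 = judged accurate), then scores new headlines against those learned word patterns.

```python
from collections import Counter
import math

# Hypothetical training set: headlines previously labeled by human
# fact-checkers (1 = flagged as false, 0 = judged accurate).
TRAINING_DATA = [
    ("miracle cure doctors dont want you to know", 1),
    ("shocking secret they are hiding from you", 1),
    ("you wont believe what happens next", 1),
    ("government confirms new vaccine timeline", 0),
    ("health officials publish updated case figures", 0),
    ("researchers report trial results in journal", 0),
]

def train(data):
    """Count word frequencies per class to estimate P(word | class)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in data:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return the class (0 or 1) with the higher log-likelihood,
    using Laplace smoothing to handle unseen words."""
    vocab = set(counts[0]) | set(counts[1])
    best_label, best_logp = None, float("-inf")
    for label in (0, 1):
        logp = math.log(0.5)  # uniform class prior
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            logp += math.log(p)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

counts, totals = train(TRAINING_DATA)
print(score("shocking miracle cure they are hiding", counts, totals))   # 1
print(score("officials publish updated trial figures", counts, totals)) # 0
```

Production systems use far richer features and far larger labeled corpora, but the principle is the same: past human flags become training signal for the machine.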
AI is also at the core of ascertaining the semantic meaning of a web article. For instance, a natural language processing (NLP) engine can analyze a story's subject, headline, main body text and geo-location. Further, AI can find out whether other sites are reporting the same facts, so a story is weighed against reputable media sources. Keyword analytics, a form of AI, has been instrumental in discovering and flagging fake news headlines.
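A bare-bones version of that keyword analytics step might look like the following. The watchlist and threshold here are invented for illustration; a real system would derive its keyword list statistically from large volumes of previously flagged stories rather than hard-coding it.

```python
# Hypothetical watchlist of sensationalist keywords; a production
# system would mine these from analytics over flagged stories.
FLAG_KEYWORDS = {"shocking", "miracle", "secret", "exposed", "hoax"}

def flag_headline(headline, keywords=FLAG_KEYWORDS, threshold=2):
    """Flag a headline for human review when it contains at least
    `threshold` distinct keywords from the watchlist."""
    words = set(headline.lower().split())
    hits = words & keywords
    return len(hits) >= threshold, sorted(hits)

print(flag_headline("SHOCKING miracle cure exposed by insider"))
# (True, ['exposed', 'miracle', 'shocking'])
print(flag_headline("Health ministry updates travel guidance"))
# (False, [])
```

Note that such a filter only surfaces candidates; the semantic and cross-source checks described above are what decide whether a flagged story is actually false.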
Manipulation of visual media is enabled by the wide-scale availability of sophisticated image- and video-editing applications, as well as automated manipulation algorithms that permit edits that are very difficult to detect, either visually or with current image-analysis and media-forensics tools.
DARPA, an innovator of technologies for national security, is developing tools for automatically assessing the integrity of an image or video to help in the fight against fake media. With such media increasingly used to promote propaganda and other forms of fake news, that capability is needed now more than ever.
Advancements like these in artificial intelligence can strengthen the overall integrity of the media and, ultimately, help curb the predominance of fake news.