Why humans and AI are stuck in a standoff against fake news

Fake news is a plague on the global community. Despite our best efforts to combat it, the problem runs deeper than simple fact-checking or the removal of dedicated disinformation outlets. Current thinking still tends to favor an AI-based solution, but what does that really mean?

According to recent research, including this paper from scientists at the University of Tennessee and Rensselaer Polytechnic Institute, we’ll need more than just smart algorithms to fix our broken discourse.

The problem is simple: AI can’t do anything a person can’t. Sure, it can do a lot of things faster and more efficiently than people – like counting to a million – but at its core, artificial intelligence only scales what people can already do. And people are really bad at identifying fake news.

According to the aforementioned researchers, the problem lies in what is called “confirmation bias”. Basically, when a person thinks they already know something, they are less likely to be swayed by a “fake news” tag or “dubious source” description.

According to the team’s paper:

In two sequential studies, using data collected from news consumers via Amazon Mechanical Turk (AMT), we investigate whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations, users are more receptive to the advice of the AI, and furthermore, under this condition, tailored advice is more effective than generic advice.

This makes it incredibly difficult to design, develop, and train an AI system to detect fake news.

While most of us may think we can spot fake news when we see it, the truth is that the bad actors who create disinformation don’t operate in a vacuum: they’re better at lying than we are at telling the truth. At least, that is, when they’re telling us something we already believe.

The scientists found that people – including the freelance Amazon Mechanical Turk workers in the studies – were more likely to mistakenly label an article as fake if it contained information contrary to what they believed to be true.

On the other hand, people were less likely to make the same mistake when the news presented was perceived as part of a novel situation. In other words: when we think we already know what’s going on, we’re more likely to accept fake news that matches our preconceptions.

While the researchers identify several methods by which we can use this information to get better at warning people when they’re being presented with fake news, the bottom line is that accuracy isn’t the issue. Even when the AI gets it right, we’re still less likely to believe a genuine news article when its facts don’t align with our personal biases.

It’s not surprising. Why should anyone trust a machine built by Big Tech over the word of a human journalist? And if you’re thinking it’s because machines don’t lie, you’re dead wrong.

When an AI system is designed to identify fake news, it typically needs to be trained on pre-existing data. In order to teach a machine to recognize and flag fake news in the wild, we have to feed it a mixture of real and fake articles so it can learn to tell which is which. And the datasets used to train AI are typically hand-labeled, by humans.

Often, this means the labeling work is crowdsourced out to a low-cost third party such as Amazon’s Mechanical Turk, or to any number of data shops that specialize in compiling data, not in news. The humans who decide whether a given article is fake or not may or may not have any real journalism experience, or any expertise in the tricks bad actors use to create compelling, hard-to-detect fake news.
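To make that dependency concrete, here’s a minimal sketch of the kind of supervised pipeline described above, written in Python with scikit-learn. The toy articles, labels, and variable names are invented for illustration only; a production system would train on many thousands of hand-labeled examples, and whatever biases the human annotators bring get baked directly into the model.

```python
# Minimal sketch: a fake-news classifier trained on hand-labeled text.
# The articles and labels below are invented placeholders; real training
# data would come from human annotators (e.g., Mechanical Turk workers).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

articles = [
    "Celebrity secretly funds moon-landing hoax, insiders claim",
    "Miracle cure for all known diseases discovered, doctors baffled",
    "Shadow government controls weather with hidden satellites",
    "City council approves budget for downtown road repairs",
    "Local high school wins regional science competition",
    "New bus route to connect suburbs with city center next spring",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = labeled fake, 0 = labeled real

# Hold out a small test set; stratify so both classes appear in training.
X_train, X_test, y_train, y_test = train_test_split(
    articles, labels, test_size=2, stratify=labels, random_state=0
)

# TF-IDF features plus a linear classifier: the model can only learn
# the surface patterns present in whatever the humans chose to label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

print(model.predict(X_test))  # predictions inherit the annotators' judgments
```

The point of the sketch is the dependency it exposes: the classifier never sees “truth,” only the labels the annotators assigned, so any confirmation bias in the labeling flows straight through to the model’s predictions.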

And as long as humans are biased, we’ll continue to see fake news thrive. Not only does confirmation bias make it difficult for us to distinguish the facts we disagree with from the lies we’re told, but the perpetuation and acceptance of outright lies and misinformation by celebrities, family members, peers, bosses, and the highest political offices make it difficult to convince people otherwise.

While artificial intelligence systems can certainly help identify flagrantly false claims, especially those made by news organizations that routinely traffic in misinformation, the fact remains that the veracity of a given news article isn’t really the issue for most people.

Take, for example, the most-watched cable network on TV: Fox News. It holds that spot despite the fact that Fox News’ own lawyers have repeatedly argued in court that some of its programming – including the second most-watched show on the network, hosted by Tucker Carlson – shouldn’t be taken as factual news.

According to a ruling in a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil – a Trump appointee – ruled in favor of Carlson and Fox after determining that reasonable people wouldn’t take the host’s nightly rhetoric as truthful:

The ‘general tenor’ of the show should then inform a viewer that [Carlson] is not ‘stating actual facts’ about the topics he discusses and is instead engaging in ‘exaggeration’ and ‘non-literal commentary.’ […] Fox persuasively argues that, given Mr. Carlson’s reputation, any reasonable viewer ‘arrives with an appropriate amount of skepticism.’

And that’s why, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether a given news article is true or false.

If news organizations themselves, the general public, elected officials, Big Tech, and so-called experts can’t decide whether a given news article is true or false without bias, there’s no way we can trust an AI system to do it. As long as the truth remains as subjective as the politics of any given reader, we’ll be inundated with fake news.

Published March 16, 2021 – 21:57 UTC

