| by J.M. Berger

This piece appears in J.M. Berger’s World Gone Wrong. Subscribe here to read more of his work.

A preprint released this week is titled “Partisans are more likely to entrench their beliefs in misinformation when political outgroup members fact-check claims.” Beneath that headline, which may seem obvious but deserves to be demonstrated and quantified, a number of interesting details can be found:

  • Fact-checking misinformation had a net positive effect, albeit a small one.
  • Partisanship had a much more powerful effect than fact-checking.
  • When a fact-check came from a perceived political out-group, it was 52 percent more likely to backfire, causing people to double down on their wrong beliefs.
  • In the words of the authors (Reinero, Harris, et al.), these findings suggest “that partisan identity may drive irrational belief updating.” I wasn’t totally clear on whether “irrational” refers to the belief or to the updating process.
  • Fact-checks were more effective when they came from people who were perceived as in-group members or neutral authorities.

The paper offers more granular detail for those who are interested, but I wanted to highlight a few dynamics that are particularly relevant to extremist belief and the current political landscape in the United States and around the world.

The effect of partisanship on beliefs about the veracity of information is an outgrowth of the social construction of reality. When doubts arise concerning a piece of information—regardless of whether the information is objectively true or untrue—most people turn to members of their in-group for clarification. This is a normal and almost universal human behavior. Philosopher Joseph Petraglia, in a critique of constructionism, summarizes the view in these words:

“Briefly (and broadly), a social constructionist argues that knowledge is created, maintained, and altered through an individual’s interaction with and within his or her ‘discourse community.’ Knowledge resides in consensus rather than in any transcendent or objective relationship between a knower and that which is to be known.”

Constructionists can (and do) take issue with elements of this definition, but it zeroes in on a key component of social construction. In an ideal world, we would hope that in-groups might congeal around some objective criteria for expertise and factuality. But like it or not, it’s very hard to argue for the existence of some perfect, “objective” vantage point from which reality can be definitively viewed. Truth is not immutable, and it does not exist independently of the people who perceive it.

Human societies are complex systems. Most people identify with overlapping and sometimes conflicting identity in-groups, and most encounter out-groups with radically different beliefs at some point in their lives. When two or more groups present differing information about the world, members of each group must resolve the conflict. Typically, but not universally, members prefer information endorsed by the in-group with which they identify most strongly. If the challenge to one’s preferred in-group consensus continues or escalates, people tend to adopt more hostile attitudes toward the conflicting out-group consensus. In social construction theory, this is called “nihilation”—the negation of a competing social consensus by deeming an out-group inferior to the in-group.

As I’ve written before (in The Atlantic and in Optimal), the problem with online social platforms is that they quantify this in-group consensus in swift, powerful, and distortive ways. Algorithms make it easy to find in-group members, and they reward social media users who articulate the in-group consensus with measurable engagement. The new findings from Reinero, Harris, et al. fit nicely with previous findings showing that engagement metrics such as likes and shares increase users’ vulnerability to misinformation (Avram et al., 2020). High levels of engagement lead to more sharing and less concern for fact-checking.

There are many potential downsides for society in these findings. For instance, Reinero, Harris, et al. found that neutral or in-group fact-checking sources were more effective at countering disinformation than out-group sources. Extremists have adapted to this dynamic by claiming that neutral news outlets are controlled by or even equivalent to a hostile out-group. Relentless campaigns to discredit the neutrality of mainstream media are very effective at immunizing in-group members against fact-checkers who are not part of the in-group and at demonizing the very concept of fact-checking.

Another hazard comes from manipulative behavior on social media, whether carried out by bots or well-organized humans, whether state-sponsored or grassroots. It doesn’t take a lot of likes and shares to sway social media users. Higher engagement numbers had a stronger effect, according to Avram et al., but lower numbers still had some effect, and it’s all relative. A social media user who normally gets very little engagement may feel validated by relatively small numbers of likes, say 10, 20, or 50. And users who have inserted themselves into manipulative networks may be rewarded with objectively large engagement numbers—hundreds, thousands, tens of thousands.

Complexity aggravates these challenges. Even in an age of unprecedented global mobility, it’s impossible for most people to travel around the world and verify every fact for themselves. The world of knowledge is too vast for any one person to process; we can only chip away at our little corners of expertise. But thanks to our information environment, the great wide world is always knocking at the door.

Most people rely on trusted others to tell them what’s happening in society, politics, health, and science. Our opinions about COVID or climate change may be informed, but most people get informed by listening to trusted intermediaries rather than through direct knowledge. When bad actors constantly attack those trusted intermediaries, it degrades trust and discourages experts from speaking out.

The right’s increasingly explicit campaign to make every aspect of objective truth into a partisan battlefield also makes it difficult for social platforms to devise any kind of consistent, practical policies in response to dis- and misinformation. While companies can and still do sometimes mobilize to meet urgent large-scale challenges (like ISIS or COVID), these mobilizations are often ephemeral, prone to change with the political and corporate winds.

Even companies that are fully committed to fighting misinformation are reluctant to make themselves the final arbiters of objective truth. The epistemological hurdles would be daunting for anyone. And given that some of Silicon Valley’s most prominent leaders seem uniquely unsuited to the task, that reluctance may be a blessing in disguise.

 
