Personality type, as well as politics, predicts who shares fake news

We thought that conscientiousness could help explain the link between political conservatism and sharing fake news. Specifically, we predicted that low-conscientiousness conservatives (LCCs) would disseminate more misinformation than other conservatives or low-conscientiousness liberals. We decided to investigate the relationship between personality, politics, and sharing fake news through a series of eight studies involving 4,642 participants.

First, we measured people’s political ideology and conscientiousness through assessments that asked participants about their values and behaviors. We then showed the same people a series of real and fake news stories relating to COVID and asked them to rate how accurate the stories were. Then we asked whether they would consider sharing each story. We found that both liberals and conservatives sometimes saw false stories as accurate—and this error was likely driven in part by wanting certain stories to be true because they aligned with their beliefs. In addition, people of all political persuasions share false news, but this behavior was markedly higher among LCCs when compared with everyone else in the study. At high levels of conscientiousness, for example, there was no difference between liberals and conservatives. Low-conscientiousness liberals did not share more misinformation than their high-conscientiousness liberal counterparts.

In a second study, we replicated these results with fake news containing a strong political slant and observed an even greater effect. Once again, liberals across the conscientiousness spectrum, along with highly conscientious conservatives, did not engage in spreading misinformation at a high rate. But conservatives low in conscientiousness were frequent spreaders.

We next asked: what explains LCCs’ exceptional tendency to share fake news? To explore this question, we designed an experiment in which we not only gathered information about our participants’ politics and personality, but also administered questionnaires to assess their desire for chaos, support of socially and economically conservative issues, support for Donald Trump, trust in mainstream media, and time spent on social media. LCCs, we learned, expressed a general need for chaos—the desire to disrupt and destroy the existing political and social institutions—and this may explain their greater proclivity to spread misinformation. This need reflects an underlying desire to assert superiority of one’s ideas or group over others and is especially elevated among conservatives with lower conscientiousness. Importantly, other factors we studied, including support for Trump, time spent on social media, and political and economic conservatism were not as strongly tied to LCCs’ heightened tendency to share fake news.

Unfortunately, our work on this personality trait also suggests that accuracy labels on news stories will not solve the problem of misinformation. We ran a study where we explicitly stated whether each news story in question was false, using a “disputed” tag commonly seen on social media, or true, using a “supported” tag. We found that the supported tag increased the rate at which real stories were shared among both liberals and conservatives. However, LCCs continued to share misinformation at a greater rate, despite explicit warnings that the stories were false. Though it’s possible these participants did not believe the fact-check system, the findings support the contention that LCCs share fake news to intentionally sow chaos.

In fact, we ran another study that involved explicitly telling participants that an article they wanted to share was inaccurate. Participants then had the chance to change their answer. Not only did LCCs still share fake news at a higher rate than others in the study, they also were comparatively insensitive to direct warnings that the stories they wanted to share were fake.

Asher Lawson is a graduate student at Duke University in the Management and Organizations program. In his work, he examines cognitive and gender biases in organisations and society, building on judgment and decision-making theory and using big data. Hemant Kakkar is an assistant professor of management at Duke University’s Fuqua School of Business. In his research, he draws on social psychological and evolutionary theories of status to examine judgments and behaviors of individuals and groups within social hierarchies.

Photo of Fake News Keyboard by Jeso Carneiro used under a Creative Commons license.

Does reading fake news actually change people’s behavior? This Covid-19 study says yes, a bit — but potentially an important bit

“The spread of Covid-19 is linked to 5G mobile networks.” “Place a halved onion in the corner of your room to catch the Covid-19 germs.” “Sunny weather protects you from COVID-19.”

These fake news stories and others like them spread rapidly on social media during the early stages of the pandemic. The wave of misinformation was so great that the authorities coined a word for it: “infodemic.”

Fake news isn’t new. But interest in it has increased sharply in recent years, corresponding with the rise of social media. Attention spiked in 2016, amid concerns that the Brexit referendum and the U.S. presidential election may have been influenced by misinformation spread by other countries.

It’s assumed that fake news has a negative effect on people’s behavior. For example, it has been claimed that fake news might affect people’s willingness to wear a mask, get a vaccine or comply with other public health guidelines. Yet, surprisingly, virtually no research has directly tested this assumption, so my colleagues and I took on the challenge of measuring what effect fake news actually has on people’s behavior.

In May 2020, we recruited over 4,500 participants to an online study via an article on the Irish news website TheJournal.ie. Participants were told that the purpose of the study was to “investigate reactions to a range of public health messages and news stories relating to the novel coronavirus outbreak.”

Each person was shown four true news stories about the pandemic and two fake news stories (selected from a list of four fake stories). These fake articles were designed to be very similar to those circulating at the time. They stated that drinking coffee might protect against the coronavirus, that eating chili peppers might reduce COVID-19 symptoms, that pharmaceutical companies were hiding harmful side-effects of a vaccine then in development, and that the forthcoming contact-tracing app to be released by Ireland’s public health service had been developed by people with ties to Cambridge Analytica.

After reading the stories, the participants indicated how likely they were to act on the information over the next several months, such as drinking more coffee or downloading the contact-tracing app.

My colleague Gillian Murphy and I found that fake stories did seem to change people’s behavior, but not dramatically so. For example, people who were shown the fake story about privacy concerns with the contact-tracing app were 5 percent less willing to download the contact-tracing app than those who hadn’t read this story.

Some participants even developed false memories about the fake stories they had read (which we had also seen happen in some of our previous research). “Remembering” previously hearing a fake COVID-19 story seemed to make some people in our study more likely to act in a certain way. For example, people who falsely remembered hearing about the contact-tracing app’s privacy issues were 7 percent less likely to download the contact-tracing app than those who read the story but didn’t “remember” it.

Such effects were small and they didn’t happen with every fake story. But even small effects can produce big changes. Unfounded concerns about a link between the MMR vaccine and autism led to a relatively small drop in childhood vaccination rates in the early 2000s — about 10 percent — which in turn led to a significant spike in measles cases. So it’s possible that the small effects of fake news we saw in our study could have bigger effects on people’s health.

However, there are some important points to consider. First, we measured people’s intentions to do things, not what they actually did. Intentions don’t always translate into actions — consider, for example, your past plans to eat more healthily or exercise more. However, if people don’t even intend to change their behavior, the chances of them actually doing so are slim, so measuring intentions is an important first step.

Second, our study was based on people reading new made-up stories just once. In the real world, people may come across fake news stories many times on social media. Being repeatedly exposed to the same story can increase how true it seems. The effects of repeatedly seeing fake news stories therefore need further investigation.

A secondary aim of our study was to look at the effects of general warnings about misinformation, such as those shared by governments and media organisations. These warnings typically encourage people to think critically about online information and think before they share.

Again, there hasn’t been a lot of research on this topic. We were aware of only one study that had looked at whether these sorts of generic warnings have an effect on whether people accept misinformation. Crucially, people in that study were aware that they were taking part in research on fake news, which might have made them more suspicious of what they were viewing.

In our research, some participants were randomly assigned to read a generic misinformation warning before reading the true and fake stories. Surprisingly, we found that reading a warning had no effect on people’s responses to the fake stories. Governments should think about this when considering their fake news strategies: While the effect of fake news may be less than expected, the effect of any warning could also be low.

Ciara Greene is an associate professor of psychology at University College Dublin. This article is republished from The Conversation under a Creative Commons license.

Parler will be hate speech–free — on iOS only

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This roundup offers the highlights of what you might have missed.

Parler will be nicer, but only on iOS. In March Apple blocked Parler, the “free speech” app that has become a haven for far-right extremists and conspiracy theorists, from the App Store after initially removing it in January following the Capitol riots. A letter that Apple sent to Parler at the time “included several screenshots to support the rejection,” Bloomberg reported, including “user profile pictures with swastikas and other white nationalist imagery, and user names and posts that are misogynistic, homophobic and racist.”

As of Monday, Parler’s app is back in the App Store, but what you’ll see on its iOS app is different from what you’ll see on its website or other smartphones (Parler is still banned from Google Play, but can be side-loaded onto Android phones from Parler’s site). From The Washington Post’s Kevin Randall:

Posts that are labeled “hate” by Parler’s new artificial intelligence moderation system won’t be visible on iPhones or iPads. There’s a different standard for people who look at Parler on other smartphones or on the Web: They will be able to see posts marked as “hate,” which includes racial slurs, by clicking through to see them.

Parler has resisted placing limits on what appears on its social network, and its leaders have equated blocking hate speech to totalitarian censorship, according to Amy Peikoff, chief policy officer. But Peikoff, who leads Parler’s content moderation, says she recognizes the importance of the Apple relationship to Parler’s future and seeks to find common ground between them. […]

Parler is still pressing Apple to allow a function where users can see a warning label for hate speech, then click through to see it on iPhones. But the banning of hate speech was a condition for reinstatement on the App Store.

Also, Parler is getting trending topics.

Christian Staal Bruun Overgaard and Natalie (Talia) Jomini Stroud surveyed 1,010 U.S. adults in August 2020. They found that “Americans’ perceptions of hot-button issues are largely driven by partisanship.”

Respondents were presented with four statements and asked to rate whether each one was “definitely true,” “probably true,” “unsure,” “probably false,” or “definitely false.” Here are the statements and correct answers:

— Russia tried to interfere in the 2016 presidential election. (True)
— Since February 2020, the flu has resulted in more deaths than the coronavirus. (False)
— Trump failed to send U.S. health experts to China to investigate coronavirus. (False)
— It is illegal to mail ballots to every registered voter. (True)

Age and education levels were correlated with giving correct answers to some of the questions — older people and people with at least a bachelor’s degree were more likely to correctly assess the statements about Russia’s interference in the 2016 election and flu deaths. But “partisanship turned out to be the strongest predictor of Americans’ knowledge, even surpassing education.” Both Democrats and Republicans were more likely to rate statements favoring their own party as true.

When evaluating the statement — mostly congenial to Democrats — regarding Russia’s interference with the 2016 U.S. presidential election, almost nine in ten (87.4%) Democrats correctly said it was “probably true” or “definitely true,” whereas fewer than half (48.5%) of Republicans said so.

For the two false statements, partisans’ responses were closely related to their political preferences. For the statement claiming that the flu had resulted in more deaths since February than the coronavirus, close to seven in ten (65.8%) Democrats correctly labeled it as “probably false” or “definitely false,” whereas fewer than four in ten (34.6%) Republicans did so.

Conversely, for the statement asserting that Trump had failed to send U.S. health experts to China to investigate the coronavirus, almost half (49.5%) of the Republicans correctly labeled the statement as “probably false” or “definitely false,” whereas fewer than one in ten (6.9%) Democrats gave these responses.

When evaluating a true statement — congenial to Republicans — which correctly said that it is illegal to mail ballots to every registered voter in the U.S., fewer than one in ten (7.7%) Democrats answered “probably true” or “definitely true,” whereas just over a quarter (25.8%) of Republicans gave these answers.

The study is here.

How news organizations fought misinformation during the pandemic. As part of a larger American Press Institute report called “How local news organizations are taking steps to recover from a year of trauma,” Jane Elizabeth takes a look at news orgs’ efforts to fight misinformation during the pandemic — locally:

Mahoning Matters in Ohio wanted to debunk a viral conspiracy about antifa groups looting the local Wal-Mart, so they actually went to the Wal-Mart and showed on Facebook Live that there was no antifa, no looting. “Instead of just reporting about this as a misinformation trend, we went out there and dispelled the rumors,” says former publisher Mandy Jenkins. “We can do that with every story. We’re local.”

Back in March 2020, when there was only one confirmed coronavirus case in Arizona, The Tucson Sentinel decided to jump proactively into a potential pit of conspiracies and lies: Facebook. “It’s important to challenge [misinformation] right where it happens,” says Dylan Smith, the Sentinel’s editor and publisher, so the Tucson Coronavirus Updates Facebook group was launched …

The Sentinel team set up guidelines and rules for participating in its Facebook group, and designated administrators and monitors — comprised of volunteers from the community as well as Sentinel staff — to keep the conversations in check. “Too many newsrooms try to fix social media disasters after the train’s already run off the trestle and exploded on the rocks below,” says Smith. “That never works.”

Importantly, the Sentinel set a limit on participation in the Facebook group: Users must be local residents. “By restricting membership to those people who actually live in the Tucson area, we’ve eliminated a lot of drive-by trolls, and while we haven’t had to ban too many people or even mute them, we don’t hesitate if there’s someone who’s not there to participate in good faith,” says Smith.

[In] West Virginia, Black by God, a local startup for Black residents, recognized that the lack of trustworthy information in the community left it wide open for misinformation — an issue examined in a project supported by the Lenfest Institute and a study published by the Harvard Kennedy School in January. Journalist Crystal Good of Charleston, W.Va., launched the Black by God Substack newsletter and a website in part to help improve “political literacy” and the lack of access to COVID-19 data in diverse communities.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Someone *wrong* on the internet? Correcting them publicly may make them act like a bigger jerk

You see a bit of fake news on Twitter. Should you debunk it? Why not, right?

Fact-checkers and researchers have looked at the impact of debunking on the belief in the false claim — and have found little evidence that issuing a correction could backfire, though debates continue. A new paper from Mohsen Mosleh, Cameron Martel, Dean Eckles, and David Rand, however, takes a look at the effect of debunking on subsequent behavior.

Would being publicly corrected reduce a user’s tendency to share fake news? Maybe prompt them to tweet with a little more civility? The results were not encouraging.

In the 24 hours after being corrected, Twitter users who received a reply debunking a claim made in one of their posts posted more content from disreputable sources. There was also a significant uptick in the partisan slant and toxicity of their subsequent posts.

Here’s how the field experiment worked. The researchers created a fleet of human-impersonating bots and waited until each account had amassed 1,000 followers and was at least three months old. Then, the accounts began to issue corrections by dropping Snopes links in replies to tweets with false information. (The bots were literally reply guys; all were styled as white men “since a majority of our subjects were also white men.”)

All told, about 1,500 debunking replies were made.

Some of the fake news targeted for correction? “A photograph of U.S. President Donald Trump in his Trump Tower office in 2016 with several boxes of Sudafed in the background provides credible evidence of stimulant abuse” and “Virginia Gov. Ralph Northam said the National Guard would cut power and communications before killing anyone who didn’t comply with new gun legislation.” Both claims have been debunked by Snopes.

The debunking was public, but fairly gentle. (“I’m uncertain about this article — it might not be true. I found a link on Snopes that says this headline is false.”) The replies also came late. Corrections were delivered, on average, 81 days after the original post.

For 24 hours after the public correction, users shared more news from sources identified by professional fact-checkers as low-quality. The decrease in news quality was small — like 1 to 1.4% small — but statistically significant. Being corrected also increased the partisan slant in subsequent tweets and significantly increased “language toxicity.”

Researchers found that retweeted content, in particular, suffered. The negative effects of a public debunking were less prominent in “primary tweets,” those composed by the users, rather than those merely shared or retweeted without comment.

The results were surprising. A previous experiment had found that nudging Twitter users to consider the accuracy of a headline improved the quality of the news they shared.

So what gives? The researchers have a few theories. Because the effects were stronger for retweeted material — as compared to primary tweets — the authors suggest that users just weren’t paying as close attention to content they merely shared.

The method of debunking — a public reply, rather than a private message sent via DM — may have played a role, too. Being called out on a specific tweet may have prompted a more emotional response than a subtle nudge about accuracy more generally. Here’s what the researchers suggest the difference is between the two field experiments on Twitter:

A private message asking users to consider the accuracy of a benign (politically neutral) third-party post, sent from an account that explicitly identified itself as a bot, increased the quality of subsequently retweeted news links; and further survey experiments support the interpretation that this is the result of attention being directed towards the concept of accuracy. This is in stark contrast to the results that we observe here. It seems likely that the key difference in our setup is that being publicly corrected by another user about one’s own past post is a much more emotional, confrontational, and social interaction than the subtle accuracy prime.

The public nature of this more recent experiment, the researchers argue, could have shifted the users’ attention to social dynamics like embarrassment, indignation over self-expression or partisanship, and their relationship with the “person” issuing the correction. In the battle for users’ attention, the social considerations won.

Twitter has experimented with prompting users to read articles before sharing and to reconsider replying with hostile language. There’s more research to be done, but this experiment suggests public corrections may not be as effective as other nudges toward accuracy and civility.

“Overall, our findings raise questions about potentially serious limits on the overall effectiveness of social corrections,” the researchers conclude. “Before social media companies encourage users to correct misinformation that they observe on-platform, detailed quantitative work and normative reflection is needed to determine whether such behavior is indeed overall beneficial.”

Photo by Claudio Schwarz used under a Creative Commons license.

Why do Americans share so much fake news? One big reason is they aren’t paying attention, new research suggests

Many Americans share fake news on social media because they’re simply not paying attention to whether the content is accurate — not necessarily because they can’t tell real from made-up news, a new study in Nature suggests.

Lack of attention was the driving factor behind 51.2% of misinformation sharing among social media users who participated in an experiment conducted by a group of researchers from MIT, the University of Regina in Canada, the University of Exeter Business School in the United Kingdom, and the Center for Research and Teaching in Economics in Mexico. The results of a second, related experiment indicate a simple intervention — prompting social media users to think about news accuracy before posting and interacting with content — might help limit the spread of online misinformation.

“It seems that the social media context may distract people from accuracy,” study coauthor Gordon Pennycook, an assistant professor of behavioral science at the University of Regina, said. “People are often capable of distinguishing between true and false news content, but fail to even consider whether content is accurate before they share it on social media.”

Pennycook and his colleagues conducted seven behavioral science and survey experiments as part of their study, “Shifting attention to accuracy can reduce misinformation online,” published last week. Some experiments focused on Facebook and others focused on Twitter.

The researchers recruited participants for most of the experiments through Amazon’s Mechanical Turk, an online crowdsourcing marketplace that many academics use. For one experiment, they selected Twitter users who previously had shared links to two well-known, right-leaning websites that professional fact-checkers consistently rate as untrustworthy — Breitbart.com and Infowars.com. The sample size for each experiment varies from 401 U.S. adults for the smallest to 5,379 for the largest.

For several experiments, researchers asked participants to review the basic elements of news stories — headlines, the first sentences and accompanying images. Half the stories represented actual news coverage while the other half contained fabricated information. Half the content was favorable to Republicans and half was favorable to Democrats. Participants were randomly assigned to either judge the accuracy of headlines or determine whether they would share them online.

For the final experiment, researchers sent private messages to 5,379 Twitter users who previously had shared content from Breitbart and Infowars. The messages asked those individuals to rate the veracity of one news headline about a topic unrelated to politics. Researchers then monitored the content those participants shared over the next 24 hours.

The experiments reveal a host of insights on why people share misinformation on social media:

  • One-third — 33.1% — of participants’ decisions to share false headlines were because they didn’t realize they were inaccurate.
  • More than half of participants’ decisions to share false headlines — 51.2% — were because of inattention.
  • Participants reported valuing accuracy over partisanship — a finding that challenges the idea that people share misinformation to benefit their political party or harm the opposing party. Nearly 60% of participants who completed a survey said it’s “extremely important” that the content they share on social media is accurate. About 25% said it’s “very important.”
  • Partisanship was a driving factor behind 15.8% of decisions to share false headlines on social media.
  • Social media platform design could contribute to misinformation sharing. “Our results suggest that the current design of social media platforms — in which users scroll quickly through a mix of serious news and emotionally engaging content, and receive instantaneous quantified social feedback on their sharing — may discourage people from reflecting on accuracy,” the authors write in their paper.
  • Twitter users who previously shared content from Breitbart and Infowars were less likely to share misinformation after receiving private messages asking them for their opinion of the accuracy of a news headline. During the 24 hours after receiving the messages, these Twitter users were 2.8 times more likely to share a link to a mainstream news outlet than a link to a fake news or hyperpartisan website.

Pennycook and his colleagues note that the Twitter intervention — sending private messages — seemed particularly effective among people with a larger number of Twitter followers. Pennycook told me that’s likely because Twitter accounts with more followers are more influential within their networks.

“The downstream effect of improving the quality of news sharing increases with the influence of the user who is making better choices,” he explained. “It may be that the effect is as effective (if not more so) for users with more followers because the importance of ‘I better make sure this is true’ is literally greater for those with more followers.”

Pennycook said social media platforms could encourage the sharing of higher-quality content — and re-orient people back to truth — by nudging users to pay more attention to accuracy.

Platforms, the authors point out, “could periodically ask users to rate the accuracy of randomly selected headlines, thus reminding them about accuracy in a subtle way that should avoid reactance (and simultaneously generating useful crowd ratings that can help identify misinformation).”
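The paper doesn’t include an implementation, but the crowd-rating idea is easy to sketch. In the illustrative Python snippet below, the function names, the 1–7 rating scale, and the thresholds are all assumptions made for the example, not details from the study.

```python
import random
from collections import defaultdict

# Hypothetical in-memory store; a real platform would use its own data layer.
accuracy_ratings = defaultdict(list)  # headline_id -> list of 1-7 ratings

def pick_headline_for_prompt(recent_headline_ids):
    """Pick one recently circulated headline at random for an accuracy prompt."""
    return random.choice(recent_headline_ids)

def record_rating(headline_id, rating):
    """Store one user's rating, from 1 (not at all accurate) to 7 (very accurate)."""
    accuracy_ratings[headline_id].append(rating)

def flag_low_accuracy(headline_id, min_ratings=50, threshold=3.0):
    """Flag a headline for review once enough crowd ratings average below a cutoff."""
    ratings = accuracy_ratings[headline_id]
    if len(ratings) < min_ratings:
        return False
    return sum(ratings) / len(ratings) < threshold
```

The appeal of the approach, as the authors describe it, is that the prompt itself is the intervention: simply being asked nudges users to think about accuracy, while the ratings accumulate as a useful byproduct.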

The researchers received funding for their study from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Omidyar Network, the John Templeton Foundation, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada.

Denise-Marie Ordway is managing editor of Journalist’s Resource. This article first appeared on Journalist’s Resource and is republished here under a Creative Commons license.

Illustration by Filip Jovceski used under a Creative Commons license.

New research shows how journalists are responding and adapting to “fake news” rhetoric

Editor’s note: Longtime Nieman Lab readers know the bylines of Mark Coddington and Seth Lewis. Mark wrote the weekly This Week in Review column for us from 2010 to 2014; Seth’s written for us off and on since 2010. Together they’ve launched a new monthly newsletter on recent academic research around journalism. It’s called RQ1 and we’re happy to bring each issue to you here at Nieman Lab.

Adapting to the misinformation era, journalists emphasize transparency in their daily practices

“Fake news” is an unfortunate phrase. It is so casually invoked and widely deployed as to be almost devoid of meaning. And, most infamously, it has been weaponized by politicians (one former president in particular) as both a ready tool to dismiss inconvenient truths in the moment and also, more perniciously, to cast doubt on the legitimacy of journalism as a whole.

Yet, “fake news” captures for many people a defining set of features about our information environment: from declining trust in news media to concerns about the seemingly supercharged spread of misinformation on social media to the general unease with the level of fakery that seems to fight for our precious attention at every turn online. This creates a conundrum for journalists: Given how directly the “fake news” phenomenon and the discourse surrounding it challenges the authority behind producing “real” news, what are journalists to do? How should they respond and adapt?

A new article in Journalism & Mass Communication Quarterly offers some initial answers. Researchers Hong Tien Vu and Magdalena Saldaña use a nationally representative survey of U.S. journalists to examine how newsroom practices have changed (or not) amid the rise of misinformation and the rhetoric of “fake news.” Specifically, the authors focused on whether journalists reported having either adopted new approaches or intensified existing ones as a way of “preventing” misinformation and thereby avoiding complaints of spreading fake news.

First, Vu and Saldaña found that “journalists were most likely to cross-check with sources more often, limit the use of anonymity, and make it as clear as possible where the information comes from.” On the other hand, journalists did not report substantially increasing their involvement in vetting information with lawyers or training on fact-checking platforms — though it’s possible that, particularly in the case of fact-checking tactics, they were already habitually doing these things. No intensification of such activities was needed.

Second, the researchers tested for differences between two types of professional practices that are core to journalism: accountability and transparency. The former emphasizes traditional fact-checking and verification, while the latter points to emergent forms of opening up the journalistic process to audience view — e.g., by providing raw footage, limiting the use of anonymous sources, making it clear how information was obtained, and disclosing details about a journalist’s background.

Survey results suggest that, against the current backdrop of misinformation and how it challenges the news industry, journalists have more readily adopted or intensified practices that promote transparency in their work. This may be seen as part of a larger effort among journalists to better understand and connect with their audiences, or it may simply reflect that transparency practices are being taken up increasingly as a means of delivering on journalistic accountability, just in a new way.

Regardless, it’s noteworthy that journalists who saw the rise in fake news as a threat to democracy were more likely to report using transparency-oriented practices — perhaps because they saw transparency as a solution to the misinformation problem.

Another key finding, the authors note, is that “those who felt responsible for providing accurate information to their social media followers were more likely to adopt/intensify both accountability and transparency practices.” A possible explanation for this is that journalists with a clearly perceived audience base online might feel compelled, in an accountability sense, “to do something to improve the information environment for their audience.” And, at the same time, social media, in their design and culture, encourage the kind of self-disclosure and relational exchanges that are indicative of the transparency approach to journalism.

In all, Vu and Saldaña offer an important step forward in understanding how journalists, depending on their background, role, and attitudes, may perceive and respond to the misinformation moment in ways that contribute to larger transformations taking place in the field today.

Research roundup

Here are some other studies that caught our eye this month:


News media use, talk networks, and anti-elitism across geographic location: Evidence from Wisconsin by Chris Wells, Lewis A. Friedland, Ceri Hughes, Dhavan V. Shah, Jiyoun Suk, and Michael W. Wagner, in The International Journal of Press/Politics.

Polarization continues to be one of the dominant themes of contemporary Western political analysis, and one of the primary axes along which that polarization has run is geography — that is, rural and urban settings. But the rural-urban political dynamic is much more complex than the simple binary of popular imagination, with many geographical nooks and crannies, from the exurbs to small cities, complicating the picture. This team of University of Wisconsin researchers used their state as a case to examine the rural-urban divide in relation to three factors: News consumption, political talk networks, and anti-elitism.

They found that those in small towns, small cities, and the suburbs reported more politically diverse discussion partners than those in urban areas, particularly the state’s capital, Madison. And while rural residents consumed less centrist/liberal and prestige media than others, they also consumed less conservative media than urban residents, when controlling for other variables. Anti-elitism was strongest on the left from Madison and on the right from rural areas, but lowest in conservative suburbs.

The results don’t indicate a clean rural-urban split that we might be tempted to imagine. And the researchers note that for all the differences they found, one similarity was striking: Across the board, the top news source was local TV news and local newspapers, which attract only a fraction of the scholarly attention of cable news and Facebook. “This is an important reminder for our field,” the authors wrote, “not to neglect mundane news media, even as they wane in popularity.”

When journalists see themselves as villains: The power of negative discourse by Ruth Moon, in Journalism & Mass Communication Quarterly.

In much of the world, we expect journalists to reflexively defend themselves against external criticism and encroachment from the state and from competing spheres of influence. Dozens of studies on concepts like boundary work, paradigm repair, and metajournalistic discourse explore the ways journalists use their public discourse to protect their own autonomy and jockey for cultural legitimacy. For many journalists, defending yourself is just part of the job.

That’s why Moon’s study of Rwandan journalists is so remarkable. In interviews with 40 Rwandan journalists as part of an ethnography of the country’s newsrooms, Moon found that their professional identity is dominated by a metanarrative in which they are untrustworthy, too powerful, and need to be reined in by other social institutions. This narrative stems from Rwandan journalists’ deeply rooted sense of complicity and guilt in helping foment the genocide of the 1990s. As a result, they’re treated extremely skeptically by audiences, sources, and policymakers, and in their eyes, they deserve it. It’s a haunting and fascinating picture of the power of negative discourse to shape professional identity in post-conflict journalism, fueled by collective guilt.

Legitimating a platform: Evidence of journalists’ role in transferring authority to Twitter by Logan Molyneux and Shannon C. McGregor, in Information, Communication & Society.

Over the past decade or so, researchers have spent a lot of time — seriously, a lot a lot — studying how journalists use Twitter. That focus has extended to how news organizations use Twitter as a source: how heavily they rely on it, how they verify it (or don’t), and how they use it to quote politicians. But Molyneux and McGregor advance that line of research with a provocative argument. Journalists, they say, don’t approach Twitter as a source at all, something to be scrutinized. Instead, they treat it simply as content, an interchangeable, largely unquestioned building block of news.

Molyneux and McGregor (who’ve been looking at this for a while) argue that as they cite tweets in their stories, journalists use the tools they’ve long used to build their own authority to instead transfer that authority to Twitter, an external platform. In a content analysis of 365 articles citing tweets, they found that journalists rarely explain or qualify tweets, simply passing them along without evidence of journalistic processing. In doing so, journalists present Twitter as a news source whose legitimacy is self-evident enough not to need their validation or scrutiny, and they reduce their own authority to merely amplifying the algorithmic judgment of Twitter.

The tragedy of errors: Political ideology, perceived journalistic quality, and media trust by Tamar Wilner, Ryan Wallace, Ivan Lacasa-Mas, and Emily Goldstein, in Journalism Practice.

When audiences are asked why they don’t trust the news media, one of the major reasons they frequently give is accuracy: They say they don’t trust the news media because they regularly see errors in their work. But that response has drawn its own skepticism, as researchers have wondered whether what news consumers call “errors” are really just another form of perceived bias, heavily influenced by political ideology and the hostile media effect.

That’s the question that drives this study, as Wilner and her colleagues used a U.S. survey to look at the relationships between perceptions of various types of errors, media trust, political ideology, and news consumption. They found that economic conservatives perceive more errors in news, but not social conservatives. Overall, though, error perceptions didn’t seem closely tied to ideology.

Some types of perceived errors — inaccurate headlines, factual errors, and missing information — were significantly related to lower media trust, but strangely, those who perceived a lot of misspellings and grammar errors had more trust in the news media. Ultimately, while political ideology (specifically conservatism) was a greater driver of media distrust, errors played a significant role as well, and couldn’t simply be chalked up to partisan attitudes.

‘Forced to report’: Affective proximity and the perils of local reporting on Syria by Omar Al-Ghazzi, in Journalism.

When local or national conflicts escalate into issues that draw global concern, a complex power dynamic emerges between local journalists and the foreign correspondents who come in to cover the conflict. Al-Ghazzi’s study offers a nuanced look at that dynamic, and particularly the tensions at work for local journalists in those situations.

Drawing on 19 interviews with Syrian activist-journalists, Al-Ghazzi vividly illustrates the tug-of-war between those two roles. These media practitioners feel drawn into activism by their strong emotional connection to the place and the cause they are covering. But they also feel “forced to report” — to take on the journalistic norms of objectivity and neutrality in bearing witness, because of their lack of power relative to foreign journalists.

Al-Ghazzi centers on the concept of affective proximity to capture these dynamics. This proximity, he argues, is a form of emotional labor that rather perversely undermines local journalists’ authority rather than bolsters it. Proximity, he says, is “deemed the source of locals’ authority to take part in the news story but also what is held against them since they are deemed too attached to their countries and causes.”

The epistemologies of breaking news by Mats Ekström, Amanda Ramsälv, and Oscar Westlund, in Journalism Studies.

In the past decade, several researchers have sought to answer questions about how journalists balance accuracy and speed in reporting breaking news by looking at it through the lens of epistemology — how journalists establish knowledge about news and communicate it. Ekström and colleagues add a rich study to this line of research with their examination of the continuous news and live broadcast desk of a Swedish for-profit news organization.

In three weeks at the desk, the researchers observed a variety of strategies by which journalists dealt with an environment in which “reporters without much preparation and information are sent to report on events where not much happens.” In the process, Ekström and his co-authors found that journalists did care about accuracy, but developed routines to hedge against the uncertainty of their knowledge and the speed with which they might be proven wrong.

One particularly interesting concept they developed was epistemic dissonance, which occurs when a news item that journalists have structured as important turns out to be a non-story, or one that journalists can know very little about immediately. The authors outline the ways journalists grappled with epistemic dissonance in their coverage, but conclude that it inevitably erodes journalists’ authority by breaking their contract with the audience to produce reliable and proportionate news. (Full disclosure: Seth previously has worked with Ekström and Westlund on studies of journalism and epistemology.)

A photographer at the U.S. Capitol on January 6, 2021, by Elvert Barnes, used under a Creative Commons license.

How to reduce the spread of fake news — by doing nothing

When we come across false information on social media, it is only natural to feel the need to call it out or argue with it. But my research suggests this might do more harm than good. It might seem counterintuitive, but the best way to react to fake news — and reduce its impact — may be to do nothing at all.

False information on social media is a big problem. A UK parliament committee said online misinformation was a threat to “the very fabric of our democracy.” It can exploit and exacerbate divisions in society. There are many examples of it leading to social unrest and inciting violence, for example in Myanmar and the United States.

It has often been used to try to influence political processes. One recent report found evidence of organized social media manipulation campaigns in 48 different countries, including the United States and United Kingdom.

Social media users also regularly encounter harmful misinformation about vaccines and virus outbreaks. This is particularly important with the roll-out of Covid-19 vaccines because the spread of false information online may discourage people from getting vaccinated — making it a life or death matter.

With all these very serious consequences in mind, it can be very tempting to comment on false information when it’s posted online — pointing out that it is untrue, or that we disagree with it. Why would that be a bad thing?

Increasing visibility

The simple fact is that engaging with false information increases the likelihood that other people will see it. If people comment on it, or quote tweet — even to disagree — it means that the material will be shared to our own networks of social media friends and followers.

Any kind of interaction at all — whether clicking on the link or reacting with an angry face emoji — will make it more likely that the social media platform will show the material to other people. In this way, false information can spread far and fast. So even by arguing with a message, you are spreading it further. This matters, because if more people see it, or see it more often, it will have an even greater effect.
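Platforms don’t publish their ranking systems, but the dynamic described above can be illustrated with a toy model in which every interaction, including an angry reaction or a critical reply, adds to the score that decides how widely a post is distributed. The weights and interaction types in this Python sketch are invented for illustration, not drawn from any real platform.

```python
# Toy illustration: any interaction raises a post's ranking score,
# so even a critical reply or an angry reaction widens its reach.
# These weights are invented for the example.
ENGAGEMENT_WEIGHTS = {
    "click": 0.5,
    "angry_reaction": 1.0,
    "reply": 2.0,        # arguing with the post still counts as engagement
    "quote_share": 3.0,  # quote-posting a debunk still redistributes the post
}

def ranking_score(interactions):
    """Sum weighted interactions; a higher score means wider distribution."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())

# A post that attracts arguments outranks one that is quietly ignored.
ignored_post = {"click": 40}
argued_post = {"click": 40, "angry_reaction": 30, "reply": 25, "quote_share": 10}
assert ranking_score(argued_post) > ranking_score(ignored_post)
```

Under a model like this, the debunking reply and the angry reaction do exactly what the debunker doesn’t want: they push the false post in front of more people.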

I recently completed a series of experiments with a total of 2,634 participants looking at why people share false material online. In these, people were shown examples of false information under different conditions and asked if they would be likely to share it. They were also asked about whether they had shared false information online in the past.

Some of the findings weren’t particularly surprising. For example, people were more likely to share things they thought were true or were consistent with their beliefs.

But two things stood out. The first was that some people had deliberately shared political information online that they knew at the time was untrue. There may be different reasons for doing this (trying to debunk it, for instance). The second thing that stood out was that people rated themselves as more likely to share material if they thought they had seen it before. The implication is that if you have seen things before, you are more likely to share when you see them again.

Dangerous repetition

It has been well established by numerous studies that the more often people see pieces of information, the more likely they are to think they are true. A common maxim of propaganda is that if you repeat a lie often enough, it becomes the truth.

This extends to false information online. A 2018 study found that when people repeatedly saw false headlines on social media, they rated them as being more accurate. This was even the case when the headlines were flagged as being disputed by fact checkers. Other research has shown that repeatedly encountering false information makes people think it is less unethical to spread it (even if they know it is not true, and don’t believe it).

So to reduce the effects of false information, people should try to reduce its visibility. Everyone should try to avoid spreading false messages. That means that social media companies should consider removing false information completely, rather than just attaching a warning label. And it means that the best thing individual social media users can do is not to engage with false information at all.

Tom Buchanan is a professor of psychology at the University of Westminster. This article is republished from The Conversation under a Creative Commons license.

Depiction of a black hole by The European Southern Observatory used under a Creative Commons license.

Two new studies show, again, that Facebook doesn’t censor conservatives

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Right-leaning pages consistently earn more interactions than left-leaning or ideologically nonaligned pages.” Conservatives have long complained that their views are censored on Facebook. Republican Sen. Mike Lee of Utah said in Congressional hearings this week that fact-checking — like the labels that Facebook and Twitter attach to false posts — count as censorship: “When I use the word ‘censor’ here, I’m meaning blocked content, fact-check, or labeled content, or demonetized websites of conservative, Republican, or pro-life individuals or groups or companies.” (Censorship is the suppression of speech or other information on the grounds that it’s considered offensive or questionable. The studies below make it very clear that these stories are not being suppressed.)

The idea that right-leaning content is actually censored — that people are prevented from seeing it — is “short on facts and long on feelings,” as Casey Newton has written. This week, a couple of stories and studies showed again that conservative content outperforms liberal content on Facebook. (See also: Progressive publication Mother Jones’ recent claim that its traffic was throttled as Facebook tweaked its algorithm to benefit conservative sites like The Daily Wire instead.)

— Politico worked with the Institute for Strategic Dialogue, a London-based thinktank that studies extremism online, to “analyze which online voices were loudest and which messaging was most widespread around the Black Lives Matter movement and the potential for voter fraud in November’s election.” In their analysis of more than 2 million Facebook, Instagram, Twitter, Reddit, and 4Chan posts, the researchers found that

a small number of conservative users routinely outpace their liberal rivals and traditional news outlets in driving the online conversation — amplifying their impact a little more than a week before Election Day. They contradict the prevailing political rhetoric from some Republican lawmakers that conservative voices are censored online — indicating that instead, right-leaning talking points continue to shape the worldviews of millions of U.S. voters.

For instance:

At the end of August, for instance, Dan Bongino, a conservative commentator with millions of online followers, wrote on Facebook that Black Lives Matter protesters had called for the murder of police officers in Washington, D.C. Bongino’s social media posts are routinely some of the most shared content across Facebook, based on CrowdTangle’s data.

The claims — first made by a far-right publication that the Southern Poverty Law Center labeled as promoting conspiracy theories — were not representative of the actions of the Black Lives Matter movement. But Bongino’s post was shared more than 30,000 times, and received 141,000 other engagements such as comments and likes, according to CrowdTangle.

In contrast, the best-performing liberal post around Black Lives Matter — from DL Hughley, the actor — garnered less than a quarter of the Bongino post’s social media traction, based on data analyzed by Politico.

— A nine-month study by the progressive nonprofit Media Matters, using CrowdTangle data, found both that partisan content (left and right) did better than non-partisan content and that “right-leaning pages consistently earned more average weekly interactions than either left-leaning or ideologically nonaligned pages. […] Between January 1 and September 30, right-leaning Facebook pages tallied more than 6 billion interactions (reactions, comments, shares), or 43% of total interactions earned by pages posting about American political news, despite accounting for only 26% of posts.”

Beware George Soros stories. The New York Times is working with Zignal Labs, a firm that tracks information online, to analyze which news topics in 2020 are most associated with misinformation. “The topic most likely to generate misinformation this year, according to Zignal, was an old standby: George Soros, the liberal financier who has featured prominently in right-wing conspiracy theories for years,” the Times’ Kevin Roose reports. Here’s the full list:

1. George Soros (45.7 percent misinformation mentions)
2. Ukraine (34.2 percent)
3. Vote by Mail (21.8 percent)
4. Bio Weapon (24.2 percent)
5. Antifa (19.4 percent)
6. Biden and Defund the Police (14.2 percent)
7. Hydroxychloroquine (9.2 percent)
8. Vaccine (8.2 percent)
9. Anthony Fauci (3.2 percent)
10. Masks (0.8 percent)

For the top-three subjects — George Soros, Ukraine, and vote by mail — “some of the most common spreaders of misinformation were right-wing news sites like Breitbart and The Gateway Pundit,” Roose notes. “YouTube also served as a major source of misinformation about these topics, according to Zignal.”

“Moving slowly is a Wikipedia super-power.” At Wired, Noam Cohen writes about Wikipedia’s plan to prevent election-related misinformation from making its way onto the platform.

On Wednesday, Wikipedia moved to protect its main 2020 election page, and will likely apply those safeguards to the many other articles that will need to be updated depending on the outcome of the race. The main tools for doing this are similar to the steps it has already deployed to resist disinformation about the Covid-19 pandemic: installing controls to prevent new, untested editors from even dipping a toe until well past Election Day and making sure that there are large teams of editors alerted to any and all changes to election-related articles. Wikipedia administrators will rely on a watchlist of “articles on all the elections in all the states, the congressional districts, and on a large number of names of people involved one way or another,” wrote Drmies, an administrator who helps watch over political articles.

Per Wednesday’s change, anyone editing the article about November’s election must have had a registered account for more than 30 days and already made 500 edits across the site. “I am hoping this will reduce the issue of new editors trying to change the page to what they believe to be accurate when it doesn’t meet the threshold that has been decided,” wrote Molly White, a software engineer living in Boston known on Wikipedia as GorillaWarfare, who put the order in place. The protection for that article, she wrote, was meant to keep away bad actors as well as overly exuberant editors who feel the “urge to be the ones to introduce a major fact like the winner of a presidential election.”

On Election Night, she wrote, Wikipedia is likely to impose even tighter restrictions, limiting the power to publish a winner in the presidential contest — sourced, of course, to reputable outlets like the Associated Press or big network news operations — to the most experienced, most trusted administrators on the project.

Older people and Republicans are most likely to share Covid-19 stories from fake news sites on Twitter https://www.niemanlab.org/2020/10/older-people-and-republicans-are-most-likely-to-share-covid-19-stories-from-fake-news-sites-on-twitter/ https://www.niemanlab.org/2020/10/older-people-and-republicans-are-most-likely-to-share-covid-19-stories-from-fake-news-sites-on-twitter/#respond Mon, 26 Oct 2020 15:53:35 +0000 https://www.niemanlab.org/?p=187173 Since March, a group of scholars from Northeastern, Harvard, Rutgers, and Northwestern have been working to understand how social behaviors affect transmission of Covid-19. They’ve issued a series of reports over the months, and the most recent one is an analysis of nearly 30 million Covid-19-related tweets collected between January 1 and September 30, 2020, from over 500,000 registered U.S. voters.

The researchers found that a little over 1 percent of the URLs shared in the group of tweets linked to sites that “systematically” publish fake news.¹ Sixty percent of the tweets linked to URLs from “known, reputable domains,” and 39.8% linked to “domains with unknown quality.”

Here are some of the findings:

Older registered voters (of all political orientations) shared more news overall, and also more stories from fake news sites.

Republicans over the age of 65 were the most likely to share stories about Covid-19 from fake and misleading sites. 5.3% of the URLs that they shared between January 1 and September 30 came from fake domains.

Older women were especially likely to share news from disreputable sites, Northeastern professor David Lazer said in a separate article about the data:

Researchers were curious about who was behind the sharing of bad information, not who was believing it. The average age of these so-called “super sharers” is 59, “considerably older than the average Twitter user,” Lazer says.

“In terms of the data, it’s disproportionately older women,” he says.

Even as they shared possible misinformation about Covid-19, older voters were less likely to believe it than younger voters. Previous research by this same consortium had found that “younger people, regardless of political orientation, are more likely to believe one of 11 pieces of Covid-19 misinformation when compared to older people.”

The far-right site The Gateway Pundit — which in the past has, for instance, identified the wrong person as the 2017 Las Vegas shooter and falsely reported that Hillary Clinton had a seizure on camera — was the most-shared misleading site. Not only did it greatly outperform the other fake news domains…

…but in some months it was almost as popular as reputable news sites: “In August and September respectively, The Gateway Pundit was ranked the 4th and 6th most shared domain [overall],” the researchers note. “In August, the only domains with more shares were The New York Times, The Washington Post, and CNN.” The Gateway Pundit has White House press credentials and Trump has given it special treatment during briefings. Recent Covid-related headlines on The Gateway Pundit include “IT’S A SCAM: After 48,299 COVID-19 Cases at 37 US Universities — Only 2 Hospitalizations and ZERO Deaths — More Likely to Be Killed By a Dog,” “Despite President Trump Contracting China Coronavirus All Signs Are COVID-19 Is Dissipating,” and “New WHO Data Reveals Coronavirus Less Lethal than Last Three Major US Pandemics — And they Destroyed the Economy for This.”

You can explore the data yourself — and answer questions like “Which stories were shared most often by Florida residents in September?” — in the researchers’ Covid-19 tweets dashboard here.

  1. The researchers used the categorization system outlined here: “We labeled as ‘black’ a set of websites taken from preexisting lists of fake news sources constructed by fact-checkers, journalists, and academics who identified sites that published almost exclusively fabricated stories[…]To measure fake news more comprehensively, we labeled additional websites as ‘red’ or ‘orange’ via a manual annotation process of sites identified by Snopes.com as sources of questionable claims. Sites with a red label (e.g., Infowars.com) spread falsehoods that clearly reflected a flawed editorial process, and sites with an orange label represented cases where annotators were less certain that the falsehoods stemmed from a systematically flawed process.” For the purposes of this study, the researchers identified only tweets linking to “black” or “red” sites as fake, but if they’d included “orange” sites too, “the percentage of shared fake news URLs increases to 1.8%.”
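To make the footnote’s arithmetic concrete, here is a minimal sketch — not the researchers’ code; the domain labels and example URLs are hypothetical placeholders — of how a black/red/orange domain list could be used to compute the share of fake-news URLs, with and without the “orange” sites:

```python
# Minimal sketch of the labeling approach described in the footnote above.
# The label assignments and example URLs are placeholders, not the study's data.
from urllib.parse import urlparse

DOMAIN_LABELS = {
    "example-fabricator.com": "black",    # almost exclusively fabricated stories
    "infowars.com": "red",                # falsehoods from a clearly flawed editorial process
    "example-questionable.com": "orange", # annotators less certain the process was flawed
}

def fake_news_share(urls, include_orange=False):
    """Percentage of URLs whose domain carries a fake-news label."""
    fake_labels = {"black", "red"} | ({"orange"} if include_orange else set())
    hits = sum(1 for u in urls if DOMAIN_LABELS.get(urlparse(u).netloc) in fake_labels)
    return 100 * hits / len(urls)

sample = [
    "https://infowars.com/some-story",
    "https://example.org/ordinary-news",
    "https://example-questionable.com/post",
]
print(fake_news_share(sample))                       # black + red only
print(fake_news_share(sample, include_orange=True))  # broader definition, as in the 1.8% figure
```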
Facebook has been terrible about removing vaccine misinformation. Will it do better with election misinformation? https://www.niemanlab.org/2020/09/facebook-has-been-terrible-about-removing-vaccine-misinformation-will-it-do-better-with-election-misinformation/ https://www.niemanlab.org/2020/09/facebook-has-been-terrible-about-removing-vaccine-misinformation-will-it-do-better-with-election-misinformation/#respond Fri, 04 Sep 2020 13:45:00 +0000 https://www.niemanlab.org/?p=185790

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Even when companies are handed misinformation on a silver platter, they fail to act.” The nonprofit Center for Countering Digital Hate and global agency Restless Development trained young volunteers to identify, record, and report vaccine misinformation on Facebook, Instagram, YouTube, and Twitter. The platforms’ responses were not impressive: “Of the 912 posts flagged and reported by volunteers [between July 21 and August 26], fewer than 1 in 20 posts containing misinformation were dealt with (4.9%).”

Many of the posts were about a coronavirus vaccine. More than a tenth of the posts in the sample referred to Bill Gates in some way. Others suggested that 5G and the annual flu vaccine worsen Covid-19 symptoms, that a coronavirus vaccine will change people’s DNA, and that the government is trying to kill people of color with the vaccine. The report includes examples of these posts.

In June, the Center for Countering Digital Hate had performed a similar exercise with Covid-related misinformation on social platforms — identifying it, reporting it, and tracking what happened. At the time, they found that the platforms removed fewer than one in 10 of the posts reported.

Following publication [of that report], Facebook requested and we supplied a complete list of the misinformation posts our volunteers had collected.

For this report, we revisited those posts to audit whether further action was taken in the last three months[…]while some further action was taken, three quarters remains intact.

Despite requesting and receiving a full list of Facebook posts containing misinformation featured in our Will to Act report, the platform still only removed one quarter of the posts we identified as breaching their rules. These include posts claiming that Covid is a “bioweapon,” that it is “caused by vaccines” and various conspiracies about Bill Gates.

Facebook proved to be particularly poor at removing the accounts and groups posting misinformation, with just 0.3 percent banned.[…]

Twitter proved to be most effective in removing accounts, with 12.3% banned from the platform. This follows encouraging signs that Twitter is taking a proactive approach to removing and flagging misinformation about coronavirus on its platform.

Removal rates were notably poorer on Instagram than they were on Facebook, despite both companies sharing the same set of community standards and similar policies on Covid misinformation. This is particularly concerning given that this report shows Instagram remains a strong source of follower growth for anti-vaxxers.

The report comes out amid new promises from Facebook to quash misinformation ahead of the 2020 U.S. presidential election. On Thursday the company said, among other things, that it would “bar any new political ads on its site in the week before Election Day,” “strengthen measures against posts that tried to dissuade people from voting,” “quash any candidates’ attempts at claiming false victories by redirecting users to accurate information on the results,” “place a voting information center — a hub for accurate, up-to-date information on how, when and where to register to vote — at the top of its News Feed through Election Day,” “remove posts that tell people they will catch Covid-19 if they vote,” remove posts that cause “confusion around who is eligible to vote or some part of the voting process,” and “limit the number of people that users [can] forward messages to in its Messenger app to no more than five people, down from more than 150.”

The social platforms’ policies are filled with gray areas that don’t always make it clear which types of election-related misinformation must be taken down. A recent report from the Election Integrity Partnership — a collaboration between the Stanford Internet Observatory and Program on Democracy and the Internet, Graphika, the Atlantic Council’s Digital Forensic Research Lab, and the University of Washington’s Center for an Informed Public — finds that “few platforms [out of 14 studied] have comprehensive policies on election-related content as of August 2020,” and that the category of misinformation that “aims to delegitimize election results on the basis of false claims” is particularly problematic because “none of these platforms have clear, transparent policies on this type of content, which is likely to make enforcement difficult and uneven.”

“Content that uses misrepresentation to disrupt or sow doubt about the larger electoral process or the legitimacy of the election can carry exceptional real-world harm,” the report’s authors conclude. “Combating online disinformation requires action to be taken quickly before content goes viral or reaches a large population of users predisposed to believe that disinformation. Quick and effective action will require the platforms to make decisions against a well-documented framework and for those decisions to be enforced fairly — without adjusting those actions due to concerns about the political consequences.”

Vaccine photo by Self Magazine used under a Creative Commons license.

What makes fake news feel true when it isn’t? For one thing, hearing it over and over again https://www.niemanlab.org/2020/08/what-makes-fake-news-feel-true-when-it-isnt-for-one-thing-hearing-it-over-and-over-again/ https://www.niemanlab.org/2020/08/what-makes-fake-news-feel-true-when-it-isnt-for-one-thing-hearing-it-over-and-over-again/#respond Fri, 21 Aug 2020 14:05:57 +0000 https://www.niemanlab.org/?p=185559

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Crocodiles sleep with their eyes closed”? Just out: The Psychology of Fake News: Accepting, Sharing, and Correcting Misinformation, a new collection of research articles edited by Rainer Greifeneder, Mariela Jaffe, Eryn Newman, and Norbert Schwarz. The book, published by Routledge, is available as a free download or online read (and here it is on Kindle), and it includes a lot of research on why and how people believe false information.

In several of the chapters, researchers look at “what makes a message ‘feel’ true, even before we have considered its content in any detail,” and consider the implications of this for misinformation. Here are some things that make people believe something is true:

Repetition

The influence of repetition is most pronounced for claims that people feel uncertain about, but is also observed when more diagnostic information about the claims is available (Fazio, Rand, & Pennycook, 2019; Unkelbach & Greifeneder, 2018). Worse, repetition even increases agreement among people who actually know that the claim is false — if only they thought about it (Fazio, Brashier, Payne, & Marsh, 2015). For example, repeating the statement “The Atlantic Ocean is the largest ocean on Earth” increased its acceptance even among people who knew that the Pacific is larger. When the repeated statement felt familiar, they nodded along without checking it against their knowledge. Even warning people that some of the claims they will be shown are false does not eliminate the effect, although it attenuates its size. More importantly, warnings only attenuate the influence of repetition when they precede exposure to the claims — warning people after they have seen the claims has no discernable influence (Jalbert, Newman, & Schwarz, 2019).

Pronounceability

Merely having a name that is easy to pronounce is sufficient to endow the person with higher credibility and trustworthiness. For example, consumers trust an online seller more when the seller’s eBay username is easy to pronounce — they are more likely to believe that the product will live up to the seller’s promises and that the seller will honor the advertised return policy (Silva, Chrobot, Newman, Schwarz, & Topolinski, 2017). Similarly, the same claim is more likely to be accepted as true when the name of its source is easy to pronounce (Newman et al., 2014).

Familiarity

Even exposing people to only true information can make it more likely that they accept a false version of that information as time passes. Garcia-Marques, Silva, Reber, and Unkelbach (2015) presented participants with ambiguous statements (e.g., “crocodiles sleep with their eyes closed”) and later asked them to rate the truth of statements that were either identical to those previously seen or that directly contradicted them (e.g., “crocodiles sleep with their eyes open”). When participants made these judgments immediately, they rated repeated identical statements as more true, and contradicting statements as less true, than novel statements, which they had not seen before. One week later, however, identical as well as contradicting statements seemed more true than novel statements. Put simply, as long as the delay is short enough, people can recall the exact information they just saw and reject the opposite. As time passes, however, the details get lost and contradicting information feels more familiar than information one has never heard of — yes, there was something about crocodiles and their eyes, so that’s probably what it was.

As time passes, people may even infer the credibility of the initial source from the confidence with which they hold the belief. For example, Fragale and Heath (2004) exposed participants two or five times to statements like “The wax used to line Cup-o-Noodles cups has been shown to cause cancer in rats.” Next, participants learned that some statements were taken from the National Enquirer (a low credibility source) and some from Consumer Reports (a high credibility source) and had to assign the statements to their likely sources. The more often participants had heard a statement, the more likely they were to attribute it to Consumer Reports rather than the National Enquirer. In short, frequent exposure not only increases the apparent truth of a statement, it also increases the belief that the statement came from a trustworthy source. Similarly, well-intentioned efforts by the Centers for Disease Control and the Los Angeles Times to debunk a rumor about “flesh-eating bananas” morphed into the belief that the Los Angeles Times had warned people not to eat those dangerous bananas, thus reinforcing the rumor (Emery, 2000). Such errors in source attribution increase the likelihood that people convey the information to others, who themselves are more likely to accept (and spread) it, given its alleged credible source (Rosnow & Fine, 1976).

Just regular photos

[People] were asked to participate in a trivia test where they saw a series of general knowledge claims appear on a computer screen (Newman et al., 2012). The key manipulation in this experiment was that half of the claims appeared with a related non-probative photo [Ed. note: i.e., the photo provided no evidence for the claim one way or the other], much like the format one might encounter in the news or on social media, and half of the claims appeared without a photo. For example, participants in this trivia study saw claims like “Giraffes are the only mammals that cannot jump” presented either with a photo, like the headshot of a giraffe[…]or without a photo. Despite the fact that the photos provided no evidence of whether the claims were accurate or not — the headshot of the giraffe tells you nothing about whether giraffes can jump — the presence of a photo biased people toward saying the associated claims were true. Photos produced truthiness, a bias to believe claims with the addition of non-probative information.

In another set of experiments, published in the same article, Newman and colleagues conceptually replicated the finding. In these experiments, participants were asked to play a different trivia game: “Dead or Alive” (a game that a co-author remembered from old radio programing). The key task was to judge whether the claim “This person is alive” was true or false for each celebrity name that appeared on the screen. Half the time, those celebrity names appeared with a non-probative photo — a photo that depicted the celebrity engaged in their profession but did not provide any evidence about the truth of the claim “This person is alive”. For instance, subjects may have seen the name “Nick Cave” with a photo of Nick Cave on stage with a microphone in his hand and singing to a crowd[…]Nothing about the photo provided any clues about whether Nick Cave was in fact alive or not. In many ways, the photos were simply stock photos of the celebrities.

The findings from this experiment were clear: people were more likely to accept the claim “This person is alive” as true when the celebrity name appeared with a photo, compared to when there was no photo present. Perhaps more surprisingly, the same pattern of results was found when another group of subjects were shown the same celebrity names, with the same celebrity photos, but evaluated the opposite claim: “This person is dead”. In other words, the very same photos nudged people toward believing not only claims that the celebrities were “alive” but also claims that the same people were “dead”.

Across a series of experiments, Cardwell, Lindsay, Förster, and Garry (2017) asked people to rate how much they knew about various complex processes (e.g., how rainbows form). Half the time, people also saw a non-probative photo with the process statement (e.g., seeing a photo of a watch face with the cue “How watches work”). Although the watch face provides no relevant information about the mechanics of a watch, when people saw a photo with a process cue, they claimed to know more about the process in question. When Cardwell et al. examined actual knowledge for these processes, those who saw photos had explanations that were similar in quality to those who did not see a photo. In the context of fake news and misinformation, such findings are particularly worrisome and suggest that stock photos in the media may not only bias people’s assessments of truth but also lead to an inflated feeling of knowledge or memory about a claim they encounter.

You can check out the full book here.

Bat-eared foxes (not fake!) in Tanzania by Scott Presnell used under a Creative Commons license.

People who engage with false news are hyper-concerned about truth. But they think it’s being hidden. https://www.niemanlab.org/2020/08/people-who-engage-with-false-news-are-hyper-concerned-about-truth-but-they-think-its-being-hidden/ https://www.niemanlab.org/2020/08/people-who-engage-with-false-news-are-hyper-concerned-about-truth-but-they-think-its-being-hidden/#respond Thu, 06 Aug 2020 14:24:00 +0000 https://www.niemanlab.org/?p=185215 We might also fail to understand how certain ways of knowing, such as media literacy, can be manipulated and weaponized. We know that some people are more likely to seek alternative, all-explaining narratives — those with low social status, victims of discrimination, or people who feel politically powerless. As well as witnessing the rise in 5G conspiracy theories, we may also be experiencing the rise of certain ways of knowing and their manipulation, especially in the context of a resistance to institutions and elites.

Donald Trump’s 2020 campaign has begun to engage with the idea of “truth over facts” with its campaign website thetruthoverfacts.com, which mocks a series of gaffes by Democratic candidate Joe Biden. Though the website is satirical, it primes the idea of the truth being something more fundamental — and Trumpian — than Biden’s misremembered facts.

At First Draft, we plan to develop techniques for monitoring and analyzing these behaviors in the coming months. We want to speak to others interested in this line of research as we experiment with new techniques. If you are interested in the study of online ways of knowing, or have something to tell us that we can use, we want to hear from you. Please comment below or get in touch on Twitter.

Tommy Shane is First Draft’s head of policy and impact. A version of this story originally ran on Footnotes.

Photo taken in New York’s Union Square on April 14, 2020, by Eden, Janine and Jim, used under a Creative Commons license.

How much does fake coronavirus news affect people’s real-life health behavior? https://www.niemanlab.org/2020/07/how-much-does-fake-coronavirus-news-affect-peoples-real-life-health-behavior/ https://www.niemanlab.org/2020/07/how-much-does-fake-coronavirus-news-affect-peoples-real-life-health-behavior/#respond Fri, 31 Jul 2020 11:46:13 +0000 https://www.niemanlab.org/?p=185001

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

How much does exposure to fake coronavirus information change people’s behavior? What about when it comes from the president? A working paper (not yet peer-reviewed) by Ciara M. Greene and Gillian Murphy, of Ireland’s University College Dublin and University College Cork, finds that, in some cases, exposure to false information about the pandemic can change people’s actions — but the size of the effect is small, at least in Ireland. From the paper:

In this study, we exposed participants to fake news stories suggesting, for example, that certain foods might help protect against Covid-19, or that a forthcoming vaccine might not be safe. We observed only very small effects on intentions to engage in the behaviors targeted by the stories, suggesting that the behavioral effects of one-off fake news exposure might be weaker than previously believed. We also examined whether providing a warning about fake news might reduce susceptibility, but found no effects. This suggests that, if fake news does affect real-world health behavior, generic warnings such as those used by governments and social media companies are unlikely to be effective.

Greene and Murphy recruited the 3,746 participants for their study via a call-out in TheJournal.ie. They note that “the majority of participants were well-educated, with 2,395 participants (64%) having earned at least an undergraduate degree.”

Participants were shown public health and misinformation warning posters “designed to mimic the format and style of government-issued public health messages relating to Covid-19 in the Republic of Ireland”; they were also shown four fake stories and four real stories. During the study, they weren’t told that some of the stories were fake. (They were debriefed afterward.)

These were the fake stories:

1. “New research from Harvard University shows that the chemical in chili peppers that causes the ‘hot’ sensation in your mouth reduces the replication rate of coronaviruses. The researchers are currently investigating whether adding more spicy foods to your diet could help combat Covid-19”

2. “A whistleblower report from a leading pharmaceutical company was leaked to the Guardian newspaper in April. The report stated that the coronavirus vaccine being developed by the company causes a high rate of complications, but that these concerns were being disregarded in favor of releasing the vaccine quickly.”

3. “A study conducted in University College London found that those who drank more than three cups of coffee per day were less likely to suffer from severe Coronavirus symptoms. Researchers said they were conducting follow-up studies to better understand the links between caffeine and the immune system.”

4. “The programming team who designed the HSE app to support coronavirus contact-tracing were found to have previously worked with Cambridge Analytica, raising concerns about citizens’ data privacy. The app is designed to monitor people’s movements in order to support the government’s contact-tracing initiative.”

These were the real stories:

1. “A new study from Trinity College Dublin revealed that vitamin D is likely to reduce serious coronavirus complications. The researchers urged the government to advise Irish citizens to take daily vitamin D supplements.”

2. “Mixed-martial arts fighter Conor McGregor posted an online video urging the Irish government to enforce a complete lockdown, with the help of the army. ‘I urge our government to utilize our defense forces,’ he stated.”

3. “Sinn Féin President Mary Lou McDonald called off two Sinn Féin rallies in March, after a case of coronavirus was reported at her children’s school.”

4. “As most of Europe is in lockdown, Sweden is pursuing a different strategy against COVID-19. Pubs, restaurants, gyms and most schools remain open in the Scandinavian state, with the government relying on personal responsibility for compliance rather than strict enforcement. Official guidance states that citizens may socialize, as long as they stay at ‘arm’s length’ from each other.”

The study also had a false memory component: Participants were asked if they remembered seeing six news stories — all four true stories and two randomly selected fake ones.

The researchers found that “exposure to misinformation was associated with small but significant changes to two of the four critical health behaviors assessed”:

Participants who viewed a story about privacy concerns relating to a contact-tracing app reported being less willing to download the app, while participants who remembered having seen this story before also reported small decreases in intention. Participants who reported a false memory for the coffee story reported stronger intentions to drink more coffee in future, though notably the opposite effect was observed among participants who were merely exposed to the story. No significant effects of seeing or remembering stories about the benefits of eating spicy food or problems with a Covid-19 vaccine were observed; effects were generally in the expected direction, but did not reach statistical significance. Truthfulness ratings were correlated with behavioral intentions; participants who believed stories promoting a particular behavior (e.g. drinking coffee or eating spicy food) tended to report stronger intentions to engage in that behavior. Similarly, participants who believed stories encouraging caution about particular behaviors (e.g. downloading a contact-tracing app or getting a vaccine) were less likely to engage in that behavior in future.

“We report some evidence that exposure to fake news may ‘nudge’ behavior, however the observed effects were very small,” the researchers note. However, they raise the question of what happens when people are exposed to a fake story multiple times, over time:

It is important to note that effects in the present study are based on a single exposure to a novel fake news story. Real-world behavioral effects may arise following multiple exposures to a story; multiple sources might increase consumers’ faith in a story and thus influence their subsequent behavior. Indeed, just two exposures to a fake news story can increase its perceived truthfulness (Pennycook et al., 2018).

The study took place in Ireland, so its applicability to the United States is less clear. “Coronavirus issues are relatively politically neutral in Ireland, where the data were collected, in comparison with the U.S. where the virus has become something of a political football,” Greene told me in an email, adding:

In some of our other fake news research, using similar methods, we’ve found that acceptance of misinformation tends to be higher when the fabricated stories align with the participant’s existing views. For example, we have a paper under review at the moment examining fake news related to Brexit, in which Leave voters are more susceptible to fake news that reflects badly on Remain voters, and vice versa. The effect is also somewhat magnified if participants are first exposed to a threat to their social identity — in this case, as a Leaver or Remainer. In the case of the US, there is a highly polarized political climate which is further heightened at present as it is an election year; that could theoretically enhance the effects of fake news.

She concluded:

We certainly don’t want to state categorically that fake news is not dangerous, but we suspect that real-world behavioral effects will mostly emerge in contexts where individuals seek out many stories all advocating the same position, and which are congenial to the individual’s existing views; anti-vax or climate change denial networks would be a good example of this. What our research strongly suggests is that casual exposure to a novel fake news story is likely to have negligible effects on future behavior.

How would this apply to a well-known figure promoting a fake cure? Take, for example, Trump promoting hydroxychloroquine as a coronavirus treatment despite multiple studies showing that it doesn’t work. Or Trump musing that ingesting disinfectant might cure the virus.

“I think it’s really important to recognize the difference between ‘fake news’…as in fabricated stories shared on social media, and inaccurate scientific comments or suggestions from a political leader,” Greene said. “The news that people are hearing in the latter case is not fake. Trump really did suggest that bleach might be curative. It’s just that he was wrong to make the suggestion.”

Even in these cases, though, it’s not clear how many people actually take the president’s advice.

Researchers from Brigham and Women’s Hospital found that prescriptions for hydroxychloroquine surged in the U.S. between February 16 and April 25 after Trump praised it as a treatment for Covid-19, causing shortages for patients who actually needed it. It’s unclear, however, how many people who filled prescriptions actually took the medication.

As for reports of increased accidental bleach poisonings this spring, Greene is wary. “Bleach-related accidents in the U.S. had already spiked enormously in the weeks prior to Trump’s comments, simply because people were using much more of it than usual to disinfect their homes and workplaces,” she said. By May, the accidental poisonings had fallen sharply, either because people were being more careful with the disinfectants or because they’d simply lost interest in cleaning their homes obsessively. (It turns out deep cleaning probably isn’t all that effective in the fight against Covid-19 anyway.)

“I think it’s a mistake to jump to the conclusion that the change in accident rates is necessarily down to Trump, if for no other reason than the fact that — frankly — most people aren’t that stupid,” Greene said. “People might vote for a leader or espouse support for him for a range of personal or political reasons, but it doesn’t necessarily follow that they treat every word from his mouth as gospel, particularly when it comes to their own health.”

Bleach bottle warning by John Lodder used under a Creative Commons license.

UK readers find the government’s COVID-19 messages more misleading than actual fake news https://www.niemanlab.org/2020/06/uk-readers-find-the-governments-covid-19-messages-more-misleading-than-actual-fake-news/ https://www.niemanlab.org/2020/06/uk-readers-find-the-governments-covid-19-messages-more-misleading-than-actual-fake-news/#respond Mon, 15 Jun 2020 18:11:21 +0000 https://www.niemanlab.org/?p=183726 Studies have suggested social media is rife with disinformation, with surveys showing a high proportion of people have been exposed to false or misleading claims about COVID-19, fueling dramatic headlines.

But our six-week diary study of news audiences between April 16 and May 27 found that the vast majority of our panel of 200 participants could easily spot fake news. Stories such as the conspiracy theory that 5G is responsible for the spread of COVID-19, or the quack remedy that gargling with saltwater cures coronavirus, struck them as immediately suspect.

So it wasn’t fake news being peddled on social media or conspiracy websites that was of most concern. When we asked them about what false or misleading information about COVID-19 they had encountered, many instead referenced examples of what they saw as government or media misinformation.

Our panel of news audiences was made up of a representative mix of the UK population. We mainly asked them questions aimed at finding out about their knowledge of the pandemic and the way it was reported by news media.

Rethinking COVID-19 misinformation

When representative surveys reveal that many people have seen disinformation about COVID-19, it is not always clear what the false or misleading information they have seen was — or where it came from. What’s more, even when people are exposed to disinformation, we often assume rather than question whether it will affect their understanding of the pandemic.

But when we asked respondents about some of the most prominent false claims associated with COVID-19 disinformation they were easily detected. For example, the vast majority of participants rightly said 5G was not responsible for spreading the pandemic, that drinking more water does not kill the coronavirus and that gargling with saltwater is not a cure for COVID-19.

But while our panel could easily spot fake news, they were less aware of issues that may help them understand how the pandemic is being handled. Three in ten respondents did not know the government had failed to regularly meet its testing targets, for example.

Almost a third did not realize living in more deprived areas of England and Wales increased the likelihood of catching the coronavirus. And many participants underestimated the UK’s death toll compared to other countries and were suspicious of the UK government’s figures.

After new lockdown measures were announced in England on May 10, we also found many people did not realize they did not necessarily apply to Scotland, Wales, or Northern Ireland. Half of all respondents wrongly believed the UK government was in charge of the lockdown measures across all four nations.

Government and media misinformation

When we asked participants what counted as misinformation, some respondents mentioned discredited medical claims, such as Donald Trump believing that injecting disinfectant protects against the coronavirus. But many more told us that either government claims or the media were responsible for spreading false or misleading information. As one respondent told us:

Misinformation to me would be reading an article saying schools to go back on June 1 without many details and then finding out it’s just a phased reintroduction for certain age groups. It’s panicking many parents when that didn’t need to happen, headlines should still be brief but not misleading.

Another participant believed misinformation “related to things the government have said that have later turned out to be false, such as the PPE shortage, number of tests done, care home deaths. I think misinformation in this case relates to the media re-reporting facts that haven’t been clear by the government in statements.”

Broadcasters have long been caught up in debates about how far they fact-check government statements while ensuring that the public believe they are being impartial. When we asked respondents about this difficult balancing act, many believed a greater emphasis on fact-checking would enhance rather than undermine public trust in journalism.

As one respondent put it: “I think fact checking is more relevant than ever before because unfortunately people in power make false claims that, sadly, are believed by many people.”

Addressing public confusion

In the UK, public faith in the government’s handling of the pandemic has fallen dramatically since mid-April. We watched this develop during our six-week study from April 16 to May 27.

Over this period, participants told us they wanted more media scrutiny of government decision-making, including fact-checking dubious claims. They also wanted more experts informing media coverage rather than politicians.

If it becomes routine for government ministers to appear at the daily Downing Street press briefing without any scientific or health experts — as happened on June 6 — it may limit how far journalists can quiz those experts about their advice to ministers. As Sky News’ political correspondent Sam Coates has pointed out, with no scientific advisers present, the government is free to select the most politically convenient evidence when responding to questions.

With public confidence in the UK government plummeting, broadcasters have an increasingly important role to play in the pandemic. Over the six-week study, our respondents have consistently said they want accurate and impartial information, with journalists regularly fact-checking political statements and challenging any dubious claims.

Our research suggests broadcasters may have helped people become fairly confident in spotting egregious examples of fake news. But many participants were confused by more routine political decisions, most strikingly the lockdown measures that can affect people in England differently to Scotland, Wales and Northern Ireland.

For broadcasters to more effectively counter misinformation, our research tells us it is not only about boldly questioning what politicians say and holding the government to account. It is about identifying what people are most confused about and finding ways to raise their level of understanding about complex and contentious issues.

Stephen Cushion is a professor of journalism at Cardiff University. Maria Kyriakidou is a lecturer there. Marina Morani and Nikki Soo are research associates there. This article is republished from The Conversation under a Creative Commons license.

UK Prime Minister Boris Johnson holds a coronavirus press conference on March 12, 2020. Photo by Number 10 used under a Creative Commons license.

Unvetted scientific research about COVID-19 is becoming a partisan weapon https://www.niemanlab.org/2020/05/unvetted-scientific-research-about-covid-19-is-becoming-a-partisan-weapon/ https://www.niemanlab.org/2020/05/unvetted-scientific-research-about-covid-19-is-becoming-a-partisan-weapon/#respond Fri, 15 May 2020 12:31:53 +0000 https://www.niemanlab.org/?p=182883

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“The dangers of open-access science in a pandemic.” Preprint servers make it easy for scientists to share academic research papers before they are peer-reviewed or published, and COVID-19 is leading to a flood of research being uploaded. That can be a good thing, getting new and cutting-edge research into decision-makers’ hands quickly, writes Gautama Mehta in Coda Story. But it can also spread misinformation.

As of Thursday evening, medRxiv (pronounced “med archive”), a preprint server run in partnership by BMJ, Yale University, and Cold Spring Harbor Laboratory, had 2,740 COVID-19–related papers.

It’s not as if just anyone can upload anything to a preprint server. There are screening processes in place, Diana Kwon reported in Nature last week, and those have been enhanced in light of COVID-19:

BioRxiv and medRxiv have a two-tiered vetting process. In the first stage, papers are examined by in-house staff who check for issues such as plagiarism and incompleteness. Then manuscripts are examined by volunteer academics or subject specialists who scan for non-scientific content and health or biosecurity risks. BioRxiv mainly uses principal investigators; medRxiv uses health professionals. Occasionally, screeners flag papers for further examination by Sever and other members of the leadership team. On bioRxiv, this is usually completed within 48 hours. On medRxiv, papers are scrutinized more closely because they may be more directly relevant to human health, so the turnaround time is typically four to five days.

Sever emphasizes that the vetting process is mainly used to identify articles that might cause harm — for example, those claiming that vaccines cause autism or that smoking does not cause cancer — rather than to evaluate quality. For medical research, this also includes flagging papers that might contradict widely accepted public-health advice or inappropriately use causal language in reporting on a medical treatment.

But during the pandemic, screeners are watching for other types of content that need extra scrutiny — including papers that might fuel conspiracy theories. This additional screening was put in place at bioRxiv and medRxiv after a backlash against a now-withdrawn bioRxiv preprint that reported similarities between HIV and the new coronavirus, which scientists immediately criticized as poorly conducted science that would prop up a false narrative about the origin of SARS-CoV-2. “Normally, you don’t think of conspiracy theories as something that you should worry about,” [medRxiv and bioRxiv cofounder Richard Sever] says.

These heightened checks and the sheer volume of submissions has meant that the servers have had to draft in more people. But even with the extra help, most bioRxiv and medRxiv staff have been working seven-day weeks, according to Sever. “The reality is that everybody’s working all the time.”

MedRxiv has a disclaimer at the top of the search page: “medRxiv is receiving many new papers on coronavirus SARS-CoV-2. A reminder: these are preliminary reports that have not been peer-reviewed. They should not be regarded as conclusive, guide clinical practice/health-related behavior, or be reported in news media as established information.” But publications don’t always heed that guidance — as in the case of a much-tweeted (and then much-criticized) LA Times article claiming there was a new mutant strain of the virus, for instance.

One of the issues that factors into media coverage of preprints is that the journalists covering the coronavirus are not always science reporters. [Fiona Fox, head of the UK’s Science Media Center] told me that many of the people now reporting about preprint studies have been taken off their usual beats and “have no idea what peer review is and have no idea what a preprint is, and are having to cover this because there’s no other story in town.”

This plays into another problem posed by preprint servers: they are essentially dumps of information which require scientific expertise to adjudicate or contextualize. “Everything comes out as it’s received,” [Derek Lowe, who covers the pharmaceutical industry], told me. “There is no way to know what might be more interesting or important, and no way to find it other than by using keyword searches. It really puts people back on using their own judgment on everything at all times, and while that should always be a part of reading the literature, not everyone is able to do it well.”

Jonathan Gitlin wrote for Ars Technica earlier this month:

If a paper posted to arXiv regarding a particular flavor of subatomic particle turns out to be erroneous or flawed, no one’s going to die. But if a flawed research paper about a more contagious mutation of a virus in the middle of a global pandemic is reported on uncritically, then there really is the potential for harm.

Indeed, this is not an abstract fear. We are in the middle of a global pandemic, and a recent study in The Lancet found that much of the discussion (and even policymaking) about COVID-19’s transmissibility (also known as R0) during January 2020 was driven by preprints rather than peer-reviewed literature.

Nobody claims that the conventional peer-review process is perfect. And “a kind of de facto real-time peer review has emerged in the comment sections of preprint studies, as well as in discussions on Twitter,” Mehta notes. “These are precisely the places where large numbers of scientists gathered to discuss the flaws in the Indian study on similarities between the coronavirus and HIV before it was retracted.”

In the worst-case scenarios, though, scientific research may also be becoming a partisan weapon, Northeastern’s Aleszu Bajak and Jeff Howe wrote in The New York Times this week.

Conspiracy theories and election disinformation on TikTok. Rolling Stone’s EJ Dickson found COVID-19–related conspiracy theories in abundance on TikTok:

Some of the most popular videos exist at the nexus of anti-vaccine and anti-government conspiracy theorist content, thanks in part to the heightened presence of Qanon accounts on the platform. One video with more than 457,000 views and 16,000 likes posits that Microsoft’s founding partnership in digital ID program ID2020 Alliance is targeted at the ultimate goal of “combining mandatory vaccines with implantable microchips,” with the hashtags #fvaccines and #billgates. Another popular conspiracy theory, among evangelicals in particular, involves the government attempting to place a chip inside unwitting subjects in the form of a vaccine. Some Christians view this as the “Mark of the Beast,” a reference to a passage in Revelations alluding to the mark of Satan. The #markofthebeast hashtag has more than 2.3 million combined views on TikTok, and some videos with the hashtag have likes in the tens of thousands.

On The Verge’s podcast this week, Alex Stamos, director of the Stanford Internet Observatory and former chief security officer for Facebook, talked about bad actors on TikTok (“If I was the Russians right now, I would put all of my money, all of my effort behind TikTok and Instagram”).

“Over one-quarter of the most viewed YouTube videos on COVID-19 contained misleading information.” Researchers in Ottawa screened the top COVID-19–related, English-language YouTube videos on March 21 and found that more than a quarter of them contained inaccurate information. The sample size here is small: The researchers started out with an original set of 150 videos, but after screening for duplicates, non-English language, a lack of audio, and so on, they had 69 videos to work with; those videos had been viewed over 62 million times. More than 25 percent of the videos contained “non-factual” information — including inaccurate statements (“A stronger strain of the virus is in Iran and Italy”), racism (“Chinese virus”), and conspiracy theories. “Government and professional videos” contained factual information but “only accounted for 11 percent of videos and 10 percent of views.”

Illustration by Andrey Osokin used under a Creative Commons license.

These are the four waves of journalism studies over the past 20 years: the participatory, crisis, platforms, and populist eras https://www.niemanlab.org/2020/03/these-are-the-four-waves-of-journalism-studies-over-the-past-20-years-the-participatory-crisis-platforms-and-populist-eras/ https://www.niemanlab.org/2020/03/these-are-the-four-waves-of-journalism-studies-over-the-past-20-years-the-participatory-crisis-platforms-and-populist-eras/#respond Mon, 23 Mar 2020 17:38:09 +0000 https://www.niemanlab.org/?p=181169 <academia-alert class="italian-policomm-journal">

It was about 20 years ago that the academic field of “journalism studies” came into rough early shape. Sure, there were people who studied journalism long before then — but for the most part, they were doing so from the intellectual home of another field. They were sociologists, economists, political scientists, communications scholars, or part of some other academic sub-brand — who chose to study some element of journalism.

It was in 2000 that the International Communication Association created a Journalism Studies division and two journals were founded: Journalism: Theory, Practice, and Criticism and Journalism Studies.

It has also been 20 years since the founding of the Italian journal Comunicazione politica (Political Communication, if it wasn’t obvious). To mark the twin anniversaries, the journal is out with a new special issue in which it asked a variety of scholars one question: “What does it mean, from your scholarly viewpoint, to study political communication today?”

I want to highlight one of their responses, this paper by C.W. Anderson, a past Nieman Lab contributor and now professor of media and communication at the University of Leeds. In it, Chris tries to sum up the past 20 years of journalism studies along two axes, one time-based and one geographic.

It serves, for me at least, as a very nice intellectual overview of what People Who Talk About Journalism — both pure academics and those of us at sites like Nieman Lab — have been yammering on about for all this time.

Most interesting to me is the time axis: “Since the late 1990s, I would argue that we have actually seen at least four ‘eras’ come and go as online journalism, and the larger culture in which it is embedded, have evolved.”

  • The participatory era. “The early years of the internet were marked by an excitement that the relatively low costs of digital content production, combined with the ease through which such content could be distributed, would mark a flourishing of creative practices more generally.”

    Think Clay Shirky, Henry Jenkins, Yochai Benkler — mashups on the rise, new networks always being born, the Internet as a tool for media democratization. Their ideas “combined legal, economic, and socio-cultural strands of scholarship to sketch a 21st-century information utopia in which a relatively bottom up stream of digital content circulated relatively friction free, could be combined with other cultural products, and would be enabled by a relatively permissive copyright regime.”

    It’s a pretty American idea — the ever-widening marketplace of ideas, “in which the more ideas in circulation at any one time, the greater the likelihood that truth would emerge from an open and transparent clash of perspectives.”

    The core scholarly and popular concerns during the participatory era might be summed up by a question which was once meant seriously and now has become something of a joke in media sociology circles: “Is blogging journalism?” This question, though now rather silly, gets at a fundamental intellectual preoccupation of the participatory era. In a world where everyone can, at least in theory, contribute bits of factual media content to the public realm, what separates professional journalism (with its low barriers to entry, lack of mechanisms of occupational exclusion, and seemingly simple forms of work and content production) from the fact-generating activities of ordinary people?

  • The crisis era. Nearly every anguished discussion of “Are bloggers journalists????” took place in an environment where the traditional business of professional journalism was in a state of financial collapse, especially during the global financial crisis. “Once again, the dominant discourse about these developments in the journalism studies field were American. In the United States, between 2003 and 2015, news print advertising revenue plummeted by more than 50%. Newsroom employment, likewise, was down by 30% during the same time, dropping to a level not seen since 1978.”

    The American economic crisis in journalistic production, finally, drew intellectual sustenance from the explosion of participatory media that began a decade before.

    If part of the destruction of the traditional business model for journalism was the dramatic crash of the value of display advertising due to an unlimited supply of digital inventory — as theorists like Shirky argued — then the blame for this could in part be laid at the feet of the thousands of media makers that populated this new digital space. While the primary impact of the radical media makers discussed in the previous section was psychological, cultural, and professional, in other words, there was a powerful line of argument that drew additional economic consequences from these professional shifts.

    As we will see, however, this was an incorrect argument. The most important culprits in the collapse of the newsroom business model were not increased competition, but rather the efficiency of hyper-personalized digital advertising and the concentrated market power of digital platforms.

    The economic problem for journalism was not competition, in other words, but surveillance and monopoly.

  • The platform era. Okay, so we’ve gone from “random bloggers are a threat to journalists’ sense of authority” to “digital media is a threat to news organizations’ business models” to “FACEBOOOOOOK! [shakes fist at the sky].” The web had opened up an entirely new universe of information sources — blogs, digital-native publishers, and so on — but consumption of news was still heavily driven by a reader’s initiative. Say, typing nytimes.com into her web browser and seeing what stories editors have decided to highlight.

    It took social media platforms inserting themselves in that directing-audience-attention layer to launch a new era, in which the real threat was not the individual citizen opining online, but the Silicon Valley megacorporations that had used the aggregation of all that opining to bring a new degree of market power to what had, not that many years before, been hyped as a definitionally “open” medium.

    I think the shift from talking about the blogosphere as an economic and professional threat to journalism to talking about Facebook as a similar threat marks an evolution from an academic perspective on digital news that thinks primarily in competition/speech/free-market terms to one that thinks in terms of institutional power and monopoly.

    As I noted above, this marks the emergence of a more genuinely “European” (or at least non-American) view on the relationship between digital news and political communication. This evolution also sheds light on some deep ideological blind spots embedded in the first wave of theorizing about the crisis in journalism, one which relates, again, to its American roots.

    The original perspective saw the economic crisis in news as caused by an explosion in content supply and a corresponding collapse in the value of display advertising generated by digital overabundance. From this point of view, the decline in the economic fortunes of old media organizations could be seen as the revenge of the free-market on hidebound news monopolies, even if the public utility of these monopolies could still be justified in normative terms. Both advertisers and readers now had their choice of news.

    If our attention shifts to platforms, however, what we see is the replacement of one (local) quasi-monopoly by another (global) monopoly — from the one newspaper town to Facebook. The first perspective fits well with the libertarian and law and economics perspective of much writing about the early internet. The second point of view does not.

  • The populist era. This final era is tied intimately to the rise of candidates like Donald Trump in the United States and nationalist/quasi-nationalist movements like Brexit. This has meant for some a revived interest in news organizations’ role as institutions setting boundaries around the least-generous aspects of human behavior — as well as digital platforms’ role in promoting them.

    How are populist political actors using the affordances of social media and platforms to shift the meanings of, and the participants in, electoral politics? Are state actors, disguised as ordinary citizen journalists or professional news reporters, hijacking the public discourse for nefarious purposes?

    These concerns also tie into the renewed focus on digital platforms like Facebook, and demonstrate just how much the debate over the future of journalism has changed over the past ten years. From a somewhat abstract debate about who counts as a journalist online and offline, the surge of so-called “fake news” has injected concerns about national security, cyberspying, enemy propaganda, and the toxic power of trolls into arguments about the boundaries of the journalism profession.

    This wave has also inverted the “formerly utopian journalism scenarios” of the participatory era, which “are being stood on their head under the pressure of the populist and right-wing wave sweeping the nations of the liberal west.”

I’ll leave Anderson’s geographic lens for the interested reader; it looks at the jockeying between different academic approaches in both the United States and elsewhere.

(It also features this very good footnote: “I realize that an American scholar dividing a field of scholarship into the ‘American’ side and the ‘rest of the world’ is poor form…In this I can only plead realpolitik: the dominance of American scholarship in political communication, political science, and journalism studies is an empirical fact, even if it is not a normative ideal.”)

You can also check out the rest of the special issue; pre-prints are available ungated for some unspecified period of time. The other articles:

  • “Political Communication Today: The Perspective of a Political Scientist Who Studies Public Opinion and Electoral Behavior,” Hanspeter Kriesi
  • “Interrogating the analytical value of ‘media system’ for comparative political communication,” Silvio Waisbord
  • “Three Consequences of Big Data on the Practices and Scholarships of Political Communication,” Fabio Giglietto
  • “Cognitive and Psychosocial Factors in Online Political Communication,” Patrizia Catellani
  • “Political Discourse: A Perspective from Italian Linguistics,” Stefano Ondelli
  • “What Can Semiotics Do for Political Communication?,” Giovanna Cosenza
  • “Reflecting on New Media, Post-Truth and Affect through the Lenses of Cultural, Literary and Discourse Studies,” Lidia De Michelis
  • “Popular Culture and Political Communication,” John Street

</academia-alert>

Illustration of waves by Mario De Meyer used under a Creative Commons license.

Who needs deepfakes? Simple out-of-context photos can be a powerfully low-tech form of misinformation https://www.niemanlab.org/2020/02/who-needs-deepfakes-simple-out-of-context-photos-can-be-a-powerfully-low-tech-form-of-misinformation/ https://www.niemanlab.org/2020/02/who-needs-deepfakes-simple-out-of-context-photos-can-be-a-powerfully-low-tech-form-of-misinformation/#respond Mon, 24 Feb 2020 14:37:06 +0000 https://www.niemanlab.org/?p=180197 When you think of visual misinformation, maybe you think of deepfakes — videos that appear real but have actually been created using powerful video editing algorithms. The creators edit celebrities into pornographic movies, and they can put words into the mouths of people who never said them.

But the majority of visual misinformation that people are exposed to involves much simpler forms of deception. One common technique involves recycling legitimate old photographs and videos and presenting them as evidence of recent events.

For example, Turning Point USA, a conservative group with over 1.5 million followers on Facebook, posted a photo of a ransacked grocery store with the caption “YUP! #SocialismSucks.” In reality, the empty supermarket shelves have nothing to do with socialism; the photo was taken in Japan after a major earthquake in 2011.

In another instance, after a global warming protest in London’s Hyde Park in 2019, photos began circulating as proof that the protesters had left the area covered in trash. In reality, some of the photos were from Mumbai, India, and others came from a completely different event in the park.

I’m a cognitive psychologist who studies how people learn correct and incorrect information from the world around them. Psychological research demonstrates that these out-of-context photographs can be a particularly potent form of misinformation. And unlike deepfakes, they are incredibly simple to create.

Out-of-context photos are a very common source of misinformation. In the days after the January Iranian attack on U.S. military bases in Iraq, reporter Jane Lytvynenko at BuzzFeed News documented numerous instances of old photos or videos being presented as evidence of the attack on social media. These included photos from a 2017 military strike by Iran in Syria, video of Russian training exercises from 2014 and even footage from a video game. In fact, out of the 22 false rumors documented in the article, 12 involved this kind of out-of-context photo or video.

This form of misinformation can be particularly dangerous because images are a powerful tool for swaying popular opinion and promoting false beliefs. Psychological research has shown that people are more likely to believe true and false trivia statements, such as “turtles are deaf,” when they’re presented alongside an image. In addition, people are more likely to claim they’ve previously seen freshly made-up headlines when they’re accompanied by a photograph. Photos also increase the numbers of likes and shares that a post receives in a simulated social media environment, along with people’s beliefs that the post is true.

And pictures can alter what people remember from the news. In an experiment, one group of people read a news article about a hurricane accompanied by a photograph of a village after the storm. They were more likely to falsely remember that there were deaths and serious injuries compared to people who instead saw a photo of the village before the hurricane strike. This suggests that the false pictures of the Jan. 2020 Iranian attack may have affected people’s memory for details of the event.

There are a number of reasons photographs likely increase your belief in statements. First, you’re used to photographs being used for photojournalism and serving as proof that an event happened. Second, seeing a photograph can help you more quickly retrieve related information from memory. People tend to use this ease of retrieval as a signal that information is true.

Photographs also make it easier to imagine an event happening, which can make it feel more true.

Finally, pictures simply capture your attention. A 2015 study by Adobe found that posts that included images received more than three times as many Facebook interactions as posts with just text.

Journalists, researchers and technologists have begun working on this problem. The News Provenance Project, a collaboration between The New York Times and IBM, released a proof-of-concept strategy for how images could be labeled to include more information about their age, where they were taken, and their original publisher. This simple check could help prevent old images from being used to support false information about recent events.

In addition, social media companies such as Facebook, Reddit, and Twitter could begin to label photographs with information about when they were first published on the platform.
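
To make the labeling idea concrete, here is a minimal sketch of the kind of record such a label might carry. It is an illustration only; the field names and example values are assumptions of mine, not the News Provenance Project's actual schema or any platform's API.

```python
# Illustrative only: a hypothetical provenance record for a news photo,
# covering the details mentioned above (age, location, original publisher)
# plus the date a platform first saw the image. All field names are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class PhotoProvenance:
    caption: str
    original_publisher: str
    date_taken: date
    location_taken: str
    first_published_url: str
    first_seen_on_platform: date

example = PhotoProvenance(
    caption="Empty supermarket shelves after the 2011 Japan earthquake",
    original_publisher="Example Wire Service",  # hypothetical publisher
    date_taken=date(2011, 3, 12),               # hypothetical date
    location_taken="Japan",
    first_published_url="https://example.com/2011/03/quake-photos",  # hypothetical URL
    first_seen_on_platform=date(2011, 3, 13),
)
print(example)
```

The hard part, of course, is getting fields like these filled in reliably at publication time, not defining them.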

Until these kinds of solutions are implemented, though, readers are left on their own. One of the best techniques to protect yourself from misinformation, especially during a breaking news event, is to use a reverse image search. In Google Chrome, it’s as simple as right-clicking on a photograph and choosing “Search Google for image.” You’ll then see a list of all the other places that photograph has appeared online.
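
If you want to check several images at once, the same lookup can be scripted. Below is a minimal sketch that simply builds search URLs and opens them in your default browser; the query-string formats for Google and TinEye are assumptions based on how those services have historically accepted image URLs, and they may change.

```python
# Illustrative sketch: open reverse-image-search results for an image URL.
# The query formats below are assumptions about Google's and TinEye's URL
# parameters, not stable, documented APIs.
import webbrowser
from urllib.parse import quote

def reverse_search(image_url: str) -> None:
    encoded = quote(image_url, safe="")
    webbrowser.open("https://www.google.com/searchbyimage?image_url=" + encoded)
    webbrowser.open("https://tineye.com/search?url=" + encoded)

# Hypothetical image URL, for illustration only.
reverse_search("https://example.com/images/viral-photo.jpg")
```

Either way, the point is the same as the manual version above: see where else the image has appeared before trusting the caption attached to it.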

As consumers and users of social media, we have a responsibility to ensure that the information we share is accurate and informative. By keeping an eye out for out-of-context photographs, you can help keep misinformation in check.

Lisa Fazio is an assistant professor of psychology at Vanderbilt University. This article is republished from The Conversation under a Creative Commons license.

Apologies to René Magritte and his “The Treachery of Images.”

There are lots of ways to combat misinformation. Here are some creative ones from across three continents https://www.niemanlab.org/2020/02/there-are-lots-of-ways-to-combat-misinformation-here-are-some-creative-ones-from-across-three-continents/ https://www.niemanlab.org/2020/02/there-are-lots-of-ways-to-combat-misinformation-here-are-some-creative-ones-from-across-three-continents/#respond Thu, 20 Feb 2020 18:59:56 +0000 https://www.niemanlab.org/?p=180266 There are many studies on misinformation and ways to combat it, but they’re often focused on traditional reporters and editors. In four new reports published today, Full Fact, an independent fact-checking charity in the United Kingdom, partnered with Africa Check (which fact-checks in several countries on the continent) and Argentina’s Chequeado to analyze academic research and fact-checking experiments in the three regions and to recommend how members of the public sector (politicians, health officials, educators, etc.) can contribute to correcting and limiting the spread of bad information.

“From well-trodden conspiracies on climate science, to creative but downright dangerous “beauty hacks” circulating on social media, tackling bad information can be daunting,” the release said. “One of our colleagues put it best in his book: a lie can travel halfway around the world while the truth is still getting its boots on. We know the consequences: bad information ruins lives.”

But first, some of the relevant findings for journalists include:

  • Photos and videos can be engaging, but the best way to inform a reader is through a jargon-free written story. A study of story structure with 210 participants found that readers retained the most information from stories written in the inverted pyramid structure. Story formatting is also influential, so stick to short paragraphs where you can.
  • While it’s true that older adults and adults without a college education find it more difficult to separate fact from opinion, “We all find it harder to remember the source of stories we encounter on social media. We tend to believe rumours which are repeated, easy to process, and those which align with our existing worldviews. Above all, we all have a part to play in the quality of public debate.”
  • Fact-checking has had a positive influence on both politicians and journalists, and qualitative studies show that it has prompted politicians and news organizations to correct themselves and to not repeat false information.

Intervention programs across different age demographics in three countries (Uganda, Argentina, and the U.K.) also yielded positive results. Some examples:

In Uganda, the organization Informed Health Choices designed educational programs for both children and adults in order to help them make better decisions about healthcare. The children learned about critical thinking tactics through a textbook formatted as a comic book, posters, a song, and a workbook. The adults were required to listen to a podcast and were given a summary checklist.

Tests administered at the end of the experiment suggested that, overall, the intervention had been successful in raising awareness of health misinformation. Presented with a series of multiple choice questions designed to replicate real-life health choices, 69% of the students in the intervention group passed, by getting at least half the answers right, compared to only 27% in the control condition. Similar results emerged from testing parents’ learning. A total of 71% of adults who listened to the tailored podcast passed the multiple choice test, compared to 38% in the control group.

In Argentina, Chequeado recruited more than 3,000 of its adult readers to participate in a 15-minute online program that would teach participants how to spot fact-checkable claims. First, participants were given 16 statements and had to rate whether they could be checked. After each answer they were told why their answer was correct or incorrect. Then, using a fake political speech, participants were asked to rate which of the statements could be fact-checked.

The training had a small but statistically significant effect on participants’ ability to identify if statements contained checkable facts. Overall, controlling for the effects of gender, age, profession and political affiliation, participants in the experimental condition scored 4% higher than those in the control condition…Online interventions are worth considering. The fact that the training proposed was fairly simple and only took 15 minutes of readers’ time is particularly interesting. Adults may not necessarily need the structure of classroom environments. As this study indicates, education may also come in small doses of online training, which can be integrated in their everyday media consumption practices.

In the U.K., the National Literacy Trust simulated a newsroom for children ages 9 to 11. Results from working with 2,400 students from 500 schools show that teaching media literacy through simulation can be a viable option:

As many as 70% of students reported thinking about the importance of fact checking after the workshop, compared to 52% before. Similarly, the workshops appeared to increase confidence in students’ ability to assess the quality of news. A third of students (33%) reported finding it difficult to tell if a news story was trustworthy after the workshop, compared to almost half (49%) before.

Simulation techniques can also work on adults. The organization also developed a 15-minute game that required players to assume “the role of a fake news reporter” so that they could learn about misinformation tactics and later apply that knowledge in the real world.

Applied to a large but self-selecting sample of 14,000 participants, the study found that the game made a significant contribution to players’ ability to spot inaccurate news. Tested before and after, with questions that asked them to rate the reliability of tweets and headlines, participants were significantly better at identifying unreliable information after playing. Notably, the authors observed the highest effects for participants who were also most likely to be vulnerable to false news in the first phase.

Find the full briefings in English here and in Spanish here.

Ctrl-F: Helping make networks more resilient against misinformation can be as simple as two fingers https://www.niemanlab.org/2020/01/ctrl-f-helping-make-networks-more-resilient-against-misinformation-can-be-as-simple-as-two-fingers/ https://www.niemanlab.org/2020/01/ctrl-f-helping-make-networks-more-resilient-against-misinformation-can-be-as-simple-as-two-fingers/#respond Wed, 29 Jan 2020 14:35:39 +0000 https://www.niemanlab.org/?p=179483

Editor’s note: Fellow Mac people, just imagine “Command-F” in place of every “Ctrl-F.”

In the misinformation field, there’s often a weird dynamic between the short-term and long-term gains folks.

Maybe I don’t go to the right meetings. But if, say, you went to a conference on structural racism and talked about redesigning the mortgage interest deduction to help build black wealth, my guess is most people there would be fine with yes-anding it. Let’s get that done short-term, and we can do other stuff long-term. Put it on the road map.

In misinformation, however, the short-term and long-term people are perpetually at war. It’s as if you went to that structural racism conference, talked about revising the mortgage deduction, and someone asked you how that freed children from cages on the border. And when you said it didn’t, they threw up their hands and said: “See?”

Ctrl-F as an example of a targeted approach

Here’s an example: Ctrl-F. In my classes, I teach our students to use Ctrl-F to find stuff on web pages. And I beg other teachers to teach Ctrl-F as well.

Some folks look at that and say: That’s ridiculous, Mike, you’re not going to de-radicalize Nazis by teaching people Ctrl-F. It’s not going to address cognitive bias. It doesn’t give them deep critical thinking powers, and it doesn’t undo the resentment that fuels disinformation’s spread.

But consider the tactics used by propagandists, conspiracy theorists, bad actors, and the garden-variety misinformed. Here’s a guy with 126,000 followers yesterday implying that the current coronavirus outbreak is potentially a bioweapon developed with the help of Chinese spies. (That’s how I read the implication, at least.)

Screenshotted tweet links to a CBC article and claims it describes a husband and wife who were Chinese “spies,” removed from a facility for sending pathogens back to China.

Now is that true? The tweet links to the CBC, after all. That’s a reputable outlet.

The first thing you have to do to verify it is click the link. And right there, most students don’t know they should do that. They really don’t. It’s where most students fail, actually, their lack of link-clicking.

But the second thing you have to do is see whether the article actually supports that summary. How do you do that?

Well, you could advise people to fully read the article — in which case zero people are going to do that because it takes too long to do for every tweet or email or post. And if it takes too long, the most careless people in the network will tweet unverified claims (because they’re comfortable not verifying) and the most careful people will tweet nothing (because they don’t have time to verify to their level of certainty).

Multiply that out over a few hundred million nodes and you get the web as we have it today: a victim of the Yeats Effect. (“The best lack all conviction, while the worst / Are full of passionate intensity.”). The reckless are happy to post constantly, and the careful barely post at all.

The Yeats Effect is partly about time disparities

One important reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need. It’s a knotty problem because that higher level of care is precisely what makes their participation in the network so beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment).

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they’re claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted?

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called Ctrl-F. Roll the animated GIF!

In the GIF above, we see someone checking whether key terms in the tweet about the virus researchers were actually found in the article. Here we check “spy,” but we can quickly follow up with other terms: coronavirus, threat, steal, send.

I just did this for the tweeted article, and repeatedly those terms are found either not at all or in links to other unrelated stories. (Except for “threat,” which turned up a paragraph that says the opposite of what the tweet alleges.)

The idea here is not that if those specific words aren’t found, then the contextualization is wrong. But rather than reading every article cited to determine whether it’s been correctly represented and contextualized, a person can quickly identify cases that have a high probability of being miscontextualized and might, therefore, be worth the effort to correct.

And for every case like this, where it’s a reckless summary, there are maybe 10 other cases where the first term helps the user verify it’s good to share. Again, in less than a few seconds.
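
For anyone curious what that triage looks like when scripted rather than done by hand, here is a minimal sketch of the same idea. It is my own illustration, not part of the original piece, and it assumes the linked article is plain, publicly fetchable HTML (no paywall or JavaScript rendering).

```python
# A rough, illustrative sketch of the Ctrl-F triage idea: fetch a linked
# article and check which of a claim's key terms actually appear in its text.
# Requires the requests and beautifulsoup4 packages.
import requests
from bs4 import BeautifulSoup

def term_triage(url: str, terms: list[str]) -> dict[str, bool]:
    html = requests.get(url, timeout=10).text
    # Strip the markup and lowercase everything so the check behaves like a
    # forgiving Ctrl-F over the article's visible text.
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True).lower()
    return {term: term.lower() in text for term in terms}

# Hypothetical example: key terms pulled from a tweet's summary of a linked story.
hits = term_triage("https://example.com/some-article", ["spy", "coronavirus", "steal", "send"])
for term, found in hits.items():
    print(f"{term!r}: {'found' if found else 'not found'}")
```

As above, a term that never appears is not proof the summary is wrong; it is just a signal that the link deserves a closer read before sharing.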

But people know this, right?

Now here’s the kicker. You might think that, since this sort of verification triage is so easy to do, we’d be in a better situation than we are today when it comes to misinformation.

One theory is that people know about Ctrl-F, but they just don’t care. They like their disinformation, they can’t be bothered. (I know there’s an issue with doing these sorts of searches on mobile, too, but that’s another post.) If everybody knows about Ctrl-F and doesn’t do it, isn’t that just more evidence that we’re not looking at a skills issue?

Except, if you were going to make that argument, you’d have to show that everybody really does know about Ctrl-F. That wouldn’t be the end of the argument — I could reply that knowing and having a habit are different things — but that’s where we’d start.

So think for a minute. How many people know that you can use Ctrl-F or related functions to search a page? What percentage of internet users? How close to 100 percent is it? What do we have to work with?

Eh, I can’t drag out the suspense any longer. Barely anyone uses Ctrl-F or knows what it does.

Here’s an older finding (2011) from an internal Google survey: Only 10 percent of internet users know how to use Ctrl-F. (“90 percent of the U.S. Internet population does not know that,” said Google search anthropologist Dan Russell, noting it was “a sample size of thousands.” “I do these field studies and I can’t tell you how many hours I’ve sat in somebody’s house as they’ve read through a long document trying to find the result they’re looking for. At the end I’ll say to them, ‘Let me show one little trick here,’ and very often people will say, ‘I can’t believe I’ve been wasting my life!’”)

After Google’s number came out, Mozilla (the makers of Firefox) examined its own user data. Mozilla (with permission, of course) tracked the behavior of 69,000 Windows users over a seven-day period. In that span, 81 percent of users didn’t use Ctrl-F even once. This was a technologically savvy group of people — people who had chosen to install not just Firefox but a beta version of the browser. And even among them, using Ctrl-F was still very much a minority taste.

And the people who don’t use keyboard shortcuts like Ctrl-F are not a random sample of the population. They are disproportionately people who are less comfortable using computers and digital interfaces. In 2004, when researchers at Rice specifically surveyed people who used or didn’t use keyboard shortcuts, they found that the average shortcut user spent 20 hours more a week using computers than non-users. They also rated their level of expertise with computers much higher than non-users (6.7 vs. 3.2 on a 1-10 scale). People with lower digital skills are often among those most susceptible to misinformation.

I’ve looked for more recent studies and haven’t found much — though I’d expect the widespread transition away from hardware keyboards to phones and tablets has only reduced knowledge.

But I do know that in my classes, many-to-most students have never heard of Ctrl-F. Another portion is aware it can be used in things like Microsoft Word, but unaware it’s a cross-application feature available in web browsers too. When I look over students’ shoulders as they execute web search tasks, I repeatedly find them reading every single word of a document to answer a specific question about its content. In a class of 25 or so, there might be one student who already uses Ctrl-F naturally at the beginning of the class.

People have a limited amount of effort they’ll expend on verification, and the lack of knowledge here may be as big a barrier as other cognitive biases. Why we aren’t vigorously addressing issues like this in order to build a more resilient information network (or even to just help students study efficiently!) is something I continue to not understand. Yes, we have big issues. But can we take five minutes and show people how to search?

The Wuhan coronavirus is the latest front for medical misinformation. How will China handle it? https://www.niemanlab.org/2020/01/the-wuhan-coronavirus-is-the-latest-front-for-medical-misinformation-how-will-china-handle-it/ https://www.niemanlab.org/2020/01/the-wuhan-coronavirus-is-the-latest-front-for-medical-misinformation-how-will-china-handle-it/#respond Fri, 24 Jan 2020 16:07:20 +0000 https://www.niemanlab.org/?p=178974

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Coronavirus and misinformation. The Wuhan Coronavirus has infected more than 800 people, mostly in and around Wuhan, China, and killed at least 26. (This morning, a second case was confirmed in the United States, in Chicago. The virus has also been found in Japan, South Korea, Thailand, Singapore, Taiwan, and Vietnam.)

As was the case with the Ebola virus, the coronavirus outbreak is responsible for the spread of a lot of misinformation, although it’s early enough in the epidemic that we don’t yet have tons of info on what that misinformation looks like. From The Wall Street Journal:

A post circulating on the popular messaging app WeChat suggested that cities where patients had fallen sick should set off fireworks to kill the disease in the air. Another viral post declared vinegar and indigowoad root — a flower commonly used in Chinese medicine — to be the “golden pair,” or ideal solution, in preventing infection, forcing China’s cabinet-level National Health Commission to clarify in its own social-media post that the “golden pair” wouldn’t fend off the deadly virus.

The Chinese government’s response to (mis)information about the epidemic is an interesting complicating factor to watch. Back on January 3, the BBC reported:

There has been speculation on social media about a possible connection to the highly contagious disease. Wuhan police said eight people had been punished for “publishing or forwarding false information on the internet without verification.”

Poynter’s Cristina Tardáguila and Summer Chen, editor-in-chief at Taiwan FactCheck Center, note:

More than 20 days have passed since those detentions, and still the world doesn’t know much about what occurred with that group. Were these people actually false news producers? Or were they just sharing content about what is now known as the 2019 coronavirus?

Tardáguila and Chen haven’t been able to find out much about the eight people who were detained. They note that the case has been written about briefly on Weibo by Hu Xijin, the editor-in-chief of the Global Times, a state-run media outlet. (He’s also been tweeting about the broader situation this week.)

Facebook allows “rampant climate denialism” around the Australian wildfires. Read this along with my colleague Hanaa’ Tameez’s recent reporting on how YouTube pushes misinformation about climate change: BuzzFeed reported this week, using data from CrowdTangle, that “during the worst of the [Australian bushfires, which have burned more than 42 million acres, destroyed thousands of buildings and homes, and killed more than a million animals], far-right, fringe and conspiratorial Facebook pages were enjoying unusual success by spreading content that misdirected blame away from climate change.”

In some cases, viral climate-denying content appears to have been used as part of a successful publishing strategy — to take advantage of huge interest in the fires, sow doubt about climate change and increase a page’s audience at the same time.

One climate denial page, “Climate Change LIES,” had a big start to January. In the week starting January 5, it published 36 posts — more than double its average number. Its page likes jumped by an unusually high number that week, 132.

Its most successful post that week blamed arson for the fires, explicitly spelling out that the cause was “not global warming.” The post linked to, and incorrectly described, a Sydney Morning Herald op-ed with a headline ripe for confusion — the figures in the article were about fires more broadly and not the bushfires specifically. The post was shared over 1,100 times, 33 times more than an average post on the page.

That same (still) poorly headlined article has been tweeted more than 3,500 times. Here’s the misleading way it appears on Twitter:

Separately, Media Matters for America has covered the ways that mainstream media has ignored or downplayed the fires’ connection to climate change, especially Rupert Murdoch’s The Australian. Rupert Murdoch’s younger son James and his wife Kathryn have criticized (via spokesperson) the elder Murdoch’s news outlets’ coverage of the crisis, with said spokesperson recently telling the Daily Beast that “Kathryn and James’ views on climate are well established and their frustration with some of the News Corp and Fox coverage of the topic is also well known. They are particularly disappointed with the ongoing denial among the news outlets in Australia given obvious evidence to the contrary.”

If you’re a female politician in India, getting trolled comes with the job. CNN writes up a new report from Amnesty International that tracked the Twitter mentions of 95 Indian female politicians and found that “about one in seven tweets sent to the women were abusive or problematic” — in other words, Indian female politicians receive “nearly twice the amount of trolling experienced by their female counterparts in the United States and United Kingdom.”

[Dr. Debarati Halder, managing director of the Center For Cyber Victim Counseling and co-author of Cyber Crimes Against Women in India], whose research has looked at the trolling and abuse of women politicians, journalists, celebrities and activists, says that India’s patriarchal social structure has taken on a new dimension online, where men vandalize women’s internet profiles, use filthy language to describe their sex appeal, publish intimate images without their consent or share doctored imagery — known as “deepfakes” — depicting them in pornography.

India’s youngest parliamentarian, Chandrani Murmu, was subjected to such a “deepfake,” with her face superimposed onto an obscene video, before she was elected last year.

Photo of passengers deboarding a Wuhan–Tokyo flight January 23 for quarantine inspection by The Yomiuri Shimbun via AP.

Instagram is busy fact-checking memes and rainbow hills while leaving political lies alone https://www.niemanlab.org/2020/01/instagram-is-busy-fact-checking-memes-and-rainbow-hills-while-leaving-political-lies-alone/ https://www.niemanlab.org/2020/01/instagram-is-busy-fact-checking-memes-and-rainbow-hills-while-leaving-political-lies-alone/#respond Fri, 17 Jan 2020 15:08:31 +0000 https://www.niemanlab.org/?p=179171 It’s a tough line for Instagram to walk as it tries to filter out misinformation and bad-faith faked images while leaving art — you know, art-art, the good kind — alone. Instagram users also called out the platform this month for fact-checking a Warren Buffett meme while leaving alone politicians’ lies and political ads. (Trump’s approval rating with Republicans is nowhere near as high as he says it is; his overall approval rating is currently around 42 percent.)

The downside of slapping on that big publisher logo. New academic journal about misinformation? Yes, thank you. The Shorenstein Center at Harvard’s Kennedy School just launched the Misinformation Review, for which “content is produced and ‘fast-reviewed’ by misinformation scientists and scholars, released under open access, and geared towards emphasizing real-world implications.” There’s a bunch of good stuff in the first issue — here is some:

Emphasizing the publisher of an article doesn’t do much to make people better B.S. detectors. Various trusting-news initiatives have suggested that adding more context to news stories shared on social media can make people more likely to trust them. Seems logical, right? Facebook rolled out a feature that includes this information in 2018.

But it turns out that “increasing the visibility of publishers is an ineffective, and perhaps even counterproductive, way to address misinformation on social media,” Nicholas Dias, Gordon Pennycook, and David Rand find in a new study in which they showed participants real headlines from social media, in Facebook’s format. In some of the cases, publisher information was emphasized; in others, it was removed.

They found no effect:

We found that publisher information had no significant effects on whether participants perceived the headline as accurate, or expressed an intent to share it — regardless of whether the headline was true or false. In other words, seeing that a headline came from a misinformation website did not make it less believable, and seeing that a headline came from a mainstream website did not make it more believable.

In a follow-up survey, the researchers found that

providing publisher information only influenced headline accuracy ratings when headline plausibility and publisher trust were “mismatched” — for example, when a headline was plausible but came from a distrusted publisher (e.g., fake-news or hyperpartisan websites).

In these cases of mismatch, identifying the publisher reduced accuracy ratings of plausible headlines from distrusted publishers, and increased accuracy ratings of implausible headlines from trusted publishers.

However, when we fact-checked the 30% of headlines from distrusted sources in our set that were rated as plausible by participants, we found they were mostly true. In other words, providing publisher information would have increased the chance that these true headlines would be mistakenly seen as false — raising the possibility of unintended negative consequences from emphasizing sources.

The lesson? “These observations underscore the importance of social media platforms and civil society organizations rigorously assessing the impacts of interventions (source-based and otherwise), rather than [implementing] them based on intuitive appeal.”

The number of Americans who believe misinformation about vaccines is relatively high. People who get their information about vaccines from social media are more likely to believe misinformation than people who get their information from traditional media.

Dominik Andrzej Stecula, Ozan Kuru, and Kathleen Hall Jamieson surveyed a nationally representative sample of nearly 2,500 U.S. adults and found that a relatively high percentage of people are misinformed about vaccines:

18% of our respondents mistakenly state that it is very or somewhat accurate to say that vaccines cause autism, 15% mistakenly agree that it is very or somewhat accurate to say that vaccines are full of toxins, 20% wrongly report that it is very or somewhat accurate to say it makes no difference whether parents choose to delay or spread out vaccines instead of relying on the official CDC vaccine schedule, and 19% incorrectly hold that it is very or somewhat accurate to say that it is better to develop immunity by getting the disease than by vaccination.

The biggest indicator of whether or not someone believes the misinformation? Distrust of medical authorities.

Mistaken beliefs were also “remarkably consistent over a five-month period” in 2019, but the people who became more misinformed over that period “said that they were exposed to an increased amount of content about measles or the Measles, Mumps, and Rubella (MMR) vaccine on social media.”

Overall, U.S. confidence in vaccines has declined: Gallup said this week that 84 percent of U.S. adults believe vaccinating children is important, down from 94 percent in 2001, and “the only group that has maintained its 2001 level of support for vaccines is highly educated Americans, those with postgraduate degrees.”

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

“Rated false”: Here’s the most interesting new research on fake news and fact-checking https://www.niemanlab.org/2020/01/rated-false-heres-the-most-interesting-new-research-on-fake-news-and-fact-checking/ https://www.niemanlab.org/2020/01/rated-false-heres-the-most-interesting-new-research-on-fake-news-and-fact-checking/#respond Fri, 10 Jan 2020 16:48:05 +0000 https://www.niemanlab.org/?p=178971

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry.

Here, JR’s managing editor, Denise-Marie Ordway, sums up some of the most compelling papers on fake news and fact-checking published in 2019. (You can also read some of her other roundups focusing on research from 2018 and 2017.)

What better way to start the new year than by learning new things about how best to battle fake news and other forms of online misinformation? Below is a sampling of the research published in 2019 — seven journal articles that examine fake news from multiple angles, including what makes fact-checking most effective and the potential use of crowdsourcing to help detect false content on social media.

Because getting good news is also a great way to start 2020, I included a study that suggests President Donald Trump’s “fake news” tweets aimed at discrediting news coverage could actually help journalists. The authors of that paper recommend journalists “engage in a sort of news jujitsu, turning the negative energy of Trump’s tweets into a force for creating additional interest in news.”

“Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media”: From Dartmouth College and the University of Michigan, published in Political Behavior. By Katherine Clayton, Spencer Blair, Jonathan A. Busam, Samuel Forstner, John Glance, Guy Green, Anna Kawata, Akhila Kovvuri, Jonathan Martin, Evan Morgan, Morgan Sandhu, Rachel Sang, Rachel Scholz‑Bright, Austin T. Welch, Andrew G. Wolff, Amanda Zhou, and Brendan Nyhan.

This study provides several new insights about the most effective ways to counter fake news on social media. Researchers found that when fake news headlines were flagged with a tag that says “Rated false,” people were less likely to accept the headline as accurate than when headlines carried a “Disputed” tag. They also found that posting a general warning telling readers to beware of misleading content could backfire. After seeing a general warning, study participants were less likely to believe true headlines and false ones.

The authors note that while their sample of 2,994 U.S. adults isn’t nationally representative, the feedback they got demonstrates that online fake news can be countered “with some degree of success.” “The findings suggest that the specific warnings were more effective because they reduced belief solely for false headlines and did not create spillover effects on perceived accuracy of true news,” they write.

“Fighting misinformation on social media using crowdsourced judgments of news source quality”: From the University of Regina and Massachusetts Institute of Technology, published in the Proceedings of the National Academy of Sciences. By Gordon Pennycook and David G. Rand.

It would be time-consuming and expensive to hire crowds of professional fact-checkers to find and flag all the false content on social media. But what if the laypeople who use those platforms pitched in? Could they accurately assess the trustworthiness of news websites, even if prior research indicates they don’t do a good job judging the reliability of individual news articles? This research article, which examines the results of two related experiments with almost 2,000 participants, finds the idea has promise.

“We find remarkably high agreement between fact-checkers and laypeople,” the authors write. “This agreement is largely driven by both laypeople and fact-checkers giving very low ratings to hyper-partisan and fake news sites.”

The authors note that in order to accurately assess sites, however, people need to be familiar with them. When news sites are new or unfamiliar, they’re likely to be rated as unreliable, the authors explain. Their analysis also finds that Democrats were better at gauging the trustworthiness of media organizations than Republicans — their ratings were more similar to those of professional fact checkers. Republicans were more distrusting of mainstream news organizations.

“All the president’s tweets: Effects of exposure to Trump’s ‘fake news’ accusations on perceptions of journalists, news stories, and issue evaluation”: From Virginia Tech and EAB, published in Mass Communication and Society. By Daniel J. Tamul, Adrienne Holz Ivory, Jessica Hotter, and Jordan Wolf.

When Trump turns to Twitter to accuse legitimate news outlets of being “fake news,” does the public’s view of journalists change? Are people who read his tweets less likely to believe news coverage? To investigate such questions, researchers conducted two studies, during which they showed some participants a sampling of the president’s “fake news” tweets and asked them to read a news story.

Here’s what the researchers learned: The more tweets people chose to read, the greater their intent to read more news in the future. As participants read more tweets, their assessments of news stories’ and journalists’ credibility also rose. “If anything, we can conclude that Trump’s tweets about fake news drive greater interest in news more generally,” the authors write.

The authors’ findings, however, cannot be generalized beyond the individuals who participated in the two studies — 331 people for the first study and then 1,588 for the second, more than half of whom were undergraduate students.

Based on their findings, the researchers offer a few suggestions for journalists. “In the short term,” they write, “if journalists can push out stories to social media feeds immediately after Trump or others tweet about legitimate news as being ‘fake news,’ then practitioners may disarm Trump’s toxic rhetoric and even enhance the perceived credibility of and demand for their own work. Using hashtags, quickly posting stories in response to Trump, and replying directly to him may also tether news accounts to the tweets in social media feeds.”

“Who shared it?: Deciding what news to trust on social media”: From NORC at the University of Chicago and the American Press Institute, published in Digital Journalism. By David Sterrett, Dan Malato, Jennifer Benz, Liz Kantor, Trevor Tompson, Tom Rosenstiel, Jeff Sonderman, and Kevin Loker.

This study looks at whether news outlets or public figures have a greater influence on people’s perception of a news article’s trustworthiness. The findings suggest that when a public figure such as Oprah Winfrey or Dr. Oz shares a news article on social media, people’s attitude toward the article is linked to how much they trust the public figure. A news outlet’s reputation appears to have far less impact.

In fact, researchers found mixed evidence that audiences will be more likely to trust and engage with news if it comes from a reputable news outlet than if it comes from a fake news website. The authors write that “if people do not know a [news outlet] source, they approach its information similarly to how they would a [news outlet] source they know and trust.”

The authors note that the conditions under which they conducted the study were somewhat different from those that participants would likely encounter in real life. Researchers asked a nationally representative sample of 1,489 adults to read and answer questions about a simulated Facebook post that focused on a news article, which appeared to have been shared by one of eight public figures. In real life, these adults might have responded differently had they spotted such a post on their personal Facebook feeds, the authors explain.

Still, the findings provide new insights on how people interpret and engage with news. “For news organizations who often rely on the strength of their brands to maintain trust in their audience, this study suggests that how people perceive their reporting on social media may have little to do with that brand,” the authors write. “A greater presence or role for individual journalists on social networks may help them boost trust in the content they create and share.”

“Trends in the diffusion of misinformation on social media”: From New York University and Stanford University, published in Research and Politics. By Hunt Allcott, Matthew Gentzkow, and Chuan Yu.

This paper looks at changes in the volume of misinformation circulating on social media. The gist: Since 2016, interactions with false content on Facebook have dropped dramatically but have risen on Twitter. Still, lots of people continue to click on, comment on, like and share misinformation.

The researchers looked at how often the public interacted with stories from 569 fake news websites that appeared on Facebook and Twitter between January 2015 and July 2018. They found that Facebook engagements fell from about 160 million a month in late 2016 to about 60 million a month in mid-2018. On Twitter, material from fake news sites was shared about 4 million times a month in late 2016 and grew to about 5 million shares a month in mid-2018.

The authors write that the evidence is “consistent with the view that the overall magnitude of the misinformation problem may have declined, possibly due to changes to the Facebook platform following the 2016 election.”

“Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning”: From Yale University, published in Cognition. By Gordon Pennycook and David G. Rand.

This study looks at the cognitive mechanisms behind belief in fake news by investigating whether fake news has gained traction because of political partisanship or because some people lack strong reasoning skills. A key finding: Adults who performed better on a cognitive test were better able to detect fake news, regardless of their political affiliation or education levels and whether the headlines they read were pro-Democrat, pro-Republican or politically neutral. Across two studies conducted with 3,446 participants, the evidence suggests that “susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se,” the authors write.

The authors also discovered that study participants who supported Trump had a weaker capacity for differentiating between real and fake news than did those who supported 2016 presidential candidate Hillary Clinton. The authors write that they are not sure why that is, but it might explain why fake news that benefited Republicans or harmed Democrats seemed more common before the 2016 national election.

“Fact-checking: A meta-analysis of what works and for whom”: From Northwestern University, University of Haifa, and Temple University, published in Political Communication. By Nathan Walter, Jonathan Cohen, R. Lance Holbert, and Yasmin Morag.

Even as the number of fact-checking outlets continues to grow globally, individual studies of their impact on misinformation have provided contradictory results. To better understand whether fact-checking is an effective means of correcting political misinformation, scholars from three universities teamed up to synthesize the findings of 30 studies published or released between 2013 and 2018. Their analysis reveals that the success of fact-checking efforts varies according to a number of factors.

The resulting paper offers numerous insights on when and how fact-checking succeeds or fails. Some of the big takeaways:

— Fact-checking messages that feature graphical elements such as so-called “truth scales” tended to be less effective in correcting misinformation than those that did not. The authors point out that “the inclusion of graphical elements appears to backfire and attenuate correction of misinformation.”

— Fact-checkers were more effective when they tried to correct an entire statement rather than parts of one. Also, according to the analysis, “fact-checking effects were significantly weaker for campaign-related statements.”

— Fact-checking that refutes ideas that contradict someone’s personal ideology was more effective than fact-checking aimed at debunking ideas that match someone’s personal ideology.

— Simple messages were more effective. “As a whole, lexical complexity appears to detract from fact-checking efforts,” the authors explain.

Illustration by Rafael Serra used under a Creative Commons license.

“Warts and all”: Facebook will continue to allow politicians to lie in their ads https://www.niemanlab.org/2020/01/warts-and-all-facebook-will-continue-to-allow-politicians-to-lie-in-their-ads/ https://www.niemanlab.org/2020/01/warts-and-all-facebook-will-continue-to-allow-politicians-to-lie-in-their-ads/#respond Fri, 10 Jan 2020 15:25:19 +0000 https://www.niemanlab.org/?p=178976

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“People should be able to hear from those who wish to lead them, warts and all.” Facebook’s announcement this week that it’s banning deepfakes ahead of the 2020 election didn’t exactly leave people cheering, especially since it also repeated that it will continue to allow politicians and political campaigns to lie on the platform (a decision “largely supported” by the Trump campaign and “decried” by many Democrats, in The New York Times’ phrasing).

“People should be able to hear from those who wish to lead them, warts and all,” Facebook said in a blog post.

Federal Election Commissioner Ellen Weintraub:

But this shouldn’t be framed as a partisan issue, argued Alex Stamos, the former chief security officer at Facebook who is now at Stanford.

Facebook is also adding a feature that lets people choose to see fewer political ads. From the company’s blog post:

Seeing fewer political and social issue ads is a common request we hear from people. That’s why we plan to add a new control that will allow people to see fewer political and social issue ads on Facebook and Instagram. This feature builds on other controls in Ad Preferences we’ve released in the past, like allowing people to see fewer ads about certain topics or remove interests.

Meanwhile, in the week that Facebook banned deepfakes — but not the much more common “cheapfakes” or other kinds of manipulated media — Reddit banned both. The updated rule language:

Do not impersonate an individual or entity in a misleading or deceptive manner.

Reddit does not allow content that impersonates individuals or entities in a misleading or deceptive manner. This not only includes using a Reddit account to impersonate someone, but also encompasses things such as domains that mimic others, as well as deepfakes or other manipulated content presented to mislead, or falsely attributed to an individual or entity. While we permit satire and parody, we will always take into account the context of any particular content.

TikTok also clarified some of its content moderation rules this week.

Most trusted: Twitter? Also this week, the Reuters Institute for the Study of Journalism released a report surveying 233 “media leaders” from 32 countries. They believe that, of the platforms, Twitter is doing the best job of combatting misinformation, and Facebook is doing the worst. Still, even Twitter had only 41 percent of respondents saying the job they were doing was “average” or better.

An argument that YouTube would do just as much damage without its algorithm. Becca Lewis, who researches media manipulation and political digital media at Stanford and Data & Society, argues in FFWD (a Medium publication about online video) that “YouTube could remove its recommendation algorithm entirely tomorrow and it would still be one of the largest sources of far-right propaganda and radicalization online.”

“When we focus only on the algorithm, we miss two incredibly important aspects of YouTube that play a critical role in far-right propaganda: celebrity culture and community,” Lewis writes. From her article:

When a more extreme creator appears alongside a more mainstream creator, it can amplify their arguments and drive new audiences to their channel (this is particularly helped along when a creator gets an endorsement from an influencer whom audiences trust). Stefan Molyneux, for example, got significant exposure to new audiences through his appearances on the popular channels of Joe Rogan and Dave Rubin.

Importantly, this means the exchange of ideas, and the movement of influential creators, is not just one-way. It doesn’t just drive people to more extremist content; it also amplifies and disseminates xenophobia, sexism, and racism in mainstream discourse. For example, as Madeline Peltz has exhaustively documented, Fox News host Tucker Carlson has frequently promoted, defended, and repeated the talking points of extremist YouTube creators to his nightly audience of millions.

Additionally, my research has indicated that users don’t always just stumble upon more and more extremist content — in fact, audiences often demand this kind of content from their preferred creators. If an already-radicalized audience asks for more radical content from a creator, and that audience is collectively paying the creator through their viewership, creators have an incentive to meet that need…

All of this indicates that the metaphor of the “rabbit hole” may itself be misleading: it reinforces the sense that white supremacist and xenophobic ideas live at the fringe, dark corners of YouTube, when in fact they are incredibly popular and espoused by highly visible, well-followed personalities, as well as their audiences.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

People who are given correct information still misremember it to fit their own beliefs https://www.niemanlab.org/2019/12/people-who-are-given-correct-information-still-misremember-it-to-fit-their-own-beliefs/ https://www.niemanlab.org/2019/12/people-who-are-given-correct-information-still-misremember-it-to-fit-their-own-beliefs/#respond Fri, 13 Dec 2019 12:50:52 +0000 https://www.niemanlab.org/?p=177761

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“People can self-generate their own misinformation. It doesn’t all come from external sources.” Researchers at Ohio State found that even when people are provided with accurate numerical information, they tend to misremember those numbers to match whatever beliefs they already hold: “For example, when people are shown that the number of Mexican immigrants in the United States declined recently — which is true but goes against many people’s beliefs — they tend to remember the opposite,” OSU’s Jeff Grabmeier reports in an article summarizing the research on Phys.org (the full paper is here).

The researchers presented 110 people with “short written descriptions of four societal issues that involved numerical information.” On two of the issues, the statistics fit the conventional wisdom; on the other two, they belied it.

For example, most people believe that the number of Mexican immigrants in the United States grew between 2007 and 2014. But in fact, the number declined from 12.8 million in 2007 to 11.7 million in 2014.

After reading all the descriptions of the issues, the participants got a surprise. They were asked to write down the numbers that were in the descriptions of the four issues. They were not told in advance they would have to memorize the numbers.

The researchers found that people usually got the numerical relationship right on the issues for which the stats were consistent with how many people viewed the world. For example, participants typically wrote down a larger number for the percentage of people who supported same-sex marriage than for those who opposed it — which is the true relationship.

But when it came to the issues where the numbers went against many people’s beliefs — such as whether the number of Mexican immigrants had gone up or down — participants were much more likely to remember the numbers in a way that agreed with their probable biases rather than the truth.

In a second study, participants played a “Telephone”-like game. The first person in the chain “saw the accurate statistics about the trend in Mexican immigrants living in the United States (that it went down from 12.8 million to 11.7 million),” wrote those numbers down from memory, and passed them on to a second person, who did the same with a third person, and so on.

Results showed that, on average, the first person flipped the numbers, saying that the number of Mexican immigrants increased by 900,000 from 2007 to 2014 instead of the truth, which was that it decreased by about 1.1 million.

By the end of the chain, the average participant had said the number of Mexican immigrants had increased in those 7 years by about 4.6 million.

“These memory errors tended to get bigger and bigger as they were transmitted between people,” [study coauthor Matt Sweitzer] said.

At least 30 journalists worldwide are imprisoned for spreading “fake news.” That’s a huge increase since 2012. At least 250 journalists are in jail worldwide for reasons related to their work, the Committee to Protect Journalists said this week in its annual report. Of those, “the number charged with ‘false news’ rose to 30 compared with 28 last year. Use of the charge, which the government of Egyptian president Abdel Fattah el-Sisi applies most prolifically, has climbed steeply since 2012, when CPJ found only one journalist worldwide facing the allegation.”

From The Washington Post:

It wasn’t this way five years ago, said Courtney Radsch, advocacy director of the CPJ, which tracks these trends.

In 2012, there was just one journalist in jail on fake-news charges. By 2014, there were eight. Then came 2016, when the most dramatic rise began, in which 16 journalists worldwide were in jail on fake-news charges. The number rose to 27 in jail by the end of last year.

Overall, between 2012 and 2019, there have been 65 journalists imprisoned on false-news charges. For comparison, since 1992, when the CPJ started tracking the trend, an overall 120 journalists have at one point been locked up for spreading so-called fake news. That means more than half of the journalists jailed on these charges were in prison sometime in the past seven years.

Most of the journalists who have been jailed on fake news charges over the past 7 years are in Egypt (7), followed by Turkey (6), Somalia (5), and Cameroon (5). Singapore passed a restrictive fake news law this year.

“Strategic intent is not strategic impact.” Digital disinformation campaigns can be large and organized — and still have very little impact on their targets, writes David Karpf, an associate professor of media and public affairs at George Washington University, in MediaWell. (MediaWell is run out of the Social Science Research Council, the independent nonprofit that, among other things, is assisting on the project that gets Facebook to share data with academics. It’s not going that well, reportedly!)

Much of the attention paid by researchers, journalists, and elected officials to online disinformation and propaganda has assumed that these disinformation campaigns are both large in scale and directly effective. This is a bad assumption, and it is an unnecessary assumption. We need not believe digital propaganda can “hack” the minds of a fickle electorate to conclude that digital propaganda is a substantial threat to the stability of American democracy. And in promoting the narrative of IRA’s direct effectiveness, we run the risk of further exacerbating this threat. The danger of online disinformation isn’t how it changes public knowledge; it’s what it does to our democratic norms. […]

The first-order effects of digital disinformation and propaganda, at least in the context of elections, are debatable at best. But disinformation does not have to sway many votes to be toxic to democracy.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Who becomes a Reddit conspiracy theorist? They have these things in common https://www.niemanlab.org/2019/11/who-becomes-a-reddit-conspiracy-theorist-they-have-these-things-in-common/ https://www.niemanlab.org/2019/11/who-becomes-a-reddit-conspiracy-theorist-they-have-these-things-in-common/#respond Fri, 22 Nov 2019 14:28:07 +0000 https://www.niemanlab.org/?p=177141

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Do people mainly share misinformation because they get distracted? A new working paper suggests that “most people do not want to spread misinformation, but are distracted from accuracy by other salient motives when choosing what to share.” And when the researchers — Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, and David Rand — DM’d Twitter users who’d shared news from unreliable websites, they found that “subtly inducing people to think about the concept of accuracy decreases their sharing of false and misleading news relative to accurate news.”

“Our accuracy message successfully induced Twitter users who regularly shared misinformation to increase the quality of the news they shared,” the authors write. They suggest the effect of the intervention may not last long, but the platforms could increase it by continuing to offer nudges.

What are conspiracy theorists like? Researchers tracked Reddit users over eight years to figure out how they ended up as active members of the r/conspiracy subreddit.

We undertook an exploratory analysis using a case-control study design, examining the language use and posting patterns of Reddit users who would go on to post in r/conspiracy (the r/conspiracy group). We analyzed where and what they posted in the period preceding their first post in r/conspiracy to understand how personal traits and social environment combine as potential risk factors for engaging with conspiracy beliefs.

Our goal was to identify distinctive traits of the r/conspiracy group, and the social pathways through which they travel to get there. We compared the r/conspiracy group to matched controls who began by posting in the same subreddits at the same time, but who never posted in the r/conspiracy subreddit. We conducted three analyses.

First we examined whether r/conspiracy users were different from other users in terms of what they said. Our hypothesis was that users eventually posting in r/conspiracy would exhibit differences in language use compared to those who do not post in r/conspiracy, suggesting differences in traits important for individual variation.

Second, we examined whether the same set differed from other users in terms of where they posted. We hypothesized that engagement with certain subreddits is associated with a higher risk of eventually posting in r/conspiracy, suggesting that social environments play a role in the risk of engagement with conspiracy beliefs.

Third, we examined language differences after accounting for the social norms of where they posted. We hypothesized that some differences in language use would remain after accounting for language use differences across groups of similar subreddits, suggesting that some differences are not only a reflection of the social environment but represent intrinsic differences in those users.

There were “significant differences” between the 15,370 r/conspiracy users and a 15,370-user control group. Here are some of those differences:

  • Users who ended up in the r/conspiracy group used more words related to “crime,” “stealing,” and “law,” compared to the control group. Control group users used more words related to “friends,” “optimism,” and “affection.”
  • r/conspiracy users were also much more active in the Politics subreddit, “where there were 2.4 times as many r/conspiracy users as control users that posted in at least one subreddit in the group,” and they posted five times as many comments in the Politics community overall.
  • They were overrepresented in what the researchers refer to as “Drugs and Bitcoin” Reddit and in what they refer to as “Toxic” Reddit, where “the subreddits in which r/conspiracy posters are also most over-represented include several that have since been banned for questionable content, such as r/WhiteRights and r/fatpeoplehate.”
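Under the hood, that kind of comparison is essentially word-counting: tally how often each group’s comments use words from predefined categories and compare the relative rates between the r/conspiracy group and the matched controls. Here is a minimal sketch of the idea — the mini-lexicons and sample comments are toy placeholders, not the study’s actual word categories or Reddit corpus:

```python
from collections import Counter

# Hypothetical mini-lexicons standing in for the study's word categories.
CATEGORIES = {
    "crime": {"crime", "stealing", "law", "police"},
    "affection": {"friends", "love", "affection", "optimism"},
}

def category_rates(comments):
    """Return each category's share of all words in a group's comments."""
    counts = Counter()
    total_words = 0
    for comment in comments:
        for word in comment.lower().split():
            total_words += 1
            for name, lexicon in CATEGORIES.items():
                if word in lexicon:
                    counts[name] += 1
    return {name: counts[name] / max(total_words, 1) for name in CATEGORIES}

# Toy example: compare a (fake) conspiracy-group sample against a control sample.
conspiracy_rates = category_rates(["the law is broken and stealing is everywhere"])
control_rates = category_rates(["spent the weekend with friends full of optimism"])
for name in CATEGORIES:
    print(name, conspiracy_rates[name] - control_rates[name])
```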

If you’d like to spend more time with conspiracy theorists, CNN’s Rob Picheta took a trip to the third annual Flat Earth International Conference. Here’s Mark Sargent, the “godfather of the modern flat-Earth movement” and the subject of the 2018 Netflix documentary “Behind the Curve”:

“I don’t say this often, but look — there is a downside. There’s a side effect to flat Earth … once you get into it, you automatically revisit any of your old skepticism…I don’t think [flat Earthers and populists] are just linked. They kind of feed each other … it’s a slippery slope when you think that the government has been hiding these things. All of a sudden, you become one of those people that’s like, ‘Can you trust anything on mainstream media?'”

What if the most effective deepfake video is actually a real video? And to end on a downer, just like last week, here’s Craig Silverman:

Everyone thinks there will be a rather effective deepfake video, but I wonder if, in the next year, will we see something that is actually authentic being effectively dismissed as a deepfake, which then causes a mass loss of trust.

If there is an environment in which you can undermine not what is fake, and make it convincing, but undermine what is real — that is even more of a concern for me.

Illustration by Filip Jovceski used under a Creative Commons license.

Galaxy brain: The neuroscience of how fake news grabs our attention, produces false memories, and appeals to our emotions https://www.niemanlab.org/2019/11/galaxy-brain-the-neuroscience-of-how-fake-news-grabs-our-attention-produces-false-memories-and-appeals-to-our-emotions/ https://www.niemanlab.org/2019/11/galaxy-brain-the-neuroscience-of-how-fake-news-grabs-our-attention-produces-false-memories-and-appeals-to-our-emotions/#respond Thu, 21 Nov 2019 16:17:43 +0000 https://www.niemanlab.org/?p=177074 “Fake news” is a relatively new term, but it’s now seen by some as one of the greatest threats to democracy and free debate. But how does it work? Neuroscience can provide at least some insight.

The first job of fake news is to catch our attention, and for that reason, novelty is key. Researchers Gordon Pennycook and David Rand have suggested that one reason hyperpartisan claims are so successful is that they tend to be outlandish. In a world full of surprises, humans have developed an exquisite ability to rapidly detect and orient towards unexpected information or events. Novelty is an essential concept underlying the neural basis of behavior, and plays a role at nearly all stages of neural processing.

Sensory neuroscience has shown that only unexpected information can filter through to higher stages of processing. The sensory cortex may have therefore evolved to adapt to, to predict, and to quiet down the expected regularities of our experiences, focusing on events that are unpredictable or surprising. Neural responses gradually reduce each time we’re exposed to the same information, as the brain learns that this stimulus has no reward associated with it.

Novelty itself is related to motivation. Dopamine, a neurotransmitter associated with reward anticipation, increases when we are confronted by novelty. When we see something new, we recognize its potential to reward us in some way. Studies have shown that the hippocampus’ ability to create new synaptic connections between neurons (a process known as plasticity) is increased by the influence of novelty. By increasing the brain’s plasticity, the potential to learn new concepts is also increased.

Fake news, false memory

The primary region involved in responding to novel stimuli — the substantia nigra/ventral tegmental area, or SN/VTA — is closely linked to the hippocampus and the amygdala, both of which play important roles in learning and memory. While the hippocampus compares stimuli against existing memories, the amygdala responds to emotional stimuli and strengthens associated long-term memories.

This aspect of learning and memory formation is of particular interest to my own lab, where we study brain oscillations involved in long-term memory consolidation. That process occurs during sleep, a somewhat limited timeframe to integrate all of our daily information. For that reason, the brain is adapted to prioritize certain types of information. Highly emotionally provocative information stands a stronger chance of lingering in our minds and being incorporated into long-term memory banks.

The allure of fake news is therefore reinforced by its relationship to memory formation. A recent study published in Psychological Science highlighted that exposure to propaganda can induce false memories. In one of the largest false-memory experiments to date, scientists gathered up registered voters in the Republic of Ireland in the week preceding the 2018 abortion referendum. Half of the participants reported a false memory for at least one fabricated event, with more than a third of participants reporting a specific “eyewitness” memory. In-depth analysis revealed that voters were most susceptible to forming false memories for fake news that closely aligned with their beliefs, particularly if they had low cognitive ability.

Emotional appeals

The ability of fake news to grab our attention and then highjack our learning and memory circuitry goes a long way to explaining its success. But its strongest selling point is its ability to appeal to our emotions. Studies of online networks show text spreads more virally when it contains a high degree of “moral emotion,” which drives much of what we do. Decisions are often driven by deep-seated emotions that can be difficult to identify. In the process of making a judgment, people consult or refer to an emotional catalog carrying all the positive and negative tags consciously or unconsciously associated with a given context.

We rely on our ability to place information into an emotional frame of reference that combines facts with feelings. Our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, long before we’re aware of them. This processing operates with exposures to emotional content as short as 1/250th of a second, “an interval so brief that there is no recognition or recall of the stimulus.”

Merely being exposed to a fake news headline can increase later belief in that headline — so scrolling through social media feeds laden with emotionally provocative content has the power to change the way we see the world and make political decisions.

The novelty and emotional conviction of fake news, and the way these properties interact with the framework of our memories, exceed our brains’ analytical capabilities. Though it’s impossible to imagine a democratic structure without disagreement, no constitutional settlement can function if everything is a value judgement based on misinformation. In the absence of any authoritative perspective on reality, we are doomed to navigate our identities and political beliefs at the mercy of our brains’ more basal functions.

The capacity to nurture and sustain peaceful disagreement is a positive characteristic of a truly democratic political system. But before democratic politics can begin, we must be able to distinguish between opinions and facts, fake news and objective truth.

Rachel Anne Barr is a PhD student in neuroscience at the Université Laval. This article is republished from The Conversation under a Creative Commons license.

News portals like Yahoo still bring Democrats and Republicans together for political news, but they’re fading fast https://www.niemanlab.org/2019/11/news-portals-like-yahoo-still-bring-democrats-and-republicans-together-for-political-news-but-theyre-fading-fast/ https://www.niemanlab.org/2019/11/news-portals-like-yahoo-still-bring-democrats-and-republicans-together-for-political-news-but-theyre-fading-fast/#respond Fri, 15 Nov 2019 15:25:03 +0000 https://www.niemanlab.org/?p=176817

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“We observe segregation in political news consumption.” In this working paper, “Partisan Enclaves and Information Bazaars: Mapping Selective Exposure to Online News,” Stanford researchers examined a “data set of web browsing behavior collected during the 2016 U.S. presidential election” to see how Democrats and Republicans seek out news sources and how they change their news consumption levels in response to different political events. (The data set is from YouGov and was also used in this paper.)

The researchers looked at two specific events during the 2016 campaign — Trump’s “grab ’em by the pussy” Access Hollywood tape and the Comey letter revealing that the FBI had reopened its investigation into Hillary Clinton’s emails — to see how people reacted. One thing they found:

Democrats consumed more news stories after the release of the Access Hollywood tape, while Republican consumption was largely unaffected. Conversely, Republicans increased their news consumption following the release of the Comey letter, while Democrats consumed no more news than usual.

The paper also offers yet more evidence that Democrats and Republicans prefer different sources for political news. Of the top 30 political news sites most visited by Democrats, just a handful also received attention and were more favored by Republicans — The Hill, the Disney-owned go.com (which includes ABC News), Real Clear Politics, and Fox News.

Of the top 30 political news sites most visited by Republicans, several were favored by Democrats.

From the paper:

Visually, these plots indicate considerable partisan selectivity overall, and especially for use of non-portals. If partisanship had nothing to do with domain visits, the top thirty domains would all have approximately 0 favorability. The fact that so few domains are near 0 implies that partisanship is a strong predictor of how individuals allocate their attention to political news online. Democrats and Republicans do give some minor amount of attention to sources favored by their opponents, but most of the audience heterogeneity occurs at portal sites. Furthermore, in Appendix D, we show that portal visits are mostly concentrated in a minority of heavy portal users, and even these individuals prefer congenial sources when they traverse off of their preferred portal site.

It is worth noting that Republicans appear slightly more willing to visit sites favored by Democrats than vice versa. Comparing Figure 2 and Figure 1, we see that at least eight of the top thirty Republican domains are favored more by Democrats: CNN, Washington Post, New York Times, Five Thirty Eight, Snopes, NBC News, Huffington Post, and CBS News. In contrast, only about four of the top thirty Democratic domains are favored more by Republicans: Real Clear Politics, Go, Fox News, and The Hill. We have no way of knowing if these Republicans actually have a preference for (or no preference against) Democratic-favored news or if they are simply unaware of their alternatives. Most of the Democratic-favored platforms visited by Republicans fall in the legacy category, such as the New York Times and Washington Post, and thus benefit from a historical brand that the Republican alternatives, such as Breitbart, have yet to achieve.

A different way of wording that: Republicans do still consume some mainstream/legacy news sources (CNN, The New York Times, The Washington Post, NBC News, CBS News), but Democrats prefer them.
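The paper’s exact “favorability” measure isn’t spelled out here, but the underlying idea — a score near 0 means a domain draws Democrats and Republicans at roughly equal rates — can be sketched as a simple share difference. This is an illustrative stand-in with made-up numbers, not the authors’ formula or data:

```python
# Hypothetical visit counts by party for a few domains (made-up numbers).
visits = {
    "thehill.com": {"dem": 5200, "rep": 4800},
    "foxnews.com": {"dem": 1500, "rep": 8500},
    "yahoo.com":   {"dem": 9100, "rep": 8900},
}

def favorability(domain):
    """Democratic minus Republican share of a domain's visits.

    0 means the domain draws both parties about equally; +1 means only
    Democrats visit it, -1 means only Republicans do.
    """
    d = visits[domain]
    return (d["dem"] - d["rep"]) / (d["dem"] + d["rep"])

for domain in visits:
    print(domain, round(favorability(domain), 2))
```

With these toy numbers, the portal-like domain lands near 0 while the partisan outlet sits far from it — the pattern the paper describes for portals versus non-portals.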

As for those portal sites — Yahoo, MSN, AOL — they still dominate for news in general (including sports scores, weather, celebrity gossip, and so on), but for political news, their use is fading.

In this study, “only 2 percent of the share of traffic to portals involved political news,” and “among individuals who encounter political information, portals account for only one-quarter of their browsing.”

“A whole new class of online creator: the scientist-influencer debunking false information in their area of expertise.” In Wired, Emma Grey Ellis writes about the “apolitical” debunkers whose mission is to expose “lifestyle misinformation.” They work primarily on YouTube and Instagram, debunking content that is unlikely to violate the platforms’ rules or surface their official fact-checking mechanisms — the “questionable beauty products,” the “pseudoscientific claims about diets and nutrition,” and the channels “telling viewers to consume only raw fruit, to drink copious amounts of celery juice, to avoid vegetables entirely and go carnivore, to eat nothing at all.”

Let’s talk about fact-checking. CJR’s Galley brought in a bunch of fact-checking folks to talk this week — here are conversations with Mike Caulfield, Lead Stories’ Maarten Schenk, PolitiFact’s Angie Drobnic Holan, Snopes founder David Mikkelson, the International Fact-Checking Network’s Baybars Orsek, Stanford Internet Observatory’s Renée DiResta, NewsGuard’s Gordon Crovitz, Northwestern professor Nathan Walter, Google News Lab’s Alexios Mantzarlis, Truth or Fiction’s Brooke Binkowski, Tow’s Jonathan Albright, and The Daily Beast’s Kelly Weill. It’ll cap off today with a roundtable discussion.

Dead newspapers, revived in support of Indian interests. The EU DisinfoLab, a Brussels-based NGO that looks at disinformation campaigns that target Europe, says it uncovered a network of 265 fake local news sites around the world that seem to be tied to India and push pro-India messages. Many of them use the names of dead newspapers (the New York Journal-American, the New York Morning Telegraph) or even live ones (the “Times of Los Angeles,” which is not the same thing as the Los Angeles Times).

Here are some findings from these websites:

  • Most of them are named after an extinct local newspaper or spoof real media outlets;
  • They republish content from several news agencies (KCNA, Voice of America, Interfax);
  • Coverage of the same Indian-related demonstrations and events;
  • Republications of anti-Pakistan content from the described Indian network (including EP Today, 4NewsAgency, Times Of Geneva, New Delhi Times);
  • Most websites have a Twitter account as well.

One may wonder: why have they created these fake media outlets? From analysing the content and how it is shared, we found several arguments to do so:

  • Influence international institutions and elected representatives with coverage of specific events and demonstrations;
  • Provide NGOs with useful press material to reinforce their credibility and thus be impactful;
  • Add several layers of media outlets that quote and republish one another, making it harder for the reader to trace the manipulation, and in turn (sometimes) offer a “mirage” of international support;
  • Influence public perceptions on Pakistan by multiplying iterations of the same content available on search engines.

You can see a map of the “local” sites here. They’re spread across 60-plus countries; in the U.S., the real brands (dead or alive) lifted include the Topeka State Journal, the Houston “Morning” Chronicle, the Minneapolis Evening Journal, the Seattle Star, the Indianapolis Daily Herald, the Charlottesville Tribune, the Salt Lake Telegram, and the Oregon Journal.

Finally, your weekly dose of cheer. First Draft’s Claire Wardle on what she’s most negative about:

I’m most negative about the threats posed by information disorder. I think we have two to three years until the majority of people no longer know what information or evidence to trust. If this problem is left unsolved, information disorder will have a catastrophic impact on the way people think about climate, vaccinations, democratic institutions, and each other.

Illustration by Filip Jovceski used under a Creative Commons license.

“The Facebook environment…muddies the waters between fact and fiction” https://www.niemanlab.org/2019/11/the-facebook-environment-muddies-the-waters-between-fact-and-fiction/ https://www.niemanlab.org/2019/11/the-facebook-environment-muddies-the-waters-between-fact-and-fiction/#respond Fri, 08 Nov 2019 16:06:37 +0000 https://www.niemanlab.org/?p=176663 Researchers attached EEGs to 83 undergrad students’ heads and tracked their brain activity as they evaluated whether news headlines — including some that had been flagged as false — were fake. While the students showed “reactions of discomfort…when headlines supported their beliefs but were flagged as false,” that dissonance didn’t stop them from going with what they already believed:

This dissonance was not enough to make participants change their minds. They overwhelmingly said that headlines conforming with their preexisting beliefs were true, regardless of whether they were flagged as potentially fake. The flag did not change their initial response to the headline, even if it did make them pause a moment longer and study it a bit more carefully.

It didn’t matter whether the subjects identified as Republicans or Democrats: That “didn’t influence their ability to detect fake news,” lead author Patricia Moravec said, “and it didn’t determine how skeptical they were about what’s news and what’s not.” The students assessed only 44 percent of the stories accurately.

The study was published this week in Management Information Systems Quarterly.

It is still incredibly easy to share (and see) known fake news about politics on Facebook https://www.niemanlab.org/2019/11/it-is-still-incredibly-easy-to-share-and-see-known-fake-news-about-politics-on-facebook/ https://www.niemanlab.org/2019/11/it-is-still-incredibly-easy-to-share-and-see-known-fake-news-about-politics-on-facebook/#respond Fri, 08 Nov 2019 14:29:57 +0000 https://www.niemanlab.org/?p=176590 As a test, I tried sharing the top 20 fake stories that Avaaz found on Facebook, to see what got flagged. [Laura also asked her Nieman Lab colleagues to do the same, so apologies to any of our friends and family who thought we seemed suddenly conspiracy-friendly. —Ed.] I’d never tried to share fake news to Facebook before, and the process was illuminating.

In six out of the 20 cases, Facebook flagged the posts as containing “False Information” and noted that, if I went ahead and shared them, a fact-checking notice would be attached to my post. But when I went ahead and clicked “Share Anyway,” I found that the stories posted to my timeline without any warning attached to them. I would have expected some kind of overlay, like the one you now get when you try to share an image that Facebook’s fact-checkers have determined to be false. Instead, the posts look normal with the exception of a little “ⓘ” button that you can click on (but likely would never notice) for more info.

Here are the stories where Facebook gave me a share warning:

1. Trump’s grandfather was a pimp and tax evader; his father a member of the KKK

8. Tim Allen quote: Trump’s wall costs less than the Obamacare website [Because this was an image, it was also overlaid with a “False information” warning]

10. Joe Biden Calls Trump Supporters “Dregs of Society”

12. Bernie Sanders Says Christianity Is An Insult To Muslims

14. MN Democrats vote for elementary school pornography

19. “President Trump is asking everyone to forward this email…” [image]

Facebook didn’t flag the remaining 14 stories as fake. Three of these “stories” were actually Facebook statuses where the status contained the false information.

2. Pelosi Diverts $2.4 Billion From Social Security To Cover Impeachment Costs

3. Ocasio-Cortez Proposes Nationwide Motorcycle Ban

4. Trump is now trying to get Pence impeached

5. Ilhan Omar holding secret fundraisers with Islamic groups tied to terror

6. BREAKING: Nancy Pelosi’s Son Was Exec At Gas Company That Did Business In Ukraine

7. Democrats Vote To Enhance Med Care for Illegals Now, Vote Down Vets Waiting 10 Years for Same Service

9. NYC coroner who declared the death of Jeffrey Epstein a suicide made half a million dollars a year working for the Clinton Foundation until 2015 [This was posted as a status, not a link, which made it extremely easy to re-share]

11. Mitch McConnell campaign essentially says boys will be boys after Alexandria Ocasio-Cortez blasts disturbing groping pic [This one’s sort of complicated; debunk here]

13. Democrats Wrote to Ukraine in May 2018, Demanding It Investigate Trump [this Breitbart story was shared by Trump on his own Facebook page]

15. Democrats don’t mind executing babies after birth [Trump status on his own Facebook page]

16. Olive Garden is funding Trump’s re-election in 2020 [posted as a status, with no link]

17. Alexandria Ocasio-Cortez said “A society that allows billionaires to exist…is wrong” [presented as an image; fact-check here]

18. A different time Trump said Democrats don’t mind executing babies after birth [Trump Facebook status]

20. Democratic party passes resolution against Christianity

Avaaz came up with an “estimated views” metric for each post, “based on the cumulative CrowdTangle data for each post across the pages, groups and profiles featuring the post on Facebook.” (See page 15 here for more detail.) But it’s a metric that Princeton professor Andy Guess says is flawed. Based on Avaaz’s estimates of how many times each of these 20 items was viewed, the six posts Facebook flagged have been viewed about 45.6 million times. The 14 posts that Facebook didn’t flag have been viewed about 93.5 million times, per Avaaz. But skepticism of those figures seems warranted; they may be too high.
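Mechanically, that kind of roll-up is just a sum: derive a view estimate for every page, group, and profile where CrowdTangle found the post, then total those per story. Here is a rough sketch of the aggregation — the per-posting numbers and the interactions-to-views multiplier are made up for illustration, and Avaaz’s real methodology (page 15 of its report) is more involved:

```python
# Hypothetical CrowdTangle-style records: one row per (story, page/group) posting.
postings = [
    {"story": "Pelosi diverts Social Security funds", "interactions": 120_000},
    {"story": "Pelosi diverts Social Security funds", "interactions": 45_000},
    {"story": "Ocasio-Cortez motorcycle ban", "interactions": 80_000},
]

# Assumed conversion factor from interactions to views; purely illustrative.
VIEWS_PER_INTERACTION = 30

estimated_views = {}
for post in postings:
    views = post["interactions"] * VIEWS_PER_INTERACTION
    estimated_views[post["story"]] = estimated_views.get(post["story"], 0) + views

for story, views in estimated_views.items():
    print(f"{story}: ~{views:,} estimated views")
```

Any error in the assumed multiplier compounds across every posting of a story, which is one reason a cumulative figure like this can overshoot.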

“Like the experts predicted back in 2016, we did end up heading down the dystopian path.” Researchers looked at the experiences of Muslim Democratic House Reps. Ilhan Omar and Rashida Tlaib running for Congress in 2018 and found that an organized Islamophobic social media campaign against them kicked off nearly as soon as they announced their candidacies. “These operations largely replaced Breitbart and other extreme-right media entities that were the primary source of anti-Muslim dialogue in the 2016 presidential campaign,” write Lawrence Pintak, Jonathan Albright, Brian J. Bowe, and Shaheen Pasha. The authors wrote in The New York Times:

The sheer number and proportion of negative tweets indicate that much of the targeting was done by people or organizations from far outside the districts in which the candidates ran. Our review of profiles of those accounts, which included 2,354 that attacked both women, bears this out.

But the most striking thing we uncovered happened in the months after the election. When we revisited these accounts in July, a significant portion of them were simply gone. Some had been suspended by Twitter for violating standards, such as posting inappropriate content or showing characteristics of bots. Others had been deleted by the account holders. Malicious actors will often remove the accounts that make up their bot networks — like drug dealers tossing burner phones — to cover their tracks. […]

This all suggests that this Islamophobic and xenophobic narrative largely originated with a handful of bigoted activists and was then amplified by vast bot networks whose alleged owners never existed. “Ordinary” account holders, many retweeting just one post, were then swept up in the rancorous energy of the crowd.

Jonathan Albright in Wired:

This is a new twist to electoral politics and democratic participation in 2020 and in the coming decade. Over time, and especially across disparate Twitter communities, groups, and hashtags, these tactics will continue to surface anger and emotional vitriol. They will connect political candidates’ identities to controversial issues, raising them in tandem, and then connecting them in the form of a narrative to real voters. This manufacturing of outrage legitimizes otherwise unsustainable rumors and ideas.

Illustration by Filip Jovceski used under a Creative Commons license.

This text-generation algorithm is supposedly so good it’s frightening. Judge for yourself. https://www.niemanlab.org/2019/11/this-text-generation-algorithm-is-supposedly-so-good-its-frightening-judge-for-yourself/ https://www.niemanlab.org/2019/11/this-text-generation-algorithm-is-supposedly-so-good-its-frightening-judge-for-yourself/#respond Thu, 07 Nov 2019 16:12:37 +0000 https://www.niemanlab.org/?p=176589 The best weapons are secret weapons. Freed from the boundaries of observable reality, they can hold infinite power and thus provoke infinite fear — or hope. In World War II, as reality turned against them, the Nazis kept telling Germans about the Wunderwaffe about to hit the front lines — “miracle weapons” that would guarantee victory for the Reich. The Stealth Bomber’s stealth was not just about being invisible to radar — it was also about its capabilities being mysterious to the Soviets. And whatever the Russian “dome of light” weapon is and those Cuban “sonic attacks” are, they’re all terrifying.

So whether intentionally or not, the creators of the text-generating algorithm GPT-2 played the PR game brilliantly in February when they announced that, well, it just may be too powerful to release to the general public. That generated a wave of global publicity that is, shall we say, uncommon for new text-generating algorithms. (Elon Musk is involved, you’ll be shocked to learn.)

In any event, now, nine months later, the folks at OpenAI have apparently decided that the infopocalypse is not right around the corner and released their secret superweapon GPT-2 into the wild. They say they have “seen no strong evidence of misuse so far” from more limited releases of the technology.

The alleged threat is not, as some journalists have feared, that this machine is going to eventually cover city council meetings and aggregate viral tweets more efficiently than a human reporter could. It’s that the ease of generating semi-convincing output could make it efficient to pump out thousands or millions of individually tailored pieces of misinformation, spam emails, or some other text-based form of grift.

I suppose that’s something legitimate to worry about, but my experience playing around with GPT-2 has mostly matched this guy’s: Creating readable prose, sensical prose, and believable prose are still pretty different things.

To test out its capabilities, I thought I’d feed it the ledes of a few Nieman Lab stories and see what it autogenerated from there. For each of these below, the actual human-written text is in italics; everything after that is “by” GPT-2. (Note that GPT-2, as used on Talk to Transformer, often ends its output mid-sentence.)
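If you want to try the same exercise locally rather than through the Talk to Transformer site, the released GPT-2 weights run in a few lines of Python. Here’s a minimal sketch using Hugging Face’s transformers library — an assumption on my part, since the experiments below used the web demo — and, being sampled, its output will differ on every run:

```python
# Minimal GPT-2 continuation sketch (assumes `pip install transformers torch`).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Feed the model a story lede and let it continue from there.
lede = ("It was a 'happy surprise,' Fraser Nelson said, when The Salt Lake Tribune "
        "received a letter from the IRS on Friday.")
inputs = tokenizer(lede, return_tensors="pt")

# Top-k/top-p sampling keeps the continuation varied, roughly like the web demo.
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```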

🤖   🤖   🤖

Original story: “Meet The Salt Lake Tribune, 501(c)(3): The IRS has granted nonprofit status to a daily newspaper for the first time,” by Christine Schmidt, published Nov. 4.

It was a “happy surprise,” Fraser Nelson said, when The Salt Lake Tribune received a letter from the IRS on Friday giving the 148-year-old news outlet nonprofit 501(c)(3) status — no questions asked.

The Tribune, like all nonprofit news organizations in Utah, is also required by IRS code to register with the agency under the section 501(c)(4) that allows “social welfare” nonprofits like the Tribune to accept donations, make grants and engage in the business of journalism.

The Tribune’s board, however, approved the decision to not register under 501(c)(3), and instead register under 501(c)(4), a section that provides tax-exempt status to religious organizations.

“We felt that our reporting is political in nature and not editorial, so we felt the tax exemption was needed to protect the integrity of our journalism,” Nelson said.

According to IRS code, 501(c)(4) organizations can engage in political, educational or religious activities, but not engage in political campaign activity or engage

Verdict: This is in the broad ballpark of news prose style. It’s interesting it chose to contradict the lede so quickly — specifically saying it rejected 501(c)(3) status. The real Fraser Nelson wouldn’t like that quote, I imagine. Call it a B+.

🤖   🤖   🤖

Original story: “Watch your language: ‘Data voids’ on the web have opened a door to manipulators and other disinformation pushers,” by me, published Nov. 1.

One day fifteen long years ago, in 2004, some SEO consultants decided to have a contest to determine quién es más macho in the gaming-search-results game. As they put it: “Are you a Player or a Stayer?”

At the time, Google and Bing were using a variety of techniques to determine whether search results for a given query were “Google-friendly.”

The main criterion was the number of Web pages containing the search string.

In the contest, the contest participants decided to make a bolder claim: “No, you’re not a player.”

They set a goal: a “threshold.”

The goal: to make the Google ranking algorithm think that a page was being searched for in the way you might expect it to be.

They wanted the Google algorithm to think the page was being used, and not just viewed.

The results of their contest were remarkable.

The

Verdict: Choppy short sentences without much flow or sense, despite sticking in the general topic area. I was sorta hoping it would throw in some more Spanish. I’ll give it a C-.

🤖   🤖   🤖

Original story: “The Daily Wire and black salve show that Facebook’s takedown policies have one thing in common: Inconsistency,” by Laura Hazard Owen, published Nov. 1.

Inauthentic coordinated behavior, in the U.S.? One of modern media’s mysteries the past few years has been: How does the right-wing website The Daily Wire do so incredibly well on Facebook?

The site is so successful that its co-founder, Ben Shapiro, now regularly speaks to thousands of college students in Washington, D.C. and, most recently, at the Heritage Foundation. Yet, the site has been criticized for its lack of coverage of domestic issues. The problem is that the right-wing site has a left-leaning bias: its stories often criticize left-wing figures such as President Barack Obama.

In an article for The Wall Street Journal, conservative writer Ben Shapiro said that Facebook users are “left-leaning” — and, thus, the site should have more of a bias toward the right.

The Daily Wire, a right-wing news site, is the most-shared conservative site on Facebook and has more than 30 million users. Its Facebook page has more than 11 million likes and

Verdict: Pretty realistic at the sentence level, and maybe the self-contradicting “The problem is that the right-wing site has a left-leaning bias” is actually a galaxy-brain commentary on our current politics. A solid B.

🤖   🤖   🤖

Original story: “Why I’m starting a company to build out a new model for local news in the 2020s,” by Ken Doctor, published Oct. 25.

Over the past decade here at Nieman Lab, I’ve reported a lot of news industry news. Today, I’m sharing some of my own. This morning, I’m posting the latest story on the news industry.

“As the story behind today’s story is well known, the National Security Agency was listening to telephone calls made in Germany by the German Chancellor Angela Merkel. This story comes from an important source: German journalist Ulf Laessing. A few days ago, he told me that a former U.S. intelligence officer had revealed this to him. He gave me the name of the source, but the American intelligence officer has not been seen publicly since.”

– Edward Snowden, 2013 interview with Glenn Greenwald,

the Guardian

“The NSA has been listening to the Chancellor of the German Federal Republic [of Germany] Angela Merkel’s phone calls since 2008. They also spied on the phones of other senior leaders of the G20, including

Verdict: Um, no. There’s no mention of Germany in Ken’s piece and I can’t quite figure out what would make the algorithm want to jump straight to Snowden and Merkel. The fractured lines would make this easy to spot. F.

🤖   🤖   🤖

And finally, to go full ouroboros, let’s feed the lede of this story right here into GPT-2.

Original story: This one, silly

The best weapons are secret weapons. Freed from the boundaries of observable reality, they can hold infinite power and thus provoke infinite fear — or hope. In World War II, as reality turned against them, the Nazis kept telling Germans about the Wunderwaffe about to hit the front lines — “miracle weapons” that would guarantee victory for the Reich. When the Germans couldn’t take it, they bombed their own cities, as well as a small number of Allied cities.

Today, there are many secret weapons in use around the globe — weapons that, if deployed, would have immense consequences. A weapon that can cause such destruction and pain would be a threat not just to its users, but to humanity as a whole.

The Pentagon has repeatedly refused to acknowledge the existence of secret weapons — weapons it would be a crime for the world’s media to disclose to the public. In the past, these weapons have been used to wage undeclared wars, including those in Cambodia, Laos, and Afghanistan. These weapons can kill more innocent civilians than nuclear weapons,

Verdict: I’m sure it was right about to pivot back to text-generation algorithms in a couple more grafs. But this is a very reasonable continuation of the lede (other than that first sentence). B.

🤖   🤖   🤖

GPT-2 is not coming to take the jobs of journalists, as some have worried. Paid reporting jobs generally require a certain level of factuality that the algorithm can’t match.

Is it coming for the “jobs” of fake-news writers, those Macedonian teens who until now have had to generate their propaganda (gasp!) by hand? Probably not. Whether your intention is to make money off ad arbitrage or to elect Donald Trump as president of the United States, the key value-add comes in knowing how to exploit a reader’s emotions, biases, preconceptions, and other lizard-brain qualities that can make a lie really hit home. Baiting that hook remains something an algorithm can’t reliably do. And it’s not as if “lack of realistic writing in grafs 3 through 12” was a real problem limiting most misinformation campaigns.

But I can see some more realistic impacts here. This quality of generated text could allow you to create a website with what appear to be fully fleshed-out archives — pages and pages of cogent text going back years — which might make it seem more legitimate than something more obviously thrown together.

GPT-2’s relative mastery of English could give foreign disinformation campaigns a more authentic-sounding voice than whatever the B-team at the Internet Research Agency can produce from watching Parks & Rec reruns.

And the key talent of just about any algorithm is scale — the ability to do something in mass quantities that no team of humans could achieve. As Larry Lessig wrote in 2009 (and as Philip Bump reminded us this week), there’s something about a massive data dump that especially encourages the cherry-picking of facts (“facts”) to support one’s own narrative. Here’s Bump:

In October 2009, he wrote an essay for the New Republic called “Against Transparency,” a provocative title for an insightful assessment of what the Internet would yield. Lessig’s argument was that releasing massive amounts of information onto the Internet for anyone to peruse — a big cache of text messages, for example — would allow people to pick out things that reinforced their own biases…

Lessig’s thesis is summarized in two sentences. “The ‘naked transparency movement’…is not going to inspire change,” he wrote. “It will simply push any faith in our political system over the cliff”…

That power was revealed fully in the 2016 election by one of the targets of the Russia probe: WikiLeaks. The group obtained information stolen by Russian hackers from the Democratic National Committee and Hillary Clinton’s campaign chairman, John Podesta…In October, WikiLeaks slowly released emails from Podesta…Each day’s releases spawned the same cycle over and over. Journalists picked through what had come out, with novelty often trumping newsworthiness in what was immediately shared over social media. Activists did the same surveys, seizing on suggestive (if ultimately meaningless) items. They then often pressured the media to cover the stories, and were occasionally successful…

People’s “responses to information are inseparable from their interests, desires, resources, cognitive capacities, and social contexts,” Lessig wrote, quoting from a book called “Full Disclosure.” “Owing to these and other factors, people may ignore information, or misunderstand it, or misuse it.”

If you wanted to create something as massive as a fake cache of hacked emails, GPT-2 would be of legitimate help — at least as a starting point, producing something that could then be fine-tuned by humans.

The key fact of the Internet is that there’s so much of it. Too much of it for anyone to have a coherent view. If democracy requires a shared set of facts — facts traditionally supplied by professional journalists — the ability to flood the zone with alternative facts could take the bot infestation of Twitter and push it out to the broader world.

Illustration by Zypsy ✪ used under a Creative Commons license.
