The voices of NPR: How four women of color see their roles as hosts
https://www.niemanlab.org/2023/05/the-voices-of-npr-how-four-women-of-color-see-their-roles-as-hosts/
Thu, 04 May 2023 16:42:05 +0000

This story was originally published by The 19th.

Juana Summers is struck by “an incredible sense of responsibility.”

She took over one of the host chairs at “All Things Considered” in June 2022 after many years as a political correspondent for NPR. Now almost a year into her new role, she sees herself as a guide to making the news program — and NPR in general — a place where people can feel represented.

Part of that, Summers believes, starts with the audience knowing her.

“I am never leaving at the door that I am a Black woman, I am a stepparent, I am a woman who grew up in the Midwest and lived in a low- and middle-income home growing up, and who went to private religious schools,” said Summers, 34. “All of those dynamics are things that inform how I do my journalism, and the degree to which I lean into any part of that varies from story to story.”

Summers is one of four women of color — three of them Black — who have taken over hosting duties at flagship NPR programs over the past year. Leila Fadel moved to “Morning Edition” in January 2022, joined just over a year later by veteran NPR host Michel Martin. Ayesha Rascoe became the host of “Weekend Edition Sunday” in March 2022. Their roles extend beyond delivering the headlines: each is an editorial leader with immense influence over what, and who, gets covered.

“It would seem that after NPR top executives and news managers saw its three most popular women of color hosts depart within the last year, they would have pondered seriously why these stellar female journalists left, with serious determination to make progressive changes at the public network to recruit and retain women of color hosts,” said Sharon Bramlett-Solomon, an associate professor at Arizona State University and an expert on race and gender in broadcast journalism.

Bramlett-Solomon added that leadership at NPR now faces a unique challenge in showing their commitment “to move forward with dramatic and meaningful transformation in programming inclusion and not simply window dressing on the set.”

Whitney Maddox, who was hired for the newly created diversity, equity and inclusion manager role in January 2021, said that in her role, she has especially focused on women of color at NPR: “What’s happening with them? How are they doing? What do they need?”

She has created a monthly space for women of color at the organization to meet and share about their day-to-day experiences and voice what resources they need. Her work also includes consulting to the flagship programs and checking in on their workplace cultures and how people are supported, including how stories are pitched and edited.

Maddox also started Start Talking About Race, or STAR, which is a twice-monthly event open to everyone in the organization. From these conversations, Maddox said she’s already seen an impact.

“This is moving beyond this space into how people are pitching their stories,” Maddox said. “Editors have come back to me and said, ‘A point that somebody made in STAR — I used that when I brought up a point of how we should think differently about how to source this story.’ There is a process, there is time, there is changing people’s hearts and raising their consciousness to understand why this work is important.”

The leadership of Fadel, Martin, Rascoe, and Summers is a key part of acting on conversations surrounding equity. Their roles in the host chairs are a signal of NPR’s commitment to reflecting its audience, said Eric Marrapodi, NPR’s vice president of news programming. “Our job is to be public media for the entire public,” he said. “We’re here for everybody. Our job is to hold a mirror up to society. And this is what society looks like.”

Fadel, 41, is acutely aware that her presence in the host chair is a representation of what society looks — and sounds — like.

“I just never thought it could happen. Clearly women have held host seats at NPR long before me, and people of color have moved up through the organization, but I just never imagined it,” she said. “How could I have imagined this for myself? There was no one who looked like me, who was an Arab woman, who was Muslim, doing this job.”

“I think about that responsibility and I take it very seriously,” Rascoe said. She said she thinks about her late grandmother, a sharecropper from North Carolina, constantly as she works. “I think about her and I think about my whole family, and I never want to make them not proud of me. I never want them to look at what I’m doing and say, ‘What is she out here doing? How is she representing us? We didn’t raise her that way.’”

Even the sound of her voice has been true to Rascoe’s roots: As her executive producer Sarah Lucy Oliver said, “Ayesha sounds like herself. She says she sounds like a Black woman from Durham, North Carolina.”

Oliver described what she calls “NPR voice” — a low register, stripped of regional dialect, that reads as white and male — as the prevailing sound of NPR. Rascoe, she said, is a disruption to that.

“For decades, listeners have been accustomed to a particular kind of NPR voice. You can run through the dial and figure out when you’ve landed on an NPR member station,” Oliver said.

Hearing a voice like Rascoe’s “is a definite change in direction, and is exactly the kind of voice NPR wants to bring to the air,” Oliver said. “This is part of the real world. People speak differently. People have different regional accents. People use different kinds of colloquialisms. NPR is trying to sound more like the real world now.”

But Rascoe has had to reckon with the way audiences — used to that staid, white, and masculine NPR voice — perceive her. She has received racist listener feedback: coded messages urging her to “sound professional” and telling her they don’t like her voice.

Rascoe says this is “all just a way of saying, ‘You are Black, and you are a Black woman from the South, and therefore you are stupid.’” When she first arrived at NPR in 2018 from Reuters, where she was a White House correspondent, it was a shock. “I will not try to pretend that it didn’t hurt and that it wasn’t frustrating,” she said.

Though Rascoe said she has received nothing but support from her colleagues and managers, her experience speaks to the dynamic at play in newsrooms nationwide that aspire to become more representative in their journalism.

The work of “disrupting the whiteness that journalists of color so often feel in white newsrooms” is hard and real, Bramlett-Solomon said — but doable, especially when there is real, on-the-ground support from management, backed by actual dollars.

Rascoe is aware that simply by being on air in such a high-profile way, she is doing the work of not only changing NPR, but broadening American audiences’ expectations of what credibility in the news sounds like.

“I have a voice that is not a voice that people necessarily expect — but it’s mine…Hopefully, it helps expand their idea of what authority, what professionalism and what intelligence can sound like,” Rascoe said.

Martin, 63, first joined NPR in 2006 to launch “Tell Me More,” an interview-focused show that aired on NPR member stations nationwide from 2007 to 2014. She then became host of “Weekend All Things Considered,” a position she held until joining “Morning Edition” as one of the hosts in March.

After decades in journalism, and many years at NPR specifically, Martin is deliberate and thoughtful in her imagining of the organization’s future and her role in it. As someone with a long history with the organization — one that includes using her voice to help push NPR forward on how it thinks about and covers race — she brings to her new role the ability to hold the past in the present while continuing to look forward. She said she’s constantly thinking not only about her time at NPR, but the ways journalists of color have worked to serve communities throughout history in how she approaches her role today.

“I just want us to keep getting better,” Martin said. “I want us to keep getting better because if you aren’t, then you are not growing. If you are not growing, then you’re dying.”

Martin steps into leadership at the flagship program at a time when the political climate often makes it tough for journalists to report. Martin says she thinks journalists play an especially important role “to help us understand each other’s experiences.” Without helping audiences dig into the nuances of why people believe what they do — and why the political is so personal for so many — journalists aren’t doing the job they are charged with executing.

“Yes, the words are changing and we’re all having to learn to use different language and to recognize different identities that were perhaps not part of our own experiences before. But that’s life. That’s learning. That’s education. That’s the news,” she said.

Martin said she takes inspiration from the way that Spanish-language and Black newspapers have crafted their coverage strategies.

“The origin of these news outlets was not just to talk about the politics that particularly affected these communities, but also to help people understand how to live in the new world that they were in: telling people things like how to register to vote, how to register your kids for school,” Martin said.

It is exactly this kind of work — often dismissed by editors and audiences alike as “unserious” for being service-oriented or because its target audience is those from historically underrepresented backgrounds — that Martin has always prioritized, and sees as essential now more than ever.

“What I want the most is for us to keep moving forward and not lose sight of our mission, which is to serve the public…I want us to keep doing our jobs because I think the country really needs us,” Martin said. “I want us to get more honest and stronger and more clear in how we serve people by helping them understand the world that does not lead them to carry the baggage of waking up and saying, ‘Whose side am I on today?’ That’s not who we are and I think not being that is so important.”

To think about equitable journalism means to consider the weight and totality of experience that exists behind each voice that feels unheard. That’s something that Martin’s “Morning Edition” co-host Fadel thinks a lot about, too. Fadel said she sees a major part of her job as being someone who can actively make and support a “safe space” for a range of voices, opinions, and perspectives in the editorial process.

Being fully present as herself in the host chair is a critical element of that, Fadel said — while also acknowledging that simply sitting in the seat doesn’t mean her newsroom, or any other, is done with the work of thinking holistically about what representation means in journalism.

“Change doesn’t happen because one person sits in a chair,” Fadel said. “It requires action at every level. This is happening at NPR, but of course there is more work to be done. Diversity isn’t just who is in a newsroom, but whose voices are in a story, and there is still a lot of work to be done there. We have a very diverse newsroom, but we need to always make sure it is more than white men whose voices get to talk about what happened.”

Fadel said she thinks all the time about how her own journalism can help change other people’s — and predominantly white listeners’ — perceptions.

“I didn’t see myself and my family in the stories I saw on the news. My father is from Lebanon and I grew up in Saudi Arabia. Talking about people like my family in the news meant only talking about people who were ensconced in conflict. It was all conflict. But there are real people who exist in these places where there is conflict. And they are nuanced and may all experience pain differently and joy differently and have lives outside of the conflict going on around them.”

Sitting in the host chair at “Morning Edition” is a way that she feels she can change this kind of sentiment across journalism, writ large.

“Representation matters,” she said.

Jennifer Gerson is a reporter at The 19th, where this story was originally published.

Ayesha Rascoe, Michel Martin, Leila Fadel and Juana Summers, four women of color who have taken over host chairs at flagship NPR programs over the past year. Photo by Lexey Swall for The 19th.

U.S. politicians tweet much more misinformation than those in the U.K. and Germany
https://www.niemanlab.org/2022/09/u-s-politicians-tweet-much-more-misinformation-than-those-in-the-u-k-and-germany/
Thu, 22 Sep 2022 13:22:33 +0000

Building on earlier work that showed how former U.S. president Donald Trump could set the political agenda using Twitter, we conducted a systematic examination of the accuracy of the tweets of politicians in three countries: the U.S., the U.K., and Germany.

Along with colleagues David Garcia, Fabio Carrella, Almog Simchon, and Segun Aroyehun, we collected all available tweets from former and current members of the U.S. Congress, the German parliament, and the British parliament. Combined, we collected more than 3 million tweets posted from 2016 to 2022.

Politicians from mainstream parties in the U.K. and Germany post few links to untrustworthy websites on Twitter, and this has remained constant since 2016, according to our new research. By contrast, U.S. politicians post a much higher percentage of untrustworthy content in their tweets, and that share has been increasing steeply since 2020.

We also found systematic differences between the parties in the U.S.: Republican politicians shared links to untrustworthy websites more than nine times as often as Democratic politicians.

For Republican politicians, overall around 4% (one in 25) of links came from untrustworthy sites, compared with around 0.4% (one in 250) among Democratic politicians. That gap has widened in the last few years. Since 2020, more than 5% of tweets from Republican members of Congress contained links to untrustworthy information. Democratic politicians predominantly share information that is trustworthy, we found.

Over the five-year period we studied, mainstream elected U.K. members of parliament shared only 74 links to misinformation (0.01% of all their tweets), compared with 4,789 (1.8%) from elected mainstream U.S. politicians and 812 (1.3%) from German politicians.

To determine the trustworthiness of information shared by the politicians, we extracted all links to external websites contained in the tweets and then used the NewsGuard database to assess the trustworthiness of the domain being linked to. NewsGuard curates a large number of sites in numerous different countries and languages and evaluates them along nine criteria that characterize responsible journalism — for example, whether a site publishes corrections and whether it differentiates between opinion and news.
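As a rough illustration of that pipeline, here is a minimal sketch in Python: extract the domain from each link found in a tweet, look it up in a trust-rating table, and compute the share of rated links that fall below a trustworthiness threshold. The domain names and scores below are invented stand-ins, not NewsGuard data (NewsGuard's actual database and criteria are proprietary); the threshold of 60 is only an assumed cutoff for the sketch.

```python
from urllib.parse import urlparse

# Hypothetical trust ratings keyed by domain, standing in for a
# database like NewsGuard's (all names and scores invented).
TRUST_SCORES = {
    "reliable-news.example": 92.5,
    "shady-site.example": 30.0,
    "opinion-blog.example": 57.5,
}

def domain_of(url: str) -> str:
    """Extract the host from a link, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def untrustworthy_share(links: list[str], threshold: float = 60.0) -> float:
    """Fraction of rated links pointing at domains scored below threshold.

    Links to domains absent from the database are skipped, mirroring the
    fact that only rated domains can be classified.
    """
    rated = [TRUST_SCORES[d] for d in map(domain_of, links) if d in TRUST_SCORES]
    if not rated:
        return 0.0
    return sum(score < threshold for score in rated) / len(rated)

links = [
    "https://www.reliable-news.example/story/1",
    "https://shady-site.example/alarming-claim",
    "https://reliable-news.example/story/2",
    "https://unrated-site.example/whatever",  # not in the database: skipped
]
print(untrustworthy_share(links))  # 1 of 3 rated links is untrustworthy
```

Run over each politician's full link history, the same ratio yields the per-party percentages reported above.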

Our team looked at members of parliament from the U.K.’s Conservative and Labour parties and from Germany (Greens, SPD, FDP, CDU/CSU), as well as U.S. members of Congress.

Members of the conservative parties in Germany (CDU/CSU) and the U.K. (Conservatives) shared links to untrustworthy websites more frequently than their counterparts in the center or center-left. However, even conservative parliamentarians in Europe were more accurate than U.S. Democrats, with only around 0.2% (one in 500) of links from European conservatives being untrustworthy.

We repeated our analyses using a second database of news website trustworthiness instead of NewsGuard. This robustness check was important to minimize the risk of possible partisan bias in what is considered “untrustworthy.”

The second database was compiled by academics and fact checkers such as Media Bias/Fact Check. Reassuringly, the results matched our primary analyses and we found the same trends.

The world has been awash with concern about the state of our political discourse for many years now. There is ample justification for this concern, given that 30% to 40% of Americans believe the baseless claim that the presidential election of 2020 was “stolen” by President Biden, and that around 10% of the British public believes in at least one conspiracy theory surrounding Covid-19.

Much of the discussion of the misinformation problem — and much of the blame — has focused on social media, and in particular the algorithms that curate our news feeds and that may nudge us toward more extreme and outrage-provoking content. There is now considerable evidence that social media has been harmful to democracy in at least some countries.

However, social media is not the only source of the misinformation problem. Donald Trump made more than 30,000 false or misleading claims during his presidency and there are political leaders in Europe who have a poor track record.

Yet compared with the plethora of research that has focused on the role of social media, and the relationship between technology and democracy more generally, there have been few attempts to systematically characterize the role of political leaders in the dissemination of low-quality information.

Our results are interesting in light of several recent analyses of the American public’s news diet, which have repeatedly shown that conservatives are more likely to encounter and share untrustworthy information than liberals. To date, the origins of that difference have remained disputed.

Our results contribute to a potential explanation if we assume that what politicians say sets the agenda and resonates with members of the public. By sharing misinformation, Republican members of Congress not only directly provide misinformation to their followers, but also legitimize the sharing of untrustworthy information more generally.

Stephan Lewandowsky is chair of cognitive psychology at the University of Bristol in the U.K. Jana Lasser is a postdoctoral researcher at Graz University of Technology in Austria. This article is republished from The Conversation under a Creative Commons license.

Vaccinating people against fake news
https://www.niemanlab.org/2022/09/vaccinating-people-against-fake-news/
Thu, 01 Sep 2022 13:00:45 +0000

My first move in the online game Harmony Square is to transform myself into a fake-news mastermind. “We hired you to sow discord and chaos,” my fictional boss informs me in a text box that pops up on a stark blue background. “We’ve been looking for a cartoonishly evil information operative. You seemed like a good fit.”

Through a series of text-box prompts, the game goads me to inflame my pretend social media audience as much as possible. I stoke an online firestorm with a ginned-up takedown article about a fictitious local politician: “PLOOG LIED ABOUT PAST—SUPPORTED ANIMAL ABUSE IN COLLEGE!” At management’s behest, I unleash an army of bots to comment approvingly on my story, driving more traffic to it. As I escalate my crusade against Ploog, the game cheers me on.

Harmony Square is one of several games University of Cambridge researchers have developed to bolster people’s resistance to disinformation. “What we thought would be interesting was having people make their own fake news in a safe environment,” says Cambridge psychologist Jon Roozenbeek, a lead researcher on the games project with fellow psychologist Sander van der Linden. “The goal is to prevent unwanted persuasion.”

These games rest on a single, overarching premise: You can inoculate people against fake news by exposing them to small amounts of such content — much as low doses of live virus can vaccinate people against a disease — if you catch them before they are fully infected by conspiratorial belief. So far, games like Harmony Square are among the best-developed vehicles for disinformation inoculation. Researchers are also proposing and testing other, related strategies, including inoculating students in classroom settings, having people cook up their own conspiracy theories, and creating online classes that teach how to identify common fake-news tactics.

Reaching enough people to achieve something akin to herd immunity against disinformation is a significant challenge, however. In addition to bolstering people’s BS detection skills, a broad immunity-building campaign would need to neutralize fake news’s strong emotional pull. “Even as this approach of science and inoculation takes off, the problem has to be solved at the cultural level,” says Subramaniam Vincent, director of journalism and media ethics at Santa Clara University’s Markkula Center. “So many efforts have to come together.”

Mentally vaccinating people against fake news goes back to the 1960s, when psychologist William McGuire proposed making people resistant to propaganda using a strategy he called a “vaccine for brainwash.” Much as weakened viruses can teach the immune system to recognize and fight off disease, alerting people to false arguments — and refuting those arguments — might keep them from succumbing to deception, McGuire reasoned.

Take, for example, the public-health recommendation that everyone visit a doctor every year. In an experiment, McGuire gave people counterarguments against going to the doctor annually (say, that regular visits promote health anxiety and actually lead people to avoid the doctor). Then he poked holes in those counterarguments (in reality, regular doctor visits reduce undue health anxiety). In McGuire’s studies, people became better at resisting false arguments after their beliefs were challenged.

The inoculation messages warned people of impending attempts to persuade them, causing them to recognize that they might be vulnerable. The brain is wired to mount a defense against apparent threats, even cognitive ones; when challenged, people therefore seek fresh ways to protect their beliefs, much as they’d fight back if someone attacked them in a bar. Threat is a critical component of inoculation, says Josh Compton, a Dartmouth speech professor who specializes in inoculation theory. “Once we experience threat, we are motivated to think up counterarguments folks might raise and how we’ll respond,” he says.

In the 1980s and 90s, experts put inoculation theory into practice with fairly limited goals, like preventing teenage smoking, and limited but promising outcomes. It wasn’t until the mid-2010s, as fake news gained traction online, that Cambridge’s Van der Linden was inspired to take the inoculation concept to a higher level. Like McGuire, he was convinced that “prebunking,” or sensitizing people to falsehoods before they encountered them, was better than debunking fake stories after the fact. Multiple studies show that once someone has internalized a nugget of false information, it’s very hard to get that person to disavow it, even if the original creator posts a correction.

Van der Linden found that focusing on a single issue, as McGuire had done, has its limits. Warning people about lies on a particular subject like smoking may help them fend off falsehoods about that one topic, but it doesn’t help them resist fake news more broadly. So Van der Linden started focusing on building people’s general immunity by cluing them in to the persuasion techniques in every fake-news creator’s toolbox.

In a series of mostly online studies, Van der Linden gave people general warnings about bad actors’ methods. For instance, he told them that politically motivated groups were using misleading tactics, like circulating a petition signed by fake scientists, to convince the public that there was lots of scientific disagreement about climate change. When Van der Linden revealed such fake-news tactics, people readily understood the threat and, as a result, got better at sniffing out and resisting disinformation.

The idea of turning fake-news inoculation into something fun was conceived in 2016 at a bar in the Netherlands. Over beers with friends, Roozenbeek batted around the possibility of using a game to combat false information online. He created a prototype, which he called Bad News. As he researched the idea further, Roozenbeek came across Van der Linden’s studies, and the two agreed to work together on more advanced online inoculation games. Their collaboration expanded on Bad News, then added Harmony Square, which is now freely available at harmonysquare.game.

In tongue-in-cheek fashion, the games introduce players to a host of common fake-news tactics. As I type a fake headline about a local politician in Harmony Square, my boss stresses the importance of stoking people’s fear with inflammatory language. “You missed some. Do better,” she scolds when I don’t include enough incendiary words like “corrupt” or “lie” in my headline. “Remember: Use words that upset people.” The game also goads me to create a website that claims to be a legitimate news outlet, sucking people in by projecting the appearance of credibility.

The argument against these dishonest tactics is embedded in the game play. The more disinformation you spread, the more unrest you sow in the fictional town of Harmony Square. By the end of the game, normally placid townspeople are screaming at one another. As I play, I get caught up in the narrative of how fake-news tactics undermine community from within.

To evaluate whether the games are truly effective, Roozenbeek and Van der Linden surveyed about 14,000 people before and after they played Bad News. After playing the game to the end, people were better overall at spotting falsehoods, rating the reliability of fake tweets and news reports about 20 percent lower than they had before. The effects lasted for more than two months. These results are in line with those of other anti-disinformation tactics such as correcting or fact-checking suspect content, according to a meta-analysis of such interventions by researchers from the University of Southern California.
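For intuition, the reported effect is a simple pre/post percent change in mean reliability ratings. A toy calculation with invented ratings on a 1-to-7 scale (not the Cambridge team's actual data):

```python
# Mean reliability ratings of the same fake items, before and after
# playing the game. All numbers are made up for illustration.
def mean(xs):
    return sum(xs) / len(xs)

before = [4.0, 5.0, 4.5, 5.5, 4.0]  # ratings of fake items pre-game
after  = [3.2, 4.1, 3.6, 4.3, 3.3]  # the same items, post-game

# Relative drop in mean rating: (pre - post) / pre.
drop = (mean(before) - mean(after)) / mean(before)
print(f"{drop:.0%}")  # prints "20%"
```

A roughly 20% relative drop of this kind is what "rating the reliability of fake tweets and news reports about 20 percent lower" describes.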

Social scientists see promise in the Cambridge team’s efforts to inoculate people against fake news. “Walking in the perpetrators’ shoes, so to speak, can be very effective for understanding how disinformation can be produced and some reasons why,” says Robert Futrell, a sociologist and extremism researcher at the University of Nevada, Las Vegas, although he has not reviewed specific data from Bad News or Harmony Square.

Even if they work well, games alone will not be enough to inoculate whole populations against online disinformation. Several million people have played the Cambridge team’s offerings so far, according to Roozenbeek, a tiny fraction of the global population. Daniel Jolley, a psychologist at the University of Nottingham, notes that large-scale inoculation will have to be implemented in a wide range of settings, from classrooms to community centers. Ideally, such programs should reach students during their school years, before they have been extensively exposed to fake news, Stanford education professor Sam Wineburg has argued.

Finland is the first country to try inoculating people against fake news on a national scale. As Russian fake news began making its way across the border into Finland in 2014, the Finnish government developed a digital literacy course for state-run elementary and high schools. The curriculum, still in use, asks students to become disinformation czars, writing their own off-the-wall fake-news stories. As they learn how fake news is produced, students also learn to recognize and be skeptical of similar content in the real world.

Elsewhere, researchers and organizations are experimenting with inoculation efforts on a smaller scale. In Australia, communications professor John Cook designed an online course in 2015 to teach people how to detect common disinformation tactics used by climate deniers. So far, more than 40,000 people have enrolled in Cook’s course.

In the United States, nonprofits like the News Literacy Project teach middle and high school students how to distinguish between fact and fiction in the media. NLP has developed a series of 14 interactive lessons, some of which walk students through fake-news creation and give examples of bogus stories likely to spread like wildfire (“Fireman Suspended & Jailed by Atheist Mayor for Praying at Scene of Fire”). More than 40,000 U.S. educators have signed up to work with NLP so far. (The Science Literacy Foundation, which supports OpenMind, where this piece originally ran, is also a financial supporter of the News Literacy Project.)

Adding to the challenge of fake-news inoculation, a serious literacy campaign must do more than train people to ferret out falsehoods. It must also counter the emotional pull of those falsehoods. People tend to wade into conspiracies and false narratives when they feel scared and vulnerable, according to Jolley. When their brains flood with stress hormones, their working memory capacity takes a hit, which can affect their critical thinking. “You’ve got the skills” to mentally counter conspiracy theories, Jolley says, “but you may not be able to use them.” Research shows that people who feel socially isolated are also more likely to believe in conspiracies.

By contrast, the more fulfilled and capable people feel, the less vulnerable they are to disinformation. Jolley suggests that community-building ventures in which people feel part of a larger whole, like mentoring programs or clubs, could help individuals grow psychologically secure enough to resist the pull of a conspiracy theory. Making it easier to access mental health services, he adds, might also support people’s well-being in ways that improve their immunity to common fake-news tactics.

As the disinformation-vaccine movement grows, one crucial unknown is just how much inoculation is enough. “What’s the equivalent of herd immunity for human society?” Vincent asks. “Do we have to have inoculation for, let’s say, 80% of a country in order for the spread of misinformation to be mitigated?” Calculating that percentage, he notes, is a complex undertaking that would have to account for different ways of reaching people online and the multiple strategies used to counter fake news.

Given how challenging it will be to defang disinformation, it seems fitting that the Cambridge team’s Harmony Square game builds to an open-ended finish. When I complete the game’s final chapter, everyone in town is still fighting over the content my fake news empire churns out, and it’s unclear whether the destruction I’ve caused can be reversed. Surveying the damage, my boss applauds me. “They’re all at each other’s throats now.”

Elizabeth Svoboda is a science writer in San Jose, Calif., and the author of What Makes a Hero?: The Surprising Science of Selflessness. This story originally appeared on OpenMind, a digital magazine tackling science controversies and deceptions, which Nieman Lab covered here.

Bad actors are returning to old-school methods of sowing chaos
https://www.niemanlab.org/2021/01/bad-actors-are-returning-to-old-school-methods-of-sowing-chaos/
Thu, 07 Jan 2021 13:15:58 +0000

Ed. note: Here at Nieman Lab, we’re long-time fans of the work being done at First Draft, which is working to protect communities around the world from harmful information (sign up for its daily and weekly briefings). We’re happy to share some First Draft stories with Lab readers.

As millions of people around the world were under lockdown this year, social media became a lifeline for many. While researchers and journalists were focused on mis- and disinformation flourishing on the main social media platforms, information disruptors returned to old-school methods of sowing chaos and confusion through leaflets, billboards, emails, SMS, and robocalls.

The pandemic became an opportunity for the dissemination of Covid-19 hoaxes and conspiracy theories through mailboxes straight into people’s homes. Leaflets sent out in the UK claimed that the government, the media and National Health Service representatives were attempting to “create the illusion of an unprecedented deadly pandemic” to justify “extreme lockdown measures.” People living near the Canberra Hospital in Australia received flyers alleging that Covid-19 was being spread by the government through the water supply, and that vaccinations would contain tracking devices. Misleading claims about the virus were also printed on billboards and posters: An Indian example promoted essential oils to protect people from Covid-19. Two U.S. billboards bore the message that “It’s NOT about a ‘VIRUS’! It’s about CONTROL” alongside an image of a crash-test dummy wearing a mask.

People received emails from fraudsters posing as Colombia’s Ministry of Health, claiming recipients were required to take mandatory Covid-19 tests. Similar attempts to gain access to personal information were conducted over text messages and phone calls, such as in South Korea, which saw a rise in “smishing”: scam text messages that spread false information about Covid-19 cures and offered free masks in exchange for personal information.

The U.S. presidential election was a greatest-hits compilation of the old-school genre, with unsolicited, misinformation-filled newspapers such as The Epoch Times sent to households across the country, unofficial “ballot boxes” erected on sidewalks, and robocalls telling people to “stay home, stay safe” on Election Day that reached millions.

As the social media platforms become more active in tackling false claims around politics and health, disinformation agents are searching for new ways to spread their messages.

Darren Linvill, an associate communications professor at Clemson University, told First Draft: “If you want to spread disinformation, you don’t go where everybody is watching. You go somewhere where nobody is looking.” Online and offline channels are not mutually exclusive to disinformation actors, who often use multiple platforms to spread untruths, Linvill said. “We frequently saw content from text messages that were screen-grabbed and shared on social media.”

For purveyors of disinformation, one advantage of offline distribution is that provenance can be obscured — physical copies don’t leave digital traces that could point people to the source. Amid worldwide protests against systemic racism this year, misleading flyers designed to undermine Black Lives Matter were circulated in the US and the UK. In both cases, it was unclear who created the leaflets. As Full Fact noted, “almost anybody can make a sticker that looks like an official one, whether they may support or oppose the goals of the group in question.”

And weeks before the U.S. election, suspicious flyers threatening Trump supporters were sent to residents in New Hampshire. Photos of these flyers, whose origin and authenticity were unknown at the time, were uploaded and amplified by social media influencers and partisan groups. Some social media posts with high engagement falsely claimed residents in Kansas had received the letters. Kansas City police investigated the rumor and reported that no resident had received this message on paper, but it did appear on social media.

As mis- and disinformation researchers know, leaflets, billboards, emails, SMS and robocalls present logistical challenges. It’s impossible to be everywhere at once. Unless these messages are flagged by the recipient, they can remain under the radar. That makes it challenging to determine how far these hoaxes are spreading and — if the authors choose to remain anonymous — who is behind them.

ProPublica senior technology journalist Jack Gillum reported on a robocall operation that reached at least 800,000 residents in key states and may have affected voter turnout in the 2020 U.S. election. “When it comes to robocalls, getting data for that is really difficult,” he said. “I didn’t know what data is easily available and that we can confirm that sort of stuff, so basically I had to rely on U.S. government sourcing.”

Perhaps the most high-profile example in 2020 was the case of a fraudulent email targeting Democratic voters, sent before the presidential election. It prompted a press conference hosted by the FBI and the nation’s director of national intelligence. Experts say it was the work of Iranian hackers posing as the far-right Proud Boys group. Evie Sorrell, an undergraduate student at the University of Pennsylvania who lives in Philadelphia, was among those who received a threatening email telling her to vote for Trump.

“When I first got the email, I was like, ‘Huh, that’s really weird. Also pretty illegal,’” Sorrell said. “And then I realized that if it was real, then they might have information on me.”

She said the episode, including learning that Iran might have been behind it, left her feeling violated. She speculated that her knowledge of internet culture and digital literacy skills might have put her at an advantage: “You could definitely be swayed to, at the least not vote, or take it very seriously and vote for Trump because you’re worried for your life and safety.”

As Sorrell’s experience shows, many of these messages can feel uncomfortably intimate to the recipient, as they were sent directly to homes or mobile phone numbers. Linvill says, “They have the potential to be more persuasive, simply because they’re more personal. Because they’re sent to you directly, as opposed to messages on social media that you scroll down through and it’s one message in a list of messages.”

There are laws regulating false advertising and broadcasting materials, but these vary from country to country, as does enforcement. In January, the U.S. government took additional steps to limit the scourge of illegal robocalls, putting the onus on phone service providers instead of consumers. But days before the U.S. election, voters were still flooded with text messages containing damaging disinformation narratives, as The Washington Post reported. Peer-to-peer texting platforms used during elections are not as clearly covered by the anti-robocall rules, as the companies contend they are not an automated service.

In 2021, it’s important to remember that misinformation is not just happening on the major social platforms. Journalists and researchers will need to devise ways to understand the complexities of scope and impact, beyond just hoping concerned citizens will report problematic emails and phone calls.

Bethan John and Keenan Chen are reporters for First Draft.

Rotary phone by piperfirst used under a Creative Commons license.

How to reduce the spread of fake news — by doing nothing
https://www.niemanlab.org/2021/01/how-to-reduce-the-spread-of-fake-news-by-doing-nothing/ (January 5, 2021)

When we come across false information on social media, it is only natural to feel the need to call it out or argue with it. But my research suggests this might do more harm than good. It might seem counterintuitive, but the best way to react to fake news — and reduce its impact — may be to do nothing at all.

False information on social media is a big problem. A UK parliament committee said online misinformation was a threat to “the very fabric of our democracy.” It can exploit and exacerbate divisions in society. There are many examples of it leading to social unrest and inciting violence, for example in Myanmar and the United States.

It has often been used to try to influence political processes. One recent report found evidence of organized social media manipulation campaigns in 48 different countries, including the United States and United Kingdom.

Social media users also regularly encounter harmful misinformation about vaccines and virus outbreaks. This is particularly important with the roll-out of Covid-19 vaccines because the spread of false information online may discourage people from getting vaccinated — making it a life or death matter.

With all these very serious consequences in mind, it can be very tempting to comment on false information when it’s posted online — pointing out that it is untrue, or that we disagree with it. Why would that be a bad thing?

Increasing visibility

The simple fact is that engaging with false information increases the likelihood that other people will see it. If people comment on it or quote-tweet it, even to disagree, the material will be shared to our own networks of social media friends and followers.

Any kind of interaction at all — whether clicking on the link or reacting with an angry face emoji — will make it more likely that the social media platform will show the material to other people. In this way, false information can spread far and fast. So even by arguing with a message, you are spreading it further. This matters, because if more people see it, or see it more often, it will have an even greater effect.
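The mechanism can be made concrete with a toy engagement-ranked feed. The weights below are invented for illustration (real ranking systems are proprietary and far more complex), but they share the key property the paragraph describes: every interaction, including an angry reaction or a rebuttal comment, counts as positive ranking signal.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int = 0
    angry: int = 0
    comments: int = 0
    shares: int = 0

def engagement_score(p: Post) -> float:
    # Hypothetical weights: 'angry' reactions and rebuttal comments
    # boost a post exactly like approving interactions do.
    return 1 * p.likes + 1 * p.angry + 2 * p.comments + 3 * p.shares

false_post = Post("shocking fake claim", angry=40, comments=25)  # all hostile
benign_post = Post("local news update", likes=50)

ranked = sorted([false_post, benign_post], key=engagement_score, reverse=True)
# The false post outranks the benign one on hostile engagement alone.
```

Even though every one of the false post's interactions is a rebuttal or an angry reaction, it scores 90 against the benign post's 50, so a feed built this way shows it to more people.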

I recently completed a series of experiments with a total of 2,634 participants looking at why people share false material online. In these, people were shown examples of false information under different conditions and asked if they would be likely to share it. They were also asked about whether they had shared false information online in the past.

Some of the findings weren’t particularly surprising. For example, people were more likely to share things they thought were true or were consistent with their beliefs.

But two things stood out. The first was that some people had deliberately shared political information online that they knew at the time was untrue. There may be different reasons for doing this (trying to debunk it, for instance). The second thing that stood out was that people rated themselves as more likely to share material if they thought they had seen it before. The implication is that if you have seen things before, you are more likely to share when you see them again.

Dangerous repetition

It has been well established by numerous studies that the more often people see pieces of information, the more likely they are to think they are true. A common maxim of propaganda is that if you repeat a lie often enough, it becomes the truth.

This extends to false information online. A 2018 study found that when people repeatedly saw false headlines on social media, they rated them as being more accurate. This was even the case when the headlines were flagged as being disputed by fact checkers. Other research has shown that repeatedly encountering false information makes people think it is less unethical to spread it (even if they know it is not true, and don’t believe it).

So to reduce the effects of false information, people should try to reduce its visibility. Everyone should try to avoid spreading false messages. That means that social media companies should consider removing false information completely, rather than just attaching a warning label. And it means that the best thing individual social media users can do is not to engage with false information at all.

Tom Buchanan is a professor of psychology at the University of Westminster. This article is republished from The Conversation under a Creative Commons license.The Conversation

Depiction of a black hole by The European Southern Observatory used under a Creative Commons license.

In 2021, it’s time to refocus on health and science misinformation
https://www.niemanlab.org/2020/12/in-2021-its-time-to-refocus-on-health-and-science-misinformation/ (December 8, 2020)

Preprints are scientific reports that are uploaded to public servers before the results have been vetted by other researchers, the process known as peer review. The purpose of preprints is to allow researchers to get a heads-up on new research, and to encourage others to try to replicate and build on the results. In 2020, these preprints reached a larger audience than usual because of Twitter bots such as bioRxiv and Promising Preprints, which automatically tweeted new publications, giving researchers and journalists immediate access to non-peer-reviewed Covid-19 studies. Unfortunately, these studies, often with small sample sizes or very preliminary research, were published by media outlets or re-shared on social media without the necessary caveats, amplifying early findings as fact.

For 2021, journalists and communication professionals in the field of misinformation should ensure they include necessary disclaimers when reporting on non-peer-reviewed research, and more frequently consider whether reporting on such early research benefits the public. Similarly, platforms should train fact checkers and internal content moderation teams on how to respond to health and science misinformation. Many of the fact checkers in Facebook’s Fact-Checking Project are excellent at debunking political claims or viral misinformation. Few fact checkers have deep health and science expertise in-house, yet they are increasingly being asked to work in these fields.

Educate journalists about science and research

The current information ecosystem is no longer structured in a linear fashion, dominated by gatekeepers using broadcast techniques to inform. Instead it is a fragmented network, where members of different communities use their own content distribution strategies and techniques for interacting and keeping one another informed.

Scientists and health communication professionals have been in the spotlight this year, and we have to learn the lessons from mistakes that have been made. These include the real-world impact of the equivocation about the efficacy of masks or the dangers of airborne transmission. There is also the need to reflect on the impact of different language choices on different communities. We need to recognize the ways in which the complexity and nuance of scientific discovery lead to confusion, and often inspire people to seek out answers on the internet, leaving them vulnerable to the conspiracy theories that provide simple, powerful explanations. We also need to communicate simply and increasingly visually, rather than via long blocks of text and PDFs.

Raise awareness about the harm done to communities of color

Explaining methodology and experimental limitations will not address institutional trust concerns ingrained in Black communities. Starting in the 1930s and concluding in 1972, the United States Public Health Service collaborated with the Tuskegee Institute, a historically Black college, to study syphilis in Black men. Those who participated were never informed of their diagnosis and did not receive the free healthcare they were promised. Additionally, doctors declined to treat the participants with penicillin, despite knowing it could cure the disease.

Of the original 399 participants, 28 died of syphilis, 100 died of related complications, 40 of their wives became infected, and 19 of their children were born with congenital syphilis, creating generational harm. These concerns have spread to online spaces, where users fear that Black people will be used as “guinea pigs” when Covid-19 vaccinations arrive.

Earlier this year, concerns were raised again over allegations of forced sterilization and hysterectomies of undocumented women in a for-profit Immigration and Customs Enforcement detention center, building off a long history of unwanted medical testing and eugenics programs. Injustices such as these can lead to increased distrust in both government and the medical health system in Black, Latinx and Indigenous communities. Several science communication initiatives have focused on Covid-19 misinformation in 2020, but health professionals must begin 2021 by acknowledging, appreciating, and discussing mistrust.

As research in West Africa later showed, efforts by the World Health Organization, the Red Cross and other global organizations to curb Ebola misinformation were ineffective when they didn’t take into account “historical, political, economic, and social contexts.” Communication around health protocols such as hand washing did not result in behavioral changes because people did not view the action as a priority. Instead, they turned to trusted local sources, such as religious leaders, for direction. The same dynamic played out in the United States in the midst of the first wave of Covid-19 in the spring, when some pastors preached conspiracy theories to their congregations.

In 2018, the Ebola epidemic spread both disease and disinformation in the Democratic Republic of Congo. On social media platforms such as Facebook and WhatsApp, citizens blamed foreigners and Western doctors for the spread of the virus. Many pushed back against safety precautions, with rumors leading to attacks on hospitals and health care workers.

Researchers, journalists and policymakers must take into account cultural and religious tradition, apprehension toward the medical health industry and government, and the role trusted local leaders play when building effective science communication strategies.

But! Remember health and political speech are not mutually exclusive

Back in March, as the social platforms took what looked like decisive action to tackle misinformation, partnering with the WHO, creating information hubs and cracking down on Covid-19-related conspiracies, many observers applauded. But it was clear that the platforms felt a sudden freedom to act around health and science misinformation. The WHO could be the arbiter of truth, unlike fact checkers trying their best to referee political speech. Health misinformation felt like an easier challenge to solve.

By April, the growth of “excessive quarantine” and anti-lockdown communities online demonstrated the naïveté of these conclusions. Health misinformation cannot be disentangled from political speech.

2020 has taught us that we should be focused on the tactics, techniques and characteristics of rumors, falsehoods and conspiracy theories. The same tactics researchers were documenting around elections emerged this year with a vengeance in the context of health and science. So in 2021, let’s learn the lessons collectively, rather than letting political psychologists decide whether misinformation can sway elections, and separately, infodemic managers at public health bodies decide whether memes influence mask wearing.

Understanding how established misleading narratives and strategies can be modified and repurposed to drive politicized agendas can help clarify and focus research, language and sourcing around medicine and climate communication in the new year.

Diara J. Townes is an investigative researcher and the community engagement lead for First Draft’s U.S. bureau. Claire Wardle leads strategy at First Draft.

Vaccine photo by Self Magazine used under a Creative Commons license.

Substack isn’t a new model for journalism — it’s a very old one
https://www.niemanlab.org/2020/12/substack-isnt-a-new-model-for-journalism-its-a-very-old-one/ (December 7, 2020)

Since 2017, Substack has provided aspiring web pundits with a one-stop service for distributing their work and collecting fees from readers. Unlike many paywall mechanisms, it’s simple for both writer and subscriber to use. Writers upload what they’ve written to the site; readers pay from $5 to $50 a month for a subscription and get to read the work.

Enticed by the independence from editorial oversight Substack offers, several media figures with large followings — including Andrew Sullivan, Glenn Greenwald, Anne Helen Petersen, and Matthew Yglesias — are now striking out on their own.

Substack has also elevated a few commentators — perhaps most notably Heather Cox Richardson, the Boston College historian whose “Letters from an American” is currently Substack’s most-subscribed feature — to near-celebrity status.

Hamish McKenzie, Substack’s co-founder, has compared his company’s promise to an earlier journalistic revolution, likening Substack to the “penny papers” of the 1830s, when printers exploited new technology to make newspapers cheap and ubiquitous. Those newspapers — sold on the street for $0.01 — were the first to exploit mass advertising to lower newspapers’ purchase prices. Proliferating throughout the United States, they launched a new media era.

McKenzie’s analogy isn’t quite right. I believe journalism history offers more context for considering Substack’s future. If Substack is successful, it will remind news consumers that paying for good journalism is worth it.

But if Substack’s pricing precludes widespread distribution of its news and commentary, its value as a public service won’t be fully realized.

Mass advertising subsidized “objective” journalism

I believe Substack’s subscription-based plan is, in fact, closer to the model of journalism that preceded the penny papers. The older versions of U.S. newspapers were relatively expensive and generally read by elite subscribers. The penny papers democratized information by mass-producing news. They widened distribution and lowered the price to reach those previously unable to buy daily newspapers.

Substack, on the other hand, isn’t prioritizing advertising revenue, and by pricing content at recurring subscription levels, it’s restricting, rather than expanding, access to news and commentary that news organizations have long provided free on the web.

History has shown that the economic basis of American journalism is deeply entangled with its style and tone. When one primary revenue source replaces another, much larger evolutions in the information environment occur. The 1830s, again, offer an instructional example.

One morning in 1836, James Watson Webb, the editor of New York City’s most respected newspaper, the Morning Courier and New-York Enquirer, chased down James Gordon Bennett, the editor of the New York Herald, and beat Bennett with his cane. For weeks, Bennett had been insulting Webb and his newspaper in The Herald.

In his study of journalistic independence and its relationship to the origins of “objectivity” as an established practice in U.S. journalism, historian David Mindich identifies Webb’s assault on Bennett as a revealing historical moment. The Webb-Bennett rivalry distinguishes two distinct economic models of American journalism.

Before the “penny press” revolution, U.S. journalism was largely subsidized by political parties or printers with political ambition. Webb, for example, coined the name “Whig” for the political party his newspaper helped organize in the 1830s with commercial and mercantile interests, largely in response to the emergence of Jacksonian democracy. Webb’s newspaper catered to his (mostly) Whig subscribers, and its pages were filled with biased partisan commentary and correspondence submitted by his Whig friends.

Bennett’s Herald was different. Untethered from any specific political party, it sold for one penny (though its price soon doubled) to a mass audience coveted by advertisers. Bennett hired reporters — a newly invented job — to capture stories everyone wanted to read, regardless of their political loyalty.

His circulation soon tripled Webb’s, and the profits generated by The Herald’s advertising offered Bennett enormous editorial freedom. He used it to attack rivals, publish wild stories about crime and sex, and to continually stoke more demand for The Herald by giving readers what they clearly enjoyed.

Huge circulation propelled newspapers like Bennett’s Herald and Benjamin Day’s New York Sun to surpass Webb’s Morning Courier and Enquirer in relevance and influence. Webb’s newspaper cost a pricey 6 cents for far less timely and exciting news.

It should be noted, however, that the penny papers’ nonpartisan independence didn’t ensure civic responsibility. To increase sales, the Sun, in 1835, published entirely fictional “reports” claiming a fantastic new telescope had detected life on the moon. Its circulation skyrocketed.

In this sense, editorial independence encouraged publication of what’s now called “fake news” and sensationalistic reports unchecked by editorial oversight.

Substack: A blogging platform with a toll gate?

Perhaps “I.F. Stone’s Weekly” offers the closest historical antecedent for Substack. Stone was an experienced muckraking journalist who began self-publishing an independent, subscription-based newsletter in the early 1950s.

Yet unlike many of Substack’s most famous names, Stone was more reporter than pundit. He’d pore over government documents, public records, congressional testimony, speeches and other overlooked material to publish news ignored by traditional outlets. He often proved prescient: His skeptical reporting on the 1964 Gulf of Tonkin incident, questioning the idea of an unprovoked North Vietnamese naval attack, for example, challenged the U.S. government’s official story, and was later vindicated as more accurate than comparable reportage produced by larger news organizations.

There are more recent antecedents to Substack’s go-it-yourself ethos. Blogging, which proliferated in the U.S. media ecosystem earlier this century, encouraged profuse and diverse news commentary. Blogs revived the opinionated invective that James Gordon Bennett loved to publish in The Herald, but they also served as a vital fact-checking mechanism for American journalism.

The direct parallel between blogging and Substack’s platform has been widely noted. In this sense, it’s not surprising that Andrew Sullivan — one of the most successful early bloggers — is now returning to the format.

Information doesn’t want to be free

Even if Substack proves simply an updated blogging service with an uncomplicated tollbooth, it still represents an improvement over the “tip jar” financing model and reader appeals that revealed the financial weakness of all but the most famous blogs.

This might be Substack’s most important service. By explicitly asserting that good journalism and commentary are worth paying for, Substack might help retrain web audiences accustomed to believing information is free.

Misguided media corporations persuaded the web’s earliest news consumers that big advertisers would sustain a healthy news ecosystem that didn’t need to charge readers. Yet that economic model, pioneered by the penny papers, has clearly failed. And journalism is still sorting out the ramifications for the industry — and democracy — of its collapse.

It costs money to produce professional, ethical journalism, whether in the 1830s, the 1980s or the 2020s. Web surfing made us forget this. If Substack can help correct this misapprehension, and ensure that journalists are properly remunerated for their labor, it could help remedy our damaged news environment, which is riddled with misinformation.

But Substack’s ability to democratize information will be directly related to the prices its authors choose to charge. If prices are kept low, or if discounts for multiple bundled subscriptions are widely implemented, audiences will grow and Substack’s influence will likely extend beyond an elite readership.

After all: They were called “penny papers” for a reason.

Michael J. Socolow is an associate professor of communication and journalism at the University of Maine. This article is republished from The Conversation under a Creative Commons license.The Conversation

Photo of rural mailboxes by Don Harder used under a Creative Commons license.

Searching for the misinformation “twilight zone”
https://www.niemanlab.org/2020/12/searching-for-the-misinformation-twilight-zone/ (December 1, 2020)

Given that conservative pages tend to dominate the results, the lists have been used to argue that Facebook is biased in favor of conservatives. Facebook, in turn, has pushed back, arguing that engagement doesn’t equal reach. [Ed. note: Check out recent Nieman Lab research on this.]

Irrespective of this argument, “Facebook’s Top 10” points to wider issues about what we see and don’t see in misinformation research. And they go beyond what data we can access, and which metrics we look at.

How do analytics dashboards shape what we see online? What if, by focusing on posts with the greatest engagement, we are missing the things bubbling underneath? Could we be looking in the wrong places and missing real harm, simply because our tools make some things harder to investigate and study?

These were questions our researchers grappled with in our recent research into vaccine misinformation. To find a solution, we took inspiration from the history of marine biology.

The twilight zone

In 2004, marine biologist Richard Pyle delivered a talk about his research into the ocean “twilight zone.” Pyle discovered that researchers had been focusing on the very top layer of the ocean’s depths. “[We] know a lot about that part up near the top. The reason we know so much about it is scuba divers can very easily go down there and access it.”

The problem was that scuba divers can only descend a couple of hundred feet. Biologists were well aware of this, and so used submersible vehicles to go deeper. But this created another problem, as Pyle explains: “If you’re going to spend $30,000 a day to use one of these things and it’s capable of going 2,000 feet…you’re going to go way down deep.”

What Pyle discovered was a middle “twilight zone” — so named because of the limited sunlight that pierces to that level — that researchers had neglected because it was easier to look at the surface, and more enticing to go down deep.

This twilight zone, once registered, became a huge source of discovery for ocean biologists, at one point leading to discoveries of seven new species for every hour spent in that region.

Misinformation’s twilight zone

There are a number of lessons here for social media research. We tend to study the accounts with the largest number of followers, the ones responsible for huge engagement metrics. We see network graphs of trending hashtags, dumps of scraped social media data shared by researchers trying to look for evidence of “coordinated inauthentic activity.”

Or we see qualitative researchers lurking in private Facebook Groups, Discord servers, or 8kun boards, trying to spot disinformation campaigns before they make their way onto more popular social media platforms.

Both are valuable, but neither is sufficient for understanding the ecosystem as a whole.

The ocean’s twilight zone is, first and foremost, a reminder that our understanding of misinformation online is severely lacking because of limited data: platforms deny access; ethical guidelines prevent researchers from entering or reporting on certain spaces online.

But more importantly, this maritime comparison is a reminder that our technology can draw us toward seeing some things and not others. CrowdTangle and Twitter’s API are not passive databases that we access, but products with affordances that influence our activity. Some features exist, others do not, and this affects what we see.

And critically, the interests of platforms are baked into not just the data they share, but the features they allow for querying it. For example, on CrowdTangle you cannot filter for labeled or fact-checked posts.

Beyond hard limitations such as these, we also need to consider friction — where accessing certain metrics or items is simply made much harder than others. This includes ranked lists that draw us toward the most engaging posts and away from those in the middle zone.

The problem of feature bias has been raised before. Richard Rogers, a key figure in the development of digital methods, has observed that social media platforms can lead researchers to focus on “vanity metrics” such as engagement scores, rather than “voice, concern, commitment, positioning and alignment.”

But more work is needed to surface feature biases, because we might be missing a critical part of the picture without realizing it.

Applying this to our research

Engaging with the concept of the twilight zone led our researchers Rory Smith and Seb Cubbon to take two critical methodological decisions in their research into vaccine misinformation.

The first, and most fundamental, was to focus on how narratives were evolving and competing rather than on highly engaged posts. The units of analysis in analytics dashboards are individual posts, but narratives are much more powerful than individual pieces of misinformation: they shape how people think and can’t simply be debunked.

They also chose to exclude posts from verified accounts as a way of accessing “the middle” of social media activity. The most engaged-with posts were generally from official, often pro-vaccine accounts, such as professional media outlets. Filtering out verified accounts cut through the noise and found more of the anti-vaccine discourse bubbling underneath.

But this was only feasible because there was a feature to filter out verified accounts; otherwise, it would have been very costly to manually exclude them at scale. The filter illustrates our dependence on not just data, but features, and how this affects what we do and don’t see.
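The verified-account filter can be sketched in a few lines. This is a minimal illustration with made-up posts and hypothetical field names (“verified,” “engagement”), not CrowdTangle’s actual schema or First Draft’s pipeline:

```python
# Hypothetical sample data — field names are illustrative, not CrowdTangle's schema.
posts = [
    {"account": "news_outlet", "verified": True, "engagement": 90_000},
    {"account": "health_org", "verified": True, "engagement": 55_000},
    {"account": "anon_1", "verified": False, "engagement": 4_200},
    {"account": "anon_2", "verified": False, "engagement": 310},
]

# Dashboards rank by raw engagement, so verified accounts dominate the surface.
top_of_feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Filtering out verified accounts surfaces the "middle zone" instead.
middle_zone = [p for p in posts if not p["verified"]]
print([p["account"] for p in middle_zone])  # ['anon_1', 'anon_2']
```

The point of the sketch is the dependency it makes visible: the one-line filter is only cheap because the verified flag is exposed as a queryable field. Remove that feature, and the same analysis becomes a costly manual exercise.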

In the end, searching for the twilight zone is not a fixed process or location, but a reminder and an endeavor: to think outside the logic of analytics dashboards, and, where we can, look for the neglected parts of the ecosystem.

Tommy Shane is First Draft’s head of policy and impact. A version of this story originally ran on First Draft Footnotes.

A lobate ctenophore found during midwater exploration on the Windows to the Deep 2019 expedition. Image courtesy of the NOAA Office of Ocean Exploration and Research.

“Whoa!” “I’m crying!” “Worrisome!” “Buckle up!” The swift, complicated rise of Eric Feigl-Ding and his Covid tweet threads (Nov. 30, 2020)

Eric Feigl-Ding picked up his phone on the first ring. “Busy,” he said, when asked how things were going. He had just finished up an “epic, long” social media thread, he added — one of hundreds he’s posted about society’s ongoing battle with the coronavirus. “There’s so many different debates in the world of masking and herd immunity and reinfection,” he explained, among other dimensions of the pandemic. “We at FAS, we’ve been kind of monitoring all the debates and how we’re seeing signals in which the data goes one way, the debate goes the other,” he said, referring to his work with the Federation of American Scientists, a nonprofit policy think tank. He rattled off a rapid-fire sampler of hot-button Covid-19 topics: the growing anti-vaxxer movement, SARS-CoV-2 reinfection and antibodies, the body of research suggesting masks could decrease viral load, along with a quick mention of the debate among experts about what “airborne” means.

This whirlwind tour through viral Covid-19 themes felt like the conversational equivalent of Feigl-Ding’s Twitter account, which has grown by orders of magnitude since the dawn of the pandemic. The Harvard-trained scientist and 2018 Congressional aspirant posts dozens of times daily, often in the form of long, numbered threads. He’s fond of emojis, caps lock, and bombastic phrases. The first words of his very first viral tweet were “HOLY MOTHER OF GOD.”

Made in January, weeks before the massive shutdowns that brought U.S. society to a halt, that exclamation preceded his observation that the “R0” (pronounced “R-naught”) of the novel coronavirus — a mathematical measure of a disease’s reproduction rate — was 3.8. That figure had been proposed in a scientific paper, posted online ahead of peer review, that Feigl-Ding called “thermonuclear pandemic level bad.” Further in that same Twitter thread, he claimed that the novel coronavirus could spread nearly eight times faster than SARS.

The thread was widely criticized by infectious disease experts and science journalists as needlessly fear-mongering and misleading, and the researchers behind the pre-print had already tweeted that they’d lowered their estimate to an R0 of 2.5, meaning that Feigl-Ding’s SARS figure was incorrect. (Because R0 is an average measure of a virus’s transmissibility, estimates vary widely based on factors like local policy and population density; as a result, researchers have suggested that other variables may be of more use.) He soon deleted the tweet — but his influence has only grown.
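To see why the correction mattered, consider some back-of-the-envelope arithmetic (illustrative only, ignoring the real-world factors noted above): because R0 compounds with each generation of transmission, a modest change in the estimate implies a very different outbreak a few generations out.

```python
# Illustrative only: new cases per index case after n generations is roughly r0 ** n,
# under the simplifying assumption of unchecked spread in a fully susceptible population.
for r0 in (2.5, 3.8):
    print(r0, "->", round(r0 ** 5), "cases after 5 generations")
```

With these toy assumptions, the revised estimate of 2.5 implies roughly 98 downstream cases after five generations, while 3.8 implies nearly 800 — which is why a headline R0 figure, stated without caveats, can mislead.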

At the beginning of the pandemic, before he began sounding the alarm on Covid-19’s seriousness, Feigl-Ding had around 2,000 followers. That number has since swelled to over a quarter million, as Twitter users and the mainstream media turn to Feigl-Ding as an expert source, often pointing to his pedigree as a Harvard-trained epidemiologist. And he has earned the attention of some influential people. These include Ali Nouri, the president of FAS, who brought Feigl-Ding into his organization as a senior fellow; the journalist David Wallace-Wells, who meditated on Feigl-Ding’s “holy mother of God” tweet in his March essay arguing that alarmism can be a useful tool; and former acting administrator of the Centers for Medicare and Medicaid Services Andy Slavitt. (“We all learn so much from you,” he tweeted at Feigl-Ding in July.) Ronald Gunzburger, senior adviser to Maryland Gov. Larry Hogan, even wrote a letter to Feigl-Ding attesting to how his “intentionally provocative tweet” in January “elevated the SARS-CoV-2 virus to the top of our priorities list.”

But as Feigl-Ding’s influence has grown, so have the voices of his critics, many of them fellow scientists who have expressed ongoing concern over his tweets, which they say are often unnecessarily alarmist, misleading, or sometimes just plain wrong. “Science misinformation is a huge problem right now — I think we can all appreciate it — [and] he’s a constant source of it,” said Saskia Popescu, an infectious disease epidemiologist at George Mason University and the University of Arizona who serves on FAS’ Covid-19 Rapid Response Taskforce, a separate arm of the organization from Feigl-Ding’s work. Tara Smith, an infectious disease epidemiologist at Kent State University, suggested that Feigl-Ding’s reach means his tweets have the power to be hugely influential. “With as large of a following as he has, when he says something that’s really wrong or misleading, it reverberates throughout the Twittersphere,” she said.

Critics point to numerous problems. Not too long after his “holy mother of God” tweet, for example, Feigl-Ding took to Twitter to discuss a titillating but non-peer-reviewed paper that some readers interpreted as evidence that SARS-CoV-2 was engineered in a lab; once the authors retracted the pre-print, he deleted a series of tweets from the middle of the thread. In March, Feigl-Ding tweeted a CDC graph as evidence that young people were “just as likely to be hospitalized as older generations,” but failed to mention an important detail about the age ranges represented in the graph’s bars, which didn’t actually support that claim. In August, he tweeted his support for a proposition to allow people early access to a vaccine. After criticism from epidemiologists, bioethicists, doctors, and health policy experts, Feigl-Ding deleted a few tweets at the beginning of his thread, saying they were “confusing” and “murky.” (He also argued that his critics were “spreading misinformation about what they think I said.”)

More recently, Feigl-Ding wrote a thread about coronavirus particles in flatulence, which drew criticism from researchers.

Such critiques of Feigl-Ding’s particular brand of Covid-19 commentary are by no means new, and previous articles — in The Atlantic as far back as January, for example, New York Magazine’s Intelligencer in March, the Chronicle of Higher Education in April, and in The Daily Beast in May — have explored questions about his expertise in epidemiology (his focus prior to Covid-19 was on nutrition) and whether his approach to public health communication is appropriate or alarmist. But as his influence has grown, and as the pandemic enters a much more worrying phase, critics have continued to debate whether Feigl-Ding, for all his enthusiasms, is doing more harm than good. Some complain that Feigl-Ding’s army of followers can be hateful when other scientists publicly disagree with his tweets. Others say that Feigl-Ding himself has been known to privately message his critics — a tack that some found unwelcome.

For his part, though, Feigl-Ding says many of his critics’ disagreements with him have come down to a difference in style. “Sometimes it’s a matter of a philosophical approach about tone: Should I say ‘whoa’ or ‘wow?’” he said — adding that he thinks of those words as a type of “subject line” for a tweet. “Some people don’t like the all-caps initial thing, but it’s more of a stylistic thing. And of course, some people think: ‘This tweet is sensational.’ I’ve heard that,” he said — adding that, indeed, he has contacted critics, but always in a professional capacity. “I [direct-message] a lot of people,” he said, “sometimes email them when I have a question.”

“We have spirited debates,” he added.

But Feigl-Ding makes no apologies for trying to amplify and draw attention to the seriousness of Covid-19. Sounding the alarm — even if sometimes imperfectly — Feigl-Ding insists, is a moral obligation. “The whole New York Magazine article by David Wallace-Wells, the whole article was that alarmism is needed,” he said. “It was arguing for the case of alarmism. How do we listen to the early alarms? We could have reacted faster. It’s getting people to sit up from the chair and pay attention.”

He also argues that in some cases, his Twitter influence has helped to shape policy. Specifically, he mentions a thread — which began with the words “BLOODY HELL” — criticizing broadcast company Sinclair for their decision to air a segment featuring Judy Mikovits, the star of a popular, discredited video that surfaced various conspiracy theories about the pandemic. (The media watchdog group Media Matters for America first reported on Sinclair’s plans a couple days before Feigl-Ding’s tweet.)

That tweet got “more impressions than CNN,” Feigl-Ding said, adding that hours after making it, Sinclair announced they would postpone airing the segment. Two days later Sinclair decided not to air the interview at all. “Clearly, it had an impact. That’s what I’m going for,” he said.

“The alarmism got action,” he added.

Whether or not his critics agree with that assessment, there’s little doubt that Feigl-Ding — who, depending on the context, might best be described as a scientist, a politician, an advocate, and a self-styled public health Cassandra — continues to opine, with great emotion and inflection, on myriad Covid-19-related topics, using phrases like “I’m crying,” “whoa,” “buckle up,” and “worrisome.” And on any given day, it’s easy to find other experts picking apart a Feigl-Ding tweet, explaining what he’s gotten wrong, or what nuance he’s left out.

Sometimes, Feigl-Ding is driven to clarify his position, or even delete tweets. And where his detractors suggest that his missteps are more than mere nuisances, Feigl-Ding characterizes his critics as staid scientists who want him to “stay in his lane.” Indeed, when asked about their concerns, he often steers the conversation away quickly, saying their interpersonal issues are a distraction from what this moment needs: more people like him.

The high points of Feigl-Ding’s career have been repeatedly recounted in the news media — a point he seemed keen to emphasize in recent phone calls. “Everything I’ve ever said, there’s articles for it,” he said.

In those articles, Feigl-Ding shared versions of the same anecdotes he relayed in interviews with me. A Science article detailed his 2018 run for Congress, as well as the highlights of his childhood: that he spent his earliest years in Shanghai, before immigrating to the U.S. at age five; that he didn’t have a lot of friends growing up (“Imagine a chubby kid with a double chin,” he said, recounting how cruel classmates had called him “ching chong” and “pan face”); that instead of cartoons, he watched documentary series on psychology and statistics. And the media coverage goes much further back: A 2006 New York Times article described a JAMA study Feigl-Ding co-authored that provided further evidence that the drug rofecoxib, known to consumers as Vioxx, was associated with heart and kidney issues, after which “my phone did not stop ringing for a week, or two weeks,” he said. A 2007 Newsweek article featured the Facebook campaign Feigl-Ding started in support of breast cancer research. A 2011 New York Times article details the tumor doctors found in his chest at age 17, which turned out to be a benign teratoma, but launched Feigl-Ding’s interest in public health.

While Feigl-Ding is eager to discuss his successful public ventures, he doesn’t bring up his less vetted projects, like Happy Vitals, a now-defunct startup he and his wife created, which sold at-home breast milk nutrition tests. He said he’d rather not talk about Health Justice For All, a “grassroots movement” and political action committee, which received few contributions from anyone other than himself. (Feigl-Ding’s documented contributions to the PAC come in the form of unpaid Facebook posts, valued at $0.011 per impression.) He’s also not eager to talk about his failed 2018 political run to represent Pennsylvania’s 10th district in Congress. “If I run again, I don’t want a completely blunt exposé of how difficult it was,” he said. When asked if running again is something he’s considering, he responded: “Someday. Someday. I don’t want to — someday, maybe. Let’s just say maybe.”

Feigl-Ding also glosses over his decision to leave medical school, which he enrolled in briefly after completing his Harvard degree. “I realized life’s about what you do, not the number of letters behind your name,” Feigl-Ding said, “and at that point, I already had dual doctorates in two other things, and you know, pursuing a medical degree would’ve been a little bit overkill.” He is fond of talking about the “letters behind your name”; he used the phrase in a 2017 lecture at the University of Connecticut, as well as in a 2018 interview with the Harvard Crimson. Yet he also frequently refers to the impressive credentials of people he knows, even when they’re irrelevant to the conversation. These can include a double-inductee to the National Academies, or a Rhodes scholar who wrote a book with a former president’s child.

Perhaps more than anything, though, Feigl-Ding — who says he earns most of his income as a consultant on federal projects and is unpaid for his communication and research work at FAS — frequently steers conversation towards metrics of influence, importance, or virality. In discussing how his Facebook campaign began, for example, he says he originally created two pages through the site’s now defunct Causes application: one focusing on heart disease and stroke research and another focusing on breast cancer. Unfortunately, the former “never went anywhere,” so he pivoted to concentrating on his cancer page. Eventually, the page gave him “one-click access to millions of people on Facebook,” his first foray into social media virality. “You learn to master social networks when you have your pulse on millions of people,” he said. The word “millions” is big with Feigl-Ding — he talks about the 6 million members of that Facebook campaign, the half-million dollars he says the campaign raised for cancer research, the 14 million views on a viral conspiracy video he’s publicly decried, and the millions of impressions one needs on social media to make an impact. One night, after a lengthy telephone interview, he texted a blog’s analysis that characterized him as more influential than CNN, along with a screenshot of one of his recent tweets — one debunking hydroxychloroquine as a Covid-19 treatment. It showed that the tweet had garnered more than 2 million impressions.

Feigl-Ding’s descriptions of his work evoke images of him as a protagonist in a quest to fix the world’s problems. He often invokes war metaphors: He’s a “tank” against online detractors, and he refers to disagreements about Covid-19 policy as an “information war” or a “battle of the minds.” When talking about the role of viral tweet threads, Feigl-Ding recounts a Chinese parable about whistleblowers. The story, as he tells it, starts with a wizard offering a man the ability to talk to animals, but only if he agrees to never talk to humans ever again. The man accepts, but then the animals tell him an earthquake and flood will devastate his village. The man wants to warn the villagers, but if he does, the wizard will turn him to stone. He decides to do it anyway. “He made a choice and he sacrificed,” Feigl-Ding said, sighing. “I don’t want to be a martyr, but I felt like it was more important to tweet this and raise the alarm,” he says, referencing his “holy mother of God” tweet. Though the research he was citing had not yet been peer reviewed, he felt it could have important insights.

He also acknowledged the blowback he received as a result, but added: “I think it was still worth it.”

Indeed, Feigl-Ding expresses frustration about the times he wished he could have sounded the alarm sooner. “I’ve had so many of these Cassandra moments, and so many of these ‘Ah, what could have been,’ moments,” he says, mentioning his Vioxx study, the issue of toxins in the drinking water of rusting urban centers like Flint, Michigan, and even the general topic of cancer prevention. He wonders what would have happened, for example, if Toxin Alert, the website he developed with engineer Pius Lee, had launched earlier, rather than more than a year after news of Flint’s water crisis broke.

This desire to warn, to be heard, is the thread Feigl-Ding uses to connect the various facets of his career — and according to him, it’s what drove him to begin tweeting about the Covid-19 pandemic. “The world needs more whistleblowers, and those [who] whistleblow early, not just after the fact or whimper at the time,” he said, pointing to what he considers one of his own triumphs in having nudged the Maryland governor’s office into action: “Gov. Larry Hogan’s office, his chief policy adviser, credits me — that my January tweet made them stand up, sit up in their seats and start preparing.”

Feigl-Ding — often referred to as a Covid-19 expert in the media — clearly has the ear of some influential people. In addition to advising Hogan’s office, claims he’s made in tweets have been addressed by Mexican officials in government press conferences, and his tweets or commentary have recently appeared in publications like The Washington Post, Vox, and Salon. After all, who could speak to the science of the pandemic better than a Harvard-trained epidemiologist?

But epidemiology is a big field, and Feigl-Ding’s previous research focuses mostly on nutrition and cancer — sub-areas of the field distinct from infectious disease. Popescu, the infectious disease epidemiologist, likens this distinction to different specialties in medicine. “I’m not going to go to a cardiologist to have brain surgery,” she said. “Many of us have called attention to his lack of experience or training” in infectious disease epidemiology, she said of Feigl-Ding.

“It’s really challenging to communicate when someone really can sell themselves, like, ‘I’m a Harvard scientist, I’m an epidemiologist,’” she added.

But Feigl-Ding is, indeed, a Harvard-trained scientist and a degreed epidemiologist — though his critics argue that most of his training has focused on nutrition, not infectious disease, making him prone to mistakes. An example: In one of his most popular tweets — known widely within Twitter’s science community as the “holy moly” tweet — Feigl-Ding said he was “crying for Mexico” because the country’s testing positivity percentage was 50 percent. A full half of people being tested in Mexico were proving to be infected with Covid-19, a figure even New York, Lombardy, and Madrid didn’t approach at their “worst periods,” he wrote. “Mexico may be undergoing unprecedented #Covid19.”

The message clearly struck a nerve with Twitter users, as it received tens of thousands of retweets and more than 1,500 responses. But while all of the information in the tweet was technically true, what Feigl-Ding had actually done, according to his critics, was paint an incomplete — and alarming — picture of an out-of-control outbreak without providing upfront context about what that positivity percentage actually represented: the fact that Mexico still was not testing very many people, and that most of its testing was being done on people who were already ill. Under such circumstances, a 50 percent positivity rate would not be considered unusual. Indeed, as Boston University epidemiologist Ellie Murray wrote in a tweet, positivity percentages are “used for evaluating whether you’re doing ENOUGH tests, not estimating how much disease you have.” Along with her explanation was a screenshot of Feigl-Ding’s tweet, which she called “bad and misleading.” Feigl-Ding clarified the meaning of positivity percentages in a continuation of that thread hours later — in fact, before Murray tweeted her criticism — but those tweets received only a fraction of the attention that the original “holy moly” tweet did.
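The distinction Murray draws can be shown with toy numbers (illustrative figures, not Mexico’s actual data): positivity is simply positives divided by tests, so it tracks testing strategy as much as disease prevalence.

```python
def positivity(positives: int, tests: int) -> float:
    """Share of administered tests that come back positive."""
    return positives / tests

# Narrow testing of mostly symptomatic people: few tests, high positivity.
print(positivity(500, 1_000))   # 0.5
# Broad testing of the same population: many tests, low positivity.
print(positivity(600, 20_000))  # 0.03
```

The same outbreak can yield a 50 percent or a 3 percent positivity rate depending only on who gets tested — which is why a high rate signals insufficient testing rather than, by itself, unprecedented spread.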

Finding experts publicly correcting or critiquing Feigl-Ding’s tweets is not hard. More recently, infectious disease experts refuted his claims that the suggestion of White House coronavirus adviser Dr. Anthony Fauci that eyewear could improve Covid-19 protection meant that things were “getting serious,” and a slew of scientists — including Popescu, University of Florida biostatistician Natalie Dean, and University of California, San Francisco physician Vinay Prasad, among others — expressed concern about Feigl-Ding’s take on releasing vaccines early to certain populations.

But others suggested that the blowback from Feigl-Ding’s Twitter supporters has deterred them from raising more concerns about his missteps. “He has a couple hundred thousand followers,” said Michael Bazaco, an epidemiologist at the U.S. Food and Drug Administration and an adjunct professor at the University of Maryland. “They’re very assertive and aggressive, and I don’t want to deal with that.”

Angela Rasmussen, a virologist at Columbia University, said she was initially reluctant to comment for this piece because she similarly feared online criticism from his fans. Feigl-Ding’s followers “have ganged up on anyone who criticizes him publicly,” she said, adding that her own Twitter feed has been “clogged with Feigl-Ding’s fans calling me stupid, petty, inept, gatekeeping, etc., along with the usual gendered slurs and insults.” (A search for specific instances of these terms being directed at Rasmussen on Twitter turned up few results, though evidence of Feigl-Ding defenders offering sometimes arch disagreement — and even some vulgar commentary — is easy to find.)

Dueling with strangers, Popescu said, is “emotionally draining.”

Ahead of publication of this article, Feigl-Ding pointed to complaints from other scientists on Twitter about the online argumentation style of some of his fiercest critics, including Popescu, though he declined to elaborate. “Trying to stay above it,” he wrote in an email to me last week, “since we need science to be respectful and publicly trusted.” Asked earlier about his own followers’ behavior, Feigl-Ding acknowledged some early issues. “Some of my early followers, those who were harassing, I actually removed them,” he said, while others eventually stopped following him. But he also said he has experienced rough treatment online himself, including by Twitter users of even greater influence. And on Twitter, too, he has responded to criticism from colleagues by pointing to his own dealings with what he called “anti-science trolls.”

Feigl-Ding may not bear any responsibility for — or even have any control over — the actions of his followers, of course, but he has also been known to privately contact his critics himself. After Murray tweeted her criticism, for example, she said Feigl-Ding messaged her privately to discuss the issue. Six other infectious disease experts that I spoke to say they’ve received private messages from him, often after publicly remarking on his tweets. Some, like Bazaco, say they simply ignore such forays. Popescu says that in talking with other scientists who have accepted Feigl-Ding’s messages, she discerns a pattern. “It’s always the same: ‘I want to learn, I want to be better’ — but he spends the entire time saying why he was right, giving you articles that were written about him, and how he called this and how he’s been misunderstood.”

In addition to infectious disease experts, Mexican journalist Maria Fernanda Mora said she also received messages from Feigl-Ding after she tweeted a thread questioning his reliability as a source, based on information she’d read in several articles published about him. But Feigl-Ding’s message — which included two articles about himself — arrived via Instagram, not Twitter, where Mora blocks strangers from directly messaging her. “I was truly surprised,” she wrote to me in a Twitter message.

When asked about these various interactions, Feigl-Ding expressed frustration, suggesting that such complaints were a “distraction” from the issues. “I don’t understand,” he said. “I’ve never said anything rude. I’ve never asked anyone to send anything rude.” His goal, he argued, was always a polite and professional dialogue. “I’ve never kind of sent any harass[ing] messages whatsoever,” he said. “I don’t understand.”

Indeed, from his perspective, Feigl-Ding said, professional disagreements ought to be considered healthy and par for the course. And science communicators are, after all, trying to accomplish the same thing. But he also said he believed that disagreements should be handled privately, to avoid conflicting information. If people see disagreement among scientists online, he reasoned, there’s a risk that they’ll ignore scientists’ messages entirely.

When this reasoning was shared with Popescu, she challenged its logic — in part because she said it seems to suggest that while Feigl-Ding can speak publicly, his critics ought not. “You’re saying we can’t disagree with you,” she added, “because we’re ‘on the same side.’”

Popescu also said she felt Feigl-Ding’s positioning himself alongside infectious disease epidemiologists toiling in the Covid-19 trenches was misleading to the public. The latter are “living it, working in it, and will continue to after this,” she said. “He’s just tweeting about it.”

Feigl-Ding says that he believes his ability to grab people’s attention is an asset, and his unique contribution to an inherent and ongoing conflict — he called it a “battle” — between science and misinformation. “Tweeting is an art form,” he said.

“If your initial tweet does not draw them in,” he added, “you’ll get maybe a respectable four or five hundred retweets. I just call that respectable, but that’s not impactful. Anything impactful, you need a thousand retweets, at least.” Once you’ve got an audience’s attention, he said — that’s when you can get “into the weeds” or use a thread to “give information that goes beyond the headlines.”

Some experts consider that a savvy formula. Nouri, the president of the Federation of American Scientists, first reached out to Feigl-Ding in February, when the organization was thinking about how to debunk disinformation around Covid-19. “Eric is one example of somebody who has managed to break through the noise,” he said. (Feigl-Ding’s Harvard affiliation has ended.) Feigl-Ding, Nouri added, not only has a large following, but his tweets weigh in on the latest news quickly. “Speed is very important when it comes to countering disinformation,” he said, noting that Feigl-Ding is also prolific, “constantly pushing material out on social media.”

When asked about critics’ concerns that Feigl-Ding’s tweets are often misleading or lack nuance, Nouri suggests there’s a trade-off. “Whenever you want to get information out in a rapid way, in a succinct way, and in a way that really resonates with people — and you really want to grab their attention,” he said, “you can’t do that effectively and at the same time have a caveat and an explanation for everything that you’re trying to convey.” Getting information out there is “a bigger value to public health,” he added, “than the fact that some aspect of the tweet may have been misrepresented.”

Devi Sridhar, the chair of Global Public Health at the University of Edinburgh, agrees that Feigl-Ding’s tweets have value. While she doesn’t agree with everything he tweets, she said, she believes he’s acting in good faith. With so much misinformation out there, it’s “all hands on deck,” said Sridhar, and at the end of the day, Feigl-Ding is on the same side as many of his critics. “Whoever wants to counter that misinformation — I’m not going to criticize them on style or their tone,” she said. “We’re all trying to get a handle on this and spread good information.”

And to be sure, there is some evidence that Feigl-Ding’s tweets have contributed to positive outcomes, and he pointed to some potential successes: He says, for example, that the day after one of his tweets about Arizona’s rising cases went viral, the state’s governor permitted local governments to create and enforce local mask orders.

Just how direct the line is between a Feigl-Ding tweet and an action in the world, of course, is difficult to discern — and not all the impacts have necessarily been in the direction Feigl-Ding would hope. Nicholas Evans, a bioethicist at the University of Massachusetts in Lowell, for example, said that within 24 hours of seeing Feigl-Ding’s tweet about the results of the since-retracted pre-print which many touted as evidence that SARS-CoV-2 was engineered, he saw Feigl-Ding referenced in support of the conspiracy theory that the virus was intentionally released by China. And Mora, the Mexican journalist, says that Feigl-Ding’s tweets about Mexico’s positivity percentage have been used by President Andrés Manuel López Obrador’s political opponents to criticize his administration’s handling of the pandemic.

But the central and long-running critique of Feigl-Ding — that the high profile he’s cultivated through arguably sensational, often imprecise, and above all, relentless Covid-19 messaging is problematic — suggests that many scientists simply don’t buy the notion that the urgency of the moment necessarily outweighs the need for scientific precision in public messaging during a crisis. Scientists need to be especially careful when explaining new research, said Jason Kindrachuk, a virologist at the University of Manitoba. Studies might be interesting without actually telling the public anything new about the state of Covid-19, for example. “That’s where we science communicators really have to do our diligence in providing that context back to the public,” he said. Kindrachuk is concerned that, among other things, “sounding the alarm bell too much” could needlessly concern people over trivial findings.

It also, some experts said, could lead the public to become desensitized to scientists’ concerns entirely. Paige Jarreau, a science communication scholar and vice president of science communication at the software company LifeOmic, suggested that there’s an important balance to be struck between speed and accuracy. Communicating complex ideas and nuanced arguments around the latest Covid-19 findings is indeed difficult, she said — but that makes it all the more important to think carefully about how to communicate them.

“If you’re trying to be really fast and sexy, and you put out something that breaks someone’s trust in you, maybe you ended up having to take it down because it was wrong or you were too quick to jump to sharing something and it wasn’t actually correct,” Jarreau said. “Then you’ve broken the audience’s trust.”

Feigl-Ding concedes he’s made mistakes — just as many public health experts have, he says — pointing by way of example to Anthony Fauci, who in the earliest days of the pandemic suggested that Americans needn’t be immediately alarmed about Covid-19 (though he added the situation was serious and could quickly change — as it did). Still, sometimes scientists just don’t have definitive answers to questions the public is asking, Feigl-Ding added: “We’re always trying,” he said, “to push the best available information.”

Jane C. Hu is a science journalist living in Seattle. Her work can be found at Slate, Nautilus, Wired, the Atlantic, and Smithsonian, among other publications. This article was originally published on Undark. Read the original article.

Russian, Chinese, and Iranian media are turning on Trump, an analysis of foreign news outlets suggests

https://www.niemanlab.org/2020/10/russian-chinese-and-iranian-media-are-turning-on-trump-an-analysis-of-foreign-news-outlets-suggests/ (Thu, 22 Oct 2020)

It can be easy to overlook how the rest of the world is making sense of America’s chaotic campaign season.

But in many cases, they’re paying attention just as closely as U.S. voters are. After all, who wins the U.S. presidency has implications for countries around the world.

Since Sept. 22, we’ve been using machine-learning algorithms to identify the predominant themes in foreign media coverage.

How different countries cover the race between Donald Trump and Joe Biden can shed some light on how foreign citizens discern the candidates and the American political process, especially in places that have strict state control of media like China, Russia and Iran.

Unlike in the U.S., where there is a cacophony of perspectives, by and large the media in these three countries follow very similar narratives.

In 2016, we did the same exercise. Back then, one of the main themes that emerged was the decline of U.S. democracy. With scandal and the disillusionment of voters dominating the headlines, America’s global competitors used the 2016 election to advance their own political narratives about U.S. decline.

Some of these themes have emerged in the coverage of the current race. But the biggest difference is their portrayal of Trump.

The last election cycle, candidate Trump was an unknown. Although foreign nations acknowledged his political inexperience, they were cautiously optimistic about Trump’s deal-making ability. Russian media outlets were particularly bullish on Trump’s potential.

Now, however, those feelings appear to have changed. China, Iran, and even Russia seem to crave a return to normalcy — and, to some extent, American leadership in the world.

Dissecting the debate

To assess how America’s competitors make sense of the 2020 campaign, we tracked over 20 prominent news outlets from Chinese, Russian and Iranian native language media. We used automatic clustering algorithms to identify key narrative themes in the coverage and sentiment analysis to track how each country viewed the candidates. We then reviewed this AI-extracted information to validate our findings.
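The authors don’t publish their pipeline, but the two moving parts they describe (clustering articles into themes, and scoring sentiment toward each candidate) can be illustrated with a deliberately tiny, pure-Python sketch. The lexicon, theme keywords, and sample texts below are all invented for illustration; the actual study used machine-learning models over thousands of native-language articles.

```python
from collections import defaultdict

# Toy sentiment lexicon and themes -- invustrative placeholders only;
# a real study would use trained models and validated lexicons.
POSITIVE = {"lively", "credible", "calm", "praised"}
NEGATIVE = {"chaotic", "sabotaging", "insults", "failed"}

def sentiment_score(text: str) -> int:
    """Crude lexicon score: +1 per positive token, -1 per negative token."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def cluster_by_keyword(articles: dict[str, str], themes: dict[str, set[str]]) -> dict[str, list[str]]:
    """Assign each article to every theme whose keywords appear in its text."""
    clusters = defaultdict(list)
    for title, text in articles.items():
        words = set(text.lower().split())
        for theme, keywords in themes.items():
            if words & keywords:
                clusters[theme].append(title)
    return dict(clusters)

articles = {
    "debate-1": "a chaotic debate with trump sabotaging and insults",
    "debate-2": "in the debate biden was lively and credible and calm",
}
themes = {"debate": {"debate"}, "covid": {"diagnosis", "virus"}}

print(cluster_by_keyword(articles, themes))   # {'debate': ['debate-1', 'debate-2']}
print(sentiment_score(articles["debate-1"]))  # -3  (negative coverage)
print(sentiment_score(articles["debate-2"]))  # 3   (positive coverage)
```

A real pipeline would substitute a trained topic model (e.g., TF-IDF plus k-means) for the keyword matching and a validated sentiment model for the lexicon, but the structure (group first, then score per outlet and candidate) is the same.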

While our results are still preliminary, they shed light on how these countries’ media outlets are portraying the two candidates. Two key moments from the 2020 campaign — the first debate and Trump’s coronavirus diagnosis — are particularly illustrative.

After the first debate, the Chinese media questioned its usefulness to voters and generally portrayed Trump’s performance in a negative light. To them, the “chaotic” back-and-forth was a sobering reflection of America’s political turbulence.

They described Trump as purposely sabotaging the debate by interrupting his opponent and, in the days after the debate, noted that his performance failed to improve his lagging poll numbers. Biden was criticized for being unable to articulate concrete policies, but was nonetheless praised for being able to avoid any major gaffes and — as an article from the Xinhua News Agency put it — responding to Trump with “fierce words.”

Unlike in 2016, where Clinton was portrayed as anti-Russian, corrupt and elitist, Russian media appeared more willing to characterize the Democratic Party nominee in a positive light.

In fact, Russian coverage expressed surprise over Biden’s debate performance. He didn’t come across as feeble; instead, he was, as the daily newspaper Kommersant wrote, a lively opponent who appeared to be “criticizing, irritating and humiliating” Trump by calling him a “liar, racist and the worst president.” They did praise Trump’s especially aggressive rhetoric. However, our analysis found that Russian media also repeatedly claimed that, unlike 2016, voters today were tiring of his bombast.

While Trump’s post-debate posturing received some positive coverage, Russian media largely lamented his administration’s failure to deliver substantive progress toward normalizing relations between the two countries. They noted the debate clarified policies neither for voters nor for international observers.

Iranian media took the strongest anti-Trump stance. Reports routinely pointed out that Trump has had no foreign policy successes, and has only exacerbated relations with the country’s major rivals. According to Iranian media outlets, Trump’s lack of accomplishments has left him with no choice but to rely on insults and personal attacks.

Biden, however, was said to have kept his calm. As Al Alam News wrote, he used “more credible responses and attacks than Trump.”

The former vice president, in their view, promised some semblance of normalized diplomatic relations.

“Intransigence” and “ignorance”

Sometimes, last-minute surprises upend the final month of the U.S. presidential race. This year was no exception, with Trump’s Oct. 2 announcement of his Covid-19 diagnosis quickly shifting media coverage from the debate to Trump’s health.

He received little sympathy from foreign outlets. Across the board, they were quick to note how his personal disregard for public health safety measures symbolized his administration’s failed response to the pandemic.

For example, one Chinese media outlet, The Beijing News, characterized the diagnosis as “hitting” the president “in the face,” given his previous downplaying of the epidemic. Other reports claimed Trump lacked “care about the epidemic,” including disregard for “protective measures such as wearing a mask.”

Chinese outlets suggested Trump would use the diagnosis to win sympathy from voters, but also noted that, sidelined from holding campaign rallies, he could lose his “self-confessed” ability to attract voters.

Russian media, on the other hand, remained confident that Trump would recover and repeated the White House line of Trump’s good health.

At the same time, Russian outlets tended to chastise Trump’s unwillingness to avoid large gatherings, practice social distancing or wear a mask, all of which violated his administration’s basic health guidelines. Likewise, Russian reports criticized Trump’s post-diagnosis behavior — like tweeting video messages while at the hospital and violating quarantine with his public appearances — as “publicity stunts” that jeopardized the safety of his Secret Service detail and supporters.

Again, Iranian media most directly criticized Trump. Reports characterized Trump as “determined to continue the same approach,” despite his diagnosis, and remain “without a muzzle,” “irresponsibly” continuing to tweet misinformation falsely comparing COVID-19 to the flu.

Coverage centered on Trump’s inability to, as Al Alam put it, show “any sympathy” for the over 200,000 dead Americans. This death toll, the same article noted, was attributed to Trump’s “mismanagement, intransigence, ignorance and stupidity,” highlighted by his cavalier disregard for safety guidelines such as wearing a mask.

In the bag for Biden?

Many of the criticisms of the U.S. found in foreign media outlets in our 2016 study appear in this year’s coverage. But since the 2016 election, geopolitics have changed quite a bit — and, for many of these countries, not necessarily for the better. That might best explain their collective ire toward Trump.

During Trump’s first term, Iranians absorbed the U.S.’s unilateral withdrawal from the Iran nuclear deal, the reimposition of sanctions, and the assassination of one of its top generals.

The Chinese entered into a trade war with the U.S., while the U.S. government leveled accusations of intellectual property theft, mass murder, and blame for the spread of what Trump has called the “China Virus.”

Russians, meanwhile, have seen themselves — fairly or not — bound to Trump’s 2016 election victory and outed as an international provocateur. That Trump has not been able to deliver on normalizing U.S.–Russian relations despite four years of posturing and political rhetoric has perhaps made him more of a political liability than a worthwhile ally. Not only has the Covid-19 pandemic sparked unrest in Russia’s backyard, but mounting regional instability is also undermining Putin’s image as a master tactician.

As a result, these countries’ outlets appear to have shifted attention away from a broad critique of U.S. democracy toward exasperation with Trump’s leadership.

The two, of course, aren’t mutually exclusive. And these countries’ relatively positive characterizations of a potential Biden administration likely won’t last.

But even the country’s supposed adversaries seem to be craving a return to stability and predictability from the Oval Office.

Robert Hinck and Robert Utterback are assistant professors at Monmouth College. Skye Cooley is an assistant professor at Oklahoma State University. This article is republished from The Conversation under a Creative Commons license.The Conversation

People watch a large screen showing Hope Hicks and other White House staffers during a news report from Chinese state television about President Donald Trump testing positive for the coronavirus in Beijing, Friday, Oct. 2, 2020. AP Photo/Mark Schiefelbein.

Facebook and YouTube’s moves against QAnon are only a first step in the battle against dangerous conspiracy theories

https://www.niemanlab.org/2020/10/facebook-and-youtubes-moves-against-qanon-are-only-a-first-step-in-the-battle-against-dangerous-conspiracy-theories/ (Mon, 19 Oct 2020)

Recent decisions by Facebook and YouTube to crack down on the far-right conspiracy theory movement known as QAnon will disrupt the ability of dangerous online communities to spread their radical messages, but they won’t stop them completely.

Facebook’s Oct. 6 announcement that it would take down any “accounts representing QAnon, even if they contain no violent content,” followed earlier decisions by the social media platform to downrank QAnon content in Facebook searches. YouTube followed on Oct. 15 with new rules about conspiracy videos, but it stopped short of a complete ban.

This month marks the third anniversary of the movement that started when someone known only as Q posted a series of conspiracy theories on the internet forum 4chan. Q warned of a deep state satanic ring of global elites involved in pedophilia and sex trafficking, and asserted that U.S. President Donald Trump was working on a secret plan to take them all down.

QAnon now a global phenomenon

Until this year, most people had never heard of QAnon. But over the course of 2020, the fringe movement has gained widespread traction domestically in the United States and internationally — including a number of Republican politicians who openly campaigned as Q supporters.

I have been researching QAnon for more than two years, and its recent evolution has shocked me. The QAnon of July and August was a different movement from the QAnon of October. I have never seen a movement evolve or radicalize as fast as QAnon.

In the weeks leading up to Facebook’s action against “militarized social movements and QAnon,” I had seen a trend in more violent content on Facebook, especially with the circulation of memes and videos promoting “vehicle ramming attacks” with the slogan “all lives splatter” and other racist messages against Black people.

In explaining its ban, Facebook noted while it had “removed QAnon content that celebrates and supports violence, we’ve seen other QAnon content tied to different forms of real world harm, including recent claims that the (U.S.) West Coast wildfires were started by certain groups, which diverted attention of local officials from fighting the fires and protecting the public.”

Prior action was ineffective

Prior to the outright ban, Facebook’s earlier attempts to disrupt QAnon groups from organizing on Facebook and Instagram were not enough to stop their messages from spreading.

One way Q supporters adapted was through lighter forms of propaganda — something I call Pastel QAnon. As a way to circumvent the initial Facebook sanctions, women who believe in the QAnon conspiracies used warm and colorful images to spread QAnon theories through health and wellness communities and by infiltrating legitimate charitable campaigns against child trafficking.

The latest move by Facebook will still allow Pastel QAnon to exist in adjacent lifestyle, health and fitness communities — a softening of the traditionally raw QAnon narratives, but an effective way to spread the conspiracies to new audiences.

And while Facebook’s action reduced the number of QAnon accounts, it didn’t eliminate them completely — and realistically will not. My research shows the following:

— QAnon public groups: 186 pre-ban; 18 post-ban.

— QAnon public pages: 253 pre-ban; 66 post-ban.

— QAnon Instagram accounts: 269 pre-ban; 111 post-ban.
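Those before-and-after counts imply steep but uneven declines. A quick back-of-the-envelope calculation over the figures above makes the reductions explicit:

```python
# Pre- and post-ban counts as reported above.
counts = {
    "public groups": (186, 18),
    "public pages": (253, 66),
    "Instagram accounts": (269, 111),
}

for label, (pre, post) in counts.items():
    drop = 100 * (pre - post) / pre
    print(f"QAnon {label}: {pre} -> {post} ({drop:.1f}% reduction)")
```

Groups fell by roughly 90%, pages by about 74%, and Instagram accounts by about 59%, which is consistent with the point that the ban sharply reduced, but did not eliminate, QAnon’s presence on the platform.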

Facebook’s actions will do permanent damage to the presence of QAnon on the platform in the long run. In the short and medium term, what we will see are pages and groups reforming and trying to game the Facebook algorithm to see if they can avoid detection. However, with little presence on Facebook to quickly amplify new pages and groups and the changes to the search algorithm, this will not be as effective as it was in the past.

Where will QAnon followers turn if Facebook is no longer the most effective way to spread its theories? Already, QAnon has further fragmented into communities on Telegram, Parler, MeWe, and Gab. These alternative social media platforms are not as effective for promoting content or merchandise, which will impact grifters who were profiting from QAnon, as well as limit the reach of proselytizers.

But the ban will push those already convinced by QAnon onto platforms where they will interact with more extreme content they may not have found on Facebook. This will radicalize some individuals more than they already are or will accelerate the process for others who may have already been on this path.

Like a religious movement

What we will likely see eventually is the balkanization of the QAnon ideology. It will be important to start considering QAnon as more than a conspiracy theory: something closer to a new religious movement. It will also be important to consider how QAnon has been able to absorb, co-opt, or adapt itself to other ideologies.

Though Facebook has taken this important step, there will be much work ahead to make sure QAnon doesn’t reappear on the platform.

YouTube said its new rules for “managing harmful conspiracy theories” are intended to “curb hate and harassment by removing more conspiracy theory content used to justify real-world violence.”

In the initial wave of takedowns, YouTube shut down the channels of some of the QAnon influencers and proselytizers, including Canadian QAnon influencer Amazing Polly and Quebec QAnon influencer Alexis Cossette-Trudel. Though this will cut off some of the big influencers, there is more QAnon content on YouTube that falls outside the platform’s new rules.

The new rules will not stop the role YouTube plays in radicalizing individuals into QAnon, nor will they curb those who will radicalize to violence, until the platform bans all QAnon content.

Video is the most used medium to circulate QAnon content across digital ecosystems. As long as QAnon still has a home on YouTube, we will continue to see their content on all social media platforms. QAnon will ultimately require a multi-platform effort.

Technology and platforms provide a vector for extremist movements like QAnon. However, at its root, it’s a human issue and the current socio-political environment around the world is fertile for the continued existence and growth of QAnon.

The action by Facebook and YouTube is a step in the right direction, but this is not the end game. There is much work ahead for those working in this space.

Marc-André Argentino is a Ph.D candidate at Concordia University studying the nexus of technology and extremist groups. This article is republished from The Conversation under a Creative Commons license.The Conversation

Photo of a Trump rally by Becker1999 used under a Creative Commons license.

Journalism faces a crisis in trust. Journalists fall into two very different camps for how to fix it

https://www.niemanlab.org/2020/10/journalism-faces-a-crisis-in-trust-journalists-fall-into-two-very-different-camps-for-how-to-fix-it/ (Thu, 08 Oct 2020)

Editor’s note: Longtime Nieman Lab readers know the bylines of Mark Coddington and Seth Lewis. Mark wrote the weekly This Week in Review column for us from 2010 to 2014; Seth’s written for us off and on since 2010. Together they’ve launched a new monthly newsletter on recent academic research around journalism. It’s called RQ1 and we’re happy to bring each issue to you here at Nieman Lab.

What work is required to build public trust in journalism?

Journalism faces a well-documented crisis of trust. This long-running decline in public confidence in the press is part of a broader skepticism that has developed about the trustworthiness of institutions more generally — leading to an overall trust recession that worries observers who speculate about the endgame of this downward spiral.

But might we see these issues of news and trust in a new light if we reconsidered our assumptions about what actually leads people to develop trust in journalism?

Consider, for example, how journalists for decades have sought to establish trust and confidence by focusing on their democratic responsibility to provide objective information — in which case, trust is presumed to be a product of faithfully adhering to standards and neutrality. In that case, reclaiming trust could be a matter of “getting back to basics,” as it were, and reporting facts in a way that more clearly communicates what people need to know, with the independence and distance that people have come to expect from journalists.

But if, in fact, journalists were to switch their mindset and understand their primary role differently as the facilitation of public deliberation, community connection, and democratic participation — of working with civil society as opposed to apart from it — what would that mean for the overall orientation of journalism and how it works?

A new study in Journalism & Mass Communication Quarterly — by Megan L. Zahay, Kelly Jensen, Yiping Xia, and Sue Robinson, all of the University of Wisconsin-Madison — offers some essential insights on this question. The team, led by Robinson and applying Zahay’s training as a rhetorician, interviewed 42 journalists, about half of them designated “engagement-oriented” and the others “traditionally oriented.” Based on a rhetorical analysis of what these journalists said (via the interviews) as well as what they did (via hundreds of pages of website materials and social media conversation threads), the authors developed a picture of two camps of journalists — both deeply concerned about the crisis of trust in journalism, but each with very divergent ideas about what should be done about it.

For traditionally oriented journalists, trust is achieved by transmitting facts and helping people perform their democratic duties, without any particular public participation involved in that process. Fixing the trust problem, in this view, means doubling down on objectivity, transparency, and accuracy — but in a way that helps citizens to more readily recognize the value that such things provide. By contrast, rather than focusing on institutionalized norms as the defining elements of journalism, “engagement-oriented journalists view [journalism] as a set of relationships, prone to complexity and messiness, and they expect this in the contexts in which they work.”

What’s especially striking about the engagement view, Zahay and colleagues argue, is that it implies not just a different mindset about one’s role but also a transformation in one’s work — the stuff of day-to-day labor, or what they call “the labor of building trust.” A focus on building and maintaining relationships thus suggests “entirely new kinds of journalistic labor that reorient reporters’ attention toward collaboration and facilitation.” From this perspective, public trust in news flows out of efforts that emphasize mutual understanding and empathy with communities — and which may be inherently slow, gradual, and long-term by nature. In the words of a cofounder of an engagement organization who was interviewed, “[I]t’s ineffective to double down on ‘Trust me, I’m a journalist’ … If you’re not in a relationship with someone, if you haven’t proved your value to them … then you don’t have trust.”

By now, there is a large and growing body of research about the possibilities and challenges of engaged journalism. These approaches, in fact, have a long history, going back to the public and citizen journalism movements of the 1990s. But what sets this latest study apart is in how it carefully charts what appears to be a key inflection point in the profession — one that even seems, in the authors’ conclusion, “paradigmatic.” Indeed, this piece is the first to be published out of Robinson’s multi-phased, ongoing book project about how journalists trust “regular people” according to their various identities.

To the extent that we’re beginning to see a decisive split in how journalists define and enact their democratic role — and to the degree that news organizations give individual journalists the freedom and encouragement to act this way and engage trust-building experiments — we may be witnessing a meaningful movement away from the institutional model of critical distance and toward an engagement model of facilitating discussion, building community, and partnering with the public.

Research roundup

Here are some other studies that caught our eye this month:

Life in a news desert: The perceived impact of a newspaper closure on community members. By Nick Mathews, in Journalism.

As scores of weekly and small daily newspapers close across the U.S., scholars and journalists have sounded the alarm about the expansion of news deserts — areas without any dedicated news coverage via a local newspaper. We’ve presumed that news deserts are damaging to democracy, that they hamper public oversight of local government and weaken the fabric of community that is essential to the civic life of these areas.

Mathews supports those premises with a vivid and detailed picture of one of those news deserts — Caroline County in rural Virginia. Using the concept of “sense of community,” Mathews interviews residents of the county after their weekly paper has been shut down. He finds that residents of the county not only feel more in the dark about what their local government is doing, but that they feel more disconnected from each other without a common forum to promote and celebrate community events. “Without the Caroline Progress, I am more isolated,” one resident tells Mathews. “I think we all are. I think the paper was the one thing that kept us together.”

Gendered news coverage and women as heads of government. By Melanee Thomas, Allison Harell, Sanne A.M. Rijkhoff, and Tania Gosselin, in Political Communication.

Media coverage of women politicians, and especially how it differs from coverage of men, has long been a subject of great scholarly interest, with some excellent research on the subject coming out lately. This Canadian study adds nuance to our understanding of it with an automated analysis of more than 11,000 news articles about provincial premiers.

Thomas and her colleagues’ findings are mixed and complex: They find that fewer articles are written about women-led governments than men’s, and that coverage of women features more gendered language and more references to clothing. Other findings, though, run counter to our common assumptions. There are fewer references to women’s families and private lives, and more positive references to their character and competence, than there are for men. Women are referred to with more feminine terms, but there are no differences in the proportion of masculine language used. They conclude that gendered news coverage certainly hasn’t gone away, but we need to think of it in more multi-faceted, fully mediated terms.

How to report on elections? The effects of game, issue and negative coverage on reader engagement and incivility. By João Gonçalves, Sara Pereira, and Marisa Torres da Silva, in Journalism.

There are few aspects of journalism that scholars and media observers criticize as frequently as political journalists’ framing of news stories as a game, or with relentless negativity. And there are few things that journalists criticize as frequently as toxic comment sections under their work. This Portuguese study combines those two elements, trying to determine to what degree game frames influence the civility of news comments.

The authors found that stories that are negative as well as those that are positive toward political actors led to more uncivil comments. Game framing by itself didn’t lead to more uncivil comments overall, but it did predict more incivility among more polarized commenters. Perhaps most practically pertinent to many news organizations, both negative and game-framed articles led to more comments overall, suggesting they may be easy to justify as “drivers of engagement.”

Platforms, journalists and their digital selves. By Claudia Mellado & Amaranta Alfaro, in Digital Journalism.

There’s been plenty of research over the past decade that examines how journalists use Twitter, though quite a bit less looking at their use of Instagram. Mellado and Alfaro explore journalists’ use of both platforms in an illuminating way by looking through the prism of journalists’ identities and perception of their professional roles. In interviews with 31 Chilean journalists, they find three approaches by which journalists see their journalistic identities on Twitter and Instagram: The adapted, skeptical, and redefiner approaches.

The adapted approach involves fully incorporating the routines and features of social media into journalists’ work, but without adjusting their traditional roles and identity. The skeptical approach goes further in defending traditional journalistic identity, seeing those tools as an encroachment on it and something that shouldn’t be validated as journalistically legitimate. Only the redefiners are willing to allow social media to reshape their professional identities, focusing less on strict professional/personal boundaries and more on social media as a self-branding and professional development opportunity. These approaches aren’t mutually exclusive, they argue, but are divergent ways for journalists to reconcile their professional, organizational, and personal identities online.

Anticipatory news infrastructures: Seeing journalism’s expectations of future publics in its sociotechnical systems. By Mike Ananny and Megan Finn, in New Media & Society.

We often talk about news in terms of trying to represent what has happened, or what is happening, but in this creative and intriguing theoretical paper Ananny and Finn are interested in journalism’s approach to what’s about to happen. “Where do journalists get their authority to report on the future?” they ask, and the place they’re led to as they answer that question and others like it is the concept of anticipatory news infrastructures.

Ananny and Finn characterize anticipatory news infrastructures as sociotechnical systems — that is, they’re made up of both material and technological objects as well as the social relationships that shape them. They use examples like the Los Angeles Times’ Quakebot system, NPR’s automated transcription-driven real-time debate fact-checking, and the analytics dashboards meant to help journalists determine what’s about to become news soon to illustrate how these infrastructures allow journalists to manage uncertainty and limit risk in a work environment tightly bound by immediacy and time.

These infrastructures ultimately create their own “anticipatory publics,” Ananny and Finn argue, by planning for and expecting particular relationships between people, data, and issues. This pushes journalists away from their familiar territory of detached objectivity and toward an arena in which their own efforts to anticipate news envision and create new social relationships.

Mob censorship: Online harassment of US journalists in times of digital hate and populism. By Silvio Waisbord, in Digital Journalism.

Online harassment and its implications for the journalist–audience relationship. By Seth Lewis, Rodrigo Zamith, and Mark Coddington, in Digital Journalism.

Online harassment has become a chillingly regular part of the job for far too many journalists around the world. In an important conceptual article, Silvio Waisbord argues that such harassment — often motivated by populism and directed against women, journalists of color, and LGBTQ journalists — is more than trolling, and doesn’t qualify as press criticism. Instead, he frames it as a “political struggle to control speech,” and specifically as a form of mob censorship.

As mob censorship, he argues, it’s part of collective, violent (verbally and/or physically) action to silence journalists, distinct from censorship efforts by the state, markets, or parastate groups. In its use of violent discourse to control journalistic speech, he says, it complicates the already fraught relationship between hate speech and democratic rights.

And if you’ll permit us a bit of self-promotion at the end of this month’s newsletter, we published a study examining some of the effects of this online harassment. In surveying American journalists, we found that journalists who’ve been harassed by audiences online are less likely to view audiences as rational or like themselves. That’s a significant fracture in the journalist-audience relationship, and one that causes us to rethink the optimism that’s often surrounded scholarship around journalists’ reciprocal relationships with audiences, a concept we’ve espoused ourselves.

Photo by Quinn Dombrowski used under a Creative Commons license.
