social media – Nieman Lab (https://www.niemanlab.org)

TikTok and Instagram are the only social networks that are growing as news sources for Americans
https://www.niemanlab.org/2022/10/tiktok-and-instagram-are-the-only-social-networks-that-are-growing-as-news-sources-for-americans/
Tue, 25 Oct 2022 13:30:42 +0000

Ten percent of all American adults now say they “regularly” get news from TikTok, according to a new Pew analysis. That’s up from 3% just two years ago. And for younger Americans, not surprisingly, the percentage is higher: 26% of Americans under 30 say they regularly get news from TikTok.

The increase comes as Americans’ use of most other social networks for news has declined over the past two years. Instagram is up too, but just a tiny bit. The use of Facebook for news has fallen the most over the last two years: Today, less than half of Americans say they regularly get news there. (That drop has taken place as Facebook has retrenched on news; a company spokesperson said recently that “Currently less than 3% of what people around the world see in Facebook’s Feed are posts with links to news articles.”)

More here.

“You don’t know which side is playing you”: The authors of Meme Wars have some advice for journalists
https://www.niemanlab.org/2022/09/you-dont-know-which-side-is-playing-you-the-authors-of-meme-wars-have-some-advice-for-journalists/
Wed, 21 Sep 2022 18:37:11 +0000

In 2019, after BuzzFeed and Verizon Media announced a combined 1,000 layoffs on the same day, many people who shared news of their layoffs on Twitter were inundated with replies telling them to “learn to code.”

It wasn’t just ill-timed career advice. It was a targeted harassment campaign against media workers that was organized on 4chan by people on the right who hate the mainstream media.

Memes have been used to target marginalized groups for at least a decade now. A new book by researchers at Harvard’s Shorenstein Center on Media, Politics and Public Policy documents how memes and the online communities that produce them sow disinformation and erode trust in the government and the mainstream media. Meme Wars: The Untold Story of the Online Battles Upending Democracy in America explains how the “Stop the Steal” movement — the false claim that the 2020 election was “stolen” from former president Donald Trump — started online and culminated in the January 6 insurrection at the U.S. Capitol, and how the movement drew on Gamergate, the Occupy Wall Street movement, and Donald Trump’s rise to the presidency to develop its playbook.

Meme wars are culture wars, the authors write — “accelerated and intensified because of the infrastructure and incentives of the internet, which trades outrage and extremity as currency, rewards speed and scale, and flattens the experience of the world into a never-ending scroll of images and words.”

In April 2020, co-authors Joan Donovan, the research director at Shorenstein, and Brian Friedberg, a Harvard ethnographer studying online fringe communities, launched a newsletter, “Meme War Weekly,” “that got really grim very quickly,” they told me. It was impossible to write about memes on a week-by-week basis without noticing their cumulative social and political impact over time. Their Media Manipulation Casebook, a series of case studies published in October 2020, was a precursor to their new book.

I caught up with Donovan, Friedberg, and Emily Dreyfuss, a technology journalist and 2018 Nieman Fellow, to talk about their book and what journalists can learn from the last 10 years of memes to inform their future coverage of American democracy. This interview has been edited for length and clarity.

Hanaa’ Tameez: At what point in your research did you realize you had a book on your hands?

Joan Donovan: It was the night of January 6. Brian, Emily, and I were hosting an open Zoom room where journalists were coming in and out and we were talking about the significance of Stop the Steal, these kinds of hashtag movements that had been popping up, and the role that right wing media had played in fomenting such an attack on the U.S. Capitol. We all decided that there was a book here.

In 2016, I had written an article in MIT Tech Review about meme wars in the 2016 election. We had a lot of interesting data lying around and wanted to put it into an internet history. We realized very quickly that our starting point would be the Occupy Wall Street movement and the use of memes to mobilize people in that moment in 2011. Many of the books and writings about the Occupy movement didn’t attend to the fact that it was just as mobilizing for the right as it was for the left. And so we wanted to begin with our understanding of where Andrew Breitbart, Steve Bannon, and Alex Jones got their foothold in social media and meme wars.

Emily Dreyfuss: [On] January 6, people were asking questions like “How could this be happening?” We were seeing the memes that we had been tracing through Meme War Weekly appear on people’s clothing and in the livestream comments. We were also getting so many questions directly from journalists and people who wanted to know Joan and our team’s take — questions like, “How could this event happen? This seems to be coming out of nowhere.”

It wasn’t out of nowhere, but hearing people ask how it could be happening really clarified for me that we actually do know, [whereas many] other people don’t know. Joan and Brian’s research about meme wars had been ongoing for years, but it struck us that this was the perfect vehicle to explain something that other people thought was unexplainable. That night, we wrote the outline for the book.

Tameez: After writing this, what are your takeaways about where we stand as a society, as a democracy?

Brian Friedberg: In these last few months leading up to the midterms, political communication and messaging are clearly trending toward polarization. But then you get big media institutions like CNN signaling a change in tone and focus over where they might have been in the last four years under Trump. There’s also a lot of fatigue within parts of the right with Trump himself, and an understanding that there’s not just a battle for the Republican Party but a battle for the future of MAGA playing out [among] different factions of broadcasters, influencers, and political candidates.

What are the comparable big movements within [the Democratic Party’s political communication]? You have the much discussed adoption of the Dark Brandon meme by more mainstream Democratic figures, potentially signaling an entrance into the meme wars. We have yet to see if that’s actually going to impact the midterms. But there is definitely [increasing] adoption of “us versus them” messaging among most political factions in the U.S. I think we saw the foundation for that in the 10 years leading up to the book.

Dreyfuss: As the normie on the team, one of the things I learned while researching this book was just how many communities have lost faith in the power and credibility of institutions in general.

The media itself is a proxy for all types of institutions, including academia and government. I was awakened to that through the course of researching this book, and then I couldn’t unsee it. With Roe v. Wade and what’s going on with the Supreme Court in the U.S. right now, I think we’re watching that lack of trust in the system increasing on the left, as well as the right. Our book focuses a lot on the right, and there are a lot of people who feel that way on the right, but in the U.S., a lot of people feel that way on the left as well.

Tameez: Where do you think the press first went wrong?

Donovan: In the book we talk about significant moments of meme warfare occurring around Obama, particularly the way in which he was caricatured using Joker memes, [and] the conspiracy theories, particularly birtherism, that plagued his tenure as president.

One of the things that is important to understand here is that memes don’t just come in the form of an image with some quippy text. They’re viral slogans, too. They’re the activation of people’s confirmation bias and stereotypes. As we watched political opponents begin to memeify one another and push these tropes — some with the intention of sowing disinformation, others with the intention of spreading propaganda — I think many journalists were initially very dismissive.

One big example is the way in which the alt-right arrived in the media landscape. It’s not that journalists were unable to understand the rise of a white supremacist movement, but they refused to call it what it was. Instead, they used the branding and memes that had been dredged up by these groups that were specifically seeking to rebrand themselves…they didn’t know they were being played.

So you saw the rise, not just of the alt-right, but of a key figure in the alt-right in Richard Spencer, who was able to use all of that media attention to recruit people into this movement, which then manifested and moved [into the real world] at the Unite the Right rally in Charlottesville, where many people were injured and a woman died.

When we’re trying to understand where the media is culpable in meme wars, it’s not just that they dropped the ball, but also that some in the mainstream media thought use of the term “alt-right” wasn’t something that they needed to concern themselves with. In doing so, they spread the meme very far.

Dreyfuss: During those periods of time, I was a reporter and an editor at Wired. Internet culture was treated, for a very long time, like its own side beat. If it was happening online, it would go into the tech section of a newspaper and not be treated with the gravitas that, perhaps, [it might get] if someone was making those statements on television or in other legacy formats. And then the press became very attuned to Twitter trends.

The media treating Twitter like an assignment editor is one of the fundamental errors that enabled meme warriors to play everyone. It showed that if they could get [something] to trend enough, then they’d get a story about it. In the era of social media taking all the ad sales out of journalism, it became even more important for journalists to write more, have a lot of content, and be covering the thing that everyone was talking about, which then created a snowball effect.

If something trended and someone covered it, a million other people covered it as well. It really reinforced things like the term “alt-right,” and if you then asked where the term “alt-right” even came from, it was very hard to find the origin because of all that content that used the term without questioning it. At Wired, we used “alt-right” for a very long time until it occurred to us: “Whoa, wait, is this a problematic phrase?” But at that point, we’d already written a million articles adding to the problem.

Tameez: In the book, you write about “hate facts,” or misrepresented statistics and pseudoscience that often come up in these communities and target marginalized people. What role does the press play in perpetuating those?

Friedberg: Things like crime statistics — real or fudged — or statistics about genomics and IQ keep resurfacing. It’s old stuff. Contemporary, blatantly racialized social science is not acceptable anymore, which is why they keep going back to the past, despite it being debunked by so many informal and formal sources. Things like scientific racism and gender essentialism keep coming up because we haven’t empowered the folks that they hurt systemically.

One of the concurrent problems with the media coverage is that there will often be stand-ins, particularly in the right-wing press. Instead of saying “Black people” are doing something, they’ll say “inner city” or “Chicago.” While they might not be directly quoting these [hate facts], they all line up with the comment sections and the Facebook shares. There’s a lot of informal knowledge-making that happens underneath the mainstream news stories that the outlets aren’t necessarily responsible for. One of the things that we’ve talked about is how the alt-right comment bombed and raided the comment sections of the conservative press, which is why a lot of comments sections on websites disappeared.

So there are placeholder words and frames that are being accepted, and a lot of uncritical adoption of official statements and statements by police that further criminalize marginalized communities. I think that’s the next frontier that needs to be addressed [and understood]: That this stuff feeds into narratives that keep people impoverished and oppressed.

Dreyfuss: These narratives get into the mindsets and brains of editors and reporters, so one other thing to look at is which stories get written and which don’t. That’s one of the main ways that the press can and does perpetuate many of these narratives. There was a ton of coverage of Antifa and violence during the Black Lives Matter movement in 2020. In fact, violence at those protests was extremely rare; it was an outlier, but the press is trained to report on outliers. That is a major problem. We write the story that’s odd — and if everyone writes about the odd story, it seems common.

Tameez: What do you think are the major takeaways for journalists from Meme Wars?

Donovan: Reporting on the meme wars is hard because you don’t know which side is playing you, in the moment. A lot of times, it only becomes apparent after the fact. Journalists might see something come up and think it’s interesting. But what they don’t understand is that it might be part of a manipulation campaign targeting them directly as journalists.

[Journalists need to ask]: Is calling attention to these memes going to improve the audience’s understanding? And then, is covering this going to be damaging in some way? Is it going to give oxygen to those who are trying to wage culture wars?

The point at which journalists should pay attention to these things is always a tough call, unfortunately. But we’ve noticed over the years that the far right is going to lie to you about who they are. They’re going to lie to you about what their intentions are, because their views are fringe, unethical, and antisocial. Journalists are going to have to learn the methods of digital forensics and become much more adept at internet sleuthing if they’re going to survive writing about these meme wars. We caution journalists not to get into things that they don’t quite understand.

My broad advice to journalists is to get to know your beat, and to stay on top of those who are using media manipulation, disinformation, and meme wars to carry out their politics or to make a profit.

Tameez: Some people say that there’s no difference between “culture” and “internet culture” anymore because the internet is a driving part of our lives and society. Does that logic apply to meme wars, too?

Dreyfuss: We find that meme wars are an evolution of culture wars. But the creation and mass scaling of social media, the infrastructure of the social internet, and the way those things put the power to reach billions of people in everyone’s hands have changed the nature of the way those culture wars can be fought online. They amplified the role that memes can play because of the format.

Central to the book is Joan’s theory that meme wars drive something that starts online to happen in the real world. The meme wars fail if something doesn’t happen in the real world, because the point of them is not just to be online, but to actually influence culture.

Donovan: I would add that there’s no more “offline.” If you’re a child of the early internet days of AOL, it was very clear when you were online, because you literally were plugged into a phone line that was plugged into the wall. We started the book with the Occupy Wall Street movement because it was the first huge instance in the U.S. of social media moving into public spaces and promoting civil disobedience.

Tameez: What is your tech stack? How do you protect yourselves online and mentally?

Donovan: I’d rather not reveal it. We use a mishmash of corporate products that delete content from the internet or make it difficult to collect our phone records, email addresses, and whatnot. We also have physical security protocols that make it more difficult for people to access us. Then we have some friendly people who monitor these spaces and keep an eye out for our names and information about people on our teams. That’s about all I’m willing to say about that publicly.

The work is rewarding when you see things getting done that are outside of your purview. As researchers, we know the work that we do is of global importance. When journalists give us feedback saying that they were able to write better stories because of our research, when technologists say they were able to create better software because of our work, when civil society actors say they were able to influence culture or policy in certain directions, when members of Congress say thank you for the research that we do — that, to me, is a really important protective element.

If we were doing this work and sort of screaming into the void and watching as democracies fail, it would be hard to keep doing it. But we know the work that we’re doing is getting taken up by important decision-makers and stakeholders, and is protecting other people from going through some of these very damaging campaigns. There’s a sense of justice in the work that you don’t get from many other jobs.

Dreyfuss: There is power in explaining something that is happening in the world and figuring it out. The existence of this book, and a lot of the work that we do as a team, is about recognizing that something is happening and figuring out why. Life is full of unknowable things and unknown things. There is some power and calmness that comes from [the recognition] that this is not an unknowable problem.

Friedberg: I almost exclusively consume some kind of indie media. At this point, the far right is just one of the many voices I listen to on a daily basis. I would rather triangulate between a real Nazi podcast and a real lefty podcast than between CNN and Fox News. I prefer this side of the media world.

Tameez: Is there anything else you want to add?

Donovan: We conclude the book by thinking about what memes have to do with people’s nationalistic identities. Right now U.S. politics is fracturing around who gets to define what it means to be American. Who gets to claim that status? Under what conditions do we consider someone “patriotic” versus “nationalistic”?

The people who have most been marginalized by our political system use the tools of new media to be seen and be heard. But that doesn’t necessarily make social media good for a society. Social media is now overrun with very powerful politicians and very powerful rich men who have a very particular political agenda.

Even though social media as a technology hasn’t really changed that much over the last decade, its users have, and it’s become much more ubiquitous. It’s become much more of a tool of the powerful to oppress, rather than a weapon of the weak to liberate. As journalists are thinking about what stories they should be telling, they should turn their eye to the groups of people who are the most marginalized, who are struggling for recognition. They should not assume that just because a few accounts on social media are being loud that that means the whole multiplicity of that identity is represented.

The way in which social media is structured is almost like a distorted mirror of our society. It’s imperative that journalists understand that they are on the front lines of the meme wars, and that they can really shift the balance if they shift who they spotlight and what stories they choose to tell.

Most local election offices still aren’t on social media, new research finds
https://www.niemanlab.org/2022/08/most-local-election-offices-still-arent-on-social-media-new-research-finds/
Wed, 31 Aug 2022 12:59:02 +0000

Local election officials are trying to share voting information with the public on social media but may be missing some key platforms — and the voters who use them.

In early July 2022, for instance, young voters in Boone County, Missouri, complained that they had missed the registration deadline to vote in the county’s Aug. 2 primary election. They claimed no one “spread the word on social media.” The local election office in that county actually has a social media presence on Instagram, Facebook, Twitter, and TikTok. But its accounts don’t have many followers and aren’t as active as, say, celebrity or teenage accounts are. As a result, election officials’ messages may never reach their audience.

The Boone County example raises important questions about how prospective voters can get informed about elections, starting with whether or not local election officials are active on social media and whether they use these platforms effectively to “spread the word.”

In our research as scholars of voter participation and electoral processes, we find that when local election officials not only have social media accounts but use them to distribute information about voting, voters of all ages — but particularly young voters — are more likely to register to vote, to cast ballots, and to have their ballots counted.

For example, during the 2020 election, Florida voters who lived in counties where the county supervisor of elections shared information about how to register to vote on Facebook, and included a link to Florida’s online voter registration system, were more likely to complete the voter registration process and use online voter registration.

In North Carolina, we found that voters whose county board of elections used Facebook to share clear information about voting by mail were more likely to have their mailed ballots accepted than mail voters whose county boards did not share instructions on social media.

Young people face distinct voting challenges

Voter participation among young voters, those between the ages of 18 and 24, has increased in recent elections, but still lags behind that of older voters. One reason is that younger voters have not yet established a habit of voting.

Even when they do try to vote, young voters face more barriers to participation than more experienced voters. They are more likely than older people to make errors or omissions on their voter registration applications and therefore not be successfully registered.

When they do successfully complete the registration process, they have more trouble casting a vote that will count, especially when it comes to following all the steps required for voting by mail. When they try to vote in person, evidence from recent elections shows high provisional voting rates in college towns, suggesting college students may also have trouble casting a regular ballot, whether from confusion about finding their polling place or because their voter registration application was never successfully processed.

Some of these problems exist because voters, especially young ones, don’t know what they need to do to meet the voter eligibility requirements set by state election laws. Those laws often require registering weeks or months in advance of Election Day, or changing their registration information even if they move within a community.

Social media as a tool to spread the word

Social media can be a way to get this important information out to a wider audience, including to the young voters who are more likely to need it.

Younger people use social media more than older voters, with a strong preference for platforms such as YouTube, Instagram, and Snapchat.

News outlets and political campaigns use social media heavily. But our analysis finds that the vast majority of local election officials don’t even have social media accounts beyond Facebook. And, when they do, it is likely that they are not effectively reaching their audience.

Gaps in how local election officials use social media

We have found that during the 2020 U.S. presidential election, 33% of county election offices had Facebook accounts. Facebook is the most commonly used social media platform among Americans of all ages. But two-thirds of county election offices didn’t even have a Facebook account.

Just 9% of county election offices had Twitter accounts, and fewer than 2% had accounts on Instagram or TikTok, which are more popular with young voters than Twitter or Facebook.

Using social media for voter education

Local election officials are charged with sharing information about the voting process — including the mechanics of registering and voting, as well as official lists of candidates and ballot questions.

Their default method of making this information available is often to share it on their own government websites. But young voters’ regular use of social media presents an opportunity for officials to be more active and engaged on those platforms.

While many election officials around the country face budget and staffing pressures, as well as threats to their safety, our research confirms that when officials do get involved on social media, young voters benefit – as does democracy itself.

Thessalia Merivaki is an assistant professor of American politics at Mississippi State University. Mara Suttmann-Lea is an assistant professor of government at Connecticut College. This article is republished from The Conversation under a Creative Commons license.


Cable news has a much bigger effect on America’s polarization than social media, study finds
https://www.niemanlab.org/2022/08/cable-news-has-a-much-bigger-effect-on-americas-polarization-than-social-media-study-finds/
Thu, 11 Aug 2022 13:00:01 +0000

The past two election cycles have seen an explosion of attention given to “echo chambers,” or communities where a narrow set of views makes people less likely to challenge their own opinions. Much of this concern has focused on the rise of social media, which has radically transformed the information ecosystem.

However, when scientists investigated social media echo chambers, they found surprisingly little evidence of them on a large scale — or at least none on a scale large enough to warrant the growing concerns. And yet, selective exposure to news does increase polarization. This suggested that these studies missed part of the picture of Americans’ news consumption patterns. Crucially, they did not factor in a major component of the average American’s experience of news: television.

To fill in this gap, I and a group of researchers from Stanford University, the University of Pennsylvania and Microsoft Research tracked the TV news consumption habits of tens of thousands of American adults each month from 2016 through 2019. We discovered four aspects of news consumption that, when taken together, paint an unsettling picture of the TV news ecosystem.

TV trumps online

We first measured just how politically siloed American news consumers really are across TV and the web. Averaging over the four years of our observations, we found that roughly 17% of Americans are politically polarized — 8.7% to the left and 8.4% to the right —  based on their TV news consumption. That’s three to four times higher than the average percentage of Americans polarized by online news.

Moreover, the percentage of Americans polarized via TV ranged as high as 23% at its peak in November 2016, the month in which Donald Trump was elected president. A second spike occurred in the months leading into December 2018, following the “blue wave” midterm elections in which a record number of Democratic campaign ads were aired on TV. The timing of these two spikes suggests a clear connection between content choices and events in the political arena.

Staying in TV echo chambers

Besides being more politically siloed on average, our research found that TV news consumers are much more likely than web consumers to maintain the same partisan news diets over time: After six months, left-leaning TV audiences are 10 times more likely to remain segregated than left-leaning online audiences, and right-leaning audiences are 4.5 times more likely than their online counterparts.

While these figures may seem intimidating, it is important to keep in mind that even among TV viewers, about 70% of right-leaning viewers and about 80% of left-leaning viewers do switch their news diets within six months. To the extent that long-lasting echo chambers do exist, then, they include only about 4% of the population.

Narrow TV diets

Partisan segregation among TV audiences goes even further than left- and right-leaning sources, we found. We identified seven broad buckets of TV news sources, then used these archetypes to determine what a typical unvaried TV news diet really looks like.

We found that, compared to online audiences, partisan TV news consumers tend not to stray too far from their narrow sets of preferred news sources. For example, most Americans who consume mostly MSNBC rarely consume news from any other source besides CNN. Similarly, most Americans who consume mostly Fox News do not venture beyond that network at all. This finding contrasts with data from online news consumers, who still receive sizable amounts of news from outside their main archetype.

Distilling partisanship

Finally, we found an imbalance between partisan TV news channels and the broader TV news environment. Our observations revealed that Americans are turning away from national TV news generally in substantial numbers — and crucially, this exodus is more from centrist news buckets than from left- or right-leaning ones. Within the remaining TV news audience, we found movement from broadcast news to cable news, trending toward MSNBC and Fox News.

Together, these trends reveal a counterintuitive finding: Although the overall TV news audience is shrinking, the partisan TV news audience is growing. This means that the audience as a whole is in the process of being “distilled” — remaining TV viewers are growing increasingly partisan, and the partisan proportion of TV news consumers is on the rise.

Why it matters

Exposure to opposing views is critical for functional democratic processes. It allows for self-reflection and tempers hostility toward political outgroups, whereas only interacting with similar views in political echo chambers makes people more entrenched in their own opinions. If echo chambers truly are as widespread as recent attention has made them out to be, it can have major consequences for the health of democracy.

Our findings suggest that television — not the web — is the top driver of partisan audience segregation among Americans. It is important to note that the vast majority of Americans still consume relatively balanced news diets.

However, given that the partisan TV news audience alone consumes more minutes of news than the entire online news audience, it may be worth devoting more attention to this huge and increasingly politicized part of the information ecosystem.

Homa Hosseinmardi is an associate research scientist in computational social science at the University of Pennsylvania. This article is republished from The Conversation under a Creative Commons license.

AP Photo/Allen G. Breed

“Think carefully before you quote-tweet”: The Guardian releases new social media guidelines for staff (May 5, 2022)

Last month, The New York Times released new guidelines around the way its reporters use Twitter.

Twitter was taking up too much of journalists’ time, the Times said. It was also driving harassment and abuse, and bad tweets were harming the reputation of the paper and of its staffers. The company also made it clear that Twitter is truly optional, and that using it isn’t a job requirement.

There must be an Elon Musk in the water because this week, The Guardian released new social media guidelines for its own staffers.

The guidelines are “based on extensive input from journalists and commercial staff across [Guardian News Media], in the UK, US and Australia,” the Guardian said. Here are a few key parts:

Social media is optional — really.

GNM does not require you to tweet or post on any social media platform. Most staff can do their jobs extremely well using social media either occasionally, such as to share Guardian and Observer stories; for monitoring (‘listen-only’ mode); newsgathering/finding sources; or not at all. You are not expected to have a presence or a following on social media.

Employees aren’t exactly forbidden from expressing political opinions. But…

The Guardian and the Observer are renowned for fair and accurate reporting, and being trusted matters. Editorial colleagues — particularly those working in news — should remain especially mindful of blurring fact and opinion when using social media. Be aware that expressing partisan, party-political or strong opinions on social media can damage the Guardian’s reputation for fair and fact-based reporting, and your own reputation as a journalist. The same applies to likes and retweets.

Reporters with large followings need to be especially careful about this, the memo notes:

Your behaviour, more than most, will reflect on GNM and may have a disproportionate impact on those you engage with on social platforms. Think carefully before you quote-tweet.

Don’t use social media to fight with or criticize colleagues or the company.

We strongly discourage the use of social media to air any form of internal disputes with colleagues or contributors, or with GNM. This is a serious matter.

Also, subtweets about colleagues are “never acceptable.”

Guardian reporters should generally not break news on Twitter

Remember, as a journalist your job is to break news for GNM, on GNM’s platform, not on social media. Only tweet breaking news if the news editor is happy for you to do that, rather than report it for the website.

Just because Twitter says it’s a story doesn’t mean it is.

It’s worth keeping in mind that just because a story is generating interest on social media, or a handful of people have tweeted about it, that does not necessarily mean it has news value and needs to be reported or circulated further on social media.

Delete your tweets! Expense Tweetdelete!

We strongly encourage staff to regularly delete historical tweets and other social posts. We recommend using the Tweetdelete service to do this. The cost of this can be expensed.

The Guardian also “plans to create a new role in the managing editor’s office that includes responsibility for social media, so that GNM journalists and editors have somebody to talk to for expert advice and support, including on abuse or harassment when needed.”

Early sketch of a Twitter bird in 2009 by Matt Hamm used under a Creative Commons license.

How can publishers respond to the power of platforms? (April 27, 2022)

The following essay is adapted from The Power of Platforms: Shaping Media and Society by Rasmus Kleis Nielsen and Sarah Anne Ganter, which was recently published by Oxford University Press. It’s reproduced here with permission.

Large technology companies such as Facebook and Google — in competition with a few others including Amazon, Apple, Microsoft, and a handful of companies elsewhere — increasingly define the way the internet works and thereby influence the structure of the entire digital media environment.

But how do they exercise this power, how have news organizations responded, and what does this development mean for the production and circulation of news? These are the questions we focus on in our new book.

Our primary objective is to understand the relationship between publishers and platforms: how these relationships have evolved over time, how they play out between different publishers and different platforms, how they differ across countries, and what this wider development — in which news organizations become simultaneously empowered by and more dependent on technology companies — means for news specifically and our societies more broadly.

The analysis is based on interviews with more than 50 people working across a range of publishers and platforms in the United States, France, Germany, and the United Kingdom as well as background conversations and observations at scores of industry events and private meetings. We trace the development of the relationship between publishers and platforms over the last decade and focus in particular on the rapid changes from 2015 onward.

Beyond “frenemies”

Despite 20 years of often difficult relations, a clear recognition of the “frenemy” dynamic at play, and the reality of intensifying competition for attention, advertising, and consumers’ cash, many publishers still actively seek to collaborate with platform companies. The vast majority continue to invest in platform products and services even when they’re not offered opportunities to collaborate directly.

Here’s how the director of strategic initiatives at a major U.S. newspaper aspiring to join the inner circle of “platform darlings” described the process of actively seeking collaboration with companies that he explicitly recognizes as major competitors for attention and advertising: “We did a lot of begging. We promised to be completely committed to whatever you ask, as long as you ask.” He explained: “We may not like them, but they have been absolutely essential in expanding our reach and building our digital business.”

Going forward, individual publishers have a series of important choices about how to structure their interactions with platforms.

(1) What balance do they seek between onsite and offsite reach? How can the two complement each other while minimizing the risk of cannibalization?

(2) What is the core business model, including the balance between advertising, reader revenue, and other sources? Which combination of platform partners is most likely to enable that business model?

Finally, we know that the platforms are here to stay and that their basic offer of reach in return for content is clear, but everything else is likely to continue to change. So: (3) How can publishers continuously assess the material and immaterial benefits of their investments in platforms and ensure that they are able to adapt to constant change, without locking in on the (all too often mistaken) assumption that a particular platform opportunity or specific platform product is here to stay?

Every publisher will need to think through what reality-based beneficial relationships with various platforms — based on the solid ground of mutual self-interest, not hopeful dreams or empty promises — can look like. Perhaps it is time to leave behind the somewhat moralizing terminology of friends, enemies, and “frenemies,” lest it gets in the way of clear-eyed analysis. Has anyone ever really been “friends” with a billion-dollar corporation?

What comes next?

While there is an increasingly lively policy debate around platforms, it is clear that the regulatory road ahead is long, slow, and uncertain.

Publishers, at least in Europe, have often ultimately secured political support for much of what they asked politicians for, but getting policies passed (let alone implemented) takes years, and the concrete benefits have often fallen far short of what publishers hoped for.

The CEO of a major U.S. newspaper company said: “We plan our strategy with two assumptions. The first is that in the future, we will have no print profits. The second is that the regulatory environment will stay roughly the same.” He added: “Even if we did see, for example, antitrust action against the platforms, it would take years, probably decades, and in the end might not really benefit us. So we focus on the things we can control.”

The “things we can control” are the decisions that publishers themselves make, individually and perhaps together. These decisions are shaped by the power of platforms and many other forces, but they still matter. A growing number of individual news publishers around the world are demonstrating that, while the industry as a whole continues to decline, shrink, and struggle to adapt to a changing media environment, some have managed to develop editorially and technologically compelling offers and build sustainable, even growing, businesses.

Globally recognized brands like The New York Times are the most prominent examples of this, though given how unusual its position is, the arguably more important examples are the growing number of smaller organizations that are succeeding, whether legacy newspapers like the upmarket Dagens Nyheter, the popular VG, or local news publisher AMedia, or digital-born brands like the upmarket MediaPart, the widely read El Diario.es, the popular Brut, and the local Lincolnite.

Corporatist, complementary, and collaborative approaches to platforms

Individual corporate strategies and possibly public policy interventions aside, it is possible to imagine some publishers, or even groups of publishers, trying to forge different paths ahead. Three paths that seem possible include corporatist approaches, complementary approaches, and collaborative approaches.

First, publishers have repeatedly tried corporatist approaches to platforms, trying to present a joint front to get more leverage and negotiate more favorable terms of trade with platforms. Some U.S. newspapers explored this in 2009 under the aegis of the Newspaper Association of America. Their French counterparts did the same through SPQN, as did a group of German publishers through VG Media. The American attempt came to nothing, the French initiative resulted in a modest settlement, and the German group ultimately granted Google free licenses to use their content.

Each case illustrates how attempts to act collectively have foundered. Most publishers are loath to surrender the very real short-term benefits of collaborating with platforms. Some will always refuse to join collective action because they have very clear incentives for going it alone. And competition authorities are skeptical of what could look like cartels.

But the idea lives on. In the United States, the News Media Alliance, which represents 2,000 news publishers, has been lobbying for legislation to provide a temporary antitrust exemption for news publishers to negotiate collectively with platforms like Google and Facebook. In Europe, some of France and Germany’s major publishers are trying to close ranks in a fight with Google over the platform’s response to the European Union Online Copyright Directive.

South Korea is the main example of an enduring corporatist approach to platforms. There, the dominant platform companies Naver and Daum work with the “Committee for the Evaluation of News Partnership” (whose members are recommended by the Korean Newspapers Association and the Korean Broadcasters Association, among others) to identify privileged partners. Out of many thousands of South Korean online publishers, several hundred are recognized by the Committee, and about a hundred have been paid a licensing fee that, in total, amounted to tens of millions of dollars a year, primarily to the biggest publishers.

Second, more publishers might go all-out on the opportunities that come with primarily being complementors to very large platforms, investing in a portfolio of platform opportunities in search of distributed reach, and entirely avoiding head-on competition and attempts to build up direct relations with readers.

With pivots back to websites and apps, even prominent distributed publishers like BuzzFeed missing their business goals, and the odd product change to remind everybody that what platform companies give, they can take away, this is clearly a risky strategy. And just as almost no publisher focuses exclusively on on-site distribution, exclusively off-site approaches are rare.

In particular, the strongest publishers, with distinct and effectively differentiated offers and strong direct routes to market, tend to bristle at the very idea, even as the list of top English-language publishers on Facebook in late 2019 was full of familiar names, with CNN, the Daily Mail, and Fox News occupying the top three spots, and The New York Times, the BBC, The Washington Post, and the Guardian all in the top 10.

Thus, while the platform risk is considerable, with the contingency of relying on platforms where, at any moment, the product may change, a number of publishers are pursuing these opportunities aggressively. Looking beyond established models of publishing, whether legacy or digital-born, complementary strategies focused on pursuing platform opportunities while managing platform risk can take many forms.

At one end there are individual “influencers” on Western platforms, from stars like PewDiePie making millions every year to countless “nano-influencers” earning a little on the side — independently operated individual profiles working across platforms, producing original content, often as a business or at least a side job, and leveraging platform opportunities to compete with established publishers for attention, advertising, marketing, and the like.

At the other end, one can point to the app economy and the video game industry as big, competitive, and lucrative industries that are almost entirely based on a multitude of third parties — some of them large profitable companies — built in large part by complementing a few dominant platforms.

Third, if the central risks publishers are trying to contain are asymmetry when faced with much larger platforms, and the contingency and platform risk that comes with being too dependent on them, publishers might collaborate to create their own alternatives to some of the products and services that dominant platforms offer.

Serious publishers have already embraced the idea that, to succeed, news media has to combine editorial excellence with technological excellence, matching the expectations that audiences and advertisers have become accustomed to through the experience of using platforms’ products and services.

Some of this work begins internally, with publishers like Vox Media and The Washington Post developing new digital publishing platforms (and in turn offering these up for licensing to other publishers), and The New York Times and others investing in advertising technology.

Occasionally, this involves publishers operating their own platforms, which companies like Axel Springer and Schibsted do very successfully with classified advertising platforms, and which Springer does in partnership with Samsung on the mobile news aggregator Upday.

Still, the track record of publishers’ dabbling in platforms has been uneven and often unsuccessful. Rupert Murdoch’s News Corporation bought Myspace in 2005 for $580m, only to sell it for $35m in 2011. The Georg von Holtzbrinck Publishing Group bought the German social network StudiVZ in 2007 for €85m, but sold it in 2012 for an undisclosed sum. French publishers have repeatedly declared their intent to launch their own aggregators and search engines (none have materialized), and several publishers have tried to launch blogging networks and various other forms of platforms for readers and subscribers, often with limited success.

More recently, there is an increasing number of examples of smaller groups of publishers collaborating on joint platforms for advertising sales, registration, subscriptions, and the like. Collaboration on specific solutions to specific problems seems like a promising route for publishers seeking to retain their independence and make the best possible use of the opportunities existing platforms offer, while finding ways of reducing the platform risk that comes with becoming increasingly reliant on and intertwined with them across distribution, advertising sales, analytics, sales, and more.

Publishers taking control of their own destiny

Publishers make their own decisions, but not under conditions of their own choosing. They are decisions nonetheless, and decisions that matter. Unwarranted determinism about the supposedly sovereign power of platforms is paralyzing, disrespectful of the difference that clear strategic thinking and careful execution makes, and ultimately not supported by the evidence.

Some publishers have demonstrably been better at building reach via platforms. Some have been demonstrably better at acquiring subscribers via platforms. The choice to try to do one or the other is a key strategic one. Some publishers have been much better at building direct engagement with audiences, and some have very significant direct traffic and very wide reach via platforms.

In the years ahead, publishers will continue to make different strategic decisions about how to realize platform opportunities while minimizing platform risk — individually, each pursuing their own interest, and perhaps sometimes in groups, whether through corporatist, complementary, or collaborative approaches.

Rasmus Kleis Nielsen is director of the Reuters Institute for the Study of Journalism and a professor of political communication at the University of Oxford. His other books include The Changing Business of Journalism and its Implications for Democracy and Political Journalism in Transition: Western Europe in a Comparative Perspective. Sarah Anne Ganter is an assistant professor at Simon Fraser University’s School of Communication.

Image of different podiums and platforms by Rodion Kutsaev is being used under an Unsplash License.

Algorithms, lies, and social media (April 7, 2022)

There was a time when the internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide. But today, democracy is in retreat and the internet’s role as driver is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox — that the internet is both savior and executioner of democracy — can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences. Markets on social media and the internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers. On social media and parts of the internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. In what Harvard social psychologist Shoshana Zuboff calls “surveillance capitalism,” the platforms are incentivized to align their interests with advertisers, often at the expense of users’ interests or even their well-being.

This economic model has driven online and social media platforms (however unwittingly) to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise. Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on.

But when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalize on this: To maximize advertising revenue, they present users with content that captures their attention and keeps them engaged. For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. A study by Mozilla researchers confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.

In the same vein, our attention online is more effectively captured by news that is either predominantly negative or awe inspiring. Misinformation is particularly likely to provoke outrage, and fake news headlines are designed to be substantially more negative than real news headlines. In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. (Presumably because of the revelations, the algorithm was changed.) We also know that political parties in Europe began running more negative ads because they were favored by Facebook’s algorithm.

Besides selecting information on the basis of its personalized relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. But until recently, these algorithms went only so far. As Evelyn Douek, a senior research fellow at the Knight First Amendment Institute at Columbia University, points out, before the pandemic, most platforms (including Facebook, Google, and Twitter) erred on the side of protecting free speech and rejected a role, as Mark Zuckerberg put it in a personal Facebook post, of being “arbiters of truth.” But during the pandemic, these same platforms took a more interventionist approach to false information and vowed to remove or limit Covid-19 misinformation and conspiracy theories. Here, too, the platforms relied on automated tools to remove content without human review.

Even though the majority of content decisions are done by algorithms, humans still design the rules the tools rely upon, and humans have to manage their ambiguities: Should algorithms remove false information about climate change, for instance, or just about Covid-19? This kind of content moderation inevitably means that human decision makers are weighing values. It requires balancing a defense of free speech and individual rights with safeguarding other interests of society, something social media companies have neither the mandate nor the competence to achieve.

None of this is transparent to consumers, because internet and social media platforms lack the basic signals that characterize conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pocket. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data. And if users experience adverse consequences — such as increased stress or declining mental health — it is difficult to link those consequences to social media use. The link becomes even more difficult to establish when social media facilitates political extremism or polarization.

Users are also often unaware of how their news feed on social media is curated. Estimates of the share of users who do not know that algorithms shape their newsfeed range from 27% to 62%. Even people who are aware of algorithmic curation tend not to have an accurate understanding of what that involves. A Pew Research paper published in 2019 found that 74% of Americans did not know that Facebook maintained data about their interests and traits. At the same time, people tend to object to collection of sensitive information and data for the purposes of personalization and do not approve of personalized political campaigning.

People are often unaware that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation.

People cannot be held responsible for their lack of awareness. They were neither consulted on the design of online architectures nor considered as partners in the construction of the rules of online governance.

What can be done to shift this balance of power and to make the online world a better place?

Google executives have referred to the internet and its applications as “the world’s largest ungoverned space,” unbound by terrestrial laws. This view is no longer tenable. Most democratic governments now recognize the need to protect their citizens and democratic institutions online.

Protecting citizens from manipulation and misinformation, and protecting democracy itself, requires a redesign of the current online “attention economy” that has misaligned the interests of platforms and consumers. The redesign must restore the signals that are available to consumers and the public in conventional markets: users need to know what platforms do and what they know, and society must have the tools to judge whether platforms act fairly and in the public interest. Where necessary, regulation must ensure fairness.

Four basic steps are required:

  • There must be greater transparency and more individual control of personal data. Transparency and control are not just lofty legal principles; they are also strongly held public values. European survey results suggest that nearly half of the public wants to take a more active role in controlling the use of personal information online. It follows that people need to be given more information about why they see specific ads or other content items. Full transparency about customization and targeting is particularly important because platforms can use personal data to infer attributes — for example, sexual orientation — that a person might never willingly reveal. Until recently, Facebook permitted advertisers to target consumers based on sensitive characteristics such as health, sexual orientation, or religious and political beliefs, a practice that may have jeopardized users’ lives in countries where homosexuality is illegal.
  • Platforms must signal the quality of the information in a newsfeed so users can assess the risk of accessing it. A palette of such cues is available. “Endogenous” cues, based on the content itself, could alert us to emotionally charged words geared to provoke outrage. “Exogenous” cues, or commentary from objective sources, could shed light on contextual information: Does the material come from a trustworthy place? Who shared this content previously? Facebook’s own research, said Zuckerberg, showed that access to COVID-related misinformation could be cut by 95 percent by graying out content (and requiring a click to access) and by providing a warning label.
  • The public should be alerted when political speech circulating on social media is part of an ad campaign. Democracy is based on a free marketplace of ideas in which political proposals can be scrutinized and rebutted by opponents; paid ads masquerading as independent opinions distort that marketplace. Facebook’s “ad library” is a first step toward a fix because, in principle, it permits the public to monitor political advertising. In practice, the library falls short in several important ways. It is incomplete, missing many clearly political ads. It also fails to provide enough information about how an ad targets recipients, thus preventing political opponents from issuing a rebuttal to the same audience. Finally, the ad library is well known among researchers and practitioners but not among the public at large.
  • The public must know exactly how algorithms curate and rank information and then be given the opportunity to shape their own online environment. At present, the only public information about social media algorithms comes from whistle-blowers and from painstaking academic research. Independent agencies must be able to audit platform data and identify measures to stem the flow of misinformation. Outside audits would not only identify potential biases in algorithms but also help platforms maintain public trust by not seeking to control content themselves.

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. There is considerable public and political skepticism about regulations in general and about governments stepping in to regulate social media content in particular. This skepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship. The Chinese government’s censorship of internet content is a case in point. During the pandemic, some authoritarian states, such as Egypt, introduced “fake news laws” to justify repressive policies, stifling opposition and further infringing on freedom of the press. In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” (as in contradicting official government position) information about the war against Ukraine, causing many foreign and local journalists and news organizations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo. It may feel paternalistic for a government to mandate that platform algorithms must not radicalize people into bubbles of extremism. But it’s also paternalistic for Facebook to weight anger-evoking content five times more than content that makes people happy, and it is far more paternalistic to do so in secret.

The best solution lies in shifting control of social media from unaccountable corporations to democratic agencies that operate openly, under public oversight. There’s no shortage of proposals for how this might work. For example, complaints from the public could be investigated. Settings could preserve user privacy instead of waiving it as the default.

In addition to guiding regulation, tools from the behavioral and cognitive sciences can help balance freedom and safety for the public good. One approach is to research the design of digital architectures that more effectively promote both accuracy and civility of online conversation. Another is to develop a digital literacy tool kit aimed at boosting users’ awareness and competence in navigating the challenges of online environments.

Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century.

Stephan Lewandowsky is a cognitive scientist at the University of Bristol in the U.K. Anastasia Kozyreva is a philosopher and a cognitive scientist at the Max Planck Institute for Human Development in Berlin, working on the cognitive and ethical implications of digital technologies and artificial intelligence on society. This piece was originally published by OpenMind magazine and is being republished under a Creative Commons license.

Image of misinformation on the web by Carlox PX is being used under an Unsplash license.

]]>
Russian influencers scramble to maintain their followers — and livelihoods https://www.niemanlab.org/2022/04/russian-influencers-scramble-to-maintain-their-followers-and-livelihoods/ https://www.niemanlab.org/2022/04/russian-influencers-scramble-to-maintain-their-followers-and-livelihoods/#respond Wed, 06 Apr 2022 14:00:58 +0000 https://www.niemanlab.org/?p=202118 On March 7, the Moscow-based creator Greg Mustreader posted a video from a hotel room in Istanbul, Turkey, on YouTube. In the 12-minute clip, he explained to his 200,000-strong, mostly Russian subscriber base that he had fled Russia for fear of political retaliation. Days earlier, Russia’s Parliament had passed a new law punishing anyone who spread “false” information about the Russian military with up to 15 years in prison.

Greg, who requested to be referred to by his first name for his security, typically posted about literature, philosophy, and art before he started denouncing the war. He said the last month had upended his life. “The shock connected with the events of the war was more significant than the realization that I will have financial losses,” Greg said. “Of course, once I started thinking about the repercussions for my projects, the realization dawned that, yeah, I am going to be in some trouble.”

As waves of wartime sanctions by foreign governments and private companies hit Russia, the country’s creator economy is in flux. The state ban on both Instagram and Facebook, Google’s ban on most YouTube monetization in the country, and restrictions on the ability of Russian users to upload videos to TikTok have instigated a mass platform migration among Russian creators and audiences. To salvage their online followings, and incomes, some creators have started moving their audiences to new platforms.

Some creators, like Greg, have left Russia and pivoted their content to target international viewers. Greg said that he previously spent close to 90% of his time on his Russian-language channels but has dedicated most of his energy in the last few weeks to his English-language accounts. He launched an English-language TikTok account that amassed 100,000 followers after he started posting about the war earlier this month. “Many creators that I know are desperately trying to create an English-language [account] or at least have an English-language mirror of their [account],” he said.

Meanwhile, many creators are moving to alternative Russian-grown platforms like Yandex Zen, RuTube, and VKontakte (VK) — all part of a constellation of platforms that offer a government-approved alternative to services like Facebook, YouTube, and even Netflix. Others are moving to Telegram, a messaging service with an established reputation in Russia as a relative safe haven from government censors.

The mood in the industry has been “shock and awe,” according to Boris Omelnitskiy, the former president of the Russian chapter of the Interactive Advertising Bureau, a global trade association that has helped set standards for influencer marketing. “Western advertisers and platforms, payment systems, infrastructure players, backbone telecom operators are all leaving Russia at the same time.” The IAB cut ties with the country shortly after the war began.

Data confirms that Russian creators are scrambling to rebuild their followings on Russian-owned social media. In a March survey of 500 Russian content creators by marketing agency Twiga, 69% of creators interviewed, ranging from micro-influencers to those with millions of followers, said they plan to increase their presence on domestic platforms. Several creators who spoke with us said they were wary of this shift, which poses new threats of censorship, limited monetization, and the prospect of losing much of their audience in the transition.

Russian users are also on the move. An analysis of more than 3.3 billion social media messages by data analysis firm Brand Analytics from February 1 to March 10 showed that users in the country have already migrated to domestic social media in large numbers, particularly to VK, often labeled the Russian version of Facebook. On March 14, VK announced it set a new record for daily users, reaching more than 50 million, an increase of almost 9% since January.

“Today, everyone is adapting to the new reality,” said Yulia Pohlmann, co-founder of marketing agency Market Entry Atelier. “It is still very early to make a prognosis of how the social media landscape will look in Russia. But everyone is launching or unfreezing their accounts on VK, Telegram, Yandex Zen, and others.”

Alexey Markov, a YouTuber located in the Moscow suburbs who posts personal finance and economics-related content under the name Hoolinomics, is one of the many creators attempting to migrate his 200,000-plus YouTube followers to platforms less liable to be shut down. Reports indicate that Russia’s federal media regulator, Roskomnadzor, is pursuing a full ban of the site.

Markov’s early attempt to grow a following on Yandex Zen, a personalized reader platform launched in 2015 that resembles Flipboard but allows individual authors to post, has been slow. He has only about 1,000 subscribers there. Instead, he’s focusing on Telegram, which has more appeal to his international followers. Markov now has 67,000 followers on his main Telegram channel, where he posts his takes on crop prices, trade surpluses, and exchange rates at least once a day. In March, the number of active authors on Telegram increased 23% and, in Russia, surpassed WhatsApp by monthly web traffic.

For Markov, a creator who relied on long-form video formats and YouTube’s livestreaming features to build his career, the shift to Telegram has already been disruptive. So he continues to update his YouTube channel, where he invites subscribers to join him on wine-and-chess-night livestreams, in which he laments the state of the Russian economy. Markov said he wanted to convey a sense of stability to his followers on YouTube, despite reports that a blanket ban on the platform could be passed any day. Losing YouTube, Markov said, would be a significant blow.

“YouTube is not only for Russians, it’s for Russian-speaking followers,” he said. About 35% of Markov’s viewers come from former Soviet Union states, such as Ukraine, Belarus, and Kazakhstan. “I cannot move them to Yandex Zen or RuTube or other platforms because they just don’t want to be there.”

“We are used to Instagram’s technology, TikTok’s organic reach, and large-scale monetization on YouTube,” said Olga Berek, the president of the National Association of Bloggers in Russia. As a platform developed first and foremost as a messenger service, Telegram offers a vastly different user experience, ideal for building small but active communities. Currently, though, users rarely subscribe to more than 25 channels because of notification overload, she added. “Telegram copes with the load, but some measurements show that well-known bloggers can only transfer a small proportion of their subscribers to Telegram, less than 10%,” said Omelnitskiy, the former IAB Russia president.

Meanwhile, Russian homegrown platforms are trying to entice creators to join. Two weeks ago, VK, which now has 97 million monthly users, announced its largest support program for content creators yet and suspended fees for monetization tools for a month. Yandex Zen launched educational courses on how to grow a community on its platform. On March 28, Russian entrepreneurs opened up Rossgram, a clone of Instagram, for creator registration.

Some creators, however, have chosen not to migrate to Russian platforms like VK because of its close ties to the government. “You can’t be safe. And you can’t say what you want,” said Karolina K, a Belarusian lifestyle and travel creator who has nearly 400,000 subscribers on YouTube.

Karolina, who asked to be referred to by her first name for her security, has a majority Russian following on Instagram and YouTube. She was traveling in Turkey when she heard the news of the invasion and decided not to return to her home in St. Petersburg. While she previously used VK to keep in touch with friends and family and to promote her YouTube content, she said the platform has grown increasingly out of fashion and tends to skew to older audiences. But it’s VK’s reputation as a platform rife with state and self-censorship that cemented her decision not to return. “Of course people go to VK, but for me, I don’t see the future there.”

Creators like Karolina are contending with the growing politicization of influencers. Last week, the Russian Investigative Committee targeted socialite and Instagram lifestyle influencer Veronika Belotserkovskaya, under its new censorship law, for her posts denouncing the Russian invasion. Meanwhile, it has been reported that Russian influencer networks and supporters of the Putin government are being mobilized to spread disinformation on the war. One Ukrainian blogger has started a website called They Love War, with a running list of Russian-speaking influencers who have been silent on the war or have allegedly posted state propaganda.

Those who do take up state-backed platforms may still face some technical limitations. For YouTubers like Karolina and Markov, the most natural alternative to reaching audiences inside Russia would be RuTube, which is owned by Russia’s largest media conglomerate Gazprom-Media. An investigation from outlets IStories and Agentstvo in February revealed Russian authorities have been investing heavily for over a year in reviving RuTube to rival YouTube.

Three creators told us that RuTube is still plagued by a shoddy user experience, lack of monetization programs, and underdeveloped recommendation algorithms. Karolina recounted how it took a fellow creator 10 tries to upload a video to the platform recently.

“It’s awful. I’m sure that many of my colleagues would rather shoot themselves in the leg than upload a video to RuTube because it looks very painful,” said Greg, who has been discussing the program in a Telegram group of fellow Russian YouTubers. But as the number of platforms available continues to shrink and their income streams remain frozen, many creators still in Russia are left with little choice but to onboard to state-backed social media. “I think some of us are saying, Well, if we have to do it, we have to do it.”

Andrew Deck is a reporter at Rest of World. Masha Borak is a journalist covering the intersection of technology with politics, business, and society. This piece was originally published by Rest of World, a nonprofit newsroom covering global technology, and is being republished with permission.

Cover illustration by Glenn Harvey is being used with permission from Rest of World.

]]>
Use of social media for news doesn’t seem to increase false political beliefs among Mexicans, one study finds https://www.niemanlab.org/2022/04/use-of-social-media-for-news-doesnt-seem-to-increase-false-political-beliefs-among-mexicans-one-study-finds/ https://www.niemanlab.org/2022/04/use-of-social-media-for-news-doesnt-seem-to-increase-false-political-beliefs-among-mexicans-one-study-finds/#respond Tue, 05 Apr 2022 13:34:25 +0000 https://www.niemanlab.org/?p=202191 In Mexico, social media isn’t the major driver of political misinformation it’s often believed to be, according to a new study in the International Journal of Press/Politics.

Researchers Sebastián Valenzuela, Carlos Muñiz, and Marcelo Santos found “no significant correlation between using Facebook, Twitter, YouTube, Instagram or WhatsApp as news sources and belief in political misinformation.”

Valenzuela, Muñiz, and Santos conducted the study in two phases during Mexico’s midterm elections in 2021. The first phase had 1,750 respondents, 596 of whom were re-interviewed for the second phase.

In the first phase, researchers presented respondents with four false claims that had circulated during the election season, and then another three in the second wave. Respondents were asked to indicate their level of endorsement of each statement on a scale of 1 (not at all) to 5 (completely).

To understand how people use social media platforms for news, the researchers measured how often during one week respondents used Facebook, YouTube, Instagram, and WhatsApp to get national news. They also looked at three control variables: “traditional news media use and political discussion; political interest, efficacy, presidential approval, and ideology; and political knowledge, news elaboration, information literacy, and digital skills.”

“This [study] was trying to answer a very basic question, which is how exposed are Mexicans to misinformation on social media? And how persuaded are they by the misinformation they are exposed to on social media in terms of their political beliefs?” Valenzuela explained. “[Our] finding is that there’s no relationship. That means that people who use more or [fewer] social platforms for getting their news about the election, that was not related whatsoever to how misinformed their beliefs were. In a way, that goes against the popular narrative that misinformation is something that is created [on social media] or that social platforms are going to be the big culprits here.”

This doesn’t mean misinformation isn’t a problem in Mexico, though, Valenzuela said. While the researchers didn’t find a direct link between social media use and misinformation, they did find that people who engage more frequently in political discussions — particularly in face-to-face conversations — were more misinformed than those who engaged less frequently in political discussions.

“What is it about talking that is making people become more inaccurate in their beliefs and their assessments?” Valenzuela said. “That goes against the idea of conversation being the soul of democracy, which is the idea that informal deliberation makes people more enlightened. What we found was the opposite.”

Mexico provides fertile ground for this study, the researchers explained in the report:

In Mexico, online disinformation campaigns are regularly deployed during elections since at least 2012. Candidates have used bots and paid trolls to spread fabricated polls to boost a candidate’s standing (Armstrong 2018). The effort to group together more than eighty organizations of different nature to tackle misinformation during the 2018 presidential election is but a symptom of the magnitude of the problem. That year alone, 43% of online users in Mexico reported exposure to misinformation, compared to 31% in the United States and 15% in the United Kingdom.

Researchers have documented other mechanisms by which political misinformation is spread on social media in Mexico. There is “attention hacking,” such as the amplification of support for controversial government initiatives by bot networks and political campaigns that create a false universe of ghost followers, trolls and bots in favor of one candidate or another. During the 2018 elections, bot battles for and against the winning candidate Andrés Manuel López Obrador (AMLO) drowned out conversations by posting attacks, rumors and unsubstantiated claims at a rate of more than one thousand tweets per hour. It is also common to observe so-called algorithmic repression (i.e., the sabotaging of dissident trending hashtags on Twitter such as #YaMeCanse [translated as #IHaveHadEnough]), which forces activists to deploy counter-tactics like changing the hashtags (e.g., #YaMeCanse2, #YaMeCanse3). Among other problems, these tactics create public confusion.

Mexico has also experienced a steep decline in news media trust, which can make it harder for journalists to effectively counter misinformation. Between 2017 and 2021, media trust dropped 12 percentage points to 37%. Though traditional outlets such as the duopoly TV Azteca-Televisa News are still the most popular broadcasters, they are less trusted than international outlets and new digital-native media. This can also be traced back to Mexico’s “captured” liberal media system, where private news media are closely aligned with the political and economic elites and public service broadcasting plays a minor role. Thus, the country may have limited capacity to counter misinformation.

The study also found that Mexicans who had more digital skills (like searching for news, using social media platforms, and sharing content) tend to be more misinformed than people with fewer.

“It might be kind of an unintended consequence of having more skills that you will be exposed more frequently to fake news,” Valenzuela said.

For journalists, these findings can mean a number of things. First, Valenzuela said, social platforms are still powerful tools that journalists and media companies can use to communicate factual information.

Valenzuela also said Mexico’s large socioeconomic gap leaves plenty of room for digital and information literacy gaps as well. Just because people are online doesn’t mean that they know how to navigate the media ecosystem. News outlets, Valenzuela said, should invest not just in understanding what news consumers want, but also in understanding what their information literacy capabilities are.

“If you invest in trying to bridge that gap…you have a higher likelihood of reaching more people and you have a higher likelihood of maintaining or even increasing the number of people who pay attention to your work.”

Read the full study here.

]]>
Russia is having less success at spreading social media disinformation (for now) https://www.niemanlab.org/2022/03/russia-is-having-less-success-at-spreading-social-media-disinformation-for-now/ https://www.niemanlab.org/2022/03/russia-is-having-less-success-at-spreading-social-media-disinformation-for-now/#respond Wed, 09 Mar 2022 14:48:21 +0000 https://www.niemanlab.org/?p=201359

This article was originally published in Scientific American. It is being republished with permission.

Days after Russia invaded Ukraine, multiple social media platforms — including Facebook, Twitter and YouTube — announced they had dismantled coordinated networks of accounts spreading disinformation. These networks, which were composed of fabricated accounts disguised with fake names and AI-generated profile images or hacked accounts, were sharing suspiciously similar anti-Ukraine talking points, suggesting they were being controlled by centralized sources linked to Russia and Belarus.

Russia’s Internet Research Agency used similar disinformation campaigns to amplify propaganda about the U.S. election in 2016. But their extent was unclear until after the election—and at the time, they were conducted with little pushback from social media platforms. “There was a sense that the platforms just didn’t know what to do,” says Laura Edelson, a misinformation researcher and Ph.D. candidate in computer science at New York University.

Since then, she says, platforms and governments have become more adept at combating this type of information warfare — and more willing to deplatform bad actors that deliberately spread disinformation. Edelson spoke to Scientific American’s Sophie Bushwick about how an information war is being waged as the conflict continues.

Sophie Bushwick: How do social media platforms combat accounts that spread disinformation?

Laura Edelson: These kinds of disinformation campaigns — where they are specifically misleading users about the source of the content — that’s really easy for platforms to take action against because Facebook has this real name policy: misleading users about who you are is a violation of Facebook’s platform rules. But there are [other] things that shouldn’t be difficult to take down — that historically Facebook has really struggled with — and that is actors like RT. RT is a Russian state-backed media outlet. And Facebook has really struggled historically on what to do with that.

That’s what was so impressive about seeing that [Facebook and other platforms] really did start to take some action against RT in the past week, because this has been going on for such a long time. And also, frankly, [social media platforms] have had cover from governments, where governments in Europe have banned Russian state media. And that has given cover to Facebook, YouTube and other major platforms to do the same thing. In general, banning anyone — but especially banning media — is not a step anyone should take lightly. But RT and Sputnik [another Russia state-backed media outlet] are not regular media: they have such a long track record of polluting the information space.

Bushwick: What else can be done to fight harmful false information?

Edelson: One of the things that the U.S. did really well going into this conflict — and why, at least from a misinformation [controlling] perspective, the first week went very well — is that the U.S. government was really aggressive with releasing information about what it knew about the ground realities in Russia and Ukraine. That was really helpful for creating a space where it was difficult for the Russians to put out misinformation about those same things. Because the U.S. government was very forthcoming, it didn’t leave a lot of room; there wasn’t an information vacuum that the Russians could step in and fill.

And then the Ukrainian government has been tremendously savvy in telling the story of the Ukrainian resistance. There are definitely times when it has stepped over the line into propaganda. But in general, it has made sure that the world sees the Ukrainian resistance and the fight that the Ukrainian people are willing to put up. That [helps] people see what is going on and understand that the people who are there fighting are real people who, not that long ago, were not fighters. They were civilians, and now they are defending their country.

I think both of those things are going to be difficult to maintain over time. But if they are not maintained, then the window for Russian misinformation will open. A challenge we are all going to have to deal with is that this war is not going to be over in the next few days, but the news cycle cannot maintain this level of focus on these events. It’s shocking to say, but in three weeks’ time, you will have hours go by without thinking about it. And that is when people’s guards are going to go down. If someone is trying to spread some kind of [disinformation] — maybe the Russians make up some fake Ukrainian atrocity or something — that’s when the world is going to be susceptible to that kind of thing. And that’s when we’re going to have to remember all this stuff of “Who was telling you the story? Do we trust them? How verifiable is this account?” This is going to be part of how conflict is waged going forward.

But this is something that is new for all actors, and everyone is going to have to get used to keeping up their ground game in the information war, not just in the kinetic war.

Bushwick: Some people have also pointed out an apparent reduction in other forms of misinformation, such as vaccine-related conspiracy theories, since Russia’s internet infrastructure and payment networks were limited by sanctions. What is going on with that?

Edelson: I haven’t seen a large-scale analysis published about this. That said, there have been quite a few anecdotal reports that misinformation in other sectors has decreased markedly in the past week. We can’t say for certain that this is because of lack of internet access in Russia. The conclusion is not that all of this stuff that had been taken down was sourced from Russia. The conclusion that’s reasonable to draw from these anecdotal reports is that Russian internet infrastructure was a vital part of the tool kit of people who spread misinformation. There’s a lot of pieces of this economy that are run out of Russia — bot networks, for example, networks of people who buy and sell stolen credit card information, a lot of the economy around buying stolen [social media] accounts — because Russia has historically tolerated a lot of cybercrime. Either it turns a blind eye or a lot of these groups actually directly work for, or are contractors to, the Russian state.

Bushwick: How can we avoid falling for or spreading misinformation?

Edelson: The bottom line is that people shouldn’t have to do this. This is kind of like saying, “My car doesn’t have any seatbelt. What can I do to protect myself in a crash?” The answer is: your car should have seatbelts, and that shouldn’t be your job. But unfortunately, it is.

With that small caveat, you have to remember that the most successful misinformation succeeds by appealing to emotions rather than reason. If misinformation can tap into that emotive pathway, you’re never going to question it because it feels good, and if it feels good, it’s adjacent to being true. So the first thing that I recommend is: if something makes you feel emotional — particularly if something makes you feel angry — before you share it or interact with it, really ask yourself the question “Who is promoting this, and do I trust them?”

Bushwick: What is the most important thing platforms need to do to install metaphorical seatbelts?

Edelson: I think the single biggest thing that platforms should be doing, especially in these moments of crisis, is [recognize they] should not promote content solely based on engagement. Because you have to remember that misinformation is really engaging. It is engaging because of some of those reasons I talked about: highly emotive appeal, things that circumvent reason and go straight to the gut. That’s a really effective tactic for deception. So I think this is when platforms need to step up the importance of quality of content versus how engaging content is. That is the number one thing they could do, and almost everything else pales in comparison.

Sophie Bushwick is an associate editor covering technology at Scientific American.

Photo of a mural painted in South London in support of Ukraine by Loco Steve, used under a Creative Commons license.

]]>
Here are two new tools to help track Russia’s invasion of Ukraine https://www.niemanlab.org/2022/03/track-sanctions-russia/ https://www.niemanlab.org/2022/03/track-sanctions-russia/#respond Tue, 01 Mar 2022 16:31:20 +0000 https://www.niemanlab.org/?p=201011 The German investigative nonprofit Correctiv just launched a tracker to monitor worldwide sanctions against Russia for its invasion of Ukraine. It’s available in English and German and updated several times a day.

“This creates transparency around the most important tool the West has in the current crisis,” Justus von Daniels, editor-in-chief of Correctiv, told me.

Data for the Sanctions Tracker comes from the Open Sanctions database. Correctiv is filtering it for sanctions against Russian targets from 2014 to the present.

Users can view sanctions by country and date and track how many there are against Russian people, companies, and other individual targets.

The Technology and Social Change Project at Harvard’s Shorenstein Center is “tracking moves by major technology companies and governments to limit the flow of misinformation. This includes state sponsored misinformation and content removed at the behest of governments, as people worldwide flock to social media to receive updates of the rapidly unfolding violence.” That project, updated daily, is here.

]]>
Some resources for following the invasion of Ukraine https://www.niemanlab.org/2022/02/follow-war-ukraine/ https://www.niemanlab.org/2022/02/follow-war-ukraine/#respond Thu, 24 Feb 2022 17:16:32 +0000 https://www.niemanlab.org/?p=200869 Following the news of Russia’s invasion of Ukraine is difficult, especially if you’re not already extremely knowledgeable about the situation. Turning to Twitter may be the automatic reaction, but it’s not necessarily that helpful: The non-chronological-by-default timeline means news is presented out of order (here’s how you can fix that, if you’d like). Opinions outweigh people reporting from the ground. On Wednesday, many Twitter users posting video from Ukraine — including large accounts like @Conflicts — found their accounts suspended or locked, a move Twitter says was an error.

In moments like this, “Twitter’s strength as an amplification and recommendation platform goes away,” said Jeremy Littau, associate professor of journalism and communication at Lehigh University. “It’s not that the news coverage isn’t there, it’s that the ability to find it is harder. I’ve got a mix of expertise and hot takes from sudden experts and people posting with the Ukrainian flag. It’s a lot, and in these moments I think we have trouble sifting through that volume of information.”

The Kyiv Independent, a three-month-old English-language Ukrainian news site launched by former Kyiv Post journalists after that outlet temporarily shuttered — the Kyiv Post has since relaunched — is using the lightning bolt emoji to help readers quickly differentiate its breaking news tweets from other tweets.

We’ve pulled together a few resources to help you receive reliable information on what is happening. This list is being updated.

Twitter lists

A few people have compiled Twitter lists of folks to follow. Still, a caution: “Don’t necessarily trust your in-network amplifiers. Other folks are moving fast and maybe not vetting so well,” Kate Starbird, associate professor of human centered design and engineering at the University of Washington, tweeted. “Mistakes happen. Don’t let their mistake be your mistake and cascade through your network.” (For instance; for instance.)

From Jane Lytvynenko, a senior research fellow at the Technology and Social Change Project at Harvard Kennedy School’s Shorenstein Center who is originally from Ukraine:

From CNN reporter Daniel Dale:

From Josh Marshall, editor and publisher of Talking Points Memo:

From Rebecca Shabad, politics reporter for NBC News:

English-language Telegram

Dropped paywalls/products made free

The Financial Times has dropped its paywall on Ukraine coverage.

Sweden’s Svenska Dagbladet has dropped the paywall on its live coverage.

Germany’s Zeit has dropped the paywall across its site for readers in Russia and Ukraine.

The Kyiv Post and Kyiv Independent are not paywalled. (The Kyiv Independent has a Patreon and GoFundMe.)

NewsWhip is making its premium Spike product free to certain groups. (Find contact info, etc. for accessing the product further down in the thread.)

Podcasts and newsletters

NPR has launched State of Ukraine, a podcast that will update several times a day.

The New York Times has a Russia-Ukraine war briefing email newsletter, sent in the evenings.

Trackers

The German investigative nonprofit Correctiv launched a sanctions tracker, updated daily, of all sanctions against Russia. It’s available in German and English.

The Technology and Social Change Project at Harvard’s Shorenstein Center is tracking “moves by major technology companies and governments to limit the flow of misinformation. This includes state sponsored misinformation and content removed at the behest of governments, as people worldwide flock to social media to receive updates of the rapidly unfolding violence.” It’s updated daily.

Fact-checking and debunking

The international investigative journalism collective Bellingcat is maintaining a fact-checking spreadsheet of dubious and debunked claims from the Ukraine frontlines, noting, “Many of the more dramatic claims aired by Russian state media or pro-separatist channels of Ukrainian aggression in recent days appear to have little truth to them. On the contrary, some videos appear to be flagrant attempts at disinformation.”

The International Fact-Checking Network launched #UkraineFacts, a collaborative effort to debunk Ukraine disinformation.

Watch out for scammy Instagram war pages and fake war reporting, Taylor Lorenz reports:

Hayden, who claims to be a 21-year-old from Kentucky, says that after learning about the war breaking out through the hip-hop Instagram page @Rap, he saw an opportunity. He had already run a popular war page called @liveinafghanistan. More recently, he had renamed it @newstruths and pivoted to posting viral, vaguely conservative-leaning videos featuring people shoplifting and clips of President Biden.

But on Wednesday night, it was wartime again, and so the page became @livefromukraine.

“I don’t really know what’s going on with all this political tension,” Hayden says. “I’m just trying to document what’s going on.” His verification methods involve sussing out the comment sections of the videos and seeing if other people have claimed they are false. “I can’t really verify them myself,” he says of the videos he shares.

Kayleen Devlin and Olga Robinson of the BBC’s disinformation monitoring unit looked at some of the techniques Russia is using to try to spread pro-Kremlin media narratives; for instance:

In recent weeks, some Russian state media outlets have featured misleading headlines about international support for Ukraine based solely on user comments on Western media sites.

One article published on the website of the state news agency RIA Novosti in late January claimed that “British” readers of the Daily Express supported the view that Ukraine should not be defended because Russia had a stronger military presence in the region than NATO.

Another suggested that readers laughed at Ukraine’s military potential.

There have also been concerns that pro-Kremlin trolls, using fake accounts, have targeted British and other foreign media sites, to advance Russian interests.

Research by Cardiff University’s Crime and Security Research Institute from last year found that the comment sections of 32 prominent media websites across 16 countries, including the Daily Express, had been targeted by pro-Kremlin trolls.

According to researchers, their anti-Western and pro-Russian comments were then used as the basis for news stories in Russian-language media.

Secure messaging platform Telegram “has been the main vector for invasion disinformation,” Foreign Policy noted:

Telegram may be a fairly marginal social media channel in the West, but—unlike Twitter, Facebook, and YouTube—it is one free of restrictions for state-backed propaganda campaigns in Russia, where it remains popular. The Russian state broadcaster RT, for example, has more than 200,000 followers on the platform.

The amount of disinformation emanating from Telegram was significant enough to warrant a statement from the Ukrainian government’s anti-disinformation body on Thursday, calling the work of such channels “information terrorism.” While few English-language channels were on the list of those the government flagged as dangerous, despite some of them having tens of thousands of followers, the statement nevertheless underscores Kyiv’s fear that Telegram offers a dedicated pipeline of pro-Russian propaganda.

Livestreams

Lytvynenko wrote for The Atlantic about watching a Reuters livestream of Kyiv’s Maidan Square.

The stream of Maidan is different from all the noise. Nothing’s fake here; there’s no algorithm; and once I hide the live chat, there isn’t even a conversation to parse. It’s not a green screen against which TV pundits discuss Russia’s next move. The livestream is not trying to convince me of anything; it’s just showing me things as they are.

Maps

Datawrapper’s Lisa Charlotte Muth has a thread of maps from graphics reporters.

But keep in mind that maps can be a less than perfect way to follow what’s happening:

Translations

The New York Times is translating some of its Russia-Ukraine stories into Spanish.

This post is being updated as needed. Let us know if you have suggestions of things to add.

Photo of Kyiv, Ukraine, on Thursday, Feb. 24, 2022 by AP Photo/Emilio Morenatti.

]]>
https://www.niemanlab.org/2022/02/follow-war-ukraine/feed/ 0
How conspiracy theories in the U.S. became more personal, more cruel, and more mainstream after the Sandy Hook shootings https://www.niemanlab.org/2022/01/how-conspiracy-theories-in-the-u-s-became-more-personal-more-cruel-and-more-mainstream-after-the-sandy-hook-shootings/ https://www.niemanlab.org/2022/01/how-conspiracy-theories-in-the-u-s-became-more-personal-more-cruel-and-more-mainstream-after-the-sandy-hook-shootings/#respond Tue, 04 Jan 2022 14:00:36 +0000 https://www.niemanlab.org/?p=198901 Conspiracy theories are powerful forces in the U.S. They have damaged public health amid a global pandemic, shaken faith in the democratic process and helped spark a violent assault on the U.S. Capitol in January 2021.

These conspiracy theories are part of a dangerous misinformation crisis that has been building for years in the U.S.

American politics has long had a paranoid streak, and belief in conspiracy theories is nothing new. But as the news cycle reminds us daily, outlandish conspiracy theories born on social media now regularly achieve mainstream acceptance and are echoed by people in power.

As a journalism professor at the University of Connecticut, I have studied the misinformation around the mass shooting that took place at Sandy Hook Elementary School on Dec. 14, 2012. I consider it the first major conspiracy theory of the modern social media age, and I believe we can trace our current predicament to the tragedy’s aftermath.

Nine years ago, the Sandy Hook shooting demonstrated how fringe ideas could quickly become mainstream on social media and win support from various establishment figures — even when the conspiracy theory targeted grieving families of young students and school staff killed during the massacre.

Those who claimed the tragedy was a hoax showed up in Newtown, Connecticut, and harassed people connected to the shooting. This provided an early example of how misinformation spread on social media could cause real-world harm.

New age of social media and distrust

Social media’s role in spreading misinformation has been well documented in recent years. The year of the Sandy Hook shooting, 2012, marked the first year that more than half of all American adults used social media.

It also marked a modern low in public trust of the media. Gallup’s annual survey has since shown even lower levels of trust in the media in 2016 and 2021.

These two coinciding trends — which continue to drive misinformation — pushed fringe doubts about Sandy Hook quickly into the U.S. mainstream. Speculation that the shooting was a false flag — an attack made to look as if it were committed by someone else — began to circulate on Twitter and other social media sites almost immediately. Far-right commentator and conspiracy theorist Alex Jones and other fringe voices amplified these false claims.

Jones was recently found liable by default in defamation cases filed by Sandy Hook families.

Mistakes in breaking news reports about the shooting, such as conflicting information on the gun used and the identity of the shooter, were spliced together in YouTube videos and compiled on blogs as proof of a conspiracy, as my research shows. Amateur sleuths collaborated in Facebook groups that promoted the shooting as a hoax and lured new users down the rabbit hole.

Soon, a variety of establishment figures, including the 2010 Republican nominee for Connecticut attorney general, Martha Dean, gave credence to doubts about the tragedy.

Six months later, as gun control legislation stalled in Congress, a university poll found 1 in 4 people thought the truth about Sandy Hook was being hidden to advance a political agenda. Many others said they weren’t sure. The results were so unbelievable that some media outlets questioned the poll’s accuracy.

Today, other conspiracy theories have followed a similar trajectory on social media. The media is awash with stories about the popularity of the bizarre QAnon conspiracy movement, which falsely claims top Democrats are part of a Satan-worshipping pedophile ring. A member of Congress, U.S. Rep. Marjorie Taylor Greene, has also publicly denied Sandy Hook and other mass shootings.

But back in 2012, the spread of outlandish conspiracy theories from social media into the mainstream was a relatively new phenomenon, and an indication of what was to come.

New breed of conspiracies

Sandy Hook also marked a turning point in the nature of conspiracy theories and their targets. Before Sandy Hook, popular American conspiracy theories generally villainized shadowy elites or forces within the government. Many 9/11 “truthers,” for example, believed the government was behind the terrorist attacks, but they generally left victims’ families alone.

Sandy Hook conspiracy theorists accused family members of those killed, survivors of the shooting, religious leaders, neighbors, and first responders of being part of a government plot.

Newtown parents were accused of faking their children’s deaths, or their very existence. There were also allegations they were part of a child sex cult.

This change in conspiratorial targets from veiled government and elite figures to everyday people marked a shift in the trajectory of American conspiracy theories.

Since Sandy Hook, survivors of many other high-profile mass shootings and attacks, such as the Boston Marathon bombing and the Charlottesville car attack, have had their trauma compounded by denial about their tragedies.

And the perverse idea of a politically connected pedophile ring has become a key tenet in two subsequent conspiracy theories: Pizzagate and QAnon.

The kind of harassment and death threats targeting Sandy Hook families has also become a common fallout of conspiracy theories. The owners and employees of a Washington, D.C., pizza parlor alleged in the Pizzagate conspiracy theory to be part of a pedophile ring that included politicians continue to be targeted by its adherents. In 2016, one man drove hundreds of miles to investigate and fired his assault rifle in the restaurant.

Some people who remain skeptical of the Covid-19 pandemic have harassed front-line healthcare workers. Local election workers across the country have been threatened and accused of being part of a conspiracy to steal the 2020 presidential election.

The legacy of the mass shooting at Sandy Hook is a legacy of misinformation — the start of a crisis that will likely plague the U.S. for years to come.

Amanda J. Crawford is an assistant professor of journalism at the University of Connecticut. This article is republished from The Conversation under a Creative Commons license.

Photo of a poster remembering the victims of the Sandy Hook Elementary School shooting by NorthEndWaterfront.com used under a Creative Commons license.

]]>
https://www.niemanlab.org/2022/01/how-conspiracy-theories-in-the-u-s-became-more-personal-more-cruel-and-more-mainstream-after-the-sandy-hook-shootings/feed/ 0
In the ocean’s worth of new Facebook revelations out today, here are some of the most important drops https://www.niemanlab.org/2021/10/in-the-oceans-worth-of-new-facebook-revelations-out-today-here-are-some-of-most-important-drops/ https://www.niemanlab.org/2021/10/in-the-oceans-worth-of-new-facebook-revelations-out-today-here-are-some-of-most-important-drops/#respond Mon, 25 Oct 2021 18:00:01 +0000 https://www.niemanlab.org/?p=197096 There’s still another month remaining in the Atlantic hurricane season, and over the past few days, a powerful storm developed — one with the potential to bring devastating destruction.

The pattern was familiar: a distant rumbling in some faraway locale; a warning of its potential power and path; the first early rain bands; days of tracking; frantic movements; and finally the pummeling tempest slamming into landfall.

I’m talking, of course, about Facebook. (And if any of you jackals want to point out that Facebook should be more subject to the Pacific hurricane season, I’ll note that the storm is coming overwhelmingly from the other coast.)

A Nieman Lab analysis I just did in my head has found there are as many as 5.37 gazillion new stories out today about Facebook’s various misdeeds, almost all of them based in one way or another on the internal documents leaked by company whistleblower Frances Haugen. Haugen first began leaking the documents to reporters at The Wall Street Journal for a series of stories that began last month. Then came 60 Minutes, then congressional testimony, then the SEC, and finally a quasi-consortium of some of the biggest news organizations in America.

(Actually, cut that “finally”: Haugen is at the moment in London testifying before the U.K. Parliament about the documents, with a grand tour of European capitals to follow.)

It is, a Nieman Lab investigation can also confirm, a lot to take in. Protocol is doing its best to keep track of all the new stories that came off embargo today (though some began to dribble out Friday). At this typing, their list is up to 40 consortium pieces, including work from AP, Bloomberg, CNBC, CNN, NBC News, Politico, Reuters, The Atlantic, the FT, The New York Times, The Verge, The Wall Street Journal, The Washington Post, and Wired. (For those keeping score at home, Politico leads with six stories, followed by Bloomberg with five and AP and CNN with four each.) And that doesn’t even count reporters tweeting things out directly from the leak. I read through ~all of them and here are some of the high(low?)lights — all emphases mine.

Facebook’s role in the January 6 Capitol riot was bigger than it’d like you to believe.

From The Washington Post:

Relief flowed through Facebook in the days after the 2020 presidential election. The company had cracked down on misinformation, foreign interference and hate speech — and employees believed they had largely succeeded in limiting problems that, four years earlier, had brought on perhaps the most serious crisis in Facebook’s scandal-plagued history.

“It was like we could take a victory lap,” said a former employee, one of many who spoke for this story on the condition of anonymity to describe sensitive matters. “There was a lot of the feeling of high-fiving in the office.”

Many who had worked on the election, exhausted from months of unrelenting toil, took leaves of absence or moved on to other jobs. Facebook rolled back many of the dozens of election-season measures that it had used to suppress hateful, deceptive content. A ban the company had imposed on the original Stop the Steal group stopped short of addressing dozens of look-alikes that popped up in what an internal Facebook after-action report called “coordinated” and “meteoric” growth. Meanwhile, the company’s Civic Integrity team was largely disbanded by a management that had grown weary of the team’s criticisms of the company, according to former employees.

“This is not a new problem,” one unnamed employee fumed on Workplace on Jan. 6. “We have been watching this behavior from politicians like Trump, and the — at best — wishy washy actions of company leadership, for years now. We have been reading the [farewell] posts from trusted, experienced and loved colleagues who write that they simply cannot conscience working for a company that does not do more to mitigate the negative effects on its platform.”

A company after-action report concluded that in the weeks after the election, Facebook did not act forcefully enough against the Stop the Steal movement that was pushed by Trump’s political allies, even as its presence exploded across the platform.

The documents also provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that in many instances, executives had declined to implement those steps.

Facebook was indeed well aware of how potent a tool for radicalization it can be. From NBC News:

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

From CNN:

“Almost all of the fastest growing FB Groups were Stop the Steal during their peak growth,” the analysis says. “Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold. We were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence, and misinformation.”

This approach did eventually change, according to the analysis — after it was too late.

“After the Capitol insurrection and a wave of Storm the Capitol events across the country, we realized that the individual delegitimizing Groups, Pages, and slogans did constitute a cohesive movement,” the analysis says.

When Facebook executives posted messages publicly and internally condemning the riot, some employees pushed back, even suggesting Facebook might have had some culpability.

“There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions,” one employee wrote in response to a post from Mike Schroepfer, Facebook’s chief technology officer.

Another wrote, “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”

Other Facebook employees went further, claiming decisions by company leadership over the years had helped create the conditions that paved the way for an attack on the US Capitol.

Responding to Schroepfer’s post, one staffer wrote that, “leadership overrides research based policy decisions to better serve people like the groups inciting violence today. Rank and file workers have done their part to identify changes to improve our platforms but have been actively held back.”

One important source of political agitation: SUMAs. From Politico:

Facebook has known for years about a major source of political vitriol and violent content on its platform and done little about it: individual people who use small collections of accounts to broadcast reams of incendiary posts.

Meet SUMAs: a smattering of accounts run by a single person using their real identity, known internally at Facebook as Single User Multiple Accounts. And a significant swath of them spread so many divisive political posts that they’ve mushroomed into a massive source of the platform’s toxic politics, according to internal company documents and interviews with former employees.

While plenty of SUMAs are harmless, Facebook employees for years have flagged many such accounts as purveyors of dangerous political activity. Yet, the company has failed to crack down on SUMAs in any comprehensive way, the documents show. That’s despite the fact that operating multiple accounts violates Facebook’s community guidelines.

Company research from March 2018 said accounts that could be SUMAs were reaching about 11 million viewers daily, or about 14 percent of the total U.S. political audience. During the week of March 4, 2018, 1.6 million SUMA accounts made political posts that reached U.S. users.

Through it all, Facebook has retained its existential need to be seen as nonpartisan — seen being the key word there, since perception and reality often don’t align when it comes to the company. From The Washington Post:

Ahead of the 2020 U.S. election, Facebook built a “voting information center” that promoted factual information about how to register to vote or sign up to be a poll worker. Teams at WhatsApp wanted to create a version of it in Spanish, pushing the information proactively through a chat bot or embedded link to millions of marginalized voters who communicate regularly through WhatsApp. But Zuckerberg raised objections to the idea, saying it was not “politically neutral,” or could make the company appear partisan, according to a person familiar with the project who spoke on the condition of anonymity to discuss internal matters, as well as documents reviewed by The Post.

(Will you allow me a brief aside to highlight some chef’s-kiss PR talk?)

This related Post story from Friday includes not one, not two, but three of the most remarkable non-denial denials I’ve read recently, all from Facebook PR. Lots of chest-puffing without ever actually saying “Your factual claim is false”:

As the company sought to quell the political controversy during a critical period in 2017, Facebook communications official Tucker Bounds allegedly said, according to the affidavit, “It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Bounds, now a vice president of communications, said in a statement to The Post, ❶ “Being asked about a purported one-on-one conversation four years ago with a faceless person, with no other sourcing than the empty accusation itself, is a first for me.”

Facebook spokeswoman Erin McPike said in a statement, ❷ “This is beneath the Washington Post, which during the last five years competed ferociously with the New York Times over the number of corroborating sources its reporters could find for single anecdotes in deeply reported, intricate stories. It sets a dangerous precedent to hang an entire story on a single source making a wide range of claims without any apparent corroboration.”

The whistleblower told The Post of an occasion in which Facebook’s Public Policy team, led by former Bush administration official Joel Kaplan, defended a “white list” that exempted Trump-aligned Breitbart News, run then by former White House strategist Stephen K. Bannon, and other select publishers from Facebook’s ordinary rules against spreading false news reports.

When a person in the video conference questioned this policy, Kaplan, the vice president of global policy, responded by saying, “Do you want to start a fight with Steve Bannon?” according to the whistleblower in The Post interview.

Kaplan, who has been criticized by former Facebook employees in previous stories in The Post and other news organizations for allegedly seeking to protect conservative interests, said in a statement to The Post, ❸ “No matter how many times these same stories are repurposed and re-told, the facts remain the same. I have consistently pushed for fair treatment of all publishers, irrespective of ideological viewpoint, and advised that analytical and methodological rigor is especially important when it comes to algorithmic changes.”

If you think Facebook does a bad job moderating content here, it’s worse almost everywhere else.

This was a major theme in stories across outlets. The New York Times:

On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.

The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.

“The test user’s News Feed has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

With 340 million people using Facebook’s various social media platforms, India is the company’s largest market. And Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.

Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users, according to one document describing Facebook’s allocation of resources.

From Politico:

In late 2020, Facebook researchers came to a sobering conclusion. The company’s efforts to curb hate speech in the Arab world were not working. In a 59-page memo circulated internally just before New Year’s Eve, engineers detailed the grim numbers.

Only six percent of Arabic-language hate content was detected on Instagram before it made its way onto the photo-sharing platform owned by Facebook. That compared to a 40 percent takedown rate on Facebook.

Ads attacking women and the LGBTQ community were rarely flagged for removal in the Middle East. In a related survey, Egyptian users told the company they were scared of posting political views on the platform out of fear of being arrested or attacked online.

In Iraq, where violent clashes between Sunni and Shia militias were quickly worsening an already politically fragile country, so-called “cyber armies” battled it out by posting profane and outlawed material, including child nudity, on each other’s Facebook pages in efforts to remove rivals from the global platform.

From the AP:

An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

(Facebook generated $85.9 billion in revenue last year, with a profit margin of 38%.)

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

From CNN:

Facebook employees repeatedly sounded the alarm on the company’s failure to curb the spread of posts inciting violence in “at risk” countries like Ethiopia, where a civil war has raged for the past year, internal documents seen by CNN show…

They show employees warning managers about how Facebook was being used by “problematic actors,” including states and foreign organizations, to spread hate speech and content inciting violence in Ethiopia and other developing countries, where its user base is large and growing. Facebook estimates it has 1.84 billion daily active users — 72% of which are outside North America and Europe, according to its annual SEC filing for 2020.

The documents also indicate that the company has, in many cases, failed to adequately scale up staff or add local language resources to protect people in these places.

So which are the countries Facebook does care about, if “care” is not a horribly misused term here? From The Verge:

In a move that has become standard at the company, Facebook had sorted the world’s countries into tiers.

Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously. They created dashboards to analyze network activity and alerted local election officials to any problems.

Germany, Indonesia, Iran, Israel, and Italy were placed in tier one. They would be given similar resources, minus some resources for enforcement of Facebook’s rules and for alerts outside the period directly around the election.

In tier two, 22 countries were added. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”

The rest of the world was placed into tier three. Facebook would review election-related material if it was escalated to them by content moderators. Otherwise, it would not intervene.

“Tier Three” must be the new “Third World.”

The kids fled Facebook long ago, but now they’re fleeing Instagram too.

Also: “Most [young adults] perceive Facebook as place for people in their 40s or 50s…perceive content as boring, misleading, and negative…perceive Facebook as less relevant and spending time on it as unproductive…have a wide range of negative associations with Facebook including privacy concerns, impact to their wellbeing, along with low awareness of relevant services.” Otherwise, they love it.

From The Verge:

Earlier this year, a researcher at Facebook shared some alarming statistics with colleagues.

Teenage users of the Facebook app in the US had declined by 13 percent since 2019 and were projected to drop 45 percent over the next two years, driving an overall decline in daily users in the company’s most lucrative ad market. Young adults between the ages of 20 and 30 were expected to decline by 4 percent during the same time frame. Making matters worse, the younger a user was, the less on average they regularly engaged with the app. The message was clear: Facebook was losing traction with younger generations fast.

Facebook’s struggle to attract users under the age of 30 has been ongoing for years, dating back to as early as 2012. But according to the documents, the problem has grown more severe recently. And the stakes are high. While it famously started as a networking site for college students, employees have predicted that the aging up of the app’s audience — now nearly 2 billion daily users — has the potential to further alienate young people, cutting off future generations and putting a ceiling on future growth.

The problem explains why the company has taken such a keen interest in courting young people and even pre-teens to its main app and Instagram, spinning up dedicated youth teams to cater to them. In 2017, it debuted a standalone Messenger app for kids, and its plans for a version of Instagram for kids were recently shelved after lawmakers decried the initiative.

Instagram was doing better with young people, with full saturation in the US, France, the UK, Japan, and Australia. But there was still cause for concern. Posting by teens had dropped 13 percent from 2020 and “remains the most concerning trend,” the researchers noted, adding that the increased use of TikTok by teens meant that “we are likely losing our total share of time.”

Apple was close to banning Facebook and Instagram from the App Store because of how they were being used for human trafficking.

From CNN:

Facebook has for years struggled to crack down on content related to what it calls domestic servitude: “a form of trafficking of people for the purpose of working inside private homes through the use of force, fraud, coercion or deception,” according to internal Facebook documents reviewed by CNN.

The company has known about human traffickers using its platforms in this way since at least 2018, the documents show. It got so bad that in 2019, Apple threatened to pull Facebook and Instagram’s access to the App Store, a platform the social media giant relies on to reach hundreds of millions of users each year. Internally, Facebook employees rushed to take down problematic content and make emergency policy changes to avoid what they described as a “potentially severe” consequence for the business.

But while Facebook managed to assuage Apple’s concerns at the time and avoid removal from the app store, issues persist. The stakes are significant: Facebook documents describe women trafficked in this way being subjected to physical and sexual abuse, being deprived of food and pay, and having their travel documents confiscated so they can’t escape. Earlier this year, an internal Facebook report noted that “gaps still exist in our detection of on-platform entities engaged in domestic servitude” and detailed how the company’s platforms are used to recruit, buy and sell what Facebook’s documents call “domestic servants.”

Last week, using search terms listed in Facebook’s internal research on the subject, CNN located active Instagram accounts purporting to offer domestic workers for sale, similar to accounts that Facebook researchers had flagged and removed. Facebook removed the accounts and posts after CNN asked about them, and spokesperson Andy Stone confirmed that they violated its policies.

And from AP:

After publicly promising to crack down, Facebook acknowledged in internal documents obtained by The Associated Press that it was “under-enforcing on confirmed abusive activity” that saw Filipina maids complaining on the social media site of being abused. Apple relented and Facebook and Instagram remained in the app store.

But Facebook’s crackdown seems to have had a limited effect. Even today, a quick search for “khadima,” or “maids” in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images. That’s even as the Philippines government has a team of workers that do nothing but scour Facebook posts each day to try and protect desperate job seekers from criminal gangs and unscrupulous recruiters using the site.

If you see an antitrust regulator smiling today, this is why.

From Politico:

Facebook likes to portray itself as a social media giant under siege — locked in fierce competition with rivals like YouTube, TikTok and Snapchat, and far from the all-powerful goliath that government antitrust enforcers portray.

But internal documents show that the company knows it dominates the arenas it considers central to its fortunes.

Previously unpublished reports and presentations collected by Facebook whistleblower Frances Haugen show in granular detail how the world’s largest social network views its power in the market, at a moment when it faces growing pressure from governments in the U.S., Europe and elsewhere. The documents portray Facebook employees touting its dominance in their internal presentations — contradicting the company’s own public assertions and providing potential fuel for antitrust authorities and lawmakers scrutinizing the social network’s sway over the market.

And, of course, the Ben Smith meta-media look at it all.

Frances Haugen first met Jeff Horwitz, a tech-industry reporter for The Wall Street Journal, early last December on a hiking trail near the Chabot Space & Science Center in Oakland, Calif.

She liked that he seemed thoughtful, and she liked that he’d written about Facebook’s role in transmitting violent Hindu nationalism in India, a particular interest of hers. She also got the impression that he would support her as a person, rather than as a mere source who could supply him with the inside information she had picked up during her nearly two years as a product manager at Facebook.

“I auditioned Jeff for a while,” Ms. Haugen told me in a phone interview from her home in Puerto Rico, “and one of the reasons I went with him is that he was less sensationalistic than other choices I could have made.”

In the last two weeks [the news organizations] have gathered on the messaging app Slack to coordinate their plans — and the name of their Slack group, chosen by [beloved former Nieman Labber] Adrienne LaFrance, the executive editor of The Atlantic, suggests their ambivalence: “Apparently We’re a Consortium Now.”

Inside the Slack group, whose messages were shared with me by a participant, members have reflected on the strangeness of working, however tangentially, with competitors. (I didn’t speak to any Times participants about the Slack messages.)

“This is the weirdest thing I have ever been part of, reporting-wise,” wrote Alex Heath, a tech reporter for The Verge.

Original image of Hurricane Ida by NASA and Mark Zuckerberg drawing by Paul Chung used under a Creative Commons license.

Media consolidation and algorithms make Facebook a bad place for sharing local news, study finds
https://www.niemanlab.org/2021/10/media-consolidation-and-algorithms-make-facebook-a-bad-place-for-sharing-local-news-study-finds/ (Wed, 13 Oct 2021)

The combination of local news outlets being bought out by bigger media conglomerates and the ever-present influence of social media in helping spread news seems to have created a new phenomenon, according to a new study: Issues of importance to local audiences are being drowned out in favor of harder-hitting news pieces with national relevance.

The study, published last week in Digital Journalism, was conducted by Benjamin Toff and Nick Mathews, two researchers at the University of Minnesota’s Hubbard School of Journalism and Mass Communication.

The idea for this research grew partly out of the treasure trove of data available on CrowdTangle, which Toff and Mathews wanted to make use of. “It occurred to us together that we could use [CrowdTangle] data to examine the degree to which local media engages with readers on social media platforms,” Toff said, adding that the idea took off from there. (It’s probably also good that Toff and Mathews thought to do this work now. Toff told me he is “very concerned” about possible changes at CrowdTangle following its founder and CEO’s departure, as they may curtail access to “one of the few sources of data we as researchers have to what people are interacting with on Facebook.”)

For this study, Toff and Mathews looked at a dataset of nearly 2.5 million Facebook posts published by local news organizations in three U.S. states. They chose Arizona, Minnesota, and Virginia for a couple of reasons. One was background knowledge of the media landscapes: Mathews had previously worked in Virginia, Toff had grown up in Arizona, and both are current Minnesota residents, so they had context about the local media in all three states.

The other reason was to find states that weren’t on extreme ends of the news spectrum. “None of them are particularly extreme as far as being really small states [with limited media outlets] or on the other extreme like New York, which has such a dominant media presence,” Toff said.

Once they had a list of media outlets in the three states — along with detailed information about their ownership status and type, such as whether the outlets were owned by a multi-state chain or publicly owned — the researchers analyzed that information along with the millions of Facebook posts to identify any patterns in engagement. (For the purposes of this study, Toff and Mathews stuck to total engagement — all possible interactions, including page follows — and didn’t examine individual interaction types such as reactions or comments.)

They also sorted the posts into categories of hard news and soft news. Hard news stories covered topics such as politics, education, and health, while soft news included sports, arts and leisure, and — because they found so many posts of this variety on Facebook — animals.

The study revealed a few trends:

  • Ownership patterns related to activity and engagement on Facebook: “[P]ages owned by publicly traded, multi-state chains were among the most active on the platform,” the study found. These Facebook pages were also “more likely to have higher rates of interactions…on a per post basis than privately owned multi-state chains or pages owned by public or governmental organizations.”
  • Outlets owned by chains tended to post more repurposed content, but that led to less engagement: Chain-owned outlets, with more resources and access to the wire service or other sites owned by the same company, had access to more content, which they could use on their own platforms. “The idea is that it allows them to have a wider reach on the platform,” Toff said.
  • When it came to the type of news, hard news of national importance won out: Posts about hard news stories, especially on a national level, consistently brought more engagement than the softer, more locally relevant stories. “Even local organizations get more bang for their buck when they post about non-local subjects,” Toff said.

The combined effect: Local news, especially of topics that don’t rise to national importance, may be lost in the shuffle.

The study used data from 2018 and 2019, after Facebook changed its algorithm to emphasize “meaningful social interactions,” so Toff is interested in seeing how these trends may have looked prior to that big change. Anecdotally speaking, Toff said those changes made it harder for news organizations to get people to see their content.

A harder question to answer now is how much of these trends is driven by variations in people’s attention versus Facebook’s algorithms, since it’s hard to separate the two, Toff said.

Still, Toff said that the findings underscore the frustration often felt by news organizations and how they feel they are held captive by Facebook and other social media platforms. “You gotta go where people are spending time, but there’s so much [about these places] that can’t be controlled,” Toff said. “There’s a lot of hesitancy about becoming overly reliant on companies that have their own interests, ultimately, and they’re not always aligned [with news companies’ interests].”

Photo of Facebook News Feed by Dave Rutt used under a Creative Commons license.

As Facebook tries to knock the journalism off its platform, its users are doing the same
https://www.niemanlab.org/2021/09/as-facebook-tries-to-knock-the-journalism-off-its-platform-its-users-are-doing-the-same/ (Mon, 20 Sep 2021)

It has been clear for several years that Facebook wishes it never got into the news business.

Sure, having a few news stories sprinkled throughout the News Feed probably makes a subset of their users happy and more willing to tap that blue icon on their homescreen again tomorrow. But there aren’t that many of them. Only 12.9% of posts viewed in the News Feed have a link to anything, much less a link to a news site. The percent that are about news — defined broadly, including sports and entertainment — is now somewhere less than 4%. It’s something of a niche interest for Facebook users.

Meanwhile, oh, what a giant pain in the ass it has been for Zuck & Co.: Fake news, foreign propaganda, Covid lies, Nazis, horse paste, fact-checking, accusations of political bias, and a seemingly never-ending list of additional headaches. Because Facebook, architecturally, makes little distinction between the best sources and the worst — but, architecturally, incentivizes content that appeals to our less rational natures — it gets blamed for roughly 80% of what ails the world.

Maybe you think that’s fair; maybe you think it gets a bad rap. Either way, Facebook would be happy if all of it could be sucked right off its servers and replaced with more puppies and silly memes and Instagram sunsets. And the company has taken a steady series of steps to reduce the role of news, especially political news, on its platform, the latest just a few weeks ago.

A new study out today from the Pew Research Center suggests it isn’t just Facebook that’s seeking a trial separation from the news — it’s also Facebook’s users.

As social media and technology companies face criticism for not doing enough to stem the flow of misleading information on their platforms, a new Pew Research Center survey finds that a little under half of U.S. adults (48%) get news on social media sites “often” or “sometimes,” a 5 percentage point decline from 2020.

Across the 10 social media sites asked about in this study, the percentage of users of each site who regularly get news there has remained relatively stable since 2020. However, both Facebook and TikTok buck this trend.

The share of Facebook users who say they regularly get news on the site has declined 7 points since 2020, from 54% to about 47% in 2021. TikTok, on the other hand, has seen a slight uptick in the percentage of users who say they regularly get news on the site, rising from 22% in 2020 to 29% in 2021.

That people would be reducing their news use of social media isn’t shocking; you may remember that 2020 was a pretty busy year! 2021, for all its continued pandemicity, has been at least a little less insane, news-wise. (Since January 20, at least.)

But Facebook’s decline (7 percentage points) was substantially larger than Twitter’s (4), Reddit’s (3), Snapchat’s (3), YouTube’s (2), Instagram’s (1), or LinkedIn’s (1). (Besides TikTok, WhatsApp and Twitch saw increases, though small ones.)

And because Facebook’s user base is so much larger than other (non-YouTube) social platforms, the impact of that drop in news usage is magnified. If my back-of-the-envelope math is right, the net decline in news usage on Facebook was about 5× the size of the net decline on Twitter. Facebook’s seeing a bigger decline that’s happening within a much larger user base.
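That back-of-the-envelope math can be sketched out. The platform-usage shares below are my own assumption (roughly Pew’s 2021 figures for the share of U.S. adults who use each platform at all — they are not stated in this piece); the percentage-point drops in regular news use come from the survey numbers above.

```python
# Rough check of the "about 5x" comparison between Facebook and Twitter.
# Assumed inputs: share of U.S. adults who use each platform at all
# (approximate Pew 2021 figures -- an assumption, not from this article).
platform_share = {"Facebook": 0.69, "Twitter": 0.23}

# Percentage-point decline in users who regularly get news there (from the article).
news_drop_pp = {"Facebook": 7, "Twitter": 4}

def net_decline(platform: str) -> float:
    """Decline in regular news use, expressed as a share of ALL U.S. adults."""
    return platform_share[platform] * news_drop_pp[platform] / 100

ratio = net_decline("Facebook") / net_decline("Twitter")
print(f"Facebook's net decline is roughly {ratio:.1f}x Twitter's")
```

Under those assumed usage shares, the ratio works out to a bit over 5 — consistent with the “about 5×” estimate, because the 7-point drop applies to a user base roughly three times Twitter’s size.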

All of this is happening despite 2020’s splashy-sounding debut of the Facebook News Tab for all its (U.S.) users and despite the company wearing out its checkbook writing checks to publishers around the world. As I’ve argued, those payments (and those from rival duopolist Google) should be understood more as paid lobbying than as an actual attempt to center journalism as an important anchor of their platforms.

Facebook users tend to be more casual news consumers than users of more news-oriented platforms like Twitter or Reddit — so a reduction there is probably more significant to an individual user, in terms of their overall news diet. But that more casual news consumer is also the sort more likely to be time-targeted in their news consumption — the person who pays attention to politics for the 30 days before an election and ignores it the rest of the time — so a higher drop-off from 2020 shouldn’t be too surprising.

But still, fewer people counting on Facebook for news is probably a good thing — and a sign that the interests of the company and its users may be strangely aligned, for once.

Female video game journalists on what to do when the mob comes for you
https://www.niemanlab.org/2021/07/female-video-game-journalists-on-what-to-do-when-the-mob-comes-for-you/ (Mon, 26 Jul 2021)

The weirdest harassment campaign Ash Parrish ever faced came after she wrote a cheeky, semi-facetious story for Kotaku about why the new Xbox Series X creeped her out.

Microsoft’s flagship machine is equipped with an exhaust port consisting of dozens of circular holes. Parrish has trypophobia, which is defined as “an aversion to the sight of irregular patterns or clusters of small holes or bumps.” [Ed. note: If you think this isn’t real, Google it. NOT GOOD.] The new console fit the bill perfectly, and Parrish wanted to have her fun. The blog post, entitled “The Xbox Series X Has Too Many Horrifying Holes,” possessed wit, repartee, and a light sardonic dusting aimed to disarm anyone inclined to take it too seriously.

Didn’t matter: The post metastasized through toxic Discord channels and imageboards — populated with the sort of people who make a hobby out of tormenting the journalists who cover the geek industries — and soon enough, Parrish found herself besieged.

[Read: “In believing it can’t happen to them, they believe, on some level, that you deserve what you are getting. But the reality is, it can happen to anyone.”]

“The way gamers responded you would have thought I shot Geoff Keighley on live television,” she told me. “People sent relentless emails to me that were nothing but trypo-trigger images. It was nuts.”

Parrish is a queer woman of color who reports on video games. This was not her first rodeo. Every once in a while, she writes something that snaps the tripwire, leading to an onslaught of harassment, brigading, and invasions of privacy (though Parrish tells me she’s blessed to have never been doxxed). Like so many other writers on her beat, she’s been forced to develop a tight regimen of self-care protocols to survive those moments when the howling rancor comes to her doorstep.

Who could blame her? Over the last decade, pop fandom has emerged as the single most powerful and dangerous force on the internet. Whole factions of forum dwellers will weaponize their numbers in pursuit of a single-minded cause — be that the superiority of a favored boy band, or cinematic universe, or in Parrish’s case, the sanctity of the Xbox — and they’ll pour into your mentions, salivating, looking to get even for any perceived indiscretion. I’m a cis white man, and therefore I’ve experienced a fraction of the fear tactics endured by some of my colleagues from marginalized backgrounds, but I’ve still often found myself vibrating with anxiety on the evening before a story runs that could rankle the gamergaters or psycho-stans hiding in the mists. To go viral, at least in this capacity, is to briefly lose control of your life.

I wanted to know more about how my friends gird themselves through the crossfire. How do they hold on through those nervy afternoons and make it to the other side? How do they stay measured when they know an anonymous, untrackable mass is sifting through their paper trail? How do they remind themselves, as the storm gathers, that everything will eventually be okay?

Parrish has her methods down to an art. I’ve followed her long enough to know that whenever a padlock appears next to her Twitter name, it means some bespoke group of angry gamer men is on her tail for something she’s posted. That’s the advice she gives to any writer who’s found themselves at the center of a witch hunt: Use every option available to stem the tide.

“Lock your social media. Mute words and phrases,” said Parrish. “Make it so only people who follow you can speak with you. Remember 98% of the time the people harassing you are not attempting to engage with your work in good faith. As such, they do not demand your attention. You don’t have to respond to them or refute them.”

It’s a shame, continued Parrish, that the best advice she can dole out is to avoid the zeitgeist entirely. But that’s also the only real defense anyone has on social media, where the only actionable currency is attention. I think one of the first lessons any writer learns after being thrown into the fire is how to suss out legitimate criticism from the noise. In fact, the longer I’ve worked as a professional journalist, the more I’ve become inclined to let whatever is written on the page speak for itself, and watch the chips fall where they may. There’s a lot of value in learning from constructive feedback, but the more you explain yourself in the mentions, the quicker you get into trouble.

Ana Valens, who is the managing editor of We Got This Covered, previously the NSFW editor at The Daily Dot, and someone who has written extensively about trans issues in the video game business, echoed Parrish’s sentiment. She’s recently stepped back from social media as a whole, in an attempt to rewire her habitual Twitter and Instagram usage. “I don’t have the apps available on my phone, I check my notifications in bulk, and I log out of Twitter every time I use it,” she said. “These all deter me from constantly seeking out social media.”

This was a decision Valens made after discovering that she had become a “person of interest” among the reactionary contingent in the gaming community. Her byline has become somewhat infamous in that realm, and Valens is aware that her notoriety increases her vulnerability — it’s frightening to know that some bad actors have stoked an obsessive, entirely parasocial animus with your blog posts. So, when the timeline gets really bad, Valens employs a watchdog network of trusted friends who keep an eye out for her security. That affords her some bandwidth to unplug, kick back, and relax.

“I’ll lock down my accounts or ask a friend to watch out for any websites, communities, or users that are prone to harass me,” she said. “I’ll usually cancel plans for the day and find something at home to keep me busy, like playing video games or getting caught up with YouTube. But other than that, I try not to be too online, too available, too accessible. It’s a band-aid to a larger, systemic problem with social media enabling harassment, but it’s better than doomscrolling.”

Valens believes that any young journalist, especially those interested in covering the geek industries, should sweep through their digital footprints and remove any compromising information. Millennial writers were trained to be extraordinarily available on main, and Valens believes that to be a fraught, out-of-date instinct.

“Start creating a divide between personal and professional social media landscapes. Have two Twitter accounts, one that’s public-facing and one that’s not,” she said. “That not only encourages a good work-life balance, but it’s way easier to protect yourself from harassment if your most personal posts and photos are private, locked away on an Instagram or Twitter that only your most trustworthy friends have access to.”

In general, Valens has a pretty good sense of when she’s writing something that might fire up the agitators. It’s easy to prepare for the worst when you know it’s coming.

But the internet can get truly chaotic when backlash catches a writer by surprise, which happens more often than you’d think. A pleasant, mild article is robbed of its original context and agency, and the author finds themself injected into yet another arbitrary, parochial skirmish. Parrish told me that a YouTuber once made a 30-minute video about her after she wrote an innocent story calling for more adaptive customization accessories — like hearing aids and canes — in video game character creators. This is when the crossfire is at its most annoying and inscrutable. Sometimes you’re thrown into a war zone without ever picking a side.

“It’s so hard to determine what’s going to be interpreted as good-faith criticism, or what’s going to be interpreted as an attack, or clickbait. And once someone says, ‘Hey, this article is huge clickbait,’ anyone they share it with is going to be primed that way,” said Cass Marshall, a staffer at Polygon who writes video game commentary. “You have to be aware of how you’re going to be read. How is someone who actively hates your website going to read your article? To a certain extent, that’s impossible. So you’re kinda just waiting for the other shoe to drop, all the time.”

Marshall notes one of the core ironies at play here. Whether we like it or not, members of the media have been cast as the primary vectors in the culture war. It doesn’t matter if you’re writing superheated political news for CNN or a silly Xbox blog for Kotaku — as journalists, we’re always a tweet away, and that’s made us one of the easiest targets for tepid, deployable political anger.

“When a topic becomes so fraught, I don’t want to get into the discourse. And that sucks, because ideally I should be hearing from people who disagree with me,” added Marshall. “I want to hear their perspectives. But when the feedback is not about the game I’m writing about, or the piece itself, and it’s about this other, greater hostility, there’s nothing to gain from that. Even the good feedback is drowned out.”

Perhaps someday the pendulum will swing back, and a chill won’t run up our spine as the numbers in our notifications tab tick past the double-digits. I think we all yearn for a future where the temperature is turned down, and we can actually hear our readers once again. Until then, journalists will hold onto a fundamental fact of life: No matter how bad it gets, the storm always passes.

“Don’t get me wrong, the sustained impact of harassment and online abuse is a real thing — I’m an ongoing target. The pain lingers beyond the initial attack. And I think we’re only just realizing that online harassment is correlated with PTSD and CPTSD. But there’s a lot of truth to the fact that, for most journalists, the mob will gradually forget about your article as another, juicier target drops,” said Valens. “There’s a lot of healing that has to happen after you’re no longer the main character, but at least it’s over. It’s comforting when you feel like it will never end.”

Luke Winkie is a journalist and former pizza maker in New York City. He has previously written for Nieman Lab about Mel Magazine, Stat, Newsmax and OAN, and Study Hall.

The New York Times is using Instagram slides and Twitter cards to make stories more digestible
https://www.niemanlab.org/2021/07/the-new-york-times-is-using-instagram-slides-and-twitter-cards-to-make-stories-more-digestible/ (Thu, 01 Jul 2021)

Last summer, Vox’s Terry Nguyễn wrote about the ways that our Instagram feeds had changed in the wake of the Black Lives Matter movement. We started to see more PowerPoint-looking slides that were made to communicate information about the protests, and they’ve since been co-opted for just about every subject.

Nguyễn wrote about how those slides, while attention-grabbing, ran the risk of oversimplifying issues, stripping them of their importance, and potentially spreading misinformation:

Coincidental or not, creators are applying this millennialesque visual language to their work, which makes it easy for savvy brands (or anyone who can replicate that design style) to jump on and pervert the movement by using it to further their own corporate mission. Then there’s the question of whether it’s even appropriate to aestheticize these human rights-related issues. As corporations and individuals become attuned to the widespread adoption of memes and certain creative aesthetics in online spaces, they could further be used to “commodify tragedy and obfuscate revolutionary messages,” wrote the Instagram creator @disintegration.loops, later referencing how Breonna Taylor’s death has devolved into a meme.

Most of these activism slideshows don’t appear to be made with malicious intent, nor are they actively harming anyone, but some are worried about the long-term neutralizing effect of making advocacy more digestible and consumable for a large audience.

But slides like these, when done right and with care, make complex stories (about, say, a mutating virus!) more digestible and accessible. At The New York Times, the audience team has been experimenting with variations of these slides and cards on its social media platforms, deputy off-platform editor Jake Grovum said.

“When there’s either an important, complicated news story or something that [would benefit from] context, there’s a real good journalistic reason to do this kind of thing,” Grovum said. “If you look at some of the examples when it works best, it’s almost like you get the first three or four grafs of a news story all in one post. You have the copy of the tweet, a couple of lines in the card, and then it’s just a lot more information and context, and everyone knows that context can be lacking on social.”

At the Times, using slides and cards on social became more of a priority around the beginning of the pandemic last year. The audience team wanted to have a more “visual presence” on Times platforms and wanted to make more use of the maps and data visualizations that lived on the website.

“It all came from wanting to be more visual,” Grovum said. “We made it part of our daily routine to have these visual presences for whatever news or whatever story we’re trying to share.”

Single cards work better on Facebook and Twitter because those apps don’t have a carousel feature (you know, where you swipe left to see more images in a single post) the way Instagram does. Grovum said he’s found that Facebook and Twitter lend themselves well to text-heavy cards and that users are actually taking the time to read and share them.

The cards have been particularly useful to the Times in debunking misinformation, though Grovum said a broader challenge is designing cards in ways that are still helpful even if they’re screenshotted and stripped of context.

Here, a Facebook post from this past April about a vaccine-related false claim had more than 8,300 shares and 25,000 likes, a sign that users want to share factual information with others.

Compare that to the engagement on another Facebook post from the same day that explains what the word “cheugy” means. Just over a thousand likes and 534 shares.

Grovum said the Times looks at metrics and qualitative feedback (like comments) on the cards as proxy for utility. More engagement is a sign that people found the information helpful, even if they didn’t click the link.

“We’re pretty aware and upfront with ourselves about how these are optimized for off-platform engagement, reach, and sharing. If you wanted to optimize for people clicking through the site, you wouldn’t put a third of the story on a card,” Grovum said. “We’re very aware that there’s a trade-off here, and we’ll usually do both — we’ll have a link post or something and [also] do the shareable version and get the best of both worlds. Link posts without a visual can drive more actual click-through, [while] the cards are better for engagement.”

Because the text cards are usually image files, they can be difficult to read or even access for people who are visually impaired. Grovum said the Times is still working through those accessibility issues by adding alternative text to some posts, but noted that many social media management tools still don’t have an efficient way of including it, which can cause problems in workflow.

Trying out these new formats is helpful to the Times because it gives journalists a chance to learn about what their audiences want to see.

“Even with templates, these are labor-intensive no matter how easy your processes are. It takes longer to do [a card] than to write a tweet and just get that out,” Grovum said. “But I think it pays off to understand your audience and lean into things that you know are going to resonate with them.”

Dude with Sign, meet New York Times.

What’s the healthiest news diet? Probably traditional media, but don’t gorge yourself: Too much can leave you less informed
https://www.niemanlab.org/2021/05/whats-the-healthiest-news-diet-probably-traditional-media-but-dont-gorge-yourself-too-much-can-leave-you-less-informed/
Mon, 17 May 2021 19:42:26 +0000

What sorts of media diets actually make you more knowledgeable about politics?

Is it one packed full of newspaper fiber? High-sugar clickbait? A day full of smartphone snacking, or three square meals? Can cable news really be part of a complete breakfast? Er…any idea what keto news consumption would be in this over-extended metaphor? (Vice, maybe?)

A new study in The International Journal of Press/Politics, looking at news usage in 17 European countries, finds that good ol’ traditional media is probably best for your political IQ — including high-quality public media, if you can find it. A vigorous online news regimen can also be good for knowledge, mostly. But ironically, gorging on all the news you can find might leave you less informed than someone who’s more selective.

The list of researchers is a veritable Schengen area of academics — 18 in all, led here by Laia Castro of the University of Zurich.[1] (They make up NEPOCS, the Network of European Political Communication Scholars.)[2] Here’s the abstract, emphases mine.

The transition from low- to high-choice media environments has had far-reaching implications for citizens’ media use and its relationship with political knowledge. However, there is still a lack of comparative research on how citizens combine the usage of different media and how that is related to political knowledge.

To fill this void, we use a unique cross-national survey about the online and offline media use habits of more than 28,000 individuals in 17 European countries. Our aim is to (i) profile different types of news consumers and (ii) understand how each user profile is linked to political knowledge acquisition.

Our results show that five user profiles — news minimalists, social media news users, traditionalists, online news seekers, and hyper news consumers — can be identified, although the prevalence of these profiles varies across countries. Findings further show that both traditional and online-based news diets are correlated with higher political knowledge. However, online-based news use is more widespread in Southern Europe, where it is associated with lower levels of political knowledge than in Northern Europe.

By focusing on news audiences, this study provides a comprehensive and fine-grained analysis of how contemporary European political information environments perform and contribute to an informed citizenry.

The research is based on an online survey of 28,317 Europeans, with per-country samples of around 1,700 each. (The samples are “fairly representative” of the broader populations, though a little more female, a little more educated, and a little younger.) Subjects were asked about how frequently they used different kinds of news media — TV, radio, newspaper, public service broadcasters, social media, online news sites, alternative media, and “infotainment” (political talk shows, comedy news shows, etc.). Researchers also asked about how often they actively try to avoid the news, how often their friends share news stories on social platforms, and how often they bump into political news without specifically seeking it.

They used the responses to all of those questions to categorize people into five “news user profiles”: news minimalists, social media news users, traditionalists, online news seekers, and hyper news consumers.

  • News minimalists: 17%. “Those who seldom consume news and use very few media outlets or platforms, if any…they are also the least politically interested, do not perceive they will be well-informed regardless of their actively following the news…and are older and slightly more educated than the average news user.”
  • Social media news users: 22%. They “mainly inform themselves through social media and consume little information beyond that…slightly higher levels of inadvertent news viewing than minimalists…are frequently exposed to news through social platforms such as Facebook, Twitter, or Instagram.” They tend to think the important news will find them without them putting in the effort to seek it out, and they have low levels of media trust. “They are furthermore the least educated and politically interested” in the sample.
  • Traditionalists: 19%. These are people “who prefer traditional and public service-oriented news sources. They watch TV more than the two previous profiles (supported also by higher levels of exposure to infotainment TV shows) and use traditional newspapers and radio. Additionally, they are the oldest and best educated, politically interested, trustful of the media, and barely feel that ‘news will find them.’ They are, for the most part, men.”
  • Online news seekers: 32%. They are “often exposed to news and tend to actively use various news outlets and online platforms (although they also score high in the use of traditional news) and are generally women.” They use more news outlets across more different types of media than the first three groups — but they’re also “more prone to seeking like-minded perspectives in political information.” Despite all that news they consume, they’re the most skeptical and distrusting of mass media brands of all these groups; they’re also “more likely to use alternative media and nonjournalistic sources” than the groups listed above.
  • Hyper news consumers: 10%. (Warning, Nieman Lab reader: This may be you.) Hyper news consumers consume all sorts of news from all sorts of outlets and platforms, all the damn time. They have the highest interest in politics, the highest education levels, and the highest trust in media. They reported relying on six or more news outlets and more than three social platforms to keep up with news in the past 30 days.
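As a rough illustration of the grouping step above: the article doesn’t say which statistical technique the researchers used to derive the five profiles, so the sketch below uses a minimal k-means on a fabricated usage-frequency matrix purely to show the general shape of the analysis. Every number, feature name, and the choice of k-means itself are assumptions for illustration, not the study’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated survey matrix: rows = respondents, columns = self-reported
# usage frequency (0 = never ... 6 = daily) for TV, radio, newspapers,
# public broadcasters, social media, online news sites, alternative
# media, and infotainment.
responses = rng.integers(0, 7, size=(2_000, 8)).astype(float)

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: return one cluster label per respondent."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each respondent to the nearest profile center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Partition respondents into five profiles, matching the study's count.
profiles = kmeans(responses, k=5)
shares = np.bincount(profiles, minlength=5) / len(profiles)
```

K-means is only a stand-in here; model-based approaches such as latent class analysis are more typical for categorical survey data, but the logic — turn each respondent’s answers into a vector, then group similar vectors into a small number of profiles — is the same.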

Here’s a visual of those findings in pastel chart form:

The researchers also looked for variations between countries. Northern Europe’s biggest economies — Germany, France, the U.K., the Netherlands — tend to have high shares of “news minimalists.” (The researchers connect this to those countries’ higher levels of international integration: “News minimalists are most prevalent in globalized, heterogeneous societies that exhibit a high movement of people through labor mobility, migration, and cosmopolitanism.”) But in much of southern Europe (Spain, Italy, Greece), minimalists are, well, minimal.

Meanwhile, people in those southern countries are substantially more likely to be online news seekers or hyper news consumers than their richer neighbors to the north. (Why? Online media is cheaper, and their established media outlets are less trusted.) The richest countries also tend to have a higher reliance on public broadcasters, which are typically funded at a higher level than elsewhere.

That’s all interesting — but what sort of news diet actually leads people to know more about the political situation they’re living in?

Along with all those questions about consumption patterns, survey subjects were also asked a bank of multiple-choice political knowledge questions to see how well informed they are. (Some questions were the same for everyone, and others were country-specific.) Items included: Who is Greta Thunberg? Who is the new leader of the European Commission? Who is your country’s current minister of foreign affairs? What percentage of people living in your country were born outside of it?

The results?

…a key finding is that only two user profiles (traditionalists and online news seekers) are positively and consistently correlated with political knowledge compared to the rest of the user profiles…

More specifically, the results show that those having a more selective and richer online news diet (online news seekers) are more likely to hold higher levels of surveillance knowledge compared to all groups of news users with the exception of those using traditional and public media, who are comparatively better informed than all the rest.

Strikingly enough, the hyper consumer of news profile shows either nonsignificant associations with political knowledge or (when compared with traditional and online news seekers) even negative correlations. We anticipate the most plausible explanation thereof stems from information overloads, as we more extensively discuss in the final section of the paper.

So: People who rely heavily on traditional and public media correctly answer more factual questions about politics than everyone else. People who seek out a lot of news online also fare above average. Those are the third and fourth groups listed above, one leaning female, the other leaning male.

(Interestingly, for online news seekers, the associated increase in political knowledge was found in the Scandinavian countries — but not in southern ones, and not even in the big northern economies like Germany, France, and the U.K. The positive knowledge effect for traditional/public media users was found more consistently across countries — but still not in Italy, Greece, or Poland.)

It shouldn’t be surprising that people who only consume a minimal amount of political news don’t know a lot about political news. And neither do all the Facebook scrollers who think the news will find them.

(From the study: “This is in line with findings…that people do not learn much from following the news on social media. This suggests that the potential positive effects of incidental exposure to news information through social networks might be offset by, for example, exposure to a sizable proportion of user-generated content and unreliable information conveyed through personalized streams and like-minded others.”)

But the hyper news consumer — let’s call him or her Dr. Twitter List von RSS Reader, Esq. — doesn’t see a return on all that time invested in terms of higher levels of knowledge.[3]

That was kind of shocking to me, to be honest. What good are all these 283 morning newsletters in my inbox if they don’t make me smarter?

Indeed, our findings suggest that it is more about quality than quantity, since traditionalists consume information from a lower number of sources than most news profiles identified in this study. Accordingly, consuming news from a broader range of news outlets, channels, programs, and platforms does not necessarily make for a more informed citizenry, and it may even lead to the opposite.

As our analyses show, respondents embedded in the hyper news consumers profile are less politically knowledgeable than the average news user. In line with previous research, this may be due to information overloads and a tendency for news snacking over actual news reading.

The avalanche of information and constant stream of news stories people are currently exposed to (not least on social media) makes it plausible that individuals using a multitude of sources find it ever harder to retrieve and process information from their available media. Indeed, compared to the other news profiles, hyper consumers of news use a greater number of online news outlets and social platforms for news.

The idea that information overload makes you dumber makes a certain sense, I suppose, but until I read this paper, I would have guessed that was a relatively rare outcome. But this is a systemic, statistically significant finding across more than 28,000 survey subjects and 17 countries. If you’re an information hypervore, that should give you a little pause.

 

 

 

 

 

Okay, pause over, back to Twitter.

  1. Their names: Laia Castro, Jesper Stromback, Frank Esser, Peter Van Aelst, Claes de Vreese, Toril Aalberg, Ana S. Cardenal, Nicoleta Corbu, David Nicolas Hopmann, Karolina Koc-Michalska, Jörg Matthes, Christian Schemer, Tamir Sheafer, Sergio Splendore, James Stanyer, Agnieszka Stępińska, Václav Štětka, and Yannis Theocharis. Representing, respectively, the University of Zurich, Zurich, Switzerland; Universitat Internacional de Catalunya, Barcelona, Spain; University of Gothenburg, Gothenburg, Sweden; University of Antwerp, Antwerp, Belgium; University of Amsterdam, Amsterdam, the Netherlands; Norwegian University of Science and Technology, Trondheim, Norway; Universitat Oberta de Catalunya, Barcelona, Spain; National University of Political Studies and Public Administration, Bucharest, Romania; University of Southern Denmark, Odense M, Denmark; Audencia Business School, Nantes, France; University of Silesia, Katowice, Poland; University of Vienna, Vienna, Austria; Johannes Gutenberg University in Mainz, Mainz, Germany; Hebrew University of Jerusalem, Jerusalem, Israel; Università degli Studi di Milano, Milano, Italy; Loughborough University, Loughborough, UK; Adam Mickiewicz University, Poznan, Poland; Technical University of Munich, Munich, Germany.
  2. One of their current projects is called THREATPIE, funded by NORFACE. I don’t even know what those words mean, but if there’s ever a Bond film focused on high-end European communications research, I suggest they be considered for use at key plot points.
  3. Note: Hyper news consumers did have higher levels of political knowledge in Norway, Sweden, Israel, and Romania — just not elsewhere. So country-level effects are still a factor.
We need to know more about political ads. But can transparency be a trap?
https://www.niemanlab.org/2021/03/we-need-to-know-more-about-political-ads-but-can-transparency-be-a-trap/
Wed, 31 Mar 2021 13:00:28 +0000

As misinformation researchers, we spend a lot of time thinking about online advertising. We dig through ad libraries, monitor platforms’ announcements, and publish investigations into how disinformation agents are bending the rules.

We rely on social media platforms to give us information to do this. But the experience of working within platforms’ parameters has left us with a question: Can transparency be a trap?

In 2017, Facebook announced it was building a searchable archive of U.S. federal election–related ads that would include some spending and targeting data. Various iterations culminated in the Ad Library, which set the standard for ad transparency. Later, Google also began sharing some information about political ads with researchers. Snapchat did the same, and Twitter eventually opted to get rid of political advertising altogether.

By setting policy on it, social platforms have demonstrated they know transparency matters when it comes to political advertising. But they’re also able to control the terms of that transparency. Here are eight big questions that arose when we began scrutinizing the current landscape for advertising transparency.

1. What is obscured by the platforms’ definitions?

What counts as “political” and how is that decided? Election and media law in the U.S. generally defines political ads as those purchased by or on behalf of a candidate for public office, or those relating to a matter of national importance; most major social media platforms use a similar definition.

Facebook calls these “social issue” ads, and defines them as ads messaging about anything “heavily debated, [that] may influence the outcome of an election or result in/relate to existing or proposed legislation.” But who determines what is “heavily debated” or what messaging has the power to influence an election? Advertisements promoting ultrasound services may appear apolitical to most, but if they’re paid for by an anti-abortion organization, they may warrant further scrutiny. On Twitter, political issue ads are banned in the United States, including those from climate advocacy groups. On the other hand, oil companies such as ExxonMobil have been allowed to run ads on the platform. Given the room for interpretation as to what is and isn’t “political,” is the distinction really useful? Should political issue-related ads, such as ads about climate change, count as “political”? And who makes that determination?

As part of a stated effort to protect the U.S. election’s integrity, Facebook did not allow new political ads to run on its platform from October 27, 2020 to March 4, 2021 (with a brief exception made for political ads targeting Georgia’s Senate runoff election in January). But ads about vaccines, ads about election fraud, and ads from politically motivated groups including Prager U, the self-described “leading conservative nonprofit,” all ran during this time. Because of the norms established by the platforms, ads deemed non-political are not held to the same transparency standards, so they remain visible to the public, with less scrutiny from researchers. When platforms aren’t thoughtful with their definitions, powerful issue lobbies are able to exploit loopholes to promote their message.

2. Who gets to access and interpret the transparency data?

There are barriers to entry for every mechanism of transparency the platforms have provided us. A researcher looking to explore Snapchat’s political ads archive must be able to run and interpret a .csv file. Facebook provides more data to researchers with the advanced skills to access their API. There is also no standardization across the platforms’ databases, making meaningful cross-platform comparisons difficult. So while platforms are increasingly giving researchers access to data, should it only be trained researchers who can scrutinize how social media is used to target communities? How could we open this up for all interested people?

The platforms also fully control what data they make public, and how, and it’s not always particularly useful. For example, Facebook provides impression data for political ads, but it is given in ranges. So an ad could be listed as having garnered <1000 impressions, but there’s no way to know if this means 998 impressions or none. Many advocacy organizations have called for more granular data, which platforms could conceivably provide in a standardized format that allows comparison, or in a user-friendly public interface.
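The granularity problem is easier to see in code. The sketch below parses banded impression strings like the “<1000” mentioned above into (lower, upper) bounds; the CSV layout and band formats are invented for illustration and don’t reflect any platform’s actual export schema.

```python
import csv
import io

def parse_impressions(value):
    """Turn a banded impression string into (lower, upper) bounds.

    The band formats here ("<1000", "1000-4999", ">1000000") are
    hypothetical examples of the kind of ranges described above.
    """
    value = value.strip().replace(",", "")
    if value.startswith("<"):
        # "<1000" could mean anything from 0 to 999 impressions.
        return 0, int(value[1:]) - 1
    if value.startswith(">"):
        # Open-ended upper band: no upper bound at all.
        return int(value[1:]) + 1, None
    lo, _, hi = value.partition("-")
    return int(lo), int(hi)

# A toy export in the spirit of the archives researchers work with.
sample = io.StringIO(
    "ad_id,impressions\n"
    "a1,<1000\n"
    "a2,1000-4999\n"
    "a3,>1000000\n"
)
bounds = {row["ad_id"]: parse_impressions(row["impressions"])
          for row in csv.DictReader(sample)}
```

After parsing, the ad `a1` is pinned only to the interval (0, 999) — it may have been seen 998 times or never, which is exactly the ambiguity the paragraph describes.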

3. Can we be confident that pro-transparency measures are effective?

It is crucial to verify whether nominal pro-transparency measures are having a positive effect. For example, many platforms provide some kind of label that indicates who paid for a political ad. This is an effort to increase transparency, but do the labels being used accomplish that? Facebook has been criticized for its lax advertiser verification requirements that allow advertisers to hide their identity behind shell pages. In this example, Students for Life, an anti-abortion advocacy group, is running ads through a page innocuously called “standingwithyou.org.”

4. Will these measures be enforced?

Are the tools built by the platforms suitable to deliver on their stated transparency goals? Researchers at the Online Political Transparency Project were surprised to see that ads containing Joe Biden’s name and image were not being picked up as “political” by Facebook’s AI. They were only able to determine this through setting up their own Ad Observer browser extension. How can we know that the tools offered by platforms are working as they are meant to? Platforms could provide more transparency around the methodology used to create these tools, so researchers could audit them for potential issues or errors.

5. Will they be evenly enforced?

A January 2021 study from Privacy International suggested that heightened transparency standards are unevenly applied around the world. Authors dubbed this the “transparency divide.” The 2020 U.S. presidential election saw unprecedented measures taken by the platforms that far outweighed their efforts elsewhere. Facebook, for example, publicized what was at the time its largest effort to date to, it said, protect the election’s integrity. At the same time, in India’s Bihar state, with a population of around 104 million people, a critical election for the state legislature garnered no blog posts or announcements from Facebook about protecting its integrity. Facebook and Twitter treated the rampant misinformation during these two elections differently, labeling more misleading posts in the U.S. than in India. Transparency measures must meet equal standards globally and be subject to the same levels of enforcement.

6. Is the data reliable?

Researchers have consistently reported errors in the data provided as part of transparency efforts. For example, during the 2019 election in the U.K., thousands of ads went missing from the Facebook ad archive because of an error. Similar complaints were made about Google’s ad archive in the U.S. in 2019. What mechanisms are in place to ensure the data we’re getting is reliable?

There is good reason to be skeptical. In 2019, Facebook agreed to pay $40 million to settle a lawsuit alleging that it had concealed inaccuracies in its video view metrics that led to a massive and misguided industry shift. Media outlets laid off print staffers in favor of investing in video content based on incorrect information. Why should we take Facebook’s data at face value now? Without independent oversight, there is no reason researchers should consider the data from platforms to be reliable.

7. How does transparency direct our attention?

A new tool for transparency auditing is an exciting thing for researchers, and so it is only right that it should become the subject of academic and journalistic research. But what is being missed when we focus on a particular type of information because of the transparency measures behind it?

Take, for example, how the increased access to information around ads marked by social media platforms as “political” has meant that less attention is paid to non-political or commercial advertising. Facebook has given researchers unprecedented access to advertising data around the 2020 U.S. election, possibly the most scrutinized campaign to date. What about elections where that level of oversight was not in place? This concept is neatly captured as a “feature bias” by our colleague Tommy Shane. The features to which we already have access influence our perspective and, therefore, what we study.

8. What’s transparency for?

Kate Dommett, a lecturer at the UK’s University of Sheffield who studies digital campaigning, wrote in Policy and Internet about calls for more transparency in her field of study in the U.K. She found that “despite using common terminology, calls for transparency focus on the disclosure of very different types of information.” Some organizations were calling for financial transparency, others for transparency around targeting data, and only some considered the specifics of how this information would be presented.

Dommett’s research illustrates the pitfalls of demanding transparency for its own sake. When researchers and advocates aren’t specific enough about the outcomes desired, platforms are able to provide an incomplete form of “transparency” as a fig leaf that blunts the political will for positive change. Take, for example, calls for transparency in political spending. If the desired outcome is to monitor the spread of particular messages, and social media companies only offer ad spending data, and not information about impressions and engagement, there are gaps we must seek to fill. Transparency is a tool, not an end in itself; we must reflect carefully on what we want to achieve when we call for it. If we don’t, we’ll keep falling into the trap of false transparency.

Madelyn Webb is an investigative researcher at First Draft. Bethan John is a social media journalist at First Draft. This story originally ran on First Draft’s Footnotes, “a space for new ideas, preliminary research findings, and innovative methodologies.”

Photo by Michael W. May used under a Creative Commons license.

How Yahoo News reached 1 million followers on TikTok in 1 year
https://www.niemanlab.org/2021/03/how-yahoo-news-reached-1-million-followers-on-tiktok-in-1-year/
Wed, 10 Mar 2021 15:08:33 +0000

Picture Yahoo users and you probably envision a group that’s older and a bit less digitally savvy than those relying on, say, Google’s suite. (The research says you’re not wrong.) On TikTok, in contrast, 63% of users are younger than 30 — including 33% still in their teens. So you might be thinking: Yahoo News? On TikTok?

But here’s the interesting thing. Yahoo News (which has long been one of the most-used online news brands in the U.S. and around the world, even if you don’t know anyone who uses it) isn’t just on TikTok — it’s one of the most popular news organizations on the platform. The Yahoo News account now has 1.1 million followers, including many, presumably, who are encountering the brand for the first time. Launched one year ago this month, the account is outpacing CBS News (947,000 followers), USA Today (895,000), The Washington Post (894,000), NBC News (644,000), and plenty of others. (It’s behind NowThis News, which has 2.5 million followers, and The Daily Mail, with 1.5 million.)

The account — started and run by Yahoo News special projects editor Julia Munslow — isn’t afraid to embrace humor, starting with a tongue-in-cheek bio: “Yes, we still exist.”

The content leans on fairly straightforward news coverage and service journalism geared toward the platform’s younger audience. In recent days, for example, there have been posts about the Covid-19 stimulus bill, a “360” look at raising the federal minimum wage, and early career advice pegged to Women’s History Month.

I wanted to know more, so I talked to Munslow and Joanna Lambert, head of consumer at Verizon Media, about why Yahoo News joined TikTok in the first place, combating news fatigue, and what they think Gen Z wants from a news organization.

What’s the big-picture strategy?

Yahoo News maintains a newsroom of about 40 and also aggregates news from more than 100 partner sites. It does not have a subscription product and the homepage is a fairly straightforward list of news stories with ads interspersed. (On a recent weekday, the top ones were “Conjoined twins share appalling news after nine years,” “Food you shouldn’t buy under any circumstances,” and “Hilarious tattoo fails (#12 may never be employed).”) The company, naturally, pays close attention to traffic trends, starting with the fact that more and more web traffic has been coming from people using mobile devices.

“We realized the way news is consumed has changed dramatically and we needed to evolve the way we meet new customers, and especially to meet them where they are,” said Joanna Lambert, head of consumer at Verizon Media. “In 2020, we put an intentional strategy in place to think about how to be mobile-forward and how to be on new platforms that could be more engaging with audiences. A key part of that strategy has been around TikTok.”

2020, of course, wound up being an outlier year. There was an unexpected uptick in desktop usage as many people stopped commuting or straying far from home during the pandemic. Still, Yahoo saw a 24% increase in mobile web and app users compared to 2019. It wasn’t the only good news for Verizon, which owns Yahoo News as well as sister sites like Yahoo Finance. In January, Verizon reported its first quarter of revenue growth since acquiring Yahoo for $4.48 billion in 2017, fueled by a 25% increase in ad revenue compared to the previous year and an 11% increase in daily active users for its news products.

After years of cutbacks and the highlight at quarterly meetings being that the “declines are declining,” the company feels like it’s found a workable strategy. “We expect the evolution to continue to be on mobile,” Lambert said. “That’s where we’re investing our resources now.”

Lambert, despite her corporate title and 30,000-foot view, seems to sincerely appreciate the cheekiness of the account, even when it touches on Yahoo’s reputation as something of a digital dinosaur. “I actually love the ‘Yes, we still exist’ line. It’s fun, and it takes a dig at the fact we’ve been around for a long time,” she said. “It says we’re committed to staying relevant today and into the future.”

“For Gen Z, by Gen Z”

Munslow, previously an intern and associate editor at Yahoo, had recently returned to the company following a year teaching in Malaysia. Her students introduced her to TikTok and she saw firsthand that, in addition to being a low-stakes way to practice conversational English, the platform was accessible, flexible, and, importantly, fun to use. “I think we have to get on TikTok,” she told her manager on her first day back.

Yahoo knew its TikTok strategy had to be more than a copy-and-paste job from its website, but the team wanted to make sure it was introducing itself as a nonpartisan (“Yahoo purple” was mentioned), fact-based provider of news to users who may be encountering it for the first time.

“I think we’ve been very successful building our brand and audience on TikTok because we’ve been very authentic with the audience about who we are and what we offer,” Lambert said. “We didn’t take our Yahoo News content and just sort of transport it to a different platform. On TikTok, we are news for Gen Z delivered by Gen Z from the trusted Yahoo newsroom. What we’ve been able to do is provide that fact-based news, but inject an element of fun and engagement into it.”

The Yahoo News TikTok account ended up launching on Super Tuesday after a late night brainstorming the best way to promote the newsroom’s election coverage to new audiences. But before the launch, Munslow said she logged hours looking at what other news organizations were doing on the platform and immersing herself in the larger TikTok community.

It was important to Munslow — who, at 24 years old, qualifies as a member of Gen Z — that Yahoo’s voice on the platform was both highly accessible and primarily journalistic.

“We really wanted to focus on delivering the news in a trusted, but conversational, way,” she said. “We wanted to break down the news in a way that matters to them. We’ve honed in on the topics we know they care about and we’re delivering the news in a way that feels accessible to Gen Z.”

So what are the topics that Gen Z cares about?

Research indicates they’re more politically engaged but less bound by party labels than preceding generations, and Munslow said that in terms of hard news coverage on TikTok, nothing beats politics. Yahoo’s coverage of the presidential inauguration and a two-parter on Biden’s “Day 1” actions, for example, pushed its account over the one-million-follower mark, earning it 92,000 new followers in 24 hours.

@yahoonewsHere’s what Biden did on Day 1 in office, part 2. @juliamunslow #news #politics #biden #yahoonews♬ original sound – Yahoo News

TikTok videos that touch on topics like climate change, social justice, student loan debt, and personal finance do well, too. Munslow said witnessing multiple recessions has made Gen Z interested in budgeting, investing, and money more generally.

“In the 2008 recession, they watched parents lose jobs, and their family friends lose jobs. And now, we’re in a pandemic that has brought another huge blow to the U.S. economy,” Munslow said. “Personal finance is huge on TikTok. We know they really care about that.”

The account doesn’t shy away from heavy material. Yahoo News has posted more than 30 videos covering the violent riot at the Capitol in early January. Its on-the-ground reporting in D.C. included the clip of “Elizabeth from Knoxville” that has been viewed upwards of 31 million times.

To combat news fatigue and anxiety, it’s also put “mental health breaks” in its timeline that encourage users to breathe, take walks, and periodically tune out. In a related series, Yahoo editor Gabrielle Sorto does soothing activities — takes a coffee break, paints a vase, bakes cookies — while recapping some feel-good headlines.

But, also? It’s not that serious.

“TikTok is a place for fun and for creativity,” Munslow said. The Yahoo News account doesn’t go in for dance videos. (“Part of that is because I’m not, uh, the world’s strongest dancer,” she said.) But it’ll join in on other trends.

Different content appears on each user’s “For You” page. You might get more vegetarian cooking content, while someone else sees more Jayson Tatum highlights. To get the full picture, the team relies on a Slack channel where Yahoo News staffers active on TikTok drop trends and other ideas for the account.

When videos using hypothetical group chats started popping up everywhere, for example, the team used the format to explain where various Republican leaders stood on impeachment.

@yahoonewsHow Republicans generally stand on impeachment, explained through a hypothetical group chat. #politics #impeachment #news #groupchat #yahoonews♬ original sound – Yahoo News

TikTok makes it easy to reuse sounds, or music, from other videos. (Think of the sea shanties that went viral earlier this year.) Incorporating a trending clip is another way that Yahoo News finds new audiences. The team recently used a sound (“The Juice by Mr Fingers”) that’s been featured in nearly 27,000 videos for a TikTok that follows a journalist taking her mother to get a Covid-19 vaccine.

@yahoonewsCome with me as I take my mom to get her COVID-19 vaccine in California! @angelaishere #news #covid19 #vaccine #covidvaccine #yahoonews♬ The JUICE by MR FINGERS – Han

Every TikTok, no matter how playful, goes through a formal editorial process.

Munslow typically starts the day reviewing footage available to use through partnerships and what’s produced by the Yahoo News newsroom. She compiles a list of topics or articles that could be turned into TikToks. Once selections have been made and footage has been edited, Munslow or another editor will screen record the draft in TikTok, including captions, hashtags, title, and the video itself. That recording winds up back in the dedicated Slack channel where editors check facts and ensure the caption and subtitles are typo-free.

If the team is creating an original video — like the Yahoo 360 videos representing “every side” of an argument — the script goes through an editing process before it’s recorded.

@yahoonewsShould the minimum wage be raised to $15? Here’s a look from every angle, with a 360. #minimumwage #yahoonews360 #yahoonews #politics #news #jobs♬ original sound – Yahoo News

One other tip? Munslow highly recommends using subtitles and captions.

“It makes it easier for your audiences to understand what’s happening in the video and makes the video more accessible for all audiences,” she said. “Subtitling has proven to really help us when we’re going through the editing process and making sure our content is accurate.”

Rethinking engagement — and continuing to experiment

Yahoo News — like a number of other news organizations — has done away with its comments section. That was not a particularly popular move. (The first several pages of Yahoo News’ feedback forum show users asking for comments to return.) But, on TikTok, Munslow says she prioritizes engagement and encourages comments and questions.

“We look at our comments a lot, and we try to comment back to them as much as possible, whether it’s to answer a question they have about a news topic or something more fun and light and conversational.” She has clarified what’s in a particular piece of legislation and, when asked, named her favorite cookie (“white chocolate macadamia”). “It shows we’re humans, we care about their lives, and what’s relevant to their lives,” Munslow said.

Ultimately, reaching more than one million followers within a year has meant trial and error, Munslow said. What resulted in a viral TikTok this week might not work the following week — or for a different account. Sometimes videos — like a series that answered questions about Trump’s second impeachment — don’t do as well as she expected. It’s all part of the process.

Besides trying to keep videos under 30 seconds, Munslow said she has no foolproof tips for news organizations beyond testing different videos and formats themselves.

“I have been trying to figure out the algorithm since we got on,” she said. “I think the biggest pieces of advice I can give to any organization, even outside of news, trying to get on TikTok is to experiment.”

]]>
https://www.niemanlab.org/2021/03/how-yahoo-news-reached-1-million-followers-on-tiktok-in-1-year/feed/ 0
A lot of Americans get news from social media, but they don’t expect it to be true https://www.niemanlab.org/2021/01/a-lot-of-americans-get-news-from-social-media-but-they-dont-expect-it-to-be-true/ https://www.niemanlab.org/2021/01/a-lot-of-americans-get-news-from-social-media-but-they-dont-expect-it-to-be-true/#respond Tue, 12 Jan 2021 18:57:32 +0000 https://www.niemanlab.org/?p=189847 While about half of Americans “sometimes” or “often” get news from social media, they say they remain skeptical about what they’re seeing, according to a new Pew Research Center report on news consumption on social media platforms.

During a year full of misinformation, from vaccines to voter fraud, Pew surveyed 9,220 U.S. adults between August 31 and September 7 about 11 different social media platforms. Of those who get news on social media at least “sometimes,” 59 percent said they expect the information they find there to be inaccurate, a sentiment that remains unchanged from 2019.

(Normally, an annual Pew report on Americans’ use of social media for news would include lots of year-over-year comparisons — TikTok’s up, Snapchat’s down, that sort of thing. But Pew recently changed how it asks people about news consumption, in an attempt to better capture the diffuse nature of digital media, and as a result, comparisons to many data points from previous years aren’t reliable. Consider this a snapshot more than a dot on a fever chart; the full comparisons can start next year.)

Does news on social media help people better understand current events? Only 29 percent said yes; 47 percent said it didn’t make a difference, and 23 percent said it actually left them more confused.

The biggest social platforms for news aren’t surprises: Facebook, YouTube, Twitter, and Instagram, in that order. But the demographic makeup of each site’s users differs. “White adults make up a majority of the regular news users of Facebook and Reddit but fewer than half of those who turn to Instagram for news,” according to the report. “Both Black and Hispanic adults make up about a quarter of Instagram’s regular news users (22% and 27%, respectively). People who regularly get news on Facebook are more likely to be women than men (63% vs. 35%), while two-thirds of Reddit’s regular news users are men.”

On Facebook, 46 percent of its regular news consumers lean Republican versus 50 percent who lean Democratic. The platform with the most one-sided political leanings is Reddit, where 21 percent of its regular news consumers lean Republican against 79 percent who lean Democratic.

Find the full report here.

]]>
https://www.niemanlab.org/2021/01/a-lot-of-americans-get-news-from-social-media-but-they-dont-expect-it-to-be-true/feed/ 0
Anti-Racism Daily is a newsletter that helps you read the news and do something about it https://www.niemanlab.org/2020/09/anti-racism-daily-is-a-newsletter-that-helps-you-read-the-news-and-do-something-about-it/ https://www.niemanlab.org/2020/09/anti-racism-daily-is-a-newsletter-that-helps-you-read-the-news-and-do-something-about-it/#respond Wed, 16 Sep 2020 12:38:10 +0000 https://www.niemanlab.org/?p=186032 If you have an Instagram account, you may have seen this post when you were going through your friends’ Stories:

Along with naming racism and injustices as such, Cardoza makes sure that Anti-Racism Daily doesn’t contribute to harmful and insensitive media practices. She doesn’t publish body-camera footage from police shootings, mugshots, or any other visuals that depict the suffering of people of color and marginalized people.

“I know how difficult it is to read news that isn’t taking into account that toll that it has on our bodies,” Cardoza said. “There’s plenty of people that are going to share the videos and plenty of people that are going to report on people as if they’re bodies and not humans and souls. This platform is not designed to make entertainment out of pain and suffering of communities of color. I don’t write this space for white people to become engaged and informed. I write this space to help protect and center the needs of those most vulnerable.”

The email newsletter is free to subscribe to, though Cardoza encourages people to donate in a variety of ways to help with the upkeep of the product. There’s a monthly subscription on Patreon or a one-time contribution option. People can also donate through Venmo and PayPal. There’s no business model yet in terms of making the products profitable, as Cardoza and the guest writers all work on Anti-Racism Daily on a volunteer basis (though there are three part-time remote positions posted on the website for a graphic designer, a reporter, and an editor).

Cardoza does, however, offer team subscriptions to ARD for workplaces and classrooms. The group subscription includes the daily emails, weekly discussion guides, and monthly engagement reports of the team’s participation, with open rates and the actions taken. A subscription for a team of two to 10 people is $360 while a subscription for a team of 400 or more is $7,200.

Still, Cardoza said she doesn’t wake up thinking about ARD’s bottom line, and hopes to keep it that way.

“The only true benchmark of success is whether or not racism has ended in America and around the world, and we’re still far away from that.”

Photo by Oleg Laptev on Unsplash.

]]>
https://www.niemanlab.org/2020/09/anti-racism-daily-is-a-newsletter-that-helps-you-read-the-news-and-do-something-about-it/feed/ 0
People who get their news from social media are less knowledgeable about politics and coronavirus, and more likely to consume misinformation https://www.niemanlab.org/2020/07/people-who-get-their-news-from-social-media-are-less-knowledgeable-about-politics-and-coronavirus-and-more-likely-to-consume-misinformation/ https://www.niemanlab.org/2020/07/people-who-get-their-news-from-social-media-are-less-knowledgeable-about-politics-and-coronavirus-and-more-likely-to-consume-misinformation/#respond Thu, 30 Jul 2020 16:38:26 +0000 https://www.niemanlab.org/?p=185003 Ahead of the 2020 U.S. presidential election, a new report from the Pew Research Center is a window into what news consumption looks like for people who primarily get their news from social media.

According to Pew, Americans who mostly rely on social media for news are less knowledgeable about politics and the coronavirus pandemic, and less engaged. Eighteen percent of U.S. adults “primarily” get their political and election news from social media, while 25% use news websites and apps. Just 3% said they get such news mainly via print.

Chart shows about one-in-five U.S. adults say they get their political news primarily through social media

Pew conducted the study between October 2019 and June 2020 through five surveys as part of the Center’s American News Pathways project, which studies how news consumption habits impact the ways that people hear and perceive the news. It found that the people who rely on social media for news skew younger (48 percent were between ages 18 and 29), are less likely to be white, and tend to have lower household incomes and lower levels of education, though Pew notes that this can be attributed to the fact that they’re younger.

Chart shows those who get most of their political news from social media more likely to be younger adults, less likely to be white

“As of early June this year, just 8% of U.S. adults who get most of their political news from social media say they are following news about the 2020 election ‘very closely,’ compared with roughly four times as many among those who turn most to cable TV (37%) and print (33%),” the report says. That same group is also the least likely to be following coronavirus news very closely (23 percent). Here’s how Pew identified who is “knowledgeable”:

Across the nine months of study and five separate surveys, respondents were asked 29 different fact-based questions that touch on a variety of topics related to the news, from economics to Donald Trump’s impeachment to the COVID-19 outbreak and more (see Appendix for details). Across these 29 questions, the average proportion who got each question right is lower among Americans who rely most on social media for political news than those who rely most on other types of news sources, except for local TV.

Other findings include:

  • Social media news consumers are more likely (68%) than most to report seeing made-up news related to the coronavirus pandemic.
  • About 81% of social media news consumers had heard at least a little bit about the conspiracy theory that the pandemic was planned.
  • Only about a third (37%) of these social media news consumers are “very concerned” about the impact of misinformation on the 2020 election.

Read the full report here.

]]>
https://www.niemanlab.org/2020/07/people-who-get-their-news-from-social-media-are-less-knowledgeable-about-politics-and-coronavirus-and-more-likely-to-consume-misinformation/feed/ 0
Newsonomics: As McClatchy teeters, a new set of money men enters the news industry spotlight https://www.niemanlab.org/2019/11/newsonomics-as-mcclatchy-teeters-a-new-set-of-money-men-enters-the-news-industry-spotlight/ https://www.niemanlab.org/2019/11/newsonomics-as-mcclatchy-teeters-a-new-set-of-money-men-enters-the-news-industry-spotlight/#respond Tue, 19 Nov 2019 15:38:19 +0000 https://www.niemanlab.org/?p=176950 Whenever the definitive history of daily newspapering is written, 2019 will be recorded as a major turning point.

Today, one year-long drama comes to a close, or at least an act break: New Gannett, the result of America’s two largest newspaper companies merging, will become a reality as the deal reaches final closure. But another drama has hit the stage to take its place.

Yesterday afternoon, this was the Bloomberg headline flashing across your news feeds: “Newspaper Publisher McClatchy Teeters Near Bankruptcy.” McClatchy, of course, being the second largest newspaper publisher in America, after New Gannett.

That headline is an attention-getter, and it contains some truth. You say teeter, I may say totter — either way, there’s a whole lot hanging in that balance. But there’s some nuance in this new drama — one of many to come from the past decade’s conversion of news companies into financial instruments stripped of civic responsibility by waves of outside money men.

After all, when we talk about newspaper companies, we typically use their corporate names — Gannett, GateHouse, McClatchy, MNG, Lee. But it’s at least as appropriate to use the names of the hedge funds, private equity companies, and other investment vehicles that own and control them. It’s Fortress Investment Group that’s taking control of New Gannett today; it’s Apollo Global Management supplying the debt that let the merger happen (and first in line to skim whatever cashflow New Gannett can produce). It’s Alden Global Capital that has strip-mined MediaNews and Digital First papers, become the industry’s bête noire, and provoked drama across the industry.

With McClatchy’s troubles — its share price collapsed last week, down 82 percent across five trading days — a new financial player steps to center stage: Chatham Asset Management.1 However McClatchy gets reorganized — and now it must be, one way or another — Chatham, the company’s biggest lender and shareholder, is in the driver’s seat.

While it’s easy to think of money men monolithically, they’re not. Each brings a particular view about the 2020s newspaper business, a particular theory of the case to their investment. But the lenses through which they view it are quite different from those of traditional newspaper company executives. I haven’t found any, for instance, who believe there’s a growth story for local news. As a result, their question is: What’s the best way to operate distressed companies in a distressed industry? How do you manage, and budget for, businesses on the way down?

Leaving this woeful decade and entering a new one, it’s the financial players that have the firmest grip on what these companies will become — what will be defined as news, and who will be employed to report it.

A McClatchy reorganization could end up with a company quite similar to today’s with Craig Forman remaining as CEO. Could. It could also mean a change at the top. Forman has to thread some narrowing financial needles.

One other likely outcome: a merger with its most fitting partner, Tribune Publishing. Last week, I reported that the two companies are once again in early talks, reprising one of the more durable storylines in the Consolidation Games. I first reported that potential merger in September 2018; the two came close to a deal last December, but it fizzled.

While a deal in the coming months is possible, given the past week’s events, Tribune seems more likely to wait out McClatchy’s drama and see what comes out on the other side. The idea: Let McClatchy reorg itself, even via bankruptcy, and tidy up its balance sheet by shedding debt. Then merge. That would create a stronger merged company, the theory goes.

Who might become the CEO of a merged Tribune–McClatchy — Tribune CEO Tim Knight or McClatchy’s Forman? The industry joke is that the loser of that potential power struggle will get to run it; the winner will head home with a nice departure package.

Here’s what we know at this point, with the help of confidential sources close to all the dealings.

Start with that teeter-totter. McClatchy sits on one end, bloated with more than $700 million in debt. That’s way down from the more than $5 billion it saddled itself with by acquiring Knight Ridder in 2006 — a disastrous deal, burdened with a price-tag held over from the good old days, just as they were ending. But it’s still $700 million.

Who’s sitting on the other end? Now that’s a good question.

The federal Pension Benefit Guaranty Corporation has taken a seat. Congress set up the PBGC in 1974 as a pension payment backstop, taking over pension liabilities when companies find themselves unable to fulfill their commitments.

McClatchy, both directly and through its three consulting companies — Evercore Group LLC, FTI Consulting Inc., and Skadden, Arps, Slate, Meagher & Flom LLP — began talking with PBGC when the Internal Revenue Service jumped off that teeter-totter. See, McClatchy had asked the IRS for a waiver that would let it skip its upcoming pension plan payment, due next fall, arguing that the fund, with $1.32 billion in hand, was in good-enough shape to handle it. McClatchy also made clear that its financial condition would make it hard to make that $124 million payment. (The company currently has fewer than 2,800 employees — and more than 24,000 pensioners.)

The IRS denied the waiver. That set up the conversation with PBGC.

Across the teeter-totter, McClatchy pitches PBGC this plan: You guys take over our pension plan, via a “distress termination,” and take the funding we have in it. We’ll work out a new defined and limited payment plan over time. The PBGC benefit is that it gets some new money over time from McClatchy, which it wouldn’t get in a bankruptcy. For McClatchy, the benefit is in reducing, and then capping, its cash outflow.

Such a PBGC agreement could be a very good thing for McClatchy, freeing up cashflow to invest in its digital transition. So if those talks are in progress, and that payment to the pension plan isn’t due until next fall, then why is that dreaded word — bankruptcy — getting thrown around now?

McClatchy, more through cost cutting than through improving revenues, continues to have positive cashflow. In its Q3 financials, it even posted a modest 4.6 percent (or $869,000) increase in EBITDA year over year. While that’s a milestone of sorts — the first increase in eight years — McClatchy made clear it doesn’t expect to report similar plus-earnings in the coming quarters. Over the first three quarters of 2019, it generated $64.9 million in adjusted earnings. Moreover, the company has largely kicked its next major debt payments down the road a ways, to 2026.

So, in the big picture, McClatchy is cutting, just like everyone else, and it can pay its bills — other than that pension contribution next fall. But the focus has been on the ominous language in some of its SEC filings, acknowledging potential illiquidity and difficulty maintaining ongoing operations. That language — though mostly standard fare, given the pension issue — has stirred the headlines.

But it’s also fairly clear that both a financial restructuring of the company and bankruptcy are among the outcomes the company and its three consultants are considering.

Why — and why now? If that big pension payment is still months away, why the new urgency over the past week?

Consider some of the moving pieces. If PBGC agrees to a “distress termination,” it will secure a priority position in claiming McClatchy’s assets should the company later go bankrupt. Inserting PBGC high up in the claim chain inevitably forces others — largely equity owners — to slip further down it. (What about those receiving the pensions? “The company believes under current regulations, such a solution would not have an adverse impact on qualified pension benefits for substantially all of its retirees.”)

In other words, a PBGC deal requires some financial restructuring.

Then, on the other side of that teeter-totter, PBGC can slide over, making room for McClatchy’s current lenders and shareholders. Most prominent among those: Chatham Asset Management. Chatham is both McClatchy’s largest lender and largest shareholder.

The name on the door says “McClatchy.” That acknowledges the family, which launched the company in California’s Gold Rush in 1857, and which on paper still “controls” the company through a two-tier share structure. In a lot of ways, though, the door may as well say “Chatham.”

Chatham is the most important player in that McClatchy war room, with all those consultants. Assuming there’s a PBGC deal, the war gamers plot out those two scenarios:

  • A financial restructuring of the company. That would put PBGC into the claim order and change others’ place in line.
  • A bankruptcy. This would be a pre-pack bankruptcy — meaning that, as the company filed for Chapter 11, it would already have at the ready a plan to emerge from that bankruptcy. A pre-pack saves a lot of time — and prevents hellacious traumas such as Sam Zell’s four-years-in-court bankruptcy from hell. That bankruptcy semi-paralyzed Tribune at a pivotal time, when it should have been focused on digital transformation.

In either case, the McClatchy family would likely see a reduced position in the company. A bankruptcy would likely mean the end of its control. So there’s a lot to balance here.

Importantly, this isn’t a new situation. When then-McClatchy CEO Gary Pruitt bought Knight Ridder in 2006, he paid $4.5 billion in cash and stock and assumed another $2 billion of Knight Ridder’s debt. It took out $3.75 billion in bank debt to make the deal work. (Knight Ridder was bigger than McClatchy, just as Gannett is bigger than GateHouse. In both cases, the smaller acquirer had to take on outsized debt to gain the larger target. Adding to the sting in McClatchy’s case was that it was the only company to even bid for Knight Ridder when it was up for auction.)

That McClatchy–Knight Ridder deal and Lee Enterprises’ acquisition of Pulitzer in 2005 turned out to be the most ill-timed of the era. Both Lee and McClatchy have struggled to deal with all that debt since.

McClatchy’s efforts on that front have been remarkable. It’s paid off more than $4 billion of its debt over a very tough period for the industry. And yet here it is, almost 15 years later, its very existence as a standalone company threatened by the debt that remains.

Money men taking center stage isn’t a new phenomenon either. Remember 2009, the bottom of the Great Recession? Newspaper revenues took a 19 percent hit that year — and it’s been all downhill since then, at varying trajectories, but with an alarming deceleration in revenues the past three years.

It was the Great Recession that largely opened the doors for those money men to enter. MediaNews, Tribune, Lee, and others all declared bankruptcy. Debt holders gained new sway and new equity. Companies like Angelo Gordon, Credit Suisse, Oaktree Capital, Alden, Apollo, and Fortress began showing up in SEC filings — and in company boardrooms.

Now, amid the tatters of a once-proud industry, they’re the ones who increasingly determine the fate of the primary gatherers of local news across the United States. Yes, there are a few contrarian independent papers — the ones with the well-known owners in big metros and the lesser-known smaller chains and single property owners. Those are largely still run by people with strong newspapering DNA. But across the wide expanse of this country, the papers still driven in part by a civic mission become fewer and fewer — and the ones that treat news providing as just another business grow in number month after month.

Photo of newsprint loading dock at McClatchy’s Sacramento Bee by Blake Thornberry used under a Creative Commons license.

  1. This story originally called the McClatchy lender and shareholder Chatham Capital rather than Chatham Asset Management. They’re different companies, and Chatham Asset Management is the one involved in McClatchy. We regret the error.
]]>
https://www.niemanlab.org/2019/11/newsonomics-as-mcclatchy-teeters-a-new-set-of-money-men-enters-the-news-industry-spotlight/feed/ 0
I’ve got a story idea for you: Storyful’s new investigative reporting unit helps publishers dig into social media https://www.niemanlab.org/2019/10/ive-got-a-story-idea-for-you-storyfuls-new-investigative-reporting-unit-helps-publishers-dig-into-social-media/ https://www.niemanlab.org/2019/10/ive-got-a-story-idea-for-you-storyfuls-new-investigative-reporting-unit-helps-publishers-dig-into-social-media/#respond Wed, 23 Oct 2019 13:47:10 +0000 https://www.niemanlab.org/?p=175897 The Islamic State on TikTok. Child sex abuse on Facebook. The trolling of Meghan Markle.

What do these stories — published by The Wall Street Journal, The Times of London, and Sky News — have in common, besides a certain grim 2019-ness? They all used data from social media agency Storyful’s nascent investigative unit, which officially launched this month as Investigations by Storyful with an expanded editorial team and an eye toward helping newsrooms mine social media for stories around the 2020 U.S. presidential election.

“The online and the offline world have melded,” said Darren Davidson, Storyful’s editor-in-chief. “[Social media] is changing and becoming the story in and of itself, in lots of ways.”

Storyful was founded in 2010 by the Irish journalist Mark Little, who’d witnessed the pivotal role of Twitter in the Arab Spring and wanted to help news organizations curate news stories via social media — capturing future big events when they were just in the bubbling-up stage and verifying information. The company was acquired by News Corp for $25 million in late 2013 and over the years has expanded to work with brands (rather than just newsrooms), verify user-generated video, help companies tackle misinformation, and train journalists.

Now Storyful is expanding its investigative capabilities with a focus on three categories: breaking news analysis, deep background (analyzing communities, accounts, and posts to see how information spreads and stories evolve), and network mapping (looking at which people and groups drive online conversations and the relationships between those groups). “The advantage we offer is speed and scale,” Davidson said, the ability to spot trends quickly and broadly. (He noted that Storyful doesn’t have access to any private data: “We don’t infiltrate, from a technological or ethical point of view. Everything we look at is public.” Storyful can’t, for instance, see links being shared in private WhatsApp chats, though it tracks the URLs shared in public WhatsApp groups.)

Storyful, which has offices in New York, London, Dublin, and Sydney, has more than 150 employees globally, including 55 journalists — 11 of them hired specifically for its investigative product launch, with one more hire on the way. (The company identified seven of them for me: Samuel Oakford, formerly of Bellingcat; Hava Pasha, formerly of Fox News; Catherine Sanz, who was at The Times of London; Laura Silver from BuzzFeed U.K.; Eoghan MacGuire from CNNi; Peter Bodkin from Journal Media; and Richard James of BuzzFeed U.K.) The company has also rearranged its newsrooms so that more of its journalists are in the U.S. ahead of the election.

Storyful clients work with the company by buying a “bank” of hours at a time; they can then draw down on those hours to enlist Storyful’s help on articles. (The company wouldn’t discuss its fees.) The Investigations unit has worked with a handful of newsrooms so far, including The New York Times and The Wall Street Journal. Davidson described the work process as “collaborative”: in some cases, the newsrooms approach Storyful with specific ideas or topics they need researched; in other cases, Storyful has “ideas that we look to find a home for” — that was the case, for instance, with the Journal’s story on gun sales on Facebook Marketplace. “We’re coming in as an outside collaborator and partner.”

While Storyful is eager to describe itself as a partner, the news outlets that work with it vary in their attribution of the company.

“They provide some additional research for projects we do. I wouldn’t describe it as a partnership,” said Malachy Browne, a senior producer of visual investigations at The New York Times, whose unit has worked with Storyful. “They have good news gathering and discovery tools across social platforms and good verification skills; we have that expertise in our unit as well.” In the case of a story about missile attacks in Saudi Arabia, for instance, the news broke on a Sunday night in New York; the Times used Storyful to initially research reliable tweets and YouTube videos, then narrowed down the material it was going to use. I found a number of Times stories with Storyful cited in the photo credits, as well.

The Wall Street Journal, by comparison — which, like Storyful, is owned by News Corp — generally gives more credit to its use of Storyful in its reporting. In its story this week about the Islamic State’s use of TikTok, for instance, Storyful gets multiple citations and a photo credit. But despite sharing a corporate parent, Storyful is still editorially independent from the Journal and other clients, the company stressed. It didn’t specify the arrangements that it has with other News Corp companies, but said “all clients are treated equally and the terms of service are based on their specific needs.”

“This is about diving into online communities and piecing a puzzle together,” said Davidson. “That’s something that newsrooms struggle to do, due to the complexity of the task and the time that it takes.”

Photo by Daniel Lincoln.

More Americans than ever are getting news from social media, even as they say social media makes news “worse”
https://www.niemanlab.org/2019/10/more-americans-than-ever-are-getting-news-from-social-media-even-as-they-say-social-media-makes-news-worse/
Wed, 02 Oct 2019 14:00:56 +0000

U.S. adults are getting news from social media increasingly often — but they also think that the big platforms have too much control over the news they see and that this results in a “worse mix of news” for users, according to a study of 5,107 people out Wednesday from the Pew Research Center.

Both Democrats and Republicans are concerned about social media companies’ power over the mix of news presented, but Republicans are significantly more skeptical — which probably isn’t surprising considering that Republican politicians including Donald Trump have repeatedly demagogued the issue, claiming that platforms like Facebook and Twitter systematically censor conservative news. Both sides bemoan “one-sided news” and “inaccurate news,” but after that there are splits in what people find concerning:

Republicans and Republican leaners are more likely to see censorship of the news as a very big problem on social media (43%) than Democrats and Democratic leaners (30%). Democrats, on the other hand, are about twice as likely as Republicans to say that harassment of journalists is a very big problem (36% vs. 17%).

Nearly half of those surveyed — 48 percent — also say that the news posts they see on social media are either liberal or very liberal; Republicans are more likely to say this than Democrats. This finding is slightly confusing, since on social media people can filter the news that they see; a Republican, for instance, can fill their feed almost entirely with news from conservative sources if they want to. It’s possible that respondents are conflating their impressions of their own social media feeds with their impressions of “the media” in general.

Regardless of the complaints, Americans’ use of social media to get news appears to be on the rise, despite some recent evidence it had receded a bit. Twenty-eight percent of respondents say they “often” get news from social media, up from 20 percent in 2018. And Facebook remains “far and away” the most common social source of news: 52 percent of respondents get news there.

Social media is distorting the representation of women in Africa. Here’s what can be done about it
https://www.niemanlab.org/2019/08/social-media-is-distorting-the-representation-of-women-in-africa-heres-what-can-be-done-about-it/
Thu, 08 Aug 2019 13:15:16 +0000

When computer technology made electronic communication possible, the “new media” emerged: email, chat rooms, blogs, YouTube, Twitter, Facebook and so much more. It looked, perhaps, like a fresh new public space in which to represent women in new ways. But it has turned out to be just like old, conventional media. It emphasizes gender norms and portrays women as sex objects, morally deficient, and vulnerable.

The question this raises is: do we even need new media in African countries?

Any country that tried to ignore the global force of new technology, including new media, would be left behind. It makes communication faster, easier and often more personal. It reaches a wider audience. These forms of communication are already part of daily activities in most African countries, though people are largely consuming content produced elsewhere instead of producing their own.

But is it a strong tool for women or is it working against them?

I conducted research into feminist ethics in the age of new media in Africa. I found that little has changed. The new media continues in the ways of the old conventional media — that is, it supports patriarchy and negative portrayal of women.

Women still don’t have any agency in the way the mass media represents them. They become images of old, fixed ideas about femininity and masculinity. They serve dominant ideas about capitalism and consumerism. The media environment continues to keep women down.

It’s important that feminists engage with these challenges. They can do this by criticizing, analyzing and, when necessary, replacing traditional categories of moral philosophy. This will go a long way in eradicating the misrepresentation, distortion, and oppression that has resulted from the historically male perspective that is frequently reinforced by the media.

How new media portrays women

I analyzed content across selected online news platforms, advertisements and blogs and found that women were more misrepresented than underrepresented in the new media. They were highly objectified, especially through the use and widespread dissemination of memes. They were also exploited through sexting, pornography, online dating, and matchmaking.

In addition, women didn’t appear often in the news. When they did, they were often portrayed as troublemakers, or being abused and repressed. Another example is that online dating sites have evolved into their own public sphere where women are put up like a line of cows for the picking. This is simply a regurgitation of gender stereotypes.

Likewise, the introduction of software for altering pictures and manufacturing images has allowed for the misrepresentation of women’s bodies. This raises fundamental questions on truth and objectivity in the portrayal of women’s images in the new media space.

And then there is the negative stereotyping that continues unabated. Advertisements, photographs, anti-feminist websites and blog spots, fashion, modeling, and music videos objectify and patronize women on a plethora of digital platforms.

The feminist ethics lens

I have taken a feminist ethics approach to counter the distortion and oppression of women in Africa’s new media. My approach is an ethics of vigor which takes risk, care, control and justice into equal consideration when discussing female representation.

Africa is an extremely diverse continent. This makes it difficult to arrive at feminist ethical positions which can enjoy continent-wide acceptance. But there are some issues that cut across all countries. For example, there are concerns about the flow of undesirable communication including pornography and violence against women, which contaminate the values and morals of their societies.

Where states have common concerns it is important to communicate homegrown feminist ethics so as to juxtapose perspectives, preserve moral social relations, and defend women on the continent.

But to do this, attention must be paid to how African women conceive of themselves in the digital age, and how they want to be portrayed. This will require African feminists to clarify the concepts of morality, obscenity, sexism, and classism to better reflect African cultural norms.

Where to from here

The new media can only contribute to peace, security, democratic development and gender sensitive societies in Africa if African feminists guide the users of new media towards positive goals.

Doing so will depend on several things, including women’s access to the internet, better education about new media in conventional media, and a stronger presence in new media of African feminists. Above all, African feminists must produce the knowledge that they want others to consume across media.

Sharon Adetutu Omotoso is a lecturer at the University of Ibadan in Nigeria. This article is republished from The Conversation under a Creative Commons license.The Conversation

Image of a woman in Lekki, Lagos, Nigeria by Eyitayo Adekoya used under a Creative Commons license.

Can our corrections catch up to our mistakes as they spread across social media?
https://www.niemanlab.org/2019/03/can-our-corrections-catch-up-to-our-mistakes-as-they-spread-across-social-media/
Fri, 15 Mar 2019 15:10:05 +0000

During the second week of February, the Fort Worth Star-Telegram published a column that turned out to be wrong. What happened next was the catalyst for an experiment in journalistic transparency that we believe has huge potential: moving corrections along the same social-media paths as the original error.

As Star-Telegram columnist Bud Kennedy explained in a subsequent piece — the original was taken down — he’d based his commentary on what appeared to be solid reporting from another newspaper, which had based its story on government records that were, in fact, incorrect. (Read Kennedy’s column — entitled “A columnist’s apology to Dan Patrick: The tax records were wrong. So was I” — to get the context.)

The original piece, harshly critical of a Texas politician, was shared by a number of people on social media. Some of them had substantial followings.

Kennedy’s column-length correction — in which he implored his readers to spread the word that the original was incorrect — caught the attention of David Farré in the newsroom of The Kansas City Star, a sister news organization in the McClatchy news group. We at the News Co/Lab have been working with three McClatchy newsrooms, including Kansas City’s, on embedding transparency and community engagement into the journalism.

Making accurate information chase the inaccurate

One essential element of transparency is doing corrections right, and the Star has been updating and upgrading corrections for the 21st century. In the age of analog traditional media, the process was flawed by definition, because corrections in newspapers were typically published on Page 2 days or even weeks after the original error, typically without the context a reader would need to understand exactly what had occurred. (TV news rarely corrected its errors at all, then and now.)

In theory, corrections in a digital age can be much more timely and useful. We can fix the error right in the news article (or video or audio) and append an explanation, thereby limiting the damage, because people new to the article will get the correct information.

But what about the people who’ve already seen the story with the misinformation? That’s a particularly tricky question when so much news spreads these days via social media and search. What if we could do more than limit future damage? What if we could repair at least some of the previous damage by notifying people who saw the error? The need for this has been growing and was highlighted in the recently released report “Crisis in Democracy: Renewing Trust in America” from the Knight Commission on Trust, Media and Democracy. (See Chapter 5.)

The News Co/Lab and the Star have had this on our agenda for a while now. A newsroom team — including Farré and led by Eric Nelson, growth editor for McClatchy’s Central region, which includes the Kansas City and Fort Worth properties — had been looking for a way to test the idea. The Star-Telegram correction, the newsroom team thought, was an ideal opportunity to chase a mistake via social media.

Nelson asked Kennedy and his editors if we could use the correction column to give this experiment a try. They were happy to do it. Here’s some of what happened.

Tracking down the sharers

The Star uses Facebook’s CrowdTangle social-media monitoring tool in several ways. One particularly useful Chrome browser plugin lets editors discover who’s sharing the Star’s stories on Facebook and Twitter.

Kennedy had posted the correction to his own Twitter and Facebook feeds and sent direct messages to several people on Twitter. Eric and I went to Facebook and let the people who’d posted the original column know that it was wrong, including a link to the correction column and a request to share that, too. Here’s an example of using messaging:

And here’s an example of posting a correction directly into the replies (we did both):

The top sharers have substantial followings, and a number of people in their networks re-shared their posts. We could have gone further; chasing all sharers down would have been possible, but seemed to have questionable value when we balanced time against likely effect.

In our limited test, the results were mixed — at least from the point of view of wanting to make sure truth catches up with falsehoods. At least one person deleted the share of the incorrect column. One said directly: No thanks. Others didn’t respond. The topic may have played a role: Politics tends to be discussed by people who consider themselves members of partisan teams, and that may well make them more reluctant than others to tell people they shared something that was wrong.

The platforms can help

The experiment did show that it’s possible to use social networks to spread corrections along the same paths the original errors moved. And we think this is potentially a very big deal.

But it’s plainly not a simple process to push out corrections this way. It’s time-consuming, and in an era when newsroom staffs are stretched to near the breaking point, it’s unrealistic to ask them to add this do-it-by-hand procedure to their jobs.

Who could help us more? Facebook, for starters. We’ve asked people there to automate this notification process and put it in CrowdTangle: Click a button, fill in a field that gives the details of the mistake and link to the correction, click another button, and the people with large followings who shared the original are notified. (Facebook knows about this experiment and is intrigued by what we’re doing. Stay tuned.)

We’d like to see the same kind of functionality in Twitter and other social media channels. The platforms have everything they need to help corrections catch up with mistakes, and it would be to everyone’s benefit if they’d deploy the tools to make it happen.

What about search? A lot of what people see is based on search results. That’s on Google, which not only dominates search but also — via its advertising clout and massive data collection — knows far better than news organizations who’s reading what. (Or at least who’s clicking on what.) We need Google’s help for sure.

We can provide some help ourselves, and that’s what we’re doing.

The best fact checkers are the audience. Look at Dan Rather’s blunder. But the best people to write convincing corrections are the ones who made the original mistake — if they have the integrity to do it.

To that end, the News Co/Lab is collaborating with another of our partners, the Trust Project, which is developing “transparency standards that help [audiences] easily assess the quality and credibility of journalism.” We’re working on the project’s WordPress plugin that helps news organizations implement Trust Project functionality on their sites. Our contribution will include, among other things, an invitation for readers of these sites to subscribe to corrections. Every news organization should embed this kind of functionality into its content management system, whether that’s WordPress or something else.

Corrections, incidentally, aren’t the only valuable use for tools and approaches of this kind. Major updates to articles — especially where the situation is changing rapidly — are another perfect use for this kind of thing. Again, however, the key word is tools, because we need the help of the platforms where so much of the news spreads in the first place.

Still, we need everyone’s help in the end. We’re human, and therefore we make mistakes. It’s everyone’s responsibility to correct public errors via the same channels. If we could embed that principle into all of our online activity, we’d all be better off.

Dan Gillmor is cofounder of the News Co/Lab at Arizona State’s Walter Cronkite School of Journalism and Mass Communication, where a version of this piece was published.

BuzzFeed’s Jonah Peretti: Yes to scale, yes to platforms — still
https://www.niemanlab.org/2019/03/buzzfeeds-jonah-peretti-yes-to-scale-yes-to-platforms-still/
Fri, 08 Mar 2019 18:12:51 +0000

BuzzFeed CEO Jonah Peretti released a strategy memo Friday (timed to line up with his panel at SXSW) that outlined what he sees ahead for BuzzFeed in the coming year. The company was scarred by big layoffs in January, and many have begun to question the long-term sustainability of a business model so reliant on outside platforms (Facebook) for distribution. But Peretti says he sees “a clear path to a bright future for BuzzFeed. I’m hopeful the same is true for many of our peers.”

A few noteworthy parts of the memo:

BuzzFeed is making much more money from platforms than it used to, and it’s embracing them.

A year ago, in Q1 of 2018 we made about $500K in video platform revenue from Facebook; in Q4 of 2018 we made $3M. In January of 2017, we monetized less than 30% of views on YouTube; by November, we monetized more than 70%. Overall, revenue we generate from the biggest platforms — Facebook, Google, Amazon, and Netflix — has grown by 12 times since 2014.

These stats are in a section of the memo titled “Fix the platforms and get paid to do it.” Peretti argues that publishers are “fixing” the platforms by “[filling] the void…with quality content,” with the “void” referring to “opportunistic bad actors — anti-vaxxers, flat-earthers, conspiracy theorists, misogynists, racists, xenophobes, trolls, partisan extremists, scammers, and pedophiles.” (Related: Michael Golebiewski and danah boyd’s idea of “data voids.”) The argument is that by putting their own content on the platforms — and monetizing it — publishers will push the bad stuff out, and Peretti’s view of publisher relationships with the platforms is a rosy one:

Digital media companies scaling down or turning away from the platforms is the exact opposite of what the platforms need. It is much harder to moderate bad content than it is to create good content. No matter how much money the platforms spend, or how many content moderators they hire, this problem won’t be solved by removing bad content, we need an ecosystem where creating good content is sustainable. If tech and media work together, everyone will benefit.

This changes, of course, if the platforms turn away from you. It’s not clear what happens to content from publishers like BuzzFeed if, say, Facebook actually shifts its focus to private conversations between individuals.

Platforms are a piece but not the whole.

Peretti writes that “we still aren’t making enough from the platforms to sustain our investment in content,” and says that “in 2018 and 2019, we will generate over $200m in revenue from business lines that didn’t even exist in 2017.” He points to Tasty, which is “the biggest media brand on Facebook, but almost all of its revenue comes from businesses we’ve needed to create on our own.”

No “cheap TV.”

“We don’t make shitty TV, we make good internet,” Peretti writes. “We lean into the unique, digital power of that two-way connection, we create new ways for our audience to experience BuzzFeed.” Video content built for Netflix or Facebook Watch is “complementary,” he says, but “some digital media companies have pivoted to cheap TV or think of television as a more important medium. That isn’t true when you look at what drives culture today and in the future.” (Netflix said no to a second season of BuzzFeed’s series Follow This in January.)

The full memo — which includes the prediction that in a few years, the internet dumpster fire-ness of 2016-2019 will be a distant memory; fingers crossed — is here. And if you don’t want to read it, you can get the gist by reading tweets from his SXSW talk, just concluded. Kerry Flynn tweeted the whole thing if you want a straight narrative; a selection is below.
