A history of BuzzFeed News, Part II: 2017–2023 (April 21, 2023)

We’ve written about the ups and downs of BuzzFeed News since 2011, when BuzzFeed hired Ben Smith to launch what would become a Pulitzer Prize–winning news organization.

BuzzFeed News’s first few years were a time of global expansion and excitement. This is the era when Stratechery’s Ben Thompson called BuzzFeed “the most important news organization in the world.”

But there were warning signs. In 2017, BuzzFeed was receiving more than 50% of its traffic from platforms — setting it up for trouble when the algorithms changed and social traffic to news sites plummeted. And the public always seemed to have a hard time distinguishing the Pulitzer-winning BuzzFeed *News* from cat-video BuzzFeed.

BuzzFeed, like many other digital publishers, went through multiple rounds of layoffs. When the company went public, its stock price began falling almost immediately, and investors pushed for BuzzFeed News to be eliminated entirely.

“Investors can’t force me to cut news, and the union can’t force me to subsidize news,” CEO Jonah Peretti wrote in an internal memo in spring 2022. But, he added, “We can’t keep losing money.”

Just about a year later, he announced that BuzzFeed News would be shut down.

[Part I: 2011 to 2017]

• January 10, 2017 •
BuzzFeed News publishes a PDF of documents alleging that Trump has deep ties to Russia. The article notes that “the allegations are unverified, and the report contains errors,” but “BuzzFeed News is publishing the full document so that Americans can make up their own minds about allegations about the president-elect that have circulated at the highest levels of the U.S. government.”
• March 29, 2017 •
BuzzFeed plans to go public. Peretti also talks about breaking news:

“So the Boston bombings happens, and immediately all of the most popular content on the site is hard news. Then there’s a slow news week, and the most popular content is lists or quizzes or entertainment, or fun content. When there’s huge news breaking, it becomes the biggest thing. But most of the time, it’s not the biggest thing.”

• March 30, 2017 •
BuzzFeed News is expanding into Germany and Mexico.
• April 10, 2017 •
BuzzFeed News’s Chris Hamby is a Pulitzer Prize finalist. “We’re so grateful to BuzzFeed for supporting investigative journalism on this scale,” says BuzzFeed News investigations editor Mark Schoofs.
• August 29, 2017 •
Amid a flurry of news coverage about Donald Trump’s alleged collusion with Russia during the 2016 election, BuzzFeed News partners with the Latvia-based online outlet Meduza to beef up its Russia coverage. BuzzFeed world editor Miriam Elder: “On our side, there’s an enormous interest in Russia we really haven’t seen since the Cold War.”
• September 15, 2017 •
BuzzFeed gets more than 50% of its traffic from distributed platforms. Here’s Nieman Lab reporting on a presentation by BuzzFeed data infrastructure engineer Walter Menendez:

It uses an internal formula that measures how much traffic every post gets from Facebook, Twitter, and so forth versus from the BuzzFeed homepage, and weights traffic from those other platforms higher than BuzzFeed’s traffic, according to Menendez: “We want to make sure our traffic gets to the farthest reach of people as possible.”
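Menendez didn’t share the formula itself. As a minimal sketch, assuming a simple per-source multiplier (the weights and numbers below are hypothetical, not BuzzFeed’s actual values), a platform-weighted score might look like this:

```python
# Hypothetical sketch of a platform-weighted traffic score.
# BuzzFeed never published its internal formula; the weights
# below are invented for illustration only.

PLATFORM_WEIGHTS = {
    "facebook": 1.5,  # distributed-platform traffic weighted higher
    "twitter": 1.5,
    "homepage": 1.0,  # owned-and-operated traffic weighted lower
}

def weighted_score(views_by_source: dict[str, int]) -> float:
    """Combine a post's per-source view counts into one weighted score."""
    return sum(
        views * PLATFORM_WEIGHTS.get(source, 1.0)
        for source, views in views_by_source.items()
    )

# Two posts with the same total views rank differently once
# distributed-platform reach is weighted up:
print(weighted_score({"facebook": 80_000, "homepage": 20_000}))  # 140000.0
print(weighted_score({"homepage": 80_000, "facebook": 20_000}))  # 110000.0
```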

• October 4, 2017 •
BuzzFeed launches AM to DM, a live news show on Twitter. TechCrunch:

I was far from the only one watching after the show premiered last week. AM to DM was trending on Twitter, reaching No. 1 in the U.S. and No. 4 globally. In fact, BuzzFeed says the show averaged about 1 million unique viewers each day, with clips being viewed a total of 10 million times. And it’s a young audience, with 78 percent of daily live viewers under 35.

• November 16, 2017 •
BuzzFeed will miss its 2017 revenue target, The Wall Street Journal reports.

BuzzFeed…had been targeting revenue of around $350 million in 2017 but is expected to fall short of that figure by about 15% to 20%, people familiar with the matter said.

(BuzzFeed isn’t the only digital publisher having trouble: Around this same time, Mashable sells low and Vice also misses its revenue target. At the same time, subscription-supported publications like The New York Times and The Atlantic are seeing a Trump bump.)

• December 13, 2017 •
“The media is in crisis,” Peretti writes in a memo calling for a diversified revenue model. “Google and Facebook are taking the vast majority of revenue, and paying content creators far too little for the value they deliver to users.” The memo also outlines new possibilities for BuzzFeed News: A book club, “paid events,” “content licensing.”
• April 25, 2018 •
Netflix announces a “short-form Netflix Original Documentary Series” that “will focus on BuzzFeed reporters as they report stories.”
• May 10, 2018 •
BuzzFeed launches a new weekly news podcast “for news that’s smart, not stuffy.” It includes “Jojo the bot” to help listeners follow along.
• July 18, 2018 •
BuzzFeed News gets its own domain, BuzzFeedNews.com. Nieman Lab explains one reason the separation might be needed:

Despite BuzzFeed News’ remarkable journalistic success…the general public seems profoundly unable to distinguish it from its sibling quiz factory. When the Pew Research Center polled Americans about what news organizations they trust or don’t trust, BuzzFeed finished dead last, 36th out of 36. It was the only news organization tested that was more distrusted than trusted across the political spectrum — from strong liberals to strong conservatives. The LOLs have proven a big hurdle for the news brand to overcome.

• September 20, 2018 •
BuzzFeed shuts down its in-house podcast team and says it’s shifting more resources to video.
• November 19, 2018 •
Peretti calls for more digital publishers to merge: “If BuzzFeed and five of the other biggest companies were combined into a bigger digital media company, you would probably be able to get paid more money.”
• November 19, 2018 •
BuzzFeed News launches a $5/month membership program. New York Magazine:

Asking for readers to pay for access to a publication is not a bad idea (just ask us, lol), but it’s a somewhat different proposition for a website backed by venture capitalists hoping to turn a profit in a liquidity event like a sale or public offering.

• December 10, 2018 •
Nieman Lab:

Since branching out as its own, separately branded website this summer, buzzfeednews.com has seen an increase of 30 percent in monthly average unique viewership, BuzzFeed says. That translates to 35 million unique visitors per month and 230 million monthly content views for BuzzFeed News, meaning the total traffic from News posts on the site and videos on Facebook, Twitter, Apple News, YouTube, and Instagram.

• January 10, 2019 •
Twitter renews AM to DM, which reportedly has a daily audience of 400,000 people, down from a reported 1 million at launch.
• January 23, 2019 •
BuzzFeed says it will lay off 15% of its workforce, about 250 jobs. The company “basically hit” its 2018 revenue target “of around $300 million,” but Peretti writes in a memo to staff:

“Unfortunately, revenue growth by itself isn’t enough to be successful in the long run. The restructuring we are undertaking will reduce our costs and improve our operating model so we can thrive and control our own destiny, without ever needing to raise funding again.”

• March 6, 2019 •
In New York City? Grab BuzzFeed’s first (and last) print newspaper.
• March 8, 2019 •
Peretti releases a memo about BuzzFeed’s path forward. BuzzFeed News gets a section:

“We are committed to informing the public and holding the powerful accountable. We published the dossier because we believe the public deserved to know about it. We reported that Donald Trump told Michael Cohen to lie to Congress about those negotiations. We exposed the WWF’s funding of paramilitary forces that have been abusing and killing people. We helped exonerate 10 men framed by a crooked cop in Chicago.”

• December 21, 2019 •
BuzzFeed says its international losses quadrupled in 2019.
• January 28, 2020 •
Ben Smith is leaving BuzzFeed to become the media columnist at The New York Times.
• March 25, 2020 •
Hoping to avoid more layoffs during the Covid-19 pandemic, BuzzFeed announces company-wide pay cuts, and Peretti says he won’t draw a salary until the crisis has passed.
• April 16, 2020 •
BuzzFeed shuts down AM to DM after Twitter stops funding it.
• May 12, 2020 •
BuzzFeed furloughs 68 staffers and stops covering local news in the U.K. and Australia. From The Guardian:

The company said that the cuts would also hit its flagship US operation as it looks to hit savings goals while continuing to produce “kinetic, powerful journalism.” “We [want to] reach the savings we need and produce the high-tempo, explosive journalism our readers rely on,” the company said.

BuzzFeed maintained that it was still “investing heavily” in its news operation, with a projection of investing $10m more this year than the division makes, and $6m in 2021.

• November 19, 2020 •
BuzzFeed announces that it will acquire HuffPost from Verizon Media. Peretti: “We want HuffPost to be more HuffPosty, and BuzzFeed to be more BuzzFeedy — there’s not much audience overlap.”
• June 11, 2021 •
BuzzFeed News wins its first Pulitzer Prize for its investigation into how China’s government detained hundreds of thousands of Muslims.
• June 24, 2021 •
BuzzFeed announces that it will go public through a SPAC merger. Its valuation is $1.5 billion.
• December 2, 2021 •
The BuzzFeed News Union goes on strike. The walkout is timed to coincide with a shareholder vote on whether BuzzFeed will go public.
• December 6, 2021 •
BuzzFeed goes public (BZFD on the Nasdaq). Peretti tells Recode’s Peter Kafka:

“I’m still comfortable [with BuzzFeed News losing money]. To a point. But it’s not the same point it was in the past. And so I think that people have this expectation that, what we’ve done in the past in terms of massive subsidies of news, is something that we will continue to do at that same level. And we can do it to a point. But we have to make sure that we build a sustainable, profitable, growing business so that we can do this journalism for years to come and have this great important impact.”

• December 15, 2021 •
BuzzFeed’s stock has fallen by about 40% since it started trading on December 6.
• January 4, 2022 •
Ben Smith is leaving The New York Times to launch a “new global news organization.”
• March 22, 2022 •
BuzzFeed News’s three top editors — editor-in-chief Mark Schoofs, deputy editor-in-chief Tom Namako, and executive editor Ariel Kaminer — are leaving the company. At this point, BuzzFeed News has around 100 employees and is reportedly losing $10 million a year. CNBC:

Several large shareholders have urged BuzzFeed founder and CEO Jonah Peretti to shut down the entire news operation…One shareholder told CNBC shutting down the newsroom could add up to $300 million of market capitalization to the struggling stock.

“This is not your fault,” Schoofs writes in his resignation email. “You have done everything we asked, producing incandescent journalism that changed the world.” BuzzFeed News will now focus on “the nexus between the internet and IRL,” according to Schoofs, and will offer buyouts to staffers on the investigations, politics, inequality, and science beats.

• April 21, 2022 •
Peretti in an email to employees, shared with Nieman Lab:

“Investors can’t force me to cut news, and the union can’t force me to subsidize news. I am committed to news in general and [BuzzFeed News] in particular. I’ve made the decision that I want News to be break-even and eventually profitable. We won’t put profits ahead of quality journalism and I’ll never expect [BuzzFeed News] to be as profitable as our entertainment divisions. But we can’t keep losing money…

For many years, News received more support than any other content division and over the years was allowed to spend 9-figures more than it generated in revenue. I still support News and value News, and I don’t want to have to cut back in News when we make new investments in other divisions. That’s why I want to transform News into a sustainable business, while continuing to do impactful, important journalism. I know this is a big shift and will require us to operate differently. We will set News up for success so News can become a stronger financial contributor to the overall BuzzFeed, Inc. business.”

• April 1, 2022 •
BuzzFeed News shuts down its app.
• November 16, 2022 •
BuzzFeed’s valuation is at $237 million, down from $1.7 billion in 2016. The Verge notes how much its Facebook traffic has fallen, according to NewsWhip data:

In 2016, BuzzFeed stories posted on the platform had 329 million engagements; by 2018, that number had fallen to less than half. Last year, BuzzFeed posts received 29 million engagements, and this year is shaping up to be even worse.

• January 26, 2023 •
BuzzFeed says it will start using AI to write quizzes and other content. However, “BuzzFeed remains focused on human-generated journalism in its newsroom, a spokeswoman said.”
• March 15, 2023 •
BuzzFeed News editor-in-chief Karolina Waclawiak says the newsroom will need to increase the number of stories it publishes, even though it is “much smaller than it used to be.”
• April 20, 2023 •
BuzzFeed lays off 15% of its staff and shutters BuzzFeed News, which is down to 60 employees from 100 in 2022. Peretti writes in a memo:

“I made the decision to overinvest in BuzzFeed News because I love their work and mission so much. This made me slow to accept that the big platforms wouldn’t provide the distribution or financial support required to support premium, free journalism purpose-built for social media…

We will concentrate our news efforts in HuffPost, a brand that is profitable with a highly engaged, loyal audience that is less dependent on social platforms.”

Photo of BuzzFeed News in New York City in 2015 by Anthony Quintano used under a Creative Commons license.
News now makes up less than 3% of what people see on Facebook (April 3, 2023)

People haven’t seen much news on Facebook for years now. The company’s algorithm has changed over time, as has people’s desire to see news on Facebook. And news is now an even tinier sliver of what people worldwide see in their Facebook News Feeds, according to a new paper, “Meta and the News: Assessing the Value of the Bargain” (h/t Press Gazette).

I’ll note right up front that this paper is funded by Meta and written in response to proposed legislation in Canada and the U.K. that would make platforms pay publishers for linking to their content (similar to legislation that has already passed in Australia). The platforms’ position is that they send traffic to the publishers by linking to their content and shouldn’t have to pay them. The paper’s author, economist and consultant Jeffrey Eisenach, writes that “The evidence presented here indicates that publishers reap considerable economic benefits from their use of Facebook.” But whether you support or oppose the legislation, there are a few interesting facts here about news on the platform, using data provided by Meta that as far as I know hasn’t been published elsewhere.

— News accounts for “less than 3% of what users see in their Facebook Feeds,” Eisenach writes, noting, “News publisher content plays an economically small and diminishing role on the Facebook platform.” The 3% figure is worldwide and “based on Meta internal data for the last 90 days ending August 2022.” Facebook had said in 2018 that news made up about 4% of the feed.

— In the fourth quarter of 2022, just 7.5% of posts shared on Facebook in the U.S. contained any links, to news or otherwise. That figure is decreasing over time; in the fourth quarter of 2021, 14.6% of posts shared on Facebook in the U.S. contained a link.

— “The vast majority of news content shared on Facebook comes from the publishers’ own Facebook pages,” Eisenach writes: For the 90-day period ending August 2022, “Meta reports that more than 90% of organic views on article links from news publishers globally were on links posted by the publishers, not by Facebook users. In other words, Facebook users who view news publisher content on Facebook are primarily viewing content selected and posted by the publishers themselves.”

The full paper is here.

Google blocks news in some Canadian searches, in response to proposed media law (February 23, 2023)

A bill under consideration in Canada would require platforms like Google and Meta to negotiate payments with publishers when they link to their content. In response, Google, which opposes the proposed law, is testing blocking news in a small number of searches.

From The Canadian Press on Tuesday evening:

The company said Wednesday that it is temporarily limiting access to news content for under four per cent of its Canadian users as it assesses possible responses to the bill. The change applies to its ubiquitous search engine as well as the Discover feature on Android devices, which carries news and sports stories.

All types of news content are being affected by the test, which will run for about five weeks, the company said. That includes content created by Canadian broadcasters and newspapers.

The tests “limit the visibility of Canadian and international news to varying degrees,” Google told Reuters.

Bill C-18, the Online News Act, is modeled on legislation that passed in Australia in 2021. The bill, which has already passed Canada’s House of Commons and moved on to the Senate, would, among other things, require platforms that “facilitate” access to news — by linking to it in search results, for instance — to compensate the publishers of said news.

For more background on both sides, we ran a piece last year discussing how the bill could be modified. Our Josh Benton called the law that passed in Australia “a warped system that rewards the wrong things and lies about where the real value in news lies.” The Canadian academic Michael Geist has written extensively criticizing the bill, as has Canadian journalist and former Wikimedia Foundation director Sue Gardner, while David Skok, CEO of Canadian news site The Logic, calls it “a necessary evil in order to maintain balance in Canada’s media ecosystem.”

A spokesperson from the Department of Canadian Heritage, whose minister Pablo Rodriguez is the sponsor of Bill C-18, criticized Google’s action, telling the Globe and Mail, “At the end of the day, all we’re asking the tech giants to do is compensate journalists when they use their work.”

This is not the first time that platforms have tested blocking news in countries where they are under legal threat: Google conducted a similar “experiment” in Australia in January 2021. In February 2021, Facebook temporarily blocked Australian users from sharing or viewing Australian and international news, sending publishers’ traffic tumbling. Facebook parent company Meta has said it’s ready to do the same in Canada.

For the tech giants, security is increasingly a paid feature (February 21, 2023)

For more than a decade, the conventional wisdom has been that a social platform needs to be free to its users to succeed. It’s a two-sided network problem: Social networks need a critical mass of users to be of much value to anyone. And that user base has to be big enough to attract advertisers’ attention. Any sort of paywall gets in the way of the scale required to create a revenue megalith like Facebook.

Elon Musk, as he is wont to do, challenged that conventional wisdom when he made Twitter’s blue “Verified” check — previously evidence of actual verification — into a paid product. Verification was initially intended as a confirmation of identity, the sort of small mark that makes a platform sliiightly more trustworthy and secure. But it became some weird marker of status to some of the internet’s worst people, and so it became an $8 SKU.

This conversion — this shift from a “Trust and Safety” feature to a consumer product — had the results everyone predicted: a rash of impersonations, brand damage, and other malfeasance.

But last week, Musk-era Twitter went a step further and said only $8/month customers will be allowed to use SMS for two-factor authentication — a basic layer of security frequently used by journalists, celebrities, officials, and others who fear being hacked. The company tried to explain it as a matter of security (“we have seen phone-number based 2FA be used — and abused — by bad actors”) — but apparently the threat is only to non-paying customers, since Twitter Blue subscribers can keep on using it forever. There will be other ways to use 2FA for Twitter, but they’re not available worldwide and are not without their own risks.

Basic security features going behind a paywall — not good. So it was even less encouraging to see Facebook follow Musk-era Twitter’s lead:

Meta’s testing paid verification for Instagram and Facebook for $11.99 per month on web and $14.99 per month on mobile. In an update on Instagram, CEO Mark Zuckerberg announced that a “Meta Verified” account will grant users a verified badge, increased visibility on the platforms, prioritized customer support, and more. The feature’s rolling out to Australia and New Zealand this week and will arrive in more countries “soon.”

“This week we’re starting to roll out Meta Verified — a subscription service that lets you verify your account with a government ID, get a blue badge, get extra impersonation protection against accounts claiming to be you, and get direct access to customer support,” Zuckerberg writes. “This new feature is about increasing authenticity and security across our services.”

On Facebook, Zuckerberg engaged in some limited back-and-forth with users over the change. (“Call me crazy but I don’t think I should have to pay you guys to take down the accounts impersonating me and scamming my followers.” “This really should just be part of the core product, the user should not have to pay for this. Clearly it’s known by Meta this is filling a need, why profit additionally from it?”)

One user argues that “direct access to customer support is the real value, much more so than the blue check mark.” Zuckerberg: “I agree that’s a big part of the value.” And indeed, a hotline to Facebook customer service is likely the most valuable piece of the package here. But it doesn’t feel good to see features like identity verification — basic stuff for running a trustworthy platform — put behind a paywall.

For Twitter, there’s a certain mad sense to the move. Elon Musk has set the company on fire, from a cashflow perspective, and he’s desperate for all the user revenue he can generate. If 63% of your best advertisers drop you, you grab at whatever dollar bills you see floating by. (Not many seem to be floating Elon’s way.)

Facebook, meanwhile, is still pulling in more than $30 billion a quarter in ad revenue. But various headwinds, whether economic or Cupertino-driven, have demanded a “year of efficiency,” which includes chasing money from users too.

We’re seeing an addendum to that old conventional wisdom about social networks. You can’t charge most of your users — but you can charge some. Few would be bothered by a subscription product that offered additional features — ad-free browsing, say, or custom icons, like the old Twitter Blue. But it’s sad to watch basic security features put behind a credit card charge.

Meta’s layoffs make it official: Facebook is ready to part ways with the news (November 14, 2022)

Among the mass layoffs at the company formerly known as Facebook last week are several roles that have served as a bridge between the news industry and the sprawling tech company.

The Meta Journalism Project Accelerator’s David Grant, a program manager, and Dorrine Mendoza, who led local news partnerships for the platform, were both laid off. Other journalism-adjacent positions eliminated include the head of news partnerships for South East Asia, a program manager for news, two program managers for news integrity, and multiple news communications jobs.

Meta declined to comment on the layoffs or confirm how many of the 11,000 positions eliminated were jobs relating to the news business. It’s unclear what impact the job losses will have on all of Facebook’s various news-related efforts, including the Meta Journalism Project itself. (Meta spokespeople and Campbell Brown, Meta’s vice president of global media partnerships, did not respond to requests for comment on the future of the Meta Journalism Project.)

The layoffs are another step in Meta’s journey to get the heck away from news. Meta, which promised $300 million in support of local journalism back in 2019 when it was still Facebook, has shifted resources away from its News tab, shuttered the Bulletin newsletter program, ended support for Instant Articles, eliminated human curation in favor of algorithms, and stopped paying U.S. publishers to use their news content.

Instead, the company is focused on competing with rising platforms like TikTok and trying to build a metaverse that people actually want to spend time in. Meta has spent $15 billion so far in its quest to become “a metaverse company” and plans to spend billions more — plummeting stock price and leg-less avatars notwithstanding.

To be sure, the blockbuster investments of the past rarely arrived as checks paid directly to newsrooms. Facebook’s announced $100 million investment in local news at the start of the pandemic, for example, consisted of $25 million in grant funding and $75 million in “marketing spend.” In the early days of the Facebook Journalism Project, training often focused on teaching newsrooms to use Facebook products to reach readers, or on “best practices” for distributing content on Facebook itself. But now, all sorts of funding is drying up — and anyone clicking on the “Grants” page on the Meta Journalism Project’s website will get a 404 error.

Multiple sources said the Meta Journalism Project’s Global Accelerator Program — which consists of workshops and hands-on training designed to boost financial sustainability at news organizations — has been presumed dead for a while now. A press release published two days before the layoffs became official said the accelerator helped 162 American and Canadian news publishers generate more than 166,000 new paying supporters and more than 2 million new registered readers since 2019. Its counterpart in Europe reported 166,000 new paying supporters, too, and nearly 1.5 million new registered users across 90 publishers from 17 countries.


The tech company’s divestment from news will hit some organizations harder than others. Many programs launched with funding from Facebook and/or Meta are funded only through 2024.

At Indiegraf — a network of local news organizations that received several infusions of Meta funds in 2020 and 2021 — the company’s change in direction does not change their work “in any way,” said CEO and co-founder Erin Millar.

“We have always been focused on building a sustainable business model that allows Indiegraf independence from platforms, similar to the news businesses we support,” Millar said. “Meta’s funding support enabled us to accelerate our plans to support independent media, but was never a crucial part of our sustainability.”

Millar said Indiegraf has been aware of the coming changes for months. Indiegraf doesn’t expect current funding arrangements to be revoked — but they’re not counting on any new partnership with Meta once those run their course.

The Meta staffers working on journalism projects “worked hard to use Meta’s resources to make an impact on the news ecosystem while they could,” Millar said. She added, “They also understood that the company’s investments in news were likely finite and temporary, and they were as transparent as they could be with partners like Indiegraf.”

A harder-hit organization will be the Local Media Association, which worked closely with the Meta Journalism Project and was chosen to execute a number of the U.S.-based programs.

Overall, since 2019, LMA has distributed more than $16.8 million to “a few hundred” local media organizations through Meta funding, said Nancy Lane, CEO of LMA.

She outlined the major programs that Facebook-turned-Meta funded through the association:

  • The News Accelerator program, including three sessions focused on reader revenue and one on video. Local media organizations received $4.27 million in grants through the sessions.
  • A Covid-19 local news relief fund that distributed $12 million to local news organizations in 2020. (“Some would have shut down without this funding,” Lane said.)
  • The LMA Local News Resource Center, dedicated to helping local media companies with their social media strategies. Meta’s total investment was more than $800,000 and allowed for a full-time staffer.
  • The Crosstown Data Journalism Pilot, which funded a collaboration between a data team at the University of Southern California and news partners WRAL-TV, NOLA.com/The Advocate and WBEZ. The grant, which totaled about $400,000, also funded data journalists in each newsroom.
  • The Meta Branded Content Project, operated in partnership with the Local Media Consortium, that helps local media companies create branded content revenue streams. Meta’s investment of more than $3 million funds two full-time positions.

LMA will seek new backers for its Branded Content Project and the LMA Local News Resource Center — both of which Meta will stop funding in 2024.

Lane praised Facebook for being one of the first organizations to invest in business sustainability efforts for local newsrooms and said many grant-giving institutions followed suit. She also had some harsh words on Thursday for people who have criticized the tech company that, frankly, has provided plenty of material for criticism.

Lane told me she thought publishers of color, in particular, would be hurt by the changes at Meta. (“Meta made sure that 50% of their programs were allocated to BIPOC publishers,” she said. I wasn’t able to confirm this number, though at least some Meta programs list half of their participants as Black-owned news organizations.)

“As far as LMA is concerned, more funders than ever before have stepped up recently and they will fill the funding void left by Meta, but nothing will replace the innovative spirit that defined this partnership,” Lane said. “And that is a huge loss for all of us.”

Chris Krewson, executive director of Local Independent Online News (LION) Publishers and the founding editor of Billy Penn, said Facebook left “an enormous mark on the emerging ecosystem of digital-only publishers.”

“There was one call where David Grant asked, ‘If money was not an issue, what would you do?’ And we’d never really thought about it that way, you know?” Krewson said. “Meta had the resources at its peak to do incredible things. Not just the dollars, but the encouragement to think of the best outcome possible, to make the biggest impact we could.”

LION received funding through Facebook’s Covid support programs and the LION-Meta Revenue Growth Fellowship. LION members also participated in the accelerator, which Krewson called “an unquestionable good” for dedicating millions to training for news orgs globally. (The team behind Meta’s Accelerator program has indicated that they hope to continue the work if they can find another funding source.)

Meta’s withdrawal won’t affect LION programs or staffing, but Krewson mentioned something else I’d overlooked. It was extremely helpful to have reliable Meta contacts when a small news organization needed verification or something went wrong with an organization’s page. Something weird happening with a member’s page is “not uncommon,” Krewson noted, but now, all of their contacts are gone.

Has your newsroom received Facebook-turned-Meta funding? Have you worked in a bridge role between the news industry and a tech platform? Will these changes affect you? I’d love to hear from you.

TikTok and Instagram are the only social networks that are growing as news sources for Americans (October 25, 2022)

Ten percent of all American adults now say they “regularly” get news from TikTok, according to a new Pew analysis. That’s up from 3% just two years ago. And for younger Americans, not surprisingly, the percentage is higher: 26% of Americans under 30 say they regularly get news from TikTok.

The increase comes as Americans’ use of most other social networks for news has declined over the past two years. Instagram is up too, but just a tiny bit. The use of Facebook for news has fallen the most over the last two years: Today, less than half of Americans say they regularly get news there. (That drop has taken place as Facebook has retrenched on news; a company spokesperson said recently that “Currently less than 3% of what people around the world see in Facebook’s Feed are posts with links to news articles.”)

More here.

Facebook will shut down Bulletin, its newsletter service, by early 2023 (October 4, 2022)

Facebook is pulling the plug on its newsletter subscription service Bulletin and no one is even pretending to be surprised.

New York Times media reporter Katie Robertson broke the news.

Bulletin was launched as Facebook’s answer to Substack in 2021, not long after Twitter jumped into the paid newsletter game by acquiring Revue. The first featured authors were folks like Malcolm Gladwell and Malala Yousafzai.

“What’s weird about Bulletin…and perhaps shines a bit of a light on how much faith Facebook actually has in this product long-term, none of the creators they’ve launched with are people who I would think actually need Facebook’s monetization features,” noted Garbage Day’s Ryan Broderick at the time. “I have an extremely hard time believing that Tan France needs a monetized newsletter hosted on Facebook.”

I imagine the celebrities recruited by Facebook to write for Bulletin will be okay! But Bulletin had started to extend support to a subset of writers who could really use the Facebook cash: local news reporters.

We know the local news writers had been promised “licensing fees” as part of a “multi-year commitment” that would provide them “time to build a relationship” with their audience. But when we wrote about the program last year, Facebook declined to put a dollar value on the support or specify exactly how long writers could expect the payments to last.

A Meta spokesperson said this week that 23 out of the original 25 local news writers are still using the platform and confirmed they will receive licensing payments for at least another year, as the original contracts suggested. The company said they would also provide resources to the writers to help them map out their next steps.

“We are committed to supporting the writers through this transition,” the spokesperson wrote in an email. “As mentioned, we are paying out their contracts in full. Additionally, they can keep their subscription revenue and subscriber email lists. In terms of content, they can archive all content and move it to a new platform of their choice.”

Roughly half of the 25 local news writers selected to join Bulletin are journalists of color. They’ve been publishing from communities in Iowa, North Carolina, Illinois, New Jersey, Ohio, Virginia, Florida, Connecticut, Texas, Michigan, California, Hawaii, Wisconsin, Georgia, Washington, Arizona, and Washington, D.C.

The financial support from Facebook was likely not life-changing for the local news writers. (Some boldface names reportedly inked deals with Bulletin in the six figures, but several of the local news reporters were planning to keep other jobs to make ends meet.) Facebook also provided legal resources, design help, newsletter strategy, and coaching to the group.

Soon after the local news partnership was announced, Kerr County Lead writer Louis Amestoy told Nieman Lab he saw a chance for Facebook to shape the information ecosystems of many local communities into something better.

“I think it’s important for Facebook to recognize this opportunity and say, ‘Okay, what do we really want to be?’” Amestoy said. “You see in certain communities that Facebook has come to fill a hole left by news deserts. Who becomes your local authority? The messaging group that’s there? Is there really someone there to curate that — someone who is objective and can differentiate the good stuff from the bad stuff? I certainly hope that they take some of the lessons that they’re going to learn from this, and make some more investments, because I think that there are a lot of opportunities. There’s so many talented journalists out there who really want an opportunity to do [the] kind of thing that I want to do.”

With Tuesday’s abrupt announcement, it seems a little less likely those questions will get answered.

This article has been updated.

Most local election offices still aren’t on social media, new research finds (August 31, 2022)

Local election officials are trying to share voting information with the public on social media but may be missing some key platforms — and the voters who use them.

In early July 2022, for instance, young voters in Boone County, Missouri, complained that they had missed the registration deadline to vote in the county’s Aug. 2 primary election. They claimed no one “spread the word on social media.” The local election office in that county actually has a social media presence on Instagram, Facebook, Twitter, and TikTok. But its accounts don’t have many followers and aren’t as active as, say, celebrity or teenage accounts are. As a result, election officials’ messages may never reach their audience.

The Boone County example raises important questions about how prospective voters can get informed about elections, starting with whether or not local election officials are active on social media and whether they use these platforms effectively to “spread the word.”

In our research as scholars of voter participation and electoral processes, we find that when local election officials not only have social media accounts but use them to distribute information about voting, voters of all ages — but particularly young voters — are more likely to register to vote, to cast ballots, and to have their ballots counted.

For example, during the 2020 election, Florida voters who lived in counties where the county supervisor of elections shared information about how to register to vote on Facebook, and included a link to Florida’s online voter registration system, were more likely to complete the voter registration process and use online voter registration.

In North Carolina, we found that voters whose county board of elections used Facebook to share clear information about voting by mail were more likely to have their mailed ballots accepted than mail voters whose county boards did not share instructions on social media.

Young people face distinct voting challenges

Voter participation among young voters, those between the ages of 18 and 24, has increased in recent elections, but still lags behind that of older voters. One reason is that younger voters have not yet established a habit of voting.

Even when they do try to vote, young voters face more barriers to participation than more experienced voters. They are more likely than older people to make errors or omissions on their voter registration applications and therefore not be successfully registered.

When they do successfully complete the registration process, they have more trouble casting a vote that will count, especially when it comes to following all the steps required for voting by mail. When they try to vote in person, evidence from recent elections shows high provisional voting rates in college towns, suggesting college students may also experience trouble casting a regular ballot, whether because of confusion about finding their polling place or because their voter registration application was never successfully processed.

Some of these problems exist because voters, especially young ones, don’t know what they need to do to meet the voter eligibility requirements set by state election laws. Those laws often require voters to register weeks or months in advance of Election Day, or to update their registration information even if they move within a community.

Social media as a tool to spread the word

Social media can be a way to get this important information out to a wider audience, including to the young voters who are more likely to need it.

Younger people use social media more than older voters, with a strong preference for platforms such as YouTube, Instagram, and Snapchat.

News outlets and political campaigns use social media heavily. But our analysis finds that the vast majority of local election officials don’t even have social media accounts beyond Facebook. And, when they do, it is likely that they are not effectively reaching their audience.

Gaps in how local election officials use social media

We have found that during the 2020 U.S. presidential election, 33% of county election offices had Facebook accounts. Facebook is the most commonly used social media platform among Americans of all ages. But two-thirds of county election offices didn’t even have a Facebook account.

Just 9% of county election offices had Twitter accounts, and fewer than 2% had accounts on Instagram or TikTok, which are more popular with young voters than Twitter or Facebook.

Using social media for voter education

Local election officials are charged with sharing information about the voting process — including the mechanics of registering and voting, as well as official lists of candidates and ballot questions.

Their default method of making this information available is often to share it on their own government websites. But young voters’ regular use of social media presents an opportunity for officials to be more active and engaged on those sites.

While many election officials around the country face budget and staffing pressures, as well as threats to their safety, our research confirms that when officials do get involved on social media, young voters benefit – as does democracy itself.

Thessalia Merivaki is an assistant professor of American Politics at Mississippi State University. Mara Suttmann-Lea is an assistant professor of government at Connecticut College. This article is republished from The Conversation under a Creative Commons license.


Canada’s Online News Act shows how other countries are learning from Australia’s news bill (August 9, 2022)

While many governments around the world have begun to more actively engage in the journalism policy space in recent years, few efforts have garnered as much attention as Australia’s media bargaining code. Designed by the country’s competition authority to address a perceived market imbalance between platforms and Australian publishers, it has also become a lightning rod for wider debates over the state of journalism, the role of Facebook and Google in journalism’s decline, and whether and how governments should step in.

Enter Canada. In early April, the government introduced the Online News Act, a bill that, similar to the Australian model, would compel large platforms to negotiate with publishers about payment for the use of their content, or be forced into arbitration.

This isn’t the first time the Trudeau government has stepped into the journalism policy domain. In the last three years, it has passed legislation that allows qualified journalism organizations to receive a 25% tax credit toward editorial labor, issued a 15% tax credit on the purchase of digital subscriptions, and created a new charitable status for journalism organizations. The public debate over those measures was heated at times, but the new bargaining code has created a firestorm.

As in Australia, the platforms are lobbying aggressively against the bill. A range of academics, media critics, and journalists, including a network of small publishers, has also emerged in opposition. And Google, in particular, has taken an aggressive stance against the bill.

Why does Google care what Canada does? The answer likely lies in how this bill evolves and builds on the model implemented in Australia, and in the fact that other countries around the world are watching this evolution and developing similar laws of their own. The Canadian code probably won’t have a material financial impact on these platforms, but countries learning from each other, improving on the model, and spreading it globally very well could.

So what does the Online News Act do, what does it get right and wrong, and should it be passed, scrapped, or improved?

The reality of the Canadian media market

All things being equal, there should be no need for legislation to regulate the financial negotiations of private publishers. The code is a significant intervention in an industry that we rely on to hold governments and platforms to account. But it’s equally important to ground analysis of the code within the current realities of the Canadian media market, rather than an imagined world where publishers don’t already receive money from platforms, governments, or both.

There are four attributes of the current status quo that should be considered when weighing the merits of this legislation.

First, on the fundamentals, it’s clear that large tech platforms have absorbed journalism’s largest source of revenue (advertising), that this has negatively impacted the state of journalism in Canada, and that a healthy journalism industry is important for democratic societies.

While some believe that the journalism industry in Canada can self-correct and should be left to market forces — a view held strongly by many journalists themselves, and one that is supported by some important innovations particularly from smaller publishers — there is public and industry support for government intervention.

The Canadian government is already in the journalism policy game. The tax credit for journalistic labor, a $500 million program launched in 2019, both polarized the public debate about journalism in Canada and has broadly been a financial success. The subsidy helps the industry but, at the same time, has hurt its credibility with some audiences. This reality is further complicated by declining trust in the media in Canada.

Finally, while many, including ourselves, are uncomfortable with platforms funding journalism at all, the reality is that the platforms are already funding journalism in Canada. But the current status quo is one of opaque and unaccountable money for some journalism organizations. These deals are hidden behind NDAs and are not accountable to the Canadian public. They are also very often programmatic grants, casting legitimate questions about the independence and objectivity of the journalism initiatives they support.

Given these realities, the government is faced with several policy options.

The first is to leave the status quo untouched and continue to allow platforms to strike deals with publishers without oversight, transparency, or accountability. Publishers are faced with unequal bargaining power when they negotiate these deals, and the platforms can pick which publishers to cut deals with.

The second option is to use general revenue to further fund journalism through existing programs. But Canada’s labor tax subsidy is already 25% of editorial labor and only goes to qualifying journalism organizations. Deals between platforms and publishers arguably reach a broader range of organizations.

A third option is to create an alternative to ad-hoc platform deals, and instead force platforms to pay into a central fund that would then administer the funding to publishers via some sort of standard formula. This option standardizes payments, removes platforms from the decision of who gets what, allows money to go directly to journalism, and gives the public a clear sense of how money is supporting journalism.

We have previously argued for this model, but it has some real limitations. Though it might be administered by an arm’s-length organization, it inserts the government even further into the business of journalism. It’s also unclear how the amount of money put into the fund would be determined and what the basis would be for taxing platforms in Canada beyond what they already pay.

A fourth option is to regulate the bargaining process itself. Enter the Online News Act.

What the Act does

The Online News Act compels digital platforms to enter into financial agreements with publishers for news.

News outlets — either singularly or collectively — initiate bargaining. Platforms have to participate in the bargaining process, though if they believe the news outlet doesn’t meet the criteria to be subject to the Act, they can contest it. If an agreement can’t be reached by all parties within “a period that the Commission considers reasonable,” mediation occurs; if an agreement is still not reached, a panel of three arbitrators selected by the Canadian Radio-television and Telecommunications Commission (CRTC) chooses a final offer made by one of the parties.

The bill builds on the Australian model in some important ways, most notably around the exemption criteria. Platforms can only be exempted from being designated for arbitration if the deals they have made with publishers meet the following criteria:

  • They provide for fair compensation to the news businesses for the news content.
  • They ensure that an appropriate portion of the compensation will be used by the news businesses to support the production of local, regional, and national news content.
  • They don’t let corporate influence undermine the freedom of expression and journalistic independence enjoyed by news outlets.
  • They contribute to the sustainability of the Canadian news marketplace.
  • They ensure that a significant portion of independent local news businesses benefit from them, they contribute to the sustainability of those businesses, and they encourage innovative business models in the Canadian news marketplace.
  • They involve a range of news outlets that reflect the diversity of the Canadian news marketplace, including diversity with respect to language, race, Indigenous communities, local news, and business models.

These criteria are immensely important, because they are the primary regulatory mechanism of the Act.

The bill also provides a degree of transparency into the deals that the Australian code lacked. The CRTC must be provided with details of the deals in order to access exemptions and will issue an annual audit of the aggregated deals and their impact on the journalism market in Canada.

Mischaracterizations

While the Act seems to have public support, it has spurred debate among journalists, academics, politicians, publishers, and platforms.

As during the Australia debate, the claim that this bill will “break the internet” is pervasive. Conflating “the internet” with platforms like Google and Facebook propagates a narrative that platform lobbyists have been trying to craft for years. Platforms are intermediaries whose design shapes the way we experience much of the internet, and that is a deviation from the open web.

A related argument against the Act is that it imposes a “link tax” for hyperlinking to news articles. Google said, “This is what’s known as a ‘link tax’ and it fundamentally breaks the way search (and the internet) have always worked.”

But the term “tax” implies that the money will be collected by the government, which is not the case with the Online News Act. Deals are made between private entities.

More fundamentally, the bill doesn’t necessitate that deals between platforms and publishers ascribe value to links at all. It doesn’t specify how value is determined, only that use of news content be compensated.

Others have claimed that the bill threatens journalistic independence. But tech platforms like Google and Facebook have already signed deals with several publishers in Canada, for undisclosed sums of money, with no oversight or accountability.

Another concern, reflected in a recent statement from a coalition of independent Canadian publishers, is that the bill would disproportionately benefit legacy outlets, stifling innovation in Canadian journalism.

It’s true that in Australia, deals were at least initially skewed in favor of legacy media outlets like Rupert Murdoch’s News Corp. But the Canadian bill has evolved from the Australian model, and allows for small publishers to band together. Organizations can be added to collectives after deals are done. Deal reporting will ensure that they and the regulator know the broad terms of the deals others are getting. Most critically, the exemption criteria specifies that deals must be made with independent publishers.

Again, the status quo is important to consider. Currently, we have left it solely to the whims of the big tech platforms like Facebook and Google to pick the winners and losers in Canadian journalism – The Globe and Mail, Toronto Star, and Postmedia all have deals, the details of which are hidden from the public. The vast majority of independent publishers do not. Ensuring that smaller outlets are included in a system that currently largely leaves them out arguably levels the playing field.

The oft-repeated claim that this bill won’t fix the crisis facing journalism is, of course, true: there is no single policy silver bullet that will save the entire news industry. The decline of journalism and the hollowing out of newsrooms across the country are multi-faceted problems. The Act addresses one element of the issue.

What should change

There are indeed legitimate and substantive criticisms of the Act that in our view could be addressed in amendments.

For starters, the Act expressly prohibits platforms from giving undue preference to, or discriminating against, particular news content or news businesses, and it sets out a complaint mechanism through which publishers can seek redress. This is included to prevent a platform from retaliating against a news outlet over coverage it deems unfavorable. The problem, however, is that a strict and literal reading of the text could prohibit a platform from ranking higher-quality content, such as fact-based reporting or verified government information, above lower-quality content. The legislation would benefit from clearer wording in this section.

Another area of ambiguity is the inclusion criteria. To benefit from the Act, news businesses must be designated as Qualified Canadian Journalism Organizations under the Income Tax Act, or must operate in Canada, produce news content, and regularly employ two or more journalists in Canada.

As the coalition of independent publishers has pointed out, this could mean smaller players are left out of deals altogether. In our view, the bill should err on the side of being maximally inclusive. For example, the wording of the section could be amended to include freelance journalists.

However, in order to ensure a measure of quality control on those that are funded, the definition of eligible news business could be amended to ensure that outlets are adhering to basic journalistic standards — such as fact-based analysis and reporting and having a standard procedure for issuing corrections or clarifications — as well as producing original reported pieces.

Given that one of the key concerns with the Australian model was that it was overly opaque by design, the bill in Canada needs to do a better job of being as transparent as legally possible. Transparency requirements are peppered throughout the Act but could be improved by ensuring that the broad metrics the platforms use to determine the value of the deals are made available to the regulator. The Act could also require that aggregated, audited metric and market data be released at more frequent intervals, particularly in the early stages of enforcement, so that those making initial deals can benefit from knowledge of earlier negotiated terms.

Concerns regarding fairness and clarity over the funding formula for deals are also valid. Recently, the coalition of independent publishers suggested that the Act provide a universal funding formula that would be applied consistently to all qualifying news outlets. The challenge is that without collective bargaining and the threat of forced arbitration, it is unclear how the terms of compensation would be established.

Perhaps a better model was proposed by the trade association News Media Canada. It would form a collective of qualified Canadian journalism organizations that would each provide their editorial expenses (total salaries and wages paid to eligible newsroom employees) confidentially to a law firm. The collective would negotiate with the platforms, and any settlements from collective negotiation would be shared among publishers on a pro rata basis.
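To make the pro rata mechanics concrete, here is a minimal sketch of how such a settlement might be divided. All figures and publisher names are invented for illustration; neither the Act nor the News Media Canada proposal specifies numbers.

```python
# Hypothetical illustration of pro rata settlement sharing under a
# collective-bargaining model like the one News Media Canada proposed.
settlement = 100_000_000  # total negotiated settlement, in dollars (invented)

# Editorial expenses (eligible newsroom salaries and wages) reported
# confidentially by each member of the collective (invented values).
editorial_expenses = {
    "Publisher A": 40_000_000,
    "Publisher B": 15_000_000,
    "Publisher C": 5_000_000,
}

total_expenses = sum(editorial_expenses.values())

# Each publisher's share is proportional to its editorial spending.
for name, expenses in editorial_expenses.items():
    share = settlement * expenses / total_expenses
    print(f"{name}: ${share:,.0f}")
# Publisher A: $66,666,667
# Publisher B: $25,000,000
# Publisher C: $8,333,333
```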

This is a clear example of how collective bargaining can bring the accountability and equity that many critics of both the status quo and the bill rightly seek. 

The Act also needs to be explicit about where the extra revenue generated by these deals with the platforms will go. While the Act requires an annual report by an independent auditor examining expenditures on newsrooms, it should state explicitly that revenues are to be reallocated in ways that result in more and better public service journalism.

More broadly, proponents of the Act need to be mindful of the distinct possibility that it will result in Canadian news outlets receiving upwards of 50% of their editorial costs from a combination of government and platforms. This is the most concerning aspect of the Act, and it is not a sustainable model. It is particularly worrying because platforms and governments are two of the principal actors in our society that journalism needs to hold to account. However, the status quo must again be considered. Major publishers in Canada already get a good deal of support from a combination of government grants and deals with platforms, but with little democratic oversight and uneven distribution. This Act will at least provide a measure of equity and transparency to the funding from platforms.

A necessary evil?

There is no doubt that this bill is both complicated and controversial. Precisely because journalism is foundational to our democratic society, it is critical that we get it right. One thing is certain, however: the status quo is not serving citizens. We need greater accountability and transparency over the deals that are already funding Canadian journalism. While imperfect, an amended version of this bill is, in our view, necessary.

We saw how far the platforms were willing to go in Australia to avoid frameworks like this. Google publicly threatened to remove its search engine from Australia, and Facebook took down all news from its platform for Australians. In doing so, they won some concessions from the government, but they also created a public relations crisis that has spurred other governments, such as Canada’s, to act. And a meaningful consequence of their overreaction in Australia is that the Canadian bill has evolved considerably. The exemption criteria and the collective bargaining provisions alone will fundamentally change how the platforms can respond. Taking their ball and going home might have been possible in Australia, but it will not be possible if Canada, the U.K., and Germany all have codes that build and learn from each other. And that is the potential here: that democratic governments evolve their digital policy models based on one another’s experiences. This policy snowball effect is likely what worries the platforms most about the Canadian bill.

In our view, a market failure of journalism is not an acceptable risk for democratic societies. This means that journalism may, at least in the short term, need to be subsidized. While there are risks to this particular model, when considered as part of a wider policy package to support journalism in Canada, we think that at least for now, it is a risk worth taking. Perhaps more importantly, by taking many of the concerns of the Australian model seriously, this bill advances a policy approach that other countries can learn from and build on.

Taylor Owen is the Beaverbrook Chair in media, ethics, and communications and the director of the Centre for Media, Technology and Democracy at McGill University. Supriya Dwivedi is the director of policy and engagement there.

Photo of Canadian flag by Lori & Todd used under a Creative Commons license.

How one Italian newspaper put Facebook “on lockdown” for more than a year https://www.niemanlab.org/2022/07/how-one-italian-newspaper-put-facebook-on-lockdown-for-more-than-a-year/ Tue, 19 Jul 2022 13:00:03 +0000

Giornale di Brescia, one of Italy’s most popular local newspapers, quit Facebook in November 2020. It was not an easy decision: at that time, the paper’s Facebook page had more than 200,000 followers and drove almost 20% of the website’s traffic.

“Our [Facebook] comments section has always occasionally been filled with blatantly racist and sexist hate speech, just like other newspapers’ comment sections on social media,” said Nunzia Vallini, the newspaper’s editor-in-chief. During the pandemic, though, toxic behavior on the site’s Facebook page got much worse. (Giornale di Brescia does not have comments on its own website.)

Brescia and its province in Northern Italy have reported more than 5,000 deaths from Covid-19 since March 2020. “During the first Covid-19 wave, we felt as though we had to fend for ourselves. Our newspaper partnered with another foundation to raise funds for ventilators, masks, [resuscitation] beds, and other healthcare equipment. In a few weeks, we raised more than €18 million,” Vallini said.

As the second wave of the pandemic began to hit Italy in the autumn of 2020, disinformation spread too. “Covid deniers claimed we published fake data about Covid-19 deaths and cases. They insulted the courage and resilience of the doctors they had admired months earlier,” Vallini said.

When the newspaper announced on Facebook that Italian president Sergio Mattarella had visited a small cemetery near Brescia to pay tribute to the victims of Covid, trolls and Covid deniers from all over Italy attacked the Giornale’s post with comments that included death threats against the president. “The frustration and anger that could not be expressed in real life was being unloaded on the virtual squares,” Vallini said. (In May 2021, Italian military forces announced that they were investigating 11 citizens for threatening the president on social media. Three of the suspects were far-right militants who allegedly collaborated to retaliate against the government’s measures to counter the spread of the virus.)

In the last few years, international outlets have begun rethinking their comment sections, but most Italian newspapers have rarely questioned their “frenemy” relationship with social media platforms. The traffic is vital to their shaky advertising-based revenue model. Vallini’s decision to put Facebook “on lockdown” came two weeks after the president’s visit. “We don’t intend to collude with this sick game, nor trade our visibility, our history, or our style to gain more traffic [in a system that] rewards those who shout (and insult) the most,” she wrote in a letter to readers. “It may sound old-fashioned, but we prefer quality over quantity.”

With the paper’s Facebook page dormant, Vallini explored new ways to reach audiences. “After several local players decided to buy large subscription packages to the newspaper’s digital edition to show their support, it was clear that Brescia was with us,” she said.

Giornale di Brescia partnered with local events and associations to engage with its audience, organizing a cooking contest for amateur chefs, a padel tournament, and many workshops for secondary school students. “Our newspaper was founded in 1945, at the end of World War II, to bridge our city’s social divide during that difficult time. Embracing togetherness is part of our DNA,” Vallini said. “Plus, padel is trending right now, so we wanted to have our own Padel Cup!”

The paper made a concerted effort to increase its presence on Instagram (which, of course, is owned by Facebook parent company Meta), LinkedIn, and Twitter. Facebook’s Italian media partnerships team contacted the newsroom. “Perhaps they were just testing the waters, fearing our decision could create a snowball effect in the industry,” Vallini said. In March 2021, Facebook updated its moderation rules, allowing celebrities, politicians, brands, and news outlets to turn off comments on their posts. “We would like to think that we had something to do with this,” Vallini said.

Earlier this year, the newspaper eventually found the digital transformation specialist it needed. “When they decided to put Facebook ‘on lockdown’, I was working as public editor for another newspaper, and we encountered the same problem. At heart, I knew they had done the right thing,” said Anna Masera, Giornale di Brescia’s newly appointed deputy editor for digital strategy. “Some people ruin social media for everyone, and many of us feel we can’t do anything about it. But it’s not true.” She started preparing for Giornale di Brescia’s return to Facebook with a new social media policy.

With years of experience managing the social media channels of Italy’s Chamber of Deputies and the national newspaper La Stampa, Masera understands the importance of clarity in regaining readers’ trust. Giornale di Brescia’s first social media policy is a seven-point list stating that no illegal, defamatory, promotional, or irrelevant comments are allowed under Giornale’s Facebook posts. The policy clearly states that “social media moderators also have work schedules” and that “comments may not be moderated right away because they were posted outside of working hours.”

“Many people think that working shifts is a weakness, something journalists and newsrooms ought to be ashamed of. Transparency makes us more human, and the readers appreciate it,” Masera said. “No comment is so urgent that it cannot wait until our usual working hours.”

On April 26, with a new social media policy in place, Giornale resumed posting on Facebook. Readers welcomed its return. “It’s great to have you back on Facebook! Don’t be discouraged by haters, critics, racists, or self-proclaimed experts. Ban them if you need to!” wrote one reader.

“Thank you for considering that your audience isn’t just keyboard warriors, but mostly people interested in the city and its surroundings,” wrote another.

“Sometimes I think that quitting Facebook made us special and that we shouldn’t have returned to it,” Masera said. “But that’s where our audience is, so we need to be there, too.”

Roberta Cavaglià is an Italian freelance journalist who has written for publications including Linkiesta, Rolling Stone, and Valigia Blu.

Photo of Brescia, Italy by Gianni Belloni used under a Creative Commons license.

Facebook looks ready to divorce the news industry, and I doubt couples counseling will help https://www.niemanlab.org/2022/06/facebook-looks-ready-to-divorce-the-news-industry-and-i-doubt-couples-counseling-will-help/ Thu, 16 Jun 2022 18:52:24 +0000

Facebook will never officially file for divorce from the news business. The paperwork, the lawyers — yuck.

But let’s face it: They’ve been growing apart for years. They don’t share the same interests, they fight all the time, and what was once a fruitful partnership has devolved into gritted-teeth toleration. There was real love, once, long ago. But it’s probably best for everyone involved if they separate and start seeing other people.

It looks like that’s exactly what the social media giant is doing: getting as far away from the news business as it can.

Let’s look at two recent stories about Facebook. While they might seem to be about different things, there’s a connection. First, there’s this Wall Street Journal article from a few days ago, by Alexandra Bruell and Keach Hagey.[1]

Back in the day, news content was still seen as a valuable ingredient in the News Feed’s goulash. In 2015, Facebook was in the early stages of pivoting to video and it was working hard to lure publishers into providing it; soon it would begin shoveling millions of dollars at them to clinch the deal. (That all ended well, you may remember.) Even so, Facebook traffic to news sites was still going up, up, up, and publishers muddling toward their digital future were happy for the clicks. That summer, Facebook officially dethroned Google as the top driver of traffic to news sites.

But Trump’s candidacy brought a swarm of misinformation — “Fake news!” — to Facebook. Turned out that its algorithms were uniquely suited to made-up nonsense that enrages people. Facebook went from the place where you kept up with your high school friends to the place where you learned Hillary Clinton was offing FBI agents. There was plenty of misinformation on Facebook before Trump, of course, but his election turned it into a global issue. Facebook suddenly found itself being credited/blamed for electing a president.

The company wasn’t sure how to respond; Mark Zuckerberg initially called it a “pretty crazy idea” that his little mom-and-pop operation could’ve influenced the election. But within a few weeks, Facebook’s plan became clear: News is more trouble than it’s worth. Let’s get rid of it.

They’d already taken a few steps in that direction. In 2015, Facebook announced it would move friends-and-family content higher in News Feeds, demoting posts from publishers and other pages. In the heat of the 2016 campaign, it emphasized that “friends and family come first” is “the driving principle of News Feed today.”[3]

But things really ramped up after the election. Facebook traffic to publishers began declining rapidly; in just 16 months, Slate’s Facebook traffic dropped from 28 million to 3.6 million. Publishers started citing “unreliable” Facebook traffic when they announced layoffs. Some more Facebook-reliant operations shut down altogether. A Facebook exec told publishers directly: “We are not interested in talking to you about your traffic…That is the old world and there is no going back.”

Facebook ran a “downright Orwellian” experiment chopping news out of the News Feed entirely in six less-than-rock-solid democracies.[4] A year later, Facebook announced it was pushing even more news out of News Feed because it didn’t “spark [enough] conversations and meaningful interactions between people.” Zuckerberg complained that too often, “reading news or getting a page update is just a passive experience.” (You know, not the kind of true engagement that makes you want to click on a little thumbs-up icon or leave a “lol.”) The cuts kept coming: Last year, it cut back on the political content (whatever that means) in News Feed.

Facebook, once news publishing’s No. 1 source of traffic, lost that title back to Google in 2017; Google now sends roughly twice as many clicks to news sites as Facebook does.

And it’s not as if Facebook users are clamoring for news. This year’s Digital News Report, out earlier this week, included a question asking people around the world whether they thought a particular platform had too much news, not enough news, or just about the right amount of news, Goldilocks-style.

The platform that the most people said was too news-heavy? Facebook. In the U.K., 21% of Facebook users surveyed said there was “too much news” on it, versus just 3% who said it didn’t have enough. (55% said “just right,” and 20% couldn’t be bothered to have an opinion.) “Too much” numbers were similar around the English-speaking world: 22% in the U.S., 20% in Australia, and 20% in Canada.

So: Facebook doesn’t need news. It’s a tiny fraction of what people see on its platform, and many more of its users would rather see less of it than more. And yet news and news-like content generate a large share of its PR headaches and negative headlines.

Is it any surprise it’s on the verge of snuffing it out entirely?

To be fair, Google and Facebook have been writing publishers big checks for years now. The Google News Initiative and Facebook Journalism Project[5] have paid publishers around the world hundreds of millions of dollars. But Google and Facebook got to decide who to give it to, what to give it for, and how much to give.

Have they written all those checks out of the goodness of their hearts? No, it’s PR — an attempt to make publishers and their governments stop pushing to do something more severe.

But then Australia did something more severe. That country’s leaders passed a law that, functionally speaking, requires Google and Facebook to distribute bribes to Australian publishers. The size of those bribes is supposed to be a secret — but they have to be big enough to make those publishers happy.

Think that’s an ungenerous framing of Australia’s News Media Bargaining Code? Fine. Australia says it is merely requiring Google and Facebook to engage in “negotiations” with the country’s major publishers to determine the proper compensation they are due for…allowing their stories to reach many more people? The end result is that Google and Facebook have had to sit down with Aussie publishers and say: “Will…$20 million shut you up? $30 million? Okay, $50 million?” The negotiations are nonsense — in no way tethered to any real sense of “value” or “benefit,” stapled onto an obscure side-product no one uses rather than Google’s search and Facebook’s News Feed.

I have written repeatedly about why — despite my love for publishers getting money! — I think the Australian model is a bad idea. Maybe you agree, maybe you don’t.

But either way: It worked. Rupert Murdoch’s News Corp will now get checks from Google and Facebook each year worth about $50 million, just for its Australian outlets. That was the amount those companies thought it was worth to stop Murdoch’s decade-plus of complaining. The threat of government action — which would include seizing up to 10% of all of the platforms’ revenue in Australia — was enough to get this charade in motion.

The fact that it worked in Australia has inspired other countries to try to do the same. Canada will soon pass a version of Australia’s law. The U.K. will likely do the same, promising “Australia plus plus.” And while I still doubt it will pass, there’s a weaker bill in Congress that’s seeing “new bipartisan interest.”

Publishers sometimes think of Google and Facebook as interchangeable piles of money. But they have different sets of interests. Google talked plenty tough about the Australia bill as it advanced, but it was Facebook that was willing to actually pull the plug on Australian news on its platforms.

So it shouldn’t be surprising that it’s Facebook that plans to just…stop writing checks. It’s hit a little revenue bump and needs to cut costs. It has handed out hundreds of millions of dollars in order to shut up publishers, and now publishers in countries a lot bigger than Australia think they’ve figured out how to force it to hand out more — a lot more. If the checks didn’t work…why keep writing them?

And if you’re planning to stop writing them, why shouldn’t you go all in and scrunch the news content on your platform down to a minimum? Facebook was born on a web browser, 18 years ago, which meant that it was at some level built around linking. That made Facebook an incredibly powerful driver of traffic. More Facebook usage meant more people clicking links, which meant more pageviews for everyone.

TikTok, meanwhile, was born on a phone, five years ago, which means old web concepts like “sending traffic” are meaningless. TikTok’s goal is not to send traffic (i.e., your attention) anywhere: It’s to keep you swiping through videos on TikTok. TikTok was literally the most-used thing on the internet last year — topping even Google and Facebook. But do you see it anywhere on the list of top traffic generators for news sites? Nope.

So becoming more like TikTok is a win-win for Facebook. It helps it compete with its biggest rival. And — much less importantly for Facebook, much more importantly for news — it makes it easier to stop writing checks to publishers. “Sorry, guys, we’re just pivoting away from news. Best of luck! Come check out our Reels sometime!”

Divorces are hard for everyone involved. But with time, you can end up happier apart than you were together. Let’s be real for a minute: It’s always been weird that Facebook — essentially a database of everyone you know, mashed up with wizardly ad targeting — was a huge driver of attention to news. Google? Google’s where you look for information — it makes sense for news to be there. But the app for baby photos, silly memes, graduation announcements, and stalking your ex? It was a weird match from Day 1. It’s time for everyone to move on.

  1. The Information reported something similar with less detail last month.
  2. Is it just me, or are “disappointed” and “enthusiasm” oddly personal, even emotional responses here for a CEO?
  3. More than a little funny that the driving principle of News Feed will now apparently be “friends and family come second, right after this 8-second video from some teen you’ve never met.”
  4. Sri Lanka, Guatemala, Bolivia, Cambodia, Serbia, and Slovakia.
  5. Sorry, still not Meta here.
How corporate takeovers are fundamentally changing podcasting https://www.niemanlab.org/2022/05/how-corporate-takeovers-are-fundamentally-changing-podcasting/ Thu, 19 May 2022 13:00:28 +0000

At first glance, it may seem as though Big Tech can’t figure out how to make money off its foray into podcasting.

In early May 2022, Meta announced that it was abruptly ending Facebook’s podcast integration less than a year after it launched. Facebook had offered podcasters the ability to upload their shows to the social media site. Meanwhile, Spotify’s own expensive gamble on podcast integration within its music streaming service hasn’t resulted in the surge of new listeners it had hoped for.

And what about the emergence of social audio platforms like Clubhouse that promised to re-imagine podcasting as live audio chatrooms hosted by celebrities and public figures?

After its meteoric rise in 2021 during the height of the global pandemic, Clubhouse has seen major declines in app installs, in part because of the rise in competing services like Twitter Spaces and Spotify Live.

Amid all this corporate turmoil, it’s tempting to conclude that online tech companies are moving on from podcasting in search of higher profit margins elsewhere.

But these realignments belie a bigger truth: Platforms have already reshaped podcasting in fundamental ways, and they will play an outsized role in its future.

An open medium collides with Big Tech

Podcasting, which has been around for only two decades, has a unique, decentralized infrastructure.

Podcasting’s audio files are accessible via a simple 2000-era technology known as RSS, short for “Really Simple Syndication.” Thanks to the openness of RSS — it is a nonproprietary distribution mechanism that cannot be controlled by anyone — podcasting has remained a thriving creative ecosystem. Once you upload an audio file and connect it to an RSS feed, any podcatching software or app can find it and download it.
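To illustrate that openness, here is a minimal sketch of what any podcatcher does under the hood, using only Python’s standard library; the feed URL is a placeholder rather than a real show.

```python
# Minimal sketch of how any client can discover episodes from an open
# podcast RSS feed. The feed URL is a hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Podcast feeds list episodes as <item> elements; the audio file itself
# is referenced by the url attribute of the <enclosure> tag.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled)")
    enclosure = item.find("enclosure")
    if enclosure is not None:
        print(title, "->", enclosure.get("url"))
```

Because nothing in that exchange requires permission from a central gatekeeper, any app that speaks RSS can subscribe to any show.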

The first decade of podcasting’s existence was characterized by steady, if unspectacular, growth. In 2006, for example, only 22% of Americans had even heard of podcasting. That percentage sits at 79% today.

After 2014, however, this slow and steady rise was turbocharged by a staggering wave of corporate takeovers.

In 2019 I argued in the academic journal Social Media & Society that podcasting was undergoing the process of “platformization,” thanks to the increasingly central role of digital platforms like Spotify, Google and Amazon in the medium’s development. Spotify alone has spent over US$1 billion on podcast acquisitions. Other big radio and tech companies have also made significant acquisitions in the past three years, reshaping the industry in the process.

Openness, however, is anathema to digital platforms, which are intentionally structured as walled gardens that restrict access. They make money when users pay for access to content and services — and that, of course, works only when the content isn’t available elsewhere.

One of the recent shifts in podcasting has been the introduction of paywalls and exclusive content, which have since become standard features of the medium.

Most notably, in May 2020 Spotify signed an exclusive deal with Joe Rogan, the most popular podcaster, one that was reportedly valued at $200 million. All of Rogan’s new episodes — and even his entire back catalog — are now available only on Spotify, leading RSS and podcasting pioneer Dave Winer to argue that his show is in fact no longer a podcast.

Other eye-popping exclusivity deals have included Spotify’s 2021 $60 million deal for “Call Her Daddy,” the popular advice and comedy podcast created by Alexandra Cooper and Sofia Franklyn in 2018. Even podcast pioneer Roman Mars sold the exclusive rights to produce and distribute his longtime show “99% Invisible” to radio giant SiriusXM, though the podcast will remain freely available on all platforms for the time being.

The importance of podcast IP

For Spotify, securing popular podcasts to exclusive distribution deals is all about increasing the number of users on its platform. But podcasts with dedicated followings are also emerging as coveted forms of intellectual property.

Podcast production studio Wondery, for example, aggressively pursued cross-licensing deals for its original audio dramas, which include “Dr. Death,” “Dirty John,” and “Gladiator.” All have appeared or will appear as television series.

The value of these creative properties made Wondery an attractive acquisition target for Amazon, which paid $300 million for it in late 2020.

The content pipeline from podcasting to television and feature films is now well established, thanks in large part to the entry of traditional entertainment talent agencies into podcasting.

New podcasts with bankable Hollywood talent now launch as part of multimedia deals that include books, made-for-TV dramas or documentaries. Meanwhile, podcast networks are shifting their production strategies, aiming to land celebrities with built-in audiences for exclusive content licensing deals.

This is a marked shift from the DIY grassroots content that has been a hallmark of podcasting.

Ad tech is coming for podcasting

Platforms are also changing the way podcast audiences are measured. RSS was designed to efficiently and anonymously distribute audio files, but not to track who was downloading those files or if they were actually being listened to.

Digital platforms, on the other hand, function as sophisticated surveillance machines. They know who is listening to a podcast — which allows for specific demographic and psychographic targeting — and how much of that podcast is being consumed. Companies can also track listeners’ consumption of other media on the platform. Advertisers increasingly expect that their podcast ad buys will allow for accountability and attribution.

While they didn’t get much media attention, Spotify’s recent acquisitions of Chartable and PodSights — two important podcast analytics firms — are indicative of this arms race for user data.

There are broader issues at stake here, and not just the concentration of advertising revenue into the hands of the big platforms. The commodification of podcast listener data has privacy implications as well, which is something that the industry itself is beginning to acknowledge.

A tale of two media

What do these shifts portend for podcasting’s third decade?

The story of podcasting has really become a story of two divergent media.

On the one hand, the traditional, scrappy, upstart version of podcasting will survive thanks to the open architecture of RSS. Podcasting still has relatively low barriers to entry compared with other media, and this will continue to encourage independent producers and amateurs to create new shows, often with hyperniche content. Crowdfunding sites like Patreon and Buy Me a Coffee allow creators to make money off their content on their own terms.

But grassroots podcasting will find itself competing with the professionalized, platform-dominated version of the medium that’s hit-driven and slickly produced, with cross-media tie-ins and big budgets.

As companies like Spotify, Amazon, NPR, SiriusXM and iHeartMedia aggressively monetize and market exclusive podcast content on their platforms, they’ve positioned themselves as the new gatekeepers with the keys to an ever-expanding global audience.

Independent podcasting isn’t going away. But with the promotional power concentrated in the hands of the very biggest tech firms, it will be increasingly challenging for those smaller players to find listeners.

John Sullivan is a professor of media and communication at Muhlenberg College. This article is republished from The Conversation under a Creative Commons license.

Photo of podcasting setup by Will Francis is being used under an Unsplash License.

Facebook promised to remove “sensitive” ads. Here’s what it left behind. https://www.niemanlab.org/2022/05/facebook-promised-to-remove-sensitive-ads-heres-what-it-left-behind/ Wed, 18 May 2022 14:18:38 +0000

Late last year, after facing years of criticism for its practices, Facebook announced a change to its multibillion-dollar advertising system: Companies buying ads would no longer be able to target people based on interest categories like race, religion, health conditions, politics, or sexual orientation.

More than three months after the change purportedly went into effect, however, The Markup has found that such ad targeting is very much still available on Facebook’s platform. Some obvious ad categories have indeed been removed, like “young conservatives,” “Rachel Maddow,” “Hispanic culture,” and “Hinduism” — all categories we found as options on the platform back in early January but that have since disappeared. However, other obvious proxies for race, religion, health conditions, and sexual orientation remain.

As far back as 2018, CEO Mark Zuckerberg told Congress the company had “removed the ability to exclude ethnic groups and other sensitive categories from ad targeting. So that just isn’t a feature that’s even available anymore.”

The Markup found, however, that while “Hispanic culture” was removed, for example, “Spanish language” was not. “Tea Party Patriots” was removed, but “Tea party” and “The Tea Party” were still available. “Social equality” and “Social justice” are gone, but advertisers could still target “Social movement” and “Social change.”

Starbucks, for example, was still able to use existing options after the change to place an ad for its pistachio latte focused on users interested in “Contemporary R&B,” “telenovela,” “Spanish language,” and “K-pop,” all proxies for Black, Latino, and Asian audiences on Facebook.

Facebook hasn’t explained how it determines what advertising options are “sensitive” and, in response to questions from The Markup, declined to detail how it makes those determinations. But in the days after The Markup reached out to Facebook for comment, several more potentially sensitive ad-targeting options we flagged were removed by the company.

“The removal of sensitive targeting options is an ongoing process, and we constantly review available options to ensure they match people’s evolving expectation of how advertisers may reach them on our platform,” Dale Hogan, a spokesperson for Facebook parent company Meta, said in a statement. “If we uncover additional options that we deem as sensitive, we will remove them.”

Facebook’s ad targeting system is the not-so-secret key to the company’s massive financial success. By tracking users’ interests online, the company promises, advertisers can find the people most likely to pay for their products and services and show ads directly to them.

But the company has faced blowback for offering advertisers “interest” categories that speak to more fundamental — and sometimes highly personal — details about a user. Those interests can be used in surprising ways to discriminate, from excluding people of color from housing ads to fueling political polarization to tracking users with specific illnesses.

Facebook’s critics say the company has had ample opportunity to fix the problems with its advertising system, and trying to repair the platform by removing individual “sensitive” terms masks an underlying problem: The company’s platform might simply be too large and unwieldy to fix without more fundamental changes. 

Removing a handful of terms it deems sensitive just isn’t enough, according to Aleksandra Korolova, an assistant professor of computer science at the University of Southern California.

“It’s obvious to everyone who is in the field that it’s not a complete solution,” she said.

Clear proxies for removed terms are still on Facebook

The Markup gathered a list of potentially sensitive terms starting in late October, before Facebook removed any interest-targeting options. The data was gathered through Citizen Browser, a project in which The Markup receives Facebook data from a national panel of Facebook users.

We also gathered a list of terms that Facebook’s tools recommended to advertisers when they entered a potentially sensitive term — the company suggested “BET” and “Essence (magazine)” when advertisers searched for “African American culture,” for example.

Then, also using Facebook’s tools, we calculated how similar terms were to their suggestions by viewing how many users the ads were estimated to reach, which Facebook calls an “audience.” (See the details of our analysis on Github.)
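The overlap figures cited below can be reproduced with simple arithmetic. This sketch shows one plausible form of the calculation, using invented audience estimates rather than The Markup’s actual data; the real methodology is documented on its Github.

```python
# Hypothetical sketch of the audience-overlap measure. Facebook's ad
# tools report an estimated audience size for any combination of
# interests, so the overlap between a removed term and a surviving one
# can be expressed as the share of one audience tagged with both.
# These numbers are invented for illustration.
audience_removed_term = 12_000_000  # users interested in the removed term
audience_both_terms = 11_900_000    # users tagged with both interests

overlap = audience_both_terms / audience_removed_term
print(f"Overlap: {overlap:.0%}")    # Overlap: 99%
```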

To find the gaps in Facebook’s cleanup process, we then searched those terms again in Facebook’s public advertising tools at the end of January to see which ones the company had removed following its change.

In some cases, we found, options still available reached almost exactly the same users as options that were removed. “BET,” the acronym for Black Entertainment Television, was removed, but “BET Hip Hop Awards,” which was previously recommended with BET and had a 99 percent overlap in audiences, was still available.

“Gay pride” was also removed as an option, but by using the term “RuPaul’s Drag Race,” advertisers could still reach more than 13 million of the same users.

These proxies weren’t just theoretically available to advertisers on Facebook. The Markup found companies actively using them to target ads to people on the social network. Using Citizen Browser, we found several examples of proxies for race and political affiliation used for targeting.

Ancestry, the genealogy service, for example, targeted ads using the terms “telenovela,” “BET Hip Hop Awards,” “African culture,” and “Afrobeat.”

Facebook removed “Fox News Channel” as a targeting option that could reach conservative users, but we saw the conservative satire website The Babylon Bee targeting an ad ridiculing Anthony Fauci using the then-still-available interest category “Judge Jeanine Pirro,” a Fox News personality.

Before Fox News Channel was removed, we found 86% of users tagged with an interest in Judge Jeanine Pirro were also tagged with an interest in the cable news network.

Facebook also failed to fully eliminate targeting based on medical conditions, we found. “Autism Awareness” was removed, but “Epidemiology of autism” was still available. “Diabetes mellitus awareness” was removed, but the closely related “Sugar substitute” wasn’t. We found an ad from Medtronic, a medical device company, using that term to promote a diabetes management insulin pen on Facebook.

Even Facebook itself has used the proxies. We found an ad placed by the company promoting its groups to users interested in “Vibe (magazine),” a stand-in for removed terms that target Black audiences.

Starbucks, Ancestry, and the Babylon Bee didn’t respond to requests for comment on their ad-targeting practices. Pamela Reese, a spokesperson for Medtronic, said the company has stopped using “Sugar substitute” as a targeting option and that Medtronic is “well within” FDA regulations for advertising medical devices.

The Markup provided several examples of these potential proxy terms to Facebook, including “telenovela,” “BET Hip Hop Awards,” “RuPaul’s Drag Race,” and “Judge Jeanine Pirro.” They were quietly removed after our request for comment was sent.

Critics of Facebook like Korolova say Facebook has a track record of promising to implement meaningful changes on its advertising platform only to fall short of the pledge. Research has shown problems with advertising “proxies” for years, and Facebook could have taken stronger action to fix the problems, she argues.

“If they wanted to, they could do better,” Korolova said.

Facebook says its recent changes were needed to prevent abuse, but some organizations that say they use Facebook for social good have complained that the new policies put up barriers to their work. Climate activists and medical researchers have complained that the changes have limited their ability to reach a relevant audience.

Daniel Carr, a recruitment consultant for SMASH Labs, a medical research group that uses Facebook to recruit gay and bisexual men for studies, said the recent changes forced them to switch from terms like “LGBT culture” to pop culture references like “RuPaul’s Drag Race.” Carr said study recruitment was steady, but the change didn’t sit right with them.

“It’s made it more complicated on our side, and it’s not actually changed anything, other than Facebook can now say, ‘We don’t allow you to target by these things,’ ” Carr said. “It’s a political move, if anything.”

Angie Waller manages The Markup’s Citizen Browser project, its custom application that monitors what is being algorithmically broadcast to paid panels of Facebook users in the U.S. and Germany. Colin Lecher is a reporter at The Markup.

Header illustration by Gabriel Hongsdusit is being republished with permission from The Markup.

Why researchers want broader access to social media data https://www.niemanlab.org/2022/05/why-researchers-want-broader-access-to-social-media-data/ Wed, 04 May 2022 14:00:04 +0000

Within days of Russia’s recent invasion of Ukraine, several social media companies took steps to reduce the circulation of Russian state-backed media and anti-Ukrainian propaganda. Meta (formerly Facebook), for example, said it took down about 40 accounts, part of a larger network that had already spread across Facebook, Instagram, Twitter, YouTube, Telegram, and Russian social media. The accounts used fake personas, replete with profile pictures likely generated with artificial intelligence, posing as news editors, engineers, and scientists in Kyiv. The people behind the network also created phony news websites that portrayed Ukraine as a failed state betrayed by the West.

Disinformation campaigns have become pervasive in the vast realm of social media. Will technology companies’ recent efforts to combat propaganda be effective? Because outsiders are not privy to most of the inner workings of the handful of companies that run the digital world — the details of where information originates, how it spreads, and how it affects the real world — it’s hard to know.

Joshua Tucker directs New York University’s Jordan Center for the Advanced Study of Russia and co-directs the school’s Center for Social Media and Politics. When we spoke in mid-March, he had just come from a meeting with colleagues strategizing how to trace the spread of Russian state narratives in Western media. But that investigation — most of his research, in fact — is hampered because, in the name of protecting user privacy and intellectual property, social media companies do not share all the details of the algorithms they use to manipulate what you see when you enter their world, nor most of the data they collect while you’re there.

The stakes for understanding how that manipulated world affects individuals and society have never been higher. In recent years, journalists, researchers, and even company insiders have accused platforms of allowing hate speech and extremism to flourish, particularly on the far right. Last October, Frances Haugen, a former product manager at Facebook, testified before a U.S. Senate Committee that the company prioritizes profits over safety. “The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world,” she said in her opening remarks. “In some cases, this dangerous online talk has led to actual violence that harms and even kills people.”

In the United States, answers about whether Facebook and Instagram impacted the 2020 election and Jan. 6 insurrection may come from a project Tucker co-directs involving a collaboration between Meta and 16 additional outside researchers. It’s research that, for now, couldn’t be done any other way, said project member Deen Freelon, an associate professor at the Hussman School of Journalism and Media at the University of North Carolina. “But it absolutely is not independent research because the Facebook researchers are holding our hands metaphorically in terms of what we can and can’t do.”

Tucker and Freelon are among scores of researchers and journalists calling for greater access to social media data, even if that requires new laws that would incentivize or force companies to share information. Questions about whether, say, Instagram worsens body image issues for teenage girls or YouTube sucks people into conspiracies may only be satisfactorily answered by outsiders. “Facilitating more independent research will allow the inquiry to go to the places it needs to go, even if that ends up making the company look bad in some instances,” said Freelon, who is also a principal researcher at the University of North Carolina’s Center for Information, Technology, and Public Life.

For now, a handful of giant for-profit companies control how much the public knows about what goes on in the digital world, said Tucker. While the companies can initiate cool research collaborations, he said, they can also shut those collaborations down at any time. “Always, always, always you are at the whim of the platforms,” he said. When it comes to data access, he added, “this is not where we want to be as a society.”

Tucker recalled the early days of social media research about a decade ago as brimming with promise. The new type of communication generated a treasure trove of information to mine for answers about human thoughts and behavior. But that initial excitement has faded a bit as Twitter turned out to be the only company consistently open to data sharing. As a result, studies of that platform dominate research even though Twitter has far fewer users than most other networks. And even this research has limitations, said Tucker. He can’t find out the number of people who see a tweet, for example, information he needs to more accurately gauge impact.

He rattled off a list of the other information he can’t get to. “We don’t know what YouTube is recommending to people,” he said. TikTok, owned by the Chinese technology company ByteDance, is notoriously closed to research, although it shares more of users’ information with outside companies than any other major platform, according to a recent analysis by the mobile marketing company URL Genius. The world’s most popular social network, Facebook, makes very little data public, said Tucker. The company’s free tool CrowdTangle allows you to track public posts, for example. But you still can’t find out the number of people who see a post or read comments, nor glean precise demographic information.

In a phone call and email with me, Meta spokesperson Mavis Jones contested that characterization of the company, stating that Meta actually supplies more research data than most of its competitors. As evidence of the commitment to transparency, she pointed out that Meta recently consolidated data-sharing efforts into one group focused on the independent study of social issues.

To access social media data, researchers and journalists have gotten creative. Emily Chen, a computer-science graduate student at the University of Southern California, said that researchers may resort to using a computer program to harvest large amounts of publicly available information from a website or app, a process called scraping. Scraping data without permission typically violates companies’ terms of service, and the legalities of this approach are still tied up in the courts. “As researchers, we’re forced to kind of reckon with the question of whether or not our research questions are important enough for us to cross into this gray area,” said Chen.
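As a concrete picture of what scraping involves, here is a minimal sketch that fetches a public page and collects every link on it. The URL is a placeholder, not a real research target, and, as Chen notes above, whether scraping a given site is permitted depends on its terms of service.

```python
# Minimal scraping sketch: download a public page and list the links
# it contains, using only Python's standard library.
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every anchor tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

with urllib.request.urlopen("https://example.com/") as response:
    html = response.read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(html)
print(collector.links[:10])  # first ten links found on the page
```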

Researchers often get away with scraping, but platforms can potentially shut them out at any time. However, Meta told me that scraping without permission is strictly against corporate policy. And, indeed, last summer Meta went so far as to disable the accounts of researchers in New York University’s Ad Observatory project, which had been collecting Facebook data to study political ads.

Another approach is to look over the shoulder of social media users. In 2020, The Markup, a nonprofit newsroom covering technology, announced the launch of its Citizen Browser Project. The news outlet paid a nationally representative sample of 1,200 adults to install a custom-made browser on their desktop computers. The browser periodically gathers information from people’s Facebook feeds — with personally identifiable data being removed.

“We are like photojournalists on the street of algorithm city,” joked Surya Mattu, the investigative data journalist who developed the browser. “We’re just trying to capture what’s actually happening and trying to find a way to talk about it.” The browser’s snapshots reveal a digital world where news and recommendations look entirely different depending on your political affiliation and where — contrary to Facebook’s assertions — many people see extremist and sensationalist content more often than they would when viewing mainstream sources. (To see what’s currently trending on Facebook according to The Markup’s data, go to @citizenbrowser on Twitter.)
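The Markup has not published its collection pipeline in this form, but conceptually the privacy step described above amounts to an allowlist: keep only non-identifying fields from each captured post before storing it. Here is a minimal sketch with invented field names.

```python
# Hypothetical sketch of stripping personally identifiable fields from
# a collected feed snapshot before storage. Field names are invented;
# this is not The Markup's actual code.
ALLOWED_FIELDS = {"page_name", "text", "timestamp"}

def scrub(post: dict) -> dict:
    """Keep only fields on the allowlist; drop everything else."""
    return {key: value for key, value in post.items() if key in ALLOWED_FIELDS}

raw_post = {
    "author_name": "Jane Doe",                             # PII: dropped
    "author_profile_url": "https://example.com/jane.doe",  # PII: dropped
    "page_name": "Example News Page",                      # public page: kept
    "text": "Story headline and blurb",                    # content: kept
    "timestamp": "2022-05-01T12:00:00Z",                   # kept
}

print(scrub(raw_post))
# {'page_name': 'Example News Page', 'text': 'Story headline and blurb',
#  'timestamp': '2022-05-01T12:00:00Z'}
```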

What could journalists and social scientists shed light on if they had a better view of the digital world? In a commentary published in December 2020 in the journal Harvard Kennedy School Misinformation Review, 43 researchers submitted 15 hypothetical projects for tackling disinformation that they could pursue “if social media data were more readily available.” Freelon’s group detailed how they could determine the origin of misleading stories (currently nearly impossible), identify which platforms play the biggest role in spreading misinformation, and determine which correction strategies work with which audiences.

An engineer by training, The Markup’s Mattu would like to see social media data work more like open-source software, which allows users to view the code, add features, and fix problems. For something operating at the scale of Facebook, there’s no way one group of people can figure out how it will work across all ecosystems, contexts, and cultures, he said. Social media companies need to be transparent about issues such as algorithms that may wind up prioritizing extreme content or advertisers that find ways to target people by race. “We should accept that these kinds of problems are a feature, not a bug of social networks that exist at the scale of 2 billion people,” he said. “And we should be able to talk about them in a more honest way.”

Chen also sees collaboration as the way forward. She envisions a world where researchers can simply ask for data rather than resorting to gray areas such as scraping. “One of the best ways to tackle the current information warfare that we’re seeing is for institutions, whether corporate or academic, to really work together,” she said.

One of the main barriers to greater access is protecting users’ privacy. Social media users are reasonably concerned that outsiders might get their hands on sensitive information and use it for theft or fraud. And, for many reasons, people expect information shared in private accounts to stay private. Freelon is working on a study looking at racism, incivility, voter suppression, and other “anti-normative” content that could damage someone’s reputation if made public. To protect users’ privacy, he doesn’t have access to the algorithms that could identify those people. “Now, personally, do I care if racist people get hurt?” he said. “No. But I can certainly see how Facebook might be concerned about something like that.”

Current laws are set up to preserve the privacy of people’s data, not to facilitate research that informs society about the impact of social media. The U.S. Congress is currently considering bipartisan legislation such as the Kids Online Safety Act, the Social Media DATA Act, and the Platform Accountability and Transparency Act, which would compel social media companies to provide more data for research while maintaining provisions to protect user privacy. PATA, for example, mandates the creation of privacy and cybersecurity standards and protects participating academics, journalists, and companies from legal action due to privacy breaches.

In the book Social Media and Democracy, co-editors Tucker and Stanford law professor Nathaniel Persily write: “The need for real-time production of rigorous, policy-relevant scientific research on the effects of new technology on political communication has never been more urgent.”

It’s been an eventful couple of years since the book was published in 2020. Is the need for solid research even more pressing? I asked Tucker.

Yes, he answered, but that urgency stems not just from the need to understand how social media affects us, but also to ensure that protective actions we take do more good than harm. In the absence of comprehensive data, all of us — citizens, journalists, pundits, and policy makers — are crafting narratives about the impact of social media that may be based on incomplete, sometimes erroneous information, said Tucker.

While social media has given voice to hate speech, extremism, and fake news around the world, Tucker’s research reveals that our assumptions about how and where that happens aren’t always correct. For example, his team is currently observing people’s YouTube searches and finding that, contrary to popular belief, the platform doesn’t invariably lead users down rabbit holes of extremism. His research has also shown that echo chambers and the sharing of fake news are not as pervasive as commonly thought. And in a study published in January 2021, Tucker and colleagues found spikes of hate speech and white nationalist rhetoric on Twitter during Donald Trump’s 2016 presidential campaign and its immediate aftermath, but no persistent increase of hateful language or that particular stripe of extremism.

“If we’re going to make policy based on these kinds of received wisdom, we really, really have to know whether or not these received wisdoms are correct or not,” said Tucker.

More and better research depends on social media companies creating a window for outsiders to peer inside. “We figure out ways to make do with what we can get access to, and that’s the creativity of being a scholar in this field,” said Tucker. “Society would be better served if we were able to work on what we thought were the most interesting research questions.”

Teresa Carr is a Texas-based investigative journalist and the author of Undark’s Matters of Fact column. This article was originally published on Undark.

Photo by Ev on Unsplash.

How can publishers respond to the power of platforms? https://www.niemanlab.org/2022/04/how-can-publishers-respond-to-the-power-of-platforms/ Wed, 27 Apr 2022 11:45:24 +0000

The following essay is adapted from The Power of Platforms: Shaping Media and Society by Rasmus Kleis Nielsen and Sarah Anne Ganter, which was recently published by Oxford University Press. It’s reproduced here with permission.

Large technology companies such as Facebook and Google — in competition with a few others including Amazon, Apple, Microsoft, and a handful of companies elsewhere — increasingly define the way the internet works and thereby influence the structure of the entire digital media environment.

But how do they exercise this power, how have news organizations responded, and what does this development mean for the production and circulation of news? These are the questions we focus on in our new book.

Our primary objective is to understand the relationship between publishers and platforms, how these relationships have evolved over time, how they play out between different publishers and different platforms, how they differ across countries, and what this wider development — where news organizations become simultaneously empowered by and more dependent on technology companies — means for news specifically and our societies more broadly.

The analysis is based on interviews with more than 50 people working across a range of publishers and platforms in the United States, France, Germany, and the United Kingdom as well as background conversations and observations at scores of industry events and private meetings. We trace the development of the relationship between publishers and platforms over the last decade and focus in particular on the rapid changes from 2015 onward.

Beyond “frenemies”

Despite 20 years of often difficult relations, a clear recognition of the “frenemy” dynamic at play, and the reality of intensifying competition for attention, advertising, and consumers’ cash, many publishers still actively seek to collaborate with platform companies. The vast majority continue to invest in platform products and services even when they’re not offered opportunities to collaborate directly.

Here’s how the director of strategic initiatives at a major U.S. newspaper aspiring to join the inner circle of “platform darlings” described the process of actively seeking collaboration with companies that he explicitly recognizes as major competitors for attention and advertising: “We did a lot of begging. We promised to be completely committed to whatever you ask, as long as you ask.” He explained: “We may not like them, but they have been absolutely essential in expanding our reach and building our digital business.”

Going forward, individual publishers have a series of important choices about how to structure their interactions with platforms.

(1) What balance do they seek between onsite and offsite reach? How can the two complement each other while minimizing the risk of cannibalization?

(2) What is the core business model, including the balance between advertising, reader revenue, and other sources? Which combination of platform partners is most likely to enable that business model?

Finally, the platforms are here to stay, and their basic offer of reach in return for content is clear, but everything else is likely to continue to change. So: (3) How can publishers continuously assess the material and immaterial benefits of their investments in platforms and ensure that they are able to adapt to constant change, without locking in on the (all too often mistaken) assumption that a particular platform opportunity or specific platform product is here to stay?

Every publisher will need to think through what reality-based beneficial relationships with various platforms — based on the solid ground of mutual self-interest, not hopeful dreams or empty promises — can look like. Perhaps it is time to leave behind the somewhat moralizing terminology of friends, enemies, and “frenemies,” lest it get in the way of clear-eyed analysis. Has anyone ever really been “friends” with a billion-dollar corporation?

What comes next?

While there is an increasingly lively policy debate around platforms, it is clear that the regulatory road ahead is long, slow, and uncertain.

Publishers, at least in Europe, have often ultimately secured political support for much of what they asked politicians for, but getting policies passed (let alone implemented) takes years, and the concrete benefits have often fallen far short of what publishers hoped for.

The CEO of a major U.S. newspaper company said: “We plan our strategy with two assumptions. The first is that in the future, we will have no print profits. The second is that the regulatory environment will stay roughly the same.” He added: “Even if we did see, for example, antitrust action against the platforms, it would take years, probably decades, and in the end might not really benefit us. So we focus on the things we can control.”

The “things we can control” are the decisions that publishers themselves make, individually and perhaps together. These decisions are shaped by the power of platforms and many other forces, but the decisions still matter. A growing number of individual news publishers around the world are demonstrating that while the industry as a whole has continued to decline, shrink, and struggle to adapt to a changing media environment, some have managed to develop editorially and technologically compelling offers and build sustainable, even growing, businesses.

Globally recognized brands like The New York Times are the most prominent examples of this, though given how unusual its position is, the arguably more important examples are the growing number of smaller organizations that are succeeding, whether legacy newspapers like the upmarket Dagens Nyheter, the popular VG, or local news publisher AMedia, or digital-born brands like the upmarket MediaPart, the widely read El Diario.es, the popular Brut, and the local Lincolnite.

Corporatist, complementary, and collaborative approaches to platforms

Individual corporate strategies and possibly public policy interventions aside, it is possible to imagine some publishers, or even groups of publishers, trying to forge different paths ahead. Three paths that seem possible include corporatist approaches, complementary approaches, and collaborative approaches.

First, publishers have repeatedly tried corporatist approaches to platforms, trying to present a joint front to get more leverage and negotiate more favorable terms of trade with platforms. Some U.S. newspapers explored this in 2009 under the aegis of the Newspaper Association of America. Their French counterparts did the same through SPQN, as did a group of German publishers through VG Media. The American attempt came to nothing, the French initiative resulted in a modest settlement, and the German group ultimately granted Google free licenses to use their content.

Each case illustrates how attempts to act collectively have foundered. Most publishers are loath to surrender the very real short-term benefits of collaborating with platforms. Some will always refuse to join collective action because they have very clear incentives for going it alone. And competition authorities are skeptical of what could look like cartels.

But the idea lives on. In the United States, the News Media Alliance, which represents 2,000 news publishers, has been lobbying for legislation to provide a temporary antitrust exemption for news publishers to negotiate collectively with platforms like Google and Facebook. In Europe, some of France and Germany’s major publishers are trying to close ranks in a fight with Google over the platform’s response to the European Union Online Copyright Directive.

South Korea is the main example of an enduring corporatist approach to platforms. There, the dominant platform companies Naver and Daum work with the “Committee for the Evaluation of News Partnership” (whose members are recommended by the Korean Newspapers Association and the Korean Broadcasters Association, among others) to identify privileged partners. Out of many thousands of South Korean online publishers, several hundred are recognized by the Committee, and about a hundred have been paid licensing fees that, in total, amounted to tens of millions of dollars a year, primarily to the biggest publishers.

Second, more publishers might go all-out on the opportunities that come with primarily being complementors to very large platforms, investing in a portfolio of platform opportunities in search of distributed reach, and entirely avoiding head-on competition and attempts to build up direct relations with readers.

With pivots back to websites and apps, even prominent distributed publishers like BuzzFeed missing their business goals, and the odd product change to remind everybody that what platform companies give, they can take away, this is clearly a risky strategy. And just as almost no publisher focuses exclusively on on-site distribution, exclusively off-site approaches are rare.

In particular, the strongest publishers, with distinct and effectively differentiated offers and strong direct routes to market, tend to bristle at the very idea, even as the list of top English-language publishers on Facebook in late 2019 was full of familiar names, with CNN, the Daily Mail, and Fox News occupying the top three spots, and The New York Times, the BBC, The Washington Post, and the Guardian all in the top 10.

Thus, while the platform risk is considerable, with the contingency of relying on platforms where, at any moment, the product may change, a number of publishers are pursuing these opportunities aggressively. Looking beyond established models of publishing, whether legacy or digital-born, complementary strategies focused on pursuing platform opportunities while managing platform risk can take many forms. At one end there are individual “influencers” on Western platforms, from stars like PewDiePie making millions every year to countless “nano-influencers” earning a little on the side — independently operated individual profiles working across platforms, producing original content, often as a business or at least a side job, and leveraging platform opportunities to compete with established publishers for attention, advertising, marketing, and the like. At the other end, one can point to the app economy and the video game industry as big, competitive, and lucrative industries that are almost entirely based on a multitude of third parties — some of them large profitable companies — built in large part by complementing a few dominant platforms.

Third, if the central risks publishers are trying to contain are asymmetry when faced with much larger platforms, and the contingency and platform risk that comes with being too dependent on them, publishers might collaborate to create their own alternatives to some of the products and services that dominant platforms offer.

Serious publishers have already embraced the idea that, to succeed, news media has to combine editorial excellence with technological excellence, matching the expectations that audiences and advertisers have become accustomed to through the experience of using platforms’ products and services.

Some of this work begins internally, with publishers like Vox Media and The Washington Post developing new digital publishing platforms (and in turn offering these up for licensing to other publishers), and The New York Times and others investing in advertising technology.

Occasionally, this involves publishers operating their own platforms, which companies like Axel Springer and Schibsted do very successfully with classified advertising platforms, and which Springer does in partnership with Samsung on the mobile news aggregator Upday.

Still, the track record of publishers’ dabbling in platforms has been uneven and often unsuccessful. Rupert Murdoch’s News Corporation bought Myspace in 2005 for $580m, only to sell it for $35m in 2011. The Georg von Holtzbrinck Publishing Group bought the German social network StudiVZ in 2007 for €85m, but sold it in 2012 for an undisclosed sum. French publishers have repeatedly declared their intent to launch their own aggregators and search engines (none have materialized), and several publishers have tried to launch blogging networks and various other forms of platforms for readers and subscribers, often with limited success.

More recently, there is an increasing number of examples of smaller groups of publishers collaborating on joint platforms for advertising sales, registration, subscriptions, and the like. Collaboration on specific solutions to specific problems seems like a promising route for publishers seeking to retain their independence and make the best possible use of the opportunities existing platforms offer, while finding ways of reducing the platform risk that comes with becoming increasingly reliant on and intertwined with them across distribution, advertising sales, analytics, and more.

Publishers taking control of their own destiny

Publishers make their own decisions, but not under conditions of their own choosing. They are decisions nonetheless, and decisions that matter. Unwarranted determinism about the supposedly sovereign power of platforms is paralyzing, disrespectful of the difference that clear strategic thinking and careful execution makes, and ultimately not supported by the evidence.

Some publishers have demonstrably been better at building reach via platforms. Some have been demonstrably better at acquiring subscribers via platforms. The choice to try to do one or the other is a key strategic one. Some publishers have been much better at building direct engagement with audiences, and some have very significant direct traffic and very wide reach via platforms.

In the years ahead, publishers will continue to make different strategic decisions about how to realize platform opportunities while minimizing platform risk — individually, each pursuing their own interest, and perhaps sometimes in groups, whether through corporatist, complementary, or collaborative approaches.

Rasmus Kleis Nielsen is director of the Reuters Institute for the Study of Journalism and a professor of political communication at the University of Oxford. His other books include The Changing Business of Journalism and its Implications for Democracy and Political Journalism in Transition: Western Europe in a Comparative Perspective. Sarah Anne Ganter is an assistant professor at Simon Fraser University’s School of Communication.

Image of different podiums and platforms by Rodion Kutsaev is being used under an Unsplash License.

Algorithms, lies, and social media https://www.niemanlab.org/2022/04/algorithms-lies-and-social-media/ Thu, 07 Apr 2022 14:49:20 +0000

There was a time when the internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide. But today, democracy is in retreat and the internet’s role as driver is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox — that the internet is both savior and executioner of democracy — can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences. Markets on social media and the internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers. On social media and parts of the internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. In what Harvard social psychologist Shoshana Zuboff calls “surveillance capitalism,” the platforms are incentivized to align their interests with advertisers, often at the expense of users’ interests or even their well-being.

This economic model has driven online and social media platforms (however unwittingly) to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise. Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on.

But when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalize on this: To maximize advertising revenue, they present users with content that captures their attention and keeps them engaged. For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. A study by Mozilla researchers confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.

In the same vein, our attention online is more effectively captured by news that is either predominantly negative or awe inspiring. Misinformation is particularly likely to provoke outrage, and fake news headlines are designed to be substantially more negative than real news headlines. In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. (Presumably because of the revelations, the algorithm was changed.) We also know that political parties in Europe began running more negative ads because they were favored by Facebook’s algorithm.

Besides selecting information on the basis of its personalized relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. But until recently, these algorithms went only so far. As Evelyn Douek, a senior research fellow at the Knight First Amendment Institute at Columbia University, points out, before the pandemic, most platforms (including Facebook, Google, and Twitter) erred on the side of protecting free speech and rejected a role, as Mark Zuckerberg put it in a personal Facebook post, of being “arbiters of truth.” But during the pandemic, these same platforms took a more interventionist approach to false information and vowed to remove or limit Covid-19 misinformation and conspiracy theories. Here, too, the platforms relied on automated tools to remove content without human review.

Even though the majority of content decisions are done by algorithms, humans still design the rules the tools rely upon, and humans have to manage their ambiguities: Should algorithms remove false information about climate change, for instance, or just about Covid-19? This kind of content moderation inevitably means that human decision makers are weighing values. It requires balancing a defense of free speech and individual rights with safeguarding other interests of society, something social media companies have neither the mandate nor the competence to achieve.

None of this is transparent to consumers, because internet and social media platforms lack the basic signals that characterize conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pocket. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data. And if users experience adverse consequences — such as increased stress or declining mental health — it is difficult to link those consequences to social media use. The link becomes even more difficult to establish when social media facilitates political extremism or polarization.

Users are also often unaware of how their news feed on social media is curated. Estimates of the share of users who do not know that algorithms shape their newsfeed range from 27% to 62%. Even people who are aware of algorithmic curation tend not to have an accurate understanding of what that involves. A Pew Research paper published in 2019 found that 74% of Americans did not know that Facebook maintained data about their interests and traits. At the same time, people tend to object to collection of sensitive information and data for the purposes of personalization and do not approve of personalized political campaigning.

They are often unaware that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information that is curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation.

People cannot be held responsible for their lack of awareness. They were neither consulted on the design of online architectures nor considered as partners in the construction of the rules of online governance.

What can be done to shift this balance of power and to make the online world a better place?

Google executives have referred to the internet and its applications as “the world’s largest ungoverned space,” unbound by terrestrial laws. This view is no longer tenable. Most democratic governments now recognize the need to protect their citizens and democratic institutions online.

Protecting citizens from manipulation and misinformation, and protecting democracy itself, requires a redesign of the current online “attention economy” that has misaligned the interests of platforms and consumers. The redesign must restore the signals that are available to consumers and the public in conventional markets: users need to know what platforms do and what they know, and society must have the tools to judge whether platforms act fairly and in the public interest. Where necessary, regulation must ensure fairness.

Four basic steps are required:

  • There must be greater transparency and more individual control of personal data. Transparency and control are not just lofty legal principles; they are also strongly held public values. European survey results suggest that nearly half of the public wants to take a more active role in controlling the use of personal information online. It follows that people need to be given more information about why they see specific ads or other content items. Full transparency about customization and targeting is particularly important because platforms can use personal data to infer attributes — for example, sexual orientation — that a person might never willingly reveal. Until recently, Facebook permitted advertisers to target consumers based on sensitive characteristics such as health, sexual orientation, or religious and political beliefs, a practice that may have jeopardized users’ lives in countries where homosexuality is illegal.
  • Platforms must signal the quality of the information in a newsfeed so users can assess the risk of accessing it. A palette of such cues is available. “Endogenous” cues, based on the content itself, could alert us to emotionally charged words geared to provoke outrage. “Exogenous” cues, or commentary from objective sources, could shed light on contextual information: Does the material come from a trustworthy place? Who shared this content previously? Facebook’s own research, said Zuckerberg, showed that access to COVID-related misinformation could be cut by 95 percent by graying out content (and requiring a click to access) and by providing a warning label.
  • The public should be alerted when political speech circulating on social media is part of an ad campaign. Democracy is based on a free marketplace of ideas in which political proposals can be scrutinized and rebutted by opponents; paid ads masquerading as independent opinions distort that marketplace. Facebook’s “ad library” is a first step toward a fix because, in principle, it permits the public to monitor political advertising. In practice, the library falls short in several important ways. It is incomplete, missing many clearly political ads. It also fails to provide enough information about how an ad targets recipients, thus preventing political opponents from issuing a rebuttal to the same audience. Finally, the ad library is well known among researchers and practitioners but not among the public at large.
  • The public must know exactly how algorithms curate and rank information and then be given the opportunity to shape their own online environment. At present, the only public information about social media algorithms comes from whistle-blowers and from painstaking academic research. Independent agencies must be able to audit platform data and identify measures to remedy the spigot of misinformation. Outside audits would not only identify potential biases in algorithms but also help platforms maintain public trust by not seeking to control content themselves.

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. There is considerable public and political skepticism about regulations in general and about governments stepping in to regulate social media content in particular. This skepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship. The Chinese government’s censorship of internet content is a case in point. During the pandemic, some authoritarian states, such as Egypt, introduced “fake news laws” to justify repressive policies, stifling opposition and further infringing on freedom of the press. In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” (as in contradicting official government position) information about the war against Ukraine, causing many foreign and local journalists and news organizations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo. It may feel paternalistic for a government to mandate that platform algorithms must not radicalize people into bubbles of extremism. But it’s also paternalistic for Facebook to weight anger-evoking content five times more than content that makes people happy, and it is far more paternalistic to do so in secret.

The best solution lies in shifting control of social media from unaccountable corporations to democratic agencies that operate openly, under public oversight. There’s no shortage of proposals for how this might work. For example, complaints from the public could be investigated. Settings could preserve user privacy instead of waiving it as the default.

In addition to guiding regulation, tools from the behavioral and cognitive sciences can help balance freedom and safety for the public good. One approach is to research the design of digital architectures that more effectively promote both accuracy and civility of online conversation. Another is to develop a digital literacy tool kit aimed at boosting users’ awareness and competence in navigating the challenges of online environments.

Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century.

Stephan Lewandowsky is a cognitive scientist at the University of Bristol in the U.K. Anastasia Kozyreva is a philosopher and a cognitive scientist at the Max Planck Institute for Human Development in Berlin, working on the cognitive and ethical implications of digital technologies and artificial intelligence for society. This piece was originally published by OpenMind magazine and is being republished under a Creative Commons license.

Image of misinformation on the web by Carlox PX is being used under an Unsplash license.

People mistrustful of news make “snap judgments” to size up outlets https://www.niemanlab.org/2022/04/people-mistrustful-of-news-make-snap-judgments-to-size-up-outlets/ Wed, 06 Apr 2022 14:25:29 +0000

How do people who have low trust in news sources decide which publications to trust? That’s the central question behind a newly published report from the Reuters Institute for the Study of Journalism’s Trust in News project.

The answer: People are quick to make judgments — snap judgments, as the people behind the study called them — when evaluating news outlets on popular digital platforms. These hasty decisions are based on a range of cues, including the news brands themselves and who shared the stories.

To answer the question, the researchers polled 100 people in four different countries — Brazil, India, the United Kingdom and the United States — about their news habits.

Specifically, the authors chose participants labeled as “generally untrusting.” These volunteers were deemed as such because of their responses to the questions “How interested, if at all, would you say you are in politics?” and “Generally speaking, to what extent do you trust information from the following” list of 15 news organizations specific to their country. (In the U.S., this list included ABC, NBC News, Breitbart and others.)

Participants’ responses to these questions were measured on a five-point scale, and those whose scores suggested a below-average trust in news outlets as well as lower-than-average interest in politics were selected for the final sample. These people were also regular users of Facebook, WhatsApp, and Google.

With each of these participants, researchers conducted video interviews where the volunteers walked them through how they used each of the chosen internet platforms. This, the researchers write, “helped us observe in real time what they paid attention to in judging whether information was relevant and trustworthy to them. This technique allowed us to move beyond abstract responses about platform use to real-life experiences, where we could also probe participants further on specific and concrete examples.”

Here’s what they found:

These “generally untrusting” volunteers were unlikely to come across news on their regular platforms. When they did come across them, they were indifferent toward the news items. And the few times they did see them, the news tended to focus on softer topics such as entertainment.

When these participants did come across news articles on Facebook, Google, or WhatsApp, they made quick judgments about the credibility of the information being reported. These judgments tended to be based on six main cues, several of which are discussed below.

When it came to headlines as a cue for making judgments, the researchers found that these people who weren’t attuned to news did get stopped by headlines — but the effect was perhaps the opposite of what outlets may have intended. One person in Brazil said, “The catchier the headline is, I’m more suspicious of it,” a sentiment that was echoed by another participant in the U.K., who said, “I think the more boring the heading is, maybe it’s more trust[worthy].”

The topic of the news item also played a role in how volunteers chose to trust publications. While these people tended to be skeptical of all news, they were especially skeptical of news about political topics. Here’s one U.K.-based respondent’s take:

When you say “trust,” it depends. Trusting them for what? So, if I’m looking at a story about the floods down south, do I think they’re reporting that right? Probably. If I’m reading something about statistics that matter to politicians, do I believe it? No, because all the media are owned by the politicians.

What respondents paid attention to depended on the platform on which they saw news articles. On Facebook and WhatsApp, who shared the information and the engagement the article was getting (likes, comments, etc.) helped inform how they viewed the news. Verification and labels on Facebook also helped. For example, one respondent in India said he trusted a news outlet “because this source has a blue tick, which means it’s verified through Facebook.”

At the same time, volunteers — much like the broader population — didn’t seem to know how platforms worked to show them news. They noticed that the source of the news wasn’t always apparent. People were also skeptical of the value to place on stories labeled as sponsored content. One volunteer in the U.S., for instance, said this of sponsored content on Google searches:

“Google is a private company. Google can be paid to be the first result you see. So, for certain subjects I would have to recall that it is very easy to pay to be in the first Google results.”

Participants also said they were concerned — rightly — that the social nature of these platforms (friends and family sharing news, which inherently makes someone want to trust them as sources) made it easier to spread misinformation or mask dubious practices.

WhatsApp, for instance, offers more than just text messaging, and news there is often shared in audio format. But users expressed concern about this as well. One user from Brazil said this about her father’s use of audio on WhatsApp:

“[He] barely can read and write. He only uses audio messages, so news for him tends to be more trustworthy because he doesn’t know where it came from. So, it’s much more likely that he will believe in anything he receives from anyone.”

What does this mean for publications looking to win the trust of consumers?

“For news orgs, reaching this segment of the public may require more consistent and sustained branding efforts, in addition to tending more carefully to the precise ways in which stories are exhibited in digital spaces and how these may impact trust,” Amy Ross Arguedas, a postdoctoral research fellow at the Reuters Institute and the lead author of the paper, said on Twitter.

And because these volunteers are coming across news on platforms that are not the news outlets’ own websites, the study “does put an onus on platforms to consider more carefully the role played by their design decisions and technologies in shaping users’ evaluations of news,” the authors write.

Read the full report here.

Image of finger snapping by jom jakkid is being used under an Unsplash license.

If someone shares your politics, you’re less likely to block them when they post misinformation https://www.niemanlab.org/2022/03/if-someone-shares-your-politics-youre-less-likely-to-block-them-when-they-post-misinformation/ Wed, 23 Mar 2022 14:07:00 +0000

It’s a set of actions that’s probably familiar to many Facebook users by now: You see a friend — perhaps an older relative or someone you’ve lost touch with over the years — share questionable, offensive, or downright inaccurate posts, and eventually you reach for that “Unfollow” button.

A new study published last week in the Journal of Communication unpacks some of the patterns associated with this tried-and-tested method of limiting the misinformation that users opt to see when scrolling through their Facebook feeds. In the study of just under 1,000 volunteers, researchers Johannes Kaiser, Cristian Vaccari, and Andrew Chadwick found that users were more likely to block those who shared misinformation when their political ideology differed from their own.

“People give a pass to their like-minded friends who share misinformation, but they are much more likely to block or unfollow friends that are not in agreement with them politically when they share misinformation on social media,” said Cristian Vaccari, professor of political communication at Loughborough University in the U.K. and an author of the study.

People whose political ideology leaned left, and especially extremely left, tended to be most likely to block users as a response to misinformation sharing. People whose ideology was more conservative tended to be more tolerant of those who shared misinformation.

The researchers recruited 986 volunteers in Germany to be a part of a simulation experiment. Why a simulation? “We didn’t conduct the experiment on Facebook because we can’t do that,” Vaccari said. “Facebook could do something very realistic with their interface, but researchers don’t have access to those tools.”

Why Germany? “Germany is very different from the United States,” said Vaccari. Germany is a parliamentary republic, and voters often have a choice of multiple parties. Right- and left-wing parties can form coalitions, and “voters are a lot less inclined to see voters and politicians from the other side in an antagonistic way, the way American voters do.” Conducting an experiment in this context would give them results, the researchers believed, that were not colored by hyperpartisan politics and polarization.

The volunteers were asked to answer a series of questions about their political beliefs and were ranked on their ideology on an 11-point scale. Volunteers were also asked to think of — and name — friends with similar and dissimilar political leanings. Vaccari and team then created fake Facebook profiles of these friends and had the volunteers look at their feeds.

Made-up news articles about two relatively non-contentious (in Germany, anyway) topics — housing and education — were posted to the feeds.

Researchers also created two versions of these fabricated articles depicting misinformation. One version was considered plausible enough to perhaps be true and the other was so outrageous as to likely be immediately recognizable as misinformation. (People were told after the experiment that the articles they saw weren’t real.)

One version shown to participants was fairly plausible, since the rent hike in question only went up from 10% to 12%.

In contrast, the other version was highly implausible, given the jump in rent hike maximums from 10% to 50%.

Volunteers were then asked to respond with whether they would block the person in question, based on what they’d shared.

“We thought, the bigger lie, the more newsworthy but also the more inaccurate the post, the more likely it would be blocked by people, and that was true,” Vaccari said. Across the political spectrum, volunteers were more likely to block users when the more implausible or extreme version of the article was shared.

Still, it was “mostly people on the left that engaged in this kind of behavior, and especially those who were extremely on the left,” Vaccari said. “People on the right are much less likely to block people based on their ideological dissimilarity.”

One speculative explanation for these political differences could be the need for a shared social identity: “I think it’s probably something to do with identity more than belief,” Vaccari said. “You might not believe the information shared is accurate, but you might not block that person because it’s a relationship you value.”

Another reason might be related to what previous research has shown, which is that right-wing voters tend to share more misinformation on social media. “So it might be that if you are a left-wing voter, you are used to seeing quite a lot of misinformation shared by right-wing voters that you are in contact with on social media. And so you might have become more used to blocking these people because you know they are more likely to share misinformation,” Vaccari said.

One takeaway, as previous studies about echo chambers have shown, is that such partisan tendencies in blocking could further polarize people and lead to a less diverse flow of information on social media channels. “If people are biased in favor of their own party, it may get rid of misinformation, but it also gets rid of alternate views,” Vaccari said.

Of course, this comes with all the caveats of the study: The German political context, the fact that people were asked to decide their take based on posts about non-partisan issues, and the fact that people were only shown one post in order to make their decision (“In reality, people are likely to have things accumulate before they act,” Vaccari said).

“I think that probably the most important takeaway is that there are some drawbacks to the widespread assumption that one of the best ways to protect people against disinformation is to give users tools that enable them to limit contact with other people who share misinformation,” Vaccari told me. “If people applied those tools in a politically neutral way, then there would be no problem with that argument. But the problem, as this study shows, is that people apply those blocking and unfollowing tools in a way that is partisan.”

Image of unfriending on Facebook by Oliver Dunkley is being used under a Creative Commons License.

How many people really watch or read RT, anyway? It’s hard to tell, but some of their social numbers are eye-popping https://www.niemanlab.org/2022/03/how-many-people-really-watch-or-read-rt-anyway-its-hard-to-tell-but-some-of-their-social-numbers-are-eye-popping/ Wed, 02 Mar 2022 17:07:42 +0000

Is RT’s international audience real?

That’s an important question, as regulators and distributors debate where the Kremlin-backed network falls on the spectrum from news channel to state propaganda. All across the world, RT is being tossed off cable systems, blocked by ad networks, removed from app stores, downranked in search engines, or banned outright.

But the level of concern would seem inextricably linked to the size of RT’s reach; propaganda isn’t particularly effective if no one sees it. Is it a rising power in an information war, or a waste of Kremlin cash that fakes its own numbers?

Today, Oxford’s Rasmus Kleis Nielsen compiled some data on RT’s online reach in the U.K., France, and Germany and the takeaway seems to be: not great on the web, but surprisingly strong on social media, at least in spots. You can see his thread here, but here are a few highlights.

The share of each country’s online population that sees RT content in a typical month is small: 0.6% in the U.K., 2.0% in France, and 3.0% in Germany. In the U.K., that means RT is barely a blip; both the BBC and The Guardian reach roughly 73× as many Britons.

But the story’s a little more nuanced in France and Germany, where the news audience is more fractured, without a central anchor like a BBC. In France, RT’s 2% reach means it’s not that much smaller than national newspapers Les Echos (2.8× RT’s audience) or Le Monde (7.3× RT’s audience). In Germany, RT is within shouting distance of Der Spiegel (just 1.6× RT’s audience) and not that far behind public broadcaster ARD (4.9× RT’s audience).
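To make those multipliers concrete, here’s a minimal sketch in Python that turns the figures above into implied audience shares. All numbers are the estimates quoted in this post, not authoritative measurements.

```python
# Rough translation of the reach figures above into implied audience shares.
rt_reach = {"U.K.": 0.006, "France": 0.020, "Germany": 0.030}  # share of online population

# Each outlet's audience expressed as a multiple of RT's.
multipliers = {
    "U.K.": {"BBC": 73, "The Guardian": 73},
    "France": {"Les Echos": 2.8, "Le Monde": 7.3},
    "Germany": {"Der Spiegel": 1.6, "ARD": 4.9},
}

for country, outlets in multipliers.items():
    for outlet, factor in outlets.items():
        implied = rt_reach[country] * factor
        print(f"{country}: {outlet} reaches ~{implied:.1%} vs. RT's {rt_reach[country]:.1%}")
```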

Things look tighter on social media. In the U.K., RT’s monthly Facebook engagements (reactions, comments, and shares) are still well behind the BBC (which gets roughly 14× RT’s number), but it’s not far behind The Guardian (roughly 1.5× RT).

In France and Germany, though, RT is a legitimate national player on Facebook. It’s well ahead of Les Echos, and Le Monde’s lead is only about 30%. And in Germany — hold onto deine Mütze — RT was the No. 1 news source in terms of engagements on Facebook in both December and January, according to this CrowdTangle data. (Or at least it was ahead of the major German news publishers; we’ve asked Facebook for comment and will update this post if we hear back.)

This pattern — RT as a news source that not many people actively seek out, but that knows how to push the right buttons to succeed on Facebook — is a lot like what we’ve seen over the past decade from sites that push misinformation, conspiracy theories, and culture-war content. It’s not a happy pattern.

The size of RT’s audience has been a matter of debate for as long as it’s existed. Back in 2015, The Daily Beast reported that RT “hugely exaggerates its viewership,” citing leaked documents:

The Daily Beast obtained these documents from Vasily Gatov, a former RIA Novosti employee who had a hand in their preparation. He says they were meant for top Kremlin officials. “Since RT’s earliest days, something always looked wrong to me,” Gatov said. “RT persistently pretended that it was much more important and much bigger than could be confirmed by any data. While RT’s internal reporting told their commissioner — the Russian government — that they’d managed to overcome CNN and the BBC in terms of viewership, no signs of this could be found in reliable data, audited and vetted by foreign sources. Their social media growth, reported in every public statement by RT as a ‘phenomenon,’ also looked suspicious.”

(Another useful reminder that something like Facebook interactions are subject to significant fakery.)

RT makes big claims about its YouTube audience, but only about 1% of the videos it posts there are political in nature. Its big YouTube hits? “Videos of natural disasters, accidents, crime, and natural phenomenon.”

In the U.S., one useful datapoint is that the cable systems that have carried RT nearly all did so because RT was paying them or because of a loophole in federal regulations that required them to. (The loophole was later closed.) If the market demand for RT was significant, the Kremlin wouldn’t have to pay off American companies to carry it.

There’ve also been a number of academic studies on RT’s audience. One of my favorites is this one from 2020 by Rhys Crilley, Marie Gillespie, Bertie Vidgen, and Alistair Willis:

Through a data-driven application of network science and other computational methods, we address this gap to provide insight into the demographics and interests of RT’s Twitter followers, as well as how they engage with RT…

First, we find that most of RT’s Twitter followers only very rarely engage with its content and tend to be exposed to RT’s content alongside other mainstream news channels. This indicates that RT is not a central part of their online news media environment.

Second, using probabilistic computational methods, we show that followers of RT are slightly more likely to be older and male than average Twitter users, and they are far more likely to be bots.

Identifying bots is not a perfect science, but they found that 39% of RT’s Twitter followers were likely bots, as opposed to the 1.5% it found in a random sample of Twitter users.

I’m not sure how much anyone can say with confidence about RT’s actual reach. I think it’s telling that most of the best “evidence” of a big audience — social media engagements, follower counts, YouTube views, its own press releases — originates on platforms where the data is easiest to manipulate. That pushes me to the skeptical side of the debate, at least when it comes to the United States. But the internet has taught us over and over that reach and influence are two related but distinct things. RT doesn’t need a huge audience to be influential — only the right one.

Facebook is letting a lot of climate change denial slide despite promises to flag it, study finds https://www.niemanlab.org/2022/02/facebook-is-letting-a-lot-of-climate-change-denial-slide-despite-promises-to-flag-it-study-finds/ Wed, 23 Feb 2022 19:20:32 +0000

Facebook is failing to label many posts from websites most likely to publish climate change misinformation, according to a new report from a British watchdog group.

That’s despite the company rolling out a feature in May 2021 that would add information labels to climate change-related posts, a feature that is available in several countries around the world.

The group, the Center for Countering Digital Hate, looked at a small sample of English-language articles related to climate change from publishers the group had previously named to its “Toxic Ten” group. In November 2021, CCDH found that this group of 10 websites — including Breitbart, Newsmax, and the Daily Wire — was responsible for nearly 70% of engagement on Facebook with climate denial content.

The report’s authors used the analytics tool NewsWhip to search for nearly two dozen terms such as “climate hoax,” “climate alarmism,” “climategate,” and “global warming scam,” to arrive at a shortlist. Together, these posts had more than 1 million interactions, including likes, shares, and comments.

The shortlist was then evaluated specifically for articles containing climate misinformation as defined by the voluntary coalition known as the Conscious Advertising Network. The final list was made up of 184 articles published between May 19, 2021 (after the company rolled out its informational labeling feature) and January 20, 2022.

Using CrowdTangle, the study authors identified the most popular Facebook post associated with each article and assessed whether these posts included an information or fact-checking label. (Facebook parent company Meta is now limiting access to CrowdTangle, which could make analyses like these more difficult to do in the future.)

The study found that half of these posts contained no information label, while the other half did.

“50% is a failing grade. It’s an F,” Imran Ahmed, chief executive of CCDH, said in a call with reporters. “Someone with the resources of Facebook should be aiming for an A.”

This 50% without labels (a total of 93 posts) had nearly 542,000 Facebook interactions, which the authors found equated to a little more than half of total interactions with articles in the sample.
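As a quick gut check on those proportions, here’s a minimal sketch using the report’s headline figures; the 1 million total is the approximate floor given earlier, so the shares are rough.

```python
# Back-of-the-envelope shares from the CCDH report figures cited above.
total_posts = 184                 # articles in the final sample
unlabeled_posts = 93              # posts with no information label
unlabeled_interactions = 542_000  # interactions on the unlabeled posts
total_interactions = 1_000_000    # "more than 1 million" across the sample

print(f"Unlabeled posts: {unlabeled_posts / total_posts:.0%} of the sample")
print(f"Unlabeled engagement: {unlabeled_interactions / total_interactions:.0%} of interactions")
```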

There didn’t seem to be any predictable patterns behind which posts earned a label and which didn’t, according to Callum Hood, head of research at CCDH. “Bottom line, it seemed quite arbitrary,” Hood told reporters. “We had posts with very high numbers of interactions that you might have intuitively thought Facebook would pay more attention to but contained phrases clearly associated with climate denial that were not labeled. And then you had others, which didn’t really contain those words or phrases, or were less popular and did have labels,” he said.

Here are some examples of the posts that were missing information or fact-checking labels:

  • A Breitbart article from November 2021 that called global warming a “hoax”
  • A Daily Wire piece from September 2021 claiming the Left is spreading “global warming alarmism” to the Right

By contrast, Facebook did choose to add information or fact-checking labels to some other posts in the sample.

Still, labeling is not always an effective tool against misinformation. Facebook’s own internal research has shown that adding labels has a limited effect.

The new report comes just days after whistleblower and former Facebook employee Frances Haugen filed a pair of complaints with the SEC alleging that Facebook misled investors about how it was combating Covid-19 and climate change misinformation on its website.

The new CCDH report builds on what Haugen is claiming, Ahmed said. “[Labeling] was the major intervention that Facebook said it was going to do, and it hasn’t done it,” he said. “We’ve got another case here of where a tech giant has made a sweeping promise about what it’s going to do to address a disinformation problem on its platform. And our research, again, shows that it simply isn’t doing it.”

Facebook spokesperson Kevin McAlister said in a statement:

“We combat climate change misinformation by connecting people to reliable information in many languages from leading organizations through our Climate Science Center and working with a global network of independent fact checkers to review and rate content. When they rate this content as false, we add a warning label and reduce its distribution so fewer people see it. During the time frame of this report, we hadn’t completely rolled out our labeling program, which very likely impacted the results.”

Photo of an installation of an iceberg with a burning Facebook logo by Eric Kayne/SumofUS used under a Creative Commons License.

Facebook renames News Feed just “Feed” https://www.niemanlab.org/2022/02/facebook-renames-news-feed-just-feed/ Tue, 15 Feb 2022 19:25:18 +0000

Facebook and news have had a fraught relationship. Hyperpartisan content tends to draw the most engagement. Misinformation on the platform is rampant thanks in part to a small group of abusive, toxic “superusers.” But for all of those headaches — and mounting European legal challenges, and content moderation horror stories here and abroad — most people don’t read any news on Facebook at all. (They go elsewhere to read news, however, when Facebook is down.)

So Facebook announced Tuesday that what has been known as “News Feed” since 2006 will now simply be called “Feed.”

“We think Feed is a better reflection of the broad variety of content people see as they scroll,” Facebook spokesperson Dami Oyefeso told me.

Mark Zuckerberg, the CEO of the company now known as Meta, has made it clear that he believes the company’s future is in the metaverse. Investors may not agree, but it seems increasingly clear that the company’s interest in sharing publishers’ stories on its platform is fading.

Photo of horse consuming content by Kim Bartlett — Animal People, Inc., used under a Creative Commons license.

Facebook is blocking access to data about how much misinformation it spreads and who is affected https://www.niemanlab.org/2021/11/facebook-is-blocking-access-to-data-about-how-much-misinformation-it-spreads-and-who-is-affected/ Tue, 02 Nov 2021 14:28:47 +0000

Leaked internal documents suggest Facebook — which recently renamed itself Meta — is doing far worse than it claims at minimizing Covid-19 vaccine misinformation on the Facebook social media platform.

Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the Covid-19 vaccine than those who got their news from mainstream media sources.

As a researcher who studies social and civic media, I believe it’s critically important to understand how misinformation spreads online. But this is easier said than done. Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter misinformation, and are certain users especially likely to be affected by misinformation? These questions are the denominator problem and the distribution problem.

The Covid-19 misinformation study, “Facebook’s Algorithm: a Major Threat to Public Health,” published by public interest advocacy group Avaaz in August 2020, reported that sources that frequently shared health misinformation — 82 websites and 42 Facebook pages — had an estimated total reach of 3.8 billion views in a year.

At first glance, that’s a stunningly large number. But it’s important to remember that this is the numerator. To understand what 3.8 billion views in a year means, you also have to calculate the denominator. The numerator is the part of a fraction above the line, which is divided by the part of the fraction below the line, the denominator.

Getting some perspective

One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of information does the average Facebook user encounter in a year? Facebook does not disclose that information.

Market researchers estimate that Facebook users spend from 19 minutes a day to 38 minutes a day on the platform. If the 1.93 billion daily active users of Facebook see an average of 10 posts in their daily sessions — a very conservative estimate — the denominator for that 3.8 billion pieces of information per year is 7.044 trillion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means roughly 0.05% of content on Facebook is posts by these suspect Facebook pages.

The 3.8 billion views figure encompasses all content published on these pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller than one-twentieth of a percent.
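To make the arithmetic above concrete, here is the same back-of-the-envelope calculation written out in a few lines of Python. The figures are the ones in this article, including the deliberately conservative assumption of 10 posts seen per user per day:

```python
# Back-of-the-envelope math for the denominator problem.
daily_active_users = 1.93e9   # Facebook daily active users
posts_seen_per_day = 10       # deliberately conservative assumption
days_per_year = 365

denominator = daily_active_users * posts_seen_per_day * days_per_year
numerator = 3.8e9             # yearly views of content from the flagged sources

print(f"posts viewed per year: {denominator:.3e}")                 # ~7.044e+12
print(f"share from flagged pages: {numerator / denominator:.3%}")  # ~0.054%
```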

Is it worrying that there’s enough misinformation on Facebook that everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what’s shared on Facebook is not from the sites Avaaz warns about? Neither.

Misinformation distribution

In addition to estimating a denominator, it’s also important to consider the distribution of this information. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine or who seek out “alternative health” information more likely to encounter this type of misinformation?

Another social media study, focused on extremist content on YouTube, offers a method for understanding the distribution of misinformation. An Anti-Defamation League team recruited a large, demographically diverse sample of 915 U.S. web users and collected their browser data, oversampling two groups: heavy users of YouTube, and individuals who showed strong negative racial or gender biases on a set of questions asked by the investigators. Oversampling means surveying a subset of the population at a higher rate than its share of the population, in order to gather more reliable data about that subset.
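A minimal sketch of how oversampling works, with invented group sizes and sampling rates rather than anything from the ADL study: include members of a rare group at a higher rate than everyone else, then weight each respondent by the inverse of their sampling probability so population-level estimates stay accurate.

```python
import random

random.seed(0)

# Hypothetical population: 5% belong to a rare group of interest.
population = ["rare"] * 500 + ["common"] * 9500

# Oversample the rare group: include its members at 40%, others at 4%.
rates = {"rare": 0.40, "common": 0.04}
sample = [(person, 1 / rates[person])        # inverse-probability weight
          for person in population
          if random.random() < rates[person]]

# The raw sample overstates the rare group's share...
unweighted = sum(p == "rare" for p, _ in sample) / len(sample)
# ...but the weights recover roughly the true 5% population share.
weighted = sum(w for p, w in sample if p == "rare") / sum(w for _, w in sample)
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```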

The researchers found that 9.2% of participants viewed at least one video from an extremist channel, and 22.1% viewed at least one video from an alternative channel, during the months covered by the study. An important piece of context to note: A small group of people were responsible for most views of these videos. And more than 90% of views of extremist or “alternative” videos were by people who reported a high level of racial or gender resentment on the pre-study survey.

While roughly 1 in 10 people found extremist content on YouTube and 2 in 10 found content from right-wing provocateurs, most people who encountered such content “bounced off” it and went elsewhere. The group that found extremist content and sought out more of it consisted of people who presumably had a preexisting interest: those with strong racist and sexist attitudes.

The authors concluded that “consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment,” and that YouTube’s algorithms may reinforce this pattern. In other words, just knowing the fraction of users who encounter extreme content doesn’t tell you how many people are consuming it. For that, you need to know the distribution as well.

Superspreaders or whack-a-mole?

A widely publicized study from the anti-hate speech advocacy group Center for Countering Digital Hate titled Pandemic Profiteers showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it’s critical to ask about denominators: How many anti-vaccine groups are hosted on Facebook? And what percent of Facebook users encounter the sort of information shared in these groups?

Without information about denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical misinformation on Facebook as a whole.

These types of studies raise the question, “If researchers can find this content, why can’t the social media platforms identify it and remove it?” The Pandemic Profiteers study, which implies that Facebook could solve 70% of the medical misinformation problem by deleting only a dozen accounts, explicitly advocates for the deplatforming of these dealers of disinformation. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.

Consider Del Bigtree, one of the three most prominent spreaders of vaccination disinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it’s that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It’s not 12 individuals and groups posting health misinformation online — it’s likely thousands of individual Facebook users sharing misinformation found elsewhere on the web, featuring these dozen people. It’s much harder to ban thousands of Facebook users than it is to ban 12 anti-vaccine celebrities.

This is why questions of denominator and distribution are critical to understanding misinformation online. Denominator and distribution allow researchers to ask how common or rare behaviors are online, and who engages in those behaviors. If millions of users are each encountering occasional bits of medical misinformation, warning labels might be an effective intervention. But if medical misinformation is consumed mostly by a smaller group that’s actively seeking out and sharing this content, those warning labels are most likely useless.

Getting the right data

Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform.

Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but this is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator, either the number of Twitter users or the number of tweets shared in a day. YouTube makes it so difficult to find out how many videos are hosted on its service that Google routinely asks interview candidates to estimate the number of YouTube videos hosted as a way of evaluating their quantitative skills.

The leaders of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more convincing if researchers could independently verify that claim.

As the societal impacts of social media become more prominent, pressure on the big tech platforms to release more data about their users and their content is likely to increase. If those companies respond by increasing the amount of information that researchers can access, look very closely: Will they let researchers study the denominator and the distribution of content online? And if not, are they afraid of what researchers will find?

Ethan Zuckerman is a professor at the University of Massachusetts at Amherst. This article is republished from The Conversation under a Creative Commons license.

Did NPR drop Facebook as a sponsor? People were asking after NPR changed its disclosure overnight https://www.niemanlab.org/2021/10/did-npr-drop-facebook-as-a-sponsor-people-were-asking-after-npr-changed-its-disclosure-overnight/ https://www.niemanlab.org/2021/10/did-npr-drop-facebook-as-a-sponsor-people-were-asking-after-npr-changed-its-disclosure-overnight/#respond Sat, 30 Oct 2021 02:00:02 +0000 https://www.niemanlab.org/?p=197342 I was listening to NPR’s Up First this morning when I heard something a little different. In an episode that included a story on Facebook’s name change, the host noted that “Facebook was, until recently, one of NPR’s sponsors.” (Previously, the disclosure was in the present tense, as in: “We should note that Facebook is a sponsor of NPR.”)

I wasn’t the only one who picked up on the change.

So did NPR drop Facebook as a sponsor, as some folks suggested? Did Facebook cancel?

NPR did tweak its note on transparency for Facebook, confirmed spokesperson Isabel Lara. (“I had no idea people pay such close attention to our disclosure language,” she said.) But the latest Facebook sponsorship campaign wrapped up in November 2020, so the overnight change wasn’t due to a sudden break. NPR just updated the language to reflect that Facebook was a former — not current — sponsor.

It’s not the only tweak that NPR listeners will hear, though. Starting today, NPR will also disclose that “Facebook’s parent company, Meta, pays NPR to license NPR content” in its coverage of the company.

NPR provided some extra context on the change. Facebook still pays it for its content to appear in Facebook’s News tab.

FB has licensing agreements with many publishers, including NPR. Through that arrangement, Facebook pays NPR to have a feed of summaries and links to NPR stories appear in Facebook alongside news from other outlets. NPR retains full editorial control of its content and feed.

I asked NPR if it had declined sponsorship from Facebook since the contract ended a year ago, as some were speculating on social media. Lara sent along NPR’s admirably strict guidelines about the sponsorship campaigns they accept.

“Sometimes our sponsorship team … will ask sponsors to tweak their language or turn down specific campaigns that don’t comply with the guidelines — as you can see in that link it is most often because of advocacy issues or if they’re referring to something that’s very much in the news,” Lara said.

In other words, NPR might turn down a specific campaign that violated guidelines, but wouldn’t bar all campaigns from a specific brand. Not that it would tell us which sponsorships were denied: “We also respect client confidentiality, so we wouldn’t be disclosing what campaigns we turn down,” Lara said.

More internal documents show how Facebook’s algorithm prioritized anger and posts that triggered it https://www.niemanlab.org/2021/10/more-internal-documents-show-how-facebooks-algorithm-prioritized-anger-and-posts-that-triggered-it/ https://www.niemanlab.org/2021/10/more-internal-documents-show-how-facebooks-algorithm-prioritized-anger-and-posts-that-triggered-it/#respond Tue, 26 Oct 2021 17:30:04 +0000 https://www.niemanlab.org/?p=197146 As if there wasn’t enough Facebook news to digest already, another deep dive from The Washington Post this morning revealed that Facebook engineers changed the company’s algorithm to prioritize and elevate posts that elicited emoji reactions — many of which were rolled out in 2017. More specifically, the ranking algorithm treated reactions such as “angry,” “love,” “sad,” and “wow” as five times more valuable than traditional “likes” on the social media platform.

The problem with this plan for engagement: Posts likely to yield those reactions were more likely to show up in users’ feeds, and they were also more likely to contain misinformation, spam, or clickbait. One Facebook staffer, whose name was redacted in a dump of documents shared with the Securities and Exchange Commission by whistleblower and former Facebook employee Frances Haugen, had warned that this might happen; they were proven right.

According to the Post, “The company’s data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news.”

More on that:

That means Facebook for three years systematically amped up some of the worst of its platform, making it more prominent in users’ feeds and spreading it to a much wider audience. The power of the algorithmic promotion undermined the efforts of Facebook’s content moderators and integrity teams, who were fighting an uphill battle against toxic and harmful content.

This isn’t the first time that “anger” has reared its ugly head as a useful metric. Back in 2017, a report found that hyper-political publishers were especially adept at provoking the anger of their readers. And in 2019, another report found some of the effects of Facebook’s change to prioritizing meaningful interactions:

  • It has pushed up articles on divisive topics like abortion, religion, and guns;
  • politics rules; and
  • the “angry” reaction (😡) dominates many pages, with “Fox News driving the most angry reactions of anyone, with nearly double that of anyone else.”

Facebook introduced the suite of “reaction” emojis in response to a decline in people talking to each other on the social platform, according to the report. Giving the reactions five times the value of a single like was Facebook’s effort to signal that “the post had made a greater emotional impression than a like; reacting with an emoji took an extra step beyond the single click or tap of the like button.”
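To see what weighting reactions five times as heavily as likes does to ranking, consider a hypothetical scoring function. This is an illustration only, not Facebook’s actual code, and the post numbers are invented:

```python
# Hypothetical engagement score using the 5x weighting the documents describe;
# Facebook's real ranking uses many more signals than this.
LIKE_WEIGHT = 1
REACTION_WEIGHT = 5  # angry, love, sad, wow

def engagement_score(likes: int, reactions: int) -> int:
    return likes * LIKE_WEIGHT + reactions * REACTION_WEIGHT

calm_post = engagement_score(likes=1000, reactions=50)   # 1250
angry_post = engagement_score(likes=200, reactions=400)  # 2200
print(angry_post > calm_post)  # True: the reaction-heavy post ranks higher
```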

Mark Zuckerberg acknowledges that reactions can be used to indicate dislike.

Members of Facebook’s integrity teams raised concerns about the amplification of “anger” as a societal emotion, the documents reviewed by the Post show, but managers had a mixed record when it came to responding to these concerns.

A screenshot showing a staffer raising the question, “Quick question to play devil’s advocate: will weighting Reactions 5x stronger than Likes lead to News Feed having a higher ratio of controversial than agreeable content?”

According to the latest documents, even efforts to counteract this effect — when they were actually implemented — produced less-than-desirable results. For instance, even when Facebook employees tried to manipulate the score of a high-ranking post to get it to show up less often, things often didn’t work out as planned.

If Facebook’s algorithms thought a post was bad, Facebook could cut its score in half, pushing most instances of the post way down in users’ feeds. But a few posts could get scores as high as a billion, according to the documents. Cutting an astronomical score in half to “demote” it would still leave it with a score high enough to appear at the top of the user’s feed.

“Scary thought: civic demotions not working,” one Facebook employee noted.
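The failure mode here is plain arithmetic. A tiny sketch with invented scores shows why a 50% demotion can’t touch a post scored in the billions:

```python
# Invented scores illustrating why halving failed as a demotion.
typical_scores = [120, 450, 2_300, 9_800]  # ordinary posts in a feed
viral_bad_post = 1_000_000_000             # "scores as high as a billion"

demoted = viral_bad_post / 2               # cut the score in half
feed = sorted(typical_scores + [demoted], reverse=True)
print(feed[0] == demoted)  # True: still at the top of the feed
```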

The Post’s story details Facebook’s different attempts at dialing down this effect of amplifying reaction-driven posts.

When Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found. As it turned out, after years of advocacy and pushback, there wasn’t a trade-off after all. According to one of the documents, users’ level of activity on Facebook was unaffected.

Facebook’s response to this latest finding, which links its algorithm to the prioritization of “anger” and the posts that tend to invoke it: “We continue to work to understand what content creates negative experiences, so we can reduce its distribution. This includes content that has a disproportionate amount of angry reactions, for example,” Facebook spokesperson Dani Lever told the Post.

I’m in the consortium possessing the leaked Facebook documents. Let’s dissolve it. https://www.niemanlab.org/2021/10/im-in-the-consortium-possessing-the-leaked-facebook-documents-lets-dissolve-it/ https://www.niemanlab.org/2021/10/im-in-the-consortium-possessing-the-leaked-facebook-documents-lets-dissolve-it/#respond Tue, 26 Oct 2021 15:03:14 +0000 https://www.niemanlab.org/?p=197148 On Monday, the consortium of news organizations tasked with combing through Frances Haugen’s Facebook documents expanded its ranks to include my small, independent newsletter, Big Technology. While it’s nice to be in this consortium — which includes the AP, The New York Times, The Atlantic, and others — I now believe it’s time to dissolve it.

The Facebook documents that Haugen’s handed over to us — thousands upon thousands, with information about crucial aspects of Facebook’s decision-making — are simply too important to the public interest to keep under wraps. Right now, they’re available to us in a Google Drive, organized fairly neatly, and we are able to download them. But instead of a consortium of reporters sorting through them and writing stories based on what we see, we should expand our efforts to focus on the responsible redaction and wide release of these documents.

Releasing the documents will be challenging, no doubt. In their current form, they are poorly redacted and can’t simply be dumped on the web due to privacy and safety concerns. Some documents also touch on national security issues and may not make sense to release immediately, or at all. But these can’t be barriers to making a wide array of these documents available to the public, especially to the people most exposed to some of the policies we’ve been writing about. Unfortunately, there isn’t a single publication in the group from outside North America and Europe.

More than 3.5 billion people use the Facebook family of apps each month, the company said in its earnings report Monday. The Facebook documents I’ve looked at so far contain a wild amount of information that unpacks decisions that impact their experiences online and offline. A document I came across Monday, for example, contained details of an experiment Facebook ran to all but turn off the News Feed ranking algorithm for 0.05% of its users. The public deserves to read the documents, not just the few dozen journalists in the consortium. Society distrusts institutions when a handful of gatekeepers withhold information that applies to their lives.

So how do we dissolve the consortium? We do it by making the group unnecessary through the steady, responsible release of these documents to the public. There are indeed serious liability concerns for those who publish them, but they can be assuaged with responsible redaction — which we all know how to do — and with, gulp, forward-thinking lawyers. Third-party organizations like Whistleblower Aid could play a role here too, using their resources and lawyers to help release the documents. And we can enlist interested individuals as well.

I don’t have all the answers. But what I do have is a view into a trove of documents that I’m sure belong in the public’s hands. The more broadly available we can make them, the closer we can get to solving the problems they uncover. It’s time for us to figure out a way to get these documents out.

Alex Kantrowitz writes Big Technology, a newsletter about Big Tech and society. To get it in your inbox each week, you can sign up here.

In the ocean’s worth of new Facebook revelations out today, here are some of the most important drops https://www.niemanlab.org/2021/10/in-the-oceans-worth-of-new-facebook-revelations-out-today-here-are-some-of-most-important-drops/ https://www.niemanlab.org/2021/10/in-the-oceans-worth-of-new-facebook-revelations-out-today-here-are-some-of-most-important-drops/#respond Mon, 25 Oct 2021 18:00:01 +0000 https://www.niemanlab.org/?p=197096 There’s still another month remaining in the Atlantic hurricane season, and over the past few days, a powerful storm developed — one with the potential to bring devastating destruction.

The pattern was familiar: a distant rumbling in some faraway locale; a warning of its potential power and path; the first early rain bands; days of tracking; frantic movements; and finally the pummeling tempest slamming into landfall.

I’m talking, of course, about Facebook. (And if any of you jackals want to point out that Facebook should be more subject to the Pacific hurricane season, I’ll note that the storm is coming overwhelmingly from the other coast.)

A Nieman Lab analysis I just did in my head has found there are as many as 5.37 gazillion new stories out today about Facebook’s various misdeeds, almost all of them based in one way or another on the internal documents leaked by company whistleblower Frances Haugen. Haugen first began leaking the documents to reporters at The Wall Street Journal for a series of stories that began last month. Then came 60 Minutes, then congressional testimony, then the SEC, and finally a quasi-consortium of some of the biggest news organizations in America.

(Actually, cut that “finally”: Haugen is at the moment in London testifying before the U.K. Parliament about the documents, with a grand tour of European capitals to follow.)

It is, a Nieman Lab investigation can also confirm, a lot to take in. Protocol is doing its best to keep track of all the new stories that came off embargo today (though some began to dribble out Friday). At this typing, their list is up to 40 consortium pieces, including work from AP, Bloomberg, CNBC, CNN, NBC News, Politico, Reuters, The Atlantic, the FT, The New York Times, The Verge, The Wall Street Journal, The Washington Post, and Wired. (For those keeping score at home, Politico leads with six stories, followed by Bloomberg with five and AP and CNN with four each.) And that doesn’t even count reporters tweeting things out directly from the leak. I read through ~all of them and here are some of the high(low?)lights — all emphases mine.

Facebook’s role in the January 6 Capitol riot was bigger than it’d like you to believe.

From The Washington Post:

Relief flowed through Facebook in the days after the 2020 presidential election. The company had cracked down on misinformation, foreign interference and hate speech — and employees believed they had largely succeeded in limiting problems that, four years earlier, had brought on perhaps the most serious crisis in Facebook’s scandal-plagued history.

“It was like we could take a victory lap,” said a former employee, one of many who spoke for this story on the condition of anonymity to describe sensitive matters. “There was a lot of the feeling of high-fiving in the office.”

Many who had worked on the election, exhausted from months of unrelenting toil, took leaves of absence or moved on to other jobs. Facebook rolled back many of the dozens of election-season measures that it had used to suppress hateful, deceptive content. A ban the company had imposed on the original Stop the Steal group stopped short of addressing dozens of look-alikes that popped up in what an internal Facebook after-action report called “coordinated” and “meteoric” growth. Meanwhile, the company’s Civic Integrity team was largely disbanded by a management that had grown weary of the team’s criticisms of the company, according to former employees.

“This is not a new problem,” one unnamed employee fumed on Workplace on Jan. 6. “We have been watching this behavior from politicians like Trump, and the — at best — wishy washy actions of company leadership, for years now. We have been reading the [farewell] posts from trusted, experienced and loved colleagues who write that they simply cannot conscience working for a company that does not do more to mitigate the negative effects on its platform.”

A company after-action report concluded that in the weeks after the election, Facebook did not act forcefully enough against the Stop the Steal movement that was pushed by Trump’s political allies, even as its presence exploded across the platform.

The documents also provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that in many instances, executives had declined to implement those steps.

Facebook was indeed well aware of how potent a tool for radicalization it can be. From NBC News:

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

From CNN:

“Almost all of the fastest growing FB Groups were Stop the Steal during their peak growth,” the analysis says. “Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold. We were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence, and misinformation.”

This approach did eventually change, according to the analysis — after it was too late.

“After the Capitol insurrection and a wave of Storm the Capitol events across the country, we realized that the individual delegitimizing Groups, Pages, and slogans did constitute a cohesive movement,” the analysis says.

When Facebook executives posted messages publicly and internally condemning the riot, some employees pushed back, even suggesting Facebook might have had some culpability.

“There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions,” one employee wrote in response to a post from Mike Schroepfer, Facebook’s chief technology officer.

Another wrote, “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”

Other Facebook employees went further, claiming decisions by company leadership over the years had helped create the conditions that paved the way for an attack on the US Capitol.

Responding to Schroepfer’s post, one staffer wrote that, “leadership overrides research based policy decisions to better serve people like the groups inciting violence today. Rank and file workers have done their part to identify changes to improve our platforms but have been actively held back.”

One important source of political agitation: SUMAs. From Politico:

Facebook has known for years about a major source of political vitriol and violent content on its platform and done little about it: individual people who use small collections of accounts to broadcast reams of incendiary posts.

Meet SUMAs: a smattering of accounts run by a single person using their real identity, known internally at Facebook as Single User Multiple Accounts. And a significant swath of them spread so many divisive political posts that they’ve mushroomed into a massive source of the platform’s toxic politics, according to internal company documents and interviews with former employees.

While plenty of SUMAs are harmless, Facebook employees for years have flagged many such accounts as purveyors of dangerous political activity. Yet, the company has failed to crack down on SUMAs in any comprehensive way, the documents show. That’s despite the fact that operating multiple accounts violates Facebook’s community guidelines.

Company research from March 2018 said accounts that could be SUMAs were reaching about 11 million viewers daily, or about 14 percent of the total U.S. political audience. During the week of March 4, 2018, 1.6 million SUMA accounts made political posts that reached U.S. users.

Through it all, Facebook has retained its existential need to be seen as nonpartisan — seen being the key word there, since perception and reality often don’t align when it comes to the company. From The Washington Post:

Ahead of the 2020 U.S. election, Facebook built a “voting information center” that promoted factual information about how to register to vote or sign up to be a poll worker. Teams at WhatsApp wanted to create a version of it in Spanish, pushing the information proactively through a chat bot or embedded link to millions of marginalized voters who communicate regularly through WhatsApp. But Zuckerberg raised objections to the idea, saying it was not “politically neutral,” or could make the company appear partisan, according to a person familiar with the project who spoke on the condition of anonymity to discuss internal matters, as well as documents reviewed by The Post.

(Will you allow me a brief aside to highlight some chef’s-kiss PR talk?)

This related Post story from Friday includes not one, not two, but three of the most remarkable non-denial denials I’ve read recently, all from Facebook PR. Lots of chest-puffing without ever actually saying “Your factual claim is false”:

As the company sought to quell the political controversy during a critical period in 2017, Facebook communications official Tucker Bounds allegedly said, according to the affidavit, “It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Bounds, now a vice president of communications, said in a statement to The Post, ❶ “Being asked about a purported one-on-one conversation four years ago with a faceless person, with no other sourcing than the empty accusation itself, is a first for me.”

Facebook spokeswoman Erin McPike said in a statement, ❷ “This is beneath the Washington Post, which during the last five years competed ferociously with the New York Times over the number of corroborating sources its reporters could find for single anecdotes in deeply reported, intricate stories. It sets a dangerous precedent to hang an entire story on a single source making a wide range of claims without any apparent corroboration.”

The whistleblower told The Post of an occasion in which Facebook’s Public Policy team, led by former Bush administration official Joel Kaplan, defended a “white list” that exempted Trump-aligned Breitbart News, run then by former White House strategist Stephen K. Bannon, and other select publishers from Facebook’s ordinary rules against spreading false news reports.

When a person in the video conference questioned this policy, Kaplan, the vice president of global policy, responded by saying, “Do you want to start a fight with Steve Bannon?” according to the whistleblower in The Post interview.

Kaplan, who has been criticized by former Facebook employees in previous stories in The Post and other news organizations for allegedly seeking to protect conservative interests, said in a statement to The Post, ❸ “No matter how many times these same stories are repurposed and re-told, the facts remain the same. I have consistently pushed for fair treatment of all publishers, irrespective of ideological viewpoint, and advised that analytical and methodological rigor is especially important when it comes to algorithmic changes.”

If you think Facebook does a bad job moderating content here, it’s worse almost everywhere else.

This was a major theme in stories across outlets. The New York Times:

On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.

The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.

“The test user’s News Feed has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

With 340 million people using Facebook’s various social media platforms, India is the company’s largest market. And Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.

Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users, according to one document describing Facebook’s allocation of resources.

From Politico:

In late 2020, Facebook researchers came to a sobering conclusion. The company’s efforts to curb hate speech in the Arab world were not working. In a 59-page memo circulated internally just before New Year’s Eve, engineers detailed the grim numbers.

Only six percent of Arabic-language hate content was detected on Instagram before it made its way onto the photo-sharing platform owned by Facebook. That compared to a 40 percent takedown rate on Facebook.

Ads attacking women and the LGBTQ community were rarely flagged for removal in the Middle East. In a related survey, Egyptian users told the company they were scared of posting political views on the platform out of fear of being arrested or attacked online.

In Iraq, where violent clashes between Sunni and Shia militias were quickly worsening an already politically fragile country, so-called “cyber armies” battled it out by posting profane and outlawed material, including child nudity, on each other’s Facebook pages in efforts to remove rivals from the global platform.

From the AP:

An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

(Facebook generated $85.9 billion in revenue last year, with a profit margin of 38%.)

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

From CNN:

Facebook employees repeatedly sounded the alarm on the company’s failure to curb the spread of posts inciting violence in “at risk” countries like Ethiopia, where a civil war has raged for the past year, internal documents seen by CNN show…

They show employees warning managers about how Facebook was being used by “problematic actors,” including states and foreign organizations, to spread hate speech and content inciting violence in Ethiopia and other developing countries, where its user base is large and growing. Facebook estimates it has 1.84 billion daily active users — 72% of which are outside North America and Europe, according to its annual SEC filing for 2020.

The documents also indicate that the company has, in many cases, failed to adequately scale up staff or add local language resources to protect people in these places.

So which are the countries Facebook does care about, if “care” is not a horribly misused term here? From The Verge:

In a move that has become standard at the company, Facebook had sorted the world’s countries into tiers.

Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously. They created dashboards to analyze network activity and alerted local election officials to any problems.

Germany, Indonesia, Iran, Israel, and Italy were placed in tier one. They would be given similar resources, minus some resources for enforcement of Facebook’s rules and for alerts outside the period directly around the election.

In tier two, 22 countries were added. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”

The rest of the world was placed into tier three. Facebook would review election-related material if it was escalated to them by content moderators. Otherwise, it would not intervene.

“Tier Three” must be the new “Third World.”

The kids fled Facebook long ago, but now they’re fleeing Instagram too.

Also: “Most [young adults] perceive Facebook as place for people in their 40s or 50s…perceive content as boring, misleading, and negative…perceive Facebook as less relevant and spending time on it as unproductive…have a wide range of negative associations with Facebook including privacy concerns, impact to their wellbeing, along with low awareness of relevant services.” Otherwise, they love it.

From The Verge:

Earlier this year, a researcher at Facebook shared some alarming statistics with colleagues.

Teenage users of the Facebook app in the US had declined by 13 percent since 2019 and were projected to drop 45 percent over the next two years, driving an overall decline in daily users in the company’s most lucrative ad market. Young adults between the ages of 20 and 30 were expected to decline by 4 percent during the same time frame. Making matters worse, the younger a user was, the less on average they regularly engaged with the app. The message was clear: Facebook was losing traction with younger generations fast.

Facebook’s struggle to attract users under the age of 30 has been ongoing for years, dating back to as early as 2012. But according to the documents, the problem has grown more severe recently. And the stakes are high. While it famously started as a networking site for college students, employees have predicted that the aging up of the app’s audience — now nearly 2 billion daily users — has the potential to further alienate young people, cutting off future generations and putting a ceiling on future growth.

The problem explains why the company has taken such a keen interest in courting young people and even pre-teens to its main app and Instagram, spinning up dedicated youth teams to cater to them. In 2017, it debuted a standalone Messenger app for kids, and its plans for a version of Instagram for kids were recently shelved after lawmakers decried the initiative.

Instagram was doing better with young people, with full saturation in the US, France, the UK, Japan, and Australia. But there was still cause for concern. Posting by teens had dropped 13 percent from 2020 and “remains the most concerning trend,” the researchers noted, adding that the increased use of TikTok by teens meant that “we are likely losing our total share of time.”

Apple was close to banning Facebook and Instagram from the App Store because of how it was being used for human trafficking.

From CNN:

Facebook has for years struggled to crack down on content related to what it calls domestic servitude: “a form of trafficking of people for the purpose of working inside private homes through the use of force, fraud, coercion or deception,” according to internal Facebook documents reviewed by CNN.

The company has known about human traffickers using its platforms in this way since at least 2018, the documents show. It got so bad that in 2019, Apple threatened to pull Facebook and Instagram’s access to the App Store, a platform the social media giant relies on to reach hundreds of millions of users each year. Internally, Facebook employees rushed to take down problematic content and make emergency policy changes to avoid what they described as a “potentially severe” consequence for the business.

But while Facebook managed to assuage Apple’s concerns at the time and avoid removal from the app store, issues persist. The stakes are significant: Facebook documents describe women trafficked in this way being subjected to physical and sexual abuse, being deprived of food and pay, and having their travel documents confiscated so they can’t escape. Earlier this year, an internal Facebook report noted that “gaps still exist in our detection of on-platform entities engaged in domestic servitude” and detailed how the company’s platforms are used to recruit, buy and sell what Facebook’s documents call “domestic servants.”

Last week, using search terms listed in Facebook’s internal research on the subject, CNN located active Instagram accounts purporting to offer domestic workers for sale, similar to accounts that Facebook researchers had flagged and removed. Facebook removed the accounts and posts after CNN asked about them, and spokesperson Andy Stone confirmed that they violated its policies.

And from AP:

After publicly promising to crack down, Facebook acknowledged in internal documents obtained by The Associated Press that it was “under-enforcing on confirmed abusive activity” that saw Filipina maids complaining on the social media site of being abused. Apple relented and Facebook and Instagram remained in the app store.

But Facebook’s crackdown seems to have had a limited effect. Even today, a quick search for “khadima,” or “maids” in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images. That’s even as the Philippines government has a team of workers that do nothing but scour Facebook posts each day to try and protect desperate job seekers from criminal gangs and unscrupulous recruiters using the site.

If you see an antitrust regulator smiling today, this is why.

From Politico:

Facebook likes to portray itself as a social media giant under siege — locked in fierce competition with rivals like YouTube, TikTok and Snapchat, and far from the all-powerful goliath that government antitrust enforcers portray.

But internal documents show that the company knows it dominates the arenas it considers central to its fortunes.

Previously unpublished reports and presentations collected by Facebook whistleblower Frances Haugen show in granular detail how the world’s largest social network views its power in the market, at a moment when it faces growing pressure from governments in the U.S., Europe and elsewhere. The documents portray Facebook employees touting its dominance in their internal presentations — contradicting the company’s own public assertions and providing potential fuel for antitrust authorities and lawmakers scrutinizing the social network’s sway over the market.

And, of course, the Ben Smith meta-media look at it all.

Frances Haugen first met Jeff Horwitz, a tech-industry reporter for The Wall Street Journal, early last December on a hiking trail near the Chabot Space & Science Center in Oakland, Calif.

She liked that he seemed thoughtful, and she liked that he’d written about Facebook’s role in transmitting violent Hindu nationalism in India, a particular interest of hers. She also got the impression that he would support her as a person, rather than as a mere source who could supply him with the inside information she had picked up during her nearly two years as a product manager at Facebook.

“I auditioned Jeff for a while,” Ms. Haugen told me in a phone interview from her home in Puerto Rico, “and one of the reasons I went with him is that he was less sensationalistic than other choices I could have made.”

In the last two weeks [the news organizations] have gathered on the messaging app Slack to coordinate their plans — and the name of their Slack group, chosen by [beloved former Nieman Labber] Adrienne LaFrance, the executive editor of The Atlantic, suggests their ambivalence: “Apparently We’re a Consortium Now.”

Inside the Slack group, whose messages were shared with me by a participant, members have reflected on the strangeness of working, however tangentially, with competitors. (I didn’t speak to any Times participants about the Slack messages.)

“This is the weirdest thing I have ever been part of, reporting-wise,” wrote Alex Heath, a tech reporter for The Verge.

Original image of Hurricane Ida by NASA and Mark Zuckerberg drawing by Paul Chung used under a Creative Commons license.

Media consolidation and algorithms make Facebook a bad place for sharing local news, study finds https://www.niemanlab.org/2021/10/media-consolidation-and-algorithms-make-facebook-a-bad-place-for-sharing-local-news-study-finds/ https://www.niemanlab.org/2021/10/media-consolidation-and-algorithms-make-facebook-a-bad-place-for-sharing-local-news-study-finds/#respond Wed, 13 Oct 2021 13:00:23 +0000 https://www.niemanlab.org/?p=196652 The combination of local news outlets being bought out by bigger media conglomerates and the ever-present influence of social media in helping spread news seems to have created a new phenomenon, according to a new study: Issues of importance to local audiences are being drowned out in favor of harder-hitting news pieces with national relevance.

The study, published last week in Digital Journalism, was conducted by Benjamin Toff and Nick Mathews, two researchers at the University of Minnesota’s Hubbard School of Journalism and Mass Communication.

The idea for this research grew partly out of the treasure trove of data available through CrowdTangle, which Toff and Mathews wanted to put to use. “It occurred to us together that we could use [CrowdTangle] data to examine the degree to which local media engages with readers on social media platforms,” Toff said, adding that the idea took off from there. (It’s probably also good that Toff and Mathews thought to do this work now. Toff told me he is “very concerned” about possible changes at CrowdTangle following its founder and CEO’s departure, which may curtail access to “one of the few sources of data we as researchers have to what people are interacting with on Facebook.”)

For this study, Toff and Mathews looked at a dataset of nearly 2.5 million Facebook posts that were published by local news organizations in three U.S. states. They chose Arizona, Minnesota, and Virginia for a couple of reasons. One was background knowledge of the media landscapes: Mathews had previously worked in Virginia, Toff had grown up in Arizona, and both were current Minnesota residents, so they had context on the local media in all three states.

The other reason was to find states that weren’t on extreme ends of the news spectrum. “None of them are particularly extreme as far as being really small states [with limited media outlets] or on the other extreme like New York, which has such a dominant media presence,” Toff said.

Once they had a list of media outlets in the three states — along with detailed information about their ownership status and type, such as whether the outlets were owned by a multi-state chain or publicly owned — the researchers analyzed that information along with the millions of Facebook posts to identify any patterns in engagement. (For the purpose of this experiment, Toff and Mathews stuck to total engagement, meaning all possible interactions including page follows, and didn’t examine individual interactions such as reactions or comments on Facebook.)

They also sorted the posts into categories of hard news and soft news. Hard news stories covered topics such as politics, education, and health, while soft news included sports, arts and leisure, and — because they found so many posts of this variety on Facebook — animals.

The study revealed a few trends:

  • Ownership patterns related to activity and engagement on Facebook: “[P]ages owned by publicly traded, multi-state chains were among the most active on the platform,” the study found. These Facebook pages were also “more likely to have higher rates of interactions…on a per post basis than privately owned multi-state chains or pages owned by public or governmental organizations.”
  • Outlets owned by chains tended to post more repurposed content, but that led to less engagement: Chain-owned outlets, with more resources and access to the wire service or other sites owned by the same company, had access to more content, which they could use on their own platforms. “The idea is that it allows them to have a wider reach on the platform,” Toff said.
  • When it came to the type of news, hard news of national importance won out: Posts about hard news stories, especially on a national level, consistently brought more engagement than the softer, more locally relevant stories. “Even local organizations get more bang for their buck when they post about non-local subjects,” Toff said.

The combined effect: Local news, especially of topics that don’t rise to national importance, may be lost in the shuffle.

Co-author Nick Mathews put it this way on Twitter.

The study used data from 2018 and 2019, after Facebook changed its algorithm to emphasize “meaningful social interactions,” so Toff is interested in seeing how these trends may have looked prior to that big change. Anecdotally speaking, Toff said that those changes made it harder for news organizations to get people to see their content.

A harder question to answer now is how much of these trends is driven by variations in people’s attention versus Facebook’s algorithms, since it’s hard to separate the two, Toff said.

Still, Toff said that the findings underscore the frustration often felt by news organizations and how they feel they are held captive by Facebook and other social media platforms. “You gotta go where people are spending time, but there’s so much [about these places] that can’t be controlled,” Toff said. “There’s a lot of hesitancy about becoming overly reliant on companies that have their own interests, ultimately, and they’re not always aligned [with news companies’ interests].”

Photo of Facebook News Feed by Dave Rutt used under a Creative Commons license.

When Facebook went down this week, traffic to news sites went up https://www.niemanlab.org/2021/10/when-facebook-went-down-this-week-traffic-to-news-sites-went-up/ https://www.niemanlab.org/2021/10/when-facebook-went-down-this-week-traffic-to-news-sites-went-up/#respond Thu, 07 Oct 2021 17:34:04 +0000 https://www.niemanlab.org/?p=196605 On August 3, 2018, Facebook went down for 45 minutes. That’s a little baby outage compared to the one this week, when, on October 4, Facebook, Instagram, and WhatsApp were down for more than five hours. Three years ago, the 45-minute Facebook break was enough to get people to go read news elsewhere, Chartbeat‘s Josh Schwartz wrote for us at the time.

So what happened this time around? For a whopping five-hours-plus, people read news, according to data Chartbeat gave us this week from its thousands of publisher clients across 60 countries. (And they went to Twitter; Chartbeat saw Twitter traffic up 72%. If Bad Art Friend had been published on the same day as the Facebook outage, Twitter would have literally exploded, presumably.)

At the peak of the outage — around 3 p.m. ET — net traffic to pages across the web was up by 38% compared to the same time the previous week, Chartbeat found.

By the way, here’s how Chartbeat defines direct traffic and dark social, from CMO Jill Nicholson.

And here’s a question a bunch of people had. We’ll update this post when we know!

As Facebook tries to knock the journalism off its platform, its users are doing the same https://www.niemanlab.org/2021/09/as-facebook-tries-to-knock-the-journalism-off-its-platform-its-users-are-doing-the-same/ https://www.niemanlab.org/2021/09/as-facebook-tries-to-knock-the-journalism-off-its-platform-its-users-are-doing-the-same/#respond Mon, 20 Sep 2021 18:00:33 +0000 https://www.niemanlab.org/?p=196103 It has been clear for several years that Facebook wishes it never got into the news business.

Sure, having a few news stories sprinkled throughout the News Feed probably makes a subset of their users happy and more willing to tap that blue icon on their homescreen again tomorrow. But there aren’t that many of them. Only 12.9% of posts viewed in the News Feed have a link to anything, much less a link to a news site. The percent that are about news — defined broadly, including sports and entertainment — is now somewhere less than 4%. It’s something of a niche interest for Facebook users.

Meanwhile, oh, what a giant pain in the ass it has been for Zuck & Co.: Fake news, foreign propaganda, Covid lies, Nazis, horse paste, fact-checking, accusations of political bias, and a seemingly never-ending list of additional headaches. Because Facebook, architecturally, makes little distinction between the best sources and the worst — but, architecturally, incentivizes content that appeals to our less rational natures — it gets blamed for roughly 80% of what ails the world.

Maybe you think that’s fair; maybe you think it gets a bad rap. Either way, Facebook would be happy if all of it could be sucked right off its servers and replaced with more puppies and silly memes and Instagram sunsets. And the company has taken a steady series of steps to reduce the role of news, especially political news, on its platform, the latest just a few weeks ago.

A new study out today from the Pew Research Center suggests it isn’t just Facebook that’s seeking a trial separation from the news — it’s also Facebook’s users.

As social media and technology companies face criticism for not doing enough to stem the flow of misleading information on their platforms, a new Pew Research Center survey finds that a little under half of U.S. adults (48%) get news on social media sites “often” or “sometimes,” a 5 percentage point decline from 2020.

Across the 10 social media sites asked about in this study, the percentage of users of each site who regularly get news there has remained relatively stable since 2020. However, both Facebook and TikTok buck this trend.

The share of Facebook users who say they regularly get news on the site has declined 7 points since 2020, from 54% to about 47% in 2021. TikTok, on the other hand, has seen a slight uptick in the percentage of users who say they regularly get news on the site, rising from 22% in 2020 to 29% in 2021.

That people would be getting less of their news from social media isn’t shocking; you may remember that 2020 was a pretty busy year! 2021, for all its continued pandemicity, has been at least a little less insane, news-wise. (Since January 20, at least.)

But Facebook’s decline (7 percentage points) was substantially larger than Twitter’s (4), Reddit’s (3), Snapchat’s (3), YouTube’s (2), Instagram’s (1), or LinkedIn’s (1). (Besides TikTok, WhatsApp and Twitch saw increases, though small ones.)

And because Facebook’s user base is so much larger than other (non-YouTube) social platforms, the impact of that drop in news usage is magnified. If my back-of-the-envelope math is right, the net decline in news usage on Facebook was about 5× the size of the net decline on Twitter. Facebook’s seeing a bigger decline that’s happening within a much larger user base.
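Here’s that back-of-the-envelope math made explicit — a quick sketch whose adoption figures are assumed inputs (Pew’s separately reported 2021 estimates of roughly 69% of U.S. adults on Facebook and 23% on Twitter), not numbers from this news-use survey:

```python
# Checking the "about 5x" back-of-the-envelope claim. The adoption shares
# below are assumptions (Pew's 2021 platform-use estimates), not figures
# from the news-use survey discussed above.

facebook_adoption, twitter_adoption = 0.69, 0.23  # share of U.S. adults on each platform
facebook_drop, twitter_drop = 0.07, 0.04          # percentage-point drop in regular news use

# Net decline, expressed as a share of ALL U.S. adults:
facebook_net = facebook_adoption * facebook_drop  # ~4.8 pp
twitter_net = twitter_adoption * twitter_drop     # ~0.9 pp

print(f"Facebook: {facebook_net:.1%} of U.S. adults")  # 4.8%
print(f"Twitter:  {twitter_net:.1%} of U.S. adults")   # 0.9%
print(f"Ratio: {facebook_net / twitter_net:.1f}x")     # 5.2x
```

Under those assumed inputs, the ratio lands around 5.2 — consistent with the “about 5×” above.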

All of this is happening despite 2020’s splashy-sounding debut of the Facebook News Tab for all the company’s (U.S.) users, and despite Facebook wearing out its checkbook writing checks to publishers around the world. As I’ve argued, those payments (and those from rival duopolist Google) should be understood more as paid lobbying than as an actual attempt to center journalism as an important anchor of their platforms.

Facebook users tend to be more casual news consumers than users of more news-oriented platforms like Twitter or Reddit — so a reduction there is probably more significant to an individual user, in terms of their overall news diet. But that more casual news consumer is also the sort more likely to be time-targeted in their news consumption — the person who pays attention to politics for the 30 days before an election and ignores it the rest of the time — so a higher drop-off from 2020 shouldn’t be too surprising.

But still, fewer people counting on Facebook for news is probably a good thing — and a sign that the interests of the company and its users may be strangely aligned, for once.

Facebook’s pivot to video didn’t just burn publishers. It didn’t even work for Facebook https://www.niemanlab.org/2021/09/well-this-puts-a-nail-in-the-news-video-on-facebook-coffin/ https://www.niemanlab.org/2021/09/well-this-puts-a-nail-in-the-news-video-on-facebook-coffin/#respond Wed, 15 Sep 2021 18:24:32 +0000 https://www.niemanlab.org/?p=195984 The phrase “pivot to video” has become a joke, shorthand for a media company’s last-ditch effort to turn things around before the layoffs begin.

“Today’s metrics are tomorrow’s punchlines, and yesterday’s pivot is today’s clumsy tumble,” Vice’s union tweeted on August 26, following the company’s layoff of 17 staffers across Vice and Refinery29.

The layoffs were preceded, just a month earlier, by an announcement from Vice that it would “reduce the number of old-fashioned text articles on Vice.com, Refinery29 and another Vice-owned site, i-D, by 40 to 50 percent,” while increasing videos and visual stories on Instagram and YouTube “by the same amount.”

It all feels very five years ago. As we’ve documented, starting around 2016, Facebook executives including Mark Zuckerberg began pushing the notion that news video on Facebook was publishers’ bright future, a “new golden age.”

It turns out that the metrics that Facebook was using to measure engagement with news video were wrong, massively overestimating the amount of time that users spent consuming video ads. In 2019, Facebook settled a lawsuit with those advertisers, paying them $40 million (while admitting no wrongdoing). But it was too late for the publishers who’d already pivoted to Facebook video and then either made big cuts or shut down completely when it turned out people weren’t actually watching.
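To see how a metric like that can overestimate so badly, here’s a minimal, made-up illustration of the flaw the advertisers alleged — that “average duration of video viewed” divided total watch time by only the views lasting longer than three seconds:

```python
# A minimal, made-up illustration of the alleged flaw: dropping sub-3-second
# views from the denominator (but not the numerator) shrinks the divisor
# and inflates the average. These watch times are invented for illustration.

watch_times = [1, 1, 2, 2, 2, 30, 45, 60]  # seconds watched, one entry per view

honest_avg = sum(watch_times) / len(watch_times)      # divide by ALL views

counted_views = [t for t in watch_times if t > 3]     # only views > 3 seconds
inflated_avg = sum(watch_times) / len(counted_views)  # same numerator, smaller denominator

print(f"Average over all views:         {honest_avg:.1f}s")    # ~17.9s
print(f"Average as reportedly computed: {inflated_avg:.1f}s")  # ~47.7s
```

With mostly short views — which is what a feed full of autoplaying video produces — the reported average comes out several times higher than the true one.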

And now we have more proof that they weren’t watching, in the form of a tidbit from The Wall Street Journal’s big ongoing investigation into a trove of internal Facebook documents. In a story published Wednesday, Keach Hagey and Jeff Horwitz detailed how users’ engagement with Facebook started falling in 2017. Turns out that video didn’t slow the decline — it may actually have contributed to it:

Comments, likes and reshares declined through 2017, while “original broadcast” posts — the paragraph and photo a person might post when a dog dies — continued a yearslong decline that no intervention seemed able to stop, according to the internal memos. The fear was that eventually users might stop using Facebook altogether.

One data scientist said in a 2020 memo that Facebook teams studied the issue and “never really figured out why metrics declined.” The team members ultimately concluded that the prevalence of video and other professionally produced content, rather than organic posts from individuals, was likely part of the problem.

Facebook’s solution? Ratchet up the anger! It worked where all that Facebook Live video did not.

There is one way that the video pivots and layoffs of 2021 are different from the earlier round: Executives don’t mention Facebook anymore.

“Across our news brands we see consistent global growth on text articles as a way to reach and grow new audiences,” Cory Haik, Vice’s chief digital officer, wrote in that layoff memo last month. “Alternatively, our digital entertainment brands like NOISEY and MUNCHIES have had a remarkable increase in views and engagement through our visual platforms (YouTube, Instagram) but a precipitous decline in text consumption over the last few years, roughly 75 percent.”
