YouTube – Nieman Lab
https://www.niemanlab.org

Why news outlets are putting their podcasts on YouTube
https://www.niemanlab.org/2023/04/why-news-outlets-are-putting-their-podcasts-on-youtube/
Tue, 11 Apr 2023 18:51:24 +0000

It has recently come to my attention that some people prefer to watch podcasts. In my house, podcasts are for multitasking, like walking the dogs or doing the dishes — but it turns out I’m in the minority, according to Morning Consult data.

The research firm found that more podcast listeners in the U.S. prefer to watch podcasts on YouTube than listen to audio-only versions.

A different study found watchable podcasts attract more podcasting newbies — those discovering podcasts for the first time — and that people listening to podcasts on YouTube are more likely to be younger (18 to 34 years old) than regular listeners elsewhere. (Cumulus Media, which conducted the survey, took care to ensure that none of the survey’s 604 respondents worked in fields that would presumably be disproportionately full of podcast listeners, like media, advertising, marketing, podcasting, or public relations.) Some podcast viewers report actively watching the videos to catch facial expressions while others minimize the video to listen in the background while doing something else.

YouTube starts to generate ad revenue for creators sooner than many other social platforms. And these video podcasts don’t necessarily have to be pretty or a heavy lift in the production department. Plenty of podcasts are uploaded with less-than-crisp videos showing hosts and guests in Zoom-like boxes. Others feature a static image and maybe a sound wave animation, if you’re lucky.

YouTube has published resources on bringing journalism to the platform, but those guides tend to be written for individual journalists — er, “news creators” — rather than news publishers. The platform recently rolled out a dedicated “Podcasts” tab and upgraded featured podcasts to include shows from The New York Times and NPR. Several news organizations stressed to me that YouTube appears, to them, to still be refining its podcast strategy, and said they’re waiting to see what shakes out before jumping on with their own content.

“We’re committed to supporting the future of journalism, and that means continuing to create opportunities for the industry to harness the latest technology and techniques for growth on YouTube,” Elena Hernandez, a YouTube spokesperson, said in an email. “Whether it’s long form video, Shorts [more on those below], or podcasts, we’re always working to improve the experience and support multiple formats for news creators.”

Here’s more from three news publishers on choosing to bring podcasts to YouTube.

Just last month, Slate announced it would partner with YouTube to bring its shows — including extensive archives — to the platform. (A program called Headliner will allow the company to automate much of the process, a spokesperson noted.)

YouTube has more than 2.6 billion active users per month. The video platform enjoys a remarkably global audience, with more than half of internet users worldwide visiting YouTube at least once a month. Those numbers, ultimately, convinced Slate.

“Discoverability has become one of the biggest challenges across the podcast industry, and we see this as a real opportunity to build scale and reach a new, untapped audience on YouTube, which has become the world’s most-used podcast platform,” Slate president and chief revenue officer Charlie Kammerer said. “We’re excited to make our diverse collection of podcasts available to YouTube’s global audience, and to experiment with new formats and content ideas on the platform.”

Some of those experiments will include testing which Slate shows lend themselves to a visual medium and trying to envision what the next generation of a “video podcast” looks like. Slate also plans to use the videos on its own site and to experiment with Shorts, creating behind-the-scenes content to promote the channel.

For NPR, other lenses used to determine whether putting effort into YouTube is worth the lift included research and development (NPR wants to feel like it’s learning best practices for things like thumbnails, metadata, and discoverability) and reaching new audiences.

“We want to make sure we are reaching new audiences, and not recycling existing audiences,” Sucherman said. “Do we have evidence, if we are reaching new audiences, that they are younger, more diverse? And that these are ultimately public radio listeners [and] viewers of the future for us? Our mission is to reach as many Americans as possible with high-quality, fact-based journalism and information, however they choose to tune in.”

NPR, which recently laid off 10% of its staff and canceled podcasts amid a budget shortfall, leans toward producing content that can become a YouTube Short and a TikTok and appear on Instagram. “We use every part of the buffalo,” Sucherman noted.

The NPR team was frank about YouTube’s place in its overall social hierarchy. Audience teams have been more focused on Instagram and TikTok, where news about Ukraine and short-form videos from NPR Music have been doing especially well lately.

“We’re just trying to get a sense of what the audience might like, and not necessarily trying to build an audience around this content right now,” Jenkins said. “Our audience-building efforts are really taking place on Instagram, where we have a very robust NPR presence with our news content, as well as NPR Music.”

“We don’t do this in isolation. We do this as part of our overall podcast strategy and part of our overall content strategy. We’ve got levers that we’re pulling and pushing and this is one of them,” Jenkins added. “We are open to seeing it build over time — and we’re also open to changing course, depending on what makes the most sense.”

YouTube hit Channel 5 News is “reporting for people who don’t watch the news”
https://www.niemanlab.org/2022/07/youtube-hit-channel-5-news-is-reporting-for-people-who-dont-watch-the-news/
Thu, 21 Jul 2022 13:00:31 +0000

A recent college graduate in an oversized thrift-store suit, with curls like Napoleon Dynamite’s, Andrew Callaghan doesn’t necessarily look like a credible source of information. But Channel 5 News, Callaghan’s web series and brand, has built a following that includes 1.93 million YouTube subscribers, and the 25-year-old pulls in roughly $100,000 per month through Patreon.

“I think I provide a gateway to engagement with reporting for people who don’t watch the news,” Callaghan, 25, told me. “People who don’t watch the news watch me. People who watch the news don’t watch me.”

Since hitting the road in 2019, Callaghan’s work has evolved beyond a parodic presentation of small-town news. He recently reported from Ukraine, interviewing the mayor of Lviv and refugees in the country and across the border in Poland. Much of the money that Channel 5 brings in is spent on operating costs for the traveling production, and the rest is split evenly between Callaghan and two collaborators.

During his college years in New Orleans, Callaghan started hitchhiking the American South between classes. When local filmmaker Michael Moises started an Instagram show called Quarter Confessions, Callaghan became one of several hosts asking drunk people on Bourbon Street to tell embarrassing secrets.

After college, Callaghan was eager to get back on the road, this time with a cameraman and the stoic correspondent persona he perfected while interviewing drunk tourists in the French Quarter. A social media content studio called Doing Things Media offered to provide a $45,000 salary, a $10,000 budget, equipment and an RV. Nic Mosher, Callaghan’s best friend from college, became the de facto cameraman. Their first budget went toward entry to Burning Man, the festival in Nevada where they filmed the first episode of “All Gas No Brakes,” a web series owned by Doing Things Media. A few months later, Callaghan convinced his hometown best friend Evan Gilbert-Katz to join them as the lead producer. They crisscrossed the country interviewing people at fringe events like The Raid of Area 51, Midwest FurFest, and Donald Trump Jr. Book Club.

When protests over George Floyd’s murder by police erupted in the summer of 2020, Callaghan, Mosher and Gilbert-Katz went to Minneapolis. The comments on “Minneapolis Protest” made it clear that their Black Lives Matter coverage was filling a void. One of the top comments, liked by 18,000 people, proclaimed that “All Gas No Brakes officially has more journalistic integrity than any cable news org.”

“I love this because he is solely showing the footage of the riot, the words of the protestors on sight, and not pushing any agenda,” another commenter wrote. “I finished this video not knowing at all what his political opinions are about the riots, just having learned more about what it was actually like to be there.”

Later in 2020, Callaghan, in partnership with Doing Things Media, landed a movie deal with comedy giants Tim Heidecker and Eric Wareheim, with Jonah Hill and A24 later joining as executive producers. During production of the film, which is slated to be released this fall, their usual breakneck production of social media content slowed. According to Callaghan, Doing Things Media — which still owns All Gas No Brakes — was frustrated by the reduced pace and wanted the team to stay away from political topics. He says the company pushed out Mosher and Gilbert-Katz, declined his attempts to renegotiate a profit share that gave him 20% of the revenue, and ultimately fired him, too. (Doing Things Media referred me to a statement that Reid Hailey, the CEO of Doing Things, gave to The New York Times in 2021: “We’re really bummed it didn’t work out with Andrew. It was a special moment in time and we’re excited we got to be a part of it.”)

Callaghan, Mosher, and Gilbert-Katz spent the rest of the year working on the movie, which follows the Stop The Steal movement up to the insurrection on January 6, 2021. Soon, they were back on YouTube under a new moniker: Channel 5 News.

The name “Channel 5” helps the team get access to subjects and events. They bought a news van, wrapped it with their graphics, and added fake satellite dishes on top. While the brand is part parody and part camouflage, it also serves as a marker of their evolving journalistic pursuits. Most recently, they covered a pro-choice rally, traveled to the NRA conference and spoke with locals in Uvalde, Texas, following the school shooting, and attended the Satanic Temple Gathering (AKA SatanCon) in Scottsdale, Ariz.

Callaghan doesn’t consider himself a journalist in the traditional sense. “Journalists are the ones who break stories. I cover reactions. Big difference.” He rarely posits his own opinion, choosing to cover stories through the words of his subjects instead, and often sticks to open-ended questions like “What’s going on?” and “What’s on your mind?” “Andrew is just a normal guy treating people like humans,” Anna Rumbough, a fan from San Francisco, told me. “That’s what brings out such good interviews.”

“I’m trying to push a radical empathy agenda, and get people to think about why [other] people act and feel a certain way, as opposed to vilifying them,” Callaghan said.

But Callaghan and Channel 5 are not without their detractors. Some think of Channel 5 as “bro-nalism,” referring to the all-white, male team. Guest correspondent Sidam — who’s covered events like the Uhuru March for Reparations in Oakland, Calif., where he remarked on the absurdity of white people gathering cash donations framed as “reparations” to build a basketball court thousands of miles away in Missouri — is Black, but doesn’t own a stake in the company. A Berkeley City College student told me he finds Channel 5 content to be “very, VERY white,” pointing specifically to the videos about Crip Mac, a recurring subject who promotes gang life. “Its coverage of black topics is kinda ignorant… When yo family member is WACKED OUT by the streets, whether it be drugs or gang life, that shit ain something to put on camera and have dancin around like that,” he told me on Instagram.

Callaghan noted that he plans to build a diverse roster of correspondents — “Canal Cinco, Punjabi Channel 5” — but he believes it will happen naturally when more like-minded talents surface on their own. “I want a female correspondent,” he said.

Callaghan believes that independent creators like him will gradually replace the traditional pillars of journalism, “just because there’s so much distrust in media as it is … left and right.” Until then, Callaghan, Mosher, and Gilbert-Katz will have the opportunity to further shape the coming generations of journalists and social media reportage.

“I pretty much create news content for the disengaged,” he said. “That’s the achievement.”

Theo Schear is a filmmaker and freelance journalist. He shoots for the Golden State Warriors and his work has appeared in publications such as Juxtapoz, SFMOMA’s Open Space, Film Threat, and Deadspin.

Andrew Callaghan at St. Patrick’s Day celebrations in Boston. Photo by Theo Schear.

Russian influencers scramble to maintain their followers — and livelihoods
https://www.niemanlab.org/2022/04/russian-influencers-scramble-to-maintain-their-followers-and-livelihoods/
Wed, 06 Apr 2022 14:00:58 +0000

On March 7, the Moscow-based creator Greg Mustreader posted a YouTube video from a hotel room in Istanbul, Turkey. In the 12-minute clip, he explained to his 200,000-strong, mostly Russian subscriber base that he had fled Russia for fear of political retaliation. Days earlier, Russia’s Parliament had passed a new law punishing anyone who spread “false” information about the Russian military with up to 15 years in prison.

Greg, who requested to be referred to by his first name for his security, typically posted about literature, philosophy, and art before he started denouncing the war. He said the last month had upended his life. “The shock connected with the events of the war was more significant than the realization that I will have financial losses,” Greg said. “Of course, once I started thinking about the repercussions for my projects, the realization dawned that, yeah, I am going to be in some trouble.”

As waves of wartime sanctions by foreign governments and private companies hit Russia, the country’s creator economy is in flux. The state ban on both Instagram and Facebook, Google’s ban on most YouTube monetization in the country, and restrictions on the ability of Russian users to upload videos to TikTok have instigated a mass platform migration among Russian creators and audiences. To salvage their online followings, and incomes, some creators have started moving their audiences to new platforms.

Some creators, like Greg, have left Russia and pivoted their content to target international viewers. Greg said that he previously spent close to 90% of his time on his Russian-language channels but has dedicated most of his energy in the last few weeks to his English-language accounts. He launched an English-language TikTok account that amassed 100,000 followers after he started posting about the war earlier this month. “Many creators that I know are desperately trying to create an English-language [account] or at least have an English-language mirror of their [account],” he said.

Meanwhile, many creators are moving to alternative Russian-grown platforms like Yandex Zen, RuTube, and VKontakte (VK) — all part of a constellation of platforms that offer a government-approved alternative to services like Facebook, YouTube, and even Netflix. Others are moving to Telegram, a messaging service with an established reputation in Russia as a relative safe haven from government censors.

The mood in the industry has been “shock and awe,” according to Boris Omelnitskiy, the former president of the Russian chapter of the Interactive Advertising Bureau, a global trade association that has helped set standards for influencer marketing. “Western advertisers and platforms, payment systems, infrastructure players, backbone telecom operators are all leaving Russia at the same time.” The IAB cut ties with the country shortly after the war began.

Data confirms that Russian creators are scrambling to rebuild their followings on Russian-owned social media. In a March survey of 500 Russian content creators by marketing agency Twiga, 69% of creators interviewed, ranging from micro-influencers to those with millions of followers, said they plan to increase their presence on domestic platforms. Several creators who spoke with us said they were wary of this shift, which poses new threats of censorship, limited monetization, and the prospect of losing much of their audience in the transition.

Russian users are also on the move. An analysis of more than 3.3 billion social media messages by data analysis firm Brand Analytics from February 1 to March 10 showed that users in the country have already migrated to domestic social media in large numbers, particularly to VK, often labeled the Russian version of Facebook. On March 14, VK announced it set a new record for daily users, reaching more than 50 million, an increase of almost 9% since January.

“Today, everyone is adapting to the new reality,” said Yulia Pohlmann, co-founder of marketing agency Market Entry Atelier. “It is still very early to make a prognosis of how the social media landscape will look in Russia. But everyone is launching or unfreezing their accounts on VK, Telegram, Yandex Zen, and others.”

Alexey Markov, a YouTuber located in the Moscow suburbs who posts personal finance and economics-related content under the name Hoolinomics, is one of the many creators attempting to migrate his 200,000-plus YouTube followers to platforms less liable to be shut down. Reports indicate that Russia’s federal media regulator, Roskomnadzor, is pursuing a full ban of the site.

Markov’s early attempt to grow a following on Yandex Zen, a personalized reader platform launched in 2015 that resembles Flipboard but allows individual authors to post, has been slow. He has only about 1,000 subscribers there. Instead, he’s focusing on Telegram, which has more appeal to his international followers. Markov now has 67,000 followers on his main Telegram channel, where he posts his takes on crop prices, trade surpluses, and exchange rates at least once a day. In March, the number of active authors on Telegram increased 23%, and in Russia the platform surpassed WhatsApp in monthly web traffic.

For Markov, a creator who relied on long-form video formats and YouTube’s livestreaming features to build his career, the shift to Telegram has already been disruptive. So he continues to update his YouTube channel, where he invites subscribers to join him on wine-and-chess-night livestreams, in which he laments the state of the Russian economy. Markov said he wanted to convey a sense of stability to his followers on YouTube, despite reports that a blanket ban on the platform could be passed any day. Losing YouTube, Markov said, would be a significant blow.

“YouTube is not only for Russians, it’s for Russian-speaking followers,” he said. About 35% of Markov’s viewers come from former Soviet Union states, such as Ukraine, Belarus, and Kazakhstan. “I cannot move them to Yandex Zen or RuTube or other platforms because they just don’t want to be there.”

“We are used to Instagram’s technology, TikTok’s organic reach, and large-scale monetization on YouTube,” said Olga Berek, the president of the National Association of Bloggers in Russia. As a platform developed first and foremost as a messenger service, Telegram offers a vastly different user experience, ideal for building small but active communities. Currently, though, users rarely subscribe to more than 25 channels because of notification overload, she added. “Telegram copes with the load, but some measurements show that well-known bloggers can only transfer a small proportion of their subscribers to Telegram, less than 10%,” said Omelnitskiy, the former IAB Russia president.

Meanwhile, Russian homegrown platforms are trying to entice creators to join. Two weeks ago, VK, which now has 97 million monthly users, announced its largest support program for content creators yet and suspended fees for monetization tools for a month. Yandex Zen launched educational courses on how to grow a community on its platform. On March 28, Russian entrepreneurs opened up Rossgram, a clone of Instagram, for creator registration.

Some creators, however, have chosen not to migrate to Russian platforms like VK because of its close ties to the government. “You can’t be safe. And you can’t say what you want,” said Karolina K, a Belarusian lifestyle and travel creator who has nearly 400,000 subscribers on YouTube.

Karolina, who asked to be referred to by her first name for her security, has a majority Russian following on Instagram and YouTube. She was traveling in Turkey when she heard the news of the invasion and decided not to return to her home in St. Petersburg. While she previously used VK to keep in touch with friends and family and to promote her YouTube content, she said the platform has grown increasingly out of fashion and tends to skew to older audiences. But it’s VK’s reputation as a platform rife with state and self-censorship that cemented her decision not to return. “Of course people go to VK, but for me, I don’t see the future there.”

Creators like Karolina are contending with the growing politicization of influencers. Last week, the Russian Investigative Committee targeted socialite and Instagram lifestyle influencer Veronika Belotserkovskaya, under its new censorship law, for her posts denouncing the Russian invasion. Meanwhile, it has been reported that Russian influencer networks and supporters of the Putin government are being mobilized to spread disinformation on the war. One Ukrainian blogger has started a website called They Love War, with a running list of Russian-speaking influencers who have been silent on the war or have allegedly posted state propaganda.

Those who do take up state-backed platforms may still face some technical limitations. For YouTubers like Karolina and Markov, the most natural alternative to reaching audiences inside Russia would be RuTube, which is owned by Russia’s largest media conglomerate Gazprom-Media. An investigation from outlets IStories and Agentstvo in February revealed Russian authorities have been investing heavily for over a year in reviving RuTube to rival YouTube.

Three creators told us that RuTube is still plagued by a shoddy user experience, lack of monetization programs, and underdeveloped recommendation algorithms. Karolina recounted how it took a fellow creator 10 tries to upload a video to the platform recently.

“It’s awful. I’m sure that many of my colleagues would rather shoot themselves in the leg than upload a video to RuTube because it looks very painful,” said Greg, who has been discussing the program in a Telegram group of fellow Russian YouTubers. But as the number of platforms available continues to shrink and their income streams remain frozen, many creators still in Russia are left with little choice but to onboard to state-backed social media. “I think some of us are saying, Well, if we have to do it, we have to do it.”

Andrew Deck is a reporter at Rest of World. Masha Borak is a journalist covering the intersection of technology with politics, business, and society. This piece was originally published by Rest of World, a nonprofit newsroom covering global technology, and is being republished with permission.

Cover illustration by Glenn Harvey is being used with permission from Rest of World.

Parler is bringing together mainstream conservatives, anti-Semites, and white supremacists as the social media platform attracts millions of Trump supporters
https://www.niemanlab.org/2020/11/parler-is-bringing-together-mainstream-conservatives-anti-semites-and-white-supremacists-as-the-social-media-platform-attracts-millions-of-trump-supporters/
Mon, 30 Nov 2020 14:30:21 +0000

Since the 2020 U.S. presidential election, Parler has caught on among right-wing politicians and influencers as a social media platform where they can share and promote ideas without worrying about the company blocking or flagging their posts for being dangerous or misleading. However, the website has become a haven for far-right extremists and conspiracy theorists who are now interacting with the mainstream conservatives flocking to the platform.

As YouTube, Facebook, and Twitter continue to take action to mitigate the spread of extremism and disinformation, Parler has welcomed the ensuing exodus of right-wing users. It has exploded in popularity, doubling its members to 10 million during the month of November — although it is still dwarfed by Twitter’s roughly 330 million monthly active users and Facebook’s 2.7 billion monthly active users.

With its newfound success, the site is contributing to the widening gap between the different perceptions of reality held by the polarized public. On mainstream social media, Joe Biden and Kamala Harris won the presidential election, and theories alleging crimes by the Biden campaign and Democrats are flagged as misinformation. On Parler, Donald Trump won in a landslide, only to have his victory stolen by a wide-ranging alliance of evildoers, including Democrats and the so-called “deep state.”

While it’s too early to tell if Parler is here to stay, it has already achieved a reputation and level of engagement that has overtaken other alternative platforms. But along with its success comes the reality that extremist movements like QAnon and the Boogalooers have thrived in the platform’s unregulated chaos.

Parler’s origins

Parler was launched in 2018 and found its place as another niche platform catering to right-wing users who ran afoul of content moderation on Facebook, Twitter and YouTube. Its user base remained small — fewer than 1 million users — until early 2020.

Other primarily right-wing platforms, especially Gab, had housed fringe and violent ideologues and groups for much longer than Parler. These included violent far-right militias and the mass shooter Robert Bowers.

Parler, in contrast, gained a reputation for catering to mainstream conservatives thanks to a handful of high-profile early adopters like Brad Parscale, Candace Owens and Sen. Mike Lee. As a result, in 2020 when Twitter began labeling misleading Trump tweets about possible fraud in absentee and mail-in voting, politicians like Ted Cruz embraced Parler as the next bastion for conservative speech.

The 2020 election

In the weeks before the Nov. 3 election, the big social media sites took steps to mitigate election-related extremism and disinformation. Twitter rolled out labels for all mail-in ballot misinformation and put a prompt on tweeted articles to encourage people to read them before retweeting. Facebook blocked QAnon groups and, later, restricted QAnon-adjacent accounts pushing “SaveTheChildren” conspiracy theories. Facebook also began prohibiting Holocaust denial posts. YouTube labeled and blocked advertising for election-related fake information, though it left in place many conspiracy theory-promoting videos.

These actions continued in the wake of the election, especially as mainstream conservative politicians and Trump pushed the false claim that Biden and the Democrats committed large-scale voter fraud to steal the election. Consequently, millions of users migrated to alternative platforms: Gab, MeWe, and, in particular, Parler.

Users flocked there because of the promise of a site that wouldn’t label false information and wouldn’t ban the creation of extremist communities. But they also moved because Republican politicians and well-known elites signaled that Parler was the new home for conservative speech. Those figures included commentator Mark Levin and Fox News host Sean Hannity.

Promoting racism, anti-Semitism and violence

Parler has only two community guidelines: It does not knowingly allow criminal activity, and it does not allow spam or bots on its platform. The lack of guidelines on hate speech has allowed racism and anti-Semitism to flourish on Parler.

My research center has spent several years building an extensive encyclopedia of far-right terminology and slang, covering niche topics from the spectrum of white supremacist, neo-fascist and anti-state movements. We have studied the ways that far-right language evolves alongside content moderation efforts from mainstream platforms, and how slang and memes are often used to evade regulations.

We have monitored far-right communities on Parler since March and have found frequent use of both obvious white supremacist terms and more implicit, evasive memes and slang. For example, among other explicit white supremacist content, Parler allows usernames referencing the Atomwaffen Division’s violently anti-Semitic slogan, posts spreading the theory that Jews are descended from Satan, and hashtags such as #HitlerWasRight.

In addition, it is easy to find the implicit bigotry and violence that eventually caused Facebook to ban movements like QAnon. For example, QAnon’s version of the “blood libel” theory — the centuries-old conspiracy theory that Jewish people murder Christians and use their blood for rituals — has spread widely on the platform. Thousands of posts also use QAnon hashtags and promote the false claim that global elites are literally eating children.

Among the alternative platforms, Parler stands out because white supremacists, QAnon adherents and mainstream conservatives exist in close proximity. This results in comment threads on politicians’ posts that are a melting pot of far-right beliefs, such as a response to Donald Trump Jr.’s unfounded allegations of election crimes that states, “Civil war is the only way to drain the swamp.”

Behind the scenes

Parler’s ownership is still kept largely secret. However, the few pieces of information that have come to light make Parler’s spike in popularity even more concerning.

For example, Dan Bongino, the highly popular right-wing commentator who published a book about the “deep state” conspiracy theory and frequently publishes unverified information, has at least a small ownership stake in the company. CEO John Matze has said that the ownership is composed of himself and “a small group of close friends and employees.”

Notably, conservative billionaire Robert Mercer and his daughter, Rebekah, are investors in the platform. Rebekah Mercer helped co-found it with Matze. The Mercers are well known for their investments in other conservative causes, including Nigel Farage’s Brexit campaign, Breitbart News and Cambridge Analytica. The connection to Cambridge Analytica has, in particular, alarmed experts, who worry that Parler may harvest unnecessary data from unwitting users.

Parler’s privacy policy doesn’t put to rest concerns about user privacy, either: The policy says that Parler has permission to collect a vast amount of personal information, and gives its members much less control than mainstream platforms over what that data can be used for.

Parler’s future

Parler’s fate will hinge on what its members do over the next few months. Will the company be able to capitalize on the influx of new users, or will its members slowly trickle back to the larger platforms? A major factor is how Trump himself reacts, and whether he eventually creates an account on Parler.

Having catered to a right-wing audience and allowed hate speech to thrive on its platform, Parler is also at the whims of its user base. Parler’s main competitor, Gab, similarly attempted to capitalize on concerns about unfair moderation against conservatives. However, Gab’s expansion came to a halt after Bowers’ mass shooting at a synagogue in Pittsburgh. Bowers had been posting anti-Semitic and violent content on the platform, and the revelation resulted in PayPal, GoDaddy, and Medium banning Gab from their services.

Online extremism and hate can lead to real-world violence by legitimizing extreme actions. Parler’s tolerance of hate, bigotry and affiliation with violent movements opens the possibility that, like Gab, one or more of its members will commit acts of violence.

Although it’s hard to know how Parler will grow in the future, my research suggests that the extremism among its user base will persist for months to come.

Alex Newhouse is the research lead at the Center on Terrorism, Extremism, and Counterterrorism at the Middlebury Institute of International Studies. This article is republished from The Conversation under a Creative Commons license.

Is Facebook too big to know? The Markup has a plan (and a browser) to wrap its arms around it
https://www.niemanlab.org/2020/10/is-facebook-too-big-to-know-the-markup-has-a-plan-and-a-browser-to-wrap-its-arms-around-it/
Mon, 19 Oct 2020 11:37:23 +0000

I like to think of David Weinberger’s book titles over the past two decades as a sort of tour of the Internet’s metastasizing complexity. First, in 2003, Small Pieces Loosely Joined. Then, in 2007, Everything Is Miscellaneous. In 2012, Too Big to Know. And last year, Everyday Chaos. They move, roughly, from connection to organization to information to, well, chaos, which sounds like the path I remember the Internet taking over that stretch of time.

I think of his work, especially Too Big to Know, whenever I hear someone talk about “Facebook” or “Twitter” or “YouTube” as if they were each a single unitary thing. Or whenever people assume that their News Feed must somehow be indicative (or at least suggestive) of what a billion other people’s News Feeds look like. The experience of social platforms is profoundly fractured, and the only people with any sort of god-like insight into the beast are the ones who work at the companies themselves, with access to the uncountable tendrils of personalization that touch every user interaction. That makes it very hard, in practice, to make defendable statements that begin “Facebook does x to its users” or “YouTube leads users to do y.” From the outside, they’re just too big to know.

Into that problem walks The Markup, the well-funded journalism startup that specializes in exposing those tendrils. “Our approach is scientific: We build datasets from scratch, bulletproof our reporting, and show our work,” its manifesto reads. And now it’s using “The Markup Method” to try to get its arms around the tech giants.

On Friday, it announced a new initiative called the Citizen Browser Project — “an initiative designed to measure how disinformation travels across social media platforms over time.”

At the center of The Citizen Browser Project is a custom web browser designed by The Markup to audit the algorithms that social media platforms use to determine what information they serve their users, what news and narratives are amplified or suppressed, and which online communities those users are encouraged to join. Initially, the browser will be implemented to glean data from Facebook and YouTube.

A real nerd knows: Why make a browser plugin when you can make a browser?

A nationally representative panel of 1,200 people will be paid to install the custom web browser on their desktops, which allows them to share real-time data directly from their Facebook and YouTube accounts with The Markup. Data collected from this panel will form statistically valid samples of the American population across age, race, gender, geography, and political affiliation, which will lead to important insights about how Facebook’s and YouTube’s algorithms operate. To protect the panel’s privacy, The Markup will remove personally identifiable information collected by the panel and discard it, only using the remaining redacted data in its analyses.

“Social media platforms are the broadcasting networks of the 21st century,” said The Markup’s editor-in-chief, Julia Angwin. “They dictate what news the public consumes with black-box algorithms designed to maximize profits at the expense of truth and transparency. The Citizen Browser Project is a powerful accountability check on that system that can puncture the filter bubble and point the public toward a more free and democratic discourse.”

To put it in terms that seem appropriate to the moment: The Citizen Browser Project is like a top-of-the-line poll that measures public opinion on a particular issue. It’s necessarily an imperfect instrument, with margins of error growing as each demographic or psychographic subset is sliced more thinly. But it’s still a lot more useful than an endless string of anecdotes, the way so much discussion about social media goes — which I think, in this metaphor, would be a reporter counting yard signs to measure voters’ excitement for a candidate.
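To make the polling analogy concrete, here’s a rough back-of-the-envelope sketch (my own illustration, not The Markup’s stated methodology): the textbook margin-of-error formula for a simple random sample shows how quickly the error bars widen once a 1,200-person panel is sliced into smaller subgroups.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Full panel vs. hypothetical demographic slices of it
for n in (1200, 300, 100):
    print(f"n={n:>4}: ±{margin_of_error(n):.1%}")
# n=1200: ±2.8%,  n= 300: ±5.7%,  n= 100: ±9.8%
```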

(As Weinberger put it in Too Big to Know: “The massive increase in the amount of information available makes it easier than ever for us to go wrong. We have so many facts at such ready disposal that they lose their ability to nail conclusions down, because there are always other facts supporting other interpretations.”)

This is the sort of thing that The Markup is designed for and best at. There are not many other news organizations that have both the technical skill to do this sort of analysis and the funding necessary to convince a nationally representative sample of Americans to install some Frankenbrowser they’ve never heard of.

(The money’s important. Previous attempts to do this sort of work, like NYU’s Ad Observatory, have relied on volunteers installing a browser plugin. But those volunteers tend to be, well, the kind of people who sign up for online citizen data-collection projects — educated coastal liberals — skewing the results. To build a truly representative sample, you’ve got to pay people.)

What The Markup doesn’t have — being a small and new nonprofit news organization working a very particular niche — is a huge organic audience to show all of its unique work. And thus: “The Markup has teamed up with The New York Times to analyze the data and report on the project’s findings together.”

Here’s what Angwin said about the project in a Zoom talk Friday:

The problem we face right now is that there’s no oversight of the algorithmic gatekeepers. The thing you could say about the old white men who ran the news business of the past was you did know what their decisions were. Their decisions were displayed on the front page in the newspaper — that was what they felt was the most important. Or they were the top of the news hour on the six o’clock news. So we did know what the outcome of their decisions was.

But with algorithmic gatekeepers, every one of us sees a different News Feed. And so there’s not really a way to say: What are they choosing to amplify? And what are they choosing not to amplify? And this, I think, is a fundamental issue for our democracy. Because if we can’t see what they’re saying, we can’t hold them accountable for those things.

So I think in this context, journalism’s role is changing, right? We still have to do the work, the bread and butter work of witness, and we always will. But we have to spend more time adjusting to this new reality, which is: We have to spend time doing what I would call forensics. Verifying and authenticating witness accounts and digital data trails. And also auditing: I think we need to hold these algorithmic gatekeepers accountable for what narratives they choose to amplify. Because right now, they can tell us anything about what their decisions were, and we don’t have a good way to hold them accountable.

“So we are hoping that when we get this tool going in the next few weeks that we will be able to answer the types of questions that really haven’t been answerable so far,” Angwin said. “For instance, what are conservative women seeing at the top of their feed? What kind of groups are being recommended to black men? We’re going to build real-time dashboards. And we’re going to collect the ad targeting information.”

If this sounds like the most wonderful thing in the world to you, The Markup is currently hiring a data reporter to work on the Citizen Browser Project.

Otherwise…sit tight until there are some results to share. As Weinberger put it in Too Big to Know: “Knowledge is becoming a property of the network, rather than of individuals who know things, of objects that contain knowledge, and of the traditional institutions that facilitate knowledge…We will argue about whether our new knowledge will bring us closer to the truth, as I think it overall does. But one thing seems clear: Networked knowledge brings us closer to the truth about knowledge.”

Original photo showing stars in the constellation of Sagittarius, captured by the Hubble Space Telescope’s Advanced Camera for Surveys, by NASA.

About a quarter of American adults get news from YouTube
https://www.niemanlab.org/2020/09/about-a-quarter-of-american-adults-get-news-from-youtube/
Tue, 29 Sep 2020 17:43:12 +0000

Twenty-six percent of American adults get news from YouTube, according to a new study by the Pew Research Center. And 13% of those say YouTube is the most important way they get news.

The Center studied the platform, which has more than two billion monthly users, by conducting a survey of over 12,000 U.S. adult YouTube news consumers in January 2020 and asking them about their experiences. Then Pew analyzed the most popular YouTube news channels and the contents of the videos some of those channels published in December 2019.

Pew found that of the 377 most popular YouTube news channels, 49% belong to news organizations while 42% belong to independent channels or creators. Among news consumers, 23% said they often watch videos by news outlets and independent channels. Established news outlets “no longer have full control over the news Americans watch,” Pew notes.

Those YouTube news consumers feel pretty much fine about getting information from the site, though nearly a third think misinformation there is a “very big problem” and another 33% think it is a “moderately big problem.” Democrats were more likely than Republicans to say misinformation and harassment are “very big problems” on the platform, while Republicans were more likely to say the same about demonetization, censorship, and political bias.

The content analysis also found that the styles and content of news videos vary widely, from video length to upload frequency. “During the period analyzed (December 2019), news organizations posted a much higher volume of videos than independent sources (33 vs. 12 for the typical channel of each type), while independent channels’ videos were typically much longer (more than 12 minutes, compared with about five minutes for videos from channels affiliated with news organizations),” the researchers write.

The report also notes that 44% of these news channels are centered around YouTubers instead of what we think of as more traditional journalists:

The content analysis also finds that most of these independent channels are centered around an individual personality — often somebody who built their following through their YouTube channel — rather than a structured organization.

While 22% of popular YouTube news channels affiliated with a news organization use this personality-driven structure, seven-in-ten of the most popular independent news channels are oriented around a personality. And the people at the center of most of these independent channels are often “YouTubers” (i.e., people who gained a following through their YouTube presence; 57% of all independent news channels) rather than people who were public figures before gaining attention on YouTube (13%).

These different offerings and approaches to the news could have a variety of implications for the experiences of people who get news on YouTube. On the one hand, most YouTube news consumers seem to have a positive experience. Clear majorities in this group say in the survey that the news videos they watch on YouTube help them better understand current events (66%) and expect them to be largely accurate (73%). And a similar share (68%) say the videos keep their attention and that they typically watch closely, rather than playing them in the background.

Independent channels are also much more likely to cover QAnon conspiracy theories than established news organizations. Of the 100 most viewed YouTube news channels, just 2% of videos by traditional news organizations even mentioned QAnon or another conspiracy. Among independent news channels, that shot up to 21%.

Read the full report here.

Biased algorithms on platforms like YouTube hurt people looking for information on health
https://www.niemanlab.org/2020/07/biased-algorithms-on-platforms-like-youtube-hurt-people-looking-for-information-on-health/
Wed, 15 Jul 2020 14:21:05 +0000

YouTube hosts millions of videos related to health care.

The Health Information National Trends Survey reports that 75% of Americans go to the internet first when looking for information about health or medical topics. YouTube is one of the most popular online platforms, with billions of views every day, and has emerged as a significant source of health information.

Several public health agencies, such as state health departments, have invested resources in YouTube as a channel for health communication. Patients with chronic health conditions especially rely on social media, including YouTube videos, to learn more about how to manage their conditions.

But video recommendations on such sites could exacerbate preexisting disparities in health.

A significant fraction of the U.S. population is estimated to have limited health literacy, or the capacity to obtain, process and understand basic health information, such as the ability to read and comprehend prescription bottles, appointment slips or discharge instructions from health clinics.

Studies of health literacy, such as the National Assessment of Adult Literacy conducted in 2003, estimated that only 12% of adults had proficient health literacy skills. This has been corroborated in subsequent studies.

I’m a professor of information systems, and my own research has examined how social media platforms such as YouTube widen such health literacy disparities by steering users toward questionable content.

On YouTube

I extracted thousands of videos purporting to be about diabetes and verified whether the information they presented conformed to valid medical guidelines.

I found that the most popular and engaging videos are significantly less likely to have medically valid information.

Users typically encounter videos on health conditions through keyword searches on YouTube. YouTube then provides links to authenticated medical information, such as the top-ranked results. Several of these are produced by reputable health organizations.

Recently, YouTube has adjusted how search results are displayed, allowing results to be ranked by “relevance” and providing links to verified medical information.

However, when I recruited physicians to watch the videos and rate them on whether these would be considered valid and understandable from a patient education perspective, they rated YouTube’s recommendations poorly.

I found that the most popular videos are the ones that tend to have easily understandable information but are not always medically valid. A study on the most popular videos on COVID-19 likewise found that a quarter of videos did not contain medically valid information.

The health literacy divide

This is because the algorithms underlying recommendations on social media platforms are biased toward engagement and popularity.
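To illustrate the mechanism, here’s a toy sketch (my own illustration with made-up titles and scores, not YouTube’s actual ranking system) of how sorting results purely by engagement can push a popular but medically invalid video above vetted content, while a ranking that also weighs validity keeps vetted content on top.

```python
# Toy illustration only: hypothetical videos with invented engagement counts
# and a validity flag of the kind physician reviewers might assign.
videos = [
    {"title": "Reverse diabetes with this one trick", "engagement": 980_000, "medically_valid": False},
    {"title": "Managing type 2 diabetes: diet and medication", "engagement": 42_000, "medically_valid": True},
    {"title": "Insulin basics from an endocrinologist", "engagement": 18_000, "medically_valid": True},
]

# Engagement-biased ranking: the invalid video surfaces first.
by_engagement = sorted(videos, key=lambda v: v["engagement"], reverse=True)

# A ranking that also weighs validity keeps vetted content on top.
by_validity = sorted(videos, key=lambda v: (v["medically_valid"], v["engagement"]), reverse=True)

for v in by_engagement:
    print(v["title"], "(medically valid)" if v["medically_valid"] else "(not medically valid)")
```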

Based on how digital platforms provide information to search queries, a user with greater health literacy is more likely to discover usable medical advice from a reputed health care provider, such as the Mayo Clinic. The same algorithm will steer a less literate user toward fake cures or misleading medical advice.

This could be especially harmful for minority groups. Studies of health literacy in the United States have found that limited health literacy disproportionately affects minorities.

We do not have enough studies on the state of health literacy among minority populations, especially in urban areas. That makes it challenging to design health communication aimed at minorities, and interventions to improve the utilization of existing health care resources.

There can also be cultural barriers regarding health care in minority populations that exacerbate the literacy barriers. Insufficient education and lack of self-management of chronic care have also been highlighted as challenges for minorities.

Algorithmic biases

Correcting algorithmic biases and providing better information to users of technology platforms would go a long way in promoting equity.

For example, a pioneering study by the Gender Shades project examined disparities in identifying gender and skin type across different companies that provide commercial facial recognition software. It concluded that companies were able to make progress in reducing these disparities once issues were pointed out.

According to some estimates, Google receives over a billion health questions every day. People with low health literacy are at particular risk of encountering medically unsubstantiated information, such as popular myths or active conspiracy theories that are not based on scientific evidence.

The World Economic Forum has dubbed health-related misinformation an “infodemic.” Because anyone can engage on digital platforms, they are also vulnerable to misinformation, which accentuates disparities in health literacy, as my own work shows.

Social media and search companies have partnered with health organizations such as the Mayo Clinic to provide validated information and reduce the spread of misinformation. To make health information on YouTube more equitable, those who design recommendation algorithms would have to incorporate feedback from clinicians and patients as well as end users.

Anjana Susarla is a professor of information systems at Michigan State University. This article is republished from The Conversation under a Creative Commons license.

Bandaid on a kid’s hand by anjakb used under a Creative Commons license.

YouTube might help boost news subscriptions with a new tool
https://www.niemanlab.org/2020/05/youtube-might-help-boost-news-subscriptions-with-a-new-tool/
Tue, 05 May 2020 17:21:34 +0000

YouTube’s latest plan to support news publishers reportedly includes a new tool that would allow publishers to sell subscriptions from their YouTube videos, according to Digiday.

Per Digiday, the details of the tool — including the cut that YouTube would take, and how the subscription offerings would be presented to viewers — haven’t been finalized, but YouTube has reportedly been in talks with publishers about such a tool since last year and it’s part of the video platform’s work with the Google News Initiative. YouTube’s existing channel membership tool allows content creators to offer exclusive perks and features to paying members (something that Vox, for instance, has experimented with).

Publishers have been told that YouTube and Google have been working to tie the video platform’s subscription sales tool to Subscribe With Google, a tool that Google rolled out in April 2018 for people to subscribe to publishers’ sites using their Google accounts. YouTube is also working on a way for publishers’ existing subscribers to connect their subscriptions to the publishers’ YouTube channels so that a publisher could distribute videos on the channel that are only available to its paying subscribers, regardless of whether a person subscribed directly from the publisher or through YouTube.

Publishers expect that YouTube will share subscriber information, such as subscribers’ names and email addresses, with the publishers. Google provides that information to publishers using Subscribe With Google. Receiving subscribers’ email addresses would enable publishers to establish direct relationships with the subscribers they receive from the platform.

“Every day is Saturday” on YouTube, as traffic toward “authoritative” sources surges
https://www.niemanlab.org/2020/04/every-day-is-saturday-on-youtube-as-traffic-toward-authoritative-sources-surges/
Thu, 23 Apr 2020 16:45:05 +0000

Today’s the 15th anniversary of the first-ever video uploaded to YouTube. “All right,” YouTube co-founder Jawed Karim begins. “So here we are, in front of the elephants.”

Since then, the platform has evolved from one video of elephants at the zoo to the most popular video platform on the Internet with an average of two billion monthly users. It’s also become a huge resource for parents and children during the coronavirus pandemic while millions are forced to stay home. “Every day is Saturday,” YouTube engineer Scott Silver told CNET, disagreeing with Morrissey.

According to CNET’s Richard Nieva, YouTube has seen a huge jump in people watching videos seeking factual information:

As people around the world shelter in place, the Google-owned site has attracted parents on the hunt for children’s content, consumers looking for news, and people just trying to find a distraction during stressful times.

The surge in usage, though, could prove thorny for a platform that has for years been plagued with misinformation, extremism and child exploitation. The latest blight on the platform has been conspiracy theories tying COVID-19 to 5G wireless towers.

Still, YouTube says it has a handle on the situation when it comes to misinformation. During the first three months of the year, the company says it has seen a 75% increase in people watching videos from “authoritative” sources, such as legitimate news outlets, government agencies and health authorities like the World Health Organization. YouTube declined to share specific viewership numbers.

To steer people toward credible information, the company says it has been proactive. YouTube reached out to the team of Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, to set up videos with prominent creators on the platform, including Phil DeFranco, Doctor Mike and Lilly Singh. The videos have tallied around 30 million views collectively. YouTube has also tried to highlight educational content for kids stuck in quarantine. Last month, the company launched a hub called Learn@Home for parents to find education videos. The company says queries of “home school” on YouTube have doubled since March 13.

Historically, YouTube has fumbled the ball with misinformation, from incentivizing climate change deniers to algorithm errors during times of crisis. Still, YouTube insisted to CNET that it’s in a much better place now to handle larger problems.

YouTube’s engineers are constantly under pressure to remove offending material. Those takedowns run the gamut: They can be as innocuous as a copyright violation or as horrendous as a terrorist attack. After a shooter in Christchurch, New Zealand, live streamed himself killing worshippers at two mosques last year, tens of thousands of videos of the incident began to flow onto YouTube. The company’s engineers worked feverishly through the night to remove the content.

While the Christchurch tragedy played out online over a horrific several hours, the COVID-19 situation, from an engineering standpoint, will be a drawn out process of policing the site for objectionable content over at least the next few months. Earlier in April, YouTube made the call to ban coronavirus 5G conspiracy videos.

“What this feels like, in some ways, is a very, very long Christchurch,” Silver said, though he adds he wants to be careful about comparing the situations…

When it comes to the coronavirus situation, YouTube knows the tech won’t always be perfect. But the company says it’s in a better position to deal with the crisis — and the influx of people on the platform — because it’s used to working at a big scale. “We essentially built for that growth,” Silver said. “In many ways, a lot of what we prepared for has come true.”

Coronavirus got you housebound? Here’s how Splice quickly pulled together an online streaming event (https://www.niemanlab.org/2020/03/coronavirus-got-you-housebound-heres-how-splice-quickly-pulled-together-an-online-streaming-event/, Thu, 26 Mar 2020)

Editor’s note: Back in January, our friends at Splice became one of the first news outlets financially hurt by the coronavirus when it was forced to postpone its Splice Beta conference on media innovation in Asia as the virus began its spread. (It’s since been rescheduled for September.)

With all the restrictions on travel, Splice decided to experiment with a free online-only event. (“We’re calling it the Splice Low-Res Festival, because it’s a quick and dirty idea and we all know video conferences can be pretty fuzzy…We’re doing this because it’s cheaper than therapy.”) It ended up with more than 200 people registered to watch 14 speakers on Tuesday, with no one talking for more than 20 minutes. You can now watch it all on YouTube.

As we wrote about yesterday, the future of conferences even post-coronavirus is deeply uncertain — both the kind that journalists attend and the kind media companies use to drive revenue. However things shake out, online events like Splice Low-Res are going to play a bigger part. Here, Splice’s Alan Soon describes how they pulled it off.

If you’re reading this, you’re probably trying to figure out how to do your first online event. We decided to do Splice Low-Res because (1) we needed to learn how to do this, as COVID-19 leads us deeper online, and (2) we wanted to see how the media startup community was holding up.

We haven’t figured it all out — but this is what we’ve learned so far. Let us know where you think we could have done better.

Know what you need

Start with your use cases. It’s not about the technology — that comes once you’ve decided what’s important for your event. (That’s why you’ll only find the stuff about tech at the bottom of this post.)

There wasn’t a single platform that could do everything we needed end-to-end — not in the way we wanted, at least. So break it down so you’re thinking clearly about what you need, what you want, and what you can do without.

We’re community-focused, so capturing email registrations and making this content available widely are essential for us.

Registrations are a success metric for us. It’s not the sheer number — we wanted to know that we were indeed reaching specific people in the community. If we’re not reaching the kind of people we’re trying to serve, what’s the point? The idea is to be relevant. We also want to be able to do these events regularly, so keeping people in the loop through email is key.

Jobs to be done

Essential

  1. Front-end site: For brand building, for a single destination, and for a proof-of-concept for sponsors. It should be templatized to reuse for future events.
  2. Registrations: To let us register people, collect email addresses for future events, and produce an attendee snapshot for sponsors.
  3. Q&A: For interactive questions from delegates during the event. Ideally, these questions and comments should appear within the video feed itself so they can be easily captured in the archive.
  4. Community engagement post-event: Following up with attendees — for example, sending out speakers’ presentation decks, connecting people one-on-one, and signaling your next event.
  5. Multi-source inputs (headshots, split-screen headshots, slides, videos): To let us cut between all of these formats within a speaker’s presentation.
  6. Video archival: To have the event stored on YouTube or some other video hosting platform for later viewing.

Nice to have

  1. Simulcast video distribution: To broadcast the video across multiple platforms at the same time, e.g. YouTube Live and Facebook Live.
  2. Metrics: When did people join the event? How long did they stay? Who were the most popular speakers?

Not essential

  1. Program management: To quickly put together a rundown, allowing you to move things around as needed. Only matters if you have a complex show to execute.
  2. Speaker management: To help people discover who’s speaking. This could just go on the website, or be pushed out on social.
  3. Payments: A seamless way to buy a ticket as part of the registration process. We didn’t charge for Splice Low-Res, so it wasn’t important for us.

Software

Hopin: We tried this — spent $100 on it — but realized it was too cumbersome and too inflexible for what we needed. That said, Hopin checks a lot of boxes, from registrations and speaker management to chat conversations and output. There were just too many levels of complexity for us. We were also disappointed with the slow response by tech support (granted, these are busy days for them).

Zoom: A favorite for many, but we didn’t like the UI. The dealbreaker was that people would need to download the app just to join the event.

Google Hangouts/Meet: We went with this in the end for two good reasons: Everyone has joined a Google Hangout before — it’s familiar ground. And it only takes one click to get you in. It doesn’t tick all the boxes, but it was the most frictionless option we found.

We did have to manually let everybody in, but that wasn’t a massive problem. The bandwidth, given how many people were online at the same time, was amazing. Meet also allows you to record straight into your Google Drive, which is helpful when you need to share the video quickly.

(Disclosure: The Google News Initiative sponsored Splice Low-Res. But they never once asked us to use their services.)

OBS Studio: We stole this idea from the gaming community, which has been using OBS for years to dress up its live streams. OBS, which is open source, allowed us to add graphics, scenes, and branding — and output it all as a live stream. OBS also works on both Windows and Mac.

We actually created all our scenes in OBS, but decided against using it in the end because it would have been hard for us to manage from multiple locations across six hours. Instead, we used Google Slides to create branded welcome slides, program rundowns for Asia and Europe, as well as house rules. But we’ll probably revisit OBS the next time we do Low-Res.

YouTube Live: From OBS, we had planned to push the stream out to YouTube in real time; it would have been the most obvious way to archive our show. But we decided against that — the goal should be to drive all real-time participation through Meet, and upload the video to YouTube after the event.
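
If you follow the same approach — record in Meet, then upload after the event — the upload step can be scripted with the YouTube Data API rather than done by hand. The sketch below is not part of Splice’s actual setup; it assumes you’ve created a Google Cloud project with the YouTube Data API enabled and downloaded its client_secret.json, and the file name, title, and description are placeholders.

```python
# pip install google-api-python-client google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]

def upload_recording(path, title, description):
    # One-time OAuth consent in the browser; credentials come from the
    # client_secret.json of a Google Cloud project with the YouTube Data API enabled.
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    creds = flow.run_local_server(port=0)
    youtube = build("youtube", "v3", credentials=creds)

    request = youtube.videos().insert(
        part="snippet,status",
        body={
            "snippet": {"title": title, "description": description},
            "status": {"privacyStatus": "unlisted"},  # switch to "public" once you've checked the upload
        },
        media_body=MediaFileUpload(path, resumable=True),
    )
    response = None
    while response is None:  # resumable upload, chunk by chunk
        _, response = request.next_chunk()
    print("Uploaded: https://youtu.be/" + response["id"])

# Placeholder file name and metadata — swap in the Meet recording pulled from Drive.
upload_recording("splice-low-res-recording.mp4",
                 "Splice Low-Res Festival — full recording",
                 "Recording of the Splice Low-Res online event.")
```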

Mailchimp: We manage almost the entire Splice business on Mailchimp — our newsletters run on it, as well as our landing pages, some social automation, and now surveys. It made sense to collect all email registrations there so we could quickly send updates to folks.

Carrd: We used this for our splicelowres.com website. It’s one of the most beautiful one-page website builders out there. If you only need a single page, this is perfect.

Hardware

Sound: It’s quite simple: Sound matters more than video when it comes to keeping an audience engaged in an online event. Even if the video is iffy, it’s essential that people can hear clearly. So always insist that people plug in headphones and mics (even iPhone earbuds make a huge difference). But even the best mic can’t do much in an echo-y room or on a noisy street. So make sure your speakers are in quiet places — and that everyone else is on mute.

Video: Most standard webcams will do the trick. Google Meet will only take you up to 720p in resolution, so you don’t actually need crazy stuff like 1080p or 4K. If you want to do a little better, just add lighting: face a bright window or set up a simple lighting kit.

Tips

Collaborate: Don’t do this on your own. It was challenging enough for just Rishad and me. So we worked with Jakub Górnicki at Outriders to bring it all together. He organized the European segment, while we handled the Asia program. If there’s one lesson from COVID-19, it’s the importance of collaboration.

Slides: Some people had problems sharing their presentations on Meet. So as a backup, make sure you get speakers to share their decks with you separately — then you can present the slides for them if you need to. Google Slides are always easier to deal with than PowerPoint attachments.

Timekeeping: Some speakers will run late. Some will have problems connecting. So make sure you have people you can count on to do a bit of quick banter to keep the conversation going.

Networking: Plenty of people will want to stay in touch with each other. Just get them to leave their email addresses in the chat window if they’re up for that.

Over-communicate: Many people will have the same questions: Is this being recorded? How will we stay in touch? How can I get that deck? You’ll have to repeat the ground rules often, especially for newcomers.

Welcoming: Call out new people as they join. Make them feel at home.

YouTube’s algorithm is pushing climate misinformation videos, and their creators are profiting from it (https://www.niemanlab.org/2020/01/youtubes-algorithm-is-pushing-climate-misinformation-videos-and-their-creators-are-profiting-from-it/, Thu, 16 Jan 2020)

When an ad runs on a YouTube video, the video creator generally keeps 55 percent of the ad revenue, with YouTube getting the other 45 percent. This system’s designed to compensate content creators for their work.

But when those videos contain false information — say, about climate change — that revenue split effectively rewards the creation of more misinformation. Meanwhile, the brands advertising on YouTube often have no idea where their ads are running.

In a new report published today, the social-activism nonprofit Avaaz calculates the degree to which YouTube recommends videos with false information about climate change. After collecting more than 5,000 videos, Avaaz found that 16 percent of the top 100 related videos surfaced by the search term “global warming” contained misinformation. Results were a little better on searches for “climate change” (9 percent) and worse for the more explicitly misinfo-seeking “climate manipulation” (21 percent).

The videos containing misinformation also had more views and more likes than other videos returned for the same search terms — by averages ranging from roughly 20 percent to 90 percent, depending on the search.

Avaaz identified 108 different brands running ads on the videos with climate misinformation; ironically enough, about one in five of those ads was from “a green or ethical brand” like Greenpeace or World Wildlife Fund. Many of those and other brands told Avaaz that they were unaware that their ads were running on climate misinformation videos.

The report doesn’t estimate the dollar figures involved in ads on these videos containing misinformation. But if you assume a CPM of $8.00 (YouTube’s median as of Q2 2019) and all videos are monetized, the 21.1 million views Avaaz calculated they’d received would have generated something like $75,000 for YouTube and $92,000 for the videos’ makers. That’s not nothing, especially when it comes to incentivizing more misinformation videos. But it’s also far from significant for YouTube or Google.
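
For readers who want to check that back-of-the-envelope math, here’s a minimal sketch in Python. The view count, the $8 CPM, and the 55/45 split come from the figures above; the assumption that every single view was monetized is a simplification, not something the report measured.

```python
# Rough revenue estimate for the flagged videos, using the figures cited above.
views = 21_100_000    # total views Avaaz counted across the misinformation videos
cpm = 8.00            # assumed revenue per 1,000 monetized views (YouTube's reported median, Q2 2019)
creator_share = 0.55  # creators keep 55% of ad revenue; YouTube keeps 45%

gross = views / 1_000 * cpm
print(f"Gross ad revenue: ${gross:,.0f}")                        # ≈ $168,800
print(f"Video makers:     ${gross * creator_share:,.0f}")        # ≈ $92,840
print(f"YouTube:          ${gross * (1 - creator_share):,.0f}")  # ≈ $75,960
```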

Avaaz cites YouTube’s much-debated algorithms as the main culprit, given that its video recommendations account for 70 percent of what people watch on the platform. “For every climate misinformation video someone watches or likes, similar content is likely to show up in that person’s recommendations, thereby trapping the viewer in an online bubble of misinformation,” the report says.

It was a year ago that YouTube said it would change its algorithms to recommend fewer conspiracy theory or misinformation videos, and anecdotal evidence has suggested at least some improvement. But these results, gathered in August 2019, suggest there’s still a ways to go.

What’s particularly concerning is that YouTube is one of the most used platforms among teenagers ages 13 to 17, and as we’ve written about before, young people aren’t great at detecting misinformation. One study by Stanford last year found that “96 percent of students didn’t think about how a relationship between a climate change website and a fossil fuel company could impact the website’s credibility.”

YouTube has tried to reduce climate misinformation in the past by adding information boxes under its videos, but Avaaz notes that those boxes, when they do appear, often link to Wikipedia articles about general terms related to climate change and don’t indicate that the videos contain misinformation.

Judd Legum, who writes the newsletter Popular Information, spoke to Google (YouTube’s parent company) about the report’s findings:

In a statement to Popular Information, Google said its “recommendations systems are not designed to filter or demote videos or channels based on specific perspectives.” The company added that it has “significantly invested in reducing recommendations of borderline content and harmful misinformation, and raising up authoritative voices on YouTube.”

Google said that some climate misinformation videos are considered “harmful misinformation,” but some of the videos flagged by Avaaz are not. For example, Google told Popular Information it considers climate misinformation that is clipped from Fox News to be part of a legitimate public discourse on a political and scientific issue.

Avaaz’s four recommendations for YouTube include:

  • “Detoxing its algorithm” that freely recommends climate misinformation videos — removing identified climate misinformation videos from the recommendation algorithms and stopping their publishers’ ability to monetize
  • Including misinformation in its monetization policies and allowing advertisers to opt out of running ads on climate misinformation videos
  • Working with fact-checkers to issue corrections on misinformation videos, though Avaaz notes it doesn’t recommend deleting videos as that would conflict with freedom of speech
  • Releasing data on the number of views on misinformation content and how many views were driven by the algorithm’s recommendations

Read the full report here.

WhatsApp’s message forwarding limits do work (somewhat) to stop the spread of misinformation (https://www.niemanlab.org/2019/09/whatsapps-message-forwarding-limits-do-work-somewhat-to-stop-the-spread-of-misinformation/, Fri, 27 Sep 2019)

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“The number of countries with political disinformation campaigns more than doubled to 70 in the last two years.” The University of Oxford’s Computational Propaganda Research Project released a report about “organized social media manipulation campaigns” around the world. Here’s The New York Times’ writeup too.

One of the researchers’ main findings is that Facebook “remains the dominant platform for cyber troop activity,” though “since 2018, we have collected evidence of more cyber troop activity on image- and video-sharing platforms such as Instagram and YouTube. We have also collected evidence of cyber troops running campaigns on WhatsApp.”

WhatsApp’s message forwarding limits work somewhat, but don’t block misinformation completely. WhatsApp limits message forwarding in an attempt to prevent the spread of false information. As of this January, users worldwide were limited to forwarding to “five chats at once, which will help keep WhatsApp focused on private messaging with close contacts.”

So does the forwarding restriction work? Researchers from Brazil’s Federal University of Minas Gerais and from MIT used “an epidemiological model and real data gathered from WhatsApp in Brazil, India and Indonesia to assess the impact of limiting virality features in this kind of network.” They were only able to look at public group data, not at private conversations.

Here’s what they did:

Given a set of invitation links to public groups, we automatically join these groups and save all data coming from them. We selected groups from Brazil, India and Indonesia dedicated to political discussions. These groups have a large flow of content and are mostly operated by individuals affiliated with political parties, or local community leaders. We monitored the groups during the electoral campaign period and, for each message, we extracted the following information: (i) the country where the message was posted, (ii) name of the group the message was posted, (iii) user ID, (iv) timestamp and, when available, (v) the attached multimedia files (e.g. images, audio and videos).

As images usually flow unaltered across the network, they are easier to track than text messages. Thus, we choose to use the images posted on WhatsApp to analyze and understand how a single piece of content flows across the network.

In addition to tracking the spread of the images, the researchers also looked at the images’ “lifespans”:

While most of the images (80%) last no more than 2 days, there are images in Brazil and in India that continued to appear even after 2 months of the first appearance (10⁵ minutes). We can also see that the majority (60 percent) of the images are posted before 1000 minutes after their first appearance. Moreover, in Brazil and India, around 40 percent of the shares were done after a day of their first appearance and 20 percent after a week[…]

These results suggest that WhatsApp is a very dynamic network and most of its image content is ephemeral, i.e., the images usually appear and vanish quickly. The linear structure of chats make it difficult for an old content to be revisited, yet there are some that linger on the network longer, disseminating over weeks or even months.

And here’s what they found:

Our analysis shows that low limits imposed on message forwarding and broadcasting (e.g. up to five forwards) offer a delay in the message propagation of up to two orders of magnitude in comparison with the original limit of 256 used in the first version of WhatsApp. We note, however, that depending on the virality of the content, those limits are not effective in preventing a message to reach the entire network quickly. Misinformation campaigns headed by professional teams with an interest in affecting a political scenario might attempt to create very alarming fake content, that has a high potential to get viral. Thus, as a counter-measurement, WhatsApp could implement a quarantine approach to limit infected users to spread misinformation. This could be done by temporarily restricting the virality features of suspect users and content, especially during elections, preventing coordinated campaigns to flood the system with misinformation.
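
To see why a cap on forwarding slows things down, here’s a toy simulation — emphatically not the researchers’ epidemiological model and not based on their WhatsApp data, just an illustrative sketch in which every group that has received a message can forward it to at most `cap` random groups per step. Lower caps stretch the cascade out over more steps, and when the content isn’t very viral, they can stop it from saturating the network at all.

```python
import random

def spread(total_groups=10_000, cap=5, p_forward=0.9, max_steps=60):
    """One simulated cascade: returns (steps taken, groups reached)."""
    infected, frontier, step = {0}, [0], 0
    while frontier and len(infected) < total_groups and step < max_steps:
        step += 1
        new_frontier = []
        for _ in frontier:
            for _ in range(cap):                 # the forwarding limit per group per step
                if random.random() < p_forward:  # chance a forward actually goes out
                    target = random.randrange(total_groups)
                    if target not in infected:
                        infected.add(target)
                        new_frontier.append(target)
        frontier = new_frontier
    return step, len(infected)

random.seed(1)
for cap in (256, 20, 5, 1):  # 256 was WhatsApp's original limit; 5 is the current one
    steps, reached = spread(cap=cap)
    print(f"cap={cap:>3}: {reached:>6,} groups reached in {steps} steps")
```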

Should politicians be allowed to break platforms’ content rules? (If so, which politicians?) The politicians will not be fact-checked: Facebook’s Nick Clegg said this week that “Facebook will continue to exempt politicians from third-party fact-checking and allow them to post content that would otherwise be against community guidelines for normal users,” per BuzzFeed. Clegg — a longtime politician himself — provided more detail in a Facebook post:

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

YouTube CEO Susan Wojcicki said something similar: “When you have a political officer that is making information that is really important for their constituents to see, or for other global leaders to see, that is content that we would leave up because we think it’s important for other people to see.”

A YouTube spokesperson, however, told The Verge that Wojcicki’s remarks were “misinterpreted”:

The company will remove content that violates guidelines regardless of who said it. This includes politicians. But exceptions will be made if it has intrinsic educational, news, scientific, or artistic value. Or if there’s enough context about the situation, including commentary on speeches or debates, or analyses of current events, the rep added.

Some criticized Facebook, in particular, for letting politicians get away with worse behavior than anyone else. Others, notably former Facebook chief security officer Alex Stamos, argued that the policy is reasonable because it isn’t Facebook’s place to “censor the speech of a candidate in a democratic election.”

In The Washington Post, Abby Ohlheiser looked at how platforms are grappling with questions of “newsworthiness,” wondering, for instance:

“Newsworthiness,” as a concept, is inherently subjective and vague. It is newsworthy when the president tweets something; what about when he retweets something? Multiple times in recent months, Twitter has taken action against accounts that have been retweeted or quote-tweeted by @realDonaldTrump. When the president retweeted a conspiracy-theory-laden account claiming that “Democrats are the true enemies of America,” the account itself was suspended, causing the tweet to disappear from Trump’s timeline. At this point, it is not clear what makes Trump’s tweets, but not those he amplifies to his millions of followers, newsworthy.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

If Facebook goes private, where will the misinformation go? (https://www.niemanlab.org/2019/03/if-facebook-goes-private-where-will-the-misinformation-go/, Fri, 08 Mar 2019)

Under pressure, Facebook will block anti-vax content. In a blog post Thursday, Facebook outlined how it will — after weeks of public pressure — curb misinformation related to vaccines.

— We will reduce the ranking of groups and Pages that spread misinformation about vaccinations in News Feed and Search. These groups and Pages will not be included in recommendations or in predictions when you type into Search.

— When we find ads that include misinformation about vaccinations, we will reject them. We also removed related targeting options, like “vaccine controversies.” For ad accounts that continue to violate our policies, we may take further action, such as disabling the ad account.

— We won’t show or recommend content that contains misinformation about vaccinations on Instagram Explore or hashtag pages.

— We are exploring ways to share educational information about vaccines when people come across misinformation on this topic.

Also, YouTube will be showing users fact-checks (which it’s calling “information panels”) on topics that are “prone to misinformation,” BuzzFeed’s Pranav Dixit reported, though the feature is only available to some users in India right now and YouTube hasn’t said when it will expand it globally. And it’s unclear who precisely the fact-checkers are and whether they are being paid.

“Newspaper clippings and television news screen grabs (real or fake) were extensively shared.” The general election India will hold this year is being described as its first WhatsApp election. Since 2014, when the last general election was held, WhatsApp usage has skyrocketed in the world’s largest democracy: As of 2017, it had 200 million monthly active users in India, a figure that has certainly only grown since then (the company hasn’t released an updated number).

Fake news shared on WhatsApp has led to mob violence and murders in India. When the BBC did an in-depth analysis of a group of Indian WhatsApp users in 2018, it found that the majority of messages shared within their private networks could be categorized either as “scares and scams” or “national myths.” The most common way that information is shared, the researchers found, was via images — “visual information, sometimes layered with a minimum amount of text.”

This week, The Hindustan Times took a look at the messages shared in more than 2,000 public, politics-focused Indian WhatsApp groups during the 2018 state elections. Here’s reporter Samarth Bansal:

Doctored screenshots and news clippings are used to make the content seem more reputable:

Seven of the ten most shared misleading images in the pro-BJP WhatsApp groups were media clippings. The most shared image was a screengrab of a primetime segment of Times Now, an English TV news channel, claiming that the Congress party manifesto in Telangana was Muslim-centric. Seven “Muslim only” schemes were included in the manifesto, the image claimed, including a scholarship for Muslim students and free electricity to Mosques. Except that the information was misleading. Alt News, a left-leaning fact-checking news website, later debunked how the news channel had misreported the story, by selectively picking parts of the manifesto to create a false narrative.

This message repeatedly appeared in various forms — eight of the top ten misleading images in the BJP groups were only about the manifesto — including screen grabs from CNBC-Awaaz, another news channel, and standalone graphics.

The example illustrates a key point: “fake news” as commonly understood has various shades. Unlike the morphed ABP news screenshots (second most shared) that propagated outright lies, the Telangana manifesto story is based on partially-true information that was later found to be misleading. The intent in the latter case is not clear and often difficult to establish.

Why are there so many media clippings? One possible explanation for this phenomenon is that WhatsApp-ers leverage mainstream media artefacts to compensate for the declining credibility of WhatsApp content.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

How local TV news stations are playing a major (and enthusiastic) role in spreading the Momo hoax (https://www.niemanlab.org/2019/02/how-local-tv-news-stations-are-playing-a-major-and-enthusiastic-role-in-spreading-the-momo-hoax/, Thu, 28 Feb 2019)

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Here are two truths and a lie. Which one is the lie?

1. Pedophiles are taking advantage of YouTube’s algorithm to spread child pornography.

2. Children and teens are being encouraged to commit suicide via footage sliced into YouTube videos.

3. Children and teens are being encouraged to commit suicide via the viral “Momo challenge,” spliced into YouTube videos and on WhatsApp.

Numbers 1 and 2 are true. It has been confirmed that pedophiles are taking advantage of YouTube’s algorithm to spread child pornography, and that some YouTube Kids videos had suicide tips spliced into them (in that case, there are corroborating screenshots and videos).

But #3, the Momo challenge? It’s fake. It’s a hoax. Not only that, it’s an old (in internet years) hoax. Per The Atlantic’s Taylor Lorenz:

The “Momo challenge” is a recurring viral hoax that has been perpetuated by local news stations and scared parents around the world. This entire cycle of shock, terror, and outrage about Momo previously took place less than a year ago: Last summer, local news outlets across the country reported that the Momo challenge was spreading among teens via WhatsApp. Previously, rumors about the challenge spread throughout Latin America and Spanish-speaking countries.

It’s not really surprising that parents are falling for it. Why wouldn’t they, considering how many horror stories we read about Facebook and YouTube that are A) true and B) often actually worse than we imagined? Even more confusingly, a couple of videos with the “Momo” figure spliced in actually were on YouTube at some point.

But the main reason parents believe the hoax is probably that local media are reporting it as fact. It is being not just spread but seemingly embraced by local news sources — especially local TV news stations, where it fits in well with their standard “Will this thing kill your child? Watch Action 4 at 11 to find out” framing. A search of this Twitter list shows that local TV stations have tweeted about Momo 211 times since yesterday. And then there are stations’ Facebook pages (which, our research shows, often publish stories that will elicit engagement rather than actual local news): A Momo challenge segment by KUTV Salt Lake City has been viewed more than 22 million times and received more than 61,000 comments.

On Wednesday, YouTube issued a statement saying that it had seen “no recent evidence of videos promoting the Momo Challenge on YouTube.” Some of the outlets incorporated that statement into their stories (without changing the headline or much of the text). Others ignored the statement completely. Some of the local TV news stations’ stories are littered with “reportedlys,” but in other cases, the Momo challenge is presented as fact. Perhaps most maddeningly, some of the stories present the fact that the story is a hoax as just “one view” or “one side.”

In many of the stories that I’ve seen, the sources are mothers who’ve seen rumors online; young children prompted either by the reporters of the stories or by their parents; and — yikes! — local fire departments, police stations, and schools.

The Momo challenge is a fascinating example of how a fake story spreads in real time with the assistance of the U.S. mainstream media (no Russian trolls required). I set out to name and shame some of the many local TV news stations that are spreading this story. There are many more examples out there — if you want to share one, DM me.1

KBJR 6, Wisconsin

The published story asks, “is it real or is it a hoax?” but continues as if it is real:

Although there’s no proof the Momo Challenge is real, it is frightening for parents.

“Sometimes he’s left alone with his iPad when I’m cooking or in the car,” said Eau Claire mom April Curry.

Curry said she’s worried her toddler could end up seeing the Momo Challenge on the internet.

“I saw it just kind of online, and then kind of raised red flags because our son does watch some YouTube,” Curry said. “It was one of those things that kind of alerted me. I wanted to be careful of what he watched.”

Curry said since she saw the Momo Challenge online, she’s added more Disney apps on her iPad so she can have more control over what he’s watching.

ABC Tampa Bay

The challenge is to meet Momo and to do that one must follow a series of instructions, which can include harming others or yourself.

“When my friends or my family ask kids about it they immediately were like, ‘how do you know about it and then ran to them and cried,’” Jessica said….

The Momo Challenge has been linked to the death of a girl in Argentina, but none here in the United States.

KUTV News, Utah

A terrifying video circulating the web is encouraging children to kill themselves. It’s called the ‘MoMo Challenge.’ A creepy, bugged-eyed woman offers children instructions on how to take their own lives. The horrifying video has been infiltrating popular children sites like YouTube Kids.

ABC4 News, Utah

The challenge is now starting to target children through YouTube videos such as Peppa Pig and Fortnite videos. The Momo image is being edited into the videos by hackers. That image is either giving kids messages directly or telling them to text a number through the Facebook-owned app, WhatsApp.

Then the number will send various instructions on challenges they need to complete or else their families will be hurt and they will be cursed. The various challenges range from self-harm and ultimately end with a directive to commit suicide.

Patch, Point Pleasant, New Jersey

Police also have issued warnings to parents on social media after the popular WhatsApp challenge resurfaced. A northern California mother says her family fell victim to the game, telling CBS Sacramento that her 12-year-old daughter with autism was encouraged to do dangerous things by the character. “Just another minute, she could’ve blown up my apartment, she could’ve hurt herself, other people, beyond scary,” Woods said.

Experts and charities, meanwhile, have warned that the “Momo Challenge” is nothing more than a “moral panic” spread by adults. The Samaritans and other charities say there is no evidence that the game has caused any harm, according to The Guardian.

According to a news release issued by Radnor police, the challenge is the same as you’ve heard: A scary doll figure with an ominous voice targets children’s websites such as YouTube Kids. The figure comes on the screen after the seemingly innocent video begins playing.

KAKE Wichita

The bulk of the story is an interview with a five-year-old.

When I asked him about Momo he panicked.

Terran: “I can’t tell you.”

KAKE Reporter Porsha Riley: “Why not?

Terran: “’Cause I can’t.”

KAKE Reporter Porsha Riley: “Does the video tell you not to tell?”

Terran: “Yea.”

KWCH Wichita

Daniel Timmermeyer said he researched “Momo” after his 12-year-old daughter came home talking about a scary face she heard about from a friend.

“From what I understand the videos start out playful then a few minutes in tell kids to go to the medicine cabinet and swallow as many pills as they can then turn on the oven and get inside. It also says that if you tell your parents it will come to your house and kill your family and then the person watching the video. This is one of the most disturbing things I have ever heard,” he said in an email to Eyewitness News.

Momo is a statue that was created by a Japanese artist. But the striking features of the young woman with long black hair, large bulging eyes, a wide smile and bird legs can be frightening to most who see it. It’s believed ‘Momo’ is run by hackers who are looking for personal information, but the danger lies in what “Momo” is asking people to do.

CBS Sacramento

Across the country, kids are reporting seeing Momo videos with the strange cartoon-like character telling kids to do dangerous things.

“The video that we believe she saw told her to turn on the stove while I was sleeping,” Woods said.

Whether hoax or not officials say it’s a teachable moment.

WFSB Wethersfield, CT

You may have heard of the “Momo Challenge.”

From WhatsApp to Facebook to even YouTube Kids, authorities say there could be suicide instructions targeting your children.

The disturbing content is unexpected and uncalled for.

It’s targeting innocent children who’re just trying to have a little fun on the internet.

ABC 6, Philadelphia

According to the Buenos Aires Times, the challenge is possibly linked to the death of a 12-year-old girl from Argentina who apparently took her own life. If confirmed by police, the girl will be the first victim of this disturbing challenge.

The challenge seems to be passing around primarily through WhatsApp and Facebook, and authorities aren’t sure of the perpetrators’ motive. It has also allegedly popped up through YouTube in Peppa Pig and Fortnite videos.

WRAL, Raleigh Durham

Officials believe the challenge has been around since 2018 but was recently embedded into certain programs viewed by children and sent on texting applications like WhatsApp.

While the challenge may be prevalent among teenagers, parents are worried that it is also affecting much younger kids.

The Spring Hope Police Department posted on Facebook Wednesday that Spring Hope Elementary School notified parents about the challenge.

WCMH, Columbus

Some are calling it an internet hoax, while others claim the challenge has been linked to teen deaths in other countries.

There are currently no confirmed deaths associated with the challenge in the U.S. — and authorities want to keep it that way.

KCRG, Iowa

The sheriff’s office said the challenge hides itself in other harmless games that kids play. The tasks end up telling kids to harm themselves or commit suicide.

“It appears to be more hype or hoax rather than reality,” the sheriff’s office said in a Facebook post.

Reports have also surfaced that the game is available on YouTube and YouTube Kids.

“If the children don’t do as they are instructed, Momo threatens to put a curse on them,” the sheriff’s office said.

“There are very few confirmed cases of self-harm or suicide that are connected to ‘Momo’, but we wanted to get this information out to parents in case this becomes popular in our area like the Tide Pod challenge,” officials said.

WKRG Mobile, AL

The Momo challenge is dominating news headlines, mainly because of the dangers it poses to young children.

WFAA Dallas

The “Momo Challenge” encourages kids to hurt themselves, and eventually, kill themselves.

KRON 4 San Francisco

The “Momo game” or “Momo challenge” gained international recognition last summer and was initially considered a hoax, quickly becoming a widespread meme. In August 2018, law enforcement investigated the influence of Momo on the death of a 12-year-old in Argentina, worrying parents globally to the potentially real dangers of the challenge.

WDIV Detroit

Dubbed the “Momo challenge,” a creepy face appears in videos on sites such as YouTube to tell children to do bad things in order to avoid being cursed. A video could seem harmless, then suddenly the face is onscreen.

The face of Momo originated from a sculpture by a Japanese artist. It was first used to communicate with people on WhatsApp and Facebook, telling them to do harmful things to themselves or others and provide photo proof.

Fox 2 Detroit

You might have heard of “The Momo Challenge.” A sinister video pops up like an ad on YouTube or the app, WhatsApp, with instructions.

“[The video] tells them things that if they ‘don’t do this challenge’ this is what will happen to them — and the Momo doll is scary,” said Dr. Sabrina Jackson. And it is popping up with increasing regularity.

Boston 25 News

According to Snopes, there have not been any verified cases of anyone actually being harmed because of the game. Snopes says the challenge is more hype and hoax than reality.

Tech expert Dave Hatter told WXIX the game is believed to have originated from Facebook, but has crossed over into WhatsApp, an online messaging app that has millions of users around the world.

“I think it’s a legitimate thing to be concerned about,” Hatter told WXIX. “As a parent, I find it disturbing. I have a 10-year-old, and I will definitely be having a conversation with him about this.”

WJTV Jackson

Many encounter this disturbing video as a “pop-up” or “suggested video” on youtube.

It alarms parents because it gives children directions for committing suicide.

It causes destruction to their houses by turning on household appliances which could harm them.

WFMZ 69 Eastern PA and Western NJ

Skeptics say there is little evidence of actual Momo messages, but there’s plenty of fear mongering spread by social media posts and sensationalized news reporting. They say no violent incident has been officially linked to the Momo messages.

Nonetheless, experts warn, cyberbullies may exploit the Momo hype to torment victims.

  1. Immersing myself in these stories just for a little while started sending me down the rabbit hole a bit. Just because YouTube says these videos don’t exist doesn’t mean they don’t. Plus, what did YouTube mean when it said it had no “recent” evidence of Momo challenge videos existing? Has YouTube ever been aware of the existence of these videos? And if it had been aware of them at some point, would it say so? I mean, we can’t trust the platforms. We should trust our local news sources instead. Right?
A hotline for racists, a gun control app for “a**holes”: The New York Times is taking its opinion video coverage in a new, YouTube direction (https://www.niemanlab.org/2019/02/a-hotline-for-racists-a-gun-control-app-for-aholes-the-new-york-times-is-taking-its-opinion-video-coverage-in-a-new-youtube-direction/, Wed, 27 Feb 2019)

“Hi. I’m Niecy Nash, actress, inventor, and advocate for not calling 911 on black people for no goddamn reason.”

“Introducing Aftershot, the only app that helps a bunch of assholes figure out when to talk about gun reform.”

These feel like lines pulled from SNL commercial parodies. But they’re actually from The New York Times — more specifically, from the Times’ year-old Opinion Video department, which is aiming to produce videos that appeal to a YouTube-native audience and feel very different from…well, let’s say “stodgy stereotypical newspaper video.” You might be surprised by the swearing. And the Facebook snark. And the variety of video styles. And that these are things your non-news-junkie friends might actually want to watch.

While I found the satirical pieces the most strikingly different from what I’d expect from the Times, they’re only a small part of what the Opinion Video department is putting out. It also includes: investigations into crazy sexism in Chinese tech job hiring (that one was done in partnership with Human Rights Watch as well as an independently hired Chinese social media expert to scrape the social web); rapper/activist Meek Mill on prisoners’ rights; and a major three-part series on Russian disinformation (which BBC World has licensed).

The new opinion videos are “enabling us to get voices we would otherwise never be able to get on our platform,” James Bennet, editorial page editor at The New York Times, said in a talk this week at the Shorenstein Center. The videos are also attracting new audiences at a moment when the Times is extremely focused on subscriber growth; for now, Opinion Video is almost entirely focused on YouTube, even though everything is also posted on The New York Times’ website.

“We’re not really taking on these battles over on-site promotion and homepage placement and all those things,” said Adam Ellick, the Times’ director and executive producer of opinion video. “Our priority is YouTube.”

I recently talked to Ellick and to Times senior video editor Taige Jensen about their first year making videos aimed at what Ellick describes as “a new generation of video viewers who have come to expect voice and attitude.” Our conversation, lightly edited and condensed for clarity, is below.

Laura Hazard Owen: You launched Opinion Video about a year ago. Why? And what space did you want it to fill at the Times?

Adam Ellick: If you go around and speak to other people in the video journalism space, and ask them who they think is making good “opinion video,” they’ll tell you that the rest of the industry just calls that “video.” It’s a new generation of video viewers who have come to expect voice and attitude.

Historically, at the Times, there were only two forms of opinion video: There was Op-Docs, a weekly, short digital documentary series that acquires and commissions films from outside filmmakers. And then, over the years, [op-ed columnist] Nick Kristof and I used to get on airplanes and go make short documentaries on human rights.

Outside of that, there was no opinion video. I proposed that we start this department because of what viewers have come to expect in this medium. It echoes the tenets and the mission of the opinions page, which is to be a platform for diverse and interesting voices. And because we’re in Opinion, we have the license to collaborate with a lot of outside video makers — foundations and NGOs, musicians, celebrities who have their own production houses, places like Forensic Architecture that are doing their own journalism, and comedy TV shows. Historically, there wouldn’t have been a way for the Times to collaborate with those places [in video]. My vision was to create a formal structure [to do that].

Our goal for the first year was just to experiment broadly and freely with a ton of different formats, from famous voices to do-it-yourself YouTubers in order to try to eventually narrow the field of what we make.

Owen: So what have you learned so far?

Ellick: One thing we learned was that a lot of these outside video makers are eager and anxious to collaborate with us. We’ve worked with Human Rights Watch; Fortify Rights, which is an NGO that supports Rohingya; Meek Mill; some YouTubers; the director of the documentary “City of Ghosts”; Forensic Architecture in the U.K.; and a TV comedy show in Australia. A really diverse range.

We’ve also learned that good opinion video journalism can lead to significant impact, which is obviously the overriding goal of our department. In early 2018, Taige produced a video about #MeToo in the church. It’s the story of a woman who called out her pastor for sexual assault. Our video gained a ton of traction in the local press in Nashville, where the pastor worked at a megachurch. Eleven days later, he resigned, and he quoted our video in his resignation letter. I think the victim had been interviewed for like 20 seconds by some newscasts, but it was her first full telling of her story.

In collaboration with Human Rights Watch, we made a video about gender discrimination in hiring inside China’s tech companies. Alibaba and Tencent — these are two of the largest companies in the world, according to market value.

Human Rights Watch was coming out with a 99-page, mostly text report about the egregious ways Chinese companies were signaling that only men could apply for certain jobs, using the promise of beautiful girls to recruit male candidates for jobs. We independently hired a social forensic researcher to scrape the Chinese social web and found more video examples. We launched our video the day the report came out. Within a few days, Tencent issued an apology and pledged that this wouldn’t happen again, and Alibaba vowed to conduct stricter reviews.

Our biggest piece of last year was Operation InfeKtion, on the history of Russian disinformation. It’s now translated into 10 languages in countries where the press is either banned or under threat; the full list is embedded in our YouTube player. Someone in Romania is using it as part of a disinformation literacy project in high schools. We sold it independently to BBC World and it aired in 200 countries.

We’re really focused on engagement, in terms of completion rates and comments and participation. For the disinformation series, the engagement on YouTube was astronomical — the average watch time for each of the three episodes was between 8 and 10 minutes, and 45 percent of the viewers were international.

Owen: I was struck by the satirical videos — some of this stuff wouldn’t feel out of place as an SNL skit. How are you thinking about tone and humor?

Ellick: Our general goal this [past] year was to focus primarily off-platform, specifically on YouTube, and to reach new users who ordinarily might not read the Times’ opinion section. If you go back about eight months on the Times’ YouTube channel, the top three most-viewed videos are all opinion videos. I think that has to do with the fact that the stuff we’re doing naturally has more shelf life and it’s not super newsy, though it is off the news. We’ve been experimenting with different voices and trying to take on really serious topics but doing them in an engaging way. One of those tools is satire. I think the Niecy Nash story is something worth talking you through, because it was pretty exceptional in a few ways.

Taige Jensen: There was this avalanche of stories last year of people getting the police called on them for no apparent reason [other than that they were black]. I thought that there should be a number to call — literally a number. Typically, when I think of a project, I try to imagine it as more of an artifact that is useful to viewers — more than just a video that you watch that adds a perspective, but something you can actually participate in.

It seemed like if we created an actual number — that had meaningful statistics and a perspective when you called, and gave something you could actually use in your own conversations — we could potentially help people second-guess and wonder whether they are participating in a racist culture of over-policing black people in the United States.

The commercial was sort of an afterthought to me. The number was the product. So that’s how that evolved. Satire and comedy sometimes get a bad rap for demeaning the subject matter, but I definitely disagree with that concept.

Ellick: About 250,000 people called that phone line. The only way they would have heard about it was through the video. I sort of jokingly referred to it as our first phoneline format — the video was just the promotional lever for the actual voice recording, which we thought of as the primary story form. We prompted people in the video and in text to call the number — it’s real. A ton of people were tweeting: “It’s real, you’ve got to call it.” And there was a lot of information on the phone line about these cases.

Some people left messages sharing their own experiences, and some were quite moving and tragic because they took place in an era before there were cell phones; these were instances [of racism] in the ’80s and ’90s that will never be documented. (Also, to be transparent, a lot of people were just breathing at the end of the voicemail, and we’ll never know who they were.) We received hundreds of emails from people sharing their own stories. Taige pitched this with a ton of creativity, but what attracted me at the highest level was that these cases pop up consistently, and we wanted to try to go above the news and create a sticky, evergreen place where you can keep track of all these egregious examples.

Under the video, we listed every case we could find and tried to link to it. I think it’s up to 38 or 39 cases and readers are pointing out more — the Waffle House guy in Arkansas, a lemonade guy in California. So we’ve been adding to the list and updating it.

Owen: You guys are putting a lot of focus on YouTube. How do you think about which platforms you’re going to focus on, versus bringing people to the Times site?

Ellick: I mean, we’re trying to walk and chew gum at the same time. We’re putting almost everything on the [Times] site, but we’re obsessing more with YouTube. Since we’re a very small team, we’re not really taking on these battles over on-site promotion and homepage placement and all those things. Our priority is YouTube, and the reason for that is engagement. We’re seeing tremendous engagement. In general, the tone and ethos of this young department is to reach new audiences, and I think the natural place to do that is on YouTube.

One of my favorite examples is our video about the Disney minimum wage dispute. We did it because we think it’s an important story about income inequality, but we didn’t expect it to be popular at all. It was a video op-ed about three Disney workers who are in a labor dispute with Disney. One sleeps in her car. One has been homeless. These are employees who have been working at Disney for decades, in some cases.

It has over a million views on YouTube, and five or six thousand comments. We got goosebumps reading the comments; other Disney employees were sharing their stories about how they left Disney because they were homeless and couldn’t pay their bills.

We never expected this to be [particularly] popular or engaging. So I don’t think that — I don’t think the Times website can host that sort of debate on opinion video. I think YouTube is a natural place where the product functions and the new audiences are conducive to this.

One of the things we struggle with is: How do you signal to your audiences, both new and old, that something is comedy or satire? Because when they see our brand, they’re probably not expecting that. The Washington Post has a series called Department of Satire, which is a very blatant and bold form of signaling. We obsess over the comment fields to see what’s landing with the audience and what’s not.

Jensen: Giving the right signals is a challenge. But I think that to be successful, you have to really believe and deliver a full argument — and not be worried about trying to bring them back to the platform, because online culture sort of resents that kind of attitude.

So when I produce a doc piece, I want to deliver to the YouTube audience, in that video, a clear idea — as funny or irreverent as we’re allowed to be. My mission is also very different from the company’s overall mission. I just want to make each piece the best product I can, more than getting people back to our branding and platform.

Owen: So when you say “as funny or irreverent as you’re allowed to be” — like, who’s allowing it? Who’s making the rules?

Jensen: It’s a tightrope act, to be honest. The appropriate voice is based a lot on attitudes around the subject. We have lawyers and PR people and everyone here to push back and try to keep us in some kind of lane. My thinking is, I go for what I want to make, and if they can stop me, then I’ve gone too far.

Ellick: This is an infant department. It’s a year old. We’re constantly learning. We’re experimenting with what’s in and out of bounds on a story-by-story level every day. We have killed pieces this year because we really liked the video but we just didn’t think it would land with our audience, even off-site.

If you watch these pieces carefully, you’ll notice that they’re packed with reported information. The format can be light and engaging, the tone can be satirical — not always but sometimes — but there’s reporting in them, and that is the ultimate buffer for an opinion department: injecting these pieces with information and wrapping comedy or voice or attitude around it.

There are no rules, and we navigate all this on a story-to-story level, but it’s probably worth saying that the broader Opinion department has a few red lines. One is inaccuracy: All opinions must be grounded in facts. Every text op-ed is fact-checked, and so are our videos. The other two are anything hateful — we want to be respectful of other perspectives and experiences — and no tolerance for anything that’s beholden to hidden interests, hidden influence, or a blatant conflict of interest.

Those are our more formal red lines. Everything else falls into tone and style. I turned down a piece from a YouTuber this week that I loved, but I thought the humor was uncomfortable and even though I liked it, it wasn’t worth the risk to try to explain afterward.

Owen: What’s an example of something you’ve turned down?

Ellick: Last year, we killed a very opinionated piece about the conflict in the Middle East. It wasn’t satire; it just had a strong tone and a strong voice, and I thought that the video was fun to watch, but it lacked nuance. [Nuance is] something text does well, and I think video can do nuance well, but sometimes it’s a bit more of a struggle. We could have helped the video by putting more information in, but if we had put in all that information it would have slowed it down and made it feel a lot more like homework. So we just decided not to run it.

Owen: Now you guys are heading into Year 2. What’s the plan going forward?

Ellick: My team includes Op-Docs and this new video unit; combined, we’ll have 10 people when our visual fellow starts in a few months. We’re hiring a senior producer right now. We have a couple of editors and an assistant editor and then Op-Docs is three slots, technically.

So we’re a very small team, and we have a more significant freelance budget in order to acquire and produce and work with all of the outside collaborators I mentioned earlier. My thinking in setting this up was: Let’s have fewer [job] slots and a little bit more money, since we need to create an identity and voice in the first year, and then we can narrow the lane in years two and three.

We’ve found a bit of a sweet spot in the medium-form space. When you’re a small team, you can’t always do reaction to breaking news; you need to lean more on evergreen stories. We’re always monitoring the news and we have a news meeting every morning, but we try not to make a video every day. Stuff that’s more enterprising and valuable to our audience takes more like a couple days to a couple weeks. Every now and then, though, we’ll go hard on a news story, if we have some great idea.

We’re trying to be a platform for voices that you wouldn’t ordinarily read in text at the Times; there are a bunch of videos of people who simply would never write an op-ed for the Times. One was “We Are Republican Teachers Striking in Arizona. It’s Time to Raise Taxes.” These were Republican school teachers in Arizona whose classrooms were running short on school supplies. We Skyped with them while those strikes were happening.

We also did a video, “How to Get 1.4 Million New Voters,” with three ordinary Floridians who’d been in jail and didn’t have the right to vote and were making the appeal that they should. One was a Latina social worker, one was a black minister, and one was a working-class conservative white guy who runs a family carpet business. [Florida’s law preventing felons from voting was overturned in the 2018 midterms.] And “Our Loved Ones Died. We Want Action on Guns,” made with several Americans whose family members were killed in shootings.

We can complement what the text op-ed desk is doing with big names by making op-ed videos with more ordinary people who are personally affected or influenced by the news. Those have really resonated well. They’re so human and people are speaking from such a personal perspective.

There are a couple formats we’re gonna push forward. One is the video op-ed. Some examples of this are: “The Rape Jokes We Still Laugh At,” “I Escaped North Korea. Here’s My Message for President Trump,” and “I Was Assaulted. He Was Applauded.” These are evergreen news stories: #MeToo, prison reform, North Korea and the U.S. It’s giving people a platform to share their human story, how they were impacted by the news, and really putting in some strong visuals. All three of those were very inventive, visually; we want to scale those up quite a bit.

We also want to scale up what we’re calling “argued essays” or “argued video essays.” My favorite one is one that Taige made, called “Trump Is Making America Great Again.”

Jensen: The subhed is: “Just not the way he thinks he is.” The basic premise is to take a surprising argument and to apply a visual style and personal kind of tone. We haven’t produced a lot of those, but we like the format, and we think we can fit in more and really flesh out a fun voice that’s very current. We use a lot of GIFs and fast cutting; it’s got a sort of frenetic pace and also builds a case over time, until hopefully, by the end you’re convinced by the argument — or you’re not, and then you leave a comment.

Ellick: “Trump Is Making America Great Again” resonated with the audience, and we were shocked that it did really well on site as well. The style that Taige just described — I call it “lo-fi hi-fi.” It looks like a kid made it, but if you actually know video and can study the pacing and the rhythm, you know it was a big lift. The editing is quite elegant, even though it uses GIFs and Inspector Gadget clips and things you wouldn’t have noticed historically on the Times site.

While YouTube and Facebook fumble, Pinterest is reducing health misinformation in ways that actually make sense https://www.niemanlab.org/2019/02/while-youtube-and-facebook-fumble-pinterest-is-reducing-health-misinformation-in-ways-that-actually-make-sense/ https://www.niemanlab.org/2019/02/while-youtube-and-facebook-fumble-pinterest-is-reducing-health-misinformation-in-ways-that-actually-make-sense/#respond Fri, 22 Feb 2019 14:39:01 +0000 http://www.niemanlab.org/?p=168783

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Freedom of speech versus freedom of reach.” Pinterest got a spate of positive publicity Thursday as a couple of different outlets reported on its policy (“which the company hasn’t previously publicly discussed but which went into effect late last year,” per The Wall Street Journal) of refusing to surface certain “polluted” terms like “vaccine” and “suicide” in search results. From The Guardian:

“We are doing our best to remove bad content, but we know that there is bad content that we haven’t gotten to yet,” explained Ifeoma Ozoma, a public policy and social impact manager at Pinterest. “We don’t want to surface that with search terms like ‘cancer cure’ or ‘suicide.’ We’re hoping that we can move from breaking the site to surfacing only good content. Until then, this is preferable.”

Pinterest also includes health misinformation images in its “hash bank,” preventing users from re-pinning anti-vaxx memes that have already been reported and taken down. (Hashing applies a unique digital identifier to images and videos; it has been more widely used to prevent the spread of child abuse images and terrorist content.)
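
Pinterest hasn’t published implementation details, but the basic mechanics of a hash bank are easy to sketch. Here’s a minimal illustration in Python — it uses an exact cryptographic hash for simplicity, whereas production systems generally rely on perceptual hashing so that re-encoded or lightly edited copies of the same image still match:

```python
import hashlib

# The simplest possible version of a "hash bank": fingerprint every image
# reviewers have already removed, and refuse re-pins whose fingerprint is
# already in the bank. Real systems (including, presumably, Pinterest's)
# use perceptual hashes rather than exact ones.

hash_bank = set()  # fingerprints of images that reviewers have taken down

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def record_removal(image_bytes: bytes) -> None:
    """Called when reviewers remove an image for violating the guidelines."""
    hash_bank.add(fingerprint(image_bytes))

def allow_pin(image_bytes: bytes) -> bool:
    """Block any upload that matches a previously removed image."""
    return fingerprint(image_bytes) not in hash_bank

# Once a reported anti-vaxx meme is removed...
meme = b"<bytes of the removed image>"
record_removal(meme)
# ...an identical copy can't be pinned again.
print(allow_pin(meme))  # False
```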

And the company has banned all pins from certain websites.

“If there’s a website that is dedicated in whole to spreading health misinformation, we don’t want that on our platform, so we can block on the URL level,” Ozoma said.

Users simply cannot “pin” a link to StopMandatoryVaccinations.com or the “alternative health” sites Mercola.com, HealthNutNews.com or GreenMedInfo.com; if they try, they receive an error message stating: “Invalid parameters.”

The Journal describes Pinterest’s rigorous process of determining whether content violates its health misinformation guidelines, and oh boy does it sound unbelievably different from anything that Facebook or Twitter does.

Pinterest trains and uses human reviewers to make determinations about whether or not shared images on the site, called pins, violate its health-misinformation guidelines. The reviewers rely on information from the World Health Organization, the federal Centers for Disease Control and Prevention and the American Academy of Pediatrics to judge the veracity of content, the company said. Training documents for reviewers and enforcement guidelines are updated about every six months, according to the company. The process is time-intensive and expensive, in part, because the artificial-intelligence technology required to automate the process doesn’t yet exist, Ms. Ozoma said.

“The only folks who lose in this decision are ones who, if they had their way, would trigger a global health crisis,” Casey Newton wrote in his newsletter The Interface. “Here’s to Ozoma and her team for standing up to them.”

Please note: Pinterest is definitely not perfect, as this 2017 BuzzFeed piece makes clear.

In probably unrelated news, Pinterest filed paperwork to go public via an IPO, the Journal reported Thursday afternoon. It’s currently valued at $12 billion. That’s more than 4× the valuation of America’s publicly traded local newspaper companies (Gannett, McClatchy, Tribune Publishing, Lee, GateHouse, and Belo) combined.

Meanwhile, while Pinterest is being all responsible… BuzzFeed News’ Caroline O’Donovan (a former Nieman Lab staffer) and Logan McDonald investigated how YouTube’s algorithm leads users who perform vaccine-related searches down rabbit holes of anti-vaxx videos.

For example, last week, a YouTube search for “immunization” in a session unconnected to any personalized data or watch history produced an initial top search result for a video from Rehealthify that says vaccines help protect children from certain diseases. But YouTube’s first Up Next recommendation following that video was an anti-vaccination video called “Mom Researches Vaccines, Discovers Vaccination Horrors and Goes Vaccine Free” from Larry Cook’s channel. He is the owner of the popular anti-vaccination website StopMandatoryVaccination.com.

In BuzzFeed’s further experiments, even clicking on a pro-vaccination video the first time led you to an anti-vaccination video the next time:

In 16 searches for terms including “should i vaccinate my kids” and “are vaccines safe,” whether the top search result we clicked on was from a professional medical source (like Johns Hopkins, the Mayo Clinic, or Riley Hospital for Children) or an anti-vaccination video like “Mom Gives Compelling Reasons To Avoid Vaccination and Vaccines,” the follow-up recommendations were for anti-vaccination content 100% of the time. In almost every one of these 16 searches, the first Up Next recommendation after the initial video was either the anti-vaccination video featuring Shanna Cartmell (currently at 201,000 views) or “These Vaccines Are Not Needed and Potentially Dangerous!” from iHealthTube (106,767 views). These were typically followed by a video of anti-vaccination activist Dr. Suzanne Humphries testifying in West Virginia (currently 127,324 views).

That’s partly because of “data voids,” a concept discussed in the Guardian piece and expounded by Michael Golebiewski and danah boyd. “In the case of vaccines, the fact that scientists and doctors are not producing a steady stream of new digital content about settled science has left a void for conspiracy theorists and fraudsters to fill with fear-mongering propaganda and misinformation,” The Guardian’s Julia Carrie Wong writes. There just aren’t very many pro-vaccine viral videos.

The anti-vax stuff is not actually YouTube’s biggest problem this week. On Sunday, Matt Watson, a former YouTube creator, posted a video detailing how

Youtube’s recommended algorithm is facilitating pedophiles’ ability to connect with each-other, trade contact info, and link to actual CP in the comments. I can consistently get access to it from vanilla, never-before-used Youtube accounts via innocuous videos in less than ten minutes, in sometimes less than five clicks.. Additionally, I have video evidence that these videos are being monetized by Youtube, brands like McDonald’s, Lysol, Disney, Reese’s, and more.

The Verge’s Julia Alexander reported on how YouTube has repeatedly failed to stop child predation on the platform:

While individual videos are removed, the problematic users are rarely banned, leaving them free to upload more videos in the future. When Watson reported his own links to child pornography, YouTube removed the videos, but the accounts that posted the videos usually remained active. YouTube did not respond to The Verge’s question about how the trust and safety team determined which accounts were allowed to remain active and which weren’t.

Watson’s investigation led big advertisers like Nestle, Disney, and AT&T to pull ads from YouTube this week. AdWeek has a copy of a memo that YouTube sent to major brands on Wednesday outlining actions it’s taken already and what it plans to do in the future.

How Google (YouTube’s owner) fights fake news. All of the above can be read alongside “How Google Fights Disinformation,” a white paper the company released at the Munich Security Conference February 16 that outlines some of the ways in which Google is attempting to reduce false information across its products. The paper is largely boring and vague but does make a nod to data voids:

A known strategy of propagators of disinformation is to publish a lot of content targeted on “data voids,” a term popularized by the U.S. based think tank Data and Society to describe Search queries where little high-quality content exists on the web for Google to display due to the fact that few trustworthy organizations cover them. This often applies, for instance, to niche conspiracy theories, which most serious newsrooms or academic organizations won’t make the effort to debunk. As a result, when users enter Search terms that specifically refer to these theories, ranking algorithms can only elevate links to the content that is actually available on the open web — potentially including disinformation.

We are actively exploring ways to address this issue, and others, and welcome the thoughts and feedback of researchers, policymakers, civil society, and journalists around the world.

“There is little information about how to help clinicians respond to patients’ false beliefs or misperceptions.” Researchers from the National Institutes of Health’s National Cancer Institute wrote in JAMA in December about coordinating a response to health-related misinformation. They have far more questions than answers — among the knowledge gaps:

What’s the best way for doctors to respond to “patients’ false beliefs or misperceptions”? What is the right timing for “public health communicators” to intervene “when a health topic becomes misdirected by discourse characterized by falsehoods that are inconsistent with evidence-based medicine”? What are the most important ways that health misinformation is shared? They write:

Research is needed that informs the development of misinformation-related policies for health care organizations. These organizations should be prepared to use their social media presence to disseminate evidence-based information, counter misinformation, and build trust with the communities they serve.

Maybe some of those data voids can get filled.

Illustration by Craig Sneddon used under a Creative Commons license.

Vox.com tries a membership program, with a twist: It’s focused on video and entirely on YouTube https://www.niemanlab.org/2019/02/vox-com-tries-a-membership-program-with-a-twist-its-focused-on-video-and-entirely-on-youtube/ https://www.niemanlab.org/2019/02/vox-com-tries-a-membership-program-with-a-twist-its-focused-on-video-and-entirely-on-youtube/#respond Wed, 06 Feb 2019 16:11:12 +0000 http://www.niemanlab.org/?p=168172 Would you pay an extra $5 a month to attend a quarterly meeting over Google Hangouts? Not “$5 a month to skip a meeting.” “$5 to have the privilege of attending a meeting.”

Well, it turns out, plenty of Vox.com video lovers would. When you sign up for a Vox Video Lab membership, you can choose between two different price levels. For $4.99 per month, you get the “DVD extras” of Vox videos: behind-the-scenes content, videos explaining Vox’s process, recommendations for non-Vox videos, and a monthly live Q&A with a producer. For $9.99 a month, you get all that plus…access to a quarterly Google Hangout where you can give Vox more advice about its membership program.

Last week, Vox Video Lab held its first such meeting. It included Vox fans from nine different countries. “I was floored,” said Blair Hickman, Vox.com’s director of audience. The time that worked best for a global digital audience, it turned out, was 5 p.m. eastern. “One guy was like, I’m kind of tired. I’ve had a long day at work in Switzerland,” Hickman recalled. (In Switzerland, it was 11 p.m.) Still, they showed up. “They were asking questions like, ‘Can we have Slack rooms so we can better prepare for these meetings? How can we coordinate in helping you reach your goals outside of these quarterly meetings?'”

These sweet meeting lovers are one sign that, roughly six weeks in, Vox.com’s video membership program might be working. (Of course, Vox would not tell me how many paying members it has, in either the $4.99 or $9.99 tier.) Vox Video Lab launched right before Christmas, with “YouTube innovation funding” from the Google News Initiative. (If “taking money from Google to help us get money from our audience on YouTube” doesn’t sum up the news industry’s conflicted relationship with big platforms, well, I’m not sure what does.) It’s the first time that any Vox Media property has solicited financial support from its audience, and is obviously different from other membership programs that have launched in that it is focused on video and YouTube rather than text. (YouTube first introduced channel memberships broadly last summer; the company takes 30 percent of subscription fees after local sales tax is deducted, so Vox gets 70 percent of the revenue from each membership.)
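
To make that split concrete, here’s a quick back-of-the-envelope sketch; the 8 percent sales-tax rate is purely an assumption for illustration, since the actual deduction depends on where each member lives:

```python
# Rough arithmetic for the YouTube channel membership split: tax comes out
# of the listed price first, then YouTube keeps 30% of what's left.
# The 8% tax rate below is a made-up example, not a real figure.
def publisher_cut(price: float, sales_tax_rate: float = 0.08) -> float:
    after_tax = price * (1 - sales_tax_rate)  # tax deducted first...
    return round(after_tax * 0.70, 2)         # ...then the 70/30 split

print(publisher_cut(4.99))  # 3.21 per basic member per month, under these assumptions
print(publisher_cut(9.99))  # 6.43 per premium member per month
```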

Before launching the Video Lab, Vox.com surveyed readers on what they wanted from a membership program. They found two buckets of people willing to pay: One group was the “Vox superfans,” the other was a group that loves Vox’s video style and is interested in making videos themselves. They also heard a consistent message, said Vox.com head of video Joe Posner: “‘We just want to support what you do, we don’t really care what we get.’ It was cool to know that a major motivator, at least as far as we can tell from this survey, is that people do just want to support us.” It also meant that the logical step for the premium $9.99 tier was simply more access to Vox video creators (though Vox does also plan to roll out more perks for the $9.99 subscribers over the next few months).

“I’m sure everybody in the industry would [agree] that YouTube can be a messy, nasty place,” Hickman said. “The membership is just this delightful two-way conversation, with people who are really there to support Vox.” Vox.com’s wording around the membership program has stressed the financial support that it needs; for instance:

The foundation of all we’ve done is our free, ad-supported short-form video program. But few of the highest-quality free videos are supported by advertising alone. We all adore the free segments of Last Week Tonight on YouTube — and they probably will stay free as long as people keep paying for HBO. Dozens of our favorite independent creators give their fans the chance to support their work through Patreon. So, today, we’re asking fans of Vox video to help us continue to expand our ambitions by joining the Vox Video Lab on YouTube.

YouTube’s membership program itself is in early phases, and Vox has been in touch with YouTube reps to talk about ways of improving it. For instance, YouTube provides Vox with very little information on who its paying members are. It “reflects our audience in general on YouTube, which skews male and young,” Posner said, but the membership analytics are “much less clear than the general analytics.” So Vox plans to run a member survey in the next week or two. For now, it’s pretty much only communicating with members through the YouTube channel, though it has a handful of email addresses of the people who signed up for that first advisory board meeting.

On the editorial front, the production of content for the Video Lab has fit in fairly seamlessly, said Mona Lalwani, executive producer for Vox.com video. The team already generates plenty of extra content in making its main videos, so sharing the extras hasn’t been too much of a lift. But Vox shifted two employees to work full-time on Video Lab member growth and retention. “There is literally nothing harder than launching something new. This is new to Vox, to YouTube, and to the video engagement team,” Hickman said. “That would be my biggest piece of advice [to other companies trying this]: [Audience] is at minimum a one-person full-time job.”

Do people fall for fake news because they’re partisan or because they’re lazy? Researchers are divided https://www.niemanlab.org/2019/01/do-people-fall-for-fake-news-because-theyre-partisan-or-because-theyre-lazy-researchers-are-divided/ https://www.niemanlab.org/2019/01/do-people-fall-for-fake-news-because-theyre-partisan-or-because-theyre-lazy-researchers-are-divided/#respond Fri, 25 Jan 2019 14:37:17 +0000 http://www.niemanlab.org/?p=167875 “People who shared fake news were more likely to be older and more conservative.” Echoing other recent studies, researchers found that people who shared fake news on Twitter between August and December 2016 were likely to be older and more conservative, and were concentrated into a “seedy little neighborhood” on Twitter, according to Northeastern’s David Lazer — “Only 1 percent of individuals accounted for 80 percent of fake news source exposures, and 0.1 percent accounted for nearly 80 percent of fake news sources shared.”

The authors suggest a few ideas for reducing the spread of fake news — for example, limiting the number of political URLs that any one user can share in a day:

Platforms could algorithmically demote content from frequent posters or prioritize users who have not posted that day. For illustrative purposes, a simulation of capping political URLs at 20 per day resulted in a reduction of 32 percent of content from fake news sources while affecting only 1 percent of content posted by nonsupersharers.
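
The study ran its simulation on the actual Twitter panel, which isn’t reproduced here; purely to illustrate the mechanics of such a cap, here’s a toy version on synthetic numbers (all of them invented, not drawn from the paper):

```python
import random

random.seed(0)

CAP = 20  # hypothetical daily cap on political URL posts per account

# Synthetic population: a tiny group of "supersharers" posts huge volumes,
# mostly from fake news sources, while ordinary accounts post far less and
# rarely link to such sources. The numbers are illustrative only.
supersharers = [{"urls": random.randint(100, 300), "fake_rate": 0.6}
                for _ in range(20)]
ordinary = [{"urls": random.randint(0, 25), "fake_rate": 0.05}
            for _ in range(10_000)]

def apply_cap(users):
    """Totals of (fake before, fake after, all before, all after) under the cap."""
    fake_b = fake_a = all_b = all_a = 0.0
    for u in users:
        kept = min(u["urls"], CAP)
        all_b += u["urls"]
        all_a += kept
        fake_b += u["urls"] * u["fake_rate"]
        fake_a += kept * u["fake_rate"]
    return fake_b, fake_a, all_b, all_a

sf_b, sf_a, _, _ = apply_cap(supersharers)
of_b, of_a, ot_b, ot_a = apply_cap(ordinary)

fake_reduction = 1 - (sf_a + of_a) / (sf_b + of_b)
ordinary_affected = 1 - ot_a / ot_b

print(f"Content from fake news sources cut by {fake_reduction:.0%}")
print(f"Non-supersharers' content affected:   {ordinary_affected:.1%}")
```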

By the way, the team also found that “incongruent sources,” i.e. sources that didn’t fit with a person’s political beliefs, “were shared at significantly lower rates than congruent sources (P < 0.01), with two exceptions. First, conservatives shared congruent and incongruent nonfake sources at similar rates. Second, we lacked the statistical power to assess sharing rates of conservatives exposed to liberal fake news, owing to the rarity of these events." Going back to the op-ed at the start of this column — do people fall for fake news because they're partisan or because they're lazy? — this is evidence on the "they're partisan" side of the ledger. The authors write:

These findings highlight congruency as the dominant factor in sharing decisions for political news. This is consistent with an extensive body of work showing that individuals evaluate belief-incongruent information more critically than belief-congruent information.

“I was extremely naive. I believed that people were simply misinformed.” The Guardian has a very sad article on what real life is like for people who were the victims of online conspiracy theorists: parents of children who were murdered at Sandy Hook; a Massachusetts resident with autism who was falsely pegged as the Parkland shooter; Brianna Wu, who got caught up in Gamergate. (For three of the five people profiled here, Infowars was heavily involved in the harassment.) Here’s Lenny Pozner, whose six-year-old son Noah was killed by the Sandy Hook shooter, on how he initially tried to confront the people who claimed that Sandy Hook was a hoax:

“I was extremely naive. I believed that people were simply misinformed and that if I released proof that my child had existed, thrived, loved and was loved, and was ultimately murdered, they would understand our grief, stop harassing us, and more importantly, stop defacing photos of Noah and defaming him online.”

Instead, he watched his deceased son buried a second time, under hundreds of pages of hateful web content. “I don’t think there’s any one word that fits the horror of it,” Pozner says. “It’s a phenomenon of the age which we’re in, modern day witch-hunts. It’s a form of mass delusion.”

Wu faces harassment to this day:

A woman turned up at her alma mater, the University of Mississippi, impersonating her in an attempt to acquire her college records. Someone else surreptitiously took photos of her as she went about her daily business. Wu was unaware of it until she received anonymous texts with pictures of her in coffee shops, restaurants, at the movies.

An accurate floor plan of her house was assembled and published online, along with her address and pictures of her car and license plate. And then there were the death threats — up to 300 by her estimate. One message on Twitter threatened to cut off her husband’s “tiny Asian penis.”

Pozner and his wife have had to move eight times in five years because people keep tracking down and publishing his address; he “has deliveries sent to a separate address and has rented multiple postal boxes as decoys.” Wu and her husband had to evacuate their house and stay with friends and in hotels. Instead of hunkering down, though, Pozner and Wu have fought back. Wu ran for a House seat in Massachusetts and lost in 2018 but has vowed to run again in 2020. Pozner, along with other Sandy Hook families, is fighting Alex Jones in court, with the families achieving recent victories in two separate defamation lawsuits, though there’s still a long way to go.

Semi-related: This article from MEL Magazine (which is the men’s digital magazine run by internet razor company Dollar Shave Club) is about what it’s like for women who watch their boyfriends become radicalized online. One commonality is that these men really suck to be around!

One story:

“Our relationship started normally: We went for walks, saw films, went out for dinner. Most of the ‘arguments’ we’d have would be where to go out on a date. When I moved in with him after graduation, the arguments were about who would do the washing up or the cooking that night,” she says. By the end of their relationship in September, though, she found herself having to not only try to get Craig to do his share of the laundry, but to justify why people should be allowed to speak languages other than English in public, why removing taxes for tampons isn’t unfair, and more bizarrely, why being a feminist isn’t the same as being a Nazi.

“Nearly all the arguments came from YouTube videos he was watching,” Sarah tells me. “Because he’d work at night, he’d spend the day on the internet. He’d be watching them, and send them to me throughout the day on WhatsApp, over email, anywhere really.” During one work meeting in 2016, she received videos from him about a “migrant invasion into Britain, orchestrated by Angela Merkel and Barack Obama,” which showed Libyan refugees getting off a boat carrying large bags and shouting, “Thank you, Merkel!” played over dark orchestral music. Other videos supported Donald Trump’s proposed ban on Muslim immigrants, diatribes on feminism “threatening traditional families” and “scientific evidence” suggesting that white people have higher IQs than black and South Asian people.

With each, he’d ask what her view on it was. Sometimes, she’d say she didn’t know, and he’d “send me more videos, or explain why they were correct.” Other times, when she’d disagree — for example, when it came to whether abortion should be legal — he’d get angry. “He would start off by saying I was wrong, demanding I explain my view — during a work day! When I wouldn’t respond to him immediately, he’d tell me that my view was stupid and idiotic and that I was just another ‘dumb leftie’ who didn’t know what they were talking about.”

Another:

With few friends around him and Ellen at university, he spent the majority of his time online, learning how to trade foreign currency via obscure blogs and YouTube tutorials before wading into more political waters. “It started off fairly mild,” Ellen says, with a slight laugh. “He would WhatsApp me Jordan Peterson lectures about ‘social justice warriors’ on university campuses. Sometimes I’d just ignore them, or say that I didn’t agree with what they were saying. Eventually, he moved on to more extreme material. He would send me videos by Stefan Molyneux about the links between race and IQ, or how it was scientifically proven that Conservative women were more attractive and left-wing women like me were fat and ugly.”

Also: YouTube, ugh.

Speaking of which, there’s also this piece at BuzzFeed from former Nieman Lab staffer Caroline O’Donovan and future New York Times opinion writer Charlie Warzel:

How many clicks through YouTube’s “Up Next” recommendations does it take to go from an anodyne PBS clip about the 116th United States Congress to an anti-immigrant video from a designated hate organization? Thanks to the site’s recommendation algorithm, just nine…

The Center for Immigration Studies, a think tank the Southern Poverty Law Center classified as an anti-immigrant hate group in 2016, posted the video to YouTube in 2011. But that designation didn’t stop YouTube’s Up Next from recommending it earlier this month after a search for “us house of representatives” conducted in a fresh search session with no viewing history, personal data, or browser cookies. YouTube’s top result for this query was a PBS NewsHour clip, but after clicking through eight of the platform’s top Up Next recommendations, it offered the Arizona rancher video alongside content from the Atlantic, the Wall Street Journal, and PragerU, a right-wing online “university.”

How exactly should we describe (and research) Fox News? Is Fox News propaganda or a reliable news source? How should the researchers who are studying it, and the people who are writing about it, label it? Jacob Nelson asked “a number of academics who have researched partisan news generally and Fox News specifically” how they characterize the most-watched basic cable network in America. Not surprisingly, they say: It’s complicated.

Though scholars like [Louisiana State University’s Kathleen] Searles assert that the categorization of Fox as a partisan news outlet akin to MSNBC continues to be accurate, others think that kind of comparison no longer applies. As [Rutgers associate professor Lauren] Feldman explains, “While MSNBC is certainly partisan and traffics in outrage and opinion, its reporting — even on its prime-time talk shows — has a much clearer relationship with facts than does coverage on Fox.” Princeton University assistant professor Andy Guess echoes this point: “There’s no doubt that primetime hosts on Fox News are increasingly comfortable trafficking in conspiracy theories and open appeals to nativism, which is a major difference from its liberal counterparts.”

But maybe, NYU visiting assistant professor and Tow Center fellow A.J. Bauer argues, Fox News really should be studied as a news outlet.

Taking conservative news seriously — granting that it is, indeed, a form of journalism — destabilizes our traditional normative ways of thinking about news and journalism. [But] those categories are already thoroughly destabilized among the general public, and it’s long since time that journalists and scholars reckoned with this problem directly.

(Bauer wrote a prediction for us last month, asking: “What happens to the conservative mediasphere when it loses its current center of gravity?”)

The red flags. Data & Society has a nice chart capturing “a step by step process for reading metadata from social media content. The goal for each step is to evaluate different types of ‘red flags’ — characteristics which can, when taken together indicate likely manipulation and coordinated inauthentic behavior.” Among those red flags: “Pervasive use of linkshorteners for automated messaging and mass content posting,” “total absence of content or geotags,” and “automated responses from other accounts (e.g., ‘Thanks for the follow! Check out my webpage!’).”
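
As a rough illustration of how those red flags might be checked programmatically — the field names and patterns below are hypothetical, not Data & Society’s actual tooling:

```python
import re

# Hypothetical red-flag checker for a single post's metadata. The fields
# ("urls", "text", "geotag", "replies") are invented for this sketch and
# don't correspond to any particular platform's API.
SHORTENERS = {"bit.ly", "ow.ly", "tinyurl.com", "goo.gl", "t.co"}
AUTOMATED_PATTERNS = [
    re.compile(r"thanks for the follow", re.I),
    re.compile(r"check out my (web)?page", re.I),
]

def red_flags(post):
    """Return the red-flag labels this post raises.

    No single flag proves anything; per the Data & Society framing, it's
    the combination of flags that suggests coordinated inauthentic behavior.
    """
    flags = []
    urls = post.get("urls", [])
    if urls and all(any(s in u for s in SHORTENERS) for u in urls):
        flags.append("pervasive use of link shorteners")
    if not post.get("text") and not post.get("geotag"):
        flags.append("no content or geotags")
    if any(p.search(r) for r in post.get("replies", []) for p in AUTOMATED_PATTERNS):
        flags.append("automated-looking responses from other accounts")
    return flags

suspect = {
    "text": "",
    "geotag": None,
    "urls": ["https://bit.ly/3abcdef"],
    "replies": ["Thanks for the follow! Check out my webpage!"],
}
print(red_flags(suspect))  # all three flags fire for this example
```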

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

With tech’s reality a little too dystopian, The Verge is turning to science fiction for inspiration https://www.niemanlab.org/2019/01/with-techs-reality-a-little-too-dystopian-the-verge-is-turning-to-science-fiction-for-inspiration/ https://www.niemanlab.org/2019/01/with-techs-reality-a-little-too-dystopian-the-verge-is-turning-to-science-fiction-for-inspiration/#respond Wed, 23 Jan 2019 14:55:48 +0000 http://www.niemanlab.org/?p=167813 Maybe you cringe when you see breaking news on (or about) Twitter, or flee from Facebook Portals, or can only imagine each dystopian news cycle drawing us closer to The Handmaid’s Tale. If so, try thinking of tech as a tool that can replicate you to keep your dog company after you die, or that can open-source a rocket ship to escape the charred Earth for Jovian Europa. Nice, pleasant thoughts.

These are two storylines of the 11 The Verge is releasing over January and February as part of its Better Worlds project. It’s an attempt to brighten up the site’s coverage of the dismal real world and push for science fiction that inspires. It’s the seven-year-old site’s first foray into fiction, and it’s planting the series across all its regular distributed platforms — but with less of a focus on Facebook.

“The reality of doing journalism around science and engineering right now is there’s a bunch of fun stuff — but there’s also a bunch of pretty negative stuff,” Nilay Patel, The Verge’s editor-in-chief, told me. (Recently departed culture editor Laura Hudson came up with the initial idea last spring.) “We’re trying to strike a balance where we’re showing people as much opportunity, the democratization of culture, innovation as we are [showing that] ‘Hey, Facebook should get its shit together.'”

In other words, if you can’t report positive news, maybe kick in some fiction to cheer folks up! (Just kidding?) The Verge is trying to intersperse Star Trek-level sci-fi encouragement among its reporting on, say, whether Nest cams are being hacked to transmit fake nuclear bomb threats. (They’re not.) Here’s a snippet from Justina Ireland’s piece:

Luis runs off, and Carlinda turns back to me. “This was always the plan, Mimi. The game is rigged, and we all knew that as soon as a few people found out this was happening, they were going to come down. Hard. And that’s what we wanted. Sure, getting people to Europa is great, but disrupting the entire system? Turning their tricks against them? That’s the real way we win.”

The buzz of drones comes from overhead, and she looks up. “It’s the media. Right on time. You’d better run, Mimi. You got a ship to fly.” And then she darts off into the crowd.

“We didn’t want something that was Pollyanna and didn’t want to glaze something over,” Helen Havlak, The Verge’s editorial director, said, also highlighting their goal to commission from authors with diverse perspectives on science fiction. “We asked writers to come up with stories that presented conflict but with the possibility for tech and science to make the world better and a glimpse of a better future. And we saw great response from the authors on that premise because they too have been in the zone of ‘dystopian fiction has been what’s getting optioned for TV’.”

The fiction project is one way to refresh the feeds of followers who may be getting burned out on Facebook data privacy explainers or the government shutdown’s (still ongoing!) impact on the future of science — and of course to harness new readers coming through the fan followings of writers like John Scalzi and Leigh Alexander.

A lot of that audience is coming from YouTube and Instagram Stories, not necessarily Facebook — a twist for The Verge, which leaned on its Circuit Breaker spinoff for high Facebook engagement. Each story is published in full in the Better Worlds section on the site with an author Q&A and either a video animation or an audiobook making the piece more vivid. But now — as the industry recuperates from Facebook’s failed pivot-to-video promises and faulty metrics — they’ve pushed off from Facebook a little more.

Video strategy on and off Facebook was the nut Patel was trying to crack in 2016. He told Nieman Lab then: “The biggest challenge, the hardest one, has been our video strategy: why we make videos, who we make videos for, what kind of stories we want to tell with video, what platforms the videos belong on.”

But videos on Circuit Breaker, The Verge’s spinoff gadget blog-as-a-Facebook-page, were taking off. A little more than two years ago, Circuit Breaker views on Facebook — with 500,000 followers — were neck and neck with views from 2 million followers on YouTube. By 2019, the tech dystopia has shown that Facebook is not social video heaven. In its first 36 hours, the video for John Scalzi’s piece got more than 35,000 views on YouTube compared to 14,000 on Facebook.

“A lot of the grueling questions in 2016 of video on the internet we just answered and said: Our video lives on YouTube. We’re going to program for the YouTube audience,” Patel told me. “We solved the main thing that made this a challenge, which was where does this live and what does our community look like. We made the call, and now after two years of investment in that decision, we’re able to spread out a little more.”

Havlak also noted that Instagram Stories has brought a surprising amount of engagement to the series so far, and the team is planning Reddit AMAs with the authors as well.

This project wasn’t cheap for The Verge, which is also coming off building a house with Vox Media sister site Curbed. Boeing sponsored Ireland’s “Theory of Flight” piece, though the months-long process consumed Havlak, Hudson, and design director William Joel, Havlak and Patel said, with help from others on The Verge’s staff. But the spread of engagement might mean more fiction in the future — no matter how dystopian that future may look.

YouTube helps a majority of American users understand current events — but 64 percent say they see untrue info https://www.niemanlab.org/2018/11/youtube-helps-a-majority-of-american-users-understand-current-events-but-64-percent-say-they-see-untrue-info/ https://www.niemanlab.org/2018/11/youtube-helps-a-majority-of-american-users-understand-current-events-but-64-percent-say-they-see-untrue-info/#respond Wed, 07 Nov 2018 15:00:48 +0000 http://www.niemanlab.org/?p=164720 As much as we rag (mmm, rightfully!) on the major tech platforms for their algorithms getting “don’t amplify disinformation” wrong, YouTube as a platform occupies a very peculiar spot. Unlike its more social peers, YouTube isn’t primarily about making meaningful connections, snippets of snark, or perfected selfies. It’s closer to a pure consumption platform, at least the way most people use it, and it’s unusually directed toward usefulness.

Are you actually wasting time on YouTube when you’re watching a cooking video instead of scrolling/tapping mindlessly through one of your various News Feeds elsewhere? Is it pacifying your grabby infant so you can be an adult and clean the bathroom? Are you going to learn how to knit or repair something in your home any other way? See, useful.

YouTube, which recorded 1.5 billion monthly logged-in users last year, also has the downsides of drawing some users into more extreme-content rabbit holes, surfacing disturbing videos on the kid-friendly version of the platform, and amplifying creators like those Paul brothers who stupidly vlog from Japanese forests. Not so useful.

Still, when a one-hour outage on the platform can result in a 20 percent jump in traffic to publishers’ websites (compared to a 2.3 percent increase when Facebook was down), YouTube’s got a special share of the attention economy.

The Pew Research Center has new data on just how useful YouTube is — including its recommendations algorithm, which apparently drives 70 percent of consumption. 35 percent of all U.S. adults use YouTube, and 51 percent of those say YouTube has helped them learn how to do something for the first time, according to a new report drawing on a survey of 4,500 Americans. The percentage of YouTube users who say they get news or headlines there has doubled since 2013 (38 percent today, compared to 20 percent then).

YouTube also plays a big role in occupying those who aren’t yet of reading age. 81 percent of all parents with kids age 11 and under have used YouTube to placate their spawn at least once; more than a third allow their kid to watch videos on the platform regularly. The Pew report points out that YouTube, by YouTube/Google’s own policies, is intended for those age 13 and older, though YouTube Kids is supposed to be a safer version of the platform.

There’s still plenty of questionable content on YouTube, and a majority of respondents noted that they often encounter “troubling or problematic” videos. 60 percent told Pew that they end up watching videos of “dangerous or troubling behavior,” and 64 percent see videos that “seem obviously false or untrue.” This persists in the kids content as well: One example The New York Times highlighted was a three-year-old boy coming across “PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized.” This is pretty much the opposite of useful.

Crises like the PAW Patrol incident uncovered by the Times, not to mention a whipsawing 2017 for the platform — The Verge highlighted the downfall of its biggest star, the apparently anti-Semitic gamer PewDiePie, and a near-boycott from big brands whose advertising was running alongside racist videos — spurred YouTube to release a transparency report in May. Users have always had the opportunity to flag inappropriate content, as we wrote at the time, but it turns out YouTube didn’t rely too heavily on those signals:

YouTube’s latest transparency report tells us a great deal about how user flags now matter to its content moderation process — and it’s not much. Clearly, automated software designed to detect possible violations and “flag” them for review do the majority of the work. In the three-month period between October and December 2017, 8.2 million videos were removed; 80 percent of those removed were flagged by software, 13 percent by trusted flaggers, and only 4 percent by regular users. Strikingly, 75 percent of the videos removed were gone before they’d been viewed even once, which means they simply could not have been flagged by a user.

On the other hand, according to this data, YouTube received 9.3 million flags in the same three months, 94 percent from regular users. But those flags led to very few removals. In the report, YouTube is diplomatic about the value of these flags: “user flags are critical to identifying some violative content that needs to be removed, but users also flag lots of benign content, which is why trained reviewers and systems are critical to ensure we only act on videos that violate our policies.”

Pew researchers also explored the recommendation algorithm, which 81 percent of those polled say at least “occasionally” drives their video consumption choices. Here’s what they found:

  • 28 percent of the videos they encountered were recommended multiple times, “suggesting that the recommendation algorithm points viewers to a consistent set of videos with some regularity.”
  • YouTube recommends longer and longer content over time. The researchers started with videos that were 9:31 long, on average, and by the fourth recommendation were directed to a nearly 15-minute-long video.
  • The algorithm also pointed users toward more and more popular videos. More than two-thirds of the recommended videos had more than 1 million views. The average view count went from 8 million for the videos in the starting round to 30 million for the first recommended video and more than 40 million by the fourth recommendation.

Video has not proven effective as the next! hot! thing! for publishers to pivot to, as demonstrated by Facebook’s video hype-and-fail. But the YouTube niche is there, and it’s definitely not cold. Nearly one in five respondents told Pew YouTube helps them understand things happening in the world — you know, current events and news, to name a few.

Earlier this year, YouTube announced its plan for improving the platform’s news discovery experience. It includes $25 million in grants for news organizations to build out their video operations and experiments with boosting local news in YouTube’s connected TV app — not to mention adding text-based news article snippets from “authoritative sources” alongside search results in breaking situations — but TBD on that initiative’s success. If YouTube really wants to be the most useful platform, it might want to make sure it’s not scarring children for the rest of their lives or radicalizing someone who just wants to learn how to clean a gun.

Image from geralt used under a Creative Commons license.

When YouTube went down for an hour, publishers’ traffic increased https://www.niemanlab.org/2018/10/when-youtube-went-down-for-an-hour-publishers-traffic-increased/ https://www.niemanlab.org/2018/10/when-youtube-went-down-for-an-hour-publishers-traffic-increased/#respond Tue, 23 Oct 2018 12:00:14 +0000 http://www.niemanlab.org/?p=164205 What do people do when YouTube is down? Apparently, they go read articles — especially articles about why YouTube is down.

A one-hour YouTube outage on October 16 at around 9 p.m. ET resulted in a 20 percent net increase in traffic to client publishers’ sites, Chartbeat found.

That increase was roughly evenly split between general articles on the publishers’ sites, and articles specifically about the YouTube outage.

The shift in consumption when YouTube was down is notable compared to previous outages on other services. A 45-minute Facebook outage on August 3, for instance, resulted in just a 2.3 percent net increase in traffic to Chartbeat publishers’ sites (and only a negligible amount of that traffic went to articles about the outage). As in the YouTube case, though, readers used the time that Facebook was down to go straight to publishers’ sites: Direct traffic to publishers’ websites increased 11 percent, Chartbeat found, while traffic to publishers’ mobile apps went up by 22 percent.

Unlike Facebook, YouTube is not normally a traffic driver to publishers, Chartbeat notes, making the October 16 bump “purely additive.” The YouTube outage also took place on a Tuesday evening in the U.S., which Chartbeat’s data scientist Su Hang refers to as “prime couch time” — and when users chilling on their couches couldn’t pull up YouTube videos, it appears they went to read stuff instead.

For an hour, anyway.

Explanatory video + engagement = How Vox’s Borders series is humanizing the map and building local source networks https://www.niemanlab.org/2018/08/explanatory-video-engagement-how-voxs-borders-series-is-humanizing-the-map-and-building-local-source-networks/ https://www.niemanlab.org/2018/08/explanatory-video-engagement-how-voxs-borders-series-is-humanizing-the-map-and-building-local-source-networks/#respond Mon, 27 Aug 2018 13:36:34 +0000 http://www.niemanlab.org/?p=162382 If you’re going to attempt to humanize the border between two contentious countries, you should probably start by asking the humans living there what they think.

And while it’s easy (well, relatively) to go in and ask the first migrants or border guards or vendors that you see — doing your engagement homework beforehand doesn’t hurt either. That was the premise of Borders, Vox’s video series by Johnny Harris and producer Christina Thornell.

After living near the U.S.–Mexico border in Tijuana, “I wanted to humanize the lines on the map,” Harris said. “I wanted to look at a map and zoom into it and look at the people there and the stories surrounding this thing that we usually just look at from 30,000 feet.”

Harris, an international-relations aficionado, pitched the idea a few years ago, and it has since bubbled into three seasons of five or six episodes. The first season focused on six different borders from Haiti/Dominican Republic to Nepal/China. The second, a deep dive into Hong Kong, brought in more engagement reporting — to avoid the white-man-parachuting-into-the-natives’-land trope and to more fully tell the stories of Hong Kong’s borders. The third — focusing on the borders of Colombia — is yet to come, as the callout started Thursday with Harris’s announcement of the new destination.

Vox’s video team is only four years old, like the site itself; its leader, Joe Posner, pointed out that Harris was the fourth person to join the team. “It’s a delicate mission to help explain the world, but we’re just riding along with our audience. We’re just as curious as they are,” Posner said.

This isn’t the first time Vox (with a four-person engagement team) has sought its audience’s ideas for video; for Explained, its weekly Netflix show, producers used follower input for its e-sports and K-pop episodes, Posner said. Vox also made a video on what high schoolers really think about school shootings, drawing on a post-Parkland survey that received 1,635 responses.

The Borders idea was different because the approach was baked in from the beginning. “I was sensitive to saying, ‘Here I am, an outsider, a non-expert going to these places and saying I’m here to explain this,'” Harris said.

Blair Hickman, Vox’s director of audience, explained the way they ramped up engagement with each season. For the first, knowing it would have a broader focus, Harris made a video asking followers to suggest places for him to highlight. Hickman designed a form to gather ideas and 7,000 responses came in over one month. While Harris was out exploring, the engagement team flexed the muscles of Harris’s Facebook page as a “community hub,” as Hickman described it, to share his trip with the 69,000 followers.

For the second season on Hong Kong, Harris and Hickman created a network of local followers who wanted to participate in or help the videos’ creation in some way, using this form and other callouts. “We expected 40 responses and got more than 700,” Hickman said. “In the weeks leading up to his trips, each week we would send via email a set of structured call-outs for what to explore.” These were designed both to find on-the-ground interviews and local fixers who could help navigate the politics of drone-filming borders, and to surface other components of the larger story of the border, like a neon light craftsman and people stuck in cage homes in Hong Kong’s housing market. When he was on the ground, Harris emailed the network to see if anyone wanted to meet up and had to make a waitlist with 400 interested people. He set up a meeting place with 15-minute slots to come chat with him, and ended up walking around with some folks who described the history of the locations to him as they lived through it. But all the work started beforehand.

“Going through and reading 400 responses to people’s feelings on the encroachment of China in Hong Kong, suddenly after you read hundreds of responses on how people are thinking about this — you haven’t set foot in Hong Kong but you’ve interacted with this paradigm,” Harris said.

Now the show moves on to Colombia, but the network in Hong Kong remains. Hickman said they hope the local audience stays interested in the Borders framework and continues to watch the next season, and that Vox will be able to rely on them for help sourcing future stories. But the stories Harris tells in the next season depend on the ideas they get from the people who live there.

Vox’s video programming has remained scrappy, as we described in a piece about the team midway through its current existence, with a streamlined focus on having one person responsible for the video — from pitch to publication. Besides their Netflix foray with Explained, they’ve also leaned on Facebook Watch both for funding and increased viewership. “More broadly we are just trying to grow onto the various stages for video that exist,” Posner said.

Screenshot of a Hong Kong entrepreneur showing Harris his cage homes 2.0 (“pods”) from Season 2, episode 5.

A big shakeup at Audible has left the audiobook giant’s podcast strategy unclear https://www.niemanlab.org/2018/08/a-big-shakeup-at-audible-has-left-the-audiobook-giants-podcast-strategy-unclear/ https://www.niemanlab.org/2018/08/a-big-shakeup-at-audible-has-left-the-audiobook-giants-podcast-strategy-unclear/#respond Tue, 07 Aug 2018 14:26:59 +0000 http://www.niemanlab.org/?p=161709 Welcome to Hot Pod, a newsletter about podcasts. This is issue 172, published August 7, 2018.

Huge shakeups at Audible Originals. I can confirm that the Amazon-owned audiobook giant announced internally last Thursday that it was eliminating a considerable number of roles within its original programming unit. Sources within the company tell me that the role eliminations span a number of different teams within the unit, but most notably, they include nearly the entire group responsible for Audible’s shorter-form podcast-style programming, like the critically acclaimed West Cork, The Butterfly Effect with Jon Ronson, and Where Should We Begin? with Esther Perel. That group was previously led by former NPR executive Eric Nuzum and his deputy, the public radio veteran Jesse Baker.

NPR’s Neda Ulaby first reported the development in a newscast on Friday evening. In the spot, Ulaby noted that about a dozen employees were affected and that the changes came “with no warning.”

Yesterday, Nuzum, who held the title of SVP of original content development, circulated an email announcing that he will be leaving the company in the next few weeks. He also noted that he plans to engage in some consulting work in the short-term, before diving into a new venture by the year’s end.

These developments come as Audible reshapes its original programming strategy. A spokesperson for the company tells me: “As you may know, we’ve been evolving our content strategy for Audible Originals (including our theater initiative, narrative storytelling ‘written to the form’ as well as short-form programming). A related restructure of our teams resulted in the elimination of several roles and the transfer of some positions to other parts of the business.”

I briefly wrote about this shift last month, using the release of the author Michael Lewis’ audiobook-only project, The Coming Storm, as the news hook. In the piece, I posited a link between the strategic changes and recent shake-ups at the company’s executive level:

Audible has long been a horizontal curiosity for the podcast industry, given its hiring of former NPR programming VP Eric Nuzum in mid-2015 and subsequent rollout of the Audible Originals and “Channels” strategy in mid-2016, which saw the company releasing products that some, like myself, perceived as comparable to and competitive with the kinds of products you’d get from the podcast ecosystem.

This signing of authors like Michael Lewis to audiobook-first deals appears to be a ramping up of an alternate original programming strategy, one that sees Audible leaning more heavily into the preexisting nature of its core relationships with the book publishing industry and the book-buying audience. It might also be a consequence of a reshuffle at the executive decision-making level: in late 2017, the Hollywood Reporter broke news that chief content officer Andrew Gaies and chief revenue officer Will Lopes unexpectedly resigned from their posts. (Later reporting noted that the resignations happened in the midst of a harassment probe.) The ripple effects of that sudden shift in leadership are probably only hitting us now, and in this form.

So that’s the context. Here’s what I don’t know:

  • What happens to all the podcast-style Audible Original programs that are still ongoing? What happens to their future seasons currently in production? And will those properties be given the opportunity to leave for other podcast companies — or will they be integrated into Audible’s new strategy in some form?
  • What happens to the dozen or so producers that were affected by the role eliminations?

And then, of course, there’s the question of what this means for Audible. I’ll leave this for next week.

The Alex Jones problem. The past few months have seen a flurry of activity on the subject of internet platforms and their responsibilities around hateful content, harmful material, and the limits of free speech. The issue largely focused on high-volume media-distribution platforms like Facebook, YouTube, and Twitter, but its scope actually extends much further than that: the e-commerce giant Amazon, as well, has faced scrutiny over some of the products it allows on its platform.

Last week, the ongoing saga reached podcasting shores, and from there the story reverberated back outward in significant ways.

Over the weekend, both Apple Podcasts and the Midroll-owned Stitcher removed podcasts by Infowars, the conspiracy theory-peddling media company led by Alex Jones, from their platforms. (If, for some reason, you are unfamiliar with Jones and Infowars, I highly recommend this profile by Charlie Warzel.)

Stitcher’s and Apple’s decisions came shortly after Spotify announced it was removing from its platform specific episodes of Alex Jones’ podcasts that were found to be in violation of its Hateful Content policy. At the time, the music streaming service was facing backlash for continuing to distribute the conspiracy theorist’s podcasts after Facebook and YouTube had temporarily suspended some of Jones’ programming for similar content policy violations. Spotify remained under pressure even after the selective removals, with critics continuing to question whether the platform had done nearly enough.

It’s worth noting that Stitcher was the first major podcast-distributing platform to delist Jones’ shows in their entirety. The company did so on Thursday evening, saying on Twitter that Jones had, on multiple occasions, violated its policies when he published episodes that “harassed or allowed harassment of private individuals and organizations, and that harassment has led listeners of the show to engage in similar harassment and other damaging activity.” Sources within the company told me last week that the decision to completely remove Jones’ programming, as opposed to just focusing on specific offending episodes (as in the case of Spotify), stemmed from its judgment that the podcasts were likely to violate its policies on harassment and abuse in the future. Stitcher’s move attracted a fair bit of media attention, with writeups from Billboard, Engadget, BuzzFeed News, and TechCrunch.

Apple’s removal of Jones’ podcasts took place sometime during Sunday evening. I first noticed the delisting around 6:45 p.m. Pacific, and BuzzFeed News published the first official report on the matter shortly after. In the report, Apple similarly cited policy violations as the grounds for Jones’ removal. As a spokesperson told BuzzFeed News:

Apple does not tolerate hate speech, and we have clear guidelines that creators and developers must follow to ensure we provide a safe environment for all of our users…podcasts that violate these guidelines are removed from our directory making them no longer searchable or available for download or streaming. We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.

Strangely, Apple’s decision only impacted five out of six Infowars podcasts. Real News With David Knight, Infowars’ daily news recap show, remains active on the platform. No explanation was given as to why. The BuzzFeed News report also highlighted the efforts by Sleeping Giants, a social media-based activism group, to lead pressure campaigns to get major internet platforms to cut ties with Jones.

Apple’s decision to delist Jones’ podcasts is noteworthy for its ripple effect within the podcast ecosystem. The Apple Podcasts platform does not actually host podcasts itself, functioning instead as a directory: publishers submit their RSS feeds, which Apple reviews for inclusion. Because of Apple Podcasts’ historical scale, infrastructure, and preexisting inventory map, a significant number of other podcast apps, including the public radio coalition-owned Pocket Casts, rely on Apple Podcasts’ inventory to determine their own offerings — sometimes to be efficient in populating their app, other times to lean on a larger authority for content policing. The removal is also noteworthy, obviously, for the fact that Apple Podcasts is believed to still be the most widely used podcast listening app in the market.
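
To make that dependence concrete, here’s a minimal sketch of how a third-party podcast app might lean on Apple’s directory to populate its own catalog. It assumes the public iTunes Search API and its podcast result fields (the endpoint, parameters, and “feedUrl” field here are illustrative assumptions; none of the apps named above have described their actual integrations):

    # A rough illustrative sketch, not Apple's or any app's actual integration.
    # Assumes the public iTunes Search API and its "feedUrl"/"collectionName" fields.
    import requests

    def lookup_feeds(query, limit=5):
        """Ask the podcast directory for shows matching `query` and return
        their titles and RSS feed URLs."""
        resp = requests.get(
            "https://itunes.apple.com/search",
            params={"term": query, "media": "podcast", "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        shows = []
        for result in resp.json().get("results", []):
            feed = result.get("feedUrl")
            if feed:  # a show delisted from the directory simply never appears here
                shows.append({"title": result.get("collectionName"), "feed": feed})
        return shows

    if __name__ == "__main__":
        for show in lookup_feeds("daily news"):
            print(show["title"], "->", show["feed"])

An app that fills its catalog this way inherits Apple’s editorial decisions by default: once a show is dropped from the directory, it stops appearing in results like these unless the app goes looking for the RSS feed some other way.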

And it seems the ripple effect has extended outwards as well. Yesterday, Facebook, YouTube, and Spotify all followed up by completely removing Alex Jones and Infowars programming from their platforms, all citing repeated violations around their hate speech and harassment policies.

As the bans from Facebook, Spotify, and YouTube trickled out on Monday, there emerged some debate about whether the bans were the result of separate processes that were all bound to end up at the same conclusion, or whether this was a situation where these gargantuan platforms were simply waiting for someone else to take the first step. Given the timeline and stutter-step nature of Monday’s Infowars bans, I can’t help but view this as the latter. When it comes to big internet platforms (or any huge organization with massive stakes, really), deeply complicated questions, and moral leadership, stories like these almost always crescendo to a point where everyone settles into a holding pattern, waiting for someone else to take the first step into the muck — and to reveal the full ramifications of what lies on the other side.

In this case, the first one in was comparatively smaller Stitcher, and I can’t shake the feeling the company’s actions ended up attracting the right amount of attention and creating a permission structure that made it easier for the others to move in this direction. For what it’s worth, I hope they get the credit for it.

Show notes:

  • James Andrew Miller’s oral history podcast with Cadence13, Origins, is returning with three new seasons — or “chapters,” in its parlance — on the horizon: one on college football coach Nick Saban, one on the upcoming season of Saturday Night Live, and one on the legendary HBO show Sex and the City.
  • Tenderfoot’s Up and Vanished will kick off its second season on August 20. The podcast has now partnered with Cadence13 for distribution and monetization.
  • Radiotopia’s new Showcase series, called The Great God of Depression, dropped in full last Friday. Pagan Kennedy, a coproducer on the project, also published a related op-ed in The New York Times over the weekend.

Existentialism. Last Thursday, Edison Research SVP Tom Webster — one of the principal frontmen for the measurement firm’s Infinite Dial study, which gives the podcast industry its benchmark numbers — published a Medium post titled “Podcasting’s Next Frontier: A Manifesto For Growth.” It is an adaptation of Webster’s keynote from the recent Podcast Movement conference, and it presents a data-supported argument around what he views as the fundamental challenge for the podcast ecosystem…and what, broadly speaking, may be the way through it.

Webster’s argument contains numerous moving parts and side-theses (be sure to clock the bit about music podcasts), and at the risk of oversimplifying his perspective, here’s the main thrust of the piece as I understand it:

(1) Contrary to aspects of its public narrative, podcasting isn’t actually growing that fast. As Webster outlines: “Since we started tracking podcasting in 2006, weekly consumption has gone from essentially zero to 17% of Americans 12+. That’s 0–17, in 13 years, or less than two percentage points per year. Now, it’s grown a bit faster over the past 5 years, but can anyone look at this graph and call podcasting a fast-growing medium? It’s actually one of the slowest-growing media we’ve ever tracked in the Infinite Dial.”

(2) Raising the possibility (or, indeed, probability) that there will soon come a day when its annual reporting will show a flattening or decrease in podcast listening growth, Webster highlights the principal metric that should be the center of our attention: “17% of Americans say they listen to a podcast at least once a week. 64% of Americans say they know the term. That means that about three-quarters of the people who say they know the term ‘podcasting’ are not weekly listeners.” To Webster, this data point suggests that the fundamental problem is as follows: lots of people have heard about podcasting, but they don’t actually know what it is.
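
That “three-quarters” figure follows directly from the two numbers, under the reasonable assumption that weekly listeners are a subset of the people who recognize the term; here’s the back-of-the-envelope version:

    # Back-of-the-envelope reading of Webster's two figures, assuming weekly
    # listeners are a subset of the people who know the term "podcasting."
    aware = 0.64        # share of Americans 12+ who know the term
    weekly = 0.17       # share who listen at least once a week
    not_weekly = (aware - weekly) / aware
    print(f"{not_weekly:.0%} of people who know the term are not weekly listeners")
    # prints roughly 73% -- "about three-quarters"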

(3) That knowledge gap is preventing those potential new listeners from either trying out or buying into the medium. Part of this has to do with simple under-education about some core aspects of the ecosystem — podcasts are generally free, the means to consume them are already pre-baked into your phone, and so on — but a bigger part, Webster gestures, has to do with the podcasting ecosystem’s lack of collective messaging that elevates its public identity beyond being a mere technological curiosity. Which is to say: there hasn’t been a push to help podcast programming make sense within the everyday context of non-podcast consumers, in part by evoking facsimiles of what they already know or channeling the things they are already comfortable with.

For Webster, this conundrum is best expressed through the podcast ecosystem still not having what he calls “The Show”: the one program whose innate draw simplifies, supersedes, or even renders irrelevant the entire narrative around the distribution platform. He writes:

There once was a time when plenty of people didn’t think they had a Netflix app, didn’t know they needed one, and weren’t sure how to watch it without getting discs mailed in those red envelopes. So what did Netflix do? They didn’t spend a bunch of money on a “Got Netflix?” campaign. They spent a lot of money on Orange is the New Black and House of Cards. What gets people to discover Netflix is curiosity, and what drives curiosity is the show. The killer show.

Technology and gaming enthusiasts can probably broadly equate this argument with the notion of “killer apps” that move new devices and consoles. The same goes, I think, for SiriusXM and Howard Stern.

I had originally planned to present a much bigger discussion around Webster’s post, more or less agreeing with the broad strokes of his argument while at the same time looking to do a couple of things: identifying its limits, interrogating its assumptions, expanding the scope of the conversation. Forgive me, but I’m afraid I have to postpone that to next week, both for reasons of space and because I got caught up digging deep into the Audible and Alex Jones stories.

In the meantime, I leaned on Tom for this week’s Career Spotlight:

Career Spotlight. Since we have a huge chunk of Tom Webster’s writing to go through, what’s a little more? Let’s go.

Hot Pod: Tell me about your current situation.

Tom Webster: I’m senior vice president of Edison Research, where I’ve been for over 14 years (wow). As one of the few Edisonians who doesn’t work in the main office (I travel a lot, and work from my home in downtown Boston), I’m a bit of a minister without portfolio, I suppose. Our digital audio practice is certainly part of my remit, but my main role is as the “chief explainer” of our research to the outside world. I present our data to clients, to agencies, and at conferences all over the world. Thought leadership is pretty much 100 percent of our marketing strategy, so I try to speak wherever and whenever I can. I’m super fortunate that my wife, Tamsen Webster, is a brilliant idea whisperer; she works with speakers, executives, and companies on finding the thread of their ideas and making them stronger — so I have a free at-home speaking coach ;).

As far as life plans are concerned, I enjoy being involved in consumer insights, and don’t think I’ll ever stray that far from being passionate about the voice of the customer. I’m currently working on my second book, and I think there will be some creative endeavors down the road (another podcast or two, for sure) that will keep me engaged. One of the things that I love about my role at Edison is that I get to touch a lot of different projects, especially on the “diagnosis” and design phases, which means I am constantly trying to solve a wide variety of problems in a wide variety of industries. But podcasting has certainly been a passion of mine for nearly 15 years, and I really love where the space is right now, and its potential.

Hot Pod: What does your career arc thus far look like?

Webster: Bizarre, in some ways, in its relative stability. Of my 25-ish years of professional life, 20 have been with just two companies, which they tell me is fairly strange. My first real “I actually want to work here and don’t just need a job” job was with a market research company that served the radio industry, where I really cut my teeth (do people actually cut teeth?) as a media researcher. That was an invaluable experience for me — not only in terms of my craft, but also for what it taught me about how to treat and manage people. My bosses in that job, Frank Cody and Brian Stone, hired me for one role, which I sucked at. But their philosophy was to figure out what people actually were good at, then have them do those things—and they let me do that. I was a VP by age 29, and I owe that to Frank and Brian creating a role for me that played to my strengths (which I didn’t even know at the time) instead of berating me for my weaknesses. There are probably 100 things you can be good at in business, and I’m only really good at 4 of them. Frank and Brian built a role for me around those 4 things, and I’ve been in research ever since.

I left that job to co-found a startup in London which wound up burning out after a year and a half or so. When I returned to the States, I decided to go back to school full time, getting my MBA, to fill in some of the gaps I felt I had to at least be passable at if I were going to continue a career in marketing. I got a concentration in consumer insights in 2004, and then joined Edison shortly thereafter. I actually almost joined Edison in 1999 — the president and co-founder, Larry Rosin, was someone whom I’ve respected enormously throughout my career, and the chance to finally work with him and the incredible team he and Joe Lenski built was hard to pass up. As a unit, the Edison team is amazing at the 96 things I suck at, and they’ve both been incredible role models to me for doing things the right way. My wife started her own business two years ago, and more than once we have talked about a difficult business decision, and asked ourselves, “What would Larry do?” That’s always been the right answer.

Hot Pod: Throughout your life, what did a career mean to you?

Webster: I have an uncharacteristically short answer to this: it is very important to me to plant a flag for quality. Both of the two companies I mentioned spending 20 years with were prestige brands in their industries, and to me, a career is standing for something you believe in, being known for that thing, and for that thing to be of value. Edison certainly stands for a thing I believe in, and my career satisfaction stems directly from my modest role in telling that story to the world.

Hot Pod: When you first started out being a human, what did you think you wanted to do?

Webster: I grew up in a very small town in northern Maine, and really didn’t become a “human” in the grown-ass semi-aware sense until I finished college. I was the first in my family to go to college, and I am eternally grateful that my parents sacrificed so much to send me to Tufts, an experience that very nearly blew my mind in terms of the quantity and quality of ideas I was exposed to. After getting my B.A. in English lit, I was well and truly convinced that I wanted to be Robin Williams in Dead Poets Society. I went to grad school at Penn State, taught rhetoric and composition to the first-year class (time to abolish “freshmen,” yeah?) and fancied myself an Academic. I fell out of love with the “publish or perish” mindset, however, and figured out pretty quickly that academia wasn’t really my speed. The powerful play goes on — I’ve just found a different way to contribute my verse.

Hot Pod: Could you walk me through a little more about how you see Edison’s role in the world — and, like, the way your job has impacted your relationship to the knowability of things?

Webster: Larry and I talk about this a lot — our unofficial motto is that we’d rather be last and right than first. Period. This doesn’t mean that we are needlessly slow, by the way — as a small company, we are pretty nimble. But it does mean that what drives Larry, what drives me, and what drives all of us at Edison is the creation of new information — to understand something a little better than we did the day before and to go to bed at night knowing we did it as well as it could be done. I’m often asked by journalists and analysts to forecast things — where will we be in the future? What happens next? I resist those inquiries. Edison’s role in the world — in podcasting, in media, in our election research — is to be the most reliable and credible reporter of what *is*, not what will be. In terms of epistemology (top marks for being my only interviewer to ask me that one), I’d describe myself as being from the school of Pyrrho — a true Skeptic. That’s not a cynic, nor a pessimist. Merely one who believes that nothing can be known — not even this. We can only get close. And my belief in Edison’s role in the world is simply that I know we take the greatest pains possible to get as close as we can.

Hot Pod: What are you listening to right now?

Webster: I’ll get in trouble with numerous clients for not mentioning their shows, so this is a bit of a minefield question. I listen to about 20 hours of podcasts a week. I’d say half are music podcasts, which we need more of! I eagerly download and listen multiple times a week to the Anjunadeep Edition, a deep/progressive house music podcast that helps me write. I am a huge sports (and NBA in particular) nut, so I listen to Jalen and Jacoby, The Dan Le Batard Show, pretty much everything The Ringer does, and some NBA specific podcasts like The Lowe Post and The Woj Pod. My news comes from Up First, Planet Money, and Marketplace. I’ve known Mark Ramsey and Jeff Schmidt for years and years, and the collaborations they have done on Psycho, The Exorcist, and now Jaws are what audio should aspire to, IMHO.

Ultimately, I love The Show. I don’t think podcasting has given us The Show yet. It’s gotten close. And it will.

Thanks, Tom.

Miscellaneous bites:

  • “The Information has learned that only about 2% of the people with devices that use Amazon’s Alexa intelligent assistant — mostly Amazon’s own Echo line of speakers — have made a purchase with their voices so far in 2018, according to two people briefed on the company’s internal figures.” (The Information) As Nieman Lab’s Joshua Benton pointed out over Twitter: “That’s despite survey data suggesting something more like 25%.”
  • Breaker, the Y Combinator-accelerated podcast app, rolled out a new feature yesterday called Upstream that aims to help publishers create and manage a “premium content” structure without having to rely on a non-podcast-specific membership platform like Patreon. (Breaker)
  • “Apple’s HomePod may have just doubled its share of the U.S. smart speaker market.” (Fast Company)
  • “‘The Conservative Movement…Has Become a Racket’: Steve Schmidt Is Starting a Pod Save America for Never Trumpers.” (Vanity Fair)
  • “Colleen Scriven’s ‘Lesser Gods’ Podcast in Development as HBO Comedy Series.” (Variety)
  • “The Podcast Bros Want to Optimize Your Life.” (The New York Times)
  • “Patreon creators scramble as payments are mistakenly flagged as fraud.” (The Verge)
On a big story like the Helsinki Trump/Putin summit, Google News’ algorithm isn’t up to the task https://www.niemanlab.org/2018/07/on-a-big-story-like-the-helsinki-trump-putin-summit-google-news-algorithm-isnt-up-to-the-task/ https://www.niemanlab.org/2018/07/on-a-big-story-like-the-helsinki-trump-putin-summit-google-news-algorithm-isnt-up-to-the-task/#respond Wed, 18 Jul 2018 16:45:34 +0000 http://www.niemanlab.org/?p=160950 Imagine that you came back home after a busy day of work and wanted to catch up on the news about the Trump/Putin summit. This is, in fact, exactly what I did Monday.

I knew some interesting stuff had happened, but I wanted to dive deeper — to see multiple stories and get different perspectives. Google News seemed like a good place to start.

But look what I found at the top of the “Full coverage” page for the Trump/Putin press conference:

All four of these items come, directly or indirectly, from Fox News. Even worse, none of them is a factual report about the press conference — all are commentary from the conservative end of the political spectrum, more specifically from Trump sympathizers.

The fact that Google News thinks the four most important stories about the summit all come from or are based on Fox News is just stunning. Especially considering that the fallout from the press conference included criticism of Trump from conservative voices like Bob Corker, Lindsey Graham and John McCain.

Look, I understand that we have a polarized political environment and that publishing a partisan spin on the news is a reliable way for digital publishers to build an audience. One of the reasons I would go to Google News is to find different perspectives on an important story.

But my experience suggests that the Google News algorithm is, quite simply, broken. It is not only incapable of separating factual reporting from commentary — it can’t even provide a semblance of left/right balance on a story as polarizing as this one.

Google is not very transparent about how Google News works — here’s what they say on Google News Help. I would speculate that there are a couple of factors that explain what I saw:

  • First, that the Google News algorithm is prioritizing video or web pages containing video (such as all four of the links I saw).
  • Second, that Google News is valuing social signals — how content is behaving on social media, where partisan speech drives the most shares — rather than just the nature of the content (news reports vs. commentary) and/or the reliability of the news sources.

I do understand that it’s hard to build algorithms that can reliably differentiate news reports from commentary, and I also understand that tech companies are reluctant to get into the business of differentiating publishers based on quality or reliability. These are hard technology problems. But while there has been plenty of attention paid to Facebook’s role in propagating misinformation, and even a fair amount of attention to the flaws in Google’s YouTube recommendation algorithm, thus far Google News seems to have escaped the harshest criticism.

Google needs to tell us more about how its Google News algorithm works — and figure out how to apply technology (or technology plus human judgment) to do a better job of ranking stories that are the focus of so much partisan spin.

I don’t think that this requires rethinking all of Google News; the majority of news coverage never enters the partisan spin machine. But for stories like the Trump/Putin summit, the Google News algorithm seems to be failing entirely.

Rich Gordon is a professor and director of digital innovation at the Medill School of Journalism, Media, [and] Integrated Marketing Communications. A version of this post originally ran on Medium.

Photo of a “Kick Google aus dem Kiez” (“Kick Google from the Neighborhood”) protest June 14 in Kreuzberg, Germany by GloReiche Nachbarschaft used under a Creative Commons license.

YouTube has a plan to boost “authoritative” news sources and give grants to news video operations https://www.niemanlab.org/2018/07/youtube-has-a-plan-to-boost-authoritative-news-sources-and-give-grants-to-news-video-operations/ https://www.niemanlab.org/2018/07/youtube-has-a-plan-to-boost-authoritative-news-sources-and-give-grants-to-news-video-operations/#respond Tue, 10 Jul 2018 16:01:17 +0000 http://www.niemanlab.org/?p=160524 Google-owned YouTube on Tuesday announced a few improvements it intends to make to the news discovery and viewing experience. The platform has had a bit of a bad run recently: surfacing videos that accuse mass-shooting survivors of being crisis actors, hosting disturbing videos targeting children, encouraging radicalizing behaviors through its recommendation algorithm, frustrating content creators trying to figure out monetization on the platform, blindsiding Wikipedia by saying it would use it to provide context and debunking. (YouTube employees themselves came under attack in April, when a woman shot three people at its headquarters in San Bruno, California, before killing herself.)

The post about the platform’s coming changes, rosily titled “Building a better news experience on YouTube, together,” outlines new initiatives, including $25 million worth of grants for news organizations around the world to build out their video operations and tests of local news boosts in YouTube’s connected TV app (which it will expand to “dozens more markets like Cincinnati, Las Vegas, and Kansas City”).

Importantly, YouTube also says it will make “authoritative sources readily accessible,” adding text-based news article snippets to search results during developing events. A Wired piece pointed out a potential problem with those “authoritative” sources:

In the coming weeks, YouTube will start to display an information panel above videos about developing stories, which will include a link to an article that Google News deems to be most relevant and authoritative on the subject. The move is meant to help prevent hastily recorded hoax videos from rising to the top of YouTube’s recommendations. And yet, Google News hardly has a spotless record when it comes to promoting authoritative content. Following the 2016 election, the tool surfaced a WordPress blog falsely claiming Donald Trump won the popular vote as one of the top results for the term “final election results.”

YouTube will also provide links to more information on a “small number of well-established” topics, and won’t just lean on Wikipedia for those.

Starting [Monday], users will begin seeing information from third parties, including Wikipedia and Encyclopædia Britannica, alongside videos on a small number of well-established historical and scientific topics that have often been subject to misinformation, like the moon landing and the Oklahoma City Bombing.

As of midday Tuesday, I didn’t see these links out to third-party sources yet, but YouTube published an illustration of what would come up when someone searches for “Moon Landing.”

YouTube also said it’s committed to hiring more people who will directly work with news organizations, and it’s convening a working group of representatives from news organizations to help surface issues and develop features (Vox Media, Brazil’s Jovem Pan, and India Today are cited as members of the group).

The company’s full announcement is here.

La Pulla’s wildly popular YouTube videos (born at a 130-year-old newspaper) are bringing hard news to young Colombians https://www.niemanlab.org/2018/06/la-pullas-wildly-popular-youtube-videos-born-at-a-130-year-old-newspaper-are-bringing-hard-news-to-young-colombians/ https://www.niemanlab.org/2018/06/la-pullas-wildly-popular-youtube-videos-born-at-a-130-year-old-newspaper-are-bringing-hard-news-to-young-colombians/#respond Thu, 07 Jun 2018 13:26:52 +0000 http://www.niemanlab.org/?p=159198 María Paulina Baena gets stopped on the streets of Bogota, Colombia. Young people ask to take selfies with her and tell her how much they love La Pulla. The 27-year-old is the public face of the satirical video column that has shaken up the way young people consume news in Colombia. Created two years ago by five young journalists from the country’s oldest newspaper, the 130-year-old El Espectador, La Pulla has succeeded at what publishers worldwide long to do — connect with millennial audiences.

La Pulla (which translates to “The Taunt”) was the idea of a group of friends — five reporters who were covering different beats for El Espectador and also “terribly bored,” Baena explained. “We never expected that it would become what it became — a life project.”

They didn’t really know how, but they did know that they wanted to fill the informational vacuum that existed for millennials in Colombia, and that they wanted to speak to them with emotions and a language that sounded real. YouTubers were a big influence on the team. “In Colombia, people are pissed off and they don’t know why. They don’t do anything about it. We said, ‘Let’s use that rage to do something more than war. Let’s have a conversation.'”

Colombia is going through especially convulsive times. The country recently ended 50 years of conflict with its biggest guerrilla group, FARC — the longest-running conflict in the Western hemisphere — after four years of peace negotiations, and faced its first presidential elections “in peace” at the end of May (with no candidate receiving a majority of the vote; there will be a runoff on June 17). Homicide rates are among the highest in Latin America, and corruption scandals are constant.

La Pulla doesn’t shy away from these complex and highly sensitive topics — on the contrary. Following a John Oliver-esque style of raw, no-BS language combined with in-depth analysis, Baena asks the tough questions in two- to eight-minute social-media-friendly videos: What does the peace agreement say about land property? Are we really going to have a country free of drug trafficking? Why aren’t FARC guerrillas going to jail? Why do we kill each other so much in Latin America? Why is everyone afraid of Álvaro Uribe?

The first script the team wrote, two years ago, focused on a scandal involving sexual abuse by police officers. It went viral. “The day after, I woke up and I had 500 friend requests on Facebook,” Baena recalls, still with some horror. “My Twitter followers went from 500 to two or three thousand.” Fame is something that she hasn’t gotten used to yet.

In 2016, one of La Pulla’s videos about allowing same-sex couples to adopt children was awarded the Simón Bolívar National Journalism Award, the most prestigious journalism award in Colombia.

Before La Pulla, El Espectador had been trying to reach young audiences with video for some time, but its efforts kept failing. It lacked a strategy, and its videos had “no identity,” Baena said — leadership would just tell reporters to “go out and shoot some video.”

La Pulla turned the newspaper’s video efforts upside down: The team’s YouTube channel has more than 562,000 subscribers (compared to El Espectador’s 140,000), and some of La Pulla’s videos have nearly 2 million views on YouTube, while the newspaper’s rarely top 100,000.

YouTube has become La Pulla’s primary distribution channel. After a new video is released each Thursday afternoon, the team spends time answering people’s comments and listening to their suggestions. “We pay a lot of attention to our followers — that’s what makes us different,” Baena said. The team members have noticed that YouTube is where they’ve been able to nurture and grow an engaged community, with more meaningful interactions than the ones they have on Facebook or Twitter. “A YouTube subscriber is not a troll,” she added.

Through that direct engagement with its audience, La Pulla has a very clear idea of who is watching and how to speak to them. Eighty percent of its followers are between the ages of 18 and 34, though it’s also popular with people as young as 13. When La Pulla gives public talks at high schools, “children go crazy,” Baena said. “The myth that millennials are apathetic, that they only care about themselves, is a big lie.”

All that La Pulla’s team had when it entered the battle for public attention was a retro microphone, a suit, a pair of red eyeglasses, and an office desk — the office of El Espectador’s publisher, Fidel Cano Correa, which they still squat in every Tuesday morning to shoot. Baena feels a little embarrassed when she rewatches those early videos: “They look like they were produced by a bunch of primary school students.” But they already convey what La Pulla is all about: “No filters. We are honest.”

For the six-member team, the show’s “spine” is its investigative work. It’s the part of the production process that takes the most time for them — they spend a week or more digging into every story before they start writing the script.

Because of its biting criticism — which equally affects everyone who is mentioned in their videos — La Pulla’s content is labeled as “opinion” by El Espectador. One of the most common criticisms the team faces is that what it does is not journalism, and that its visceral tone spurs polarization.

But Baena believes that information and opinion are not mutually exclusive. “We do take a stand, but we are also journalists,” she said. La Pulla sees its role as going beyond selecting a quote from a press conference — it’s a translator of “empty concepts,” like the peace process or the Odebrecht corruption scandal, so that audiences can make better decisions as citizens.

El Espectador has been supportive of La Pulla since the beginning. The newspaper leadership believes that it has rejuvenated the brand, and publisher Cano Correa has given the team the freedom and independence it needs to equally attack all the subjects of their columns. (Still, “we are like El Espectador’s child,” Baena said. “When someone calls to complain, they call Fidel, they don’t call us.”) In a 2016 editorial, Cano Correa wrote: “Serious journalism can connect with new audiences and keep demonstrating why its existence is necessary. For El Espectador in particular, [La Pulla] has shown that, even if you’re 130 years old, you can be young and creative.” And at the INMA World Congress Conference last week in Washington, DC, Cano Correa touted La Pulla’s success.

La Pulla’s satirical tone has made it a huge social media success, but also a commercial challenge. “We burn all possible bridges with advertisers,” Baena said. Because the team treasures its independence, it’s looked into other ways to fund the project, including seeking grants from foundations and nonprofit organizations with like-minded visions. Some current funders include the Open Society Foundations and Friedrich-Ebert-Stiftung. (Another revenue source: doing workshops at Colombian universities.)

The team currently functions as a small startup within El Espectador’s parent company. They are El Espectador employees and use the newspaper’s platform for distribution and recognition — but they also raise those funds to help cover their own salaries, equipment, and expenses so that they can work on La Pulla full time. La Pulla is one of the newspaper’s biggest digital assets, both because it’s its most viewed product and because it attracts an audience that would be very hard to reach otherwise. Its content all remains free, even though El Espectador recently launched a metered paywall.

La Pulla’s brand has opened up the path to other video products for El Espectador. Besides the weekly column, the team now also produces a weekly two-minute video of news analysis called “Me acabo de enterar” (“I just found out”), which is performing well on social media. The newspaper has initiated other more personalized projects on its YouTube channel that better connect to young audiences, like a show telling stories of the LGBTQ community and a feminist talk show.

“Information should be useful for something,” Baena said. “Taking a stand makes people wake up.”

Photo of La Pulla’s team by Daniel Alvarez used with permission.

Google’s news chief Richard Gingras: “We need to rethink journalism at every dimension” https://www.niemanlab.org/2018/05/googles-news-chief-richard-gingras-we-need-to-rethink-journalism-at-every-dimension/ https://www.niemanlab.org/2018/05/googles-news-chief-richard-gingras-we-need-to-rethink-journalism-at-every-dimension/#respond Thu, 10 May 2018 14:11:47 +0000 http://www.niemanlab.org/?p=158200 In the shadows of the Cambridge Analytica scandal, the public’s trust in news, and the platforms that distribute it, is at an all-time low. As big tech seemingly scrambles to restore users’ confidence in their platforms, Google is introducing new ways to streamline the subscription process for digital news-readers. I sat down last week with Richard Gingras, the longtime vice president of news at Google, to discuss the company’s new Subscribe with Google feature, the open web, data privacy, and the search giant’s role in the future of news. What follows is a lightly edited transcript of our conversation.

There’s an interesting publication in Bristol, England: The Bristol Cable. They don’t have marketers on staff, they have community organizers on staff and they go out and they arrange town halls and they’re trying to assess the needs and interests of their community, they’re trying to figure out how do they engage with their community. Because if you are going to get people to buy something you have to understand their value proposition and the value proposition [today] isn’t the value proposition of 40 years ago.

So how do you understand the community’s needs? How do you address those needs? How do you rethink what the very nature and form of journalism is in this day and age? How do the contracts evolve in an environment where we’re all snacking off our cellphones? To what extent do narrative styles have to change? To what extent is it more immersive, or less immersive, or whatever? To what extent can data journalism become a stronger part of what we do, so that we’re not just covering news through stories and anecdotes but providing additional context to help people understand why something is important or not important to them? As we deal with these challenges, all of us as institutions — including Google, including the press — have to really rethink what our roles are in this very different world.

One of my concerns when I look out there at what happens in the world of news is disproportionality. You’ve got the British Parliament attack in London and our cable news networks in the United States go wall to wall with it for three days. A sad event — four people died. [Five, plus the assailant. —Ed.]

On those same three days, there were mass murders of four or more people that didn’t get covered. We have people going to the polls living in a farming community in Iowa concerned about terrorism, not understanding what the real needs and interests of their communities are. Can we not use data journalism to rethink that?

What I’ve suggested metaphorically is we give people data every day in the weather report. Can we create a weather report for our communities? If I’ve got a membership-supported community news organization — where I’m not so concerned about every click, because they’re not paying for access, they’re paying because they believe in your mission. Why is it not that, where that “weather report” gives me a sense of my community beyond the meteorological? Does it give me a sense of the crime rate in my community and why it’s different from other parts of my world? The air quality index, graduation from schools, so that I can get a better sense of what’s real and what’s not. My own personal favorite definition of journalism is to give citizens the tools they need to be good citizens: to give them the information they need when they go to the polls to make smart decisions about what’s important for their communities, and that’s not what’s happening today.

These are hard, hard problems. And particularly in an environment where we’ve got increasing trends towards populism and we’ve got politicians who degrade everything that the people in this room are doing.

We have to address these things. And just to continue my rant one more time, yesterday was World [Press Freedom] Day and it was a fabulous event celebrating the value of journalism, celebrating the quality of journalism. Telling stories about how the stories were covered, interspersed with music, emotionally resonating the themes of what we do. That was so powerful, and I came to Canada from the States where two weeks ago we had this ludicrous event called the White House Correspondents’ Dinner.

I have no issue with Michelle Wolf as a comedian doing what she did, but why would we take that platform — an opportunity to guide people in the value and values of journalism — and not do tit for tat with politicians looking to tear you down? We really have to rethink these things, all of us, really, pushing forward. What are the models we want to see, including tech platforms like Google?

I was a founder of The Trust Project for pushing on the architecture of journalism. Can we be more transparent, can we give people a better sense of what they’re seeing? Media literacy training is important, but can you design a news site that actually doesn’t need a user manual to tell you what you’re seeing, tell you what’s fact-based coverage versus opinion? And I transfer that to Google today, to our experience, as well — how do we evolve our user experience so people understand what they’re seeing? If you can find anything that’s findable in the corpus of expression, I don’t want you believing that every result you see is truth just because we surfaced it for you.

Skok: I agree with everything you said, but one small asterisk that I’d put there is that part of that is driven by the incentive structure. From my experience, what happens in the newsroom is a direct result of what happens in the boardroom. For decades, but particularly in the last 10 to 15 years, the boardroom decisions have been driven by scale and they’ve been driven by reach and a lot of that is…not Google’s fault, but Google has provided a tool —

Gingras: Let’s be honest with ourselves. Because that’s not valid. I mean as in that didn’t start with the internet, right? Frankly, you can go back to tabloid journalism in that regard. I mean, you know, what bleeds leads. Give me a break. These are important issues for society, but when I hear people say “oh God, Facebook is causing your addiction,” I go wait a second — we’ve been driving addiction with media since the day we started producing it.

Skok: I’m not saying that you are responsible —

Gingras: No, I don’t say it with that intent. I have no problem with people criticizing us for what we do. I’m just saying let’s look at the questions on a larger scale and understand what’s really going on because it ain’t as simple as that.

David Skok is the CEO and editor-in-chief of The Logic, a new Canadian news publication providing in-depth reporting on the innovation economy.

Photo of Skok and Gingras speaking at the Canadian Association of Journalists’ annual conference May 4 by Nick Iwanyshyn.

Facebook and YouTube just got more transparent. What do we see? https://www.niemanlab.org/2018/05/facebook-and-youtube-just-got-more-transparent-what-do-we-see/ https://www.niemanlab.org/2018/05/facebook-and-youtube-just-got-more-transparent-what-do-we-see/#respond Thu, 03 May 2018 13:34:17 +0000 http://www.niemanlab.org/?p=157878 Social media platforms have been notoriously opaque about how they work. But something may have shifted.

Last week, several social media platforms took significant steps toward greater transparency, particularly around content moderation and data privacy. Facebook published a major revision of its Community Standards, the rules that govern what users are prohibited from posting on the platform. The changes are dramatic, not because the rules shifted much but because Facebook has now spelled out those rules in much, much more detail.

YouTube released its latest transparency report, and for the first time included data on how it handles content moderation, not just government takedown requests. And dozens of platforms alerted their users to updates to their privacy policies this week, in anticipation of the General Data Protection Regulation (GDPR) out of Europe, which goes into effect May 25.

What can we learn from these gestures of transparency? And what do they mean for the problem of content moderation? I, like many others, have been calling for social media platforms to be more transparent about how content moderation works. So the published internal rules from Facebook and the expanded transparency report from YouTube should be commended. From one vantage point, Facebook’s new guidelines are the next logical step in content moderation on social media platforms. Their rules about sexually explicit material, harassment, real names, and self-harm are already in place; now we need to get down to exactly how to impose them effectively and fairly.

Most of the major platforms have been publishing transparency reports for years, but all have focused exclusively on content takedown requests from governments and corporations; YouTube’s report appears to be the first time that a major platform has systematically reported where flags come from and how they’re responded to, and the company is promising more flagging data in future reports.

But transparency, even in its candor, is a performance, leaving as much unseen as seen. At the same time, the performance itself can be revealing, of the larger situation in which we find ourselves. A closer look at Facebook’s new Community Standards and YouTube’s new data will reveal more about how content moderation is done, and how committed we have become to the approach to moderation as a project.

Every traffic light is a tombstone

Different platforms articulate their rules in different ways. But all have some statement that offers, more plainly than the legalese of a “Terms of Service,” what that platform expects of users and what it prohibits. Explaining the rules is just one small part of platform moderation. Few users read these Community Standards; many don’t even know they exist. And the rules as stated may or may not have a close correlation with how they’re actually enforced. Still, how they are articulated is of enormous importance. Articulating the rules is the clearest opportunity for a platform to justify its moderation efforts as legitimate. Less an instruction manual, the community guidelines are like a constitution.

Last week, Facebook spelled out its rules in blunt and sometimes unnerving detail. While it already prohibited “explicit images of sexual intercourse,” now it defines its terms: “mouth or genitals entering or in contact with another person’s genitals or anus, where at least one person’s genitals are nude.” Prohibited sexual fetishes now include “acts that are likely to lead to the death of a person or animal; dismemberment; cannibalism; feces, urine, spit, menstruation, or vomit.”

While some of these specific rules may be theoretical, most are here because Facebook has already encountered and had to remove this kind of content, usually thousands of times. This document is important as a historical compendium of the exceedingly horrifying ends to which some users put social media: “Dehumanizing speech including (but not limited to) reference or comparison to filth, bacteria, disease, or feces…” “Videos that show child abuse, [including] tossing, rotating, or shaking of an infant (too young to stand) by their wrists/ankles, arms/legs, or neck…” “organizations responsible for any of the following: prostitution of others, forced/bonded labor, slavery, or the removal of organs.” Facebook’s new rules are the collected slag heap beneath the shiny promise of Web 2.0.

Flagging is no longer what it used to be

Most platforms turn largely or exclusively to their user base to help identify offensive content and behavior. This usually means a “flagging” mechanism that allows users to alert the platform to objectionable content. Using the users is convenient because it divides this enormous task among many, and puts the task of identifying offensive content right at the point when someone comes into contact with it. Relying on the community grants the platform legitimacy and cover. The flagging mechanism itself clearly signals that the platform is listening to its users and providing avenues for them to express offense or seek help when they’re being harmed.

When YouTube added a flagging mechanism to its videos back in 2005, it was a substantive change to the site. Before allowing users to “flag as inappropriate,” YouTube had only a generic “contact us” email link in the footer of the site. Today, enlisting the crowd to police itself is commonplace across social media platforms and, more broadly, the management of public information resources. It is increasingly seen as a necessary element of platforms, both by regulators who want platforms to be more responsive and by platform managers hoping to avoid stricter regulations.

Flagging has expanded as part of the vocabulary of online interfaces, beyond alerting a platform to offense: platforms let you flag users who you fear are suicidal, or flag news or commentary that peddles falsehoods. What users are being asked to police, and the responsibility attached, is expanding.

On the other hand, flagging is voluntary — which means that the users who deputize themselves to flag content are those most motivated to do so. Platforms often describe flagging as an expression of the community. But are the users who flag representative of the larger user base, and what are the ramifications for the legitimacy of the system if they’re not? Who flags, and why, is hard to know.

YouTube’s latest transparency report tells us a great deal about how user flags now matter to its content moderation process — and it’s not much. Clearly, automated software designed to detect possible violations and “flag” them for review does the majority of the work. In the three-month period between October and December 2017, 8.2 million videos were removed; 80 percent of those removed were flagged by software, 13 percent by trusted flaggers, and only 4 percent by regular users. Strikingly, 75 percent of the videos removed were gone before they’d been viewed even once, which means they simply could not have been flagged by a user.

On the other hand, according to this data, YouTube received 9.3 million flags in the same three months, 94 percent from regular users. But those flags led to very few removals. In the report, YouTube is diplomatic about the value of these flags: “user flags are critical to identifying some violative content that needs to be removed, but users also flag lots of benign content, which is why trained reviewers and systems are critical to ensure we only act on videos that violate our policies.”

“Critical” here seems generous. Though more data might clarify the story (how many automated flags did not lead to removals?) it seems reasonable to suggest that flags from users are an extremely noisy resource. It would be tempting to say that they are of little value other than public relations — letting victims of harassment know they are being heard, respecting the community’s input — but it might be worth noting the additional value of these flags: as training data for those automated software tools. Yet, if user flags are so relatively inaccurate, it may be that the contributions of the trusted flaggers are weighted more heavily in this training.
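
How noisy becomes clear in a rough back-of-the-envelope calculation from the report’s own figures, under two simplifying assumptions the report doesn’t confirm: that each removal is credited to a single flag, and that the flag and removal counts cover a comparable pool of videos.

    # Rough arithmetic behind the "noisy resource" point, using the figures
    # YouTube reported for October-December 2017. The one-flag-per-removal and
    # comparable-pool assumptions make this a ceiling, not a precise measure.
    removed_total = 8_200_000
    removed_credited_to_users = 0.04 * removed_total   # ~4% of removals
    user_flags = 0.94 * 9_300_000                      # ~94% of all flags

    hit_rate = removed_credited_to_users / user_flags
    print(f"removals credited to user flags:  {removed_credited_to_users:,.0f}")
    print(f"flags submitted by regular users: {user_flags:,.0f}")
    print(f"implied hit rate: {hit_rate:.1%}")         # on the order of 4%

On that math, something like 96 out of every 100 flags from regular users did not end in a removal credited to them, which is the gap the “benign content” language is papering over.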

Gestures of transparency in the face of criticism

Facebook and YouTube are responding to the growing calls for social media platforms to take greater responsibility for how their systems work — from moderation to data collection to advertising. But Facebook’s rule change may be a response to a more specific critique: that while Facebook has one set of rules for the public, it seemed to have a different set of rules for use internally: that its policy team used to judge hard cases, that it used to train tens of thousands of human moderators, and that it programmed into its AI detection algorithms. This criticism became most pointed when training documents were leaked to The Guardian in 2016 — documents Facebook used to instruct remote content moderation teams and third-party crowdworkers on how to draw the line between, say, a harsh sentiment and a racist screed, between a visible wound and a gruesome one, between reporting on a terrorist strike and celebrating it. 

Facebook is describing these new rules as its “internal guidelines.” This is meant to suggest a couple things. First, we’re being encouraged to believe that this document, in this form, existed behind the scenes all along, standing behind the previous Community Standards, which were written in more generalized language for the benefit of users. Second, we’re supposed to take the publication of these internal rules as a gesture of transparency: “all right, we’ll show you exactly how we moderate, no more games.” And third, it implies that, going forward, there will be no gap between what the posted rules say and what Facebook moderators do.

The suggestion that these are their internal guidelines may be strictly true, if “internal” means the content policy team at Facebook corporate headquarters in Menlo Park. It makes sense that, behind the broad standards written for users, there were more spelled out versions being used for actual moderation decisions. It is of course hard to know whether this document was already sitting there as some internal moderation bible and Facebook’s team merely made the decision to publish it, or if it was newly crafted for the purpose of performing transparency; or if (more likely) it was assembled out of an existing tangle of rules, guidelines, tip sheets, definitions, consulting documents, and policy drafts that were combined for publication.

However, if the 2017 Guardian documents are any indication, what Facebook gave its larger labor force of content moderators was much more than just detailed rules. Those documents included examples, tests, and hard cases, meant to guide reviewers on how to think about the rules and how to apply them. The challenge of moderation at this scale is not simply to locate violations and act on them. It is also to train hundreds or even thousands of people to make these tricky distinctions in the same way, over and over again, across a shocking array of unexpected variations and contexts.

Those examples and test cases are extremely important in helping to calibrate the review process. They are also where content moderation can go horribly wrong; ProPublica’s investigation into Facebook’s moderation of hate speech revealed that, even across just a handful of examples, reviewers were profoundly inconsistent, and Facebook was often unable to explain why specific decisions had been made.

The same approach, only more so

We are in a supremely weird moment.

Legible in Facebook’s community guidelines are the immense challenges involved in overseeing massive, global social media platforms. They are scarred by the controversies that each platform has faced, and the bumpy road that all social media have traveled together over the past decade. They reveal how social media platform administrators try to make sense of and assert their authority over users in the first place.

Apparent in the YouTube transparency report is a reminder of how, even as platforms promise to be responsive to the needs of their users, the mechanics of content moderation are moving away from users — done more on their behalf than at their behest.

And both make clear the central contradiction of moderation that platform creators must attempt to reconcile, but never quite can: If social media platforms were ever intended to embody the freedom of the web, then constraints of any kind run counter to these ideals, and moderation must be constantly disavowed. But if platforms are supposed to promise anything better than the chaos of the open web, then oversight and prohibition are central to that promise.

More transparency is nearly always good, and it’s certainly good in this instance. (Facebook has also promised to expand users’ ability to appeal moderation decisions they disagree with, and that’s also good.) But even as it becomes more open about it, Facebook is deepening its commitment to the same underlying logic of content moderation that platforms have embraced for a decade: a reactive, customer-service approach in which the power to judge remains almost exclusively in the hands of the platforms. Even if these guidelines are now 8,000 words long and spell out the rules in much more honest detail, they are still Facebook’s rules, written in the way Facebook chooses; it is Facebook’s judgment when they apply, Facebook’s decision what the penalty should be, and Facebook’s appeals process.

Mark Zuckerberg himself has said that he feels deeply ambivalent about this approach. In a March interview with Recode, he said:

I feel fundamentally uncomfortable sitting here in California at an office, making content policy decisions for people around the world… Things like, where is the line on hate speech? I mean, who chose me to be the person that?… I have to, because [I lead Facebook], but I’d rather not.

I share his sense of discomfort, as do millions of others. Zuckerberg’s team in Menlo Park may have just offered us much more transparency about how it defines hate speech, plus a more robust appeals process and a promise to be more responsive to change. But they’re still sitting there “in California at an office, making content policy decisions for people around the world.”

The truth is, we wish platforms could moderate away the offensive and the cruel. We wish they could answer these hard questions for us and let us get on with the fun of sharing jokes, talking politics, and keeping up with those we care about. As users, we demand that they moderate, and that they not moderate too much. But as Roger Silverstone noted, “The media are too important to be left to the media.” But then, to what authority can we even turn? As citizens, perhaps we must begin to be that authority, be the custodians of the custodians.

Perhaps Facebook’s responsibility to the public includes sharing that responsibility with the public—not just the labor, but the judgment. I don’t just mean letting users flag content, which YouTube’s data suggests is both minimal and ineffective on its own. I mean finding ways to craft the rules together, set the priorities together, and judge the hard cases together. Participation comes with its own form of responsibility. We must demand that social media share the tools to govern collectively, not just explain the rules to us.

Tarleton Gillespie is principal researcher at Microsoft Research New England, a member of the Social Media Collective, and an adjunct associate professor in the Department of Communication and Department of Information Science at Cornell University.

A couple of paragraphs of this piece are drawn from his forthcoming book, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, to be published by Yale University Press.

Photo of a transparent Lego by WRme2 used under a Creative Commons license.

Could students’ media literacy be compared across countries, like math scores? https://www.niemanlab.org/2018/03/could-students-media-literacy-be-compared-across-countries-like-math-scores/ https://www.niemanlab.org/2018/03/could-students-media-literacy-be-compared-across-countries-like-math-scores/#respond Fri, 16 Mar 2018 12:59:56 +0000 http://www.niemanlab.org/?p=155911 — Don’t over-regulate. “The [High Level Expert Group] believes the best responses are likely to be those driven by multi-stakeholder collaborations, minimize legal regulatory interventions, and avoid the politically dictated privatization of the policing and censorship of what is and is not acceptable forms of expression.” Rasmus Kleis Nielsen, director of research at Reuters Institute for the Study of Journalism at Oxford and a member of the expert group that produced this report, has more on this here.

— Platforms should give up some data, enough to allow “independent inquiries, audits and research into activities reliant on proprietary media and data infrastructures with a view to ensuring transparency and authenticity of information.”

— Fact-checking groups should find ways to work together across Europe. “As fact-checking activities in the EU are still relatively fragmented, more work can and should be done by fact-checkers, verification organizations, and professional newsrooms in a collaborative manner within EU Member States and across the EU to exploit the untapped potential of cross-border and cross-sector cooperation and improve their working methods through the adoption of state-of-the-art technologies. Existing partnerships with platforms should be expanded across Europe with a clear roadmap for data sharing with academics that will allow for better understanding of disinformation strategies and their dynamics.” The report suggests creating “European Centres for interdisciplinary and independent evidence-based research on problems of disinformation.”

— There’s a need to “think more strategically about how media literacy is implemented across Europe…with clear methods of evaluation and cross-country comparison.” The authors even suggest including information and media literacy in the OECD’s Program for International Student Assessment, which every three years tests 15-year-olds on science, mathematics, reading, collaborative problem solving, and financial literacy.

Meanwhile, what’s already happening around the world? Poynter’s Daniel Funke has a guide to how countries around the world are looking to stem the flow of online misinformation. Lots of proposed laws and draft bills so far. Poynter will be updating the list on an ongoing basis.

Pinterest: Not exempt! “If you only use Pinterest for finding recipes and interior design ideas, mis/disinformation may never cross your home feed,” writes Amy Collier, associate provost for digital learning at Middlebury College and head of Middlebury’s Office of Digital Learning and Inquiry. “But if you’ve searched for any information on contested topics like vaccinations, gun control, or climate change, you probably have seen mis/disinformation in action.” She looks at fake/spam Pinterest accounts that use polarizing political pins and pins that spread misinformation as a way of drawing attention to their affiliate link posts.

More clarification on that Science paper. The “fake news spreads faster than real news” paper that I wrote about last week has been the subject of continued Twitter debate. Coauthor Deb Roy offered a diagram of the actual scope of the paper, compared to the much broader scope as interpreted in much of the paper’s coverage. But that scope probably wasn’t made as clear as it should have been — not just in the coverage of the paper, but in the paper itself.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

News in a disintegrating reality: Tow’s Jonathan Albright on what to do as things crash around us https://www.niemanlab.org/2018/02/news-in-a-disintegrating-reality-tows-jonathan-albright-on-what-to-do-as-things-crash-around-us/ https://www.niemanlab.org/2018/02/news-in-a-disintegrating-reality-tows-jonathan-albright-on-what-to-do-as-things-crash-around-us/#respond Wed, 28 Feb 2018 16:43:01 +0000 http://www.niemanlab.org/?p=155121 It’s less about what we’re doing on Facebook, and more about what’s being done to us.

Jonathan Albright, the research director at the Tow Center for Digital Journalism and a faculty associate at Harvard’s Berkman Klein Center for Internet and Society, isn’t big on studies that try to track how much fake news people have clicked on, or how many outright hoaxes they recall seeing in their feeds. Instead, his research into activity on the biggest platforms on the Internet — Facebook, YouTube, Instagram, and to a lesser extent, Twitter — situates everyday Internet users inside a kind of trap, one they can’t get out of without a great deal of help from those same platforms, which thus far haven’t been eager to tackle the problem.

It’s shadowy, scary, and difficult to pinpoint. I talked to Albright this week about the work he’s doing, which has come to center around pulling whatever data can be pulled from those platforms (almost always without the participation of those companies, and in the case of Facebook usually only through loopholes), analyzing it, releasing the data publicly, and helping journalists make sense of what it means — and then repeating the process.

“It’s getting worse. Since the 2016 election, I’ve come to the realization — horribly, and it’s very depressing — that nothing has gotten better, despite all the rhetoric, all of the money, all of the PR, all of the research. Nothing has really changed with the platforms,” Albright told me. “We’re basically yelling about Russia right now when our technological and communication infrastructure — the ways that we experience reality, the ways we get news — are literally disintegrating around us.”

It’s all horrible and depressing, but it was still fun to talk with Albright, who speaks energetically and urgently and somehow manages not to be a total downer. Or maybe I was just scrambling to find something positive to pick up on. There are glints of light here, but a lot of them come down to the platforms accepting that they’re media companies and hiring people into totally new roles (Albright’s idea: “platform editor”). We’ll see. Our conversation, lightly condensed and edited for clarity, is below.

Albright: No. They’re calling because it’s a liability for their PR and for their shareholders.

There are clearly amazing, very concerned people working at Facebook. A lot of people work at Facebook specifically for that reason — they think they can affect the world in positive ways and build new tools to enrich people’s lives. But the problem is that with the size and scale and sheer dominance of Facebook as a for-profit corporation, it’s getting to the point where it’s becoming impossible to affect it. [These platforms] are no longer startups that can shift direction.

Often, these companies are open to research partnerships and things, but it’s always on their terms. If you do research with them, you’re dealing with IP issues, you’re signing over the rights to the research. It has to be reviewed completely and vetted by their legal process. They often handpick researchers that help them and help their purpose and help their cause — they maybe throw in some sprinkles of criticism. I understand why they would be hesitant to want to work with people like me.

Owen: Okay, sorry to mention Russia, but how much of this is, like, a Russia problem, and how much of this is coming from inside our country?

Albright: Frankly, I don’t know the answer. Whatever or whoever is behind some of this, though, I’ve chosen to focus on the larger problem, which is the fact that these algorithms, the business model, and the monetization encourage the production and promotion and spread of disinformation.

I mean, I do hold that it’s not okay to come in and try to influence someone’s election; when I look at these YouTube videos, I think: Someone has to be funding this. In the case of the YouTube research, though, I looked at this more from a systems/politics perspective.

We have a problem that’s greater than the one-off abuse of technologies to manipulate elections. This thing is parasitic. It’s growing in size. The last week and a half are some of the worst things I’ve ever seen, just in terms of the trending. YouTube is having to manually go in and take these videos out. YouTube’s search suggestions, especially in the context of fact-checking, are completely counter-productive. I think Russia is a side effect of our larger problems.

Owen: What do the platforms individually need to be doing, and are there things that they all need to be doing? Or should we just burn YouTube down completely?

Albright: YouTube has no competition, right? None. YouTube, in its space, it is a monopoly. DailyMotion is tiny, Vimeo is niche. The fact that no one has come up to challenge YouTube is bizarre, but it’s probably because they can’t afford to fend off copyright claims.

We’re being held in the dark data-wise, but equally problematic is that we’re not able to understand how things are being promoted and how they’re reaching people because of algorithms. Everything is an algorithm on top of an algorithm. The search function that I used to pull the videos is an algorithm, and you have a little bit of profiling involved in that. The recommendations are an algorithm, so everything is proprietary and highly secret, because if someone ever found the exact formulas they were using, they could instantly game it. If opaque algorithms continue to exist as a business model, we’re always gonna be chasing effects.

Maybe there needs to be a job called, like, Platform Editor, where someone works to not only stop manipulation but also works across the security team and the content team and in between the different business verticals to ensure the quality and integrity of the platform. That’s a lot of responsibility, but the kinds of things that I often see could literally be stopped by one person. I mean: 4chan trending on Google during the Las Vegas shooting? How that even happened, I have no idea, but I do know that one person could have stopped that. And I do know that a group of people working together — even if it involves deliberation, even if they don’t agree on one specific thing — can often solve problems that appear or are starting to surface because of automation. And I don’t mean, like, contract moderators from India — I mean high-level people. The companies need to invest in human capital as well as technological capital, but that doesn’t align with their business model. The rhetoric exists in their public statements, but we can clearly see that how it’s being implemented isn’t working.

It’s getting worse. Since the 2016 election, I’ve come to the realization — horribly, and it’s very depressing — that nothing has gotten better, despite all the rhetoric, all of the money, all of the PR, all of the research. Since nothing has really changed with the platforms, we can scream about Russia as the structure of our information decays around us. Our technological and communication infrastructure, the ways that we experience reality, the ways we get news, are literally disintegrating.

Owen: Why is it getting worse?

Albright: There are more people online, they’re spending more time online, there’s more content, people are becoming more polarized, algorithms are getting better, the amount of data that platforms have is increasing over time.

I think one of the biggest things that’s missing from political science research is that it usually doesn’t consider the amount of time that people spend online. Between the 2012 election and the 2016 election, smartphone use went up by more than 25 percent. Many people spend all of their waking time somehow connected.

This is where psychology really needs to come in. There’s been very little psychology work done looking at this from an engagement perspective, looking at the effect of seeing things in the News Feed but not clicking out. Very few people actually click out of Facebook. We really need social psychology, we really need humanities work to come in and pick up the really important pieces. What are the effects of someone seeing vile or conspiracy news headlines in their News Feed from their friends all day?

Owen: This is so depressing.

Albright: Sorry.

Owen: No, I mean, I already knew it was a problem, it’s just…

Albright: It’s a huge problem. It’s the biggest problem ever, in my opinion, especially for American culture. Maybe it’s less of a problem for other countries and cultures, but the way our country works is just really susceptible to this. Those Russian statements about how Americans are impressionable and they’re easy to manipulate are largely true. It’s not because Americans are stupid, but because there’s been no effort to get ahead of the curve in terms of technological policy or privacy laws. There’s no protection for Americans or researchers right now. We’re fighting everything.

Snapshots of the network generated from 9,000 “crisis actor” YouTube videos, by Jonathan Albright.
