artificial intelligence – Nieman Lab
https://www.niemanlab.org

Can AI help local newsrooms streamline their newsletters? ARLnow tests the waters
https://www.niemanlab.org/2023/05/can-ai-help-local-newsrooms-streamline-their-newsletters-arlnow-tests-the-waters/
Mon, 08 May 2023

Scott Brodbeck, the founder of Virginia-based media company Local News Now, had wanted to launch an additional newsletter for a while. One of his sites, ARLnow, already has an automated daily afternoon newsletter that includes story headlines, excerpts, photos, and links sent to about 16,000 subscribers, “but I’ve long wanted to have a morning email with more voice,” he told me recently in a text.

Though it could expand his outlet’s reach — especially, in his words, as email becomes increasingly important “as a distribution channel with social media declining as a traffic source” — Brodbeck didn’t think creating an additional newsletter was an optimal use of reporter time in the zero-sum, resource-strapped reality of running a hyperlocal news outlet.

“As much as I would love to have a 25-person newsroom covering Northern Virginia, the reality is that we can only sustainably afford an editorial team of eight across our three sites: two reporters/editors per site, a staff [photographer], and an editor,” he said. In short, tapping a reporter to write a morning newsletter would limit ARLnow’s reporting bandwidth.

But with the exponential improvement of AI tools like GPT-4, Brodbeck saw an opportunity to have it both ways: He could generate a whole new newsletter without cutting into journalists’ reporting time. So last month, he began experimenting with a completely automated weekday morning newsletter comprising an AI-written introduction and AI summaries of human-written stories. Using tools like Zapier, Airtable, and RSS, ARLnow can create and send the newsletter without any human intervention.
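The article doesn’t publish ARLnow’s actual Zapier recipe, but the shape of the pipeline — pull story items from an RSS feed, summarize each one with a model, assemble an email body — can be sketched in a few lines of Python. The `summarize` stub below stands in for the LLM call (the story doesn’t specify ARLnow’s model or prompts), and the sample feed is invented for illustration:

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss><channel>
  <item><title>County approves new bike lanes</title>
    <link>https://example.com/bike-lanes</link>
    <description>The county board voted 4-1 on Tuesday to add
    protected bike lanes along Wilson Blvd.</description></item>
  <item><title>Local cafe reopens after fire</title>
    <link>https://example.com/cafe</link>
    <description>Java House reopened Monday, six months after
    an electrical fire forced it to close.</description></item>
</channel></rss>"""

def summarize(text: str) -> str:
    """Placeholder for the LLM call (e.g., a GPT-4 completion).
    Here it just truncates to the first sentence."""
    return text.split(".")[0].strip() + "."

def build_newsletter(feed_xml: str) -> str:
    """Turn an RSS feed into a plain-text morning-email body."""
    root = ET.fromstring(feed_xml)
    sections = []
    for item in root.iter("item"):
        title = item.findtext("title", "").strip()
        link = item.findtext("link", "").strip()
        desc = " ".join(item.findtext("description", "").split())
        sections.append(f"{title}\n{summarize(desc)}\n{link}")
    return "\n\n".join(sections)

print(build_newsletter(SAMPLE_FEED))
```

In ARLnow’s no-code version, Zapier plays the role of the loop and Airtable holds the intermediate rows; the logic is the same.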

Since releasing the handbook, Amditis has heard that many publishers and reporters “seem to really appreciate the possibility and potential of using automation for routine tasks,” he told me in an email. Like Brodbeck and others, he believes “AI can save time, help small newsrooms scale up their operations, and even create personalized content for their readers and listeners,” though he raised the widely held concern about “the potential loss of that unique human touch,” not to mention the questions of accuracy, reliability, and a hornets’ nest of ethical concerns.

Even when instructing AI to summarize content, Amditis described similar challenges to those Brodbeck has encountered. There’s “a tendency for the summaries and bullet points to sound repetitive if you don’t create variables in your prompts that allow you to adjust the tone/style of the responses based on the type of content you’re feeding to the bot,” he said.
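Amditis’s fix amounts to a small template layer in front of the model: a prompt variable that changes the requested tone or style depending on the type of content being summarized. A minimal sketch (the content-type names and tone instructions here are illustrative, not his actual prompts):

```python
# Tone/style instructions keyed to content type, so repeated
# summaries don't all come out sounding the same.
TONE_BY_TYPE = {
    "breaking": "Summarize urgently and factually, in two short sentences.",
    "feature":  "Summarize warmly and conversationally, keeping one vivid detail.",
    "event":    "Summarize as a friendly listing: what, when, where.",
}

def build_prompt(content_type: str, article_text: str) -> str:
    """Assemble the prompt sent to the model for one article."""
    tone = TONE_BY_TYPE.get(content_type, "Summarize neutrally in two sentences.")
    return f"{tone}\n\nArticle:\n{article_text}"

prompt = build_prompt("event", "The farmers market returns Saturday at 9 a.m. ...")
print(prompt)
```

Varying even one instruction per category is usually enough to break the monotone that Amditis describes.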

But “the most frustrating part of the work I’ve been doing with publishers of all sizes over the last few months is the nearly ubiquitous assumption about using AI for journalism (newsletters or otherwise) is that we’re out here just asking the bots to write original content from scratch — which is by far one of the least useful applications, in my opinion,” Amditis added.

Brodbeck agrees. AI is “not a replacement for original local reporting,” he said. “It’s a way to take what has already been reported and repackage it so as to reach more readers.”

Meet the first-ever artificial intelligence editor at the Financial Times
https://www.niemanlab.org/2023/02/meet-the-first-ever-artificial-intelligence-editor-at-the-financial-times/
Mon, 27 Feb 2023

In recent weeks, Madhumita Murgia has written about a science fiction magazine that had to stop accepting submissions after being flooded by hundreds of stories generated with the help of AI, China racing to catch up to ChatGPT, and the Vatican hosting a summit to address “the moral conundrums of AI.” (“A rabbi, imam, and the Pope walked into a room …”)

When not covering AI for the FT, Murgia is finishing her first book, Code-Dependent, out in February 2024. We caught up via email. Our back-and-forth has been lightly edited for clarity and that British proclivity for the letter “zed.”

Sarah Scire: How “first” is this position? It’s the first time that someone has held the title of “artificial intelligence editor” in your newsroom, correct? Have you seen other newsrooms create similar positions?

Murgia: It’s a first first! We haven’t had this title, or even a job devoted to AI before at the FT. I had sort of carved it into my beat alongside data and privacy over the last four or five years and focused on areas that impacted society like facial recognition, AI ethics, and cutting-edge applications in healthcare or science. Our innovation editor John Thornhill and West Coast editor Richard Waters often wrote about AI as part of their wider remits, too. But it wasn’t anyone’s primary responsibility.

In recent months, other newsrooms have appointed AI reporters/correspondents to take on this quickly evolving beat, and of course, there are many great reporters who have been writing about AI for a while, such as Karen Hao when she was at MIT Tech Review, and others. What I think is unique about this role at the FT is that it operates within a global newsroom. Correspondents collaborate closely across disciplines and countries — so I hope we can take advantage of that as we build out our coverage.

Scire: What is your job as AI editor? Can you describe, in particular, how you’re thinking about the “global remit” you mentioned in the announcement?

Murgia: The job is to break news and dive deep into how AI technologies work, how they’ll be applied across industries, and the ripple effects on business and society. I’m particularly interested in the impact of AI technologies on our daily lives, for better and worse. It’s a unique role in that I get to report and write, but also work with colleagues to shape stories in their areas of interest. Over the past six years, I’ve collaborated with reporters from the U.S., Brussels, and Berlin, to Kenya, China, and India — it’s something I love about working at the FT.

As AI technologies are adopted more broadly, in the same way that digitization or cloud computing was, correspondents in our bureaus across the world will start to encounter it in their beats. I’ve already heard from several colleagues in beats like media or education about AI-focused stories they’re interested in. With this global remit, I’m hoping we can tie together different threads and trends, and leverage our international perspective to get a sense of how AI is evolving and being adopted at scale.

Scire: What did covering AI look like in your newsrooms before this role was created? (And how will that change, now that you’ve taken this title of AI editor?)

Murgia: We aren’t new to covering AI — there are a handful of journalists at the FT who have understood AI well and written about it for a few years now. We were (hopefully) rigorous in our coverage, but perhaps not singularly focused or strategic about it. For instance, I became interested in biometric technologies such as facial recognition in 2018, and spent a while digging into where and how it was being used and the backlash against its rollout — but this was purely driven by interest, and not a larger plan.

Now, we are in a moment where our readers are curious and hungry to learn more about how this set of technologies works and its impact on the workforce. We’ll approach it from this macro angle. I’ve also always taken an interest in the broader societal impacts of AI, including its ethical use and its role in advancing science and healthcare, which I hope we will focus on. We want our coverage to inform, and also to reveal the opportunities, challenges, and pitfalls of AI in the real world.

Scire: You will be covering artificial intelligence as many industries — including journalism! — are trying to learn how it’ll impact their work and business. This is a little meta, but do you foresee AI changing the way you report, write, or publish?

Murgia: It’s been interesting to me how many media organizations and insiders are concerned about this question right now. It’s exacerbated, I think, by the public examples of publishers experimenting with generative AI. So far I haven’t found that these new tools have changed the way I report or write. Good journalism, in my view, is original and reveals previously unknown or hidden truths. Language models work by predicting the most likely next word in a sequence, based on existing text they’ve been trained on. So they cannot ultimately produce or uncover anything truly new or unexpected in their current form.
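Murgia’s one-sentence description of how language models work — predict the most likely next word, given text seen in training — can be made concrete with a toy bigram counter. Real models like GPT-4 use deep networks over subword tokens, but the training objective is the same idea:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        following[w][nxt] += 1
    return following

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen in training."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else ""

corpus = "the story broke . the story spread . the editor read the story ."
model = train_bigrams(corpus)
print(predict_next(model, "the"))     # the word most often seen after "the"
print(predict_next(model, "editor"))
```

The toy model also illustrates Murgia’s point: it can only ever recombine what it has already seen, which is why she argues it cannot uncover anything truly new.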

I can see how it might be useful in future, as it becomes more accurate, in gathering basic information quickly, outlining themes, and experimenting with summaries [and] headlines. Perhaps chatbots will be a new way to interface with audiences, to provide tailored content and engage with a reader, based on an organization’s own content. I’ll certainly be looking for creative examples of how it’s being tested out today.

Scire: How are you thinking about disclosures, if any? If the Financial Times begins to use a particular AI-powered tool, for example, do you anticipate mentioning that within your coverage?

Murgia: I don’t know of any plans to use AI tools at the FT just now, but I assume the leadership is following developments in generative AI closely, like many other media organizations will be. If we did use these tools, though, I’d expect it would be disclosed transparently to our readers, just as all human authors are credited.

Scire: What kinds of previous experience — personal, professional, educational, etc. — led you to this job, specifically?

Murgia: My educational background was in biology — where I focused on neuroscience and disease — and later in clinical immunology. One of my final pieces of work as an undergraduate was an analysis of intelligence in non-human animals, where I focused on an African gray parrot called Alex and its ability to form concepts.

I was an accidental technology journalist, but what I loved about it was breaking down and communicating complexity to a wider audience. I was drawn, in particular, to subjects at the intersection of tech, science, and society. Early on in my career, I investigated how my own personal data was used (and abused) to build digital products, which turned into a years-long rabbit hole, and travelled to Seoul to witness a human being beaten by an AI at the game of Go. I think this job is the nexus of all these fascinations over the years.

Scire: What do you see as some of the challenges and opportunities for being the first AI editor — or the first anything — at a news organization? Are there certain groups, people, or resources that you’ll look to, outside of your own newsroom, as you do this work?

Murgia: The great thing about being a first is that you have some space to figure things out and shape your own path, without having anything to contrast with. A big opportunity here is for us to own a story that intersects with all the things FT readers care about — business, the economy, and the evolution of society. And it’s also a chance for us to help our audience visualize what the future could look like.

The challenge, I think, is communicating the complicated underlying technology in a way that is accessible, but also accurate and nuanced. We don’t want to hype things unnecessarily, or play down the impacts. I’ll certainly look to the scientists, engineers, and ethicists who work in this space to help elucidate the nuances. I want particularly to find women who are experts across these areas, who I find always give me a fresh perspective. I’m keen to also speak to people who are impacted by AI — business owners, governments, ordinary citizens — to explore new angles of the story.

Scire: And what about your hopes and dreams for this new role?

Murgia: My hopes and dreams! Thank you for asking. I want to make AI more understandable and accessible to our readers, so it doesn’t feel like magic but merely a tool that they can wield. I want to report from the frontiers of AI development on how it is changing the way we work and live, and to forecast risks and challenges early on. I want to tell great stories that people will remember.

Scire: I appreciate that — trying to demystify or help readers feel it’s not just “magic.” What do you think about this criticism from some quarters that some news coverage is anthropomorphizing AI? I feel like this is coming up, in particular, when people are writing about unsettling conversations with chatbots. Is that something that journalists covering AI should be wary of doing?

Murgia: I think it’s really difficult not to anthropomorphize — I struggle with this too — because it’s a very evocative way to explain it to audiences. But I do think we should strive to describe it as a tool, rather than as a “brain” or a companion of some kind. Otherwise, it opens up the risk that consumers interacting with these systems will have certain expectations of them, or infer things that aren’t possible for these systems to do, like understand or feel.

Separately, however, I don’t think we should dismiss the very real impact that these systems do have on our behaviors and psyche, including people projecting human emotions onto chatbots. We’ve seen this happen already. It matters that the technology can fool regular people into believing there is intelligence or sentience behind it, and we should be writing about the risks and guardrails being built in that context.

Scire: Any other advice you’d give journalists covering AI? Maybe particularly for those who might be covering it for the first time in 2023?

Murgia: I’d say take the time to speak to practitioners [and] researchers who can break down and explain concepts in artificial intelligence, as it’s essential to writing well about its applications. As I’ve said above, we should strive to treat it as a tool — an imperfect one at that — in our coverage, and question all claims that sound outlandish. Really, the same skills you’d use for all types of explanatory journalism!

Artificial intelligence won’t kill journalism or save it, but the sooner newsrooms buy in, the better
https://www.niemanlab.org/2019/11/artificial-intelligence-wont-kill-journalism-or-save-it-but-the-sooner-newsrooms-buy-in-the-better/
Mon, 18 Nov 2019

The robots aren’t taking over journalism jobs, but newsrooms should adopt artificial intelligence technologies and accept that the way news is produced and consumed is changing, according to a new report by Polis, the media think-tank at the London School of Economics and Political Science.

In its global survey on journalism and artificial intelligence, “New Powers, New Responsibilities,” researchers asked 71 news organizations from 32 countries if and how they currently use AI in their newsrooms and how they expect the technology to impact the news media industry. (Since what exactly constitutes AI can be fuzzy, the report defines it as “a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks normally requiring human intelligence.”)

Right now, newsrooms mostly use AI in three areas: news gathering, production, and distribution. Of those surveyed, only 37 percent have an active AI strategy. The survey found that while newsrooms were interested in AI for efficiency and competitive purposes, they said they were mostly motivated by the desire to “help the public cope with a world of news overload and misinformation and to connect them in a convenient way to credible content that is relevant, useful and stimulating for their lives.”

“The hope is that journalists will be algorithmically turbo-charged, capable of using their human skills in new and more effective ways,” Polis founding director Charlie Beckett said in the report. “AI could also transform newsrooms from linear production lines into networked information and engagement hubs that give journalists the structures to take the news industry forward into the data-driven age.”

While most respondents said that AI would be beneficial as long as newsrooms stuck to their ethical and editorial policies, they noted that budget cuts as a result of implementing AI could lower the quality of news produced. They were also concerned about algorithmic bias and the role that technology companies will play in journalism going forward.

“AI technologies will not save journalism or kill it off,” Beckett writes. “Journalism faces a host of other challenges such as public apathy and antipathy, competition for attention, and political persecution…Perhaps the biggest message we should take from this report is that we are at another critical historical moment. If we value journalism as a social good, provided by humans for humans, then we have a window of perhaps 2-5 years, when news organisations must get across this technology.”

Here’s a video summary of the report:

And here is a brief response to the report from Johannes Klingebiel of Süddeutsche Zeitung.

This text-generation algorithm is supposedly so good it’s frightening. Judge for yourself.
https://www.niemanlab.org/2019/11/this-text-generation-algorithm-is-supposedly-so-good-its-frightening-judge-for-yourself/
Thu, 07 Nov 2019

The best weapons are secret weapons. Freed from the boundaries of observable reality, they can hold infinite power and thus provoke infinite fear — or hope. In World War II, as reality turned against them, the Nazis kept telling Germans about the Wunderwaffe about to hit the front lines — “miracle weapons” that would guarantee victory for the Reich. The Stealth Bomber’s stealth was not just about being invisible to radar — it was also about its capabilities being mysterious to the Soviets. And whatever the Russian “dome of light” weapon is and those Cuban “sonic attacks” are, they’re all terrifying.

So whether intentionally or not, the creators of the text-generating algorithm GPT-2 played the PR game brilliantly in February when they announced that, well, it just may be too powerful to release to the general public. That generated a wave of global publicity that is, shall we say, uncommon for new text-generating algorithms. (Elon Musk is involved, you’ll be shocked to learn.)

In any event, now, nine months later, the folks at OpenAI have apparently decided that the infopocalypse is not right around the corner and released its secret superweapon GPT-2 into the wild. They say they have “seen no strong evidence of misuse so far” from more limited releases of the technology.

The alleged threat is not, as some journalists have feared, that this machine is going to eventually cover city council meetings and aggregate viral tweets more efficiently than a human reporter could. It’s that the ease of generating semi-convincing output could make it efficient to pump out thousands or millions of individually tailored pieces of misinformation, spam emails, or other text-based grift.

I suppose that’s something legitimate to worry about, but my experiences playing around with GPT-2 have mostly matched this guy’s: Creating readable prose, sensical prose, and believable prose are still pretty different things.

To test out its capabilities, I thought I’d feed it the ledes of a few Nieman Lab stories and see what it autogenerated from there. For each of these below, the actual human-written text is in italics; everything after that is “by” GPT-2. (Note that GPT-2, as used on Talk to Transformer, often ends its output mid-sentence.)
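Talk to Transformer was a web front end over the released GPT-2 weights; the same lede-continuation experiment can be reproduced locally with the Hugging Face `transformers` library (not the tool used here, and the model is downloaded on first run). A sketch:

```python
from transformers import pipeline, set_seed

# Load the publicly released GPT-2 model (downloads weights on first run).
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the continuation reproducible

lede = ('It was a "happy surprise," Fraser Nelson said, when The Salt Lake '
        "Tribune received a letter from the IRS on Friday.")

# Ask the model to continue the lede, as Talk to Transformer did.
result = generator(lede, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```

Note that sampling is random: different seeds produce different continuations, which is why outputs like the ones below can swing from plausible to incoherent.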

🤖   🤖   🤖

Original story: “Meet The Salt Lake Tribune, 501(c)(3): The IRS has granted nonprofit status to a daily newspaper for the first time,” by Christine Schmidt, published Nov. 4.

It was a “happy surprise,” Fraser Nelson said, when The Salt Lake Tribune received a letter from the IRS on Friday giving the 148-year-old news outlet nonprofit 501(c)(3) status — no questions asked.

The Tribune, like all nonprofit news organizations in Utah, is also required by IRS code to register with the agency under the section 501(c)(4) that allows “social welfare” nonprofits like the Tribune to accept donations, make grants and engage in the business of journalism.

The Tribune’s board, however, approved the decision to not register under 501(c)(3), and instead register under 501(c)(4), a section that provides tax-exempt status to religious organizations.

“We felt that our reporting is political in nature and not editorial, so we felt the tax exemption was needed to protect the integrity of our journalism,” Nelson said.

According to IRS code, 501(c)(4) organizations can engage in political, educational or religious activities, but not engage in political campaign activity or engage

Verdict: This is in the broad ballpark of news prose style. It’s interesting it chose to contradict the lede so quickly — specifically saying it rejected 501(c)(3) status. The real Fraser Nelson wouldn’t like that quote, I imagine. Call it a B+.

🤖   🤖   🤖

Original story: “Watch your language: ‘Data voids’ on the web have opened a door to manipulators and other disinformation pushers,” by me, published Nov. 1.

One day fifteen long years ago, in 2004, some SEO consultants decided to have a contest to determine quién es más macho in the gaming-search-results game. As they put it: “Are you a Player or a Stayer?”

At the time, Google and Bing were using a variety of techniques to determine whether search results for a given query were “Google-friendly.”

The main criterion was the number of Web pages containing the search string.

In the contest, the contest participants decided to make a bolder claim: “No, you’re not a player.”

They set a goal: a “threshold.”

The goal: to make the Google ranking algorithm think that a page was being searched for in the way you might expect it to be.

They wanted the Google algorithm to think the page was being used, and not just viewed.

The results of their contest were remarkable.

The

Verdict: Choppy short sentences without much flow or sense, despite sticking in the general topic area. I was sorta hoping it would throw in some more Spanish. I’ll give it a C-.

🤖   🤖   🤖

Original story: “The Daily Wire and black salve show that Facebook’s takedown policies have one thing in common: Inconsistency,” by Laura Hazard Owen, published Nov. 1.

Inauthentic coordinated behavior, in the U.S.? One of modern media’s mysteries the past few years has been: How does the right-wing website The Daily Wire do so incredibly well on Facebook?

The site is so successful that its co-founder, Ben Shapiro, now regularly speaks to thousands of college students in Washington, D.C. and, most recently, at the Heritage Foundation. Yet, the site has been criticized for its lack of coverage of domestic issues. The problem is that the right-wing site has a left-leaning bias: its stories often criticize left-wing figures such as President Barack Obama.

In an article for The Wall Street Journal, conservative writer Ben Shapiro said that Facebook users are “left-leaning” — and, thus, the site should have more of a bias toward the right.

The Daily Wire, a right-wing news site, is the most-shared conservative site on Facebook and has more than 30 million users. Its Facebook page has more than 11 million likes and

Verdict: Pretty realistic at the sentence level, and maybe the self-contradicting “The problem is that the right-wing site has a left-leaning bias” is actually a galaxy-brain commentary on our current politics. A solid B.

🤖   🤖   🤖

Original story: “Why I’m starting a company to build out a new model for local news in the 2020s,” by Ken Doctor, published Oct. 25.

Over the past decade here at Nieman Lab, I’ve reported a lot of news industry news. Today, I’m sharing some of my own. This morning, I’m posting the latest story on the news industry.

“As the story behind today’s story is well known, the National Security Agency was listening to telephone calls made in Germany by the German Chancellor Angela Merkel. This story comes from an important source: German journalist Ulf Laessing. A few days ago, he told me that a former U.S. intelligence officer had revealed this to him. He gave me the name of the source, but the American intelligence officer has not been seen publicly since.”

– Edward Snowden, 2013 interview with Glenn Greenwald,

the Guardian

“The NSA has been listening to the Chancellor of the German Federal Republic [of Germany] Angela Merkel’s phone calls since 2008. They also spied on the phones of other senior leaders of the G20, including

Verdict: Um, no. There’s no mention of Germany in Ken’s piece and I can’t quite figure out what would make the algorithm want to jump straight to Snowden and Merkel. The fractured lines would make this easy to spot. F.

🤖   🤖   🤖

And finally, to go full ouroboros, let’s feed the lede of this story right here into GPT-2.

Original story: This one, silly

The best weapons are secret weapons. Freed from the boundaries of observable reality, they can hold infinite power and thus provoke infinite fear — or hope. In World War II, as reality turned against them, the Nazis kept telling Germans about the Wunderwaffe about to hit the front lines — “miracle weapons” that would guarantee victory for the Reich. When the Germans couldn’t take it, they bombed their own cities, as well as a small number of Allied cities.

Today, there are many secret weapons in use around the globe — weapons that, if deployed, would have immense consequences. A weapon that can cause such destruction and pain would be a threat not just to its users, but to humanity as a whole.

The Pentagon has repeatedly refused to acknowledge the existence of secret weapons — weapons it would be a crime for the world’s media to disclose to the public. In the past, these weapons have been used to wage undeclared wars, including those in Cambodia, Laos, and Afghanistan. These weapons can kill more innocent civilians than nuclear weapons,

Verdict: I’m sure it was right about to pivot back to text-generation algorithms in a couple more grafs. But this is a very reasonable continuation of the lede (other than that first sentence). B.

🤖   🤖   🤖

GPT-2 is not coming to take the jobs of journalists, as some have worried. Paid reporting jobs generally require a certain level of factuality that the algorithm can’t match.

Is it coming for the “jobs” of fake-news writers, those Macedonian teens who until now have had to generate their propaganda (gasp!) by hand? Probably not. Whether your intention is to make money off ad arbitrage or to elect Donald Trump as president of the United States, the key value-add comes in knowing how to exploit a reader’s emotions, biases, preconceptions, and other lizard-brain qualities that can make a lie really hit home. Baiting that hook remains something an algorithm can’t reliably do. And it’s not as if “lack of realistic writing in grafs 3 through 12” was a real problem limiting most misinformation campaigns.

But I can see some more realistic impacts here. This quality of generated text could allow you to create a website with what appear to be fully fleshed-out archives — pages and pages of cogent text going back years — which might make it seem more legitimate than something more obviously thrown together.

GPT-2’s relative mastery of English could give foreign disinformation campaigns a more authentic sounding voice than whatever the B-team at the Internet Research Agency can produce from watching Parks & Rec reruns.

And the key talent of just about any algorithm is scale — the ability to do something in mass quantities that no team of humans could achieve. As Larry Lessig wrote in 2009 (and Philip Bump reminded us of this week), there’s something about a massive data dump that especially encourages the cherry-picking of facts (“facts”) to support one’s own narrative. Here’s Bump:

In October 2009, he wrote an essay for the New Republic called “Against Transparency,” a provocative title for an insightful assessment of what the Internet would yield. Lessig’s argument was that releasing massive amounts of information onto the Internet for anyone to peruse — a big cache of text messages, for example — would allow people to pick out things that reinforced their own biases…

Lessig’s thesis is summarized in two sentences. “The ‘naked transparency movement’…is not going to inspire change,” he wrote. “It will simply push any faith in our political system over the cliff”…

That power was revealed fully in the 2016 election by one of the targets of the Russia probe: WikiLeaks. The group obtained information stolen by Russian hackers from the Democratic National Committee and Hillary Clinton’s campaign chairman, John Podesta…In October, WikiLeaks slowly released emails from Podesta…Each day’s releases spawned the same cycle over and over. Journalists picked through what had come out, with novelty often trumping newsworthiness in what was immediately shared over social media. Activists did the same surveys, seizing on suggestive (if ultimately meaningless) items. They then often pressured the media to cover the stories, and were occasionally successful…

People’s “responses to information are inseparable from their interests, desires, resources, cognitive capacities, and social contexts,” Lessig wrote, quoting from a book called “Full Disclosure.” “Owing to these and other factors, people may ignore information, or misunderstand it, or misuse it.”

If you wanted to create something as massive as a fake cache of hacked emails, GPT-2 would be of legitimate help — at least as a starting point, producing something that could then be fine-tuned by humans.

The key fact of the Internet is that there’s so much of it. Too much of it for anyone to have a coherent view. If democracy requires a shared set of facts — facts traditionally supplied by professional journalists — the ability to flood the zone with alternative facts could take the bot infestation of Twitter and push it out to the broader world.

Illustration by Zypsy ✪ used under a Creative Commons license.

Here’s how blockchain, bots, AI, and Apple News might impact the near-term future of journalism
https://www.niemanlab.org/2018/05/heres-how-blockchain-bots-ai-and-apple-news-might-impact-the-near-term-future-of-journalism/
Mon, 14 May 2018

If you’re interested in Canadian media — and who among us is not — you probably already listen to Canadaland, the flagship show of Jesse Brown’s growing podcast empire, which dives into the nation’s journalism issues. I was happy to appear on the show to talk digital news strategy in 2016, and Jesse just had me back for today’s episode, where — contrary to the doom and gloom that accompanies most discussion of the technology’s impact on the media — we took a more hopeful look at where the industry is headed.

Well, I’m not going to say we avoided doom or gloom entirely — but we did get to have a fruitful discussion of some of the more tech-forward ways the industry is changing. In particular:

— Will blockchain meaningfully change the fundamental questions about how journalism gets funded? (I’m skeptical.)

— Will AI and bots replace reporters? (Maybe on the fringes, but they’re mainly for scale and speed.)

— What is Apple News planning? (Dunno, but I’m hopeful the mobile OS companies can play a more useful role in news than Facebook does.)

It’s a fun conversation, and I hope you’ll give it a listen here.

How digital leaders from the BBC and Al Jazeera are planning for the ethics of AI https://www.niemanlab.org/2018/03/how-digital-leaders-from-the-bbc-and-al-jazeera-are-planning-for-the-ethics-of-ai/ https://www.niemanlab.org/2018/03/how-digital-leaders-from-the-bbc-and-al-jazeera-are-planning-for-the-ethics-of-ai/#respond Mon, 19 Mar 2018 13:00:36 +0000 http://www.niemanlab.org/?p=155961 — If robot reporters are going to deploy from drones in war zones in the future, at what point do we have the conversation about the journalism ethics of all this?

The robots may still be a few years away, but the conversation is happening now (at least about today’s AI technology in newsrooms). At Al Jazeera’s Future of Media Leaders’ Summit earlier this month, a group of experts in areas from media to machine learning discussed how their organizations frame the ethics behind (and in front of!) artificial intelligence.

Ethical AI was one of several topics explored during the gathering in Qatar, focused on data security, the cloud, and how artificial intelligence can automate and augment journalism. (“Data has become more valuable than oil,” Mohamed Abuagla told the audience in the same presentation as the drone-reporter concept.)

AI has already been seeded into the media industry, from surfacing trends for story production to moderating comments. Robotic combat correspondents may still be a far-fetched idea. But with machine learning strengthening algorithms day by day and hour by hour, AI innovations are occurring at a breakneck pace. Machines are more efficient than humans, sure. But in a human-centric field like journalism, how are newsrooms putting AI ethics into practice?

Ali Shah, the BBC’s head of emerging technology and strategic direction, explained his approach to the moral code of AI in journalism. Yaser Bishr, Al Jazeera Media Network’s executive director of digital, also shared some of his thinking on the future of AI in journalism. Here are some of the takeaways:

Ali Shah, the BBC

In both his keynote speech and subsequent panel participation, Shah walked the audience through the business and user implications of infusing AI into parts of the BBC’s production processes. He kept returning to the question of individual agency. “Every time we’re making a judgment about when to apply [machine learning]…what we’re really doing is making a judgment about human capacity,” he said. “Was it right for me to automate that process? When I’m talking about augmenting someone’s role, what judgment values am I augmenting?”

Shah illustrated how the BBC has used AI to perfect camera angles and cuts when filming, search for quotes in recorded data more speedily, and make recommendations for further viewing when the credits are rolling on the BBC’s online player. (The BBC and Microsoft have also experimented with a voice interface AI.) But he emphasized how those AI tools are intended to automate, augment, and amplify human journalists’ work, not necessarily replace or supersede them. “Machine learning is not going to be the answer to every single problem that we face,” he said.

The BBC is proud to be one of the world’s most trusted news brands, and Shah pointed to the need for balance between trust in the organization and individual agency. “We’re going to have to strike a balance between the utility and the effectiveness and the role it plays in society and in our business,” he said. “What we need to do is constantly recognize [that] our role should be giving a little bit of control back to our audience members.”

He also spoke about the need to educate both the engineers designing the AI and the “masses” who are the intended consumers of it. “Journalists are doing a fantastic job at covering this topic,” he said, but “our job as practitioners is to…break this down to the audience so they have control about how machine learning and AI are used to impact them.” (The BBC has published explainer videos about the technology in the past.) “We have to remember, as media, we are gatekeepers to people’s understanding of the modern world.”

“It’s not about slowing down innovation but about deciding what’s at stake,” Shah said. “Choosing your pace is really important.”

Yaser Bishr, Al Jazeera Media Network

Bishr, who helped bring AJ+ to life and has since used Facebook to pull followers onto Al Jazeera’s new Jetty podcast network, also emphasized the need to tread carefully.

“The speed of evolution we are going through in AI far exceeds anything we’ve done before,” Bishr said, talking about the advancements made in the technology at large. “We’re all for innovation, but I think the discussion about regulating the policy needs to go at the same pace.”

In conversation with Shah, Rainer Kellerhais of Microsoft, and Ahmed Elmagarmid of the Qatar Computing Research Institute, Bishr reiterated the risks of AI algorithms putting people into boxes and cited Microsoft’s exiled Twitter bot as an example of input and output bias. “The risk is not only during the training of the machine, but also during the execution of the machine,” he said.

Elmagarmid countered his concern about speed: “Things are in motion but things are continuous,” he said calmly. “We have time to adapt to it. We have time to harness it. I think if we look back to the Industrial Revolution, look back to the steam engine…people are always perceiving new technology as threatening.

“At the end of the day you will have [not just] newsrooms, but much better and more efficient and smarter newsrooms,” Elmagarmid said.

“AI is not the Industrial Revolution,” Bishr said, adding to his earlier comments: “We’re not really in a hurry in using AI right now.”

Image from user Comfreak used under a Creative Commons license.

China’s news agency is reinventing itself with AI https://www.niemanlab.org/2018/01/chinas-news-agency-is-reinventing-itself-with-ai/ https://www.niemanlab.org/2018/01/chinas-news-agency-is-reinventing-itself-with-ai/#respond Wed, 10 Jan 2018 17:09:27 +0000 http://www.niemanlab.org/?p=153221 On the heels of billions of yuan of investment poured into China’s artificial intelligence scene, China’s state news agency has announced that it is rebuilding its newsroom to emphasize human-machine collaboration.

Xinhua News Agency president Cai Mingzhao said Xinhua will build a “new kind of newsroom based on information technology and featuring human-machine collaboration.” The agency has also introduced the “Media Brain” platform to integrate cloud computing, the Internet of Things, AI and more into news production, with potential applications “from finding leads, to news gathering, editing, distribution and finally feedback analysis.”

The agency’s announcement was sparse on details, but it’s the latest component of a deep push into AI by China. Last week the country announced plans for a $2.1 billion AI development park to be built in the next five years as part of its drive to become an AI world leader by 2030. Google has also committed to putting down roots in China’s AI scene by opening a research center in Beijing, with Bloomberg quoting Fei-Fei Li, the center’s leader: “It will be a small team focused on advancing basic AI research in publications, academic conferences and knowledge exchange.” Microsoft also announced plans to create its own R&D lab for AI in Taiwan and hire 200 researchers over the next five years, investing about $34 million.

“We saw lots of interest in AI in China, and the sector is moving so fast in the country,” Chris Nicholson, former Bloomberg news editor and co-founder of AI startup Skymind, told Digiday. “Beijing supports AI, while Baidu, Alibaba and Tencent are all getting into AI. The U.S. still has the best AI talent, but there are many good engineers and AI researchers in China as well.”

Beyond the global AI arms race, the reverberations from China’s investment in AI-driven media could echo through journalism worldwide. A report out today from the Reuters Institute’s Digital News Project on media trends for 2018 highlights some of the advances Chinese AI journalism has already made:

Executive Editor of Quartz, Zach Seward, recently gave a speech in China at a conference organised by tech giant Tencent. This was turned into a news story by a combination of AI based speech to text software, automatic transcription, and an automated newswriting programme called Dreamwriter. Around 2,500 pieces of news on finance, technology, and sports are created by Dreamwriter daily.

With technology having saturated Western markets much of the opportunity for growth is shifting to markets like China and India. But Silicon Valley giants like Google and Facebook face restrictions in China in particular, leaving Asian tech firms driving new ideas at a relentless pace. Without a computer-based legacy to worry about, this is a part of the world that is able to fully embrace mobile first technologies. Increasingly we’ll be looking to the East for innovations in technology in 2018…

Frederic Filloux, author of the Monday Note, has been paying close attention to Toutiao, an app that uses artificial intelligence to aggregate content from around 4,000 traditional news providers as well as bloggers and other personal content. Toutiao has around 120m daily active users and an engagement time of 74 minutes per day. Newsfeeds are constantly updated based on what its machines have learnt about reading preferences, time spent on an article, and location. Toutiao claims to have a user figured out within 24 hours. In Korea, Naver is also looking to add AI recommendations to its mobile services. Line is another mobile news aggregator that is popular in Korea, Taiwan, and Japan. News aggregators like Flipboard and Laserlike have made little progress in the US and Europe. But that could change as Toutiao, leaning on its $22 billion valuation, is looking to move aggressively into the Western countries this year.

Chinese companies aren’t necessarily expanding internationally for an international advertising base; as Axios’ Sara Fischer points out, they’re interested in targeting Chinese nationals who have moved elsewhere but still use the same technology to stay in touch back home.

There are also some concerns about what the Chinese government could do with AI journalism: Nina Xiang, the co-founder of the artificial intelligence-based China Money Network, wondered about the potential security and privacy issues from Xinhua’s innovations. “The Media Brain…will raise significant concerns over the protection of personal data privacy, or the lack thereof. The tie-up between Alibaba and China’s state news agency — the first of its kind — creates an all-seeing digital eye that can potentially access data collected from countless surveillance cameras across the nation, estimated to total half a billion in the next three years, Internet of Things (IoT) devices, dashboard-mounted car cameras, air pollution monitoring stations and personal wearable devices. Whether people will be able to give permission for their data being used, or even know it’s being used, is questionable,” she wrote.

“To use a simple analogy, this partnership is as if Amazon, Paypal, CBS, News Corp and Fox were all working with state and city governments in the United States to share both publicly and privately collected data for the purpose of monitoring for potential news events anywhere, anytime in real time across America.”

While some American organizations are slowly introducing AI to their newsrooms, China’s Xinhua is going all in.

Screenshot from Xinhua video.

Cross-examining the network: The year in digital and social media research https://www.niemanlab.org/2018/01/cross-examining-the-network-the-year-in-digital-and-social-media-research/ https://www.niemanlab.org/2018/01/cross-examining-the-network-the-year-in-digital-and-social-media-research/#respond Tue, 02 Jan 2018 16:48:36 +0000 http://www.niemanlab.org/?p=152998

Editor’s note: There’s a lot of interesting academic research going on in digital media — but who has time to sift through all those journals and papers?

Our friends at Journalist’s Resource, that’s who. JR is a project of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, and they spend their time examining the new academic literature in media, social science, and other fields, summarizing the high points and giving you a point of entry.

Denise-Marie Ordway, JR’s managing editor, has picked out some of the top studies in digital media and journalism in 2017. She took over this task from John Wihbey, JR’s former managing editor, who summed up the top papers for us for several years. (You can check out his roundups from 2015, 2014, 2013 and 2012.)

There’s never a shortage of fascinating scholarship in the digital news/social media space. This year, we’re spotlighting 10 of the most compelling academic articles and reports published in 2017, which delve into meaty topics such as venture-backed startups, artificial intelligence, personal branding, and the spread of disinformation. We conferred with a small group of scholars to pick the ones we think you’ll want to know about — and remember, this is just a sample. A big thank you to everybody who contributed suggestions on Twitter.

“Paying for Online News: A comparative analysis of six countries”: From the University of Oxford, published in Digital Journalism. By Richard Fletcher and Rasmus Kleis Nielsen.

This study offers both good news and bad news for publishers struggling to figure out pay models. The researchers used data collected via surveys in six countries, including the United States, to gauge who’s paying for news and who’s willing to pay in the future. The good news: Of those who are not paying for online news now, younger Americans are more willing to pay in the future, possibly because they often already pay for other forms of digital media. The bad news: No more than 2 percent of people surveyed in any country said they are “very likely” to pay for news in the future.

“News Use Across Social Media Platforms 2017”: From the Pew Research Center. By Elisa Shearer and Jeffrey Gottfried.

Throughout the year, the Pew Research Center releases survey-based reports examining journalism and news organizations. This report offers important insights into the role social media plays in distributing and accessing news. Some key takeaways: Almost 70 percent of U.S. adults reported getting news via social media. Meanwhile, a growing number of older adults, people of color, and adults without bachelor’s degrees said they turn to social media sites for news. Minority adults are much more likely than white adults to get news from social media — 74 percent reported doing so in 2017, up from 64 percent in 2016. Interestingly, only 5 percent of adults who go to Snapchat for news also often get news from newspapers.

“Venture-Backed News Startups and the Field of Journalism: Challenges, changes, and consistencies”: From George Washington University, published in Digital Journalism. By Nikki Usher.

How do venture-backed news startups compare themselves to traditional media outlets? This article examines 18 startups, including BuzzFeed, GeekWire, and Vox, to understand how this burgeoning area of digital media is changing journalism’s landscape. Usher interviewed top executives, founders, and others to learn how and why these companies formed as well as details about their editorial visions, technological visions, and plans for making money. The study also explores the rise of algorithms in predicting user behavior, the creation of scalable products, and new roles for journalists within an organization where reporters and technical staff are equals.

“Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle”: From the University of North Carolina at Chapel Hill and The University of Utah, published in Political Communication. By Daniel Kreiss and Shannon C. McGregor.

This article offers a behind-the-scenes look at how Facebook, Google, Microsoft, and Twitter collaborated with political campaigns during the 2016 U.S. election season. The paper focuses on their role at the Democratic National Convention in 2016 and in providing extensive consulting services to candidates, including Donald Trump, over the course of the campaign. The researchers found that these technology firms “are increasingly the locus of political knowledge and expertise” in digital and data campaigning. Meanwhile, representatives from each firm said “the growth of their work in electoral politics was driven by the desire for direct revenues from their services and products, for candidates to give their services and platforms greater public visibility, and to establish relationships with legislators.”

“When Reporters Get Hands-On With Robo-Writing: Professionals consider automated journalism’s capabilities and consequences”: From LMU Munich and the University of Zurich, published in Digital Journalism. By Neil Thurman, Konstantin Dörr, and Jessica Kunert.

Media innovators continue to find new ways to integrate artificial intelligence into the newsroom, moving well past using crime stats and structured data from athletic games to generate news reports. While plenty of journalists have weighed in on the trend, most don’t have direct experience using the technology. For this study, researchers conducted workshops with a small group of journalists to show them how to use software to create data-driven news content. After getting hands-on experience, journalists were asked about the potentials and limitations of the technology.

Unsurprisingly, journalists had lots of criticisms — for example, there was concern that automation would make verifying information less likely. Some journalists did see benefits, including time savings and reductions in human error. For several, though, “the experience of creating news items this way was difficult, irritating, and did not utilize their innate abilities.”

“Artificial Intelligence: Practice and Implications for Journalism”: From the Brown Institute for Media Innovation and the Tow Center for Digital Journalism at Columbia Journalism School. By Mark Hansen, Meritxell Roca-Sales, Jon Keegan, and George King.

What problems do journalists and technologists uncover when they brainstorm about AI in newsrooms? This report summarizes a three-hour, wide-ranging discussion between journalists and technologists who gathered last summer for an event organized by Columbia University’s Tow Center for Digital Journalism and the Brown Institute for Media Innovation. Among the important takeaways: A knowledge and communication gap between the technologists who create AI and journalists who use it could “lead to journalistic malpractice.” News outlets need to provide audiences with clear explanations for how AI is used to research and report stories. Also, there needs to be “a concerted and continued effort to fight hidden bias in AI, often unacknowledged but always present, since tools are programmed by humans.”

“Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election”: From the Berkman Klein Center for Internet & Society at Harvard University. By Rob Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler.

In this report, researchers examine the composition and behavior of media on the right and left to explain how Donald Trump and Hillary Clinton received differing coverage. The report covers a lot of ground in 142 pages, chock-full of bar charts, network maps, and other data visualizations. It even includes a case study on coverage of the Clinton Foundation. The researchers found that while mainstream media gave mostly negative coverage to both presidential candidates, Trump clearly dominated coverage and was given the opportunity to shape the election agenda.

According to the report, far-right media “succeeded in pushing the Clinton Foundation to the front of the public agenda precisely at the moment when Clinton would have been anticipated to (and indeed did) receive her biggest bounce in the polls: immediately after the Democratic convention.” Researchers also found that while fake news was a problem, it played a relatively small role in the 2016 presidential election. “Disinformation and propaganda from dedicated partisan sites on both sides of the political divide played a much greater role in the election,” the researchers wrote.

“Identity lost? The personal impact of brand journalism”: From The University of Utah and Temple University, published in Journalism. By Avery E. Holton and Logan Molyneux.

Newsrooms urge journalists to use social media to promote their work, interact with sources, and build their professional brands. How does that affect what journalists do on Twitter and Facebook when they’re off the clock? This study is one of several published in 2017 that look at how social media impacts journalists’ identities. This one is important because it lays the groundwork for the others. The authors interviewed 41 reporters and editors at U.S. newspapers to explore the challenges they face in integrating their personal and professional identities on social media. They found that reporters “feel pressure to stake a claim on their beat, develop a presence as an expert in their profession and area of coverage, and act as a representative of the news organization at all times. This leaves little room for aspects of personal identity such as family, faith, or friendship to be shared online.”

“How the news media activate public expression and influence national agendas”: From Harvard University, Florida State University, and MIT, published in Science. By Gary King, Benjamin Schneer, and Ariel White.

Journalism really does contribute to the democratic process and this study provides quantitative evidence. In an experiment involving 48 mostly small media organizations, researchers demonstrated that reporting on a certain policy topic prompts members of the public to take a stand and express their views on the topic more often than they would have if a news article had not been published. Researchers looked at website pageviews and social media posts to gauge impact. Their experiment, according to the researchers, “increased discussion in each broad policy area by ~62.7% (relative to a day’s volume), accounting for 13,166 additional posts over the treatment week.”

“Digital News Report 2017”: From the Reuters Institute for the Study of Journalism at the University of Oxford. By Nic Newman, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen.

This latest annual report from the Reuters Institute offers a global look at digital news consumption based on a survey of more than 70,000 people in 36 countries, including the United States. There are lots of great insights to glean from this 136-page report, which examines such issues as news avoidance, access, distrust, polarization, and sharing. It may (or may not) be surprising that the U.S. ranked 7th highest in the area of news avoidance behind Greece, Turkey, Poland, Croatia, Chile, and Malaysia. Thirty-eight percent of Americans reported avoiding the news “often” or “sometimes.”

Worldwide, the amount of sharing and commenting on news via social media has fallen or stayed about the same the past two years. The U.S., which saw small increases in both habits, is an exception. Another interesting takeaway: Some countries are much more likely to pay for news. In Norway, 15 percent of people surveyed said they made ongoing payments for digital news in the last year, compared to 8 percent in the U.S., 6 percent in Japan, 4 percent in Canada, 3 percent in the United Kingdom and 2 percent in the Czech Republic.

Photo by Steve Fernie used under a Creative Commons license.

The future of news (and far beyond), according to Scandinavian media giant Schibsted’s latest trends report https://www.niemanlab.org/2017/11/the-future-of-news-and-far-beyond-according-to-scandinavian-media-giant-schibsteds-latest-trends-report/ https://www.niemanlab.org/2017/11/the-future-of-news-and-far-beyond-according-to-scandinavian-media-giant-schibsteds-latest-trends-report/#respond Tue, 21 Nov 2017 17:07:08 +0000 http://www.niemanlab.org/?p=150590 ‘Tis the season for trend reports.

The Scandinavian media giant Schibsted’s annual trends report — part predictions, part survey research, part self-promotion — is out today, free for anyone interested. The report features essays on everything from the promise and pitfalls of artificial intelligence to sustainability to the future of bicycles as a consistent mode of transportation, as well as a survey of millennials in France, Spain, and Sweden on their concerns about their digital footprint. (It’s also a useful document to browse in case you’re wondering what a 7,000-employee media company considers the most important new focus areas for its business in the coming years.)

Here are a few interesting points from the report to note.

Svenska Dagbladet, Schibsted’s Stockholm-based daily newspaper, is designing a ratings system for the relative newsworthiness of each piece of news it publishes. An algorithm, trained on that data, is helping put together its homepage:

“What is a particular piece of news worth on a scale from 1–5?” … We tested different news scenarios. Stock market down 4 percent (news value 3.5), the Prime Minister proposes more CCTV cameras in central Stockholm (news value 4.0). A Strindberg play opens at the Royal Theatre (2.5).

These news ratings, combined with a time marker, how long we think the piece will draw interest and be relevant to the readers, are the very basic data in the algorithm that from now is going to steer our new front page. It was self-evident that it is journalism and the editors that, also in the future, are going to influence how news are evaluated on our front page.
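The report doesn’t spell out a formula, but the mechanics it describes (an editor-assigned news value combined with a time marker for how long a piece stays relevant) can be sketched roughly in Python. The linear decay and the lifetime figures below are assumptions for illustration, not Schibsted’s actual algorithm:

```python
from datetime import datetime, timedelta

def front_page_score(news_value, published, lifetime_hours, now):
    """Decay an editor-assigned news value (scale 1-5) linearly over the
    story's expected lifetime. Linear decay is our assumption; the report
    only says news value and a time marker feed the algorithm."""
    age_hours = (now - published).total_seconds() / 3600
    remaining = max(0.0, 1 - age_hours / lifetime_hours)
    return news_value * remaining

now = datetime(2017, 11, 21, 12, 0)
stories = [
    # (headline, news value, published, expected lifetime in hours)
    ("Stock market down 4 percent", 3.5, now - timedelta(hours=2), 12),
    ("PM proposes more CCTV cameras", 4.0, now - timedelta(hours=20), 24),
    ("Strindberg play opens at the Royal Theatre", 2.5, now - timedelta(hours=1), 48),
]
ranked = sorted(stories, key=lambda s: front_page_score(s[1], s[2], s[3], now),
                reverse=True)
for headline, *_ in ranked:
    print(headline)
```

Note how the aging CCTV story, despite its higher news value, drops below a fresh culture piece once most of its expected lifetime has elapsed.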

— The context in which a digital ad appears is definitely important.

A study conducted jointly by Schibsted Sales and Inventory and the Stockholm School of Economics examined the impact of ads from 16 Swedish advertisers, including the buying intentions of people who saw the ads: “On average the effect doubles with the right audience and triples with the right context and the right target group. Therefore, even when there is a lot of data about who is being reached, the context almost always trumps data, especially when it comes to getting customers to act and, not least, with lesser-known brands.”

— Millennials in France, Spain, and Sweden might love to post about their lives on social media, but they’re also wary of how the information they leave online can be used for more targeted and sometimes more nefarious purposes. This is according to a study Schibsted commissioned, looking at 1,200 people born in those three countries between 1983 and 2001.

— Sixty-seven percent of Spanish and French millennials surveyed reported worrying that “the information they provide on social media can be used to influence political views.” Fifty-five percent of Swedish millennials felt the same.

— Sixty-one percent of Spanish millennials reported being willing to give up additional personal information online in order to receive better tailored products and services. That’s true for 51 percent for French millennials and 39 percent of Swedish millennials.

— Fifty-four percent of Swedish millennials have in the past year changed their phone settings to improve their digital privacy (49 percent for Spanish millennials; 37 percent for French millennials).

There’s more in the report to read here.

AI is going to be helpful for personalizing news — but watch out for journalism turning into marketing https://www.niemanlab.org/2017/09/ai-is-going-to-be-helpful-for-personalizing-news-but-watch-out-for-journalism-turning-into-marketing/ https://www.niemanlab.org/2017/09/ai-is-going-to-be-helpful-for-personalizing-news-but-watch-out-for-journalism-turning-into-marketing/#comments Thu, 21 Sep 2017 13:59:55 +0000 http://www.niemanlab.org/?p=148060 What are the most useful ways to bring artificial intelligence into newsrooms? How can journalists use it in their reporting process? Is it going to replace newsroom jobs?

A report out this week from the Tow Center for Digital Journalism looks at how AI can be adapted to journalism. It summarizes a previously off-the-record meeting held back in June by the Tow Center and the Brown Institute for Media Innovation. (Full disclosure: Nieman Lab director Joshua Benton was part of the meeting.)

Among the report’s findings:

AI “helps reporters find and tell stories that were previously out of reach or impractical.” Three areas where AI can be particularly helpful in the newsroom: “Finding needles in haystacks” (discovering things in data that humans can’t, which humans can then fact-check); identifying trends or outliers; and as a subject of a story itself: “Because they are built by humans, algorithms harbor human bias — and by examining them, we can discover previously unseen bias.”
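As a rough illustration of the “needles in haystacks” idea, a few lines of Python can surface statistical outliers in a dataset as candidates for a reporter to fact-check. The data and field names here are invented:

```python
from statistics import mean, stdev

def flag_outliers(records, key, threshold=3.0):
    """Return rows whose value sits more than `threshold` standard
    deviations from the mean: leads for a human to verify, not a
    finished finding."""
    values = [r[key] for r in records]
    mu, sigma = mean(values), stdev(values)
    return [r for r in records
            if sigma and abs(r[key] - mu) / sigma > threshold]

# Invented campaign-finance-style data: one donation dwarfs the rest.
donations = [{"donor": f"donor_{i}", "amount": 100 + i} for i in range(50)]
donations.append({"donor": "donor_x", "amount": 250_000})
print(flag_outliers(donations, "amount"))
```

The one anomalous donation is flagged; everything else stays below the threshold. Real newsroom pipelines are fancier, but the workflow is the same: the machine narrows the haystack, and the journalist checks the needle.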

AI can deliver much more personalized news — for good and bad. AI could be used to monitor readers’ likes and dislikes, ultimately shaping stories to people’s individual interests. But, as one participant cautioned:

The first stage of personalization is recommending articles; the long-term impact is filter bubbles. The next step is using NLP [Natural Language Processing] to shape an article to exactly the way you want to read it. Tone, political stance, and many other things. At that point, journalism becomes marketing. We need to be very aware that too much personalization crosses the line into a different activity.

Another concern is that, if articles become too personalized, the public record is at risk: “When everyone sees a different version of a story, there is no authoritative version to cite.”

— AI brings up new ethical considerations. Participants agreed that news organizations need to disclose when AI has been used in creating a story, but “that description must be translated into non-technical terms, and told in a concise manner that lets readers understand how AI was used and how choices were made.” There’s a need for best practices around disclosures.

Also, most AI tools aren’t built specifically with newsrooms (or their editorial values) in mind. One engineer said: “A lot of these questions currently seem impenetrable to us engineers because we don’t understand the editorial values at a deep level, so we can’t model them. Engineers don’t necessarily think of the systems they are building as embodying editorial values, which is an interesting problem. The way a system like this is built does not reflect this underlying goal.”

The full report is here.

Photo of robots by Robert Heim used under a Creative Commons license.

The future of news is humans talking to machines
Nieman Lab, September 18, 2017

This year, the iPhone turned 10. Its launch heralded a new era in audience behavior that fundamentally changed how news organizations would think about how their work is discovered, distributed, and consumed.

This summer, as a Knight Visiting Nieman Fellow at Harvard, I’ve been looking at another technology I think could lead to a similar step change in how publishers relate to their audiences: AI-driven voice interfaces, such as Amazon’s Alexa, Google’s Home and Assistant, Microsoft’s Cortana, and Apple’s upcoming HomePod. The more I’ve spoken to the editorial and technical leads building on these platforms in different news organizations, as well as the tech companies developing them, the more I’ve come to this view: This is potentially bigger than the impact of the iPhone. In fact, I’d describe these smart speakers and the associated AI and machine learning that they’ll interface with as the huge burning platform the news industry doesn’t even know it’s standing on.

This wasn’t how I planned to open this piece even a week before my Nieman fellowship ended. But as I tied together the research I’d done with the conversations I’d had with people across the industry, something became clear: As an industry, we’re far behind the thinking of the technology companies investing heavily in AI and machine learning. Over the past year, the CEOs of Google, Microsoft, Facebook, and other global tech giants have all said, in different ways, that they now run “AI-first” companies. I can’t remember a single senior news exec ever mentioning AI and machine learning at any industry keynote address over the same period.

Of course, that’s not necessarily surprising. “We’re not technology companies” is a refrain I’ve heard a lot. And there are plenty of other important issues to occupy industry minds: the rise of fake news, continued uncertainty in digital advertising, new tech such as VR and AR, and the ongoing conundrum of responding to the latest strategic moves of Facebook.

But as a result of all these issues, AI is largely being missed as an industry priority; to switch analogies, it feels like we’re the frog being slowly boiled alive, not perceiving the danger until it’s too late to jump out.

“In all the speeches and presentations I’ve made, I’ve been shouting about voice AI until I’m blue in the face. I don’t know to what extent any of the leaders in the news industry are listening,” futurist and author Amy Webb told me. As she put it in a piece she wrote for Nieman Reports recently:

Talking to machines, rather than typing on them, isn’t some temporary gimmick. Humans talking to machines — and eventually, machines talking to each other — represents the next major shift in our news information ecosystem. Voice is the next big threat for journalism.

My original goal for this piece was to share what I’d learned — examples of what different newsrooms are trying with smart speakers and where the challenges and opportunities lie. There’s more on all that below. But I first want to emphasize the critical and urgent nature of what the news industry is about to be confronted with, and how — if it’s not careful — it’ll miss the boat just as it did when the Internet first spread from its academic cocoon to the rest of the world. Later, I’ll share how I think the news industry can respond.

Talking to objects isn’t weird any more

In the latest version of her annual digital trends report, Kleiner Perkins’ Mary Meeker revealed that 20 percent of all Google search was now happening through voice rather than typing. Sales of smart speakers like Amazon’s Echo were also increasing fast.

It’s becoming clear that users are finding it useful to interact with devices through voice. “We’re treating voice as the third wave of technology, following the point-and-click of PCs and touch interface of smartphones,” Francesco Marconi, a media strategist at Associated Press, told me. He recently coauthored AP’s report on how artificial intelligence will impact journalism. The report gives some excellent insights into the broader AI landscape, including automation of content creation, data journalism through machine learning, robotic cameras, and media monitoring systems. It highlighted smart speakers as a key gateway into the world of AI.

Since the release of the Echo, a number of outlets have tried to learn what content works (or doesn’t) on this class of devices. Radio broadcasters have been at an understandable advantage, being able to adapt their content relatively seamlessly.

In the U.S., NPR was among the first launch partners on these platforms. Ha-Hoa Hamano, a senior product manager at NPR working on voice AI, described its hourly newscast as “the gateway to NPR’s content.”

“We’re very bullish on the opportunity with voice,” Hamano said. She cited research showing 32 percent of people aged 18 to 34 don’t own a radio in their home — “which is a terrifying stat when you’re trying to reach and grow audience. These technologies allow NPR to fit into their daily routine at home — or wherever they choose to listen.”

NPR was available at launch on the Echo and Google Home, and will be soon on Apple’s HomePod. “We think of the newscast as the gateway to the rest of NPR’s news and storytelling,” she said. “It’s a low lift for us to get the content we already produce onto these platforms. The challenge is finding the right content for this new way of listening.”

The API that drives NPR made it easy for Hamano’s team to integrate the network’s content into Amazon’s system. NPR’s skills — the voice-driven apps that Amazon’s voice assistant Alexa recognizes — can respond to requests like “Alexa, ask NPR One to recommend a podcast” or “Alexa, ask NPR One to play Hidden Brain.”
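To make the mechanics concrete, here’s a minimal sketch of what a skill’s back end does once the platform has parsed an utterance into an intent and slots. Everything below — the intent names, the catalog, the response shape — is invented for illustration; this is plain Python, not NPR’s code and not the real Alexa Skills Kit SDK.

```python
# Hypothetical intent router for a voice skill. The platform handles
# speech recognition and hands the back end a parsed intent name plus
# slot values; the publisher mostly maps those to content.

CATALOG = {"hidden brain": "https://example.org/audio/hidden-brain.mp3"}

def handle_intent(intent_name, slots):
    """Map a parsed intent (plus slot values) to a spoken response
    and, where relevant, an audio stream to play."""
    if intent_name == "RecommendPodcastIntent":
        return {"speech": "You might like Hidden Brain.",
                "stream": CATALOG["hidden brain"]}
    if intent_name == "PlayShowIntent":
        show = slots.get("show", "").lower()
        if show in CATALOG:
            return {"speech": f"Playing {show}.", "stream": CATALOG[show]}
        return {"speech": f"I couldn't find {show}."}
    return {"speech": "Sorry, I didn't catch that."}
```

A request like “ask NPR One to play Hidden Brain” arrives at the skill already parsed by the platform; the publisher’s job is largely the mapping above, plus serving the audio from an API like NPR’s.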

Voice AI: What’s happening now

  • Flash briefings (e.g., NPR, BBC, CNN)
  • Podcast streaming (e.g., NPR)
  • News quizzes (e.g., The Washington Post)
  • Recipes and cooking aids (e.g., Hearst)

The Washington Post — owned by Amazon CEO Jeff Bezos — is also an early adopter in running a number of experiments on Amazon’s and Google’s smart speaker platforms. Senior product manager Joseph Price has been leading this work. “I think we’re at the early stages of what I’d call ambient computing — technology that reduces the ‘friction’ between what we want and actually getting it in terms of our digital activity,” he said. “It will actually mean we’ll spend less time being distracted by technology, as it effectively recedes into the background as soon as we are finished with it. That’s the starting point for us when we think about what voice experiences will work for users in this space.”

Not being a radio broadcaster, the Post has had to experiment with different forms of audio — from using Amazon’s automated Alexa voices to read stories from its website to a Post reporter sharing a particular story in their own voice. Other experiments have included launching an Olympics skill, where users could ask the Post who had won medals during last year’s Olympics. That was an example of something that didn’t work, though — Amazon itself built the same capability into the main Alexa platform soon afterwards.

“That was a really useful lesson for us,” Price said. “We realized that in big public events like these, where there’s an open data set about who has won what, it made much more sense for a user to just ask Alexa who had won the most medals, rather than specifically asking The Washington Post on Alexa the same question.” That’s a broader lesson: “We have to think about what unique or exclusive information, content, or voice experience can The Washington Post specifically offer that the main Alexa interface can’t.”

One area that Price’s team is currently working on is the upcoming release of notifications on both Amazon’s Alexa and Google’s Home platforms. For instance, if there’s breaking news, the Post will be able to make a user’s Echo chime and flash green, at which point the user can ask “Alexa, what did I miss?” or “Alexa, what are my notifications?” Users will have to opt in before getting alerts to their device, and they’ll be able to disable alerts temporarily through a do-not-disturb mode.
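The opt-in and do-not-disturb behavior described above amounts to a small piece of gating logic. A sketch only — the field names are invented, and in practice the platforms enforce much of this on the device side:

```python
from datetime import time

def should_deliver(user, alert_time):
    """Deliver a breaking-news alert only if the user has opted in
    and the alert doesn't fall inside their do-not-disturb window."""
    if not user.get("opted_in", False):
        return False
    dnd = user.get("dnd")  # e.g. (time(22, 0), time(7, 0))
    if dnd:
        start, end = dnd
        if start <= end:
            in_dnd = start <= alert_time <= end
        else:  # window crosses midnight, e.g. 22:00-07:00
            in_dnd = alert_time >= start or alert_time <= end
        if in_dnd:
            return False
    return True
```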

Publishers like the Post that produce little or no native audio content have to work out the right way of presenting their text-based content on a voice-driven platform. One option is to allow Alexa to read stories that have been published; that’s easy to scale up. The other is getting journalists to voice articles or columns or create native audio for the platform. That’s much more difficult to scale, but several news organizations told me initial audience feedback suggests this is users’ preferred experience.

For TV broadcasters like CNN, early experiments have focused on trying to figure out when their users would most want to listen to a bulletin — as opposed to watching one — and how much time they might have specifically to do so via a smart speaker. Elizabeth Johnson, a senior editor at CNN Digital, has been leading the work on developing flash-briefing content for these platforms.

“We assumed some users would have their device in the kitchen,” she said. “This led us to ask, what are users probably doing in the kitchen in the morning? Making breakfast. How long does it take to make a bagel? Five minutes. So that’s probably the amount of time a user has to listen to us, so let’s make sure we can update them in less than five minutes. For other times of the day, we tried to understand what users might be doing: Are they doing the dishes? Are they watching a commercial break on TV or brushing their teeth? We know that we’re competing against a multitude of options, so what sets us apart?”
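Johnson’s “bagel time” constraint is, in effect, a small packing problem: fill roughly five minutes with the most newsworthy clips. A hypothetical greedy sketch — the clip format and scoring are invented, not CNN’s actual process:

```python
def build_briefing(clips, budget_seconds=300):
    """Pick story clips for a briefing, most newsworthy first, until
    the roughly-five-minute time budget is spent. Each clip is a
    (title, duration_seconds, newsworthiness) tuple."""
    chosen, used = [], 0
    for title, duration, _score in sorted(clips, key=lambda c: -c[2]):
        if used + duration <= budget_seconds:
            chosen.append(title)
            used += duration
    return chosen, used
```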

With Amazon’s recent release of the Echo Show — which has a built-in screen — CNN is taking the “bagel time” philosophy to developing a dedicated video news briefing at the same length as its audio equivalent.

CNN is also thinking hard about when notifications will and won’t work. “If you send a notification at noon, but the user doesn’t get home until 6 p.m., does it make sense for them to see that notification?” Johnson asked. “What do we want our users to hear when they come home? What content do we have that makes sense in that space, at that time? We already consider the CNN mobile app and Apple News alerts to be different, as are banners on CNN.com — they each serve different purposes. Now, we have to figure out how to best serve the audience alerts on these voice-activated platforms.”

What’s surprised many news organizations is how broad the age range of their audiences is on smart speakers. Early adopters in this space are very different from early adopters of other technologies. Many didn’t buy these smart speakers themselves but were given them as gifts, particularly around Christmas. The fact that there’s very little learning curve means the technical bar is much lower. Speaking to the device is intuitive.

Edison Research was recently commissioned by NPR to find out more about what these users are doing with their devices. Music was, unsurprisingly, the top reason for using them, but coming in second was to “ask questions without needing to type.” Also high on the list was interest in listening to news and information — encouraging for news organizations.

While screens aren’t going away — people will always want to see and touch things — there’s no doubt that voice as an interface for devices is already becoming ingrained as a natural behavior among our audiences. If you’re not convinced, watch children interact with smart speakers: Just as we’ve seen the first Internet-connected generation grow up, we’re about to see the “voice generation” arrive feeling completely at ease with this way of engaging with technology.

The NPR–Edison research has also highlighted this trend. Households with kids and smart speakers report high engagement with the devices. Unlike phones or tablets, smart speakers are communal experiences — which also raises the likelihood of families spending time together, whether for education or entertainment purposes.

(It’s worth noting here that there have been some concerns raised about whether children asking for — or demanding — content from a device without saying “please” or “thank you” could have downsides. As San Francisco VC and dad Hunter Walk put it last year: “Amazon Echo is magical. It’s also turning my kid into an asshole.” To curb this, skills or apps for children could be designed in the future with voice responses requiring politeness.)

For the BBC, where I work, developing a voice-led digital product for children is an exciting possibility. It already has considerable experience of content for children on TV, radio, online and digital.

“Offering the ability to seamlessly navigate our rich content estate represents a great opportunity for us to forge a closer relationship with our audience and to serve them better,” Ben Rosenberg, senior distribution manager at the BBC, said. “The current use cases for voice suggest there is demand that sits squarely in the content areas where we consistently deliver on our ambitions — radio, news, and children’s genres.”

BBC News recently formed a working group to rapidly develop prototypes for new forms of digital audio using voice as the primary interface. Expect to hear more about this in the near future.

Rosenberg also highlights studies finding that voice AI interfaces appear to significantly increase consumption of audio content. This came out strongly in the NPR–Edison research too:

Owning a smart speaker can lead to a sizeable increase in consumption of music, news and talk content, podcasts, and audiobooks. Media organizations that have such content have a real opportunity if they can figure out how to make it as easily accessible through these devices as possible. That’s where we get to the tricky part.

Challenges: Discovery, distribution, analytics, monetization

In all the conversations I’ve had with product and editorial teams working on voice within news organizations, the biggest issue that comes up repeatedly is discovery: How do users get to find the content, either as a skill or app, that’s available to them?

With screens, those paths to discovery are relatively straightforward: app stores, social media, websites. These are tools most smartphone users have learned to navigate pretty easily. With voice, that’s more difficult: While accompanying mobile apps can help you navigate what a smart speaker can do, in most cases, that isn’t the natural way users will want to behave.

If I was to say: “Hey Alexa/Google/Siri, what’s in the news today?” — what are these voice assistants doing in the background to deliver back to me an appropriate response? Big news brands have a distinct advantage here. In the U.K., most users who want news are very likely to ask for the BBC. In the U.S., it might be CNN or NPR. It will be more challenging for news brands that don’t have a natural broadcast presence to immediately come to the mind of users when they talk to a smart speaker for the first time; how likely is it that a user wanting news would first think of a newspaper brand on these devices?

Beyond that, there’s still a lot of work to be done by the tech platforms to make discovery and navigation easier. In my conversations with them, they’ve made it clear they’re acutely aware of that and are working hard to do so. At the moment, when you set up a smart speaker, you set preferences through the accompanying mobile app, including prioritizing the sources of content you want — whether for music, news, or something else. There are plenty of skills or apps you can add on. But as John Keefe, app product manager at Quartz, put it: “How would you remember how to come back to it? There are no screens to show you how to navigate back and there are no standard voice commands that have emerged to make that process easier to remember.”

Another concern that came up frequently: the lack of industry standards for voice terms, or for tagging and marking up content for these smart speakers. The devices are built with natural language processing, so they can understand normal speech patterns and derive instructional meaning from them. So “Alexa, play me some music from Adele” should be understood the same way as “Alexa, play Adele.” But learning to use the right words can still sometimes be a puzzle. One solution is likely to be improving the introductory training that runs when a smart speaker is first set up. It’s a very rudimentary experience so far, but over the next few months it should improve — giving users a clearer idea of what content is available and how they can skip to the next thing, go back, or go deeper.
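As a toy illustration of why “play me some music from Adele” and “play Adele” can resolve to the same command: assistants normalize utterances before matching them to intents. Real systems use trained language models, not the regex stand-in below, and every name here is invented.

```python
import re

# Filler words a toy normalizer strips from a play request.
FILLERS = r"\b(please|me|some|music|from|by)\b"

def normalize(utterance):
    """Collapse different phrasings of a play request onto one
    canonical (action, subject) pair. Toy stand-in for real NLP."""
    text = utterance.lower().strip().rstrip(".?!")
    text = re.sub(r"^alexa,?\s*", "", text)  # drop the wake word
    match = re.match(r"play\s+(.*)", text)
    if not match:
        return None
    subject = re.sub(FILLERS, "", match.group(1))
    return ("play", " ".join(subject.split()))
```

Both phrasings land on the same canonical pair, which is why a user doesn’t have to memorize exact wording for common requests — though, as Keefe notes, the commands that do require exact wording are the ones users forget.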

Voice AI: Challenges

  • Discoverability
  • Navigation
  • Consistent taxonomies
  • Data analytics/insights
  • Monetization
  • Having a “sound” for your news brand

Lili Cheng, corporate vice president at Microsoft Research AI, which develops its own AI interface Cortana, described the challenge to Wired recently: “Web pages, for example, all have back buttons and they do searches. Conversational apps need those same primitives. You need to be like, ‘Okay, what are the five things that I can always do predictably?’ These understood rules are just starting to be determined.”

For news organizations building native experiences for these platforms, a lot of work will need to be done in rethinking the taxonomy of content. How can you tag items of text, audio, and video to make it easy for voice assistants to understand their context and when each item would be relevant to deliver to a user?

The AP’s Marconi described what they’re already working on and where they want to get to in this space:

At the moment, the industry is tagging content with standardized subjects, people, organizations, geographic locations and dates, but this can be taken to the next level by finding relationships between each tag. For example, AP developed a robust tagging system called AP Metadata which is designed to organically evolve with a news story as it moves through related news cycles.

Take the 2016 water crisis in Flint, Michigan, for example. Until it became a national story, Flint hadn’t been associated with pollution, but as soon as this story became a recurrent topic of discussion, AP taxonomists wrote rules to be able to automatically tag and aggregate any story related to Flint or any general story about water safety moving forward. The goal here is to assist reporters to build greater context in their stories by automating the tedious process often found in searching for related stories based on a specific topic or event.

The next wave of tagging systems will include identifying what device a certain story should be consumed on, the situation, and even other attributes relating to emotion and sentiment.
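The kind of rule Marconi describes — automatically tagging any story related to Flint or to water safety generally — can be sketched as a small keyword rule engine. This is a hypothetical toy with invented terms and tags; AP Metadata is far more sophisticated:

```python
# Each rule: a set of trigger terms that must ALL appear in a story,
# and the tags applied when they do. Terms and tags are illustrative.
RULES = [
    ({"flint"}, ["Flint water crisis", "Water safety"]),
    ({"lead", "drinking water"}, ["Water safety"]),
]

def auto_tag(story_text):
    """Apply every rule whose trigger terms all appear in the story,
    deduplicating tags while preserving order."""
    text = story_text.lower()
    tags = []
    for triggers, rule_tags in RULES:
        if all(term in text for term in triggers):
            for tag in rule_tags:
                if tag not in tags:
                    tags.append(tag)
    return tags
```

Taxonomists can then evolve coverage as a story moves through news cycles by editing the rule table, without touching the tagging code.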

As voice interfaces move beyond smart speakers to all the devices around you, including cars and smart appliances, Marconi said the next wave of tagging could identify new entry points for content: “These devices will have the ability to detect a person’s situation as well as their state of mind at a particular time, enabling them to determine how they interact with the person at that moment. Is the person in an Uber on the way to work? Are they chilling out on the couch at home or are they with family? These are all new types of data points that we will need to start thinking about when tagging our content for distribution on new platforms.”

This is where industry-wide collaboration to develop these standards will be so important — these are not things that can be done effectively in the silos of individual newsrooms. Wire services like AP, which serve multiple news clients, could be in an influential position to help form these standards.

Audience data and measuring success

As with so many new platforms that news organizations try out, there’s a common early complaint: We don’t have enough data about what we’re doing, and we don’t know enough about our users. Of the dozen or so news organizations I’ve talked to, nearly all raised similar issues in getting enough data to understand how effective their presence on these platforms has been. A lot seems to depend on the analytics platform they use on their existing websites and how easily it integrates with the Amazon Echo and Google Home systems. Amazon and Google provide some data; though it’s basic at this stage, it is likely to improve.

With smart speakers, there are additional considerations beyond the standard industry metrics of unique users, time spent, and engagement. What, for example, is a good engagement rate — the length of time someone talks to these devices? The number of times they use a particular skill or app? Another interesting possibility is measuring the sentiment behind a user’s experience with a particular skill or app through the tone of their voice. It may be possible in the future to tell whether a user sounded happy, angry, or frustrated — signals we can’t currently measure with existing digital services.
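Since no standard smart-speaker metrics exist yet, a publisher might roll whatever session logs the platforms expose into per-skill figures. A hypothetical sketch, with an invented log format of (user_id, skill, seconds):

```python
from collections import defaultdict

def skill_metrics(sessions):
    """Aggregate (user_id, skill, seconds) session records into
    per-skill unique users, invocation counts, and average duration."""
    users = defaultdict(set)
    counts = defaultdict(int)
    seconds = defaultdict(float)
    for user_id, skill, duration in sessions:
        users[skill].add(user_id)
        counts[skill] += 1
        seconds[skill] += duration
    return {skill: {"unique_users": len(users[skill]),
                    "invocations": counts[skill],
                    "avg_seconds": seconds[skill] / counts[skill]}
            for skill in counts}
```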

And if these areas weren’t challenging enough, there’s then the “M” word to think about…

Money, money, money

How do you monetize on these platforms? Understandably, many news execs will be cautious about placing big bets on new technologies unless they can see a path toward future audience reach or revenue (ideally both). For digital providers, there is a natural temptation to try to figure out how these voice interfaces could help drive referrals or subscriptions. A more instructive way of looking at this, however, is through the experience of radio. Internal research commissioned by some radio broadcasters that I’ve seen suggests users of smart speakers have a very high recall rate for ads heard while streaming radio on these devices. Because many people are used to hearing ads in this way, they may have a higher tolerance for such ads via smart speakers than for pop-up ads on websites.

One of the first ad networks for voice assistants, developed by VoiceLabs, gave some early indications of how advertising could work on these devices in the future — with interactive ads that converse with users. After a recent update to Amazon’s terms, VoiceLabs suspended the network. Amazon’s updated terms still allow for advertising within flash briefings, podcasts, and streaming skills.

Another revenue possibility comes from smart speakers — particularly Amazon’s, at this stage — being hardwired into shopping accounts. Any action a user takes that leads to a purchase after hearing a broadcast or interacting with a voice assistant could open up additional revenue streams.

For news organizations that don’t have much broadcast content and are more focused online, the one to watch is the Washington Post. I’d expect to see it beta test different revenue models through its close relationship with Amazon over the coming months, which could include a mix of sponsored content, in-audio ads, and referral mechanisms to its website and native apps. These and other methods are likely to be offered by Amazon to other partners for testing in the near future too.

Known unknowns and unknown unknowns

While some of the challenges — around discovery, tagging, monetization — are getting pretty well defined as areas to focus on, there are a number of others that could lead to fascinating new voice experiences — or could lead down blind alleys.

There are some who think that a really native interactive voice experience will require news content to replicate the dynamics of a normal human conversation. So rather than just hearing a podcast or news bulletin, a user could have a conversation with a news brand. What could that experience be? One example could be looking at how users could speak to news presenters or reporters.

Rather than just listening to a CNN broadcast, could a user have a conversation with Anderson Cooper? It wouldn’t have to be the actual Anderson Cooper, but it could be a CNN app with his voice, powered by natural language processing to give it a bit of Cooper’s personality. Similar experiences could be developed for well-known presenters and pundits at sports broadcasters. This would retain the clear brand association while also giving a unique experience that could only happen through these interfaces.

Another example: entertainment shows could bring their audience into their programmes, quite literally. Imagine a reality TV show where, rather than having members of the public perform on stage, producers connect to them through their home smart speakers via the internet and get them to do karaoke from home. With screens and cameras coming to some of these smart speakers (e.g., the Amazon Echo Show and Echo Look), TV shows could link up live into the homes of their viewers. Some UK TV viewers of a certain age may recognize this concept (warning: link to Noel’s House Party).

Voice AI: Future use cases

  • Audiences talking to news/media personalities
  • Bringing audiences into live shows directly from their homes
  • Limited lifespan apps/skills for live events (e.g. election)
  • Time-specific experiences (e.g. for when you wake up)
  • Room-optimized apps/skills for specific home locations

Say that out loud

Both Amazon and Google have been keen to emphasize the importance of news brands getting their “sound” right. While it may be easy to carry over the sound identity of radio and TV broadcasters, it is something print and online players will have to think carefully about.

The name of the skill or app a news brand creates will also need careful consideration. The Amazon skill for the news site Mic (pronounced “mike”) is named “Mic Now,” rather than just Mic — otherwise Alexa would find it difficult to distinguish the brand from the word “microphone.” The clear advice: stay away from generic-sounding names on these platforms, and keep the sound distinct.

Beyond these established branded news services, we could start to see experimentation with hyper-specific or limited-lifespan apps. There is increasing evidence that as these speakers appear not just in the living room (their most common location currently) but also in kitchens, bathrooms, and bedrooms, apps could be developed to work primarily in those locations.

Hearst Media has already successfully rolled out a cooking and recipe app on Alexa for one of its magazines, intended for use specifically in the kitchen to help people cook. Bedtime stories or lullaby apps could be launched to help children fall asleep in their bedrooms. Industry evidence is emerging to suggest that the smart speaker could replace the mobile phone as the first and last device we interact with each day. Taking advantage of this, could there be an app that is designed specifically to engage you in the first one or two minutes after your eyes open in the morning and before you get out of bed? Currently a common behaviour is to pick up the phone and check your messages and social media feed. Could that be replaced with you first talking to your smart speaker when waking up instead?

Giving voice to a billion people

While these future developments are certainly interesting possibilities, there is one thing I find incredibly exciting: the transformative impact voice AI technology could have in emerging markets and the developing world. Over the next three or four years, a billion people — often termed “the next billion” — will connect to the internet for the first time in their lives. But a phone with an internet connection isn’t, by itself, going to be that useful — they will have no experience navigating a website, using search, or using any of the online services we take for granted in the West. What could be genuinely transformative, though, is being greeted by a voice-led assistant that speaks their language, talks them through how to use their new smartphone, and helps them navigate the web and online services.

Many of the big tech giants know there is a big prize if they can help connect these next billion users. There are a number of efforts from the likes of Google and Facebook to make internet access easier and cheaper for such users. However, none of the tech giants are currently focused on developing their voice technology for these parts of the world, where literacy levels are lower and oral traditions are strong — a natural environment in which voice AI technology would thrive, if the effort to develop it in non-English languages is made. Another big problem is that the machine learning voice AI is built on is currently dominated by English datasets, with very little being done in other languages.

Some examples of what an impact voice assistants on phones could have to these “next billion” users in the developing world include:

Voice AI: Use cases for the “next billion”

  • Talking user through how to use phone functions for the first time
  • Setting voice reminders for taking medicines on time
  • Reading out text after pointing at signs/documents
  • Giving weather warnings and updating on local news

There will be opportunities here for news organizations to develop voice-specific experiences for these users, helping to educate and inform them about the world they live in. Considering the scale of the potential audiences that could be tapped, this is a huge opportunity for news organizations positioned to work on it. It’s an area I’ll continue to explore in a personal capacity in the coming months — do get in touch with me if you have ideas.

Relationship status: It’s complicated

Voice interfaces are still very new, and as a result there are ethical grey areas that will come to the fore as they mature. One of the most interesting findings from the NPR–Edison research backs up other work suggesting that users develop an emotional connection with these devices very quickly — in a way that just doesn’t happen with a phone, tablet, radio, or TV. Users report feeling less lonely and seem to develop an emotional connection to these devices similar to having a pet. This tendency to attribute human characteristics to a computer or machine has some history, and its own term — the “ELIZA effect,” named for the pioneering 1966 chatbot ELIZA.

What does that do to the way users relate to the content shared with them through the voice of these interfaces? Speaking at a recent event on AI at the Tow Center for Digital Journalism in New York, Judith Donath, from the Berkman Center for Internet and Society at Harvard, explained the possible impact: “These devices have been deliberately designed to make you anthropomorphize them. You try to please them — you don’t do that to newspapers. If you get the news from Alexa, you get it in Alexa’s voice and not in The Washington Post’s voice or Fox News’ voice.”

One possible implication is that users lose the ability to distinguish between different news sources and their editorial leanings and agendas — because all the content is spoken by the same voice. In addition, because it is coming from a device we are forming a bond with, we are less likely to challenge it. Donath explains:

“When you deal with something that you see as having agency, and potentially having an opinion of you, you tend to strive to make it an opinion you find favourable. It would be quite a struggle to not try and please them in some way. That’s an extremely different relationship to what you tend to have with, say, your newspaper.”

As notification features begin to roll out on these devices, news organizations will naturally be interested in serving breaking news. However, with the majority of these smart speakers sitting in living rooms and often used communally by the whole family, another ethical challenge arises. Elizabeth Johnson from CNN highlights one possible scenario: “Sometimes we have really bad news to share. These audio platforms are far more communal than a personal mobile app or desktop notification. What if there is a child in the room — do you want your five-year-old kid to hear about a terror attack? Is there a parental safety function to be developed for graphic breaking news content?”

Parental controls such as these are likely to be developed, giving more control to parents over how children will interact with these platforms.

One of the murkiest ethical areas is one the tech platforms will need to keep demonstrating transparency over: with the “always listening” function of these devices, what happens to the words and sounds their microphones pick up? Is everything being recorded, in anticipation of the “wake” word or phrase? When stories looking into this surfaced last December, Amazon made it clear that its Echo speakers have been designed with privacy and security in mind. Audience research suggests, however, that this remains a concern for many potential buyers of these devices.

Voice AI: The ethical dimension

  • Kids unlearning manners
  • Users developing emotional connections with their devices
  • Content from different news brands spoken in the same voice
  • Inappropriate news alerts delivered in communal family environment
  • Privacy implications of “always-listening” devices

Jumping out of boiling water before it’s too late

As my Nieman Fellowship concludes, I wanted to go back to the message at the start of this piece. Everything I’ve seen and heard so far about smart speakers suggests they shouldn’t be treated as simply another new piece of technology to try out, like messaging apps, bots, or virtual and augmented reality (as important as those are). In and of themselves, smart speakers may not appear much more significant, but the real impact of the change they will herald comes from the AI and machine learning technology that will increasingly power them (at this stage, still very rudimentary). All indications are that voice will become one of the primary interfaces for this technology, complementing screens by providing a more frictionless experience in cars, in smart appliances, and around the home. There is still time — the tech is new and still maturing. If news organizations start strategically placing bets now on how to develop native experiences for voice devices, they will be future-proofing themselves as the technology rapidly proliferates.

What does that mean in reality? It means coming together as an industry to collaborate and discuss what is happening in this space, engaging with the tech companies developing these platforms and being a voice in the room when big industry decisions are made on standardising best practices on AI.

It means investing in machine learning in newsrooms and in R&D to understand the fundamentals of what can be done with the technology. That’s easy to say, of course, and much harder to do with diminishing resources. That’s why an industry-wide effort is so important. There is an AI industry body called the Partnership on AI which is making real progress in discussing issues around ethics and standardisation of AI technology, among other areas. Its members include Google, Facebook, Apple, IBM, Microsoft, Amazon, and a host of other think tanks and tech companies. There’s no news or media industry representation — largely, I suspect, because no one has asked to join. If, despite their competitive pressures, these tech giants can collaborate, surely it is incumbent on the news industry to do so too?

Other partnerships have already proven to have been successful and form blueprints of what could be achieved in the future. During the recent US elections, the Laboratory of Social Machines at MIT’s Media Lab partnered with the Knight Foundation, Twitter, CNN, The Washington Post, Bloomberg, Fusion and others to power real-time analytics on public opinion based on the AI and machine learning expertise of MIT.

Voice AI: How the news industry should respond

  • Experiment with developing apps and skills on voice AI platforms
  • Organize regular news industry voice AI forums
  • Invest in AI and machine learning R&D and talent
  • Collaborate with AI and machine learning institutions
  • Regular internal brainstorms on how to use voice as a primary interface for your audiences

It is starting to happen. As part of my fellowship, to test the waters, I convened an informal, off-the-record forum, with the help of the Nieman Foundation and AP, bringing together some of the key tech and editorial leads of a dozen different news organizations. They were joined by reps from some of the main tech companies developing smart speakers, and the conversation focused on the challenges and opportunities of the technology. It was the first time such a gathering had taken place, and those present were keen to do more.

Last month, Amazon and Microsoft announced a startling partnership — their respective voice assistants Alexa and Cortana would talk to each other, helping to improve the experience of their users. It’s the sort of bold collaboration that the media industry will also need to build to ensure it can — pardon the pun — have a voice in the development of the technology too. There’s still time for the frog to jump out of the boiling water. After all, if Alexa and Cortana can talk to each other, there really isn’t any reason why we can’t too.

Nieman and AP are looking into how they can keep the momentum going with future forums, inviting a wider network in the industry. If you’re interested, contact James Geary at Nieman or Francesco Marconi at AP. It’s a small but important step in the right direction. If you want to read more on voice AI, I’ve been using the hashtag #VoiceAI to flag up any interesting stories in the news industry on this subject, as well as a Twitter list of the best accounts to follow.

Trushar Barot was on a Knight Visiting Nieman Fellowship at Harvard to study voice AI in the news industry. He is currently digital launch editor for the BBC’s new Indian-language services, based in Delhi.

Photos of Amazon Echoes by Rob Albright, 기태 김, and Ken M. Erney used under a Creative Commons license.

What are the ethics of using AI for journalism? A panel at Columbia tried to tackle that question (Nieman Lab, June 14, 2017)

Journalism is becoming increasingly automated. From the Associated Press using machine learning to write stories to The New York Times’ plans to automate its comment moderation, outlets continue to use artificial intelligence to try to streamline their processes or make them more efficient.

But what are the ethical considerations of AI? How can journalists legally acquire the data they need? What types of data should news orgs be storing? How transparent do outlets need to be about the algorithms they use?

These were some of the questions posed Tuesday at a panel discussion on the ethics of AI-powered journalism, held by the Tow Center for Digital Journalism and the Brown Institute for Media Innovation at Columbia University.

Tools such as machine learning or natural language processing require vast amounts of data to learn to behave like a human, and Amanda Levendowski, a clinical teaching fellow at NYU’s law school, listed a series of considerations for anyone trying to access data for these tasks.

“What does it mean for a journalist to obtain data both legally and ethically? Just because data is publicly available does not necessarily mean that it’s legally available, and it certainly doesn’t mean that it’s necessarily ethically available,” she said. “There’s a lot of different questions about what public means — especially online. Does it make a difference if you show it to a large group of people or small group of people? What does it mean when you feel comfortable disclosing personal information on a dating website versus your public Twitter account versus a LinkedIn profile? Or if you choose to make all of those private, what does it mean to disclose that information?”

For example, Levendowski highlighted the fact that many machine learning algorithms were trained on a cache of 1.6 million emails from Enron that were released by the federal government in the early 2000s. Companies are risk averse, she said, and they prefer to use publicly available data sets, such as the Enron emails or Wikipedia, but those datasets can produce biases.

“But when you think about how people use language using a dataset by oil and gas guys in Houston who were convicted of fraud, there are a lot of biases that are going to be baked into that data set that are being handed down and not just imitated by machines, but sometimes amplified because of the scale, or perpetuated, and so much so that now, even though so many machine learning algorithms have been trained or touched by this data set, there are entire research papers dedicated to exploring the gender-race power biases that are baked into this data set.”

The panel also featured John Keefe, the head of Quartz’s bot studio; BuzzFeed data scientist Gilad Lotan; iRobot director of data science Angela Bassa; Slack’s Jerry Talton; Columbia’s Madeleine Clare Elish; and (soon-to-be Northwestern professor) Nick Diakopoulos. The full video of the panel (and the rest of the day’s program) is available here; the panel starts about eight minutes in.

These are the bots powering Jeff Bezos’ Washington Post efforts to build a modern digital newspaper (Nieman Lab, April 26, 2017)

Editor’s note: Last weekend was the latest edition of my favorite journalism conference, the International Symposium on Online Journalism in Austin. You can catch up on what you missed through these two epic YouTube videos of the two days’ livestreams.

But there were two talks in particular that I thought Nieman Lab readers might be interested in seeing, from America’s two top newspapers, The New York Times and The Washington Post. Both Andrew Phelps, an editor on the Times’ Story[X] newsroom R&D team, and Joey Marburger, the Post’s director of product, spoke about how they were using bots in their news operations.

Today, we’re publishing transcripts (lightly edited for clarity) of their two talks. Below is Joey’s talk; Andrew’s is over here.

I’m a huge sci-fi nerd — love Isaac Asimov. And if you’ve ever seen this actually not super great interpretation — I, Robot, with Will Smith — it’s all about the Three Laws of Robotics.

Where basically, like, robots aren’t supposed to kill you — until they try to kill you. Hopefully, conversational journalism won’t ever try to kill you.

So I developed basically three quick laws. They’re pretty close to the Laws of Robotics: We don’t want to spread false information. A bot should follow what a human journalist tells it to do, unless the human tells it to spread false information.

We’ve done a lot of experiments on bots. And we’re very excited about it, because it’s this great, simple experience, and the technology is getting so much better for it: AI’s getting better, big data’s more accessible. So we knew we wanted to try a bunch of things and see what’s out there, because it’s kind of hard to have a ton of successes when you’re on the bleeding edge.

I’m going to go over three bots, which are kind of our favorites — but we actually have almost 100 bots. Like 99 percent of them are internal, though.

So this is our most successful reader-facing bot: It’s called the Feels Bot. About 30 days prior to the U.S. presidential election, if you opted into it on our politics Facebook page, we would message you in the evening and ask you how you felt about the election. And it was just five emoji responses, from super angry to happy — and we would curate all that. Then in the morning, we would show you a graph of how people were feeling.

We knew that we had to have a cadence in alerting people but not annoying people, because we had already built a bot for that. It was just a general news bot which didn’t do very well — which we figured would happen. Even though there are a billion people on Facebook Messenger, I don’t think anyone’s built a bot that has that many users.

So this was really fun to work on — and it was curated by a human. It had a low user count — less than 10,000 people. But the engagement — meaning people actually answered the question every day for 30 days — was greater than 65 percent, because it’s simple. It asks you a simple question, and it was a very charged election. And, you know, if you ask people how they feel, turns out they’ll tell you, which is great.

So we’d generate these social cards from it and highlight a few. Some of the best responses we’d share on Twitter and put up on our site. The little graphics we generated out of it were really fun, and we did this every day for 30 days, which is a great exercise. Empathy is a powerful driver in conversation.
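The Feels Bot workflow described above (one emoji answer per user each evening, then a graph of the distribution the next morning) can be sketched in a few lines. The five-emoji scale and the text-bar rendering here are assumptions; the talk only says responses ranged from super angry to happy.

```python
from collections import Counter

# Assumed five-point emoji scale, from "super angry" to "happy".
SCALE = ["😡", "🙁", "😐", "🙂", "😄"]

def tally_responses(responses):
    """Aggregate one day's emoji replies into counts and percentages."""
    counts = Counter(r for r in responses if r in SCALE)
    total = sum(counts.values())
    return {emoji: {"count": counts[emoji],
                    "pct": round(100 * counts[emoji] / total, 1) if total else 0.0}
            for emoji in SCALE}

def ascii_bar_chart(tally, width=20):
    """Render the morning 'how people felt' graph as simple text bars."""
    peak = max((v["count"] for v in tally.values()), default=1) or 1
    return "\n".join(
        f"{emoji} {'#' * round(width * v['count'] / peak):<{width}} {v['pct']}%"
        for emoji, v in tally.items())

# One evening's replies, aggregated and rendered for the morning recap.
day = ["😡", "😡", "😐", "🙂", "😡", "🙁"]
print(ascii_bar_chart(tally_responses(day)))
```

In production the tally would come from Messenger replies and the chart would be rendered as a shareable graphic, but the aggregation logic is this simple.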

Another thing, which we call our Virality Oracle, is a Slack bot in a public channel inside the Post that is powered by a really amazing algorithm from our data science team. From the second that a story is published, it starts monitoring it, and it knows within the first 30 minutes of publishing whether it’s basically going to be viral. (Really “popular” — “viral” is kind of a loaded word.) And it notifies the channel, so we can maybe go in and add something to the story, or start writing off of it a little bit. We get about three to five stories like this in a day. It also models out a 24-hour traffic window, and the bot emails us a digest, so we can see the lifecycle of stories. This is really a bot as a tool — a service bot or utility bot. It’s very handy.

So this is the data behind the bot, which I’m not going to go into in super detail. Our prediction model is taking in all these data points — this bot is just eating and gobbling up data. We ran it for a long time, almost a year, for the machine learning to get really accurate. And we ran it on every story published — about 300 stories a day.

And we found we’d add in a new metric and it would get a little better. And now we’re at about 80 percent competency.
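The setup described above (early traffic signals in, a popularity probability out, a channel ping only for likely breakouts) might look roughly like this minimal sketch. The features, weights, and threshold are invented for illustration; the Post hasn’t published its actual model.

```python
import math

# Illustrative early-signal features with made-up weights; the real
# model ingests far more data points and was trained for almost a year.
WEIGHTS = {"views_30min": 0.004, "social_share": 2.0, "push_alert": 1.5}
BIAS = -6.0

def viral_probability(views_30min, social_share, push_alert):
    """Logistic score: P(story becomes 'popular') from 30-minute signals."""
    z = (BIAS
         + WEIGHTS["views_30min"] * views_30min
         + WEIGHTS["social_share"] * social_share   # fraction of traffic from social
         + WEIGHTS["push_alert"] * (1 if push_alert else 0))
    return 1 / (1 + math.exp(-z))

def maybe_notify(story_id, prob, threshold=0.8):
    """Compose a channel message only for likely breakouts (else None)."""
    if prob >= threshold:
        return f"#traffic-alerts: {story_id} is trending (p={prob:.2f})"
    return None
```

The point of the threshold is the three-to-five-a-day cadence Marburger describes: the bot stays quiet for ordinary stories and only pings the channel for the handful worth a follow-up.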

This is everyone’s favorite — the MartyBot. So Marty Baron is our editor, and this is tied into our publishing scheduling system called WebSked.

Whenever a reporter starts a story, they put in when they plan to publish it — what the deadline is, which can always be changed. So if you’re behind, it will tell you: Hey, you’re either really close to deadline or you missed your deadline. It personally messages you — it doesn’t, like, shame you in a channel or anything. And it’s really funny when it messages Marty — which I think has maybe happened once.
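A minimal MartyBot-style check could be sketched like this, assuming each story record carries a reporter handle and a planned publish time (the real system reads these from WebSked and delivers the nudge as a direct message).

```python
from datetime import datetime, timedelta

def deadline_nudges(stories, now, warn_window=timedelta(minutes=30)):
    """Return (reporter, message) pairs for stories near or past deadline.

    `stories` mimics scheduling-system records: dicts with 'slug',
    'reporter', and a 'deadline' datetime. Messages are addressed to
    the reporter directly rather than posted to a shared channel.
    """
    nudges = []
    for s in stories:
        remaining = s["deadline"] - now
        if remaining < timedelta(0):
            nudges.append((s["reporter"],
                           f"MartyBot: '{s['slug']}' missed its {s['deadline']:%H:%M} deadline."))
        elif remaining <= warn_window:
            mins = int(remaining.total_seconds() // 60)
            nudges.append((s["reporter"],
                           f"MartyBot: '{s['slug']}' is due in {mins} minutes."))
    return nudges
```

Run on a schedule (say, every few minutes), this produces at most one gentle private nudge per story as its deadline approaches or slips.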

So this is a pretty cool thing too, called Heliograf, which is another way to think of a bot. It’s not a conversational bot, but it takes in data points from a feed and can basically craft very simple, short stories based on templates. Anybody ever play Mad Libs? You know, put in a noun, pick an adjective, whatever? That’s kind of what this does.

So we used this for the Olympics and for elections. We published a story on every single Olympic event, because of Heliograf. And then for elections, we posted a story on every single race in the U.S. on Election Night, and generated newsletters, generated tweets. We did all sorts of fun stuff from it. So it was a bot that was helping us do better journalism.
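The Mad Libs analogy maps directly onto string templates: pick a template based on the incoming numbers, then fill in the blanks. The template wording and field names below are hypothetical; Heliograf’s real templates and feed format aren’t public.

```python
# Mad Libs-style story generation from a structured results feed.
TEMPLATES = {
    "win":   "{winner} defeated {loser} in {race}, taking {pct}% of the vote.",
    "close": "{winner} narrowly edged {loser} in {race}, {pct}% to {loser_pct}%.",
}

def write_result_story(race, winner, loser, pct, loser_pct):
    """Choose a template from the margin and render a one-line story."""
    key = "close" if pct - loser_pct < 5 else "win"
    return TEMPLATES[key].format(race=race, winner=winner, loser=loser,
                                 pct=pct, loser_pct=loser_pct)
```

Because the data arrives per race (or per Olympic event), the same function runs once for every record in the feed, which is how a story on every single race becomes feasible on Election Night.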

Audio bots are super, super huge right now. Amazon doesn’t call Alexa a bot, even though pieces inside it are bots. They like to refer to it as an operating system, as audio AI.

Our politics Flash Briefing was one of our fastest-growing products last year. We caught the wave just right — there’s a reason that the Echo is out of stock on Amazon all the time. They’re actually outselling a lot of their other hardware. Jeff Bezos, our owner, is personally driving this road map, which also gives you an indicator of how successful it is. And it’s super fun.

But what we’re thinking about with bots, and how they play into your day-to-day life and habits, is this: a bot can do very simple tasks. It shouldn’t do everything, because then you’ve got a lot of cognitive overhead — it’s a lot of work. Sometimes you don’t know what to ask a bot, other than, like, “What’s the news?” And you can build these things — there are a lot of tools now, so you can build them pretty easily. Amazon has a tool called Lex, where — point-and-click — you can build a pretty robust bot without any code. So the future is here, it’s just not evenly distributed — which is a quote from William Gibson, another science fiction writer. And I think this is super true for bots. Bots aren’t totally new — they’re just getting more accessible, almost becoming a household name.

So we think bots can fill all these spaces between platforms — on different platforms, but also filling in the gaps between things. A bot could catch you up on where you left off in a story you were listening to on the train into work. You sit down at your desktop, and it’s like: “Hey buddy, here’s where you were in that story.” It fills that space a little bit. This is what we’re starting to work on a lot right now; we’re calling it a handoff bot.
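A handoff bot of the kind described can be sketched as a shared progress store plus a resume prompt on the next device; the storage scheme and message wording here are assumptions, not the Post’s design.

```python
# Shared store: how far each user got in each story, across devices.
progress = {}  # (user_id, story_id) -> fraction read/heard, 0.0-1.0

def record_progress(user_id, story_id, fraction):
    """Called by any device; keep the furthest point reached."""
    key = (user_id, story_id)
    progress[key] = max(fraction, progress.get(key, 0.0))

def handoff_message(user_id, story_id, title):
    """Resume prompt for the next device, or None if nothing to resume."""
    fraction = progress.get((user_id, story_id))
    if fraction is None or fraction >= 1.0:
        return None
    return f"Hey, you were {int(fraction * 100)}% through \"{title}\" — pick up where you left off?"
```

The audio app on the train calls `record_progress`; the desktop site calls `handoff_message` at login and shows the prompt only when there is something to resume.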

I remember bringing this up in the newsroom — nobody really understood it. “Why would we do this?” Especially when you do the first one and it gets like five people that use it — you’re like, “We’ve got to keep doing it!” And it turns out that you learn a lot from experimenting. When things are really simple and really hard, it’s very attractive to a designer and a product person. So we’ll be iterating on bots for a long time to come.

Photo illustration based on robot photo by Michael Dain used under a Creative Commons license.

Quartz launches its Bot Studio with $240K from Knight, and plans for Slack and Echo (Nieman Lab, November 29, 2016)

Quartz is betting big on bots. The Atlantic Media-owned outlet is getting a $240,000 grant from the Knight Foundation to launch Quartz Bot Studio, a group focused on developing three bot-related projects in the coming year, for everything from messaging platforms like Slack to voice interfaces like the Amazon Echo (disclosure: Knight is a supporter of Nieman Lab). Quartz will contribute its own resources to the Studio as well, and intends for the projects to continue after its first year.

Quartz has already made significant headway in bot experimentation. In February, it debuted a news app structured around the already familiar iMessage texting interface. It has employed a Slackbot for its Next Billion conference; the bot handles logistical questions like Wi-Fi logins and information on speakers and sessions. Its Daily Brief email is now available as a Flash Briefing on the Amazon Echo.
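A conference Slackbot that fields logistical questions, like the one Quartz ran for Next Billion, can be as simple as keyword matching against canned answers. The questions, answers, and keyword sets below are invented examples, not Quartz’s actual bot.

```python
# Keyword-matched canned answers for conference logistics.
FAQ = [
    ({"wifi", "wi-fi", "password"}, "Wi-Fi network: NextBillion / password: quartz2016"),
    ({"speaker", "speakers", "schedule"}, "The full speaker lineup is pinned in #schedule."),
    ({"lunch", "food"}, "Lunch is served at noon in the main hall."),
]

def answer(question):
    """Return the first canned answer whose keywords appear in the question."""
    words = set(question.lower().replace("?", "").split())
    for keywords, reply in FAQ:
        if words & keywords:
            return reply
    return "Sorry, I don't know that one — try asking a human at the front desk."
```

Keyword matching covers the narrow, predictable questions a conference generates; handling genuinely freeform input, as Seward notes below, is the harder next step.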

Quartz hasn’t settled on exactly which bot-related projects it will build for which platforms, but “we want to make sure that the experiments cover a large variety of potential platforms and use cases, so that’s where we’re coming from,” said Zach Seward, Quartz’s executive editor and VP of product (and former Nieman Lab staffer). Slack is one such promising platform. “If we’re going to make a tool or a set of automated tools for journalists — there are a few platforms you can imagine building bots for — I’d imagine we’d build something for Slack, with so many newsrooms operating largely on Slack now.”

Another major project will likely be building for Amazon Echo or Google Home (though Google Home hasn’t yet opened up its API to developers). The exact form of the third project is still wide open.

“With our Quartz app, we solved the problem of user input by creating the choices for the user, and there were all sorts of reasons why we thought that was the best direction for the app,” Seward said. “We’d love to now challenge ourselves to handle freeform user input, whether that’s inside the app itself or on a totally different messaging platform.”

Quartz will use the funding to hire at least one developer and one writer (“it’s a different form of writing, writing for something like a messaging interface,” Seward pointed out) to work exclusively on the Bot Studio projects. While these projects originate at Quartz, the outlet intends to release code for the projects produced with the Knight funding, and will provide write-ups of process and challenges along the way, similar to the Guardian’s Mobile Innovation Lab’s process.

“We’ll do some sort of big postmortem report, but more valuable would be the incremental updates we can provide about each of the specific projects, challenges, what we learned from each one, trying to make that all public in the best way possible — that’s as valuable as the code itself,” Seward said. “We’ll be writing about it. We’re toying with the idea of a podcast, or some other venue that’s useful to the people who do this kind of work or are interested in doing it.”

“We’re also open to collaborating with other newsrooms,” he added. “It’s hard to tell precisely what these projects will look like, but it could be interesting for one or more of these projects to be a collaboration. The things we are doing may be similar to things other newsrooms are doing or may want to do, and our code and lessons learned could be helpful.”

Photo of a typing robot, by Kordite, used under a Creative Commons license.

Here are the important announcements for publishers at Facebook’s F8 keynote (Nieman Lab, April 12, 2016)

Facebook’s annual developer conference F8 is up there with Apple’s and Google’s keynotes for important news for publishers. Today’s keynote speech by CEO Mark Zuckerberg (video here) was evidence of a company in its imperial phase. (I mean, its product roadmap prominently featured drones, satellites, and lasers.) As more and more of the news world gets pulled into that little blue app on your phone, what goes on at F8 is increasingly important for those involved in the production of news. Here are some of the news-oriented highlights from today.

A bot platform for Facebook Messenger. As predicted, Zuckerberg announced a developer platform that lets companies (including news companies) create bots that interact with Messenger users. One of the two sample bots he showed off was from CNN — a news digest with a carousel of stories, each of which you can choose to interact with (“Read story,” “Get a summary,” and “Ask CNN” are the three options shown — it’s unclear what Ask CNN leads to). Zuckerberg also noted that the app will learn from your actions and personalize its content mix over time.

I saw a few people on Twitter saying it looks like Quartz’s iPhone app — which is true, but the core appeal of chat bots is that they happen inside the existing chat environment you’re already addicted to, not in a separate app for which you may or may not develop a habit. (Quartz didn’t invent chat bubbles.)

It’s the next step down the long road of distributed content, for better or worse. In case you’re doubting the scale of the opportunity (or this shift), Zuckerberg also announced that Facebook Messenger and WhatsApp now process 60 billion messages a day, three times more than SMS did at its peak.

Publishers, including Business Insider and The Wall Street Journal, started announcing their own bots shortly after the announcement.

Facebook bookmarking gets bigger. I confess I didn’t even realize Facebook bookmarking was a thing, but Zuckerberg said 250 million people use the “Save” button in Facebook every month. Today, it announced a “Save to Facebook” button for the web. Not a good day for Pocket or Instapaper. From The Verge:

If that sounds a lot like Pocket and Instapaper, well, it is — it’s just baked into one of the most popular apps in the world. There are key differences, though — unlike Pocket and Instapaper, Facebook doesn’t strip articles of their formatting and advertisements. Given the high percentage of traffic that many publishers derive from Facebook, the company may have more success in getting them to add a “Save to Facebook” button than Pocket, which offers a similar button of its own.

Kind of ironic for the 2016 king of distributed reading to be disrupting the 2010 kings of distributed reading.

A new sharing tool for text. App developers can now add a quote-sharing tool to their apps; Amazon’s using it in its next Kindle apps: “Now instead of copying and pasting text from Kindle into Facebook, you can simply highlight it and share it to Facebook. Facebook will paste the text into a new post in block quote format, and include a full preview of the original URL.” If you’re a publisher with a news app, it’ll be worth thinking about the lure of more Facebook traffic (vs. the UX clutter of Yet Another Sharing Option).

More options for streaming video to Facebook Live. Instead of just your phone, you’ll now be able to broadcast live video from other devices, like professional cameras — or drones:

It’s an API, so it’ll be up to camera manufacturers to update their software accordingly, which could take time. But the API should also allow access to much more than better camera quality — we’ll see all sorts of video editing, mixing, and overlay tools built. (Facebook’s Chris Cox mentioned that BuzzFeed was working on using the API to enable a live game show.)

AI to read news articles. It’s unclear what it means in the short term, but Zuckerberg did reference the idea of Facebook developing artificial intelligence to better read and understand the content of news stories — the better to recommend content you’ll value. (Facebook already uses plenty of signals about any given article to determine whether it deserves a spot in your News Feed, of course. But those are mostly about social signals, not content signals.)

Instant Articles for everyone. Oh, yeah — didn’t get a keynote mention, but as announced back in February, any publisher can now publish Instant Articles into Facebook. Sign up here.
