Google is changing up search. What does that mean for news publishers?
Nieman Lab — Thu, 11 May 2023

At its annual I/O conference on Wednesday, Google announced a slew of “experiments” and changes that are coming to search.

It’s early days. But if these changes are rolled out widely, they’ll be the most significant overhaul of some of the most important space on the internet in quite a while. The shift could significantly decrease the traffic that Google sends to publishers’ sites, as more people get what they need right from the Google search page instead. The changes could also do some damage to the affiliate revenue that publishers derive from product recommendations.

On the bright side, a new search filter aimed at surfacing content from individual humans could help highlight individual journalists, columnists, and newsletters — maybe.

“Search Generative Experience”

Google will place AI-generated answers right at the top of some search pages. Here’s how the company describes it:

Let’s take a question like “what’s better for a family with kids under 3 and a dog, bryce canyon or arches.” Normally, you might break this one question down into smaller ones, sort through the vast information available, and start to piece things together yourself. With generative AI, Search can do some of that heavy lifting for you.

You’ll see an AI-powered snapshot of key information to consider, with links to dig deeper.

The Washington Post’s Geoffrey Fowler tested the feature and describes the way that SGE cites its sources:

When Google’s SGE answers a question, it includes corroboration: prominent links to several of its sources along the left side. Tap on an icon in the upper right corner, and the view expands to offer source sites sentence by sentence in the AI’s response.

There are two ways to view this: It could save me a click and having to slog through a site filled with extraneous information. But it could also mean I never go to that other site to discover something new or an important bit of context.

You see the top three sources by default, but can toggle for more.

AI-generated content will also be incorporated heavily into shopping results. Search something like “bluetooth speaker for a pool party under $100” or “good bike for a 5 mile commute with hills,” and up pops an AI-powered list of recommended products to buy. I haven’t tested this feature, but in addition to keeping users off publishers’ pages altogether, it looks like bad news for any publisher that makes money from affiliate links.

Google cautions that SGE is still an experiment, and it’s not widely available yet. (If you want to try it and are in the U.S., you can add yourself to the waitlist from the Chrome browser or Google app.) In addition to that limited access, The Verge’s David Pierce notes that there are supposed to be limits to what Google will use AI to answer:

Not all searches will spark an AI answer — the AI only appears when Google’s algorithms think it’s more useful than standard results, and sensitive subjects like health and finances are currently set to avoid AI interference altogether. But in my brief demos and testing, it showed up whether I searched for chocolate chip cookies, Adele, nearby coffee shops, or the best movies of 2022.

For instance, when Wired’s Will Knight asked whether Joe Biden is a good president, or for information about different U.S. states’ abortion laws, “Google’s generative AI product declined to answer.” But even though Google’s AI is not supposed to have opinions, they seem to slip in sometimes. The Verge again:

At one point in our demo, I asked [Liz Reid, Google’s VP of search] to search only the word “Adele.” The AI snapshot contained more or less what you’d expect — some information about her past, her accolades as a singer, a note about her recent weight loss — and then threw in that “her live performances are even better than her recorded albums.” Google’s AI has opinions! Reid quickly clicked the bear claw and sourced that sentence to a music blog but also acknowledged that this was something of a system failure.

“Hidden gems”

Google is also expanding the use of a search filter called “Perspectives” that brings user-created content — think Reddit posts, YouTube videos, and blog posts — into search results. This change is coming at a time when Americans are increasingly seeking out news and information from individuals, not institutions — and TikTok and Instagram are eating into Google’s share of the search market. Here’s Google:

In the coming weeks, when you search for something that might benefit from the experiences of others, you may see a Perspectives filter appear at the top of search results. Tap the filter, and you’ll exclusively see long- and short-form videos, images and written posts that people have shared on discussion boards, Q&A sites and social media platforms. We’ll also show more details about the creators of this content, such as their name, profile photo or information about the popularity of their content.

Helpful information can often live in unexpected or hard-to-find places: a comment in a forum thread, a post on a little-known blog, or an article with unique expertise on a topic. Our helpful content ranking system will soon show more of these “hidden gems” on Search, particularly when we think they’ll improve the results.

“We’re finding that often our users, particularly some of our younger users, want to hear from other people,” Liz Reid, Google’s VP of search, told The Verge. “They don’t just want to hear from institutions or big brands. So how do we make that easy for people to access?”

As Perspectives rolls out, it’ll be interesting to see how Google defines “other people”: Do journalists or opinion columnists who work for newspapers count? Will Substacks be surfaced? The feature could potentially benefit larger news publishers as well as journalists going it alone, but we’ll see.

AI-generated art sparks furious backlash from Japan’s anime community
Nieman Lab — Tue, 01 Nov 2022

On October 3, renowned South Korean illustrator Kim Jung Gi passed away unexpectedly at the age of 47. He was beloved for his innovative ink-and-brushwork style of manhwa, or Korean comic-book art, and famous for captivating audiences by live-drawing huge, intricate scenes from memory.

Just days afterward, a former French game developer, known online as 5you, fed Jung Gi’s work into an AI model. He shared the model on Twitter as an homage to the artist, allowing any user to create Jung Gi-style art with a simple text prompt. The artworks showed dystopian battlefields and bustling food markets — eerily accurate in style, and, apart from some telltale warping, as detailed as Jung Gi’s own creations.

The response was pure disdain. “Kim Jung Gi left us less than [a week ago] and AI bros are already ‘replicating’ his style and demanding credit. Vultures and spineless, untalented losers,” read one viral post from the comic-book writer Dave Scheidt on Twitter. “Artists are not just a ‘style.’ They’re not a product. They’re a breathing, experiencing person,” read another from cartoonist Kori Michele Handwerker.

Far from a tribute, many saw the AI generator as a theft of Jung Gi’s body of work. 5you told me that he has received death threats from Jung Gi loyalists and illustrators, and asked to be referred to by his online pseudonym for safety.

Generative AI might have been dubbed Silicon Valley’s “new craze,” but beyond the Valley, hostility and skepticism are already ramping up among an unexpected user base: anime and manga artists. In recent weeks, a series of controversies over AI-generated art — mainly in Japan, but also in South Korea — have prompted industry figures and fans to denounce the technology, along with the artists that use it.

While there’s a long-established culture of creating fan art from copyrighted manga and anime, many are drawing a line in the sand when AI creates similar artwork. I spoke to generative AI companies, artists, and legal experts, who saw this backlash as being rooted in the intense loyalty of anime and manga circles — and, in Japan, the lenient laws on copyright and data-scraping. The rise of these models isn’t just blurring lines around ownership and liability, but already stoking panic that artists will lose their livelihoods.

“I think they fear that they’re training for something they won’t ever be able to live off because they’re going to be replaced by AI,” 5you told me.

One of the catalysts is Stable Diffusion, a competitor to the AI art model Dall-E, which hit the market on August 22. Stable Diffusion is open-source, which means that, unlike Dall-E, engineers can train the model on any image data set to churn out almost any style of art they desire — no beta invite or subscription needed. 5you, for instance, pulled Jung Gi’s illustrations from Google Images without permission from the artist or publishers, then fed them into Stable Diffusion’s service.

In mid-October, Stability AI, the company behind Stable Diffusion, raised $101 million at a valuation of $1 billion. Looking for a cut of this market, AI startups are building off Stable Diffusion’s open-source code to launch more specialized and refined generators, including several primed for anime and manga art.

Japanese AI startup Radius5 was one of the first companies to touch a nerve when, in August, it launched an art-generation beta called Mimic that targeted anime-style creators. Artists could upload their own work and customize the AI to produce images in their own illustration style; the company recruited five anime artists as test cases for the pilot.

Almost immediately, on Mimic’s launch day, Radius5 released a statement saying the artists were being targeted for abuse on social media. “Please refrain from criticizing or slandering creators,” the company’s CEO, Daisuke Urushihara, implored the swarm of Twitter critics. Illustrators decried the service, saying Mimic would cheapen the art form and be used to recreate artists’ work without their permission.

And they were partly right. Just hours after the statement, Radius5 froze the beta indefinitely because users were uploading other artists’ work. Even though this violated Mimic’s terms of service, no restrictions had been built to prevent it. The phrase “AI学習禁止” (“No AI Learning”) lit up Japanese Twitter.

A similar storm gathered around storytelling AI company NovelAI, which launched an image generator on October 3; Twitter rumors rapidly circulated that it was simply ripping human-drawn illustrations from the internet. Virginia Hilton, NovelAI’s community manager, told me that she thought the outrage had to do with how accurately the AI could imitate anime styles.

“I do think that a lot of Japanese people would consider [anime] art a kind of export,” she said. “Finding the capabilities of the [NovelAI] model, and the improvement over Stable Diffusion and Dall-E — it can be scary.” The company also had to pause the service for emergency maintenance. Its infrastructure buckled from a spike in traffic, largely from Japan and South Korea, and a hacking incident. The team published a blog post in English and Japanese to explain how it all works, while scrambling to hire friends to translate their Twitter and Discord posts.

The ripple effect goes on. A Japanese artist was obliged to tweet screenshots showing layers of her illustration software to counter accusations that she was secretly using AI. Two of the country’s most famous VTuber bands requested that millions of social media followers stop using AI in their fan art, citing copyright concerns if their official accounts republished the work. Pixiv, the Japanese online artists’ community, has announced it will be launching tags to filter out AI-generated work in its search feature and in its popularity rankings.

In effect, manga and anime are acting as an early testing ground for AI art-related ethics and copyright liability. The industry has long permitted the reproduction of copyrighted characters through doujinshi (fan-made publications), partly to stoke popularity of the original publications. Even the late Prime Minister Shinzo Abe once weighed in on the unlicensed industry, arguing it should be protected from litigation as a form of parody.

Outside of doujinshi, Japanese law is ordinarily harsh on copyright violations. Even a user who simply retweets or reposts an image that violates copyright can be subject to legal prosecution. But with art generated by AI, legal issues only arise if the output is exactly the same as, or very close to, the images on which the model is trained.

“If the images generated are identical…then publishing [those images] may infringe on copyright,” Taichi Kakinuma, an AI-focused partner at the law firm Storia and a member of the economy ministry’s committee on contract guidelines for AI and data, told me. That’s a risk with Mimic and similar generators built to imitate one artist. “Such [a result] could be generated if it is trained only with images of a particular author,” Kakinuma said.

But successful legal cases against AI firms are unlikely, said Kazuyasu Shiraishi, a partner at the Tokyo-headquartered law firm TMI Associates. In 2018, the National Diet, Japan’s legislative body, amended the national copyright law to allow machine-learning models to scrape copyrighted data from the internet without permission, which offers up a liability shield for services like NovelAI.

Whether images are sold for profit or not is largely irrelevant to copyright infringement cases in the Japanese courts, said Shiraishi. But to many working artists, it’s a real fear.

Haruka Fukui, a Tokyo-based artist who creates queer romance anime and manga, admits that AI technology is on track to transform the industry for illustrators like herself, despite recent protests. “There is a concern that the demand for illustrations will decrease and requests will disappear,” she said. “Technological advances have both the benefits of cost reduction and the fear of fewer jobs.”

Fukui has considered using AI herself as an assistive tool, but showed unease when asked if she would give her blessing to AI art generated using her work.

“I don’t intend to consider legal action for personal use,” she said. “[But] I would consider legal action if I made my opinion known on the matter, and if money is generated,” she added. “If the artist rejects it, it should stop being used.”

But the case of Kim Jung Gi shows artists may not be around to give their blessing. “You can’t express your intentions after death,” Fukui admits. “But if only you could ask for the thoughts of the family.”

Andrew Deck is a reporter at Rest of World, where this story was originally published.
