AI Tech Is Now Mainstream. What Could Go Wrong?

Charlie Puth had a problem: He’d written a lyric, “a little ditty,” but didn’t know what to do with those lines. The “Left and Right” singer-songwriter had joined a Google AI incubator program, so he fed the new lyrics into the artificial intelligence-assisted tool “like I would do if I were collaborating with anybody else,” Puth said, speaking to a room of journalists and YouTube creators at the tech giant’s New York office Sept. 21. “It was really profound,” Puth recalled of what the system spit back, noting that it sang the lyrics back in his own voice, suggested styles and recommended that it be sung in A-flat minor.

Generative AI is on the cusp of going mainstream. While systems like DALL-E and GPT may still be used largely by early adopters, companies like YouTube and Meta are preparing to roll out AI-driven tools to the masses. At the same event, YouTube announced an AI tool that will recommend video ideas to creators, as well as “Dream Screen,” which will let creators type in a short prompt and have the AI turn it into a video background for them to use.

Snoop Dogg’s AI chat bot, as promoted by Meta.

Meta CEO Mark Zuckerberg announced a slew of AI features at an event Sept. 27, including AI image creation and editing tools, and AI “characters” that users can chat with, featuring the likenesses and voices of real people (Tom Brady plays “Bru,” a “wisecracking sports debater,” and MrBeast plays “Zach,” a “big brother who will roast you,” per Meta).

And Getty Images on Sept. 25 unveiled a tool meant to bring generative AI to its platform — with a twist: It will be suitable for commercial use, with indemnification for users, and compensation for photographers whose work was used for training.

But as AI tools go mainstream, concern and consternation continue to grow, as artists wonder how their work is being used and whether they will be compensated or protected. Look no further than the WGA strike, where the use of AI was a final sticking point in talks. “We really are grounding around three core principles,” YouTube CEO Neal Mohan said when asked by The Hollywood Reporter at the event about how the company is being proactive around the tech. “The first is, AI is here. It’s up to all of us to harness that in a way that really builds the creative community. The second is, we want to do it in a way where creatives retain control and the monetization opportunities accrue to them. And, then, finally — perhaps even most importantly — we want to do that in a bold but also responsible way.”

It’s not just about compensation and copyright, but manipulation and other fears. “We’re talking about issues of rights and creator tools,” Mohan explains. “But we also know that these tools can make it easier for bad actors to be able to do things on platforms like YouTube and really kind of everywhere. There also we have a track record of prioritizing responsibility above and beyond everything else, and in the realm of generative AI we intend to do the same.”

A prompt example from YouTube’s “Dream Screen.”

But the dichotomy of generative AI — of possibility and risk — is everywhere. Puth can use AI to help him write a song, but other artists may want their voices as far from the AI training sets as possible. “It’s our job — the platforms and the music industry — to make sure that artists like Charlie who lean in can benefit; it’s also our job together to make sure that artists who don’t want to lean in are protected,” said Robert Kyncl, CEO of Warner Music Group, at the YouTube event.

YouTube executives say they are taking a “more careful path” around some AI tech like voice cloning, acutely aware of the potential for misuse. As Big Tech encroaches on Hollywood, major players are pushing for regulations. Michael Nash, chief digital officer of Universal Music Group, tells THR that music publishers are advocating for a federal right of publicity law to combat voice mimicry in AI tracks and “ensure that creatives are entitled to and able to harness the brands they built.” Copyright does not account for actors’ faces or singers’ voices, but laws in certain states — like California, New York and Florida — protect against unauthorized commercial uses of a person’s name, likeness and persona. Those laws are meant to give people the exclusive right to license their identities.

Kyncl likened the current moment to the early years after the invention of the printing press. “Ever since then, technologies have transformed industries, and evolved them for the most part for the better and with much greater prosperity,” he said. “But whenever that happens, there’s a period of uncertainty, because there’s a profound change and change is unsettling. And we are in that period of change.”

Nestled in between the lines of Mohan’s nod to “issues of rights and creator tools” was the notion that some generative AI technology may not be completely aboveboard. OpenAI, Meta and Stability AI are facing a slew of lawsuits alleging mass-scale copyright infringement over their unlicensed use of copyrighted works as training data. While YouTube did not disclose the AI system used to power “Dream Screen,” it’s possible it was trained on data scraped from the internet, including works outside the public domain.

Courts are wrestling with whether the practice violates IP laws. AI companies say that it’s fair use, which provides protection for the use of copyrighted works to make a secondary creation as long as it’s “transformative.” OpenAI in August moved to dismiss a proposed class action filed by Sarah Silverman and others on that basis, claiming that the plaintiffs “misconceive the scope of copyright.”

The company will likely run into the Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively reined in the scope of fair use. In that case, the majority stressed that an analysis of whether an allegedly infringing work was sufficiently transformed must be balanced against the “commercial nature of the use.” If authors can establish that OpenAI’s scraping of their novels undercut their ability to profit from their works by, for example, interfering with potential licensing deals the company could have pursued instead, a fair use defense is unlikely to succeed, legal experts consulted by THR say.

“OpenAI doesn’t even need to be in direct competition with the artists; it just needs to have used their art for commercial purposes,” says Columbia Law School professor Shyam Balganesh. “Similarly for YouTube, the artists could argue that they could have had a vibrant licensing market.”

But before the courts even analyze fair use, plaintiffs have to establish that their works were used to train AI systems. Among their frustrations is that the training datasets are a black box. It’s a feature, not a bug: if artists and authors cannot prove that their creations were used, their lawsuits face an obstacle at the outset. OpenAI and Meta no longer disclose information about the sources of their datasets. The Sam Altman-led OpenAI attributed the about-face to the “competitive landscape and the safety implications of large-scale models like GPT-4,” though the decision came in March, after it was sued over its datasets.

Liability can also fall on users. YouTube, unlike Getty, does not offer indemnity for use of its AI tools. Users will be on the hook if their work is found to have infringed upon any copyrights.

Getty has more reason to be confident in the legal posture of its AI system since it licenses the underlying images its model was trained on. It’s leaning into the decision to pay contributors for the use of their content, with the company trumpeting its technology as a “commercially safe generative AI tool.”

The Authors Guild — led by prominent authors including George R.R. Martin, Jonathan Franzen and John Grisham — on Sept. 19 entered the legal battle against OpenAI. With more than 13,000 members, the trade group represents the most formidable opponent suing the company. A finding of infringement could lead to hundreds of millions of dollars in damages and an order requiring it to destroy systems trained on copyrighted works.

Behind closed doors, companies commercializing AI tools are already warning investors of potential liability. Adobe stated in a securities filing issued in June that intellectual property disputes could “subject us to significant liabilities, require us to enter into royalty and licensing agreements on unfavorable terms” and possibly impose “injunctions restricting our sale of products or services.” In March, Adobe unveiled AI image and text generator Firefly. Though the first model is only trained on stock images, it said that future versions will “leverage a variety of assets, technology and training data from Adobe and others.”

That isn’t stopping big companies from embracing the tech. Consider Fox Corp. The Murdoch-controlled media company on Sept. 26 announced that its Tubi free streaming service was partnering with ChatGPT creator OpenAI to launch “RabbitAI,” allowing users to ask for movie or TV show recommendations (try “shark movies that are funny,” for example). And Fox’s local TV stations on Sept. 28 announced a partnership with the generative AI company Waymark to allow local businesses to use AI tech to create ads that can run on those stations.

In the long run, Mohan sees a future where AI tools are used to find and flag instances where other AI tools were misused. A cottage industry is already sprouting up to take advantage of anxiety around generative AI. Tech firm Metaphysic on Sept. 14 unveiled a new product and service that it claims can help actors and other consumers “manage” unpermitted uses of their face, voice and performance data by third parties wielding AI tools. Early adopters include Anne Hathaway, Tom Hanks, Octavia Spencer, Rita Wilson and Paris Hilton.

And Meta suggests that it is using AI tech to help police its AI features: “It’s important to know that we train and tune our generative AI models to limit the possibility of private information that you may share with generative AI features from appearing in responses to other people,” the company wrote in an FAQ about its new offerings. “We use automated technology and people to review interactions with our AI so we can, among other things, reduce the likelihood that models’ outputs would include someone’s personal information as well as improve model performance.”

Mohan adds, “AI, I think, will be an amazing tool in terms of actually enforcing those guidelines that keep the entire ecosystem safe.”

A version of this story first appeared in the Sept. 27 issue of The Hollywood Reporter magazine.
