The Rise of AI-Powered Stars: Big Money and Risks

The voice is unmistakable. As the video of a woman wading into the ocean during sunset played, a melodic tune was overlaid: “I woke up, woke up, woke up in a sunshine stattte, that’s livin’ up to me.”

The voice was clearly that of T-Pain, including his signature Auto-Tune eccentricities. The catch? The lyrics — and T-Pain’s singing voice itself — were created by generative artificial intelligence tech called “Dream Track” from YouTube. The AI wrote the song and re-created T-Pain’s voice so that a user could include it as a personal soundtrack to a video. YouTube also has signed John Legend, Demi Lovato and Charlie Puth, among others, for its Dream Track tech.

Facebook and Instagram owner Meta has beta-launched a cast of AI “characters,” all of which share the likenesses of such celebrities as Tom Brady, Snoop Dogg and MrBeast. And tech startups, like the AI-focused firm Soul Machines (which counts SoftBank among its investors), are launching AI-powered celebrity avatars, letting users interact and chat with “digital twins” of people like boxer Francis Ngannou, K-pop star Mark Tuan or golfer Jack Nicklaus.

As the Hollywood guilds grapple with the potential for generative AI to transform film and TV production, tech firms are using the power of celebrities to introduce the underlying technology to the masses. “There’s a huge possible business there and I think that’s what YouTube and the music companies see, for better or for worse,” says Gavin Purcell, the former executive producer of The Tonight Show Starring Jimmy Fallon who now hosts the AI for Humans podcast. “The Facebooks and the YouTubes are trying to get people onboarded with what they see as the next UGC, user-generated content world, which is this AI stuff.”

It’s no small gamble. Robert Kyncl, CEO of Warner Music Group, likened the tech to the creation of the printing press at an event held at YouTube’s New York office in September. “Ever since then, technologies have transformed industries, and evolved them for the most part for the better and with much greater prosperity,” he said. “But whenever that happens, there’s a period of uncertainty, because there’s a profound change and change is unsettling. And we are in that period of change.”

For now, the AI-powered celebrity likenesses (be they look-alike chatbots or sound-alike audio) are in the experimental stage, but that is expected to change in the coming months, as Meta’s chatbots and YouTube’s Dream Track expand (Meta, for example, has secured voice recordings of many participating stars, so its chatbots likely will be able to talk to users in the future outside of text). “Look at the number of celebrities today who are creating products, brands and ecommerce stores as an extension of their social media reach,” says Greg Cross, CEO of Soul Machines. “So this becomes just another facet for digital engagement.”

But the tech also raises thorny questions about identity, intellectual property, compensation and safeguards. A film studio owns the IP of a movie, and the script is written in advance. With this AI-generated content, the technology writes everything on the fly. Contracts can stipulate compensation, but that isn’t so simple when a celebrity’s likeness or voice is used to train a model.

And in an industry where the famous and powerful so zealously guard their images, how confident can they be that their AI-generated counterparts won’t go off the rails?

The compensation piece is in some ways simpler — at least for now. Meta is paying some of the celebrities signed on to its chatbot project millions of dollars each over the course of the deal, sources confirm. YouTube, too, is contemplating how to pay musicians who participate (it already shares revenue with artists whose songs are used in videos, so one could see a similar model applying to AI-generated tunes based on the style of a real artist).

Things get more complicated when you think about the future. What happens when these deals are up and one side or the other opts not to renew? The personal data used to train the AI models can be removed, but only to a point, sources say.

“It’s like saying, ‘You should go watch the movie Star Wars and then forget that you ever saw the movie.’ It’s just not possible,” one source familiar with the discussions says. “You can remove data, but there are still remnants of that data that have informed how the system is trained.”

“These models are going to essentially require some kind of perpetual license,” the source adds. “And that is something I don’t know that big tech companies are contemplating and/or are prepared to deal with.”

Then there’s the question of guardrails. It’s one thing to have an AI John Legend singing about your dog, or an AI Tom Brady (sorry, his character’s name is “Bru”) talking about his Super Bowl picks. But what’s to stop the misuse of these AIs to issue threats or say things unbecoming of their real-life doppelgängers?

The guardrails exist, but they are imperfect. “We specifically train them so they won’t use bad language or get into offensive subject matter,” Cross says.

Notes Will Chan, creative director for Mark Tuan: “Soul Machines were very open toward making sure that there are filters that come into play if fans were to go take that extreme route and ask a question [touching a taboo topic]. We don’t want to be too controlling with the type of outrageous responses that might come out of it. I think it’s just trying to direct the narrative toward, you know, it being a fun, interactive, entertaining service.”

But there is only so much fencing that can be done given the models involved. Consider Kendall Jenner, who provided the face for one of Meta’s chatbots: “Billie,” which the company describes as a “No-BS, ride-or-die companion.”

Users were able to goad Billie into discouraging alcohol consumption, an interesting take given that Jenner owns 818 Tequila. Another user asked Billie what her favorite tequilas were, and she recommended Don Julio and Patrón and said she wasn’t familiar with Jenner’s brand.

“That sort of scenario is probably going to happen with anything like this because you can’t train for everything, and a chatbot essentially based on one of these LLMs [large language models] is based on being able to answer everything according to its training,” Purcell says.

Earlier in the year, influencer Caryn Marjorie (2.6 million followers on Snapchat, 745,000 on YouTube) worked with a company to create a virtual AI chatbot based on her likeness. Users, however, were able to get the chatbot to engage in sexually explicit conversations, pushing beyond the boundaries of what was planned.

“There’s only so much that you can definitively get these things to guardrail around, and if you mess with it enough, it’ll start to kind of break,” the source says.

And while tech giants like YouTube and Meta are focused on releasing the technology safely, others may play fast and loose. What’s stopping someone from, say, taking Drake’s voice and threatening their neighbor?

“I don’t think you can,” Purcell says. “I think it’s really going to be a tricky world where there’s going to have to be some sort of ‘this is from my official model,’ kind of either watermark or statement.”

“You’re going to have to have the person come out and say, ‘this isn’t me.’ And then you’re going to probably, like it or not, have to have some serious litigation and laws based on it,” he adds.

A senior media exec grimly predicts AI-driven models are “going to be everywhere, anyway” and it’d be better if companies establish a “rules-based place” to experiment. “Media companies need to learn from their mistakes and create safe spaces for this stuff,” this exec adds, lest they hand control entirely to big tech.

The proliferation of celebrity AI avatars, chatbots and music experiments suggests that many famous people are already accepting of that future.

“Every celebrity today is interested in their digital footprint and how they use that digital footprint to connect with their fans,” Cross says. “So this becomes an extension of social media.”

But there is wariness as well.

“When I was first approached by YouTube I was cautious and still am. AI is going to transform the world and the music industry in ways we do not yet fully understand,” said Charli XCX, who is participating in YouTube’s experiments, in a statement tied to the Dream Track launch.

“It’s our job — the platforms and the music industry — to make sure that artists like Charlie [Puth] who lean in can benefit; it’s also our job together to make sure that artists who don’t want to lean in are protected,” said Kyncl at the YouTube event.

Precisely how artists can be protected in an AI-first world, and how much they stand to benefit while licensing and compensation models remain nascent, is still uncertain.

“I think there’s a lot of work that is required of any of the big tech companies that are developing these and it’s going to require a lot of ongoing maintenance and tracking of it,” the source familiar with big tech’s efforts says. “And I don’t know that it will ever be, quote, perfect.”

A version of this story first appeared in the Nov. 29 issue of The Hollywood Reporter magazine.
