As AI Battle Lines Are Drawn, Studios Align With Big Tech In a Risky Bet

Hollywood and the technology companies leading the charge on generative AI are on a collision course, with each side digging in its heels on novel intellectual property issues that will shape the future of production.

The Copyright Office has been exploring policy questions surrounding the intersection of intellectual property and AI. The agency has received thousands of comments, drawing submissions from SAG-AFTRA, the Writers Guild of America and Directors Guild of America, among other major players in the entertainment and media industries.

The unions landed on the opposite side of several hot-button issues from the Motion Picture Association, which reps major studios like Disney and Warner Bros. Discovery and was joined by Meta, OpenAI and tech advocacy groups. They diverge most on whether new legislation is warranted to address the unauthorized use of copyrighted material to train AI systems and the mass generation of potentially infringing works based on existing content. This has prompted some creators to question why studios aren’t allying with actors and writers against AI companies to oppose what could amount to the mass pilfering of their material in violation of intellectual property laws.

“Studios should be protecting their copyrights,” a WGA member tells The Hollywood Reporter. “It’s shortsighted, because it demotes them to another source of content for these AI firms.”

Even as studios insist on certain rights to exploit the technology, AI firms are scraping the internet for copyrighted works owned by those studios, as well as the likenesses of actors the studios contract with for films and TV series, for incorporation into training data. This is happening as artists and authors open multiple fronts in a growing legal battle against AI firms, alleging that mass-scale copyright infringement is fueling those companies’ endeavors.

The MPA, Meta and OpenAI — backed by trade groups representing companies like Apple and Amazon that have a foothold in Hollywood and generative AI — maintained that existing intellectual property laws are sufficient to address thorny legal issues posed by the technology. This stood in stark contrast to SAG-AFTRA’s call for a federal right of publicity law that would protect members’ rights to profit off of their images, voices and likenesses.

A representative for Scarlett Johansson last week said the actress took legal action against an AI app developer for using her name and likeness in an online ad posted on X that featured an AI-generated version of her voice. While copyright law doesn’t account for a person’s voice or face, some states have right of publicity laws that protect against unauthorized commercial uses of a person’s name, likeness and persona. The right is meant to give individuals the exclusive ability to profit off of their identities.

SAG-AFTRA urged the copyright office to push for a new federal law defining an individual’s likeness as an intellectual property right. “Anything short of this would create a massive loophole allowing websites that act as a ‘marketplace’ for digital replicas — those who have the most control over their creation, dissemination, and exploitation — to escape liability,” the union stated in a comment to the copyright office filed on Oct. 30.

Under Section 230 of the Communications Decency Act, websites like X and Facebook that carry ads featuring AI-generated actors can claim immunity. The courts are currently split on whether the right of publicity falls within Section 230’s exception for intellectual property rights. Some, including the 9th U.S. Circuit Court of Appeals, which is home to the largest concentration of SAG-AFTRA members, have read the provision to provide near-blanket immunity from such claims.

Section 230 has historically afforded tech firms significant legal protection from liability as third-party publishers and remains a battleground for copyright issues surrounding generative AI. Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, argued that big tech’s favorite legal shield should be expanded to immunize AI companies from some infringement claims.

“One criterion for determining safe harbor eligibility could involve an evaluation of the size and diversity of the training dataset used for the model (e.g., whether it’s sufficiently extensive to prevent substantially similar outputs and reasonably varied),” it stated in a submission to the copyright office. “Further, given the inherently opaque nature of Generative AI models and the unpredictable behavior of human users, Congress may consider legislation that establishes a liability framework that shields Generative AI services from liability when users intentionally submit infringement-driven queries.”

The group added that any new legislation should ensure that the responsibility of identifying specific works used in training datasets belongs to copyright holders. The issue is hotly contested in the courts, with a federal judge on Oct. 30 dismissing most claims from artists suing generative AI art generators. Among the problems U.S. District Judge William Orrick identified in the lawsuit against StabilityAI, Midjourney and DeviantArt was whether AI systems actually contain copies of the copyrighted images that were used to create allegedly infringing works.

AI companies have largely maintained that training their systems does not involve wholesale copying of works but rather the development of parameters (like lines, colors, shades and other attributes associated with subjects and concepts) that collectively define what things look like. To allege infringement, the artists will have to establish that their works were copied to train AI systems. Orrick wrote that the plaintiffs’ theory is “unclear” as to whether copies of training images are stored in StabilityAI’s Stable Diffusion, pointing to arguments that it’s impossible for billions of images “to be compressed into an active program.”

A major hurdle for plaintiffs suing AI companies is that training datasets are largely a black box. Anticipating that the courts may be unequipped to deal with such litigation given those constraints, the DGA and WGA advocated for the establishment of “moral rights” that would recognize writers and directors as the original authors of their work. This would give them greater financial and creative control over the exploitation of their material even when they don’t own the copyrights.

Under U.S. copyright law, directors and writers are not entitled to some rights that exist in other countries, including the U.K., France and Italy. This is because the contributions of writers and directors in America are typically considered “works made for hire,” which establishes creators as employees and producers as the owners of any copyright.

“This statutory provision gives producers a significant power that is taken away from American audiovisual creators (writers and directors),” stated the filing from the DGA, which was joined by the WGA.

Creators’ rights instead lie in unions’ contracts with the studios. But with the rise of generative AI tools, the DGA warned that companies will take advantage of the absence of laws that recognize creators’ rights to their creations. “These third parties, who are not bound to our collective bargaining agreements, may ingest and regurgitate copyrighted films and television shows into AI systems without the participation of the copyright owner or the need to agree to the terms of our new agreement,” the guild stated in its filing.

Without intervention from Congress, the legality of using copyrighted works in training datasets will be decided by the courts. The question will likely be decided in part on fair use, which provides protection for the use of copyrighted material to make a secondary work as long as it’s “transformative.”

According to comments from the unions, the ingestion of copyrighted material in AI systems is not covered by fair use under current case law. They note the Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively reined in the scope of the defense. In that case, the majority stressed that an analysis of whether an allegedly infringing work was sufficiently transformed must be balanced against the “commercial nature of the use.” This means that fair use is less likely to be found if, for example, AI companies undercut creators’ economic prospects to profit off of their works by scraping material from the internet instead of pursuing licensing deals.

“When works produced under a SAG-AFTRA collective bargaining agreement are reused in another market or medium, the collective bargaining agreement requires negotiation, consent, and compensation for the reuse,” the union stated. “This is an important protection for SAG-AFTRA members and is part of the value of the work that should be considered (i.e. use of the work deprives not only the copyright holder of licensing fees, it deprives the depicted SAG-AFTRA member [of] bargained-for compensation).”

The MPA, meanwhile, stated that fair use should be decided on a case-by-case basis. It explained, “For example, fine-tuning an AI model, specifically using the library of James Bond movies for the purpose of making a competing movie that appeals to the same audience, likely would weigh against fair use.”

Additionally, the MPA argued in favor of looser standards with regard to the copyrightability of works created by AI. It said that the copyright office is “too rigid” in its human authorship requirement, which holds that intellectual property rights can only be granted to works created by humans, because “it does not take into account the human creativity that goes into creating a work using AI as a tool.”

Guardrails surrounding the use of generative AI united creators across Hollywood and proved to be a major point of contention between the WGA and the studios. Members of the AMPTP appear to maintain that they are allowed to use writers’ material as training data and plan to follow through. “The companies have, they claim, some ongoing copyright rights in using our material,” negotiating committee co-chair Chris Keyser told THR on Sept. 27.

The battle lines over the use of generative AI tools in Hollywood are still being drawn. AI companies, some of which are considered leaders in the field and own companies that belong to the AMPTP, may turn to directly competing with studios to generate scripts (writers will still need to play a part in the process, given that copyrights can be granted only to humans). If the legacy studios plan on creating their own AI systems, they are likely at a disadvantage.

Darren Trattner, an entertainment lawyer who represents actors, directors and writers, said it would behoove the studios to “align themselves” with creators in the battle against generative AI “because there’s a common interest.”

He stressed, “Why would a studio want 100 years of films to be gobbled up by third-party AI programs? Then, anyone can use and try to create material based on their intellectual property.”
