MPA Sees “No Need” For New AI Copyright Legislation Or Special Rules, Warns Of “Inflexible” Guidelines

Despite the D.C. push to establish guardrails around artificial intelligence, major studios are warning against “inflexible” rules when it comes to copyright, asserting that existing law is sufficient to deal with the emerging AI technology.

The studios’ positions on a host of issues regarding AI were outlined in a Motion Picture Association filing with the U.S. Copyright Office. AI also is said to be a major point of contention in talks between SAG-AFTRA and the AMPTP.

In the filing (read it here), the MPA’s legal team, including Karyn Temple, Benjamin Sheffner and Terrica Carrington, wrote that the studio members’ “overarching view, based on the current state, is that while AI technologies raise a host of novel questions, those questions implicate well-established copyright law doctrines and principles. At present, there is no reason to conclude that these existing doctrines and principles will be inadequate to provide courts and the Copyright Office with the tools they need to answer AI-related questions as and when they arise.”

They added, “At the current time, however, there is no need for legislation or special rules to apply copyright law in the context of AI.”

“Developments in AI, like preceding technological advancements, have a great potential to enhance, not replace, human creativity,” they wrote. “MPA’s members further believe these developments can, and should, co-exist with a copyright system that incentivizes the creation of original expression and protects the rights of copyright owners.”

President Joe Biden issued an executive order on Monday to address concerns about AI, including a provision to establish standards for watermarking of AI-generated content, a move intended to identify so-called “deepfakes.”

The executive order, however, did not address copyright, and other action will be left to Congress. Senate Majority Leader Chuck Schumer (D-NY) has been working on AI legislation, and other lawmakers have proposed a draft bill addressing the unauthorized use of a performer’s digital likeness.

There already is disagreement over details.

In its 77-page filing, the MPA took issue with the Copyright Office’s position on the protection of content that contains AI-generated material. Earlier this year, the office issued guidance on AI, noting that only material created by humans can be protected by copyright. The guidance said that copyright applicants had a duty to “disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work.”

The Copyright Office also is in the midst of gathering public comment on its approach to AI.

In the filing, the MPA’s team wrote that the Copyright Office “has not yet sufficiently distinguished between generative AI where the AI model itself creates the expressive material (e.g., Midjourney), on the one hand, and the use of routine post-production AI tools that could fall under the Office’s broad definition (e.g., a human post-production creator using AI as a tool to remove mud from a performer’s clothing in successive frames for a motion picture), on the other.”

They warned that the Copyright Office “may be moving toward an inflexible rule that does not properly recognize the extent to which human creativity can be present in a work generated with the use of AI tools.”

Other issues also are addressed in the MPA’s filing, including the use of copyrighted material in AI training models. A number of authors, including John Grisham and George R.R. Martin, have sued OpenAI and Meta over the use of their works in training models for their AI systems.

The MPA argues that the “appropriate way” to deal with potential cases of infringement is via a court’s determination of whether it is a “fair use.” That is the standard by which courts weigh whether the unauthorized use of a copyrighted work is legal.

As such, courts weigh a series of factors in determining whether a use is “fair.” That includes the purpose and character of the use, the nature of the copyrighted work, the amount of the copyrighted work used, and the effect on the market for the copyrighted work. The MPA noted that the Supreme Court has determined that the four factors have to be considered together, and that the “task is not to be simplified with bright-line rules.”

The MPA’s team wrote that “the relevant use will vary, both with the stage of training, scope of material used, and ultimate use of the outputs.”

“For example, fine-tuning an AI model, specifically using the library of James Bond movies for the purpose of making a competing movie that appeals to the same audience, likely would weigh against fair use,” they wrote. “By contrast, an AI tool that is trained on an author’s own copyrighted works but that is specifically designed to detect infringement (e.g., “an AI to recognize an Ariana Grande-like song in order to try to catch infringers of her songs”), more likely would be deemed to be making a fair use.”

That said, the MPA’s team wrote that “if the fair use defense does not excuse the exercise of the copyright owner’s exclusive rights, the use of the owners’ works for training requires affirmative, i.e., opt-in, consent.”

Some AI companies already are doing that. OpenAI, for instance, has a licensing agreement with Shutterstock.

An alternative proposal is for content owners to opt out of having their protected works used in training models. Yet the MPA warned that such methods “may prove to be unworkable.”

“Because MPA’s members’ libraries include thousands of works, not to mention promotional and other material, the sheer scale and volume means these proposed opt-out regimes likely will be insufficient and overly burdensome for the copyright owner,” the MPA’s team wrote. Moreover, such solutions likely will not address the problem of pirated content used as training material, they wrote.

Where the MPA does see benefit is in developers keeping records of the material used in their models, as well as making those records available. Such record-keeping may be necessary anyway in the event of litigation, they wrote.

The MPA’s approach to AI comes after years in which studios have sounded the alarm over protection of copyright and piracy, particularly with the growth of the internet.

But with AI, the studios want flexibility. In the filing, the MPA’s team wrote that while “strong copyright protection is the backbone of their industry,” studios “have a strong interest in developing creator-driven tools, including AI technologies, to support the creation of world-class content.”

The MPA team also expressed some concern over the proposed No Fakes Act, which would establish a new federal intellectual property right on the use of digital replicas. Even though the legislation has the endorsement of SAG-AFTRA, studios are “working in good faith with staff and stakeholders on legislative text that adequately protects the fundamental First Amendment rights of filmmakers, documentarians, news organizations, and other creators,” the MPA’s team wrote.

But the MPA has misgivings over other ideas, such as establishing protections over artistic style, not merely artistic expression. The MPA team said such protection “may harm artistic freedom, subjecting creators to litigation over imitation of ‘style,’ which itself may be vague and difficult to even define.”

Biden’s executive order sets in motion a process in which the Commerce Department will identify standards for authenticating content and detecting and watermarking AI-generated materials.

Studios, however, oppose “any requirement to label or disclose when their works include the use of AI-generated material for expressive and entertainment purposes,” according to the MPA filing.

“Such a requirement would hinder creative freedom,” the MPA’s team wrote. “That is very different, of course, than labeling and identification requirements to avoid consumer deception in the context of uses of AI that are intentionally designed to deceive or mislead for political or other reasons.”
