What protections do writers, actors, producers and others have from AI?
Will changing laws around name, image and likeness (NIL) eliminate less lucrative college sports?
And what does the No Fakes Act actually entail?
These are some of the questions that were explored during the 10th annual TechTainment conference, which took place this month at Arizona State University’s California Center Broadway.
Advanced technology, particularly the advent of AI, has had an enormous impact on both the law and the entertainment industry. The conference sessions explored the profound legal and business implications of these ever-evolving advancements.
“It is the crossroads of technology, entertainment and law,” said Mark Treitel, founder of TechTainment. “And what better place to hold a TechTainment conference than in Los Angeles, the entertainment capital of the world.
“Technology has changed entertainment and how we interact with that entertainment and has transformed it into something that is much bigger than what came before it.”
That means new laws, changing laws and the interplay of state and federal statutes, which can produce a patchwork of legislation across the country rather than a consistent, straightforward approach to these problems.
High-profile attorneys, agents, academics and other industry professionals gathered at ASU’s California Center for the daylong event. Hollywood power broker Jeff Berg was among them. Berg is the former chairman of ICM (International Creative Management), one of the three top Hollywood mega-agencies.
The new location for the event represented an inaugural partnership between the Los Angeles Intellectual Property Law Association, or LAIPLA, and ASU’s Sandra Day O’Connor College of Law. LAIPLA is the nation’s premier intellectual property law group.
“The vision of this organization perfectly ties to where we’re heading as a law school as well,” said Stacy Leeds, Willard H. Pedrick Dean at the Sandra Day O’Connor College of Law. “So I wanted to express our enthusiasm for being your partner for the next 10 years.”
Game on or off?
In one session, titled “Game Time: An In-House Look at the Evolution of NIL Advising,” panelists discussed the regulation changes for college athletics and the role that in-house counsel plays in advising athletes on NIL matters.
In 2021, the NCAA started allowing athletes to be paid for the use of their name, image and likeness.
“NIL stands for name, image and likeness, but what it really means is sponsorship and marketing deals in the real world,” said panelist Debbie Spander, founder of Insight Sports Advisors. “So this is giving athletes the right to enter into sponsorship, marketing and endorsement deals, etc.”
The panel explored the challenges and opportunities this change created.
“I’ve seen this progressing over the last 10 years,” said Thomas Speiss, a panelist and partner at Snell & Wilmer who has been involved in college athletics for 28 years. “I think back and remember sitting at a conference table and there was an effort at that time to try to institute a $2,000 stipend for all student-athletes. And some of the most well-resourced institutions were absolutely against it. It just spoke to the complexity of Division I, let alone Division II and Division III and the efforts that it takes to get anything passed.”
Since the introduction of the new policy, athletes like University of Oregon quarterback Dillon Gabriel have become top NIL earners; Gabriel has a valuation of $1.7 million and deals with Beats by Dre and Celsius.
Despite the NCAA change, no federal law has been enacted, leaving regulation to the states.
“So the last three years we’ve been in the complete Wild West,” Spander said. “People have just been making things up as they go along. And because you could only do marketing and licensing deals, and athletic departments wanted to lure people to sign — especially with the new transfer portal — what has come into the void was collectives.”
Collectives are independent groups that provide opportunities for college athletes to earn money from their NIL rights.
A recent $2.8 billion class-action settlement between the NCAA and the major athletic conferences will allow colleges and universities to pay athletes directly for playing sports. The agreement would also compensate nearly 25,000 athletes from 363 Division I colleges who were denied the chance to profit from marketing their names and images while they were playing.
Countless lawsuits are underway, along with new legislation and Supreme Court arguments for and against regulation, all attempting to address this monumental problem.
New laws carry an assortment of consequences. They can limit the number of athletes on a roster, downgrade varsity teams to club status and even eliminate some sports altogether.
“They’ll continue the sport, but it’s going to operate as a club sport,” Speiss said. “I think it only makes sense that over a period of time someone challenges that. If it’s purely just an economic and a business decision, you would only offer sports that generated revenue, and that’s not how college athletics has operated for an eternity.
“So I think that there’s some awareness around all of that to be able to try to continue as holistic of an offering as possible. But there’s a real need to be able to fund it.”
AI and copyright
A panel of agency and government affairs representatives from the private sector tackled the topic of AI-generated deepfakes and the growing call for more AI regulation in another session, titled “ChatGPT Goes to Washington: The Future of AI Regulations in Congress, Copyright Office and the USPTO.”
Last year, the U.S. Copyright Office proposed regulations for the legal standards of copyright registration where an author used AI, while the U.S. Patent and Trademark Office proposed its own new regulations.
In July, the Copyright Office released a report to Congress about the growing and deceptive use of deepfakes and the critical need for legislation.
That was quickly followed by the No Fakes Act, which would protect the voice and visual likeness of all individuals from unauthorized computer-generated re-creations. But according to panelist Jenni Katzman, with the upcoming administration transition, it will not be moving forward soon.
“Nothing is going to happen on the AI front this Congress,” said Katzman, senior director of government affairs at Microsoft.
Panelists agreed that there was a lot to consider when crafting legislation. Katzman said the key aspects of the bill are preemption, damages, liability and knowledge standard. She said that Microsoft would like to be supportive of the bill.
The knowledge standard distinguishes between a replica or fake that is knowingly distributed and one that is such a convincing replica that it is shared unknowingly.
“You would have to know that you are distributing the digital replica, and you’d have to know that that digital replica was unauthorized and that it was fake,” explained Andrew Foglia, deputy director of policy and international affairs for the U.S. Copyright Office. “We thought that makes sense because the nature of digital replicas is that they’re going to deceive people, who shouldn’t be liable for unknowingly or accidentally sharing something they didn’t realize was fake or unauthorized.”
But then the question becomes, how do you enforce the laws around intent — how can intent be proven in federal courts?
As in other sessions throughout the day, panelists noted the problem of multiple laws being passed in states across the country.
“The reason bills are coming up now is that there is a patchwork of laws covering the behavior, but it is inconsistent, depending on the state — even though we have a federal law,” said Lynda Quigley, senior copyright advisor for the Office of Policy and International Affairs at the U.S. Patent and Trademark Office.
Joshua Simmons talked about the challenge of creating protections against the dangers of AI while preventing overregulation.
“There are things that (the technology) can do, and then there are guardrails that can or cannot be created,” said Simmons, an intellectual property litigator with Kirkland & Ellis. “The constant debate in Washington, D.C., is, ‘How can we put out a tool that we all feel good about but does not create a liability in doing that?’”
AI in the entertainment industry
It has been more than a year since the Writers Guild of America and the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) reached collective bargaining agreements with major film and television studios. Both labor disputes centered, in part, on the ever-increasing use of AI by studios and platforms.
In another session, titled “Fallout: One Year After Strike-A-Geddon: The Future of Hollywood AI After the WGA & SAG-AFTRA Strikes,” panelists discussed the 148-day WGA strike and the tenets of the new agreements.
“Our clients are writers and actors, and they are very concerned about AI and frankly about seeing their jobs go to machines,” said Craig Wagner, general counsel and executive vice president for business affairs at Paradigm Talent Agency. “And this is not just a theoretical or academic concern, because we’re already seeing this happen, right? In the modeling industry … some brands have actually come out and said that they’re only going to use AI models for their advertising campaigns. And that’s definitely perceived as a threat to our clients’ livelihood.”
Wagner outlined some of the provisions in the SAG-AFTRA and WGA agreements.
“The first was a minimum (compensation); the second was staffing requirements,” Wagner said. “A lot of shows with these short orders had very few writers on them, and that was a big concern. And, of course, artificial intelligence … that was a huge concern.”
Among the provisions of the agreements was that the Alliance of Motion Picture and Television Producers would not use AI to write or rewrite literary material, and that AI-generated material could not be considered source material.
Another provision allowed writers to use AI in writing their material with the company or studio’s consent. And the final piece of the deal, with respect to AI, was that material written by WGA members couldn’t be used to train AI systems.
Wagner said that the agreement was novel and groundbreaking in many ways.
An essential component of the agreements pertained to adequate consent, which would be required not only at the time of the agreements, said Nicholas Huskins, a senior corporate counsel at Amazon Studios, “but if the studio wanted to use the material for another project, that required them to get consent again.”
Moderator Nedeen Nasser of Nasser Legal asked Tracey Freed, the owner of Freed Law, a key question: “Who owns the IP when AI contributed to the script or visual effect or character performances?”
Freed said it depends on whether AI is being used as a tool under human direction or is creating something on its own.
“The more you are using AI as a tool, the more likely it is that you are going to have authorship rights and can own the copyright,” Freed said.
“I think it is a double-edged sword,” said Marc Bruder, who represents a variety of production and releasing companies and works to get their products to market. “There is a ton of content that can be injected into AI … and there is no way of tracking all of that footage that then creates a new AI product.
“And I couldn’t possibly trace the revenue of an AI product (like that) … And then if the product is in the public domain … and used to create an AI product — who’s going to sue you? No one has the rights and everyone has the rights.”