Industry Views

When the Library Talks Back

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine SiriusXM acquires the complete Howard Stern archive – every show, interview, and on-air moment. Months later, it debuts “Howard Stern: The AI Sessions,” a series of new segments created with artificial intelligence trained on that archive. The programming is labeled AI-generated, yet the voice, timing, and style sound like Stern himself.

Owning the recordings might seem to carry the right to create new works from them. In reality, the answer is more complicated – and the music industry offers a useful comparison.

Music Industry Precedent

Sony, Universal, and others have spent hundreds of millions buying music catalogs from artists such as Bob Dylan, Bruce Springsteen, Paul Simon, and Queen. These deals often include both composition rights and master recordings, giving the buyer broad control over licensing and derivative works.

In music, the song and the recording are the assets. In talk content, the defining element is the host’s persona – voice, cadence, and delivery – which changes the legal analysis when creating new material.

Copyright and Persona Rights

Buying a talk archive usually transfers copyright in the recordings and any scripts. That permits rebroadcast, excerpts, and repackaging of original programs.

It does not automatically transfer the host’s right of publicity – control over commercial use of their name, likeness, and in many states, their distinctive voice. In Midler v. Ford Motor Co. (1988), the court ruled that imitating Bette Midler’s voice in a commercial without consent was an unauthorized use of her identity.

This means a company can own the shows without having the right to make new performances in the host’s voice unless the contract clearly grants that right.

The AI Factor

AI technology can replicate a host’s voice, tone, and style with high accuracy, producing entirely new programming.

Outside broadcasting, a recent AI-generated George Carlin special – written by humans but performed by a voice model trained on decades of his work – sparked debate about rights and legacy.

In talk radio, similar AI use could create “new” episodes featuring well-known hosts. Even with clear labeling, right-of-publicity claims may arise if the host or their estate never authorized it. Disclaimers may address consumer confusion but do not remove identity-rights issues.

Why It Matters

This applies to more than national figures. Any broadcaster or podcaster with a substantial archive could face the same issue. Selling or licensing a library could give the buyer the tools to replicate your voice without your participation.

For buyers, the ability to produce new content from archived material has commercial appeal. But without the right to use the host’s voice for new works, it carries significant legal and reputational risk.

Contracts Decide

The key is in the contract:

— Did the talent assign rights to their name, likeness, and voice for future works?
— Is use limited to the original recordings, or does it extend to derivative works?
— Does it address future technologies, including AI?

Older agreements often omit these points, leaving courts to decide. Future contracts will likely address AI directly.

Takeaways

For talent: Know what you are transferring. Copyright ownership does not necessarily include rights to your voice in future works.

For buyers: Owning an archive does not automatically give you the right to create AI-generated new material in the original host’s voice.

For everyone: As AI advances, control over archives will depend on the contracts that govern them.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick: the FCC banned AI-generated voices in robocalls, and state officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI-cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special, “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And above all, avoid cloning real people unless you have their OK.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Harrison Legal Group or read more at TALKERS.com.