Industry News

Grant Napear Returning to Sacramento Sports Talk

Former KHTK, Sacramento sports talk host and former Sacramento Kings TV broadcaster Grant Napear is returning to Sacramento sports radio to host the 3:00 pm to 6:00 pm show on Lotus Communications’ sports talk KSAC-AM/K284CM, effective September 2. This comes five years after he was fired by Bonneville’s KHTK, Sacramento and by the Kings following a social media exchange with former Kings player DeMarcus Cousins about the Black Lives Matter movement in which he used the phrase, “all lives matter.” Napear sued Bonneville and lost but is currently appealing that ruling. Napear tells the Bee, “It’s going to be the Grant Napear show that people have listened to for decades. I’m competitive. I’m not only coming back to Sacramento. I’m coming back to win.” Read the Sacramento Bee story here.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick. The FCC banned AI-generated voices in robocalls, and state officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special called “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if the content doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And best of all, don’t clone a real person’s voice unless you have their permission.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Harrison Legal Group or read more at TALKERS.com.