Industry Views

Neutraliars: The Platforms That Edit Like Publishers but Hide Behind Neutrality

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In the golden age of broadcasting, the rules were clear. If you edited the message, you owned the consequences. That was the tradeoff for editorial control. But today’s digital platforms – YouTube, X, TikTok, Instagram – have rewritten that deal. Broadcasters and those who operate within the FCC regulatory framework are paying the price.

These companies claim to be neutral conduits for our content. But behind the curtain, they make choices that mirror the editorial judgment of any news director: flagging clips, muting interviews, throttling reach, and shadow banning accounts. All while insisting they bear no responsibility for the content they carry.

They want the control of publishers without the accountability. I call them neutraliars.

A “neutraliar” is a platform that claims neutrality while quietly shaping public discourse. It edits without transparency, enforces vague rules inconsistently, and hides bias behind shifting community standards.

Broadcasters understand the weight of editorial power. Reputation, liability, and trust come with every decision. But platforms operate under a different set of rules. They remove content for “context violations,” downgrade interviews for being “borderline,” and rarely offer explanations. No appeals. No accountability.

This isn’t just technical policy – it’s a legal strategy. Under Section 230 of the Communications Decency Act, platforms enjoy broad immunity from liability related to user content. What was originally intended to allow moderation of obscene or unlawful material has become a catch-all defense for everything short of outright defamation or criminal conduct.

These companies act like editors when it suits them, curating and prioritizing content. But when challenged, they retreat behind the label of “neutral platform.” Courts, regulators, and lawmakers have mostly let it slide.

But broadcasters shouldn’t.

Neutraliars are distorting the public square. Not through overt censorship, but through asymmetry. Traditional broadcasters play by clear rules – standards of fairness, disclosure, and attribution. Meanwhile, tech platforms make unseen decisions that influence whether a segment is heard, seen, or quietly buried.

So, what’s the practical takeaway?

Don’t confuse distribution with trust.

Just because a platform carries your content doesn’t mean it supports your voice. Every upload is subject to algorithms, undisclosed enforcement criteria, and decisions made by people you’ll never meet. The clip you expected to go viral? Silenced. The balanced debate you aired? Removed for tone. The satire? Flagged for potential harm.

The smarter approach is to diversify your presence. Own your archive. Use direct communication tools – e-mail lists, podcast feeds, and websites you control. Syndicate broadly but never rely solely on one platform. Monitor takedowns and unexplained drops in engagement. These signals matter.
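What can that monitoring look like in practice? Here is a minimal sketch in Python, assuming you export daily play or view counts to a CSV file. The file name (“daily_plays.csv”) and the column names (“date,” “plays”) are hypothetical placeholders for whatever your analytics dashboard actually exports; this is an illustration, not a turnkey tool.

    import csv

    def flag_engagement_drops(csv_path, drop_ratio=0.5, window=7):
        # Read (date, plays) pairs from a daily analytics export.
        # "date" and "plays" are hypothetical column names -- match your export.
        with open(csv_path, newline="") as f:
            rows = [(r["date"], int(r["plays"])) for r in csv.DictReader(f)]

        flagged = []
        for i in range(window, len(rows)):
            trailing = [plays for _, plays in rows[i - window:i]]
            avg = sum(trailing) / window
            date, plays = rows[i]
            # Flag any day that falls well below the trailing average.
            if avg > 0 and plays < drop_ratio * avg:
                flagged.append((date, plays, round(avg, 1)))
        return flagged

    if __name__ == "__main__":
        for date, plays, avg in flag_engagement_drops("daily_plays.csv"):
            print(f"{date}: {plays} plays vs. trailing average {avg} - worth a look")

A flagged day doesn’t prove throttling, but it tells you which dates deserve a closer look before the pattern disappears into the archive.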

Platforms will continue to call themselves neutral as long as it protects their business model. But we know better. If a company edits content like a publisher and silences creators like a censor, it should be treated like both.

And when you get the inevitable takedown notice wrapped in vague policy language and polished PR spin, keep one word in mind.

Neutraliars.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick. The FCC banned AI robocalls. State officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special, “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And best of all, avoid cloning real people unless you have their OK.
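Labeling shouldn’t depend on someone remembering to do it on a busy production day. As one hedged sketch – assuming a workflow where episode descriptions are generated programmatically; the function and field names here are hypothetical – a few lines of Python can guarantee that any episode using a synthetic voice carries a plain-language notice wherever the feed is syndicated:

    # A plain-language notice that travels with the episode description.
    AI_DISCLOSURE = ("This episode contains an AI-generated voice. "
                     "It is a synthetic recreation, not the real person.")

    def build_description(summary: str, uses_ai_voice: bool) -> str:
        # Prepend the disclosure whenever a synthetic voice is used, so the
        # label appears on every platform that carries the feed.
        if uses_ai_voice:
            return f"{AI_DISCLOSURE}\n\n{summary}"
        return summary

    print(build_description(
        "A parody segment reimagining a classic monologue.",
        uses_ai_voice=True,
    ))

The point isn’t the code; it’s that disclosure works best when it is automatic rather than optional.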

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Harrison Legal Group or read more at TALKERS.com.

Industry Views

Mark Walters v. OpenAI: A Landmark Case for Spoken Word Media

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

When Georgia-based, nationally syndicated radio personality and Second Amendment advocate Mark Walters (longtime host of “Armed American Radio”) learned that ChatGPT had falsely claimed he was involved in a criminal embezzlement scheme, he did what few in the media world have dared to do. While others stayed silent, Walters stood up and took on one of the most powerful tech companies in the world in a court of law.

Taking the Fight to Big Tech

By filing suit against OpenAI, the creator of ChatGPT, Walters became the first person in the United States to test the boundaries of defamation law in the age of generative artificial intelligence.

His case was not simply about clearing his name. It was about drawing a line. Can artificial intelligence generate and distribute false and damaging information about a real person without any legal accountability?

While the court ultimately ruled in OpenAI’s favor on the specific legal elements at issue, the impact of this case is far from finished. Walters’ lawsuit broke new ground in several important ways:

— It was the first known defamation lawsuit filed against an AI developer based on content generated by an AI system.
— It brought into the open critical questions about responsibility, accuracy, and liability when AI systems are used to produce statements that sound human but carry no editorial oversight.
— It added fuel to the conversation about the effectiveness of “use at your own risk” disclaimers when real-world reputational damage hangs in the balance.

Implications for the Radio and Podcasting Community

For spoken-word creators on any platform – terrestrial, satellite, or the open internet – this case is a wake-up call, a canary in the coal mine. Many shows rely on AI tools for research, summaries, voice generation, or even show scripts. But what happens when those tools get it wrong? (Beyond being embarrassed, and in some cases fined or terminated.) And worse, what happens when those errors affect real people?

The legal system, as has often been noted, is still playing catch-up. Although the court ruled that the fabricated ChatGPT statement lacked the necessary elements of defamation under Georgia law, including provable harm and demonstrable fault, the decision highlighted how unprepared current frameworks are for this fast-moving, voice-driven digital landscape.

Where the Industry Goes from Here

Walters’ experience points to the urgent need for new protection and clearer guidelines:

— Creators deserve assurance that the tools they use are built with accountability in mind, an assurance that extends to both copyright infringement and defamation.
— Developers must be more transparent about how their systems operate and the risks they create, so that bias can be identified and counteracted.
— Policymakers need to bring clarity to who bears responsibility when software, not a person, becomes the speaker.

A Case That Signals a Larger Reckoning

Mark Walters may not have won this round in court, but his decision to take on a tech giant helped illuminate how quickly generative AI can create legal, ethical, and reputational risks for anyone with a public presence. For those of us working in media, especially in formats built on trust, voice, and credibility, his case should not be ignored.

“This wasn’t about money. This was about the truth,” Walters tells TALKERS. “If we don’t draw a line now, there may not be one left to draw.”

To listen to a longform interview with Mark Walters conducted by TALKERS publisher Michael Harrison, please click here.

Media attorney, Matthew B. Harrison is VP/Associate Publisher at TALKERS; Senior Partner at Harrison Media Law; and Executive Producer at Goodphone Communications. He is available for private consultation and media industry contract representation. He can be reached by phone at 724-484-3529 or email at matthew@harrisonmedialaw.com. He teaches “Legal Issues in Digital Media” and serves as a regular contributor to industry discussions on fair use, AI, and free expression.

Industry News

Matthew B. Harrison Holds Court Over Section 230 Explanation for Law Students at 1st Circuit Court of Appeals in Boston

As an attorney with extensive front-line expertise in media law, TALKERS associate publisher and Harrison Legal Group senior partner Matthew B. Harrison (pictured at right on the bench) was selected to hold court as “acting” judge in a moot trial involving Section 230 for law students engaged in a national competition last evening (2/22) at the 1st Circuit Court of Appeals in Boston, MA. The American Bar Association’s Law Student Division holds a number of annual national moot court competitions. One such event, the National Appellate Advocacy Competition, emphasizes the development of oral advocacy skills through a realistic appellate experience in which competitors argue a hypothetical appeal to the United States Supreme Court. This year’s legal question focused on the Communications Decency Act – “Section 230” – and the application of its exception from liability for internet service providers for the acts of third parties to a realistic scenario: a journalist’s photo-turned-meme being used in advertising (CBD, ED treatment, gambling) without permission or compensation, in violation of applicable state right of publicity statutes. Harrison tells TALKERS, “We are at one of those sensitive times in history where technology is changing at a quicker pace than the legal system and legislators can keep up with – particularly at the consequential juncture of big tech and mass communications. I was impressed and heartened by the articulateness and grasp of the Section 230 issue displayed by the law students arguing before me.”

Industry News

Michael Harrison Says AI is One of the Most Important Talk Topics of Our Times

TALKERS founder Michael Harrison has kicked off a nationwide guesting tour of talk shows promoting discussion of the upside and downside of AI in conjunction with the release of the new song, “I Got a Line in New York City,” by the long-established classic rock group, Gunhill Road. Harrison performs lead vocals on the track with band members Steve Goldrich, Paul Reisch, and Brian Koonin. The music video of the song (produced by Harrison’s son and TALKERS associate publisher Matthew B. Harrison) has been described as a computer’s “fever dream about the Big Apple.” Although the music is totally organic, all of the visual graphics on the video were created with the assistance of generative artificial intelligence. Harrison says, “There’s huge interest in the topic of AI including the existential issues of its potential impact on our species. In the art community, debate is raging over whether AI enhances originality and creativity or if it is ushering in the death of individual artists and the role they play in the humanities.” See that video here.

Harrison launched the tour late last week appearing on the Rich Valdes show on Westwood One and has subsequently appeared on network programs hosted by Doug Stephan, Dr. Daliah Wachs, and WABC’s Frank Morano, as well as Harry Hurley on WPG, Atlantic City; Todd Feinburg on WTIC-AM, Hartford; and Michael Zwerling on KSCO, Santa Cruz. WOR, New York has posted the video and an accompanying story here.

To book Michael Harrison, please call Barbara Kurland at 413-565-5413 or email info@talkers.com.

Industry News

Panel Discussion to Tackle the Talk Media Industry’s Key Concerns

One of the most popular sessions at the annual TALKERS Conference is “The Big Picture” panel, and this year’s planned installment of the discussion promises to continue in that tradition of perspective and pertinence. The panel will be introduced by TALKERS associate publisher/media attorney Matthew B. Harrison, Esq. and moderated by TALKERS publisher Michael Harrison. Panelists include (in alphabetical order): Arthur Aidala, Esq., founding partner, Aidala, Bertuna & Kamins, PC/host, AM 970 The Answer, New York; Dr. Asa Andrew, CEO/host, The Doctor Asa Network; Lee Habeeb, host/producer, Our American Stories; Lee Harris, director of Integrated Operations, NewsNation; and Kraig Kitchin, CEO, Sound Mind, LLC/chairman, Radio Hall of Fame. One more panelist has yet to be named. The issues that the session will cover include: the existential cultural, technological, and financial issues facing radio and talk media; the medium’s role in the national political conversation and culture wars; the impact of artificial intelligence on intellectual property and creative originality; the evolution of ethics, justice, and journalism in American society; and an examination of potential topics and concerns that will keep the medium vibrant as we move deeper into the 21st century. “It’s all about perspective,” says panel moderator Michael Harrison. “If we are to survive as an industry as well as a community, we have to step back and look at the big picture within which we operate… and it is getting bigger and bigger with each passing moment. We must avoid becoming smaller and smaller.” More than 60 luminaries from the talk media industry are set to speak at a power-packed day of fireside chats, solo addresses, panel discussions, workshops, award presentations, new equipment showcases, and endless networking opportunities. TALKERS 2023 is nearing an advance sellout. See more about the agenda, registration, sponsorship, and hotel information here.