Industry Views

Fair Use in 2025: The Courts Draw New Lines

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine an AI trained on millions of books – and a federal judge saying that’s fair use. That’s exactly what happened this summer in Bartz v. Anthropic, a case now shaping how creators, publishers, and tech giants fight over the limits of copyright.

Judges in California have sent a strong signal: training large language models (LLMs) on copyrighted works can qualify as fair use if the material is lawfully obtained. In Bartz, Judge William Alsup compared Anthropic’s use of purchased books to an author learning from past works. That kind of transformation, he said, doesn’t substitute for the original.

But Alsup drew a hard line against piracy. If a dataset includes books from unauthorized “shadow libraries,” the fair use defense disappears. Those claims are still heading to trial in December, underscoring that source matters just as much as purpose.

Two days later, Judge Vince Chhabria reached a similar conclusion in Kadrey v. Meta. He called Meta’s training “highly transformative,” but dismissed the lawsuit because the authors failed to show real market harm. Together, the rulings show that transformation is a strong shield, but it isn’t absolute. Market evidence and lawful acquisition remain decisive.

AI training fights aren’t limited to novelists. The New York Times v. OpenAI case is pressing forward after a judge refused to dismiss claims that OpenAI and Microsoft undermined the paper’s market by absorbing its reporting into AI products. And in Hollywood, Disney and Universal are suing Midjourney, alleging its system lets users generate characters like Spider-Man or Shrek – raising the unsettled question of whether AI outputs themselves can infringe.

The lesson is straightforward: fair use is evolving, but not limitless. Courts are leaning toward protecting transformative uses of content – particularly when it’s lawfully sourced – but remain wary of piracy and economic harm.

That means media professionals can’t assume that sharing content online makes it free for training. Courts consistently recognize that free journalism, interviews, and broadcasts still carry market value through advertising, sponsorship, and brand equity. If AI systems cut into those markets, the fair use defense weakens.

For now, creators should watch the December Anthropic trial and the Midjourney litigation closely. The courts have blessed AI’s right to learn – but they haven’t yet decided how far those lessons can travel once the outputs begin to look and feel like the originals.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com

Industry Views

When “Sharing” Becomes Stealing: TALKERS’ 90-Second Lesson in Fair Use

By Matthew B. Harrison

TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Ninety seconds. That’s all it took. One of the interviews on the TALKERS Media Channel – shot, edited, and published by us – appeared elsewhere online, chopped into jumpy cuts, overlaid with AI-generated video game clips, and slapped with a clickbait title. The credit? A link. The essence of the interview? Repurposed for someone else’s traffic.

TALKERS owns the copyright. Taking 90 seconds of continuous audio and re-editing it is infringement.

Could they argue fair use? Maybe, but the factors cut against them:

  • Purpose: Clickbait, not commentary or parody.
  • Nature: Original journalism leans protective.
  • Amount: Ninety seconds may be the “heart” of the work.
  • Market Effect: If reposts draw views, ad revenue, or SEO, that’s harm.

And here’s the key point: posting free content doesn’t erase its market value. Free journalism still generates reputation, sponsorships, and ad dollars. Courts consistently reject the idea that “free” means “up for grabs.”

Enforcement options exist. A DMCA notice can clear a repost quickly. Repeat offenders risk bans. On-screen branding makes copying obvious, and licenses can set terms like “share with credit, no remix.”

But here’s the hard truth: a takedown won’t stop the AI problem. Once a clip circulates, it’s scraped into datasets training text-to-video and voice models. Deleting the repost doesn’t erase cached or mirrored copies. Think of it like pouring a glass of water into the ocean – you can’t get it back. And to make matters worse, enforcement doesn’t stop at U.S. borders. Different countries have different copyright rules, making “justice” slow, uneven, and rarely satisfying.

That TALKERS interview may now live inside billions of fragments teaching machines how people speak. You can win the takedown battle and still lose the training war. Courts are only starting to address whether scraping is infringement. For now, once it’s ingested, it’s permanent.

Creators face a constant tension: content must spread to grow, but unchecked sharing erodes control. The challenge in 2025 is drawing that line before your work becomes someone else’s “content.”

The law is still on your side – but vigilance matters. Use takedowns when necessary. Brand so the source is clear. Define sharing terms up front. And remember: free doesn’t mean worthless.

The real question isn’t just “Is it fair use?” It’s “Who controls the story?”

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com

Industry Views

Could Your Own Podcast Become Your AI Competitor?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine a listener “talking” to an AI version of you – trained entirely on your old episodes. The bot knows your cadence, your phrases, even your voice. It sounds like you, but it isn’t you.

This isn’t science fiction. With enough content, it’s technically feasible today. A determined developer could transcribe archives, fine-tune a language model, and overlay a cloned voice. The result wouldn’t be perfect, but it would be recognizable.
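To see how little friction is involved, here is a minimal sketch of that pipeline in Python – a hypothetical illustration under stated assumptions, not a recipe. The transcription step uses the real open-source openai-whisper package; the fine-tuning and voice-cloning steps are stubbed out as placeholder functions, since the actual tools (and their terms of service) vary widely.

```python
# Sketch: how an archive could become training material for an "AI host."
# Transcription uses the real openai-whisper package; fine_tune_model()
# and clone_voice() are hypothetical placeholders for steps that real
# tools handle very differently.
from pathlib import Path

import whisper  # pip install openai-whisper


def transcribe_archive(audio_dir: str) -> list[str]:
    """Turn every episode in a folder into plain-text training data."""
    model = whisper.load_model("base")
    transcripts = []
    for episode in sorted(Path(audio_dir).glob("*.mp3")):
        result = model.transcribe(str(episode))
        transcripts.append(result["text"])
    return transcripts


def fine_tune_model(transcripts: list[str]) -> None:
    """Hypothetical stand-in for a real LLM fine-tuning job."""
    raise NotImplementedError("placeholder – real pipelines vary")


def clone_voice(audio_dir: str) -> None:
    """Hypothetical stand-in for a text-to-speech voice-cloning service."""
    raise NotImplementedError("placeholder – real services vary")


if __name__ == "__main__":
    texts = transcribe_archive("episodes/")
    fine_tune_model(texts)    # teach a model the host's phrasing
    clone_voice("episodes/")  # overlay the host's sound
```

The specific tools matter less than the shape of the pipeline: every step is commodity technology, which is exactly why the contract language discussed below matters.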

Whether that’s legal is another question – one circling directly around fair use.

Why It Matters

For most content creators, archives are their most valuable asset. Yet many contracts with networks, distributors, or hosting platforms quietly grant broad rights to use recordings in “new technologies.” That language, once ignored, could be the legal hook to justify training without your permission.

Fair use is the fallback defense. Tech companies argue training is transformative – they aren’t re-broadcasting your show, only using it to teach a machine. But fair use also weighs market harm. If “AI You” pulls listeners or sponsors away from the real thing, that argument weakens considerably.

Not Just Theory

Other industries are already here. AI has generated convincing tracks of Frank Sinatra singing pop hits and “new” stories written in the style of Jane Austen. If that can be done with a few books or albums, thousands of podcast episodes provide more than enough material to train a “host model.”

Talk media is especially vulnerable because its product is already conversational. The line between “fan remix” and “AI imitation” isn’t as wide as it seems.

What You Can Do

This isn’t about panic – it’s about preparation.

— Review your contracts: confirm you own your recordings and transcripts.
— Register your work: enforceable rights are stronger rights.
— Decide your stance: licensing your archives for training might be an opportunity – if you control it.
— Emphasize authenticity: audiences still value the human behind the mic.

The Takeaway

Could your podcast be turned into your competitor? Yes, in theory. Will it happen to you? That depends on your contracts, your protections, and the choices you make.

Fair use may ultimately decide these battles, but “fair” is not the same as safe. Consider this example a reminder: in the AI era, your archive is not just history – it is raw material.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

When the Library Talks Back

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine SiriusXM acquires the complete Howard Stern archive – every show, interview, and on-air moment. Months later, it debuts “Howard Stern: The AI Sessions,” a series of new segments created with artificial intelligence trained on that archive. The programming is labeled AI-generated, yet the voice, timing, and style sound like Stern himself.

Owning the recordings might suggest the right to create new works from them. In reality, the answer is more complicated – and the music industry offers a useful comparison.

Music Industry Precedent

Sony, Universal, and others have spent hundreds of millions buying music catalogs from artists such as Bob Dylan, Bruce Springsteen, Paul Simon, and Queen. These deals often include both composition rights and master recordings, giving the buyer broad control over licensing and derivative works.

In music, the song and the recording are the assets. In talk content, the defining element is the host’s persona – voice, cadence, and delivery – which changes the legal analysis when creating new material.

Copyright and Persona Rights

Buying a talk archive usually transfers copyright in the recordings and any scripts. That permits rebroadcast, excerpts, and repackaging of original programs.

It does not automatically transfer the host’s right of publicity – control over commercial use of their name, likeness, and in many states, their distinctive voice. In Midler v. Ford Motor Co. (1988), the court ruled that imitating Bette Midler’s voice in a commercial without consent was an unauthorized use of her identity.

This means a company can own the shows without having the right to make new performances in the host’s voice unless the contract clearly grants that right.

The AI Factor

AI technology can replicate a host’s voice, tone, and style with high accuracy, producing entirely new programming.

Outside broadcasting, a recent AI-generated George Carlin special – written by humans but performed by a voice model trained on decades of his work – sparked debate about rights and legacy.

In talk radio, similar AI use could create “new” episodes featuring well-known hosts. Even with clear labeling, right-of-publicity claims may arise if the host or their estate never authorized it. Disclaimers may address consumer confusion but do not remove identity-rights issues.

Why It Matters

This applies to more than national figures. Any broadcaster or podcaster with a substantial archive could face the same scenario: selling or licensing a library could give the buyer the tools to replicate your voice without your participation.

For buyers, the ability to produce new content from archived material has commercial appeal. But without the right to use the host’s voice for new works, it carries significant legal and reputational risk.

Contracts Decide

The key is in the contract:

— Did the talent assign rights to their name, likeness, and voice for future works?
— Is use limited to original recordings or extended to derivative works?
— Does it address future technologies, including AI?

Older agreements often omit these points, leaving courts to decide. Future contracts will likely address AI directly.

Takeaways

For talent: Know what you are transferring. Copyright ownership does not necessarily include your future voice.

For buyers: Owning an archive does not automatically give you the right to create AI-generated new material in the original host’s voice.

For everyone: As AI advances, control over archives will depend on the contracts that govern them.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

They Say YOU Infringed – But Do THEY Own the Rights?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

You did everything right – or so you thought. You used a short clip, added commentary, or reshared something everyone else was already posting. Then one day, a notice shows up in your inbox. A takedown. A demand. A legal-sounding, nasty-toned email claiming copyright infringement and demanding payment.

You’re confused. You’re cautious. And maybe you’re already reaching for the fair use defense.

But hold on. Before you argue about what you used, ask something simpler: Does the party accusing you actually own the rights?

Two Main Reasons People Send Copyright Notices

1. They believe they’re right – and they want to fix it. Sometimes the claim is legitimate. A rights-holder sees their content used without permission and takes action. They may send a DMCA takedown, request removal, or ask for a license fee. Whether it’s a clip, an image, or a music bed – the law is on their side if your use wasn’t authorized.
2. They’re casting a wide net – or making a mistake. Other times, you’ve landed in a mass enforcement dragnet. Some companies send thousands of notices hoping a few people will pay – whether or not the claim is strong, or even valid. These are often automated, sometimes sloppy, and occasionally bluffing. The sender may not own the rights. They may not even know if what you used was fair use, public domain, or licensed.

Mistakes happen. Bots misidentify content. Images get flagged that were never protected. Even legitimate copyright holders sometimes act too fast. But once a notice goes out, it can become your problem – unless you respond wisely.

The First Thing to Check Is Ownership

Most creators instinctively argue fair use or say they meant no harm. But those aren’t the first questions a lawyer asks.

The first question is: “Do they have standing to bring the claim?”

In many cases, the answer is unclear or flat-out “no.” Courts have dismissed copyright lawsuits where the claimant couldn’t show ownership or any active licensing interest. If they can’t demonstrate control over the work – and actual market harm – they may not have the right to sue.

What To Do If You Get a Notice

— Don’t panic. Not all claims are valid – and not all claimants are in a position to enforce them.
— Don’t assume fair use will protect you. It might, but only after ownership is clear.
— Don’t engage emotionally. Responding flippantly can escalate things fast.
— Do get help early. A media attorney can help you assess whether the claim is real – and whether the sender has any legal ground at all.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

When One Clip Cuts Two Ways: How Copyright and Defamation Risks Collide

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

A radio (or video podcast) host grabs a viral clip, tosses in some sharp commentary, and shares it online. The goal? Make some noise. The result? A takedown notice for copyright infringement – and then a letter threatening a defamation suit.

Sound far-fetched? It’s not. In today’s media world, copyright misuse and defamation risks often run on parallel tracks – and sometimes crash into each other. They come from different areas of law, but creators are finding themselves tangled up in both over the same piece of content.

Copyright Protects Ownership. Defamation Protects Reputation

It’s easy to think of copyright and defamation as two separate beasts. One guards creative work. The other shields reputation. But when creators use or edit someone else’s content – especially for commentary, parody, or critique – both risks can hit at once.

Take Smith v. Summit Entertainment LLC (2011). Smith wrote an original song. Summit Entertainment slapped him with a false DMCA takedown notice, claiming copyright they didn’t actually own. Smith fought back, suing not just for the bogus takedown but also for defamation, arguing that Summit’s public accusations hurt his reputation. The court said both claims could go forward.

That case shows just how easily copyright claims and defamation threats can pile up when bad information meets bad behavior.

Murphy v. Millennium Radio: A Close Call with a Clear Message

In Murphy v. Millennium Radio Group LLC, a New Jersey radio station scanned a photographer’s work – stripping out his credit – and posted it online without permission. That alone triggered a copyright claim. But the hosts didn’t stop there. They mocked the photographer on-air, which sparked a defamation lawsuit.

Even though the copyright and defamation claims came from different actions – using the photo without permission and trash-talking the photographer – they landed in the same legal fight. It’s a reminder that separate problems can quickly become one big headache.

Why This Double Threat Matters

Fair Use Isn’t a Free Pass on Defamation. Even if you have a solid fair use argument, that won’t protect you if your edits or commentary twist facts or attack someone unfairly.
Public Comments Can Double Your Trouble. The second you speak publicly about how you’re using content – whether you’re bragging about rights you don’t have or taking a shot at someone – you risk adding a defamation claim on top of an IP dispute.
Smart Lawyers Play Both Angles. Plaintiffs know the playbook. They’ll use copyright claims for takedown leverage and defamation claims for reputational damage – sometimes in the same demand letter.
FCC Rules Don’t Cover This. It doesn’t matter if you’re FCC-regulated or a podcaster on your own. These risks come from civil law – and they’re coming for everyone.

The Takeaway

The overlap between copyright and defamation isn’t just a legal footnote – it’s a growing reality. In a world of viral clips, reaction videos, and borrowed content, creators need to watch how they frame and comment on what they use, just as much as whether they have permission to use it in the first place.

Because when one clip cuts two ways, you could take a hit from both directions.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Neutraliars: The Platforms That Edit Like Publishers but Hide Behind Neutrality

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In the golden age of broadcasting, the rules were clear. If you edited the message, you owned the consequences. That was the tradeoff for editorial control. But today’s digital platforms – YouTube, X, TikTok, Instagram – have rewritten that deal. Broadcasters and those who operate within the FCC regulatory framework are paying the price.

These companies claim to be neutral conduits for our content. But behind the curtain, they make choices that mirror the editorial judgment of any news director: flagging clips, muting interviews, throttling reach, and shadow banning accounts. All while insisting they bear no responsibility for the content they carry.

They want the control of publishers without the accountability. I call them neutraliars.

A “neutraliar” is a platform that claims neutrality while quietly shaping public discourse. It edits without transparency, enforces vague rules inconsistently, and hides bias behind shifting community standards.

Broadcasters understand the weight of editorial power. Reputation, liability, and trust come with every decision. But platforms operate under a different set of rules. They remove content for “context violations,” downgrade interviews for being “borderline,” and rarely offer explanations. No appeals. No accountability.

This isn’t just technical policy – it’s a legal strategy. Under Section 230 of the Communications Decency Act, platforms enjoy broad immunity from liability related to user content. What was originally intended to allow moderation of obscene or unlawful material has become a catch-all defense for everything short of outright defamation or criminal conduct.

These companies act like editors when it suits them, curating and prioritizing content. But when challenged, they retreat behind the label of “neutral platform.” Courts, regulators, and lawmakers have mostly let it slide.

But broadcasters shouldn’t.

Neutraliars are distorting the public square. Not through overt censorship, but through asymmetry. Traditional broadcasters play by clear rules – standards of fairness, disclosure, and attribution. Meanwhile, tech platforms make unseen decisions that influence whether a segment is heard, seen, or quietly buried.

So, what’s the practical takeaway?

Don’t confuse distribution with trust.

Just because a platform carries your content doesn’t mean it supports your voice. Every upload is subject to algorithms, undisclosed enforcement criteria, and decisions made by people you’ll never meet. The clip you expected to go viral? Silenced. The balanced debate you aired? Removed for tone. The satire? Flagged for potential harm.

The smarter approach is to diversify your presence. Own your archive. Use direct communication tools – e-mail lists, podcast feeds, and websites you control. Syndicate broadly but never rely solely on one platform. Monitor takedowns and unexplained drops in engagement. These signals matter.
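Monitoring those signals doesn’t have to be elaborate. As a rough sketch – assuming, hypothetically, a CSV export of daily view counts with date and views columns, since every platform’s analytics export differs – a few lines of Python can flag drops against a trailing average:

```python
# Rough sketch: flag days where views fall far below the recent average.
# The CSV layout ("date", "views") and the 50% threshold are illustrative
# assumptions, not a standard.
import csv

WINDOW = 7        # days in the trailing baseline
DROP_RATIO = 0.5  # flag anything below half the baseline


def flag_drops(path: str) -> list[str]:
    with open(path, newline="") as f:
        rows = [(r["date"], int(r["views"])) for r in csv.DictReader(f)]
    flagged = []
    for i in range(WINDOW, len(rows)):
        baseline = sum(v for _, v in rows[i - WINDOW:i]) / WINDOW
        date, views = rows[i]
        if baseline > 0 and views < DROP_RATIO * baseline:
            flagged.append(f"{date}: {views} views vs. ~{baseline:.0f} baseline")
    return flagged


if __name__ == "__main__":
    for alert in flag_drops("views.csv"):
        print(alert)
```

A script like this won’t tell you why reach fell, but it turns a vague suspicion of throttling into a dated record you can act on.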

Platforms will continue to call themselves neutral as long as it protects their business model. But we know better. If a company edits content like a publisher and silences creators like a censor, it should be treated like both.

And when you get the inevitable takedown notice wrapped in vague policy language and polished PR spin, keep one word in mind.

Neutraliars.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick. The FCC banned AI robocalls. State officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special called “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And safest of all, avoid cloning real people unless you have their OK.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

When the Algorithm Misses the Mark: What the Walters v. OpenAI Case Means for Talk Hosts

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In a ruling that should catch the attention of every talk host and media creator dabbling in AI, a Georgia court has dismissed “Armed American Radio” syndicated host Mark Walters’ defamation lawsuit against OpenAI. The case revolved around a disturbing but increasingly common glitch: a chatbot “hallucinating” false but believable information.

The Happenings: A journalist asked ChatGPT to summarize a real court case. Instead, the AI invented a fictional lawsuit accusing Walters of embezzling from the Second Amendment Foundation — a group that has never employed him. The journalist spotted the error and never published the inaccurate information. But the damage, at least emotionally and reputationally, was done. That untruth was out there, and Walters sued for defamation.

Last week, the court dismissed the case. It determined Walters was a public figure, and as such, he had to prove “actual malice” — that OpenAI knowingly or recklessly published falsehoods. He couldn’t. And under the court’s reasoning, he may never be able to.

The judge also emphasized that the false information was never shared publicly. It stayed within a private conversation between the journalist and ChatGPT. No dissemination, no defamation.

But while OpenAI may have escaped liability, the ruling raises serious questions for the rest of the content-creation space.

What This Means for Talk Hosts

Let’s be honest: AI tools like ChatGPT are already part of the media ecosystem. Hosts use them to summarize articles, brainstorm show topics, generate ad copy, and even suggest guest questions. They’re efficient — and also dangerous.

This case shows just how easily AI can generate falsehoods with confidence and detail. If a host were to read something like that hallucinated lawsuit on air, without verifying it, the legal risk would shift. It wouldn’t be the AI company on the hook — it would be the broadcaster who repeated it.

Key Lessons

  1. AI is not a source.
    It’s a starting point. Just like a tip from a caller or a line on social media, AI-generated content must be verified before use.
  2. Public figures are more exposed.
    The legal system gives less protection to people in the public eye — like talk hosts — and requires a higher burden of proof in defamation claims. That cuts both ways.
  3. Disclosure helps.
    OpenAI’s disclaimers about potential inaccuracies helped them in court. On air, disclosing when you use AI can offer similar protection — and builds trust with your audience.
  4. Editorial judgment still rules.
    No matter how fast or slick AI gets, it doesn’t replace a producer’s instincts or a host’s responsibility.

Bottom line: the lawsuit may be over, but the conversation is just beginning. The more we rely on machines to shape our words, the more we need to sharpen our filters. Because when AI gets it wrong, the real fallout hits the human behind the mic.

And for talk hosts, that means the stakes are personal. Your credibility, your syndication, your audience trust — none of it can be outsourced to an algorithm. AI might be a tool in the kit, but editorial judgment is still the sharpest weapon in your arsenal. Use it. Or risk learning the hard way what Mark Walters just did. Walters has yet to comment on what steps – if any – he and his lawyers will take next.

TALKERS publisher Michael Harrison issued the following comment regarding the Georgia ruling: “In the age of internet ‘influencers’ and media personalities with various degrees of clout operating within the same space, the definition of ‘public figure’ is far less clear than in earlier times. The media and courts must revisit this striking change. Also, in an era of self-serving political weaponization, this ruling opens the door to ‘big tech’ having enormous, unbridled power in influencing the circumstances of news events and reputations to meet its own goals and agendas.”

Matthew B. Harrison is a media attorney and executive producer specializing in broadcast law, intellectual property, and First Amendment issues. He serves as VP/Associate Publisher of TALKERS magazine and is a senior partner at Harrison Media Law. He also leads creative development at Goodphone Communications.

Industry News

Outstanding Speakers Joining “GENERATIONS 2025” Agenda

The lineup of industry speakers set to appear at the GENERATIONS 2025 conference, presented by TALKERS at the forthcoming Intercollegiate Broadcasting System (IBS) convention – IBSNYC 2025 – continues to grow.

A stellar lineup of speakers has already signed on for this groundbreaking industry event, including (in alphabetical order): Vince Benedetto, CEO, Bold Gold Media Group; Chris Berry, VP News/Talk/Sports, iHeartMedia; Scot Bertram, General Manager, WRFH, Hillsdale College, Hillsdale, MI / Lecturer In Journalism; Mike Gallagher, talk show host, Salem Radio Network; Dom Giordano, talk show host, WPHT, Philadelphia; Lee Harris, Director of Integrated Operations, NewsNation / WGN, Chicago; Michael Harrison, Publisher, TALKERS; Matthew B. Harrison, Esq., VP/Associate Publisher, TALKERS; Harrison Media Law, Senior Partner; Harry Hurley, morning talk show host, WPG, Atlantic City; Jeff Katz, talk show host, WRVA, Richmond, VA; Chad Lopez, President, WABC, New York, Red Apple Media Group; John T. Mullen, general manager, WRHU-FM/WRHU.org, Hofstra University, Hempstead, NY; Walter Sabo (a.k.a. Walter M Sterling), consultant / talk show host / WPHT, Philadelphia / Talk Media Network; Rich Valdés, talk show host, Westwood One; with several more to be announced in the next few days. See agenda and accompanying stories below.

Sheraton Times Square New York Hotel
New York East Room
Saturday March 8, 2025
12:30 pm – 4:30 pm

AGENDA

12:30 – 1:00 pm Keynote Address “Welcome to the Brave New World”

Speakers:
Michael Harrison, Publisher, TALKERS
Matthew B. Harrison, Esq., VP/Associate Publisher, TALKERS; Harrison Media Law, Senior Partner

1:10 – 1:40 pm Fireside Chat “Setting the Stage”

Facilitator: Michael Harrison, Publisher, TALKERS
Special Guest: Chad Lopez, President, WABC, New York, Red Apple Media Group

1:50 – 2:20 pm Discussion: “Launching and Managing a Career in a Changing Media Industry”

Moderator: Dom Giordano, talk show host, WPHT, Philadelphia
Speaker: John T. Mullen, general manager, WRHU-FM/WRHU.org, Hofstra University, Hempstead, NY
Speaker: TBA
Speaker: TBA

2:30 – 3:00 pm Discussion: “Old School/New School/Next School – Learning from Each Other”

Moderator:  Harry Hurley, morning talk show host, WPG, Atlantic City
Speaker:  Vince Benedetto, CEO, Bold Gold Media Group
Speaker: Scot Bertram, General Manager, WRFH, Hillsdale College, Hillsdale, MI / Lecturer In Journalism
Speaker: Walter Sabo (a.k.a. Walter M Sterling), consultant / talk show host / WPHT, Philadelphia / Talk Media Network

3:10 – 3:40 pm Discussion: “Radio’s Place in a Diverse, Digital World”

Moderator: TBA
Speaker: Mike Gallagher, talk show host, Salem Radio Network
Speaker: Rich Valdés, talk show host, Westwood One
Speaker: TBA

3:50 – 4:20 pm Discussion: “Finding Truth in an Age of Misinformation”

Moderator: Lee Harris, Director of Integrated Operations, NewsNation / WGN, Chicago
Speaker:  Chris Berry, VP News/Talk/Sports, iHeartMedia
Speaker: Jeff Katz, talk show host, WRVA, Richmond, VA
Speaker: TBA

4:20 – 4:30 pm Wrap Up:  Group Chat