Industry Views

Are Your AI Logos Actually Kryptonite?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Superman just flew into court – not against Lex Luthor, but against Midjourney. Warner Bros. Discovery is suing the AI platform, accusing it of stealing the studio’s crown jewels: Superman, Batman, Wonder Woman, Scooby-Doo, Bugs Bunny, and more.

At first glance, you might shrug. “That’s Warner Bros. vs. Silicon Valley – what does it have to do with my talk media show?” Here’s the answer: everything. If you or your producer are using Midjourney, DALL·E, or Stable Diffusion for logos, promos, or podcast cover art, you’re standing in the same blast radius.

AI Isn’t Neutral Paint

The romance of AI graphics is speed and cost. Need a logo in five minutes? A flyer for a station event? A podcast cover? Fire up an AI tool and you’re done.

But those images don’t come from a blank canvas. They come from models trained on copyrighted works – often without permission. Warner Bros. alleges that Midjourney not only trained on its characters but knowingly let users download knockoff versions.

If Warner wins – or even squeezes a settlement – AI platforms will clamp down. Suddenly, the “free” art you’ve been posting may not just vanish; it may become a liability.

Too Small to Matter? Think Again

Here’s the legal catch: infringement claims don’t scale by size. A podcaster with a Facebook page is just as liable as a network if the artwork copies protected content.

It’s easy to imagine a rival, competitor, or ex-producer spotting an AI-made graphic that looks “too much like” something else – and firing off a takedown. Once that happens, you’re judged not by intent but by what you published.

Unlike FCC guardrails for on-air speech, there’s no regulator to clarify. This is civil court. You versus the claimant – and the billable hours start immediately.

Even Elon Musk Just Got Burned

Neuralink – Elon Musk’s brain-computer startup – just lost its bid to trademark the words “Telepathy” and “Telekinesis.” Someone else filed first.

If Musk’s lawyers can’t secure simple branding terms, what chance does your station or company have if you wait until after launch to file your new show name? Timing isn’t just strategy; it’s survival.

The Playbook

  1. Audit Your AI Use. Know which graphics and promos are AI-generated, and from what platform.
  2. File Early. Register show names and logos before the launch hype.
  3. Budget for Ownership. A real designer who assigns you copyright is safer than a bot with murky training data.

The Bottom Line

AI may feel like a shortcut, but in media law it’s a trapdoor. If Warner Bros. will defend Superman from an AI platform, they won’t ignore your podcast artwork if it looks too much like the Man of Steel.

Big or small, broadcaster or podcaster – if your AI Superman looks like theirs, you’re flying straight into Kryptonite.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Fair Use in 2025: The Courts Draw New Lines

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine an AI trained on millions of books – and a federal judge saying that’s fair use. That’s exactly what happened this summer in Bartz v. Anthropic, a case now shaping how creators, publishers, and tech giants fight over the limits of copyright.

Judges in California have sent a strong signal: training large language models (LLMs) on copyrighted works can qualify as fair use if the material is lawfully obtained. In Bartz, Judge William Alsup compared Anthropic’s use of purchased books to an author learning from past works. That kind of transformation, he said, doesn’t substitute for the original.

But Alsup drew a hard line against piracy. If a dataset includes books from unauthorized “shadow libraries,” the fair use defense disappears. Those claims are still heading to trial in December, underscoring that source matters just as much as purpose.

Two days later, Judge Vince Chhabria reached a similar conclusion in Kadrey v. Meta. He called Meta’s training “highly transformative,” but dismissed the lawsuit because the authors failed to show real market harm. Together, the rulings show that transformation is a strong shield, but it isn’t absolute. Market evidence and lawful acquisition remain decisive.

AI training fights aren’t limited to novelists. The New York Times v. OpenAI case is pressing forward after a judge refused to dismiss claims that OpenAI and Microsoft undermined the paper’s market by absorbing its reporting into AI products. And in Hollywood, Disney and Universal are suing Midjourney, alleging its system lets users generate characters like Spider-Man or Shrek – raising the unsettled question of whether AI outputs themselves can infringe.

The lesson is straightforward: fair use is evolving, but not limitless. Courts are leaning toward protecting transformative uses of content—particularly when it’s lawfully sourced – but remain wary of piracy and economic harm.

That means media professionals can’t assume that sharing content online makes it free for training. Courts consistently recognize that free journalism, interviews, and broadcasts still carry market value through advertising, sponsorship, and brand equity. If AI systems cut into those markets, the fair use defense weakens.

For now, creators should watch the December Anthropic trial and the Midjourney litigation closely. The courts have blessed AI’s right to learn – but they haven’t yet decided how far those lessons can travel once the outputs begin to look and feel like the originals.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com

Industry Views

When “Sharing” Becomes Stealing: TALKERS’ 90-Second Lesson in Fair Use

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Ninety seconds. That’s all it took. One of the interviews on the TALKERS Media Channel – shot, edited, and published by us – appeared elsewhere online, chopped into jumpy cuts, overlaid with AI-generated video game clips, and slapped with a clickbait title. The credit? A link. The essence of the interview? Repurposed for someone else’s traffic.

TALKERS owns the copyright. Taking 90 seconds of continuous audio and re-editing it is infringement.

Could they argue fair use? Maybe, but the factors cut against them:

  • Purpose: Clickbait, not commentary or parody.
  • Nature: Original journalism leans protective.
  • Amount: Ninety seconds may be the “heart” of the work.
  • Market Effect: If reposts draw views, ad revenue, or SEO, that’s harm.

And here’s the key point: posting free content doesn’t erase its market value. Free journalism still generates reputation, sponsorships, and ad dollars. Courts consistently reject the idea that “free” means “up for grabs.”

Enforcement options exist. A DMCA notice can clear a repost quickly. Repeat offenders risk bans. On-screen branding makes copying obvious, and licenses can set terms like “share with credit, no remix.”

But here’s the hard truth: a takedown won’t stop the AI problem. Once a clip circulates, it’s scraped into datasets training text-to-video and voice models. Deleting the repost doesn’t erase cached or mirrored copies. Think of it like pouring a glass of water into the ocean – you can’t get it back. And to make matters worse, enforcement doesn’t stop at U.S. borders. Different countries have different copyright rules, making “justice” slow, uneven, and rarely satisfying.

That TALKERS interview may now live inside billions of fragments teaching machines how people speak. You can win the takedown battle and still lose the training war. Courts are only starting to address whether scraping is infringement. For now, once it’s ingested, it’s permanent.

Creators face a constant tension: content must spread to grow, but unchecked sharing erodes control. The challenge in 2025 is drawing that line before your work becomes someone else’s “content.”

The law is still on your side – but vigilance matters. Use takedowns when necessary. Brand so the source is clear. Define sharing terms up front. And remember: free doesn’t mean worthless.

The real question isn’t just “Is it fair use?” It’s “Who controls the story?”

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com

Industry Views

Could Your Own Podcast Become Your AI Competitor?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine a listener “talking” to an AI version of you – trained entirely on your old episodes. The bot knows your cadence, your phrases, even your voice. It sounds like you, but it isn’t you.

This isn’t science fiction. With enough content, it’s technically feasible today. A determined developer could transcribe archives, fine-tune a language model, and overlay a cloned voice. The result wouldn’t be perfect, but it would be recognizable.

Whether that’s legal is another question – one circling directly around fair use.

Why It Matters

For most content creators, archives are their most valuable asset. Yet many contracts with networks, distributors, or hosting platforms quietly grant broad rights to use recordings in “new technologies.” That language, once ignored, could be the legal hook to justify training without your permission.

Fair use is the fallback defense. Tech companies argue training is transformative – they aren’t re-broadcasting your show, only using it to teach a machine. But fair use also weighs market harm. If “AI You” pulls listeners or sponsors away from the real thing, that argument weakens considerably.

Not Just Theory

Other industries are already here. AI has generated convincing tracks of Frank Sinatra singing pop hits and “new” stories written in the style of Jane Austen. If that can be done with a few books or albums, thousands of podcast episodes provide more than enough material to train a “host model.”

Talk media is especially vulnerable because its product is already conversational. The line between “fan remix” and “AI imitation” isn’t as wide as it seems.

What You Can Do

This isn’t about panic – it’s about preparation.

— Review your contracts: confirm you own your recordings and transcripts.
— Register your work: enforceable rights are stronger rights.
— Decide your stance: licensing your archives for training might be an opportunity – if you control it.
— Emphasize authenticity: audiences still value the human behind the mic.

The Takeaway

Could your podcast be turned into your competitor? Yes, in theory. Will it happen to you? That depends on your contracts, your protections, and the choices you make.

Fair use may ultimately decide these battles, but “fair” is not the same as safe. Consider this example a reminder: in the AI era, your archive is not just history – it is raw material.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

When the Library Talks Back

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Imagine SiriusXM acquires the complete Howard Stern archive – every show, interview, and on-air moment. Months later, it debuts “Howard Stern: The AI Sessions,” a series of new segments created with artificial intelligence trained on that archive. The programming is labeled AI-generated, yet the voice, timing, and style sound like Stern himself.

Owning the recordings might suggest the right to create new works from them. In reality, the answer is more complicated – and the music industry offers a useful comparison.

Music Industry Precedent

Sony, Universal, and others have spent hundreds of millions buying music catalogs from artists such as Bob Dylan, Bruce Springsteen, Paul Simon, and Queen. These deals often include both composition rights and master recordings, giving the buyer broad control over licensing and derivative works.

In music, the song and the recording are the assets. In talk content, the defining element is the host’s persona – voice, cadence, and delivery – which changes the legal analysis when creating new material.

Copyright and Persona Rights

Buying a talk archive usually transfers copyright in the recordings and any scripts. That permits rebroadcast, excerpts, and repackaging of original programs.

It does not automatically transfer the host’s right of publicity – control over commercial use of their name, likeness, and in many states, their distinctive voice. In Midler v. Ford Motor Co. (1988), the court ruled that imitating Bette Midler’s voice in a commercial without consent was an unauthorized use of her identity.

This means a company can own the shows without having the right to make new performances in the host’s voice unless the contract clearly grants that right.

The AI Factor

AI technology can replicate a host’s voice, tone, and style with high accuracy, producing entirely new programming.

Outside broadcasting, a recent AI-generated George Carlin special – written by humans but performed by a voice model trained on decades of his work – sparked debate about rights and legacy.

In talk radio, similar AI use could create “new” episodes featuring well-known hosts. Even with clear labeling, right-of-publicity claims may arise if the host or their estate never authorized it. Disclaimers may address consumer confusion but do not remove identity-rights issues.

Why It Matters

This applies to more than national figures. Any broadcaster or podcaster with a substantial archive could face it. Selling or licensing a library could give the buyer the tools to replicate your voice without your participation.

For buyers, the ability to produce new content from archived material has commercial appeal. But without the right to use the host’s voice for new works, it carries significant legal and reputational risk.

Contracts Decide

The key is in the contract:

— Did the talent assign rights to their name, likeness, and voice for future works?
— Is use limited to original recordings or extended to derivative works?
— Does it address future technologies, including AI?

Older agreements often omit these points, leaving courts to decide. Future contracts will likely address AI directly.

Takeaways

For talent: Know what you are transferring. Copyright ownership does not necessarily include your future voice.

For buyers: Owning an archive does not automatically give you the right to create AI-generated new material in the original host’s voice.

For everyone: As AI advances, control over archives will depend on the contracts that govern them.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

They Say YOU Infringed – But Do THEY Own the Rights?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

You did everything right – or so you thought. You used a short clip, added commentary, or reshared something everyone else was already posting. Then one day, a notice shows up in your inbox. A takedown. A demand. A legal-sounding, nasty-toned email claiming copyright infringement and asking for payment.

You’re confused. You’re cautious. And maybe you’re already reaching for the fair use defense.

But hold on. Before you argue about what you used, ask something simpler: Does the party accusing you actually own the rights?

Two Main Reasons People Send Copyright Notices

1. They believe they’re right – and they want to fix it.  Sometimes the claim is legitimate. A rights-holder sees their content used without permission and takes action. They may send a DMCA takedown, request removal, or ask for a license fee. Whether it’s a clip, an image, or a music bed – the law is on their side if your use wasn’t authorized.
2. They’re casting a wide net – or making a mistake. Other times, you’ve landed in a mass enforcement dragnet. Some companies send thousands of notices hoping a few people will pay – whether or not the claim is strong, or even valid. These are often automated, sometimes sloppy, and occasionally bluffing. The sender may not own the rights. They may not even know if what you used was fair use, public domain, or licensed.

Mistakes happen. Bots misidentify content. Images get flagged that were never protected. Even legitimate copyright holders sometimes act too fast. But once a notice goes out, it can become your problem – unless you respond wisely.

The First Thing to Check Is Ownership

Most creators instinctively argue fair use or say they meant no harm. But those aren’t the first questions a lawyer asks.

The first question is: “Do they have standing to bring the claim?”

In many cases, the answer is unclear or flat-out “no.” Courts have dismissed copyright lawsuits where the claimant couldn’t show ownership or any active licensing interest. If they can’t demonstrate control over the work – and actual market harm – they may not have the right to sue.

What To Do If You Get a Notice

— Don’t panic. Not all claims are valid – and not all claimants are in a position to enforce them.
— Don’t assume fair use will protect you. It might, but only after ownership is clear.
— Don’t engage emotionally. Responding flippantly can escalate things fast.
— Do get help early. A media attorney can help you assess whether the claim is real – and whether the sender has any legal ground at all.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Just Because You Found It Online Doesn’t Mean You Can Use It

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

A New Jersey radio station thought it was just being clever online. The station scanned a photo from New Jersey Monthly, cropped out the photographer’s credit line, and posted it on Facebook – inviting listeners to edit and reshare it for fun, an attempt to engage listeners “with more than just their ears.”

But that station, WKXW 101.5, ended up in federal court.

Photographer Peter Murphy sued for copyright infringement and removal of attribution. The Third Circuit ruled against the station – finding that the image had been used without permission, that the credit had been stripped, and that the photographer’s ability to license his work had been damaged.

It wasn’t fair use. It was infringement.

Fair Use Won’t Save You from Getting Sued

Fair use isn’t a free pass – it’s a defense. That means someone’s already accused you of infringement, and now it’s on you to justify it.

Even when it works, fair use still costs time and money. In the WKXW case, the station used the entire photo, failed to transform it, and encouraged widespread online sharing. The court saw that as market harm – one of the most important fair use factors.

And don’t assume you’re safe just because it wasn’t part of the broadcast. Courts have made clear that even social media posts by broadcasters can undermine the value of the original and trigger liability.

Don’t Ignore It Just Because It Feels Small

In my own experience with clients fending off these kinds of claims, sometimes the claim is obviously legitimate. Other times it’s a bluff. But even bogus claims can cost you if you don’t take them seriously from the beginning.

License It, Link to It, or Leave It

If you didn’t create it or license it, don’t assume it’s fair game. Look for content with clear reuse rights. Better yet – link to the source instead of copying it.

Because if a copyright holder comes after you, your intentions won’t matter. Only your rights will.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

In the Age of Blogs, Podcasts, and Substack, Defamation Law is Asking: How Public is Too Public?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Mark Walters didn’t expect to lose private-figure legal protections over something he never talked about – especially since the thing he never talked about never even happened. A nationally syndicated radio host and outspoken Second Amendment advocate, Walters is publicly known, but in a specific lane. He never discussed nonprofits, financial misconduct, or legal ethics. Yet when ChatGPT hallucinated a claim that he had embezzled from a charity, a Georgia court ruled he was a public figure – and dismissed his defamation suit.

The logic? Walters had a platform, a following, and a history of public commentary. That was enough. The court held that his general media presence elevated him to public-figure status, even though the allegedly defamatory statement had nothing to do with the subject matter of his actual work. He wasn’t defamed about what he’s known for – but his visibility was used against him anyway.

The case didn’t just shut down a complaint. It opened a wider question: who qualifies as a public figure in the modern media era – and when does that designation apply to topics you never touched?

[Editorial cartoon inspired by Mark Walters, created for exclusive use by TALKERS]


Why Public Figure Status Matters

Defamation law protects people from false, reputation-harming statements – but not equally. A private figure needs only to show that the speaker was negligent. A public figure, by contrast, must prove actual malice – that the speaker knew the statement was false or recklessly disregarded the truth.

This high standard, first articulated in New York Times v. Sullivan, was intended to protect freedom of speech and the press. But in the age of digital publishing and algorithmic reach, it’s increasingly used to deny protection to people who never thought they were stepping into the spotlight.

What Makes Someone a Public Figure?

Courts recognize two main categories:

– General-purpose public figures are household names – people famous across all topics and platforms.

– Limited-purpose public figures are individuals who have voluntarily entered public controversy or engaged in widespread public commentary on specific issues.

Here’s where the modern problem begins.

Thanks to blogs, newsletters, podcasts, and social media, it’s easier than ever to participate in public dialogue – and harder than ever to keep that participation confined to just one topic.

Post a viral thread on immigration?

Host a weekly podcast about school choice?

Weigh in on TikTok about local politics?

You may have just stepped into “limited-purpose public figure” territory – whether you intended to or not.

The Walters v. OpenAI Case – Now the Law

In Walters v. OpenAI, the court didn’t question whether the claim was false – only whether Walters could meet the public figure burden of proof. The court held that he could not. Despite the fact that he had never discussed the subject matter in question, his general visibility was enough to require that he prove actual malice. And he couldn’t.

The decision came with no trial, no settlement – just a dismissal. It now stands as legal precedent: having a public voice on one issue may cost you private-figure protections on others.

Microphone, Meet Microscope

This shift affects:

– Independent journalists

– Podcast hosts

– Niche content creators

– Local activists with modest but vocal platforms

They may not feel “public,” but courts increasingly view them that way. And once that threshold is crossed, the burden in a defamation case becomes dramatically harder to meet.

The more you speak publicly – even on one topic – the more legally exposed you are everywhere else.

That wasn’t the intent of Sullivan. But in today’s fragmented, always-on media culture, visibility leaks – and so do legal thresholds.

Final Takeaway

You don’t need to be famous to be “public.” You just need to be findable.

Whether you’re behind a mic, a blog, or a camera, your platform may elevate you into public figure status – and bring defamation law’s toughest burdens with it. If you’re defamed, you’ll have to prove the speaker acted with knowledge of falsity or reckless disregard for the truth. If you’re doing the speaking, your target’s legal classification could determine how costly a misstep becomes.

In 2025, every microphone is also a microscope. Know what the law sees before you go live.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

When One Clip Cuts Two Ways: How Copyright and Defamation Risks Collide

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

A radio (or video podcast) host grabs a viral clip, tosses in some sharp commentary, and shares it online. The goal? Make some noise. The result? A takedown notice for copyright infringement – and then a letter threatening a defamation suit.

Sound far-fetched? It’s not. In today’s media world, copyright misuse and defamation risks often run on parallel tracks – and sometimes crash into each other. They come from different areas of law, but creators are finding themselves tangled up in both over the same piece of content.

Copyright Protects Ownership. Defamation Protects Reputation

It’s easy to think of copyright and defamation as two separate beasts. One guards creative work. The other shields reputation. But when creators use or edit someone else’s content – especially for commentary, parody, or critique – both risks can hit at once.

Take Smith v. Summit Entertainment LLC (2011). Smith wrote an original song. Summit Entertainment slapped him with a false DMCA takedown notice, claiming copyright they didn’t actually own. Smith fought back, suing not just for the bogus takedown but also for defamation, arguing that Summit’s public accusations hurt his reputation. The court said both claims could go forward.

That case shows just how easily copyright claims and defamation threats can pile up when bad information meets bad behavior.

Murphy v. Millennium Radio: A Close Call with a Clear Message

In Murphy v. Millennium Radio Group LLC, a New Jersey radio station scanned a photographer’s work, stripped out his credit, and posted it online without permission. That alone triggered a copyright claim. But the hosts didn’t stop there. They mocked the photographer on-air, which sparked a defamation lawsuit.

Even though the copyright and defamation claims came from different actions – using the photo without permission and trash-talking the photographer – they landed in the same legal fight. It’s a reminder that separate problems can quickly become one big headache.

Why This Double Threat Matters

Fair Use Isn’t a Free Pass on Defamation. Even if you have a solid fair use argument, that won’t protect you if your edits or commentary twist facts or attack someone unfairly.
Public Comments Can Double Your Trouble. The second you speak publicly about how you’re using content – whether you’re bragging about rights you don’t have or taking a shot at someone – you risk adding a defamation claim on top of an IP dispute.
Smart Lawyers Play Both Angles. Plaintiffs know the playbook. They’ll use copyright claims for takedown leverage and defamation claims for reputational damage – sometimes in the same demand letter.
FCC Rules Don’t Cover This. It doesn’t matter if you’re FCC-regulated or a podcaster on your own. These risks come from civil law – and they’re coming for everyone.

The Takeaway

The overlap between copyright and defamation isn’t just a legal footnote – it’s a growing reality. In a world of viral clips, reaction videos, and borrowed content, creators need to watch how they frame and comment on what they use, just as much as whether they have permission to use it in the first place.

Because when one clip cuts two ways, you could take a hit from both directions.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

The Soundbite Trap: How Editing in Radio and Podcasting Creates Legal Risk

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In radio and podcasting, editing isn’t just technical – it shapes narratives and influences audiences. Whether trimming dead air, tightening a guest’s comment, or pulling a clip for social media, every cut leaves an impression.

But here’s the legal reality: editing also creates risk.

For FCC-regulated broadcasters, that risk isn’t about content violations. The FCC polices indecency, licensing, and political fairness – not whether your edit changes a guest’s meaning.

For podcasters and online creators, the misconception is even riskier. Just because you’re not on terrestrial radio doesn’t mean you’re free from scrutiny. Defamation, false light, and misrepresentation laws apply to everyone — whether you broadcast on a 50,000-watt signal or a free podcast platform.

At the end of the day, it’s not the FCC that will hold you accountable for your edits. It’s a judge.

1. Alex Jones and the $1 Billion Lesson

Alex Jones became infamous for promoting conspiracy theories on Infowars, especially his repeated claim that the Sandy Hook shooting was a hoax – supported by selectively aired clips and distorted facts.

The result? Nearly $1 billion in defamation verdicts after lawsuits from victims’ families.

Takeaway: You can’t hide behind “just asking questions” or “it was my guest’s opinion.” If your platform publishes it – over the airwaves or online – you’re legally responsible for the content, including how it’s edited or framed. 

2. Katie Couric and the Gun Rights Group Edit

In “Under the Gun,” filmmakers inserted an eight-second pause after Katie Couric asked a tough question, making it seem like a gun rights group was stumped. In reality, they had answered immediately.

The group sued for defamation. The case was dismissed, but reputations took a hit.

Takeaway: Even subtle edits – like manufactured pauses – can distort meaning and expose creators to risk. 

3. FOX News and the Dominion Settlement

FOX News paid $787 million to Dominion Voting Systems after airing content suggesting election fraud – often based on selectively edited interviews and unsupported claims.

Though FOX is (among other things) a cable network, the impact shook the media world. Broadcasters reassessed risks, host contracts, and editorial practices. 

Takeaway: Major networks aren’t the only ones at risk. Radio hosts and podcasters who echo misleading narratives may face similar legal consequences. 

4. The Serial Podcast and the Power of Editing

“Serial” captivated millions by exploring Adnan Syed’s murder conviction. While no lawsuit followed, critics argued the producers presented facts selectively to build a certain narrative. 

Takeaway: Even without a lawsuit, editing shapes public perception. Misleading edits may not land you in court but can damage trust and invite scrutiny.

Whether you’re behind a radio microphone or a podcast mic, your editing decisions carry weight – and legal consequence.

The FCC might care if you drop an indecent word on air, but they won’t be the ones suing you when a guest claims you twisted their words. That’s civil law, where defamation, false light, and misrepresentation have no broadcast exemption.

There’s one set of rules for editing that every content creator lives by – and they’re written in the civil courts, not the FCC code.

Edit with care. 

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

You Cut for Time. They Cut You a Lawsuit.

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Let’s discuss how CBS’s $16 million settlement became a warning shot for every talk host, editor, and content creator with a mic.

When CBS settled a lawsuit with Donald Trump for $16 million over a selectively edited “60 Minutes” interview with Kamala Harris, it wasn’t about guilt. It was about leverage. The lawsuit just happened to coincide with Paramount’s FCC merger review – right when regulatory pressure stung the most.

For broadcasters and digital creators alike, the message is clear: even lawful edits can become political weapons. If you shape content, you’re a target. And the courts aren’t the only battleground. Public outrage, regulatory scrutiny, and advertiser anxiety all shape the cost of controversy.

For Broadcasters: Every Cut Counts

Editing always alters reality. That doesn’t make it wrong – but it makes it risky. Even good-faith trims for time or tone can be reframed as distortion. What matters isn’t just what you cut, but whether you can defend it.

Case in Point: “60 Minutes” vs. DeSantis

CBS was accused of misleading edits in a 2021 vaccine rollout story. They published full transcripts and stood their ground. No apology, no payout.

Takeaways:

— Archive raw footage.
— Log your editorial decisions.
— Be ready to explain your process with clarity and conviction.

For Digital Creators: You’re Not as Untouchable as You Think

Section 230 might protect platforms, but it doesn’t shield you from smear campaigns, takedowns, or frivolous lawsuits. Editing with commentary or critique is often fair use – but that doesn’t stop bad-faith actors from flipping the narrative.

Case in Point: “Decoding Fox News”

Jules Terpak’s critique series survived coordinated attacks thanks to clear sourcing, transparency, and credibility built ahead of time.

Takeaways:

— Know your rights, but also your vulnerabilities.
— Keep receipts.
— Build audience trust before someone tries to burn it down.

The Real Risk Isn’t the Edit – It’s the Optics

Trump didn’t need to win the lawsuit. He just needed the headlines – and CBS needed their merger. Settlements aren’t always about truth. They’re about timing.

So protect yourself:

— Document your work.
— Develop internal standards.
— Don’t panic under pressure – prepare for it.

Because in an era where outrage spreads faster than facts, defending the integrity of your edit isn’t optional. It’s essential.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick. The FCC banned AI robocalls. State officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special called “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And best of all, avoid cloning real people unless you have their OK.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Mark Walters v. OpenAI: A Landmark Case for Spoken Word Media

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

When Georgia-based nationally syndicated radio personality and Second Amendment advocate Mark Walters (longtime host of “Armed American Radio”) learned that ChatGPT had falsely claimed he was involved in a criminal embezzlement scheme, he did what few in the media world have dared to do. Walters stood up when others stayed silent and took on one of the most powerful tech companies in the world in a court of law.

Taking the Fight to Big Tech

By filing suit against OpenAI, the creator of ChatGPT, Walters became the first person in the United States to test the boundaries of defamation law in the age of generative artificial intelligence.

His case was not simply about clearing his name. It was about drawing a line. Can artificial intelligence generate and distribute false and damaging information about a real person without any legal accountability?

While the court ultimately ruled in OpenAI’s favor on narrow legal grounds, the impact of this case is far from finished. Walters’ lawsuit broke new ground in several important ways:

— It was the first known defamation lawsuit filed against an AI developer based on content generated by an AI system.
— It brought into the open critical questions about responsibility, accuracy, and liability when AI systems are used to produce statements that sound human but carry no editorial oversight.
— It added fuel to the conversation about whether “use at your own risk” disclaimers hold up when real-world reputational damage hangs in the balance.

Implications for the Radio and Podcasting Community

For spoken-word creators – whether on terrestrial radio, satellite, or the open internet – this case is a wake-up call, a canary in the coal mine. Many shows rely on AI tools for research, summaries, voice generation, or even show scripts. But what happens when those tools get it wrong – beyond the embarrassment, or in some cases the fines or terminations, that can follow? And worse, what happens when those errors affect real people?

The legal system, as has often been observed, is still playing catch-up. Although the court ruled that the fabricated ChatGPT statement lacked the necessary elements of defamation under Georgia law, including provable harm and demonstrable fault, the decision highlighted how unprepared current legal frameworks are for this fast-moving, voice-driven digital landscape.

Where the Industry Goes from Here

Walters’ experience points to the urgent need for new protections and clearer guidelines:

— Creators deserve assurance that the tools they use are built with accountability in mind. This would extend to copyright infringement and to defamation.
— Developers must be more transparent about how their systems operate and the risks they create. This would identify bias and attempt to counteract it.
— Policymakers need to bring clarity to who bears responsibility when software, not a person, becomes the speaker.

A Case That Signals a Larger Reckoning

Mark Walters may not have won this round in court, but his decision to take on a tech giant helped illuminate how quickly generative AI can create legal, ethical, and reputational risks for anyone with a public presence. For those of us working in media, especially in formats built on trust, voice, and credibility, his case should not be ignored.

“This wasn’t about money. This was about the truth,” Walters tells TALKERS. “If we don’t draw a line now, there may not be one left to draw.”

To listen to a longform interview with Mark Walters conducted by TALKERS publisher Michael Harrison, please click here

Media attorney, Matthew B. Harrison is VP/Associate Publisher at TALKERS; Senior Partner at Harrison Media Law; and Executive Producer at Goodphone Communications. He is available for private consultation and media industry contract representation. He can be reached by phone at 724-484-3529 or email at matthew@harrisonmedialaw.com. He teaches “Legal Issues in Digital Media” and serves as a regular contributor to industry discussions on fair use, AI, and free expression.

Industry Views

When the Algorithm Misses the Mark: What the Walters v. OpenAI Case Means for Talk Hosts

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In a ruling that should catch the attention of every talk host and media creator dabbling in AI, a Georgia court has dismissed “Armed American Radio” syndicated host Mark Walters’ defamation lawsuit against OpenAI. The case revolved around a disturbing but increasingly common glitch: a chatbot “hallucinating” false but believable information.

The Happenings: A journalist asked ChatGPT to summarize a real court case. Instead, the AI invented a fictional lawsuit accusing Walters of embezzling from the Second Amendment Foundation – a group that has never employed him. The journalist spotted the error and never published the inaccurate information. But the damage, at least emotionally and reputationally, was done. That untruth was out there, and Walters sued for defamation.

Last week, the court kicked the case. It determined that Walters was a public figure, and as such, he had to prove “actual malice” – that OpenAI knowingly or recklessly published falsehoods. He couldn’t – and under that standard, it may be all but impossible.

The judge also emphasized that the false information was never shared publicly. It stayed within a private conversation between the journalist and ChatGPT. No dissemination, no defamation.

But while OpenAI may have escaped liability, the ruling raises serious questions for the rest of the content creation space.

What This Means for Talk Hosts

Let’s be honest: AI tools like ChatGPT are already part of the media ecosystem. Hosts use them to summarize articles, brainstorm show topics, generate ad copy, and even suggest guest questions. They’re efficient — and also dangerous.

This case shows just how easily AI can generate falsehoods with confidence and detail. If a host were to read something like that hallucinated lawsuit on air, without verifying it, the legal risk would shift. It wouldn’t be the AI company on the hook — it would be the broadcaster who repeated it.

Key Lessons

  1. AI is not a source.
    It’s a starting point. Just like a tip from a caller or a line on social media, AI-generated content must be verified before use.
  2. Public figures are more exposed.
    The legal system gives less protection to people in the public eye — like talk hosts — and requires a higher burden of proof in defamation claims. That cuts both ways.
  3. Disclosure helps.
    OpenAI’s disclaimers about potential inaccuracies helped them in court. On air, disclosing when you use AI can offer similar protection — and builds trust with your audience.
  4. Editorial judgment still rules.
    No matter how fast or slick AI gets, it doesn’t replace a producer’s instincts or a host’s responsibility.

Bottom line: the lawsuit may be over, but the conversation is just beginning. The more we rely on machines to shape our words, the more we need to sharpen our filters. Because when AI gets it wrong, the real fallout hits the human behind the mic.

And for talk hosts, that means the stakes are personal. Your credibility, your syndication, your audience trust — none of it can be outsourced to an algorithm. AI might be a tool in the kit, but editorial judgment is still the sharpest weapon in your arsenal. Use it. Or risk learning the hard way what Mark Walters just did. Walters has yet to comment on what steps – if any – he and his lawyers will take next.

TALKERS publisher Michael Harrison issued the following comment regarding the Georgia ruling: “In the age of internet ‘influencers’ and media personalities with various degrees of clout operating within the same space, the definition of ‘public figure’ is far less clear than in earlier times. The media and courts must revisit this striking change. Also, in an era of self-serving political weaponization, this ruling opens the door to ‘big tech’ having enormous, unbridled power in influencing the circumstances of news events and reputations to meet its own goals and agendas.”

Matthew B. Harrison is a media attorney and executive producer specializing in broadcast law, intellectual property, and First Amendment issues. He serves as VP/Associate Publisher of TALKERS magazine and is a senior partner at Harrison Media Law. He also leads creative development at Goodphone Communications.

Industry News

Outstanding Speakers Joining “GENERATIONS 2025” Agenda

The lineup of industry speakers set to speak at the GENERATIONS 2025 conference being presented by TALKERS at the forthcoming Intercollegiate Broadcasting System (IBS) convention – IBSNYC 2025 – continues to grow.

A stellar line-up of speakers has already signed up to speak at this groundbreaking industry event including (in alphabetical order): Vince Benedetto, CEO, Bold Gold Media Group; Chris Berry, VP News/Talk/Sports, iHeartMedia; Scot Bertram, General Manager, WRFH, Hillsdale College, Hillsdale, MI / Lecturer In Journalism; Mike Gallagher, talk show host, Salem Radio Network; Dom Giordano, talk show host, WPHT, Philadelphia; Lee Harris, Director of Integrated Operations, NewsNation / WGN, Chicago; Michael Harrison, Publisher, TALKERS; Matthew B. Harrison, Esq., VP/Associate Publisher, TALKERS; Harrison Media Law, Senior Partner; Harry Hurley, morning talk show host, WPG, Atlantic City; Jeff Katz, talk show host, WRVA, Richmond, VA; Chad Lopez, President, WABC, New York, Red Apple Media Group; John T. Mullen, general manager, WRHU-FM/WRHU.org, Hofstra University, Hempstead, NY; Walter Sabo (a.k.a. Walter M Sterling), consultant / talk show host / WPHT, Philadelphia / Talk Media Network; Rich Valdés, talk show host, Westwood One; with several more to be announced in the next few days. See agenda and accompanying stories below.

Sheraton Times Square New York Hotel
New York East Room
Saturday March 8, 2025
12:30 pm – 4:30 pm

AGENDA

12:30 – 1:00 pm Keynote Address “Welcome to the Brave New World”

Speakers:
Michael Harrison, Publisher, TALKERS
Matthew B. Harrison, Esq., VP/Associate Publisher, TALKERS; Harrison Media Law, Senior Partner

1:10 – 1:40 pm Fireside Chat “Setting the Stage”

Facilitator: Michael Harrison, Publisher, TALKERS
Special Guest: Chad Lopez, President, WABC, New York, Red Apple Media Group

1:50 – 2:20 pm Discussion: “Launching and Managing a Career in a Changing Media Industry”

Moderator: Dom Giordano, talk show host, WPHT, Philadelphia
Speaker: John T. Mullen, general manager, WRHU-FM/WRHU.org, Hofstra University, Hempstead, NY
Speaker: TBA
Speaker: TBA

2:30 – 3:00 pm Discussion: “Old School/New School/Next School – Learning from Each Other”

Moderator:  Harry Hurley, morning talk show host, WPG, Atlantic City
Speaker:  Vince Benedetto, CEO, Bold Gold Media Group
Speaker: Scot Bertram, General Manager, WRFH, Hillsdale College, Hillsdale, MI / Lecturer In Journalism
Speaker: Walter Sabo (a.k.a. Walter M Sterling), consultant / talk show host / WPHT, Philadelphia / Talk Media Network

3:10 – 3:40 pm Discussion: “Radio’s Place in a Diverse, Digital World”

Moderator: TBA
Speaker: Mike Gallagher, talk show host, Salem Radio Network
Speaker: Rich Valdés, talk show host, Westwood One
Speaker: TBA

3:50 – 4:20 pm Discussion: “Finding Truth in an Age of Misinformation”

Moderator: Lee Harris, Director of Integrated Operations, NewsNation / WGN, Chicago
Speaker:  Chris Berry, VP News/Talk/Sports, iHeartMedia
Speaker: Jeff Katz, talk show host, WRVA, Richmond, VA
Speaker: TBA

4:20 – 4:30 pm Wrap Up:  Group Chat

Industry Views

Fair Use or Foul Play? Lessons from “Equals Three”

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In the ever-evolving landscape of digital media, creators often walk a fine line between inspiration and infringement. The 2015 case of “Equals Three, LLC v. Jukin Media, Inc.” offers a cautionary tale for anyone producing reaction videos or commentary-based content: fair use is not a free pass, and transformation is key.

The Case at a Glance

“Equals Three,” a popular YouTube series, built its reputation on humorously reacting to viral videos. The show used 10-30 second clips of these videos, pausing periodically for the host to add jokes and reactions. Jukin Media, which owns the rights to many viral clips, sued for copyright infringement, arguing the use was not protected under fair use.

The court sided with Jukin Media, ruling that “Equals Three’s” use was not sufficiently transformative. While the show added humor and commentary, it primarily repackaged the original content for entertainment without enough new meaning.

What This Means for You

Fair use requires creators to add something new, such as critique or analysis. Simply reacting to content with jokes or minimal commentary isn’t enough. Use only what’s necessary and ensure your work doesn’t substitute for the original.

Additionally, fair use considers market impact. If your content diminishes the value of the original by serving as a substitute, it’s unlikely to qualify. 

Why This Matters

Reaction videos and commentary are staples of digital media, but they come with risks. The “Equals Three” case highlights the need for meaningful transformation. By focusing on critique, analysis, or education, creators can navigate fair use confidently while respecting intellectual property rights. 

Media attorney, Matthew B. Harrison is VP/associate publisher, TALKERS; Senior Partner, Harrison Media Law; and executive producer, Goodphone Communications.  He is available for private consultation and media industry contract representation. He can be reached by phone at 724.484.3529 or email at matthew@harrisonmedialaw.com

Industry Views

FAIR USE: What Constitutes “Publishing” or a “Publication” on Today’s Media Playing Field?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

As the practice of “clip jockeying” becomes an increasingly ubiquitous and taken-for-granted technique in modern audio and video talk media, an understanding of the legal concept “fair use” is vital to the safety and survival of practitioners and their platforms.

When assessing fair use in audio media, courts closely examine the “nature of the copyrighted work,” especially focusing on whether the work is factual or creative, and published or unpublished. Factual content, such as news reports or data, is more likely to be seen as fair use material, as it’s in the public interest to keep factual information accessible. Creative works, like music, fiction, or original performances, often enjoy stronger protection because they embody the creator’s unique expression and should be compensated accordingly.

Unpublished interviews or speeches.  When audio content includes unpublished material – such as a speech or interview that hasn’t been publicly released – courts typically approach it with heightened caution. For example, if a podcast includes clips from an unpublished interview with a politician to enhance commentary, courts might scrutinize this more heavily than they would a published work, as the speaker retains significant control over whether and how the content reaches the public.

Case study insight: Salinger v. Random House (1987).  The landmark case Salinger v. Random House highlighted how unpublished works generally receive stronger copyright protection. In this case, the use of unpublished letters in a biography was ruled as infringing, emphasizing that unpublished materials hold a unique status in copyright law. If a podcaster today were to use a similarly unpublished interview with a public figure without significant commentary or transformation, they might face greater legal challenges.

Redefining “published” in the digital era.  With digital platforms, the meaning of “published” is evolving. Traditionally, a work was deemed “published” when made available for sale, license, or public distribution. Now, sharing content online, even in a limited way – such as within a closed social media group or private online forum – raises questions about whether the content should be considered published. Courts are increasingly aware that limited digital sharing doesn’t necessarily reduce a work’s unpublished protections, but extensive online distribution might.

Modern considerations of online sharing. Courts today analyze factors like control over access and the sharing platform’s nature. For instance, an audio clip shared in a restricted forum might retain its unpublished protections, while a widely posted clip could lose some of those protections. Additionally, when creators post content on platforms like Instagram or YouTube before officially “publishing” it elsewhere, courts may take the creator’s intent and distribution scope into account when determining the content’s legal status.

As online platforms reshape how creators distribute their work, they also impact fair use, pushing courts to reinterpret what it means for a work to be “published.” This evolving understanding means that copyright protections depend not only on whether a work is accessible but also on the level of control over its distribution, especially for audio content.

Media attorney Matthew B. Harrison is VP/associate publisher, TALKERS; senior partner, Harrison Media Law; and executive producer, Goodphone Communications. He is available for private consultation and media industry contract representation. He can be reached by phone at 724-484-3529 or by email at matthew@harrisonmedialaw.com.

Industry News

TALKERS Launching YouTube Channel


TALKERS is adding a YouTube channel to its array of talk media platforms. The venue – titled Talkers Media Channel – will initially provide a base for the organization’s conference videos and a platform for its founder Michael Harrison’s reimagined video iteration of his pioneering audio podcast, “Up Close and Far Out.” Videos from the recently held TALKERS 2024: Radio and Beyond will be posted beginning after the July 4th holiday. “Up Close and Far Out with Michael Harrison” soft launches today with a conversation between Harrison and radio tech star Kim Komando examining the sociological, biological, and theological implications of artificial intelligence. The Harrison talk show will take several forms, including one-on-one interviews, panel discussions, and solo commentaries. Harrison and his guests will sometimes appear on camera and on other occasions be heard as voices enhanced by stunning visuals and music provided by the channel’s producer, TALKERS VP/associate publisher Matthew B. Harrison. The series is targeted to an audience Harrison describes as “pop culture aficionados, political junkies, media enthusiasts, fans of science, followers of technology and the philosophically curious.” The Harrisons plan to draw heavily upon a stable of guests from the world of talk media covered by TALKERS magazine while also stretching beyond that core. Once the channel is fully up and running, plans call for expanding its menu to include shows and properties produced by an array of content providers. To visit the debut installment of “Up Close and Far Out with Michael Harrison,” please click here.

Industry News

Michael Harrison Embarks Upon “Obsolete Slobs” Media Tour in Support of Provocative New Music Video


Now that the TALKERS 2024: Radio and Beyond conference is in the history books, TALKERS founder (and Gunhill Road member) Michael Harrison has embarked upon what is being called the Summer of ’24 “Obsolete Slobs” media tour in support of the perennial rock group’s latest music video endeavor. Gunhill Road, the ensemble that has been creating multi-genre rock and pop music spanning more than five decades, has released a breathtaking new song and video titled “Artificial Intelligence (No Robots Were Injured in the Production of this Song).” The piece – an unapologetic examination of the potential consequences AI poses to current human civilization – is an advance release from the band’s forthcoming fifth album. Gunhill Road has developed a unique niche in recent years, attracting hundreds of thousands of internet followers powered, in large part, by the attention and airplay given it on talk radio. New songs by the group typically debut on multiple radio talk shows, sparking conversation about today’s pressing topics of news and social concern. The compositions feature clever, biting lyrics delivered in a highly musical and original way. The band consists of co-founding member/pianist/vocalist/songwriter Steve Goldrich, longtime guitarist/vocalist/songwriter Paul Reisch, noted Broadway theater instrumentalist/guitarist/vocalist/songwriter Brian Koonin, and vocalist/songwriter Michael Harrison. (Harrison co-wrote this song and performs lead vocals.) The production features a special guest vocal appearance by recording artist Bibi Farber, daughter of the late talk radio pioneer Barry Farber. The visually stunning video for “Artificial Intelligence (No Robots Were Injured in the Production of this Song)” – which, ironically, employs generative AI for many of its remarkable images – was produced by Matthew B. Harrison. The song, an infectious rocker marked by driving guitars, riveting keyboards, soaring horns and a multi-layered group chorus, depicts the dangers human civilization faces in an increasingly uncertain environment marked by the rising corruption of deep fakes and manipulative algorithms that threaten elections and call into question the very premise of “self-evident” truths. It ultimately asks, “What does it mean to be human?” Check out the video here. To arrange a talk media interview with Michael Harrison to discuss the song and its implications, email info@talkers.com or call 413-565-5413.

Industry News

Gunhill Road Music Video on YouTube Flagged and “Shadow Banned” by Google for Containing Shocking Content


The music video for the Gunhill Road song “Damn Scammers (Get Off My Phone)” has been flagged by the editorial powers-that-be at Google for containing “shocking” content. The video has thus been relegated to a covert censorship process on YouTube commonly known as shadow banning, which drastically inhibits its ability to garner views and potentially go viral within the platform’s algorithms. The song and video make a powerful statement against the growing practice of scamming that is polluting the internet and sowing the seeds of distrust throughout modern society. TALKERS publisher Michael Harrison, a member of the heritage rock band and co-writer (with Steve Goldrich, Paul Reisch, and Brian Koonin) of the controversial song, states, “When we wrote the song and created the accompanying video images, we knew that some folks – including the censors at Google – might find it troubling. But we were pretty sure that most people (and hopefully the folks at Google) would realize it is just provocative satire and not a literal call for violence. After all, we are only venting in highly dramatic fashion against a universally hated category of criminals who operate in the darkness of anonymity and are destroying innocent people’s lives. Perhaps we misjudged its potential impact. Regardless, we are neither withdrawing it from distribution nor apologizing for its alleged offensiveness. We realize this is not a First Amendment issue. Google and YouTube have the right to post whatever they choose. And for the most part, I love and am a big fan of YouTube. However, because of the enormous, borderline monopolistic power of Big Tech, it might eventually be considered a First Amendment issue.” The song and video present scammers as hideously ugly, troll-like figures and call for their deaths by firing squad, electric chair, hanging, burning at the stake, castration, and being blown up by drones.

Media attorney and TALKERS associate publisher Matthew B. Harrison – the video’s producer – states, “It’s like being silenced but without a whisper – shadow banning – an invisible barrier between your content and your audience. Social media platforms may limit the visibility of your content without any notification, causing confusion and frustration. Why does this happen? Often, it’s due to violations of community guidelines, albeit sometimes mistakenly. Do you think they’ve got people watching everything? No. It was most likely a bot. So, understanding context is not going to be at the top of its abilities. The solution? Regularly review the platform’s policies, engage with your content positively, and diversify your social media presence to ensure your voice is widely heard.”

To view the unedited version of “Damn Scammers (Get off My Phone)” (viewer discretion is now advised) please click here.

Industry News

Gunhill Road Attacks Fraudsters with a Powerful New Rocker, “Damn Scammers (Get Off My Phone)”


Gunhill Road, the timeless band that has been creating multi-genre rock and pop music spanning more than five decades, has released a stunning new song and video titled “Damn Scammers (Get Off My Phone).” The piece – a no-holds-barred attack on the rise of scams and fraud in our society – is an advance release from the band’s forthcoming fifth album. Gunhill Road has developed a unique niche in recent years, attracting tens of thousands of internet followers powered, in large part, by the attention and airplay given it on talk radio. New songs by the group typically debut on hundreds of radio talk shows, sparking conversation about today’s pressing topics of news and social concern. The compositions feature clever, candid lyrics delivered in a highly musical and original way. The band consists of co-founding member/pianist/vocalist Steve Goldrich, longtime guitarist/vocalist Paul Reisch, noted Broadway theater instrumentalist/guitarist/vocalist Brian Koonin, and TALKERS publisher/vocalist Michael Harrison. The provocative video for “Damn Scammers (Get Off My Phone)” was produced by Matthew B. Harrison. The song, a powerful rocker marked by driving guitars, riveting keyboards, an exuberant group chorus and a compelling lead vocal by Brian Koonin, expresses the frustration we all face in an increasingly dangerous environment marked by the rising corruption of identity theft, charity scams, grandparent scams, imposter scams, mail fraud, romance scams, lottery scams, crypto scams, blackmail, phishing, and disingenuous institutions. Click here (scammersvideo.com) to see the video. To arrange an interview with Michael Harrison to discuss the scam crisis, please email info@talkers.com.

Industry News

Matthew B. Harrison Holds Court Over Section 230 Explanation for Law Students at 1st Circuit Court of Appeals in Boston

As an attorney with extensive front-line expertise in media law, TALKERS associate publisher and senior partner in the Harrison Legal Group Matthew B. Harrison (pictured at right on the bench) was selected to hold court as “acting” judge in a moot trial involving Section 230 for law students engaged in a national competition last evening (2/22) at the 1st Circuit Court of Appeals in Boston, MA. The American Bar Association’s Law Student Division holds a number of annual national moot court competitions. One such event, the National Appellate Advocacy Competition, emphasizes the development of oral advocacy skills through a realistic appellate advocacy experience, with moot court competitors participating in a hypothetical appeal to the United States Supreme Court. This year’s legal question focused on the Communications Decency Act – “Section 230” – and the application of its exception from liability for internet service providers for the acts of third parties to a realistic scenario: a journalist’s photo-turned-meme used in advertising (CBD, ED treatment, gambling) without permission or compensation, in violation of applicable state right of publicity statutes. Harrison tells TALKERS, “We are at one of those sensitive times in history where technology is changing at a quicker pace than the legal system and legislators can keep up with – particularly at the consequential juncture of big tech and mass communications. I was impressed and heartened by the articulateness and grasp of the Section 230 issue displayed by the law students arguing before me.”

Industry Views

Matthew B. Harrison is This Week’s Guest on Harrison Podcast

Two generations of the Harrison radio family meet on mic discussing the copyright implications of artificial intelligence as Matthew B. Harrison is this week’s guest on the award-winning PodcastOne series, “The Michael Harrison Interview.” Matthew, the son of Michael, is VP/associate publisher of TALKERS in addition to being a media and intellectual property attorney, talent manager, and audio/video producer. His latest productions, “I Got a Line in New York City” (www.igotaline.com) and “My Friend is Going Away” (www.myfriendisgoingaway.com), are experimental exercises in the utilization of AI graphics in the music video genre. On this podcast, Harrison and Harrison bring into focus the somewhat murky application of copyright law in communications and the arts as we hurtle into the frontier age of artificial intelligence. Listen to the podcast in its entirety here.

Industry News

Michael Harrison Says AI is One of the Most Important Talk Topics of Our Times

TALKERS founder Michael Harrison has kicked off a nationwide guesting tour of talk shows promoting discussion of the upside and downside of AI in conjunction with the release of the new song, “I Got a Line in New York City,” by the long-established classic rock group, Gunhill Road. Harrison performs lead vocals on the track, recorded with band members Steve Goldrich, Paul Reisch, and Brian Koonin. The music video of the song (produced by Harrison’s son and TALKERS associate publisher Matthew B. Harrison) has been described as a computer’s “fever dream about the Big Apple.” Although the music is totally organic, all of the visual graphics in the video were created with the assistance of generative artificial intelligence. Harrison says, “There’s huge interest in the topic of AI including the existential issues of its potential impact on our species. In the art community, debate is raging over whether AI enhances originality and creativity or if it is ushering in the death of individual artists and the role they play in the humanities.” See that video here.

Harrison launched the tour late last week appearing on the Rich Valdes show on Westwood One and has subsequently appeared on network programs hosted by Doug Stephan, Dr. Daliah Wachs, and WABC’s Frank Morano, as well as Harry Hurley on WPG, Atlantic City; Todd Feinburg on WTIC-AM, Hartford; and Michael Zwerling on KSCO, Santa Cruz. WOR, New York has posted the video and an accompanying story here.

To book Michael Harrison, please call Barbara Kurland at 413-565-5413 or email info@talkers.com.

Industry News

Michael Harrison Discusses AI as Used to Create Stunning Images on New Gunhill Road Video

The historic rock band Gunhill Road, of which TALKERS founder Michael Harrison is a member, has just released a new advance track from its forthcoming fifth album. The song, “I Got a Line in New York City,” is a genre-bending combination of jazz, rock, and blues with a Broadway flair. Harrison serves as lead vocalist on the song, which he co-wrote with his bandmates Steve Goldrich, Paul Reisch, and Brian Koonin. Further energized by its provocative music video, “I Got a Line in New York City” is slightly abstract and even mystical – while, simultaneously, heart-tugging and down-to-earth. The engaging narrative puts an ultra-modern-but-somewhat-retro twist on the classic story of a young person (Brando Young) whose lifelong dream of making it on the stage is dashed by the cold, harsh reality of the big time. HERE’S THE HOOK: The visual images that bring stunning dimension to the video were created by human artists – under the direction of the video’s producer, TALKERS associate publisher Matthew B. Harrison – tapping into the assistance of leading-edge generative AI on every panel. Michael Harrison states, “The experience of employing the assistance of ‘generative artificial intelligence’ to render these images of an ‘alternate universe’ version of the Big Apple, sprinkled with bizarre characters and weird technology, has been one of the most exciting, challenging, and educational experiences of my media career. I’m thrilled to be able to go out there now and talk about life-changing AI with this knowledge under my belt.” Harrison is embarking on a mini-media tour to discuss the AI aspect of the video and the sociological implications of this game changer. To arrange a phone interview with Michael Harrison, please call Barbara Kurland at 413-565-5413 or email info@talkers.com. To view the video, please click here: www.igotaline.com

Industry News

TALKERS 2023: Video of “The BIG Picture” Panel Discussion Posted


During the coming days, videos of all of TALKERS 2023’s numerous sessions conducted June 2 at Hofstra University will be posted, continuing today (6/16) with the panel discussion, “The BIG Picture.” The session, sponsored by Newsmax, was introduced by TALKERS associate publisher and media attorney Matthew B. Harrison, Esq. (pictured at right) and moderated by TALKERS publisher Michael Harrison (pictured above). Panelists (pictured below from left to right) include Lee Harris, director of integrated operations, NewsNation; Lee Habeeb, host/producer, “Our American Stories”; Kraig Kitchin, CEO, Sound Mind, LLC/chair, Radio Hall of Fame; Arthur Aidala, Esq., founding partner, Aidala, Bertuna & Kamins, PC/host, AM 970 The Answer, New York; Chad Lopez, president, WABC, New York/Big Apple Media; and Dr. Asa Andrew, CEO/host, “The Doctor Asa Show.” See video of the session here.
