By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
Artificial intelligence now makes it possible to replicate a human voice with striking accuracy. For broadcasters, podcasters, and content creators, the central question is: When does using or imitating a voice become a legal problem? The answer depends on the person being imitated, the purpose of the use, and the rights attached to that voice. Below is a six-bucket framework to help evaluate the risk.
Bucket 1 – Human Imitation of a Living Person
Example: In Midler v. Ford Motor Co. (1988), Ford hired a singer to imitate Bette Midler’s voice for a commercial after she declined. Legal focus: Right of publicity, false endorsement, misappropriation of identity. Risk: High – especially for commercial use without parody or commentary.
Bucket 2 – AI Cloning of a Living Person
Example: AI trained on hours of a broadcaster’s work generates new scripts in that broadcaster’s voice. Legal focus: Same as Bucket 1, plus emerging AI laws in several states. Risk: Very high – AI makes imitation faster, more precise, and harder to defend as coincidental.
Bucket 3 – AI Cloning of a Deceased Person Within Post-Mortem Publicity Window
Example: An AI-generated George Carlin special, written by humans but performed in a Carlin voice model. Legal focus: Post-mortem right of publicity, lasting 20–100 years depending on the state. Risk: High without estate authorization, even if marketed as a tribute.
Bucket 4 – Historical/Public Domain Figures
Example: Voicing George Washington in an original script. Legal focus: Minimal – rights generally end at death and do not extend for centuries. Risk: Low unless portrayal implies a false endorsement of a current product or service.
Bucket 5 – Corporate Library Owner Using AI to Create New Content
Example: A company acquires a complete host archive, such as Howard Stern’s, and uses AI to create new programming in that voice. Legal focus: Copyright in recordings is separate from publicity rights in the voice. Owning the archive does not automatically permit new performances in that voice. Risk: High without explicit contractual rights to name, likeness, and voice for future works.
Bucket 6 – Inspired-By Voice Not Clearly Identifiable as a Specific Person
Example: An AI voice styled as “a gravelly, old-school talk radio host” without matching a real person. Legal focus: Minimal unless resemblance convinces listeners it is a specific individual. Risk: Low to moderate, depending on closeness to a real identity.
Decision Path
Before using a recognizable voice, ask:
1. Is the person living or deceased?
2. If deceased, are they within their state's post-mortem publicity period?
3. Is the voice a deliberate imitation?
4. Do you have written permission?
5. Is the purpose parody, commentary, or other transformative use?
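For readers who think in flowcharts, the sketch below restates that decision path as a simple Python function. It is an illustration of the framework in this article only; the bucket numbers, question names, and risk labels are shorthand assumptions for discussion, not legal advice, and real matters turn on facts a checklist cannot capture.

```python
def assess_voice_use(living: bool,
                     within_postmortem_window: bool,
                     deliberate_imitation: bool,
                     written_permission: bool,
                     transformative_purpose: bool) -> str:
    """Rough risk label for a proposed voice use, per the six-bucket framework.

    Illustrative sketch only; not legal advice.
    """
    if written_permission:
        return "Low risk: documented consent covers the use."
    if not deliberate_imitation:
        return "Low to moderate risk: 'inspired-by' voice (Bucket 6)."
    if living:
        base = "High to very high risk: living person (Buckets 1-2)"
    elif within_postmortem_window:
        base = "High risk: post-mortem publicity rights may apply (Bucket 3)"
    else:
        base = "Low risk: historical or public domain figure (Bucket 4)"
    if transformative_purpose:
        return base + "; parody or commentary may help, but it is not a blanket defense."
    return base + "; secure rights before production."


# Example: AI clone of a living broadcaster, no permission, no parody.
print(assess_voice_use(living=True,
                       within_postmortem_window=False,
                       deliberate_imitation=True,
                       written_permission=False,
                       transformative_purpose=False))
```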
Takeaways
Talent: Protect your voice rights in contracts, including AI uses.
Buyers: Archive ownership does not guarantee the right to generate new voice content.
Creators: Parody and commentary may help, but they are not blanket defenses.
As voice cloning becomes more accessible, securing clear rights before production remains the safest path. The cost of permission is almost always less than the cost of defending a lawsuit.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.