NPR Host Sues Google Over AI Voice That Sounds Just Like Him
As artificial intelligence blurs the line between innovation and imitation, a veteran broadcaster is taking on Big Tech in a case that could reshape copyright law.

When veteran radio host David Greene heard what sounded like his own voice coming from a Google AI tool, he says he was “completely freaked out.”
Now, the longtime NPR “Morning Edition” host is suing Google, claiming one of the company’s artificial intelligence voices mimics him so closely that listeners assumed he must have authorized it. The case could become a landmark battle in the rapidly expanding legal war over AI, copyright, and ownership.
Greene said he first learned about the AI voice after a former coworker sent him a clip from Google’s NotebookLM tool. The product features two AI-generated hosts, a male voice and a female voice, conversing in podcast-style “audio overviews.” The similarity was so striking, Greene says, that friends and family began asking whether he had licensed his voice to the tech giant.
“I was, like, completely freaked out,” Greene told The Washington Post. “It’s this eerie moment where you feel like you’re listening to yourself.”
Google denies the allegation. A company spokesman said the claims are “baseless,” adding that the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor hired by Google, not Greene.
Still, the lawsuit signals a much larger issue: Who owns a voice in the age of artificial intelligence?
As AI tools become more sophisticated, they can generate speech patterns, tones, and cadences that closely resemble real people, sometimes without directly copying any recordings. That gray area has triggered mounting legal uncertainty, especially as generative AI explodes across media, entertainment, and publishing.
The AI market is projected to surpass $1 trillion globally within the next decade, according to industry estimates. Meanwhile, more than 70% of media companies report actively integrating AI tools into their content production workflows. The pace of adoption has outstripped the development of clear legal guardrails.
Online, NotebookLM’s voices have sparked debate over whom they resemble. Some listeners say the male voice sounds most like Greene. Others argue it resembles former tech podcaster Leo Laporte. Still others compare it to the conversational tone of “Armchair Expert,” co-hosted by Dax Shepard and Monica Padman.
But this case is not about online speculation. It’s about precedent.
In recent years, AI-related copyright cases have produced enormous financial consequences. One of the most significant was Bartz v. Anthropic, in which the AI company faced a class-action lawsuit after training its model on thousands of pirated books. The case ended in a $1.5 billion settlement, the largest copyright payout in history.
Other disputes have resulted in corporate partnerships rather than courtroom battles. Universal Music Group and the AI platform Udio resolved their legal conflict by announcing a collaboration to develop new music creation and streaming experiences.
The Greene lawsuit sits somewhere between those extremes. Unlike cases involving the direct use of copyrighted books or music, this dispute centers on vocal identity, something harder to define in statute. Many U.S. states recognize a right of publicity, protecting individuals from unauthorized commercial use of their likeness. But whether an AI-generated voice that merely resembles someone qualifies as a violation remains unsettled.
Legal experts expect a wave of similar cases as more creators discover AI tools that appear to echo their work, style, or voice. Already, actors and musicians have raised alarms about AI voice cloning. In 2023, SAG-AFTRA’s negotiations with Hollywood studios included explicit provisions about AI-generated likenesses.
For conservatives skeptical of Big Tech’s power, the case adds another layer to long-standing concerns about corporate overreach. Google, one of the most dominant technology companies in the world, now faces accusations that its AI systems blur the line between innovation and appropriation.
At stake is more than one broadcaster’s voice. The courts will be forced to answer fundamental questions:
Can a vocal style be owned?
Does AI “inspiration” cross into imitation?
And how should copyright law adapt to machines that can replicate human expression at scale?
As artificial intelligence races ahead, lawmakers and judges are scrambling to catch up. The outcome of this case could influence not just podcasters and media figures, but authors, musicians, and everyday Americans whose digital identities may one day be replicated by code.
In a world where your voice can be recreated without your consent, the legal system may soon have to decide whether that voice still belongs to you.