An Elegant Solution To Music's A.I. Conundrum
Compensating artists when machines "cover" their voices could be as simple as the system for paying songwriters when their work gets covered. Just ask Grimes.
You’ve probably heard about the kerfuffle surrounding “Heart On My Sleeve,” the song written by someone known as Ghostwriter977 and performed by the A.I.-generated voices of Drake and the Weeknd.
Fueled by a mix of consumer curiosity and technological panic, the uncanny track rocketed toward the Billboard charts in mid-April. At that point, Universal Music Group—home to both artists—somewhat predictably issued a takedown notice and a rather dramatic statement.
“The training of generative A.I. using our artists' music,” the label said, “begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”
Never mind the fact that the major labels have been investing in A.I.-driven music creation startups and signing algorithms to record deals for years. The reaction was typical of the industry’s philosophy toward new technology throughout this millennium: try to smash anything you can’t control, even if people really like it, while quietly trying to find a way to own it.
Yet there’s an elegant and familiar solution to compensate artists when their voices are inhabited by artificial intelligence: simply pay singers in the same way songwriters get paid when someone covers their work. Performers can get what’s known as a compulsory mechanical license in order to cover a songwriter’s work, as long as they follow the rules and share proceeds as designated. Why not set up a similar system for “covering” someone’s voice?
Perhaps unsurprisingly, one of the first artists to attempt to answer this question with an actual business model is Grimes—who’s known as much for voicing a cyborg popstar in the videogame Cyberpunk 2077 and for her work with the A.I. music app Endel as for her albums. (She also has two futuristically named kids with Elon Musk.)

This week Grimes announced the launch of her new software, Elf.Tech, which will allow anyone to sing into an app and have their voice transformed into hers. Another option will allow users to train a Grimes A.I. model of their own. Critically, Grimes already has a monetization strategy in place: she will collect 50% of master recording royalties.
“Grimes is now open source and self replicating,” she declared on Twitter.
More than 15,000 voice transformations have already been made since the software launched. It’s powered by a generative A.I. platform known as Triniti, which was developed by CreateSafe and Dauda Leonard, Grimes’ manager.
To be sure, this solution isn’t foolproof, and there remain plenty of complications. For instance: Grimes doesn’t own the rights to the vocals from all her old albums, so a successful replication of her voice could get taken down if monetized.
But, as Stanford professor Ge Wang said of A.I. in music, “The cat is not going back in the bag.” Grimes is right to embrace and experiment. The music industry ought to try covering her strategy.
Zack O’Malley Greenburg is the author of five books, including the Jay-Z biography Empire State of Mind. His work has also appeared in the New York Times, Washington Post, Rolling Stone, Vanity Fair and Forbes, where he served as senior editor of media & entertainment for a decade.
ALSO BY ZACK O’MALLEY GREENBURG
We Are All Musicians Now: Artists Are Canaries In The Coal Mine Of Business
Empire State of Mind: How Jay-Z Went from Street Corner to Corner Office
A-List Angels: How a Band of Actors, Artists & Athletes Hacked Silicon Valley
3 Kings: Diddy, Dr. Dre, Jay-Z & Hip-Hop’s Multibillion-Dollar Rise
Michael Jackson, Inc.: The Rise, Fall & Rebirth of a Billion-Dollar Empire