The Recording Academy has updated the rules for the 2024 Grammy Awards to include music created with the help of AI tools.
The new rules for the 66th Grammy Awards, released on Tuesday, now state that “only human creators are eligible to be submitted for consideration for, nominated for, or win a Grammy award”—but AI-assisted music will also be considered.
According to the new rules, “the human authorship component of the work submitted must be meaningful and more than de minimis,” with de minimis defined as “lacking significance or importance.” That human element must also be relevant to the appropriate Grammy category, while songs that contain “no human authorship” are not eligible in any category.
“At this point, we are going to allow AI music and content to be submitted, but the Grammys will only be allowed to go to human creators who have contributed creatively in the appropriate categories,” explained Recording Academy CEO Harvey Mason Jr. in a news post.
“If there's an AI voice singing the song or AI instrumentation, we'll consider it,” he continued. “But in a songwriting-based category, it has to have been written mostly by a human. Same goes for performance categories—only a human performer can be considered for a Grammy.”
Mason Jr. went on to say that the updated rules are “important,” because “AI is going to absolutely, unequivocally have a hand in shaping the future of our industry. The idea of being caught off-guard by it and not addressing it is unacceptable.”
He further added that the Academy has to start “adapting” to accommodate AI technology, along with setting “guardrails and standards.”
“Not knowing exactly what [AI] is going to mean or do in the next months and years gives me some pause and some concerns,” he added. “But I absolutely acknowledge that it's going to be a part of the music industry and the artistic community and society at large.
"There are a lot of things that need to be addressed around AI as it relates to our industry,” he added.
In April, British indie band Breezer used AI to replicate Liam Gallagher’s vocals for a “lost” Oasis album, with all of the music and lyrics written and performed by Breezer themselves. Meanwhile, the musician Grimes has created an open-source program that allows her vocals to be replicated by AI.
Elsewhere, Paul McCartney said recently that he used AI technology to extract vocals from an old John Lennon demo tape, which will form the basis for the “final” Beatles song due later this year. Both Meta and Google have recently released their own music-generating tools, too.
“Generative models can represent an unfair competition for artists,” admitted Meta researchers when they unveiled MusicGen.