Artificial intelligence (AI) technology has advanced to the point where AI systems can be used to create new musical works and songs. As AI becomes more ubiquitous in the process of making music, artists and listeners both will have to address a looming question: do humans or machines make better music?

Before digging into this question, we must first acknowledge that taste in music, like all art, is subjective. Certainly there is no way to reach a consensus about which human artists create the best music, so it should be expected that there will similarly be many viewpoints when it comes to comparing AI- and human-generated sounds. Nonetheless, there are some important aspects of AI-based music to consider when framing this question, and the question itself could have significant implications for all aspects of the music industry.

AI Music: A Learning Process

As of this writing, few AI systems designed to make music can truly create a brand-new song out of thin air. Most instead take user inputs and adjust existing soundscapes or beats, which can result in music that sounds similar to songs produced by other users of the same system. Two systems that can, in fact, create new music are Google’s MusicLM and Jukebox, made by ChatGPT creator OpenAI. Neither had been released to the public as of May 2023.


Aside from the fraught copyright landscape, a major reason that these tools are not widely available is the level of quality of music that they currently produce. Listeners have commented on odd sounds, uneven mixes, disconcerting vocals, and strange hybrids of different genres and styles in songs created by these and similar platforms.

This is to be expected. Many of these systems use machine learning or deep learning to generate new music based on a detailed analysis of countless pre-existing musical examples. As the systems generate new songs, those songs are in turn analyzed to see how well they fit the rules and patterns the AI has detected, so improvement over time is likely. One reason AI-based music may seem inferior to human-created music, then, is simply that these systems haven’t had the time to figure out how to make convincing songs.
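The analyze-generate-evaluate loop described above can be illustrated with a deliberately simple sketch. This is not how MusicLM or Jukebox actually work (those rely on large neural networks); it is a toy Markov-chain model that learns note-transition patterns from a tiny made-up corpus, generates candidate "songs," and scores each candidate by how well its transitions match the patterns found in the training examples. All names and data here are hypothetical, for illustration only.

```python
import random
from collections import defaultdict

# Toy corpus of "songs": each song is just a sequence of note names.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "C", "E"],
    ["G", "E", "C", "E", "G"],
]

# Step 1: analyze existing examples -- count how often each note
# follows another (a first-order Markov model of the corpus).
transitions = defaultdict(lambda: defaultdict(int))
for song in corpus:
    for a, b in zip(song, song[1:]):
        transitions[a][b] += 1

def generate(length=5, seed="C"):
    """Generate a new note sequence by sampling learned transitions."""
    song = [seed]
    for _ in range(length - 1):
        options = transitions[song[-1]]
        if not options:  # dead end: restart from the seed note
            song.append(seed)
            continue
        notes, counts = zip(*options.items())
        song.append(random.choices(notes, weights=counts)[0])
    return song

def score(song):
    """Rate how well a song fits the detected patterns: the fraction
    of its note transitions that were ever seen in the corpus."""
    pairs = list(zip(song, song[1:]))
    seen = sum(1 for a, b in pairs if transitions[a][b] > 0)
    return seen / len(pairs)

# Step 2: generate candidates, evaluate them against the learned
# rules, and keep the best one -- the feedback loop in miniature.
best = max((generate() for _ in range(20)), key=score)
print(best, score(best))
```

Real systems replace the transition counts with learned neural representations and the hand-written score with much richer evaluation, but the overall shape of the loop is the same.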

Study Says AI-Based Music Is Not There Yet

A 2023 study by researchers at the University of York attempted to determine whether deep or non-deep learning methods could generate music that was rated favorably compared to human-created music. The researchers recruited 50 participants with “relatively high musical knowledge,” each of whom rated samples of both computer- and human-created music on dimensions including stylistic success, aesthetic pleasure, repetition/self-reference, melody, harmony, and rhythm. The works were all in the classical style and included string quartets and piano improvisations.

The study’s results showed not only that human-composed music was strongly favored over AI-generated sounds, but also that the strongest deep learning method performed no better than a non-deep learning method. This latter point suggests that deep learning may not yet be the key to success with AI-created music.

What’s the Difference?

Another crucial question in the discussion of whether AI- or human-generated music is better is to what extent listeners can even discern the difference between the two. In the study cited above, listeners were often able to tell the difference based on the parameters used in the evaluation. Other listeners point to a lack of subtle nuance and variety in AI-generated music.


Still, some studies suggest that in some cases it can be difficult to tell these two kinds of music apart. A study by the team behind the AI music generator Amper and the audio research firm Veritonic asked participants to distinguish between AI-based music, human-created music, and stock music; the average person was unable to tell the difference. Tidio, a customer service platform, conducted a survey on AI- versus human-created art in 2022 and found that music was one of the categories in which respondents had the hardest time telling machine-made works from human ones. Participants tended to attribute songs they felt were “too good” or “too complex” to AI, suggesting a degree of doubt about the abilities of human musicians.

Cheat Sheet

  • As machine-generated music becomes increasingly prevalent, listeners and artists will have to reckon with the question of who (or what) creates the best music.
  • As of mid-2023, most artificial intelligence (AI) systems designed to make music are not yet capable of creating brand new songs from nothing. Rather, many take existing inputs, either provided by users or from a database of musical samples, to create new works.
  • Two AI tools that are capable of creating new songs from nothing are Google’s MusicLM and OpenAI’s Jukebox.
  • Both MusicLM and Jukebox have been criticized for creating music that sounds disjointed, uneven, and generally not as good as some examples of human-created works.
  • It is to be expected that music AI tools will become better at creating music that listeners enjoy hearing over time.
  • A 2023 study by researchers at the University of York found that participants generally rated human-composed classical music more favorably than machine-created music on parameters including stylistic success, aesthetic pleasure, melody, and more.
  • Still, some studies suggest that many listeners are already unable to tell the difference between music made by computers and music composed by humans.