Producing a song that reaches the top of the charts on platforms like Spotify or Apple Music is what many artists crave, but fewer than 4% of new songs ever make it onto the charts. While there is no magic formula for doing it, a new study suggests that machine learning, an artificial intelligence technique, applied to people's brain responses can identify the songs that arouse their emotions. And those are the ones that usually become the hits of the music industry.
Developed by researchers at Claremont Graduate University in the Los Angeles area, the method uses conventional sensors, such as those found in smartwatches, to analyze human neurophysiological responses and thus rate songs. In the study, published in the journal Frontiers in Artificial Intelligence, 33 participants listened to 24 songs selected by staff at a streaming service; 13 of them were hits (with more than 700,000 streams) and the rest were not. The researchers measured brain reactions associated with attentional state (linked to dopamine release) and emotional response (linked to oxytocin). Together, these neural signals predict how the brain behaves after a stimulus, especially one that elicits emotion. In essence, it is like having a window into the mind to understand the effect that music has on the brain.
Paul Zak, lead author and a professor at the American university, explains that people may cite characteristics such as rhythm or tone when explaining why they like a song, yet they cannot be fully aware of their intrinsic motives. "It turns out that the brain knows. Even if you cannot consciously identify it, the unconscious brain systems do know if something is good or not," says the researcher.
The study showed that participants' neurophysiological responses could predict which songs were the most popular, based on music market figures. A linear statistical model identified the hit songs with a 69% success rate, and by applying machine learning the researchers improved the accuracy to 97%. Even when analyzing the neural responses to only the first minute of each song, an accuracy of 82% was achieved.
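A minimal sketch of that modeling step, in Python with scikit-learn, appears below. Everything in it is a placeholder, since the study does not publish its exact features or algorithm: synthetic attention- and emotion-related summary statistics stand in for the real sensor data, a logistic regression plays the linear baseline, and a random forest stands in for the machine-learning model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per (listener, song) pair, mirroring the
# study's 33 participants x 24 songs; the 8 columns are invented summary
# statistics of attention- and emotion-linked signals.
X = rng.normal(size=(33 * 24, 8))
y = rng.integers(0, 2, size=33 * 24)  # 1 = hit (>700,000 streams), 0 = not

models = {
    "linear baseline": LogisticRegression(max_iter=1000),
    "machine learning": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    # Cross-validated accuracy; on random data both hover near chance.
    # The point is the shape of the pipeline, not the numbers.
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.0%} accuracy")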
Despite these promising results, the team acknowledges the limitations of the study, such as the relatively small number of songs analyzed and the lack of representation of certain demographic groups among the participants. Even so, they maintain that the novel methodology could be applied to other forms of entertainment, such as movies and TV shows, which could be a game changer for the entertainment industry. For other types of content, such as audiovisual works, the data would have to be modeled differently, but the neurophysiological responses remain the same. "The methodology is solid, which means that it can be used over and over again, although each model will be slightly different," adds Zak.
Streaming platforms often have their own recommendation methods, generally based on algorithms, analysis by human experts, and listener behavior, such as when a user gives a track a like. Melanie Parejo, Head of Music for Southern and Eastern Europe at Spotify, explains that the platform's methodology employs a "wide range of learning techniques," ranging "from collaborative filtering to reinforcement learning."
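Of those techniques, collaborative filtering is the easiest to illustrate: songs played by the same users are treated as similar, and a listener is recommended songs that resemble the ones they already play. The Python sketch below uses an invented play-count matrix and is only a toy version of the idea, not Spotify's implementation.

import numpy as np

# Toy play-count matrix: rows are users, columns are songs (all invented).
plays = np.array([
    [5, 0, 3, 0],
    [4, 0, 4, 1],
    [0, 5, 0, 4],
    [0, 4, 1, 5],
], dtype=float)

# Item-item cosine similarity: songs that share listeners score high.
norms = np.linalg.norm(plays, axis=0, keepdims=True)
sim = (plays.T @ plays) / (norms.T @ norms + 1e-9)

# Recommend for user 0: score each song by its similarity to what they play,
# then mask the songs they have already heard.
scores = sim @ plays[0]
scores[plays[0] > 0] = -np.inf
print("recommended song index:", int(np.argmax(scores)))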
Parejo underlines that musical trends respond to factors internal to the platform, such as the number of plays or how each song evolves over time, but also to external factors, such as what happens on the internet or on television. "There are multiple consumption signals that can contribute to the success of a song, from its growth rate to its organic consumption, but also whether users search for the song proactively or do not skip it when it appears in a playlist. But our editorial teams also take into account the broader context, what happens outside the platform, how it is shared on social networks or whether, for example, it is enjoying a boost thanks to a TV series," the Spotify representative explains.
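As a purely hypothetical illustration of how such signals might be combined, the toy function below folds a few of them into one score. The signal names and weights are invented for this sketch; Spotify's actual ranking logic is not public.

def trending_score(growth_rate, organic_share, search_lift, skip_rate):
    # Growth, organic listening and proactive searches push a song up;
    # a high skip rate pulls it down. All weights are made up.
    return (0.4 * growth_rate
            + 0.3 * organic_share
            + 0.2 * search_lift
            - 0.1 * skip_rate)

# Example: a fast-growing song that is rarely skipped scores well.
print(trending_score(growth_rate=0.8, organic_share=0.6, search_lift=0.5, skip_rate=0.1))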
In search of the musical ‘hit’
If the method proposed by the American researchers proves effective in identifying musical hits, could it perhaps contribute to creating the perfect song? Professor Paul Zak approaches this question with nuance. His affirmative answer focuses on the musician or band, who might invite a few people to listen to a release to gauge the intensity of their emotional connection. From there, they would be in a position to fine-tune different musical elements, whether evolving chords or a change in rhythm, all aimed at amplifying the affective impact. "That would be the approach that some people are already starting to take today," Zak stresses. However, when it comes to producing a musical composition with such attributes from scratch, his perspective is not so clear. "We need artists to do that initial creative work. There is no way to go full circle and artificially produce the perfect song," he clarifies.
Professor Sergi Jordà, who has researched the relationship between music and technology for more than 30 years, agrees that deciphering brain signals through sensors can make it possible to optimize songs, but says "it is insufficient to generate hits." Even so, it may be only a matter of time: given the rapid advance of generative artificial intelligence and mood sensors, it is not unreasonable to anticipate a scenario in which machines give birth to the most memorable melodies.
We are at the gates of that future. In November 2022, the Chinese streaming giant Tencent Music Entertainment produced and released over a thousand previously unreleased songs with AI-generated vocals that imitate the human voice. One of them, titled Today, reached one hundred million streams, becoming the first artificial song to hit that figure, according to a report by Music Business Worldwide.
Jordà points out that current capabilities for generating music from text, for example, "have left all the experts perplexed." Moreover, given the ability of neural networks to generate variations on what already exists, it is likely that they will take on the role of hitmaking songwriters in the near future. "It is clear that, fed with great hits, they will tend to produce things that resemble great hits," says the professor and researcher at the Music Technology Group of the Universitat Pompeu Fabra in Barcelona. "This is a future that seems very dystopian. But it is worrying and it is real," he asserts. He also notes other possibilities, such as music created on the fly, instantly optimized to a person's mood.
For his part, Zak believes that the method developed by his team can benefit artists who are starting their careers, by letting them perceive what other people like. "If you're the Rolling Stones and you've played about ten thousand concerts, you already know, more or less, what's good and what's bad," he says. For an amateur musician, however, it is a way to "accelerate learning" and resolve dilemmas like "I like this song, so why wouldn't someone else like it?" "It's not the only reason to create art, but if you want to create art that touches people emotionally, then it has to touch not only you, but others as well," Zak concludes.