From jingles to pop hits, AI is music to some ears
LONDON, Jan 23 — Patrick Stobbs recently sat in a conference room playing songs from his smartphone, attempting to show how his startup, Jukedeck, is at the cutting edge of music. The tune sounded like the soundtrack to a 1980s video game. “This is where we were two years ago,” he said, looking slightly embarrassed.
“And this is where we are now,” he continued. He then played a gentle piano piece. Its melody was simple, and it was unsubtle in its melancholy, but there was no denying that it could work as background music for, say, a health insurance commercial.
Stobbs did not write the music himself, nor did he commission it from a composer. Jukedeck is one of a growing number of companies using artificial intelligence to compose music. Their computers tap tools like artificial neural networks, modelled on the brain, that allow the machines to learn by doing, rather as a child does. So far, at least, these businesses do not seem to be causing much anxiety among musicians.
“We see our system as still in its infancy; it’s only learned a certain amount about music,” Stobbs said, although he quickly hinted how he hoped Jukedeck’s music could advance: “There’s no rule of physics that says computers can’t get as good as a human.”
Having machines write music is not new. In the 1950s, composer Lejaren Hiller used a computer to produce the “Illiac” Suite for string quartet, the first computer-generated score.
Since then, countless researchers have pushed that work forward. But several startups are trying to commercialise AI music for everything from jingles to potential pop hits. Jukedeck, for instance, is looking to sell tracks to anyone who needs background music for videos, games or commercials. The company charges large businesses just US$21.99 (RM97.65) to use a track, a fraction of what hiring a musician would cost. Stobbs wouldn’t reveal how many tracks it has sold, but said that the British division of Coca-Cola pays for a monthly subscription.
Tech giants are also involved. In June, Google Brain announced Magenta, a project that aims to have computers produce “compelling and artistic” music, filled with surprises. Its efforts so far do not quite fit the bill.
In September, DeepMind, the Google-owned British artificial intelligence company, also released results of an experiment it undertook for fun. DeepMind put samples of piano music into its WaveNet system, used to generate audio, such as speech. The system, which was not told anything about how music worked, used the initial audio to synthesise 10-second clips that sound like avant-garde jazz. IBM also has a research project called Watson Beat, which musicians will be able to use to transform their work’s style, making songs sound Middle Eastern, for example, or “spooky”.
Jukedeck’s beginnings are somewhat surprising for a tech company. Stobbs and composer Ed Newton-Rex, both 29, founded it in 2012. They had been choristers at King’s College School in Cambridge, England, and Newton-Rex went on to study music at the University of Cambridge, where he first learned that artificial intelligence could compose.
After graduating from Cambridge, the pair set up a choral boy band (“a terrible idea”, Stobbs said), and had aspirations to run a record label. But in 2010, Newton-Rex attended a computer science lecture at Harvard, where his girlfriend was studying. The lecturer made coding sound relatively straightforward, and also made Newton-Rex recall his earlier studies in AI music. He decided to put the two together, and he set about building Jukedeck on the flight home.
Jukedeck’s system involves feeding hundreds of scores into its artificial neural networks, which analyse them to work out things like the probability of one note following another, or how chords progress. The networks can eventually produce compositions in a similar style, which are then turned into audio using an automated production programme. It has different networks for different styles, from folk to “corporate” — something that sounds like the glossy electronica typically played at business conferences.
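The note-to-note probabilities described here can be illustrated with a toy first-order Markov model — a deliberately simplified sketch, not Jukedeck’s actual system (which uses neural networks), and with made-up melodies standing in for real scores:

```python
import random
from collections import defaultdict, Counter

# Toy "training scores": melodies as sequences of note names.
# These are illustrative examples, not real training data.
melodies = [
    ["C", "D", "E", "F", "G", "E", "C"],
    ["C", "E", "G", "E", "D", "C"],
    ["G", "F", "E", "D", "C", "D", "E"],
]

# Count how often each note follows another.
transitions = defaultdict(Counter)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current][nxt] += 1

def generate(start, length, seed=None):
    """Sample a new melody by repeatedly picking a likely next note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        counts = transitions[melody[-1]]
        if not counts:  # dead end: no known follower for this note
            break
        notes, weights = zip(*counts.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(generate("C", 8, seed=1))
```

A system trained on real scores works on the same principle at far greater scale, with neural networks capturing longer-range structure than a single note-to-note lookup can.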
The company only recently started experimenting with the artificial neural networks for the audio output as well as the composition. This should make tracks sound more natural and varied — more human, in other words.
Other companies working on AI music tend to involve musicians more directly in the process. The Sony Computer Science Laboratory in Paris, for example, considers musicians essential to its Flow Machines project, which has received funding from the European Research Council.
The idea behind the project is to get computers to write pop hits, said François Pachet, the laboratory’s director. “Most people working on AI have focused on classical music, but I’ve always been convinced that composing a short, catchy melody is probably the most difficult task,” he said.
He added: “A compelling song is actually a rare and fragile object. It can only work if all the dimensions are right: The melody, the harmony, the voice, the dress of the singer, the discourse around it — like, ‘Why did you write this song?’ No one is able to model all that right now, and I’m interested in that problem.”
Flow Machines’ main system is a composing tool that works similarly to Jukedeck’s: by getting a computer to analyse scores — everything from Beatles songs to dance hits — so that it can learn from them and write its own. However, its output is then given to musicians, who are free to use it, change it or throw it away as they like, at no charge. (Negotiations are underway regarding contractual obligations if record labels release any of this music.) Musicians give “a sense of agency”, Pachet said. “The systems don’t know why they want to make music. They don’t have any goal, any desire.”
Around 20 acts have used the system, Pachet said, some performing the songs they wrote using it at a recent concert. He is in talks with some well-known groups, like indie band Phoenix, to try it, he added, and several albums will be released this year.
Musicians appear to enjoy it. “I could never have written a song like the one I did without it,” said Mathieu Peudupin, a French rock musician who goes by the name Lescop. “It drove me to a place I would never have gone myself.” He said it was like working with a bandmate, although he ignored most of its suggestions. “But what singer in the world listens to his bandmates?” he said, laughing.
Pachet and Lescop both said they did not think listeners would ever entirely accept computer-generated music. “Music fans need to fall in love with musicians,” Lescop said. “You can’t fall in love with a computer.”
But the founders of Jukedeck are less certain.
Newton-Rex sees artificial intelligence changing the way we listen, especially if computers eventually “understand music enough to make it respond in real time to, let’s say, a game, or you going for a run,” he said. “Recorded music’s brilliant, but it’s static. If you’re playing a game, Hans Zimmer isn’t sitting with you composing. I think responsive systems like that will be a big part of the music of the future.” — The New York Times