
Ed Newton-Rex grew up immersed in music. As a child, he sang in the King’s College Choir in England and played piano. He went on to earn a music degree, and one of the questions he studied, he told me, was “Why do people like music?” The answer, he learned, is that there’s no simple answer: It’s a deeply complex stew of art, timbre, and emotion.

And math. As Pythagoras discovered about 2,500 years ago, music is deeply mathematical, and it’s possible to represent melody using numbers and ratios. After finishing his undergraduate degree in 2010, Newton-Rex went to visit his girlfriend, who was studying at Harvard. He sat in on a coding lecture and became enraptured with the idea of writing software that could generate songs by harnessing the machine’s ability to semi-randomly recombine numbers. “Why haven’t computers been able to do this yet?” he wondered.

Over the next year, he set out to create a composing machine. He taught himself enough programming to code up a prototype that would create songs based on a set of simple rules. Before long, his system, Jukedeck, was cranking out instrumental tunes good enough to convince some investors to back him. He then hired programmers to rebuild his system using “deep learning” neural networks, the hot new artificial-intelligence technique. Neural nets can, in effect, learn on their own. Newton-Rex would feed thousands of melodies his team composed—pop, blues, folk, and other genres—into the system. The neural net would decode the deep patterns in the music and crank out new melodies based on what it had intuited.

Jukedeck has since penned more than 1 million songs, and in the past few years several similar firms—Amper in New York, Popgun in Australia, and AIVA in Luxembourg—have emerged to join this weird new industry. Their tools are point-and-click easy: Pick a genre, a “mood,” and a duration, and boom—Jukedeck churns out a free composition for your personal project or, if you pay a fee, for commercial use. Songs composed by Jukedeck and its ilk are already showing up in podcasts, video games, and YouTube content, “from explainer videos to family holiday videos to sports videos,” says Patrick Stobbs, Jukedeck’s co-founder. For years, DIY video makers have licensed tunes from huge “libraries” of Muzak-y stuff produced by humans. Now, AI offers fresh compositions at the press of a button.

The songs can be surprisingly good. I generated a 90-second folk-pop tune on Jukedeck using the “uplifting” option, with bass, drums, synthesizers, and jangly artificial guitar. The robot composer even threw in a few slick little melodic breaks. As a part-time musician, I’ve composed and recorded enough to be impressed. The tune wasn’t brilliant or memorable, but it easily matched the quality of human work you’d hear in videos and ads. It would take a human composer at least an hour to create such a piece—Jukedeck did it in less than a minute. All of which raises some thorny questions. We’ve all heard about how AI is getting progressively better at accomplishing eerily lifelike tasks: driving cars, recognizing faces, translating languages. But when a machine can compose songs as well as a talented musician can, the implications run deep—not only for people’s livelihoods, but for the very notion of what makes human beings unique.

Newton-Rex and his fellow pioneers are, historically, in good company. For centuries, musicians have been mesmerized by the idea of writing algorithmically, usually by finding some device to add randomness to their craft. In the 18th century, composers played Würfelspiel, a dice game, to generate compositions. This became so common that one composer even wrote a satire about an artist who splattered paint on musical scores and tried to play whatever emerged. In Amsterdam, Dietrich Winkel, inventor of the common metronome, built a mammoth automated pipe organ that recombined melodies using two barrels that interacted on a “random walk.” In the 1930s, with the help of Léon Theremin, Henry Cowell created the Rhythmicon, a robotic drum machine. “He could hear these rhythms that he was theorizing, basically, that no human could play,” explains Margaret Schedel, a professor of music at New York’s Stony Brook University. The innovations picked up again in the 1960s, as the first generation of computer nerds coaxed room-size mainframes to generate simple melodies. A couple of decades later, composition tools arrived with the first wave of personal computers: Laurie Spiegel’s Music Mouse software let you wave your mouse around and hit keys to influence the algorithm, making you a partner in your Mac’s auditory creation.
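
It takes strikingly little machinery to play a Würfelspiel. Here is a toy version in Python—a sketch under loose assumptions: the historical games indexed a printed table of precomposed bars by the sum of two dice, and the chord fragments below are placeholders rather than period music.

```python
import random

# Toy Würfelspiel: each slot in the piece has several interchangeable,
# precomposed candidate bars (placeholder chord spellings here).
bar_table = [
    ["C-E-G", "E-G-C", "G-C-E"],
    ["F-A-C", "A-C-F", "C-F-A"],
    ["G-B-D", "B-D-G", "D-G-B"],
    ["C-E-G", "G-E-C", "C-G-E"],
]

# Historically, the sum of two dice picked a row in a published table;
# random.choice is the modern shortcut for the same move.
piece = [random.choice(candidates) for candidates in bar_table]
print(" | ".join(piece))  # a different "composition" on every run
```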

There are two forces propelling today’s robotic music explosion. One is the rise of neural nets, a technique AI scientists beavered away at for decades before enjoying key breakthroughs in the early 2010s. Companies like Google have released free, easy-to-use neural net code, so now nearly any competent programmer can dabble. And neural nets allow for subtler compositions than past technologies did. Rather than telling the system precisely how to compose a tune or a beat, the coder simply gathers thousands of examples and lets the system make its own rules.
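
To make that concrete, here is a minimal sketch of next-note prediction in Python with PyTorch, the basic recipe behind these composing systems. Everything here is illustrative, not any company’s actual code: the toy corpus stands in for the thousands of melodies a real system trains on, and a production model would be far larger and use richer encodings than bare MIDI pitch numbers.

```python
# Train a tiny model to predict the next note, then sample new melodies.
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitches 0-127

class NoteModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

# Toy corpus: one scale-like tune repeated; in practice, thousands of melodies.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60]] * 100

model = NoteModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for melody in corpus:
    x = torch.tensor([melody[:-1]])   # input notes
    y = torch.tensor([melody[1:]])    # next-note targets
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, VOCAB), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generate: seed with one note, then repeatedly sample the predicted next note.
note, state, tune = torch.tensor([[60]]), None, [60]
for _ in range(16):
    logits, state = model(note, state)
    note = torch.multinomial(torch.softmax(logits[0, -1], dim=0), 1).view(1, 1)
    tune.append(int(note))
print(tune)  # a new melody "intuited" from the training patterns
```

Seed it with a different opening note, or train it on blues instead of folk, and the sampled tunes shift accordingly; the “rules” live entirely in the learned weights.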

The second factor is demand. The US market for background music hit $660 million in 2017, up 18 percent from two years earlier, according to industry consultant Barry Massarsky, and preliminary figures show 11 percent growth in 2018. Composers worldwide make ends meet by contributing to the tune libraries used by YouTubers, corporations, radio shows—whoever needs a sonic backdrop. This is basically the audio version of the market for stock photos: The songs are predictable, often hackneyed, but good enough for a how-to makeup video or sports podcast.


AI will seriously disrupt that labor market. Background tracks are pretty algorithmic even when humans write them: You introduce one motif, then another, layer them together, rinse, and repeat. It’s what Amper founder Drew Silverstein, a former Hollywood composer, calls “functional” music. “We don’t necessarily care how it was created and where it came from,” he says. “The most extreme example is elevator music, right? It’s music that is serving a purpose.” Here, Amper radically outperforms humans. “Amper is not a music tool. It’s not a music solution,” Silverstein insists. “Amper is an efficiency tool. Fundamentally.” Given the choice between paying a slow human or asking a lightning-fast, almost-free bot to generate a purely functional soundtrack, which would you choose?

But the debate extends beyond elevator music. As AI capabilities improve, it’s possible—probable even—that the songs will become good enough that we’d opt to listen to them, for instance, while working or driving. The economics are enticing for streaming services. Imagine Spotify self-generating thousands of hours of chill-out ambient tracks with no need to pay human composers a dime. This isn’t far-fetched. In November 2017, Jukedeck churned out 11 songs for the Slush tech conference after-party. Pretty good ones, too—these songs would hold up next to the human-created, endless-loop instrumentals that are already racking up millions of listens on streaming sites. In July 2017, Spotify hired a major music-AI scientist, François Pachet, with the goal, it says, of developing a machine that can assist musicians.

Whenever we ponder the impacts of automation, there are dismal prophecies and sunny ones. The optimists argue that, sure, AI will destroy some jobs, but it will create new ones that pay better and require more creative smarts. Okay, the pessimists reply, but those jobs are never plentiful enough to employ the hordes hurled out of work, and rarely do they materialize fast enough.

The entrepreneurs behind the one-click compositions, as you might imagine, mostly fall into the first camp. Their efforts may erode prospects for low-end, entry-level composers, they say, but they will never eliminate the need for top talent, the people writing complex scores for movies, TV, and videos—or just songs we want to listen to. “I don’t think any of these systems are anywhere near that place, and I’m glad that they’re not,” Newton-Rex says. “We as a company don’t think that should be the aim. We don’t think that would be a good thing for musicians.” His AI tools can’t understand context or purpose: “It’s like having a composer who has no other experience in life except reading 1,000 pieces of music and then trying to come up with something similar. They won’t know what they’re doing. They won’t be able to bring any emotion, any life experiences, into it. They won’t be able to cross-pollinate ideas from other fields.” As Adam Hibble, creator of Popgun’s music-writing tool, puts it, “This AI has no idea what’s culturally relevant or what is politically relevant or whatever it is that is currently important in the zeitgeist. It’s a mindless but very intelligent music creation system.”

Humans, of course, will need to adapt. The ability to generate a three-minute instrumental probably won’t cut it anymore. To feed their families, composers likely will have to move up the food chain and do work that requires collaboration, stuff bots can’t achieve. “Musicians and composers, your job will not exist in five years,” Silverstein says. “Your career certainly will.”

But AI music poses other gnarly ethical and philosophical questions. Bob Sturm, an accordion player and computer scientist in Stockholm, is a fan of traditional Irish folk, so in 2017 he trained a neural net on 23,000 Irish tunes and had it crank out more than 60,000 new ones. He then asked composer Daren Banarsë to plow through the output, and while the majority of the songs were uninspired (or outright duds), perhaps 1 in 5 was quite good. “Some of them sound 100 percent like traditional Irish tunes—I’m confident no one could tell the difference,” Banarsë told me in an email. “And there is the occasional one with that combination of musical quality and interesting character, a special tune that could become a classic.”

Sturm and Banarsë recorded an album of human musicians performing the best of their AI creations and sent it to reviewers as the Ó Conaill Family and Friends. “No one suspected a thing; it actually got great reviews,” Banarsë says. When they later emailed the reviewers to explain the genesis of the songs, most just shrugged. Hey, a good song’s a good song, right? But some members of the Irish folk crowd objected on the grounds that AI-generated songs are inherently cannibalistic, each one derived from the creativity of the people who wrote the originals. Sturm understands their misgivings: “What gives us a right to take 23,000 tunes that have been collected by hundreds of volunteers online for one purpose and to use them in a completely different purpose?”

There’s also some philosophical weight here, because musical pursuits—historically, anyway—have always seemed like quintessentially human activities. Our melodic creations are deeply tied into our everyday emotional lives and woven into the ceremonies of civilization. This is why the prospect of automating them can seem unsettling, even depressing. There’s a bit of bleakness in realizing that something we so often associate with soulfulness and spirituality can be spewed out by a computer.

Yet in one sense, the neural nets are merely mimicking the way humans compose. We, too, consume hundreds or thousands of songs over a lifetime, intuit patterns, and recombine our knowledge into something new. We sample, we steal, and we transmogrify. Our creativity, too, is built on the creativity of those who came before. But when a machine does this, it can feel like an impersonal, even vampiric act.

As of now, no commercial AI system is good enough to create, by itself, a half-decent symphony, or even an entire pop song with words. (Some programmer-artists have tried robo-generating lyrics with limited success.) So if you’re looking for a Rubicon between human and computer creativity, that’d be it: a hit song at the push of a button. Humanity’s last stand!

But there’s a vanguard of artists using AI tech to complement, not replace, their composing skills: neural nets as writing partner. Singer Taryn Southern built an online audience starting in 2008 for comedic songs she posted on YouTube, but “I was never an adept musician,” she told me. So she had to coax musician friends to create the backing tracks or license instrumental music. After she produced a dozen or so videos that way, the human logistics were getting to be a bit much.

Then in 2017, Southern discovered the AI tools and thought, “Why not have them make the music?” She began using Jukedeck and Amper for her soundtracks—“a lot of the cinematic pop.” She also had AIVA compose a classical piano track for her song “Lovesick.” AIVA is trained on work by the major composers of the 18th and 19th centuries, so “it’s kind of like collaborating with Beethoven!” Southern jokes. And she has explored more complex experimental tools, such as the Watson Beat, with IBM programmer Anna Chaney. Feed the Watson Beat a few bars of a melody and it generates a recommendation of where that melody ought to go.

After assembling dozens of snippets of music and sounds, Southern fired up her editing software, extracting a bit of melody here, a chunk of a drum pattern there, to craft an album’s worth of instrumental backing tracks for her lyrics. The result: I AM AI, an album of sparkly, synth-heavy tracks. Southern says her project is a harbinger of a cyborg future in which AIs assist human composers rather than replace them. After all, she figures, few people truly want to listen to software-generated music: “If it was all made by a robot, then it’s just not interesting.”

Human-machine collaborations, musicians are also discovering, open up new aesthetic possibilities if only because AI creativity can be rather alien—particularly at this rudimentary phase. Last year, the French pop songwriter Benoit Carré used Flow Machines, an experimental Sony AI, to help him create the album Hello World. He trained the system on melodies he and his collaborators composed and selected. “You are a little bit like an artistic director or a producer, and you have a crazy musician in the room,” he told me. “Most of the time it is crap,” but every so often the machine kicked out a melody he would never have thought of. Carré helped write the lyrics and recorded the album with a group of meatspace musicians. It certainly wasn’t push-button easy. If anything, sifting through the AI’s output for useful, provocative passages was like panning for gold—probably more work than writing everything himself. But the silicon intelligence helped Carré break out of ruts. “In pop music, you know, it is always the same chords,” he says, so to do something new “you have to be surprised, you have to be shaken.” In a sense, we’re still like those dice-throwing artists of the 18th century, trying to goose human creativity with forces that feel outside our control.

At the very least, the human-AI collabs will raise new copyright questions. “If you want [a neural net] to make music in the style of the Beatles, you have to feed it Beatles so it understands. So do the Beatles get a royalty?” wonders Scott Cohen, co-founder of the Orchard, a distribution firm for music, film, and video. You could view the process as sampling (for which artists must pay) or merely a source of inspiration (which is free). Or suppose I feed the melody from Ariana Grande’s “thank u, next” into Google’s free Continue app, which, like the Watson Beat, predicts musically plausible next bars. If I then use those bars to create a song, do I owe Grande anything? Such questions will need to be hashed out in the years to come, maybe in Neal Stephensonesque courtroom brawls among record labels—if they still exist.


This is likely the future, in any case. The AI gurus claim their products will help amateurs punch above their weight—crafting songs far beyond their unaided skills. During an internship with Google’s Magenta project, for example, Ph.D. music student Chris Donahue developed an AI system called Piano Genie. Rather than rely on all 88 keys, Piano Genie has just eight buttons, and it generates melodies based loosely on the patterns you key in. It’s kind of like Guitar Hero, if hitting the controller buttons created actual music. “It’s not as hard as playing full piano, but it’s not touching a button and having music spill out,” Donahue told me.
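
Piano Genie’s actual button-to-piano mapping is a trained neural network; as a rough, rule-based stand-in (my own illustration, not Donahue’s method), here is a Python sketch that follows the up-and-down contour of eight-button presses through a scale, with a dash of randomness in place of the model’s learned judgment. The point is the interface: coarse gestures in, plausible notes out.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76]  # C major, MIDI pitches

def play(presses, pos=4):
    """Turn button presses (0-7) into notes by following their contour:
    pressing a higher button than last time moves up the scale, a lower
    one moves down, nudged by a little randomness."""
    melody, last = [], presses[0]
    for b in presses:
        step = (b - last) + random.choice([-1, 0, 1])
        pos = max(0, min(len(SCALE) - 1, pos + step))
        melody.append(SCALE[pos])
        last = b
    return melody

print(play([0, 3, 5, 7, 5, 3, 0, 2]))  # a rise-and-fall gesture, as notes
```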

“Millions of people downloaded GarageBand and then never used it,” says Popgun founder Hibble. “Or they used it once and found it too difficult. The promise of a musically intelligent system is that you can reduce that to essentially no previous musical creation skill in order to start making something that you want to share.” AI promises to democratize composing in “the same way Instagram makes everybody a photographer,” Hibble adds, “because it’s now much easier to get something that looks good.” He predicts that his AI tool and others will be incorporated into instruments—actual hardware. “It’s going to be in keyboards; it’s going to be everywhere,” he says. You’ll sit down at an electric piano to riff away with the ghost of Bach guiding you. And that sounds awfully fun, though it, too, raises a disquieting possibility: Would such instruments begin to de-skill young musicians, who might then decide there’s no need to work toward mastery?

On the upside, the rise of AI tools could spur entirely new genres. Fresh music technologies often do: The electric guitar gave us rock, the synth helped create new wave, and electronic drum machines and samplers catalyzed the growth of hip-hop. Auto-Tune was a dirty little secret of the record industry, a way to clean up bad singing performances, until artists like Cher and T-Pain used it to craft entirely new, wild vocal styles. The next great trend in music could be sparked by an artist who takes the AI capabilities and runs with them. “Someone can make their own and really develop an identity of, I’m the person who knows how to use this,” says Magenta project engineer Adam Roberts. “A violin—this was technology that when you give it to Mozart, he goes, ‘Look what I can do with this piece of technology!’” exclaims Cohen, the Orchard co-founder. “If Mozart was a teenager in 2019, what would he do with AI?”
