Musical AI

On Nitch, the Instagram page with enough scrolling power to prod you out of bed in the morning, there’s an image of Banksy sitting in a chair, his face cloaked behind an oversized hood and his fingertips welded together as if in prayer.

The caption reads, “I don’t know why people are so keen to put the details of their private life in public; they forget that invisibility is a super power.”

Sometimes this idea prompts me to hang back. To watch while others spread themselves thin. But when I discovered Spotify and the opportunity to be totally transparent about my listening habits, I leaned in.

Spotify gives me a heightened awareness about people’s connection to music. A track off Bon Iver’s For Emma might signal a rough day. ODESZA’s Divinity Remix hints at a moment of teeth-gnashing action. And when my grandfather, a man dedicated to his vinyl records, joined Spotify, I watched as he suddenly transitioned from Bach, Handel and Arthur Rubinstein to Frank Zappa, revealing a rare glimpse at a subterranean rebellious streak.

Maybe it was naive to overlook that all this public data could be harnessed for something besides interconnectedness. With great data comes great responsibility, and recent news has stirred a growing suspicion about how Spotify may be using that data.

Journalists and bloggers are accusing Spotify of creating songs by “fake artists” to pad some of its ambient playlists and save money on royalty fees. Several artists have stepped forward; others, like Deep Watch, remain eerily quiet.

Then there’s the potential connection between The Echo Nest, Spotify’s data collection hub that tracks users’ listening habits, and the recent hire of François Pachet, a revered French professor and machine learning guru.

Bloggers speculate that Spotify is accumulating listening data so that they can use machine learning to create customized music. With millions of hours of listening data, Spotify’s algorithms could pinpoint the chord progressions, rhythms and styles that resonate with users. They could theoretically customize music for a specific moment in time.

The goal is awesome. And for me it would mark the end of a long journey that began when I stumbled on the crowdsourcing capacity of Hype Machine. I’ve always fantasized about a sixth sense for what song belongs in the current moment. An algorithm would take out the guesswork.

We may not even be that far off. The glowing boards laid out in front of a DJ and the prevalence of Pro Tools have introduced a new member of the band – a formula, a computer, or whatever cuts and hems raw tracks.

It’s hard to resist that mathematical precision. By definition, technology makes things easier. Venmo means you don’t have to take cash out. Uber Eats means you don’t have to leave your couch to eat sushi. But I’m of the belief that easier doesn’t always mean better in the long term.

Take Dave Grohl, for example. As a kid he couldn’t afford a drum set, so he’d beat wooden sticks on pillows, hitting the fabric as hard as he could to make an audible sound. That produced a breakneck style that drove Nirvana and the Foo Fighters. No one brings Dave Grohl on to play drums for a soft jazz track – they hire him to shatter snare drums.

My concern with AI-generated music is that we will lose the ‘fuck you’ types. The narrative behind the music will become less important as it becomes a more streamlined experience. We will settle into a cyclical feedback loop, thoughtlessly mainlining the musical stream. Listening data goes in. Tunes come out. Listening data goes in. Tunes come out. Our past would prescribe our future.

But as we get further into AI territory, we may find that our experiences with music aren’t as transactional as listening data suggests. A friend recently told me that he associated Father John Misty’s “Nancy From Now On” with a second-hand surfboard he’d found in the back room of a well-lit surf shop. When he called the previous owner, the man told him that he’d shaped it for his wife, who was uninterested in the alternative design (it’s round like a pill or a bar of soap). As my friend twirled the board around, he noticed a note on the bottom channel. Inscribed in pencil, it said, “For Nancy.” The moment was cemented and the board became one with the song.

Algorithms are designed to root out randomness and chance. For an equation to work, you need a closed system. A self-driving car can’t function if the roads are forever changing direction. The same would be true for AI-generated music. You would need to construct a limit. A known quantity. Random chance would be deadly.

But music needs randomness. It needs chaos. It needs Death Grips. It thrives off leaps and bounds – unpredictable moments of improvisation. And for that, there’s no one better than a human.


Published by

D-man

