Why Mood Is The New Musical Genre
Traditional classification systems struggle to stay relevant in an age when the term “chill” means more than “R&B.”
Greetings from paternity leave! Please enjoy this guest post by Tiffany Ng. In addition to doing some fantastic editorial research for me on We Are All Musicians Now and other projects this year, she served as culture editor at the Yale Daily News before her recent graduation. She’s also written for Vogue Hong Kong and Highsnobiety—and understands far better than I do not only what “the kids” are listening to, but how they’re classifying it. As for yours truly, I’ll be back in action later this month. -ZOG
How would you describe your taste in music? Is your instinctual response a genre? A musician? A general vibe?
Several months ago, Spotify’s 2021 Wrapped helped answer this daunting question through its “Audio Auras”—a visualization of two adjectives describing each user’s listening habits: be it wistful and comforting, happy and cozy, or energetic and bold. These keywords are chosen based on collected data, reducing the complicated nuances of taste into simple descriptors. In the age of individualism, the feature feeds into the push to define one’s personal brand.
Audio Auras are generated by aggregating each user’s listening history and matching it against a dataset of track mood descriptors, calculating the percentage to which a user’s listening preferences align with each of six defined moods. Those percentages then generate a color map that visualizes the two moods most statistically emblematic of the user.
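Spotify hasn’t published how Audio Auras are actually computed, but the aggregate-and-rank step described above can be sketched in a few lines of Python. The mood vocabulary, function name, and scoring here are illustrative assumptions, not Spotify’s real pipeline:

```python
from collections import Counter

# Hypothetical mood vocabulary -- Spotify's actual six moods and
# scoring method are not public.
MOODS = ["wistful", "comforting", "happy", "cozy", "energetic", "bold"]

def audio_aura(track_moods):
    """Given one mood label per track in a listening history,
    return the two dominant moods with their percentages."""
    counts = Counter(m for m in track_moods if m in MOODS)
    total = sum(counts.values())
    top_two = counts.most_common(2)
    return [(mood, round(100 * n / total, 1)) for mood, n in top_two]

history = ["energetic", "bold", "energetic", "cozy", "energetic", "bold"]
print(audio_aura(history))  # [('energetic', 50.0), ('bold', 33.3)]
```

The point of the sketch is the shape of the computation: a per-track mood label, a tally, and a top-two cut that becomes the two-adjective aura.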
Projects like this gesture towards a bigger question asked increasingly by my generation: is genre still relevant when it comes to describing taste? In a time when Kaytranada is considered an “electronic-hip-hop-R&B-dance-funk-house artist” and categories like “Country Pop” are frequented on Spotify, I doubt the labels “hip-hop” or “country” mean much to the average listener anymore.
When a series of historical definitions is used to determine what is or isn’t music, new sounds are inevitably boxed in by existing conceptions of what music “should” be. In this vein, musicians—eager to find a unique sound—are growing increasingly defiant of genre. And so are their audiences.
Not only are types of music evolving; the way people listen to music is changing too, as evidenced by the rise of self-curation. Streaming services actively encourage a culture of music consumption that is highly personalized (though Spotify did comply with Adele’s recent request to remove the default shuffle button for albums).
Gone are the days when people would sit patiently next to their record player, listening to each song in order of the album’s track list. Instead, it’s not uncommon to find playlists where Beyoncé’s ever-so-raunchy “Partition” is immediately followed by Beethoven’s “Symphony No. 9.”
Listeners—especially young ones—are not concerned with what category each track falls under, but with how each track makes them feel. The abundance of homemade playlists coupled with the popularity of experimentation has made the fixation on traditional genres akin to insisting that the guy has to pay for dinner on a first date.
Picking up on this collapse of genre, streaming services are creating new categories like “Bedroom Pop” or “Club Culture” to depart from the purist, cookie-cutter definitions. They’re responding to this personalized culture of music consumption by embracing an overlooked cornerstone of musical classification: moods.
So how do these platforms categorize and define vibes of music? A 2015 research paper titled “Curation By Code: Infomediaries and the Data Mining of Taste” explores this phenomenon by investigating The Echo Nest, a Spotify-owned music intelligence/data platform. By parsing “an entire song in a few seconds and [processing] the signal into thousands of unique segments, including timbre, beat, frequency, amplitude, vocal syllables, [and] notes,” The Echo Nest’s software distills each song into a set of data points used for categorization.
These computer-measurable characteristics are organized into groups based on what the software calls “cultural data.” By surveying the Internet using natural language processing techniques, the program defines the characteristics of each genre and subsequently labels each song. The same goes for an artist's sound. For example, if “terms like ‘dreamy’ or ‘ethereal’ are frequently used to describe a Beach House album,” The Echo Nest assumes a connection between these terms and the band’s sound. All this information—culled from sweeping the youth-dominated web—helps build a “musical brain” accessible through APIs (application programming interfaces).
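The paper doesn’t reproduce The Echo Nest’s code, but the “cultural data” idea—associating an artist with the descriptor words people use about them—can be sketched with a toy example in Python. The descriptor list, review snippets, and function name are all invented for illustration; the real system crawls the web at a vastly larger scale:

```python
import re
from collections import Counter

# Toy descriptor vocabulary standing in for terms mined from web text.
DESCRIPTORS = {"dreamy", "ethereal", "hazy", "energetic", "chill"}

# Invented review snippets about a hypothetical band.
reviews = [
    "A dreamy, ethereal wash of synths.",
    "Dreamy textures drift over a hazy beat.",
    "Ethereal vocals anchor the record.",
]

def descriptor_profile(texts):
    """Tally how often each known descriptor appears across the texts."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    return Counter(w for w in words if w in DESCRIPTORS).most_common()

print(descriptor_profile(reviews))  # [('dreamy', 2), ('ethereal', 2), ('hazy', 1)]
```

Under this scheme, “dreamy” and “ethereal” would become the band’s dominant cultural tags—exactly the Beach House inference described above.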
Music is thus categorized by the people, or rather, by those actively contributing to online discussions about music. Tearing down the constraints of traditional genre, The Echo Nest’s software breathes fluidity into the public’s approach to organizing, finding, and consuming music. It also significantly reduces categorical overlaps. While a Kaytranada song may fall into the traditional buckets of electronic, hip hop, R&B, dance, funk, and house, an approach to categorization by mood—like Spotify’s Audio Auras—may simply consider it “energetic” or “chill.”
One of Spotify’s rival streaming services, Pandora, contests this experimental approach to music curation by insisting on categorizing music through a taxonomy information system called the “Music Genome Project.” The program focuses specifically on five genres of music: Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical. Wary of letting the Internet categorize music, Pandora’s algorithm relies on a framework designed by a team of musicologists. The project is rooted in the company’s attachment to traditional conceptions of music, fearful of the so-called “tyranny of the majority.”
Consider that Tyler, the Creator’s album IGOR won Best Rap Album at the 62nd GRAMMY Awards despite not being a rap album: miscategorizations in the music industry aren’t new, even when specialists are in charge. Once we recognize the politics that come with different approaches to categorizing music, be it the influence of problematic personal biases or the challenge of equal representation, the notion of grouping music by moods finds merit.
However, while the boundaries of traditional genre melt away in the face of feelings-based categorization, a new challenge arises: how do we reconcile the fact that music has different emotional impacts on different people? Would artists feel boxed in by moods? Better yet, what happens when sad artists get happy?
It’s worth noting that emotion-driven organization leaves room for change in ways that traditional genres don’t. As the general public reacts to Lorde’s sunny tunes from her semi-recent release Solar Power as “shedding [the] melancholy [that] defined her music,” Spotify’s algorithm collects cultural data that changes the way the Echo Nest’s musical brain understands Lorde’s music. Unlike classical musicology, categorization by moods is dynamic. It evolves with the public’s tastes and catches up with the ever-changing nature of the music industry itself.
To be sure, categorizing by mood isn’t without fault, and there are nuances in music’s relationship with emotion that aren’t accounted for with today’s technology. It is also true that the act of sorting music by definition results in exclusion. But when genre miscategorizations sit at the center of controversy (see Billboard’s removal of “Old Town Road” from its Country chart), it’s reasonable to assume that exclusion based on emotional intuition rather than loaded historical conceptions lends itself to be less polarizing and problematic.
To me, in the end, organizing music by mood finds promise in one simple fact: some people can’t tell you what genre a song falls under, but everyone can tell you how it makes them feel.
Follow Tiffany on Instagram @tffanyng and contact her via firstname.lastname@example.org.
Thanks for reading ZOGBLOG by Zack O'Malley Greenburg! Subscribe for free to receive new posts and support my work.