margarete

What Brussels Listens to when it's Windy - I know you better than you do.

It's quite interesting for me to find seemingly odd business partnerships, especially when companies combine their expertise despite appearing to have nothing in common. Thinking about algorithms for most of this past week, I was amazed to find a partnership between AccuWeather and Spotify to create a joint product. The pair came together in 2017 to create a website now known as Climatune. The site automatically identifies your location, pulls up the weather report, and generates a 30-song playlist that fits the forecast and your likely mood given that forecast. For about a year, AccuWeather shared its daily global weather forecast data, which was cross-referenced with Spotify users' streaming patterns through a location-based filter. Researchers representing both companies then took the time to analyze the data in order to point out trends in how the weather impacts people's moods and, potentially, their music genre preferences.

The results of this year-long study were somewhat predictable: sunny days typically encourage listening to happier and higher-energy music; rainy days bring lower-energy, sadder-sounding music with more acoustic than electronic sounds; and snowy days encourage more instrumental music. What I did find amusing, though, was how differently U.S. states respond to rain. For instance, I was surprised to learn that most Chicagoans get excited by the rain and were generally found to stream happier music, whereas folks in Houston respond the most strongly to rain, with acoustic listening increasing by 121 percent when it downpours. Additionally, while Miami and Seattle listeners buck the trend and listen to more energetic music on cloudy days, San Franciscans' music choices suggest overwhelmingly sad moods when it's cloudy. These details may or may not get you going, but at the very least, it's interesting to see how algorithms can pick up on listener usage data in order to posit connections.
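Conceptually, the cross-referencing Climatune describes boils down to joining a city's daily weather condition with the audio features of the tracks streamed there that day, then averaging mood-related features per condition. Here is a minimal sketch of that idea in Python; the data, city names, dates, and feature values are all invented for illustration and are not AccuWeather's or Spotify's actual data or pipeline.

```python
# Hypothetical sketch of the kind of cross-referencing Climatune describes:
# join a city's daily weather condition with the audio features of tracks
# streamed there that day, then average mood-related features per condition.
# All data below is made up for illustration.

from collections import defaultdict
from statistics import mean

# (city, date) -> weather condition reported that day
weather_log = {
    ("Chicago", "2017-03-01"): "rain",
    ("Houston", "2017-03-01"): "rain",
    ("Chicago", "2017-03-02"): "sun",
}

# (city, date) -> audio features of tracks streamed that day
# valence ~ how "happy" a track sounds, energy ~ intensity (0.0 to 1.0)
streams = {
    ("Chicago", "2017-03-01"): [{"valence": 0.71, "energy": 0.80},
                                {"valence": 0.65, "energy": 0.74}],
    ("Houston", "2017-03-01"): [{"valence": 0.22, "energy": 0.31}],
    ("Chicago", "2017-03-02"): [{"valence": 0.88, "energy": 0.90}],
}

# Group streamed-track features by (city, weather condition)
by_city_weather = defaultdict(list)
for (city, date), condition in weather_log.items():
    by_city_weather[(city, condition)].extend(streams.get((city, date), []))

# Average valence per city and condition hints at how weather shifts listening mood
for (city, condition), tracks in by_city_weather.items():
    avg_valence = mean(t["valence"] for t in tracks)
    print(f"{city} on {condition} days: average valence {avg_valence:.2f}")
```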

As I was researching how Discover Weekly compiles its data for what people would seemingly like to be surprised by on their Monday playlist, I realized that predictive algorithms need to rely on a different set of data cues in order to produce a compelling set of results. That realization brought me down the rabbit hole of deep content-based machine learning, and to the question: how exactly does Discover Weekly find the perfect songs for me, songs that are both new and unpopular?!

I became aware that automatic music recommendation has become an increasingly relevant problem now that music is sold and consumed digitally. Most algorithmic recommender systems rely on collaborative filtering, i.e. looking at what others are listening to in order to infer shared preferences. This approach, however, suffers from the cold-start problem: usage-based approaches fail when no usage data is available to begin with. If no one has heard this new Alt-J album yet... so yikes!! How will we know if it's any good, and who will like it!? Research presented at the Neural Information Processing Systems conference (aka NIPS) moves the algorithms into new territory, using a latent factor model approach. What does that mean? In this case, breaking audio clips down into filtered categories instead of studying user history. By pulling 30 seconds from each audio track (usually mid-song) and testing for specific acoustic features, such as chords and chord progressions, bass drum sounds, or, let's say, a ringing ambience, and linking those features to attributes such as danceability, valence, and tempo, the correlations can then be matched against mood, genre, and similar vibes. So when I want to be surprised and excited about what song my Discover Weekly will deliver next, I can be assured that tunes that are somewhat obscure will be tracked down for my listening pleasure.
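To make that content-based idea a little more concrete, here is a toy sketch in Python: each song is represented by audio-derived attributes (danceability, valence, energy, and so on), a "taste vector" is averaged from songs I already play, and unheard tracks, including ones with zero plays, are ranked by similarity. The song names and feature values are invented, and this is not Spotify's actual pipeline, which learns such representations with deep neural networks over the raw audio; it only illustrates why audio features sidestep the cold-start problem.

```python
# Toy sketch of content-based recommendation for a cold-start track:
# represent each song by audio-derived features, build a "taste vector"
# from songs the user already likes, and rank unheard tracks by cosine
# similarity. Feature values here are invented for illustration.

import numpy as np

FEATURES = ["danceability", "valence", "energy", "acousticness", "instrumentalness"]

catalog = {
    "familiar_song_a": [0.81, 0.70, 0.75, 0.10, 0.00],
    "familiar_song_b": [0.76, 0.62, 0.80, 0.05, 0.02],
    "brand_new_track": [0.79, 0.68, 0.77, 0.08, 0.01],   # zero plays so far
    "obscure_ambient": [0.20, 0.30, 0.15, 0.85, 0.90],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The user's "taste vector": average of tracks they already stream heavily
liked = ["familiar_song_a", "familiar_song_b"]
taste = np.mean([catalog[s] for s in liked], axis=0)

# Rank everything the user hasn't heard purely by audio similarity;
# no play counts are needed, which is what avoids the cold start.
candidates = {s: cosine(taste, v) for s, v in catalog.items() if s not in liked}
for song, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{song}: similarity {score:.3f}")
```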

I have to say, in this case I'm quite happy to give over such intimate details about my taste in order for my pleasure momentum to be tracked and met by an algorithmic genius.



1 Comment


kellytriece
Nov 24, 2018

I also found the partnership between Accuweather global and Spotify to be one of the most interesting parts of Spotify’s algorithmic music choice. I wonder if that type of system might perpetuate certain moods in a given season as well. For example, people are notoriously depressed in the winter months due to lack of sunlight. Do you think that this type of music matching system might accidentally perpetuate depression by offering users music that perpetuates sad sounds and lyrics? If so, do you think that Spotify has any responsibility to offer hopeful options to listeners as well? I guess this question has more ethical than technical implications. Obviously, users listen to what they want to listen to, but sometimes feeding…
