July 5, 2019
Wouldn’t it be great if someone listened to all the songs in the world for you and handpicked what you would like best based on your current taste?
Meghna Banerjee, Phil John and Spencer Peterson are three undergraduate Computer Science students from UCSC who published their research, Music Recommendation using Unsupervised Learning*. Their paper proposes a recommendation system that takes a different approach than Spotify's: instead of looking at who listens to what, it makes suggestions based on the music itself, using components such as tempo, speechiness and danceability.
* Unsupervised learning is a type of machine learning algorithm that finds patterns based only on input data, without labeled responses.
Q: What did you work on?
A: The goal of our project was to analyze trends in music in order to explore its different components, and to create a recommendation system that suggests similar songs to the user. Music is often recommended based on who listens to it rather than how it actually sounds. We wanted to change that, and recommend music based on its audio features.
Q: What did you use to generate recommendations?
A: By taking an exploratory route as well as offering a recommendation service, we developed a way for people to find songs similar to the ones they like. Our recommendation segment focused on features generated by Spotify, such as tempo, speechiness and danceability.
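The paper doesn't spell out the exact algorithm, but a feature-based recommender of this kind can be sketched as a nearest-neighbor search over songs' audio-feature vectors. Everything below (the song names, the feature values, and the `recommend` helper) is an illustrative assumption, not the authors' actual code:

```python
import numpy as np

# Hypothetical toy data: each row holds Spotify-style audio features for one
# song -- (tempo scaled to 0-1, speechiness, danceability). Values are made up
# purely for illustration.
songs = ["Wake Me up Before You Go-Go", "Maniac", "Careless Whisper", "Clair de Lune"]
features = np.array([
    [0.82, 0.05, 0.90],
    [0.80, 0.04, 0.88],
    [0.78, 0.03, 0.70],
    [0.30, 0.02, 0.15],
])

def recommend(query_idx, k=2):
    """Return the k songs closest to the query song by Euclidean distance."""
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    dists[query_idx] = np.inf          # never recommend the query itself
    nearest = np.argsort(dists)[:k]
    return [songs[i] for i in nearest]

print(recommend(0))  # the other 80s tracks end up nearest to each other
```

With the toy values above, querying "Wake Me up Before You Go-Go" returns the other two 80s songs before the instrumental piece, mirroring the grouping the authors describe.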
Q: What worked and what didn’t during all of this?
A: Some of the highlights from our recommendations included the fact that we were able to group 80s songs like “Wake Me up Before You Go-Go”, “Maniac” and “Careless Whisper” together. It also grouped instrumental music and vocal tracks into their own categories.
But after some initial trials, we realized the features weren’t information-dense enough to generate strong recommendations, so we began exploring audio spectrograms of the music (these can be considered “visual pictures” of songs). Using the spectrograms, we saw a 10% improvement in our results compared to when we used features generated by Spotify.
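For readers unfamiliar with spectrograms, here is a minimal sketch of how one can be computed with a short-time Fourier transform. This is a generic illustration, not the pipeline from the paper; the frame size, hop length, and the 440 Hz test tone are all assumptions:

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform (STFT).

    A spectrogram is a "visual picture" of audio: each column shows the
    frequency content of one short, windowed slice of the signal.
    """
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # rfft yields frame_size // 2 + 1 frequency bins per frame
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Toy example: one second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (frequency bins, time frames)
```

Each frequency bin spans sr / frame_size = 31.25 Hz here, so the 440 Hz tone shows up as a bright horizontal band around bin 14. Feeding images like these into a model gives it far more information per song than a handful of summary features.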
Meghna Banerjee is studying Computer Science at University of California, Santa Cruz and wanted to work on improving current music recommendation systems because she often discovers new music through them. She wanted to use unsupervised learning in the project because she wanted to see what patterns artificial intelligence would notice in music and how it would group different forms of music together. She is currently interning at Lyft in their San Francisco headquarters.
Phil John is a recent graduate from the University of California, Santa Cruz. He graduated with Honors in Computer Science with a minor in Technology and Information Management. His passion for machine learning and the challenge of building a music recommendation model from scratch led him to work on this project. He was inspired after using real-world services such as Spotify and Soundcloud for music and was curious to see what this project could do! He will be working as a Software Engineer for Tata Consultancy Services in San Jose and hopes to further his knowledge in data science!
Spencer Peterson is a rising fourth year at the University of California, Santa Cruz, studying Computer Science. He has long appreciated machine learning and been interested in generative art and music, so the music recommendation project was a perfect outlet. Spencer currently works for Google in Seattle.
You can view their full research paper here.
We wanted to give the three of them a massive thank you for sharing their research with the AI4A team. It’s been a pleasure and an honor getting to learn more about what they are doing. Their work inspires us.
This blog post was written to accompany our All About AI newsletter, our newsletter containing news from the world of artificial intelligence, as well as research papers to help you learn more about AI. Subscribe here for more content like this!