I get a lot of value from the “Discover Weekly” playlist – it’s probably worth 30% of my subscription cost. I find one or two new songs I like every week, but I feel trapped in a bubble of similar-sounding music and repeating genres. This convergence is disappointing – though you can hardly fire the engineers for doing a great job.
“Discover” is a misnomer when I only find more of the same thing. To me, “discover” means something new, perhaps unexpected. My taste in music also isn’t static – it ebbs and flows throughout the year. Last year I went from indie rock, to some blues, to (now) hip-hop with trumpets and saxophones. And “Discover Weekly” didn’t track any of this.
I’m fine with keeping an accurate recommendation engine to give users something they can rely on – but why not layer on other engines optimized differently? I’m imagining a suite of AI “music sommeliers”, each providing a different “recommendation experience”. One could track short-term preference cycles, another could take more risk and stretch into other genres, and a third could home in on macro-preferences (as “Discover Weekly” does today).
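The sommelier idea can be sketched in a few lines. This is a toy illustration, not how Spotify actually works: the track attributes, scoring blend, and risk parameter below are all hypothetical, but they show how one codebase could serve both a safe engine and a genre-stretching one.

```python
# Toy "music sommelier" sketch: several recommenders share the same
# user data but trade accuracy against exploration differently.
# All names, attributes, and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    similarity: float  # closeness to the user's taste profile (0..1)
    novelty: float     # distance from their usual genres (0..1)

def recommend(catalog, risk, k=2):
    """Rank tracks by a blend of familiarity and novelty.

    risk=0.0 is a pure accuracy engine (the "Discover Weekly" role);
    higher risk weights novelty more, stretching into other genres.
    """
    score = lambda t: (1 - risk) * t.similarity + risk * t.novelty
    return [t.title for t in sorted(catalog, key=score, reverse=True)[:k]]

catalog = [
    Track("Indie rock single", similarity=0.9, novelty=0.1),
    Track("Blues deep cut",    similarity=0.6, novelty=0.5),
    Track("Hip-hop w/ horns",  similarity=0.4, novelty=0.9),
]

safe_picks  = recommend(catalog, risk=0.0)  # the reliable sommelier
risky_picks = recommend(catalog, risk=0.8)  # the adventurous sommelier
```

With the same catalog and user data, the safe sommelier surfaces the familiar indie track first, while the risky one leads with the hip-hop track – one knob, two different “recommendation experiences”.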
I like the sound of recommendation engines with “informed spontaneity” or “informed divergence”. Recommendations should always be based on my user data, but they shouldn’t become monotonous either. Right now, we assess performance based on the percentage of recommendations the user likes. What about, in some cases, assessing performance based on how the recommendations influence a user’s current preferences?
After all, many people make (art/food/music) recommendations to me. The most memorable recommenders are not those with 100% accuracy, but those who influenced or shaped my preferences through their recommendations.