I've spent a few days thinking about this.
I struggle with the idea that algorithms could be so well informed as to nail sharing things you don't know you like. We don't give Spotify nearly enough information to know that kind of thing. It's a utility, not a friend we share real shit with.
I like the language of economics for considering these types of things, but I'm gonna tweak it too.
The Efficient-Curator Hypothesis is a theory in Jamesian Economics that states that curated playlists fully reflect all available information.
A direct implication is that it is impossible to "beat the algorithm" consistently on a mood-adjusted basis, since curated playlists only react to new information.
Economics moved on from the efficient market hypothesis as it learned to study the errors of human perception and their behavioral consequences, often framed as cognitive biases.
After reading what Fred said, I started to wonder whether curation should be thought of as a form of cognition with biases of its own: we know what statistically normal looks like, and we can explain the ways someone deviates from it as a form of Curation Bias.
The current model is typically a playlist with a name describing a mood and some playlist author's opinion on what songs represent that mood. We all get the same experience from the "mellow beats" playlist in Spotify, etc.
Curation biases would describe an effect rather than specific songs, allowing playlists covering common moods to still be unique for each user.
For discovering things you don't know you like yet, the usual algorithms could generate baseline suggestions, and a curation bias could then both focus that baseline and serve as a guide for exploring beyond it, toward the things you don't know you like yet.
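To make that concrete, here's a toy sketch of the idea. Everything in it is hypothetical (the track names, the features, the weights): the "baseline" ranks tracks by popularity alone, the greatest-common-denominator ordering, while a per-user "curation bias" reweights the same features, which can both reorder the popular picks and surface obscure tracks the baseline would bury.

```python
# Toy catalog: each track scored on a few features (all values made up).
tracks = {
    "track_a": {"mellow": 0.9, "energetic": 0.1, "popularity": 0.95},
    "track_b": {"mellow": 0.8, "energetic": 0.3, "popularity": 0.90},
    "track_c": {"mellow": 0.2, "energetic": 0.9, "popularity": 0.85},
    "track_d": {"mellow": 0.7, "energetic": 0.2, "popularity": 0.10},
}

def baseline(tracks):
    """The usual algorithm: rank by popularity, same list for everyone."""
    return sorted(tracks, key=lambda t: tracks[t]["popularity"], reverse=True)

def curated(tracks, bias):
    """Re-rank by a user's bias weights over features; popularity only
    matters if the bias says it does, so low-popularity tracks that fit
    the bias can rise above popular ones that don't."""
    def score(t):
        return sum(bias.get(f, 0.0) * v for f, v in tracks[t].items())
    return sorted(tracks, key=score, reverse=True)

# Hypothetical user: strongly into mellow, mildly averse to energetic.
user_bias = {"mellow": 1.0, "energetic": -0.5}
```

With these numbers, `baseline(tracks)` puts the obscure `track_d` last, while `curated(tracks, user_bias)` lifts it above the popular-but-energetic `track_c` — the bias acting as a guide into territory the baseline wouldn't suggest.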
My hunch is that when a friend suggests music, they're operating at this cerebral level, while the algorithms are still offering up the greatest common denominator from a sea of statistics.