There is a trend in recommender systems that I think is extremely interesting: systems are starting to explain themselves. The first place I noticed this was at Amazon in their personal recommendations section, at the bottom of a given suggestion:

In this case, Amazon recommended Moon Palace because I had rated another book by Paul Auster. This makes perfect sense: I rated something by an author, so the system recommended other books by the same author. The second place this popped up was at the new social music service iLike. Every time you view another user’s profile, the system calculates a compatibility score based on how similar your favorite artists are, as shown here:

In this case, I share an interest in the bands ESG, TV on the Radio, et al. with this user, so our compatibility is high. When the artists I share with someone are more popular ones, like Miles Davis or Bob Dylan, my compatibility score is lower. This makes sense, since sharing rarer bands suggests a closer connection. Last.fm has added a similar feature called Taste-o-meter.
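iLike hasn’t published its formula, but the intuition reads like an overlap score where each shared artist counts more the rarer it is. Here is a minimal sketch of that idea, assuming an inverse-popularity weight and made-up listener counts; the function and numbers are illustrative, not iLike’s actual algorithm:

```python
from math import log

def compatibility(my_artists, their_artists, listener_counts, total_users):
    """Toy compatibility score: shared artists weighted by rarity.

    A shared niche band (few listeners) contributes more than a shared
    household name. This is an illustration of the intuition in the post,
    not iLike's real formula.
    """
    shared = set(my_artists) & set(their_artists)
    score = 0.0
    for artist in shared:
        listeners = listener_counts.get(artist, 1)
        # Inverse-popularity weight, similar in spirit to IDF:
        # rare artists -> large weight, popular artists -> small weight.
        score += log(total_users / listeners)
    return score

# Invented listener counts for the sake of the example.
listener_counts = {"ESG": 900, "TV on the Radio": 12_000,
                   "Miles Davis": 250_000, "Bob Dylan": 400_000}

print(compatibility(["ESG", "TV on the Radio"],
                    ["ESG", "TV on the Radio"], listener_counts, 1_000_000))
print(compatibility(["Miles Davis", "Bob Dylan"],
                    ["Miles Davis", "Bob Dylan"], listener_counts, 1_000_000))
```

With this weighting, two users who overlap on ESG and TV on the Radio score far higher than two who overlap on Miles Davis and Bob Dylan, even though both pairs share exactly two artists.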
What’s interesting about these examples is not the algorithm, some augmented form of collaborative filtering, but rather the way that the algorithm explains itself to the user. Many years ago, with the likes of Firefly and CDNow showing off the power of recommender systems, this sort of behavior would have been considered crazy. Show users elements of how your algorithm works? What if they reverse engineer it, copy your system, and steal all your users?!
Not likely. For all intents and purposes, recommender systems are within wiggling distance of each other. Netflix is holding a contest to see if theirs can be improved, offering a cool $1M to anyone who can show a 10% gain over their current algorithm. While the current leaderboard shows the best contenders at a 4% gain over the original algorithm, Netflix does not expect anyone to hit the necessary 10% anytime soon, suggesting the contest could run until 2011. But companies like Amazon and iLike are making improvements through the way these algorithms are explained.
Explanation creates understanding, and understanding leads to trust.
What if all systems started to take this approach? We mostly assume that search providers keep their ranking algorithms in a 6-foot safe behind a wall of lasers, but at the same time Google is starting to release more information about PageRank through various systems. Someday we might have search results that explain themselves, while keeping the special sauce away from SEO geeks and spammers. Imagine if a top search result said “This result is first because: your search term was in the title, the author is a well known writer, and the host is a reputable newspaper.” I would probably say “that makes sense,” and in turn I would trust that system even more.
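Google doesn’t expose its ranking internals, so this is purely hypothetical, but a self-explaining result could be as simple as attaching a plain-language reason to each signal that fired, while the actual weights stay server-side. A sketch, with invented signal names:

```python
# Hypothetical sketch of a search result that explains itself:
# each ranking signal carries a user-facing reason, while the scores
# and weights (the "special sauce") never leave the server.

SIGNALS = [
    # (signal name, user-facing explanation)
    ("query_in_title", "your search term was in the title"),
    ("known_author",   "the author is a well known writer"),
    ("reputable_host", "the host is a reputable newspaper"),
]

def explain(result_features):
    """Return only the reasons that fired for a result, not the scores behind them."""
    reasons = [text for name, text in SIGNALS if result_features.get(name)]
    return "This result is first because: " + ", ".join(reasons) + "."

print(explain({"query_in_title": True, "known_author": True, "reputable_host": True}))
```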
Digg users reverse engineering the new algorithm have created an unofficial FAQ: http://www.seopedia.org/tips-tricks/social-media/the-digg-algorithm-unofficial-faq/
You know the problem with iLike’s current algorithm, though? It doesn’t capture true music-taste similarity. For example, I’m apparently very similar musically to someone named “mary m” because I said I liked Justin Timberlake. The similarities end there, however, as her actual playlist shows a perplexing predilection for Beyonce, Nick Lachey, and N’Sync. She hasn’t actually indicated any artists using the “iLike” feature, so based (incorrectly) on the assumption that I like Timberlake and the fact that she has *played* lots of JT, the system ranks us as supremely compatible. People who have actually listed a number of artists using the iLike feature (i’M bEginnIng to hAte sEcondary cAps) generally appear as only medium-compatible with me.
This just doesn’t work.
Nice post …
My favorite example of why transparency in recommendations is important is found in this image on Flickr:
I like your observation, but disagree with the conclusion. I agree that early recommendation systems would never have added this extra information to the result. Still, I disagree that doing so exposes their algorithm, or that the reason they didn’t show reasons was a fear of doing so.
First, the reasons shown are simply relevant attributes that are more a product of the algorithm than they are the algorithm itself. I suspect selecting the appropriate attributes is most of the challenge in building the recommendation algorithm.
Secondly, I think early adopters of recommendation systems liked their black-box nature. They wanted the recommendation to be magical. “Wow, how did you know I needed new socks?”