Summon's Topic Explorer shows users doing broad searches relevant reference material in the right-hand pane. That location is becoming more valuable as Google adds features to its Knowledge Graph, which surfaces relevant information there (reference content, store hours, traffic, content from Gmail and Calendar, and more). In the past, users largely ignored the right-hand side of a page, since it was the home of banner ads and other useless bits of content (and in many places it still is). But Google's push to reclaim the right-side pane means our users are more and more likely to look for content there, especially when searching. So it has never been more important to get things like Topic Explorer right.
I've been recording the Topic Explorer entries shown to users since November, and have analyzed over 8,000 search query/Topic Explorer pairs (less than half the entries in my data set). The overwhelming majority are pretty good: in my analysis, 93% of Topic Explorer entries were on the mark or close. But there are still times when the Topic Explorer entry is off, and I wanted a way for our users to tell us when they've found a problem, especially when the results appear biased.
Today I launched a new feature in GVSU's Summon instance that gives our users the option to anonymously report a mismatch between their search terms and the Topic Explorer results.
The script styles the "Read More" link as a button, and then adds two more links below it, one on the left and another on the right. The first link allows users to "Report a problem with this result." Clicking that link sends a request to a PHP script that adds the data to our project management system without recording any identifying information about the user (except in cases where the search terms themselves could identify them, if that ever happens with Summon). The script then changes the link's text to let them know we received the report and will look into improving the result. I've already shared my 8,000 analyzed records (along with flagged inaccuracies and topics that appear biased) with ProQuest, and they're working to improve the Topic Explorer tool. This feature will give us another way to spot problematic searches, and also signal to our users that if there is a problem, we want to make it better.
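The reporting link can be sketched roughly like this. Note that the selector `.topic-explorer`, the endpoint `/report-topic.php`, and the `buildReport` helper are all illustrative names, not the actual ones in our code; the real script lives in our Summon instance and is linked at the end of this post.

```javascript
// Build the anonymous report. Only the search query and the Topic
// Explorer entry's title are included -- no user-identifying data.
function buildReport(query, topicTitle) {
  return {
    query: query,
    topic: topicTitle,
    reported: new Date().toISOString()
  };
}

// Wire up the link (browser only; names below are illustrative).
if (typeof document !== 'undefined') {
  var pane = document.querySelector('.topic-explorer');
  if (pane) {
    var link = document.createElement('a');
    link.href = '#';
    link.textContent = 'Report a problem with this result';
    link.addEventListener('click', function (e) {
      e.preventDefault();
      var report = buildReport(
        pane.getAttribute('data-query'),
        pane.querySelector('h2').textContent
      );
      // Fire-and-forget POST to the PHP script that files the report
      // in our project management system.
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/report-topic.php');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify(report));
      // Confirm receipt by swapping the link text in place.
      link.textContent = "Thanks! We'll look into improving this result.";
    });
    pane.appendChild(link);
  }
}
```

The key design choice is that the report payload is built from the page alone, so nothing about the user ever reaches the server unless their search terms happen to identify them.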
The second link brings up a small modal window that explains how the entries in the Topic Explorer are chosen, without getting technical or ever using the word "algorithm." This is meant to help contextualize this contextual information.
Finally, I'm finishing up an analysis of the Topic Explorer records to be published soon. I think it's getting more and more important to push for quality not just in design, UIs, and collection coverage, but also in the accuracy and fairness of the algorithms that drive so much of library software. If you're interested in that project, feel free to drop me a line. I'd love to chat more about it.
If you're interested in the code behind these changes, I've pulled out the relevant bits and put them on GitHub.