Via: http://blog.shaskins.net/googlereader.txt

On Google Reader's Sunset

Yesterday evening brought the news that Google is going to shut down Google Reader this summer (http://googlereader.blogspot.com/2013/03/powering-down-google-reader.html). Predictably, this brought about much gnashing of teeth. Having seen from the inside the rising costs of keeping unmaintained software running, and understanding the high product cost of keeping around old projects that don't support current vision, I don't think Google made a bad choice. It sucks for people who use the software, but it makes sense to me, even taking those people's anger at Google into account.

My Twitter stream saw a sudden burst of weekend-project "Open Source" (because the code is on Github!) developers declaring that they'll write their own replacement. Perhaps someone will make a serious stab at this, and (like http://pinboard.in did for del.icio.us) end up being a home for Google Reader's most locked-in users. Google Reader's OPML export should make the transition fairly painless - users can be j-ing and k-ing through their favorite blogs forevermore with little effort expended.

Hopefully, though, this will be an opportunity to examine (and solve!) two problems that we have been ignoring while a free, ubiquitous service like Google Reader handled the consumption of RSS feeds (feeds which were essentially "Google Reader feeds", given how rarely people used alternative clients):

  1. There's basically no standard way to lay out an "article feed" (or an "essay feed", a "photo feed", or a "video feed"). This means that attempting to parse RSS/Atom from heterogeneous sources to produce a single format is very difficult; a small sketch of the problem follows this list. This is the true brilliance of what Google had - presumably some mammoth set of heuristics for processing the myriad formats that people's blogs and other RSS sources produce.
  2. The interface to Google Reader was pretty silly - in the context of a Google product, being a web app like that makes sense, but for the actual problem domain, it really doesn't. The text was poorly laid out (compare Google Reader to something at least a little better, like Apple's "Reader" interface in Safari), styling by the source provider wasn't really applied, and sources often truncated what was available in the feed in order to get you to come to their actual site. In other words, it was to neither the reader's nor the publisher's advantage for content to be consumed inside the Google Reader interface.
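
To make problem 1 concrete, here is a minimal sketch of the reconciliation an aggregator has to do, using Python's feedparser library (the feed URLs are hypothetical placeholders, and this only scratches the surface of the variation out there):

    import feedparser

    for url in ("http://example-blog.com/atom.xml",
                "http://example-news.com/rss"):
        feed = feedparser.parse(url)
        for entry in feed.entries:
            # Some feeds put the full article in 'content', some put an
            # excerpt (or the whole thing) in 'summary', and some offer
            # little more than a title and a link. An aggregator that
            # wants one uniform "article" shape has to reconcile all of
            # these, plus dates, authors, media enclosures, and so on.
            if "content" in entry:
                body = entry.content[0].value
            elif "summary" in entry:
                body = entry.summary
            else:
                body = ""  # link-only item
            print(entry.get("title", "(untitled)"), entry.get("link"), len(body))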

Problems like number 1 are hard to fix without a community agreeing that it is currently experiencing a "tragedy of the commons" and that many members are going to have to make sacrifices for the good of the whole. This seems unlikely to happen, as "people who provide RSS feeds" isn't really a community, and even subsets like "blogs that publish essays" don't seem to have that level of cohesion (despite how much long-time bloggers like to wax poetic about the blogging community).

The reality of new technologies like Twitter and Facebook seems to indicate that agreed-upon standards aren't currently in vogue, even for something very standardizable like the concept of a "feed". Standards have been developed (like OStatus http://www.w3.org/community/ostatus/), but the momentum of big walled-garden players seems currently unassailable. That may change (and I hope it does), but in the meantime, my favorite solution to problem 1 is to go with the lowest common denominator of feeds: a link to the source content.
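
As a rough illustration of that lowest-common-denominator approach, here is a small sketch (same assumptions as above: Python, feedparser, a hypothetical feed URL) that throws away everything about an entry except its link back to the source:

    import feedparser

    def source_links(feed_url):
        # Whatever flavor of RSS or Atom the feed speaks, nearly every
        # entry carries a link to the original content - the one field
        # this approach depends on.
        feed = feedparser.parse(feed_url)
        return [entry.link for entry in feed.entries if "link" in entry]

    print(source_links("http://example.com/feed.xml"))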

That plays well into what I'd like to see for number 2 - the functions of a tool like Google Reader were, for me, being presented with a list of content and selecting one item to read (often by just reading all of them in turn). In a product that aggregates feeds, I'd like to continue to see the list of available content (I'm sure there are innovations to be had here), but instead of consuming it inside the tool, I'd like to consume the actual source content. If it's a YouTube video, I'd like to see it on YouTube. If it's an NYTimes article, I'd like to read it on NYTimes.com. If it's the essay "Learnable Programming", I'd like to see it in all of its well-formatted glory right at the source: http://worrydream.com/LearnableProgramming/.

In a system where all that needs to be extracted from a feed is a reference to the source material, this becomes very possible. A first draft could handle polling sources for new articles and parsing only enough to find the original source items. Future iterations could cache the entire content for easier offline reading, analysis of what has been read, and so on. Such a system could be self-hosted or hosted as a service. Users can read content in the form in which it was originally intended (usually the browser, though certainly not always), and still have the power of feed aggregation.
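
A minimal sketch of that first draft, under the same assumptions as above (Python, feedparser, hypothetical feed URLs, and in-memory state where a real version would persist things), might look like this:

    import time
    import feedparser

    FEEDS = ["http://example-blog.com/atom.xml"]  # placeholder subscriptions
    seen = set()  # a real version would persist this between runs

    while True:
        for url in FEEDS:
            for entry in feedparser.parse(url).entries:
                link = entry.get("link")
                if link and link not in seen:
                    seen.add(link)
                    print("new item:", entry.get("title", link), "->", link)
                    # A later iteration might fetch and cache the page here,
                    # for offline reading and read/unread tracking.
        time.sleep(15 * 60)  # poll every 15 minutes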

I took a stab at starting something like that a few months ago, but got stuck on problem 1 (I was trying to still view feed elements instead of just tracking source objects). I may take another look at it, now that Google Reader is going away (and Apple Mail, my more recent RSS reader, is already gone). I imagine that some more entrepreneurial coders will beat me to it, though, and I can't wait to see what the ones who don't try to ape Google Reader come up with.

-- Sam Haskins (sam.haskins@gmail.com)
