The Daily Emerald's normal homepage is a basic three-column layout using Twitter's Bootstrap: posts in chronological order on the left, featured stories in the center, ads on the right. Weeks ago we talked about doing something different for the Fiesta Bowl. The first plan was to make a full-width well below the menu bar and push the rest of the page down ~400-700 pixels. It probably would've been a big picture, a Twitter embed and some related stories. A bit different, but not particularly exciting.

Fast forward to last Wednesday, when we decided to try to build a "second screen" for the game. What do fans want that we can make? Maybe they want to see what other fans are saying, talk about the game, read a reporter's account, read stories about the game. We've done each of these things independently, so this was essentially a repackaging of those components. (It's also the first time I've seen a homepage on DailyEmerald.com that isn't a print-y layout translated to web. I think that's pretty cool.)

[Photo: our whiteboard sketch of the idea]

I really like Quartz's responsive, independently scrolling columns, so maybe that was in the back of my head. We've been experimenting with social media on the Emerald since I played with a real-time feed of geotagged Instagram photos near campus last spring, but integrating such content into a newspaper website is far from an accepted practice.

I hacked one together with iFrames of existing components and sent it to Andy, our editor-in-chief. I chatted with Ryan and our ad director, Brittney Reynolds, to get the go-ahead on removing all the ads from the homepage. They said yes: rock and roll.

We decided the format of the project around 3 p.m. Wednesday and pushed it into production Thursday at noon. I had a frontend proof (columns, fixed headers, etc.) around 5 p.m. after going back and forth with Andy Rossback, the editor-in-chief. He's a designer. I worked until 1 a.m. or so, then spent a couple hours cleaning it up in the morning. Needless to say, some attention to detail was sacrificed.

What it's made of

All of the backend pieces already existed before we decided to build the game-day site. Most of the front end is extracted from previous projects and partially rewritten:

The feed module

I wrote the app behind the Twitter/Instagram feed, nicknamed DeathStar, for the Ducks GameDay iPhone app we built in August. That's probably the most complicated piece of code in the project.

DeathStar is the second revision of a client for Instagram's real-time API. It collects both Instagram photos and tweets (with Twitter's streaming API) and uses Socket.io to push them to the app. It collimates these objects into a single stream… DeathStar… get it? Anyway. It's a big hacky CoffeeScript mess that has a slow memory leak. It persists objects to Redis and only does a couple of non-trivial things. For one, an update from Instagram includes an array of images, but only one object is actually new. I started with an in-memory hash of IDs, then moved it to Redis. Existence checking is plenty fast and keeping data out of app code is good. (I also used to write every object to Mongo for analysis, but since it's on a 32-bit Linode slice, I hit the 2 GB issue and haven't provisioned a new 64-bit box.)
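Here's roughly what that Redis check looks like, in plain JavaScript rather than DeathStar's CoffeeScript; the key name, the payload shape and broadcast() are placeholders, not the real code:

```js
// Sketch of the dedup idea, assuming the node_redis client and a hypothetical
// broadcast() that pushes an item out over Socket.io.
var redis = require('redis');
var client = redis.createClient();

function handleInstagramUpdate(media) {
  // Instagram sends a batch of recent media; most of it has been seen before.
  media.forEach(function (item) {
    // SADD returns 1 if the ID was newly added, 0 if it was already there,
    // so one round trip covers both the existence check and the insert.
    client.sadd('seen:instagram', item.id, function (err, added) {
      if (err || added === 0) return; // already pushed this one (or Redis hiccuped)
      broadcast('instagram', item);   // hypothetical: push to connected clients
    });
  });
}
```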

DeathStar was built to combine Twitter and Instagram posts into one stream. For the iPhone app, this makes the front-end code easier than dealing with two backends. But when I split them back apart for this project, it became very possible to bootstrap the page with 50 tweets and 0 Instagram photos. I hadn't anticipated this and didn't have time to write a new endpoint. I wrote the ad-hoc logging code to see how often this actually happened.

The Instagram real-time API requires a publicly available callback URL. This makes local development hard, so when I wrote the server, I developed on a VPS. Moving forward, I'll either use localtunnel/similar or write a wrapper that uses a persistent HTTP connection or WebSocket to have dev/prod parity.
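For context on why the callback matters: Instagram verifies the subscription with a GET you have to echo back, then POSTs thin update notices to the same URL. A rough Express sketch (the route path and fetchRecentMedia() are made up, not DeathStar's real code):

```js
// Minimal sketch of an Instagram real-time callback, Express 3 era.
var express = require('express');
var app = express();
app.use(express.bodyParser());

// Subscription verification: Instagram GETs the callback and expects
// the hub.challenge value echoed back.
app.get('/instagram/callback', function (req, res) {
  res.send(req.query['hub.challenge']);
});

// Updates only say *what* changed; you still have to fetch the media.
app.post('/instagram/callback', function (req, res) {
  res.send(200); // acknowledge quickly so Instagram doesn't retry
  (req.body || []).forEach(function (update) {
    fetchRecentMedia(update.object_id); // hypothetical helper that hits the API
  });
});

app.listen(6767);
```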

The NYT's Erik Hinton kindly pointed out that since I was serving socket.io off port 6767, it was blocked on some firewalls. I guess it's time to get HAProxy rolling.
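Roughly the config I have in mind (a sketch, assuming an HAProxy version with WebSocket support, 1.5+; addresses and ports are placeholders): everything binds to port 80, and only the /socket.io traffic gets routed to the Node process on 6767.

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend public
    bind *:80
    acl is_socketio path_beg /socket.io
    use_backend node_feed if is_socketio
    default_backend wordpress

backend node_feed
    timeout tunnel 1h          # keep long-lived WebSocket connections open
    server feed1 127.0.0.1:6767

backend wordpress
    server wp1 127.0.0.1:8080
```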

The chat module

The Heroku-backed chat app was a proof-of-concept from a few weeks before. I wanted to play with the Parse.com API from Node, and it was a straightforward demo that turned out to be handy.
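The proof-of-concept is roughly this shape: hitting Parse's REST API from Node with the request module. The class and field names below are made up, and the keys come out of environment variables:

```js
// Sketch of persisting a chat message to Parse over its REST API.
var request = require('request');

function saveMessage(text, author, done) {
  request.post({
    url: 'https://api.parse.com/1/classes/Message', // "Message" is a made-up class
    headers: {
      'X-Parse-Application-Id': process.env.PARSE_APP_ID,
      'X-Parse-REST-API-Key': process.env.PARSE_REST_KEY
    },
    json: { text: text, author: author, postedAt: new Date().toISOString() }
  }, done);
}
```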

I love Heroku. I have a ton of (free) one-dyno projects. It makes me build smarter stuff (environment variables, ephemeral file system) and has plenty of horsepower for the scale of stuff we do -- probably an order of magnitude more than what we need. I was nervous that it would get massively popular and bring down the social and chat feeds, but it was stable-ish and fast the whole night. (Then the social-realtime server crashed overnight, as it's for some reason prone to do. That thing needs a rewrite.)

Our dev server cURLs the apps every few minutes to keep the dynos from spinning down.
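It's nothing fancier than a couple of crontab lines like these (the app URLs are placeholders):

```
# Hypothetical crontab entries on the dev server: hit each app every 5 minutes
# so the free dynos never idle out.
*/5 * * * * curl -s https://example-chat.herokuapp.com/ > /dev/null
*/5 * * * * curl -s https://example-logger.herokuapp.com/ > /dev/null
```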

I used Underscore.js in the chat for one _.each call, but I should've rewritten that. Since I slapped all the parts together, I didn't refactor anything I didn't have to, so I was left with an extra HTTP request just to avoid writing a for loop.
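For the record, this is the whole trade (the names here are hypothetical; in the page, Underscore arrives via its own script tag):

```js
var messages = ['kickoff!', 'go ducks'];   // hypothetical data
function render(msg) { console.log(msg); } // hypothetical renderer

// The lone Underscore call:
_.each(messages, function (msg) { render(msg); });

// ...which is just this in plain JavaScript, with no extra request:
for (var i = 0; i < messages.length; i++) {
  render(messages[i]);
}
```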

The stories module

The stories column was extracted from the view code in a UIWebView on the Ducks GameDay app. (I never pulled the secret keys out of it, so the Obj-C isn't on GitHub. It's Storyboards w/ custom segues, UIWebView and some JS <-> Obj-C communication. I learned Obj-C to write it. It's not good code.)

The JSON feed

The JSON feed is provided by a plugin. It's the same feed that backs everything in the iOS apps. It wraps WordPress's RSS feeds, so it's easy to get a feed for whatever you want. Super simple. The only annoying part is that it won't take an offset, so loading more posts isn't possible. (I haven't tried recently. But I think it's due to the RSS feed, which has its length set by WP.) I've been trying not to write another one for months now (something that understands LIMIT and OFFSET), but it might be time.
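If I finally write it, it'd be something like the sketch below: a tiny Node endpoint that queries wp_posts directly and passes limit/offset through, instead of going through a WordPress plugin. The connection details and route are made up:

```js
// Sketch of a feed that understands LIMIT and OFFSET, using the node mysql
// module against a stock WordPress schema.
var express = require('express');
var mysql = require('mysql');
var app = express();
var db = mysql.createConnection({
  host: 'localhost', user: 'wp', password: 'secret', database: 'wordpress'
});

app.get('/feed.json', function (req, res) {
  var limit = parseInt(req.query.limit, 10) || 10;
  var offset = parseInt(req.query.offset, 10) || 0;
  db.query(
    'SELECT ID, post_title, post_date FROM wp_posts ' +
    'WHERE post_status = "publish" AND post_type = "post" ' +
    'ORDER BY post_date DESC LIMIT ? OFFSET ?',
    [limit, offset],
    function (err, rows) {
      if (err) return res.send(500);
      res.json(rows);
    }
  );
});

app.listen(3000);
```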

Logging

I set up custom JS logging out of panic -- we had a few machines that had empty Twitter and Instagram columns, and I wanted to know what the counts were for those columns. I pointed it at a Node.js Heroku app that console.log'd the GET param so I could tail -f it to get a feel for how it was performing.
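The logging endpoint is about as small as a Node app gets; a sketch of the shape, not the actual code:

```js
// Hypothetical logging app: print whatever query string the page sends,
// so `heroku logs --tail` (or tail -f on a box you control) shows it live.
var express = require('express');
var app = express();

app.get('/log', function (req, res) {
  // e.g. /log?tweets=50&instagrams=0
  console.log(JSON.stringify(req.query));
  res.send(200);
});

app.listen(process.env.PORT || 5000);
```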

A few places I could've stored that data but didn't:

  • Heroku Postgres, but the free tier has a 10k-row limit. In hindsight, I should've just paid the $9 for a DB.
  • Parse.com's free tier. But I didn't know what the volume would be, and the chat app already persists there.
  • Build one. SQLite and a single PHP file could've done it, but I ran out of time.

Time and resources

I made a sketch of the app on Wednesday morning, sent a quick prototype to Andy (our editor-in-chief) around noon, then hacked until about 1 a.m. (~11 hours of dev time).

Here's how it looked at 5 p.m. Wednesday:

[Screenshot: 5 p.m. wasn't a good look for it]

Thursday morning, I spent a couple hours cleaning it up and tweaking to fit it into our WordPress site, then we pushed it around noon on Thursday.

Accommodating screens

I abandoned phone-size screens, which was questionable. It was fluid down to iPad-portrait width, but a four-column layout on a phone just doesn't work the same way it does on a tablet or in a 1000px-wide browser window. Instead of rethinking the project for mobile, I abandoned it. Analytics showed three times as much use on iPhones as on iPads, so thinking it through would've been worthwhile for those 350 people. :(

@SeanDKennedy also reported it doesn't work in IE8. I'm not surprised. It would've been hard to build this for IE8, and even though there are plenty of people using it, by choice or because they have to, I would have gotten much less done trying to support it.

Traffic and audience response

The page got 3,200 views (1,900 unique). I should've logged enough to know total time on site, but I just tail -f'd the logs to spot problems. For comparison, we average 138,000 views per month (that's the Jan-Sept 2012 average), or ~4,600 hits/day (~1,900 uniques/day).

Here's a plot with post views for comparison: [chart: hourly stats]

And my log watching: [screenshot of the tail -f output]

It got a lot more use in the hours after launching it than during the game. There were some haters—I think providing anonymous chatting was begging for it. I'm surprised there wasn't more spam.

I haven't thought all the way through what the lower numbers during the game mean. Maybe people don't want a second screen? I found myself not watching the game sometimes because I was watching a mass of tweets flow by and odd Instagram photos coming through.

I hoped more people would participate in the live chat. Emerald digital sports editor Isaac Rosenthal liveblogged the game with it, adding a nice balance to the free-for-all on Twitter. Having near-zero engagement during the game was a clear sign that people didn't want to use it for what I had in my head.

Side note: after DAT's opening touchdown, the next 5 minutes saw 12,500 tweets (average 42 tweets/sec). Whoa.

There were ~138,000 tweets between 10 a.m. Wednesday and 2 p.m. Saturday. I really want to make something from all this. Here's Redis's dump.rdb and a quick hack to read it.

Adjusting on the fly

I added a score box around 5 p.m. that polls a Google Doc every couple of seconds, using Tabletop.js to make reading the JSON trivial; a Google spreadsheet is a free and quick backend for small data sets. I broke the site for a few users for about a minute when I did this, actually: I hadn't built a way to do an atomic update, because all the app code is hotlinked to the GitHub repo, and I pushed app code that needed Tabletop.js without updating the template. I got the project into WordPress as a page template, but that meant I was copy-pasting changes from dev (hosted locally with Pow) to the PHP template. This dev/prod mismatch was a mistake, clearly.
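The score box itself is only a few lines with Tabletop.js; here's a sketch with a made-up spreadsheet key, column names and element IDs:

```js
// Poll a published Google spreadsheet and drop the latest score into the page.
var SPREADSHEET_KEY = '0Aexample'; // placeholder

function refreshScore() {
  Tabletop.init({
    key: SPREADSHEET_KEY,
    simpleSheet: true,            // gives back a plain array of row objects
    callback: function (rows) {
      var score = rows[0];        // assumes "home" and "away" columns
      document.getElementById('score-home').textContent = score.home;
      document.getElementById('score-away').textContent = score.away;
    }
  });
}

refreshScore();
setInterval(refreshScore, 5000);  // "every couple of seconds" -- 5s here
```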

The velocity of the Twitter feed was crazy, but I couldn't think of any easy way to fix it. There seem to be upper and lower bounds on when real-time is useful for this kind of thing. If it doesn't change every few seconds, polling is fine. But if it's multiple items per second, it's not readable and thus not useful. I made the Twitter and Instagram columns evict old items after I noticed people had thousands (or tens of thousands) of tweets rendered. Here's the commit, but I'm not totally sure that fixes the whole issue. I also didn't have a way to push code changes to people with the page already open. A few sessions were 5+ hours long.
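The eviction amounts to capping how many nodes a column holds; a simplified sketch of the idea (the real change is in that commit):

```js
// After prepending a new item, drop the oldest ones so a column never holds
// more than MAX_ITEMS nodes, even in a 5-hour session.
var MAX_ITEMS = 100;

function appendToColumn(column, el) {
  column.insertBefore(el, column.firstChild);   // newest on top
  while (column.children.length > MAX_ITEMS) {
    column.removeChild(column.lastChild);       // evict the oldest
  }
}
```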

Secondary things -- plotting tweet velocity, visualizing groups of tweets around peak moments, etc -- might be interesting. I'll spend a lot of time this summer getting ready for next season's round of stuff with all the information and experience from the past months.

Last thoughts

Refusing to experiment won't lead to new things. This was an experiment that our editor-in-chief and president went for, and I know they would've pulled the plug if it sucked, too. Working with a team that's open to doing different things is why my job is awesome.

The project is still up at http://dailyemerald.com/gameday-live, and the code is on GitHub. The only reason I could do any of this is because of the brilliant people behind the projects I'm building on top of. I'd be thrilled if someone else built more cool stuff because of what I've done.

For the bio page

My name is Ivar Vong and I'm a full-time developer at Emerald Media Group, an independent student media company at the University of Oregon.
