
@daveshah
Created May 10, 2016 14:37
Just some quick notes/thoughts on tracking happiness :)

## Some quick thoughts on tracking happiness

I've done this with a couple of teams. I've used Mercury App and Google Forms.

Some things I've learned from this:

  • Setting expectations is important - we're going to have good and bad days - that's okay. That's human.
  • It's important to PAY ATTENTION and HONESTLY AND OPENLY discuss the reasons we're happy / unhappy. (make sure there's room for notes)
  • It's important to remember balance - making one group happy can easily make another one unhappy. Be mindful of the impact of our actions in a larger context!
  • IMO, as with all metrics, this is one the team should own.
  • This definitely takes discipline. It's easy to forget to track things. Try setting a calendar reminder (or similar) to take 5 minutes or so and keep up the habit.
@spetryjohnson

I'm reformatting our retro and I'm approaching this in two ways:

  1. Iteration-level "temperature", via an anonymous survey that's sent out before each retro: https://www.surveymonkey.com/r/356VB7S

  2. Adding custom fields to each case in the issue tracker so that DEV, QA, and BA can independently rate their happiness/satisfaction with the case. This feedback is NOT anonymous.

The thought is that the survey collects data on a two-week basis, asking people to reflect on the iteration as a whole, while the case-level fields give us specific data points tied directly to the work product. We send out a survey reminder before each retro, and the case-level fields will be visible throughout the daily process, so I don't expect it will be difficult to establish discipline.

My hope is that graphing these data points over time will yield interesting feedback:

  • If people are generally unhappy at the iteration level, but the case-level ratings are high, then there are likely process/environmental/cultural issues to investigate.
  • If people are happy at the iteration level, but case-level ratings are low, then we've likely acknowledged some issues, and people are happy with the process and the progress we're making to address them.
  • These trends may be leading indicators for quality and retention issues. If satisfaction or happiness is dropping, we might expect defect rates to go up.
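To make that comparison concrete, here's a minimal sketch (the 1-5 scale, the divergence threshold, and all ratings are invented for illustration) that flags iterations where the anonymous survey average and the mean of the case-level ratings diverge enough to be worth discussing at the retro:

```python
from statistics import mean

def compare(survey_avg, case_ratings, threshold=1.0):
    """Compare an iteration-level survey average against the mean of
    case-level ratings (both on a hypothetical 1-5 scale) and label
    the kind of divergence, if any."""
    gap = survey_avg - mean(case_ratings)
    if gap <= -threshold:
        return "process"   # unhappy overall despite good cases: look at process/environment
    if gap >= threshold:
        return "cases"     # happy overall despite rough cases: issues acknowledged, progress felt
    return "aligned"       # iteration and case ratings roughly agree

# Invented data: a low survey score over well-rated cases, and the reverse
print(compare(2.4, [4, 5, 4, 4, 5]))  # process
print(compare(4.2, [2, 3, 2, 3]))     # cases
print(compare(3.8, [4, 4, 3, 4]))     # aligned
```

The threshold is just a starting point; the team that owns the metric should tune it (or drop the automation entirely and eyeball the graph).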

This data will be visible to executive management, but it's being tracked and discussed by the team itself, and exec management is not invited to the retro.

One question I have: is it better to ask "how happy were you RELATIVE TO LAST ITERATION" or just "how happy were you in THIS iteration"? The former could be problematic because a significant positive outlier "raises the bar" - if I have one AWESOME iteration followed by a GREAT iteration, I might record a negative response ("less happy than last time") despite a genuinely positive outcome.
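A tiny numeric sketch of that distortion (scores invented, on a 1-5 scale): scoring each iteration relative to the previous one turns an awesome-then-great pair into a negative data point.

```python
# Invented absolute happiness scores: an AWESOME iteration, then a GREAT one
absolute = [5, 4]

# What a "relative to last iteration" question would effectively record
deltas = [curr - prev for prev, curr in zip(absolute, absolute[1:])]
print(deltas)  # [-1]: the relative framing reads a great iteration as a decline
```

The absolute question keeps both iterations in positive territory, which is why it seems the safer default.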

Thanks for the reply!

@daveshah
Author

Sounds good. I personally prefer "how happy were you in THIS iteration" over the relative question - as you mentioned, the relative framing could be a bit problematic.

Also - I feel like it may be hard to gauge how happy someone was over a two-week period. It's easy to get caught up in an iteration, and without taking some real time to reflect (and maybe having some data to reflect on), answers can get skewed. It might help to have folks jot a quick note on a post-it daily: something physical to help drive out a more accurate answer for that time frame.
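A quick illustration of why the daily jottings help (all ratings invented): averaging ten daily 1-5 scores resists the pull of a couple of rough days at the end of the iteration, which is exactly what a memory-only answer tends to anchor on.

```python
from statistics import mean

# Hypothetical daily post-it scores (1-5) across a two-week iteration.
# The last two days were rough and would dominate memory at retro time,
# but the full record tells a milder story.
daily = [4, 4, 5, 3, 4, 4, 5, 4, 2, 2]

iteration_avg = round(mean(daily), 1)
print(iteration_avg)  # 3.7, vs. the 2s a recency-biased answer might report
```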

Just my thoughts though - as with all things (A|a)gile - experimentation is key so I'm curious what you do and how it works out!

@daveshah
Author

@DocOnDev might have some good insights on this as well. He gave a really great talk at CodeMash last year on some of his experiences with gathering metrics like this on teams.
