@pjbollinger

pjbollinger/post.md

Last active Mar 22, 2021

High Velocity Product Development for Agile Teams

This post presents a framework that strikes a balance between building new product features and maintaining existing ones.

The Framework

Focus on product until an engineering indicator alerts otherwise.

Where an engineering indicator is a metric that the development team can measure to describe the health of the product.

OR

Work on new business value as much as possible but stop when the business is at risk of losing customers.

The exact implementation of the framework depends on the company, but the key goal should be the same everywhere.

In this post, I will walk through an example for a Software-as-a-Service (SaaS) company, since this is where I plan to apply the framework. It would not surprise me if this idea applies to other business models and companies.

Introduction

In an organization, there is always a tension between product and engineering. This tension comes from conflicting goals. Product wants to develop new features that customers want. Engineering wants to maintain a reliable product, and reliability tends to deteriorate as the company scales. Since these goals can conflict, the tension will always exist. How much tension exists can be an indicator of the health of your organization. Too much tension will cause rifts, and too little tension will cause slowdowns in product or engineering. The team needs to strike a balance.

If the team is too engineering focused, customers will not see new features and may get upset by the lack of response to their needs. If the team is too product focused, customers will not see the reliability they expect and may get upset by their inability to trust the product. You can derive more scenarios using the same formula.

When product and engineering are balanced, customers get their needs satisfied while still being able to trust the product. Customers may not get everything they want. The product may not be perfect all the time. But the balance will provide comfort to customers and employees.

Example: SaaS Company

A SaaS company can be successful by developing a product with a unique set of features that customers are willing to pay for. If the service goes down or if the product fails to differentiate from competitors, it can hurt the SaaS company.

Choosing Engineering Indicators

For a SaaS company, Google has published books on Site Reliability Engineering. In these books, you can learn about indicators and how to respond to them. These books are the inspiration for this framework. This framework, though, focuses on smaller product development teams.

The goal of an indicator is to provide a clear sign of when product development should halt. Since indicators can halt development, you should always be aware of how noisy the indicators are and adjust accordingly.

Indicators should also be customer focused. By including the customer in the indicator, you prevent bad indicators that would lead to distracted work.

Finally, indicators should be time-based to reduce noise. They should span days so that teams are not context switching too often.

Engineering Indicators for a SaaS Company

These are some indicators that I came up with. There may be more, and even better, indicators to use, but these alone should cover the gamut.

Time-based Application Performance Index (Apdex)

Apdex is a formulaic way of describing whether something is in a good state. In this context, you can choose an indicator like so:

The endpoint POST /person must have an Apdex of 0.9 for a response time of 10 ms over the past week.

With that statement, you can create the following expression:

```
Apdex[10 ms][past week] =
    (
        1.0 * (count of responses within 10 ms in the past week)
      + 0.5 * (count of responses between 10 ms and 40 ms in the past week)
      + 0.0 * (count of responses above 40 ms in the past week)
    )
    / (total count of responses in the past week)
```

This can be a good indicator because it impacts how the customer experiences the product.
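As a minimal sketch, the indicator above can be computed directly from raw response times. The 10 ms threshold and 0.9 target come from the example indicator; the function name and sample data are my own illustration.

```python
def apdex(response_times_ms, threshold_ms=10):
    """Apdex = (satisfied + 0.5 * tolerating) / total.

    satisfied:  <= threshold
    tolerating: > threshold and <= 4 * threshold
    frustrated: > 4 * threshold (weight 0)
    """
    if not response_times_ms:
        return 1.0  # no traffic this period: treat as healthy
    satisfied = sum(1 for t in response_times_ms if t <= threshold_ms)
    tolerating = sum(
        1 for t in response_times_ms
        if threshold_ms < t <= 4 * threshold_ms
    )
    return (satisfied + 0.5 * tolerating) / len(response_times_ms)

# Past week of POST /person response times in milliseconds (illustrative).
times = [5, 8, 9, 12, 35, 60, 7, 11]
score = apdex(times)
if score < 0.9:
    print(f"Apdex {score:.2f} below target -- halt feature work")
```

With the sample data, the score is 0.69, so the team would pause feature work and investigate latency.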

Error-based Apdex

Using a similar formula, we can describe error rates with Apdex. HTTP response codes less than 400 are good, codes between 400 and 499 are frustrating, and codes greater than or equal to 500 are bad.

The idea behind this metric is that the UI should do whatever it can to prevent bad user input (4XX) and the team should design the system to avoid faults (5XX).

An example of an engineering indicator:

The endpoint POST /person must have an Apdex of 0.9 for HTTP response codes over the past week.
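A sketch of the error-based variant, using the weighting described above (codes below 400 are good, 4xx are frustrating, 5xx are bad). The function name and sample codes are illustrative.

```python
def error_apdex(status_codes):
    """Error-based Apdex: good = 1, frustrating (4xx) = 0.5, bad (5xx) = 0."""
    if not status_codes:
        return 1.0  # no traffic this period: treat as healthy
    good = sum(1 for c in status_codes if c < 400)
    frustrating = sum(1 for c in status_codes if 400 <= c < 500)
    return (good + 0.5 * frustrating) / len(status_codes)

# Past week of POST /person HTTP status codes (illustrative).
codes = [200, 201, 200, 400, 404, 500, 200, 200]
score = error_apdex(codes)
if score < 0.9:
    print(f"Error Apdex {score:.2f} below target")
```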

Bug Creation

If a team develops a product without enough testing, customers will be the testers and will report issues. This is costly because any bug a customer experiences could lead to churn.

This indicator relates to the upfront quality of the product development.

An example indicator:

The People service/component should have no more than 2 bug tickets created per week.

Note: It's important to denote the difference between bug and feature request. I have seen tickets where customers reported a bug because it did not meet their expectations. The product was behaving as intended but the desired functionality was not implemented yet.
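A sketch of checking this indicator against a ticket feed. The ticket fields, component name, and data source are illustrative; only the 2-bugs-per-week limit comes from the example above. Note that the filter on `"type"` is what enforces the bug-versus-feature-request distinction.

```python
from datetime import date, timedelta

MAX_BUGS_PER_WEEK = 2

# Illustrative tickets; a real team would pull these from its tracker.
tickets = [
    {"type": "bug", "component": "People", "created": date(2021, 3, 15)},
    {"type": "feature", "component": "People", "created": date(2021, 3, 16)},
    {"type": "bug", "component": "People", "created": date(2021, 3, 18)},
    {"type": "bug", "component": "People", "created": date(2021, 3, 19)},
]

week_start = date(2021, 3, 15)
bugs_this_week = sum(
    1 for t in tickets
    if t["type"] == "bug"
    and t["component"] == "People"
    and week_start <= t["created"] < week_start + timedelta(days=7)
)

if bugs_this_week > MAX_BUGS_PER_WEEK:
    print(f"{bugs_this_week} bugs this week -- investigate upfront quality")
```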

Bug/Feature Velocity

If a product is well designed, it should be easy to change, whether patching bugs or adding features. If it takes the development team a long time to make changes, this indicates that the team needs to focus on the system itself to make the product better.

This is a harder indicator to fix because there can be two issues. First, there may be an immediate reason why development is slower than expected. Second, the issue can stem from a process or organizational issue that the company needs to address. It impacts customers because they will churn if you are not competitive enough.

An example indicator:

With 1 feature request being equal to 3 bugs, the People team will complete 9 bugs worth of work per week.
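The velocity check can be sketched as a simple conversion into "bugs worth" of work. The 1-feature-equals-3-bugs ratio and the 9-per-week target come from the example indicator; the completed-work counts are illustrative.

```python
FEATURE_TO_BUG_RATIO = 3   # 1 feature request == 3 bugs
TARGET_BUGS_PER_WEEK = 9

# Illustrative weekly totals for the People team.
completed_this_week = {"bugs": 4, "features": 1}

velocity = (
    completed_this_week["bugs"]
    + completed_this_week["features"] * FEATURE_TO_BUG_RATIO
)

if velocity < TARGET_BUGS_PER_WEEK:
    print(f"Velocity {velocity} below target -- inspect the system and process")
```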

Up-to-date Dependencies

Most software depends on other software systems. As time goes on, the software used to develop the product will become outdated. Outdated software itself is not bad; as the saying sort of goes, "If it ain't broke, why fix it?" But if researchers discover security flaws in the outdated software, it is in a broken state. Although customers will not see the impact of outdated software directly, they will feel it if hackers take advantage of an obsolete system. Customers will also feel some pain when a large amount of work needs to be performed for an update; see Bug/Feature Velocity.

Examples of indicators:

Our software will run on the latest version of Python that is in the "security branch" stage of development.

If a dependency has a hotfix, minor, or major update, we will update the dependency within 2 weeks, 1 month, or 2 months of its release, respectively.

Some software systems, like Python, have published release schedules. The product-focused team members can use these schedules to incorporate updates into the product roadmap.

It is harder to predict when updates to package dependencies will come out. If the team keeps dependencies current, the effort to perform hotfix or minor updates should be minimal.
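A sketch of checking the second example indicator: flag any dependency whose available update has been out longer than its grace period (2 weeks for hotfix, 1 month for minor, 2 months for major). The dependency list is illustrative; a real implementation would query a package index.

```python
from datetime import date, timedelta

# Grace periods from the example indicator above.
GRACE_PERIODS = {
    "hotfix": timedelta(weeks=2),
    "minor": timedelta(days=30),
    "major": timedelta(days=60),
}

# Illustrative pending updates; real data would come from a package index.
updates = [
    {"name": "requests", "kind": "minor", "released": date(2021, 1, 5)},
    {"name": "flask", "kind": "hotfix", "released": date(2021, 3, 14)},
]

today = date(2021, 3, 22)
overdue = [
    u["name"] for u in updates
    if today - u["released"] > GRACE_PERIODS[u["kind"]]
]

if overdue:
    print("Overdue dependency updates:", ", ".join(overdue))
```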

Operating Expenses (or Profit Margin?)

All products will have a cost associated with running them. A development team can measure the cost of a system to create an indicator.

The indicator could be a simple limit such as:

The operating costs for the People service/component will not exceed $1,000 per 1-month period.

Or if you include the revenue received for that specific product/component:

The profit margin for the People service/component will stay above $500 per 1-month period.

The exact indicator used will depend on the transparency of your company. Surfacing costs helps keep the team aware of the impact of their work, so the team should have access to the data.

If you have good data, you could include the engineering time in the costs, but that can be difficult to get.
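Both cost-based indicators reduce to simple threshold checks. A sketch, using the $1,000 cost limit and $500 margin floor from the examples above; the cost and revenue figures are illustrative.

```python
MAX_MONTHLY_COST = 1_000
MIN_MONTHLY_MARGIN = 500

monthly_cost = 1_200     # illustrative; e.g. pulled from a cloud billing report
monthly_revenue = 1_600  # illustrative revenue attributed to the component

margin = monthly_revenue - monthly_cost

alerts = []
if monthly_cost > MAX_MONTHLY_COST:
    alerts.append(f"cost ${monthly_cost} exceeds ${MAX_MONTHLY_COST}")
if margin < MIN_MONTHLY_MARGIN:
    alerts.append(f"margin ${margin} below ${MIN_MONTHLY_MARGIN}")
for alert in alerts:
    print("People service/component:", alert)
```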

Conclusion

Becoming a data-driven company is hard work. By using data, though, you can ensure a product development team is working on the right tasks at the right time. By using engineering indicators, you can keep a clear focus on new product features until the data says otherwise. To get started, teams will need to agree on what the indicators should be.

Afterword

As mentioned in the original disclaimer, these are my thoughts. I plan to use these concepts to guide my personal and work projects. In the future, I will be able to come back to this post with an update.
