These are my notes and takeaways from the InfoQ webinar "How to pay down technical debt in JavaScript applications".


three sources of tech debt

  1. deliberate
  2. design change
  3. incremental code change

tech debt characteristics

  1. bugs are okay, they are a consequence of innovation
  2. emotional drag of tech debt is not okay
    • devs don't want to work in a shitty codebase
  3. tech debt is never "if" but rather "when"

how to manage tech debt between the dev and product teams?

both sides need to:

  • agree on the impact of tech debt
  • define what must be addressed
  • talk openly, set a common language
  • define a method of measuring the debt

how to measure?

  • static analysis - linting, tests, TypeScript
  • production analysis - monitoring, crash reports, logging
  • emotional drag - "developer groans per minute" - retros, 1on1 meetings, satisfaction surveys

static techniques

  • best practice: make static checks a gate for merging a PR
  • common mistakes: relaxing rules, sprinkling lint-ignore comments
  • decide, enforce, automate
  • static type definitions with TS
  • optionally: code smell detector (cyclomatic complexity etc)
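
a minimal sketch of such a gate, assuming an npm project: run `eslint --max-warnings 0` and `tsc --noEmit` in CI and refuse to merge on failure. with `"strict": true` in tsconfig.json, whole classes of runtime bugs become compile errors instead:

```ts
// With "strict": true, type mismatches fail the build
// instead of surfacing as production bugs.
function orderTotal(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price, 0);
}

console.log(orderTotal([9.99, 4.5])); // 14.49

// orderTotal(["9.99", "4.5"]);
// ^ would not compile: Type 'string' is not assignable to type 'number'
```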

dynamic techniques

  • stability score
    • (1 - (user sessions with unhandled errors / total user sessions)) * 100% - see the sketch after this list
  • stability targets - useful to define both:
    • target stability (Service Level Objective, long term goal)
    • critical stability (Service Level Agreement, fix immediately)
  • monitoring exceptions
    • unhandled exceptions - affect stability
      • page stops rendering
      • event handlers break user interaction
      • unhandled promise rejections
    • handled (expected) exceptions - don't count towards stability
  • not all bugs are worth fixing - some are caused by strange environments or 3rd-party browser extensions; use monitoring data to decide
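
a minimal sketch of the score and the exception monitoring in TypeScript; the session counters, the target numbers, and `reportUnhandled()` are hypothetical, while the `error` and `unhandledrejection` events are standard browser APIs:

```ts
// Stability score: (1 - failing sessions / total sessions) * 100
function stabilityScore(sessionsWithUnhandledErrors: number, totalSessions: number): number {
  if (totalSessions === 0) return 100;
  return (1 - sessionsWithUnhandledErrors / totalSessions) * 100;
}

// Hypothetical targets, for illustration only.
const TARGET_STABILITY = 99.5;  // SLO: long-term goal
const CRITICAL_STABILITY = 98;  // SLA: below this, drop everything and fix

const score = stabilityScore(900, 120_000); // 99.25
if (score < CRITICAL_STABILITY) {
  console.error(`stability ${score.toFixed(2)}% breaches the SLA - fix immediately`);
} else if (score < TARGET_STABILITY) {
  console.warn(`stability ${score.toFixed(2)}% is below the SLO - schedule debt work`);
}

// Hypothetical hook: forward to whatever monitoring service is in use.
function reportUnhandled(reason: unknown): void {
  console.error("unhandled:", reason);
}

// Only unhandled exceptions count towards stability; handled (expected)
// exceptions are reported separately and excluded from the score.
window.addEventListener("error", (event) => reportUnhandled(event.error));
window.addEventListener("unhandledrejection", (event) => reportUnhandled(event.reason));
```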

Q&A

how to define stability targets?

introduce metrics, let them calibrate for a while and then look at the numbers. generally the SLO should be set higher than the measured baseline. the SLO can depend on multiple factors:

  • maturity of the business (young companies may not need super strict targets)
  • industry (clients might have higher standards, like fintech)

how to prioritize bugs?

make sure to attach metadata to monitoring reports. prioritize based on the area of effect: bugs that affect revenue (for example shopping carts or user onboarding) should have higher priority, and define which issues are not on the critical business path (like changing the user avatar in settings).
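
a sketch of attaching such metadata to a report; `reportError`, `submitCart`, and the `/errors` endpoint are hypothetical stand-ins for whatever monitoring setup is in use:

```ts
// Business-area metadata lets triage sort revenue-critical bugs first.
type Area = "checkout" | "onboarding" | "settings" | "other";

interface ErrorReport {
  message: string;
  stack?: string;
  area: Area;
  onCriticalPath: boolean; // e.g. checkout: true, avatar settings: false
}

// Hypothetical helper: ship the report to a monitoring backend.
function reportError(error: Error, area: Area, onCriticalPath: boolean): void {
  const report: ErrorReport = {
    message: error.message,
    stack: error.stack,
    area,
    onCriticalPath,
  };
  navigator.sendBeacon("/errors", JSON.stringify(report));
}

// Hypothetical checkout call, stubbed to simulate a failure.
function submitCart(): void {
  throw new Error("payment provider timeout");
}

try {
  submitCart();
} catch (err) {
  reportError(err as Error, "checkout", true); // revenue path: high priority
  throw err;
}
```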

what are examples of dynamic techniques?

site reliability

  • availability - whether the page loads at all
  • stability - whether the app works correctly once loaded
  • performance - page load speed, runtime speed
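
as one example, page-load timings can be read from the standard browser Performance API; a minimal sketch:

```ts
// PerformanceNavigationTiming is a standard browser API; all timestamps
// are milliseconds relative to the start of navigation.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  console.log(`time to first byte: ${nav.responseStart.toFixed(0)} ms`);
  console.log(`DOMContentLoaded:   ${nav.domContentLoadedEventEnd.toFixed(0)} ms`);
  console.log(`full page load:     ${nav.loadEventEnd.toFixed(0)} ms`);
}
```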

incremental rollouts (testing in production on x% of users)
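
a sketch of one common way to do this: hash a stable user id into a bucket in [0, 100) so each user deterministically lands in or out of the rollout (the 5% figure below is just an example):

```ts
// Deterministic bucketing: the same user always gets the same variant,
// and raising the percentage only adds users, never swaps them.
function bucket(userId: string): number {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) | 0;
  }
  return Math.abs(hash) % 100;
}

function isInRollout(userId: string, percent: number): boolean {
  return bucket(userId) < percent;
}

// Ship the refactor to 5% of users, watch the stability score for
// regressions, then ratchet the percentage up.
if (isInRollout("user-1234", 5)) {
  // new code path
} else {
  // old, known-stable code path
}
```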
