Accelerate notes
Martin Fowler foreword
- real data and analysis
- 1 hour from commit to production as the benchmark
- speed with stability, not a trade-off
- caveats: subjective perceptions, sampling bias, confirmation bias
- scope: from commit to production, not the entire development process
Courtney Kissler (Nike) foreword
- advocates for the book from personal experience
- optimizing for speed, not cost
- senior leadership commitment to a learning organization
5 categories of capabilities
- Continuous Delivery
- Architecture
- Product and process
- Lean management and monitoring
- Cultural
preface
- 23000 survey responses from 2000 organizations
- surveys with snowball sampling
- history
-- 2014
--- measuring delivery
--- throughput *and* stability
--- Likert-type questions (Strongly Agree/...)
-- 2015: extending the model
-- 2016: more technical practices
-- 2017: leadership and not-for-profits
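The Likert-type questions mentioned above can be turned into a construct score by coding each answer as a number and averaging the items. A minimal sketch, assuming a 5-point scale and made-up item names (not the survey's actual instrument):

```python
# Map a 5-point Likert scale to numbers and average a respondent's items
# into a construct score. The scale and item names are illustrative
# assumptions, not the survey's actual wording.
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def construct_score(responses):
    """Average the numeric codes of one respondent's item answers."""
    codes = [LIKERT[answer] for answer in responses.values()]
    return sum(codes) / len(codes)

# Hypothetical answers from one respondent
respondent = {
    "failures_treated_as_learning": "Agree",
    "new_ideas_welcomed": "Strongly Agree",
    "information_actively_sought": "Neutral",
}
print(construct_score(respondent))  # 4.0
```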
part 1: what we found
accelerate
- modern market: delivering sooner and responding to changes and threats
- measures coming from the practitioners, not the executives
- capability models, not maturity models
-- capabilities are never done: continuous improvement
-- no lockstep
-- outcome-based rather than vanity metrics
-- dynamically changing
- evidence
-- no prediction can be made from
--- age and technology of the systems
--- operations vs development teams
--- presence of change approval boards
- large value of adopting devops
-- 46x deployment frequency (tempo)
-- 440x shorter lead time (tempo)
-- 170x faster time to recover (stability)
-- 5x smaller change failure rate (stability)
measuring performance
- previous attempts
-- lines of code: bloated software
-- velocity: team-dependent, gamed
-- utilization: in trade-off with lead time
- characteristics
-- global outcome: no teams fighting
-- outcomes not output: no busywork
--- design+development vs delivery (build, test, deploy)
---- tempo
----- deployment lead time
----- deployment frequency
---- stability
----- Mean Time to Restore
----- change failure rate (very generic: is a small bug a failure?)
- findings
-- high performers pulling away over the years
-- low performers sacrificing stability to improve tempo
-- organizational performance
--- profitability, market share, productivity as proxies
--- claim of a predictive relationship rather than correlation
- quantitative measures for culture
-- getting numbers in the wrong culture has pathological effects
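The four measures above (deployment lead time, deployment frequency, MTTR, change failure rate) can all be derived from a log of deployments. A minimal sketch over hypothetical records, not the book's data:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: commit time, deploy time, whether the
# deploy caused a failure, and how long restoring service took.
deploys = [
    {"commit": datetime(2019, 3, 1, 9),  "deploy": datetime(2019, 3, 1, 11),
     "failed": False, "restore": timedelta(0)},
    {"commit": datetime(2019, 3, 2, 10), "deploy": datetime(2019, 3, 2, 15),
     "failed": True,  "restore": timedelta(hours=1)},
    {"commit": datetime(2019, 3, 4, 8),  "deploy": datetime(2019, 3, 4, 9),
     "failed": False, "restore": timedelta(0)},
]

# Tempo: mean commit-to-deploy lead time, and deploys per observed day.
lead_times = [d["deploy"] - d["commit"] for d in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)
observed_days = (deploys[-1]["deploy"] - deploys[0]["deploy"]).days + 1
deploy_frequency = len(deploys) / observed_days

# Stability: mean time to restore over failed deploys, and the share of
# deploys that caused a failure.
failures = [d for d in deploys if d["failed"]]
mttr = sum((d["restore"] for d in failures), timedelta()) / len(failures)
change_failure_rate = len(failures) / len(deploys)

print(mean_lead_time, deploy_frequency, mttr, change_failure_rate)
```

Real pipelines would pull these events from CI/CD and incident tooling; the shape of the computation stays the same.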
measuring and changing culture
- modeling
-- assumptions (invisible)
-- values
-- artifacts (visible)
- Westrum model
-- pathological (power-oriented)
-- bureaucratic (rule-oriented)
-- generative (performance-oriented)
- measuring
-- Westrum construct
--- discriminant validity
--- convergent validity
--- reliability
-- hypothesis: Westrum culture predicts software delivery and organizational performance
--- hypotheses all come from previous research
--- only lead time/deployment frequency/MTTR form a valid construct, not change failure rate
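Reliability of a construct like the Westrum one is commonly checked with Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of respondent totals). A sketch over made-up responses (the book does not publish its raw matrix):

```python
from statistics import pvariance

# Rows are respondents, columns are Likert-coded items of one construct
# (e.g. the Westrum culture items). The numbers are made up.
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # transpose: one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(responses), 3))  # 0.939
```

Values near 1 mean the items move together, i.e. they plausibly measure one underlying thing; a common rule of thumb accepts alpha above roughly 0.7.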
technical practices
- foundations
-- comprehensive configuration management: provisioning starts from source control and all changes pass through source control
-- continuous integration: no feature branches that live longer than 1 day
-- continuous testing
- causal model (coming from previous research)
-- technical practices -> continuous delivery
-- continuous delivery -> Westrum organizational culture -> organizational performance
-- continuous delivery -> software delivery performance -> organizational performance
-- continuous delivery -> identity -> organizational performance
-- continuous delivery -> less deployment pain
-- continuous delivery -> less burnout
-- unplanned work as a proxy for quality
architecture
- high performance is possible with all kinds of systems, provided they are loosely coupled
-- test and deploy individual components even as their total number grows
-- lack of correlation is a strong-ish result
- characteristics
-- can do most testing without an integrated environment (project isolated for testing)
-- can deploy/release independently of other projects
-- trivially, a loosely coupled architecture means teams are loosely coupled
-- enables scaling (more deploys)
-- allows teams to choose their own tools
--- standardize on architecture/infrastructure, however
--- architects should focus on engineers and outcomes, not on tools (provide loose coupling and empower teams to make changes)
integrating infosec
- DevSecOps - Rugged DevOps
- shifting security left, inside the team
- or providing capabilities at the platform level so that each application doesn't have to reinvent them
lean management
- WIP limits are not enough
-- add: visual management of metrics and defects
-- daily feedback from production to inform decisions
- lightweight change management process: external approval does not correlate with stability
product development
- the usual advice on small batches and feedback
- virtuous cycle between lean management and software delivery performance
-- curious how this is measured: how can a correlation be directed both ways?
sustainability
- continuous delivery -> less deployment pain
-- keep an eye on barriers that hide deployment from the development side
- continuous delivery -> less burnout
-- lean practices and various risk factors contribute to this, of course
employee satisfaction
- continuous delivery -> job satisfaction
- job satisfaction -> organizational performance
-- diversity and inclusion: previous research supports their positive effect on results, but this research is only descriptive in its lack of diversity, e.g. 6% women
leadership
- servant leader: focuses on the development of followers
- transformational leader: focuses on making followers identify with the organization
- correlation of transformational leader characteristics with outcomes
- however, leadership's impact is indirect, so not a strong correlation
- contributing to a strong team culture
-- cross-functional collaboration
-- climate of learning
-- effective use of tools
part 2: the research
the science behind this book
- Leek's framework
-- descriptive: demographics about company size, etc
-- exploratory: correlations
-- inferential predictive: theory-driven, test established hypotheses with data
--- unclear: I'd read inference as extending sample results to a population, testing statistical hypotheses on the population, etc
-- no predictive? corrected: predictive through regression and an underlying theory
-- no causal
-- no mechanistic
-- separate: classification via clustering of groups of performers
- the research in this book is primary and quantitative
psychometrics
appendices
appendix c: statistical methods
- ...
- relationships
-- (Pearson) correlation: the usual
-- regression for predictive relationships
--- only used if literature/theories for a predictive relationship exist
--- linear first, then partial least squares
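The two relationship tools named above can be sketched on toy data: Pearson correlation is the covariance normalized by both standard deviations, and the simplest linear regression (OLS with one predictor) is slope = cov(x, y) / var(x). The numbers below are made up, not from the book's dataset:

```python
from math import sqrt

# Toy data: x could be a capability score, y a performance proxy.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
vx = sum((a - mx) ** 2 for a in x) / n
vy = sum((b - my) ** 2 for b in y) / n

r = cov / sqrt(vx * vy)          # Pearson correlation coefficient
slope = cov / vx                 # OLS slope
intercept = my - slope * mx      # OLS intercept

print(round(r, 4), round(slope, 3), round(intercept, 3))
```

Partial least squares generalizes this to many collinear predictors by regressing on latent components, which is why it appears as the follow-up step.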