Interface-based programming & architecture : How to implement clean, scalable codebase in modern C++

How to avoid decreasing productivity in software development projects ?

Or "The multiple impacts of technical debt"
Or "How technical debt is choking IT companies on multiple scales" Or "Software development : How does technical debt suffocates your projects?"
Or "Technical debt : Stop shooting yourself in the foot"
Or "Software development : It's time to do it right"
Or "Technical debt : why do care ?"
Or "how to implement clean, scalable codebase (in modern C++)"
Or "Software development : How to avoid projects failure ?"
Or "Software development : technical debt & risks prevention"

word_cloud

This paper is an introduction to Scalable software : interface-based programming & architecture in modern C++, which will follow shortly.

About this paper

Abstract

Most software development companies, whatever their size, how long they have existed, or how much money they generate, face the same issue :

  • Decreasing productivity, in proportion to their projects' lifetime.

In this article, which mainly deals with IT project management, we will analyze how technical debt could be the cause of this decrease on several scales. We will consider how TD has a direct impact on efficiency at work and on project performance and quality, but also on deeper factors, by exploring how it influences employees' mental state, thus increasing staff turnover by generating work-induced psychological disorders.

Finally, we will propose a set of recommendations to both prevent and reduce this deficit, and attempt to build a tool to predict the impact of TD - and therefore velocity - at a given time on any project and team.

Target audience

This paper aims to reach anyone who has experienced software development in one way or another, from trainee developers to end-users, including staff managers.

However, since technical debt is likely to impact most jobs, many of the topics discussed below may also apply to those who are not in the IT world.

Prerequisites

  • Agile : Not mandatory; however, a basic knowledge of Agile methodologies, Scrum in particular, will ensure a much smoother reading experience.
  • C++ : There are a few C++ snippets, mostly to illustrate what expressive, clean code is. Any other programming language would work just as well.

Disclaimer / About the author

This article was written by a developer, and reflects his thoughts, personal opinions and perception of his own experiences.
I did not study management techniques nor psychology; however, when I became aware of how important the technical debt problem is, and how strongly it impacts many projects and companies, I chose to devote my professional life to this very subject.

Introduction / motivations

A story about Scrum/Agile methodology and legacy code

From a developer perspective

As a developer, some of the most frustrating things I have to deal with in my daily work are deprecated technologies, chaotic architectures, and spaghetti code.

If you are unfamiliar with this antipattern's name, it means that the current codebase is a real, overcomplicated mess, and that it would be way faster and easier to rewrite it from scratch than to attempt to understand and then modify it.

This often, if not always, leads to unproductive work done by unmotivated employees.

Even the simplest task may cost several days, which can be experienced as a failure by the developer it is assigned to. This generates a deep feeling of incompetence, and thus poor self-esteem.
This permanent frustration and dissatisfaction often leads to a psychological disorder called the boreout syndrome. The employee slowly stops being proactive in both their professional and personal lives. Feeling like a fraud, they work with the persistent fear of being discovered by their supervisors, which results in growing anxiety and, ultimately, depressive disorders.

At a company level, this is most likely to cause tension between managers and the development team, as the former do not understand the situation's root causes, and consider requests such as technical debt reduction as the latter's whim.

Who has never heard things like the following, from current or former colleagues :

"It's an old codebase, an old project that exists since 10+ years ... so it's completely normal.
It works ... sometimes ... most of the time I guess. So, leave it alone."

So what ? This is definitely no excuse. Defeatism never helped solve problems.

This issue has a name : technical debt.
Simple solutions exist, such as software refactoring and code cleanup; they cost time, and thus money, of course.

In such a case, the team needs to invest short-term work time to save a tremendous amount of medium- and long-term time. Time that is often lacking on most software development projects, which explains why poor-quality, untested, undocumented spaghetti code is so common.

From a manager and/or end-user client perspective

But what about the non-developers' perspective ?

This kind of technical debt manifests itself to managers and end-user clients through repeated delivery delays, a continuous misunderstanding of the difficulties encountered, as well as tensions with the development team.

They may even conclude that the staff is responsible for its own ineffectiveness, or even slacking on purpose.
This unproductivity engenders a tremendous amount of stress on both sides.

The worst point is that it endangers the whole team's integrity; and thus, the project's.

When one is oblivious to what technical debt is, or considers it only as a non-priority task (compared to the creation, addition and delivery of new value), these symptoms accentuate over time, and might ultimately - and most likely - freeze the project in an unproductive stasis state.

Developers team productivity

Using the Agile/Scrum methodology, velocity is a value calculated in story points using a Fibonacci scale. These points determine how much a team is able to deliver during a sprint.

A common formula to calculate velocity is :

$velocity\equiv { \frac { Agile\ story\ points\ delivered } { number\ of\ sprints } }$
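For instance (figures chosen arbitrarily for illustration), a team that delivered 90 story points over 3 sprints has a velocity of $\frac{90}{3} = 30$ story points per sprint.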

From my perspective, (good) team managers tend to improve developers' velocity mainly in the following ways :

  • Improve communication

    • Visibility is a key-factor to success.

      The daily stand-up meeting is a double-edged tool : while giving team members more responsibility for the delivery may increase productivity, it may also lead to demotivation in case of delay, as a delay may be perceived as a failure.
      Also, updating the burndown chart (which represents the work left versus time) on a daily basis ensures an overview of the team's progress.

      This is why creating a clean, exhaustive task hierarchy tree is so important. Each US must be divided into tasks, which are themselves divided into subtasks, and so on. Across the tree, each node must be precisely detailed, and at least answer the following : "What", "How", and "How long". "Who" also matters, particularly when no one in the team is able to do that specific task.

      The better the definition of ready is, the smaller the overcosts are.

      Keep in mind that the DoR is a powerful Agile tool to avoid unexpected user-story paths across the workflow. It ensures that the product owner, business analysts, and the development team share the same vision of what must be done, and of what the expected result is.
      Better requirements avoid frequent changes, while inaccuracies and oversights often generate tremendous overcosts. Also, reworking the same user story multiple times is damaging for employees' morale.

  • Keep employees' morale high

    Motivated and well-rested employees generate more value. Being more creative, they produce higher code quality while avoiding common mistakes & pitfalls, and therefore wasted time.
    We will develop this point later, in a dedicated part : "The human factor".

  • Capitalize on Agile retrospective meetings

    Retrospective meetings provide a safe place for the team to reflect on and discuss what works well, and what needs to be improved.
    This is an excellent tool to improve both velocity and team cohesion, while adjusting processes and practices.

Please note that I won't waste time here detailing bad practices, such as workflow by-passing, shortened deadlines, modified schedules, frequent crunch times, overtime, managers who shout at the team, etc.

Codebase quality is a key to efficiency

While velocity score is interesting, it does not explain causes.

It is a "what", while we need a "why".

In my experience, a major factor for (lack of) productivity that is often neglected is codebase quality.

Codebases are a project's value. They are the output of the team's efforts across the project's lifetime, what remains beyond the turnover of developers and managers.

Often, we tend to assess a project's health using two factors : the final product's quality and its performance.
Why not add maintainability as a new key parameter ?

Technical debt : the beast

Agile methodologies claim to improve value delivery by enhancing visibility and adaptability, while decreasing risk.
Why not consider increasing technical debt as a major risk factor ?

Why do we call this a "debt" ?

Let's compare technical debt to financial debt. Contracting a loan results in three main points :

  • An amount of money loaned
  • An interest rate
  • A repayment capacity - and schedule -, expressed in an amount or rate over time

The comparison is relevant when it comes to the so-called snowball effect.
Not repaying the technical debt is quite similar to paying the interest on a loan by taking out a new loan !
This situation is likely to result in bankruptcy, or in a repayment whose amount can reach several times that of the initial loan.

Sounds like a worthy or sustainable strategy ? Obviously not.
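
To make this snowball effect concrete, here is a small, purely illustrative C++ sketch. The figures are arbitrary assumptions (an initial debt of 100 "units", 10% interest per period, 20 units repaid per period in the managed case) : the point is only to show how an unpaid debt compounds while a regularly repaid one shrinks.

    #include <iostream>

    int main()
    {
        // Hypothetical figures, for illustration only.
        double ignored_debt = 100.0;       // never repaid, interest keeps accruing
        double managed_debt = 100.0;       // 20 units repaid every period
        const double interest_rate = 0.10; // 10% per period

        for (int period = 1; period <= 10; ++period)
        {
            ignored_debt *= (1.0 + interest_rate);
            managed_debt = managed_debt * (1.0 + interest_rate) - 20.0;
            if (managed_debt < 0.0) managed_debt = 0.0;

            std::cout << "period " << period
                      << " : ignored = " << ignored_debt
                      << ", managed = " << managed_debt << '\n';
        }
        // After 10 periods, the ignored debt has grown to ~259 units,
        // while the regularly repaid one is fully paid off by period 8.
    }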

What actually is technical debt ?

Technical debt definitions may vary, but the concept generally encompasses :

  • Deprecated technical environment
    • Programming languages - or language standards/releases
    • Tools (builders, generators, compilers, optimizers, static analysers, sanitizers, etc.)
    • Platforms
    • External components (libraries, frameworks, ...)
  • Bad practices (which may be former best practices that have since become deprecated)
  • Poor architecture choices (antipatterns, tight coupling, etc.)
  • Poor implementation (code duplication, quick-wins, quick-fixes, over-complicated design or code, ASM-level optimisations, ...)
  • Lack of development & architecture guidelines
  • Lack of technical documentation
  • Poor test coverage
  • < and so on ... >

These are dark creatures lurking in the shadows, waiting to be numerous and strong enough to sink projects.

Trust me, I have already helplessly watched a fistful of companies literally drowning under technical debt, to the point where they were not able to fulfill any end-user requests anymore, and eventually went bankrupt.
Clients do not care how the product works, as long as it works. After that, financial issues grow quickly.

In his Nobel acceptance speech in Oslo in December 1986, Elie Wiesel said :

"We must take sides. Neutrality always helps the oppressors, never the victim. Silence encourages the tormentor, never the tormented. Sometimes we must interfere".

This is why I chose to focus my career on this very subject :

  • "Software development risks prevention in general, technical debt management in particular".

And today, you too can choose to be part of the solution, rather than the problem.

In addition to these sources, developers and managers must accept the fact that any modification to a codebase is likely to introduce new technical debt, however small. Also, we should not fear changes : there is nothing worse than a project in a stasis state, decaying over time until it ultimately becomes deprecated, unused and thus forgotten.
Especially when such a project generates a significant part of the company's profits.

Who has to pay for TD mitigation ?

A common question when a topic like this is discussed is "Who has to pay for it ?" - in terms of money and time.

  • A continuous cost, smoothed over time

    • Like the overhead of Agile
    • Like the overhead of tests
    • A cost intrinsic to the development process
      • It must be smoothed out and integrated throughout the whole project
  • A huge TD gap calls for dedicated TD management strategies

    • List and prioritize TD items,
    • then refactor the component most closely related to the current US

The vicious technical debt cycle(s)

Avoiding or postponing technical debt payment is similar to the following thinking :

"Ain't no time to get more time"

The greater the technical debt is, the harder and longer it is to pay.
And the worst is yet to come : this cost is not linear over time, but exponential-like, as the cause-consequence relationships do not form a chain, but a cycle. This is a slow but ineluctable suffocation.

Not only because the solution involves refactoring (which, once again, costs time and thus money) with increasing complexity over time, but also because it involves technology & software updates as well as staff training. Which brings potential additional risks and delays.

diagrams-Viscious_circle_of_technical_debt

Also, while quick-wins may generate value fast, they introduce just as much - if not more - technical debt and issues. Thus, the strategy of adding new features to cover refactoring costs is basically a double-edged, very risky pact with the devil.

To illustrate this in a simpler way, let's use the following humorous drawing, which has become pretty popular across technical-debt-related papers because it is so relevant :

Technical debt joke

This is why detecting and fixing technical debt as early as possible is so important.
Facing economic reality, most companies are not able to put their projects in a stasis state for a long period of time in order to pay technical debt.

The point of no return

If technical debt has an exponential-like impact on productivity, the latter will tend towards zero.
Thereby, we may safely assume that there exists a point beyond which a project cannot evolve anymore, and thus cannot deliver value. No more new features, no performance improvements, not even critical bug fixes nor any kind of maintenance.

We can virtually - and quite naively - illustrate this concept using the following curves :

no_return_point

Here, the y-axis represents a rate of productivity, compared to the ideal one. It fluctuates in a sinusoidal shape to emulate common factors such as staff fatigue or vacations, for instance.
The x-axis represents time, without any particular unit. Days or weeks are possible options.

Basically, productivity decreases in proportion to the growth of the TD impact. External factors aside, the reverse could be just as true.

Even when a team's speed is halved by the impact of technical debt (where the curves cross each other), it is not so common to point out that TD is the root cause of the lack of productivity.

Managers and/or teams are most likely to consider other hypotheses (which may be related in some way), such as a lack of financial means or material resources, staff skills, the financial context, understaffed teams, the recruitment crisis, etc.

This is why it is so important for the stability of projects to be able to detect - and measure - with precision the impact of each factor that could penalize productivity, relying on facts rather than speculation.

We can also wonder about inertia : first in the generation of technical debt, then in its impact.

  • What do I risk if I continue to ignore the problem ?
  • How long will it take to stabilize the amount of technical debt ?
  • If we stop introducing new TD right now, is this enough to stabilize the debt ? Won't the interest continue to accrue ?
  • In terms of TD management, how late is too late ?
  • Should TD reduction be a top priority ? If not, how much debt can I afford without really endangering my project ?
  • Is there an optimal TD amount ? If yes, is it zero ?

It is to these questions, and others, that we will try to provide answers in the next parts.

How to detect technical debt ?

As mentioned before, delayed deliveries, unmet deadlines, as well as increasing bug reports may warn you as a manager. But if the symptoms already exist, so does the disease.
At this stage, you basically have two choices : deny this reality, or face it.

How to act upstream ?

It is the developers', lead-developers' and architects' job to evaluate technical debt and, if applicable, to sound the alarm at an early stage. You hired a team : listen to its advice.
You may also introduce a "prevention is better than cure" strategy, creating dedicated periods of time for internal (or external !) specialists to audit your codebase.

Other points/tools may also come in handy, as they make codebase audits faster :

  • Quality flaws, such as performance issues and frequent bugs. Especially undefined behaviors and non-reproducible issues.
  • Static analysis tools, such as Coverity, SonarQube, etc.
  • Lack of documentation : no or not enough architecture diagrams, technical details, and logbook.

From a staff manager or Scrum master perspective, besides the information provided by the development team, other symptoms should alert you to the situation :

  • The QA team is reporting an increasing ratio of "opened issues per story points delivered"
    By the way, this is a very important chart to keep up-to-date in order to monitor a project's health.
  • A decrease in story points delivered
  • An increase in poor task estimates that need to be re-evaluated again when grooming
  • Recurring tasks that require more and more time
  • An increase in undelivered stories at the end of sprints
  • Too many round trips of stories/issues between testers and developers
  • Non-regression test failures

How to quantify technical debt ?

Knowing the "what" does not necessarily means knowing the "how much", a fortiori accurately.

Both developers and team managers usually attempt to estimate technical debt as the number of man-days needed to pay it back. This way, because the average salary rate of the development team is a known cost, a "time to money" conversion is simple.

This amount is most likely to appear scary. However, being aware of it may act as an electroshock, and will enable management to forecast future costs and risks.

Any healthy IT project must (be able to) frequently quantify both its development team's velocity and the amount of technical debt, as accurately as possible. These are major factors for forecasting production, and thus yield. This is just as essential as being aware of the economic context.
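
As a naive illustration of that "time to money" conversion (all figures below are hypothetical assumptions), the estimated repayment effort in man-days can be turned into a monetary cost as follows :

    #include <iostream>

    int main()
    {
        // Hypothetical figures, for illustration only.
        const double td_repayment_in_man_days = 120.0; // estimated by the development team
        const double average_daily_rate       = 550.0; // cost of one developer-day

        const double td_cost = td_repayment_in_man_days * average_daily_rate;
        std::cout << "estimated technical debt repayment cost : " << td_cost << '\n'; // 66000
    }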

How to mitigate new technical debt ?

Technology evolves on a daily basis, improving quality, performance, and maintainability.

You need to move on, and you need to do it just as fast.
Either you jump on the train or you stay on the platform; the choice is yours, and now is the right time to act.

It is way easier to pay technical debt often than once a decade. If you wait too long, it may cost you so much in terms of time, human resources, and money that you might not be able to afford it.

Why risk endangering your project ? Is working inefficiently with deprecated technologies worth that much ?

Here is a raw list of key points to work on :

  • Up-to-date technologies

    This includes the whole development and run environments : tools such as IDEs and compilers, libraries & frameworks, OS, etc.

    If for some reason (security, etc.) you are stuck with a deprecated OS such as Ubuntu 14.04 for instance,
    an interesting way to work around it, which I use, is the following :

    • Create a script that downloads the latest release of every toolchain component you may need.
      For me : CMake, GCC and Clang.
    • Create a Docker container that runs this script and builds each component.
      I chose to install everything locally, in a dedicated directory such as /home/builduser/cpp_toolchain/.
      This way, we have a container with an up-to-date, modern C++ toolchain.
    • For every project, I have a multi-stage Docker container that first uses the modern C++ toolchain container,
      then the target OS. Here for instance, Ubuntu 14.04 LTS, which is now deprecated (end of life, end of support).
      That build container takes my project sources and runs the CMake scripts, which download, configure, build, test and install the project's dependencies at a component level.

    The only requirement here is to use the cpp_toolchain components instead of the regular ones.
    So my build command is something like :

    /home/builduser/cpp_toolchain/cmake/bin/cmake . -DCMAKE_CXX_COMPILER=/home/builduser/cpp_toolchain/gcc/bin/g++ -DCMAKE_BUILD_TYPE=RelWithDebInfo

    All of this may of course be integrated into a GitLab CI or Jenkins pipeline.

  • Up-to-date processes

    Code review :
    Systematic code review should be the norm.
    This step prevents common mistakes, quality flaws, unmet specs, and potential technical debt spikes.
    It may also be wise to bind this stage to a continuous training process. Thus, any recurrent development or design mistake may trigger a specific training session to improve the committer's skills, and thus avoid further occurrences.

    Continuous integration/delivery :
    As developers, or IT workers in general, we tend to automate the most repetitive and time-consuming tasks.
    However, it is pretty common to see companies that have neither a CI nor a CD process.

    This may involve the following steps :
    (Warning : the order may differ, as each company has its own specific needs/requirements)

    • A developer creates a merge request for their feature branch
    • Automatic code formatting to avoid diff noise and guarantee consistency
    • Trigger a code-review
    • Trigger automatic unit tests
    • Trigger automatic functional tests
    • Trigger a manual end-to-end PO test and validation
    • Trigger a documentation review
    • Validate integration
    • Packaging
    • Make the delivery available
    • Automatic non-regression tests
    • Trigger a manual end-to-end PO test and validation
    • Automatic delivery

    Introducing automatic steps saves a lot of time, while manual steps act as warrants of the final product's quality.
    This is a very naive list of steps; you may want to re-order it and add further steps, such as security checks for instance, in order to make it bullet-proof.

  • Better conception

    Create scalable software solutions by design.
    This will avoid breaking changes when attempting to solve architecture issues.
    Avoid tight coupling between components, and design interfaces wisely.

    Thus, your codebase will grow in size, not in complexity.

    As these points are my next paper's topic, stay tuned for more details, tricks and best practices.
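
    As a minimal, hedged sketch of what "avoid tight coupling, design interfaces wisely" can look like in C++ (all names below are made up for illustration), callers depend only on an abstraction, so implementations can change or multiply without breaking them :

    #include <iostream>
    #include <memory>
    #include <string_view>

    // The interface is the only thing callers depend on ...
    struct logger_interface
    {
        virtual ~logger_interface() = default;
        virtual void log(std::string_view message) = 0;
    };

    // ... so implementations can be added or replaced without breaking changes.
    struct console_logger : logger_interface
    {
        void log(std::string_view message) override { std::cout << message << '\n'; }
    };

    // The component receives its dependency through the interface (dependency injection),
    // and thus stays loosely coupled to any concrete logger.
    class order_processor
    {
        std::shared_ptr<logger_interface> logger_;
    public:
        explicit order_processor(std::shared_ptr<logger_interface> logger)
            : logger_{ std::move(logger) } {}

        void process() { logger_->log("order processed"); }
    };

    int main()
    {
        order_processor processor { std::make_shared<console_logger>() };
        processor.process(); // swapping in another logger later would not affect this call site
    }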

  • Better documentation

    Documentation matters.
    Not inline code documentation, which most of the time decreases codebase quality, but any other kind.
    And no, an Agile tool that keeps track of tasks IS NOT valid documentation. No one will ever parse the whole ticket flow to understand the past and current state of the project.

    This will ensure that anyone who is external or new to the project (such as a newcomer or someone doing a codebase audit) can understand its design and implementation details quickly and in full.

    Most projects are likely to last, perhaps for decades.
    As staff changes, it is common for most of the architecture and implementation specifications to be lost. New developers may therefore wonder "Why is it done this way ?", and the only answer is "for historical reasons ...".

    A software development project includes four pillars which support the delivery of value over time :

    • The staff, which fluctuates
    • The code base
    • Tests
    • Documentations (functional & technical)

    Remove one of these, and the whole project is likely to collapse.

    Keep architecture & implementation diagrams up-to-date.
    These must continuously reflect the project's current state. So, do not hesitate to put your diagrams under version control.
    Every repository must contain documentation and a fistful of diagrams that best describe the versioned components. This way, each release will be associated with its matching documentation.
    Thus, the whole staff easily keeps the project's big picture in mind, may notice any digression, and fix it. No more : "I do not know why this is done this way, we'd better leave it alone".

    Create a logbook.
    For every technical choice, add information about the pros & cons, and why the team made it.

    The author :
    Finally, I have worked with so many teams that completely lost track of their project's technical design and specifications that I really want to make this point count. Teams that became afraid of making any change, unable to maintain or even explain what previous developers created. This is so sad.

  • Better scheduling

    The better your DoR (definition of ready) and DoD (definition of done) are, the smaller the overcosts are.

    Schedule sprints wisely :

    • How many USs, for how many story points ?
    • Does your current DoR prevent incomplete USs from being scheduled ?
    • Does your current DoD ensure that both developers (with UTs) and testers are able to validate a US ?
      TDD (test-driven development) may be interesting to consider as a development paradigm.
    • Who will compose your staff for this sprint ? Who won't be available ?
    • Does your staff have the required skills to fulfil its tasks ?
    • What could disturb that planning ?
    • What is your room for maneuver in the event of the unexpected ?

    It is always better to deliver less than to deliver nothing because the current sprint was overloaded with too many tasks.

    Another important point about scheduling is what we call "short-circuit demands".
    This term covers any request that disturbs the current sprint schedule, for instance introducing new tasks or re-prioritizing existing ones.

    A work-around may be to build some room for maneuver into the schedule, in order to handle emergencies such as a bug in a delivered release. The correlated question is : what to do with this margin if it ends up unspent at the end of the sprint ?
    It may be bound to non-priority background tasks (such as tech surveys, training or talk preparation, self-training, etc.) that developers may work on while waiting for emergencies.

  • Stovepiped technical and functional designs/implementations

    The technical design & implementation must not be mixed with the functional ones. On the contrary, it must support and frame them, providing a convenient, stovepiped layer on top of which functional content can be added.

    Mixing these two is most likely to result in an over-complicated codebase, and to generate a squared ratio of technical debt.

    Also, working on such a codebase may require developers to know the functional part of the project, which makes their onboarding way too long. Their efforts are dispersed, and thus much less effective.

    This is why it is so efficient to split developers into two teams : one technical, one functional. Each one focuses on its purpose.

  • Write simpler code

    See CppCon 2018 : Kate Gregory, "Simplicity: Not Just For Beginners".
    Simpler code is slower and harder to write. It requires new habits, a new way of looking at things, as well as humility.
    However, there is a very nice counterpart : it is easier to understand, modify and maintain, and it often runs faster or uses fewer resources.

    Speaking of Mrs. Gregory, there is another interesting point I'd like to mention here, from her talk "Emotional Code" at the CPPP 2019 conference (Paris, France);
    which is that code conveys emotions, like fear, misplaced pride, etc.
    Developers must be humble and brave enough to write code that others can understand.
    Something that looks over-complicated, nearly obfuscated, may make its authors unfireable as they become indispensable, but it is so toxic for the codebase's health that it may endanger the project, and by extrapolation the company itself.

  • Decrease code complexity

    Not quite the same as the previous point, but correlated.
    Use standard algorithms and containers, and improve components' genericity, and thus their reusability.
    Make extensive use of syntactic sugar and expressive code. Become better at naming things.

    Anyone should be able to understand your code in the blink of an eye, including beginners.
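
    For instance, here is a tiny, hedged illustration of that idea (sample data and names are made up) : replacing a hand-rolled loop with a standard algorithm and a well-named predicate makes the intent readable at a glance.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        const std::vector<int> response_times_ms { 12, 250, 48, 512, 90 };

        // Hand-rolled version : the reader has to decode the loop to find the intent.
        int slow_count = 0;
        for (std::size_t i = 0; i < response_times_ms.size(); ++i)
            if (response_times_ms[i] > 100)
                ++slow_count;

        // Standard algorithm + named predicate : the intent is the code.
        const auto is_slow = [](int duration_ms) { return duration_ms > 100; };
        const auto slow_requests = std::count_if(response_times_ms.cbegin(),
                                                 response_times_ms.cend(),
                                                 is_slow);

        std::cout << slow_count << " == " << slow_requests << '\n'; // both print 2
    }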

  • Better and standard code formatting

    Use clang-format. Seriously. Create a hook in your version control system that enforces code formatting, thus avoiding any valueless changes.

    Remove from your codebase any useless coding conventions that modern IDEs already handle for you,
    such as variable prefixes and suffixes that reflect types, cv-qualifiers, etc.

    This example is from the real world (meaning delivered, used in production) :

    struct st_checkresult_t       // `st_` for `struct`,
                                  // `check_result` is the struct name
                                  // `_t` for type
    {
      bool mv_st_checkresult_book_ct;          // `mv_` for member value,
                                               // `st_checkresult` the type name
                                               // `bo` for boolean type,
                                               // `ok` is the variable name,
                                               // `_ct` for const qualifier
    
      inline bool mf_st_checkresult_t_get_mv_book_ct() const  // `mf` for member function
                                                               // `st_checkresult_t_` the type name
      {
        return this->mv_st_checkresult_book_ct;
      }
      inline void mf_st_checkresult_t_set_mv_book_ct
      (const bool & p_bor_newvalue_ct)        // `p` for parameter
                                              // `bo` for boolean
                                              // `newvalue` is a mandatory name for setters argument
                                              // `ct` for const qualifier
      {
        this->mv_st_checkresult_book_ct = p_bor_newvalue_ct;
      }
    
    } static g_st_checkresult_v;  // `g_` for global
                                  // `st_checkresult` must reflect the type name
                                  // `_v` for value
    g_st_checkresult_v.mf_st_checkresult_t_set_mv_book_ct(true);

    Funny, isn't it ? This must be sooo much easier to understand and maintain (laughs ...) !

    This is basically :

    static bool is_result_ok = true;

    Which is better, but far from perfect, as this is not really expressive code, but an error-prone one.
    We can add a meaningful namespace to the variable and improve the type, in order to make it extensible and usable, such as :

    #include <string>  // includes assumed for this sketch
    #include <variant>

    namespace my_project::my_component::results
    {
      struct success          { /* additional infos here ... */ };
      struct minor_failure    { int error_code; std::string error_message; /* ... */ };
      struct critical_failure { /* additional infos here ... */ };
    }
    namespace my_project::my_component
    {
      using result_type = std::variant
      <
        results::success,
        results::minor_failure,
        results::critical_failure
        // etc.
      >;
    }

    Which can be used like :

    namespace my_project::my_component
    {
      auto do_complicated_stuffs()
        -> my_project::my_component::result_type
      {
        // complicated stuffs here ...
        return results::minor_failure
        {
          .error_code = 42,
          .error_message = "non-critical step <smthg...> failed"
        };
      }
    }
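
    Callers can then handle every outcome exhaustively, for instance with std::visit and the classic "overloaded" helper. This is only a hedged sketch building on the hypothetical types declared above :

    namespace my_project::my_component
    {
      // The usual "overloaded lambdas" helper (the deduction guide is not needed since C++20).
      template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
      template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

      void handle(const result_type & result)
      {
        std::visit(overloaded
        {
          [](const results::success &)          { /* carry on ... */ },
          [](const results::minor_failure & f)  { /* log f.error_message, maybe retry ... */ },
          [](const results::critical_failure &) { /* abort the operation ... */ }
        }, result);
      }
    }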
  • Improve development practices

    Create or follow an existing guideline and pool of best practices, which you improve on a sprint basis (during retrospective ceremonies), according to developers' feedback and discussions.

    This may also be applied to any other team, like testers, BAs, POs and managers.
    What makes Agile methodologies so powerful is that they are meant to adapt and evolve frequently.

    For C++ development, the C++ Core Guidelines project is a gold mine of quality code & best practices.

    Also, please note that many analysers and compilers, such as MSVC and Clang, implement checks for several popular guidelines.

    If you choose to create your own guideline, I'd strongly advise adding a step during retrospective ceremonies to question and improve it.

    Here is a bunch of questionable rules/policies I have had to go through across my career.

    • "One program, one file"

      Imagine working on files that are 3k to 40+k lines long, with no #include or import directives ... which begin with a ton of incomplete declarations, and copy/pastes of naive implementations of common types such as string representations.

    • "C++, but not templates"

      Therefore, no STL either !
      I remember the interviewer answering my "Why ?" question this way : "C++ templates and the STL are fancy, overcomplicated, useless stuff. Here, we deliver value, we're not code nerds".

      That company, which was a pretty big one, recommended that all developers load into their IDEs a shared file called the "algorithms container" (or sometimes their "shared library", sigh ...), which contained many macros. Each macro was basically an STL algorithm, where types were parameters.
      Feels like C++ function templates, right ? With the difference that types were hardcoded, instead of being resolved at compile time.

      You can easily imagine how hard that code was to maintain, and how silly patching one of these generated algorithms was.

      I still picture that poster on their open-space wall titled "Copy-paste is safety", close to another : "To test is to doubt". Retrospectively, I realize these were not jokes but their guidelines.
      Which was quite surprising, considering how famous the company was at the time, and how many major clients it had.

      Taking for instance the count_if algorithm, which counts the elements for which the predicate p returns true :

      template <class InputIt, class UnaryPredicate>
      typename std::iterator_traits<InputIt>::difference_type
      count_if(InputIt first, InputIt last, UnaryPredicate p)
      {
        typename std::iterator_traits<InputIt>::difference_type ret { 0 };
        for (; first != last; ++first) {
          if (p(*first)) ret++;
        }
        return ret;
      }

      They used something like the following IDE macro :

      int64_t count_if(%InputType% first, %InputType% last, bool (*p)(%InputType% const &))
      {
        int64_t ret = 0;
        for (; first != last; ++first) {
            if ((*p)(*first)) {
                ret++;
            }
        }
        return ret;
      }

      As you may see if you are familiar with C++, this is non-generic, sub-optimal code, both from a maintainability point of view, as mentioned above, and from a usability one.

      Also, did I mention that this company required developers to use Notepad++ and Microsoft Visual C++ 6 (which was released in 1998) as IDEs ?
      Back in those days, Microsoft Visual Studio 2014 and JetBrains' CLion were already very popular.

      Last but not least, the main struggle here was using legacy C++ ... instead of C++1y ! Can you imagine working with compilers - and thus language features and an STL implementation - from 1998 ?

      Honestly, all I picture when remembering 1998 is the 8-year-old boy I was, listening to Aerosmith's "I Don't Want to Miss a Thing" and secretly watching the freshly released Blade movie on a VHS tape with my best friend while eating colorful candies.

    • "The final user is a free tester"

      Back in those days, my manager, who was also a tech lead, explained this strategy to me :

      • You do not need to hire testers, so you save money
      • Bugs create value : if a customer reports a bug, the company can sell them a "premium maintenance pack" to ensure fast delivery of fixes.
      • Any delivered fix will of course be rolled out to all other customers, thus improving the product's quality
    • "No modern C++ features"

      That company worked with up-to-date, modern C++ compilers, which is great, but also enforced the use of the -std=c++03 option.

    The existence of such policies is difficult to justify, as they decrease both productivity and the quality of codebases and products.
    It is however interesting to attempt to understand how and why these companies shoot themselves in the foot.

    Often, not questioning the way we are used to doing things leads to these aberrations.
    What was optimal at the time may no longer be. Technical or business constraints may no longer exist.
    Most companies, especially in the IT area, must welcome with open arms the changes that result from modernization.

  • Better test coverage

    The more unit tests and functional tests, the better. However, they are not an alternative to integration and stability tests.

    For UTs, create a test hierarchy, from top-level components down to low-level ones. Any test failure must unwind the hierarchy, invalidating all the components above.
    Ensure that every component, without exception, has an associated unit test.

    For the record, I attempted a few years ago to create a single-file, header-only C++ test library that enforces test hierarchy by design. It is, I must admit, far from perfect, but good enough to serve its purpose.

    The earlier you detect a bug, a quality flaw, or an undefined behavior, the less impact it will have on your schedule.
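
    To make the test-hierarchy idea more tangible, here is a minimal, hedged C++ sketch (plain code, not the library mentioned above; all names are made up) in which a component is only considered valid if its own check and all of its children's checks pass, so a low-level failure automatically invalidates everything built on top of it :

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct test_node
    {
        std::string            name;
        std::function<bool()>  check;     // the test itself
        std::vector<test_node> children;  // lower-level components this one depends on

        bool run(int depth = 0) const
        {
            bool children_ok = true;
            for (const auto & child : children)
                children_ok &= child.run(depth + 1);

            // a component is only valid if its own check AND all of its children pass
            const bool ok = children_ok && check();
            std::cout << std::string(depth * 2, ' ')
                      << (ok ? "[PASS] " : "[FAIL] ") << name << '\n';
            return ok;
        }
    };

    int main()
    {
        const test_node suite
        {
            "top-level component", []{ return true; },
            {
                { "parser",  []{ return true;  }, {} },
                { "storage", []{ return false; }, {} } // a low-level failure ...
            }
        };
        return suite.run() ? 0 : 1; // ... invalidates the whole hierarchy above it
    }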

  • Keep your staff up-to-date

    As technology evolves, new needs emerge, and thus new jobs to fulfill them.

    A team is not a bunch of so-called "full-stack devops" (or former "computer scientists") who work with any language or technology, and who will take care of every step from requirements creation to delivery, including business analysis, architecture design, development, tests, packaging, integration, validation, etc.
    Everyone has a job, a particular skillset, and a speciality. For instance, it would be as counter-productive to ask a C++ developer to code in JavaScript as to ask a developer to design and run tests.

    Would you ask a blacksmith to chop trees ? No. You would ask the lumberjack.
    Of course, basically, both jobs require the employee to hit stuff with a metal tool. But the way to do it is very different, and is a whole other skill & job.

    This is very disconcerting for entrepreneurs who are new to the software development world.
    At first, they expect that the combination of a "great idea" and a fistful of hired geeks will deliver value through stable apps.

    Make sure you both train your staff in the latest releases, standards & technologies, and complete your team with the new positions/jobs that emerge.
    Consider this a short-term investment. Employees will be more productive and more enthusiastic, as they feel supported by their hierarchy and their skillset (and thus employability) does not decay over time.

    Training your staff may take several forms, for instance :

    • Short daily training, for instance 30 minutes after the morning daily scrum/stand-up meeting
    • Once a week, about 2 hours : inquire about your employees' specialities, and let one of them prepare a lecture about a skill or trick that may immediately help every other team member on a daily basis.
    • Every 3 months, plan a week-long training led by a training specialist
    • Create your own training program & schedule : ask your staff what they need, what they want, and what skills they urgently need to acquire in order to make their job easier and faster.

    Note that the first two proposals above may also help from a team-building perspective. Sharing knowledge is a great way to motivate employees and create new bonds.
    It also creates a virtuous circle : when ego comes into play, the more lectures an employee attends, the more he or she will be willing to create their own, and then present it.

Technical debt management strategies

Mitigating the introduction of new technical debt is one thing; designing a long-term strategy to deal with it is another.

As an entry point for this article, I created a one-week poll on Reddit /r/cpp to collect data.
While I'm well aware that opinions about technical debt can differ from one profession - or even one programming language - to another, this experience proved to be a prolific source of interesting ideas. Several survey participants shared strategies and opinions on how to effectively deal with technical debt, as well as how to sell it to managers. So, if you are interested, I invite you to read them.

The question was : "How much do you care about & manage technical debt ?", here are the results :

reddit_poll_TD_how_much_do_you_care

No room for technical debt management ?

As you can see, the participants mainly voted for the options :

  • "From daily to weekly basis" (~31.25%)
  • "Not scheduled / When I have time" (41.91)

reddit_poll_TD_how_much_do_you_care_pie_chart

We can cluster these options into two groups: well-defined technical debt management strategies and undefined ones. Without any strong trend, we see that for this sample of participants, most clearly do not foresee any reliable tactics to reduce TD.

reddit_poll_TD_how_much_do_you_care_pie_chart_analyse

How to analyse it ?

This best reflects what we can undoubtedly consider as the root cause of technical debt. The fact is that most developers and project managers are not aware of how disastrous the impacts of TD may - and most likely will - be.
Even when they are, I have observed that they tend to prioritize tasks poorly, promoting others such as new features or delivery dates.
However, we saw previously that these three elements are all part of the software development lifecycle, and none should be overlooked.
Thus, the underlying question is : how to balance them well, and in what proportions ?

Long-term strategies

We previously mentioned a set of good practices to avoid the introduction of new technical debt. However, this is not bulletproof : acting upstream is not enough, as the amount of TD which already exists still remains.
This is why it is important to opt for a long-term, sustainable mitigation strategy.

Prerequisites

Please note that most strategies described hereunder imply the following prerequisites :

  • An Agile development approach
  • An accurate definition and tracking of existing technical debt
  • Prioritized TD reduction tasks
  • Well-established coding standards and guideline
  • Automated tests : non-regression and performance in particular.

If your current situation lacks one or more of these elements, it is way safer to improve your workflow process first, prior to integrating any long-term strategy for TD reduction.
As mentioned in previous parts, this will involve code review, strong automated test coverage, software architecture design, and CI/CD.

Here is a non-exhaustive list of strategy types currently used by several companies, which we will examine using the following criteria :
how often these strategies need to be applied, what their inherent costs are, and how effective they are.

Here, I want to send my warm thanks to the attendees who completed surveys, answered questions, and shared ideas on LinkedIn, by email, and especially on Reddit and over the phone.

The small steps strategy

As a developer, quietly ("submarine"-style) refactor the components you need to use so that they best match your needs; otherwise, they remain hardly usable, or not usable at all.

Total blackout

"Desperate diseases need desperate remedies", as the saying goes.

This procedure consists in creating a sprint dedicated to the repayment of first-rate technical debt items. Even if no new functionality can be programmed during such a sprint, there may still be limited leeway to solve critical issues.

TD-reduction-strategy_total_blackout

Even if this strategy is the most efficient one - from a TD reduction perspective - it may severely impact the delivery schedule.

  • Most suitable case : when the main objective is to improve stability and/or performance
  • Efficiency : high
  • Cost : a whole sprint for a whole team
  • Frequency : from time to time, e.g. after each milestone or major release

Codebase warden

Devoting all or a major portion of an employee's working time to technical debt relief may seem odd at first. However, after experimenting with this strategy myself and discussing it with others who have also done so, it seems like a valid - but perfectible - strategy.

TD-reduction-strategy_codebase_warden

You may then wonder why. Because lead-developers often - if not always - have an up-to-date, big-picture view of both the project's design and its implementation details, they are privileged actors to take action on TD.
As their role often involves numerous code reviews, they are able to take early action to prevent the introduction of critical TD elements, rejecting developers' commits if they feel that the risk-benefit ratio is not worth it.

Therefore, they can refactor architecture design as background tasks, while keeping track of and prioritizing technical debt reduction tasks.

  • Most suitable case : a lead-developer who acts as a bottleneck to the codebase
  • Efficiency : average
  • Cost : a fully-allocated employee per team
  • Frequency : n/a (steady)

Sprints spacers

For this strategy, a timespan is allocated to background tasks between sprints. For instance, in the case of 3-week-long Agile sprints, it can be an entire week in between.

TD-reduction-strategy_sprints-spacers

In addition to some leeway to deal with critical issues, the whole team spends this time dealing with the technical debt that was introduced in the sprint that just ended, but not only that. Other recurring tasks such as team training, technical surveys and internal development competitions can be realized here.

This is a well-balanced solution between fast deliveries which often involve dirty quick-wins, and refactoring actions that reduce TD.
Note that such a technique only deals with newly introduced TD items, and is thus closer to damage control than to deep debt management. Therefore, it must be used from the very beginning of a project, or supplemented by another strategy to make it sustainable over the long term.

Also, without strong discipline, experience has shown that such a period tends to be used to cover delays, thus extending the sprint, which makes it ineffective for its primary function.

  • Most suitable case : when a team needs to deliver many features fast
  • Efficiency : limited in time
  • Cost : an entire development team for a week after each sprint
  • Frequency : once after every sprint

"Cleanup fridays"

In a top-rated comment below my original Reddit thread, the user brenoguim detailed a solution that seems sustainable.
What's interesting here is that, after trying several other strategies, his team opted for this one 3 years ago, which gives enough feedback to consider it valid.

Here's the deal : once a week, a subset of the developer teams spends an entire day together in a boardroom, working on accumulated technical debt. This takes the form of a training session where best practices are shared, while collective intelligence is used to improve areas of the code, as well as architecture designs.
To improve these aspects of collaboration, coordination and emulation, the whole group works on similar items.

TD-reduction-strategy_cleanup-fridays

Every six months - which seems to be the right amount of time to carry out relevant refactoring work - the group changes (except for the leaders), so that everyone can benefit from the advantages it provides.
Although this group can receive requests from managers, the latter cannot demand the prioritization of certain tasks, nor any results. This ensures that the group remains autonomous, while protecting its objectives from any form of intrusion that could diverge from TD management.

  • Most suitable case : when a team needs stable productivity over time
  • Efficiency : high
  • Cost : cheap - only a few developers, once per week
  • Frequency : once per week

The human factor

As mentioned earlier, an amount of technical debt experienced as high by a development team can affect workers on a psychological scale.
The point of convergence with the elements raised in the previous part is the following : the human factor matters.

Over the last decade, many research organizations and economic monitoring institutes have observed a significant growth in work-related psychological disorders.

Without claiming to be able to precisely calculate their impact on productivity, we should at least try to understand their nature,
and thus what causal relationship may exist between technical debt and workers' mental state.

Here are some of the rising risks that have become common nowadays :

The burnout syndrome

Christina Maslach and Michael P. Leiter dedicated a huge part of their work to the burnout topic.
For instance, they wrote the Maslach Burnout Inventory and Burnout: 35 years of research and practice, which are widely cited across other studies.

One of these citations appeared in 2015, in a study commissioned by the French government, whose title can be translated as "The burnout syndrome - Better understanding to act better".

This paper mentions Maslach and Leiter's definition of the burnout syndrome :

"The gap between what people are and what to do. It represents an erosion of values, dignity, spirit and will - an erosion of the human soul. It is a suffering that is gradually and continuously reinforced, sucking the subject into a downward spiral from which it is difficult to escape ..."

Maslach and Leiter, 1997

Also, in another paper soberly titled "Burnout", published in 2007, they define burnout the following way :

Definition and Assessment

Burnout is a psychological syndrome of exhaustion, cynicism, and inefficacy in the workplace. It is considered to be an individual stress experience embedded in a context of complex social relationships, and it involves the person's conception of both self and others on the job. Unlike unidimensional models of stress, this multidimensional model conceptualizes burnout in terms of its three core components.

Burnout Components

Exhaustion refers to feelings of being overextended and depleted of one's emotional and physical resources. Workers feel drained and used up, without any source of replenishment. They lack enough energy to face another day or another person in need.
The exhaustion component represents the basic individual stress dimension of burnout.
Cynicism refers to a negative, hostile, or excessively detached response to the job, which often includes a loss of idealism. It usually develops in response to the overload of emotional exhaustion and is self-protective at first - an emotional buffer of detached concern. But the risk is that the detachment can turn into dehumanization. The cynicism component represents the interpersonal dimension of burnout.
Inefficacy refers to a decline in feelings of competence and productivity at work. People experience a growing sense of inadequacy about their ability to do the job well, and this may result in a self-imposed verdict of failure. The inefficacy component represents the self-evaluation dimension of burnout.

"Burnout", 2007
Maslach & Leiter

The commissioned study mentioned before highlights an interesting factor of burnout (page 14) :

Value conflicts and impeded quality

"Losing the meaning of one's work or not finding it, having the impression of doing useless work, can be provoked or amplified by the fact of not being able to discuss with colleagues or management on the objectives and ways of doing things his work."

"Le syndrome d’épuisement professionnel ou burnout", 2015.
ANACT, INRS and the French Ministry of Labor and Employment

"Impeded quality" is a key point here : technical debt often act as an inhibitor factor that prevent software quality from being raised.

In a study led by Corporate Balance Concepts, published in the UK Times in September 2015, an estimated 5 percent of the thousand-plus employees who participated suffered from burnout.

The boreout syndrome

As the exact opposite of the burnout syndrome, the boreout syndrome represents employee exhaustion through boredom. The latter appears to be much more frequent than the former.

The work-value

However, our society - since it first defines the individual by their professional function - values heavy workloads, which makes the boreout syndrome difficult to understand, or even to conceive of, for some people. Thus, social pressure - applied to those who are labeled as time-wasters - may become a major stress factor that strongly impacts the individual's psyche.

Nowadays, work is promoted as a main source of fulfillment and happiness. Employees are required to thrive as participants in this promoted collective myth, demonstrating how much satisfaction they get from being challenged and - as seen in the previous section - sometimes pushed to their limits.
The French have an expression for this : an "Épinal print" (image d'Épinal), which refers to an emphatically traditionalist and naïve depiction of something, showing only its good aspects.

We no longer speak of technicians, developers, or computer scientists - but of passionate enthusiasts.

Those who have the misfortune - for whatever reason - of not being able or not willing to participate in this image then find themselves on the sidelines : perceived as outcasts.
A dissonance, the perception of which can have extremely serious effects on the individual.

Getting paid to do nothing

First of all, you might think that being paid to do nothing is a dream for any employee. However, the facts show that being underloaded makes employees bored because they lack challenges, which leads at first to disinterest and frustration toward work, and then - as with burnout - to depression, anxiety attacks, and in some cases, suicide.

More generally, bored people are more likely to die young than those who are not bored.
For more information, see "Bored to death?", International Journal of Epidemiology.

According to research led by America Online in which more than 10k employees participated, the average worker declares frittering away 2.09 hours per day, rising to 2.2 hours in software & internet businesses.

Time-wasting reasons, and their rates :

  • Don't have enough work to do : 33.2%
  • Underpaid for amount of work I do : 23.4%
  • Co-workers distract me : 14.7%
  • Not enough evening or weekend time : 12.0%
  • Other : 16.7%

While the rest of the study is interesting, let's focus on the main reason the participants mentioned : not having enough work to do.

A win-win solution

There is a strong dissonance here. While staff managers often invoke the lack of time as a reason not to address technical debt, shouldn't that wasted time be spent on tasks that matter, such as this one precisely ? It sounds like a wise and profitable strategy.

The lack of work is not the only cause of boreout. There are three other major factors :

  • Persistent routine
    Repetitive tasks, although stimulating at first, are very likely to become uninteresting.
  • Overqualification
    The employee is overqualified for the tasks to which he or she is assigned.
  • Interruption of tasks
    A task the employee is working on is interrupted for no reason - or for a reason he or she considers irrelevant (e.g. re-prioritizing tasks during a sprint can be difficult for staff).

Also, if an employee senses a trend of getting less and less work to do, he or she may feel sidelined and professionally unattractive, and may fantasize - clinically speaking - about a hypothetical dismissal. This greatly increases his or her level of stress, with all the consequences that entails.

In order to both prevent and resolve potential or actual cases of boreout, solutions similar to those for burnout exist. Any new stimulus would work, such as a position reorganization, job diversification, career development, increased responsibilities, an internal transfer to another office, or even a whole new job.

The brownout syndrome

Any electrician would describe a brownout as a drop in the voltage of a power supply.
Psychologically speaking, it refers to a new work-induced pathology which results in a decrease in the energy of an employee, who is demotivated and therefore experiences a lack of interest in his profession.
Here, it is not the light that fades or flickers, but the capacity of individuals to perform.

What causes such trouble ?

The brownout psychological disorder stems from an employee's loss of meaning and apparent utility in his job. A malaise born from everyday tasks which are perceived as absurd. If a qualified person, recruited for their diploma - thus knowledge and skills -, is employed in meaningless tasks, even in negation of their skills, they may find themselves in a brownout.

Here, the feelings involved that have a profound impact on the employees' psyche are those of absurdity, emptiness, meaninglessness, and worthlessness. Because an employee defines his social function by a job that he describes as useless, this feeling is reflected in - and therefore applied to - the mental representation he has of himself.
Thus, this impact reaches the individual's identity, and personality as well.

Nadia Droz, a psychologist specialized in psychosocial risk prevention - which involves occupational health and suffering at work - describes the brownout disorder as an "internal resignation".
Because work is now focused on performance rather than valuation, with very detailed tasks, the profession's core has become invisible, which leads to this feeling of worthlessness.

The brownout disorder was first defined in a book titled "The Stupidity Paradox : The Power and Pitfalls of Functional Stupidity at Work" (Profile Books, 2016), authored by Andre Spicer (professor of organisational behaviour) and Mats Alvesson (professor of business administration).
They describe it as a wake-up call for "smart organisations and smarter people", putting the pros and cons of functional stupidity into perspective.
From the book's summary : they explain what makes a workplace mindless, why being stupid might be a good thing in the short term but a disaster in the longer term, and how to make workplaces a little less stupid by challenging thoughtless conformity. It shows how harmony and action in the workplace can be balanced with a culture of questioning and challenge, and how personal satisfaction can lead to organisational success.

In order to detect brownout, here is a list of behaviours that may alarm you :

  • Avoiding work
    The professor of organizational psychology and health Cary Cooper explains that when experiencing brownout, because employees no longer feel invested in their work, they tend to avoid it as often as possible, seeing most excuses as valid. This is close to the avoidance reflex, a survival mechanism triggered by an element perceived as dangerous. E.g. an employee who chains cigarette and coffee breaks, or who prefers to stay at home because of early cold symptoms, etc.

  • Never-ending tasks
    No matter how much time they spend at work, employees feel like the tasks they are assigned to are endless.
    They simply cannot take a step back, thus cannot estimate completion rates. This is a major source of anxiety, especially when managers or clients ask about forecast items such as delivery dates.

  • Questioning career fundamentals
    While questioning oneself is a healthy mechanism for continuous self-improvement, doubting without relying on facts can have the opposite effect. As they mull over the past - particularly what they perceive to be points of divergence in their personal history, such as key choices - employees are likely to fantasize about alternative lives they might have had. This often results in speculative and negative thoughts, as human nature tends to sublimate memories. In the end, such a gap between who the individual is and what they think would have been better life choices often results in a deep sense of depression.

  • Not participating
    Due to a deep lack of interest in their profession, employees who experience this drop in tension no longer want to participate : they opt for passive behavior, as opposed to proactivity.
    This is easily noticeable during meetings, peer-programming sessions, and follow-up interviews : an individual who suffers from such a syndrome is deprived of any form of intellectual stimulation, therefore of enthusiasm.
    This phenomenon spreads to social and family life, which accentuates their troubles by isolating them from others.

  • Not feeling fit nor healthy
    Symptomatically, employees who experience brownout feel tired, unenergetic, and lose their sense of humor.
    Disgusted by the image they reflect, a common response from employees is to turn to basic needs, seeking satisfaction in junk food or sex in excessive proportions. What is pernicious here is that such behavior tends to accentuate this feeling of unease - by isolating them - and is likely to impact their health.

  • Wondering if supervisors like what you deliver
    When they crave recognition while lacking feedback, employees feel confused.
    The result is a persistent - and therefore inhibiting - questioning of what managers would think of their work. Again, by speculating, individuals fantasize over and over again about hypothetical scenarios. This extra pressure is a major source of stress, which affects their ability to concentrate.

In a study published in the UK Times - mentioned in the burnout section above - Corporate Balance Concepts estimated that 40% of people suffer from brownout, which should alarm us.

What differs from the two previously mentioned syndromes is that its manifestation is not that obvious : no nervous breakdown nor panic attack.
Often, staff managers only realize an employee's malaise while reading his resignation letter.

The real problem here is that this phenomenon seems to affect top performers in particular. Losing an employee is one thing, because it increases staff turnover - and therefore all the associated costs, such as those of onboarding and ramp-up - but helplessly losing a top-tier profile to another company, let alone a competitor, is painful. Not to mention that this can endanger a project in terms of deadlines and/or stability.

Beside top performers, this problem may affect anyone, and is exacerbated when it comes to "bullshit jobs", a concept theorized by the anthropologist David Graeber in 2018, in a book soberly named Bullshit Jobs: A theory (Simon & Schuster editions).

Bullshit Jobs: A theory

The author takes examples that perfectly fit with our brownout and technical debt impacts thematic, which are :

  • "Duct tapers, who temporarily fix problems that could be fixed permanently, e.g., programmers repairing shoddy code".
  • "Taskmasters, who manage - or create extra work for - those who do not need it, e.g., middle management, leadership professionals"

However, the root cause of brownout is very simple, as is the way to fix it.

Staff managers must integrate the concept of brownout when assigning tasks.

A good job assignment should increase the cognitive arousal of the employee, giving him space to express his skills, so that he can perform at his own level. This is obviously beneficial for both parties.

"In every job that must be done, there is an element of fun. You find the fun and - SNAP - the job's a game."
Mary Poppins, Walt Disney (1964).

The question now is : how can staff managers know if a task will be interesting enough for a particular employee ?
Here is a bunch of questions to answer when assigning a task :

  • Does the task match the skills the employee has - and wants - to develop ?
  • Is it challenging enough ? Will it trigger intellectual stimulation, thus creativity ?
  • Does it guarantee a sufficient task plurality for the sprint being planned ?

Note that these should be weighed according to the two types of profile patterns that exist :

  • Sharp & narrow: a restricted set of related skills, each well mastered
  • Large & limited: A wide range of skills, not necessarily related, mastered at an average level

This is why staff managers need to keep an up-to-date skills matrix of their staff, which accurately illustrates three kinds of information for each employee :

  • What are his/her skills (including soft skills) ?
  • What skill level should be associated with each one ?
    Here, we must distinguish two types of scores : the one perceived by the employee - which is subjective -, and another - more objective - which may be the result of peer evaluation or such.
  • What skills does the employee wish to develop ?

In order to be as accurate as possible, each skill should be split into subskills, which may then form a hierarchical tree.
An idea here may be to use these data to create other metrics that highlight trends, such as :

  • Existing gaps between self-evaluations and peer-evaluations
  • Bind scores to wishes
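As a rough illustration, here is a minimal C++ sketch of what one entry of such a skills matrix could look like - the names, fields and scales are hypothetical assumptions, not a prescribed format :

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of one skills-matrix entry, per (employee, skill) pair.
// Field names and scales are illustrative assumptions only.
struct skill_evaluation {
    std::string name;                        // e.g. "C++", or a subskill such as "C++ / templates"
    double      self_score = 0.0;            // subjective score, e.g. on a 0-5 scale
    double      peer_score = 0.0;            // more objective score, e.g. averaged peer reviews
    bool        wants_to_develop = false;    // does the employee wish to grow this skill ?

    std::vector<skill_evaluation> subskills; // optional hierarchical refinement

    // trend metric : gap between self-perception and peer evaluation
    double perception_gap() const { return self_score - peer_score; }
};
```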

Here are five hypothetical examples to illustrate the idea mentioned above :

  • Hubert, a senior C++ developer

    Hubert rated himself 3.8 out of 5 for C++. Peer review revealed a slightly greater score of 4.1, which is close enough to be coherent. For the current project, there are two available tasks :

    • A data binding from JSON to C++ types.
    • A new collection of type traits that involves the detection idiom, thus SFINAE.

    Here, the former may appear too easy (and therefore boring) for an experienced developer. It is therefore preferable to attribute it to a newcomer, as part of his/her onboarding, while attributing the second to Hubert, who will then express his skills as best as possible.

  • Philip, an experienced C#/.NET developer

    Philip loves C#/.NET : he has demonstrated a great ability to fulfill the tasks he was assigned, quickly and efficiently. Once again, there are two tasks to distribute :

    • A high-performance HTTP request analyser, which will monitor data traffic and may trigger alerts.
    • A WPF GUI dashboard that will render traffic metrics and show triggered alerts.

    Here, at first glance, the former would fit Philip's profile perfectly. However, the peer review pointed out that even though he has a good command of C#/.NET, he still has gaps in high-performance programming.
    Accordingly, it might be safer to give him the second one. However, in order to deal with his potential frustration, it may be interesting to offer him adequate training, or even to let him do the task under a mentor's supervision, who will guide - and teach - him along the way.

  • Zoidberg, a Go enthusiast

    After having completed several personal projects, Zoidberg proclaimed himself a "Go expert". However, there is an important gap between self-evaluation and peer-review scores : 4.5 out of 5 for the former, and 2.3 out of 5 for the latter.

    As the dissonance between perception and factual reality is a strong factor of ill-being that may really endanger the employee's self-confidence, a wise strategy here would be to assign him to tasks that will help reduce the gap : sufficiently challenging, but not likely to cause too much discomfort.

  • Hermes, 20+ years as C developer

    Hermes has spent the past 20 years working with the C programming language, mastering every critical case. He demonstrates an undisputed ability to carry out the tasks entrusted to him with seriousness and efficiency. Most of his colleagues consider him a top performer.

    However, as he has already asked several times to switch from C to C++, it would be prudent to initiate a transition, which will guarantee a stable increase in skills on the one hand, while on the other putting that switch's impact on the project and staff into perspective.
    Also, even if the current project has a greater need for a C developer than for a C++ one, this does not necessarily mean that refusing his wish is a wise strategy : the frustration of being restricted might severely diminish the employee's motivation, thus his productivity.
    Here, if the project cannot afford - in terms of delays or budget - such a transition, the lesser harm might be to hire a new C developer, initiate a period that will ensure the transfer of skills, then integrate Hermes into another team.

  • Leela, a senior full-stack developer

    Leela is a reliable employee who has been working on the project for 5 years as a full-stack developer. However, she has expressed the wish to stop coding, willing to switch to software architecture.

    As she already knows the project's internal structure, it might be interesting here to put her to the test with increasing responsibility and autonomy. Firstly, to monitor how well she performs these tasks, but also to let her confirm - or disprove - that she is comfortable with it.

Through the above examples, we have seen that staff managers must find the right balance between the wishes of their collaborators and the constraints related to the project.

What about sustainable retention strategies ?

With such a comprehensive view of the skills and wishes of their teams, staff managers can then prevent employees from becoming bored or overworked, thus avoiding most work-induced psychiatric disorders that can lead to departures.

An interesting behavioral psychology mechanism involved here is known as the reward system. To ensure the stability of employees through a feeling of fulfillment, a common method is the use of operant conditioning - or associative learning -, where rewarding stimuli work as positive reinforcers to create a virtuous circle.

virtuous retention system

Challenge :
Here, as long as the challenges are well balanced, employees will be successful in completing the tasks assigned to them, which, along with a sense of self-reward, is likely to lead to appetitive behavior.
In short, it makes them want to do more than just "good enough".

Reward :
To reinforce this process, additional external rewards are important, such as praise for achievements - as long as it does not prevent constructive criticism.
It could be a note from a manager showing how much s/he appreciates their work, a congratulatory speech, a team dinner at a restaurant, etc.
Everything that makes high-performing employees feel appreciated.

Develop :
Finally, another virtuous element is the development of skills. A well-designed learning curve, whose content directly matches the project's needs, is a key to staff retention.
Not only will this guarantee the employee that he is evolving - thus reassuring him that he remains attractive from the perspective of the labor market - but it will underline that the source of this feeling of security is the company he works for.
This creates a strong - conscious or unconscious - bond.

Also, if for some reason challenging employees on a daily basis using project tasks is not possible at a given time, nor training them, here are some valuable options :

  • Frequent individual interviews
    Get up-to-date information about employees' feelings and wishes.
  • Internal contests
    Create a positive emulation where employees get to challenge each other to create the best components, in adequacy with the current project.
  • Partially off-work activities
    Create - and sponsor - a lab, or any space where employees may bond and work together around a project they value. Even if it is not related to your company's activity, it may bring visibility and thus help to hire new talents.
    E.g. "Look how cool our company is : here we let employees build a robot that plays football during work time".

About game design and addictive behaviors

Remember the last part of Mary Poppins's (Walt Disney, 1964) quote above ?

  • "[...] the job's a game"

Note that the process we described is similar to what game design calls a core loop.
Take video game design for example : compulsion loops are deliberately used to create motivation for players. Motivation which - psychologically speaking - is defined as the experience of desire or aversion, involving both objective and subjective aspects. In game design, core loops are defined the following way :

"A core or compulsion loop is any repetitive gameplay cycle that is designed to keep the player engaged with the game. Players perform an action, are rewarded, another possibility opens and the cycle repeats."
Wikipedia, compulsion loop : "In game design" section.

Compulsion_loop_for_video_games
Source : Wikipedia, Masem user, CC BY_SA 4.0.

What makes such loops so powerful are the neurochemical substances that users' bodies release as rewards - e.g. dopamine - which are fundamentally addictive. Also, multiple studies about drug addiction demonstrated that people who are addicted to drugs often prefer the part of the process related to anticipation & expectation (such as planning, buying, and preparing) rather than the consumption itself.
Why ? Because the human body, when experiencing a state of expectation, releases more neurochemical substances to keep the individual's motivation high so that he/she continues through the process.
However, this immediate reaction to the promise only works on individuals who have already gone through the entire process at least once - or any similar process - so that they can project themselves, and therefore anticipate.

Also, the experiments of the behaviorist Burrhus Frederic Skinner (1904 - 1990) - also known as the most influential psychologist of the 20th century - with the operant conditioning chamber (also known as the "Skinner box") demonstrated that random rewards and variable time between rewards make animals quicker to learn the rules of the positive reinforcement system.
Since then, multiple studies in gambling and video games demonstrated that this applies to human beings, resulting for instance in random loot-box mechanisms (random content quantity & quality).

How to transpose such system as retention strategy ?

The application of game-design elements - or principles -, such as core loops, in non-game contexts is a mechanism called "gamification".
It may for instance result in positively induced behaviors which use the reward system to improve employees' involvement, thus their performance.

gamification core loop applied to work

Here we find a one-way accountability system in which managers dictate the set of rules, providing a framework within which employees must perform their actions.
The analogy with games is valid in many ways : managers here act as game masters, while employees are players.
By controlling the nodes, the former are able to adjust and refine the process to improve performance : precisely choose training topics to better meet the needs of the project while meeting the wishes of the employees, select the topic and difficulty of challenges depending on employees' skillset and mindset, as well as influence reward frequencies in specific cases.

These slight adjustments can help influence the feelings of the participants to create a stable and lasting - therefore predictable - performance.

What makes such a process so efficient from a psychological perspective is that the feeling of satisfaction is ephemeral, while that of dissatisfaction will persist until the lack is satisfied. In both cases, the individual wants to iterate again.
To function at its best, iterations on the process must be fast. In previous parts, we already mentioned that well-refined subtasks - which are the result of a deep task hierarchy - are keys to the efficiency and visibility of the work. This has never been more important than here, as it impacts the integrity of the system and therefore the psyche of its users.

However, just like game designers attempt to prevent any soft-lock, we should avoid any case of deficiency. Typically, an employee who repeatedly fails to complete the tasks to which he is assigned should be a wake-up call for managers : they should then check whether the challenges are well balanced and match the trainings - and if necessary, make adjustments accordingly.

However, randomness in reward frequency must not prevent the establishment of the challenge-success-reward causal relation in the first place. This is why it is best to let individuals experience a full cycle first, which will mentally implant the related mechanisms.

Also, be aware that because such a process *(ab)*uses the individuals' reward system, it may result in workaholism-related psychiatric disorders such as OCD (obsessive-compulsive disorder), stress - especially when lacking rewards -, or social isolation.

This is why such a system must be carefully designed and balanced, while giving special and permanent attention to users.

The Yerkes & Dodson law

Initially developed by the psychologists Robert M. Yerkes and John Dillingham Dodson in 1908, the Yerkes-Dodson law establishes a relationship between pressure and performance.

Illustrated with a bell curve, it shows how performance is distributed according to the level of arousal.
Indeed, the higher the arousal, the better the performance - but only up to a certain point, after which the trend begins to reverse, forming a perfectly symmetrical curve.

Thus, we may wonder how to reach and stabilize a team's level of arousal at that very point, which we can then consider as the optimal stress/arousal rate.
Nowadays, speaking of stress level, we tend to focus on what managers call the performance area, which is the top-tier portion of the curve. Considering a section rather than a particular point provides an interesting flexibility, while avoiding any precise calculation.

For more information about this law, you may read the original publication, which has reached more than 2.8k citations over the years.

Psychologists consider two types of thoughts that inhibit an individual's ability to concentrate, by generating anxiety :

  • Rehashing the past
  • Worrying and/or fantasizing about the future

They can be directly correlated with the Yerkes-Dodson law : stress - up to a certain amount - or fatigue can act as a way of escaping these negative thoughts, thus avoiding the generation of anxiety that would otherwise occur, subsequently increasing the ability to concentrate and, ultimately, individual productivity.

Yerkes_Dodson_law

As illustrated in this diagram, from the perspective of the x-axis, which represents the arousal level, the first and the last parts are to be considered very risky, as these are major factors of boreout (for the first) and burnout (for the latter).

This is why companies and their staff managers must take into account - if not take responsibility for - the level of arousal of their employees. First, to optimize the performance of the latter, but also to improve forecasts by managing the risks of occupational psychiatric disorders.
Simply put, an employee who is underperforming or absent is likely to have an impact on deadlines, while increasing costs, thus decreasing the reliability of their team to deliver quality products on time.

In conclusion, employees' well-being matters, as it directly impacts performance.

Work-induced psychiatric disorders : Summary

As mentioned in the previous parts, work-induced psychiatric disorders have a major impact on an employee's efficiency, thus on his productivity.

However, many other external factors may result in a drop of productivity, such as fatigue, personal issues (family, money, addictions, ...), diseases, etc.

What is important here is to differentiate causes from consequences; and in the case of a cycle, find the weakest node to solve it first.

We previously saw that worrying and/or fantasizing about the future is a major generator of anxiety. When I discussed an early draft of this paper with contacts (who work at several companies in different countries), asking them how they feel the impact of TD, most of them answered basically the following :

  • In the case of technical debt, when an employee is forced to use outdated technology or tools, he may consider his experience to be worthless from a labor market perspective.
    In a world where the average length of consulting assignments is 3 years in most countries, working for the same company for life is no longer the norm. Each employee in the IT field must keep a competitive skillset and CV.
    What would happen if an employee perceives that his current experience will bring nothing to the skills he wants to develop, and that it is likely to make him unattractive to recruiters ?

    Note that the same phenomenon exists for companies that use too large a proportion of internal technologies in their ecosystems, or simply that are not productive enough to make the experience significant in the end.

The escape response

Facing such a situation, most people tend to feel trapped - which will cost them psychologically - and naturally feel the urge to escape.

From research on animals, scientists observed that this behavior, while always caused by possible predation, manifests itself in different ways : camouflage, freezing behaviour, and fleeing, among others. We can then wonder how these behaviors are reflected in humans, since they are basically animals.
This is correlated to the avoidance response, which is defined as a responsive mechanism that prevents an aversive stimulus with unpleasant outcomes from occurring.

A sense of oppression associated with an emergency is a well-known social engineering mechanism for deceiving people, as the target loses his/her ability to take a step back, resulting in a biased thinking process. In such a case, most people forget critical thinking, thus behaving in response to feelings rather than facts.
This is even accentuated in the event of psychological distress or existing weakness.

Here are the three main behaviors then observed, which may differ according to the subject's ability to take a step back while facing such a situation.

  • Unawareness & passivity

    If the employee feels helpless and has consequently lost all motivation, slipping into inertia in both professional and personal life is very likely. It translates into a passive attitude : unproductive and docile.

    This passivity results in the individual waiting for an outside factor to come and resolve the situation that he considers hopeless.
    In the end, several testimonies describe a surprising feeling of liberation when people are fired.

    A commonly observed behavior here is to comfort oneself in a form of Stakhanovism, countering feelings of dissatisfaction by completing as many tasks as possible. However, producing more than what is needed by working harder and longer leads to more fatigue - both physical and mental - and may ultimately result in complete exhaustion. Such auto-alienation, similar to workaholism in some ways, is not sustainable as it underlines a mental distress.
    Note that efficiency here is only defined by how quickly a task is completed, which does not match - and often counters - the project's needs.
    What fundamentally differentiates Stakhanov's physical labor (which was coal mining) from modern office jobs is that quality matters.

    As mentioned earlier, hampered quality and promoted quick wins are a major crux of the technical debt vicious circle.
    Such behavior is very likely to accentuate this tendency by accelerating the iterations of the cycle : completed tasks increasing in quantity while decreasing in quality; which will have an exponentially disastrous impact on the project afterward.

    On the other hand, if the employee realizes his/her situation and the consequences it has, then determines that the source of discomfort is his/her current job, s/he will try to fix it by using one or both of the following strategies, depending on his/her level of optimism :

  • Attempt to resolve

    Proactive and full of suggestions, the employee will first alert managers, in the hope of sensitizing them to existing problems, and eventually suggest resolution strategies.
    Based on the latter's feedback, he/she may adapt his/her behavior accordingly, trying harder or falling back to the next behavior described hereunder.

  • Escape

    This option may take several forms, according to the employee's perception of the situation, whether it is objective or, much more likely, subjective.
    He/she can either decide that his/her job is the root cause of his/her troubles and attempt a career switch, or simply find another employer. In the first case, becoming a decision maker (e.g. manager) rather than an implementer is not the only option.
    There are many testimonies of engineers who became artisans, farmers, etc., which suggests that when an employee feels his job is meaningless or unnecessary, he tends to orient himself towards basic, concrete jobs.
    Similar behaviors have been observed in cases of what the anthropologist [David Graeber](https://en.wikipedia.org/wiki/David_Graeber) theorized as "bullshit jobs", as mentioned before in the "brownout" section.

    Until the employee designs then implements an escape strategy, the awareness of the current situation - associated with the apprehension of the changes to come - is a source of doubts which increases the level of anxiety. This generates an increasing level of stress - hence the probable appearance of work-induced psychological disorders. Moreover, although it is often experienced as a release, the escape is often associated with a feeling of betrayal towards the ex-employer and the former project team.

All these options will likely lead to delays or even increased staff turnover, thus unanticipated additional costs.

What about problem solving professionals ?

In the case of developers, what is really interesting is that their work basically triggers the following mental scheme :

diagrams-developers_mental_scheme

This suggests that they are more likely to choose one - or both - of the two last options, perceived (in the case of an aware, proactive behavior) as solutions.

Finally, we can add all these elements related to individuals' psyches to our previous "Vicious technical debt cycle" schematic :

technical debt psychological impact -> accentuated by technical debt -> feeling that one's skillset is not competitive -> decreased future employability -> feeling of danger

The causal relationship causes these two loops to feed each other cyclically. Thus, they are to some extent correlated. However, not being exhaustive, it should be noted that external factors could reinforce or, on the contrary, delay them. Also, both processes probably do not have the same speed. Iterating over the "project" one is likely to happen on a sprint basis, while the "individual" one may take longer, perhaps months or even years, depending on how mentally well-balanced the subject is.

Being able to deal with stress as well as to take a step back from feelings is a healthy, key mechanism to break the loop. If not everyone is able to act at a project level, we all can act on ourselves.
This is why personal development should be encouraged, if not framed, by companies.

The study for the French Ministry of Labor mentioned above clearly showed that sharing sessions, where each employee is free to express themselves without judgment or impact on their career, can drastically improve internal processes, but also relieve employees of their negative feelings. Once again, communication is the key to successful human interactions.

It seems that the old paradigm of an employee exchanging their labor force for a salary is no longer the norm - or at least, deserves an update.
Nowadays, employees aspire to both security and stability, which involves more than just money. Professional development is a new dimension that matters, including well-being at work, but also the following tacit deal :

  • "I will work for you as an employee, but you - as a company - must keep me intellectually stimulated, offering me challenges, ways to develop my skills, and possibly rewards, while keeping my profile both competitive and attractive".

Few words about the mental load

In the previous part, we mentioned a common mind map that developers - consciously or unconsciously - use to troubleshoot issues.

To follow this process, the individual needs to build a dedicated mental space for the task. Indeed, he/she needs to build a mental representation of the program's workflow, and of variable states/values all along. Analysis tools such as call-stack visualisation, variable monitors and UML diagram generators come in handy in such a situation.

However, gathering and sorting relevant information is not that easy, as many factors may clutter the employee's mind. No matter how many diagrams he/she draws, how many notes he/she takes, sometimes there is just too much information for the mind to consider all of it. Even though this task takes some time, it is nonetheless essential.

In addition to the work-induced psychiatric disorders mentioned in the section above, as well as the anxiety factors involved in the Yerkes-Dodson law, we may list some factors that tend to drastically shrink the mental space, or even make the mental space being built collapse, resulting in a loss of time proportional to the latter's reconstruction :

  • Any kind of external disruption, from notifications to chit-chat, including phone calls
    (See the AOL survey mentioned earlier : 14.7% of participants reported wasting time because a colleague distracted them)
  • Forced context switches, such as a manager asking for something "urgently"
  • Background noises

If these have serious effects on productivity, their effects on creativity are even more important - creativity being, nonetheless, a central quality for this profession.

What about technical debt here ?

TD, because it dramatically increases the amount of overly complicated, unnecessary, and messy code, is a huge source of parasitic mental noise. To cope and avoid overload, the employee must therefore provide an additional effort, which once again reduces his/her ability to concentrate, and therefore productivity.
It triggers a lot of negative emotions, which are not necessary at this time.
About this particular point, I invite you (once again) to watch Kate Gregory's talk about emotional code at CppCon 2019.

What's the solution then ?

Noted by the U.S. Navy in 1960, the KISS acronym - for "keep it simple, stupid" - reflects a principle which states that most things work best if they are kept simple rather than complicated.
Promoted by Agile methodologies, it is well known by IT workers but, from my observations, not used that much.

To me, we should encourage simple and clean interfaces first.

For instance, I don't need to know in detail how my washing machine works in order to use it.
I only need to know that to fulfill its purpose it requires electricity, clothes, water and soap as mandatory inputs, while fabric softener is optional. It will then provide gray water and clean clothes on the way out (outputs).
Also, if and only if this does not lead to the expected behavior (e.g. unexpected outputs such as noise or flooding, poor performance, etc.) will I check the technical specifications of the machine to try to fix the problem myself, call the manufacturer or a repairer.

A clean component is then defined the following way :

  • A set of deterministic behaviors, each of which depends on a specific set of inputs to possibly produce outputs; where behaviors link inputs to outputs in a causal relationship.

Anything else is qualified as undefined behavior (UB). This is also known as black-box programming : only the way the component reacts to inputs to produce outputs matters. No knowledge of its implementation is required to use it, and it will remain opaque until it no longer functions as intended.

diagrams-BlackBox
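To make this concrete, here is a minimal C++ sketch of such a black-box component - the washing-machine names and fields are of course illustrative assumptions :

```cpp
#include <string>
#include <vector>

// Hypothetical black-box component : users only reason about inputs and outputs,
// never about the implementation details hidden behind the interface.
struct washing_machine_interface {
    struct inputs {
        double electricity_kwh = 0.0;
        double water_liters    = 0.0;
        std::vector<std::string> clothes;
        bool   softener = false;               // optional input
    };
    struct outputs {
        double gray_water_liters = 0.0;
        std::vector<std::string> clean_clothes;
    };

    virtual ~washing_machine_interface() = default;
    virtual outputs run(const inputs& in) = 0; // deterministic behavior : inputs -> outputs
};
```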

By reducing the amount of information required to deal with a problem, not only will the mental space to be constructed be simpler - hence built faster and more precisely - but new employees' onboarding will also be easier.

For example, instead of duplicating a class or function multiple times, generic programming might be a valid option - see the sketch below. However, quick wins avoid such refactoring, thus generating more technical debt.
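As a hypothetical sketch of that option : instead of maintaining one near-identical function per type, a single function template factors the duplication out.

```cpp
#include <sstream>
#include <string>

// Instead of duplicating to_text_for_int, to_text_for_double, and so on,
// a single function template handles every streamable type.
template <typename T>
std::string to_text(const T& value) {
    std::ostringstream oss;
    oss << value;
    return oss.str();
}

// usage : to_text(42), to_text(3.14), to_text(std::string{"hello"}) ...
```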

Also, another correlated quote, attributed to Albert Einstein, is :

"Make everything as simple as possible, but not simpler"

How to measure/predict productivity at a specific time ?

While technical debt is a major factor in explaining the decline in productivity, there are others to consider if we are to try to accurately estimate a team's productivity at any given time.

History : the "eight hour workday" paradigm

Basically, we currently still follow the "eight hour workday" doctrine enacted during the 19th century by the manufacturer, philanthropist and social reformer Robert Owen.
His slogan formulating the goal of an eight-hour workday was :

"Eight hours labour, eight hours recreation, eight hours rest."

At that time, a paradigm change took place.

Let's recontextualize : it is the industrial era.
While a new wave of innovations has generated high-performance, efficient machinery, the implementation of these in factories tends to replace the demand for conventional labor with skilled jobs, resulting in an (almost) proportional turnover of workers.
Those who cannot learn these skills are considered obsolete, lose their jobs, and eventually join the emerging labor movement.

As this is a structuring element of our common History, if you're not familiar with it, I'd strongly advise you to gather more information about it. If you are a German or French speaker, the Arte channel is a goldmine of documentaries about the industrial revolution and the labor movement.

Increased work paces generated a vicious cycle where workers were exhausted, in pain, and thus had many accidents. Humans were considered a very fallible limit to productivity.
There are testimonies which mention that workers who were injured while using the machines (sometimes with permanent disabilities such as amputations, etc.) then had to reimburse the lack of production generated by the accident, but also the damage possibly suffered by the machine.

It goes without saying that most, if not all, disabled workers were unable to sustain the required pace of work, and were thus replaced.

diagram_vicious_cycle_high_industrial_work_pace

Soon after Robert Owen published the details of his doctrine, another manufacturer, Ford, actually implemented it and changed the standards. As mentioned earlier, at this time innovation was hard on employees and turnover was high. According to Ford's staff, "Turnover meant training delays, which decrease productivity".

To fix this, Ford's factories not only offered training to workers, but also a pay that was much higher than at other similar companies, while working time was much shorter.

This strategy quickly paid off :

"In January 1914, Ford solved the employee turnover problem by doubling pay to $5 a day cutting shifts from nine hours to an eight-hour day for a 5-day work week (which also increased sales; a line worker could buy a T with less than four months' pay), and instituting hiring practices that identified the best workers, including disabled people considered unemployable by other firms. Employee turnover plunged, productivity soared, and with it, the cost per vehicle plummeted. Ford cut prices again and again and invented the system of franchised dealers who were loyal to his brand name. Wall Street had criticized Ford's generous labor practices when he began paying workers enough to buy the products they made".
https://en.wikipedia.org/wiki/History_of_Ford_Motor_Company

Time has passed. Isn't it time to modernize - if not break - that paradigm ?

Generally speaking, questioning ourselves about the way we proceed on a daily basis is a healthy habit.
This can be unpleasant at first, as doing things the good old-fashioned way feels comfortable for most, but it quickly pays off as it highlights opportunities for improvement.

For instance, let's take a repetitive and alienating task such as code integration. With a CI tool, most of the process may be automated, which is a gain for both the bored employee and the company, as it makes the whole process much faster and more stable.

How to calculate productivity on Agile/Scrum projects ?

Accurate forecasting is a key to any project's success.

Velocity represents the amount of work a team can do over a specific amount of time. Thereby, release planning is possible, as managers can figure out how long it will take the staff to achieve a specific set of tasks that can be shipped afterward to the end-users, thus generating value.

Historically, we used to estimate the amount of uninterrupted labour needed to perform a task using man-hours or man-days, i.e. the amount of work performed by an average worker in one hour or one day.

However, this approach changed with the Agile/Scrum methodology, to focus more on delivered value than on work-days.

About Velocity and sprint capacity

We mentioned the following formula earlier :

${Velocity}\equiv { \frac { \sum_{\ points\ of\ fully\ completed\ stories} } {N\ sprints} }$

As you may notice, only points from fully completed stories are totalized. Points from partially-completed or incomplete stories should not be counted in calculating velocity.
This is why using a sprint-burndown chart is so important to monitor it throughout the current sprint.
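As a hypothetical illustration : if a team fully completed 120 story points over the last 3 sprints, its velocity is $120 / 3 = 40$ points per sprint; a story that is 90% done at the end of a sprint contributes 0 to this figure until the sprint in which it is actually completed.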

Also, velocity is a powerful feedback tool for a team, helping them measure the impacts of process changes and tune accordingly during the retrospective ceremony.
By the book, we consider that even if a team's velocity slightly oscillates over sprints, it should trend towards roughly 10% growth per sprint iteration.

How does Agile story points work ?

The base goal of story points is to make task estimation on a specific project (thus with a specific team) quick and accurate.
Like old wine, this process gets better over time, as the team can compare backlog and delivered user stories with each other. Similar tasks are likely to be estimated with a close amount of points.

Typically using Fibonacci-like numbers such as 1, 2, 3, 5, 8, 13 and 21 to convey the level of effort required to fulfill a task, story point values are relative, and adapted in a reactive way.
This is somehow similar to the "T-shirt sizing" technique, which uses X-Small, Small, Medium, Large, X-Large.

At the end of each Agile sprint, managers are able to calculate the next one's capacity, while updating the velocity rate.
To make capacity estimations even more accurate, the process mentioned above can be done using the last few sprints, often three. This has the advantage of increasing consistency, smoothing out fluctuations.

In fact, story points do not represent the amount of delivered value. Agile methodologies tend to bind the two as closely as they can be, but ultimately some quick tasks may generate a huge amount of value for the end-user, while the opposite is just as common.

Work-hours are raw values, while story points are relative

While sometimes useful, converting Agile story points into work-hours is not recommended, and may turn out to be a slippery slope, if not a pitfall.
Because story points evolve over a project's lifetime due to many factors (such as staff turnover, fatigue, working process refinement, technologies/tools, etc.), they do not allow any constant conversion, thus any study or comparison of teams' productivity.

What we need here is to measure efficiency, not productivity.

Productivity differs from efficiency

In order to accurately forecast a team's productivity, we need to detail as much as possible the formulas previously used.

As mentioned earlier, productivity is the amount of output over a certain period of time.
So it's something that we can see in retrospect, not predict in advance. Therefore, it does not correspond to our need to proactively anticipate.

${productivity}\equiv { \frac { \sum_{\ produced\ value} } {time} }$

To best fit our goal, this lacks detail, as there is no room for factors that can illustrate - thus highlight - causes. We need more adjustment variables to interact with.

A first rough factor - as it encompasses multiple others - is efficiency, which might be defined the following way :

${productivity}\equiv { { \sum_{\ efforts\ made} } \times {efficiency} }$

There are three coexisting ideas :

  • The definition of productivity as the amount of value produced over time is inherently imprecise, as it does not allow us to highlight its causes. Therefore, continuous improvement is made difficult because it is based on speculation, not on data, thus not on facts.
  • A significant but ineffective effort is unproductive.
    Worse, this lack of productivity could affect the following sprints because of the fatigue it will have induced.
  • Improving efficiency can lead to greater productivity - thus delivered value - without the need to increase the team's effort.

Efficiency is therefore the factor that explains the delta between optimal and actual productivity for a given effort.
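As a hypothetical illustration : for the same 10 man-days of effort, a team operating at an efficiency of 0.9 delivers the equivalent of $10 \times 0.9 = 9$ "effective" man-days of value, while at 0.6 it only delivers 6. Those 3 man-days of delta are precisely what we want to expose and act upon.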

The remaining question now is "how to benchmark efficiency" ?

Efficiency is a fundamental metric for managing projects. Therefore, a chart that keeps track of its evolution over time seems essential.
Such a tool makes it possible to retrospectively analyze the effectiveness of management policies and other changes, thus guaranteeing continuous improvement and room for experimentation.

The remaining difficulty is that efficiency has multiple sources, which cannot be distinguished unless you isolate them by varying them one at a time, rather than several at once.
This leads to a more precise analysis. For instance :

  • How has this new tool improved the efficiency of our team ?
  • How fast are our staff onboarding and ramp up ?
  • Did the latest improvement to our work process really work?

In order to measure effectiveness in a relevant, accurate way that is also sustainable over time, we need a tool.

In my experience, the main way to do this is to :

  • Calculate the evolution of the time the team takes to perform recurring and similar tasks

Also, keeping track of the following data may help :

  • The amount of misestimated, incomplete user stories at the end of the sprint that require extra time to be completed
  • The amount of created issues over time
  • The amount of rejected commits during code reviews or CI checks over time

Parasitic costs

  • Context switches
    • Multi-tasking decreases efficiency, thus productivity

The focus factor

  • todo : Ability to stay focused on completing the sprint's tasks (see paper notes)
    • Vs. parasitic costs (context switches, useless meetings, etc.)
    • Vs. Agile gravity/weight

Many Agile specialists mention that using any form of focus factor to forecast productivity is a slippery slope.
Indeed, the concept is quite confusing, and definitions vary across books and internet resources. Also, as is often the case in Agile, companies tend to use their own - thus biased - representations instead of a strict "by the book" definition.

Basically, the focus factor is a rate which enables velocity forecasting, used to estimate a team's capacity during sprint planning.
It can be illustrated using the following formula :

${Forecast\ velocity}\equiv { { average\ velocity } \times {focus\ factor} }$

How to calculate the focus factor ?

Let's take a concrete case :
We have a team of 5 developers, and 3-week-long sprints, which equals 15 workdays. Our base rate, i.e. full focus, is of course 100%. However, there are several inputs to take into consideration.

One developer has scheduled a 2-week-long vacation, another one has a training every morning for the next 4 weeks, and a third has to support another team for about a week.

Counting each of the 5 developers as 20 out of the 100% base :

  • Developer 1 : 20 -> 20
  • Developer 2 : 20 -> 20
  • Developer 3 (on vacation 2 of the 3 weeks) : 20 x ((3 - 2) / 3) -> 20 x 0.33 -> 6.6
  • Developer 4 (training every morning) : 20 x (1 / 2) -> 10
  • Developer 5 (supporting another team for 1 week) : 20 x ((3 - 1) / 3) -> 20 x 0.66 -> 13.2

  Total : 20 + 20 + 6.6 + 10 + 13.2 == 69.8

Here, about a third of the base velocity should be subtracted. For example, if the average velocity was 42 points, the next sprint capacity is now forecast as 42 x 69.8% == 29.316 points.
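Here is a minimal C++ sketch of that computation, using the hypothetical availability rates from the example above :

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Hypothetical sketch : forecast the next sprint capacity from the average
// velocity and per-developer availability rates (1.0 == fully available).
double focus_factor(const std::vector<double>& availabilities) {
    const double sum = std::accumulate(availabilities.begin(), availabilities.end(), 0.0);
    return sum / static_cast<double>(availabilities.size());
}

int main() {
    // the example above : two fully available developers, one at 1/3, one at 1/2, one at 2/3
    const std::vector<double> team = { 1.0, 1.0, 1.0 / 3.0, 1.0 / 2.0, 2.0 / 3.0 };
    const double average_velocity = 42.0;

    const double factor   = focus_factor(team);         // ~0.70 (69.8% in the text, due to rounding)
    const double capacity = average_velocity * factor;   // ~29.4 points

    std::cout << "focus factor : " << factor << ", forecast capacity : " << capacity << '\n';
}
```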

  • Agile ceremonies cost -> 0.6 ~ 0.8

  • per team

    • per expertise (C++ specialists != testers != BO developers)

How to calculate the technical debt impact over productivity ?

${Delivered\ value}\equiv \sum_{i=1}^{10} $

// todo : rate -> [0 -> 1] => percentage

// todo : man-days into productivity conversion
//  - project managers used 6-6.5 hours as planned hours in a day => separate meetings, slacks, breaks, etc.
//  - focus factor [~0.6 - 0.8] => separate scrum ceremonies
//  - Lower : new project / new team / onboarding / turnover, scrum newbies, new technologies or complex product, chaotic team ability to self-organize : need handholding

However, returning to our initial observation that productivity decreases in proportion to the lifetime of a project, it appears that this estimation tool is sub-optimal, if not naive, because it is relative to time and therefore not absolute.

We saw in a previous section that technical debt has a growing, exponential impact that decays a team's velocity. Thereby, we may consider that it acts as a less-than-or-equal-to-one multiplying coefficient (or percentage), which slyly applies to your development process velocity.

${Average\ velocity\ per\ employee}\equiv { { \frac { Agile\ story\ points\ delivered } { number\ of\ sprints } }\times { \frac {1}{average\ available\ employees} } }$

Attempting to roughly represent velocity involving that technical debt factor, we may produce the following formula :

${Velocity}\equiv { { \frac { Agile\ story\ points\ delivered } { number\ of\ sprints } } \equiv \frac { {work-hours\ delivered} \times technical\ debt\ rate } { number\ of\ sprints } }$

But how can we define that technical debt rate precisely ?

Perhaps this way :

${technical\ debt\ rate}\equiv { \frac { work\ duration } { technical\ debt\ amount } }\equiv { \frac {1} {percentage\ of\ technical\ debt} }$

Most teams I have worked with used to deny, or at least minimize, the impact of technical debt on a project's health and sustainability.
Hereunder, I will try to describe common pitfalls and roughly illustrate them with curves, using :

  • X-axis : work-days, from 0 to 20, as four weeks of 5 work-days is a common sprint duration
  • Y-axis : man-days productivity, where 1 is a theoretical reachable maximum

Not being aware of what technical debt is, is equivalent to presuming that the team's velocity will remain somewhat stable and linear over time.
(The fluctuations are due to the team's tiredness, absences, etc.)

denying TD impact over sprint time

Minimizing it is equivalent to presuming that its impact will remain linear. Here, the technical debt is an amount that is subtracted from the team's productivity.

naive TD impact over sprint time

// todo : pivot point => productivity ~= 0 // "If it ain't broken, don't fix it" -> IT IS BROKEN ! // "let it die slowly"

References

CppCon 2018 : Kate Gregory, "Simplicity: Not Just For Beginners".
CppCon 2019 : Kate Gregory, "Emotional Code".
SourceMaking.com : a really good website about design patterns, antipatterns, refactoring and UML.
Technical debt joke image : "Too busy for efficiency", by Daniel Okwufulueze.

Todo-s :

TD reduction's cost -> Who should pay for it ?

TD management strategies -> critical/emergency

  • Split the dev team -> a subset will be fully allocated to TD reduction, thus working on a proper subversion branch
  • Should never happen, since projects should have a TD management strategy from the beginning

Are equally important :

  • deliver functional code
  • tested, with test coverage
  • Performance
  • Sustainable -> Quality, maintainability

productivity != efficiency

delta between being != appearing != objectives (wanting to be) / projections

move "> What about sustainable retention strategies ?" elsewhere ?

mention turnover (especially top-performers that leave) as a source of decreasing productivity

The TD factor is part of effectiveness, which is part of productivity

Schematics :

  • code cleanup/refactoring generates values -> improve velocity

Formulas/Maths : https://render.githubusercontent.com/render/math?math=

TOC -> Table of contents

Check routine :

  • Image paths : absolute, not relative

todo : Next paper

  • Components hierarchy and modularity => IBA -> this/next article
  • IBP -> uncoupling -> better understanding of a project's sources -> easier turnover, faster team member onboarding
  • Interface contract -> delegation to other teams / contractors
  • Interface contract == 1 specs list (add Jira, WeKan, Trello, Redmine, Mantis, etc. links into it; as it evolves)
  • Black-box implementation, clean and easy contracts + interfaces

My own answers in the reddit thread :

1 - A first session in a large group (I let you define whether this is the whole team or a subset. It may also include developers/lead-devs/architects that are not part of the project).

This session's goal is to plan refactoring tasks (redesign, partitioning, code cleanup, etc.).

We use a top-down strategy to question how things are organized with multiple diagrams, including UML, performance reports, etc. Afterward, we compare these hand-written diagrams with generated ones. For instance, the Valgrind tool CallGrind is great to find redundant paths, performance bottlenecks, etc.

If you work on a C++ project that you configure/generate/build/compile/test/package/etc. with CMake, you can easily use Graphviz to generate a dependencies diagram :

cmake --graphviz=foo.dot <...>
dot -Tpng -o foo.png foo.dot

Also, some tools offer dependency diagram generation from the include files' perspective.

Once you have analyzed the deltas, you can discuss how to solve them (e.g. the component's purpose, why it has these dependencies, this interface, etc.), then create subtasks and assign them to a subset of your group. Repeat this step as many times as you need.

2 - Use black-box interfaces. Partition components.

Once the team/work-group has discussed a component, update the documentation with its specs. Then redesign a clean, black-box interface, perhaps using IBA/IBP, some actor patterns, etc.

I usually separate interfaces and implementations using contracts (or an emulation thereof).

For instance, a component's interface can be a type-erasure pattern. The implementation does not need to know the interface it matches.

Both the implementation and the interface fulfill the contract. This is ensured using a static_assert with a detection-idiom type trait : "Does T match the contract ?"

Thus, if at some point the implementation is not OK anymore, you can easily replace it.
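Here is a minimal C++17 sketch of such a compile-time contract check, assuming a purely illustrative contract (a `process` member function) and hypothetical type names :

```cpp
#include <string>
#include <type_traits>
#include <utility>

// Hypothetical contract : the component must expose `std::string process(int)`.
template <typename T, typename = void>
struct matches_contract : std::false_type {};

template <typename T>
struct matches_contract<T, std::void_t<
    decltype(std::declval<std::string&>() = std::declval<T&>().process(42))
>> : std::true_type {};

// a candidate implementation : it knows nothing about the interface it matches
struct my_implementation {
    std::string process(int value) { return std::to_string(value); }
};

// the contract is checked at compile time : swapping the implementation is safe
// as long as this assertion still holds
static_assert(matches_contract<my_implementation>::value,
              "my_implementation does not fulfill the contract");
```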

FYI, at first I planned on writing my paper on this (simple?) design which, in my experience, solves many problems. The technical debt part was only the intro. However, after more than 1k lines, I decided to separate the "motivations/intro" and the technical solution parts, creating a stand-alone paper about technical debt.
