@owulveryck
Last active June 26, 2024 02:28
Dave Snowden's Organization, Collective, Decision

This is a transcription of the talk: Organization, Collective, Decision by Dave Snowden at USI 2022

Hope without optimism

The title of this talk comes from a book I strongly recommend by Terry Eagleton called Hope Without Optimism1.

The first time I ever came to Paris was in the 1970s, as a British representative of the Catholic Marxist group, that was the 70s. Terry Eagleton was also associated with that, and for those of you who know your Catholic theology, you'll know that to abandon hope is a mortal sin. But it doesn't require you to be optimistic about the future, and I really want to make that a sort of theme of what I want to talk about today, which is:

how can we do things in the here and now, which increase the possibility of a sustainable and survivable future.

So that's my focus.

For those who want to know more about things like Cynefin and Agile: we're doing big work at the moment on what's called rewilding Agile, rebalancing the system, and there's loads of material on YouTube and in documents on that.

About the talk

Today I'm much more focused on organisational and society design, so that's the focus.

Resources

There are two free books you can download on this. A big theme here will be this, which came out late last year: the European Union field guide Managing Complexity (and Chaos) in Times of Crisis, of which I was the principal author2.

You can download that for free, or get a copy sent to you for free from the European Union; there's a website for it. And also something which came out last week, and this is going to be a theme I'm going to come to: the whole power of narrative in understanding civil society. That's a book by Oxfam and other agencies on the last ten years' work they've been doing with us on how to understand the underlying dispositional aspects of human society, i.e. the basic underlying patterns from which behaviour arises. So both of those are downloadable and you can move forwards.3

I want to start here with a metaphor.

This is a place called Cwm Ivy in Wales, on the Gower Peninsula. I actually walked across that wall a few years ago, when I did a walk all the way around Wales. There's a lovely phrase, by the way, from one of our leading journalists: Wales is a country small enough to know in one lifetime. I really like that idea, and I have walked all the way around it. This is an interesting phenomenon.

You can see what used to be a seawall. And that was built in the 19th century to effectively allow the land to be drained on one side.

And I'm going to make a big metaphor now around this.

A seawall is extremely effective, it's highly robust, it's stable, you know what it is, it allows you to do lots of good things, it allows you to focus on efficient use of land. And it's wonderful until the day it breaks. And then it would be better if it hadn't been there in the first place. And that's what actually happened here, the seawall broke.

And what the National Trust did was to allow the land to regenerate in what's called a salt marsh.

Now a salt marsh is a good example of a highly resilient system. It's a complex ecosystem, constantly changing and adapting to the flows of water. And critically, even when it's saturated, it doesn't release the water. So its failure isn't a catastrophic failure.

The failure of the seawall is catastrophic, the failure of the salt marsh is not.

Now there's no agreed definition of robustness or resilience; all of these words are used differently by different people. So I really want to get that metaphor into your mind.

Are we building a salt marsh or are we building a seawall? If we build a seawall, we better be damn sure we understand everything in advance, because if we don't, the design conditions will be exceeded.

The Thames Barrier in London is now no longer adequate to the level of flooding that we expect to get. And there are other examples as you go around. I have friends in the Netherlands who say they're going to live in Breda-by-the-sea in their own lifetime, which is quite a depressing concept: Breda is quite far inland.

So I want to hold that metaphor because the whole of the EU field guide, it's all about getting ready for things that you cannot anticipate.

It says you have to architect businesses, architect companies, architect society, so it can respond like the salt marsh to unanticipatable events.

And actually forecasting and scenario planning may be contraindicated because they give you a false sense of security or narrow the perspective of what you're seeing too much. So that's the underlying metaphor of today.

The field I'm in is called naturalising sense-making. My first degree is physics and philosophy as a joint major, which gives me a philosopher's delight in concepts and a physicist's despair of social scientists: they never have enough data to form any valid conclusion anyway, and you'll see that come through in this. Sense-making I define as: how do we make sense of the world so that we can act in it? This is one of five distinct schools of sense-making recognised within the literature; if anybody wants the reference, you can have it. I use sense-making with a hyphen because that's a verb; sensemaking without a hyphen is a noun. And that distinction is actually quite important.

It's about continuously trying to understand not only what we can understand, but the limits of our understanding, and taking actions compatible with those limits.

Naturalising means to base what we know in the natural sciences, not in the social sciences. Now, one of the reasons for this, if you look at most social science and certainly all of the management literature, it adopts an inductive approach to learning. Somebody goes away and studies a whole series of organisations or societies which are held to be successful. They identify elements in those societies which are common to all success, and from that, they create a recipe.

Do these things, you too will be successful. Now, this is deeply problematic for many reasons.

One of the books which is very popular is called Lean Startup4.

The guy goes and studies a whole body of companies in Silicon Valley who have succeeded, identifies things they did, and creates a best-selling book with a series of recipes. You know: do these things, you will be successful.

Now, when I was in IBM research, we did the same research with Dorothy Leonard at Harvard, but we also studied companies who failed, not just companies who succeeded. And what we found is they all did almost identically the same things.

It's just you were dealing with a huge number of agents, so some of them were bound to succeed anyway. So that false imputation of causality is common.

The other problem is the confusion of correlation with causation.

If France wants to increase the number of Nobel Prizes it wins, all it has to do is increase dark chocolate consumption. I mean, this is good news, you don't need an educational system. Dark chocolate consumption per head of population directly correlates with Nobel Prizes per head of population for the last five decades on a much bigger data set than any management textbook I've ever read.

And the other one, by the way, which I think is causal, and there are many of these, is that peaks in attempted suicide by drowning directly correlate with the release of Nicolas Cage movies, but I can see a reason for that. There's a whole website of spurious correlations that is worth looking at.
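As a toy illustration of the point, two series that share nothing but an upward trend will correlate strongly. A minimal sketch in Python; the numbers are synthetic stand-ins, not the real chocolate or Nobel data:

```python
import random

# Two unrelated series that both drift upward over five decades;
# chocolate consumption and Nobel prizes are stand-ins, the data
# here is invented for illustration.
random.seed(0)
years = range(50)
chocolate = [2.0 + 0.1 * t + random.gauss(0, 0.3) for t in years]
nobels = [1.0 + 0.05 * t + random.gauss(0, 0.2) for t in years]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A shared trend (here, simply time) is enough to produce a strong
# correlation between two series with no causal link at all.
print(round(pearson(chocolate, nobels), 2))  # high, well above 0.8
```

The correlation is an artefact of the common trend, which is exactly why correlation on its own licenses no causal recipe.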

On the other hand, if we go back to natural science, there are things which have been subject to peer review, multiple experiments, and these act as what in complexity is known as an enabling constraint.

We say if we know these things about systems, if we know these things about human beings, then we have to design on the basis of that knowledge, not on the basis of how we would like things to be.

Complexity is materialist, and it is realist, and it is pragmatic in terms of its approach.

So that means, and there's a phrase from the 70s some people will remember, we used to talk about praxis: the fusion of theory and practice.

To go back to Aristotle's Sophia and Phronesis,

without practical wisdom everything is useless, but without theoretical wisdom nothing scales.

You need a balance of the two.

And in the 70s, we used to joke about this, praxis makes perfect, which is kind of like a pun.

So let's go through a few key scientific facts which we need to base ourselves on.

This is from a famous set of experiments run on radiologists.

Radiologists have deep training, and they're dealing with limited data sets. You give them a batch of x-rays and ask them to look for anomalies. And on the final x-ray, you put a picture of a gorilla which is 48 times the size of a cancer nodule, and 83% of radiologists will not see it, even though their eyes physically scan it. And the 17% who do see it come to believe they were wrong when they talk with the 83% who didn't.

This is called inattentional blindness.

The reason for it is that the most anybody in this room will scan of the available data before they make a decision is about three to five percent. It's that low.

If you're Chinese, it doubles, and there are reasons for that linked to language evolution; but for everybody here, three to five percent.

That then triggers a whole series of memories: cognitive, body-based, and socially based through collective narrative. It triggers those in nanoseconds, and you fuse them together to form patterns, and the first pattern which fits, you apply.

You do a first-fit pattern match, not a best-fit pattern match.

Now we can see why this happens in evolutionary terms. Think about the early hominids on the savannas of Africa: something large and yellow with very sharp teeth runs towards you at high speed. Do you want to exhaustively scan all available data? Consult the catalogue of the flora and fauna of the African veldt, and, having identified lion, look up your company's best-practice notes on how to deal with carnivores? By that time the only document of any use to you will be the book of Jonah from the Old Testament, which is the only example I've found of how to escape from the digestive tract of a large carnivore written by a survivor.

We evolved to make decisions very, very quickly based on a partial data scan, privileging our most recent experience. Now this is absolutely fascinating, because it means there's an awful lot out there to discover which could make a difference, if we just learn to look differently. But it also says we can't get individuals to see things in that way; it has to be collective.
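The contrast between a first-fit and a best-fit pattern match can be sketched in a few lines. The pattern names, features and threshold below are invented for illustration; the talk's claim is only that humans apply the first pattern that fits, not the best one:

```python
def score(pattern, cues):
    """Fraction of the pattern's features present in the observed cues."""
    return len(pattern & cues) / len(pattern)

def first_fit(patterns, cues, threshold=0.5):
    # Take the first pattern that fits "well enough": fast, like recall.
    for name, features in patterns:
        if score(features, cues) >= threshold:
            return name
    return None

def best_fit(patterns, cues):
    # Scan every pattern and take the best match: slow and exhaustive.
    return max(patterns, key=lambda p: score(p[1], cues))[0]

# Memories are ordered by recency; the most recent experience comes first.
memories = [
    ("dog", {"fur", "four_legs", "barks"}),
    ("lion", {"fur", "four_legs", "sharp_teeth", "yellow", "fast"}),
]
observed = {"fur", "four_legs", "yellow", "fast", "sharp_teeth"}

print(first_fit(memories, observed))  # "dog": fits well enough, applied first
print(best_fit(memories, observed))   # "lion": the actual best match
```

First-fit is cheap and fast but recency-biased; best-fit is accurate but requires scanning everything, which is exactly what the savanna does not give you time for.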

So I'll talk later about our work on distributed decision support, where, for example, the executive of a company can consult the whole of their workforce in five minutes, in real time, to identify dominant, subsidiary and outlier patterns before they make a decision. We're now doing that at society level as well, which I'll talk about later, using children in schools as human sensors to give us real-time feedback on what is possible and not possible in society overall. And that's actually quite critical given the growth of populism. A lot of my early work was on weak-signal detection, working for DARPA in the US. The earlier you spot a negative or a positive trend, the lower the cost to dampen, disrupt or expand it. So weak-signal detection is key. So that's important to realise.

Second one, and this came out of evolutionary biology with Gould.

Dinosaur feathers are a good example. All dinosaurs had feathers, we now know, and they were very colourful. The fossil finds in northern China are fascinating. From everything we can see, feathers evolved primarily for sexual display. Then a small breed of dinosaur developed skin flaps under its forelimbs, so that if it stood on its hind limbs it could better display for sexual purposes.

And those dinosaurs had to run very quickly because they were more prey than predators. And when they ran very quickly, they started to glide, and that's how we got flight.

A trait which evolved for one function, when it was stressed (coming back to the stressor concept from Nassim's conversation yesterday), was exapted, not adapted, for something completely different.

The cerebellum at the base of your brain evolved in higher apes to manipulate the muscles in the fingers. It was then exapted in humans to manage grammar in language. The huge sophistication of grammar can't develop in a linear way; it requires a non-linear, exaptive shift.

If you think that isn't relevant to the modern day: in 1945, a Raytheon engineer maintaining the magnetron of a radar set realised the significance of the fact that a chocolate bar had melted in his pocket. So he put a metal box around the magnetron, and we got the microwave oven.

The whole history of technology innovation, and certainly pharmaceutical innovation, is about noticing side effects early, realising their significance and adapting them. Now this is absolutely key for humanity. What we call it in the EU field guide is radical repurposing. We haven't got time to create technologies from scratch to deal with global warming. We need to find radical repurposings of existing technologies in order to meet novel threats.

And that ability to manage for exaptive discovery is something technology can help us do. That's been my work for the past two decades: how do we associate existing capability with new threats, in such a way that people will pay attention to something they would otherwise ignore?

So exaptation is a key concept.

And then we get to complexity theory. Now this is a lecture in its own right.

Complexity is looking at interacting elements and asking how they form patterns and how the patterns unfold. It's important to point out that the patterns may never be finished. They're open-ended. In standard science this hits some things that most scientists have a negative reaction to. Science doesn't like perpetual novelty. - Brian Arthur

There are various ways we can define complexity, but the one I like really comes from Alicia Juarrero5:

She says, a complex system is like bramble bushes in a thicket.

A thicket is a small, dense woodland. And bramble bushes are those things you see on the screen. Everything is entangled with everything else. Although you know there are separate plants, you can't identify them.

No pathway is ever the same. And the only thing we know for certain is unintended consequences. The key thing to understand about complexity, and this is really scary for anybody brought up in northern Europe or North America, **is that a complex system has no material linear causality.** You can never say: if I do X, it will produce Y result.

Complex systems have dispositions, which we can measure, and propensities that we can understand, but they don't have causality, and therefore the future is uncertain by definition. So what matters in complexity is not to define where you would like to be and try and close the gap, that's been the dominant approach of systems thinking, but to more accurately describe the present and start journeys with a sense of direction, open to novel discovery on your pathway. And I'm going to build on that in a minute or two.

So complexity theory is key.

The other thing to understand about complexity is that complex systems scale not by finding something which worked and repeating it, but by breaking things down to their lowest coherent unit and allowing them to recombine.

Complex systems will only scale by decomposition (to an optimal level of granularity) and recombination, not by aggregation or imitation - Dave Snowden

It's like DNA, if you think about it, the whole of organic life form comes from four different chemicals in different combinations. The work we're currently about to announce, which is a collaborative open source project for the agile community, is to break every single agile method and framework down to its units, and allow those units to be combined across frameworks in novel ways to deal with uncertainty.

So scaling complexity is not about copying what somebody else did.
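The decomposition-and-recombination mechanic can be sketched with a toy example. The framework and unit names below are hypothetical placeholders, not the actual units of the project being announced; the point is that recombining small coherent units yields novel combinations, where copying a whole framework yields only one:

```python
from itertools import product

# Hypothetical method "units" drawn from different frameworks.
frameworks = {
    "scrum":  ["daily_standup", "retrospective"],
    "kanban": ["wip_limits", "pull_system"],
    "xp":     ["pair_programming"],
}

# Decompose: flatten every framework into its lowest-level units.
units = [u for members in frameworks.values() for u in members]

# Recombine: assemble novel bundles by picking one unit per framework,
# crossing framework boundaries rather than imitating any one of them.
bundles = list(product(*frameworks.values()))

print(len(units))    # 5 units after decomposition
print(len(bundles))  # 2 * 2 * 1 = 4 cross-framework combinations
```

Aggregation or imitation would give you one of these bundles at most; decomposition gives you the whole combinatorial space to draw on under uncertainty.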

Remember I talked about inductive logic? In complexity we talk about abductive logic. Abduction is sometimes known as the logic of hunches. It's what's the most plausible connection between apparently unconnected things.

And starting to think abductively is going to be a key survival system, a key survival need for companies and society alike.

Just to give you an idea, complexity is a large field. It is absolutely not systems thinking. Please don't confuse the two.

Anthro-complexity

I quite like this because it's got me on it. It's also got Nassim's main collaborator on it up there. You can download this from Durham University and it gives you an idea of the richness of the field. Within that field, we've started to identify a field called anthro-complexity, which is the study of complexity in human systems. Or, as I sometimes put it if I'm at Santa Fe and it upsets them: human beings are not termites. You can actually model termite behaviour, but human beings have multiple identities:

  • We have agency.
  • We think in different ways.
  • We can change our identity.
  • We're pattern-based intelligences.

Human complexity requires a transdisciplinary approach.

So to give one illustration, and I'm not going to go into this in depth: part of our work has been to take the concept of strange attractors in complexity theory, one of the best-named phenomena because they are, to be honest, bloody strange, in that no individual agent follows a predictable pathway, but the overall pattern of the agents is itself predictable.

You can actually exactly match that with Deleuzian concepts of assemblages.

So Deleuzian epistemology matches that, along with the narrative concept of a trope.

And once we put those together, we had a way of mapping dispositional states in human systems, which I'm going to come on to in a minute. So realising this is important: it's not all about mathematical simulation. We actually need to bring cognitive neuroscience into account. Material engagement theory, if you don't know about it, is really important: it is about the way tools influence cognitive development.

And also things like Deleuze, I've said.

I've actually introduced Derrida6.

I'm quite proud of this. Nobody in the history of humanity has got American executives to quote Derrida in day-to-day discourse until I managed it. I'm really proud of this, all right? But we took Derrida's concept of aporia. And for those of you who don't know your French philosophers, Derrida said:

a question to which you know the answer isn't a question, it's a process. The only useful questions are the ones to which you do not know the answer, and they force you to think differently.

So we've actually developed linguistic aporia, aesthetic aporia, physical aporia, to engender change in organizations, not by saying what you should be, but by forcing people to think differently. All of that is in an open source wiki.

Now, key to understanding human systems is narrative. This is from Alasdair MacIntyre, a British philosopher; he and I studied under McCabe together at Blackfriars back in the 70s. Fundamentally, you'll find a lot of people now talk about homo narrans, the storytelling ape.

A central thesis then begins to emerge: man is in his actions and practice, as well as in his fictions, essentially a story-telling animal... a teller of stories that aspire to truth. But the key question for men is not about their own authorship; I can only answer the question ‘What am I to do?’ if I can answer the prior question ‘Of what story or stories do I find myself a part?’ Deprive children of stories and you leave them unscripted, anxious stutterers in their actions as in their words. Hence there is no way to give us an understanding of any society, including our own, except through the stock of stories which constitute its initial dramatic resources.7

Most meaning in human systems comes from stories, and I don't mean elaborate stories, I mean fragmented anecdotes, casual conversation.

We were all at the art exhibition yesterday and people were standing around having drinks with strangers and everybody was dropping their favorite anecdotes into the conversation, yeah?

The stories that matter in organizations are the stories that people tell around the water cooler. The stories that matter in society are the stories of the pub after work or the school gate. And that's actually where you understand people's attitudes and beliefs and dispositions.

It's not through questionnaires or focus groups, where people know what you're looking for and they gift or they game. It's also critical, and there's a key concept here from the feminist literature which has driven our work for the past 20 years, which is called epistemic injustice.

The way you describe things can control people.

Beth, who is a colleague of mine who's also Welsh, has a wonderful way of saying this. She says:

the way you understand epistemic injustice in language is that old men are called philosophers while old wives tell tales.

And if that's how you describe it, you can see the prejudice which goes with it. So, and I'm not gonna go with this in depth, but our focus for the last 20 years is to give people the power to interpret their own narrative rather than have it interpreted by experts or by algorithms because power comes from interpretation of content, not from the content itself.

And I say that's described in the Oxfam book and elsewhere.


This is actually an example of young girls in a project we did on genital mutilation, acting as ethnographers in their local community without anybody from a Western background involved. We've done a lot of work on this, which I'll talk about in a minute, using children as ethnographers of their own environments. So it's people from the community interviewing other people in the community, rather than people from outside, and the autonomy of that community to then come up with localised solutions. This is a key aspect, and the same applies to organisations as to society.

This has been going on for some years now. This is from work we've been doing on the patient journey in Northern Ireland. That quote is from the chief medical officer. He actually slated the health service, but said there was one positive light, and that was 10,000 Voices. That was where we collected the narratives of patients as they went through journeys back to or away from health, allowed and empowered them to interpret their own stories, and used that to complement medical advice.

In another project, where we had patients with identical physiological conditions on chronic pulmonary care but variable oxygen take-up, we could account for about 90% of the variation in oxygen take-up from the journals they index on a day-to-day basis. This is now starting to move across into clinical trials: the ability to measure human attitudes at a micro level.

And that actually is a critical missing element in the role of machine learning. I refuse to call it artificial intelligence, by the way. We should all start to call it machine learning; then we might get a realistic assessment of what it is.

What really matters is the training data sets, not the algorithms.

If you haven't read the Stochastic Parrots8 paper, by people who, after they published it, became ex-Google employees, go and read it: it basically talks about the radical prejudice of black-box AI, because it builds on and reinforces society's prejudices. One of the things I worked on for the US government was the ability to generate balanced training data sets, and that's still a lot of what we actually do. So it's not that this is completely novel.

Now, all of this has consequences, and the best way of understanding this is to go and watch Frozen 2. You weren't expecting to be told to go and watch Frozen 2 today. You now have a good excuse, even if you don't have grandchildren or children. It's a great complexity movie. Frozen 1 is classic Disney stereotypes, but it made so much money they could employ the good scriptwriters for Frozen 2.

And in the middle of Frozen 2 there's the heroine of the movie series, who's the younger sister, the one without magic. That's important to realise: she's the heroine.

And in a state of despair, when she thinks she's lost her sister and lost her mentor, she sings an absolutely beautiful song, which was subsequently made famous by Ukrainian refugees: all I can do is do the next right thing. And that, in complexity terms, is to move to the adjacent possible.

Shifting to the adjacent possibles using fitness landscapes

In a highly complex system, you can't know where you should go, but you can understand where you should step next and look again.

Remember I said you start journeys with a sense of direction. You don't try and achieve goals.

So what you actually see on the screen is a fitness landscape. That's the result of us presenting a situation to 2,000 employees and getting them all to interpret it, within a five-minute period, into a non-gameable, quantitative, semiotic framework. Semiotics are really important here: you can't afford, if you want to understand complexity, for anybody to be able to influence the outcome of a survey by understanding what you're looking for.

I'll give you a simple illustration of this.

You've all, at some stage, done an employee satisfaction survey. Everybody done one of those? Or filled out the hotel survey. The net promoter score is a classic example of a measure which has become a target and lost all utility. So we had this when I was in IBM, and the question came up: does your manager consult you on a regular basis, on a scale of zero, not at all, to ten, all the time? Everybody familiar with that sort of question? It's deeply hypothesis-based: it assumes the manager should consult you. So I phoned up HR, because I was in a mischievous mood.

You need to understand I was on a watch list in IBM HR because I'd done a controlled study which had proved Myers-Briggs was less accurate than astrology in predicting team behavior.

And they were really upset that they'd been conned into paying for that. It actually has less basis in science than astrology as well, but that's another story.

So either way, I got through to HR, and I said, how am I meant to answer this question? First of all, I've got several managers.

Nobody has a single manager in a modern organization. I've got several managers, and I've got people who are quite powerful.

And I said, sometimes they consult me, and sometimes they don't. And sometimes they should, and sometimes they shouldn't.

You're asking me a context-free question in a context-specific world.

And I'll come back to that point, it's a key one.

Nearly every management recipe you see assumes a context-free situation, when actually everything is context-specific.

Either way, she said, average your experience over the year and stop causing trouble, and slammed the phone down before she heard me say, and you call yourself HR, in the research group.

We take a different approach if we're doing that.

We'll ask you to tell the story that you would tell your best friend if they were off to a job in the company.

No hypothesis in that, we're gathering rich narrative material.

Then we get you to interpret it on a series of triangles. There's a whole bunch of cognitive neuroscience behind this, by the way.

One of the triangles is labeled, in this story, the manager's behavior was altruistic, assertive, analytical. Three positive qualities.

That actually triggers the brain into a different pattern of response.

If you don't know what the answer is, you go into what Kahneman called thinking slow, not thinking fast.

It triggers a change.

And balancing three positives means you can't say anything negative. But then the manager looks at the results and it's all assertive and analytical, no altruistic, and they realise the pattern of their behaviour is problematic.

But then they can click on the triangle and look at the few altruistic stories they've got and say, how can I create more stories like that? And then click on the negative ones and fewer like those.

This is a whole new theory of change.

How do I create more stories like these and fewer stories like that is engaging.

If I look at our work in the health service, if we go to nurses and say, how do you have a better focus on patient safety, they will get defensive. If we've got patient stories gathered over the last year and we say we need more patient stories like those and fewer like these, they can engage in that because it's non-judgmental and the stories trigger context. So that's kind of like what I mean when we talk about high-abstraction metadata.
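A minimal sketch of how triad results of this kind might be aggregated, assuming each story is indexed as three weights summing to one. The labels and the numbers are invented for illustration, not actual project data:

```python
# Each story is placed on a triangle whose corners are three positive
# labels, recorded as weights that sum to 1 (barycentric coordinates).
stories = [
    {"altruistic": 0.1, "assertive": 0.5, "analytical": 0.4},
    {"altruistic": 0.0, "assertive": 0.7, "analytical": 0.3},
    {"altruistic": 0.2, "assertive": 0.4, "analytical": 0.4},
]

def mean_position(points):
    """Average position on the triad: the dominant pattern of the stories."""
    n = len(points)
    return {k: round(sum(p[k] for p in points) / n, 2) for k in points[0]}

centre = mean_position(stories)
print(centre)  # {'altruistic': 0.1, 'assertive': 0.53, 'analytical': 0.37}

# "More stories like this, fewer like that": find the under-represented
# corner, then surface the story closest to it for a manager to read.
under = min(centre, key=centre.get)
closest = max(stories, key=lambda s: s[under])
print(under)  # 'altruistic'
```

Because the three corners are all positive, the pattern that emerges (here, heavily assertive and analytical, barely altruistic) carries the signal without any respondent having been asked a gameable, hypothesis-laden question.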

On that pattern, this was designed to look at the culture of a company. Now you can immediately see that there's some overlap, but there are some big differences.

Seeing the world in different ways

Different parts of the company see the world in very different ways. They've all had the same indexing structure and the same input data, but they see it differently. So if you want to change this, trying to move those people in the bottom blue is going to be difficult, because they're an outlier. But you can possibly move them to the purple: more like this, fewer like that. But equally, you might say that's actually fairly healthy.

There's a key phrase in complexity I use a lot, is we need a system to be coherently heterogeneous.

We don't want homogeneity. Homogeneity can destroy creativity.

And the way I normally explain this is that I'm Welsh. As I've told you, I'm from Wales, and I live halfway between the rugby ground in Cardiff and the Opera House in London, a nice oscillation between the two. And when I go to watch the rugby with Cardiff, who are my team: we're a highly civilised team, with spectators who understand rugby. We follow the rules. We don't cheat. We applaud the opposition when they play well. We're nice people.

And then those bastards from Llanelli arrive.

West Walians. They bribe referees. They can't be trusted. They cheat. Their supporters are too partisan.

But when the English arrive, we're Welsh.

That's called coherent heterogeneity. The ability to be different, but also to come together.

And most organizations don't realize the importance of that.

If you try for homogeneity, you get the wrong sort of patterns.

So I'd say, more like this, fewer like that.

The other thing we can do with this is give a manager, well, let's take the hospital example: a charge nurse can look at her ward for the last two weeks, and a disease specialist can look at everybody in their disease group for the last three years, all from the same source data. So people look at the map in the areas where they're competent to do something, not where they're not competent. And all of that is called fractal engagement.

But it all comes together in a very different way in real time. And that's also the other work we're doing at the moment.

I have an intense dislike of behavioral science, and particularly nudge theory.

Because generally nudging isn't nudging, it's yanking. It's manipulation.

Actually at this point, we're now talking about what's called micro-nudging: you work out when the system is ready to change, and then you allow small changes to it. Much more sustainable, lower energy cost.

And to give a couple of examples of that: one of the things we did in South Wales was a big project using children as ethnographers to capture stories in one of the post-industrial areas. This is the Rhondda Valley.

Most families are in their fourth generation of unemployment, with all the consequences which go with that. From what was originally a hugely rich society, with the miners' libraries which gave rise to the health service and everything else. So we gathered the stories, we drew the map, a minister said I want more stories like these and fewer stories like that. And then we put teenagers together with people from their grandparents' generation. And they came up with ideas for change. And that was deliberate. Young, bright, with old, wise and networked. And if they came up with a good idea, we put them into a trio with somebody from government who could make the idea work. So instead of a grand government project, we ended up with about 500 micro projects developed in the community by the community which was sustainable.

We're doing the same with refugees in Malmo and elsewhere.

And in companies, the same thing. Put somebody who's joined the company with somebody who's about to leave the company with somebody on your executive leadership program.

And throw half a dozen of those trios at a problem, you're bringing diversity into the system.

In software development, we put a bright young coder straight out of college with a systems architect who sees the systems as a whole with a user trained to talk to IT people. It's a lot easier to train users to talk to IT people than train IT people to understand users.

I'm repurposing books titled So Your Child Has Asperger's Syndrome in order to develop the training course.

Instead of sending out a systems analyst to interview people, we throw two dozen, you know, 20 trios at a problem for a month and see what they come up with.

That's a complexity intervention. We know that 17% will see something that other people haven't seen, so we build a method for that sort of discovery. That also has other utility which I'll come to in a second.

Real-time feedback loop.

The big thing we talk about in the EU field guide is building your employees into a human sensor network so you can actually consult them in real time.

You can't afford to wait for a linear process in a crisis, you need to have immediate feedback.

And again, that actually has come from that sort of process.

The other big thing, and this is the work we've been doing with the European Union on how to handle false data, is that what matters is not creating better algorithms to tell you what's true or false, but doing more work to identify where the data comes from.

Now what we're now looking at, and we've now done this in five countries, we're looking for funding to do it worldwide, is every child at the age of 16, and this can be part of a baccalaureate, becomes an ethnographer to their own community every week for one year. I now know what's happening at a local level, and I know where the data came from, and I know it's reliable. And again that's coming from that sort of scientific principle.

And then that also allows us to do what are called anticipatory triggers.

I was one of the two principal designers, Peter Schwartz was the other, of the Singapore government's Risk Assessment and Horizon Scanning system. One of the things we said is that scenario planning is plain bloody dangerous if the situation is complex. You do a forces-and-factors analysis, you do a two-by-two matrix, you identify your scenarios. The trouble is you're only seeing what you expect to see, and therefore you can be blindsided. You're often better off without those scenarios, with a more responsive system instead.

One of the things we focus on now is what's called micro scenario generation in context.

So if something happens, you trigger the workforce to create micro scenarios, and you draw the maps from that to identify what's more or less likely. Again it's that real-time feedback loop from a cognitively diverse group.

And that's a big area of work.

So coming on to the field guide. It has three key things that you should do:

First of all, build a human sensor network. Your employees or your citizens are an obvious candidate. They can also be customers. This is a whole different approach from panel-based research: it's creating panels at large scale. If you've got a human sensor network and they've got the ability to feed back, that's good news. One of the things I'm currently working on with one of the big pharma companies is a Gemba system. If you know the Japanese concept of Gemba, the workforce know what's going on. It will replace all of their reporting systems, because people will do micro-recording as things happen. So we give people a benefit.

Ideas generation for R&D, things which make me feel nervous, that's weak signal detection, lessons learning, and so on. So we have one system capturing all of that sort of stuff.

We remove a burden. But now I've got a network that I can ping. So if something happens, I can ask that network a question, and they're familiar with the process. The way we say this is you build networks for ordinary purpose that you can activate for extraordinary need. You don't scramble to create a network after something has happened. You build networks in anticipation of the unexpected, not in response to the unexpected.

Empathy counters false news better than information counter-warfare

The second thing is informal networks. Remember I talked about entangled trios, putting three people together from different backgrounds? That was designed to build the density of informal networks across silos. You've all heard people say we should get rid of silos. Everybody heard that? The first time I can find a trace of this complaint is Zeno the Tyrant of Athens. People have been complaining about this for centuries, and nothing has changed. So maybe it's about time to realize that silos have utility.

If I'm at a conference of complexity scientists, I can say something in three minutes that will take me ten minutes or half an hour here.

The reason silos exist is they lower the energy cost of knowledge sharing based on expertise. The key thing is not to break down the silos, but to build the informal networks across the silos.

It's rather like the fungal roots that connect tree roots and make a soil healthy. With things like entangled trios, within two years, everybody can be within two phone calls of everybody else.

If you've built that network density, you've focused on the channels through which knowledge will flow, rather than trying to capture and codify the knowledge. That's more flexible.

Non mediated peace and reconciliation

Of course, you can see the importance. If you look at the Singapore government, all of their informal networks, which are extremely effective, come from the fact that they do national service. They have all spent time in the army together, and that breaks down barriers. If you look at the English government, all of their informal networks come from three elite private schools and two elite universities. That's perverse. I could go on about the Grandes Écoles, but I'll be nice, being as I am in Paris. What you actually need is informal networks distributed across the whole organization, not confined to an elite.

I've interviewed over 100 CEOs and principal actors in government over the last couple of years.

All of them, during COVID, fell back to their informal networks, not the formal systems, because informal networks carry trust automatically. Building informal networks is a way of creating a healthy ecosystem, and it's not too difficult to do.

Sensing multiple perspectives (EU Ponte Project)

In informal networks, people know each other, which is why I'm talking about empathy.

Some of the work we're currently doing in the States on the post-election crisis is to increase empathetic contact between people from both sides.

That comes from work I did in the 70s in Northern Ireland, where there were two approaches to peace and reconciliation.

One was to get everybody in a big hall together and talk about how Catholics and Protestants should get on, and wouldn't it be nice if we stopped throwing petrol bombs at each other? That was very attractive, and it was always reported as a huge success.

If you haven't seen a comedy on Channel 4 called Derry Girls, you should watch it, because in episode 1 of series 2, the Catholic girls are forced into a peace and reconciliation process with the Protestant boys, led by a trendy priest. They have a blackboard for everything we've got in common, and a blackboard for everything which is different. At the end of the episode, everything which is different is full, and the other one is empty. Being as they're Northern Irish, they've actually put that into a museum now. They're quite proud of it, in terms of the way it works.

We took a different approach. We took two or three people from each community, and we put them on teams into Latin America, and we didn't talk about the conflict. They discovered pretty fast they had more in common than they realised, and they had the conversation about their difference when they were ready to have it in their own time.

Again, that's the epistemic justice, epistemic sovereignty thing. That's actually really important in organisational change.

Throwing people together and getting them to work on something, and not deal with the problem directly, is far more effective and energy efficient than trying to tackle the problem head on.

Anybody with teenage children? You all know this, don't you? If you have teenage children, you develop manipulative skills beyond your imagination, and you never tackle a problem directly, only indirectly.

Then, exaptation. You need to map what you know at the right level of granularity, so it can rapidly reassemble in the context of different need. To give an illustration, this is a project I did years ago.

We had a European company who thought it would be brilliant if people bought lights as a garden feature, rather than a garden utility. People bought lights to light their garden. They didn't see light as part of the garden itself. We gathered about 3,000 anecdotes about people's gardens, and we didn't talk about light. We hid that in the signification structure, light and shade, so it wasn't prompted.

Then we took all of the technologies of the company and indexed them against the same index set. Then we mashed the databases together, and we got five clusters. We went to marketing and asked, why are these technologies linked with these customer stories? The one project which I'm both proud of and ashamed of at the same time is that you can now, in the Far East, buy a plastic rock which changes color based on human proximity, throwing terrible colors into your swimming pool. That's entirely based on a technology originally developed to handle urine-saturated staircases in football stadiums.

You see exaptation? Find something you know and use it for something completely different, and you get there. That's a mapping technique, and that needs to be done continuously.
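The mapping move described above, indexing customer stories and company technologies against the same signifier set and then clustering the mashed-together data, can be sketched minimally. Everything below is a toy illustration: the signifiers, the items, and the weights are invented, and a naive greedy similarity grouping stands in for whatever clustering method was actually used:

```python
from math import sqrt

# Hypothetical shared signifier index (e.g. "light and shade", "safety", ...)
SIGNIFIERS = ["light_and_shade", "safety", "surface", "colour", "proximity"]

# Toy data: customer anecdotes and company technologies, both scored
# against the SAME signifier set. That shared indexing is the key move.
items = {
    "story:evening garden parties":   [0.9, 0.1, 0.0, 0.6, 0.3],
    "story:kids playing near the pond": [0.2, 0.8, 0.1, 0.1, 0.7],
    "tech:luminescent coating":       [0.8, 0.2, 0.3, 0.7, 0.2],
    "tech:proximity sensor":          [0.1, 0.7, 0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two signifier-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster(items, threshold=0.8):
    """Greedy grouping: join the first cluster whose seed item is similar enough."""
    clusters = []
    for name, vec in items.items():
        for c in clusters:
            if cosine(vec, items[c[0]]) >= threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

for c in cluster(items):
    print(c)
```

Clusters that mix `story:` and `tech:` entries are the exaptation candidates: technologies that sit close to customer narratives nobody had thought to connect them with.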

This, for those of you who want to download it, is the whole process: what did we do during COVID, and what do we need to do differently for the next crisis? I said some years ago that COVID was God's gift to humanity, because there are worse things coming. We have to learn from that.

To quote a famous line from Lincoln towards the end of the Civil War,

as our case is new, so we must think anew, and act anew.

Remember, it's think and act. It's both.

Three programs from the Cynefin Center

So, three programs.

The Cynefin Centre is a not-for-profit centre. That's my main focus.

You can go onto the website, you can download these.

The citizen engagement one is all about how do we use churches, schools, focus groups, arts clubs, effectively as ethnographers to their own condition, to feed back to government. And within that, we're now looking at radical new forms of democracy based on understanding attitudes rather than stated opinions. So that's there, that picks up some of the ideas.

Health is massive. The cost of the health sector, worldwide and in Europe, is out of all proportion to what we're going to be able to afford to pay.

So we've got to find radical new ways of thinking about health in the context of families, lifestyle, life journeys and everything else.

And that's where we're pioneering this concept of continuous patient journey capture of narrative to understand those sort of micro-patterns.

And again, there's projects in there which you're welcome to join.

And then, of course, the big one, which is climate change. And that's where we're really applying complexity theory.

Lesson from complexity theory: create a hyper-localised awareness to change the overall attitude

This comes back to my concept from earlier, and to another book by Terry Eagleton, called Radical Sacrifice9.

And until people see climate change as a local problem, they will never accept international solutions. When COVID came, everybody could see the local impact, so they accepted sacrifice. But they will not accept the sacrifices which are needed for climate change. So we've got to create a hyper-localised awareness of climate change if we're going to change the overall attitude. And that is one of the massive lessons of complexity theory.

It's why myself, Nora Bateson, and others are heavily criticising things like the Inner Development Goals. We've got to start to get to the reality of people's day-to-day lives, not the grand visions of people in an ivory tower.

It's got to be practical for people to take it forward. And that's kind of like where I really want to finish.

For those of you who haven't read The Hitchhiker's Guide to the Galaxy: has everybody read The Hitchhiker's Guide to the Galaxy? Okay, remember the description of Earth? The entry gets modified from "harmless" to "mostly harmless". That's the only entry that Earth gets. I love that phrase.

So there are things we need to be careful of:

Algorithmic approaches to democracy

One is algorithmic approaches to democracy. The trouble with IT people is they all think everything is logical, ordered, and structured, and they can write an algorithm for it. It's kind of like one algorithm to rule them all and, in the darkness, bind them. We need to think about approaches to democracy, which are based on empathy, personal contact, and action. And we need to use technology to support that, but not to replace it.

Designing how it should be rather than influencing the evolution of sustainability

Talking about how things should be. It's very nice if you get together with a group of people who all agree how things should be, but the reality is you're not going to get there. That's what I call lotus-eating.

Interactions - focus on individual change - the mindset maxima

And one of the key lessons of complexity is we're defined by our interactions, not by what we are.

Changing interactions is a much more successful strategy than trying to change people, yeah?

I've actually started to try and stop people using the word mindset, because it's a meaningless term. Instead, we say, ask three questions:

  • What has agency?
  • What are the assemblage structures?
  • What are the affordances?

Because then you can do something about it. And again, you want to get away from those idealized states. You can have the reference on this.

Explicit goals destroy intrinsic motivation

Whenever people are working for explicit goals, it destroys intrinsic motivation.

That's the finding of a meta-study of all studies of human motivation. There is no doubt about this in the scientific community.

Where do we have the most explicit goals? Health, education, social services. Where do we most need intrinsic motivation? Health, education, social services.

If you have explicit goals, people focus on achieving the goals rather than the things the goals are meant to measure.

And that has unexpected consequences, the only predictable aspect of complexity.

Neo-colonialism with a caring face

Neocolonialism is taking a new form.

I call it young white males trying to do good. It's moving into Africa and Asia on the basis that everybody wants to be a northern European enlightenment culture from the 19th century reading Kant.

Sorry, I'm being a bit cynical here, but that's it, right? And that is actually a problem.

The reason we focus on this form of ethnography is that it allows people to generate their own solutions, authentic to their culture, without that outside interference.

Happy clappy gatherings of the enlightened calling you to salvation

Yeah, happy clappy is one of those rude phrases. There's an awful lot of this. Just to be rude and direct about it, people like Otto Scharmer and Peter Senge are really good at this.

Everybody comes together and agrees, oh, wouldn't it be wonderful if? And then nothing changes, right? We need a far greater degree of pragmatism than we're getting from that.

And the other problem you've got in Western society is Manichaeism, the belief that things are either absolutely good or absolutely bad.

You've all seen those two-column tables on social media, the bad thing on the left and the wonderful thing I'm trying to sell you on the right.

Yeah, we've got to start to take a both-and approach, not an either-or approach, and start to realize that a lot of the things we did 100 years ago are still valid today.

We just need to understand the context.

There's nothing wrong with process engineering in an order-constrained system. There's everything wrong with it in a service environment.

It's, again, that context-specific type approach.

Conclusion

And I'd kind of like to finish off. This is one of my favorite quotes from T.S. Eliot.

Nothing pleases people more than to go on thinking what they have always thought, and at the same time imagine that they are thinking something new and daring: it combines the advantage of security and the delight of adventure.

The trouble with new things is people adopt the language of the new things, but they don't change.

Yeah, it's kind of like fashionable and it's fun.

And then the real thing, this is from friends of mine in Gaping Void, realize that all paths up are different, all paths down the same.

All right? Understanding how we move forward is going to be highly contextual.

And to do that, we need dense informal networks, we need human sensor networks, and we need to be able to repurpose our existing knowledge fast to deal with existential as well as non-existential threat.

Footnotes

  1. HOPE WITHOUT OPTIMISM - Terry Eagleton

  2. Managing complexity (and chaos) in times of crisis - A field guide for decision makers inspired by the Cynefin framework

  3. The Learning Power of Listening - Practical guidance for using SenseMaker

  4. The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful by Eric Ries

  5. Dynamics in Action

  6. Aporias Jacques Derrida

  7. Man is essentially a story-telling animal - Alasdair MacIntyre

  8. Stochastic parrot on wikipedia

  9. Assemblage (philosophy)- Wikipedia

owulveryck commented May 6, 2024

Key points:

This talk addresses several key themes related to the complexity of systems and how we can adapt to it to build a sustainable and survivable future. The highlights are as follows:

  1. Comparing a sea wall and a salt marsh: This analogy illustrates the approach to take when designing systems (societal, professional, or otherwise). A sea wall, like any highly robust and stable system, can fail catastrophically if conditions exceed those it was designed for. A salt marsh, by contrast, as a complex ecosystem, is constantly changing and adapting, and its "failure" does not carry such disastrous consequences. It is therefore preferable to build systems that respond to unpredictable events the way salt marshes do.

  2. Complexity and pragmatism: Complexity is treated as a material, realist matter. The approach is pragmatic, advocating the fusion of theory and practice. To see clearly and take appropriate action, we must constantly try to understand not only what we can understand, but also the limits of our understanding.

  3. Evolution of learning: Most social science and management literature takes an inductive approach to learning. This method can be problematic because it rests on conclusions drawn from individual observations, which may be mere coincidences. By contrast, the deductive approach, which starts from principles to predict what to expect in specific circumstances, and the abductive approach, which finds the most plausible connection between apparently unrelated elements, are the key to understanding complexity.

  4. The role of narratives: Stories are particularly important in human systems, where they help us understand people's attitudes, beliefs, and inclinations.

  5. Interpretation and influence: The way things are described can influence how people perceive them.

  6. Complexity at scale: Complex systems evolve not by repeating what has already worked, but by decomposing things to their smallest coherent unit and then letting them recombine.

  7. False causality: Understanding that correlation does not equal causation helps avoid false interpretations.

  8. The importance of interactions: Rather than trying to change people, it is often more productive to focus on changing their interactions.

  9. Scaling complexity: Complexity scales through decomposition (down to an optimal level of granularity) and recombination, not through aggregation or imitation.

  10. Rejecting homogeneity: Homogeneity can stifle creativity, which makes coherent heterogeneity preferable.

  11. The power of narratives: Narratives have engaging power and can provide valuable information.

  12. A pragmatic approach to managing problems: It is sometimes better to solve problems indirectly by working together on something else, rather than confronting them head on.

  13. Nudging: It is more manipulation than anything else. It is better to focus on adaptability and flexibility than to try to force people to change.

  14. Manichaeism: Manichaeism is an obstacle to properly understanding complexity. We need to adopt a "both-and" approach, rather than seeing things as either entirely good or entirely bad.

  15. Understanding future evolution: Depending on the context, it is possible to move in a general direction rather than toward a specific goal.

  16. Creating awareness: This means building a human sensor network by training employees to gather information in real time, enabling an immediate response to crises.

  17. Anticipating with micro-scenarios: Rather than building better algorithms to determine what is true or false, it is also important to understand where the data comes from.
