This episode is a conversation with Adrian Brown of the Centre for Public Impact about the need for experimental approaches to improving complex systems.
Episode notes
Main points
- Losing Control movement - 2:00 / 31:00
- Emphasis on evidence-based policy - 5:15
- The difference between Explicit and Tacit knowledge and the importance of place - 6:30
- Cultural barriers to experimentation - 18:45
- Buurtzorg and the need for a "heatshield" - 21:30
- Difference between pilots and experiments / and industrial vs organic approaches - 27:15
- Continuous experimentation/learning - 28:00
- Good things happening around the world - 32:45
- Government becoming an enabler - 33:45
- The challenges of replacing New Public Management with something equally compelling - 34:45
References
- Centre for Public Impact
- Enablement mindset - Blog post by CPI
- Evidence vs experiments - Blog post by CPI
- Losing Control
- Buurtzorg in the UK
- "Experiment" - A sonnet
Adrian Brown - Bio
Adrian is a huge public sector reform enthusiast. He’s spent more than 15 years working on transformation and performance improvement in government. He’s been a consultant with McKinsey, a Fellow of the Institute for Government in London and a policy advisor to the UK Prime Minister. He’s currently the Executive Director of the Centre for Public Impact - a not-for-profit foundation set up by the Boston Consulting Group.
Transcript
Mark: Hello. I’m Mark Foden, and welcome to The Clock and the Cat, a podcast of conversations about complexity. The Clock and the Cat explores the emerging science of complexity ultimately to help you generate better ideas and make better decisions whatever you’re involved with. This is Episode Five, and I’m going to be talking with Adrian Brown about experimenting. Before that, if this is your first experience of The Clock and the Cat and you don’t know what it’s about, please do go back and listen to Episode One for a seven-minute, short, sharp introduction to the podcast.
If you went away, welcome back. Here I am with Adrian Brown. Adrian is a huge public sector reform enthusiast, and he’s spent more than 15 years working on transformation and performance improvement in government. He’s been a consultant with McKinsey, a fellow of the Institute for Government in London, and a policy advisor to the UK prime minister. He’s currently the executive director of the Centre for Public Impact, which is a not-for-profit foundation set up by the Boston Consulting Group. Adrian, hello, and welcome to The Clock and the Cat.
Adrian: Hi, Mark. It’s great to join you for this podcast.
Mark: Welcome. Thanks very much. Before we talk about the subject of the podcast today, experimenting, can you just tell us a little bit about the Centre for Public Impact and what you’re up to there?
Adrian: Sure. Well, as you said, we are a not-for-profit foundation, and we work with governments around the world to learn and exchange ideas about how to strengthen what we call public impact, which are the outcomes that governments achieve for their citizens. We do that through some research and we also provide tools and training and bring together government leaders and people outside government to exchange ideas and challenge each other about how to improve.
Mark: The genesis of the discussion today is that Adrian and I were both at a conference in Birmingham a couple of weeks ago. That’s Birmingham in the UK, by the way. The conference was called Losing Control. There were about 200 people from all sorts of backgrounds and it was about promoting social change. The Losing Control theme came from the need to let go of power in order to enable change. There was lots and lots of talk about trying new things and freeing people to enable them to engage with the challenges and to experiment and learn how to make things better. It came out, for me, as a real theme of the event. What does experimenting mean to you?
Adrian: The best way for me to answer that is to contrast experimenting with evidence. If we think of experimenting, that’s the process through which we can hypothesize about the world around us and test new approaches and, therefore, learn which approaches are more effective or less effective over time. Experimenting is an activity that you do. In some sense, it’s a mindset and a way of approaching the world. Evidence is the output from that process in certain situations, so that’s definitive statements, more or less, about what works. Often, the word evidence is associated with more of a scientific approach and, perhaps, the use of randomized controlled trials to develop hard facts about the world around us. I think of these two as being closely related but quite different given that experimenting is an approach, whereas evidence is a set of facts. My contention, which perhaps some people will find a little surprising, is that, in public policy, we focus rather too much on evidence and not enough on experimenting. That’s probably something that we should talk about, Mark.
Mark: Yeah, absolutely. Evidence-based policy has become a thing lately, hasn’t it? This is one of the things that struck me. If you’ll forgive me, the evidence-based approach is quite clock, and the experimenting is much more cat. Is that reasonable to say?
Adrian: I think that is reasonable but I think I might say it slightly differently in that, if you’re in a clock world, then experimenting is much more likely to lead to evidence in the sense that things are knowable and, ultimately, you can measure things and make more definitive statements, which means things are more testable. This means it’s much more likely you can arrive at what you might call hard evidence. If you’re in a cat world, then experimentation is still important, if not essential, in that it helps you to learn about what is more effective or less effective and improve, but crucially, it’s less likely to arrive at hard evidence because things are less knowable. It’s harder to measure things, and it’s harder, if not impossible, to make definitive statements about what works in all situations. I think the clock world and the cat world set up a different relationship, or a less strong relationship, between experimenting and evidence such that, in the cat world, evidence is much less likely to be interesting versus understanding how we could be better at experimenting.
Mark: Yeah, so I’ve been thinking about that. Context is really, really important. You might conduct an experiment in one context and it works one way, and you might have a particular set of evidence coming from it, but if the conditions are slightly different … you might do something in the US and have one set of outcomes with pretty much the same sort of inputs, but do the same thing in the UK, and you’ll get something completely different. What I’m talking about is that context is important. One of the things coming out of Losing Control, for me, was the importance of doing things in place. People used the word place a lot, by which I think they meant that it’s important to do things in a context rather than abstract out evidence and then apply that evidence generally. Does that make sense?
Adrian: It does make sense. I think one differentiation that is important to make is between what you might call explicit knowledge or codifiable knowledge that we can write down and describe and share with other people and tacit knowledge, which we can’t. My argument as to why experimentation is more important than evidence is that evidence relies on, and can only be developed on, the basis of explicit knowledge because it requires us to be able to capture that knowledge and test it and share it with other people, whereas what matters in many, many public service settings is actually not so much the explicit knowledge but much more the tacit knowledge. That’s the experience and the skills and the judgment of professionals who are working in those services. That’s why I think, at Losing Control, there was a great emphasis on place, because you can’t extract that knowledge from the place. Indeed, you can’t even extract it from the heads of the people who are involved in that particular service, and I’m including the users of the service, as well, in that description. That’s why experimenting in situ, where that knowledge and information resides and from where it cannot be extracted very easily, is much more important than worrying about whether we can codify it all up and make grand statements about what works in general terms and everywhere.
Mark: We’re involving service providers, for want of a better expression, and the users of those services and, together, they’re learning about how things might be done better. What you’re actually doing is changing the system, changing the context, such that it does something different and improves but, actually, it’s quite hard to take out all that tacit knowledge and turn it into something that will be useful elsewhere.
Adrian: Yes. If we’re strictly talking about tacit knowledge, then I’d say it’s impossible to extract it. The judgment that a teacher uses when they’re leading a class of 30 different children is, in some senses, codifiable in that they attended teacher training college and learned theories about how children learn and how you can teach different types of class. But actually, how they act as a teacher is as much what they’ve learned from experience, either from doing it themselves over years and years or from observing other people who have done it, and from talking to their colleagues about different situations and maybe individual pupils or specific types of approach that they have absorbed in a whole variety of different ways, not just something that was written down and codified. I would imagine, if you asked a teacher or a social worker or a probation officer to estimate the extent to which they use their tacit knowledge on a day-to-day basis versus what they know which is codifiable, they would say most of it is tacit. In that sense, it’s very hard to extract and make grand statements about outside of that specific situation.
Mark: A lot of it is about human relations, isn’t it, and micro differences in how you behave? I just remember one of the stories from Losing Control was from the Prison Reform Trust in Birmingham, and they were talking about how they were involving people who’d been in prison themselves and helping them to help people who were in prison at the moment. So much of it was about their understanding, their empathy for what was going on and, because of that, the thing that made the biggest difference was the fact that they’d lived that experience. You could try to write it down and tell someone, “This is what we said, and this is the context in which we said it,” but actually, the thing that makes the difference is that experience being in the human relationship.
Adrian: That’s right, and the interplay, the basic human interaction that occurs when two or more people meet and talk with each other, and respond to each other, and react to what each other is saying, in ways that you couldn’t codify because they’re so wide and varied and totally dependent on the interplay between the people who are talking to one another. When we do make efforts to capture evidence and to make statements about what works or doesn’t work, which I’m not saying are pointless, I’m just saying we shouldn’t place so much emphasis on them, you tend to be drawn to making statements which are quite blunt, big, bold statements. You might say something like class sizes of a certain size for a certain age group are more likely to lead to better outcomes than larger class sizes, so a smaller class size is better than a larger class size. Maybe you’ve gathered some evidence to support that statement. But if you think about the dynamic in a classroom, the size of that class is a fairly blunt way of describing what that classroom is, so the evidence necessarily ends up being quite limited to these sorts of blanket statements and is less likely to get into the complex interplay between a teacher and their students, because that’s just harder to observe, harder to measure, and harder to codify in a way that you can make comparable across many situations.
Mark: To what extent do you think this kind of thinking is getting into government policymaking at the moment?
Adrian: It is, to an extent. My concern, I suppose, is that evidence-based policy is really the fashion of the day. It has become a phrase which is widely used, and it’s one of those things that, well, who would argue with it?
Mark: Absolutely.
Adrian: Who would argue with a policy that’s based on evidence? That sounds completely sensible, and anyone who thinks that’s a bad idea must be mad. The problem, as I see it, is that when policymakers become too focused on evidence, i.e., things that they know in advance, and on spreading best practices, creating standards, and maybe setting targets and metrics associated with some of these things, they are either completely ignoring, or at least placing less emphasis on, how we can encourage the system to be a learning system. Many of the things you would do if you want to spread evidence and best practice, i.e., create standards and targets, like I was saying, are actually things which might inhibit a learning approach, because if we’re having to follow a particular standard, we’re less likely … or maybe it would actually be considered to be poor practice to be doing something else. My worry is that, although policymakers do talk a good game around encouraging innovation and learning in public services and public systems, a lot of what they actually end up doing tends to go in the other direction. They’re doing that with the best of intentions, to try and spread best practices and evidence, but that has a chilling effect in terms of encouraging people to try new things and to recognize the value of the knowledge that’s embedded in the system, as opposed to this sort of external knowledge that’s beamed in from on high.
Mark: Yes, so what’s done by extremely capable people to make sense of a situation can actually have a detrimental effect. One of the reasons I’ve started doing this Clock and the Cat podcast is to help folk understand why that is happening and why complexity ideas are important, because I think it’s one way of explaining what you’ve just said in a sort of scientific way. I think it’s really very important indeed that we all learn more about this stuff, because we don’t have that much experience of working in this way anywhere.
Adrian: I’d agree with that. I think The Clock and the Cat metaphor is a particularly helpful one, actually, because it exemplifies the very different way of thinking and viewing the world as you’ve explained in your earlier podcasts. I think the challenge here, if we compare and contrast evidence with the more experimental learning mindset, is we need a little bit of clock, right? I’m not saying there’s no room for evidence at all. How can we blend together these two very different ways of viewing the world without one squashing the other?
Mark: Exactly. It’s very easy to say the thing that is missing is the incremental, experimental approach and, because that’s what’s missing, you overemphasize it, and people think we cannot possibly go from where we are now to that kind of world, when it isn’t actually like that. Roland Kupers, who’s been in the last two episodes, has this neat way of talking about it. He says, “When we look at any kind of potentially complex situation, take a look through the complexity window, i.e., plug in your complexity routines or whatever it is, and just look at the situation through that window, through that lens, and see whether those kinds of effects are important in this situation.” Sometimes they are, and sometimes they’re not, so it’s never one thing, it’s both, but we do need to develop this capability to see the cat in things.
Adrian: Yes, I think that’s right. This is some of the research that we’re doing at CPI now, to try and understand what are some of the things that policymakers can do if they want to embrace the more cat-like view of the world. Related to what we’ve been talking about here, you might say, well, if you want to encourage experimentation, then you would, for example, ask how we can enhance the information flows, not up and down the system but horizontally between practitioners in a system. How can we encourage there to be as much sharing of information and learning between probation officers or teachers or whoever as possible? Do we need to think about creating learning platforms or other mechanisms that can encourage that kind of interplay? I think we spend a lot more time thinking about information going up and down the system than we do going across. That’s just one example of what a policymaker, and we’re using the word policymaker here in a sort of generic sense, somebody higher up the system, might be able to do to help encourage more of an experimental and learning mindset rather than discourage it. It’s a very practical thing that they could look into and potentially do something about.
Mark: Absolutely. I think there are some potentially quite significant organizational and cultural barriers to doing that. For example, I did some work with the NHS a couple of years ago where we were trying to promote that kind of connection across different organizations. We developed a community of probably a couple of hundred people, I suppose, encouraged by many of the managers of those people but, nevertheless, other managers involved were a bit nervous about what was going on, because people were talking about new ideas and doing things differently that they weren’t aware of. They weren’t plugged into this … well, we used Slack to communicate, and they weren’t on the Slack, and stuff was happening that they weren’t aware of. It made them feel nervous, and it actually closed down some of the conversations. It was clear that some people stopped turning up to meetings and that kind of thing. In hierarchical organizations, there are those impediments, so it’s not straightforward doing this kind of thing. Have you come across that sort of thing?
Adrian: Yeah, totally, and of course it gets harder once you start to look outside of those organizations because, arguably, in any of these complex services, the way we carve up the world into silos is not necessarily the right way of thinking about who needs to be involved in helping to improve the situation. When we think of the NHS, if we were thinking about information flows horizontally, we would be asking how we can encourage information flows not only within this organization but with the adjacent organizations who we end up working alongside, or working with, when we’re trying to address particular challenges. Of course, the police and the NHS have a very strong interplay around who’s coming into A&E, if you’re thinking about stabbings or other types of situation where the police are involved. How are the police and the NHS sharing information so that they’re constantly learning and improving? This is a good question and a very worthwhile question to ask but, because of the cultural differences and the organizational barriers, it is extremely difficult. I don’t want to pretend that it’s easy in any way.
Mark: Almost by definition, if you’re experimenting with things that are going to make a big difference, a lot of that difference is going to be achieved by connecting things that are outside your normal sphere and, as you say, the NHS talking to the police and so on, so this makes experimenting really, really hard. One of the things that came out of Losing Control was the discussion about Buurtzorg. I know you weren’t in all of the sessions, but the standout image from the event, for me, was a picture that was put up by a chap called Paul Jansen, who’s involved with Buurtzorg in the UK. He had a picture of an NHS hierarchy, a typical hierarchical organization chart, and then a picture of the Buurtzorg community nursing model, which is a self-managed kind of network thing. Between the two, there was this great line that was labelled “heat shield”. Did you see that image?
Adrian: I did see that, and I thought that concept is a very interesting one: the idea that you have to create a safe space within these larger systems, these larger organizations, such that this kind of more learning, experimental mindset and, therefore, less hierarchical and less vertically accountable type of approach can flourish. If we’re being realistic about this, we’re not going to reform these massive public systems anytime soon, much as we might want to, so I do believe the way we can make progress is through models like Buurtzorg: highlight how these different approaches can lead to better outcomes, and then hope that they can almost infect the system from within, as people see that it’s working well, want to be involved and want to ask how those approaches can work in other bits of the organization.
Mark: Let me try this on you. The thing that struck me, sitting there listening to Paul talk, was that, for the NHS managers who were responsible for that area, this was a huge deal. It was making a big difference to how things were working in their area, and it would mean really quite significant change and actually taking risks. At a national level, it’s a sort of an experiment. We’re going to try it in a few places, but it won’t feel like an experiment in each of the contexts. I think the thing that’s missing is this sort of experimental mindset where we’re saying, “Okay, there are a couple of places we’re going to try this. We’re going to take special care to manage the risks, and maybe it will take some more money in order to do that, but these are comparatively small experiments, and they’re nationally important.” I don’t think that we’re creating the experimental mindset that allows this sort of development to happen. We looked at Buurtzorg four years ago and comparatively little has happened in this time, and I think a lot of it is down to this sort of lack of ability, willingness, or whatever to experiment. People are thinking, “Oh, right, that’s the new model to be implemented,” rather than saying, “Hey, there’s a new model. It might work here. Let’s try it,” which is why I wanted to have this discussion about experimenting.
Adrian: Let me try and put a more positive spin on it.
Mark: Okay, good.
Adrian: I do think that, from Whitehall down in the UK, people who work in policymaking and commissioning and in public services themselves are absolutely motivated to try and improve things, and do understand and get the need to try new things in order to improve. I think that is clear. In all sorts of different ways, public services are piloting new models and trying new approaches all the time. I think the slight disconnect, which is perhaps what you’re picking up on, is that if you approach that with a kind of industrialized mindset, then you think that what you’re doing is creating something that you can then scale, and you can take it up to an industrial level. In order to do that, almost by definition, it has to be codifiable. It has to be measurable. It has to be packaged up, because that whole concept of scaling is very industrialized. I think, perhaps, Mark, you’re saying, well, that doesn’t really feel experimental anymore. That feels industrial. I think what we’re both arguing for here is to follow an experimental mindset, not just in terms of trying new things and seeing if they work and then scaling them, but making the whole system, and this is a bit jargony, so I apologise, but making the whole system a learning system, so that it’s learning all the time, and good ideas are promulgated and shared, and things that don’t work so well end up naturally withering and disappearing. That system doesn’t require someone sitting at the top to mandate the pilot and then codify it and roll it out. It’s much more organic … well, you know, it’s much more cat-like in its approach. We can get ourselves tied up in the terminology of experimenting and experimental mindsets, but I think the biggest difference is this: do we think we’re building a learning system, or a giant kind of machine where we’ll try a different bit of kit from time to time and, if it works, we’ll build a lot more of them and plug them in all over the place?
Mark: That’s exactly what I was trying to articulate, I think. When you’re dealing with complex systems and you’ve got problems that look more like cats than clocks, then you’ve got to be organic. You’ve got to think about growing rather than building. The difference between a pilot and an experiment is, I think, crucial for me, because if you’ve got a big idea, you’ve gathered some evidence, for example, about a particular way of doing things, you might start gearing up towards implementing that approach. Of course, you’d start with a pilot, but the implication with the pilot is that, if it succeeds, we’ll go on and do the big thing after it, whereas an experiment is quite different. The point of an experiment is to test a hypothesis, and it only fails if there’s no learning at the end of it. Then, I don’t know, maybe that’s just a different way of saying what you’ve just said, but I just think there’s a huge difference between the sort of piloting mindset and the experimenting mindset and, well, I think it’s good to talk about the difference between the two.
Adrian: To go back to almost where I started, experimenting is an act, an action. We’re doing something. Maybe the important thing is that we are experimenting. We are learning. It is the process that’s important. The outcome from that process is actually, strangely, less important because there is no fixed outcome, necessarily; it’s a continuous process of learning. I think, when you have the pilot mindset, it’s implying that there’s a very particular point in time where we’re doing something that we’re testing and then, once we’ve finished testing, that’s over. We will have decided if this has worked or not. If it’s worked, we will roll it out, scale it, spread it more widely. The period of learning is time-limited to this piloting phase, whereas my reading of what it means to have more of an experimental mindset, or a learning mindset, is that it is an ongoing process.
Mark: We’ve been talking about the learning organization thing for, I don’t know, since Peter Senge started it probably more than 30 years ago now. Do you think we’re getting any closer to working that way?
Adrian: I actually think that we have much more of a learning organisation approach out in the system. If you get away from the top of Whitehall, and maybe away from commissioners, down at the place level where it matters, I think that is actually where you will see this kind of approach much more readily happening.
Mark: As evidenced at Losing Control. I was just so impressed by the way people there were approaching things and the way they talked about things. It was like a different world. It was a breath of fresh air to be there.
Adrian: Actually, you know, the vast majority of people who work in public services and work in the public sector are probably behaving much more like this than it would appear if you’re looking at it from the top down. I think the argument that you and I might have with the top of the hierarchy is that they’re doing things which, in many ways, are curtailing and constraining and preventing that learning from really reaching its full potential, rather than doing things which are helping it and nurturing it. The irony is that the things imposed from above that are stopping this are done with the best of intentions, because it’s thought that they improve things. This is the big debate, right? Where in the system is there this discontinuity between one way of thinking about the world and another way of thinking about the world? I think, in different services, you’ll find it in different places. At Losing Control, we did hear some really inspirational and positive stories of, for example, commissioners in healthcare and ethics, I think it was, who have actually started to embrace this much more in their approach as commissioners, having seen it work in one of their particular programs or activities. I think we can see examples of this bubbling up the system. Wigan has done this across a whole council. They’ve taken a much more learning, open, adaptive approach. In different examples at the local level, in councils or CCGs or particular bits of public services, we do see this. It’s just not across the whole system at this stage and, in many ways, it feels like the system is somewhat working against it.
Mark: Complexity thinking says there won’t be a neat procession from one world to the next. If we are going to move to a much more, for want of a better expression, experimental mindset, working in the learning organisation way that you’ve been talking about, is it going to be pretty messy for probably quite a long time while that change happens?
Adrian: Yeah. At CPI, we have an office in the UK, and I’m British, so we do pay a lot of attention to the UK, but it’s just as interesting for us to be looking around at what’s happening in other parts of the world. I think we are seeing other governments, believe it or not, Mark … other governments than the UK leading on this agenda. For example, perhaps unsurprisingly, in Finland, they’ve adopted an experimental approach to policymaking right at the top, right in the prime minister’s office. Now, I’m not saying that that’s cracked the whole system, but they’ve more explicitly embraced some of these concepts at a higher level of government. We, at CPI, are constantly looking around the world for the best examples of this kind of approach so that we can highlight it, because it’s often not particularly obvious from the outside, to share it with people and give some inspiration about how this can happen more systemically rather than just in pockets here and there. We’ve been looking at how government should evolve and adapt in the future more generally and have come up with this terminology around an enabling mindset. Government should think of itself as more of an enabling force in society rather than, and you can pick your word here, a delivery or implementation machine, or a set of services. If government is thinking about itself more with an enabling mindset, it’s starting with a question along the lines of, “How can I or we help you to achieve what you need to achieve?” It starts by thinking about the assets and the skills that are out there in the world rather than defining the question in terms of, “How can I deliver my service in the most efficient and effective way?” We think that there might be something in that, in talking about enablement rather than delivery, but it is a little bit of a battle of the frameworks and battle of the ideas at the moment.
I think where we’ve struggled, and I say we, the whole public service reform community around the world, is to really articulate anything that’s been as powerful and as compelling and as clear as New Public Management, which is the last two or three decades’ worth of markets and metrics and management in public services. One way we might be able to make some progress, because I think we’re close, is to ask whether we can articulate all of these different ideas that we’ve been talking about in something which is as compelling and as clear as that New Public Management manifesto that took off so significantly in the ’80s and ’90s and went around the world. If we can, then the way to dislodge one idea is to come up with a better idea. That’s the big systemic approach to this. The other way that I’m advocating and championing is, just as you were saying, Mark, that there are so many people doing such good work out there in the world, often invisible to the outside because it’s layered down in organisations, or it’s a small program over here or a small service over there. The idea is to try and highlight those, understand what they’re doing, and share that with others, not to copy, because of all the problems of something that works in one place not necessarily working elsewhere, but to get a sense of the different approaches that seem to be successful and ask how we can adapt those to our circumstances. I think it’s by trying to encourage those infectious ideas to spread in systems and between different countries that, ultimately, we will have a reasonable chance of changing the system as a whole.
Mark: Things are not going to change in a linear way, are they? It’s much more likely to happen in an organic way, and so that feels like a good place to stop. Adrian, thank you very much indeed. That’s been really, really interesting. Thank you.
Adrian: Thank you, Mark. I really enjoyed the conversation.
Mark: We’re done. There are lots more interesting conversations coming down the Clock-Cat pipeline, so if you’ve found what you heard useful, please do subscribe, and do spread the word because it may be useful to someone else. Please, straight away before the whirl of the world takes over, message, email, tweet, Slack, or otherwise let your chums know about The Clock and the Cat. Thank you. Hope you listen again. Bye-bye.