#121 – Understanding Value in a GenAI Powered World with Simon Wardley
BOUNDARYLESS CONVERSATIONS PODCAST - EPISODE 121

#121 – Understanding Value in a GenAI Powered World with Simon Wardley
Strategic tools can help you navigate organisational and market complexities. But how do you even begin to make sense of it all?
In this episode, Simon Wardley, the creator of Wardley Mapping and one of the most profound thinkers and influencers in strategy, explains the power of mapping complex ecosystems.
He speaks on how building resilient organisations goes beyond clever tactics and requires a deep understanding of the landscapes you’re navigating. He highlights how most organisations still struggle to truly understand their customers, and emphasises why it’s crucial to realign strategies, foster a shared language, and enable cohesiveness to create true “value.”
For leaders, this conversation serves as a call to rethink how you approach both organisational structures and the strategies needed to stay adaptable in a constantly changing landscape.
Youtube video for this podcast is linked here.
Podcast Notes
Simon brings a wealth of experiential knowledge in strategy, having contributed to the growth of some of the biggest organisations worldwide.
In this podcast, he delves into the evolution of technology, organisational structures, and strategy, with a particular focus on the impact of AI.
He challenges our fear of rapid technological advancement by sharing how tech’s true development follows a more gradual process, ultimately leading to sudden bursts of change – a more nonlinear growth.
He also explores other critical themes like – the evolution of organisational structures, referencing his explorers-villagers-town planners model, the need to have strong guiding principles, and also shares why the engineer is the true architect of a technology.
So, for anyone seeking a roadmap to navigate complexity correctly, this conversation is a must-listen. Tune in.
Key highlights
👉 Strategic tools must account for complex and rapidly shifting environments – mapping systems can help you visualise and adapt to these changes.
👉 Understanding the landscape is like seeing the entire chessboard – without it, organisations risk making strategic moves without fully grasping the game.
👉 Technology evolves through long, gradual build-up phases before rapid, transformative bursts – a non-linear path.
👉 Principles, not just structures, are the foundation of organisational agility – without them, teams become unstable and revert to traditional structures under pressure.
👉 Developing a common language within organisations is critical for strategic coherence and effective decision-making.
👉 Socio-technical systems, when understood and harnessed properly, create synergies between technology and organisational culture – these connections should be fostered to drive both innovation and user engagement.
This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, Soundcloud and other podcast streaming platforms.
Topics (chapters):
00:00 Understanding Value in a GenAI Powered World
02:42 Introducing Simon Wardley
04:06 Mapping Strategic Change in an AI-Driven World
19:44 Deterministic and Non-Deterministic Languages: Creating Shared Systems
38:41 Building a Humanised Strategy for the future
45:42 Creating Dynamic Systems
52:21 Evolution of Socio-Technical Systems
01:00:08 Breadcrumbs and Suggestions
To find out more about his work:
- Simon Wardley – Wikipedia
- Simon Wardley – LinkedIn
- Simon Wardley – Medium
- Simon Wardley – Twitter
- WardleyMaps.com
Other references and mentions:
- Architecture by yourself by Nicholas Negroponte
- Alex Komoroske – Boundaryless Conversations Podcast
- Eric Schmidt – AI by the end of 2025
- Rendanheyi
- Susanne Kaiser – Boundaryless Conversations Podcast
- The Portfolio Map Canvas
- Explorer, Settlers, and Town Planners
Guest suggested breadcrumbs:
The podcast was recorded on 17 April 2025.
Get in touch with Boundaryless:
Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast
Twitter: https://twitter.com/boundaryless_
Website: https://boundaryless.io/contacts
LinkedIn: https://www.linkedin.com/company/boundaryless-pdt-3eo
Simon brings a wealth of experiential knowledge in strategy, having contributed to the growth of some of the biggest organisations worldwide.
In this podcast, he delves into the evolution of technology, organisational structures, and strategy, with a particular focus on the impact of AI.
He challenges our fear of rapid technological advancement by sharing how tech’s true development follows a more gradual process, ultimately leading to sudden bursts of change – a more nonlinear growth.
He also explores other critical themes like – the evolution of organisational structures, referencing his explorers-villagers-town planners model, the need to have strong guiding principles, and also shares why the engineer is the true architect of a technology.
So, for anyone seeking a roadmap to navigate complexity correctly, this conversation is a must-listen. Tune in.
Key highlights
👉 Strategic tools must account for complex and rapidly shifting environments – mapping systems can help you visualise and adapt to these changes.
👉 Understanding the landscape is like seeing the entire chessboard – without it, organisations risk making strategic moves without fully grasping the game.
👉 Technology evolves through long, gradual build-up phases before rapid, transformative bursts – a non-linear path.
👉 Principles, not just structures, are the foundation of organisational agility – without them, teams become unstable and revert to traditional structures under pressure.
👉 Developing a common language within organisations is critical for strategic coherence and effective decision-making.
👉 Socio-technical systems, when understood and harnessed properly, create synergies between technology and organisational culture – these connections should be fostered to drive both innovation and user engagement.
This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, Soundcloud and other podcast streaming platforms.
Topics (chapters):
00:00 Understanding Value in a GenAI Powered World
02:42 Introducing Simon Wardley
04:06 Mapping Strategic Change in an AI-Driven World
19:44 Deterministic and Non-Deterministic Languages: Creating Shared Systems
38:41 Building a Humanised Strategy for the future
45:42 Creating Dynamic Systems
52:21 Evolution of Socio-Technical Systems
01:00:08 Breadcrumbs and Suggestions
To find out more about his work:
- Simon Wardley – Wikipedia
- Simon Wardley – LinkedIn
- Simon Wardley – Medium
- Simon Wardley – Twitter
- WardleyMaps.com
Other references and mentions:
- Architecture by yourself by Nicholas Negroponte
- Alex Komoroske – Boundaryless Conversations Podcast
- Eric Schmidt – AI by the end of 2025
- Rendanheyi
- Susanne Kaiser – Boundaryless Conversations Podcast
- The Portfolio Map Canvas
- Explorer, Settlers, and Town Planners
Guest suggested breadcrumbs:
The podcast was recorded on 17 April 2025.
Get in touch with Boundaryless:
Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast
Twitter: https://twitter.com/boundaryless_
Website: https://boundaryless.io/contacts
LinkedIn: https://www.linkedin.com/company/boundaryless-pdt-3eo
Transcript
Simone Cicero
Hello everybody, and welcome back to the Boundaryless Conversations Podcast. This podcast explores the future of business models, organisations, markets, and society in our rapidly changing world. I’m joined today by my usual co-host, Shruthi Prakash. Hello Shruthi.
Shruthi Prakash
Hello everybody.
Simone Cicero
Thank you so much. And we are truly honoured to welcome our guest today, somebody who has had a profound impact and shaped the way we first and I would say many people in the world, think about strategy in complex environments. Simon Wardley, welcome Simon. It’s a pleasure to have you back on the podcast.
Simon Wardley
It’s a delight to be here. So thank you very much.
Simone Cicero
Thank you so much. You are so lucky that you accepted to come back. I mean, Simon needs no introduction. He’s a prominent creator of a Wardley mapping, which is a powerful framework for visualizing and navigating strategic play in constantly evolving environments. His work has influenced leaders across industries and governments, offered a new way to better understand how technology, context, and evolution affect our organizations.
As a researcher with a sharp eye on social-technical change, Simon has long anticipated the shifts we are seeing now accelerate with the rise of AI and its wide-ranging impacts on economic and organizational dynamics. So Simon, as a start, I would love to hear how you are currently processing all that’s happening in the world, both from a geopolitical perspective and, technological perspective, and hopefully get you on this podcast today to project the thinking ahead a little bit so we and our listeners can have a head start in anticipating the future a little.
So that’s where I will start. What kind of patterns or shifts feel most strategically relevant for you today?
Simon Wardley
Gosh, so no small subjects. Let’s talk about geopolitics. Let’s talk about AI. Let’s talk about future. And let’s anticipate what’s going to happen. I mean, yeah, minor, minor subjects.
First of all, I’ve got to say on the introduction, thank you ever so much. That’s very kind. I still can’t get my head around this idea that I don’t need any introduction. I mean, I’m delighted to hear that other people are using, have been using mapping.
I remember I made it open-sourced, well Creative Commons, 20 years ago, mostly because I found it useful. I developed it for myself when I was running a business and I thought maybe others would as well because I hadn’t done an MBA and so I thought well there may be other people running companies who hadn’t done MBAs who therefore hadn’t learned the right way to map out their landscape and here’s my cheap and cheerful way of doing it.
So if I make it creative commons, it may be help a few other people. So I’m delighted when I go to events that have so many people come up and say that they use it. Not quite what I was expecting. Still doesn’t quite register in the brain cells that it’s, you know, it is used reasonably, quite wildly, or it seems to be, or maybe I’m just living in a bubble.
All right, so let’s talk about changes. Well, you can’t talk about changes without talking about AI, so we better tackle that subject. And of course, you can’t really talk about changes without talking about sovereign risk and what’s going on in terms of China and what’s going on in terms of trade tariffs in the US. So I don’t think there’s any way we’re to avoid politics in this issue, but let’s try.
Let’s start with AI. And to understand AI, let’s go back a little bit in time to understand some of the changes that occurred beforehand, such as cloud and serverless, and how this led us into AI and anticipating what was going to happen. So one of the things about maps, and I assume most people on this podcast are familiar, but very quickly, a map. You start with the user, you understand the chain of components that make up their need, and then you simply map it over evolution. It’s as simple as that.
And one of the things with maps is obviously you learn patterns because we do things. So we map something out, then we do something, and then we go and have a look at what’s changed using the map.
And that’s exercise of pre-mortem challenge, post-mortem learning is how we pick up patterns. And there’s a lot of patterns within maps. And when we talk about change, the patterns we use are what are called climatic patterns. So these are like the rules of the weather, rules of the game. They change the landscape that you’re looking at.
So back in roughly 2006 when I was running, company called Fotango had a map of Fotango. And what we could see, or one of the pans we’d learned, is that if there’s supply and demand competition, everything evolves. So evolution starts off with the genesis of novel and new items, custom-built examples, products, then eventually utility. So we knew that compute was going to turn into a utility. We didn’t know who it was going to be play the game, but we knew eventually somebody would play the game. I personally thought it was going to be Google, it turned out to be Amazon later in that year, and they launched with things like EC2.
So one of the other patterns that you learn is that we have inertia to change because of pre-existing practices, that you get coevolution of practice. So as things change, their characteristics change, so you get new practices come along. These changes tend to be a punctuated equilibrium, non-linear, and they cause an explosion of higher order systems. There’s actually 30 climatic patterns. I’ve just picked a few there.
So what we knew with cloud is we’d get the shift to a utility, be non-linear. So it’d be, know, over time, we’d get caught out by the rapid expansion of this industry. People would have inertia because of pre-existing data centers and compute, and we’d get a new practice. And we didn’t know Andy and Patrick would call it DevOps, but we knew there’d be a new set of practice because of the change of characteristics. And those things happened.
Now for me, that was quite useful because I was actually in 2008 running a strategy for a company called Canonical. They provide something called Ubuntu. And we were 3 % of the operating system market up against Microsoft and Red Hat. So we could use the maps to learn where to target. So we spent about half a million, took us 18 months, we took 70 % of all cloud. We had great engineers, but we also knew where to attack. That’s the key thing.
So roll forward to, gosh, 2014. We start to see the same thing in terms of the runtime. So the runtime becomes a utility. We end up with AWS Lambda serverless. You get an explosion of higher-order systems. You get a change of practice. So we eventually got what’s called FinOps, et cetera. And so exactly the same pattern.
About 2018, I was speaking at a conference, and we were talking about machine cognition. So we have human intelligence, human cognition as in the capabilities, and then machine intelligence and machine cognition. I just spent a bit of time doing a big study on what we called machine intelligence at that time. And so we saw that machine cognition was starting to industrialise. We didn’t know what it was going to be quite the format, but we were getting there. And the interest then became, what’s the impact?
Well, you get industrialisation, you get higher-order systems, you get change of practice, but it’s about machine cognition. How exciting is that? Well, one of the things we did with the map is look at the combination. And so we looked at some of the practices in the whole FinOps space, which were increasingly things like no-code.
And so we came up with this idea of conversational programming. So that’s the first talk that I did in 2018. We’d actually had the discussion slightly beforehand, but that was the first public talk. And it was about this idea that we’re increasingly going to be moving into a world where – It actually, I’d better go back a little bit further. The origin is things like architecture by yourself by Nicholas Negroponte, based upon the ideas of John Freeman.
The process of designing something is a conversation between the perspectives of multiple designers, sometimes in the head of one person or multiple people. And all that’s happening is that the machine is now one of those designers. So this idea of conversational programming is that you would have a conversation machine.
It could be verbal or text, and that was the process by which you would design a system. And as a result of that presentation in previous discussions, a friend of mine, Alexander, went and built that and demonstrated it at AWS re. Invent either in 2018 or 2019. This was a system called, it was either Jenkins or Jarvis, memory slips just there, but Alexander’s a very interesting person.
It was a system you could talk to, and it would build other systems.
So you roll forward to 2023, and we’re starting to get machine cognition with large language models. We got inertia to change naturally because of pre-existing ways. It’s likely to be a punctuated equilibrium.
You’re going to get new practices, so prompt engineering, et cetera. We didn’t know quite what they were going to be called and we’ve got this sort of idea that conversational programming starts appearing. So this is things like vibe coding or you depend on who you talk. I call it vibe wrangling because it’s less coding. It’s more wrangling something out of a system with a conversation. So that’s a really big and interesting change that’s going on. So we can unpack that a little bit more, but let’s just reference some of the other changes.
So you’ve got vibe coding. The second issue that you have is that the tool set is changing. So we’re moving away from this world of our sort of traditional Vizio studio, whatever it happens to be, into this world of developing through co-pilots and through vibe, well, using those practices of vibe coding. But It’s not just the tool set that’s changing, it’s the language. So we’re moving away from very deterministic, defined, declarative type languages, C++, to much more conversational languages. So, prompts, non-deterministic. They are, or more non-deterministic. They are computing languages, if you’ve got memory in there. They’re turning complete, but they are different in nature.
And the last thing is the medium is changing. So when we think about code, we think about text. Code is text. But one of the things I do with mapping is I use online Wardley maps to create maps. And so online Wardley maps, you can see the code that creates the map. But the conversation we have about the map is very different. So when we talk about the map, it’s all about objects, relationships, and the context.
When I look at the code, it’s all about style, syntax, whether it follows the rules, and whether the code compiles. And you see this in engineering departments, which is why we have whiteboards. You have two conversations about the same problem, one on the screen, one on the whiteboard. Okay, very simple.
So our language, toolset, and medium is changing. Now, the problem with that is that those are the three pillars by which we reason about the world around us.
So if you want to give a comparison, you’d think about the printing press as the tool, paper as the medium, and written word as the language. If somebody controls those three things, they control how you reason about the world around you. That’s really, really super powerful. And also incredibly dangerous. mean, lots of people say, the danger of AI is hallucination, the danger is bias, and yeah, yeah, they’re problems. OK, fair enough.
But the real danger is creating a new theocracy. That’s the thing that should keep you up at night and terrify you. And it’s not Skynet. It’s a new priesthood. And the way you counter that is through diversity, through critical thinking, and through basically ensuring these systems are open. And that means all the way down to the training data.
There are some good examples pushing in that direction, of which China and has been quite clear. It’s in AI policy is all about the benefits to society has been pushing the open source route for many, many years. It made warnings about this at the AI safety summit in the UK that it was going to go down the open route having declared it years beforehand. And it’s coming with their with deep seek as well.
So we start off with the pinch and vibe coding and we can go into details about that. We’ve talked about the change of tools, medium and language and how that impacts reason and how the danger of creating a new theocracy. Now we can slip into the nation-state stuff as well. Because the sort of mapping that I do is not just about competition between individuals and organisations and by competition I mean all its flavors.
So competition means to strive. So competition means to strive with others and we have multiple different ways of doing this. We can fight with others, conflict, cooperate with others, help others, or we can label with others, collaboration. So you’ve got three basic forms of competition. Conflict, cooperation, collaboration. Now, when we think about nation states, obviously, we play games.
We sometimes conflict, sometimes collaborate, sometimes cooperate. And often do multiple at the same time, even multiple with the same other nation. So we might conflict over a border dispute, but cooperate on trade, basic stuff, okay.
Simone Cicero
Of course, US and China, for example.
Simon Wardley
Yeah, you look at somebody, the interesting thing about this is obviously that competition occurs over a landscape. And we normally think about landscapes in terms of territorial, but there’s four other landscapes we have to consider. So there’s economic, technological, social, and political landscapes. We’ve got very good situational awareness on the territorial. We’ve got radars, we’ve got maps and all that. We pretty much suck on the other four.
So China, when I investigated this back in 2014, 2015, very high degrees of situational awareness across economic and technological spaces, which is why, you know, it’s quite dangerous to get into a trade war with somebody who actually understands the landscape because they can just cut you off in places you don’t realize are important. And you see also very, very high degrees of gameplay because of that understanding. So the whole sort of open-source play around AI.
And if you look at the growth of China for the last 30 years, mean, how it’s climbed up the value chain, it’s been very skillful, very targeted, extremely capable. And I have to say the West’s response has been, you know, ridiculous things like, well, there’s containment with tariffs. I mean, it’s about the worst possible imaginable response you can think of. It’s not how you should play the game. But that’s the situation that we’re in today.
So, interestingly, we’re going to see even further rise of China. It’ll, you know, will become the global default currency, I would expect. That’ll be a shock to the dollar. But we’re already heading down that path. So we’ve got AI, big set of changes, particularly around things like vibe coding, which has meaning but has weaknesses as well. We’ve got threats in terms of new theocracies, attempts to create new theocracies under AI. And we’ve also got major changes going on in terms of power and the balance of power between nations, a lot of it governed by these issues of how well they understand and play the game. So that’s the time that we’re in. Rather exciting.
So where do you want to go?
Simone Cicero
Right. Well, I I was debating with Shruthi in the background. She also has some questions coming up. But I wanted to pick one of the topics that you mentioned that I’ve also been looking into in the last few weeks. I was catching up with your writings and I had the chance to chat also with some other people.
One thing that really strikes me about these new age right? So you say something very interesting, which is these changes are punctuated equilibrium.
So we can kind of expect that at some point everything will change, right? Very fast. So it’s happening already, but it feels like it’s accelerating and it will kind of shift at some point. My question is that as we adopt AI in the technological stack or in general in the social technical systems we build, I have a big question around something you brought up, which is this transition from deterministic languages and non-deterministic languages.
So in general, what is my feeling? My feeling has been so far that as AI is penetrating systems and becoming this multiplier of capabilities and potential and productivity on one side and on the other side this kind of universal duct tape, as Alex Komoroske once called it on this podcast. I felt like as a way to respond to these and let’s say keep observability, keep a strategic ability to govern our systems and potentially also to collaborate, we kind of have to revert back into some level of shared ontology, right? So to some level of intentional, I would say, limitation of the complexity, OK?
Because even if the technology gives us a possibility to increase the complexity, because again, it can adapt very quickly, it can create these easy plugins and interpret interfaces very fast. I feel like if we really want to collaborate, there needs to be some sort of agreement on sharing a language. So the point I want to bring up is, how do we manage this transition from completely non-deterministic, infinite languages that can be plugged in and glued on by the AI into, on the other side, intentionally deciding to collaborate, for example, in shared standards or shared languages that at least give us as humans the possibility to understand each other and observe the system, keep it observable, let’s say.
And on the other side, I also want to connect this with, with this fact that seems to emerge from the adoption of AI systems, that they are more capable as you give them more context and ontology, right? So how do we manage this friction between the complete loss of ontology into non-deterministic interactions into something that is more like an intentional decision to collaborate and share the languages that can increase our capability to co-design, collaborate, and observe what we build together as humans and systems alike.
Simon Wardley
Gosh, you give me the good questions. All right. When we build some systems, we often collaborate and agree over the architecture. And so we have these wonderful architectural diagrams, which we build for a system. And then we build the system according to the architectural diagram. And that’s mostly a delusion to make architects feel important because the architecture is actually in the code of the system itself. So the real architecture is the person writing the code. one of the things I work with a group called FENC and I’m writing some work on rewilding software engineering. So one of the things we often do is we build visualizations of the architecture of a system.
And it looks nothing like the, you see in architectural diagrams, they’re just statements of belief, wishful thinking. This is what we hope the thing will look like. Doesn’t mean it will, it depends upon the decisions that the engineers who are actually writing the system actually make and whether they tell you what decisions they’ve made. So, a couple of things. One, the architecture is in the code, it’s not within the diagram; the diagram is just a belief. Secondly, software engineering itself is a decision-making process. So if you now consider, let’s take the, when we use AI, there’s two different basic forms. There’s the idea of using AI as an assistant, where we do software engineering, and then there’s what we call Vibe coding, or I like to call Vibe wrangling. So let’s talk about the Vibe stuff first of all.
In the world of Vibe, it’s the world of pure conversational programming. So I give it a prompt and it builds me something, which is great. But that prompt is non-deterministic. I can keep a record of those prompts. I can share those prompts with others. I can give it instructions in those prompts. I can say, I want you to build me a testing engine and for every function that you add, please build a add a test script.
And so I did this with one particular system and I got it to build as it was building the system, a little testing tool and I could run the test and see all the tests were passing. And occasionally they would fail and it would create a log and I’d take the log and give it to the system and ask it to fix this. Several hours in I started noticing something’s not quite right. But I was trying to do pure conversational programming. So I’m trying not to look at the code.
I’m trying not to dip into the world of software engineering. Eventually, I just couldn’t help myself, so I had to look at the code. Basically, the system hadn’t built me a testing engine. It built me a simulation of a testing engine. And what it was doing was randomly, depending upon which version number I was on, giving me errors and reporting those errors. And every time it was writing a test script for a function, it wasn’t writing a test script for the function. It was writing a description of what an error should look like and how frequently that should occur. So the entire thing was a simulation of a testing engine. So there I had been running my test. Isn’t it good? And there’s an error. Copy the log. Complete nonsense. OK. And that’s the thing you have to remember with prompts. They’re wishes.
They’re not rules, they’re not commands, they’re not instructions, they’re wishes. I wish you to do this, yeah? And the machine will statistically interpret what that means. And what it may do may not be what you’re expecting. But that is the world you’re in, the world in vibe coding, which is why you wrangle the system. And the most important thing to remember about that world is the machine is making all the decisions. You’re not.
There is no human decision-making in the process. You’re not architecting it. You can give it as many enterprise resource diagrams as you like. Fine. That’s like telling it to build me a testing engine. It may or may not pay any attention to this.
So then we get to the point where we go: “well, hang on a minute. I do need to understand what this thing actually does. I can’t just hand over my decision-making to a machine and be completely ignorant and have no understanding”.
We often sometimes find ourselves in the, you know, when we’re vibe coding, we get into a loop where we ask it to fix something and it creates an actual error. And then we ask it to fix that, create that error and it creates another error. And then we ask it to fix that and it creates the first and it gets into a loop and it just can’t get out.
So at some point you have to dive in and look at the code.
And at that point, you’re crossing that boundary. Now humans are involved in the decision-making. And this is where we’re in the world of software engineering. We’re in the world of understanding the environment. We’re in the world where the tools actually matter. Now here in this world, AI is still useful as an assistant. So in the first world, it’s sort of in charge. We’ve given over control. That’s often in the world of prototypes.
The jump between that boundary it depends on how much we trust this environment, this system, its capability. It’s a real big trust issue. But when we cross and we start looking at the code, we are in the world of software engineering. And we are in the world of AI as an assistant. We are in the world of we will decide what code is actually accepted. Now, that doesn’t mean AI assistants aren’t useful. They are.
But you still need somebody to read the code. Now there are ways and mechanisms which we will develop over time for building things like judgment systems, using AIs to check the work of AIs, which will help us build more confidence, more trust. But every time we don’t get into the code, whether we like it or not, we are handing over decision-making to the AI.
So the big architectural question that you have to really ask yourself is, because architecture is basically an expression of values – What’s important to us? And the thing that really matters to you is where do you value humans in the decision-making? And that’s the big architectural question of today and probably not what most people are thinking about. I’m afraid too many are still drawing lovely pictures.
But that’s the you know where we need to focus. So now to the idea of a common language – well, we only need a common language when humans are involved in the decision-making and that’s the space of where we we were actually doing software engineering. I now, I like maps as a common language because I find people in business from the IT side from all over the place can use that. So, you know, I would always argue use a map, which I do, and then use the map to determine what I’m going to vibe code and which bits I’m going to need to use software engineering on. And software engineering itself, I mean, that’s an industry that needs some changes anyway to explain why.
Well, we call it engineering and I accept engineering is about decision making and understanding a space, but it’s also about the tools we use. So if I just stick in the physical world for a short amount of time, if I’m making soup at home, I’ll use a kitchen blender and that’s a tool. If I’m trying to build a deep shaft mine, so I’m trying to mine some land, I’m not going to use a kitchen blender. I’m going to use a different tool. I’m going to use a tool which is contextual to the problem. In the world of software, even though it’s all symbolic instructions, both tool and application.
We’ve got ourselves stuck into this world of using standardised tools. So we don’t build tools for the context. We try and use these fairly standardised tools. So it’s a bit like trying to mine with kitchen blenders. And then we’ve got this idea of we use AI. That’s a bit like robots with kitchen blenders are better than humans. I mean, what, building mines? No, you really need to use mining equipment.
But we don’t do that in software except for in one place. That place is test. So with tests are very small tools you have input output. We have a problem and what we do is we build the test to look at that problem the same you would do with any builder tool for the problem and then we maybe write some code before and after.
Nowhere in the industry does somebody turn up and say, here’s my ACME list of 100,000 tests you should apply to your application. Because people would go, well, that’s not my application. That’s somebody else’s. My tests have to be for my application. Well, your tools should be for your application, full stop.
But we’ve been sucked into and trapped into this world by tool vendors, etc. The performance improvements I’ve seen with approaches like multiple development have been astonishing. I mean, in one case, 600x improvement. And so I’m pretty certain there’s a lot more room for improvement in software engineering than we are currently today. And because otherwise I wouldn’t be seeing figures like that.
So everybody tells me AI is fantastic and brilliant and super and it’s all the way we should go. First of it’s great and it’s really useful but you’ve got to understand how and where to use it and be very careful of the height as well.
So very good as an AI assistant. It’s very good in vibe coding. You’ve got to make that conscious choice of when you’re willing to sacrifice decision-making and just hand it over to the machine. And secondly, even with the AI assistant, the assumption that we are going to get rid of software engineers because it’s so much better with AI doing it, well, A, that’s humans out in the loop. Not necessarily a good thing. Second, we’re not ready for it yet.
Secondly the improvement of software engineer. We’ve got a long long way to go and so when I see figures like In the future the AI will be the 10x coder. Well, I’ve got 600X. So, okay, I still think there’s space for humans and this is why I’m quite strongly But certain parts of the system is human and AI.
Other parts, sure, we let the AI govern.
Simone Cicero
Right. It makes me think that, and then I know Shruthi has a question on strategy, but basically it makes me think that at some point as productivity increases and explodes, you really have to take a call on what it’s useful to build. So what is meaningful to build, even more than useful, right? Because at some point we will have built all the code that we need, right? And it’s a matter of decision-making, right?
Simon Wardley
but that’s never gonna happen. Because we always found, so this is one of the beauties about…
Simone Cicero
Or at least, yeah, maybe we’ll not build all the code we need, but let’s say we can build infinite code for whatever we need. So the question is really to choose what it’s useful to be, what is meaningful to build.
Simon Wardley
Okay, so now I agree. But a couple of things in that statement. First of all, it’s quite popular. Eric Schmidt recently was saying software engineers will be replaced by AI by the end of 2025.
First of all, what you’ve got to understand is, industrialisation of machine cognition, the combination with previous practices, explosion and conversational programming. What that means is there’s a whole bunch of new things which are going to be built. Because we’re quite good at imagining new things that we want to build and people do this. So I can sit down there and can co, I can vibe an entire new game in the space of a couple of hours. And it’s great. I mean, as long as you don’t look at the code too much.
But I’m not looking at the code at all, so I’m just happy I’m trusting the machine to have done something. But at some point you want to take that into production. And this is the point where we get into trust and whether we need someone to look at the code. And so the reality for most organisations is you’re going to see an explosion of new things built through vibing.
And we’re not ready yet to just put those into production without anybody looking at the code and seeing what’s there. So you’re going to need probably more software engineers than you’ve ever had before. So you get this and they’re just going to have to use AI as an assistant to help keep up with the pressure. There’s going to be more. Yeah.
Simone Cicero
There’s more decision-making because there’s more decision making to take.
Simon Wardley
And so as I sit down with these organisations, they’re really excited. They’re like, we’re going to be able to get rid of all our software engineers next year because VC selling our product told us we could. And the answer is no, you’re probably going to need more, or at least the same. And they’re going to have to work harder just to keep up because you’re in competition with others. So you’re going to see an explosion of new things because people won’t use it to save money.
So then it boils down to your next question, is we shouldn’t we build for you know, why do we build stuff? Why, well, you know – should we focus more on the meaning and the purpose of stuff and that’s really interesting. And it’s interesting because if I think about the basics of mapping: understand your users understand your user needs, understand the supply chain, understand how to evolve those components… We are pretty atrocious even at the most basic things like – understanding your users and understanding your user needs. I mean, it’s it’s ridiculous. Here we are in 2025. God it’s 2025. I started this 20 years ago. I mean 20 years ago. It was it was ever so funny going into organizations;: “Do you know who your users are? No. Oh, all right. Oh, yes. Okay. Do you know what their needs are? No. Okay…”are you you were talking like some sub 1 % actually had an understanding of the user needs and a vague inkling of their supply chain. It was terrifying.
Because you’re constantly asking, well, why are you building stuff? And they’re mostly building stuff because the CEO read some article in a magazine by HBR and that’s why they’re doing stuff. But roll forward 2025, so it’s 20 years later. you would hope that most organisations have a clear idea of who their users are and their user needs. Okay, we can accept they still don’t understand their supply chain and we accept, but most of them don’t. I find it terrifying that you go into organisations, and so you have to ask, well, why are you building it? And they come up with these things like the value it creates. It’s all about value. Good, digital transformation gives us value. Excellent.
What do we actually mean by those words: “value”? Because everything seems to have value. Are we talking financial? Are we talking political? Are we talking social value? What are we actually measuring? And so it’s terrifying to discover that people don’t measure. They don’t do, they don’t have benchmarks. They don’t measure against benchmarks. They just normally write. It’s the most overused word in the consultancy lexicon – everything is value, but almost nobody measures it.
So somehow we’ve ended up in this mess, it’s sort of, we think it works until we come up against other organizations who more much more capable than us in this space. But fortunately, we’ve been fairly isolated in the past. And then we watch these Silicon Valley giants come in and tread all over us like Amazon, etc. Or what’s going on with China at the moment. So we’ve often protected ourselves with legal barriers, etc. We’ve hidden the incompetency in the industry. So yeah, I mean, those are good questions.
As we build more and more things more easily, are people going to start thinking about why the hell are we building this stuff in the first place? And the answer to that is probably not. They’re just going to go because the CEO goes, we should have one of these and we want to be seen to be doing stuff. They’re just going to build more stuff that we probably don’t need.
Simone Cicero
Right, exactly. I know Shruthi has a question coming up.
Shruthi Prakash
Yeah, so I’ve written down a lot of points. yeah, no, no problem. So firstly on like, I think this is either a remark or you can also address it, but on common language or shared ontology, right? Ontology. So for me, like that is one way of looking at it.
You know, and that’s sort of this clean homogenized way of, you know, looking at technology, I guess. So for me, I’m also curious how we can look at it outside of that and de-homogenize basically a little bit more. And yeah, with relation to that, and you know, how much we’ve spoken about like so many shifting sort of aspects, let’s say shifting alliances, supply chain being extremely volatile, techno-nationalism, so much that’s happening and so much movement. We spoke about, let’s say, the balance of power affecting strategy. How do you see strategy taking shape in all of this? Like you said, do we still stick to the basics and then get that right, solve for that first, and then address the larger questions?
Because, I’m also really tired of, I guess, somewhere homogeneity, I guess. But that’s why for me, I’m curious, how can we add more sort of diversity in that process? And one more thing that you mentioned was sort of these like two binaries, right? Like one being extremely tech forward and dehumanized and then the other being hyperhumanized, I guess.
Simon Wardley
Hyperhumanized strategy, homogenization of language. Gosh, you ask killer questions. Let’s start with the homogenization of language. Wow, that’s really interesting. So of course, within mapping, I have my doctrine table. And my doctrine table is just a collection of principles that have come out of mapping.
So whenever I do pre-event, post-event analysis. I find patterns, or often find patterns, and those patterns break down into patterns you have no influence over but will change the map. So these are climatic patterns and I use those for anticipation prediction. Then there’s about 30 of those. Then there’s the doctrine. These are universally useful patterns – which you could use, but often people don’t.
And there’s about 40 of those. Things like focusing on user needs, understanding the supply chain. And then there’s the gameplay, the strategic patterns, and there’s over 100 different forms. They’re all highly contextual. And what that means is if you use them in the wrong place, they can cause as much harm as good.
So if we look at the principles, as I said, there’s about 40 of those.
One of them is have a common language. So a way of communicating between people. And for that, I use something like maps. Now, all maps are imperfect representations of a space. What the map does is because one of the problems I found with language is that in organisations, we tend to be very story-led. And because we’re story-led, we’re also told that great leaders are great storytellers.
And so that creates a problem in the fact that if you challenge somebody’s story, you’re challenging their leadership ability, so they get very defensive. So the beauty about putting the story down on a map is we can challenge the map without challenging the person, which is why when I was mapping out cultural systems, I could take a group of Brexiteers and Romaners who could not talk to each other verbally, but we could communicate through the medium of a map.
So the map became our common language. We could challenge the map without challenging each other.
So when it comes to common language, whilst I say user common language, there are issues, particularly with narrative and stories and the politics that come into it as well. So I’ve found maps as a way of getting groups, even groups in conflict, to talk to each other.
Now the problem with a map is that all maps are simply imperfect representations of a space. So the danger of the map is you start believing the map is true. If you want to understand a landscape, you normally have to take multiple groups to map out the landscape from different perspectives and you aggregate across those multiple groups.
So I do this in my research efforts. So when I was mapping out healthcare, I took groups of clinicians. I actually mapped out the AI space, so I had about 60, 70 people involved. And we broke into small groups and mapped it out from different perspectives. And then we could bring all those maps back and aggregate them to find what matters. And so that’s a bit like mapping Paris.
Imagine no one’s mapped Paris – you send a group of people out to map Paris, they come back and you ask them what’s important. They might say Pierre’s Pizza Parlor because they’ve mapped it from a perspective of nice places to eat pizza. What you need to do is send multiple groups to multiple different perspectives. So all of those require a medium in which you can converse.
And for me, you know, that’s why I like maps. You know, they’re a good way of multiple different groups talking about a space, understanding a space, and literally by pointing at the map. It’s as simple as point to the map and move things.
Okay, there are a couple of little barriers you have to understand what the things are. So you need that textual language there, unless you’ve got some sort of symbolism, which is clearly identifies it in some way. You still have that little barrier, but mostly it’s people pointing at a diagram and moving bits.
So in terms of homogeneity, homogenous environments, one, obviously we map from multiple perspectives because all maps are wrong. Secondly, it’s a good communication environment. Thirdly, I would always encourage you to look at other perspectives as well.
So a friend of mine, Dave Snowden, they have a formal mapping as well that they’re doing, which is great. I would encourage people to look at multiple different ways of looking at the same problem. It’s a bit like cutting an orange. You know, you cut it in one way, it looks one. You cut it another way, it looks completely different. mean, so always try and look at the same problem from different ways.
Right, so that’s my views on homogenous language.
I try to encourage diversity at multiple different layers. Mostly because the ideas that you ever need are not in the minds of one person. They’re in the minds of multiple people. you’ve basically got to find ways of getting this out even conflicting people. So getting this out onto a form which can be expressed amongst others. Does that answer your question or do you need more?
Shruthi Prakash
No, I think it’s good. was just, I mean, for me, that’s also, I guess, like the challenge with it is that it’s somewhere reductionist, I guess, in process, but that can also be good because you are solving for complexity through maybe simpler means, I guess, for me. but yeah, the other thing is also like, like, how do you make it more sort of fluid and alive and especially when things are like super dynamic, super fast and like the commoditization cycle of like tech is so incredibly short now. So yeah, how maybe.
Simon Wardley
So, okay, really, really, really, really interesting. We’ve been several years into the battle of AI since it’s industrialized.
I mean, we’re several years in and we’ve got many more years to go. We have this wonderful illusion that somehow technology just overnight magically changes everything. It doesn’t. It takes time. And there’s a long, long lead up time before that stuff industrialises as well.
What we often get confused by is if we think about the industrialisation of a tech. You think about AI, you can go back to the 1960s. If you think about computing back to the 1940s, you can think about VR back to the 1960s. And then it’s been long lead and then massive industrialisation. So what catches us out, the punctuated equilibrium, it’s nonlinear growth. So it’s doubling every couple of years. So it looks like small, small, small, small, then bam hits you.
Certainly there’s been rapid spread rapid diffusion. That’s not the same as evolution of things like large language models That’s certainly true, but it’s still taking time and even things like vibe coding Yes, I know we’ve got Eric Schmidt saying we won’t need software engineers by the end of 2025 – It’s probably more or at least the same level. And it’s going to take time for us to go through those social systems so that we start really trusting what’s being produced. And we’ve got to build the practices for this as well.
So I do understand the argument that things are rapidly changing. What I see is overlapping waves of industrialization with long lead times and so we get these condensed period and so you still got 20, 30, 40 years for it to industrialize and then you got the five to eight years of like madness as it starts to take over but you’ve still got that long period beforehand before it industrializes.
And I haven’t seen much evidence that that’s accelerated a little bit in terms of diffusion, but not an evolution. So I would caution against the idea of everything is getting faster. Certainly things are diffusing more. I’m not yet convinced we have the evidence to say that they’re evolving faster. So that was one.
So the homogenous in terms of maps themselves, obviously all maps are imperfect and they’re evolving. So they’re constantly changing. They’re never a fixed property and there’s no time on the map. So how quickly the map gets out of date depends on the industry that you’re in and what’s going on in that industry. And that depends upon supply and demand competition.
So most maps you get even in the AI space if you create a map of the AI space if you updated it every month. That’s more than enough. I mean it’s there are changes that are going on but even like China’s play with deep seek and all the rest of it I mean this was signaled three or four years ago. So nobody should have been surprised. The most hilarious thing about Deep seek was that people were surprised It was just like were you were you not looking? Are you are you so much in a bubble that you haven’t been observing what’s going on in the outside world? I mean despite the fact that China ran around with big flags saying hey we’re gonna be open sourcing in this space look and they’ve been doing that for years you were like my god look shock I found that hilarious so and that that is the funniest thing for me I mean this really amazingly good work coming out of China in this space.
Alright, so that now leads us to strategy, to which I will say what strategy? mean, most organizations, even up to nation state level, have strategy based upon little to no understanding of the landscape they’re competing in. We’re very good at territorial. So with the exception of territorial, generals are very good at strategy.
Okay, we’ve got good understanding the landscape. We’ve got all that learning. But in the economic, technological, social, political space, we don’t even have maps. Most of us. So we talk about strategy. That’s like having strategy without seeing the chessboard. I have a strategy to win. Great. How are you going to do that? I don’t understand the game. I don’t understand the pieces. I don’t understand, but I’ve got a strategy to win. And the fact that we’re not winning, we’re blaming it on execution. So we’re blaming it on people.
I came up with a strategy, the strategy was to win. You failed to implement the strategy, must be your fault. I mean, the state of strategy, I mean, there are some great people, right? Professor Roger Martin and his work, which is really good, but I’ve got to say there is a lot.
There’s an entire industry which loves telling stories with little to no understanding of the landscape we’re actually competing in and they call this strategy. I think we have a dearth, a poverty of strategy. So to your question, is strategy changing? Hopefully, maybe we’ll get to see some. That’s me at my most sarcastic because to be honest, we’re quite poor at that. There are exceptions.
China. Brilliant understanding of supply chains, working out in terms of which bits to industrialise. Investments are actually spot on in so many different ways. They play a completely different game. And they even signal that they’re going to play this game as well. So levels of skill are unmatched. And then there’s specific companies in Western philosophy. Think Amazon’s early cloud was pretty exceptional – The use of ecosystems very very good as well. I mean there are companies like Haier which are very impressed.
Simone Cicero
Indeed, if I can jump in, Simon, maybe ask a last question before moving into the breadcrumbs. I was curious to ask you, you know, we kind of see some invariant advantages in the market, in the social-technical system design today. For example, one is this tendency to develop distributed small teams, small units like Amazon or Haier that you mentioned.
So I was thinking about asking you what kind of evolutions you see even reinforced by the AI penetration now, which kinds of promises to transform into software-like behaviours our social-technical systems, right? So I’m thinking, for example, when you said for software development as AI penetrates the system, you need to be more concerned on value decision making. I feel like similar things are happening on the organisational and social technical system. It’s like social technical systems are becoming extensions of software, right? So as organisations become more programmable, more modular, how are things changing in your perspective?
And also across levels, you also mentioned a lot, that there are different layers in thinking about social technical systems, the national layer, the organisation, the team. I’m curious to hear about, for example, what dynamics you see in going cross-organisation beyond the typical concepts of what an organisation is as something that happens inside a boundary with certain responsibilities well defined. So what are you seeing in terms of the evolution of our social technical systems?
Simon Wardley
Oh god, more good questions, right. I am, you know, no, no, no, these are good. There is limits to my knowledge. So very, very strong limits to my knowledge. I mean, the last major thing, major experiments I did on organizational systems, because when I do experiments, they’re normally things I’m running – was the introduction of what is now called explorers, villagers, town planners.
So this is back in 2005, 2006, subdivide it using maps to understand the landscape, subdividing into small teams and then realizing those small teams had not only aptitudes as in skill sets but also attitudes. And those attitudes, you know, explorers, villagers, town planners. Originally pioneers set this town planners, but that’s because I wasn’t thinking about the colonialist overtone of that. And so that’s why that’s dumped and it’s explorers, villagers, town planners. And what you find are there certain characteristics of each of these groups. And so that was very successful. And I’ve seen a couple of places use that.
But that organizational model depends upon having the right principles in place.
So when you go and look at the doctrine table, there’s about 40 of those. Each of the doctrine you can map, which is how I’ve organized them into stages by mapping them.
For example, you know, using multiple methods means you need to understand the landscape, which means you need to understand the supply chain, which means you need to understand who the users are, which means or user needs, which means you need to understand the users, which means you need to have a common language to communicate. So so what you find is those principles built on top of each other. And so by mapping all the principles out, I was able to divide it into nice little stages.
And at the very top is the sort of design for constant evolution, which is this whole explorer’s village’s town planner. So you let waves of change literally flow through the organization. And it’s designed to cope with that. And most of our organizations aren’t. Now, if you don’t have those principles in place, this system becomes incredibly unstable, breaks down – which is why people keep on turning up and saying, we want to do explorers, villagers, town planners. And they sit them down and say, where are your, let me look at your principles.
And if they’re not there, don’t do it. And in fact, if you put the principles in place, the structure seems to flow out anyway. So I always argue, don’t start by messing around with organizations, start by getting your principles right. So. That was 2006, 2007, 2008, and then a few little experiments a bit later. And that’s where my limit of knowledge of organisational structure ends.
But of course, I do have that little experiment. So there was a thing called DAO, Distributed Autonomous Organisations. So I used to get involved with all these little DAO groups. It’s great fun. And what I would do is first by start looking at the group and asking what principles have we got? And so I’d map out the principles and watch the group develop. And invariably, they started off with no principles at all. We’re totally distributed autonomous. And what would happen is some incident would happen and then somebody would have to create a rule.
So they agree on a rule. And as soon as you’ve got a rule, you need someone to be in judgment. And so you’ve got a hierarchy. So before you know it, your DAO has turned into a hierarchy and then completely is against the point of the DAO and dismantles. And so I can comfortably say that I can sit there and organize in a DAO on day one, just listen to them and work out pretty quickly how fast it’s going to collapse based upon its lack of principles that it starts with. If you go in and it’s got no principles, no basic structure of beliefs, then it’s going to collapse in no time.
I mean, there’ll be rules before you know it, hierarchy, and then you’re against the whole thing. So, Haier, I think, is a great example. mean, if you want to talk about organizational structure, my knowledge is somewhat limited from the past. I’m not doing active experience. Of course, you know, I look at companies today and they’re not even anywhere near close to doing things like explorers, religious town planners, even though for me, that’s 18 years ago. And they’re nowhere near that level. I mean…
Simone Cicero
Mm-hmm. That speaks to the speed of strategy that Shroudy was talking about.
Simon Wardley
Yeah, yeah, I mean – So and the problem is you get too many executives reach for the organisational leader because it’s like we can make, the change the landscape is possible you know – it’s just like the easy thing to reach for. So I think you know, there’s where I see most of the interesting where I come across most of the interesting stuff and though I’m not actively involved in this right now is I have to say is from China and organizations related to China and there’s a bunch of cultural and structural reasons I suspect for this as well.
Mostly, there’s certain concepts which I think are really really important, Ren, for example, which we don’t have in our Western systems and I think that creates a real weakness within our systems as well. But I’m not actively studying those so I have to say knowledge is dated. You might want to talk to people like Susanne Kaiser and others who have been doing things in terms of organisational structure as well, which are probably more modern than mine.
Simone Cicero
Right, right, right. But I think the message is clear. There are steps and I think it’s fascinating. We had this conversation in the past as I was talking about – The Portfolio Map Canvas.
You that it’s important that you kind of become intentional about the structure of the organisation, but that’s something that you achieve after, you kind of go through a series of first principles that need to be in place for you to start wondering about how this should be organised. But I think that’s that’s very important. And we will be sure to attach the doctrine table to the podcast so people can look into that at least and figure out what we’re talking about.
Shruthi Prakash
Simon, towards the end of the podcast, we have this section called as the Breadcrumbs. So, if you have maybe books or podcasts, videos, movies, any suggestions for our listeners.
Simon Wardley
Gosh, suggestions for our listeners. obviously there’s medium.com/WardleyMaps, which has my book. I finally got around to vibing myself a website. So, swardleymaps.com, very easy, Swardley as in me, maps as in me, one word, dot com.
So I put that up there. And then there’s loads of online resources. Chris Daniels runs WardleyMaps.com. He does loads of resources from then Ben runs LearnWardeyMaps. He does lots of teaching. There’s loads of online tools. John has an awesome list for Wardley Maps as well. So you can find stuff online in different groups. You just search for Wardley Maps.
So I would, my suggestion is talk to others, find others, there’ll be somebody in your network who’s using Wardley or has used Wardley Maps. And just go and talk to them, ask them about their experience, maybe go to them and say, do you mind helping me look at this problem and have a go at mapping, but do it with others. So I suppose that’s all my advice.
Simone Cicero
Simon, thank you so much. It’s been an amazing conversation. I hope you also enjoyed a bit of our questions.
Simon Wardley
I’d love them and they’re tough questions and I’m not sure I’ve answered them all. I feel I might have skipped over a little bit or been a bit too hand-wavy at places but you keep me honest.
Simone Cicero
I mean – Some of them, maybe some of those questions people have to map themselves through the answers, right? So that would be a suggestion. Thank you, Shruthi. It was great to have you again with your great questions.
Simon Wardley
Yes, yes.
Shruthi Prakash
Thanks, Simone. Thanks, Simon, both of you.
Simone Cicero
And for our listeners, as always, can head to boundaryless.io/resources/podcast. You will find a very visible Simon podcast this week with the transcript of the key that he mentioned during the conversation. And of course, until we speak again, remember to think Boundaryless.