#124 – How AI Restacks the System of Work with Sangeet Paul Choudary | Closing of Season 6
BOUNDARYLESS CONVERSATIONS PODCAST - EPISODE 124

Sangeet Paul Choudary, globally recognised platform strategist and author, returns to the Boundaryless Conversations Podcast for the fourth time. In this episode, the closing one of Season 6, we unpack how AI radically transforms the system of work.
Together, we explore how organisations can stay relevant as value is being redefined (intrinsic, economic, contextual), and how systemic design choices shape who benefits in a rapidly fragmenting economy.
Drawing a powerful parallel to the shipping container revolution, Sangeet shows how AI’s impact operates at multiple levels, from simple task automation up to systemic change, urging us to think bigger than just isolated productivity gains.
He challenges the Techno-Optimist and Luddite narratives for assuming that competition rules and value distribution remain static in an AI world. Instead, Sangeet urges us to think about how the “pie” gets sliced.
To pre-order Sangeet’s book, “Reshuffle: Who Wins When AI Restacks the Knowledge Economy”, check out this link: https://www.amazon.com/dp/
The YouTube video for this podcast is linked here.
Podcast Notes
Sangeet is a globally renowned author and has an upcoming book, “Reshuffle”. He is known for his deep systems thinking and sharp analysis of digital ecosystems.
In this episode, he explores how organisations must fundamentally rethink value (and their role) in an AI-transformed world and unpacks the often-overlooked link between constraints and value.
He helps us distinguish local and systemic effects, urging leaders to stop optimising for short-term efficiencies, and challenges the outdated assumption that markets are uniform and competition is static.
If you’re curious to learn how your perception of innovation needs to shift, tune in.
Key highlights
👉 In an AI-driven economy, organisations must redefine value in context, not just by markets, but by their unique purpose, position, and impact.
👉 Both techno-optimists and Luddites fall into the same trap: they assume the rules of competition stay the same. But as AI reshapes how the “pie” is sliced, those rules change with it.
👉 Traditional frameworks fall short – leaders must now navigate systems thinking, modularity, and multi-dimensional trade-offs.
👉 Strategic advantage lies in judgment, where decision-makers are directly impacted by their choices.
👉 Organisations must shift from task-level automation to system-level redesign.
👉 The future of leadership demands long-term appetite rather than long-term plans – helping us thrive through uncertainty and systemic adaptation.
This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, Soundcloud and other podcast streaming platforms.
Topics (chapters):
00:00 How AI Restacks the System of Work – Intro
01:38 Introducing Sangeet Paul Choudary
03:04 New Framing for assessing the impact of AI
10:33 Deeper Impacts of AI on the System
16:21 AI affecting Value Economics
21:46 How is Value Impacted in the System of Work
30:37 Shift of Role Requirements in Organisations
34:42 Individual responsibility in defining value
49:36 Pace of Technology
56:16 Breadcrumbs and Suggestions
01:00:33 Closing of Season 6
To find out more about his work:
Other references and mentions:
- Boundaryless Podcast – Sandwich Economics: A New Era of Competition
- Boundaryless Podcast – Re-bundling the Firm around Problems to Be Solved
- Boundaryless Podcast – Composability beyond software: building ecosystemic portfolios
- Sandwich Economics
- Sam Lessin – The State of VC Capital in 2025
- Yuk Hui – Ideas of Cosmotechnics
Guest suggested breadcrumbs:
This podcast was recorded on 15 May 2025.
Get in touch with Boundaryless:
Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast
Twitter: https://twitter.com/boundaryless_
Website: https://boundaryless.io/contacts
LinkedIn: https://www.linkedin.com/company/boundaryless-pdt-3eo
Transcript
Simone Cicero
Hello everybody and welcome back to the Boundaryless Conversations Podcast. On this podcast, we explore the future of business models, organizations, markets and society in our rapidly changing world. I’m joined today by my usual co-host, Shruthi Prakash. Hello Shruthi.
Shruthi Prakash
Hello everybody.
Simone Cicero
Good to see you. And we are incredibly excited to welcome back our guest for today, for the fourth time; it’s kind of a record. Sangeet Paul Choudary. For those who don’t know, Sangeet is a globally recognised thought leader on platforms and ecosystems, network effects, and, more generally, the evolution of the digitally enabled economy.
And he’s the founder of Platformation Labs and has been an advisor not just to companies, but also to governments and global institutions. Sangeet has been instrumental in shaping how we understand marketplaces, ecosystems and platforms, and is now working on a new book called “Reshuffle: Who Wins When AI Restacks the Knowledge Economy”, a book that looks into the transformative impacts that artificial intelligence is going to have on the structures of work, organizations, and economic systems. Sangeet, it’s an absolute pleasure to have you back on the podcast today.
Sangeet
Thank you, it’s my pleasure as well. Thank you so much.
Simone Cicero
Thank you. So, I mean, let’s start from the very core of the conversation. In your upcoming book, you argue that the impacts of AI go beyond what you call task substitution, and rather imply that there is a deeper reconfiguration happening, or about to happen, in the knowledge economy, but I think also beyond that.
So to start, maybe you could share what led you to think that a completely new frame was needed to really assess and leverage the impacts of AI, and more specifically, generative AI. And how do you see that affecting value, who creates it, who captures it, and how coordination structures themselves are going to evolve? I mean, of course, this is just an initial framing, but maybe you can paint a few pieces that we can get back during the conversation.
Sangeet
Absolutely. I think what has helped me move in this direction, of framing AI not as something that just impacts tasks but as something that changes the overall system, is really the work that I’ve done over the past 10 years in understanding platforms and ecosystems. In the course of looking at platforms and ecosystems, we’ve always had two different types of issues we were looking at.
One set of questions was: how do I compete when the competitive ecosystem around me changes? And the other set of questions was: how do we protect the Uber driver when the algorithm keeps managing him? And these two questions are not unrelated. What happens to work and what happens in the competitive ecosystem are very much connected. And the way to see those connections is to really think about platforms and ecosystems, or rather, forget platforms and ecosystems, and think more specifically about the fact that everywhere around us, value today is created not through production, but through coordination. Now, what I mean by that is that prior to the digital economy, value was largely created by owning large production systems or large distribution systems. You could have really large factory setups. You could have huge bases of IP, which is also a production-based view of value.
Or you could control global logistics and supply chains, and you could control distribution points, you could control brands, and you could control the ability to influence people. And all those things were distribution-based sources of value. And what we’ve seen over the last 10 – 15 years is that value is shifting away from production and distribution-based sources to really coordination-based sources, the ability to identify how you can coordinate fragmented systems and create new value.
Now that sounds abstract, but let me be specific. Initially, when we thought about this idea of coordination, we saw simple things like marketplaces coming up, matching supply and demand. But in order for work to really get done, you don’t just solve the counterparty finding problem, you also solve the work execution problem. And the more you get into the work execution problem, the more you shift from matching to coordination problems.
So if you’re in the construction industry, really building across design, construction, and down to operations, you are coordinating across multiple actors. And that is a fundamentally new coordination problem. Now, what takes us to this coordination problem, and I actually start the book with this example, is the example of a country like Singapore, which transformed within the space of one generation. And I spent more than a decade in Singapore. In the space of one generation, it transformed from a swampy fishing village, where at one time tigers used to roam the streets; Singapore itself means the city of the lion, and that was sort of the fabled history of Singapore. And within a generation, it became what we see today.
And the factor that drove Singapore’s rise was not information technology. It was not even its strategic location. All those things are helpful. What drove Singapore’s rise was that Singapore realized that global trade was moving from an old system of trade to a new system of trade, structured around a new system of coordination. And the old system of trade was that supply chains were local. You created and shipped things locally, because shipping things over long distances was unreliable. What made it unreliable was that moving things through ports was itself slow and cumbersome, and moving them onto railroads and trucks was even more so.
One single invention, the shipping container, changed all of that. The shipping container, once it was created, its first effects happened in the form of automation. The ports that adopted the shipping container had to automate their ports, and so jobs were lost and dock workers were put out of jobs. But that in itself did not change the system of trade. What really changed the system of trade was that the shipping container was enforced alongside a standardized contract that required standardized shipping containers across ships, trucks, and rail. Because before that, everybody had their own way of defining loads. And so there was fragmentation in the system. There was no coordination.
When there’s fragmentation, handoffs take a lot longer and uncertainty increases. And so the cost of trade was very high and its reliability low, which is why supply chains were local. The moment you had standardized shipping containers and unified contracts, you transformed this coordination cost fundamentally, and that enabled the creation of global supply chains. Now, this has some very interesting facets, and that’s why I’m taking this segue.
Today, we have a computer, we have a smartphone, and all of this owes its origins to the shipping container, though we don’t necessarily realize that. What the shipping container did was unbundle industrial production, so that all the components of a product no longer had to be produced in the same place if you could reliably source them from multiple different places. That, in turn, meant that component-level producers came up and component-level competition happened.
Because component-level competition happened, a company like Intel could focus on just one component of computing, processing power, without having to worry about vertically integrating the whole computer into the same factory. And that’s what broke IBM’s vertically integrated model, with the rise of companies like Intel and Microsoft. Because if you really think of it, without the shipping container, that component-level competition would not have been incentivized. You could have had that competition within a local place like Silicon Valley, but to do it at global scale, you needed a new system of trade. That’s what the shipping container did.
So my key point is: much like with the rise of the shipping container, you could have misunderstood it as an automation story. Ports are getting automated, dock workers are losing their jobs. But the real value was in moving from an old system of trade to a new system of trade by solving what creates fundamental value today: coordination problems. And the new system of trade was built around a new logic of coordination. That logic of coordination then unbundled the way traditional models used to work, and that completely changed and gave rise to new forms of competition. So I’ll pause there, because to me that story is really powerful, not just because it helped Singapore succeed and other ports like Liverpool fail, but more so because it had these unanticipated effects of transforming the computing industry and so on.
Simone Cicero
Well, I mean, of course, in this initial introduction you brought up one of the typical, let’s say, drivers of innovation, which is componentization and standardization of interfaces. And rightly so, you made the point that sometimes we don’t see the second-order effects or larger systemic impacts of certain changes in value chains, and especially in interfaces.
So I’m curious to hear from you now: how do you connect this with the impacts of AI? More specifically, in what ways are people looking only at the small impacts of AI, like task automation, and how instead is AI driving second-order or third-order effects in the system? How do you make the connections between these two layers of impact?
Sangeet
Absolutely. So think of the old system of trade and what happened with moving to the new system of trade. So the system of trade involved three levels of value. One was how goods were handled. Second was how goods moved from source to destination. And third was how that changed what would be traded, where that would be traded, and how it would be traded.
So if you think of the shipping container, you would think that the only thing the shipping container was changing is how goods are going to be handled. In which case, you would think that the only thing it changes is that ports now become automated: instead of unloading non-standardized sacks and barrels, you are now unloading a standardized thing, which a crane can do better, and so some people are going to lose jobs. What you are not necessarily seeing is that once you combine that standardized container with a unifying contract across multiple forms of transportation, you change not just how goods move, but where goods can move reliably, between which source and which destination. So that’s the second level, right? And then the third level is what that means for the structure of supply chains.

Now, the same thing happens with AI today, because when we look at AI, we can think about AI’s effect at three different levels. What is AI doing to a specific task? That is something that’s visible to all of us, because whether we think about systems or not, whether we care about those things, all of us care about our jobs, our livelihood, and all of us care about what makes us command a premium for our skills. I don’t think people care so much about working in a meaningless job versus not having a job as much as they care about losing the premium that was associated with their skill.
So my first point is that it’s not just job loss that concerns people, it’s also meaning loss in their jobs, right? But all of that is still a very task-oriented focus, which is a bit like saying that what’s changing is just how goods get moved. So this task gets changed. Will AI automate me? Will it augment me? It’s all task-oriented focus.
But then the system of work, like the system of trade, has two more levels. There are the tasks, and then there are the workflows and the organizations, which are the structures in which tasks are organized. Organizing structures are important because they determine the relationships between tasks: what needs to get done before what, which one oversees which one, and so on. And so they create informal hierarchies and variations in power between different forms of tasks, and variations in value between different forms of tasks, in terms of how they relate to the final output. And all of this then sits inside the competitive ecosystem. How companies compete determines how organizations and workflows work, and that determines how tasks are organized.
So let me take an example. Amazon purchased a company called Kiva, a robotics company, to start using robots in its warehouses. So if you take a task-based perspective, you’re going to look at the warehouse worker and say, is the robot augmenting the worker? Is it automating a certain task?
But Amazon doesn’t care about all of that. Amazon cares about only one thing: are you going to stay a Prime customer, or are you going to churn out of being a Prime customer? Because that is the central logic of competition. How do you keep someone a Prime customer? You get stuff delivered reliably in two days. That’s what’s happening in the competitive ecosystem. That then decides how the organization and workflows work around it, which means the fulfillment workflows, the layout of the warehouses across the country: how can you ensure that everything reaches people in two days? Which then determines how tasks get organized: what should be done by a robot in the warehouse versus what should be done by a human in the warehouse? So my point is, if you start looking at these issues in terms of augmentation versus automation, you won’t really understand the reason the task exists. The task in the warehouse exists for only one reason: to make that two-day delivery happen.
And if anything changes because of technology, you have to first look at how that two day delivery promise is going to change to then think about how the task in the warehouse will change. So you can’t think about tasks independent of the changing system of work. You have to see what the changing system of work is going to look like. And that’s my key argument over here that when the system of work changes, fundamentally new competitive issues start coming in.
I’ll take a simple example. Imagine Amazon did not own Kiva; and you’ll understand why it was so important to own Kiva. If they were working with a third-party robotics company, and over time the robotics company realized which points in the system it controlled directly determined the two-day delivery promise, it would have incredibly high negotiating power over Amazon. And that’s why Amazon had to own Kiva. So my point is, you need to look at the system to see which kinds of competitive tensions can come up, and then see, on the basis of that, how organizations and workflows will reorganize to stay competitive.
And only then can you determine which tasks will matter, and hence what your job should look like.
Simone Cicero
I know, Shruthi, you have a question, but I wanted a quick follow-up. Because, you know, what you said here made me connect with some earlier content, earlier work of yours: the work on sandwich economics, right? So this idea that you have to look at the system, right? You have to look at all the layers, and as a brand, to best ensure your competitive advantage in a world that is highly componentized, highly modular, you have to sandwich your value chain.
So the point you made in the sandwich economics conversation, which we also have on the podcast, so people can look into that, is essentially this: as time moves forward and the economy gets much more dynamic, modular, and easy to connect, brands, to exert their dominance, let’s say, are fully vertically integrating their value chains. And, as you said, with the acquisition of Kiva, for example, Amazon made sure to control the full stack, so they could, let’s say, control and leverage all the places in the value chain, in the supply chain, where their dominance and unique experience was made possible.
So the last question in this first, let’s say, round of questions would be, in what ways is AI radically changing this? So this is something that was already there, right? So the question for you is, how is this changing with AI and most specifically with GenAI, which is the latest?
Sangeet
Absolutely. So the starting point of this is that what changes with AI is that the nature of what can be modularized fundamentally changes. Traditionally, much of knowledge work was based on tacit knowledge. It was knowledge stuck in your head. It was knowledge stuck on a meeting call. It was knowledge stuck in conversations. Once you can train a model on all of this knowledge, because you trained it on unstructured information of various forms, it can start modularizing the components of that work, the logic of that work. It can then start enabling different forms of component innovation on what was previously tacit knowledge. And we don’t necessarily see those modules the way we see modularization elsewhere, because those modules are not sitting inside the model as such; it’s more in how they can be retrieved that you can now modularize, retrieve, and then recombine them on the fly.
Now, what that means is that the very nature of large language models allows us to modularize previously tacit knowledge. When that happens, the nature of coordination can be fundamentally extended into entirely new areas. Because so far, the way to modularize something was to standardize it, either by saying, here’s what a listing should look like, or here’s the API definition. But now, with large language models, you can modularize knowledge without requiring such strict, enforced standardization.
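To make the modularize-retrieve-recombine idea concrete, here is a minimal illustrative sketch in Python (an editorial addition, not from the episode or the book): unstructured notes are split into retrievable modules, scored against a request with a toy word-overlap function, and recombined into a prompt. A real system would use an embedding-based retriever and an actual LLM; all names here are hypothetical.

```python
# Toy illustration of modularizing tacit knowledge: split unstructured
# notes into retrievable "modules", score them against a request, and
# recombine the best matches on the fly. A real system would use an
# embedding model instead of this word-overlap score.

notes = """
Meeting 3 Jan: client prefers a phased rollout over big-bang launches.
Design review: reuse the onboarding flow from the retail project.
Retro: handoffs between design and engineering caused most delays.
"""

def modularize(raw: str) -> list[str]:
    """Split unstructured notes into independent knowledge modules."""
    return [line.strip() for line in raw.splitlines() if line.strip()]

def score(module: str, request: str) -> int:
    """Hypothetical relevance score: count of shared lowercase words."""
    return len(set(module.lower().split()) & set(request.lower().split()))

def recombine(modules: list[str], request: str, k: int = 2) -> str:
    """Retrieve the top-k modules and recombine them into a prompt."""
    top = sorted(modules, key=lambda m: score(m, request), reverse=True)[:k]
    context = "\n".join(f"- {m}" for m in top)
    return f"Context:\n{context}\n\nTask: {request}"

prompt = recombine(modularize(notes), "plan the rollout for the client")
print(prompt)  # the recombined context would then be sent to an LLM
```

The point mirrors the argument above: the value is not in any single module, but in being able to retrieve and recombine modules on the fly, without pre-agreed standardization.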
A simple way to think about it is that we’ve seen the importance of coordination with a company like Shein, right? Shein, the Chinese fashion retailer, basically looks at social media signals and then modularizes its whole supply chain. It modularizes design in a very basic sense; all the taste-making is taken away.
And then, based on the signals, designs are quickly generated and the right suppliers are asked to quickly push things out, because it’s modularized to that extent. And because algorithms control demand predictability, they’re able to do all this with very low uncertainty, right? So it’s like the extreme version of the container model. But Shein, again, is based on very mathematical signals: what’s happening on social media, things of that sort.
But if you take that to a different problem, let’s say you’re booking travel, the way I’ll be booking my travel soon. Travel booking has traditionally been a problem that is modular to the point of creating heavy coordination costs. In the past, you would have a travel agent solving the coordination costs for you. Now you have to solve them. You see something on Instagram. You go on TripAdvisor and see if it’s any good. Then you go to Booking.com and you book it. And then, when you’re actually at the place, you use Google Maps to get around.
With AI, with the ability to ingest and learn on all this information, you could have an assistant working on your behalf, mapping your itinerary, managing it. But more importantly, the whole value chain is fragmented, right? Every hotel provider, every property management system, everything is fragmented.
So if you can organize demand around organizing the itinerary, you can then start organizing the value chain and managing it through more agentic execution, where, on the basis of your history, what you’re interested in, and the constraints of your itinerary, the right things can be booked. And if you’re changing plans in real time, all of those changes can be executed agentically as well. So my point is, all of this would not have been possible if unstructured forms of knowledge were not modularizable. And now they are. And more importantly, as agentic execution improves, the ability to act on that modular information also improves.
So it’s both those things. It’s the ability to decompose the ecosystem, but it’s also the ability to execute and govern the ecosystem in fundamentally new ways.
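As a rough sketch of what such agentic coordination could look like in code (again an editorial illustration, with entirely hypothetical provider names and methods): fragmented providers are wrapped behind one standard interface, the software analogue of the container’s unified contract, so a single re-planning decision cascades across the whole itinerary.

```python
# Toy agentic coordination: fragmented providers exposed through one
# standard interface (the "unified contract"), so a planner can re-book
# automatically when the traveller's plans change.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Booking:
    provider: str
    city: str
    date: str

class Bookable(Protocol):
    """Standard interface every fragmented provider is wrapped in."""
    name: str
    def book(self, city: str, date: str) -> Booking: ...
    def cancel(self, booking: Booking) -> None: ...

class Hotel:
    name = "hypothetical-hotel-api"
    def book(self, city, date): return Booking(self.name, city, date)
    def cancel(self, booking): print(f"cancelled {booking}")

class Train:
    name = "hypothetical-rail-api"
    def book(self, city, date): return Booking(self.name, city, date)
    def cancel(self, booking): print(f"cancelled {booking}")

class ItineraryAgent:
    def __init__(self, providers: list[Bookable]):
        self.providers = providers
        self.bookings: list[Booking] = []

    def plan(self, city: str, date: str) -> None:
        self.bookings = [p.book(city, date) for p in self.providers]

    def replan(self, city: str, date: str) -> None:
        """Agentic execution: one plan change cascades across providers."""
        for b in self.bookings:
            next(p for p in self.providers if p.name == b.provider).cancel(b)
        self.plan(city, date)

agent = ItineraryAgent([Hotel(), Train()])
agent.plan("Rome", "2025-06-01")
agent.replan("Florence", "2025-06-02")  # one decision, coordinated changes
print(agent.bookings)
```

The design choice to note: the coordination gain comes from the shared `Bookable` interface, not from any single provider; that is the sense in which the fragmented value chain gets organized.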
And if I may just say something: the image behind you is a really interesting example. I know it’s the Colosseum, but the way it’s been colored, there’s the vertical integration on the left side, and then there’s the horizontal play on the right side. And really, that’s what happens between the old system of work and the new system of work. That image is just jumping out at me.
Shruthi Prakash
So with the system of work topic, right, let me take it from there. I’m curious how to operationalize all of this in organizations. What does it mean for restructuring that fabric of work, right? And one more thing that you speak about, I guess, often, is how, let’s say, the output gets generalized in that sense.
So our focus could be more on maybe what inputs we give and what we do with the outputs we get. So how can organizations take shape in all of this, and give focus back to, let’s say, the human tasks of empathy, creativity, things like that?
So how can new systems be developed in this AI world?
Sangeet
So there are multiple things that are, you know, in…
Simone Cicero
Yeah, which is ultimately also how value is impacted, right?
Sangeet
Exactly. I mean, we have to start with looking at value. There are multiple layers in your question, which also assumes certain things. It assumes that empathy is necessarily always something that should be valued. And maybe that’s not the case always. It might be the case in certain scenarios and maybe not, and so on.
So we have to think about the entire system, understand what kind of a system we want to create and what will be valued within it. I take the example of Shein quite early in the book, because it’s a really good example, from a textbook perspective, of how to create an organized system around AI, and a really bad example, from a societal, environmental, and human perspective, of how to create a perfect system and yet have larger consequences that are not necessarily acceptable.
So, you know, the first point that I’m trying to make is that AI provides you the power, and you now have the ability to use it to create the kind of system you want. What’s important is understanding that systems are designed around constraints. So define your constraints and decide how the system should be organized around them. If your constraint is that you do not want environmental impact, then you may just have to build a certain kind of company that makes certain types of trade-offs, even if AI allows you to do something else and optimize for something else, right?
So the first point that I’m trying to make is that think about systems from the view of constraints. But constraints are then directly related to value because the constraints that you put into a system determine what gets valued in the system. So let me, you know, clarify what I mean by value. There are many different ways of thinking about value, but there are three specific concepts that are useful and relevant when looking at what we’re talking about over here, AI and the system of work. The first concept is that of intrinsic value, right?
So if you smile at somebody, that is intrinsically valuable; again, it depends on whether people like to be smiled at or not. But in general, intrinsic value has value in and of itself. Then there is economic value. And economic value essentially means that there is a market out there to support the value that is being created.
So, simple example: oxygen has infinite intrinsic value. If I turn off your oxygen supply, you’ll really want to get it back. But because it is abundant, and because markets don’t value abundance, they value scarcity, simple supply-and-demand economics, oxygen does not have high economic value. Oxygen acquires high economic value the moment it becomes scarce. So if you’re fighting for your life and you need the ventilator, you can charge really high for that oxygen. If you’re going underwater diving and you need an oxygen tank, you can charge for that. So economic value is created under conditions of scarcity and under conditions of relevance; it should be relevant to the other person, right? But intrinsic value is different, which is why, when we talk about what is distinctly human versus not, we focus entirely on intrinsic value. What we have to realize is that if we do not create systems that are structured in a way to also manage economic value, that particular form of value will not be monetized, or rather rewarded; monetized is a very transactional word, so let’s say it won’t be rewarded.
A simple example: Uber as a system knows what its economics look like, and those economics do not involve the way the Uber driver smiles at you and says thank you. So there is no way of monetizing that. It’s not being rewarded right now. So even though it has intrinsic value, economic value is only given to the point-A-to-point-B distance that is driven and the amount of market imbalance there is at that particular point in time.
The third thing is contextual value. And I think this is the most important concept that I personally articulated for myself as I was going through this, because I always intuitively understood the economic-value-versus-intrinsic-value piece, although very often we miss it when we keep saying “human touch”. Human touch has intrinsic value, but whether it has economic value is always a question you should ask. And if not, how can we make that happen? Not everything can be solved by UBI and those kinds of answers. You have to solve things using systems and markets, which is much better than reaching for catch-all answers like reskilling and UBI.
And that’s also one of the things the book argues: if you have to get all the way down to “we’ll solve this with reskilling” or “we’ll solve this with UBI”, that’s because you have failed at designing the system the right way. And if the system is not designed the right way, some of these things simply cannot be solved in those ways, right?
So contextual value, to me, is really the value of a task within a system, based on what the constraints of that particular system are. So if the constraint of the previous system of work was that it was expensive to find somebody who could do a certain kind of work, let’s say reviewing contracts and red-lining them for the senior partner to review, then you would pay a significantly high salary to a junior associate who would do that. But if that constraint is suddenly removed, first of all, the economic value of the task collapses, because now it can be done using AI. But the contextual value of adjacent tasks might also collapse, or might actually rise, because so much more of this particular task is being done that an adjacent task becomes valuable within the new context that has been created. So the point is that contextual value changes from system to system, based on how the constraints shift as the system shifts. And these things are not mutually exclusive, collectively exhaustive.
Economic value is a measure of how much you’ll be paid. Contextual value is a measure of how much you matter. And intrinsic value is a measure of how much value you are creating in general. But it’s a combination of what you create and how much you matter that determines how much you get paid.
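A toy numerical illustration of that constraints-to-value link (an editorial sketch with made-up numbers, treating economic value as scarcity times relevance): when AI removes the scarcity constraint on one task, its economic value collapses, while an adjacent task’s value can rise.

```python
# Toy model of the constraint-to-value link: when a constraint (scarcity
# of a skill) is removed by AI, the economic value of that task collapses,
# while an adjacent task's contextual relevance can rise. Numbers are
# illustrative only.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    scarcity: float   # how hard it is to find someone who can do this (0-1)
    relevance: float  # how much the surrounding system needs it (0-1)

    def economic_value(self) -> float:
        # Economic value requires both scarcity and relevance.
        return self.scarcity * self.relevance

review = Task("contract review", scarcity=0.8, relevance=0.9)
negotiation = Task("deal negotiation", scarcity=0.7, relevance=0.5)

print(review.economic_value())       # 0.72: well paid under the old constraint

# AI removes the scarcity constraint on contract review...
review.scarcity = 0.05
# ...and far more reviewed contracts make negotiation more relevant.
negotiation.relevance = 0.9

print(review.economic_value())       # 0.045: the task's pay collapses
print(negotiation.economic_value())  # 0.63: the adjacent task now matters more
```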
So we need to think about all these three things in tandem, because of what is happening with AI today. To Simone’s point, some of these things were happening before. Algorithmic coordination has happened before; algorithmic coordination to manage Uber drivers has happened before. But Uber drivers were already well modularized, because the core basis of differentiation, which was navigation skill, had been standardized. And as we move towards advanced driving systems and eventually towards self-driving, the core basis of differentiation on driving will also be standardized. But even the difference in navigation is standardized enough to make the Uber driver completely modularized and substitutable.
Now, the same thing can start happening to knowledge work as more and more knowledge work gets modularized, some of which gets absorbed into the model, some of which gets set up as a modular task alongside it. And so all of that can be algorithmically coordinated as well, and that will increasingly happen. It’s not that, you know, certain forms of work can be algorithmically coordinated and other forms cannot. The moment you can modularize and standardize a task sufficiently, it can be algorithmically coordinated.
So that’s really how I see these two factors kind of working together.
So, to go back one final time to your point: how do we value the things that humans uniquely create? You have to think about it in the context of the system of work: whether the system of work requires something that you uniquely create, and it does require a lot of things that you uniquely create. I articulate this as: when answers become cheap, which is what happens with AI, asking the right questions becomes the new scarcity. And asking the right questions,
or curiosity, is basically a compounding advantage, because it compounds by creating the right path of inquiry. If you start with the wrong question, you go down the wrong path. So at every point, you have to ask the right question to go down the right path. And my point is that when answers are cheap, if you fail to form the right question, you’re actually incurring huge opportunity costs, because by forming the right questions you would have gotten to a better place, and now you’re stuck in the wrong places.
So the very nature of what will create value in the system of work will change on the basis of this. You know, I especially talk about asking the right questions, and then interpreting the answers the right way, as the two levels at which value will be created around easy access to, quote-unquote, answers. But there are many other ways in which we can think about where humans can uniquely add value. Those are, you know, some high-level thoughts.
Shruthi Prakash
I’m curious, then: does it shift the kind of, let’s say, focus on the kind of employees that we have? Because earlier we had, I would assume, a lot more benefits to gain from a T-shaped person. But I would assume now there might be a shift towards somebody who has more expertise in certain fields, more contextual knowledge, things like that.
So how does that role requirement shift?
Sangeet
Yeah, as with all things, there are many layers. And what I’ve tried to do in writing this book is unpeel some of those layers. So let me unpack some of them here. One thing is: what do we mean by context, right? One error that we might make is to say, I’ve been in this company 17 years, and so that’s what context really means, right? And it does mean something. It does have meaning.
But we need to understand that that context may not always hold economic value. And at the same time, that context may also be absorbed and deployed in new and interesting ways. If a model can be trained on all the internal organizational knowledge of a certain kind, all the internal documentation that has been created, the reports, the meeting notes, and everything, it can then serve as a sort of, you know, Schelling point. The idea of a Schelling point is: if you ask somebody to meet you in Rome, by default people will go to the Colosseum at 12 noon, right? Because it’s just a Schelling point; it’s something people gravitate towards. In Paris, it’s going to be the Eiffel Tower. In Dubai, it’s going to be the Burj Khalifa, but not at 12 noon, because it’s too hot; it’s going to be at 7 p.m., when the fountains start.
So my point is that the idea of a Schelling point is that we feel more comfortable once we start aligning around things that are familiar knowledge. And when you have very little knowledge, and very little knowledge about what others know, you move towards Schelling points. What AI does is help us identify those Schelling points really well. So if you’re a consulting firm and you train a model on all the decks that have ever been created, it can identify the right narrative structure, the right tone, the right framework to use in which kind of scenario. It can identify the Schelling points and then modularize them, to the point that if you then tell the model, here’s what I want to get done, it will ask you for the right inputs and then reorganize it.
So my point is that the model is then capturing context which somebody with 18 years in the firm would have had to capture.
And what we’ve seen is that that kind of context is not necessarily the simple, deterministic, structured-database concept that we would normally think of, because once AI trains on highly unstructured inputs, it creates interesting forms of context.
I mean, for all its hallucinations, one of the most interesting aspects of GenAI is just the fact that a simple thing, summarization, also means organizational context. It’s not just: take my thousand-word essay and make it 200 words. It’s also: take everything about my organization and help me understand what we really are, what we really value, what gets repeated, who repeats it. So really, help me understand the context, right?
So my point is that when AI itself can start capturing some of the context, context itself starts becoming cheap as well. That’s also something we need to understand. What this means is that, in the past, you could not have hired an outside freelancer to do high-context work. But today, you could potentially give them access to an internally trained model and have it act as a coworker, teammate, co-pilot, whatever fancy name you want to give it, working alongside them as a cultural consultant on the organization while they do the high-impact outside work. So my point is that what counts as context is also going to change quite significantly on the basis of that.
So I think these are some of the ideas that are really interesting. I’m quoting here things, you know, that other people have said in the past, and I reference them in the book as well, especially this thinking on AI and Schelling points and so on. But really, it’s about how you bring all of these together to think about what the new system of work looks like and how it works. That’s when it becomes very interesting.
Simone Cicero
So I’m about to drive the conversation in a very philosophical place. Sorry about that. But we may be at that point, right? Where just thinking through the old, you know, well-known frames doesn’t work anymore.
So all this talk about context I think is really, really interesting, because it connects a lot of the pieces that we have discussed among us in the last few weeks and more in general. Intuitively, what I’m getting from this conversation is that GenAI is pushing value in the market generally by commoditizing work and making time and labor less important in the economy. It’s pushing every entity in the market to understand its context, because that’s the place where you can find a new definition of value.
So economic value is going into crisis because of these automation possibilities, and therefore you have to be more vocal, more involved, in defining what value is for you. This really resonates with some interesting work recently released by Sam Lessin on the state of VC capital in 2025.
He has a thesis that basically says that nowadays liquidity providers are much more vocal in defining what is valuable for them. There’s no longer a unique thesis of venture capital. It’s much more about investing in something that you have a direct interest in, a strategic interest in, right? Which sounds like a fragmentation of markets. So you no longer have one big market.
You have multiple markets. You have multiple realities, in which operators operate and define value much more personally, much more individually.
That’s extremely interesting, because you also said that to define value in this new landscape, you have to define your constraints. You have to design your constraints. You have to basically design your system as an organization, which says a lot about organizations being more active in structuring how they want to impact, maybe, their territories, their employees, their industries, and whatever else.
And finally, all of this really resonates with two things that we have also discussed quite often on this podcast. One is the idea that, as you said, to redefine the context you have to ask the right questions. And in reality, what I feel is that we are approaching a place where you don’t just have to ask the right questions; you have to ask the hard questions.
What I mean by this is that you have to ask questions where you have direct skin in the game. You spoke about judgment recently as a way to generate value in the AI economy. And judgment, for me, resonates a lot with the idea of embodiment.
You exercise judgment when you are directly impacted by the things you are judging or deciding about. So where does all this conversation lead? It leads to a place where organizations, institutions more in general, and probably individuals need to be much more involved in defining what is valuable for them, taking a call, and organizing much more individually.
So we’re looking into a market which is made not just of niches anymore, but really of contexts of value. I think this is a fascinating perspective, and it connects with one thinker we have spoken about a lot on this podcast, which is Yuk Hui, a philosopher from Hong Kong who speaks about the idea of multiple cosmotechnics.
So the idea is that technologies and ontologies, let’s say values, need to connect much more broadly with the world, and that we have to overcome these unique narratives of value, right? And of our relationship with technology. So that’s a really interesting place to land. And it’s extremely disruptive to the Kool-Aid of “let’s win the market” that has driven the last couple of decades. So, how do you react to this? What is your feeling about this complex conversation?
Sangeet
Can you just tease out like the one or two points so that I ensure that I did not miss that?
Simone Cicero
The first point is that, as an organization or an individual, value is something that you have to define. So you have to define your ontology and even your cosmology: the connection of value with your meaning, right? With meaning.
And secondly, we’re looking into a market that is no longer something you can understand with common sense, with broad sense, with general principles, I would say. It’s much more about infinite niches, where relationships and contexts are more important than anything else.
So you have this responsibility to define value, mission, meaning, purpose, and feel it on your own.
Sangeet
Okay, fair enough. Very good. Let’s talk about the larger context first, and then let’s come back to the individual, right? So when you think about the larger context, in terms of what’s happening around us, what we often do when we try to interpret it is end up taking two extreme positions. One is the techno-optimist position, and the other is sort of the Luddite position, right? But that assumes that there are no trade-offs, because if there are only these two positions, it means that there are no trade-offs. It just means that you are either one or the other.
The reason we talk about trade-offs and multidimensionality so much is because it’s a natural consequence of fragmentation, modularity, the world that has happened since the digital economy. It changes how we think about things as well, philosophically as well, right? And that is why traditionally we used to think about good and evil, but now we cannot afford to do that because it is multidimensional.
Two dimensions that to me are really important in understanding what’s happening are local effects versus systemic effects; that’s one dimension, which is similar to the task-based versus system-based view of looking at AI, but let’s generalize it as local effects versus systemic effects. And the other dimension is essentially whether the way things are divided remains the same or changes: the rules of slicing the pie remain the same, or the rules of slicing the pie change, right? Now, when you think about Luddites and techno-optimists, they are very often assuming that the rules of dividing the pie are going to stay the same. Luddites are basically saying: machines are coming for my job; that’s over here. Techno-optimists are saying: hey, we don’t even know what will happen, too many things will be cool, it will be exponential, everything will go further.
So they’re also assuming that the basis of competition remains the same, but that because everything grows, the tide will lift all boats. That’s the general techno-optimist fallacy. Whether it’s Andreessen or somebody else, they all mention some version of this. I mean, Sam Altman regularly mentions some version of this: that the economy is going to expand so much that it will lift all boats. The problem is, all of that works well only if the basis of competition, how the pie is sliced and divided between players, remains the same. Then it’s great, because if the pie grows, everybody grows with it.
But if the basis of competition changes, if the way you slice the pie changes and the winners keep getting more while the losers keep getting less, then it’s a problem. And that is why you have to move away from the techno-optimist and Luddite views and think about this more from the perspective of unbundling and rebundling. What were the old bundles like? What competitive narrative were they based on? What constraints were they based on? And now that those things have been unbundled and modularized, what does the new system look like? But more importantly, how does that new system get sliced?
So my starting, abstract answer to the philosophical question is: think about the fact that it’s not just that the pie is going to grow. The way it will be sliced will be fundamentally different, because the basis of competition is constantly changing. And the basis of competition changes for a variety of reasons.
And I take the example here of the East India Company. The East India Company was basically the British Empire’s right hand in dominating colonial subjects. But the way they went after the Indian subcontinent was pretty interesting, because the Indian subcontinent is what you would call a complex system, right? And if you have to sit remotely in Britain and manage a complex system, you need to really make that system structured. So they essentially did what AI does today, to some extent. They took a very complex system and they mapped it so that it was fully modeled.
So they created a model of India. They undertook what remains one of the biggest surveys ever, the Great Trigonometrical Survey of India, in the 1830s or so. It took some 40 years to map the whole country, and this was a vast country without infrastructure and so on. And alongside that, they started building infrastructure to connect different parts of the country together. Once they had mapped the country, they did the first census of India.
They basically created a knowledge model of India. And once you have a knowledge model, you can change the location of decision-making. And that’s what happens with AI in organizations: once you have that knowledge model, the decision can be made elsewhere. And that decision could be made in the UK. With that, they created a centralized decision model with a decentralized execution model, which was called the Indian Civil Service.
So they basically trained people who looked Indian and who were Indian, but who were called, you know, brown Englishmen: they were taking the English decision-making logic and using it to run India. So my point is, this same mechanism, modeling something, using that model to drive decisions, and using that to execute your goals, is how new complex systems can be divided in fundamentally new ways. It’s not just that when the pie starts growing, it will be shaped the same way. It’s about what you choose to model. If you choose to model the Uber driver’s point A to point B, then you will reward that. And if you choose not to model their smile and their thank-you, then you will not reward that. Similarly, if you choose to model a call-center agent’s smile and thank-you, then the call-center agent, once they realize that, will start over-delivering on the smile and thank-you, because that is what’s being modeled. So my point is that systems keep responding to how you’re modeling them, how you are shaping decisions, and how you are then executing, and on the basis of that, the slicing and dicing happens.
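Here is a tiny editorial sketch of that feedback loop (a toy model, not from the book): agents shift effort toward whatever features the system’s model rewards, so unmodeled value, like the driver’s smile, goes unrewarded, and the measured metric gets over-delivered.

```python
# Toy model of "systems respond to how you model them": workers allocate
# effort toward the features the reward model actually measures, so any
# unmodeled feature (e.g. the driver's smile) stops being produced.

# What the system chooses to model and reward (weights are illustrative).
reward_model = {"distance_driven": 1.0, "smile": 0.0}

def best_effort(model: dict[str, float], budget: float = 1.0) -> dict[str, float]:
    """Agents put their whole effort budget on the best-rewarded feature."""
    top = max(model, key=model.get)
    return {feature: (budget if feature == top else 0.0) for feature in model}

print(best_effort(reward_model))
# {'distance_driven': 1.0, 'smile': 0.0}: unmodeled value goes unrewarded

# If the system starts modeling the smile instead, behaviour flips:
print(best_effort({"distance_driven": 0.2, "smile": 1.0}))
# {'distance_driven': 0.0, 'smile': 1.0}: agents over-deliver the metric
```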
And the basis of competition keeps shifting, because that modeling is never going to be perfect. With AI, we have to understand this: everything we see, from bias to deepfakes, these are all different versions of modeling errors. The modeling and representation of reality is never going to be perfect. It’s the classic case of Google Maps taking you onto a road that is blocked, because it doesn’t represent reality the right way.
And yet, for your decisions and execution, you are relying on it.
So my point is that if we believe that those basic parameters of how complex systems work are about to change, then competition is not going to be the same. The way things are going to be sliced is not the same. And because somebody can determine what gets represented, because somebody can determine how decision-support systems work, because somebody can determine what agentic systems will look like, the people who can determine that can decide how the pie gets sliced, and they can slice it in their own favor.
That is my key philosophical answer; since you’ve raised it to that level, I’m answering it at that level. In the book, I give a lot of examples of how this has happened in various settings. I use the East India Company example, but I use many others as well. But my point with all of this is that if you can control the logic of how the system models something, how that changes decisions, and how that changes execution, you can then control how that system decomposes, who gets to be a part of it and who gets left out, and that’s the bias issue.
You can also change how the system gets governed. And on the basis of all that, you can determine who gets what, right? So that’s the key idea. Now, that is at the level of institutions and systems. Bring it back to the level of humans. As humans, I think what we need to understand is that this is a unique opportunity to be yourself while understanding what the system needs, where you are in the system, and to what extent you can influence it to work in your favor. It is not a good time to keep looking at local effects, ignoring systemic effects, and constantly trying to optimize for local effects. Because the local effects will suddenly transform in step changes. And we’ve seen that already, right? And we’ve seen that only at the first level. If AI implements a step change, that’s only a first-order effect. Second-order and third-order effects are much larger.
And they won’t be, you know, just those basic first step changes. So my point is, go back to those two dimensions: local versus systemic, cut against the old basis of competition versus the new basis of competition. The problem with how people think about their jobs is that they’re assuming the pie is going to be cut on the same basis, and they keep looking at local effects: is someone using AI getting better, is somebody else keeping their job while I’m not keeping mine. But, and I am a strong believer in this, I think you need to keep a five-to-ten-year view on your career, knowing that everything is going to change all through, and knowing that there are going to be periods within those five to ten years where you might lose for one or two years. But as long as you win in the ten-year frame, that should be okay. So you need to build your own systems view of how you go about it. And I think, paradoxically, the more uncertain the future of work becomes, the more important it is for us not to have long-term plans, but to have long-term appetite,
which means that you should be able to take short-term hits, precisely because you cannot manage the uncertainty that’s happening in the system anyway. But if you keep your eye on the system and stay true to who you are in the ten-year frame, you will win. And I mean, you know, this has been a big reset for me as a person. I reorganized my work around this: to not constantly worry about what happens in the six-month frame, but to stay true to myself and be in a place where I can leverage systemic shifts in the ten-year frame. So, I mean, that’s how I personally believe things are going to matter.
Shruthi Prakash
I’m curious about a couple of things. First: do you think the rate of change of technology is actually aligning with the change in competition and the market landscape? Because we’ve also had a lot of people on this podcast say that the imagined speed of technological change is much higher than the reality of it. So that’s one thing I’d be curious to know more about. And the second is: how do we strategically implement all of this in institutions? Do we need to embrace certain organizational models? How do we define value at an organizational level? What would leadership look like tomorrow? What practices do we adopt as an organization? I’d be curious about these two.
Sangeet
Yeah, absolutely. So your first question was about technology. I think the much-quoted line still stands: we overestimate what technology can do in the short run and underestimate what it can do in the long run. And to a large extent, that is because we also overestimate the pace of technology and what it can achieve in the short run.
The reasons for that are twofold. One is that we are dealing with technologies today that seem to get a lot done and create the semblance of execution, without always having the fidelity of execution to go with it. And fidelity becomes important if – going back to Simone’s point about judgment – you are also responsible for the risk associated with the outcome. So that’s one point.
In order to see how technology is evolving, we should move our focus away from execution towards fidelity, to see whether what it can do really holds up or just looks interesting. Now, that’s obvious – I’m saying something everybody else is talking about. What is less obvious is that even if the rate of change of technology is slow – let’s assume that it’s slow, let’s assume that AI is not good, that it’s dumb – all that said, it is still way ahead of the capacity of our institutions to adopt it. And the reason for that goes back to your philosophical point. The institutions are still stuck thinking like it’s the 1980s, to be very honest. There are a lot of institutions that think about local effects, even though systemic effects have been unleashed ever since the shipping container – the internet, yes, but even with the container, systemic effects played out. The winners and the losers were completely different, and so on.
So institutions – I’m sorry to say, but EU regulation is a classic example – are thinking the pie is going to stay the same and local effects matter. They have been stuck in that particular quadrant for the last two decades. And the problem is they have to skip over to the other quadrant. Yes, regulation is very important, but systemic effects matter and the pie is going to be sliced in new ways. So your regulation is actually preventing innovation anyway, and it’s not achieving anything, because it’s not looking at systemic effects and it’s not looking at the new way the pie gets sliced.
And so my point is that we really need to move our focus away from asking whether technology is moving fast or slow. Maybe if you’re Tesla and you want to do self-driving cars, that matters to you. But for most of the rest of the economy, the pace of technology is not what’s holding them back; the quadrant they’re operating in is what’s holding them back. So it’s not really the pace of the technology that’s the issue. I take many examples in the book that are very interesting, almost none of which are intelligent technologies. The shipping container is one. I take the example of Walmart versus Kmart when the barcode happened. Kmart used it to do quick scanning and automate its checkout counters. Walmart used it to capture data.
Walmart used it to restructure the relationship with its suppliers and do centralized sourcing across its supplier base, while Kmart was still sourcing and negotiating at a store-by-store level. And so Kmart had no power in the value chain, and Walmart changed the power dynamics. That’s a classic example of systemic effects – new ways in which the pie is going to be sliced. So my point is that it’s not the pace of technology that’s the issue today. The pace of technology far outstrips our ability to move our institutions and think with it.
What’s actually even worse is not that we are trying to move our institutions along and falling behind, but that we are in the wrong quadrant altogether. And so what we feel is the right move is actually consistently holding us back or moving us in the wrong direction. And that is the problem that really plagues us today. So that is the answer to your first question. Again, in Reshuffle, I keep going back to this point.
AI is stupid – let’s agree on that. AI can’t do anything well. Even despite that, unless we move from this quadrant to that quadrant, we are still stuck ages behind whatever AI can or can’t do today. So forget about what LLMs can and can’t do. Forget about what’s possible and what’s not. If you don’t change how you think about these issues, you are not capable of using even what AI can do today, right? So that’s one point.
So, to your second question: what should organizations do? What should organizational models look like? I think it comes down specifically to this piece – you need to move away from local effects and think about systemic effects. And thinking about systemic effects means that AI is not just something that comes in and transforms workflows. Today, people talk about job loss and organizations talk about workflow automation. Nobody talks about anything higher than that. Nobody says, “Our organization used to look like this, and it will look like this in the future.” They will say, “We went from 200 to 120 people,” but that’s still task automation. That does not change the logic of the organization.
So if you want to think about how you reorganize, you have to start by thinking about where you play in the ecosystem. What is the logic of why you win today?
Why does value come to that particular point in the value chain? You have to start with the questions of where you play and how you win. You cannot start by deciding what leadership will look like and what the organization should look like. You have to start with the fundamental question of why you exist as a business. You do not exist as a business to provide jobs. You exist as a business to play and win – and when you do that well, you provide jobs. So if you do not start over there, and you keep trying to figure out how to eliminate jobs or how to protect jobs, you’re going to stay stuck in the wrong quadrant and end up doing what European regulation has been doing for two decades.
You’re not going to move forward, because you are moving, but not in any direction that makes sense given the new system. So I think that’s the challenge. You have to answer those two questions: Where do we play? How do we win? How are those two things changing because of AI? And then work back through the system of work.
Simone Cicero
Right. I mean, totally, I resonate with that. I’ve been making a point recently of explaining to people that you never start with how you organize or with your workflows. You always start from the customer, the needs you are responding to in the market, why you exist in the first place, right? What is the value that you deliver?
And I think this conversation has been quite rich, and if I can wrap it up for our listeners: the message – for organizations specifically, and for institutions too, even if, as I understand it, your faith in institutions is fading, and mine as well sometimes, but especially for organizations – what we are telling them is this. The impacts of AI are so profound that the universal thesis of markets is pointless. So you have to be much more sovereign in defining what value is for your organization. You have to be much more active in mapping out and understanding how you contribute value to the system more generally, not just to specific micro-customer needs.
So you have to understand the system. You choose what to model. You determine your strategy. And you have to be more strategic, because otherwise the already thin margins you have to exist on in an ordinary market are getting even thinner with the impacts of generative AI, and AI more generally. So the message is: be more intentional about what you do as an organization. Try to understand and model the system, so that you can understand and model your value and your contribution. That was fantastic, and I’m very much looking forward to the book. So maybe as a closing bit, as we approach the book release, it would be good to share with our listeners where they can follow the progress and the releases. And, as a closing point, maybe also share a couple of breadcrumbs.
I know that you have been reading a lot during the writing spree that led to the book. So maybe you have a couple of suggestions for people to add to their library, or other suggestions to get some new ideas on the table.
Sangeet
Sure, absolutely. If you want to follow my work, you can do it at my Substack newsletter, platforms.substack.com. And you can look for the book, Reshuffle, on Amazon. It’s available for pre-order, and by the time this podcast comes out, it will probably have been released as well.
In terms of what I’m reading, my approach to reading has changed quite a bit. I read a lot less online than I used to – or a lot less from surfing the web, if you will. If I deconstruct what I really do these days, I have long arguments with ChatGPT, identify interesting threads, ask it to pull out the most interesting ones, and then create a reading list on the basis of those threads. And then I dive into that quite often. So I’ve been using it as a way to curate what I’m interested in. And if I find something super interesting, I feed it in and say: let’s argue about this for a bit, let’s tease it apart, let’s use game theory and play around with it, try to overlay different things – and then I ask it to summarize, say, ten new insights that we generated, and create a reading list on the basis of that. That’s been quite interesting for me, because it becomes immediately applicable. So I do that quite a bit.
Simone Cicero
Right, that’s interesting. Maybe you can ask ChatGPT to play the role of a grumpy philosopher-librarian somewhere, to make it more interesting. But that’s fantastic – it’s a very good suggestion for our listeners, to play with ChatGPT to find new directions for research, learning, and reading. So thank you so much. Overall, thank you for the conversation. I hope it was an interesting one for you as well.
Sangeet
Always, absolutely.
Simone Cicero
Right. Thank you for engaging with the openness and broad scope of our questions. Shruthi, thank you so much for joining us today as well.
Shruthi Prakash
Thank you. Thank you, Sangeet. Thanks, Simone, as well.
Simone Cicero
And yeah, for our listeners, as always, you can head to www.boundaryless.io. This conversation will be on the top page, where you will find all the links to the resources and case studies that Sangeet has mentioned. Most likely, you’ll also find the link to the book, which will have been published in the meantime.
Sangeet
Thank you so much.
Simone Cicero
Don’t forget to purchase your copy and until we speak again, of course, remember to think Boundaryless.