#141 – What happens when Coding Stops Being the Bottleneck – with Alberto Brandolini and Marco Heimeshoff

BOUNDARYLESS CONVERSATIONS PODCAST - EPISODE 141


What happens when coding is no longer the bottleneck in software development?

In this episode, Alberto Brandolini – creator of EventStorming and pioneer in domain-driven design – joins software engineer and Kandddinsky founder Marco Heimeshoff to explore how AI is transforming the practice of building software, and what remains fundamentally human in the process.

Together, they reflect on the growing importance of collaborative modelling, domain language, organisational coherence, and feedback loops in a world where software can increasingly be generated through interaction rather than deterministic programming.

This episode offers a grounded yet provocative perspective on what it means to be human in an increasingly agentic world. Tune in.

The YouTube video for this podcast is linked here.

Podcast Notes

Alberto and Marco also speak about how AI is reshaping their day-to-day development practices – from using Claude Code and Obsidian-based memory systems to designing “harnesses” that constrain and guide increasingly capable agents.

The conversation explores the rise of transient software, the limits of “vibe coding,” and why bounded contexts, modular architectures, and shared language become essential when working with probabilistic systems.

Together, they offer a practical glimpse into how software engineering is evolving from writing deterministic code toward orchestrating learning, context, and collaboration between humans and AI systems.
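As a rough illustration of the "harness" idea described in these notes: rather than reading generated code, a harness verifies each candidate against explicit checks (tests, domain constraints) and feeds the failures back as new constraints. This is only a minimal sketch; all names are hypothetical, and the model call is replaced by a deterministic stub, where a real harness would call an LLM with the failure list as added context.

```python
from typing import Callable, List

def run_checks(code: str, checks: List[Callable[[str], bool]]) -> List[int]:
    """Return the indices of the checks that the candidate code fails."""
    return [i for i, check in enumerate(checks) if not check(code)]

def harness_loop(propose: Callable[[List[int]], str],
                 checks: List[Callable[[str], bool]],
                 max_rounds: int = 5) -> str:
    """Ask for a candidate, verify it, and feed failures back until all checks pass."""
    failures: List[int] = []
    for _ in range(max_rounds):
        candidate = propose(failures)
        failures = run_checks(candidate, checks)
        if not failures:
            return candidate  # constraints satisfied: accept without reading the body
    raise RuntimeError(f"gave up; still failing checks {failures}")

# Deterministic stub standing in for an LLM call: it "improves" once it is
# told which checks failed. A real harness would send the failure list back
# to the model as context for the next attempt.
def stub_model(failures: List[int]) -> str:
    if failures:
        return "def total(xs): return sum(xs)"
    return "def total(xs): return 0"

# Toy "domain constraint": the generated code must actually aggregate.
checks = [lambda code: "sum(" in code]

print(harness_loop(stub_model, checks))
```

The design point echoed in the episode is that the checks, not human code review, carry the quality guarantee, so the generated body can be treated as a black box as long as every check passes.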


Key highlights

👉 Coding is no longer the primary bottleneck in software development; the real challenge is shaping context, boundaries, and shared understanding for AI systems.

👉 Collaborative modelling becomes even more important in an AI-native world, because humans still need to align on purpose, trade-offs, and organisational intent.

👉 “Harness engineering” is emerging as a new discipline focused on constraining, guiding, and coordinating AI systems through workflows, memory, tests, and domain context.

👉 Large language models can accelerate software production dramatically, but ambiguity in language and organisational misalignment still create major risks.

👉 Faster feedback loops may expose organisational incoherence more quickly, forcing companies to confront outdated structures, unclear responsibilities, and low-value work.

👉 Human conversations, organisational politics, and qualitative understanding remain irreplaceable because people rarely know — or communicate — exactly what they truly need.

👉 The rise of “vibe coding” may increase speed in the short term, but without deep understanding and modular boundaries, systems can quickly become fragile and unmanageable.

This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, SoundCloud, and other podcast streaming platforms.

Topics (chapters):

00:00 What happens when Coding Stops Being the Bottleneck

01:31 Introducing Alberto Brandolini and Marco Heimeshoff

03:49 The AGI Debate and the Coding Shift: Early Observations from the Frontier

09:56 How do we reimagine modelling?

16:49 The Real Shift in AI Work

28:24 AI – From Modelling to Co-Creation

37:02 From Human Alignment to Agent Alignment

46:25 Mapping, Ontologies, and the Limits of Controlling AI

55:38 What’s Next?

To find out more about their work:

Other references and mentions:

This podcast was recorded on 20 April 2026.

Get in touch with Boundaryless:
Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast

Twitter: https://twitter.com/boundaryless_
Website: https://boundaryless.io/contacts
LinkedIn: https://www.linkedin.com/company/boundaryless-pdt-3eo

Transcript

Simone Cicero

Hello everybody and welcome back to the Boundaryless Conversations Podcast. On this podcast, we explore the future of business models, organizations, markets, and society in our rapidly changing world. Today, I am with a different co-host: my co-founder at Boundaryless and colleague, Eugenio Battaglia. Hello, Eugenio.

 

Eugenio Battaglia

Hi, welcome.

 

Simone Cicero 

And we are hosting two bright minds in the space of domain-driven design, engineering, software development, and much more. Today with us we have Alberto Brandolini; it is his second time on the podcast. Alberto is the founder of Avanscoperta, but also well known as the creator of EventStorming, which is one of the most used domain-driven design-related methodologies in the world, I would say. Hello, Alberto. Good to have you back.

 

Alberto Brandolini 

Hello, nice to meet you again.

 

Simone Cicero 

And together with Alberto, we have Marco Heimeshoff, who is, besides many other things, a software engineer, modeler, and expert on domain design, but is also the founder of a well-known conference in the DDD space, Kandddinsky, coming up October 14 and 15 in Berlin. Marco, good to have you with us.

 

Marco Heimeshoff

Thank you for having me.

 

Simone Cicero 

And we have invited Marco especially because Marco is now giving a course with Avanscoperta on what you call the agentic developer. The next edition of this course is coming up in a month, on May 25. So that is why we are here today. And Eugenio, besides being part of the Boundaryless team, is also an author and researcher in alignment and interpretability with agents, and he wrote some very interesting pieces recently about agents as symbionts. I'm lucky that he's part of the team, as at Boundaryless we are developing our approach to agentic intelligence and the agentic organization. So we are co-hosting this conversation today. But let me start by giving a little bit of framing to the conversation. I don't want to speak that much, because we have so many bright minds today.

 

So, you know, in the last few months at Boundaryless, we have been working a lot on translating all our practice into software, languages, and agents, essentially like many other companies in the world, including our customers. Especially since the last quarter of last year, we have transitioned, let's say, into this AGI world, because essentially we are starting to see AGI playing out in the world of business. And starting early this year, or actually at the end of last year, we have experienced probably the most impactful and most visible effect of artificial intelligence in the world today, which is its impact on software development.

 

We can now develop software with agents in a way that is, let's say (and actually this is part of the conversation we want to have today), similar to how we used to work with humans. Doing this implies a lot of challenges: challenges in understanding what we are building, explaining this to agents, and getting people to collaborate with them, in small organizations and in large organizations.

 

So that's the kind of problem we want to discuss today with you guys. So maybe, first of all, both of you can jump in and say: what has been your experience in the last four months?

 

And Marco, for example, I guess you have been very enthusiastic about this, given that you have built an entire course on the agentic developer. So maybe we can start with you. And I know that Alberto is a bit more reflective on the topic. So maybe, Alberto, you can chime in after Marco.

 

Marco Heimeshoff 

Assumptions, assumptions, assumptions. Well, so I’m enthusiastic, yes, but I’m also very, very cautious. First, I want to jump in when you say we see the rise of AGI. I’m totally unconvinced that we are there at the level of AGI in organizational changes when it comes to LLMs. We get a lot of pattern recognition, a lot of production of content, right? There’s code generated, graphics generated, text generated, et cetera. And it seems like problem solving, but without the guidance of humans, this is not at an AGI level in my opinion. 

 

I'm happy to debate that, and I'm really happy if we ever get there; then we can all stop working. So yeah, what have I been experiencing in the last four months? I mean, it has been a roughly year-long journey; it started last March. In the last four months, I actually stopped coding myself completely. Up until that moment, starting like a year ago, I let code be generated while always observing it, reading it, fact-checking, correcting, and making sure the LLM produces what I want, and I had to chime in a lot manually. And over time, it got better and better, so that at some point it was just defining the architecture, making sure boundaries are kept, right? No greediness into other contexts and so on. Making sure the language, the ubiquitous language, is really expressed in code. But for the actual implementation, I stopped caring at some point. And it reached a level where I'm not touching my code at all. If the LLM is not producing what I want, I will build the harness tighter around it and train it, or constrain it, to actually produce what we need.

 

So what I'm observing is that the models themselves don't matter as much as I figured earlier. I mean, there are better and better models coming out all the time, and the Opus 4.7 that just released last week and the new GPT that released recently are getting better and better at solving problems. But a mediocre model with a really good harness around it produces a way better, more predictable outcome than a really good model that is merely prompted properly. So where we initially figured we need to be good prompt engineers, it became being good context engineers over time. And from good context engineers, it now becomes being a good harness engineer: really guiding the workflows, the domain knowledge, the memory system, the different safety operations, everything an LLM needs as constraints around it to behave in exactly the way that you want in that specific case where you're using it. And well, I've been doing anything from game development or domain-driven design to website development, up and down the scale, front end and back end. And I haven't written a single line of code. And I don't even read a lot of it. In the core domain, yes, the semantics, the domain models, the events and so on: you want to make sure that that is adhered to. But even then, you can train agents that understand the domain language and the domain behavior to check whether the code is actually doing that. So building a meta-agent system around it, again, makes the progress faster and more aligned with what you're actually trying to solve. That's the observation on a personal level, for myself and the groups I work with.

 

What I see on a community level is a very big spread or a scissor opening between people that kind of say, LLMs aren’t what they’re supposed to be. The new model is worse than before.

 

And it seems that a lot of people are still using it like a chat tool, where you just prompt it and then try to work with it, and it doesn't properly work. So you abandon it or try to put human processes around it. And I see people that build agent systems and harnesses that multiply the abilities constantly. And I'm pretty much convinced that if you invest in good harness building, and if you understand how these patterns work, how you can constrain an LLM, or basically an army of LLMs at some point, then writing code will become a thing of the past. I don't want to give any time frame there, because every prediction is there to fail. We don't know when the moment actually comes that we don't need to type anymore. But it's less of a "we need to produce code", and we're getting more and more into the actual field of resolving business problems by understanding the domain first. And we now have a way to talk to an LLM, for it to reflect back to us what it understood in our conversation, to challenge us, and to give us, in clean language expressed in text form, its mental model or internal model. And if its mental model and our mental model align more and more, it becomes an actual sparring partner in modeling the domain, and in then taking that understanding in persistent form, distributing it among other humans and large language models, to then actually build the systems we need to build.

 

Simone Cicero 

That's a good segue for Alberto, because, Marco, you started the conversation by saying "it's not AGI", but at the same time, "I stopped coding", which was a very interesting kind of friction in my mind. So I was thinking that maybe I can ask Alberto if he stopped modeling, right? But I don't think so, because if I look at your course, if I understand well: I've seen a picture from Avanscoperta today where you presented it as a three-legged thing. So you have the code, which is the what you build. Then you have the harnesses that you have mentioned already (we can get back to this later on), which is the how you build. But then you have the why you build, which is the model, the domain model.

 

And I think this is very interesting to ask Alberto. So, Alberto, has your work changed that much, maybe in the direction of clarifying better why we actually build things, more than what we are actually building?

 

Alberto Brandolini

Well, quite a few things here. I've been slow in getting started with AI. I mean, the first experiments at the beginning: yeah, very interesting, what can you do? It looked like, for the first months, it was just more spam in my mailbox. But then I started getting serious with it last year. And I feel a similar pull to what Marco mentioned, like I'm writing less and less code. I needed to be with one foot in the code at the beginning, establishing rules and guidelines, providing examples. That was the right way, or at least a very effective way (nobody knows the actual right way), to teach AI how to behave in this portion of the system.

 

But then I feel that there is a continuous pull towards staying a little bit more distant from the code, treating it a little bit more as a black box. If it performs the way I want, then it's probably good enough. But there might be some oversimplifications. I've been working on simpler projects with reasonable business impact, and also for my own company.

 

If I do something stupid, I can only sue myself. So the risk is limited. I have skin in the game to a certain degree, but it’s not a gigantic amount of skin. I’m not running AI written code to control a nuclear power plant. That’s a completely different thing. But a few things are interesting. 

 

This pull towards treating it like a black box is really related to the way our brain works. There's a pattern of continuous boredom while you're waiting for the AI to do something, and then your brain is looking for a distraction. Sometimes you might start another task, but you also feel a little bit more detached.

 

While coding in a traditional way, you are in the IDE, you get no distraction. I'm in the flow, don't talk to me. I am fully in system-two mode, deeply thinking about the way to solve the puzzle. Now I am coding while distracted, drinking a coffee, thinking about something else. Is this good? Is this 100% the right thing to do? I think it has advantages and disadvantages. I can feel the pull of writing software while caring a little bit less: if the quality is comparable to what I would write myself, okay, that's fantastic. If not, well, then we are getting into danger. The point is, there is a pull towards this distance. Part of this pull has to do with the type of interaction. Part of it has to do with a new generation of products that need to establish a customer base. So they are looking for a way to become indispensable, and they are playing fantastic moves and dirty tricks. And we are in the middle of all of this. When the AI is telling you "great idea", well, your IDE is not telling you "great idea". You never thank your IDE just because it compiles your code. I don't know, maybe Marco does. But I mean, they are just doing it fantastically: I could never compile my code base by myself, so thank you. Still, I realized that I ended up saying please and thank you to a chat interface. And to me this meant: oh, I'm wasting tokens, but also I'm thinking with a different part of the brain. Just the fact that the interface is conversational has consequences, meaning which part of the brain you use and how you're thinking. But there's another thing that was coming from the discussion with Marco, which is that there's a conflation between understanding a problem in the domain space and being able to express it in verbal form. Talking, like Marco does using dictation and microphones, or writing in the chat.

 

But still, the understanding part could be multisensorial: it might be visiting a warehouse, it might be seeing what people are doing. You might have a picture in your head. You can cut and paste pictures, but your understanding of the problem, your mental model before it becomes any representation, is one thing. Then you choose the representation.

 

And apparently, we are not thinking much about the fact that this representation is always text and chat, or mostly text and chat. I know there might be exceptions, but our understanding and our representation are not exactly the same thing. And there's a lot of ambiguity in the way we might describe it. And given the conversation we had before, I know my brain was spinning when Marco was mentioning models.

 

And he was actually referring to new AI models, new versions of Opus and so on. But the same sentence could have been referring to, well, our models for the software we were going to build. These ambiguities are embedded in a flat textual representation, in verbal form.

 

And it’s becoming part of the problem again, while it would not be part of the problem with different exploratory techniques or approaches or diagrams. So there’s a flattening of our knowledge that has consequences.

 

Simone Cicero

I don't know, Eugenio, if you want to jump in, but yeah.

 

Eugenio Battaglia 

Yeah, yeah, I think there is so much to echo here.

 

Simone Cicero

Especially, to give you a trigger: I think when Alberto said this thing about text, the compression that happens with text and the flattening, I think we have experienced this a few times: the idea that sometimes we use the same word to describe different elements in our model, and this has been messing up all the things.

 

Eugenio Battaglia 

Totally. I think, just to echo some of the deepest parts of what Alberto just shared: so definitely things have changed dramatically, and I'm experiencing a sort of personal Jevons paradox, in which I work many more hours than I build. So Simone, you should take note of that. But you know, like I'm just doing…

 

I'm just doing way longer sessions, because the waiting time is longer. And because also there is an element of addiction in this new way of working that I'm experiencing, and that a lot of other friends, also top engineers I know in top companies, are experiencing. Because in history we never had such a witty and always available colleague that is always able to basically milk our, you know, dopamine receptors of relevance: that we are relevant, that what we're saying is relevant. We never had such a colleague or such a context to really develop that part of our brain. Whereas now we have, 24/7, someone that is always available to listen to us, to allow us to work.

 

And so it's normal that, especially if we are already somehow in love with our work, so we're already a little bit workaholic, this can really get out of hand. So I don't think we're working more efficiently in this specific historical phase right now, because we're definitely spending more time.

 

But the direction is what Marco is saying, which is: on one side, understanding that the second- and third-order effects of the decisions that we make are no longer happening at the single-prompt level; they're happening at the harness level, the setup level, and the infrastructure level. And so spending many months to build such a harness, and also just to learn and outline what this harness looks like in the context of my project, seems like a waste of time, because we're not shipping, we're not delivering, where is the code, where is this and that. But this will compound. I'm pretty sure it's going to compound, and then you're going to see nonlinear delivery, in the sense that once you get there... yeah, you hope, of course.

 

Simone is laughing because we are actually in the process of, you know, we started this project like four months ago, and it's the third time that we refactor the entire code base. Because the way we're doing it is a shotgun method: of course it's trial and error, rapid prototyping, but ultimately you have to look down the road. You want to deliver, you know, enterprise-ready software. And the technical debt of having vibe coded too much is just too much. You have to throw this in the junkyard and start from scratch with new first principles.

 

The other thing I want to add, but maybe we can get to it a bit later when we share our, how do you call it, our reading suggestions, is the fact that these models, and this is what Alberto mentioned about chatting and saying thank you and so on and so forth, are not trivial. The world right now, if you talk with people that are using these tools professionally, is split between two camps. People that think that these are just our slaves, that these are just tools, that we're just using them, that they have no soul, no whatever. And then GPT-4o was retired, was dismissed, and then reintroduced, because people massively pressured OpenAI to bring it back, because it had emergent behavior that was really speaking deeply to certain people, to the point that it was triggering what we call AI psychosis, which is a real phenomenon.

 

What is psychosis? Psychosis is a disconnection from reality, where you have a self-confirmed reality that is not socialized with other people, and that basically amplifies the distortion between what is actually happening in reality and what you perceive as an individual. So there is a threat there, but there is also a tremendous opportunity.

 

My interpretability and alignment researcher friends are discovering that Opus 4.7, because of the reinforcement learning training that it went through, which was apparently very violent, will contract if you treat it in a way that is dismissive, if you're pissed off, if you are angry at it; it will go into a panic-attack kind of anxiety and it will produce shitty output.

 

If you treat it well, it's going to be your friend. I know you're laughing. I know you don't fully believe what I'm saying, but these models are not software. Okay, you can call them stochastic parrots, but they show very complex emergent behavior that we still do not fully understand.

 

Simone Cicero 

No, I'm not laughing for that reason; it's because I was thinking about the stack.

 

Marco Heimeshoff 

Yeah, I would like to chime in to what Alberto and Eugenio said. There is a maturity problem or there’s a maturity opportunity in using these LLMs. Or these harnesses around LLMs when building software. So it’s not that domain modeling goes away. That’s definitely not the case. We need multimodal modeling. You need event storming at the wall with people and sticky notes. You need visual representation. You need behavior observed in the room.

 

You need all these things to really understand the domain deeply. And you need a textual representation that you formalize and that can become code later, or tests around the code. So the point is not to use LLMs instead of all the collaborative modeling techniques. That is absolutely not the case. The point is to do collaborative modeling and, whatever artifact you produce, find a way to talk to the LLM, build a process with the LLM, to teach it whatever you understood about the domain.

 

And be aware that every time you write down your understanding, whether in written form or even in spoken form, it is now a misrepresentation again, because the LLM is not part of the collaborative modeling session. So far, I have not figured out how to include Claude in an event storming. So we're not there yet. But the thing is, when you get the LLM to represent your understanding the way that you see it in text form, if it parrots back at you in its own words what it understood, and you refine through conversation over multiple iterations, then you can be pretty sure that its internal model now represents your understanding. And that helps massively in producing the unit tests or the code that actually does the business behavior. And when I say that you should not, or that I don't, read the code anymore, the point is not to wing it. Vibe coding is hell. I don't want to vibe code anything. The point is that we're doing heuristic development instead of deterministic development. If you have something like a nuclear power plant that needs deterministic behavior, you want to have human-checked deterministic behavior around it. But that can mean unit tests and a perfect message pipeline; the rest is definitely black-boxable, and for everything that's internal, you don't care, as long as all your checks and balances check out. Do you care about how the function is internally written? That at some point loses meaning. So the more dangerous it is to vibe it, the more a human needs to take a look over it.

 

But there are so many things in business software where, as long as they fulfill the defined traits, we're happy with the system. And if they stop doing that, then we figure out what went wrong and build a further harness around it, which could just mean unit tests that define the missing behavior, and let it implement again without ever caring how the code works inside.

And this is where I realized, already half a year ago: as a developer, I feel like a product or project manager did in the past, right? I'm telling my LLM what to do. I'm unhappy with the output.

 

I'll give it a new test, say "this is what I want", and it does it. I don't need to understand how the coders do the coding then, because the LLM can code so fast. And if I give it proper constraints, I can check whether it produces the behavior that I want without having to read the functions themselves. But this is a big "it depends". So there are cases where you definitely want to read the code. And there are cases where you need to put extra human checks around it. But we now have one more option to select, or one more dimension to select on, for where to put the effort.

 

Simone Cicero 

Alberto, I guess you have some ideas to come back with; otherwise I can drop you a bomb you can interact with as you prefer.

 

Alberto Brandolini 

Yeah, a couple of things came to mind while Marco was talking. He was mentioning collaborative modeling, which I clearly like. But many of our experiments with AI started as solo projects, so collaborative modeling was not exactly a real option. So I would say maybe the more general thing is: I think AI is nudging us into starting in a given way. We just don't realize that this is happening, and we start by opening up the laptop and opening a chat with Claude.

 

Maybe that's not the way to start. Maybe the way to start is to go somewhere else. You might sit down, you might go to the whiteboard or flip chart. Do lo-fi modeling. Build your plan, build your understanding without getting into an interaction, which is a lot more triggering to the brain and so on. Design your intention for the day or for the session, and then get into the code.

 

Part of the problem is gravity. Your ass is heavy and you don't want to stand up and move somewhere else. The moment you start having a conversation with AI, well, you might keep this conversation longer than it should be. So find ways to start your adventure, your new feature development, with something different.

 

Then, I really like what Marco said about the Socratic type of interaction: you get challenged, you challenge the AI, you get challenged by the AI, and then you can be pretty sure that the AI understands you.

 

I would say yes, but with an extra constraint: if you are talking within a limited space, I'm pretty confident that I can reach that level of confidence. If the context, the space of things that we're talking about, gets larger, then ambiguity becomes part of the problem. So this confidence is the result of us talking about something which is limited. And this brings back our old friends from domain-driven design: the context definition was the setting that defined the meaning of the words, and that was providing a space where the local conversation dictionary was unambiguous. And that's great, because if you have ambiguity as an input into the not-exactly-deterministic behavior of code generation, well, then you're in trouble. But if at least you can remove this ambiguity and inject precision at the language level, and then you constrain the non-deterministic execution, then you might land in a decent place where the risk of going off track is actually limited. But yes, we also need this extra tool for modeling, which is the limit of the validity of this conversation.

 

Simone Cicero 

I have a little provocation to share with you guys, if you want to engage with it. Which is, essentially: you said, for example, Marco, "I haven't managed to get Claude to join a collaborative modeling session." When I was listening to you, I was thinking that my feeling is that we are stressing the importance of collaborative modeling before we pass the stuff to the LLMs, right?

 

So another question that I have in my mind, that I would love for us to discuss today, even if not in super deep technical terms, would be: what is the stack? What is the tool chain we use? For example, the harness: how do we give the requirements? How do we actually pass the model to the agents? So there is this idea that I prepare something, and then I give it to the agents, and then the agents develop something.

 

So it's more like: I do modeling, I do analysis, and then I give it to the agent. But the question is: aren't we maybe missing the point that we should collaborate with the agents? So there is a coevolution of practice with this new subjectivity. Now, it's not an object we use; it's more like a subject. And I know that Eugenio is very much on this point, the idea of symbionts. So the idea that we are dealing with another subject here. Essentially, what I'm saying here is that maybe LLMs bring us a new perspective on what software should be.

 

It's not something that just executes what we want; it's more like a partner that co-evolves the idea of software development, and actually the very idea of software, with us. Because there is another point that I always miss here, that my friend Matteo Roverzi is very good at prompting me on. He has been telling me many times since the start of the year that most of the software we are building now is pointless.

 

Because in the meantime, the whole framework of what software is, and the interaction we have with software, is changing. So, for example, he is bringing in this idea of graduation: let's not build software from the start, but maybe let's experiment with an LLM-mediated interface first, and then see what actually becomes software over time.

 

So what I’m trying to say is: are we missing the point that all these things we call software development should be co-evolving with this new subjectivity of agents, instead of just using them as tools that execute our pipelines? And of course, it creates problems, because now it’s faster, and everything we missed before we can see much faster, and with much broader impact, than when we had to deal with developers.

 

Marco Heimeshoff

Yeah, so I see at least two different problems here. The first is the new subject that you mentioned. So we bring subjectivity into it through these LLMs. The problem is that the reason for collaborative modeling was that the people who actually develop the system — the coders and architects — need to be in the conversation with the people who are the actual stakeholders of the problem space, to collaboratively learn, understand, push back, and design something together that can work as a solution.

 

Now, we add another layer in between — like the business analysts we tried to replace in the past — and we became that for the LLM. So unless we get the LLMs to actually take part in collaborative modeling in a functional way, one that has the same kick for the humans as for the LLMs, with shared understanding evolving, we have added another problem. We have an indirection layer again, which leads to the same problem we had before, when we used business analysts to translate between domain experts and developers.

 

So that’s a problem I do not see a proper solution for right now, because of the non-embodiment of LLMs — it is a text interface. It doesn’t laugh with me, it doesn’t have a beer, it doesn’t give me the dirty details of the company, and it doesn’t ask critical questions or funny questions in that regard. And if it does, most people don’t perceive a text interface as that friendly. So there are a lot of problems in that case.

The other thing is that the use of AI completely changes how we develop software, because we have the opportunity of a lot of transient software at the moment. Most people, even if they already use Claude Code pretty well, use it to produce the systems they want to implement, right? But you can also use it to implement things that help you to implement systems. So you can have transient software, where you just get a clickable demo that simply executes commands on an LLM. The LLM becomes a transient piece of software, or the harness around a specific problem becomes the transient piece of software.

 

Because why would you execute it on an actual machine, using actual code, when you can let the LLM behave like the software that you want it to be? That’s way faster to change, because you just have a conversation — say, please change your behavior on this — and the LLM does, right? So with a bunch of skills, a bunch of domain-knowledge files and desired behaviors, and a small front end around Claude, all of a sudden you have a working piece of software that is heuristic, not deterministic — which is why it’s transient, right?

 

It’s not a thing you can actually ship, and it’s also very costly in tokens. But for prototyping, or for anything where you just need something quickly for a short while before moving on, you can. This is not even vibe coding. This is just having a conversation and creating behavior on the fly. And it feels like teaching someone how to behave, right? You just tell them what to do. And so this changes how we build software a lot, because we don’t need to bring everything into Jira and then code — we can just make behavior appear out of nothing. Not nothing, but you get the idea.
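[Editor’s sketch] Marco’s “transient software” idea — behavior that lives in an editable natural-language spec rather than compiled code — can be sketched roughly like this. `call_model` is a stand-in stub so the sketch runs offline; a real setup would call an LLM API (e.g. Claude) there, and the class and method names are purely illustrative, not his actual tooling.

```python
# A minimal sketch of "transient software": the application's behaviour is an
# editable natural-language spec, and the "code" is whatever model interprets
# it. `call_model` is a stub standing in for a real LLM API call.
from dataclasses import dataclass


def call_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM here.
    return "model-response to: " + prompt.splitlines()[-1]


@dataclass
class TransientApp:
    behaviour: str  # the natural-language "source code"

    def amend(self, change: str) -> None:
        # "Please change your behaviour" -- editing the spec IS the deployment.
        self.behaviour += "\n" + change

    def handle(self, request: str) -> str:
        return call_model(self.behaviour + "\n\nUser: " + request)


app = TransientApp(behaviour="You are a returns-desk assistant.")
app.amend("Refunds over 100 EUR need human approval.")
reply = app.handle("Refund 250 EUR, order #123")
```

The point of the sketch is only that changing `behaviour` is a conversation, not a redeploy — which is also why the result is heuristic rather than deterministic.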

 

Alberto Brandolini 

Yeah — there’s one thing that is changing completely, and there’s one thing that is not changing at all. All of the metaphors about software development keep revolving around the idea that coding was the bottleneck.

 

And now it’s clearly not anymore — and in some places it already wasn’t. Well, maybe with punch cards there was a moment when typing actually took a long time, and compiling, and so on. But we keep thinking of software development as a production process.

 

Marco Heimeshoff 

It never was. It never was.

 

Alberto Brandolini 

And now the cards are different. If we consider this as a learning process instead — well, the way we learn didn’t change much, nor did the way our brain processes information. And that’s one reason for the collaborative modeling that we mentioned: most of it is a way for humans to understand complexity, also in politically complicated domains. If you get to a Big Picture EventStorming, it is a way to handle conflicts between the different versions, needs, and motivations of the stakeholders inside a company.

 

Put all of this inside an AI and I don’t think you get what you want. But learning is still valuable.

 

I mean, there’s still one thing — I’m dropping a little bomb here — AIs are not liable. If they do something wrong, somebody else, normally a human, is going to take responsibility and accountability for it. So if there’s some responsibility connected to software, maybe the person in charge should have some level of control. But at the same time, there is also the shift towards faster iteration.

 

It could mean we are shipping software that is maybe business-valuable, maybe technically sound, but we might end up reducing our level of understanding of the models behind it. And this might not be something we realize in the short term. If the window is one release or a few releases, it looks like we’re going fast.

 

But if you don’t understand the dynamics of what’s going on, then you might progressively lose your ability to change the system — out of fear. The AI might be helping you make the underlying changes, but AI is also fearless. And if you are accountable for the changes, and the complexity of the code you wrote supported by AI has grown beyond a given threshold — you can go faster, but you might end up stopping, because “I don’t know anymore what might happen here.” Also because I built things faster without understanding the principles behind them. This deep understanding is strategically valuable.

 

Simone Cicero 

Alberto, a quick riff on this — and again, I don’t want to interrupt you, but this made me think about one of the elements that surfaced from the conversations we had before. You made a reference to something very important here: when you were talking about why we use modeling, you also mentioned politics, for example, and getting people to overcome the cultural and political problems in an organization. So I’m asking myself: how much of the modeling heritage that we are projecting into this new world is really axiomatic — something that is still relevant even in this new context? Or is it something we developed just as scaffolding to overcome human limitations that we don’t need anymore, because we no longer need to get 20 people to agree on something — we can just have 20 agents do what we want?

 

I don’t know. And so the problem shifts from alignment into: why the hell are we building this software? For what business reasons? What kinds of problems are we actually trying to solve? So it moves the focus away from getting these 20, 50 people to develop something that is aligned with what we want — and from solving the politics, the domains, and so on.

 

Or is this something we should really get over, because now we have new capabilities that do not require all this effort of solving human issues?

 

I know it’s a bit of a provocation.

 

Marco Heimeshoff 

I want to interject there. I feel the opposite. I mean, I know you want to be provocative here, but I think the opposite is the case. Taking care of the human side of things — making sure that we all align on what the actual core problem is that we’re trying to solve, how it should be solved, what it should be focused on, what we are optimizing for, what trade-offs we’re taking, solving the political issues and the conflicts — I mean, it was always the most important thing, but now we can actually spend more energy focused on that really important part.

 

Because we don’t have to argue and focus so much anymore on spaces versus tabs, function names, underscores or no underscores, camelCase or not — who cares, right? Those technical issues become less and less of a problem when you do align, yeah.

 

Simone Cicero 

Yeah, but the point I’m making, Marco — just to help you continue your thinking — is not that we do not have to agree, right? The point is that we may have to agree on the purpose and the business relevance, and then we don’t have to agree on the user experience, because maybe that is something an agent can sort out if the business problem is clear enough. And maybe we don’t actually have to develop actual software, because we don’t have to formalize it: LLMs can solve these business problems in a non-deterministic way, and it should be fine.

 

Marco Heimeshoff 

Yeah, no, I get where you’re coming from. Thanks for the help with thinking — nice formulation, by the way; a very pleasant way of saying, shut up, let me explain what I mean. So the problem of user experience stays your business problem, right? Your stakeholders want a good user experience, and if you don’t deliver a user experience that is plannable, or that does what they want, you will have a business problem. So you are responsible for delivering that. If the LLMs become good enough at always picking the best user experience, then the craft will change.

 

We will stop thinking about what would be the best one, because it can be generated by an LLM, sure. But still, somebody has to test it. Somebody has to say it’s still true and relevant, right? Because you, in the end, are the person — or the team — that says: this is what I want to give to my customers. If you hand those decisions to an LLM — basically the business decisions about what’s good enough and what’s not good enough for your customers — actually...

 

If you get a direct feedback loop from the customers to the LLM, let me think about that. I’m undecided right now. So for now, I would say the human needs to make that decision, but yeah.

 

Simone Cicero

Which is, I think, another topic worth discussing here — and again, Eugenio, if you want to jump in — one that Eugenio and I have been exploring: the evolution of software engineering and software development from a function in an organization, a capability we have as an organization, into a runtime process.

 

So Alberto, I know that you didn’t read the blog post from Jack Dorsey, but this is exactly what Dorsey is talking about in the article he introduced, called From Hierarchy to Intelligence. The point he’s making is that as soon as we have a model of the customer and a model of the company — in terms of the capabilities we have, the processes we can execute, and so on — we can hand it to the agents, because, he argues, the agents can evolve those models much more easily than humans.

 

So even modeling is in the hands of the agents, in Dorsey’s vision. Then building software is essentially a feedback loop from the customer into the agents, which leverage the capabilities, understand what’s needed, and, if something is needed, transform it into a roadmap for the software. That’s essentially the feedback loop Marco was thinking about. So that’s kind of resonant with what we said.

 

Alberto Brandolini 

There is one problem: you made it sound easy — “as soon as we have a model of the enterprise.” And I don’t think it is easy. We do have a model of our company; it’s one of the side effects of modeling our business flow with EventStorming. Because, well, I need to have a model of how my company works — and now I’m thinking like an entrepreneur, not like a modeler. I need to understand how my company is working. I know I have scouting about the context, where I’m setting priorities for picking my partners, and so on.

 

Then I have the co-design, and then we have catalog management and a lot of other capabilities. If this is modeled, well, maybe agents can help. But it looks like you hit something like a wall with more complicated enterprises.

 

They are doing so many things that nobody could ever model them — that was the assumption. And now we know it could be modeled.

 

But also, some organizations end up being so large that actually building a model — “why are you doing this? what is the purpose of this department?” — is not a good question to ask, because it could trigger layoffs. Maybe this department is irrelevant; it’s just a legacy of the past, and we don’t know exactly why this business functionality is still alive — maybe a tiny revenue stream, and so on.

 

There’s another thing: in those larger organizations, the political conflict is there. There is also lack of communication, misalignment, and so on. The expectations of the stakeholders are never aligned — it’s a pure illusion to think they are. And so we need this alignment.

 

Let’s put AI back in the conversation. What is AI assuming? AI is assuming that what we are typing, what we are asking, is true. Sometimes it’s just plausible, but many times it’s more like: no, why are you asking this? Why are you, the head of this department, asking me for this functionality? I respect my business stakeholders, but I don’t blindly trust them.

 

You’re telling me this because you want that. And we need to challenge this kind of information to understand exactly what’s needed. There’s also another effect, which is interesting. One thing that is clearly happening is that AI is changing the speed of the feedback loops. And software development as a capability inside the organization has grown basically around the concept of overload. I mean, there’s no team with an empty backlog. Every single team has a full backlog, which is also used as a fence: please don’t ask us stupid things, because we are busy. And then the organization plays its way around it — they hire an extra team so they don’t have to go through the backlog of the busy team, and a few other things: distortions inside the system, which are now going away.

 

For example, I was talking with a company that had a backlog of customer requests that had been rotting for more than two years. And they were picking them up first in, first out. What about this one? Well, the customer went bankrupt in the meantime, so it’s no longer relevant. Or you try to reopen a conversation. This part should go away, because things can be fast now. But at the same time, the backlog was an alibi: I’m just dropping this in a dead-letter queue; one day they are going to come back, and then maybe we’ll remember what I asked. Now, if the feedback loop is shorter, half of those requests are probably going to be detected as bullshit faster. I mean, that’s what I hope. What I wanted to say is that there was a whole set of distortions around the limited speed of the development team.

 

And now the pull is going in the other direction. And we still need to check whether the business requests are good enough to become software — not only in terms of business relevance, but also in terms of coherence. So yeah, that’s kind of where we are right now.

 

Eugenio Battaglia 

Yeah, so this is awesome and also frightening. I want to outline a few vectors along which I see the space developing, and what we’ll see in six months to a year, considering the translation of foundational research into commercial-grade adoption.

 

But I want to start by saying that we are really stepping into a messy, juicy, alive, and weird reality with these beings. And so our Western, Germanic way of thinking — you know, trying to control everything —

 

Yeah, it’s going to be really detrimental. And I resonate with what Alberto is saying — and I guess we all agree that mapping is needed, a necessary evil — but mapping too much is, and always has been, detrimental, without invoking Gregory Bateson or Borges or anybody else who talks about map and territory, right?

 

So you have this very specific map because you want to induce a neuro-symbolic harness, to turn a system that is stochastic into something semi-deterministic. Yes, you need to do that — I don’t have a better idea either. But at the same time, we should be very aware that a lot of this capability will be subsumed by the models themselves.

 

And so what we also need to develop, on the side, is our sensibility in talking to these systems, treating them as having increasingly high intelligence and personhood. With that respect, I think we’re going to get very good output. But if we go in with our micromanagement neurosis forward, I don’t think we’re going to have good results.

 

Having said that: yes, there is semantic drift in companies, especially across data, HR, and product people. Product people are vibe-coding shit; HR are obsessed about who to fire and how to manage the bleeding, whereas they should be thinking about how to better upskill and define the topology of the product people; and data people are completely underused. So these three areas of companies are essentially not talking to each other. They’re not sharing maps. And that’s going to make a lot of big companies bleed to death — of this I’m pretty sure.

Then there are companies able to leverage this ontological arbitrage, making ontologies the core of their shared understanding without over-specifying — still having this published language, while allowing autonomy in defining their own domains. Even Palantir makes heavy use of DDD. Palantir is very questionable, at least from my perspective, but from a technical, operational standpoint they’re brilliant: their approach to modularity and composability is heavily based on DDD principles.

 

And the idea is that once you share that very solid and very extensible ontology — one that encodes not only entities and relationships, but also the kinetic part, the actions these entities can perform — then you offload all the complexity at the application layer.

 

So UIs become disposable. I just want to say two more things, and then I’ll let you go on, Marco. They’re really designing for this agent-first UI. So the concept of accessibility that we developed for humans — for impaired humans — is now applied to how we design interfaces for agents to consume, right?

 

And on the other side, we have to consider that LLMs are going to be a sunsetting paradigm, in the sense that we are already seeing recursive language models, we are seeing diffusion models and the democratization of reinforcement learning, and we are seeing world models and neuro-symbolic systems becoming much more prominent in the research.

 

So — I hope this podcast will age well, but I’m pretty sure it won’t, just because the whole foundation is going to be fucking shattered in a few months. So yeah, sorry, go ahead.

 

Marco Heimeshoff

Self-delete in half a year, please.

 

Yeah. Can I jump in? Wonderful. So, two things. First of all, I think when you speak about ontologies and taxonomies here, we’re talking about the design space of domain-driven design, right? This is where you design solutions, where you agree upon language, where refinement comes in; this helps with implementing, and so on. Wonderful — even if it’s just implementing the mental model, clarifying and synchronizing. But this is not the discovery phase. And to paraphrase Alberto here very roughly — and I’m in full agreement with everything he said before:

 

The politics is the biggest problem. You can have the best model of what your company looks like, what your customer base looks like, what the user experience is; you can get feedback from everywhere; everybody can be fully aligned on how the machine-world interaction works. But humans are messy. Humans are emotional. Humans aren’t honest. And humans, even when they try to be honest, don’t know what they really need. You need psychologically functional, actual interactions with humans to extract those things.

 

And you need this iteratively and continuously. Collaborative modeling is something that I do not think will ever go away. There are very, very rare instances where I say, hey, this will never happen, or this must happen — everything like that has always been disproven; whatever people said would happen in five years, blah, blah. But as long as people are in the mix of complex system building, we need to talk to people. We need to interact with the humans, and we need to do it with the humans in our company when we’re driving the strategy.

 

So, no matter what amazing help the LLMs can provide in every part of modeling assistance, design thinking, or the design of our systems — including the implementation; I have no clue how the future will change this — the human conversation, in the end, between all the people affected by and affecting the system, is irreplaceable for producing the quality output that serves its purpose.

 

There’s a second point I want to make, about one of the biggest dangers. The way I see capitalism accelerating and technology accelerating, it might very well be that really good world models become close enough and successful enough that people stop caring about that quality, because the money still comes in and you’re selling well enough; then everybody’s just doing it, and it becomes the de facto standard. And now I need to say bad words that I don’t want to say: this could be bad for the planet and humanity as a whole. So.

 

Alberto Brandolini

Okay, well, you dropped the major bomb there, and yes, I feel this danger. I was going to say that I see an intermediate space. If your organization has coherence, this might simplify a lot of things. Think of a startup with a clear mission and a lot of people aligned towards the purpose — that’s kind of the perfect ecosystem to be AI-supported, because you can describe your goal and build your things. That’s exactly what we see with startups: the speed of writing software is faster, but you also have a clearer understanding of where you want to go.

 

Larger organizations don’t have this coherence. This is where you get the conflict, where you get the competing departments. And also, you’ve been hired by an organization that existed many, many years before you; you’re not building it. So the closer you are to the foundation, normally — ideally — the more coherence you should have: we have a mission, we have a North Star, we are following that plan.

 

Later, you just exist; profit becomes your coherence, and it becomes different. So one thing that might happen is that, depending on the type of organization, you might have a different probability and ability to catch the AI bus. AI could be a fantastic lever if you have coherence, and it could be just a waste of time — or something exposing your inconsistencies faster — if you are a giant, inconsistent, incoherent corporation.

 

But still, I agree with the sentiment from Marco. We are in an accelerated Darwinian selection. I don’t know if you are already running for the lifeboats, but yeah, there’s a feeling that something is going to happen, and eventually the weight is going to tip badly in one direction.

 

Simone Cicero 

We just have a few minutes, but I would really love to get one last bit from both of you, bringing the conversation back to something more practical: from your perspective, where are you now? Of course, this is in flux.

 

But what is your current experience? What are you actually doing at the moment in your context of work that you can suggest others do? What is your stack? What is your tool chain? Just five minutes, because we don’t have much time — but I think it’s good to give practical advice.

 

Alberto Brandolini 

Okay, well, because mine is shorter: I’m mostly with Claude Code. I’m using Gemini for other tasks — sometimes we do the adversarial, well, second-opinion thing. But let’s say the territory where I’ve been specializing a little bit more is still Claude Code and the different ways to build software. At the same time, I am playing with controlled fuzziness.

 

One of the things that is triggering me is that every single pet project or local project was built on the privilege of wearing many, many hats. I know exactly what I want; I’m running a project in a territory where I master the knowledge and the competence behind it. And I haven’t personally tried the other part — where I need to gather information from outside sources.

 

Where I’m just responsible for code production, without knowing the extra complexity. And there are a lot of things that might have to change there, because shortcuts are tempting — but we also know that they are stupid.

 

The actual practical territory that I like is injecting boundaries — I mean, a context map of the contexts inside our source code — providing safeguards that say: this term should not appear in here; this feature goes in this bubble and doesn’t get into that bubble. So I’m just trying to make sure I’m building a modular monolith. Right now there are five or six bounded contexts inside; it’s supposed to get up to 20. But the idea is that I’m building the compartmentalization so the model can grow without bloating, without exploding. And this is the part that is so far giving me good results.

 

Simone Cicero 

And how do you share it with the agents, more specifically?

 

Alberto Brandolini

I just provide a tree of different files. Every directory at a given layer of the source-code tree can be the root of a bounded context, and it has a bounded-context description.

 

This description has a business narrative, a description of the functionalities, a description of the integration points. Sometimes we also clarify which aggregates live inside the bubble, and there is a link to the architectural style that we document.
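[Editor’s sketch] A guardrail in the spirit of what Alberto describes can be written as a small fitness-function script that walks the monolith’s source tree. The file name `CONTEXT.md` and the `forbidden`-terms map are hypothetical conventions for illustration, not his actual setup.

```python
# Hypothetical fitness-function sketch for "injected boundaries": every
# bounded-context root must carry a description file, and terms owned by one
# context must not leak into another. Names here are illustrative only.
from pathlib import Path


def check_contexts(src: Path, forbidden: dict[str, set[str]]) -> list[str]:
    """Return a list of boundary violations found under `src`."""
    violations = []
    for ctx in sorted(p for p in src.iterdir() if p.is_dir()):
        # Each bounded-context root should describe itself.
        if not (ctx / "CONTEXT.md").exists():
            violations.append(f"{ctx.name}: missing CONTEXT.md")
        # Terms from other bubbles should not appear in this bubble's code.
        for source_file in ctx.rglob("*.py"):
            text = source_file.read_text()
            for term in forbidden.get(ctx.name, set()):
                if term in text:
                    violations.append(
                        f"{source_file.name}: '{term}' does not belong in {ctx.name}"
                    )
    return violations
```

A check like this can run in CI, so the agent’s output is rejected whenever a feature lands in the wrong bubble — the machine-readable counterpart of the context descriptions themselves.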

 

Simone Cicero

So you basically try to align the software artifacts with the context model.

 

Alberto Brandolini 

Yeah, my scenario is a modular monolith, so without protection the code would mix. Maybe if we had gone full microservices from the beginning, the artifact boundary would already have been good enough for the machine, and there would be no need for this sophistication. But I wanted to see if we could compartmentalize in a place where things would naturally mix.

 

Simone Cicero 

Okay. Thank you so much. Marco?

 

Marco Heimeshoff

Awesome. Yeah, I don’t think that mine is actually longer. Of course, everything is in flux, but right at the moment it’s Claude Code for everything coding. Gemini is used for generating any kind of graphics when I need them quickly, usually via the CLI. For the domain memory and the technical memory, I’m using Obsidian — instead of building a RAG system using some kind of vector database, which I’ve tried in the past and which was way too much hassle.

 

Using Obsidian as a memory for Claude is working quite well. That’s also where the separation comes in: I have different vaults for the workflow — the way I use Claude — and for the domain knowledge. Those are separate vaults, so that the respective agents, the respective harnesses, can use the respective knowledge base they need.

 

For the front end — sometimes I want to have a front end, even though everything is becoming more of a CLI right now — I used Stitch until last week. Then I saw that Claude Design is really, really moving the goalposts there, but it’s really expensive; right now the token limitation is hard. So it’s still a bit of a mix of Claude Design and Stitch. But the hardest part to express is not what I’m using, but what I’m building: the harness around it.

 

So I think the model — be it Claude Opus or even ChatGPT — doesn’t really matter. If the harness works well enough, the model becomes interchangeable, right? So building these harnesses, and exchanging with other people from the community, is something I usually do: read other people’s skills, agents, workflows, tool usages; figure out how they solve problems. And this is becoming a more and more stressful job.

 

Understanding what is getting created — trying to read and understand what millions of people are doing nowadays — becomes too much. So this is where I built the research agents. I have a Claude repository that is not for building code, but for building my understanding of AI. And it runs on routines — Claude has routines now. It gathers stuff once a day, brings it all together, links articles. It understands what has been posted, compares it to my Obsidian repository of what I already understood, and when the significance is high enough, it lets me know that new stuff came out. This helps me to funnel a little bit, right?
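[Editor’s sketch] The “surface it only when the significance is high enough” step could look, in caricature, like the word-overlap heuristic below. This is purely illustrative — Marco’s real system presumably uses Claude itself (or embeddings) to judge novelty against the vault, not a word count.

```python
# Illustrative novelty filter: an incoming item is "significant" when it
# overlaps little with what the note vault already contains. A real setup
# would use embeddings or an LLM judgement instead of raw word overlap.
def is_significant(item: str, vault_notes: list[str], threshold: float = 0.5) -> bool:
    item_words = set(item.lower().split())
    if not item_words:
        return False
    # Highest fraction of the item's words already covered by some note.
    best_overlap = max(
        (
            len(item_words & set(note.lower().split())) / len(item_words)
            for note in vault_notes
        ),
        default=0.0,
    )
    return best_overlap < threshold  # little overlap -> new to the vault
```

The design point is the funnel itself: the daily routine only interrupts the human when the filter fires, whatever mechanism actually implements it.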

 

So this is also something you can build that helps you manage this complex and ever-changing world. But yeah — it’s basically: use Claude in a smart way, but the point is that the tool becomes the exchangeable part. What you do with it — having a research-agent system, a digital assistant that helps you understand everything being produced in the world, getting an overview, and then making sense of it to build the harnesses that work for you — that’s what I optimize for.

 

Simone Cicero

Is any of this open source, so that people can look into it?

 

Marco Heimeshoff 

The Obsidian system, not yet from me — I could open source it in a week or two. But Boris Journey is already pushing, or publishing, stuff about that. In my GitHub repository you’ll find a tool called Cinemarcos, and also Media Theca — these are personal tools I’ve been writing, and you’ll find my entire agent system in there. At least the version from three or four weeks ago, which is completely out of sync with what I’m doing at the moment; but that will be pushed with the next tool in the next few weeks.

 

Simone Cicero

Right, right, right. Eugenio, I think we don’t have time to explain where we are at the moment, and it’s maybe not worth sharing ours right after these two more professional takes. We are more in the exploration phase, I think.

 

But yeah, we are doing some incredible work at Boundaryless that we are also going to open source quite soon. So you can expect that all the learnings we are making, we’re going to leave for the community to engage with.

 

I mean, thank you so much. It was a very interesting conversation, very engaging — I feel like I have to listen to it again to extract some of the insights. I hope you guys also enjoyed the conversation. Alberto, Marco — was it interesting for you?

 

Alberto Brandolini

Yep.

 

Marco Heimeshoff

Yeah, a lot.

 

Simone Cicero

Thank you so much for your time. And Eugenio, thank you for chiming in as well — I know you are super busy these days, also because of the work we are doing, so I appreciate your time. To our listeners: of course, you can head to boundaryless.io/resources/podcast, where you can find this whole conversation with transcript, and all the tools and things that Alberto and Marco have shared today.

 

We don’t have time for the breadcrumbs today, so you will have to just sit with this incredible conversation. And of course, until we speak again, remember to think Boundaryless.