Human Value as the North Star: Regulating pervasive platforms — with Marshall Van Alstyne and Geoffrey Parker

BOUNDARYLESS CONVERSATIONS PODCAST — SEASON 1 EP #16


Marshall Van Alstyne and Geoffrey Parker talk about what democratising access to data means for the ability of players in a platform ecosystem to innovate and grow, and how regulation should be conceived to maximise welfare while reducing the risk of negative spillovers off platforms. With creating human value as the North Star, they ponder the idea that we need a Magna Carta of citizens' rights for how we should be able to operate on and influence powerful platforms.

Podcast Notes

In this episode we have two leading platform thinkers on the show: professors Marshall Van Alstyne and Geoffrey Parker.

Marshall Van Alstyne is the Questrom Chair Professor at Boston University. His work has been honoured with numerous best paper awards and National Science Foundation grants, and has been featured in Science, Nature, Wired, the New York Times, the Wall Street Journal and on NPR.

Geoffrey Parker is a professor of engineering at the Thayer School of Engineering at Dartmouth College, where he also serves as director of the Master of Engineering Management Program.

Together with Sangeet Paul Choudary, they are the authors of Platform Revolution: How Networked Markets Are Transforming the Economy — and How to Make Them Work for You (2016). As originators of the concept of the inverted firm, they were joint winners of the Thinkers50 2019 Digital Thinking Award. Both are visiting scholars at the MIT Initiative on the Digital Economy and co-chair the annual MIT Platform Summit (see references below).

In this conversation, we talk about what democratising access to data means for the ability of players in a platform-ecosystem context to innovate, and how regulation should be conceived in a participatory and ex ante way, with useful interventions designed to minimise potentially harmful externalities that take place outside of otherwise internally self-regulated platforms. With creating human value as the North Star, Marshall and Geoffrey ponder whether we might want to see the creation of a Magna Carta of citizens' rights for how we should be able to operate on and influence powerful platforms.

Here are some important links from the conversation.

Find out more about Marshall and Geoffrey’s work:

– Marshall Van Alstyne: https://scholar.google.com/citations?user=zMqwIkIAAAAJ&hl=en&oi=ao

– Geoffrey Parker: https://scholar.google.com/citations?hl=en&user=He__Th0AAAAJ

Further reads sent over by Marshall and Geoffrey:

Other mentions and references:

  • Simon Wardley on the Innovate-Leverage-Componentize (ILC) cycle.

Part I: https://blog.gardeviance.org/2014/03/understanding-ecosystems-part-i-of-ii.html;

Part II: https://blog.gardeviance.org/2015/08/on-platforms-and-ecosystems.html

Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast/

Thanks to Liosound / Walter Mobilio for the ad-hoc music. Find his portfolio here: www.platformdesigntoolkit.com/music

Recorded on June 10th 2020

Key Insights

1. Democratised access to data is a key driver of innovation. It creates interesting and hard questions for regulators and scholars to solve, including whether access to data is being used to prevent competition, how it can be used to promote competition and innovation, and how to maximise consumer welfare. The focus on data ownership may also give way to a new focus on extracting meaning from data (datapoiesis).

2. Forces of fragmentation and aggregation provide interesting balancing properties across forms of organisation. On the one hand, you get the benefits of specialisation, so the first force fragments the market. The second force integrates the market as you go from one process to another, integrating the learnings from interactions across the ecosystem.

3. When platforms cause damaging spillovers and harmful externalities off the otherwise self-regulated platform environment, we need to start looking at ways to correct the behaviour of the platform. We need new regulations able to: i. restore some of the bargaining power of collectives; ii. introduce fairness in the distribution of value; iii. get platforms to internalise the negative externalities, like fake news, that occur off platform.

Boundaryless Conversations Podcast is about exploring the future of large-scale organising by leveraging technology, network effects and shaping narratives. We explore how platforms can help us play with a world in turmoil, change, and transformation: a world that is at the same time more interconnected and interdependent than ever, but also more conflictual and rivalrous.

This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, Soundcloud, Stitcher, CastBox, RadioPublic, and other major podcasting platforms.

Transcript

This episode is hosted by Boundaryless Conversations Podcast host Simone Cicero with co-host Stina Heikkila.

The following is a semi-automatically generated transcript which has not been thoroughly revised by the podcast host or by the guest. Please check with us before using any quotations from this transcript. Thank you.

Simone Cicero:
Hello, everybody. Today we are here with two very special guests, Geoffrey Parker and Marshall Van Alstyne. I probably pronounced it in the wrong way. But guys, are you happy to be with us today?

Geoffrey Parker:
Yes, very much so. Good to join you, Simone and Stina.

Marshall Van Alstyne:
It’s a pleasure to be here, these are some of my favourite topics, and I’m looking forward to exploring them with you.

Simone Cicero:
That’s great. And I’m also here with Stina, who is my usual co-host on this podcast.

Stina Heikkila:
Hi, everyone. Really nice to be here.

Simone Cicero:
So we’re very excited to talk with you guys. Because, first of all, everybody is a big fan of your book, your platform book, which has pretty much set the stage of the conversation. We are also very excited about your latest work on policy and regulation, and that is where we want to start the conversation. It will be about understanding how traditional tools like regulation and policymaking can still work in a world that is seeing such structural changes in the nature of the firm and of markets. So maybe we can start with that.

Marshall Van Alstyne:
So it’s a great topic. One of the things that Geoffrey and I have been thinking about is the extent to which the economy today is structurally different than before, in parallel to the structural differences we saw a century ago. A century ago, we were getting the rise of gigantic monopolies in steel, in automotive, in oil, in banking. Most of those, and energy, were all driven by supply side economies of scale, with high fixed costs and low marginal costs. Today, so many of these differences are driven by network effects, or demand side economies of scale. In this case, users create value for one another, which attracts users, which creates value, which attracts users. So you’re getting gigantic firms, but again, it’s on the demand side rather than the supply side. So we’re getting very different economic models of production, and so the regulation also is going to have to be different.

Geoffrey Parker:
Let me just dovetail on that: Marshall is absolutely right. This kind of worldwide change in industrial structure, I think, is part of what really drew us to these topics. Interestingly, the old economics of supply side effects have of course not gone away, and many of the firms that are sort of the new giants benefit from both the demand side network effects and the supply side. As Marshall said, they’re now fully drawing the scrutiny of regulatory authorities, who, I think, correctly question whether the current regulatory frameworks are really sufficient to even analyse, let alone help shape, the actions of these firms toward what you might think of more as social good.

Marshall Van Alstyne:
So let’s introduce why this is so important. You know, Geoff and I have been developing this concept of the inverted firm. It’s actually pretty simple once you see it. The idea is that if firms are dependent on network effects, you can’t scale them inside the firm as easily as outside the firm; there are simply more people outside the firm. What this means is that value-creating activities move from inside to outside, so you no longer have vertical integration, you are kind of inverting the firm, where users are creating value for one another outside the firm. If you look, for example, at the market value of Facebook relative to Disney, it’s a factor of 10 in terms of market cap per employee, or if you look at The New York Times versus Twitter, or at Uber versus something like BMW: in each instance, it’s really interesting, users outside are creating the value. But what this means for regulation is that interventions, or the standard market tests, just don’t necessarily work. If you look, for example, at antitrust tests of predation: are you doing below-marginal-cost pricing? Well, of course, in these businesses you’re getting zero-cost pricing, so that test fails. Another question is: what’s the boundary of the market? You’ve got Google in search and self-driving cars and glasses and mapping; you’ve got Amazon in books and cloud services and groceries and healthcare. What’s the boundary of the marketplace? So that’s another difficult test. For any of the standard mechanisms for testing whether a market is dominated, you have to define the boundaries, and you also have to define the harms. But users are getting unbelievable deals in these things. Amazon’s not restricting your purchases, Facebook’s not restricting your posts; you’re not getting the restrictions in output that you normally associate with a supply side economies of scale antitrust market. So it’s a very different framework.

Geoffrey Parker:
That’s exactly right. And that’s one of the reasons why you’re seeing, explosion is perhaps too strong a word, but just a tremendous growth of interest in the governance and regulation of these giant technology firms that are harnessing network effects, because, as Marshall said, these traditional tests don’t seem to apply and don’t seem to be useful. In particular, if you look at some of the remedies, a lot of them operate on an ex post level, where they say, “well, if you’ve done a thing, then we’ll levy some kind of a fine”, and you see that quite frequently in the EU. But even though on the face of it a couple billion euros seems like a large number, to some of these giant firms with trillion dollar market caps it’s really just a speed bump. So there should be more of an emphasis, I think, on analysing ex ante what would be permissible in terms of mergers and acquisitions and various behaviours, and trying to set those frameworks up so that people know the rules of the game.

Simone Cicero:
I have a question on that. It is about understanding whether there is a radical difference in how we need to address certain types of platforms versus others. For example, everybody has been talking for ages about the GAFAs, the big firms, and it looks like markets are moving into a new age of marketplaces, the age of managed marketplaces, vertical marketplaces to some extent. And of course there is a danger of these big players entering this new market, given that, you know, some 80% of consumer spending is not yet organised this way. But some analysts have also pointed out that it’s not going to be easy for them to just jump into any other small market: they are still agile firms, they are still platforms, but they are big. Do you really see two different approaches to regulation, one more optimised for certain types of markets or platforms, and another for more vertical, maybe locally oriented, more contextual marketplaces and platforms?

Marshall Van Alstyne:
So it’s an interesting question, Simone. Actually, you can imagine a set of regulations that apply to supply side economies of scale firms, and you could possibly imagine a set of regulations that apply to demand side economies of scale firms. One of the challenges, of course, is that many of these firms may operate as both. To give you an example, Amazon clearly gets some supply side economies of scale in the production of cloud services, or Google and Microsoft get supply side economies of scale in the production of operating systems: huge fixed cost, very low marginal cost. So my suspicion, and you know, feel free to argue the point, but my suspicion is that we may need some overarching frameworks that allow you to tune the knobs based on how much a firm is predominantly a supply side or a demand side economies of scale style firm. The interventions might be somewhat different based on which of these forces, though both present, is more heavily emphasised in a specific market. So my suspicion is that we will need tools for both, but we’ll also need to handle cases where firms actually use both mechanisms.

Geoffrey Parker:
Let me kind of dovetail on that and go to the question you asked about these giant, sort of generalist technology firms. The GAFAs are of course representative, but there are others; China has had a similar explosion of these firms: Tencent and Alibaba. And then you raised this notion that now we’re seeing markets pop up in more B2B contexts, in areas where you would expect vertical specialisation to really matter. I think that’s exactly right. And the entry point for the technology giants, I think, is going to be in harnessing the data that’s generated by the processes, both on the customer side but also on the industrial production side. A lot of the entry points could easily come from incumbents partnering with technology giants, especially for access to machine learning and artificial intelligence capabilities. And the danger is that, in that partnering, you actually start to teach these generalist firms more about the deep vertical. So I think it’s around the data layer that an awful lot of the regulatory scrutiny should come, so that we can think about whether data access is being used to prevent competition and how access can be used to promote competition. You’ll see some of that, I think, in a lot of scholars’ work, and certainly we’re focusing on that as well.

Marshall Van Alstyne:
Think of what Geoff just said: there are some really interesting examples in there. If we go back to the vertical integration story, in some ways it’s a story as old as Adam Smith, with specialisation in the manufacture of pins: you’d actually increase the production of pins with one person specialising in drawing out the wire, another polishing the head, another hammering it. Specialisation might be an example where you get some of the traditional kind of vertical specialisation, but at the same time you have to ask this other question: to what extent are there data spillovers? Is it possible to learn from what you’re doing in groceries about what you might be interested in purchasing in the e-commerce space? That’s the demand side economies of scale data spillover; that’s the other side of the equation. So we’ve got these two forces: one where you get some benefit of the specialisation, and one where you get some benefit of the learning from one interaction to another that the firm can harness. The first force fragments the market, but the second force integrates the market as you go from one process to another. So again, you’ve got these really interesting balancing properties across the forms of organisation.

Simone Cicero:
Sometimes we like to refer to Simon Wardley’s work, where he explains this dynamic that he calls the ILC, the Innovate-Leverage-Componentize cycle. Especially when he speaks about platforms, he makes it clear that ecosystems, as he defines them, are “future sensing engines”. Essentially, according to his approach, platforms have this capability to capture data, as Geoff was saying, and to understand where innovation is going. So to our understanding, platforms are tools designed to institutionalise innovation more than to actually produce innovation. Innovation comes from the ecosystem: the ecosystem produces the data by consuming all these elements and models or products that the platforms have been creating. And then it’s the responsibility of the platform to institutionalise these innovations and push the ecosystem to innovate faster, at upper levels of the value chain. If one thinks this through, an example could be Amazon doing Amazon Basics: capturing needs, institutionalising the products into more componentized elements, and then pushing the ecosystem to innovate new products. And of course you can draw parallels with purely digital services. My question is: who is concerned about this diffusion happening faster? When we think about regulating data, for example, what does it mean in terms of stifling or generating more innovation if we impose a regulation that makes data and information available? Does it really make sense, or is it really necessary, to make data more transparent, when the capability to make sense of data probably still resides more inside the platform than in society at large, just through access to data?

Geoffrey Parker:
Let me take the first crack and then I’m sure Marshall will add on. I think that the tools to work with larger and larger data sets are largely getting democratised, and by that I mean they’re able to be used by individuals and organisations. What’s interesting is that often those tools are provided by the very kind of technology giants that people are concerned about: think Microsoft Azure, Amazon Web Services, or the Chinese analogues. The question you ask then arises when you enter partnerships and end up in data sharing, because if you just use the infrastructure as a service or the compute capabilities, you actually still have full control, and you can end up with fragmented data. But the issue is, a lot of data is more valuable when it’s aggregated. And because people don’t really know the value of their data, there’s this sort of desire to hoard. We don’t have really great ways of doing valuation of the data, and furthermore, we don’t have really great ways of asserting ownership rights. What that ends up doing is creating forces toward fragmentation, even when there would be benefits to aggregation, and that ends up creating less welfare, less value than you could. And that’s, I think, Simone, what you were getting at. The platforms themselves tend to be able to do the data aggregation, especially on the B2C side, where I think people are less sensitive and perhaps less knowledgeable about the value of the information they’re providing. On the industrial side, I think firms are much more sensitive, and so you’ve seen a slower use and capture of information, and less information sharing, than otherwise might have happened. So a lot of the innovation on the economic side really has to come from: how do you do data valuation? How do you do data protection? How do you watermark it? How do you create markets in it? Once that happens, you can start to see more of the benefits of the aggregation.
And then, of course, in parallel there has to be some regulatory innovation that helps make that possible.

Marshall Van Alstyne:
So Geoffrey had a number of great ideas, and I think we can answer three or four of the questions that Simone had and circle back to this idea of data markets. One thing here is to think about the effects on different parties in the ecosystem. One of the things you highlight in terms of this institutionalised innovation on platforms is that it does have the effect of putting the ecosystem partners on an innovation treadmill. You might be selling a product on the marketplace, but then the platform invites competitors, and in order to keep sustaining your margins you’re put in a position where you have to innovate, and the platform keeps inviting competition. So it’s really hard on some of the competitors, and it can squeeze them. It can also create some unfair advantage if the platform biases sales toward anything it finds has high margins and seeks to sell itself. So we need to introduce fair and just regulation to prevent self-dealing, or self-biasing, and some of those things. A second question is: what about access to that data? Do we encourage permissionless innovation or do we encourage permissioned innovation? Part of that depends on what we’re talking about. In those marketplaces where there’s low risk, low danger of creating something harmful, you want as much innovation as possible. So you want innovation in apps, innovation in software services, or in basic product design. In other cases, experimentation can be troublesome or problematic. You wouldn’t necessarily want experimentation on top of open nuclear power plants, or open APIs to pacemakers, because you could actually hurt people, you could kill people; there could be terrorists actively trying to break in and do harm with some of those things.
Then, coming back to the idea of markets: if you want to create healthy innovation, using the data to package, to learn, and to create new kinds of goods, you need multiple parties to have access to that data. At the moment, the platforms tend to want to lock up that data, to keep it as a trade secret and not open it to third parties as much. I think the social good would be served a lot if, in addition to democratised tools, there were in some sense democratised access to the data. A simple current example is contact tracing for COVID-19. There’s enormous benefit if the tracing can happen across platforms, as opposed to just within a platform, and we’ve seen a tiny bit of collaboration start to take place between Apple and Google. That kind of thing could happen all the time. If there are information spillover benefits to society, there ought to be ways of granting broader access to the knowledge, in exactly the same way we grant access to patents after an extended period of time. In a patent ecosystem, you reward folks for the innovations they create with a short-term exclusionary period, and then afterwards you want others to have access to that knowledge as well. So I think it would be great, to solve the antitrust problems, to foster broader innovation, and indeed to create checks on the behaviour of the platform firms, if we were to grant access more broadly to some of that data. With some of these regulatory regimes that try to compartmentalise it, fragment it, and keep it so private, it’s hard to create some of the innovations. The real question is: how do you create the wealth, and then how do you divide it fairly? That’s where I think we need to go in the design of these ecosystems.

Simone Cicero:
That’s really interesting. I’m going to hand this over to Stina because I know she has a deep question that I want to introduce. You’re talking about democratisation, you’re talking about fairness, so it looks like there is a fairly political dimension to regulation at the moment. When I think about these dynamics of innovation, like forcing the ecosystem to innovate, that may not be a positive thing, especially when we speak about consumer innovations, which are happening in a fairly asymmetric context in terms of power structures. So I’ll hand it over to Stina, but I think this is getting political to some extent.

Stina Heikkila:
What I was interested in when listening to this is that we talked about regulation as something that needs to come from some sort of central government or international governance body. And you have this enormous power that is accruing to some of these platforms. To what extent can we expect corporate leaders to become important figures in this political regulation space? To what extent do they need to be partners of government? And the third aspect that I’m interested in is: to what extent can we expect people, users, to have democratised access to the data and actually know what to do with it and how to manage it, with this kind of wider knowledge about the impact of your interactions online and so on? That could be a kind of regulation coming from responsibilised users, in a sense. So I’m really curious to hear what your thoughts are on that.

Geoffrey Parker:
So, great questions, Stina. I’m going to take the last one, because I think it’s really interesting, and then it will help inform the first few, and I’m sure Marshall will have plenty to add on that. The question you asked about how users can manage their data and then participate in the governance and regulation of these technical systems is a fascinating one. The answer, I think, has to go back to the ownership rights in their data. I think it’s improbable that individuals will be able to understand the technical issues, but more importantly, I think the individual value of their data isn’t large enough for them to incur the effort of doing bilateral negotiations with each and every system. But what that allows for is the entry of intermediaries, entities that can come in and bargain on behalf of individuals for a share of the value. If you can see those arise, that would allow for a different division of the value created, and then perhaps more of it could end up being captured by the individuals. Because, as Simone said, this is getting political to some extent, and I think that’s true: in addition to the value that gets shared with individuals through free goods and services, you’ve also seen a tremendous aggregation of wealth, which you can see in the market caps. And that raises some interesting questions in terms of concentration that can be destabilising. So if we can go back and allow for a different negotiation over value capture, I think that’s at least one potential path forward.

Marshall Van Alstyne:
Wonderful questions, Stina. Let’s tackle this at several different layers. First, let’s return to the idea we brought up initially: that we’re seeing the rise of gigantic monopolies today like we saw a century ago in other industries. And it is interesting that it took time for regulation to catch up then, just as it’s taking time for regulation to catch up now. We are seeing exploitation of labour, we are seeing immense concentration of wealth. These are things that we’re going to have to deal with through some new regulation. But let’s also figure out where we can intervene usefully and show where we can actually be helpful. Let’s start with our definition of a platform, which we define as an open architecture with rules of governance to facilitate interactions. The open architecture is what lets third parties in to help create value. The governance model is what encourages people to join and to create value. And then the interactions are what actually create value, whether you’re matching someone to a ride, a tweet, a post, you know, a stay, what have you. It’s those interactions that matter. Now, who should be doing the regulation? If there is a market failure, if something bad happens on the platform, platforms typically tend to correct it. So, as bad as it is, Facebook has actually protected users against spam from some of the developers. Or if an Uber driver takes a passenger in the wrong direction, then Uber will try to make them good, or Airbnb will try to protect the homeowners. If the problem is something bad that happens on the platform, then it tends to be self-correcting, because the platform has an interest in healthy interactions and it’s got the data. Now contrast that with when the problem occurs off platform. Contrast that with the monopoly power of the platform and the exploitation of labour, or contrast that with the fake news that affects elections, which occurs off platform. That is a real problem.
That’s when government intervention is absolutely warranted. We need to start looking at ways to correct the behaviour of a platform when it’s causing these damaging spillovers off platform. There, we absolutely need new regulations to create some of the kinds of organisations that Geoffrey was just talking about: to restore some of the bargaining power of collectives and the like, to introduce fairness in how the value created gets divided more generally, or to get platforms to internalise the negative damage of things like the fake news that is occurring off platform. That’s when extra regulation is absolutely warranted, and we need to move in the direction of a healthier ecosystem with better rewards divided across society.

Simone Cicero:
That’s very interesting, because I think what you’re talking about points towards a process of regulation that is itself a platform. Let me explain a bit. The users need to have a role in this process. So it looks like the process is less about a third party like a government regulating platforms, and much more about some kind of participatory process for regulating, one that is not just about data ownership. Because with ownership alone you don’t do much. Previously, when we interviewed Indy Johar here on the podcast, he was talking about ownership in general, not just of data. And he said that, too often, in the world we live in, ownership is a thesis of slavery: because at the end of the day, if you own a piece of land, to some extent you are obliged to exploit that piece of land, and then you’re not going to do any stewardship of that piece of land.

Marshall Van Alstyne:
Let me jump in for one second, I love what you’re saying. To put it one way, I think at the moment you and I are serfs in the kingdoms of Page, Bezos and Zuckerberg. So far they get to make the rules, and so far you and I have not had a vote in the design of the ecosystems. One of the things we would like to do moving forward is, in some sense, to design a Magna Carta of citizens’ rights for how we should be able to operate on and influence the platforms. Sorry to interject too much, but that’s just such an interesting topic. You know, this is absolutely something that needs to be done.

Simone Cicero:
Of course. I love this idea of a Magna Carta because it’s not about the obligations; it’s about the possibilities, to some extent. It’s still about rights, but it’s also about your right to build, your right to create, to some extent. And the question is really about how we move from what we could call a data ownership thesis to what we could call a “datapoiesis” thesis. Another friend of ours, Salvatore Iaconesi, is working on this idea of datapoiesis processes. So the question that I want to bring to the table is: with all the data that we have access to, how can we bring society into a poiesis process, a process of making meaning of data? How can we collectively build this regulatory process that is based on data, but also on what we do with data? Not just which decisions we take, but also how we enterprise with data. For example, you have this idea that platforms shouldn’t be built in a separate context and then proceed to the market without any voice available for people. I think it’s not just about the voice of the people; it’s about how the people create value through those platforms. So it’s really about what possibilities, but also what kind of possibilities and responsibilities, are asked of the participants, of the citizens, in terms of sense-making, in terms of taking action and enterprising, in creating something new through those platforms. What do you think about that?

Marshall Van Alstyne:
Okay, that is an incredibly rich and wonderful question. In some ways I couldn’t agree more; that’s actually one of the questions we really need to be asking. So let me throw out several different kinds of ideas here. One of the problems, I’m going to propose, is that we have traditionally used too much of a property model. That model is born out of ownership of physical goods, as distinct from, perhaps, an entitlement to the value of the assets created from information goods. You can give me an idea, I can use it, but you still have it. And then perhaps the best thing that could happen is if I were to give you a share of the innovation value that I created from your idea. One way to think about this in legal terms is the difference between a liability regime and a property regime. In a property regime you have to negotiate with the owner of the property in each instance where you want to gain access to it. That creates an enormous amount of economic friction, because in many cases the value of the data is really small, or the negotiation costs are large relative to the incremental value of a single search or a single purchase. So under a property regime you could actually shut down the negotiation, the recombination or the analysis of that data. Under a liability regime, by contrast, you might be free to use data, provided user identities were protected and you had a fair process to allocate rewards. Imagine a case where you are free to use data, but you are responsible for using it in a fair and just way, and a fraction of the value has to go back to the sources of that value. Under a liability regime you might be free to do that without some of the negotiation costs, and you would be responsible for the gains and the losses.
There’s even an economic formula, the Shapley value, which might possibly be used to determine the fair reward under such a system, and which might allow multiple parties to create value on top of data as opposed to simply having to negotiate for each and every access. I think that’s one of the ways it might help solve this problem. Which, I fundamentally agree, is: how do you create value, what regime, what governance model is going to help you create the most value, and then how do you figure out who should be the beneficiaries of that value as part of the governance and allocation model? Both of those things are essential, the creation and the allocation.
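[Editor’s note] The Shapley allocation Marshall mentions can be computed by brute force for small coalitions: each party’s reward is its average marginal contribution over all orders in which parties could join. Here is a minimal sketch in Python; the value function `v` and the data sources A, B and C are made up for illustration (real allocations over many contributors would need the sampling approximations used in practice):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Average marginal contribution of each player over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            # Marginal contribution of p when joining in this order
            totals[p] += value(coalition) - before
    n_orders = factorial(len(players))
    return {p: t / n_orders for p, t in totals.items()}

# Toy example: a data product worth 100 only when both
# sources A and B contribute; source C adds nothing here.
def v(coalition):
    return 100.0 if {"A", "B"} <= coalition else 0.0

rewards = shapley_values(["A", "B", "C"], v)
print(rewards)  # {'A': 50.0, 'B': 50.0, 'C': 0.0}
```

The allocation is “fair” in the sense Marshall and Geoffrey describe: the two indispensable sources split the value equally, the non-contributor gets nothing, and the shares always sum to the full value created by the grand coalition.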

Geoffrey Parker:
I’m going to draw on what Marshall said, which I think is wonderful: the notion of trying to reduce the transaction costs of negotiating in this property model. But at some level you do have a right to the value that is created, and we’ve actually done a fair bit of modelling around this. There’s been a stream of research on downstream innovation. The idea is that if people build on your innovation and create something that’s also valuable, then part of the value created should go back up the chain to the original owner of the idea, or in this case of the data itself. That’s kind of at the heart of the Shapley value calculation. But I think it also helps explain something that came up earlier about why platforms are so good at organising and directing innovation: partly it’s because they can capture value from the ecosystem and then make it more widely available for others to build on top of. An important part of the regulatory structure is to identify where that happens and then encourage it, but also to make sure that we set up structures where there’s a fair division. Because, as we’ve seen in the economic modelling, to the extent that there is a reward to the original innovator or to the data owner, they’ll be more willing to share, and then everyone ends up benefiting from that.

Marshall Van Alstyne:
Let me give you two resources that support the things Geoff just said. For those who are interested, it’s a bit technical, but we developed a really nice formal model of recursive innovation, where people get to build on each other’s ideas, in a paper called “Innovation, Openness, and Platform Control”. It shows how good governance can actually increase the rate of innovation, and actually increase the openness of the system, if it’s done in such a way that parties can build upon each other’s ideas. Another possibility is a paper on the social efficiency of fairness: if you treat people fairly, you can get higher rates of innovation because they’re more willing to share ideas in the first place. So we need regulatory frameworks that are going to ensure that level of fairness, and then folks are happier to contribute. These are a couple of economic models we built to try to demonstrate some of these ideas.

Simone Cicero:
I think we are getting to some clear idea of the question we have on the table. From what I get from your comments, there is a direction we should go in terms of platforms, openness and, let’s say, fairness: it’s really about platforms being open for a fair distribution of the value that is created. It’s probably going to be more and more about profit share, direction and the possibility to create. Take, for example, the experience that we’re having with the Haier group. Haier group is a company of more or less 80,000 people, and they have an organisational model based on this idea of micro-enterprises. The new things that get developed in the organisation always embed a certain amount of entrepreneurial risk and entrepreneurial investment from the employees. It’s entrepreneurship as a clear way to define the new value that gets created and to distribute it: a new company is created on top of the existing one, and the existing one invests into the company that gets created. This is a fairly clear way to distribute value in terms of entrepreneurial creation. So, to connect with the question that I know Stina wants to ask: who defines the value that gets created? Because we could expect the regulation to define the value, but I’m not sure that is going to be the right way. Stina, do you want to add your reflection on this?

Stina Heikkila:
It made me think about when you were talking about this early stake in innovation. Something that I read, and I would have to find the reference, from Mariana Mazzucato, about how government is usually quite bad at claiming those stakes, and that this suggests regulatory choices that are not necessarily about taxing to capture value created by businesses, but rather about having a clear stake in early innovation. I think she mentioned that Google, for instance, was initially sponsored by the government. Vaccines are another area where it’s usually governments who come in with an initial investment but are very poor at capturing the returns. So that was a reflection I had while you were talking, coming back to that idea of the relationship between governments and platforms, and whether there could be a partnership model envisioned in that space.

Marshall Van Alstyne:
So that’s an interesting issue. I think Geoffrey and I are both huge fans of government-sponsored deep research with long-term payoffs, whether it’s fundamental research on genes or mathematics, some of those things. Those are investments that society can really gain from; governments are really good, better than corporations, at investing for long-term knowledge creation. But you raise an issue: who gets to define the value once it’s created? In some sense, markets are probably better at determining that value, because there’s always an information asymmetry problem. Individual or private preferences, and this goes back to the pioneering work of von Hayek in economics, are often not known to a central authority. So it’s the trading of ideas, or the trading of assets, that helps to define the external value in a more accurate way, even in contexts where there are possible externalities, as there are in information and data. That brings us to the other set of ideas: if you can get the parties on either side of a transaction to negotiate in a healthy manner, where each has equivalent bargaining power so neither can dominate the other, then often you can arrive at an efficient solution. Markets are pretty good at that. So the role of government is often to provide equivalent bargaining power to the parties so that they can arrive at an efficient solution, as distinct from the government externally defining that value, unaware of what people individually value. We need to create governance mechanisms that help establish the proper valuation through mechanisms that get everyone’s information involved in the decision. And that’s an interesting governance problem in and of itself.

Geoffrey Parker:
Lots of interesting topics here. Consider property rights, and in particular the intellectual property rights environment. We have the notion of cumulative innovation, and that’s where things like copyright and patent come from: the notion that you need incentives to innovate, to build, to create new things, but that at some point those new things would be better off distributed to the world at large for others to build on top of. I think we’ve kind of lost track of the right durations, and the right way to think about that, in a lot of industries. Part of it is that, literally ex post, once you know the value of the killer thing, you’re much more likely to want to hoard it for yourself forever, even if overall social welfare would go down. Being able to fix the duration ex ante is how we get lots of innovation with patents. The problem is that a lot of the value has moved toward data and information, and the rights regime there has tended to be copyright instead of patent. Copyrights last a long time, on the order of 100 years, which is far beyond what would be necessary to provide the incentives for people to create new things, so we need to rebalance it. Interestingly enough, some of our work has shown that in some ways platforms themselves have stepped into a regulatory vacuum and been able to enforce shorter IP rights and allow innovations to spread throughout the ecosystem, in ways that wouldn’t have happened if those innovations were governed by US copyright law. So lots of room, I think, for innovation on the regulatory front, but context matters incredibly, and whether you’re really going to be able to get the context right is, I think, a completely open question.

Simone Cicero:
It looks like we are moving towards some kind of endogenous process of self-regulation, because we are acknowledging the failure of centralised regulations. Sometimes regulation tends to get stuck in theses which are pretty conservative, for example the data ownership thesis or the things you mentioned, stifling innovation or stifling change just in the interest of replicating patterns that probably don’t fit the current context of organising. I would like to discuss, in this last part of the conversation, how you get to platforms, and more generally large-scale organising systems, that are able to self-regulate based on some kind of shared vision. Maybe we can infuse into these organisational systems the things you mentioned: for example, they need to be open to public/private partnerships, or they need to give users access to returns, or communities access to the profits that they generate. The question is: how do you design a system that is able to self-regulate, and in case the system self-regulates, what is the direction according to which the self-regulation should be put in place? Maybe it’s a good idea to look at a regulatory system that is much less top-down and much more embedded in the organisation and bottom-up.

Marshall Van Alstyne:
Simone, I love your phrase “endogenous self-regulation”. How do you get there? Let me introduce an idea that Geoffrey and I discuss in the book, in the chapter on governance, called “design for self design”, which might be a stepping stone in the direction of your endogenous self-governance. Here’s what happens. One of the things we like to do is to describe a platform as an open architecture with rules of governance for creating healthy interactions. The North Star, in some sense, is creating those healthy interactions, creating value, creating human value in some form. I wouldn’t define the North Star as creating privacy, or creating competition; those are usually stepping stones to the broader goal of creating human value. So in some sense I would argue, and others are free to disagree, that creating human value is the North Star. The interesting thing then happens. In any evolutionary system, if you start to create value, and it isn’t just biology, you’ll create parasites that try to steal or syphon that value. It could be a bad partner in the ecosystem, or worse. The really hard cases are when the governance model itself starts to appropriate too much value by virtue of its monopoly power as the authority. That’s where you need mechanisms for changing the rules of governance in conjunction with the other interested parties: those who are themselves governed, the partners, those who are creating the value. You need mechanisms for them to have a voice in the design of the mechanisms that curtail the parasites syphoning off the value that’s created. So the process of design for self design is a process by which you identify when value is getting syphoned off, who is causing that, who should be involved in the decision to fix it, and then you redesign the rules.
And then the process starts over again: you start creating value again, and there’s always some kind of an arms race, someone else tries a new way to syphon value, and so you have to constantly evolve the ecosystem through this process of design for self design.

Simone Cicero:
I love this idea that the North Star is the creation of human value. It connects nicely with the things we briefly mentioned while preparing the conversation: the idea that one of the pillars emerging in these evolutionary systems of larger-scale organising is that we need to come up with a new human development thesis. Otherwise, if we just trust technological innovation, the machine development thesis overcomes the human development thesis and we can no longer exercise governance. In a previous conversation we gave the example of fast algorithms that trade shares on the market. You said there are algorithms that sell and buy stocks in milliseconds and nanoseconds. So how do you exercise your governance, which is our way to exercise regulation, on such algorithms, on such platforms? Maybe we need a reflective space, a space set aside for reflection, for expressing the particularly human aspects: our capacity to deal with complexity and big pictures, to see second-order effects, and to choose what we do. So how do you see the creation of reflective spaces in platforms, in the governance of platforms? What could be the role of government in facilitating that, and what could be the role of users in creating and demanding these reflective spaces, as we create these large-scale organising networks?

Geoffrey Parker:
There are so many threads in this conversation; it’s a fascinating direction. But I want to pull back and build a little bit on what Marshall said, and you as well, Simone, about this North Star of creating human value. If you go back to the design principles of platforms, and we talk about this a fair bit in much of our writing, especially the book and some of the bigger, more general-purpose HBR pieces, a lot of times you’ll get organisations to aim for simplicity first: there’s a lot of complexity, and a lot of useful focus on those transaction flows and interactions that create value, and that helps to strip away a lot of the noise. But of course, over time, the complexity always grows, and I think that’s going to be where, as Marshall said, the parasites will come in to syphon value off. And then it’s that reflective space you talk about: how can you examine the system and think carefully about where the value is being created, and how do you protect that?

Marshall Van Alstyne:
So I love what you just said, and you’ve asked an incredibly hard question: how do you favour a human development thesis over a machine development thesis? It’s too easy to just let the technology evolve without necessarily bringing humans along. The idea of creating some self-reflective spaces improves awareness. The one thing I worry about in that case, and again, maybe we can discuss better ways to design it, you’ve asked a question I hadn’t considered and I grant up front that I don’t have a full answer, is this. Having studied some fake news, I worry about the extent to which third parties might try to influence the information available to those trying to become self-aware. If we look at what’s currently being propagated on some platforms these days, we see not just misinformation but active disinformation, trying to influence people’s choices. I genuinely worry that simply providing mechanisms that provide information creates noise, pollution, deceit and opportunities for disinformation. So we need governance models that also help to screen out the disinformation. And that brings us back to: who gets to decide what’s disinformation? What are the penalties for it? Which is, again, a recursive, incredibly hard problem. So, as much as I love the question of how we are going to create these self-aware and reflective spaces, we then have to ask what information people are going to use to become self-aware, who’s inserting the information into those channels, and how to ensure that self-interested parties aren’t corrupting those channels, as we’re seeing today with fake news. I’m hoping we’ll have better answers for you soon; that is a really deep and hard question that a lot of societies are going to depend on, in terms of just cleaning up their own information channels.
Social networks have been great in things like the Arab Spring, but they’ve been horrible in terms of disinformation. We see it in elections in the United States, we see it in disinformation about COVID-19, and we need to make sure the information channels are clean, not full of disinformation and cluttered with self-interested parties that don’t have the interest of the person at the centre at heart.

Simone Cicero:
That’s a good point. We can try to deal with regulating innovation, regulating technology and platforms; to some extent we’re talking about the role of technology in our society. But I think we have acknowledged in this conversation that it’s very hard to take this on in a centralised way, as in: these are the rules, no questions, and you need to play according to those rules because this is going to protect everyone. That doesn’t look like the right direction for regulation. We should instead embrace the complexity we are living in, which requires decentralised, localised, contextualised ways to deal with fairness, access, opportunities, development, and so on. So I think our challenge, as scholars of this world and also as advisors, like you guys are, is really to push the boundaries and say: okay, let’s address the problem, but from a complexity perspective and not from an industrial perspective that tries to enforce on the system something that is really not going to regulate it, a one-size-fits-all approach. I don’t know if you want to add some final comments on this reflection.

Geoffrey Parker:
Well, I think you’re dead on, and we’ve learned a lot from those who’ve gone before us. Elinor Ostrom won a Nobel Prize partly for analysing the degree to which regulation has to be fitted to the technology regime, which is really a way of saying exactly what you’re saying. A complexity perspective is really critical, especially as things do tend to grow more complex over time. So, very well said.

Marshall Van Alstyne:
There’s something else interesting about the technology evolution and how this moves forward. If we’re going to move in the direction of value creation, I actually want to argue that most of this comes down to really good governance models. Technology itself is more or less neutral; it’s how we use it that makes the big difference. You can use planes for travel or for warfare, you can use gene-editing technology for creating cures or creating viruses, you can use nuclear for creating power or creating bombs. We have choices, and how all of these things evolve and how much they matter comes down to governance. Good governance depends on the information that’s put into the decisions. Who has the power to make those decisions? And how do you deal with the externalities, how the decisions of one party affect the welfare of third parties? So I’m going to argue that our next task is to create human welfare through good governance: partly through good platform governance, and partly through good standard social governance, normal government, informed by better information, uncluttered by disinformation, and accounting for the externalities that affect individuals and societies, even across societies. I think that is the task before us.

Geoffrey Parker:
Yeah, I think the hard part of these systems is whether they have appropriate governance models, and whether they have the structures to adapt those models, or whether they end up carved in stone. I think systems that can adapt more quickly to changing circumstances are going to keep their North Star focused on creating human value. Great summary.

Simone Cicero:
Thank you guys, that was amazing. Before we say bye to everyone, do you want to add something about what you’re up to, your next steps, and where people can catch up with your latest work?

Marshall Van Alstyne:
Well, certainly, we’d love to invite everyone to the MIT Platform Summit. I think, Geoffrey, you can confirm it’s July 8? Everyone’s certainly invited to join; we have a terrific group of international speakers, men and women with some interesting platform ideas. In terms of future research, there are a couple of projects: one is on the social efficiency of fairness and how that’s going to work. Others involve the city as a platform: can we actually make cities healthier, can governments learn from platforms and platforms learn from governments? I think there are interesting ways to explore some of that research. We also have some research forthcoming on solutions to the fake news problem that I’m hoping will be ready in the next couple of months; that’s a really tough one, especially with the upcoming election cycles. And to summarise, I would just say I love the questions you’re asking and hope to be partners in finding the answers. It’s a wonderful space to be exploring.

Simone Cicero:
Thank you guys, we really appreciate your contribution.

Geoffrey Parker:
Absolutely, please come to the MIT Platform Strategy Summit on July 8th, like Marshall mentioned. We’ve already started to work on the issues of governance and regulation of digital platforms. Marshall also detailed a lot of the interesting work we’ll be focusing on: incumbents and B2B marketplaces, and, we didn’t even get into this, but the pandemic has driven a lot of innovation in that space, in weeks, that would normally have taken years. But that’ll have to be for another time.

Simone Cicero:
Definitely, we’re really into this acceleration. Thanks again. So, listeners, we’ll catch up soon!

Marshall Van Alstyne:
It’s a pleasure. Thanks so much.