Episode 02 : An Incomplete Guide to PBS - with Mike Neuder and Chris Hager
Hasu and Jon bring on Mike Neuder (Ethereum Foundation) and Chris Hager (Flashbots) to discuss the design philosophy of proposer-builder separation (PBS). They first dive into the past and present implementations of PBS, including MEV-Geth and MEV-Boost on Ethereum. Then they discuss the future of PBS - whether PBS should be enshrined, protocol-enforced proposer commitments (PEPC), PBS on L2s, how to prevent censorship, and more.
Hasu: Welcome to Uncommon Core, where we explore the big ideas in crypto from first principles. This show is hosted by Jon Charbonneau, co-founder and general partner of DBA, and me, Hasu, strategy lead at Flashbots and advisor to the Lido DAO.
Jon: Today, Hasu and I sat down with Mike Neuder from the Ethereum Foundation and Chris Hager from Flashbots. We had a great time chatting through PBS, also known as proposer-builder separation. We talked about the history of PBS on Ethereum, what it looked like under proof of work, and how that brought us to where we are today with MEV-Boost on Ethereum right now. We also looked ahead at the future of PBS, asking each other: should we enshrine PBS or not, and what would that look like? That included talking about really fun new ideas like PEPC. We also had some fun at the end talking about what PBS should look like on other domains, especially Layer 2s on Ethereum, like rollups, where we chatted through why we think it could actually look very, very different on L2s compared to Ethereum itself. Hope you enjoy.
Hasu: What is Proposer-Builder Separation, or PBS for short?
Jon: Sure. So the first thing that I'll pick out, which we kind of used before, is the quote from Barnabé. It's still my favorite one-liner description of what it really is: PBS is first and foremost a design philosophy recognizing that protocol actors may invoke services from third parties in the course of their consensus duties. I really like that it's a high-level statement of what it is. Because while we look at it in Ethereum as a very concrete implementation, the reality is it's a higher-level design philosophy: we understand that we're going to have protocol actors that are responsible for certain things, and then there's going to be an economic incentive, among various other reasons, for them to outsource certain actions to other actors that may not actually be in the protocol. Concretely, the way that we're used to thinking about that is in the Ethereum world, where we have validators, one of which will be the active proposer at a given time, proposing a block to the rest of the network. And the reason that we have proposer-builder separation here concretely is that we want that proposer to be relatively unsophisticated and yet economically competitive, such that we can keep the validator set decentralized. So they can outsource the very specialized task to this network of specialized block builders, which is outside the protocol. And those block builders are responsible for building the most optimized block, the one that can extract the most value, such that they can pass the majority of that value back. Because conversely, if we don't have this ability for proposers to interact with this out-of-protocol market in a relatively trust-minimized way, then you would simply have a very clear return to sophistication, where the only way to be a competitive proposer would be, "Okay, well now you need to be a builder in-house. 
You need to be super sophisticated and know how to optimize everything." So it's trying to get at the fact that you're going to have these different roles and we need to design what is the right way to have an interface between these in-protocol and out-of-protocol roles. And right now the way that that works with MEV-Boost is kind of a strapped-on way of doing that. And a lot of the research right now that Mike has been doing over at the EF is like, "How do we maybe bring that a little bit more in-house and what should that look like to try to make that interface between the in-protocol and out-of-protocol actors even more trustless?"
Mike: Yeah. And I always like to circle back to Vitalik's endgame post. The last sentence of his post is basically, "The future of many iterations of these designs will probably end up in a world where there's centralized production, decentralized verification, and strong anti-censorship properties." He kind of talks about how some ecosystems might start more centralized in the block production world and evolve into something that has decentralized verification only. And others could take different trade-offs in the initial state, but ultimately we might always end up in that state where we need to firewall off the heavy-duty actions that the validators need to take from something that can be run on a local machine, has credible decentralization features. So that's kind of how I like to think about it.
Chris: Yeah. You spoke to a lot of things that I'm also thinking. In particular, it's also a case of there being either an implicit or an explicit auction. And if the auction is implicit, it has a lot more negative externalities and centralizing incentives. PBS recognizes that not all protocol actors may be able to fulfill all their duties in a comparatively performant way, and that they may need external support for that, to also keep the decentralization of the network stable.
Hasu: Yeah. And what I particularly like and kind of why I picked out this quote is that it really hones in on PBS as philosophy. And I think PBS, the implementation on Ethereum faces a lot of criticism from different directions, all great arguments and concerns that we will also go into in this episode. But really, I think the general idea behind it is one that is extremely sound. And I think that all of you laid out here really well. So with this high level overview out of the way, I'd like to go a bit, you know, a couple years back and hear from you, PBS as an idea. Where did it start? What is its history? How do we get from there to where we are today?
Mike: I think historically, the PBS marketplace was a little more explicit in the MEV-Geth world, before the proof-of-stake merge. So essentially, in that scenario, there were a few large mining pools that controlled a huge portion of the hash rate. MEV-Geth provided the ability for searchers to send bundles to those miners. The searchers were able to send bundles to the miner without worrying about the miner stealing them, because since there were so few miners, a miner's reputation was worth more than stealing the contents of any one bundle. So in that regard, the interaction between the searchers and the block producers was simpler, because there were so many fewer block producers. And then, as the merge approached, a lot of people were talking about PBS as a general approach, and I think we were even considering holding off on shipping the merge until we had some in-protocol version of PBS that could accompany the merge hard fork. That was discarded in general because the merge was already a huge lift, and adding more complexity to the software and to the spec was just going to slow things down more than necessary. And so, yeah, maybe I'll pass it over to Chris here, as Flashbots stepped in and implemented MEV-Boost, and that was the first real out-of-protocol PBS instantiation that we saw post-merge.
Chris: Yeah, I think about one year before the merge, Stefan from Flashbots posted the OG MEV-Boost specification, outlining how proposers could interact with an external block-building network. And then work started in the background. At the MEV-Day meeting during Devconnect in Amsterdam, that was April '22, there was a finalization of all the APIs that were needed. And from then on, it was clear that everybody was shooting for the merge with MEV-Boost PBS enabled. I think at that point, it was fully unclear how permissioned or permissionless this whole thing would be and how it would play out. But it seemed inevitable that some form of this was going to ship. And yeah, we worked through the summer to deliver a permissionless relay and the open-source software that other relay operators can also run, and had everything ready just in time for the merge, including permissionless builder access.
Mike: Yeah, and it might be worth just kind of running through MEV-Boost as a piece of software for people who aren't familiar. So the idea of MEV-Boost is that there's a third-party actor here that facilitates the auction between the proposer and the builder. And the reason for that is the proposer needs to trust that the block the builder produces is both valid and accurately pays them the amount that the builder promised. And the builders can't simply send those blocks to the proposer for them to verify themselves, because the proposer could just steal the MEV from the block and, in that way, take away all the earnings from the builder. So the relay sits in the middle. It facilitates this auction insofar as the builders send a bunch of blocks to the relay, and the proposer commits to the highest-paying of those blocks before they actually see the block contents. So that's an important feature here. And it'll probably come up as we think more broadly about ePBS designs: proposers need to commit without seeing the contents of their block in order to protect the builders from having the MEV stolen. So the current status quo: immediately post-merge, there were maybe three or four relays running, and now I think we're up to about eight that facilitate most of the MEV-Boost blocks. A bunch of builders are sending blocks to those relays, and about 95% of validators are hooked up to one of those relays and using that connection to source their block production.
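The flow Mike describes, where builders submit full blocks to a relay, the proposer commits blind to the best header, and only then receives the body, can be sketched roughly like this. All names here are hypothetical; the real builder API uses SSZ-encoded types, BLS signatures, and block simulation, none of which this toy models:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str
    header: str  # commitment to the block; the body stays hidden
    value: int   # payment (in wei) promised to the proposer

class Relay:
    """Trusted middleman: holds full blocks from builders, shows the
    proposer only the best header, and reveals a body only after the
    proposer has committed to it."""

    def __init__(self):
        self.bodies = {}  # header -> full block body
        self.best = None  # highest-paying bid seen so far

    def submit_block(self, builder, header, body, value):
        # A real relay also simulates the block here to check validity
        # and that it actually pays `value` to the proposer.
        self.bodies[header] = body
        if self.best is None or value > self.best.value:
            self.best = Bid(builder, header, value)

    def get_header(self):
        # The proposer sees only a header and a bid, never the contents.
        return self.best

    def get_payload(self, signed_header):
        # The body is released only after the proposer has signed the
        # header, so the proposer can no longer steal the block's MEV
        # without equivocating.
        return self.bodies[signed_header]

relay = Relay()
relay.submit_block("builder_a", "header_a", "body_a", 10)
relay.submit_block("builder_b", "header_b", "body_b", 25)
bid = relay.get_header()             # proposer commits to the best header
payload = relay.get_payload(bid.header)
```

The key design choice is that `get_payload` is only callable with a header the proposer has already committed to, which is what protects builders in the status quo.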
Hasu: I guess I'm really a sucker for proof of work and the history of it. So I would add that in some ways there was even a form of proposer-builder separation before MEV-Geth existed, in the division of labor between a mining pool operator and the workers. Because the way that it worked is the mining pool operator would construct the block body, hash it into the block header, and send that to the workers to hash further. And that hash would then contain the golden nonce or not. So you had a bunch of things bundled in the pool operator: block construction, because there was only one party that had to do all of the peering and the validation and the block construction and so on, and that also had to invest in latency infrastructure, right? Having good propagation to other mining pools and to big exchanges and so on. And then you had the workers, who did the actual work on a block body they couldn't see, only its hash commitment. So you even had this idea of the commit-reveal scheme back then. It's funny how far back some of these ideas trace. We established PBS as a design philosophy, right? And I think you already touched on it a little bit, Jon, when you said we want to protect validators from having to do these duties that are so complicated, so difficult, that it leads to an unevenness in how much money they make, or how well they execute these services. So tell us a bit more: what are the benefits of PBS, as we try to unpack this idea of creating a healthy, decentralized market structure?
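The pool-era division of labor Hasu describes can be illustrated with a toy sketch: the operator commits to a block body via a hash, and workers grind nonces over that commitment without ever seeing the transactions. Real pools use the Stratum protocol and double SHA-256 over an 80-byte header; this simplified version only shows the commit-reveal shape of the scheme:

```python
import hashlib

def build_template(txs):
    """Pool operator: construct the block body and hand workers only a
    hash commitment to it, never the transactions themselves."""
    return hashlib.sha256("".join(txs).encode()).hexdigest()

def grind(commitment, target_prefix="0000"):
    """Worker: search for a nonce whose hash over the commitment meets
    the target (the 'golden nonce'). The worker never needs the body."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{commitment}:{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

commitment = build_template(["tx1", "tx2", "tx3"])
nonce, digest = grind(commitment)
```

Just as in PBS, the party that constructs the block and the parties that do the commodity work are separated, and the workers operate on a commitment rather than the contents.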
Jon: So the high-level benefit is kind of what Mike was talking about before as well. In large part, decentralization of the validator set is primarily a means to an end, and that end is to get certain properties out of the protocol, such as censorship resistance, liveness in extreme scenarios, stuff like that. So to get that, we want a decentralized validator set. And to get the decentralized validator set, we want to offload all the complexity to these builders to the extent possible. So the simplest benefit is just keeping validators decentralized. And then the other realization on top of that is, "Hey, if we have these more specialized, more sophisticated actors sitting next to the protocol, who we can rely on to be economically incentivized to keep building these blocks, we can lean into that and take advantage of it, as long as their power is sufficiently constrained." So simply put, we can have builders do these more complex tasks going forward. The clearest example of that is scaling. Something like the danksharding design, where instead of having all of these different subcommittees where you're building all the... You effectively will now have one gigantic block with all the data in it. And that is a relatively more complex task, for one single party to make this larger block, make all the KZG commitments for it, etc. But that produces a more efficient scaling design and keeps the load on the proposers very light. It's just a realization that, "Hey, we're going to have these specialized actors anyway, because there's clearly an economic incentive, for MEV-capture reasons, for them to exist." So we can take advantage of that, lean into it, and have them do these other tasks, tasks that someone needs to do, in a more efficient manner.
Hasu: And this is a new idea that you're now introducing, right? Because so far we kind of kept it to MEV and just the ordering of transactions. We touched on some things like a mining pool needs latency infrastructure and good peering and stuff like that. But now you're really opening almost like an entirely new, much wider design space, right?
Jon: Yeah, exactly. And in even more extreme scenarios, I would say, or pushing the same idea to different layers, it's the same kind of thing. It depends on who you ask how viable they are, but take based rollups as the simple example. When Vitalik first had a post, probably a few years ago, on the different ways you could do sequencing designs for rollups, one of the ideas was something that he called total anarchy at the time. Instead of having a sequencer for the rollup, it would just be: whatever lands on-chain first, that's the block for the rollup. And the reason that didn't work is that it would just be an absolute mess. You would have spam on-chain, no one would know who's going to win, it would be a lot of wasted effort, incredibly inefficient. The reason that becomes a more viable design, and this is something that Justin's post earlier this year on based rollups touched on, is: hey, we can lean into the fact that we now have these more sophisticated and economically rational actors who have this proposer interface. The builder for the layer one, or the searchers feeding into it, can effectively be the sequencer for the rollup and say, hey, I'm not going to include these 10 failed attempts at a rollup block in here. I'm going to include the blob that is the most efficient one for this rollup, land that one on-chain, pass it to the proposer, and make sure only that gets in. So you can lean into it, realizing that there are going to be these economically incentivized parties sitting next to the protocol, and have them do this kind of additional work, acknowledging that, hey, they're going to be sitting there anyway. We might as well lean into them and use them for these different things. So that's another simple example. 
Other things too, like statelessness: having them create proofs to make that a viable design, because otherwise you'd need to give the witnesses to the validators for them to be able to be stateless. Leaning into them for all these kinds of different tasks, you realize there's a lot of complexity we could outsource to the builders.
Chris: Yeah. And complexity can then also increase on the proposer side of PBS, for instance in PEPC, protocol-enforced proposer commitments, which are also a form of PBS where there are more arbitrary commitments a proposer can enter into. So this whole design space provides a lot more opportunities to build interesting things.
Jon: It doesn't necessarily, I would say, have to increase the complexity on the proposer a lot, because even if they are entering into these arbitrary commitments, they don't have to be the ones who fulfill them. And this is something that Barnabé touched on a bit in his last FAQ, where he talks about PEPC-Boost and things like that. Outsourcing a full block is just one thing you can outsource. They can outsource any of these commitments, where a proposer can just be opted in: "Here are the commitments that I'm specifically opted into." And as long as the builder is aware of those, they can build a block in recognition of them and send the proposer a block that fulfills those conditions. Because if they don't fulfill those conditions, they know they're not going to get their block on-chain. It's the same incentive as builders in the full-block auction. As long as there is an interface, and the builders are aware of the commitments a proposer is opted into, they can build for them. So if designed well, I would say it doesn't necessarily have to increase complexity for the proposers.
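One way to picture Jon's point: the proposer's commitments can be modeled as predicates over the final block, and the builder simply restricts its candidate blocks to those satisfying every predicate, since any other block won't land on-chain. This is a toy sketch with made-up commitment types, not the actual PEPC design:

```python
# Each commitment is a predicate the final block must satisfy.
def must_include(tx):
    # e.g. an inclusion-list style commitment
    return lambda block: tx in block["txs"]

def max_gas(limit):
    # e.g. a self-imposed gas ceiling
    return lambda block: block["gas_used"] <= limit

def satisfies(block, commitments):
    return all(check(block) for check in commitments)

def best_valid_block(candidates, commitments):
    """Builder side: among candidate blocks, bid the highest-paying one
    that fulfills every commitment the proposer opted into."""
    valid = [b for b in candidates if satisfies(b, commitments)]
    return max(valid, key=lambda b: b["bid"], default=None)

proposer_commitments = [must_include("inclusion_list_tx"), max_gas(30_000_000)]

candidates = [
    # pays more, but misses the committed transaction -> ineligible
    {"txs": ["a", "b"], "gas_used": 20_000_000, "bid": 30},
    # satisfies every commitment -> this is the block the builder bids
    {"txs": ["inclusion_list_tx", "a"], "gas_used": 25_000_000, "bid": 22},
]
winner = best_valid_block(candidates, proposer_commitments)
```

The point of the sketch is that the filtering work lands on the builder, not the proposer, which is why the commitments don't have to add proposer-side complexity.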
Hasu: Builders can also, I would add, out-of-protocol actors in general can do things that in-protocol actors can't. In MEV in particular, I think there are two very clear things. They can keep transactions private, like a builder can run a sealed-bid auction instead of an open-bid auction. And they can simulate transactions, so losing bids don't pay, an efficient auction instead of an all-pay auction. These are things that a validator could in principle do, but it's really difficult, and it's not really possible to establish that trust. So ultimately, out of protocol, the design space is much bigger, and that leads to a better market structure.
Mike: Two other features that just jumped to mind when you were describing that, Hasu: cancellations and instant confirmations. Builders can offer bundle cancellations. This is especially important for the centralized-exchange arbitrageurs: if the centralized exchange price moves against them, they need to be able to cancel a bundle. And instant confirmations: it depends exactly what type of confirmation the searcher is looking for, but builders could give some guarantee on the post-state root after a bundle, conditioned on their block winning the bid. So yeah, there's a lot a builder can do out of protocol, as you mentioned.
Hasu: In general, I think we say, well, PBS allows validators to stay simple and affordable. And connecting that back to what you said, Jon, why do we want that? It's because we want to maximize the censorship resistance of the network, right? A lot of that matters in particular for Ethereum: when you talk about Layer 2s, for example, they all have assumptions built into their own security model that basically say the layer-one chain can't be censored for more than some period of time, right? This is the property we're trying to protect. I've seen another argument discussed, and this is really a double-edged sword, but it's nonetheless interesting to point out, which is the regulatory argument. The argument goes roughly like this: the less discretion validators have over the kind of blocks they build, the less they have to be regulated as any kind of financial intermediary. On one end of the spectrum, you would have the validator as a money transmitter, which basically has to KYC every single person that transacts through them, right? That would be an extremely censored and regulated Ethereum. And on the other end, you have the validator as an ISP, or like a fiber cable: what it transports is just data packets, inspecting all of them would be completely infeasible, and so it really has no discretion over what passes through its pipes. So I think a big idea behind keeping proposing simple and affordable is also to boost this argument, right? Proposing should carry the least amount of discretion possible. But when you get there, you introduce new problems, right? Because we're talking about really difficult jobs that the builders do, there tend to be, you know, power-law outcomes in this market. 
And so now, all of a sudden, it takes a lot fewer things to go wrong for that market to be censored in turn, right? So ultimately, you're not really solving the problem, you're just shifting it in some way.
Mike: And it's probably worth calling back to immediately post-merge, when the issues around OFAC compliance and censorship were largely there because the relays had to commit to censoring those transactions in some way. So even though you allow the burden to be shifted from the validators to the relays, that still opens up a new regulatory surface, which might be smaller than the regulatory surface of the entire validator set, but it's definitely a trade-off. And also, in terms of censorship resistance, I think inclusion lists and their relationship here are very interesting. This is something Jon and I have discussed a lot: if you bring back the censorship-resistance properties and place them back on the shoulders of the validators, now the validator has to opt in to getting these OFAC-censored transactions on-chain. It's like, okay, the responsibility is still somewhere. It's just a question of who shoulders it at the end of the day.
Jon: And that's one of the interesting things with these designs: it's hard to say, when you're designing inclusion lists or something like that, whether you're designing with a specific regulatory environment in mind when there actually aren't clear regulations. It's pretty hard to actually do that. But if there is a regulatory burden, and this is where I say not-legal-advice type thing, I don't actually know, some of those designs would seem to put more agency on a particular actor, where we're saying one person is enforcing censorship resistance. They're explicitly saying you must include this transaction in a block. That maybe looks a little more gray compared to something like SUAVE and other types of encrypted mempools. This is one of the reasons I find those designs interesting: they're very often talked about as an MEV solution, but I think they're very interesting from the censorship side of things, and that's a very underrated property of them. They seem to be the best direction for ensuring these properties for people while also giving everyone plausible deniability, "I don't know what's in there," completely removing that agency from anyone throughout. It's just: I see a bunch of white noise, I run my algorithm over it, and here's what I get at the end of the day. And that is what really starts to look like, "I'm an ISP, I'm sending data packets around, I don't know what the hell any of them are." You really are a dumb pipe at every point in the supply chain. That's a very interesting dynamic that I think gets underrated at times. They are very much a censorship-resistance tool, making everyone a dumb infrastructure provider, as opposed to just being an MEV solution. They're very helpful for short-term privacy, for things like making auctions incentive-compatible, etc. 
But the censorship side of things is very interesting to me for that reason as well.
Hasu: Yeah, very well put. I'll round off with one more point and then we can move on. What I particularly like about PBS is that it basically acknowledges that a division of labor between different parties will happen no matter what. And I think this becomes very clear when you contrast it with other ordering algorithms, or other forms of how blocks can be constructed, like time-based ordering. With PBS, you basically acknowledge that there is a market for ordering these transactions with the goal of maximizing validator revenue, and that this is the most valuable thing for them to do. If you don't allow validators to compete on that, to maximize revenue that way, then what you will get is other forms of extraction that are ultimately much more destructive for the chain. So you basically say: well, if there's MEV to be extracted, I want it to be an explicit auction. I don't want an implicit auction that's harder to monitor, that leads to spam, that leads to entrenchment of latency-advantaged players, and all of these things. Centralization. So it's really about acknowledging that the market will find a way, and designing around that. And this is a line of thinking that I think you find in all of Barnabé's articles. Do you see any risks?
Mike: I was just going to call back to this idea we mentioned before, that the regulatory surface changes in terms of centralization. And not even just from a regulatory perspective: in the current status quo, there are essentially 8 to 10 relays responsible for 95% of Ethereum blocks, and 8 to 10 builders responsible for producing those blocks. That has some definite risks. The builders especially are the ones in a position to capitalize most on the PBS market; they can continue to make the most money. The relays are in this weird position where they're kind of a public good, but still, the fact that there are so few of them controlling such a huge part of the market is kind of against the ethos, I guess, generally. And one way this actually manifests, not from an economic perspective but from a fragility perspective, has shown up in a few different issues around relay operators and their relationship with consensus clients. Immediately after the Shapella fork, there was a bug in how Prysm interacted with MEV-Boost, and that resulted in huge network instability immediately after the hard fork. It took a few epochs for the chain to finalize, there were a lot of missed slots, it was full-blown firefighting mode. And that comes from the fact that there are these 10 relays, and all of the software running on the validator machines is decoupled from this external MEV-Boost software. There are consensus-stability implications around the centralization found in out-of-protocol PBS systems in particular.
Chris: I would add to that the overall technical complexity of enshrined PBS. The merge was basically just a year ago, and the whole year we've been thinking about moving PBS more in-protocol, about how to get rid of and move beyond the relays as trusted actors. And these are super hard challenges: you may need to add a lot of responsibilities, you may need to increase the complexity of the consensus protocol, which is already pretty hard to reason about, and it could introduce new, nuanced reorg risks or vulnerabilities. This is just a very hard problem to get right. So I would say there's a lot of overall technical risk on the path to in-protocol PBS.
Hasu: I mean, I have a bit more of an arcane point. But clearly, we're seeing that proposer-builder separation can exist outside the protocol, and that's where it has lived so far, most of the time. And not all of this stuff is actually maintained by Ethereum core developers. So I guess, as someone working for the Ethereum Foundation, Mike, what do you think this does to the power dynamics in the Ethereum ecosystem? Is it more that we have to change the definition of what it means to be a core developer? Or is it that Ethereum should eventually try to pull everything into the protocol? What do you think it does to the invisible power in the ecosystem?
Mike: Yeah, I would say Barnabé has a really nice post on this. We keep calling him out, but he has a post called "Seeing like a protocol", where he defines what it could look like to enshrine different things, and when to draw the line and say, okay, this is out-of-protocol versus in-protocol. Part of ePBS and the work that I've been focusing on is figuring out not only what to enshrine, what design works for ePBS technically speaking, but also, on a more meta level, whether we should actually do the enshrinement. And one of our recent pieces, which the four of us wrote with a few others, talks about the role of enshrined PBS in a world in which a relay market still exists outside the protocol. We'll probably touch on that later. But in the current meta, where MEV-Boost essentially is core protocol software, I think there's a bit of an ownership mismatch. Flashbots wrote this code, and it's been working really well for the year it's been running post-merge. But I think everyone would agree that the testing, tooling, and specification around that code are not at the same level as the core consensus clients. Part of that is because it's sort of a public good, but it was also originally written by Flashbots. So I'm not sure exactly how the ownership should evolve, and the politics there. I will say, one of my big reasons for liking enshrined PBS is that it makes that distinction a lot clearer. It draws a line in the sand: this is the in-protocol mechanism that we're going to maintain in terms of the consensus spec and the client teams. If you want to go outside of that, you have to rely on out-of-protocol software that might inherently be more brittle, more risky, etc. Hopefully that answers your question.
Hasu: Do you think it's more risky for Ethereum that important parts of the Ethereum stack are maintained by non-Ethereum-Foundation teams that may even have a commercial interest? Or do you think it would be riskier if they weren't?
Mike: It feels more risky in the current state. And I'll say, especially right now, it feels like the equilibrium we're in is not stable. The relays are kind of fighting for their lives: some of them are third-party, credibly neutral relays trying to get funding from grants and other things; other relays are part of companies and commercial entities that are trying to either monetize or figure out whether this is part of their core business model. And for even some of the large relay operators now, it's not clear that, if we don't find a viable funding mechanism, they will be around by the end of the year, for example. So insofar as we get to a world where there are only two or three relays, that is much riskier to the protocol than the current status quo, and that seems to be the direction we're headed in. So I would say either enshrining something and clearly delineating between in-protocol and out-of-protocol PBS, or finding a way to ensure that the MEV-Boost ecosystem is more stable and sustainable into the future, is going to be critical in the coming weeks and months.
Hasu: We see in the protocol that there are incentives for different actors to specialize, or even for the same actor to specialize in some way, so that they can make more money or do additional things for the protocol. I feel like we have established PBS almost as the canonical solution to this problem, but this is not the case at all, right? I want to place it in contrast to some other things that you could also do. So what would you see as the main schools of thought that are in some way competing with PBS on solving that problem?
Jon: So in Ethereum, I don't know that there really is a meaningful alternative to PBS in the specific Ethereum context, because the broad directional alternative to PBS is just completely constraining what the proposer is allowed to do, effectively. You specify very concrete rules: this is what you must follow. Like some of the quote-unquote "fair ordering" proposals, where you're trying to say all the consensus participants enforce upon each other, this is the ordering that you must follow within this block. To the extent that that happens, there really isn't room to be outsourcing block production at that point, because it's supposed to be at least deterministic: this is exactly the process, the block that you should be outputting. The reality is, you're not going to be able to enshrine something that prescriptive in Ethereum generally. And so if you assume that there are going to be decentralized participants within the validator set, and they're going to have some amount of agency to propose a different block, the natural result is that there are going to be different people in the world who have a better block at different times. And there's going to be an economic incentive to outsource that production at different times. So I don't really think that there's an alternative to PBS to any meaningful extent within Ethereum, given a lot of the design constraints that it gives itself for what it's optimizing for.
Hasu: And outside of Ethereum?
Jon: Outside of Ethereum, I think you can argue that there are credible alternatives. And the credible alternatives are very opinionated and very app-specific. For those, you can say that you don't need to outsource to this arbitrary market, because we know for our application very specifically, this is the transaction ordering that is going to be welfare-maximizing for what we want to achieve. And so we can ingrain very specifically, this is the transaction ordering that must result. Potentially difficult to achieve that, but you can credibly have a mechanism that works pretty well, whereas I don't think it's even reasonably viable to do something that opinionated and that constrained on Ethereum. I think you can make a credible argument for that in certain app-specific use cases. But even in the app-specific use cases, I think the reality is it's still a spectrum of how much you are constraining what you're doing. So one of the things that is sometimes seen as an alternative to PBS is what's called protocol-owned building. This is something that's more popular in the Cosmos context, which the Skip guys are working on, where you have these app-specific chains. So they have this notion of protocol-owned building, which is: you have certain consensus rules that enforce certain validity conditions upon the blocks. So we have it as part of our consensus in a chain like Osmosis that, after these trades, we check if there's an arbitrage, and if there is, it is baked into consensus that that cyclic arbitrage is automatically closed and the funds are distributed how we agreed upon in consensus. There's no way around that.
But the thing is, while that constrains what you are allowed to build as a block, there are still degrees of freedom within that. There is still flexibility within that. So you can constrain the search space with something like protocol-owned building. But depending on how much you constrain the search space, if there are still degrees of freedom, which there may very well be, you can still outsource block production. So you can have protocol-owned building where certain validity conditions are enforced, but the validator can still outsource to some other builder to build according to those rules. And that's what I was getting back to before when I was mentioning PEPC briefly with Chris. Just because you have more constraints on the proposer, that doesn't necessarily mean there is no more freedom left, or that they have to do it themselves. PEPC is a similar idea: it's a way for proposers to constrain the allowable space of what kind of block they can propose, in much the same way that protocol-owned building does. The difference is more that protocol-owned building takes the very Cosmos approach of, it's app-specific and we can reasonably say, for our given application, this is the right way to constrain the search space of allowable blocks so that it's relatively welfare-optimizing. So every validator has to go by that commitment. Whereas PEPC is the Ethereum variation of that, where we can't say that, because Ethereum is very general-purpose and is optimizing for very different guarantees. And so you have to allow proposers to locally make those constraints and those commitments, which are very analogous to what protocol-owned building wants to do, but in a very generic context, constraining what the block is that I'm going to output at the end of the day.
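The protocol-owned building idea described here, enshrined validity rules that still leave degrees of freedom to outsource, can be sketched roughly as follows. This is a hypothetical illustration, not Skip's or Osmosis's actual implementation; the transaction types and the toy validity rule are assumptions.

```python
# Toy sketch: protocol-owned building as a consensus-level validity
# predicate, with block construction still outsourceable.
from dataclasses import dataclass

@dataclass
class Tx:
    kind: str      # e.g. "trade", "arb_close", "transfer" (illustrative)
    value: int     # value this tx contributes to the block

def satisfies_protocol_rules(block: list) -> bool:
    """Enforced in consensus (hypothetically): any run of trades must be
    followed by an arb-closing transaction before the block ends."""
    saw_trade = False
    for tx in block:
        if tx.kind == "trade":
            saw_trade = True
        if tx.kind == "arb_close":
            saw_trade = False
    return not saw_trade  # every run of trades was closed

def best_valid_block(candidates: list) -> list:
    """The remaining degree of freedom: among candidate blocks that
    satisfy the enshrined rules, an outsourced builder can still
    search for the most valuable one."""
    valid = [b for b in candidates if satisfies_protocol_rules(b)]
    return max(valid, key=lambda b: sum(tx.value for tx in b))
```

The point of the sketch is that the predicate constrains the search space, but the `best_valid_block` search over what remains can still be delegated to a competitive builder market.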
A lot of things are viewed as alternatives to PBS. And one of the things I try to hammer home is that PBS isn't just supposed to refer to the concrete implementation that we see on Ethereum today. It is just the acknowledgement that there's probably going to be a separation between different actors, and there is a spectrum of what that separation is and how much we constrain what those different actors can do. And I think we're starting to see that increasingly across different ecosystems: PBS really is a spectrum of what kind of constraints you are putting on different people and what the interaction between them is.
Mike: Yeah. Another thing that came to mind here is that, especially in the Ethereum context, the ordering of a set of transactions could be worth different amounts to different actors, right? A certain block might be worth a lot to a builder only because they can close the second leg of the arb on a centralized exchange, whereas that block might be worth a lot less to a validator that produced it locally, because they don't have the liquidity on the centralized exchange. So with such a general-purpose design, it doesn't seem that viable to just say, okay, this has to be the most valuable block according to everyone's view, so this block becomes canonical by that definition. And additionally, just the idea of coming to consensus over the set of transactions that are eligible to be in a block is a very difficult thing too, because everyone in the P2P network has a different view of the world. I think that's one of the design challenges that Ethereum faces.
Hasu: Yeah, the more you try to constrain, the more you push the auction to happen outside of the protocol, right? Because when you say, on the one hand, I'm going to constrain the validator in some way, like on what block they can build, then two things can happen. One is the validator becomes really focused on the things that they still can control. For example, where do I run my machine, and who is allowed to run their machine next to my machine, right? Or second, and this is the most likely thing, you push extraction away from the validator to a searcher market that then happens in a way that's highly latency-optimized, that has very strong winner-take-all dynamics in MEV, and ultimately in a lot of other things as well. I want to also point the spotlight at one more thing that I think is picking up a little when I read Twitter. It's always the same people, but a few people are talking about the idea: why do we have to make validators small at all? Ultimately, isn't the goal of Ethereum to be useful to people? So shouldn't we start from the idea of what properties a blockchain should have to be the optimal, let's say, base layer for decentralized finance? And what is the biggest problem in decentralized finance? Well, it's probably liquidity, because it's very difficult to be a competitive market maker on Ethereum when you're bleeding so much money to arbitrage, right? So I think people are honing in on that as the problem, and they're asking, well, how can we reduce MEV for LPs, but also for traders? And one thing you see is: what if we just lowered the block time a lot?
Faster blocks mean, for example, there's less potential to reorder transactions, there's less MEV, market makers can update their bids faster, all of these things, right? What's your view on this idea of, well, let's sacrifice some decentralization in validators to make Ethereum more useful?
Jon: As far as the people on Twitter who are saying that, I mean, I'm often one of the people on Twitter saying that to some extent. And that's a little bit of what I've poked at lately. I just think it's important to delineate between what is Ethereum's place on that spectrum versus what is other chains' place on that spectrum. Because there are plenty of more opinionated optimizations that Ethereum could make to make it better for traders, better for users, better in these different, very concrete ways. But you are inherently favoring a certain class of users by doing that over other certain guarantees. That is inherently going to be a trade-off with a lot of those changes. And in broad brushstrokes, generally the way I view it is that a low-latency trader is not Ethereum's primary user, quite frankly. Very often other chains, things like rollups, will optimize more for what is directly the user, the trader: what is their UX, what is their latency, all of those things that matter a lot. I view that less as Ethereum's primary customer; rollups are building for those users. In large part, Ethereum is building for rollups and other types of longer-term, slower use cases that need really strong guarantees at the end of the day. Which is why I think it's a very practical decision, for a lot of reasons, for Ethereum to have a permissionless validator set. And this is some of the stuff that I've touched on. There are trade-offs to a permissionless validator set, particularly in the short term. It means your validators are not going to be able to enforce any kind of MEV protection. It's harder for them to enforce censorship resistance, potentially; you need to add other mechanisms like inclusion lists. Things like MEV, you basically end up pushing to these out-of-protocol builders. So now, on Ethereum, if I send my transaction out to the public mempool, I will probably get front-run, I will get sandwiched.
What do we do? We push that to a private mempool, we push it to a builder. As opposed to a chain with a more opinionated validator set, which can have a handful of validators that we trust, where we say, "Hey, don't front-run the users, because if you front-run them, we're going to kick you out of here." And that actually makes sense as a trade-off for other chains. I just don't think it makes sense as a trade-off for Ethereum, because it is trying to provide a fundamentally different set of guarantees. If you are looking as a user to use a low-latency chain where you can send your things to the public mempool and you're not going to get front-run and you want to pay low fees, you shouldn't use Ethereum. And I'm just fine to say that. You should go use a rollup. That's kind of the whole point. Ethereum is just optimizing for a very different set of trade-offs so that rollups can optimize for the exact opposite end of the trade-offs, where they can be more guarded and opinionated in their designs. Where Ethereum is trying to be very unopinionated and very robust and very broad in its design goals, pushing those intricacies over to different layers of the stack. Generally, my response is: those trade-offs do make sense. They just don't necessarily make sense for Ethereum. Different protocols should have different spots on that trade-off spectrum, depending on who your user is and what guarantees you're trying to provide.
Hasu: Changing gears here a little bit. PBS is a design philosophy, but it also has an implementation on Ethereum today that's called MEV-Boost. And you are one of the main people working on this MEV-Boost ecosystem for a long time, Chris. Can you describe for us what is the current state of the MEV-Boost ecosystem? And then we will transition that a little bit into how it's going to evolve in the future.
Chris: The current state of the MEV-Boost ecosystem. Yeah, there's a lot to unpack here, from the software itself to the relay ecosystem, to the builder ecosystem, to the protocol. I think the MEV-Boost protocol is the one thing that has stayed relatively unchanged so far. The only change right now is the 4844 upgrade, where we introduce blobs, and they also need to go all the way through the builder network, through the relays, to the proposers. And there's a lot of heavy lifting to do here that is all in progress. On the relay side, I think we've reached a somewhat unstable equilibrium with the ten relays that are providing services. Of course, there's the downside for proposers that the more relays they add, the more they inherit the security guarantees of the weakest link. So even though it may look good on paper to spin up as many relays as possible, in practice for proposers it often means worse security guarantees. Relays are too powerful: trusted actors run by private businesses. This is not great for the overall trust model, and a rogue relay can cause a lot of harm to proposers, to builders, and to blockchain stability itself. This is something that we are very strongly looking to mitigate on the path to enshrinement. The builder ecosystem is constantly changing, with about four to five builders producing the majority of the blocks. I think the top builders have been relatively stable recently. Relayscan.io is a good website to track it. There are Rsync-builder and Beaverbuild, and now also Titan, dominating the market with almost 70% of the blocks. And there are Flashbots and Builder0x69 with about 10% each, and then a steep drop-off to 2% for other builders. So I would say it's a somewhat almost centralized set of players here. It's probably not too easy to ramp up; there's a lot these high-market-share builders do to gain it. But yeah, we will see how that shapes up.
Software-wise, overall, I think MEV-Boost is relatively stable now. Relay operation is the more demanding task for operators right now. A relay provides a lot of DoS protection, validity checks, payment checks; it has a lot of things to do that require a lot of compute. It's not quite easy to run, but possible. Then there are the performance and latency optimizations that the Ultrasound team, and Mike in particular, have implemented over the past couple of months, which also really boosted the inclusion rate. Optimistic relaying in particular, which means the guarantees are changed in a certain way: builder blocks are not validated anymore before they reach a proposer, and a proposer might sign blindly to them. The optimistic relay is basically guaranteeing a reimbursement in case of a fault. So this is an interesting development that's currently run by the Ultrasound relay. And I'm not sure if Bloxroute is also running optimistic mode for some builders. That also has a lot of additional operational overhead. Flashbots is not doing optimistic relaying. But I think overall, our focus is moving beyond relays. The sooner the better.
Mike: Yeah, I was just going to say, part of the optimistic roadmap and the idea of making some evolution in the relays is to try and make them actually cheaper to run. What optimistic relaying does is simplify the task of being a relay operator, because the blocks don't have to be simulated in the same few hundred milliseconds right before the end of the slot. By spreading out the simulation over the subsequent slot, the actual overhead of running a relay could go down quite significantly. And this is part of the path to hopefully more sustainable and more economic relays. As Chris mentioned, the trade-off here is additional overhead from the relay operation perspective, because builders have to be collateralized with the relay. And if there's ever any failure, then it's on the relay to reimburse the proposer for that issue. The long-tail goal here is to get to a point where we can explore and forerun some of the features that would be present in an enshrined PBS mechanism through the existing relay market that we have today. So that's the high-level goal of optimistic relaying generally.
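The optimistic relaying flow described here, accept collateralized builder bids without up-front simulation, then simulate later and reimburse the proposer from collateral on failure, might be sketched like this. It's a deliberately simplified, hypothetical model (real relays such as Ultrasound's are implemented in Go against the beacon APIs); the class and method names are assumptions.

```python
# Toy model of an optimistic relay: bids are accepted without the
# expensive block simulation, as long as the builder's posted
# collateral covers the bid. Simulation is deferred to after the slot.
class OptimisticRelay:
    def __init__(self):
        self.collateral = {}   # builder pubkey -> posted collateral
        self.pending = []      # blocks accepted without simulation

    def accept_bid(self, builder, block, bid_value):
        # Optimistic mode: skip simulation in the critical few hundred
        # ms at the end of the slot, if collateral covers the bid.
        if self.collateral.get(builder, 0) >= bid_value:
            self.pending.append((builder, block, bid_value))
            return True
        return False  # would fall back to pessimistic (simulate first)

    def settle(self, simulate, reimburse):
        # Simulation is spread over the subsequent slot. If a block
        # turns out invalid, the relay reimburses the proposer out of
        # the builder's collateral.
        for builder, block, bid_value in self.pending:
            if not simulate(block):
                self.collateral[builder] -= bid_value
                reimburse(bid_value)
        self.pending.clear()
```

The design choice the sketch highlights: the latency-critical path only does a collateral lookup, while the expensive validity check moves off the critical path at the cost of the relay underwriting builder faults.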
Hasu: Yeah, let's stick with that for a bit here. So ePBS, enshrined PBS, what is it? What is really the central problem that it's trying to solve or that needs to be solved to have ePBS?
Mike: Yeah, I think the high level problem is just trying to eliminate the need for the relay market. So ideally, we want some way to facilitate the auction between the proposer and the builder without needing a trusted third party.
Hasu: And why is it difficult to do that?
Mike: Yeah, it's difficult because the relays provide some services that we can try to provide in the protocol, but they manifest slightly differently there. In our recent ePBS post, I think we described PBS as a two-part mechanism. It has a commit-reveal scheme, which enforces that the proposer commits to a bid before seeing the actual block. And then it has an unconditional payment mechanism. The relays enforce the unconditional payment by basically checking the contents of the block: because the relays have the block in the clear, they can see that the proposer's balance increases from before to after the block is executed.
Hasu: Mike, what do all of these things tell us about what's the minimum viable ePBS going to look like?
Mike: Yeah, so the minimum viable ePBS would be a commit-reveal scheme to allow the proposers to commit to a builder block, and then an unconditional payment mechanism. The unconditional payment mechanism is important because we no longer have the relay to verify that the payment goes from the builder to the proposer. So the easiest version of this is what we've proposed, called top-of-block payments. The requirement here is that the builder submits, along with their bid, a valid transaction that pays the proposer the amount associated with the bid. This, along with enforcing at the protocol level that the proposer can sign on to a block header without seeing the block contents, is the minimal ePBS instantiation that we're considering.
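A rough sketch of the top-of-block payment check described here, under heavily simplified assumptions: no gas accounting, the payment is modeled as the literal first transaction in the payload, and the structures are illustrative stand-ins rather than the actual consensus-layer objects.

```python
# Toy check: does the builder's revealed payload carry an unconditional
# top-of-block payment matching the bid the proposer blindly signed?
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str
    value: int          # amount promised to the proposer
    block_hash: str     # header the proposer blindly signs

@dataclass
class Payload:
    block_hash: str
    txs: list           # first tx is assumed to be the payment

def valid_top_of_block_payment(bid: Bid, payload: Payload, proposer: str) -> bool:
    # The revealed payload must match the committed header...
    if payload.block_hash != bid.block_hash:
        return False
    if not payload.txs:
        return False
    pay = payload.txs[0]
    # ...and its first transaction must pay the proposer exactly the
    # bid value, before anything else in the block executes.
    return pay["to"] == proposer and pay["amount"] == bid.value
```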
Hasu: So I assume that would require then changes to how blocks work basically in Ethereum, like the format of them?
Mike: Yeah, so the important enforcement mechanism here is that if a proposer commits to a block, the builder has a chance to reveal their payload, and that payload can make it on-chain. So the trade-off in the design space is: how do we ensure that, once the builder reveals their payload, that payload becomes part of the canonical chain? There are a couple of different ways to do this. You can give the builder block fork-choice weight explicitly, which is the original line of thought that Vitalik's two-slot PBS and the rest of those designs went with. The most recent design we have is called the payload timeliness committee, where there's a committee that specifically attests to the availability of the payload from the builder, without actually giving explicit fork-choice weight to the builder block. So it does change the consensus rules. But the idea would be that most of the structure of the block remains the same. You just have to enforce that if the builder reveals their payload, it becomes canonical.
Chris: And if the builder doesn't reveal the payload, the payment is still executed.
Mike: Right, exactly.
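One way to picture the payload timeliness committee (PTC) idea, as a toy model: committee members attest to whether they saw the builder's payload in time, and that verdict, rather than fork-choice weight on the builder block itself, decides whether the payload is treated as canonical. The threshold and return values here are invented for illustration, and, per Chris's point, the builder's payment to the proposer stands either way.

```python
# Toy PTC: each attestation is True if that committee member saw the
# builder's payload revealed on time. Threshold is an assumption.
def ptc_verdict(attestations: list, threshold: float = 0.5) -> str:
    if not attestations:
        return "empty"
    seen = sum(attestations) / len(attestations)
    # Enough of the committee saw the payload -> it becomes canonical.
    # Otherwise the slot is treated as missing the payload, but the
    # unconditional payment to the proposer has already executed.
    return "payload_present" if seen >= threshold else "payload_absent"
```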
Hasu: Okay, so ePBS is one way that PBS is going to evolve, as we have heard. Another angle is that all of the rollups are looking to decentralize their sequencers in some way. We'll talk about what that means exactly, because different people can have widely different opinions. But one of the things they are looking at is PBS, though really it's part of a much broader design spectrum than you have on the layer one. So Jon, can you walk us through to what degree we need some form of PBS on Layer 2 at all, and how are these different teams thinking about it?
Jon: So I would say broadly, they have a lot more flexibility in their designs; that's the very TLDR of it. Ethereum, as I mentioned before, has this very strict set of constraints: we want to be very generalized, unopinionated, super permissionless, all of those conditions. That makes it much harder to optimize. The reality is rollups are going to have a lot more degrees of flexibility there. They don't necessarily need a gigantic permissionless set of sequencers. They can have potentially one, or a handful, or some permissioned set of them. And that makes it much easier to design the process, that interface between the proposers, who are more or less the sequencers here, and some kind of out-of-protocol builder. It makes it much easier if you know who all the parties are and they're able to have some sort of trusted interaction between them for proper execution and fulfilling their commitments. And the other part of it is that they can be way more opinionated than Ethereum is going to be. Rollups can play around with things like threshold encryption, with variations of first-come-first-serve with batch auctions, like Shin's proposal. There are going to be a lot of these different variations that are going to be more opinionated, and people are going to try different things. Basically, the better analogy for them in large part is Cosmos compared to Ethereum. Rollups are the Cosmos app chains of the Ethereum vision, realistically. They are not Ethereum itself. That is the whole point of what I was going back to before: Ethereum makes a certain set of trade-offs that are very difficult to deal with so that rollups, in large part, do not have to deal with those, and they can optimize for another end of the trade-offs. In large part, though, some form of PBS is likely going to arise or be necessary in them.
What that looks like will be very different. But for those same reasons as before, even when you constrain the search space, like when you do certain things like protocol-owned building or you constrain certain ordering rules, there are still potentially going to be degrees of freedom that you want to outsource to a competitive market, such that you are getting the best block that the sequencers are going to put in there at the end of the day.
Hasu: That makes sense. And another topic that we have touched already on in this call is PEPC. What is PEPC and how does it relate to PBS?
Mike: Cool. Yeah. So PEPC is a proposal from Barnabé. It stands for Protocol-Enforced Proposer Commitments. And the idea here is that it generalizes PBS by expanding the set of commitments that a proposer can make that are enforced at the block validity level. So in this new design, proposers can sign up for different block validity conditions that are applied to their block. This is often compared to the type of commitments that could be made through EigenLayer. I think the important distinction is that EigenLayer commitments are only enforceable at the execution layer, meaning they're only enforceable by slashing the stake of the validator after the fact, if they don't fulfill the commitments that they made. PEPC is a stronger commitment, or in my mind closer to the metal of Ethereum, in that the commitments are actually part of the fork choice rule and part of the state transition function. So if a proposer commits to something and their block doesn't satisfy that constraint, then it's not even able to be part of the blockchain, because of the commitments that they made. I like to think about the difference between ePBS and PEPC as the difference between homogeneous and heterogeneous commitments that the proposer can make. In ePBS, we're saying we're going to specifically enshrine a single version of the mechanism that the proposers and builders participate in. That could be a full block auction, where the proposer commits to a specific block hash and the builder has to reveal a payload that corresponds to that block hash. It could also be more general, like the proposer commits to selling their block production rights for the entire slot to the builder. So instead of specifying the block that the builder has to produce, they say: whatever the builder wants, they can make, as long as it's signed by a specific builder pubkey, for example.
In general, the space of commitments is just the single commitment that the proposer can make in enshrined PBS. PEPC is different in that different proposers can make different commitments from slot to slot. So the slot N proposer could say, "I only want to sell the first one million gas of my block. I'm selling it to this builder. The bundle that goes there has to be signed by that builder," for example. But the next proposer, the slot N+1 proposer, could commit to selling their entire block to a different builder. And that heterogeneity of the commitments is, I think, the important distinction between ePBS and PEPC.
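The heterogeneous commitments described here, different validity predicates per slot, could be modeled as a registry of per-slot checks. Everything in this sketch (the block structure, gas fields, and the two commitment types) is an assumption for illustration, not an actual PEPC specification.

```python
# Toy PEPC: each proposer registers its own block-validity predicate
# for its slot; attesters would only treat a block as valid if the
# slot's predicate holds.
def commit_partial_block(builder_pubkey: str, gas_limit: int):
    """'I only sell the first `gas_limit` gas of my block to this builder.'"""
    def check(block) -> bool:
        gas = 0
        for tx in block["txs"]:
            gas += tx["gas"]
            if gas > gas_limit:
                break  # past the sold region, anything goes
            if tx["signer"] != builder_pubkey:
                return False
        return True
    return check

def commit_full_block(builder_pubkey: str):
    """'I sell my entire slot to this builder.'"""
    def check(block) -> bool:
        return block["builder"] == builder_pubkey
    return check

# Slot N and slot N+1 proposers register different commitments.
commitments = {
    100: commit_partial_block("builderA", 1_000_000),
    101: commit_full_block("builderB"),
}

def block_valid(slot: int, block) -> bool:
    check = commitments.get(slot)
    return check(block) if check else True  # no commitment -> no constraint
```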
Hasu: Could you say that PBS is a very specific commitment protocol, in the sense that it allows builders to commit to validators and validators to commit to builders in a way that lets them exchange blocks for money without leaking the information? And then PEPC is a highly generalized commitment protocol where both parties can make, especially validators can make more elaborate commitments to the builders?
Mike: Yeah, absolutely. I don't see PEPC and ePBS as mutually exclusive in any way. I see PEPC as the superset of ePBS. And Barnabé actually mentioned this in his recent FAQ. He said, "The way to do PEPC might be to start with limiting the set of commitments that a proposer can make. And that commitment set might just be a single commitment, which is: I agree to sell my entire block to this builder. And the roadmap could evolve to open up the space of commitments that the proposer makes." And yeah, that would probably be the direction we go if we decide PEPC is the right roadmap.
Chris: And is the idea still to express these commitments as smart contracts?
Mike: I think the implementation details are still very much being ironed out. And yeah, there's people thinking specifically about PEPC-Boost, what that could look like in the MEV-Boost ecosystem. I think the research stage of PEPC is still in the very early days in the same way that most of the ePBS implementations are too.
Hasu: So imagine that PEPC is live. I know that runs counter to what you just said, it's still early research. But imagine it's live and there are only two possible commitments that can be made. Let's say it's full blocks and, I don't know, a slot auction, auctioning off your block in advance, whatever, right? Something else. So how do I now learn what commitments I should make? I can imagine there's a form of MEV-Boost, or some kind of block market, but instead of only showing me what the highest bid is, it also shows me what kind of commitments I have to make. And then how do I decide between those things? Am I just continuously picking whatever the highest bid is, the same way it works today, with pretty much no discretion?
Mike: Yeah. So I guess with PEPC-Boost, if we did this out of protocol, maybe it's easiest to start there. The proposer would broadcast their commitments, and as part of the block validity checks that the relay does, it would make sure that the builder block that's produced satisfies those conditions. With PEPC in protocol, I think the question becomes a little more complicated, because in order to enforce it at the fork choice rule, you really need that commitment to be encoded in the block data somehow. Basically, the slot N commitments need to be available to the slot N attesting committee, because part of their fork choice rule is going to be ensuring that those commitments are satisfied by a valid block produced at slot N. So the probable mechanism that fits here is: before their slot, the proposer at slot N needs to publicize the commitments they're willing to make for their slot. And in the slot N-1 block, those commitments are included and encoded in some way that is enforceable by the next round of attestations.
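The in-protocol sequencing outlined here, slot N's commitments carried in the slot N-1 block so that slot N attesters can enforce them, might look schematically like this. The data structures are invented for the sketch; real attestation logic operates on consensus objects, not dicts.

```python
# Toy model: commitments for slot N are published in the slot N-1
# block, and slot N attesters check the candidate block against them.
chain = {}  # slot -> block (a plain dict, for illustration)

def include_commitments(slot_n_minus_1_block, next_slot_commitments):
    # The previous block carries the next slot's commitment predicates.
    slot_n_minus_1_block["next_slot_commitments"] = next_slot_commitments

def attest(slot: int, candidate_block) -> bool:
    # An attester's fork-choice check for slot N: every commitment
    # recorded in the slot N-1 block must hold for the candidate.
    prev = chain.get(slot - 1, {})
    for check in prev.get("next_slot_commitments", []):
        if not check(candidate_block):
            return False  # commitment violated -> block can't be canonical
    return True
```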
Hasu: Amazing. I think, does anyone have any points that they want to make?
Jon: This is kind of a broad thing, I guess. A high-level way to think of these kinds of proposer commitments and the constraints you're putting on is that they're the other side of the coin of the new buzzword everyone talks about: intents. It's just from the opposite end. The general idea is that in a typical transaction-based model you're being very prescriptive: here's the execution trace, this is exactly the path of this transaction, this is what will happen. Versus this notion of intents, whatever it means, where you are generally giving some broader set of constraints: hey, I don't want to be so prescriptive about the exact execution path that you're taking, but here are the constraints that I'm happy with. As long as anything within this realm is the result of whatever you do, I'm happy with that at the end of the day. And then let someone else go figure out the optimal way to do it. Very similarly, this proposer commitments thing is the other side of the coin of that, where you're asking: what is the right balance of constraints that we impose on builders in this scenario from the proposer side of things? As opposed to just having a very prescriptive mechanism, which is: I will sign a commitment and you will give me a full block, and that's it. As opposed to, hey, what if we could say: you give me this full block, but I'm going to give you these constraints, like it has to have this certain type of transaction ordering, and I want this oracle transaction at the top of the block.
And then the rest of it, you do whatever is the most welfare-maximizing thing, whatever you get the most value out of. So both of them have that similar trade-off of: what is the right way to express these types of constraints in a way that is practical? Because when you have absolutely no constraints, it starts to become a potentially intractable problem that is just too difficult to be useful. And when you're too constrained, you're possibly destroying value, because you're enshrining something very concrete when there's a broader search space that you want to work around.
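One way to picture this trade-off is a builder searching over orderings subject to the proposer's declared constraints, keeping whichever feasible ordering is worth the most. The brute-force search and the toy valuations below are assumptions for the sake of illustration; real builders obviously don't enumerate permutations.

```python
# Toy "constrained block building": the proposer pins down some properties
# (here: the oracle tx must be first) and the builder optimizes freely
# within that realm. Valuations and names are made up for the example.

from itertools import permutations
from typing import Callable, Dict, List, Tuple

def best_block(
    txs: List[str],
    value: Dict[Tuple[str, ...], int],
    constraints: List[Callable[[Tuple[str, ...]], bool]],
) -> Tuple[str, ...]:
    """Return the highest-value ordering that satisfies every constraint."""
    feasible = [p for p in permutations(txs) if all(c(p) for c in constraints)]
    return max(feasible, key=lambda p: value.get(p, 0))

txs = ["oracle", "arb", "user_swap"]
# Toy valuations: the builder extracts more when the arb runs after the swap.
value = {p: (3 if p.index("arb") > p.index("user_swap") else 1)
         for p in permutations(txs)}
constraints = [lambda p: p[0] == "oracle"]  # proposer: oracle tx goes first

print(best_block(txs, value, constraints))  # ('oracle', 'user_swap', 'arb')
```

With no constraints the search space explodes; with fully prescriptive ordering there is nothing left to optimize. The constraint list is the dial between the two.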
Hasu: Let me ask you a philosophical question. So if in PEPC, a proposer can first make a commitment, then a builder has to honor it. And in SUAVE, a validator can request a block that has certain properties. So they are also basically enforcing a commitment. And they cannot see the contents of the block. So they have no discretion over like withdrawing whatever commitment they made. Is it the same? Is it different?
Jon: It's a similar idea from two different perspectives, I would say. Broadly, both of them are imposing some constraint, whether it's the user side of what they're telling SUAVE, like, "hey, here are the constraints that I want fulfilled."
Hasu: I didn't even mean it from the user side, right? It's really the validator that can say, give me the block that has like property X, for example.
Jon: Yeah. Basically SUAVE is the black box that matches both ends of those. Because what you're saying is, yes, the validator can tell SUAVE, "hey, give me a block that satisfies these conditions." And then on the other side, the user is sending things into SUAVE saying, "hey, here are my transactions. Just do something with them that satisfies my constraints." And then SUAVE is that thing in the middle that takes: okay, here are the validator's constraints, here are the constraints that all the users gave me, and I can match those together. What is the optimal outcome of this? Send it along to the proposer, and now they can kick it out.
Hasu: What do we learn from all of this? Okay, one thing I guess, commitments are very powerful. Anything else? Any takeaways from you guys on this episode before we wrap up?
Jon: The biggest high-level thing for me honestly goes back to the thing that we said in the first place: a lot of the criticism around PBS is very misguided, in that it's really a criticism of a specific mechanism that Ethereum has in place and is looking at. It is not really a criticism of the idea that, hey, there's naturally going to be a division of labor for certain kinds of specialization. And even when you make opinionated, protocol-owned building type stuff, that doesn't mean it's impossible to have any division of labor. So it's just realizing that there's always going to be this kind of separation of roles to some extent. And you just need to understand, in the context of your own protocol, what is the right place on that trade-off spectrum. Does it look like a very simple, very dumb, "hey, I sign a commitment, you give me a full block, and that's it"? Or is it a very opinionated interaction where there's some kind of outsourcing, but you're giving a lot of constraints and a lot of enforcement over that? It's a trade-off spectrum, and different protocols should have a different spot on it. It's not that PBS is good or PBS is bad; different versions of it make sense in different places.
Mike: Well said.
Chris: Overall, I think what is clear is that enshrining PBS is hard. It's a challenge. I think we have been making really good progress as a community towards that. And I think it makes sense to start, like we did with MEV-Boost, with an out-of-protocol way to experiment, and then iterate towards enshrining it. I'm very excited to see where it's going next, and to keep working on it with all of you guys.
Hasu: Okay, fantastic. Thank you guys so much for the discussion.
Mike: Thanks for having us on.
Jon: Thanks, guys.
Chris: Thanks. It was nice being here.
Hasu: Hey, Jon, what did you think about this episode?
Jon: Well, it took us like five tries or something like that over the past month, but it was worth it. It was a lot of fun doing this one. For background for the listeners, we first tried to do this episode, I think, over a month ago. We did it in Vienna, where the four of us, plus Tomasz and Toni, had spent a week together right after EthCC, which was a ton of fun jamming on all the PBS stuff.
Hasu: Tomasz from Flashbots and Toni Wahrstätter from the Ethereum Foundation.
Jon: Spent like a week jamming on the PBS stuff and then we tried to record it at the end of the week and just absolute awful audio quality on the laptop. Took a few tries to do it. Finally recorded it a couple weeks ago. And now we're finally doing the recap. Currently in the middle of SBC for me. Finally getting to put it together, but it was a lot of fun doing this one.
Hasu: Yeah, it's been a long way coming. I'm really glad to put this out. What was for you the highlight of the episode?
Jon: The highlight for me, I'd probably say, was talking about PEPC. It's at least the most fun thing for me at the moment, because I feel it's probably the most under-talked-about thing recently, relative to how much it will be talked about going forward. It's an idea that feels like it's been kicking around for a while; Barnabé had brought it up last year, and then it kind of went away for a few months. It was this fun thought-experiment thing. And then especially in the last few months or so, it seems to be coming back much more meaningfully. I'm also probably biased, because it's front of mind for me: I just came from listening to Barnabé give a presentation on PEPC two or three hours ago. But it is very interesting, because there's clearly a lot of thought being given to what PBS should really look like to the extent that it's enshrined in the protocol. And there's a very, very wide design space on the types of commitments that it makes sense to potentially have, and potentially even shorter-term, out-of-protocol versions of that stuff like PEPC-Boost. And in particular, you had just sent me the link right before this to MEV-Boost+ and MEV-Boost++, which is the idea from EigenLayer. It touches on a lot of the same ideas, and on the tougher part of those kinds of constructions. For brief context, we'll link it in the show notes. But for the listeners, MEV-Boost+ and MEV-Boost++ are ideas from EigenLayer which are basically partial block auctions, where you allow the proposer to opt into restaking commitments where they can say, "Hey, I agree that I'm going to sell the top half of this block. I'm going to agree to this, and then I'll get the block body, and then after that I can add in whatever I want at the bottom of the block."
And there are various reasons why partial block auctions might be interesting. But the initially cited reason why this came up, actually a year ago now, almost to the day (I remember it first came up at SBC last year), was particularly as an anti-censorship tool, similar to the idea of inclusion lists. It's a way to give proposers back agency: "Okay, even if the builder is censoring, I only have to sell them the top of the block. That's what has the value anyway. And then I can stick something in the bottom of the block." The tougher part of doing this kind of thing, putting much more control in the proposers' hands as opposed to having the protocol enforce proposer commitments, is the fact that the proposer can still deviate. So these schemes that are secured by restaking are very challenged by the fact that if the proposer agrees to sell the top of the block, but they can make more by deviating than they'll be slashed, then they're incentivized to do so. The simplest example of that is, say there's a sandwich trade in there. It might be for a small amount of profit, but as we've seen with the Low Carb Crusader, unbundling it could be very profitable. It might be worth it, if they ever got sent a bundle through MEV-Boost+, to say, "Hey, I'm going to unbundle this and then just get slashed my 32 ETH or whatever."
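The economics Jon describes reduce to one comparison. This is a deliberately toy model, not anything from the actual MEV-Boost+ proposal: a restaked commitment is only credible while the profit from breaking it stays below the stake at risk.

```python
# Toy credibility check for a restaking-secured commitment: a rational
# proposer deviates iff the extra profit exceeds what gets slashed.

def will_deviate(deviation_profit_eth: float, slashable_stake_eth: float) -> bool:
    """True when breaking the commitment pays more than the slash costs."""
    return deviation_profit_eth > slashable_stake_eth

# A sandwich worth unbundling for 50 ETH vs. a single 32 ETH validator:
print(will_deviate(50.0, 32.0))    # True: the commitment is not credible

# The same opportunity against ~1000 ETH of pooled slashable stake:
print(will_deviate(50.0, 1000.0))  # False
```

Raising the slashable stake, however it's achieved, is what moves a given opportunity from the first case to the second.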
Hasu: The MAX_EFFECTIVE_BALANCE change could really help with that, which is basically the idea of combining many validators into one. So a single validator wouldn't just have 32 ETH staked to it, but could really have hundreds or thousands of ETH. And in that case, there would be much more value available for slashing. It's interesting, right? MaxEB is wanted for very different reasons, but it could also help here.
Jon: It is possible to do that even without MaxEB. MaxEB is a cleaner way to do it, but someone could just opt into a restaking commitment that ties all of their validators together: "Okay, if I screw up with this one proposer, all of these, say, 10 proposers are linked, and you could slash all of them too."
Hasu: Yeah, like a shared identity layer on top or something.
Jon: The problem with doing that kind of thing is obviously that it becomes super centralizing: "Okay, well, now you need a million dollars to make a commitment, and now the small guy can't do that anymore." That's the trade-off. The nice way to solve it is to have these types of commitments be protocol-enforced. Now it's no longer "hey, I lose my 32 ETH if I deviate from this thing." It's: if I try to deviate from this thing, the entire attesting committee will just reject the block as invalid. So seeing that design space get played out a little bit more is very interesting. And it's clearly getting more thought in the last couple of months, which has been a lot of fun.
Hasu: So it's top of mind for a bunch of people but do you think it will be big in terms of impact? Do you think it will be implemented? Do you think it will be heavily used?
Jon: I still have mixed thoughts on this. I definitely think some forms of it will happen. I think there are certainly enough incentives to do something like PEPC-Boost. Whether it makes its way to being in protocol, I am very mixed on; I don't have as high a confidence there. It feels very likely that someone will do something like a PEPC-Boost out of protocol. There are various commitments that you can enforce at a different layer where, as opposed to the MEV-Boost+ type of thing where you're relying on the proposers, you basically hand that off to a relay that you trust to enforce the commitments. Something like that definitely makes sense, and I think there are variations of this which could be interesting and probably worth experimenting with. Does it get to the point where it makes it into the protocol for Ethereum? That's where I have a lot of questions; it's much harder to tell. There are fundamentally useful things that these commitments enable, which is why you will definitely see these types of things happen on a lot of other chains that are very opinionated, like we talked about with the protocol-owned building type stuff. Those are specific implementations of PEPC in a sense, where PEPC is the very general version of that. It's difficult for me to say whether something like that would ever make its way into the Ethereum protocol. It's also incredibly early stages. There is no concrete implementation of what something like PEPC would look like; it's just the very fun thing at the moment to at least think about. It's a very broad generalization of PBS, which is super interesting. Because it does seem like there's a lot of consideration right now of what, if at all, ePBS should look like. Rethinking it from first principles leads you to, "Okay, what is the most general idea possible that we could put on top of a potential ePBS?"
Something like PEPC, which is cool.
Hasu: Yeah, whether we do it or not, I think it's good to basically explore the entire design space and think about almost the most extreme option that could be built, the most generalized one. And then, I mean, it may be the case that we land somewhere that's totally different from that, right? This is something totally tangential, but it's something I think I learned over the two years that I've been doing strategy work at Flashbots, and at Lido as well: there's a human bias, if you've generated one good option, to just stop and do it, you know? And you really have to force yourself actively to keep asking: and what else? And what else? And what else? It's so difficult, but it's so important, and for protocol design even more so. I can really see that at work here.
Jon: And I think they've done a good job of that with ePBS in particular. A year, year and a half ago or whatever, it felt like, oh, this two-slot PBS design, this is what we're definitely going to do in the short term. And yeah, it seemed like it works, and you could implement it. I like that there's been a lot more fundamental consideration throughout the course of this year: okay, from first principles, why do we really want this thing? What, in the first place, are the properties that we need out of it? Do we even need to enshrine something to get those properties? And are there better, more general ways to do it? A lot of that exploration, I think, is really valuable, and it's producing a lot of very interesting stuff that probably gets implemented to some extent. But even if some of it doesn't get implemented on Ethereum, it's very valuable research that's going to be incredibly useful for a lot of L2s who are going to be thinking about the same ideas as they go through this, and who are going to be even more likely to experiment: like, sure, we'll go use this thing, this sounds really valuable.
Hasu: Yeah, I mean, for me, fundamentally, if I now see a proposal that is just an implementation, that doesn't start with: here's a description of the problem that we're trying to solve, here are all of the constraints, and here are five different approaches that would be possible, with their trade-offs, and for reasons XYZ we would suggest this one, but it deserves more research... That's the kind of clarity of thinking that I think you need in the future to make any changes to Ethereum, or really to any kind of open protocol. And if I don't see that, then almost by default, I'm against it. But I think we're increasingly moving towards that, and it's very good to see.
Hasu: So one thing that I really liked, that I think came out really well in the episode, is that PBS is not an implementation. Kind of building on that previous point, right? PBS really is a design philosophy that is in itself extremely broad. All it really says is that there are incentives for division of labor in the protocol, or, framing it differently, for protocol actors to outsource part of their duties to external actors who might be more specialized, and those actors are explicitly not in the protocol. But what the protocol can do is provide as expressive and as trustless an interface as it can, to make this outsourcing as easy and as fair and as egalitarian as possible. Because if it doesn't, then what you see is that some protocol actors might be better at outsourcing than others. And this is kind of what we saw initially with MEV in the pre-proposer-builder-separation days, right? There wasn't such a trustless interface, or a way for validators or mining pools to really discover: okay, who are the searchers, and now the builders, that I should be working with, and so on. So just zooming out and looking at this entire thing as a design philosophy that's strongly rooted in fairness and decentralization of the protocol: that was, for me, I would say, the highlight.
Jon: Yeah. Yeah, I like that. And it's definitely been really interesting to see. I've noticed this more over the past several months, particularly as PEPC has gotten a bit more attention. It's a bit of what we talked about in the episode, where a lot of these ideas are almost thought about as opposites of each other: there's the Ethereum PBS, and then there's the Cosmos protocol-owned building, or the more opinionated things. But when you do the more soul-searching of, okay, fundamentally what are these things, and you look at things like PEPC, you realize how many parallels there actually are across those different systems. And, hey, they actually work really well together; it's not one or the other. They very much do fit together in different ways, and they look very different in different ecosystems when you have different goals. Watching how the pieces actually fit together, when you just approach from different ends, has been very cool.
Hasu: Yeah. I have to give you a shout-out, I think, especially for that with your efforts around proof of governance, right? What you're doing very effectively is removing politics and ideology from what should really be a technical subject matter. Just because the Ethereum ecosystem and the Ethereum layer one have PBS doesn't mean that the exact same implementation should also be the right one for layer 2s, which have totally different needs and goals and constraints, right? So it's really about taking the politics out of it, approaching it from first principles, and really seeing: well, these are all part of the same design family, and different implementations work best under different conditions. And they are all fair game, right? It doesn't matter where they were invented, whether in Cosmos, or by the Ethereum Foundation, or by Flashbots. We're here to build the best crypto ecosystem that we can. This is something that I see very heavily in your research.
Jon: Yeah, appreciate it.
Hasu: One thing that you pointed out to me that we didn't talk about much in the episode was the question whether to enshrine proposer-builder separation or not in Ethereum. How do you think about that?
Jon: Yeah, it was weird. I felt bad that we didn't cover this; I feel like it was the most obvious thing for us to cover. And it was also right after Mike wrote the post on a lot of this stuff. It's really the core question for PBS, but also for so many other things tangential to the protocol right now. PBS, restaking, PEPC: a lot of them touch different areas where the question is, what is the boundary of the protocol? Again, we'll shill it: go look at a bunch of Barnabé's writings and presentations on this, like "Seeing like a protocol"; his treatment of what the boundaries are is great.
Hasu: You should just call the episode the ghost of Barnabé.
Jon: Pretty much.
Hasu: Yeah, like the ghost of Christmas past or something.
Jon: We're quoting him for half of it. But yeah, that is a lot of what it is: what is fundamentally the protocol's boundary? What is its role? What should be in protocol? What should be out? That is the fundamental question a lot of the researchers at the EF working on PBS are asking right now. And it's been interesting; there's definitely been a bit of a change here, and this was a lot of what we spoke about in Vienna after EthCC. For a long time, a lot of the reason to enshrine PBS was thought of as: okay, we'll do ePBS and then the relays go away. That was kind of the reason to do it. And there's starting to be more realization lately that, okay, even if we do ePBS, relays probably stick around, or something very much like them, in a reduced role from where they are today, where they're significantly less systemically important, less relied upon, and provide less of an advantage, but where there is probably still an incentive to use some sort of out-of-protocol solution that is more optimal than using the enshrined PBS pathway. So even if we do this enshrined PBS, where there is a canonical P2P pool where the bids live and where you're supposed to listen, what are some advantages that an out-of-protocol actor like a relay could still provide you? A couple of the simple ones seem to be pretty important. One really simple one is just flexible payments. The way you would do payments in this kind of ePBS, the main idea is probably something called TOB, top-of-block payments, where as a builder I'd be able to send you a bid such that even if I don't give you the block body, you could take the payment. So that works well.
That covers most cases, but there are certain times where you would want more flexible payments. Let's say this is a gigantic MEV block where I'm going to make a thousand ETH in the block, and I'll only have that thousand ETH after the execution payload runs. So I can't send you the thousand ETH as a top-of-block payment, because I actually don't have it yet. The only way I could send it to you is if you check that at the end of the block, hey, I made the money and I can actually send it to you. So that's a service that really...
Hasu: The relay is fronting the money but only atomically. For the relay it's trustless, right? But that is something that the protocol cannot do.
Jon: They're effectively guaranteeing to the proposer: hey, don't worry, the builder is good for this. The block definitely captures it; we're going to pay it to you at the end. So that is one scenario where it is still potentially useful to have some sort of third party mediating this fair exchange between the proposer and the builders. That may be more of an edge case. I'd say the more pointed one is cancellations during the bidding process. A lot of builders, in particular the CEX-DEX arbitrage builders, will be continuously updating their bids throughout the slot. And there are times where they will want to cancel their bids, because, you know, prices moved off-chain and I actually need to lower my bid. You can't cancel if you broadcast something to a public P2P mempool; there's no way to do that. But a relay can do that, because we just have the limitation that, as a proposer, you can only call getHeader once. So they'll call it at the end of the slot, and I can cancel before then. A relay could also run private auctions, which is potentially helpful for some builders who don't want to reveal everything. And then the last thing is just simple latency: relays are probably going to offer latency optimization services, and, if they're absolutely optimized, are probably going to get a faster connection between the builder and the proposer than sending the bid to the main P2P mempool. So it's very possible that you could get your bid in slightly later towards the end of the slot if you're using the relay instead of the P2P mempool. It gives you these on-the-margin optimizations. And that becomes the fundamental question: is this even the relay as we think of it today?
Hasu: I was going to ask you.
Jon: Exactly. Is it the relay or is it not? It's almost a different role; it's almost a latency optimizer, or whatever you want to call it. It's not a fundamental role that is needed anymore just to mediate the fair exchange between the proposers and the builders. And that is the interesting difference. Today, if the relays go down, the whole PBS thing basically doesn't work; there is no interface between the builders and the proposers, and it's like, okay, you've got to build blocks locally now. In this world, if the relays go down, okay, maybe the latency at the end of the slot is slightly suboptimal, and there are times where you can't cancel bids. They're optimizations, but it's not that PBS fundamentally stops working. So it's a very large delta: they're sort of an optimization service at that point, as opposed to a fundamental role in the middle of this thing without which it doesn't work. It's a very different kind of position.
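The cancellation service Jon describes a moment earlier can be sketched in a few lines. This is a toy, not the MEV-Boost relay API: the only assumption carried over from the discussion is that the proposer calls getHeader once, near the end of the slot, so whatever bid is live at that instant wins.

```python
# Toy relay auction: builders can replace or cancel bids right up until
# the proposer's single getHeader call. A public P2P gossip channel
# could not support cancellations like this, since broadcast bids are
# out of the sender's control.

from typing import Dict, Tuple

class ToyRelay:
    def __init__(self) -> None:
        self.bids: Dict[str, float] = {}  # builder -> bid value in ETH

    def submit_bid(self, builder: str, value_eth: float) -> None:
        self.bids[builder] = value_eth    # replaces any earlier bid

    def cancel_bid(self, builder: str) -> None:
        self.bids.pop(builder, None)

    def get_header(self) -> Tuple[str, float]:
        # Called once by the proposer: the highest live bid wins.
        return max(self.bids.items(), key=lambda kv: kv[1])

relay = ToyRelay()
relay.submit_bid("builder_a", 0.10)
relay.submit_bid("builder_b", 0.12)
relay.submit_bid("builder_b", 0.08)  # price moved off-chain: lower the bid
relay.cancel_bid("builder_a")
print(relay.get_header())  # ('builder_b', 0.08)
```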
Hasu: So do you envision that if we build this form of ePBS that basically makes the bids trustless, those trustless, or quote-unquote in-protocol, bids could be used even for services where the quote-unquote relay would be used as well? Like, can they be used together, or would it be either-or in your mind?
Jon: Yeah, the relay could still just send along those trustless-type payments, so they could work together in that form. It would certainly make sense for relays to support the protocol-approved type of payment in addition to the flexible payments, where they're needed.
Hasu: Because the in-protocol payment doesn't really have any downsides, right? It's the P2P layer that the disadvantage comes from. And so you can use the centralized relay rails, but with the in-protocol trustless payment.
Jon: It's also potentially actually better, I should point out, in two ways. One is trustlessness: you really don't have to trust them. And two, potentially latency optimization: if you can avoid having to do the flexible bottom-of-the-block payment, that is better. Because if I get to send the top-of-block payment, then the relay doesn't even have to check anything; they don't need to waste time simulating anything. If the relay has to simulate the block and then check at the bottom, like, hey, the payment is there, then that takes additional latency.
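The latency difference between the two payment styles can be made concrete. This sketch is purely illustrative (the types and the trivial "simulator" are made up, not an ePBS spec): a TOB payment is valid regardless of the block body, while the end-of-block payment only exists once the whole payload has been executed.

```python
# Why top-of-block (TOB) payments are cheap to verify: no simulation is
# needed, since the payment doesn't depend on the payload. A flexible
# end-of-block payment requires executing the full block first.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Bid:
    builder: str
    tob_payment_eth: float            # unconditional, valid without the body
    payload: List[Tuple[str, float]]  # (tx, realized profit) pairs, revealed later

def verify_tob(bid: Bid) -> float:
    # No simulation: the proposer can take this payment trustlessly.
    return bid.tob_payment_eth

def verify_end_of_block(
    bid: Bid, simulate: Callable[[List[Tuple[str, float]]], float]
) -> float:
    # The relay must simulate the whole payload to confirm the builder
    # actually captured the value it promised to pass along.
    return simulate(bid.payload)

bid = Bid("builder_a", 2.0, [("big_arb", 950.0), ("swap", 50.0)])
simulate = lambda txs: sum(profit for _, profit in txs)  # stand-in for execution

print(verify_tob(bid))                     # 2.0
print(verify_end_of_block(bid, simulate))  # 1000.0
```

The thousand-ETH case from earlier in the conversation is exactly the one where only the second, slower path works, which is where a relay-like guarantor still earns its keep.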
Hasu: Yeah, something that I really regretted after releasing this post, or contributing to it, is even calling it a relay. Because as you say, it has pretty much nothing to do anymore with the relay that we have. And so to say that, quote-unquote, "relays will stick around" after ePBS is not accurate; you're kind of moving the goalposts, simply because it's not a relay anymore. So I think I'm landing on the more optimistic side of this whole debate: we should do this. I think it's a good idea. And even if some form of out-of-protocol infrastructure may still be used, the structural importance of that infrastructure will be very low. So yeah, I think it's a good idea.
Jon: It definitely does seem to be quite additive to put it in there. And to your point, I probably do agree; you could honestly call them something else, because they're not a systemic role anymore. It is just this kind of additional service where it's unclear what exactly the delta is. It's effectively massively reducing the cost of altruism, as a way to put it. Today, your options are: build a block locally, or do the full PBS, so there's a gigantic delta between the two. In this world, the difference would be: okay, use the enshrined canonical PBS, or use this latency-optimized relay thing and maybe earn, like, 1% more. It's unclear what exactly that number is. It may be such a small margin that, for most people, it's honestly not even worth doing at that point. It's such a small optimization, I don't really care; it doesn't even justify the cost and the additional risk of running and maintaining out-of-protocol software. Just forget it: the other thing works 99% as well, and I don't care about the last five milliseconds. That exact delta does matter, and it's unclear exactly what it is. But yeah, it is a very fundamentally different role, as opposed to being the central point that is holding up the whole PBS auction.
Hasu: Yeah, I agree. One more thing that I want to touch on: I was giving Mike a bit of a hard time asking him about the different governance entities in Ethereum, their power distribution, who maintains what, and what this means for the decentralization of the overall ecosystem. Kudos, he gave a good answer, but I still want to talk about this a bit more with you. Right now, it's pretty much the case, I think, that the Ethereum Foundation is working on ePBS with the help of various other researchers; Flashbots is contributing, as are various other parties. Meanwhile, Flashbots is primarily maintaining MEV-Boost, and there the Ethereum Foundation primarily supports with research; folks like Toni, for example, do some great monitoring and data analysis, and increasingly academia is starting to contribute as well. So what would you think about the idea that, on the one hand, you could enshrine it? I think Mike especially was hinting at that idea, right? So you could resolve this power... it's not a struggle in any sense, but you could address this separation by just saying: okay, PBS is now part of the protocol, and so the protocol devs also have to work on it, and make sure it stays up to date and stays optimal. But the alternative may be to basically create more sustainability, and maybe governance, around PBS, but outside the protocol. Between these two options, what do you think?
Jon: Part of it's a time-horizon question. I don't think you need to rush to enshrine something because of this; you definitely want to take your time on it. In an ideal world, yeah, you solve these problems and you enshrine stuff, and you don't have to rely on different companies with different interests to be funding and developing it. It is the fundamental, recurring trend with Ethereum. Even outsourcing execution to rollups is somewhat the same trend, honestly: you start to realize, hey, maybe this actually works really well if we let the free market take this thing and keep innovating on it over time. It particularly becomes like that depending on your view of how much this thing needs to keep being updated over time. That becomes a big part of it, quite frankly. If you start to have more confidence of, okay, this is a mechanism which is very simple, very forward-compatible, not very opinionated, something which works and can last the next 10 years, 20 years, whatever, then you feel pretty good about enshrining it. It's really simple, it works; you don't need to leave people to keep innovating and changing it over time the way rollups or something else will keep changing. So part of it changes based on that view: how confident are you that this thing is actually static and can stay there for a long time? You want to enshrine it, you want to put it in the protocol if possible, because leaving it to different companies, where people have different interests, leads to potentially worse outcomes over time. So it is suboptimal, I would say, in the short to medium term at least. I definitely think it makes sense; you don't need to rush to do these things.
The main pressing result of that, though, is that we do need to figure out funding for a lot of the tangential stuff, particularly relay funding. That is the main question behind this PBS guild and similar ideas. Part of the benefit of ePBS, in my mind, is that it gets rid of the relay funding issues: at that point you should not get any funding, you are a latency optimization service, you're not fundamental to the protocol. But the big question today is that we're not there, and the relays are pretty fundamental to holding up the PBS process, at least for the untrusted participants. In the absence of relays, the top 90% or whatever number of validators and the top 90% of builders would be fine; they could trust each other. Lido and beaverbuild could say, hey, we know each other, we can trust each other, it's fine. The relays are fundamental to upholding the guarantee for that last 5% or 10%, whatever the number is, of participants who would not otherwise be trusted to receive something from a builder. So they are fundamental for that. And as of right now, they're not a business that's able to monetize that. So the question is how we fund these until we hopefully have ePBS at some point in the next couple of years, whatever it is. For today, people have to run these relays, it costs them money, and it may not be profitable for them to do so. Figuring that out is one of the main directives of something like this.
Hasu: Yeah, I would agree. How static you can make it, and how close you think you are to something that can be static, is for me a key determinant of whether you want to pull it into the protocol. Before that point, it really makes sense to address the relay sustainability issues. Why are we talking about this? Because relays have a hard time monetizing in a market-driven way: if some relays charge fees, they become too easy to bypass, and that really creates an incentive for a builder to just run their own relay, or for a pool to run their own relay. At that point they can do it cheaper, better, and faster. So the current market structure really doesn't support monetization through fees. To me, this suggests that what we need is an independent entity that can support relays through grant funding. In my view, this could also solve some of the other issues: it could make the governance of MEV-Boost more open, it could support the development work, and it could possibly even govern not just MEV-Boost but the broader umbrella idea of PBS that we have been talking about, and for example help layer 2s figure out what they should be doing with regards to this.
Jon: Yeah, certainly on the base layer, depending on where the ePBS road goes, I definitely see that being valuable. I've mentioned this before: I don't see this needing to play a role for layer 2s at all, in my mind. So I'm curious to hear your thoughts on that.
Hasu: Yeah, why not? So I'll give you my case. The case is not at all about sustainability, right? I'm not saying that entity should fund the research for layer 2s; that would be crazy. I think the layer 2s can pay for it, and especially they could contribute to something like that, as the grantees, if you will. I mean it more in the sense that there isn't a whole lot of expertise right now in building this. So you basically want to bring parties together: the parties who need it and who can fund it with the parties who can actually build it and who deeply understand what goes into making an efficient and robust implementation of this. And yeah, that's my thinking behind it. What about you?
Jon: So what exactly would the L2s' involvement in this really be, then? They would be funding it, if anything, not being a recipient of it. I don't see why you need another.
Hasu: Oh, sorry, I maybe used the wrong word. Yeah, they would be funding it, and they would outsource the building of their form of PBS, possibly to the same parties who do it on layer one, or to this kind of body. Do you think they want to own the building and maintenance of this?
Jon: Yes. For one, I'm influenced by the fact that I think these are very different problems in many ways, layer 1s versus layer 2s. And I think it probably makes sense for different rollups to have very different, opinionated designs of PBS; it can look very, very different from one rollup to another. So part of it is that. Part of it is also that I think this is something they fundamentally do want control over. For some of them, it is a pretty important selling point of what they're going for. Arbitrum is probably the simplest example, with a lot of their research around first come, first served and how that's evolved over time. That is a pretty important thing to them; it's what they're saying to users: hey, these are the types of guarantees we care about a lot. So I don't see them wanting to outsource that, saying to some other entity, please tell us how to do PBS. I think they like having... different teams having opinionated and different stances on some of these things. So I don't know that they really need to outsource it, and they certainly have the resources internally to do it. And this is where the governance and ownership of this type of committee matters a lot: you obviously do not want to be outsourcing this to some committee that has some other agenda behind it that is not aligned with what you're doing.
Hasu: Yeah. No, I completely agree. Whatever PBS is developed for a layer 2 has to adhere to their policies. I think it would be an unsustainable situation, though, if every layer 2 had its own implementation of this, because it basically becomes a security nightmare. You see all of the things that can go wrong on Ethereum layer 1 already. So I think standardization across at least a few Schelling points makes sense: maybe you have two to three different flavors, each with its own customization options, or different policies are enforced by governance, right? Proof-of-governance style: you do have PBS, but the policy comes from governance, and governance monitors the builders and whitelists them. I think something like that is probably a good middle way, because if you really have a unique implementation for everyone, I don't know that it's very trustworthy, to be honest.
Jon: Yeah, I think what you described there is closer to where I envision it happening: there are going to be standards within the different verticals. You start to see this already with Optimism talking about the Superchain and the Law of Chains, where each chain within that Superchain can have meaningful flexibility. You can have different sequencer designs, and maybe you use the shared sequencer or maybe you don't, but there are certain approved standards: hey, you have to abide by this if you want to be part of this ecosystem. And certain opinionated decisions around something like PBS or allowable sequencer designs could certainly fall within there. Within this Optimism Superchain, here are the things that are acceptable, that are approved, governance-stamped: this works, we approve of this, this works really well. Similarly for the Arbitrum stack and the Polygons and the Starknets and so on: within each of those verticals, I would imagine the core team is doing a lot of research on the approved approaches they think work really well. That makes it very easy to spin up a new chain within that ecosystem. Because I agree, if literally every single chain out there has to do this itself, it's insane, it's impossible. My guess is there are a few basic standards that may look different from one ecosystem to the next. For the Arbitrum one, for example, they are still pursuing some variation of first come, first served, whether it's Time Boost or whatever, and that is going to look very, very different versus slapping something like MEV-Boost on another rollup ecosystem.
So I think there will be certain standards across different ecosystems that may be useful in different places.
Hasu: Yeah. It's also very important to have standards on the builder side, for what it's worth, right? Because if you're designing this, you want to make it as easy as possible for builders to join, and you want them to have the same interface that they have to other rollups, especially in a cross-domain world, right? A lot of your value basically comes from making it easy for the same block builders who may also build Ethereum, Optimism, Polygon, Arbitrum, whatever, to build your chain as well. So I think there's a huge incentive for standardization of these interfaces. That almost gets us to the vision for SUAVE and why you want a shared mempool, a shared block building layer. Yeah.
Jon: Yeah. And in particular on that: think of this future where there are many rollups and each has some variation of PBS or something that looks similar to it. Who are the builders across these ecosystems? You'd probably agree that we're already pretty far from an optimal state in the builder market on Ethereum, where two or three entities build the vast majority of blocks. What is that going to look like when we have a bunch of different rollups? What's the builder market on rollup number 42 going to look like? It's probably not going to be 1,000 different people who are perfectly competitive with each other; I would be rather surprised if that's what it looks like. You start to realize, okay, there's a lot of effort being put into Ethereum today and already it's quite imperfect. Where does that equilibrium end up? Just assuming that there's going to be a bunch of competitive builders for all of these million different chains seems highly unrealistic, and you probably need a more holistic solution: how do we decentralize this role and constrain its power more meaningfully?
Hasu: So on the point of censorship resistance, Jon, we are seeing a lot of proposals around giving proposers more agency, right? Whether it's inclusion lists, MEV-Boost+, or PEPC. And in MEV-Boost itself there's now min-bid, I believe, which is a feature that lets you build a block locally unless the value of the block you can get from the builder market exceeds a certain threshold, and that threshold you can configure. So you can basically say, let me build the top 70% of blocks through MEV-Boost and the bottom 30% locally. These are all ways of giving more agency to the proposer. I think we fully get the idea of why you would want this: of course, you want to make the protocol more censorship resistant, et cetera. But do you think there's also a market demand from validators to actually use this?
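[Editor's note] The min-bid behavior described here boils down to a single threshold check. A minimal sketch of that logic, with illustrative names only (mev-boost exposes this as a `-min-bid` flag denominated in ETH, but this is not its actual internal code):

```python
# Hypothetical sketch of the min-bid decision in MEV-Boost.
# The validator configures a floor in ETH; if the best builder bid is
# below it, the validator falls back to building the block locally.

WEI_PER_ETH = 10**18  # bids arrive denominated in wei

def should_use_builder_block(best_bid_wei: int, min_bid_eth: float) -> bool:
    """Accept the builder's block only if its bid meets the configured floor."""
    return best_bid_wei >= int(min_bid_eth * WEI_PER_ETH)

# With a 0.05 ETH floor, a 0.04 ETH bid triggers local building,
# while a 0.06 ETH bid uses the builder's block.
print(should_use_builder_block(4 * 10**16, 0.05))  # False -> build locally
print(should_use_builder_block(6 * 10**16, 0.05))  # True  -> use builder block
```

The "top 70% through MEV-Boost, bottom 30% locally" framing follows from this: raising the floor shifts more low-value blocks into local building, trading some revenue for censorship resistance on exactly those blocks.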
Jon: A big part of my worry is that there's a market demand to not have it in place; that it's very much the opposite. So much of the discussion, particularly early on with ideas like inclusion lists, MEV-Boost+, and proposer suffixes, was about the trade-off between wanting to give proposers back agency and trying to keep them dumb: hey, we want to have statelessness, we want to be compatible with designs where they don't have to enforce these things. At this point, I pretty confidently feel that the bigger trade-off is realistically on the practical, legal, and regulatory side of things. I don't think most people want the agency thrown on them: hey, you are the person who enforces censorship resistance on all of these things. And it's difficult for anything like Ethereum to design itself around guessing at the current estimation of what regulations in a given jurisdiction are. You don't want to be designing based on, oh, I as a validator think that maybe this is okay legally to do. The practical reality is that so much of crypto regulation is incredibly unclear in its guidance. It is a very practical, simple point: it is very possible that a lot of validators are going to be uncomfortable saying, hey, I'm going to enforce putting all of these OFAC-sanctioned transactions in there. There are reasons that relays and builders are understandably personally uncomfortable doing that, and you would imagine that a lot of large validators will have similar stances. If it is unclear, regulation-wise, what we are supposed to be doing, there may be hesitation to exercise those rights, even if they were given them. That is a meaningful fear of mine: there is an assumption that if we give proposers the tools to enforce censorship resistance, they will use them.
A lot of the other concerns around it were, oh, they have to be altruistic to use something like an inclusion list; they don't make money by doing it. I don't really think that's a concern; those kinds of incentives are very negligible. It is entirely: are people going to feel legally comfortable doing this when there isn't really upside for them? So that is a meaningful question in my mind. And that is why I've continued to be increasingly interested in the privacy side of things, various forms of encrypted mempools, which are the much more encompassing solution: providing censorship resistance while also giving absolutely everyone plausible deniability, effectively throughout the entire supply chain. Whereas the inclusion lists and MEV-Boost+ side of things says, we are going to put it squarely on one person's back: hey, this is up to you, please enforce censorship resistance. That works fine if you assume that everyone is willing to do that. It's hard to assume that when you have a permissionless protocol and a lot of large entities who are probably going to end up in those shoes. But removing everyone's agency to do anything at all, where you have to let everything through, seems to be the better long-term solution for providing a lot of these guarantees. It was something we touched on a bit in the episode: things like SUAVE and threshold-encrypted mempools and different variations are very often presented as, hey, this is an MEV tool. They really are a censorship-resistance tool just as much, and those do go very much hand in hand; there's a reason they go hand in hand. And that is very useful in these kinds of situations.
Hasu: Yeah, I completely agree. I fall in the more bearish camp on any kind of proposer agency tools. I think there is a design constraint in all of this, and it is that proposers don't want agency. So a solution that requires them to exert agency, in my book, is not going to solve the problem. What you need is a solution that's compatible with proposers not having agency, not wanting agency. And the idea of an encrypted mempool, an encrypted computing environment where blocks can be built, where you can still have efficient block building on top of private data, is to me the only solution that I can see to this problem. And yeah, very bullish.
Jon: That certainly seems the most all-encompassing solution to me. The question that I still have, and why I probably still lean somewhat positive on something like inclusion lists, is: is it detrimental in any way to add it, or is it purely additive? Let's say you added something like inclusion lists, and in the bad case most validators say, hey, I am not legally comfortable doing this, but a good portion of them still use it. Is that worth implementing? Probably, in my mind. And it's an open question whether they even want that tool available to them in the first place, for the reasons described. Nothing is a cure-all among these short-to-medium-term solutions. But yeah, it does still seem to be a valuable tool nonetheless. That's kind of where I still am: it does help on the margin; there are certainly going to be some amount of validators who will use this, who are comfortable doing so.
Hasu: I would agree.
Jon: And where is it worth it? What is the tipping point where it's worth it? That's what's tougher.
Hasu: I would agree. The solution to censorship is probably a patchwork of different options that all work together and that are used by different parties based on their risk preferences. I don't think inclusion lists hurt in any way. I don't think they will be used heavily, but you can argue the same thing for something like min-bid in MEV-Boost, right? That's also used by a fair amount of the network; not a big amount, but a sufficient amount, right? And then you can say maybe for a system like Ethereum that's okay, to give it the property that, at any point, a sufficient number of validators will include your transaction. It doesn't need to be the case that all of them have to include it all the time. If you're really trying to target the latter, it's probably unfeasible; you're going to run your head against the wall. It's pragmatism over ideology. That's a good final word to wrap it up. I think this is really the whole idea around PBS, right? PBS really is the win of pragmatism and realism about market forces over ideology.
Jon: Yeah, I will strongly agree on that. I love that as a final note.
Hasu: Jon it's been a pleasure.
Jon: Likewise. Took the long road to get this done, after five or six tries in a month or so. Apologies to the audience; we'll hopefully be quicker on the next episode. But it was a lot of fun.
Hasu: Thanks for joining us today. As always, nothing we say here is investment or legal advice. The views expressed by the co-hosts are their personal views alone. Please see our podcast description for more disclosures. If you enjoyed this episode, please feel free to subscribe and share it on Twitter. Thanks and goodbye.