Luohan Academy

The Rise of Blockchain and the Future of Finance: Centralized or Decentralized?

Event materials

  • Raphael Auer's transcript

Raphael Auer previously spent three years in MED's Monetary Policy unit. Prior to that, he worked for 10 years at the Swiss National Bank, including as Deputy Head and Economic Advisor of the International Trade and Capital Flows division. During 2009-10, he was a Globalization and Governance Fellow at the Princeton School of Public and International Affairs and a visiting fellow at the Federal Reserve Bank of New York. He holds a PhD in economics from MIT and serves as president of the Central Bank Research Association. He shared his views on permissioned ledgers and the governance of money at the 4th Luohan Academy Frontier Dialogue.

Transcript:

Raphael Auer:

Thank you very much for your kind words, and thanks to Long, Zhiguo, and Markus for the kind invitation. And I especially want to thank Harvey for the nice introduction. He talked about a couple of things, and I think two were very important: one, that there are a lot of risks in DeFi that we see now in permissionless systems, and two, that some observers say we might go full circle, in the sense that we started with barter, we've had different forms of accounting through history, and now we seem to be having yet another form. That's actually very much the starting notion of this paper with Cyril and Hyun, titled Permissioned Distributed Ledgers and the Governance of Money.

As always, I need to say that I'm speaking on my own behalf and not on that of the Bank for International Settlements. The starting notion of that paper is really that money is society's memory of economic interactions. Any market is. To know how much you can consume, you don't need to recall your entire employment and consumption history; you just need to look at your bank account. And because money is a record-keeping device, advances in information and communication technology have continuously transformed its form: from stone wheels to coins, to bank notes, to the rise of centralized accounting, and now maybe today to a point where centralized accounting could become decentralized accounting.

And so, we know that such decentralized accounting has already existed in practice for over a decade, in Bitcoin and related cryptocurrencies. Proof-of-work-based systems have worked for some time, but they also have a couple of drawbacks. They come at a horrendous economic cost and a substantial CO2 footprint. They do not offer certain finality, and smaller cryptocurrencies have already been attacked. There are a couple of other issues: scale, lack of privacy. And because of this, when you look at industry efforts, which include a variety of settings and certainly also central bank digital currencies at the moment, those efforts are moving to permissioned blockchains. Here, rather than an anonymous network of miners, it's a pre-determined and known set of validators that augments the ledger, or the copies of the ledger. So, insofar as these validators are known and form an exclusive set, this is a system that sits in the middle between what we know as centralized finance and what we know from cryptocurrencies.

And in this paper, what we really do is examine the economic potential of the permissioned variant of distributed ledger technology. And the precise question that we want to answer is: who will guard the guardians in such a system? The validators may be known, but they may have their own objectives, and nothing can prevent them from abusing that privilege. Specifically, in the permissioned context, validators need to have incentives to do their job, i.e. they need to run a node, keep a copy of the ledger, verify transactions, and ensure that only valid transactions, and no invalid ones, ever enter the ledger. And for these systems to work on their own without external enforcement, which would take away any potential benefits, it is incentives that need to ensure that the validators actually put in the effort to do all these steps, and that they never accept bribes and undo transactions via the equivalent of a double-spend attack.

So, we study these questions and ask whether, from an economic perspective, it would actually ever make sense to set up a market in a decentralized way, as opposed to with a centralized intermediary. And along the way, we also study the optimal design of such a market. What will be the optimal super majority rule? What will be the optimal number of validators? And what will be the fee structure?

So, the economy we have in mind is a simple credit economy in discrete time with a discount factor beta. Each period has two production stages, and in each stage a good can be produced. There are three types of agents. There are those that can produce the early good, and then there are two classes of agents that pose as if they could produce the late good. A measure of one minus F can indeed produce the late good, but a measure of F is faulty and cannot produce. In this market, late and early producers meet, and importantly, without the help of an intermediary, the early producer cannot tell whether he's facing an actual late producer or a faulty one.
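
To fix ideas, here is a minimal sketch of that environment as described above; the notation is illustrative rather than the paper's own.

\[
\begin{aligned}
&\text{Time: } t = 0, 1, 2, \dots \text{ with discount factor } \beta \in (0,1),\ \text{two production stages per period;}\\
&\text{agents: early producers; genuine late producers of measure } 1 - F;\\
&\text{faulty producers of measure } F \text{ who cannot produce the late good.}
\end{aligned}
\]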

So, there is an information problem here. In this economy, if there were a way to force everyone to behave optimally, faulty producers would simply leave the market, and the other matches would work out as follows. In the first stage, A produces at some cost and sends the early good to B, who consumes it. There is a net gain in this, as the cost of production is lower than the utility gained from consuming it. In the second stage, B then repays her debt and produces for A. Here, we assume it's linear; there could be a surplus too, but that doesn't really matter for the model. What is important here is that there are gains from trade, but there is a commitment problem.
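
In loose shorthand (c and u are my own labels for the production cost and the consumption utility, not necessarily the paper's), the first-best match delivers

\[
u - c > 0 \quad \text{(gains from trade in stage 1)},
\]

with B repaying in stage 2 by producing the late good for A; the friction is that B cannot commit to this repayment.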

And this brings me back to the title of my presentation today. To solve this commitment problem, we need some collective form of memory, a record-keeping device. This is where the ledger comes in. It records past behaviors, actions, and outcomes. The ledger is not known to the users, but the validators can read it at some cost. So, this is the interpretation of running a node in such a setup. And they can also update the ledger with new transactions. An important ingredient of the model is that validators are known, and if they misbehave as validators, they can also be kicked out as late producers, that is, as users of the system. And that has the advantage that it's easier to incentivize the avoidance of double-spend attacks in this system.

Now, what is the exact game played by the validators? They observe all actions and write an honest account of what happened into the ledger. There is a cost CV to verify the labels of A and B, and there is a privately known cost to communicate the validator's vote to the ledger. We allow this cost to be stochastic, reflecting the fact that, in reality, there might be communication outages or other operational issues that introduce some noise into what it takes to be a validator. And we're going to consider the class of consensus mechanisms that are based on super majority voting rules, where an update of the ledger is considered valid if a fraction larger than tau approves of an entry. We'll derive the optimal tau later, but let's take it as given for the moment.
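
As a sketch, with $v_i \in \{0,1\}$ denoting validator $i$'s vote (my notation, not the paper's), the class of consensus rules considered can be written as

\[
\text{ledger update accepted} \iff \frac{1}{N}\sum_{i=1}^{N} v_i \;\ge\; \tau, \qquad \tau \in \left(\tfrac{1}{2}, 1\right],
\]

with a verification cost $C_V$ and a privately known, stochastic communication cost per validator.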

So, if, in this economy, validators do their job correctly, the game proceeds as follows. All validators have an up-to-date copy of the ledger. They read the labels of A and B, that is, they actually incur the verification cost, and they send their vote that the labels are good to the ledger. A then produces for B. We go to the second stage, and B repays. The validators again record that everything has gone according to plan, and they update the ledger accordingly. B's label is still a good one, so in the next stage, B can still be part of this market.

However, we've still not solved the underlying friction, which is: why should validators actually behave according to this rule book? One constraint is that agents could bribe validators. Another is that they could aim to free ride on the efforts of others and vote without actually running a full node or verifying themselves. So, let me look at these in turn, starting with the free-riding problem.

Free riding could mean that validators fail to detect that B is actually a faulty producer, because they don't actually monitor what's going on. Importantly, this is a coordination game, as validators can only be punished based on deviations from the majority of other votes, because that's the only information the system has. And if validators assume that all other validators will shirk, then they will not put in any effort themselves, because the comparison with the other votes is not informative if the others are just producing noise.

So, this is then a game in which higher-order beliefs matter: beliefs not only about what others will do, but also about what others believe I will do, and so forth. For game theorists and those working on stochastic outcomes in the presence of complementary actions, this is a well-known problem, and it can be solved in a global games setting, which is what we do. All validators here follow a switching strategy, where they will work for sure if their cost is below a certain threshold. A well-known equilibrium property is that then, even for an arbitrarily small amount of noise, a unique equilibrium can arise.

And let's solve for this cutoff by looking at the decision of a validator. The validator at the cutoff will be indifferent between shirking and working if the expected net payoffs from these two courses of action are equal. The payoff of shirking is zero, whereas working first entails paying a cost, for sure, to find out the label; then, only with probability one minus F is the label actually good, so the game continues. And if that is the case, the validator pays the communication cost, CS, to send the vote. And you only get the reward if enough other validators also validate. Here, due to the global game structure and the assumed uniform distribution of the idiosyncratic communication costs, this is actually a linear function; it's linear in tau, the super majority rule. So, the crucial insight is that when it comes to the design of the super majority voting rule, a rule that is close to one, i.e. unanimity, makes it very costly to put in one's own effort, because one perceives it as very unlikely that the super majority will be reached. And that's why you want to go for a super majority rule that is substantially below one.
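
A stylized version of that indifference condition, under the assumptions above (uniform idiosyncratic communication costs; $R$ and $P(\tau)$ are my own labels for the validation reward and the perceived probability that the super majority is reached):

\[
\underbrace{0}_{\text{shirk}} \;=\; \underbrace{-\,C_V}_{\text{verify, paid for sure}} \;+\; (1-F)\Big[ -\,C_S^{*} \;+\; P(\tau)\, R \Big],
\]

where $C_S^{*}$ is the cutoff communication cost and $P(\tau)$ shrinks as $\tau$ approaches unanimity, which is what pushes the optimal rule substantially below one.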

And this is one side of the coin: to solve the free-rider problem, you need a low super majority voting rule. But there is another problem, which is bribing. The bribing problem is that everybody, all the validators, do their job and read the ledger, and production in the first stage happens, but in the second stage, instead of paying back with the production of the late good, B instead bribes the validators so that they write an incorrect history into the ledger. And nothing can stop them from doing that except mechanism design.

So, the solution is obviously that validators need to receive rents in order to incentivize correct behavior, because cheating has a chance of being detected, and this gives the system a punishment lever: if cheating is detected, the validators will be kicked out. They will not receive their future rents, and they will also be kicked out as producers. And this actually makes incentivizing the validators easier.
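
The flavour of the resulting incentive constraint is the familiar one from relational contracting; in loose notation (with $b$ the bribe, $p$ the detection probability, and $\pi$ the per-period rent plus the value of staying in the market as a producer, all labels of my own):

\[
\underbrace{b}_{\text{one-off gain from cheating}} \;\le\; \underbrace{p \cdot \frac{\beta}{1-\beta}\,\pi}_{\text{expected loss of future rents and market access}},
\]

so the more forward-looking the validators (the higher $\beta$), the easier this constraint is to satisfy.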

I'm running out of time, so let me just give you a summary of the results that we find in this setup. First and foremost, we do find that there may be a case for such decentralization. The fundamental case for decentralization is simple: honest record keeping is made easier if those who actively participate in the system as users are also occasionally in charge of updating it. But such decentralization doesn't come for free, as it requires that the many validators be coordinated; we show that the validators play a public good contribution game, and we derive the optimal voting rule.

For the efficiency of these systems, the degree to which agents are forward-looking matters a great deal. The more the future matters, the easier it is to incentivize truthful record-keeping. That's trivial. But a surprising result, in terms of the comparative statics, is that the case for decentralization, for many validators instead of only one (or, in our continuous interpretation, a measure of zero), is stronger the more difficult it is to sustain honest behavior.

So, the intuition harks back to the basic insight, the fundamental case for decentralization. Precisely in instances in which a central intermediary would be tempted to cheat, a decentralized setup features many validators who also want to stay in the system as users; they fear being kicked out as users and thus withstand the temptation to cheat as validators. That is one of the main results. Further results concern the design of the super majority rule, how fees are paid, and what the optimal number of validators is, conditional on how forward-looking the validators are. I'll stop here.

For more information, please visit Luohan Academy's YouTube channel: Luohan Academy
