Chapter 7: The Abundance Protocol

Perhaps we shouldn't give up just yet. It seems we have looked everywhere on the blockchain for mechanisms that capture the value of public and common goods. There is just one place we haven't looked: the blockchain itself. And it happens to be the one place where a common good — network security — is paid for without any external funding source.

To date, the two largest blockchains by market cap, Bitcoin and Ethereum, have funded their own network security to the tune of roughly a trillion dollars (depending on the market cap of these coins) — that’s trillion, with a T. This likely dwarfs any effort to fund public or common goods, on-chain or off.

To put it into perspective, the Giving Pledge, representing a commitment by more than 200 of the world’s richest people to give away most of their wealth to charity over their lifetime, has about $600 billion pledged. And that is money pledged — over a lifetime — not money actually given.

Network security is a common good that benefits every user of a blockchain network, whether they actively transact on the network or just hold their money there. Yet, Bitcoin and Ethereum have funded network security without relying on wealthy donors, government bailouts, or cookie sales. Instead, they do so through coin inflation; the blockchain's consensus mechanism issues coins to miners (or validators) when they create new blocks and secure the network.

Keeping the funding of network security self-sustainable means keeping everyone happy: both the people securing the network and those using it. Since network validators are paid through coin inflation, the network’s users will be content as long as that monetary inflation does not devalue their currency. How, then, do you maintain the value of the cryptocurrency? The short answer is: by using the laws of supply and demand.

As supply increases to pay for network security, maintaining the same value either means demand for the currency has to increase proportionately, or coins need to be removed from circulation in other ways — or some combination of both.
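As a toy illustration (with purely hypothetical numbers), we can model the coin's value as demand divided by circulating supply, and check that issuance leaves the value unchanged when matched by proportional demand growth or an equivalent removal of coins:

```python
# Toy model: coin value ~ demand / circulating supply.
# All figures are hypothetical, purely for illustration.

def coin_value(demand: float, supply: float) -> float:
    return demand / supply

supply, demand = 100_000_000, 50_000_000  # hypothetical units
v0 = coin_value(demand, supply)

# Issue 2% of supply to pay for network security.
issued = 0.02 * supply

# Option 1: demand for the currency grows proportionately (2%).
v1 = coin_value(demand * 1.02, supply + issued)

# Option 2: an equivalent amount of coins is removed from circulation.
v2 = coin_value(demand, supply + issued - issued)

assert abs(v1 - v0) < 1e-9 and abs(v2 - v0) < 1e-9
```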

Bitcoin, Ethereum, and other blockchains have different strategies to achieve self-sustainable funding, and the jury is still out on which will prove more effective long-term. As these networks have already demonstrated by running for years without interruption, however, it is possible to pay for a common good self-sustainably and without external funding.

But network security is just one very particular type of common good. The question is, how do we extend this mechanism to apply to all common and public goods?

Perhaps we can do so by applying some of the ideas we had for a market-based solution to the blockchain. To recap, we wanted to build on how companies effectively fund common goods that benefit the business. Companies exchange value in business-to-business transactions, where funding comes from a common treasury. We wanted to apply the same logic to create an exchange of value between public goods contributors and the wider community or ecosystem.

The idea is that contributors should be rewarded based on the economic impact their work makes on the ecosystem. Since economic impact is an objective measure, it would eventually be possible to use AI to determine it and allocate funding. The challenge is getting there.

How do you make sure that the developers working on AI are aligned with the ecosystem, and don't try to manipulate the allocation mechanism? And what happens in the meantime? How can users evaluate the impact of projects credibly, transparently, and scalably while the AI is being developed?

With blockchain technology the pieces of the puzzle are finally starting to fall into place.

We can start by replacing our original concept of a common treasury with the cryptocurrency model. There is no need for members to contribute funds to a common treasury anymore; anyone who wants to participate in the ecosystem would simply buy the blockchain's native currency to transact with.

There is, however, one major difference between the economic model of this ecosystem and that of other blockchain networks. In most other cryptocurrencies the expectation is for the coin to increase in value over time. Such an approach may be beneficial for using the currency as a non-productive investment asset, but it’s quite terrible for economic activity.

The rationale is that if the currency is expected to appreciate in value, consumers would prefer to wait for product prices to drop instead of buying anything. But what happens when everyone keeps delaying purchases? Producers will expect to sell fewer products and start laying off workers, downsizing, and shutting down factories. A deflationary currency thus leads to decreased economic activity and stagnation. The purpose of our ecosystem however is maximizing economic abundance.

For that reason, the expectation in the ecosystem is for the coin to maintain a relatively stable value in relation to the ecosystem’s economic capacity. Keeping the value of a coin stable as economic capacity increases means increasing monetary supply in proportion to growth in economic capacity.

Economic capacity is the maximum output of the economy given the amount of available resources, scientific knowledge, and technology. Since the amount of resources on Earth is practically fixed, growth in economic capacity essentially means improvement in knowledge or technology.

If such knowledge is released as a public good, its economic impact would be equivalent to the growth in economic capacity. Which means that increasing monetary supply proportionately to compensate public goods contributors would keep the value of the coin stable.
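A minimal sketch of this idea, with hypothetical figures: if a coin's value is treated as economic capacity per unit of supply, then minting coins in proportion to capacity growth (and paying them to the contributors who produced that growth) leaves the value unchanged:

```python
# Hypothetical sketch: keep coin value stable relative to economic
# capacity by issuing new coins in proportion to capacity growth.

def stable_issuance(supply: float, capacity_growth: float) -> float:
    """Coins to mint so supply grows at the same rate as capacity."""
    return supply * capacity_growth

supply, capacity = 1_000_000.0, 10_000_000.0
value = capacity / supply  # capacity backed per coin

growth = 0.05  # a public good raises economic capacity by 5%
minted = stable_issuance(supply, growth)  # paid to the contributors

new_value = capacity * (1 + growth) / (supply + minted)
assert abs(new_value - value) < 1e-9
```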

Notice also how compensation for a public good differs from compensation for a common good. The main difference is that, unlike common goods, public goods are inexhaustible; while a public good requires an investment of labor and resources to produce the knowledge, once that knowledge is produced it has a permanent impact on the economy’s capacity. It is therefore an abundant good that requires practically no further maintenance. And because a public good leads to growth in economic capacity, which is associated with an increase in demand within the economy, an increase in monetary supply to match the greater demand leads to value stability for the currency.

Compensating for a common good, however, means ongoing use of labor or resources to maintain a certain state of economic activity. This is what happens with network security, for example. If compensation for the common good comes from monetary inflation, it has to be matched with an equivalent removal of currency from circulation. Otherwise it would lead to devaluation of the currency. For this reason we need to keep in mind that the mechanism to fund public goods would be somewhat different from the one for common goods.

Now, why does it make sense to maintain value in relation to economic capacity, and not, say, to growth in production? Growth in production can happen even without changes in economic capacity. It can occur due to changes in consumer preferences, seasonal variation, and so on. Tying monetary policy to changes in preferences then makes little sense. But tying it to impact and economic capacity can incentivize growth.

We know that if knowledge is released as a public good it has the greatest economic impact (compared to keeping the knowledge private), since it leads to the most growth in the economy’s capacity. Compensating impact incentivizes contributors to work on and produce public goods that are expected to have the most effect on the economy. It therefore leads to the most economic growth and greatest benefit to the ecosystem.

The benefit of using monetary inflation to compensate public goods contributors is that it creates complete alignment between all participants in the ecosystem. What produces this alignment? Every participant in the ecosystem is interested in two things: maintaining the value of the currency, and maximizing the ecosystem’s economic growth. Every participant wants to maximize economic growth in the ecosystem because that’s what gives them the most economic opportunity and the greatest potential to prosper.

Since economic growth depends on growing economic capacity, every participant is interested in maximizing the impact that comes from public goods. At the same time, participants obviously don’t want their currency devalued, since that reduces their level of prosperity.

Using currency inflation therefore means that everyone in the ecosystem benefits from the production of any public good that leads to growth in economic capacity, regardless of whether they directly benefit from it. At the same time, every participant wants the impact of these public goods to be valued accurately; undervaluing public goods would lead to fewer contributors creating public goods, and thus slower growth to the ecosystem. Overvaluing public goods would lead to currency devaluation and fewer participants using the ecosystem. Accurate valuation leads to currency stability and maximal impact and economic growth.

This mechanism finally allows us to create effective feedback loops for public goods. Since ecosystem participants are aligned on accurately evaluating the impact of public goods projects, contributors benefit most when their work has the greatest impact on the ecosystem — the greater the impact, the greater the reward. Ecosystem participants are perfectly content with issuing large sums of money to contributors, because as long as the evaluation is accurate, large payouts simply mean large economic growth in the ecosystem.

So now we know that, at least in principle, it’s possible to create an exchange of value between contributors and an ecosystem, and it’s possible to create effective feedback loops for public goods. But we still have our work cut out for us: we need to show how this can be achieved in practice. How do we design the mechanisms that would make it all work? How do we make sure that the system maintains public trust, and that it cannot be gamed by bad actors?

Solving the problem of public goods would certainly be transformational for our economy, but it’s still just half of the equation. To change our trajectory toward dystopia, and put us on a path toward economic abundance, we also need to solve the problem of externalities. Yet, it’s still unclear how to create feedback loops for negative externalities in our system.

But let’s not get ahead of ourselves. Before trying to tackle the externalities problem, let’s see how we can create a blockchain protocol to exchange value between contributors and the ecosystem.

Because of the complexity of our task, we can use the protocol of a programmable blockchain such as Ethereum as our baseline. We will then modify that protocol for our purpose: creating the foundation for an economy of abundance.

At the moment we're less concerned with the technical aspects of how the blockchain reaches consensus on the creation of blocks or maintaining network security. Our primary concern is with making sure that the network reaches a consensus on the value of common and public goods. The network can then compensate contributors accordingly in the contributor-to-ecosystem (C2E) value exchange.

There is a benefit to separating the technical network-security consensus logic from the C2E value consensus. Doing so allows us to run the C2E value consensus at the smart contract level on existing blockchains. Which means that on-chain public infrastructure and public goods projects on existing blockchains can finally have effective feedback loops.

What then do we need to make the system work? Where do we start? Perhaps we can start with the desired outcome and build out the rest of the system from there.

Ultimately, we would want the entire process to be done with Artificial Intelligence. This is achievable because impact can be objectively measured, which means that AI computation can be independently verified. The eventual process will have an AI review for each project. Following the review, users would validate the AI computation by sampling it, thus ensuring its integrity.

Obviously simple desktop computers would not be able to verify an entire AI computation, but by sampling all parts of the computation, groups of validators will be able to validate every AI review collectively. And since there is still a challenge period in place in such a system, there should be sufficient time for validators to complete each validation. An AI-based protocol is therefore a viable proposition — and one that would be maximally scalable and consume the least labor and resources.

But until we get there we need to develop the protocol so that ecosystem participants can transparently, credibly, and efficiently review the impact of projects, and compensate contributors accordingly.

The review has to be transparent so that anyone can analyze whether the data provided leads to the conclusions made by reviewers, and challenge the results otherwise. It has to be credible so that the ecosystem can have trust in the process and agree on its results. And it has to be efficient so that the system can function properly and scale.

The AI system can be trained on participant reviews in parallel. As it becomes more robust, AI can complement participants’ work, and gradually replace it. This would make the protocol more efficient over time while maintaining public trust in the process.

So how do we make the review process transparent, credible and efficient? For the process to be credible, the protocol must have an incentive structure in place to align the interests of all participants in the ecosystem, including contributors, users, and reviewers. Such a structure would not stop all attacks on the protocol, or prevent bad actors from trying to game the system. It would however make such attacks a lot less likely, and a lot less effective. It would also create feedback loops within the ecosystem to make it more resilient against attacks.

Consider, for example, what would happen if social media accounts tried to undermine the protocol by falsely accusing validators of corruption. Such an attack might work if the economic incentives of validators were misaligned with those of other participants in the protocol. If the attacks are persistent, they may turn participants against each other and make them lose trust in the credibility of reviews and in the protocol itself. Once the protocol loses credibility, it becomes harder to reach consensus on the value of public goods. Which then leads to the currency losing value and the ecosystem becoming less attractive to contributors and users alike.

But if the incentives of all participants in the ecosystem are aligned, no one would think that claims of corruption against individual validators affect the protocol. Not because it's impossible for individuals to be corrupt, but because everyone has the incentive to defend the ecosystem from attacks by bad actors. And if all participants are aligned on defending the protocol, the chance of any corrupt player succeeding in perverting the protocol is minuscule. Aligned incentives would therefore insulate the ecosystem from attacks on individual users. Which means that if the purpose of such attacks is to discredit the ecosystem as a whole, these attacks would lessen as well.

The incentive structure makes the protocol more resilient and hardened against attacks but it cannot by itself counter them. For that the protocol needs built-in mechanisms that can resist attacks and exploits, and prevent bad actors from trying to game the system.

The ability to effectively prevent bad actors from gaming the system would make the review process credible. It would also allow participants to more easily reach a consensus on the value of projects, and reward contributors accordingly.

But in order to get there both the review process and the data it relies on must be transparent. Anyone should be able to audit the data, as well as the review process, and evaluate whether anything along the way was not done properly. If suspicious activity is detected, anyone should be able to challenge the results of a review before funds are released to contributors. In that case it may be necessary to freeze the funding or redo the review with a different set of reviewers.

Finally, we must recognize that the ecosystem doesn't have unlimited resources, and that reviewers' time is valuable. The protocol must be able to prioritize projects, while dedicating sufficient expertise to each review. But if the amount of expertise required for a project must correspond to its impact, how would the protocol know how much expertise is required before it’s been reviewed? This can only be done if expertise resources are based on an estimate of the impact.

What’s more, since public goods are supposed to increase the economy’s capacity, we need to differentiate between the expected impact if fully realized, and the impact realized so far. When a public goods project has just been created it may have almost no realized impact, but significant expected impact if fully realized. For instance, when a cure for a disease is first discovered it has no realized impact, but if everyone affected by the disease is cured the impact may be significant.

The protocol can conserve reviewers’ efforts by first determining the expected total impact of a project, and then periodically reviewing realized impact. Of course, funds should only be released to contributors based on realized impact, not mere expectations.

For the review of total expected impact, those proposing the review should provide their estimate of the total expected impact. For a periodic review, proposers should provide their estimate of the realized impact as a percent of total expected impact. Such estimates would help the protocol determine the required effort needed for reviewing expected or realized impact.

Those providing the estimate should have an incentive to look for projects that the ecosystem would wish to prioritize. Proposers should also have the incentive to be as accurate as possible in their estimates. How can this be achieved?

We want proposers to seek out the public goods projects that would benefit the ecosystem the most. We also want them to be as accurate as possible in their estimation of the expected impact of projects, so that the protocol won’t have to waste resources in the review process. The analogy here may be to stock investors who spend their time picking the stocks that would generate the most return. Such work should supposedly make the market as a whole more efficient.

Applying the same logic to proposers means that they should be compensated proportionately to the impact of the project they propose. At the same time, if they overvalue the expected impact of a project they should bear a financial cost for that. But what if a proposer undervalues a project instead, and the protocol then doesn’t dedicate enough resources to properly review it? Obviously this should not harm contributors. How can all these demands be reconciled then?

An elegant way to resolve these competing demands may be to require the proposer to provide the funding for validator reviews. Then, once the review process is complete, a fixed percent of the project reward goes to the proposer. Of that fixed portion, one part covers the cost of validator reviews, and the other is the proposer’s premium.

Now, it makes sense that a project with greater impact would require more expertise to review. However, the growth in required expertise should not be linear: if the impact of one project is double that of another, the required expertise might be only 50% greater, for example. Similarly, there must be some minimal amount of expertise required for even the simplest project. Taking such considerations into account lets us construct a formula for the required expertise at any level of impact. It also indicates that a proposer can earn a larger premium for proposing projects with greater expected impact. At least, that is the case as long as the proposer’s estimate is reasonably accurate.
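One possible functional form for such a curve (the constants and exponent here are illustrative assumptions, not part of the text) is a concave power law with a floor:

```python
# Illustrative only: a concave (sublinear) expertise-requirement curve
# with a minimum floor. All constants are hypothetical assumptions.

MIN_EXPERTISE = 10.0   # floor for even the simplest project
SCALE = 2.0
EXPONENT = 0.585       # ~log2(1.5): doubling impact -> ~1.5x expertise

def required_expertise(impact: float) -> float:
    return MIN_EXPERTISE + SCALE * impact ** EXPONENT

# Doubling the impact increases the variable part of the required
# expertise by only ~50%, matching the example in the text.
a = required_expertise(1000) - MIN_EXPERTISE
b = required_expertise(2000) - MIN_EXPERTISE
assert abs(b / a - 1.5) < 0.01
```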

If the proposer overestimates the project’s expected impact, they’d be funding more validators than necessary. Once the project’s impact is determined by the validators, the proposer’s reward may be smaller than the money they paid validators. If the proposer was way off, they may lose money in the process.

On the other hand, suppose the proposer purposefully underestimates the project’s impact to at least get some return. Then they’d still be getting a proportionally smaller premium than they could have gotten with a more accurate review. That is because a lower estimate means relatively more required expertise per impact value. Since an underestimation by a proposer should not disadvantage project contributors, other proposers should be able to reevaluate the project, thus earning the premium that the initial proposer gave up with the lower estimate.

Such a system thus incentivizes proposers to look for the most impactful projects. It also motivates them to accurately estimate projects; if they overestimate they end up losing money in the process by overfunding validators. If they underestimate they give away the premium from the project’s impact. Accurate estimates therefore lead to the highest returns to proposers, and the most efficient resource use for the protocol.
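The incentive shape described above can be sketched as a toy payoff function. All constants are assumptions; in particular, the model assumes the proposer's reward tracks only the impact actually reviewed, so an underestimated project leaves premium on the table for later proposers:

```python
# Hypothetical payoff model for proposers; all constants are assumptions
# chosen only to illustrate the incentive shape described in the text.

MIN_EXPERTISE, SCALE, EXPONENT = 10.0, 2.0, 0.585
COST_PER_EXPERTISE = 1.0   # what the proposer pays validators, per unit
PROPOSER_SHARE = 0.15      # fixed percent of the reviewed reward

def required_expertise(impact: float) -> float:
    return MIN_EXPERTISE + SCALE * impact ** EXPONENT

def proposer_premium(estimate: float, actual: float) -> float:
    validator_cost = COST_PER_EXPERTISE * required_expertise(estimate)
    # Assumption: the proposer's reward tracks the impact actually
    # reviewed; an underestimated project is only reviewed up to the
    # estimate, and other proposers can claim the remainder later.
    reward = PROPOSER_SHARE * min(estimate, actual)
    return reward - validator_cost

actual = 2000.0
accurate = proposer_premium(2000.0, actual)
under = proposer_premium(1000.0, actual)   # gives away part of the premium
over = proposer_premium(4000.0, actual)    # overpays validators

assert accurate > under and accurate > over
```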

Note that because the protocol requires proposers to put up funding first, it may disadvantage participants based on their economic status. To mitigate this issue, the protocol should allow investors to lend money to proposers. Investors can consider the track record of proposers in estimating project impact, and offer funds at a competitive interest rate based on their level of risk.

Proposers with the best track record would have more investors competing to lend them money, and would therefore get a lower interest on their loan. By creating a market for investors to provide loans based on proposers' track record, the protocol can make the process more equitable and meritocratic.

Another point to note is that in the current protocol design reward for proposers (and validators) comes fully from the project reward. The work done by proposers and validators is certainly a common good that benefits everyone in the ecosystem, but where should the funding for it come from? Paying for it from the project reward prevents currency devaluation since the total supply of the currency remains the same, but at the same time it reduces the reward for contributors. Reducing contributor rewards may slow ecosystem growth, since contributors would be motivated to work for other ecosystems that offer better rewards.

The alternative is to provide contributors the full value of their impact, while rewarding proposers through currency inflation. Such inflation can later be absorbed by removing an equivalent amount of coins from the protocol. The amount of the reward should still be tied to the project reward, since that aligns the interests of proposers with the ecosystem.

Putting together the concepts we have so far: we need a process by which users propose projects to the protocol for review, along with an estimate of each project's expected impact. Then there needs to be some mechanism by which the protocol can prioritize projects based on their expected value to the ecosystem, and assign users with sufficient expertise for the review. The next step would be reviewing the project and determining its expected economic impact, based on all publicly available data. After the project is reviewed, anyone should be able to challenge the results to ensure the credibility of the review.

As the impact of the project is realized, a proposer may request a periodic review and provide the estimated percent of realized impact. Once again the protocol will prioritize the review and assign reviewers. After the periodic review there should be another challenge period. Then, finally, funds should be released to contributors based on the realized impact of their work.
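The lifecycle described above can be sketched as a simple state machine. The stage names and numbers are hypothetical; in practice this logic would live in smart contracts:

```python
# A minimal sketch of the project lifecycle described above. States and
# names are hypothetical; a real protocol would implement this on-chain.

from enum import Enum, auto

class Stage(Enum):
    PROPOSED = auto()
    REVIEWED = auto()          # expected impact determined
    CHALLENGE = auto()         # challenge period passed
    PERIODIC_REVIEW = auto()   # realized impact determined
    FUNDS_RELEASED = auto()

class Project:
    def __init__(self, expected_impact_estimate: float):
        self.estimate = expected_impact_estimate  # proposer's estimate
        self.stage = Stage.PROPOSED
        self.expected_impact = 0.0
        self.realized_fraction = 0.0

    def review(self, expected_impact: float) -> None:
        assert self.stage is Stage.PROPOSED
        self.expected_impact = expected_impact
        self.stage = Stage.REVIEWED

    def pass_challenge_period(self) -> None:
        assert self.stage is Stage.REVIEWED
        self.stage = Stage.CHALLENGE

    def periodic_review(self, realized_fraction: float) -> None:
        assert self.stage is Stage.CHALLENGE
        self.realized_fraction = realized_fraction
        self.stage = Stage.PERIODIC_REVIEW

    def release_funds(self) -> float:
        assert self.stage is Stage.PERIODIC_REVIEW
        self.stage = Stage.FUNDS_RELEASED
        # Funds are released only for realized impact, not expectations.
        return self.expected_impact * self.realized_fraction

p = Project(expected_impact_estimate=1000.0)
p.review(expected_impact=1200.0)
p.pass_challenge_period()
p.periodic_review(realized_fraction=0.25)
assert p.release_funds() == 300.0
```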

Though the protocol is beginning to take shape there are still lots of open questions to be resolved. How will funds be distributed between contributors? How are reviewers selected for a project? How is subject matter expertise determined, and how do participants gain expertise? What are the incentives to keep proposers, and everyone else for that matter, aligned? These are just some of the questions we need to address for the protocol to function properly and credibly.

As these issues are systematically addressed, you should keep in mind that the solutions proposed are by no means the only way to solve the problem. Perhaps they’re not even the best ways. The hope with this exercise, however, is to show that it is possible to apply a rigorous method to determining consensus value, while effectively dealing with bad actors or attempts to manipulate the protocol.

So let's get into it. On funding distribution, ideally each contributor to a project should get paid according to their relative contribution. Contributors who collaborate on a project know best how much each of them pitched in. It would therefore be most efficient if all contributors can come to consensus on how funding should be distributed among them. The protocol should lock the funds until such consensus is officially reached.

The protocol should not expend resources on funding distribution among collaborators, since this is not essential to the ecosystem. However, if there is a dispute, the protocol can be a neutral arbitrator. Such a function can protect smaller contributors from unfair treatment, thus promoting collaboration. If all collaborators know that the funding will be distributed fairly, they will be focused on the work itself and not waste any time on self-promotion.

Because arbitration requires effort and resources from the protocol, the cost must be borne by the parties in dispute. The cost of arbitration should therefore incentivize contributors to try to come to consensus on their own.
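A minimal sketch of the locking rule, with hypothetical names and numbers: funds are released only when every contributor has approved the same split, and otherwise stay locked pending arbitration at the disputants' expense:

```python
# Illustrative sketch: funds stay locked until every contributor signs
# off on the same split; unresolved disputes route to arbitration, whose
# cost is borne by the parties in dispute. Names/numbers hypothetical.

def distribute(reward: float, shares: dict, approvals: set):
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    if approvals != set(shares):
        # No unanimous consensus yet: funds remain locked. (A real
        # protocol would deduct arbitration costs from the disputants.)
        return None
    return {name: reward * s for name, s in shares.items()}

shares = {"alice": 0.6, "bob": 0.4}
assert distribute(1000.0, shares, {"alice"}) is None  # still locked

payout = distribute(1000.0, shares, {"alice", "bob"})
assert abs(payout["alice"] - 600.0) < 1e-9
assert abs(payout["bob"] - 400.0) < 1e-9
```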

There is another important aspect to funding distribution, however: since public goods are freely accessible to anyone, what happens when one project relies on the work of others? If we want contributors to be fairly compensated for their work, the same should apply to the sources that influenced the work. Contributors must therefore specify their sources of influence and the extent to which those sources influenced the project. Funding will then be distributed based on how much both contributors and sources contributed to the project.

Similar to the contributors themselves, sources of influence should be able to dispute their funding allocation. By extending this function to influences, contributors would want to be fair and accurate in reaching a consensus with all the parties that contribute to the project.

The rationale of including contributors and influences in funding allocation is straightforward: each project and each contributor should be compensated based on the impact their work makes on the ecosystem. Since public goods are freely accessible by all, anyone should be able to build on the work of others, but that means also giving them the proper credit and compensation for their effort.
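One hypothetical way to implement such an allocation: influence sources receive agreed fractions of the total reward, and the direct contributors split the remainder pro rata:

```python
# Hypothetical split between direct contributors and the prior works
# ("influences") a project builds on, as described above.

def allocate(reward: float, contributor_shares: dict,
             influence_shares: dict) -> dict:
    """influence_shares: fractions of the total reward routed to prior
    works; contributors split the remainder pro rata."""
    influence_total = sum(influence_shares.values())
    assert 0.0 <= influence_total < 1.0
    payouts = {src: reward * f for src, f in influence_shares.items()}
    remainder = reward * (1.0 - influence_total)
    total = sum(contributor_shares.values())
    for name, s in contributor_shares.items():
        payouts[name] = remainder * s / total
    return payouts

out = allocate(1000.0, {"alice": 2, "bob": 1}, {"prior-work": 0.1})
assert abs(out["prior-work"] - 100.0) < 1e-9
assert abs(out["alice"] - 600.0) < 1e-9
assert abs(out["bob"] - 300.0) < 1e-9
```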

Rewarding the sources doesn’t just make the whole system work. It also creates a framework for abundant goods contributors to collaborate and openly build on each other’s work. It allows compensation for public goods, and obviates the need for copyrights or IP rights (which would become nearly impossible to enforce in the age of AI anyway). Simply put, when every person knows they’d be compensated for their impact, even when others use their work or build on it, it motivates everyone to create public goods openly and collaborate freely. The result can be an incredible proliferation of abundance throughout the ecosystem.

So now we see how the protocol can integrate mechanisms to align everyone’s incentives with the goal of maximizing impact. We also know how all those who contributed to a public goods project can reach internal consensus on funding allocation. We just don’t know how the ecosystem can reach consensus on the value of public goods in the first place.

We previously discussed the issue of reviewing project impact within the market-based solution, and proposed a randomized subset of users to review projects and determine impact. By selecting a random subset of reviewers we can ensure that the review process can scale, since not every decision needs to be made by the entire ecosystem. We also avoid the possibility of collusion between reviewers and contributors, since contributors would have no way of knowing who will review their project.

The trouble, however, was that the randomization process had to be transparent. Similarly, reviews had to be meaningful. What exactly does this mean? It means that reviewers are estimating the impact of public goods projects on the ecosystem, not merely casting votes based on their preference. To review the impact they need insight into the subject matter. Yet there was no clear way to assign subject-matter expertise to validators in the market-based solution. How is subject-matter expertise determined, and by whom? How do you make the process transparent and credible? How do you make sure reviewers can't game the system? And ultimately, how do you make sure that reviews result in an accurate determination of impact? None of these issues could be solved within the market-based solution. Now let’s see how blockchain technology can potentially solve them.

Randomized validator selection may be the easier task. Pseudo-random selection can be coded directly into the protocol, so that a review validator set is selected at random for each project review. Since the code is publicly available, and the process is executed entirely on-chain, anyone can verify that it is not manipulated in any way. Such transparency means that everyone in the ecosystem can trust the integrity of the selection process.
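A sketch of what such transparent selection could look like, using a public block hash as the random seed (all names here are illustrative, not a real protocol):

```python
# Sketch of transparent validator selection: anyone can re-run the same
# deterministic draw from public on-chain inputs (e.g. a block hash)
# and verify the result. Details are illustrative assumptions.

import hashlib

def select_validators(candidates: list, seed: bytes, k: int) -> list:
    """Deterministically pick k validators, ordered by a hash of
    (seed, validator id). A public seed makes the draw verifiable."""
    def score(v: str) -> str:
        return hashlib.sha256(seed + v.encode()).hexdigest()
    return sorted(candidates, key=score)[:k]

pool = [f"validator-{i}" for i in range(100)]
block_hash = bytes.fromhex("ab" * 32)  # hypothetical public seed

chosen = select_validators(pool, block_hash, k=5)
# The draw is reproducible by anyone with the same public inputs.
assert chosen == select_validators(pool, block_hash, k=5)
assert len(set(chosen)) == 5
```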

The same applies to the data that validators use to make their impact evaluation. All the data can be hashed and added on-chain, so that all validators know they're dealing with the same data set. It also allows anyone to audit the process and verify that data wasn't altered.
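A minimal illustration of this commitment scheme: hash a canonical serialization of the dataset, store the digest on-chain, and let anyone verify their off-chain copy against it:

```python
# Minimal sketch of data integrity via hashing: the dataset's digest is
# recorded on-chain, so validators and auditors can verify that the
# data they receive off-chain was not altered.

import hashlib
import json

def commit(dataset: dict) -> str:
    # Canonical serialization so every party hashes identical bytes.
    blob = json.dumps(dataset, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

data = {"project": "example", "metrics": [1, 2, 3]}
onchain_digest = commit(data)  # stored on-chain at review time

# Later, any validator can check the copy they were given:
assert commit({"metrics": [1, 2, 3], "project": "example"}) == onchain_digest

# A tampered copy fails verification:
assert commit({"project": "example", "metrics": [1, 2, 4]}) != onchain_digest
```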

But data integrity doesn't end with making sure the data wasn't altered. It's even more important to know the source of the data. If the data comes from the contributors themselves, for example, or other biased sources, how can the ecosystem trust it? On the other hand, the more neutral and decentralized the dataset is, the more reliable it is.

So now the protocol can randomly select validators to review public goods projects, and can ensure the integrity of impact-related data. But how does it ensure that the review process produces reliable results? The key to that may be making sure that validators have sufficient expertise to review each project. If a project is expected to have a great impact on the ecosystem, the set of validators who review the project should have more expertise.

That doesn’t mean that the validators for each project should all have domain-specific expertise related to the project. In fact, experts in any field are likely to consider their own field as more important than others, thus producing distorted results. What would work better is to first have a group of experts with deeper knowledge in the project’s domain (or domains) who determine the credibility and significance of the project within the field. Then a second group of validators from across the ecosystem can use the first group’s expert analysis to evaluate the expected impact of the project on the ecosystem. The decisions within each group can be weighted based on the level of expertise of each validator, thus making the process more meritocratic.
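The two-stage, expertise-weighted aggregation could look roughly like this (all scores and weights are hypothetical):

```python
# Illustrative two-stage review: domain experts score the project's
# credibility and significance in-field; a cross-ecosystem group then
# estimates impact. Each validator's input is weighted by their
# expertise score. All numbers are hypothetical.

def weighted_estimate(estimates: list) -> float:
    """estimates: list of (value, expertise_weight) pairs."""
    total_weight = sum(w for _, w in estimates)
    return sum(v * w for v, w in estimates) / total_weight

# Stage 1: domain experts rate the project's significance (0 to 1).
significance = weighted_estimate([(0.8, 50), (0.9, 30), (0.7, 20)])

# Stage 2: ecosystem-wide validators use the expert analysis to
# estimate expected impact on the ecosystem.
impact = weighted_estimate([(1200.0, 10), (1000.0, 25), (900.0, 15)])

assert 0.7 <= significance <= 0.9
assert 900.0 <= impact <= 1200.0
```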

One benefit of having an on-chain protocol is that non-monetary attributes can be assigned to participants’ accounts. Such attributes can include expertise scores in various subject domains, which can then be used in selecting validators for a project review and weighting validations. To keep the process meritocratic, and avoid plutocratic abuse, no one should be able to purchase or sell their expertise credentials. This can be achieved on-chain by disallowing the transfer of expertise scores between accounts, or by using non-transferable tokens (NTTs) or “soulbound” tokens (SBTs) to denote the expertise score. But the question remains: how do participants obtain expertise scores in the first place?
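A minimal sketch of such a soulbound score ledger, with the transfer path deliberately closed off (class and method names are hypothetical):

```python
class ExpertiseLedger:
    """Sketch of soulbound expertise scores: the protocol can grant
    scores, but there is deliberately no way to move them between
    accounts, which blocks buying or selling credentials."""

    def __init__(self):
        self.scores = {}  # (account, domain) -> score

    def grant(self, account, domain, amount):
        """Called by the protocol after a validated contribution."""
        key = (account, domain)
        self.scores[key] = self.scores.get(key, 0) + amount

    def get(self, account, domain):
        return self.scores.get((account, domain), 0)

    def transfer(self, sender, receiver, domain, amount):
        raise PermissionError("expertise scores are non-transferable")
```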

How would participants in the ecosystem obtain their subject-matter expertise? Validators’ expertise scores are used in the protocol to evaluate the impact of projects, and ultimately issue currency. It is therefore critical that the ecosystem can have trust in the process of obtaining expertise.

As with everything else in the ecosystem, everyone must be able to trust the integrity of the process. If the impact of a project is weighted by the expertise of validators, and it’s unclear where their subject-matter expertise came from, how can the ecosystem trust the process? So the ecosystem must agree on what constitutes expertise and how participants or validators can obtain it.

If the protocol is built around establishing consensus value for public goods, why not use the same consensus for obtaining subject-matter expertise?

How can this be done? Perhaps contributors to public goods projects should be able to obtain a domain-specific expertise score alongside their funding reward. Since every project falls within at least one domain of expertise, each contributor can receive an expertise score corresponding to their contribution to the project. Contributors may choose to internally allocate the expertise provided by the project validation differently from the funding reward itself. This especially makes sense for complex projects where contributors may be working in different domains of expertise. It would make no sense, for instance, for an economist and a biochemist to receive an equivalent proportion of domain-specific expertise score, especially if they worked on completely different aspects of the project.

Since every contributor would want to receive the expertise score that corresponds to their knowledge, there is little incentive for them to allocate scores incorrectly. Similarly, since the expertise scores awarded by the validation process are finite, contributors cannot collectively obtain a greater score than what was awarded. This means there is no risk of contributors abusing the process to gain outsized influence in the protocol.

Thus, the process outlined ensures that expertise scores reflect contributors’ domain-specific knowledge. It also ensures that when contributors use their expertise in validations they are aligned with the ecosystem. That's because their expertise would indicate that they contributed to growing the ecosystem.

Just like contributors can obtain domain-specific expertise through the validation process, the same should also apply to proposers and validators. Proposers need to have at least some level of domain-specific expertise to be able to estimate the expected impact of projects. It therefore makes sense that the premium the proposer receives for successfully reviewing a project would also result in gaining expertise in the domains of the project.

Similarly, whenever subject-matter expert validators review a project they can accumulate expertise in their review domain. Ecosystem-wide validators may gain a general expertise score, which would reflect their contribution to the ecosystem. This process can even be improved to incentivize high quality validations: following the review itself (either expert or ecosystem-wide), the set of validators can have a round of Quadratic Voting (QV) on the quality of every validator's review. Validators would vote on the quality of each review and be able to flag fraudulent reviews. Those receiving higher QV scores should receive a greater portion of the validation reward, as well as a greater expertise score.
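The quality-voting round can be sketched with the standard quadratic-voting rule, where effective votes are the square root of credits spent. The payout split proportional to QV score is an assumption for illustration:

```python
import math

def qv_scores(ballots):
    """ballots: {voter: {reviewer: credits_spent}}. Under quadratic
    voting, effective votes are the square root of credits spent,
    which dampens any single voter's influence over a review's rating."""
    scores = {}
    for allocations in ballots.values():
        for reviewer, credits in allocations.items():
            scores[reviewer] = scores.get(reviewer, 0.0) + math.sqrt(credits)
    return scores

def split_reward(total_reward, scores):
    """Higher-rated reviews earn a larger share of the validation
    reward (and, analogously, of the expertise score)."""
    total = sum(scores.values())
    return {reviewer: total_reward * s / total for reviewer, s in scores.items()}
```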

To minimize the chance of collusion during QV, validators can be randomly assigned into two separate groups that would then vote on each other's individual validations. This process ensures that validators take their work seriously, and that their interests are aligned with the ecosystem.

All the methods of obtaining expertise through the validation process — whether for contributors, validators, or proposers — require existing validators with subject-matter expertise. But if obtaining expertise requires validators who already have expertise, where did their expertise come from? Logically, there has to be a point at which no one in the ecosystem had expertise. So how do you get something out of nothing? This applies not only to the very beginning of the ecosystem but to every new domain category introduced later. The process of obtaining expertise also seems extremely cumbersome. Obtaining expertise as a validator is relatively simple, but you need to already have expertise to become one, and the only other routes, contributing to a project or proposing one, require a great investment of effort or time. With such hurdles, how can we expect the ecosystem to get off the ground or work at scale?

For the protocol to be scalable we need more convenient ways to obtain expertise in the system. But we need to do so while maintaining the integrity of the process to obtain on-chain expertise. Expertise in the protocol is not merely a sign of knowledge, it is also an indication of merit and of alignment, since it is obtained by contributing to the ecosystem. And since there is no way to buy or sell expertise in this system, it’s an effective (and meritocratic) alternative to plutocratic rule.

So how do you make obtaining expertise more convenient? What is the problem with anyone being able to validate projects? The rationale for weighting validator reviews based on level of expertise is that such a mechanism prevents Sybil Attacks on the review process.

Since expertise scores represent actual contribution to the ecosystem, and are non-transferable, splitting expertise between multiple accounts in the protocol would have the same exact influence as just using one account. To illustrate this point, if you have one account with 300 expertise points, it would have just as much of a chance to be randomly selected to validate a project as the combined chance of 3 accounts with 100 expertise points each. There is therefore no risk of users creating multiple accounts to game the system; the total expertise score, and therefore total influence in the protocol, of all accounts created by a user will always be the same as the user just having one account.
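The arithmetic behind this Sybil-neutrality is simple enough to state directly. When selection weight is proportional to expertise, a user's influence depends only on their total score, not on how it is split:

```python
def user_influence(user_accounts, rest_of_pool_expertise):
    """Selection weight is proportional to expertise, so a user's
    share of total influence depends only on the sum of their
    expertise, not on how many accounts it is divided across."""
    own = sum(user_accounts)
    return own / (own + rest_of_pool_expertise)
```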

While the mechanism can effectively prevent Sybil Attacks, it creates a problem when it comes to new users validating a project. If the users have no expertise in the domain, the weight of their influence would be zero. And since users can create multiple accounts permissionlessly on a blockchain, if anyone without expertise could join a validation set, a user could potentially flood a validation set with countless accounts.

But perhaps there can be a workable solution that would allow new users to participate in a validation while still keeping the integrity of obtaining expertise in the protocol. We can consider three classes of users who don’t have direct expertise in the project’s subject-matter: users with expertise in related fields, users with expertise in unrelated fields, and new users with no expertise at all.

For users with expertise in related fields, the protocol can calculate a “relatedness quotient” which would indicate the degree to which a user’s expertise in other fields are related to the project’s field. Such a quotient can be based on how often project reviews in the protocol have these fields in common, and the level of importance of each project to these fields. The user’s expertise in the other fields can then be multiplied by the relatedness quotient to determine the weight of their expertise in the validation. These users can then participate in the validation pool like any other validator with domain-specific expertise.
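A crude version of the quotient can be computed from co-occurrence counts alone. Normalizing by the source field's total co-occurrences is a simplifying assumption; the text notes that project importance should also factor in:

```python
def relatedness_quotient(co_reviews, source_field, target_field):
    """co_reviews[f] maps other fields to how often project reviews
    in the protocol list both f and that field. Simple proxy:
    co-occurrence with the target over the source field's total
    co-occurrences. (A production formula would also weight by the
    importance of each project to these fields.)"""
    total = sum(co_reviews[source_field].values())
    return co_reviews[source_field].get(target_field, 0) / total

def effective_expertise(expertise, quotient):
    """Weight a validator's out-of-field expertise for this review."""
    return expertise * quotient
```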

The protocol should also allow a limited number of users with expertise in unrelated domains to participate in the validation set. Though these users’ reviews won’t carry weight in the Quadratic Voting process, other validators can vote on the quality of the reviews, thus granting them weight in the system. The same concept can apply to users who have no expertise in the system at all. Limiting the number of such validators in each validation set keeps the burden of reviewing the quality of their validations manageable for the other validators.

Thus we can have a process that maintains the integrity of obtaining on-chain expertise, while allowing users with no expertise in the field and new users to participate in each project validation.

The next question is how would anyone be able to obtain expertise in a new field that is introduced to the protocol? This may be less of an issue for mature ecosystems with lots of domains, but for a nascent ecosystem this may be a common occurrence.

One way to solve the problem may be for a proposer to “graft” the new domain onto existing domains by estimating a relatedness quotient for the new domain in relation to these domains. This process can be initiated during the creation of a new proposal. Validators with expertise in the related domains can then review the project with reviews weighted by relatedness quotient.

To prevent proposers from trying to manipulate the process, there needs to be an initial validation process that ensures the relatedness quotients are sensible. These initial validators can come from the fields specified in the proposal as well as from ecosystem-wide validators. Once the new domain is successfully “grafted” onto existing domains, the system can start calculating relatedness quotients based on the frequency of common fields, as described earlier.

What if a field is unrelated to any other field in the ecosystem? This is hard to imagine, since all fields have at least somewhat of an overlap, but such a situation is possible in the very beginning of an ecosystem. There are two ways to approach this problem; either wait until the ecosystem is more mature and has more related fields, or treat the new field like a separate ecosystem. If the field is treated as a separate ecosystem it is possible to merge these ecosystems later on — though that is a subject for a later discussion.

So now we have a solution for how users can obtain expertise regardless of existing expertise. We also have a solution for project validation in new fields. We still need an answer to how users would obtain expertise at the very beginning of the ecosystem.

It should be evident that if at the beginning no one in the ecosystem has expertise then no one can validate any project. But if it's impossible to validate projects then no one can obtain expertise in the protocol. What then can be the solution? The answer is that when the ecosystem is launched, the initial users — the founders of the ecosystem — would have to self-assign expertise scores based on contribution to the ecosystem.

To put it mildly, this seems like a less-than-ideal solution. Obviously it would be preferable to have an ecosystem where users have verifiable expertise out of the box. Unfortunately that seems to be a technical impossibility for value-based systems. That is like expecting a baby to walk and talk from birth. The good news is the abundance protocol has an effective feedback loop in place that incentivizes founders to self-report their expertise and contribution as accurately as possible.

The goal of founders in the abundance ecosystem is to create a flourishing economy, since that is what would allow them to realize their economic potential and to prosper in the long term. They can only do so if lots of people participate and make the ecosystem thrive. But the more people participate the smaller the founders’ relative influence in the system becomes. At the same time, the more people participate the more the ecosystem decentralizes and gains public trust in the process. This is the general dynamic of a successful ecosystem where founders can prosper. But the only way to get there is by being as transparent and honest as possible in the initial self-reported expertise.

If the founders are being manipulative or fraudulent in their self-reported expertise, people are unlikely to join such an ecosystem, since their contribution will not be valued fairly. But if not many people want to join, the ecosystem would quickly become a ghost-system that no one wants to contribute to.

What’s more, with the proliferation of abundance ecosystems, each new ecosystem launch would be carefully scrutinized by participants in other ecosystems, who would want to provide the most accurate information about the new system. Since ecosystems are not competitors for scarce resources, there would be no misalignment between the public interest and the interest of reporters on this new ecosystem, and the reporting is likely to be reliable. This is yet another mechanism that can help protect users from malicious actors, and incentivize founders to be truthful in their self-reporting.

So now we have a clearer picture of how expertise can work in the abundance protocol, and how the ecosystem can grow and branch out. The integrity of the process of obtaining expertise has to be maintained at all times to preserve public trust in the protocol. Expertise allows meritocracy and alignment within the ecosystem, since it cannot be bought and is based on user contribution to the protocol. Expertise also protects the protocol from Sybil Attacks, as contributions are distinct and non-fungible.

While the role of expertise in the protocol can now be better understood, there are still areas that need to be fleshed out. This is particularly true when it comes to the role of validators. While we worked out the mechanism to keep proposers aligned with the ecosystem, the alignment for validators is still relatively weak. At the moment, validators can at most lose some expertise points for trying to game the system. But if the expected upside of cheating is greater than the potential downside, such a mechanism is unlikely to be effective. Making it more effective means introducing penalties for fraudulent validations. This can be achieved by having validators lock up an amount proportionate to their payout (perhaps 1/3 of it) during the review process. Such a mechanism could be particularly effective for new validators, who have no expertise points to lose. By locking up a nominal amount of funds, users will have less of an incentive to create multiple accounts with no expertise and try to obtain expertise without any skin in the game.

After the review process, including the QV voting, is completed, and after the challenge period is over, validators’ funds will be unlocked along with whatever payout (and expertise) the validators earned.

Without locked funds, a user can create countless accounts, apply to every validation pool multiple times, and submit arbitrary reviews when selected. Then, even if 99% of the arbitrary reviews are caught by the protocol, the user would still make money and gain expertise in the process. By locking up funds, such a Sybil Attack on the protocol becomes counterproductive: a user attempting the same tactic is likely to lose substantially more money than they stand to gain, eliminating this attack vector.
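The expected-value arithmetic behind this claim can be made explicit. The specific numbers below (1,000 spam reviews, a 99% catch rate, a payout of 30 with a lockup of 10, i.e. 1/3) are illustrative assumptions:

```python
def sybil_attack_ev(n_reviews, catch_rate, payout_per_review, lockup_per_review):
    """Expected profit of flooding validations with arbitrary reviews:
    uncaught reviews collect the payout, caught reviews forfeit the
    locked funds."""
    caught = n_reviews * catch_rate
    uncaught = n_reviews - caught
    return uncaught * payout_per_review - caught * lockup_per_review
```

With no lockup, even a 99% catch rate leaves the spammer with a positive expected profit; a lockup of one third of the payout flips the expectation deeply negative.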

Similar to the case with proposers, locking up funds may create a barrier for validators of lesser means. Here too, investors can provide loans to validators, with an interest rate that corresponds to validators’ review track records. But what of validators who don’t have a track record? They may find it difficult to obtain a loan, and therefore won’t be able to build a track record, creating a vicious cycle. This issue can be mitigated by participating in validations in a test environment that doesn’t involve real money or expertise scores. Investors may then consider such a track record and offer loans at a reasonable interest rate.

Now we have all the components of the Abundance Protocol in place, but there are still a few more loose ends to tie for the protocol to be complete. The most important ones are: how the protocol deals with projects that require more effort to review, how validator pools work, how challenges work, and how ecosystems can merge. Let's consider how these can be resolved.

On the question of projects that require more effort, the main issue is that validation compensation is based on expected impact, not effort. The effort a review requires should therefore be indicated by the proposer, so that validators can choose to opt out of demanding reviews. Since such projects are less attractive to validators, proposers may be tempted to misrepresent the effort required so that a project is prioritized in the validation log. However, initial validators should be able to flag such misrepresentations and penalize proposers. Proposers may also offer a greater reward to incentivize the review; otherwise the review set may not have sufficient expertise to determine the project's full impact.

On the issue of the validator pool, any ecosystem participant can choose to opt in to any domain-specific validation pool or the ecosystem-wide validation pool, and be chosen at random based on the rules we’ve already discussed. This process needs to be done on-chain, so that the protocol can randomly select validators from the pool. There is however a more technical issue of how the protocol should fill a set of validators for a review. What if some validators don’t know enough about a particular subject or prefer not to participate for other reasons? If the protocol only chooses enough validators to fill the required expertise for the review, any validator who drops out would result in the review set having insufficient expertise.

To fix this issue the protocol needs to select a random list of validators that also includes standby validators. There should also be a short time period where validators on the list can signal their willingness to review the project. Validators should also indicate whether they’d want to put their full expertise weight toward the review or just part of it; if validators feel less comfortable with the review they may want to limit the downside of an incorrect review. Once the validation set is finalized, validators would have funds locked in proportionate to their expertise weight indication. This process ensures that each validation has sufficient expertise, while allowing validators to use good judgment on their ability and interest in the review.
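The fill procedure just described can be sketched as a walk down the randomly ordered candidate list (standbys included), accumulating each opted-in validator's committed expertise weight until the requirement is covered:

```python
def fill_validation_set(candidates, required_expertise):
    """candidates: randomly ordered list of
    (validator, committed_weight, opted_in) tuples, standbys included.
    Collect opted-in validators, each counting for the expertise
    weight they are willing to commit, until the requirement is met."""
    chosen, committed = [], 0
    for validator, weight, opted_in in candidates:
        if not opted_in or weight <= 0:
            continue  # validator declined or committed nothing
        chosen.append((validator, weight))
        committed += weight
        if committed >= required_expertise:
            return chosen
    raise RuntimeError("candidate list has insufficient committed expertise")
```

Locked funds would then be proportionate to each validator's committed weight, so a validator who limits their stake in an unfamiliar review also limits their downside.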

Now let’s consider how a challenge to a validation would work. There are several issues at play here: if a particular validation is fraudulent, another set of validators would need to be selected at random to redo the validation. Since such a process requires funding, the challenger would need to provide those funds. But why should anyone have to provide their own funds for something that essentially benefits the whole ecosystem? The reason is that fraudulent reviews are not self-evident, and require human intervention. Otherwise detecting fraud could have been hardcoded into the protocol. A challenge can be to the entire validation set (if everyone is suspected of fraud), or to a specific validator (or validators). While a challenger would have to provide funding for the revalidation, they’d also be expecting a payout if their challenge is successful. Otherwise no one would have the incentive to look for fraudulent reviews, and no one would want to fund revalidations. If the challenge is successful, the payout would come out of the locked funds of the validators who were challenged, while the cost of the revalidation will be covered by the funds allocated by the original proposer — these are funds that the original challenged validators will not get.

This incentive structure thus attracts honest challengers to look for fraud in validations. It also disincentivizes bad actors from trying to game the system by challenging honest validations in order to produce more favorable results. A challenger would only be successful if they expect a random set of revalidators to produce a review that aligns with their own impact estimate. If the challenge is unsuccessful, however, the challenger will lose all the money put toward revalidation. Like in other cases in the protocol, challengers can also take a loan to fund a challenge. Investors can then set the interest based on their expectation of the success of the challenge.

Let us now turn to the question of merging ecosystems. Why would we want to merge ecosystems in the first place? While there is a benefit in preserving local decision making, at times there is a benefit to pooling resources. One such benefit has to do with domain experts. The ecosystem benefits most when more domain experts can participate in project reviews: reviews become more credible, and the ecosystem gains the capacity to review larger projects. How can an ecosystem quickly attract more domain experts? By allowing experts from other ecosystems to participate in reviews. Making that happen requires normalizing and mapping expertise scores earned in other ecosystems to the current ecosystem.

This standardization process obviously needs to be credible and transparent. Since the credibility of reviews is what keeps the value of the ecosystem’s native currency, nothing is more important than making sure that those who review projects have the appropriate credentials to do so. Participants in an ecosystem can therefore propose a formula to translate reviewer scores between the ecosystems. This in effect would allow reviewers in one ecosystem to port their score automatically to another ecosystem while allowing both ecosystems to maintain Sybil Resistance. What does this mean? If a reviewer in Ecosystem A translates her expertise score to Ecosystem B, she cannot use that “additional” expertise score in any way back in Ecosystem A. If she then tries to translate the score from Ecosystem B back to A, the system would flag the user address as already linked.
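The one-way port with link-flagging can be sketched as follows. The flat conversion rate stands in for whatever translation formula the ecosystems agree on; the class and method names are hypothetical:

```python
class ExpertiseBridge:
    """Sketch of one-way expertise translation between ecosystems.
    A ported address is flagged as linked, so the 'additional' score
    cannot round-trip back and double-count (preserving Sybil
    resistance in both ecosystems)."""

    def __init__(self, rate_a_to_b):
        self.rate = rate_a_to_b  # agreed translation formula (here, a flat rate)
        self.linked = set()

    def port(self, address, score_in_a):
        """Translate an Ecosystem A score into Ecosystem B, once."""
        if address in self.linked:
            raise ValueError("address already linked between ecosystems")
        self.linked.add(address)
        return score_in_a * self.rate
```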

By translating expertise scores between various ecosystems, reviewers will be able to maximize their earnings by reviewing projects in any of the participating ecosystems. Meanwhile, the ecosystems would benefit from having a larger pool of experts to draw from, while maintaining local control over how funds are distributed.

But if only expertise scores are being translated, doesn’t this mean that the ecosystems are still effectively independent and merely share pools of experts? Yes, that is exactly what it means. The concept above describes an intermediate arrangement between fully merged ecosystems and completely independent ones, since fully merging ecosystems is not always beneficial. The mechanism to fully merge ecosystems would be very similar to the one described above. The main difference is that it would be bi-directional: both ecosystems would have to agree on a formula to translate scores between them. The formula would also have to apply to the ecosystems’ native currencies, essentially establishing a fixed (programmed) exchange rate between them. The ecosystems can then choose to issue a new “unified” native currency with a fixed exchange rate to each of the previous currencies.

The abundance economy thus allows ecosystems to choose the arrangement that suits them most. They can merge, or diverge, based on what benefits each community the most, all while aligning the interests of all participants in each ecosystem.

So now everything in the protocol is sorted out. Proposers create a proposal for a public goods project, and specify its estimated impact, subject-matters, and the effort required for review. They must provide funding corresponding to the estimated impact, to be used by validators. An initial validator set is then randomly selected for a preliminary review of the proposal. These validators correct any errors in the specified subject-matters and effort, and decide on the priority of the proposal. Once the validation log reaches the proposal, expert and ecosystem-wide validators are randomly assigned to it. Expert validators review the credibility and importance of the project. They are then split into two groups that vote on the quality of each review from the other group using Quadratic Voting. Ecosystem-wide validators then take the first group’s input and review the expected impact of the project, followed by a similar QV round. The project thus obtains a value from the validation process. Following a challenge period, this value becomes the ecosystem’s consensus value of expected impact.

Periodically, as the project realizes some of its impact in the ecosystem, proposers can create a new proposal with an estimate of realized impact. Once again they must provide funding for validators, but this time the set of validators can be smaller than for the expected impact validation. Here too the proposal is reviewed by expert and ecosystem-wide validators, and the validation is followed by a challenge period. Once the period ends, funds are issued to the project contract. However, they can only be released once all contributors and project influences reach an internal consensus on the allocation of funds and expertise. If all conditions are met, funds can be released to contributors.

By following a rigorous process, everyone in the ecosystem can be confident that each project is reviewed carefully and that impact is determined accurately and transparently. And because the interests of proposers, validators and contributors are aligned with the interest of the ecosystem, the protocol can reach consensus on the value of public goods projects.

While the protocol may be well thought out, this doesn’t mean that its execution won’t have its fair share of challenges. The main challenge has to do with the fact that for an ecosystem to be able to reach consensus on the impact of any project, everyone in the ecosystem must have access to all impact-related data. Without a wealth of data available from decentralized sources, the ecosystem simply won’t be able to assign an impact value to the project.

The good news is that we have all the technology needed to make the data available. Moreover, the incentive structure of the system motivates everyone in the ecosystem to provide as much data as possible on the impact of any project. Doing so improves the ecosystem’s ability to value projects, and therefore helps maintain the value of the currency and attract contributors to the ecosystem. Over time, tools will be developed to capture more impact data from projects, allowing more accurate reviews of project impact.

Another challenge is that of scale. At least initially, the protocol would likely be effective only for large-scale projects with substantial impact. It would be a lot more difficult to review projects whose impact is smaller or less certain. This is especially true for news media or artistic projects, both due to limitations in estimation tools and the limited number of validators. However, over time the number of validators and their level of expertise is expected to increase. Similarly, there is a constant incentive for contributors to develop methods and systems that improve the estimation process. This means that over time the protocol would be able to accurately review a growing number of projects, and eventually any project regardless of its scale.

So now we have a robust design for a blockchain-based protocol that enables a contributor-to-ecosystem value exchange. This protocol creates effective feedback loops for public goods, but that is only half the equation. Changing our trajectory toward dystopia would require effective feedback loops for both public goods and negative externalities. Only then can an Abundance Paradigm emerge and we can have individual–public interests alignment. The question is, how can the Abundance Protocol create such feedback loops? How can it put us on a path toward economic abundance? This is what we’ll discuss next.
