While these proposals are certainly better structured than those in G3, I still have some grave concerns.
Below I address some of them for each measure, in order of importance.
M1:
Endangering the security of the network?
The aim of M1 is to get more ALGO to participate in DeFi. A crucial question for this goal is: where are these ALGO expected to come from?
The answer, of course, is from plain holders. But many of these "simple" holders aren't just holding. Many are performing the most crucial task for the network - participating in consensus and thus providing its security.
Participating in the majority of DeFi activities does not allow simultaneous participation in consensus. Under M1, these "simple" holders would essentially be penalized in Governance for performing the network's most crucial task! I would argue that many would reconsider what is best for them personally and deploy their funds to DeFi, thereby reducing the security of the network. Hence, I would suggest that only DeFi solutions that allow direct, individual participation in consensus simultaneously with DeFi activity should even be considered for such a program. While this would disqualify the majority of protocols, network security should remain the top priority of Algorand - as it has always been.
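Note that whether an account actually participates in consensus is itself verifiable on-chain: an account's status is "Online" only while it holds a registered participation key. A minimal sketch using py-algorand-sdk (the public endpoint URL is my assumption, for illustration only):

```python
# Sketch: check whether an address is registered online in consensus.
# The algod endpoint is an assumed public one, not part of the proposal.
from algosdk.v2client import algod

ALGOD_URL = "https://mainnet-api.algonode.cloud"  # assumed public endpoint
client = algod.AlgodClient("", ALGOD_URL)

def is_participating(address: str) -> bool:
    """True if the account currently holds a registered participation key."""
    info = client.account_info(address)
    return info.get("status") == "Online"
```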
If this concern is deemed exaggerated, I would expect the Foundation to dismiss it with an exhaustive simulation of possible outcomes, showing that the probability of a reduction in security is indeed negligible.
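To be clear about the kind of analysis I mean, here is a toy Monte Carlo sketch; all parameters are made up and it is in no way a substitute for the Foundation's own modeling:

```python
# Toy sketch: how often does online stake stay above a "safe" fraction if
# a random share of currently-online holders migrates to DeFi?
# Threshold and migration parameters are illustrative assumptions.
import random

SECURITY_THRESHOLD = 0.70  # assumed "safe" fraction of stake remaining online
TRIALS = 10_000

def p_safe(migration_mean: float, migration_std: float) -> float:
    """Fraction of trials in which the online stake stays above the threshold."""
    safe = 0
    for _ in range(TRIALS):
        migrated = min(max(random.gauss(migration_mean, migration_std), 0.0), 1.0)
        if 1.0 - migrated >= SECURITY_THRESHOLD:
            safe += 1
    return safe / TRIALS

# e.g. if on average 20% (+/- 10%) of online stake were to move to DeFi:
print(p_safe(migration_mean=0.20, migration_std=0.10))
```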
Unknown benefit?
Another unknown of M1 (at least to the general public) is what, concretely, the expected outcome of deploying a further 24M ALGO/year to DeFi is. Concrete projections based on the results of the Aeneas program should be made available. However, I have not yet seen any published report on exactly how those funds were spent and what their effect was. Before further funds are committed to the exact same cause, I would expect this to be clarified so that the action can be justified.
Implementation requiring centralized middlemen
The implementation should not rely on any centralized middlemen. If a list of eligible protocols and participants is to be constructed and maintained, it should be based solely on on-chain data that anyone can independently verify.
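Such a list is feasible without a middleman. As one illustration, anyone can enumerate the accounts opted into a given DeFi protocol's application straight from a public indexer; the endpoint URL and application ID below are assumptions for the sketch:

```python
# Sketch of a middleman-free eligibility check: list all accounts opted
# into a DeFi application, from public indexer data anyone can re-run.
from algosdk.v2client import indexer

INDEXER_URL = "https://mainnet-idx.algonode.cloud"  # assumed public endpoint
client = indexer.IndexerClient("", INDEXER_URL)

def accounts_in_app(app_id: int) -> list[str]:
    """All addresses opted into the given application, paged through."""
    addresses, next_token = [], None
    while True:
        page = client.accounts(application_id=app_id, next_page=next_token)
        addresses += [a["address"] for a in page.get("accounts", [])]
        next_token = page.get("next-token")
        if not next_token:
            return addresses
```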
M2:
Unequal treatment of Algorand users
The proposal M2 inherently discriminates between different Algorand users. One who provides liquidity on a DEX is consciously prepared to part with the ALGO committed to it. That is an inherently different type of commitment than what is required of "ordinary" Governors.
The proposed solution - soft-locking the LP tokens based on a snapshot at the end of the commitment period and including only LPs with assets that have a "recognized, open and substantial market" - does not change this. LP users would still be assigned voting power based on ALGO they might not hold due to impermanent loss, regardless of whether that loss is "natural" or maliciously induced. The proposed solution merely tries to minimize the probability of a maliciously induced loss while being "reasonably sure" that it will not happen.
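To make the drift concrete, here is the standard constant-product arithmetic for an AMM pool; the pool sizes and price move are made-up numbers:

```python
# Worked example, assuming a standard constant-product (x * y = k) pool.
import math

algo_in, usdc_in = 10_000.0, 10_000.0   # deposit at an ALGO price of 1 USDC
k = algo_in * usdc_in

def algo_held(price: float) -> float:
    """ALGO side of the LP position after the pool rebalances to `price`."""
    return math.sqrt(k / price)

# If ALGO doubles to 2 USDC by the snapshot, the position holds only
# ~7,071 ALGO, although the Governor originally committed 10,000:
print(algo_held(2.0))   # ~7071.07
```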
The requirements for being a Governor should be exactly the same for all.
Unequal treatment of DEXs
A DEX can be implemented either as an AMM (i.e. with liquidity pools) or as an order book. M2 rewards only the first kind of implementation, while the second is just as important despite not being as popular. I do not see a difference in their value to the ecosystem that would justify treating them differently. Hence, a solution that covers both cases is necessary.
Legal ramifications of the endorsement of certain ASAs by the Foundation?
The proposal M2 is based on the assumption that the Foundation selects certain assets as having a "recognized, open and substantial market".
If such a selection were made, I would suggest gathering and sharing with the public a legal opinion on any possible ramifications for the Foundation (and consequently for the success or failure of Algorand) of such an endorsement, should any of these ASAs fail. Better safe than sorry.