Consensus rewards whitepaper concern

I want to preface this with a few things:

  1. We’ve only seen the initial thoughts on consensus participation
  2. The people creating it are geniuses, I am most definitely not
  3. I’m just offering my opinions. Take them as that.
  4. There are probably many misspellings and typos

That said, I wanted to bring up a concern about one particular aspect: the maximum Algos in an account that can partake in the rewards.

The current whitepaper suggests there will be a minimum and a maximum account balance for consensus rewards, with the maximum initially suggested as 2^26 or 2^27.

If we look at the lower maximum of 2^26, that’s about 67 million Algos running on a single node.
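To put concrete numbers on the proposed caps, here is a quick back-of-the-envelope sketch. The ~1.34B Algo online-stake figure is an assumption for illustration only (it roughly matches the ~5% figure mentioned below), not a number from the whitepaper:

```python
# Rough sizing of the proposed per-account caps.
# PARTICIPATING_STAKE is an assumed figure for illustration only;
# substitute the live online stake from a chain explorer.
PARTICIPATING_STAKE = 1_340_000_000  # Algo assumed online

for exp in (26, 27):
    cap = 2 ** exp
    share = cap / PARTICIPATING_STAKE
    print(f"2^{exp} = {cap:,} Algo ({share:.1%} of assumed online stake)")
```

Under that assumption, 2^26 works out to 67,108,864 Algo, roughly 5% of online stake on a single node, and 2^27 to double that.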

There are several issues that cause me concern here:

Single point of failure: Since an account can’t participate on multiple nodes simultaneously, if that node goes down, a big chunk of stake goes with it. In fact, it’s just about 5% of current participating stake.

It discourages additional physical hardware: Pooling becomes much more likely, since it’s much easier to add to a pool than it is to run and maintain a physical node.

It puts the protocol itself in the hands of fewer centralized decision-makers. It wouldn’t take many pools to exceed 10% of online stake, which means that those pools could act together to prevent future protocol upgrades. They can upgrade (or choose not to upgrade) the algod version without the permission of the pool’s Algo providers. Maybe they don’t like a future protocol update that reduces the maximum pool size to 1 million, for example, and simply don’t update to that version. The poolers would then have veto power over protocol upgrades. Yes, the pool contributors could take their stake offline, but how do you coordinate that? It seems dangerous to me.

If the Foundation can whitelist pool providers, it goes against the very idea this is supposed to support: decentralization.

I think the top end of the account balance eligible for consensus rewards should be much lower, to encourage more nodes with smaller-stake accounts. This assumes that a node’s performance suffers if too many accounts participate on it. Given the changes that are also coming to better handle accounts that are not behaving as they should (“garbage collection,” as the whitepaper calls it), it makes sense to have many more physical nodes.

As for individual entities that are not part of a pool: even if they have a large amount of Algo participating in consensus, they can still participate however they choose (run a single node or split it up); they just won’t earn consensus rewards once an account exceeds a (lower) threshold. If they have that many Algos as a single entity, they are probably protecting the network for a much more important reason than simply earning consensus rewards (the way Silvio Micali originally envisioned). If they are just in it for the rewards, they’d have to maintain more physical nodes to earn them.

The goal of rewards should be to increase both the number of physical nodes and the amount of online Algo, and this balance needs to be carefully considered.


Some great points here, thanks for sharing your thoughts.

We’re trying to balance the amount of Algo online (and the number of nodes, which we also want to increase) with the cost of operating nodes for a large amount of stake.

The paper is exploratory, so these are not hard decisions, but the lower we make the cap, the more expensive we make global staking operations.

Ensuring we don’t have a stall is primary, and thankfully, as per point 4.5.1 in the paper, we have some protection:

The absenteeism mitigation approach effectively provides “Garbage Collection” for consensus, permitting relatively significant portions of global stake (5% to 10%), within a reasonable rolling time period, to ungracefully exit consensus ad infinitum without detriment to the network.


We are planning to provide a pooling service where communities (NFT communities, meme token communities, etc.) can run their own pools, to which community members can stake their Algo to run efficient nodes. The profits they make from these pools can be distributed to their holders, thus creating utility for holding tokens.

This will also ensure that the retail community is involved in securing the network.
So we recommend not having any kind of whitelist for pools.

This will be added to the service store of Notiboy, which already has Web3 notifications and chat features.

We always welcome your feedback on running such a service.


Yep, I agree: cap at 10 million, the minimum should also be lowered, and the point about hostile capture of upgrades is spot on as well. Ideally, the rewards percentage should be dynamic with respect to parameters that fluctuate with the balance between operational cost and Algos online.

What do you mean by this? It seems an important point.

I think there is no plan from the Foundation to whitelist pools.


Just to chime in, I was hoping for some of these features years ago – mainly the ability for regular, non-technical Algo holders to participate in consensus. Somewhat old thread below:

Following up on this, I think a lot of Algo holders would be happy to contribute some or all of their balance to participation pools if those pools pledged part of their consensus proceeds for particular projects or infra that those users find important. Maybe we can build a model that facilitates this. Then people could “vote with their participating Algo” to fund initiatives or projects, taking some of the burden (and centralization) off of the Foundation and Xgov.

I agree completely with your assessment. A naive solution I thought might be interesting to think about is utilising VRFs to decouple node and stakeholder incentives.

Yes, a larger max lowers the operational expense for the node operator. However, everything comes with a tradeoff. The major tradeoff here is risk to network resilience. I would err on the side of caution. The proposed max limits are too high.

Is the Algorand Foundation reaching out to node operators to run the numbers and also to onboard?

It would also be in Algorand Technologies’ best interest, as a for-profit corporation and as ALGO whales, to provide a staking pool commercial service for Algorand consensus participation.


I think very large pools are not a good thing, so a max cap is good. BUT how will that actually work in practice? Some MegaWhaleNodePool Corp could create node1, node2, nodeX, and so on, and effectively have large control via multiple nodes. They can control them all the same way as one large node; they can unplug and manage them all together. So the idea is good, but how will the actual execution work in reality? This is not negativity or FUD, just something that has to be taken into account.

Personally I like that 2^27 cap.

Best regards,


We have to get the community involved and create a second layer of pools that will serve as a layer of safety.


When you have more details and info, can you contact me on Twitter @roamoilanen? I am interested in running a COOP Node.


Of course avoiding a stall is primary, however, the proposed limits are in line with the production mainnet stake topology today, where we would see stake representing ~5% of the active stake on a single node.

This metric isn’t static either of course, as global stake grows so does the ceiling, by definition.

We set the current 5% (~65MM Algo) guidance based on empirical analysis of the chain since launch.

What have you based this assertion on?


A 10MM cap significantly reduces yield for any entity with tens of millions of Algo in stake, including exchanges.
There is a sweet spot, but it’s definitely above 10MM.

As per the paper, this must align with absenteeism mitigation.


Algorand’s liveness relies upon at least ∼80% of the participating (“online”) stake to actively and honestly take part in the consensus protocol in order to produce blocks.

Let’s assume the online stake is 7B ALGO.

  • 20% is 1.4B ALGO
  • If 22 nodes each hosting 65M ALGO become compromised, then the Algorand network is effectively down - assuming all other nodes are healthy

Algorand takes pride in zero downtime. Let’s keep it that way. The network will be reliant on these large staking pool providers. Thus, the protocol must be designed to promote enterprise-grade best practices.

Large staking pools should be operated at an enterprise-grade level, which means a distributed deployment architecture spread across multiple data centers across multiple regions for high availability, resilience, and disaster recovery to meet network SLA requirements.

By design, participation nodes are lightweight and cheap to run. The cost to scale out is negligible compared to overall operational costs. The lion’s share of the cost for large staking pool operators will be DevOps required to operate, maintain, and support a secure enterprise-grade deployment.

In the short term, ALGO price and transaction volume do not support enterprise-grade commercial models unless highly subsidized by the Algorand Foundation. The bottom line is we need strong ecosystem growth to drive ALGO price appreciation and transaction volume growth. Real-world commercial adoption at enterprise scale is required to economically sustain the network. Enterprise adoption is very risk-averse and downtime risk is unacceptable. The higher the cap, the higher the risk to the network - period. This is why I also suggested that Algorand Technologies provide an enterprise-grade commercial staking service. If Algorand Technologies provided such a service, then it would be a great show of confidence for Algorand’s long-term success.


Not in a highly available manner.

Not really, because there is no way to limit multiple low-cap accounts being run on the same node, which is incentivised by a lower cap.


According to some smart people, the current node software is not optimized for multiple addresses participating on the same node, and apparently after 3-4 do so, the node might run into problems (missing votes/proposals). This would mean the lower the cap, the more physical nodes those entities would have to run.

The question now is whether that’s true at all - and if it is, whether the node software can be changed to eliminate those problems.

The number of simultaneous part keys before significant slowdown on well-spec’d machines is about 5.

They might run. You will have people who prioritise lower cost of execution over network health, which is another reason why we need a cap of ~4-5% of the active staking set.

Yes absolutely, we can thread this part of the critical path better. Again it comes down to limited resources.


The cloud expense will be minor relative to the overall operational expense. A dedicated team will need to be available 24x7 to maintain, support, and secure the entire system. Large staking pools can’t be managed half-assed by a bunch of amateurs. There are hiring and training costs to consider. The system would require a cloud architect and DevOps personnel.

That would simply be dumb for large staking pools.


But staking pools will do it if they grow large enough to reach whatever the cap is. If they are profitable and they hit the staking cap, they won’t leave money on the table. They will spin up another node, copy their existing code, and create Pool2.