Algorand vs Avalanche

Hi,

I read the Avalanche whitepaper, and it says this about Algorand:

Neither Algorand nor Conflux evaluations take into account
the overhead of cryptographic verification. Their evaluations
use blocks that carry megabytes of dummy data and present
the throughput in MB/hour or GB/hour unit. So we use the
average size of a Bitcoin transaction, 250 bytes, to derive
their throughputs. In contrast, our experiments carry real
transactions and fully take all cryptographic overhead into
account.

The throughput is 3-7 tps for Bitcoin, 874 tps for Algorand
(with 10 Mbyte blocks), 3355 tps for Conflux (in the paper it
claims 3.84x Algorand’s throughput under the same settings).
In contrast, Avalanche achieves over 3400 tps consistently
on up to 2000 nodes without committee or proof-of-work. As
for latency, a transaction is confirmed after 10–60 minutes in
Bitcoin, around 50 seconds in Algorand, 7.6–13.8 minutes in
Conflux, and 1.35 seconds in Avalanche.

Avalanche performs much better than Algorand in both
throughput and latency because Algorand uses a verifiable
random function to elect committees, and maintains a totally-ordered log while Avalanche establishes only a partial order.
Algorand is leader-based and performs consensus by committee, while Avalanche is leader-less.
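
If I understand their methodology, these throughput figures are just block capacity divided by a 250-byte average transaction. A quick sanity check on the Algorand number (the implied block interval is my own back-derivation, not a figure from the paper):

```python
# Back-of-the-envelope check of the paper's conversion: block capacity
# divided by a 250-byte average transaction. The implied block interval
# is derived here, not stated in the paper.

TX_SIZE_BYTES = 250            # average Bitcoin transaction, per the quote
BLOCK_SIZE_BYTES = 10_000_000  # the 10 Mbyte Algorand blocks they evaluate

txs_per_block = BLOCK_SIZE_BYTES / TX_SIZE_BYTES  # 40,000 transactions
implied_interval_s = txs_per_block / 874          # interval needed for 874 tps

print(f"{txs_per_block:.0f} txs/block, implied interval {implied_interval_s:.1f} s")
# -> 40000 txs/block, implied interval 45.8 s
```

So their 874 tps figure seems to assume rounds of roughly 46 seconds.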

Do you think it is true? I thought latency was only 4.5s for Algorand.


Commenting just on the Algorand facts:

  • Algorand latency is currently around 4.5s, not 50s. You can see it live on MainNet here: https://algoexplorer.io/
  • Algorand’s maximum block size is 1MB, not 10MB. See https://github.com/algorand/go-algorand/blob/master/config/consensus.go#L520
  • Algorand throughput is indeed around 1,000 TPS for simple transactions. But comparing TPS is always difficult. Most likely, if needed, Algorand’s block size could simply be increased (by consensus upgrade) to raise the TPS accordingly. There are limits, but it is hard to believe that a block size of 1MB is a hard limit for Algorand (see the rough sketch below).
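
To make that block-size arithmetic concrete, here is a minimal sketch. It assumes the 250-byte average transaction size the Avalanche paper borrows from Bitcoin, plus the 1MB / ~4.5s parameters above; treat the result as a ballpark upper bound, not a benchmark:

```python
# Rough TPS arithmetic from public parameters only: 1 MB max block
# size, ~4.5 s rounds, and an assumed 250-byte average transaction.
# Ballpark upper bound, not a measurement.

MAX_BLOCK_BYTES = 1_000_000  # current Algorand maximum block size
ROUND_SECONDS = 4.5          # observed mainnet round time
AVG_TX_BYTES = 250           # assumed average transaction size

def ballpark_tps(block_bytes: int, round_s: float, tx_bytes: int) -> float:
    """Upper-bound TPS if every block were packed with average-size txs."""
    return block_bytes / tx_bytes / round_s

print(ballpark_tps(MAX_BLOCK_BYTES, ROUND_SECONDS, AVG_TX_BYTES))
# -> ~889 tps, consistent with "around 1,000 TPS"

# A consensus upgrade raising the block size would scale this linearly:
print(ballpark_tps(10 * MAX_BLOCK_BYTES, ROUND_SECONDS, AVG_TX_BYTES))
# -> ~8,889 tps
```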

@Cryptosteph,

I don’t know exactly where you read that, since you did not provide a link to the paper, but I would suggest you take any written claims with a grain of salt.

The claim that was made might have been valid at that point in time. I’m not familiar enough with Avalanche specifics, but the claims mentioned above are mostly false for the current Algorand platform. In addition, Algorand is an ongoing, actively developed project, and as such its features, functions and performance characteristics are continuously changing (for the better…).


Unfortunately, I think some of this information comes from Algorand’s own papers and website:

https://www.algorand.com/resources/white-papers
https://eprint.iacr.org/2018/377

We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute,

This paper presents Algorand, a new cryptocurrency designed to confirm transactions on the order of one minute

By relying on Byzantine agreement, Algorand eliminates
the possibility of forks, and avoids the need to reason about
mining strategies [8, 25, 47]. As a result, transactions are
confirmed on the order of a minute.

There are other examples of this. It would be nice if Algorand had an updated paper, or at least something new and official that showed all current metrics with proofs.

For all I know, this information is why Kraken thinks Algorand needs ‘10 confirmations’:

Cryptocurrency  | Confirmations Required | Estimated Time (if included in the next block)
Algorand (ALGO) | 10 confirmations       | 45 seconds
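
Notably, that 45-second estimate is just 10 confirmations multiplied by a ~4.5s round time; since Algorand produces no forks, a single confirmation would already be final. A minimal sketch of that arithmetic (the helper function is mine, purely for illustration):

```python
# Exchange-style confirmation latency: confirmations x block time.
# Inputs come from the Kraken table above and the ~4.5 s round time;
# the helper function is purely illustrative.

ROUND_SECONDS = 4.5  # approximate Algorand round time

def confirmation_latency_s(confirmations: int,
                           block_time_s: float = ROUND_SECONDS) -> float:
    """Seconds until the requested number of blocks has been produced."""
    return confirmations * block_time_s

print(confirmation_latency_s(10))  # 45.0 -- matches Kraken's estimate
print(confirmation_latency_s(1))   # 4.5 -- enough on a chain without forks
```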

I’m not sure what the best course of action is here. The paper you’ve mentioned is valid… for the time it was written (i.e., early 2017, two years prior to mainnet launch).

We do plan to make ongoing improvements across all facets of the product. As such, parameters that have been used before might change their values, change their function, or take on a different representation.

While writing papers is nice, a paper is not an actual testament to true network performance. For instance, the initial paper tested a network with 500,000 users. That’s a great stress test for the agreement protocol, but one that is not really needed on today’s mainnet. On the other hand, features like quick recovery (in case the network fails to reach the desired threshold) are much more valuable, and have been available on mainnet since day 1.

Last, regarding the third parties that made certain assumptions based on historical network metrics: I’m not sure what mechanism would work best to “wake” them up and get them to update their platforms and documentation. Algorand could attempt to contact them individually, but there are limits to that. Given that it’s an evolving platform, I think users need to continuously update their integrations themselves.

I have no romantic notions about white papers or their formats, but I do expect projects to keep public information at least reasonably up to date. This is not some user-run open-source project with people working on it in their spare time.
Having these historical papers is completely appropriate, but there should be updates showing high-level changes and the current real metrics.

Perhaps a simple, concise, write-up of Algorand’s current specs and basic functioning would be helpful.

I find myself defending Algorand’s mechanics to people, and I really don’t have much I can point to ‘officially’ to back up the claims. When papers talk about requiring additional confirmations, or about 50-second block times, for a chain that has ‘single-block finality’ and block times of ~4.3 seconds, the optics aren’t great.

It seems like these are kind of basic things to get right - and right up-front.

Re. the 500,000-user tests: I hope sacrifices aren’t currently being made that would prevent network sizes like this (and far larger) in the future. Algorand’s current network size is extremely small, and I hope there is an expectation and a plan to grow it; otherwise some of the rationale for its consensus mechanism would be for naught. It could use a simpler and faster mechanism if the network size were going to remain small.

Indeed, Avalanche looks faster, but the comparison you mentioned is not clear and is missing a sound scientific approach:

  1. Algorand has finality after ~5s. Avalanche doesn’t have finality; it doesn’t even have state commitments! Validators only agree on endorsing certain blocks. It’s eventually consistent, though it’s very fast. As far as I know, there is no good research or confirmed simulation proving their latency claims.

  2. Algorand puts consistency before throughput; Avalanche does the opposite.

  3. To do a proper benchmark and comparison, you would need to run the two blockchains on the same network setup with the same block size, and have some reasoning about latency and state. That is a lot of work, and in the case of Avalanche you would need to add one more metric to the equation: the ratio of blocks that get dropped (see the sketch after this list).

  4. Bare-bones Algorand is more feature-rich than Avalanche.
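
As a sketch of point 3: if a protocol drops a fraction of its blocks, a fair comparison has to discount them. A minimal illustration with made-up numbers (nothing below is measured from either network):

```python
# Illustrative only: how a dropped-block ratio would enter a fair
# throughput comparison. Every number is invented for the example.

def effective_tps(txs_per_block: float, blocks_per_s: float,
                  drop_ratio: float) -> float:
    """Throughput counting only blocks that are ultimately kept."""
    return txs_per_block * blocks_per_s * (1.0 - drop_ratio)

# Same nominal rate, different drop ratios:
print(effective_tps(2_000, 1.0, 0.0))  # 2000.0 tps with no drops
print(effective_tps(2_000, 1.0, 0.2))  # 1600.0 tps if 20% of blocks are dropped
```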
