Hi,
I am running an experiment that generates some data and submits it to an Algorand private network. I am using the sandbox in dev mode for this experiment. In each experiment, I send 90 transactions to the network. The issue is that if I do not clean the network after one experiment finishes, the next experiment takes about 0.5 seconds longer to finish. When I run 10 experiments, my last experiment takes around 5 seconds longer than the first one. Since this is a blockchain, I do not understand why the network takes longer to process the same amount of data under the same traffic. Could someone help me understand the reason behind this, please?
Thanks,
Sahand
This is surprising.
Can you show how you do the experiments?
Are you using the “dev mode” (that is, a round is created for each submitted transaction) or the normal mode? (concretely, how do you start the sandbox?)
Also, what is your goal? Is it to benchmark a private instance of Algorand or is it to do development (and you don’t understand why things are getting slower)?
I start the sandbox by running ./sandbox up dev. After deploying the smart contract and creating an app, I call the app and send a random string to save on the blockchain. I record the timestamp when the transaction is sent and the timestamp when the confirmation is received, then calculate how long it takes Algorand to save my data on the blockchain. I conduct this experiment to benchmark Algorand and compare it with other blockchain platforms. In every experiment, I send 90 transactions. But when I run the experiment again, the duration of the experiment (sending 90 transactions and receiving the confirmations) increases. When I reset the network, the duration goes back to normal. Without resetting, it keeps increasing and I cannot tell why.
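A minimal sketch of such a measurement loop, using the Python algosdk (the endpoint, token, and the pre-signed transactions here are placeholder assumptions about the setup, not the actual code from the experiment; only the pure summary helper runs without a node):

```python
import time

def summarize(durations):
    """Total and mean confirmation latency from per-transaction durations (seconds)."""
    return {"total": sum(durations), "mean": sum(durations) / len(durations)}

# Hypothetical measurement loop against a sandbox node (not run here):
# from algosdk.v2client import algod
# from algosdk.transaction import wait_for_confirmation  # algosdk.future.transaction in older SDKs
# client = algod.AlgodClient("a" * 64, "http://localhost:4001")  # default sandbox token/port
# durations = []
# for stxn in signed_txns:          # 90 pre-signed app-call transactions (assumed prepared earlier)
#     t0 = time.perf_counter()
#     txid = client.send_transaction(stxn)
#     wait_for_confirmation(client, txid, 10)
#     durations.append(time.perf_counter() - t0)
# print(summarize(durations))
```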
./sandbox up dev
starts the blockchain in development mode.
It is meant for developing smart contracts on the blockchain.
Its performance does not reflect the performance of the actual blockchain at all. In particular, blocks are created immediately after transactions are received. This is not at all how the real blockchain works.
If you want to do any benchmark, you need to not use the dev mode, for example: ./sandbox up release
(you will need to first fully reset it and will lose all your accounts there)
I want to caution you strongly regarding benchmarking Algorand using a private network. MainNet is much more decentralized than a private network. One of the strengths of Algorand is supporting a very high number of participation nodes without incurring significant performance penalties (thanks to the novel sortition idea).
Benchmarking an Algorand private network will, in most settings, not accurately mirror real-world performance: not the right stake distribution, number of nodes, network latency/layout… Please be very careful when using such a benchmark to compare with other blockchains. Please also be very careful when basing decisions to use Algorand on this, especially if you are not using a properly spec-ed machine. (For low TPS, Algorand can run very smoothly on a Raspberry Pi, but if you are hitting higher TPS, you will need enough RAM and a fast enough SSD. See Install a node - Algorand Developer Portal. Note that a private network runs multiple instances of the algod server, so you need to scale things accordingly.)
Based on the documentation for the Algorand sandbox, if "DevMode": true is set, every transaction sent to the node automatically generates a new block, rather than waiting for a new round in real time. But in my experiment, I’ve set DevMode: false to create a block for each round instead of one per transaction.
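For reference, this flag lives in the network template JSON that goal uses to generate the private network; the exact file shipped with the sandbox depends on the version, so treat this as a schematic example rather than the literal file:

```json
{
  "Genesis": {
    "NetworkName": "",
    "DevMode": false,
    "Wallets": [
      { "Name": "Wallet1", "Stake": 100, "Online": true }
    ]
  }
}
```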
The issue is that the network is getting slower as it is being used. Based on my understanding, a blockchain is not supposed to get slower as we save more data on it. I would appreciate a hint about the reason behind this.
I would need to see the full setup. It is possible you set the flag too late or that there is another issue.
Algorand is definitely not slower when you have more transactions.
MainNet has 6M transactions and round time is still less than 4s.
The fact that one transaction takes only 0.5s the first time seems to indicate that your setup is in dev mode. Indeed, no transaction on Algorand can be confirmed so fast in normal mode, because each round takes about 3.9s. (So you necessarily need to wait at least 3s or so.)
It is possible you are not modifying the right flag at the right place or you are modifying it after the network is setup.
I would recommend using the official ./sandbox up release
that is explicitly made to not be in dev mode.
I understand that Algorand is able to handle a lot of transactions in less than 4s. That is why I didn’t understand what is happening in my private network.
Just to be clear, I didn’t say that it takes 0.5s for the first transaction. I am not in dev mode, so my network waits for the next round to create a new block and each round takes around 4 seconds. It takes 40 seconds when I send 90 transactions to the network and wait to get the last confirmation. But the next time I run the experiment and send another 90 transactions, it takes around 40.5 seconds to receive the last confirmation. And if I run the same experiment 10 times more, it would take around 45s.
After I set the flag, I run ./sandbox down dev and then clean. Then I start the network again.
If you send 90 transactions to a properly configured node and wait for confirmation, it should take less than 8s in total (8s is really a worst case, where you submit just after the previous block is proposed and need to wait for the current round to finish and then the next round to finalize). If this is not the case, there is an issue either in how you send the transactions, how you wait for confirmation, or how you configured the node (including, but not limited to, an under-spec-ed node).
Actually, you should even be able to send more than 10,000 txs to a node and it should take less than 8s for all the transactions to be confirmed.
Yes, I understand that. But I send the transactions at different rates. For example, I send a transaction every 0.4 seconds. That is why it takes around 40 seconds to get the confirmation. Still, when I run the experiment again, sending 90 transactions with 0.4 s delays between them, the experiment takes a little longer to process and I can’t tell why.
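Back-of-the-envelope, those numbers are consistent: 89 gaps of 0.4 s put the last submission at ~35.6 s, and the last transaction then waits at most about two rounds for confirmation (a quick arithmetic sketch, assuming the ~4 s round time discussed above):

```python
N_TX, INTERVAL, ROUND_TIME = 90, 0.4, 4.0

send_phase = (N_TX - 1) * INTERVAL  # ~35.6 s until the last transaction is submitted
worst_wait = 2 * ROUND_TIME         # the last tx may wait out the current round plus the next
print(round(send_phase, 1), round(send_phase + worst_wait, 1))
```

So anything between roughly 36 s and 44 s end to end is within the expected envelope for this send rate.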
In that case, many other factors may play a role.
If the only discrepancy you see is 5s, you’re talking about taking essentially one more round.
What is a little bit surprising is that this increase in time systematically happens the second…10th time you run the benchmark. However, I don’t think this is significant. It is possible the issue stems from the way you run the benchmark. For example, if the second batch starts immediately after the first one, since you waited for confirmation, you are now essentially aligned with the timing of a round. Maybe this alignment makes you lose a round at the end (or is such that after the 10th experiment you lose a round).
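This round-alignment effect can be illustrated with a toy model (my own sketch, assuming a fixed 4 s round time and that each transaction is committed at the first round boundary after its submission, which is of course a simplification of real consensus):

```python
import math

def experiment_duration(start_offset, n_tx=90, interval=0.4, round_time=4.0):
    """Toy model: total experiment duration when transactions are submitted
    every `interval` seconds, each committed at the first round boundary
    after its submission. `start_offset` is where in a round the run starts."""
    last_send = start_offset + (n_tx - 1) * interval
    last_confirm = math.ceil(last_send / round_time) * round_time
    return last_confirm - start_offset

# Identical workloads, different starting phases within a round: the total
# duration shifts by up to roughly one round (~4 s) with no change in load.
durations = [experiment_duration(off) for off in (0.0, 0.5, 3.9)]
print(durations)
```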
If however, when you run 100 times the benchmark, the 100th time takes significantly longer (say 50s or 60s), then there is an issue to solve somewhere.
If you really want to analyze it further, I would export the list of committed rounds of the transactions and the round time, using the indexer. I would then try to figure out where the discrepancy comes from: are rounds taking longer? Is one round missed? …
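One way to run that analysis offline (a sketch of mine: the indexer call in the comment is the assumed data source; only the pure helper is shown and exercised):

```python
from collections import Counter

def rounds_report(committed_rounds):
    """Given the committed round of each transaction, return per-round counts
    and any rounds in the span that committed none of our transactions."""
    counts = Counter(committed_rounds)
    lo, hi = min(counts), max(counts)
    missed = [r for r in range(lo, hi + 1) if r not in counts]
    return counts, missed

# Hypothetical: pull each transaction's confirmed round from the sandbox
# indexer (e.g. algosdk.v2client.indexer.IndexerClient) and feed them in.
counts, missed = rounds_report([101, 101, 102, 104, 104])
# -> round 103 committed none of our transactions; was it skipped or slow?
```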
I tried running more experiments and analyzed the list of committed rounds of the transactions. In my experiments, in parallel with sending the transactions to the network, I was checking whether the sent transactions had been confirmed on the blockchain. I found out that this checking for confirmation was the reason for the increasing experiment duration. If I check for confirmations at the end, instead of while sending the transactions, the added delay goes away. Still, I do not understand why checking for confirmation would cause a delay. My method is different from waiting for confirmation, since it does not block until it gets the confirmation round; it just checks for confirmation once and then moves on. I cannot find an explanation for this delay. I would appreciate your help.
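One way to isolate whether the check itself is the culprit is to time each confirmation-check call on its own and see if those call durations grow across runs (a generic timing helper; the algod call in the comments is an assumption about the setup, not confirmed from the original code):

```python
import time

def time_calls(fn, args_list):
    """Return the wall-clock duration of each call to fn, one per args tuple."""
    durations = []
    for args in args_list:
        t0 = time.perf_counter()
        fn(*args)
        durations.append(time.perf_counter() - t0)
    return durations

# Hypothetical usage against a sandbox node (not run here):
# from algosdk.v2client import algod
# client = algod.AlgodClient("a" * 64, "http://localhost:4001")
# check_durations = time_calls(
#     lambda txid: client.pending_transaction_info(txid),
#     [(txid,) for txid in txids],
# )
# print(sum(check_durations) / len(check_durations))
```

If the per-check durations stay flat while the overall experiment grows, the check is merely delaying the subsequent sends (each in-loop check pushes the next submission later); if they grow run over run, the node-side lookup itself is slowing down.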
I unfortunately have no idea what may be happening there, and I’m afraid that finding the discrepancy may be very time consuming.