Algorand is resource hungry

To catch up with the Algorand network, you need a 100 Mbit/s Internet connection and a computer with 4 cores, and catching up takes considerable time. Yet Algorand is still practically UNUSED, with a daily transaction count of about 3,000. Now what would happen if this network were used by millions, and the daily transaction count reached several million?

I am not sure whether my understanding is correct, because I have not grasped the basic mechanism of Algorand; the mathematics is not easy to understand.
I read this on a web page, but now it is difficult to find that page again.

It is said that Algorand needs only one ‘good’ relay node to support the full network, because relay nodes do not vote in consensus: a relay node can be ‘lazy’ but cannot be ‘bad’.
So the current eighty-one relay nodes are enough to support the full network (in the future, some big corporations may run new relay nodes). Even if they built eighty-one big data centers to support Algorand, the total cost would be much less than that of PoW projects.

I checked the storage usage. The mainnet data occupies only a little storage, while testnetdata and betanetdata consume much more space; maybe the developers are testing something.

Maybe when the developers migrate the tested dApp data to mainnet, the mainnet will need much more storage.

If, in the future, the team separates the mainnet data so that it is independent, maybe the data volume can be kept small. For example, with a new option, users could choose to download mainnet data only.

/var/lib/algorand# du -h --max-depth=1
3.5G ./testnetdata
8.0K ./.algorand
108K ./genesis
3.4G ./betanetdata
29M ./mainnet-v1.0
11G .
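A rough calculation on those numbers supports the point: the two test networks account for most of the disk usage, while mainnet itself is tiny. (Integer shell arithmetic, using values rounded from the listing above to megabytes.)

```shell
# testnetdata (3.5G) + betanetdata (3.4G), expressed as a rough
# percentage of the 11G total; values in MB, rounded from the du output.
echo $(( (3500 + 3400) * 100 / 11000 ))   # prints 62 (percent)
```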

Hi, I’m new to the forum. I have ALGO and I want to stake it. Can I do that in the MyAlgo wallet, or only in the mobile wallet?

Georgivb, you can do it in the MyAlgo wallet.

Staking is wallet agnostic: as long as you hold Algos, they will passively generate staking rewards.


Currently, you can already connect to mainnet only; this is actually the default if you install a fresh node.

The size of the data folder depends significantly on whether your node is archival (i.e., it keeps all blocks from genesis) or not (it keeps only the last 1,000 blocks or so). To participate in consensus, you do not need an archival node.
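For reference, the archival behavior is controlled by the "Archival" field in the node's config.json (false by default). Below is a minimal sketch of what such a config looks like and how to inspect it; the /tmp path is purely illustrative, and note that a fresh install may have no config.json at all, which means all defaults apply (non-archival):

```shell
# Illustrative config fragment; "Archival" and "LogSizeLimit" are real
# config.json fields, but this file and path are just an example.
cat > /tmp/algorand-config-example.json <<'EOF'
{
  "Archival": false,
  "LogSizeLimit": 1073741824
}
EOF
# Check the setting (on a real node, grep config.json in your data
# directory, e.g. /var/lib/algorand, instead of this example file):
grep -o '"Archival": [a-z]*' /tmp/algorand-config-example.json
```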

Regarding scalability, I do not have exact figures, but my understanding is that a significantly higher number of transactions will not really degrade performance, because currently most of the data downloaded to sync a node consists of signatures (of the blocks). Similarly, most of the CPU time used during sync is likely spent verifying these signatures.

See also for future improvements.

I have not changed the default setting, so my node should be non-archival, according to

The two files are very large:

Thanks for raising this point.

My understanding is that these two files are for debugging purposes only: you should be able to remove them without an issue.

node.archive.log is where node.log is rotated to after it reaches the maximum size set in config.json; if you purge it, it will fill up again.

The size of the node.log file, and by extension the archive, is controlled by the “LogSizeLimit” setting in config.json.

Thank you all; I am glad to know this. It seems the node can be installed on a Pod, and it is not urgent to develop a light-client wallet.

“LogSizeLimit”: 1073741824,
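That default value is exactly 1 GiB, which you can verify with shell arithmetic; and since node.log is swept into node.archive.log when it hits the limit, the two files together can occupy roughly twice that amount:

```shell
# 1073741824 bytes = 1 GiB (1024^3); the rotated archive can grow to
# the same limit, so budget ~2 GiB of disk for logs at this setting.
echo $((1073741824 / 1024 / 1024 / 1024))   # prints 1
```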