How many ALGO to run a Participation Node

Hi, I am new to this forum and hoping to get some help.

How many ALGO are necessary to run a participation node?
I added a participation key, but it is not showing up in the listpartkeys output.
Hence I am not able to register it.
Please let me know how to troubleshoot this.

host:~$ goal account addpartkey -a 679ZIQ****CC --roundFirstValid 67662 --roundLastValid 167200 -w doof
Participation key generation successful

host:~$ goal account listpartkeys -w doof
Registered Account ParticipationID Last Used First round Last round

To participate in consensus by running a participation node, you only need your account to hold at least 0.1 Algos (the minimum account balance requirement, which applies regardless of participation).

In the above example, you created participation keys for rounds 67662-167200. Mainnet is currently at round 18,120,028. You might want to create a participation key for rounds 18,120,000 → 18,300,000 and then send your participation key registration.
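As a sketch (the round numbers are the illustrative ones from above and will be stale by the time you read this; always check your node's current round first):

```shell
# Check the current round (requires a node that has caught up):
goal node status -d /var/lib/algorand | grep 'Last committed block'

# Then generate a key whose validity window starts near the current round,
# e.g. the illustrative range from above (180,000 rounds):
goal account addpartkey -a 679ZIQ****CC \
  --roundFirstValid 18120000 --roundLastValid 18300000 -w doof
```

At roughly 4.5 seconds per round at the time of writing, a 180,000-round window covers a bit over a week, after which you would need to rotate keys.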


Thank you so much.
Another question: why is my account in the local node wallet not showing the right number of microAlgos?

I have 1 ALGO in the wallet address 679ZIQ****CC at the current market price.

host:~$ goal account list
[offline] kalgo-node 679ZIQ****CC 0 microAlgos *Default

Two common reasons why your node wouldn’t show you the correct balance:

  1. Your account doesn’t actually have that balance. You can check it with any Algorand block explorer, such as algoexplorer.io.
  2. Your local node isn’t synchronized with the network. You can check the status of your node by typing “goal node status -d [data dir]”.
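A sketch of checking both, using the address from the example above (note that goal reports balances in microAlgos; 1 Algo = 1,000,000 microAlgos, so 1 Algo should display as 1000000 microAlgos):

```shell
# 1. Ask the node for the balance it sees for the account:
goal account balance -a 679ZIQ****CC -d /var/lib/algorand

# 2. Check whether the node is synchronized; a small "Sync Time" and a
#    recent "Last committed block" mean it has caught up:
goal node status -d /var/lib/algorand
```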

Hi Tsachi,
Three days after node creation, my node is still not catching up to the current block. I tried “goal node catchup” but it is still not catching up.
Can you please let me know what I might be missing?

host:~$ goal node status
Last committed block: 627541
Time since last block: 205.9s
Sync Time: 1782.7s
Last consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Next consensus protocol: https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0
Round for next consensus protocol: 627542
Next consensus protocol supported: true
Last Catchpoint:
Genesis ID: mainnet-v1.0
Genesis hash ****************************
host:~$

Can you check:

  • version goal version -v
  • logs $ALGORAND_DATA/node.log: is there any warning there?
  • RAM use: you need at least 4GB of RAM
  • CPU use: how much CPU is used?
  • Bandwidth: you need at least 100 Mbps. Below that, it will take a lot of time.
  • Disk: you need an SSD (preferably NVMe). An HDD will be too slow
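These checks can be gathered in one pass; a sketch assuming the Debian-package default data directory (/var/lib/algorand):

```shell
goal version -v                                                      # binary and build
grep -i '"level":"warning"' /var/lib/algorand/node.log | tail -n 20  # recent warnings
free -h                                                              # RAM
top -bn1 | head -n 5                                                 # CPU snapshot
df -h /var/lib/algorand                                              # free space on the data dir
```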

Fabrice, please see the outputs.

  • version goal version -v

:~$ goal version -v
Version: [v1 v2]
GenesisID: mainnet-v1.0
Build: 3.2.2.stable [rel/stable] (commit #97e80680)

  • logs $ALGORAND_DATA/node.log : is there any warning there?
    Looks like I have an error reading the participation key. But I haven’t created one yet, as I thought my node needed to catch up with mainnet before I could do that.

{"file":"node.go","function":"github.com/algorand/go-algorand/node.(*AlgorandFullNode).checkForParticipationKeys","level":"error","line":774,"msg":"[Stack] goroutine 6920145 [running]:\nruntime/debug.Stack(0xc0001eafc0, 0xc000010540, 0xc0065d4070)\n\truntime/debug/stack.go:24 +0x9f\ngithub.com/algorand/go-algorand/logging.logger.Errorf(0xc0001eafc0, 0xc000010540, 0x12b6be0, 0x28, 0xc0069260c0, 0x1, 0x1)\n\tgithub.com/algorand/go-algorand/logging/log.go:229 +0x4a\ngithub.com/algorand/go-algorand/node.(*AlgorandFullNode).checkForParticipationKeys(0xc0001c8900)\n\tgithub.com/algorand/go-algorand/node/node.go:774 +0x1df\ncreated by github.com/algorand/go-algorand/node.(*AlgorandFullNode).startMonitoringRoutines\n\tgithub.com/algorand/go-algorand/node/node.go:408 +0x65\n","name":"","time":"2021-12-22T14:53:09.611333Z"}
{"file":"node.go","function":"github.com/algorand/go-algorand/node.(*AlgorandFullNode).checkForParticipationKeys","level":"error","line":774,"msg":"Could not refresh participation keys: AlgorandFullNode.loadPartitipationKeys: could not read directory /var/lib/algorand/mainnet-v1.0: open /var/lib/algorand/mainnet-v1.0: permission denied","name":"","time":"2021-12-22T14:53:09.611488Z"}

  • RAM use: you need at least 4GB of RAM

Have plenty
o:/var/lib/algorand/mainnet-v1.0$ free
total used free shared buff/cache available
Mem: 16397112 573508 11797480 1308 4026124 15596804
Swap: 4194300 0 4194300

  • CPU use: how much CPU is used?
    Plenty available!

:/var/lib/algorand/mainnet-v1.0$ top
top - 15:00:23 up 2 days, 22:55, 1 user, load average: 0.09, 0.09, 0.08
Tasks: 253 total, 1 running, 252 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.5 us, 0.3 sy, 0.0 ni, 99.1 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
MiB Mem : 16012.8 total, 11520.4 free, 560.4 used, 3932.0 buff/cache
MiB Swap: 4096.0 total, 4096.0 free, 0.0 used. 15230.9 avail Mem

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND

18344 algorand 20 0 2570420 279368 24604 S 6.7 1.7 604:21.17 algod
129424 kalgo 20 0 9384 4048 3248 R 0.3 0.0 0:00.09 top
1 root 20 0 168752 13028 8524 S 0.0 0.1 0:10.81 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.04 kthreadd

  • Bandwidth: you need at least 100 Mbps. Below that, it will take a lot of time.

200Mbps from Verizon FIOS. Not an issue.

  • Disk: you need an SSD (preferably NVMe). An HDD will be too slow
    I have an SSD; the VM is running on an SSD drive.

kalgo@kalgo:/var/lib/algorand/mainnet-v1.0$ df / -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 98G 8.3G 85G 9% /
:/var/lib/algorand/mainnet-v1.0$ sudo lshw -short -C disk
[sudo] password for kalgo:
H/W path Device Class Description
===================================================
/0/f/0.0.0 /dev/cdrom disk VMware SATA CD00
/0/10/0.0.0 /dev/sda disk 268GB Virtual disk

What I have seen is that, when I perform a node catchup, the accounts processed/verified are all zero.
Do I have to open up any ports on my firewall?
This is a VM on ESX and all outbound traffic is allowed. The VM is on an SSD disk.

:/var/lib/algorand/mainnet-v1.0$ goal node catchup 18170000#FCFMY2H6GJXC2MSTRI6OIAHCVYGAKSFN2Z3UB6RZOSO3TOOGJ4RQ
kalgo@kalgo:/var/lib/algorand/mainnet-v1.0$ goal node status
Last committed block: 633037
Sync Time: 5.2s
Catchpoint: 18170000#FCFMY2H6GJXC2MSTRI6OIAHCVYGAKSFN2Z3UB6RZOSO3TOOGJ4RQ
Catchpoint total accounts: 13883125
Catchpoint accounts processed: 0
Catchpoint accounts verified: 0
Genesis ID: mainnet-v1.0
Genesis hash: **
kalgo@kalgo:/var/lib/algorand/mainnet-v1.0$

There could be more than a single issue, naturally, but the one that looks the most promising is:

Could not refresh participation keys: AlgorandFullNode.loadPartitipationKeys: could not read directory /var/lib/algorand/mainnet-v1.0: open /var/lib/algorand/mainnet-v1.0: permission denied

The above directory needs to have read/write permissions for the user that runs algod.
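One way to fix the ownership without deleting anything (a sketch; it assumes algod was installed by the Debian package and therefore runs as the algorand user under systemd):

```shell
# Stop the node, hand the directory back to the service user, restart:
sudo systemctl stop algorand
sudo chown -R algorand:algorand /var/lib/algorand/mainnet-v1.0
sudo systemctl start algorand

# Verify the ownership afterwards:
ls -ld /var/lib/algorand/mainnet-v1.0
```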


This is what I have.
I followed Gary’s guide from here: Newbie Getting Started - #4 by Gary
Is it because the participation key is missing? (But I haven’t generated it yet.)

/var/lib/algorand/mainnet-v1.0:
total 23904
drwx------ 2 kalgo nogroup 4096 Dec 19 22:34 .
drwxrwxr-x 5 kalgo nogroup 4096 Dec 19 16:10 ..
-rw-r--r-- 1 kalgo nogroup 4096 Dec 19 16:10 compactcert.sqlite
-rw-r--r-- 1 kalgo nogroup 32768 Dec 19 16:10 compactcert.sqlite-shm
-rw-r--r-- 1 kalgo nogroup 16512 Dec 19 16:10 compactcert.sqlite-wal
-rw-r--r-- 1 kalgo nogroup 4096 Dec 19 16:10 crash.sqlite
-rw-r--r-- 1 kalgo nogroup 32768 Dec 21 23:48 crash.sqlite-shm
-rw-r--r-- 1 kalgo nogroup 8272 Dec 19 16:10 crash.sqlite-wal
-rw-r--r-- 1 kalgo nogroup 14344192 Dec 22 15:41 ledger.block.sqlite
-rw-r--r-- 1 kalgo nogroup 32768 Dec 22 15:40 ledger.block.sqlite-shm
-rw-r--r-- 1 kalgo nogroup 4247752 Dec 22 15:41 ledger.block.sqlite-wal
-rw-r--r-- 1 kalgo nogroup 1335296 Dec 22 15:41 ledger.tracker.sqlite
-rw-r--r-- 1 kalgo nogroup 32768 Dec 22 15:41 ledger.tracker.sqlite-shm
-rw-r--r-- 1 kalgo nogroup 4288952 Dec 22 15:41 ledger.tracker.sqlite-wal
-rw-r--r-- 1 kalgo nogroup 4096 Dec 19 16:10 partregistry.sqlite
-rw-r--r-- 1 kalgo nogroup 32768 Dec 19 16:10 partregistry.sqlite-shm
-rw-r--r-- 1 kalgo nogroup 20632 Dec 19 16:10 partregistry.sqlite-wal
kalgo@kalgo:/var/lib/algorand$

I don’t believe the issue is that the participation key is missing. The issue is that algod is unable to check whether there are participation keys installed, due to the permission issue.

When you run algod, which user do you run it with ?

ps -jAf | grep algod

I am logged in as kalgo.

kalgo@kalgo:/var/lib/algorand$ whoami
kalgo

kalgo@kalgo:/var/lib/algorand$ ps -jAf | grep algod
algorand 18344 1 18344 18344 14 Dec19 ? 10:17:42 /usr/bin/algod -d /var/lib/algorand
kalgo 131850 126539 131849 126539 0 16:36 pts/0 00:00:00 grep --color=auto algod
kalgo@kalgo:/var/lib/algorand$

According to what you’ve provided above, it looks like algod is being executed as the algorand user, while the permissions for the mainnet-v1.0 directory are set for the kalgo user.

I would guess that the Debian installer configured this to run as a service, which implies there is a systemd unit managing it.

My suggestion is as follows: delete the mainnet-v1.0 directory. This will make algod re-create the mainnet-v1.0 directory with the proper permissions.

To do that, type:

goal node stop -d /var/lib/algorand
rm -rf /var/lib/algorand/mainnet-v1.0

and start the node as a service:

sudo systemctl start algorand

There is a more detailed guide here : Install a node - Algorand Developer Portal


Thanks. Looks like I need to re-install from scratch. Every time I try to restart the node, it fails. Here is what I did.

goal node stop doesn’t work:

kalgo@kalgo:/var/lib/algorand$ goal node stop -d /var/lib/algorand
This node is using systemd and should be managed with systemctl. For additional information refer to Install a node - Algorand Developer Portal

So I had to stop the service using the systemctl command:

kalgo@kalgo:/var/lib/algorand$ sudo systemctl stop algorand
[sudo] password for kalgo:

Delete the mainnet folder:

kalgo@kalgo:/var/lib/algorand$ rm -rf /var/lib/algorand/mainnet-v1.0

Restart the algorand service:

kalgo@kalgo:/var/lib/algorand$ sudo systemctl start algorand

But it fails to come up:

kalgo@kalgo:/var/lib/algorand$ goal node status
Cannot contact Algorand node: Get “http://127.0.0.1:8080/v2/status”: dial tcp 127.0.0.1:8080: connect: connection refused
kalgo@kalgo:/var/lib/algorand$ goal node status

Anyway, is the mainnet folder supposed to be given permissions for the algorand service user, and not the human administrator username?

Let me answer the questions that you’ve brought up:

node stop doesn’t work.

yes, you did the right thing by running sudo systemctl stop algorand

sudo systemctl start algorand

I’m not sure what you mean by “But fails to come up.” If the node could not start, it would emit some errors to either stdout or the node.log file. If you could provide these, that would be great.

is the mainnet folder to be given permision for the algorand service user and not the human administrator username ?

Yes. The human user doesn’t require permissions to the mainnet folder. When you interact with goal, all the interaction is done over the REST API with the algod process; hence, the algod process is the only one that reads/writes to this directory. The directory is created by algod and maintained solely by algod.
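You can observe this directly: algod writes its REST listen address to algod.net and its API token to algod.token inside the data directory, and goal simply reads those. A sketch (reading the token may need sudo, since the files belong to the service user):

```shell
# Query node status over the same REST API that goal uses:
curl -s "http://$(cat /var/lib/algorand/algod.net)/v2/status" \
     -H "X-Algo-API-Token: $(sudo cat /var/lib/algorand/algod.token)"
```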


Sorry about the lack of clarity. Yes, I meant the algorand process is not starting at all,
and it doesn’t seem to show the relevant errors in node.log.

kalgo@kalgo:/var/lib/algorand$ sudo systemctl start algorand

kalgo@kalgo:/var/lib/algorand$ goal node status
Cannot contact Algorand node: Get “http://127.0.0.1:8080/v2/status”: dial tcp 127.0.0.1:8080: connect: connection refused

kalgo@kalgo:/var/lib/algorand$ tail node.log
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).pipelineCallback.func1","level":"info","line":394,"msg":"pipelineCallback(651648): did not fetch or write the block","name":"","time":"2021-12-22T17:05:40.887396Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).fetchAndWrite","level":"warning","line":296,"msg":"fetchAndWrite(651639): lookback block doesn't exist, cannot authenticate new block","name":"","time":"2021-12-22T17:05:40.887280Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).pipelineCallback.func1","level":"info","line":394,"msg":"pipelineCallback(651639): did not fetch or write the block","name":"","time":"2021-12-22T17:05:40.887416Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).pipelineCallback.func1","level":"info","line":394,"msg":"pipelineCallback(651641): did not fetch or write the block","name":"","time":"2021-12-22T17:05:40.887275Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).pipelineCallback.func1","level":"info","line":394,"msg":"pipelineCallback(651634): did not fetch or write the block","name":"","time":"2021-12-22T17:05:40.887273Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).fetchAndWrite","level":"info","line":257,"msg":"fetchAndWrite(651649): Aborted while waiting for lookback block to ledger after failing once : wsFetcherClient(r-tb.algorand-mainnet.network:4160).requestBlock(651649): Request failed: peer closing 35.204.42.115:4160","name":"","time":"2021-12-22T17:05:40.887234Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).pipelineCallback.func1","level":"info","line":394,"msg":"pipelineCallback(651649): did not fetch or write the block","name":"","time":"2021-12-22T17:05:40.887453Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).pipelineCallback.func1","level":"info","line":394,"msg":"pipelineCallback(651642): did not fetch or write the block","name":"","time":"2021-12-22T17:05:40.887281Z"}
{"Context":"sync","details":{"StartRound":647068,"EndRound":651633,"Time":3088500995083,"InitSync":false},"file":"telemetry.go","function":"github.com/algorand/go-algorand/logging.(*telemetryState).logTelemetry","instanceName":"TsdrQj16jM61PX/A","level":"info","line":259,"msg":"/ApplicationState/CatchupStop","name":"","session":"","time":"2021-12-22T17:05:40.887478Z"}
{"Context":"sync","file":"service.go","function":"github.com/algorand/go-algorand/catchup.(*Service).sync","level":"info","line":624,"msg":"Catchup Service: finished catching up, now at round 651633 (previously 647068). Total time catching up 51m28.500995083s.","name":"","time":"2021-12-22T17:05:40.887496Z"}
kalgo@kalgo:/var/lib/algorand$

The content of the node.log file makes it look as if the node is working; otherwise, it wouldn’t be performing the catchup.

could you verify that ALGORAND_DATA points to the expected data directory ?

echo $ALGORAND_DATA

(I’m trying to make sure you don’t have two installations)

Also, note that the process itself is algod, while the service name is algorand; keep that in mind when looking for the process name.

If you have algod-err.log or algod-out.log in your data directory ( /var/lib/algorand ), it would be great if you could provide their content.

Did you make any changes to the config.json file? If you did, could you please provide its content?

Last, is there any chance that you have another server running on that port (i.e., 8080)?
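A quick way to check for a port conflict (a sketch; ss ships with iproute2 on most modern distros):

```shell
# Is anything already bound to 8080?
sudo ss -ltnp | grep ':8080' || echo "nothing listening on 8080"

# The address algod actually bound to is recorded here:
cat /var/lib/algorand/algod.net
```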
