Fast mode for development testing

Ethereum’s geth has options for running a pure development, contract-testing node (the --dev and --dev.period 0 options). In this mode the node does nothing until you send it a transaction, immediately handles that transaction, then sits waiting again.
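For example, against a node started with those flags, a script-submitted transaction is mined essentially on arrival. Here is a minimal sketch of what that looks like, assuming a recent geth exposing JSON-RPC via --http on the default port and ethers v5; the recipient address is just a placeholder:

```ts
import { ethers } from "ethers";

// Assumes: geth --dev --dev.period 0 --http running locally on port 8545.
const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");

async function main() {
  // geth --dev pre-funds and unlocks a developer account inside the node,
  // so the JSON-RPC signer can send without managing keys locally.
  const signer = provider.getSigner();

  const tx = await signer.sendTransaction({
    to: "0x0000000000000000000000000000000000000001", // placeholder recipient
    value: ethers.utils.parseEther("0.01"),
  });

  // With --dev.period 0 a block is produced as soon as the transaction
  // arrives, so this resolves almost immediately.
  await tx.wait();
}

main().catch(console.error);
```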

I have a very simple example program that performs 7 Ethereum transactions and 6 event queries after starting the node in a Docker container. This takes 15 seconds on my five-year-old laptop.

When I run the same program on Algorand, it makes 7 Algorand transaction groups and 6 indexer queries after starting algod and the indexer in a shared Docker container (and Postgres in an adjacent one). It takes 3 minutes and 26 seconds: 206 seconds, or about 13 times longer!

It is really excruciatingly slow. I don’t know enough about algod to know where the time goes, but here are some thoughts:

  • The indexer fetcher has a hard-coded 5-second delay after a disconnect from algod; in my experience this happens a lot, even on this simple workload.
  • There are a lot of consensus parameters that are timeouts; I assume SmallLambda is especially important.

In my private network for testing (https://github.com/reach-sh/reach-lang/tree/master/scripts/algorand-devnet) I only have one node, so the committee is just that one node. I think all that is needed is the ability to drop SmallLambda and all of the other timeout parameters to zero in this testing mode, but I expect y’all at Algorand will know right off the bat whether this will work and/or whether there’s a simpler way.

Has anyone else thought about this problem? Is it safe to just drop SmallLambda to 0 and run a network with one node? I’d like to gather some ideas, try a patch, and then add the configuration option to algod.

For reference, I have a GitHub issue about this too: https://github.com/algorand/go-algorand/issues/1598


I’m not sure why starting a new network in a container (plus its indexer) would take that long.
I would expect it to take less than 10 seconds to start running, and at that point, making transactions and queries should be almost instantaneous (i.e. less than a second).
If this is not the case, could you please break down the timeline to give us a better picture of which operation takes so long?

Starting the container and sending the first transaction takes about 8 seconds for me.

Here’s a transcript of a run that took 72 seconds for me, annotated with timestamps: https://gist.github.com/jeapostrophe/40a5d25a26abfe554519a2003b131800

Lines 1-7 show that it takes 8 seconds to make an account and transfer funds to it.

Lines 8-14 take another 8 seconds for the same thing.

Lines 15-26 create the application after compiling it, another 8 seconds.

Lines 27-59 update the application code after a bunch of compiling, another 8 seconds.

Lines 60-99 are me getting ready for the first transaction in JS, 0.1 seconds.

Lines 100-149 are actually sending it (plus some junk from another thread), 8 seconds.

Lines 154-169 are the indexer query, 4 seconds.

In contrast, the other thread starts its indexer query at line 148 and gets it completed at line 189, 12 seconds.

Lines 195-249 are another send, 8 seconds.

The last send is lines 290-338, 16 seconds.

So, as you can see, operations that contact algod tend to take 8 seconds (although that last one takes twice as long, perhaps because it missed a “deadline”), while those that contact the indexer take 4 seconds. These are so consistent that I feel like there has to be some regular period inside the daemons that is throttling work.

Jay

Hi @jeapostrophe,

I tried to look at the log file, but found it hard to read ;-(

If I understand correctly, you’re sending a transaction and waiting for its confirmation. That can definitely slow things down. The confirmation that you’re after is available only after two rounds. Lowering the round time would only help so much here. Instead, would you consider batching the operations so that they complete in a more reasonable time?

I.e., you can send all 7 transactions sequentially and save the transaction IDs. Then you can wait until all of them are confirmed and make the follow-up queries.
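For illustration, here is a minimal sketch of that pattern with the JavaScript algosdk; the endpoint, token, port, account handling, and payment details are assumptions for the sake of the example, not taken from the original program:

```ts
import algosdk from "algosdk";

// Assumed local devnet endpoint and token; adjust to your own setup.
const algod = new algosdk.Algodv2("a".repeat(64), "http://localhost", 4001);

// Poll until a transaction is confirmed, using only status/pending-info calls.
async function waitForConfirmation(txId: string): Promise<void> {
  let round = (await algod.status().do())["last-round"];
  for (;;) {
    const info = await algod.pendingTransactionInformation(txId).do();
    if (info["confirmed-round"] && info["confirmed-round"] > 0) return;
    round += 1;
    await algod.statusAfterBlock(round).do();
  }
}

// Send every payment first, collect the ids, then wait for all of them at once.
async function sendBatch(
  from: { addr: string; sk: Uint8Array },
  to: string,
  amounts: number[]
) {
  const params = await algod.getTransactionParams().do();
  const txIds: string[] = [];
  for (let i = 0; i < amounts.length; i++) {
    const txn = algosdk.makePaymentTxnWithSuggestedParamsFromObject({
      from: from.addr,
      to,
      amount: amounts[i],
      note: new Uint8Array([i]), // keeps otherwise-identical payments distinct
      suggestedParams: params,
    });
    const { txId } = await algod.sendRawTransaction(txn.signTxn(from.sk)).do();
    txIds.push(txId);
  }
  await Promise.all(txIds.map(waitForConfirmation));
}
```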

Alice and Bob are using Algorand to communicate with each other, and a TEAL contract is enforcing that they communicate according to the rules.

So, after Alice sends her transaction, Bob needs to see it on the chain before he can make his transaction, and so on for the rest of the protocol.

So, each transaction depends on information found only in the previous transaction. That is, the seven transactions must be serialized and cannot be sent in parallel, which is what I believe you meant.

This is why I want to remove all delays and empty rounds for algod in a testing scenario like this, because there’s always going to just be a single node and a single transaction group per round.
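Concretely, the shape of the workload is closer to the following sketch, where every confirmation wait sits on the critical path. This assumes a recent JavaScript algosdk that provides algosdk.waitForConfirmation; readCurrentState and makeNextSignedTxn are hypothetical stand-ins for the Reach-generated protocol logic:

```ts
import algosdk from "algosdk";

// Assumed local devnet endpoint and token; adjust to your own setup.
const algod = new algosdk.Algodv2("a".repeat(64), "http://localhost", 4001);

// Hypothetical stand-ins for the Reach-generated protocol logic.
declare function readCurrentState(): Promise<unknown>;
declare function makeNextSignedTxn(state: unknown): Promise<Uint8Array>;

// Each participant must see the previous transaction confirmed and read its
// effects before the next transaction can even be constructed, so the
// confirmation delay is paid once per step and cannot be overlapped.
async function runProtocol(steps: number) {
  for (let i = 0; i < steps; i++) {
    const stxn = await makeNextSignedTxn(await readCurrentState());
    const { txId } = await algod.sendRawTransaction(stxn).do();
    await algosdk.waitForConfirmation(algod, txId, 4);
  }
}
```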

Upvoting. This feature would be useful. For developing dapps, we don’t need to use consensus; we just need a node that will process transactions and include them in a block. This would speed up testing.

@jeapostrophe - I’m developing a framework for Algorand dapps: algob. In the future we are planning to create a simulator (à la ganache) - having a dev node would probably be even better.

Tip.
@jeapostrophe - you don’t need to create a multi-node network; one node is enough. You can use this guide (just modify the template to use one node): https://developer.algorand.org/tutorials/create-private-network/

In algob we have a Makefile for that.

Thanks Robert. We create a network using a template just like that. The main reason we make one is so that we can have a custom faucet for automatically disbursing funds for testing. I’ll check out algob.
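In case it’s useful to others, the faucet pattern is roughly the following. This is a sketch, again assuming a recent JavaScript algosdk (with algosdk.waitForConfirmation) and a local devnet endpoint; the mnemonic, token, and amounts are placeholders, not our actual setup:

```ts
import algosdk from "algosdk";

// Placeholder: 25-word mnemonic of the pre-funded wallet from the network template.
const FAUCET_MNEMONIC = "<faucet mnemonic here>";
const faucet = algosdk.mnemonicToSecretKey(FAUCET_MNEMONIC);

// Assumed local devnet endpoint and token; adjust to your own setup.
const algod = new algosdk.Algodv2("a".repeat(64), "http://localhost", 4001);

// Create a fresh test account and fund it from the faucet.
async function fundFreshAccount(microAlgos: number) {
  const fresh = algosdk.generateAccount();
  const params = await algod.getTransactionParams().do();
  const pay = algosdk.makePaymentTxnWithSuggestedParamsFromObject({
    from: faucet.addr,
    to: fresh.addr,
    amount: microAlgos,
    suggestedParams: params,
  });
  const { txId } = await algod.sendRawTransaction(pay.signTxn(faucet.sk)).do();
  await algosdk.waitForConfirmation(algod, txId, 4);
  return fresh;
}
```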

That’s an interesting idea: create a node that generates a block on any incoming transaction, without any of the gossip network features. That node would be able to work without any agreement, and would allow dapp developers to pseudo-test their application against the Algorand framework.

Note that developers would be disappointed to see that the real network is much slower… but that’s a different topic.
