You are welcome. You missed this part:
- transfer the transaction files to your nodes
- submit them with the goal CLI (goal clerk rawsend)
Your code still submits each transaction individually over the HTTP API, and I think it is doing so sequentially. While you have some multiprocessing “plumbing” in place, I believe you are still only spawning a single process to send them.
To get them broadcast as tightly packed as possible, dump the transactions in binary format to a file, transfer the file to a node (or split them across multiple files and use multiple nodes), and send them all in a batch directly from the node with this command:
goal clerk rawsend -Nf mytxns.dat
(the -N flag tells goal not to wait for confirmations)
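As far as I know, the .dat file that goal clerk rawsend expects is just the msgpack-encoded signed transactions concatenated back to back. A minimal sketch of producing one, assuming you already have your signed transactions as base64 strings (e.g. from an SDK signing step) — the signed_b64 list here is placeholder data, not real transactions:

```python
import base64

# Placeholder data: in practice these would be the base64-encoded signed
# transactions produced by your SDK's signing step.
signed_b64 = [
    base64.b64encode(b"\x82\xa3sig-blob-1").decode(),
    base64.b64encode(b"\x82\xa3sig-blob-2").decode(),
]

# goal clerk rawsend reads concatenated msgpack-encoded SignedTxn blobs,
# so we base64-decode each one and append the raw bytes in order.
with open("mytxns.dat", "wb") as f:
    for b64 in signed_b64:
        f.write(base64.b64decode(b64))
```

Then copy mytxns.dat to the node and run the rawsend command above against it.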
I’d try this approach with a single txn in a file (just to see if your process is working overall) and then split them up in 8 batches and try again from 8 nodes.
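Splitting into the 8 batches is just chunking the blob list into 8 files before transfer. A sketch with hypothetical names — blobs is assumed to already hold the raw decoded bytes of each signed transaction (placeholder data below):

```python
# Assumption: `blobs` holds the raw msgpack bytes of each signed txn.
blobs = [bytes([i]) * 10 for i in range(100)]  # placeholder data

NUM_BATCHES = 8

def chunk(items, n):
    """Split items into n roughly equal contiguous chunks, preserving order."""
    k, r = divmod(len(items), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        out.append(items[start:end])
        start = end
    return out

# Write one .dat file per batch; each is itself a valid rawsend input.
for i, batch in enumerate(chunk(blobs, NUM_BATCHES)):
    with open(f"batch{i}.dat", "wb") as f:
        f.write(b"".join(batch))
```

Then transfer each batchN.dat to its own node and fire goal clerk rawsend -Nf batchN.dat on all 8 as close to simultaneously as you can manage.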
Another thing to keep in mind is that the interval that you broadcast (start → stop of txn broadcast) will not line up perfectly with the block production interval, so if it takes you 1.5 seconds to broadcast all of them (start to finish), you would likely hit a “block boundary” and have some included in the next block, and the rest in the +1 block. If you broadcast a) fast enough and b) enough txns to fill 2-3 blocks, then this effect should not be present.
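A back-of-the-envelope sketch of that boundary effect — the 4.5 s round time and the 3.5 s broadcast start offset are assumed numbers purely for illustration, not measured values:

```python
from collections import Counter

# Assumptions for illustration only: a 4.5 s block interval and a
# broadcast that starts 3.5 s into a round and takes 1.5 s total.
BLOCK_INTERVAL = 4.5
broadcast_start = 3.5
broadcast_duration = 1.5
num_txns = 1000

# Model txns as arriving uniformly over the broadcast window and landing
# in whichever block's interval their arrival time falls into.
arrivals = [broadcast_start + broadcast_duration * i / num_txns
            for i in range(num_txns)]
per_block = Counter(int(t // BLOCK_INTERVAL) for t in arrivals)
# With these numbers, about a third of the txns spill into the next block.
```

The point being: unless you broadcast enough to fill 2-3 blocks, the split across the boundary will make any single block look less full than your actual send rate.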
Filling blocks is quite hard (mostly limited by client/setup) which is why for my AMM test, I had to cut out the HTTP calls entirely and broadcast directly from multiple nodes on the network.
FYI, for the txn types you are using, it looks like the theoretical limit at the moment is around 7,400 TPS. See this Twitter post from today (see next post). The 10K TPS figure is possible when inner transactions are utilized.
Finally, if you exhaust every other explanation, note that the testnet infrastructure (relays, consensus nodes, etc.) may not be as performant as mainnet.