Is there a way to view the transactions that are waiting to be executed in the next block? Mempool?

I would like to see the queue of pending transactions that are going to be validated in the next block, like the Ethereum mempool. Is it possible?
Thanks!

There is an endpoint for that: GET /v2/transactions/pending (see the sketch below).

But note that, unlike Ethereum, transactions are not prioritized by fee, and there is currently no congestion.

Note also that not all nodes necessarily have the same view of the pending transactions, nor of the order of those transactions.
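
For illustration, here is a minimal sketch of querying that endpoint with the Python SDK (py-algorand-sdk); the node address and token below are sandbox defaults and stand in for your own node's values:

```python
# Minimal sketch: list what a node currently holds in its transaction pool.
# The address/token are sandbox defaults; replace them with your node's values.
from algosdk.v2client import algod

algod_address = "http://localhost:4001"
algod_token = "a" * 64

client = algod.AlgodClient(algod_token, algod_address)

# Calls GET /v2/transactions/pending; max_txns caps how many entries come back.
pool = client.pending_transactions(max_txns=10)
print("total pending transactions:", pool.get("total-transactions"))
for stxn in pool.get("top-transactions", []):
    txn = stxn.get("txn", {})
    print(txn.get("type"), txn.get("snd"))
```

Keep in mind that this only shows the pool as seen by the node you query, which, as noted above, may differ from what the next proposer sees.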


Just to emphasize what @fabrice mentioned above: there is no guarantee that your node will see all the transactions that will end up in the next (effective) block.

If the network is not congested, your node has enough download bandwidth, and you're connected to relays that have seen the same set of transactions as the next round's proposer, then your node would have the same set of transactions.

In general, I would suggest avoiding predictions about the content of the next block, since the protocol doesn't impose any such constraints. For example, the proposer could propose an empty block even when its transaction pool is full. That wouldn't be efficient or helpful for the network, but it wouldn't be a violation of the network protocol.


Thanks, you two!

Do you know how this guy is able to show transactions being added to the pending transaction queue at a timed interval?

https://www.iamnotabot.com/pool

With the API route Fabrice provided, I can only see the upcoming transactions as a batch at the beginning of each new block (Hi from France if you're Francophone!)

It could be implemented using the GET /v2/transactions/pending endpoint, but I can't tell whether that's how it was actually implemented or whether it uses some other method to get the data.

What happens if the network is congested? Has Algorand been tested with the load of Ethereum or Solana, and how did it perform? Put another way, what's the highest load Algorand has gone through? The only test I've heard of is the one Silvio did with 500 nodes back in 2019.

@Titi
I don't know what you've heard about Silvio in 2019; most of the tests we conducted back then focused more on the agreement protocol and less on transactional volume.

We are continuously working on improving network throughput and congestion handling. The current defaults allow each node to cache up to 15,000 transactions in its transaction pool. Given that a block is currently limited to 1 MB (about 5,000 transactions), this gives a buffer of roughly 3 rounds.

Some of the above numbers (such as the size of the transaction pool) can be adjusted, so a machine with more memory could allocate a larger pool. We have found, however, that these defaults work well for most cases.
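
For illustration only, here is a hedged sketch of adjusting that knob on a node you run; it assumes the node reads a TxPoolSize field from config.json in its data directory (check the configuration reference for your go-algorand version) and that ~/node/data is the data directory:

```python
# Hypothetical sketch: enlarge the transaction pool on a node with spare memory.
# Assumes TxPoolSize in config.json is the relevant knob and ~/node/data is the
# node's data directory; verify both against your go-algorand version's docs.
import json
import os

data_dir = os.path.expanduser("~/node/data")
cfg_path = os.path.join(data_dir, "config.json")

cfg = {}
if os.path.exists(cfg_path):
    with open(cfg_path) as f:
        cfg = json.load(f)

cfg["TxPoolSize"] = 30000  # the thread cites 15,000 as the current default

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

print("Wrote", cfg_path, "- restart the node for the change to take effect.")
```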

If you'd like to watch the latest metrics, feel free to look at https://metrics.algorand.org/: all the numbers you see there come from mainnet and can be verified by reviewing the blockchain history.

Lastly, regarding "what happens if the network is congested?": the answer is that relays and nodes will refuse to accept additional transactions past the transaction pool size limit.


What are some of the insights or improvements to congestion handling so far, and will there be load tests, or have there already been? Also, when you look at Algorand now, what are the most pressing issues to solve from a technological standpoint? Thank you.

There have already been quite a few improvements that could "unlock" higher loads. However, many of these efforts have not reached maturity yet.

From a high-level perspective, nothing in the existing implementation is "wrong". But, as engineers, we are continuously looking for better solutions. I clearly cannot speak about future development; but about 4 months ago, we released improved proposal validation logic that allows the node to preload all the needed account data from disk before running the evaluator and validating the proposal. Our tests showed that, against full blocks, it can reduce the overall evaluation time by 25%.

The above improvement was never announced as a major breakthrough because, on its own, it's not enough to achieve higher throughput.

As for your question about load testing: yes, we definitely conduct load testing with different network topologies, machine types, and transactional loads.

Please keep in mind that the goal is not to achieve "super high throughput on a supercomputer", but rather to "improve the throughput on mainnet". Different nodes on the network run on different types of hardware, and we want it to run well on all of these platforms.


How was it done before? It seems like preloading all accounts from disk would get slower as more accounts are added, and it might take processing time away from other processes on the machine. Also, was this Aardvark or something else?

It's preloading only the accounts that are needed to evaluate the block, not all the accounts on the blockchain.


@tsachi, I tried the GET /v2/transactions/pending endpoint, but it returns the pending transactions as a batch. For example, if I refresh this endpoint multiple times DURING the same block, the results are the same. So my question remains: how can this site display the transactions being added in real time?

https://www.iamnotabot.com/pool
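
I can't tell how that site actually does it, but one plausible approach (just a sketch, not a confirmed implementation) is to poll GET /v2/transactions/pending on a short interval and diff each response against the previously seen entries, printing only newly arrived transactions; the node address and token below are sandbox-default placeholders:

```python
# Sketch: poll the pending pool every couple of seconds and show only the
# entries that were not present on the previous poll. The node URL/token are
# sandbox defaults standing in for whatever node the site actually queries.
import json
import time
from algosdk.v2client import algod

client = algod.AlgodClient("a" * 64, "http://localhost:4001")

seen = set()  # grows unbounded; a real implementation would prune old entries
while True:
    pool = client.pending_transactions()  # GET /v2/transactions/pending
    for entry in pool.get("top-transactions", []):
        # Pending entries carry no explicit id field, so key each one on a
        # canonical serialization of the whole signed transaction.
        key = json.dumps(entry, sort_keys=True, default=str)
        if key not in seen:
            seen.add(key)
            txn = entry.get("txn", {})
            print(time.strftime("%H:%M:%S"), txn.get("type"), txn.get("snd"))
    time.sleep(2)
```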