I want to get a transaction by txid, but I get "failed retrieving information from the indexer".

I execute the following command:

curl -H "X-Algo-API-Token:$(cat ./data/algod.token)" 'http://localhost:8080/v1/transaction/AMI2SDCTAMFFC546X2DQAWM5J624E4CUKMTEHCVVKS54EQURN6PQ'

and get this: "failed retrieving information from the indexer"

This is my algod configuration (archival mode set, indexer enabled):

The node's last committed block is 95614.

The txid is AMI2SDCTAMFFC546X2DQAWM5J624E4CUKMTEHCVVKS54EQURN6PQ.

It can be found on AlgoExplorer, so what's wrong here?

curl -H "X-Algo-API-Token:$(cat ./data/algod.token)" "http://localhost:8080/v1/transaction/AMI2SDCTAMFFC546X2DQAWM5J624E4CUKMTEHCVVKS54EQURN6PQ"

Worked for me and I see the transaction. I also tested with the API and it works. How big is the indexer.sqlite file in your data directory? Did you restart after changing the config?

The problem is solved: the indexer is just slow to build its index.

I have the same problem on mainnet. How can I solve it? Thanks.

Do you have the indexer and archival flags set on your node?
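A quick way to answer that is to grep the node's config.json for the two flags. This is a sketch: the path and the stand-in file below are assumptions — point `CONFIG` at the config.json in your actual data directory.

```shell
#!/bin/sh
# Sketch: check the two flags that must both be true for transaction
# lookup by txid to work. CONFIG normally points at config.json in the
# node's data directory; a stand-in file is written here so the sketch
# runs anywhere.
CONFIG=${CONFIG:-/tmp/config-demo.json}
printf '{\n  "Archival": true,\n  "IsIndexerActive": true\n}\n' > "$CONFIG"

for flag in Archival IsIndexerActive; do
    if grep -q "\"$flag\": true" "$CONFIG"; then
        echo "$flag: true"
    else
        echo "$flag: NOT enabled"
    fi
done
```

If either flag prints as not enabled, set it and restart the node; note that archival mode only covers blocks from the point the flag was first set.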

I attempted to retrieve the transaction from the network with:
http://127.0.0.1:62321/v1/transaction/ALHUSLHCRTZ6G4OTVCBPES5YVIQY32VENBO72I2H246SZSNOE5KQ

"failed retrieving information from the indexer"

config:

{
    "Version": 4,
    "AnnounceParticipationKey": true,
    "Archival": true,
    "BaseLoggerDebugLevel": 4,
    "BroadcastConnectionsLimit": -1,
    "CadaverSizeTarget": 1073741824,
    "CatchupFailurePeerRefreshRate": 10,
    "CatchupParallelBlocks": 50,
    "DeadlockDetection": 0,
    "DNSBootstrapID": "<network>.algorand.network",
    "EnableIncomingMessageFilter": false,
    "EnableMetricReporting": false,
    "EnableOutgoingNetworkMessageFiltering": true,
    "EnableTopAccountsReporting": false,
    "EndpointAddress": "127.0.0.1:0",
    "GossipFanout": 4,
    "IncomingConnectionsLimit": 10000,
    "IncomingMessageFilterBucketCount": 5,
    "IncomingMessageFilterBucketSize": 512,
    "IsIndexerActive": true,
    "LogSizeLimit": 1073741824,
    "MaxConnectionsPerIP": 30,
    "NetAddress": "",
    "NodeExporterListenAddress": ":9100",
    "NodeExporterPath": "./node_exporter",
    "OutgoingMessageFilterBucketCount": 3,
    "OutgoingMessageFilterBucketSize": 128,
    "PriorityPeers": {},
    "ReconnectTime": 60000000000,
    "ReservedFDs": 256,
    "RunHosted": false,
    "SuggestedFeeBlockHistory": 3,
    "SuggestedFeeSlidingWindowSize": 50,
    "TxPoolExponentialIncreaseFactor": 2,
    "TxPoolSize": 50000,
    "TxSyncIntervalSeconds": 60,
    "TxSyncServeResponseSize": 1000000,
    "TxSyncTimeoutSeconds": 30
}

I have stopped and started the (testnet) node a few times. I also tried earlier with mainnet.

The indexer is still syncing.

{
  "lastRound": 752400,
  "lastConsensusVersion": "https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0",
  "nextConsensusVersion": "https://github.com/algorandfoundation/specs/tree/5615adc36bad610c7f165fa2967f4ecfa75125f0",
  "nextConsensusVersionRound": 752401,
  "nextConsensusVersionSupported": true,
  "timeSinceLastRound": 0,
  "catchupTime": 0
}

indexer.sqlite is very small:

➜  node ls -ll ./testnetdata/testnet-v1.0
total 27632272
-rw-r--r--  1 maros  staff         4096 15 Oct 17:30 crash.sqlite
-rw-r--r--  1 maros  staff        32768 16 Oct 17:43 crash.sqlite-shm
-rw-r--r--  1 maros  staff         8272 15 Oct 17:30 crash.sqlite-wal
-rw-r--r--  1 maros  staff         4096 15 Oct 17:32 indexer.sqlite
-rw-r--r--  1 maros  staff        32768 16 Oct 17:43 indexer.sqlite-shm
-rw-r--r--  1 maros  staff        41232 15 Oct 17:32 indexer.sqlite-wal
-rw-r--r--  1 maros  staff  14124146688 16 Oct 17:44 ledger.block.sqlite
-rw-r--r--  1 maros  staff        32768 16 Oct 17:43 ledger.block.sqlite-shm
-rw-r--r--  1 maros  staff      5071752 16 Oct 17:44 ledger.block.sqlite-wal
-rw-r--r--  1 maros  staff     10440704 16 Oct 17:38 ledger.tracker.sqlite
-rw-r--r--  1 maros  staff        32768 16 Oct 17:43 ledger.tracker.sqlite-shm
-rw-r--r--  1 maros  staff      4919312 16 Oct 17:44 ledger.tracker.sqlite-wal
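Since a tiny indexer.sqlite is the symptom here, one way to confirm the indexer is actually making progress is to sample the file size twice and check that it grew. This is a sketch: `DB` would point at indexer.sqlite in the real data directory, and a stand-in file is used here (with an append in place of a `sleep`) so the script runs anywhere.

```shell
#!/bin/sh
# Sketch: check that a file is growing between two samples. In a real
# run, set DB to the indexer.sqlite path and replace the append below
# with something like `sleep 30`.
DB=${DB:-/tmp/indexer-demo.sqlite}
printf 'x' > "$DB"
before=$(wc -c < "$DB" | tr -d ' ')
printf 'xx' >> "$DB"   # stand-in for waiting while the indexer works
after=$(wc -c < "$DB" | tr -d ' ')

if [ "$after" -gt "$before" ]; then
    echo "file is growing ($before -> $after bytes)"
else
    echo "no growth: indexer may be stalled"
fi
```

If the file never grows while the node's lastRound keeps advancing, the indexer flag is likely not taking effect (check the config and restart).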

Related question: are all historic transactions available, or only the last N rounds? If the latter, how many, and can I configure it?

An archival node (provided the archival flag was set when the node first started running) will have all of the blocks, but you can directly retrieve a transaction only from within 1000 rounds of the latest round, and this is not configurable. With both archival and indexer enabled, once the node is fully caught up you have full access to any transaction in the ledger.
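The 1000-round rule above can be sketched as a simple check; the round numbers below are examples, not values from a live node.

```shell
#!/bin/sh
# Sketch of the rule above: a transaction is directly retrievable
# (without the indexer) only if its round is within 1000 rounds of the
# node's latest committed round. Example values, not live data.
LAST_ROUND=752400   # node's last committed round
TX_ROUND=751900     # round containing the transaction

if [ $((LAST_ROUND - TX_ROUND)) -lt 1000 ]; then
    echo "within the 1000-round window: direct retrieval should work"
else
    echo "outside the window: needs the indexer or an archival block lookup"
fi
```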

Alternatively, with archival on you can pick any block to see its transactions or, if you know the transaction's "from" or "to" address, page through the transactions of a given account using firstRound/lastRound. Both approaches rely on knowing something besides the transaction ID.
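The firstRound/lastRound paging could be sketched like this. The address, round range, and window size are placeholders, and the curl line is left commented out (it assumes the same v1 endpoint style as the commands earlier in the thread) so the loop itself runs anywhere.

```shell
#!/bin/sh
# Sketch: walk an account's transactions in fixed-size round windows.
# ADDRESS, START, END, and STEP are example placeholders.
ADDRESS="SOME_ACCOUNT_ADDRESS"
START=750000
END=752400
STEP=1000

first=$START
while [ "$first" -le "$END" ]; do
    last=$((first + STEP - 1))
    [ "$last" -gt "$END" ] && last=$END
    echo "querying rounds $first-$last"
    # curl -H "X-Algo-API-Token:$(cat ./data/algod.token)" \
    #   "http://localhost:8080/v1/account/$ADDRESS/transactions?firstRound=$first&lastRound=$last"
    first=$((last + 1))
done
```

Keeping the window bounded avoids oversized responses when an account has many transactions in a range.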

Lastly, you can sign up for PureStake's API (a free tier is available) and directly request any given transaction from the desired ledger. (https://developer.purestake.io)

Hi, I have the same problem. Have you solved it?