SignTxnsFunction (ARC-0001) vs TransactionSigner (algosdk) response types

The ARC-0001 spec defines SignTxnsFunction as

export type SignTxnsFunction = (
  txns: WalletTransaction[],
  opts?: SignTxnsOpts
) => Promise<(SignedTxnStr | null)[]>;

In the “Semantic and Security Requirements” section [link], it explains that the response should match the length of the txns array, containing a base64-encoded SignedTxnStr for each signed transaction or null for each unsigned one:

Promise<(SignedTxnStr | null)[]>
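To illustrate the shape (with placeholder values, not real signatures), a wallet asked to sign only some transactions in a three-element request would return something like:

```typescript
type SignedTxnStr = string; // base64-encoded signed transaction

// Hypothetical response for a 3-element txns request where txns[1]
// belongs to another party and was therefore not signed:
const response: (SignedTxnStr | null)[] = [
  "gqNzaWf...", // base64 SignedTxnStr for txns[0] (placeholder value)
  null,         // txns[1] was not signed
  "gqNzaWf...", // base64 SignedTxnStr for txns[2] (placeholder value)
];

// The response length equals the request length, so response[i]
// always corresponds to txns[i].
```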

However, the Algorand JS SDK defines TransactionSigner as

export type TransactionSigner = (
  txnGroup: Transaction[],
  indexesToSign: number[]
) => Promise<Uint8Array[]>;

Its JSDoc comment says the response should match the length of indexesToSign (not txnGroup), and only contain encoded signed transactions.

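For illustration, here is a sketch (with hypothetical data, not algosdk code) of how a caller has to map a TransactionSigner response back onto the group using indexesToSign:

```typescript
// The signer returns one entry per requested index, so the caller must
// use indexesToSign to map results back onto the full transaction group.
const indexesToSign = [0, 2];
const signed: Uint8Array[] = [new Uint8Array([1]), new Uint8Array([2])];

// Rebuild a group-aligned array (null where this signer did not sign):
const groupSize = 3;
const aligned: (Uint8Array | null)[] = new Array(groupSize).fill(null);
indexesToSign.forEach((txnIndex, i) => {
  aligned[txnIndex] = signed[i];
});
// aligned is now [signed[0], null, signed[1]]
```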
This contradictory guidance seems to play out in the varying implementations you see across Algorand-compatible wallets. I’m the author of @txnlab/use-wallet, and one of the library’s features is that it normalizes the response types of each wallet’s signing function, which vary greatly:

  • Promise<Uint8Array[]> (Defly, Pera)
  • Promise<(Uint8Array | null)[]> (Lute)
  • Promise<(string | null)[]> (Exodus, Kibisis)
  • Promise<(string | undefined)[]> (Magic Link)
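A rough sketch of the kind of normalization this requires (the helper is hypothetical, not use-wallet’s actual implementation):

```typescript
// Hypothetical normalizer: coerce the varying wallet response shapes
// above into a single Uint8Array[] form.
type RawWalletResponse = (Uint8Array | string | null | undefined)[];

function normalizeResponse(response: RawWalletResponse): Uint8Array[] {
  return response
    .filter((item): item is Uint8Array | string => item != null)
    .map((item) =>
      typeof item === "string"
        ? new Uint8Array(Buffer.from(item, "base64")) // base64 SignedTxnStr
        : item
    );
}
```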

The library exports a TransactionSigner function that is meant to be used with the Atomic Transaction Composer, which seems to be considered “best practice” since AlgoKit’s release. So I’ve decided to go with Promise<Uint8Array[]> as the response type for both the signTransactions and transactionSigner methods. [link]

Sorry for the long post… all of this is to say: would it make sense to reconcile these contradictory patterns? New wallets looking for guidance will probably follow ARC-0001 as a finalized spec, but then additional steps are required (base64-decoding the signed transactions, then filtering out the nullish elements) before the wallet’s signing function is compatible with the Atomic Transaction Composer.
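Concretely, the extra glue looks something like this (a hedged sketch with simplified stand-in types, not the actual ARC-0001 or algosdk definitions):

```typescript
// Hypothetical adapter: wrap an ARC-0001-style SignTxnsFunction so it
// can be used where algosdk expects a TransactionSigner. The types are
// simplified stand-ins for the real WalletTransaction / Transaction.
type SignTxnsFn = (txns: { txn: string }[]) => Promise<(string | null)[]>;

function makeTransactionSigner(signTxns: SignTxnsFn) {
  return async (
    txnGroup: { toByte(): Uint8Array }[], // stand-in for algosdk.Transaction
    indexesToSign: number[]
  ): Promise<Uint8Array[]> => {
    // ARC-0001 wants the whole group, marking the txns that should not
    // be signed with an empty signers array:
    const walletTxns = txnGroup.map((txn, i) => ({
      txn: Buffer.from(txn.toByte()).toString("base64"),
      ...(indexesToSign.includes(i) ? {} : { signers: [] as string[] }),
    }));

    const response = await signTxns(walletTxns);

    // ATC expects only the signed entries, decoded from base64 to bytes:
    return response
      .filter((s): s is string => s != null)
      .map((s) => new Uint8Array(Buffer.from(s, "base64")));
  };
}
```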


Hello, developer for Lute here.

I prefer the pattern defined in ARC-0001 of filling with nulls so that the response length matches the request length. That way the response retains the relative indices of the transactions, meaning you don’t have to know what the request was in order to parse the response.

I’m not sure why the ATC signer works the other way, but since the ATC handles its signer’s responses internally, the work of reconstructing the group doesn’t fall on the developer. However, if a developer wishes to integrate directly with a wallet (not via use-wallet), the work of reconstructing the group from the wallet’s response is on them. In that case I would much rather have the nulls in the response, which is why I chose to have Lute’s responses contain them.
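As a sketch of that point (a hypothetical helper; the byte handling is simplified), null placeholders let the developer reassemble the group positionally, without re-deriving which indices were requested:

```typescript
// Because response[i] lines up with group[i], merging the wallet's
// signatures back into the group needs no extra bookkeeping: take the
// wallet's signed bytes where present, and fall back to bytes obtained
// elsewhere (e.g. signed by another party) for the null slots.
function mergeGroup(
  fallback: Uint8Array[],           // group-ordered txn bytes from elsewhere
  response: (Uint8Array | null)[]   // same length; null where wallet didn't sign
): Uint8Array[] {
  if (fallback.length !== response.length) {
    throw new Error("response length must match the group length");
  }
  return fallback.map((bytes, i) => response[i] ?? bytes);
}
```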


Ah, that’s a great point. The TransactionSigner’s response only matters to the ATC itself, while a developer would be handling SignTxnsFunction’s response directly and could actually make use of the null placeholders.

In that case, leaving the ATC’s signer as-is and changing SignTxnsFunction to return Promise<(Uint8Array | null)[]> would be consistent enough to avoid confusion.