The ARC-0001 spec defines `SignTxnsFunction` as:

```typescript
export type SignTxnsFunction = (
  txns: WalletTransaction[],
  opts?: SignTxnsOpts
) => Promise<(SignedTxnStr | null)[]>;
```
In the “Semantic and Security Requirements” section [link], it explains that the response should match the length of the `txns` array, containing a base64-encoded `SignedTxnStr` for each signed transaction or `null` for each unsigned one:

```typescript
Promise<(SignedTxnStr | null)[]>
```
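To make that concrete, here’s a hypothetical example (the group size and values are mine, not from the spec) of what an ARC-0001 wallet would return for a three-transaction group where it signs only the first and last transactions:

```typescript
// Hypothetical ARC-0001 response: same length as `txns`, with a base64
// SignedTxnStr in each signed position and null where the wallet did not sign
// (e.g. because that WalletTransaction had an empty `signers` array).
const arc1Result: (string | null)[] = [
  'gqNzaWfEQ...', // base64-encoded signed txns[0] (truncated placeholder)
  null,           // txns[1] was not signed
  'gqNzaWfEQ...', // base64-encoded signed txns[2] (truncated placeholder)
];
```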
However, the Algorand JS SDK defines `TransactionSigner` as:

```typescript
export type TransactionSigner = (
  txnGroup: Transaction[],
  indexesToSign: number[]
) => Promise<Uint8Array[]>;
```
Its JSDoc comment says the response should match the length of `indexesToSign` (not `txnGroup`), and only contain encoded signed transactions:

```typescript
Promise<Uint8Array[]>
```
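For the same hypothetical three-transaction group, an SDK-style signer called with `indexesToSign = [0, 2]` would instead return a two-element array (again, the values here are placeholders, not real encodings):

```typescript
// Hypothetical SDK-style result for indexesToSign = [0, 2]: the array length
// matches indexesToSign (2), not txnGroup (3), and there are no null gaps.
const sdkResult: Uint8Array[] = [
  new Uint8Array([/* msgpack-encoded signed txnGroup[0] */]),
  new Uint8Array([/* msgpack-encoded signed txnGroup[2] */]),
];
```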
This contradictory guidance seems to play out in the varying implementations you see across Algorand-compatible wallets. I’m the author of @txnlab/use-wallet, and one of the features of the library is that it normalizes the response type of each wallet’s signing function, which varies greatly:

- `Promise<Uint8Array[]>` (Defly, Pera)
- `Promise<(Uint8Array | null)[]>` (Lute)
- `Promise<(string | null)[]>` (Exodus, Kibisis)
- `Promise<(string | undefined)[]>` (Magic Link)
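As a rough illustration of what that normalization involves (a simplified sketch, not the actual use-wallet code; it assumes string results are base64-encoded signed transactions):

```typescript
// Simplified sketch: collapse the various wallet response shapes to Uint8Array[].
// Assumes string elements are base64-encoded signed transactions, per ARC-0001.
function normalizeSignedTxns(
  result: (Uint8Array | string | null | undefined)[]
): Uint8Array[] {
  return result
    .filter((item): item is Uint8Array | string => item !== null && item !== undefined)
    .map((item) =>
      typeof item === 'string'
        ? new Uint8Array(Buffer.from(item, 'base64')) // Node Buffer; use a base64 helper in the browser
        : item
    );
}
```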
The library exports a `TransactionSigner` function that is meant to be used with the Atomic Transaction Composer, which seems to be considered “best practice” since AlgoKit’s release. So I’ve decided to go with `Promise<Uint8Array[]>` as the response type for both the `signTransactions` and `transactionSigner` methods. [link]
Sorry for the long post… all of this is to say: would it make sense to reconcile these conflicting patterns? New wallets looking for guidance will probably follow ARC-0001 as a finalized spec, but then additional steps (base64-decoding the signed transactions, then filtering out nullish elements) are required before the wallet’s signing function is compatible with the Atomic Transaction Composer.
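To illustrate those extra steps, here’s a rough sketch (the function name and wiring are mine, not from ARC-0001 or any wallet SDK; it uses Node’s Buffer for base64 for brevity) of adapting an ARC-0001 `signTxns` response so it satisfies algosdk’s `TransactionSigner` contract:

```typescript
import algosdk from 'algosdk';

// Rough sketch: wrap an ARC-0001 signTxns function as an algosdk TransactionSigner.
function makeArc1Signer(
  signTxns: (txns: { txn: string; signers?: string[] }[]) => Promise<(string | null)[]>
): algosdk.TransactionSigner {
  return async (txnGroup, indexesToSign) => {
    // ARC-0001 expects every transaction in the group, base64-encoded; an empty
    // `signers` array tells the wallet not to sign that transaction.
    const walletTxns = txnGroup.map((txn, i) => ({
      txn: Buffer.from(algosdk.encodeUnsignedTransaction(txn)).toString('base64'),
      ...(indexesToSign.includes(i) ? {} : { signers: [] as string[] }),
    }));

    const result = await signTxns(walletTxns);

    // Extra steps to satisfy the TransactionSigner contract: drop nulls and
    // base64-decode so only the signed transactions remain, as Uint8Arrays.
    return result
      .filter((stxn): stxn is string => stxn !== null)
      .map((stxn) => new Uint8Array(Buffer.from(stxn, 'base64')));
  };
}
```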