I am trying to find a way to pass a human-meaningful message when rejecting an application call from a smart contract. Is there one?
Rejecting doesn’t take arguments, and the application logs are not surfaced in the algod exception as far as I can tell (though I’m “looking” through py-algorand-sdk - are they?).
This leaves us the option of matching error strings like the one shown below, where the pc argument will change if the smart contract changes in any small but meaningful way.
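For illustration, the kind of error string I mean looks roughly like this (reconstructed rather than copied from my logs, so the exact wording will differ by node and SDK version):

algosdk.error.AlgodHTTPError: ... logic eval error: assert failed pc=703. Details: pc=703, opcodes=...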
For example, if you modify the vrf-oracle smart contract, the expected daemon error messages no longer line up and the service exits.
Ideally we would even have a choice between (PyTEAL) Log(Bytes("Failed to get randomness")) ending up in the error as a last_log field, and Reject(Int(91)) or Reject(Bytes("Failed to get randomness")) accepting an argument to propagate.
Hey, unfortunately there isn’t a way to do this at present - at least not when submitting to a node. When using dryrun you can get some extra information about the evaluation, but that would require a second submission after it’s failed. There’s currently ongoing work to implement a better “simulation” endpoint which can provide more detailed feedback on evaluations, but I’m not sure when that’s due.
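For what it’s worth, the dryrun round trip is short. A minimal sketch with py-algorand-sdk, assuming an AlgodClient called client and a signed transaction signed_txn (both placeholders), and a recent SDK where create_dryrun lives in algosdk.transaction (older versions keep it under algosdk.future.transaction):

from algosdk import transaction
from algosdk.dryrun_results import DryrunResponse

# Build a dryrun request from the already-signed transaction(s);
# create_dryrun pulls the app and account state it needs from the node.
drr = transaction.create_dryrun(client, [signed_txn])

# Evaluate without submitting, then print the opcode-level trace of the app call.
resp = DryrunResponse(client.dryrun(drr))
print(resp.txns[0].app_trace())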
This has actually been a subject that has come up multiple times, typically asking for the assert opcode to take an optional argument, but placing this in the language itself isn’t ideal; it should instead be the tooling that interprets where and why the failure happened before presenting a human-readable error to the user.
Update:
Using Beaker, it will parse the error and locate where in the code the failure happened, which will certainly help with debugging. This could potentially give you a way to then categorise roughly where the failure took place and present the end user with a more elegant message?
This has actually been a subject that has come up multiple times,
It isn’t surprising, considering that in this aspect TEAL/AVM doesn’t have parity with FORTRAN 77. Even a single exit-code byte would make a massive difference in developer experience.
typically asking for the assert opcode to take an optional argument, but placing this in the language itself isn’t ideal; it should instead be the tooling that interprets where and why the failure happened before presenting a human-readable error to the user.
If either Reject() took an argument or the last log line was exposed in the error, a custom assert could be written in two lines.
I think the “last log” is a good candidate to carry this information - it is already used in ABI calls to carry the return value.
The Beaker sourcemap functionality is interesting, thank you for that. Another possible route is to have a smart-contract test suite generate a dict of error messages per condition, which is likely the way we will go. In the meantime, here is the custom_assert gadget I came up with:
from pyteal import *

# NB: do not decorate with @Subroutine or it doesn't work.
# Must pass in a string, not bytes, or the last opcode is a load.
def custom_assert(cond, msg):
    return If(Not(cond)).Then(Assert(Bytes('') == Bytes(msg)))

# `router` is the contract's pyteal Router, defined elsewhere.
@router.method
def update_state_int_1(key: abi.DynamicBytes, val: abi.Uint64):
    return Seq(
        custom_assert(Txn.sender() == Global.creator_address(), "UNAUTHORIZED"),
        App.globalPut(key.get(), val.get()),
    )
When the creator calls that ABI method, it happily succeeds.
When anyone else calls it, the opcode surfaced is the last op, which is Bytes(msg), so the error message very helpfully reads:
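Something along these lines (typed from memory rather than pasted from a node, so treat the exact formatting as approximate):

logic eval error: assert failed pc=NNN. Details: pc=NNN, opcodes=pushbytes 0x554e415554484f52495a4544 // "UNAUTHORIZED"; ==; assert

i.e. the quoted string shows up right in the disassembled opcode context of the error.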
This is a smart solution, and if you can afford the additional opcode budget and it’s beneficial for you then you should definitely use it. Since you brought this up: I had been meaning to create a minimal demo that parses the error and reads the PC, mapping it to a range that provides a human-readable error, although it’s very tricky and can only really be done by hand atm without new tooling.
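For anyone who wants to try the pc-mapping route by hand, a minimal sketch of the idea in Python (the pc ranges and messages below are invented; you would regenerate the table from your own build or test suite):

import re

# Hypothetical, per-build table: pc ranges -> human-readable causes.
PC_RANGES = [
    ((100, 180), "UNAUTHORIZED: only the creator may call this"),
    ((181, 250), "EXPIRED: the ticket's round window has passed"),
]

def explain_failure(algod_error: str) -> str:
    """Map the pc= value in an algod 'logic eval error' to a friendly message."""
    match = re.search(r"pc=(\d+)", algod_error)
    if not match:
        return "Unknown failure (no pc found in the error)"
    pc = int(match.group(1))
    for (lo, hi), message in PC_RANGES:
        if lo <= pc <= hi:
            return message
    return f"Unknown failure at pc={pc}"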
The bad news, unfortunately, is that we have seen this type of trick before; I think it was the AlgoFi team who introduced it to me. It was something similar to this:
I’m considering using my gadget in production, as the contract I am writing has various “expected” failure conditions - e.g. freebie tickets that expire after a certain round, the contract being paused during special events, etc.
The plan until I came up with this was to build a test suite (which I will build anyway) that creates mappings from pc=X, opcode=Y back to error IDs that I can then present to the user as the cause of failure.
If this works in prod as well, it will save some extra magic from happening (the bad kind of magic).
Opcode-wise I think it should be OK, but we’ll see; I’m testing the edge-most cases to check whether I exceed the budget anywhere. I believe (?) the custom_assert function is inlined, but I can’t be sure. So during happy paths, instead of an assert (1) on top of the == (1), I’m doing an If (1?) and a Not (1) (which I could remove at the cost of readability/dev expectation) on top of the == (1).
So if If has cost 1, then I should be one opcode over budget per check compared to a native assert, afaict.
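To spell the count out, under the assumption that PyTEAL compiles If(Not(cond)).Then(...) into the condition, a !, and a branch, each costing 1:

# native check:   cond (==) + assert                   -> 2 ops on the happy path
# custom_assert:  cond (==) + ! + branch from the If   -> 3 ops on the happy path
# (the byte pushes, the == and the assert inside Then() only execute on failure)

So roughly one extra opcode per successful check, matching the estimate above.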
Bumping this to say we did use the custom asserts, and they were very useful in (eventually) figuring out that a subset of nodes that some users were using were out of sync - not sure we would have been able to deduce that otherwise.
The contract, including the custom_assert gadgets, is open-sourced here: