An ARC for decoding on-chain local and global state into meaningful information

Hello,
There are now several Smart Contracts in the ecosystem that all do related tasks (farming, staking, lending). In order to monitor them all, the only thing required would be:

  • a method to parse the local state (found in the user wallet);
  • a method to parse the global state (found in the application);
    into actual usable information.

What I am proposing is the creation of an ARC that specifies the outputs of those functions given application state as input.
Ideally, those methods would also not use any additional indexer/node calls, so as to avoid spamming the network.
The only thing they have in common thus far is that they can all be reversed back to asset balances (positive or negative).
A good starting point would be:

from typing import Optional, TypedDict

# "from" is a reserved keyword in Python, so the functional TypedDict syntax is required
StateOutputARC = TypedDict("StateOutputARC", {
    "asset_balances": dict[int, int],  # { asset_id: asset_balance (no decimals) }
    "from": Optional[int],  # unix timestamp
    "to": Optional[int],  # unix timestamp
    "round_from": Optional[int],
    "round_to": Optional[int],
})

def fetch_applications() -> list[int]:
    application_ids = []
    # fetch a list of applications that adhere to below state parsing specification
    # this function can call any indexer/node endpoint (preferable) or off-chain API
    return application_ids

def parse_state(state: dict) -> StateOutputARC:
    state_output = {"asset_balances": {}}
    # parse each key of state to find meaningful information
    # in case of local state, just return a list of user asset balances
    # in case of global state, return the entirety of SC information
    # this function should work with partial state and handle errors in case of partial state misses
    # in case of smart contracts written in Reach (or any other language that optimizes their state memory), 
    # this memory needs to be parsed from given state, without additional calls
    return state_output
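As an illustration of what a `parse_state` implementation has to start from, here is a minimal sketch (not part of the proposal itself) that decodes the raw key/value state format returned by algod and the indexer, where keys are base64-encoded and each value is tagged with type 1 (bytes) or type 2 (uint). The key name in the example is made up for illustration:

```python
import base64

def decode_kv_state(raw_state: list) -> dict:
    """Decode raw key/value app state as returned by algod/the indexer:
    keys are base64-encoded; values carry type 1 (bytes) or type 2 (uint)."""
    decoded = {}
    for kv in raw_state:
        key = base64.b64decode(kv["key"]).decode("utf-8", errors="replace")
        value = kv["value"]
        if value["type"] == 2:
            decoded[key] = value["uint"]  # uint64 value
        else:
            decoded[key] = base64.b64decode(value["bytes"])  # raw byte slice
    return decoded

# Example with a made-up key name:
raw = [{"key": base64.b64encode(b"total_staked").decode(),
        "value": {"type": 2, "uint": 1000}}]
```

A real `parse_state` would then map these decoded keys onto the `StateOutputARC` fields; that mapping is exactly the per-contract knowledge the proposed ARC would standardize.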

I believe this would be very beneficial to user experience, since integrating different Smart Contracts into a portfolio would become trivial. This is so far a pretty big problem, and having a standard would simplify the development of new solutions. As the main developer of Vestige, this would be especially useful for me, as well as for any new product seeking to include these applications in their UI.

I am curious to hear your feedback.


I think this is a good idea generally. It would make sense as part of the ARC for the full ApplicationSpec, which has yet to be drafted; the issue for discussion is here: Discussion: Extended Application Specification · Issue #118 · algorandfoundation/ARCs · GitHub

I PR’d a change to beaker to specify how to decode a given state value here:


I’m not sure adding it to ARC4 would make sense. This is more specific to the piece of code that handles parsing of the application than to the application itself: a general standard, so that a huge open-source repository of these snippets could be created, developed by SC owners and then easily integrated by other developers.
For anyone who wants their SCs widely integrated, this would be a must-have.
The general problem is that everyone has their own idea of how their state should look, which means any new SC is a completely new problem to solve. Most developers don’t even explain their state in the docs, so it is literally a trial-and-error process until it works.
The alternative right now is for each of the parties to create their own system for this (a new lending protocol? every portfolio tracker needs to spend development time understanding its state), which is insanity once we account for the growth of the ecosystem.

I’m not sure adding it to ARC4 would make sense.

Definitely, the ApplicationSpec would contain the ARC4 spec but with additional information like, for example, how to interpret given state values.

Additionally, it should provide things like expected default arguments or other hints to help client applications.
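To make the idea concrete, a purely hypothetical fragment of such an extended app spec might look like the following. Every key name here is invented for illustration; the real ApplicationSpec schema is still being discussed in issue #118:

```python
# Hypothetical sketch only: none of these key names come from a real spec.
app_spec_hints = {
    "state": {
        "global": {
            # How each global state key should be interpreted by clients
            "total_staked": {"type": "uint64", "decode_as": "asset_amount",
                             "asset_key": "staked_asset_id"},
            "staked_asset_id": {"type": "uint64", "decode_as": "asset_id"},
        },
        "local": {
            "user_deposit": {"type": "uint64", "decode_as": "asset_amount",
                             "asset_key": "staked_asset_id"},
        },
    },
    "hints": {
        # Default arguments a client could resolve without asking the user
        "stake": {"default_arguments": {
            "asset": {"source": "global-state", "data": "staked_asset_id"},
        }},
    },
}
```

The point is that a portfolio tracker could walk such a structure and know, without reading the contract's source, that `user_deposit` is an amount of the asset named by `staked_asset_id`.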

The version of the application spec that Beaker currently produces can be used to autogenerate a TypeScript client using GitHub - algorand-devrel/beaker-ts

I completely agree with the goal of allowing a complete stranger to easily use and understand an application they’ve never seen before.

Check out beaker/beaker-ts and please do comment on the app spec issue if you have any other thoughts on it


Hi @grzracz !

A new or existing ARC won’t solve the problem. Hardly anyone uses ARCs, and with the growth of the ecosystem that share will only shrink. We have ASA swapping providers, and some of them don’t even fill the note field, let alone use the appropriate ARC-2.

ARC19 is used because it solves people’s problems. dApp providers don’t have a problem with their dApps not being categorized; we who use their dApps have that problem instead.

As far as I’m concerned, the only way to solve this problem is the way we at ASA Stats have planned to do it: by open-sourcing our codebase, allowing users of dApps (like us) to not spend their time on development.

Your motivation is benevolent, and we at ASA Stats would be glad to assist you in defining the standard to be used in our codebase for exposing various dApps. Nevertheless, as I already told you, our business model can’t keep up with open-sourcing our codebase before next summer.

@grzracz
I agree that we need an ARC with some “mandatory” functions inside apps, like they have for ERC-721, for example.

You can start to write an ARC for this, just be sure to respect ARC-4 & ARC-22 (ABI for Read Only).

@ipaleka If you can share with me which ARC is not correctly used, it will be helpful, so please hit me up.

@ipaleka If you can share with me which ARC is not correctly used, it will be helpful, so please hit me up.

I linked an example discussion above:

We have ASA swapping providers, and some of them don’t even fill the note field, let alone use the appropriate ARC-2.

Here’s an excerpt from it:

Some of those providers like Atomixwap provide a custom note, but others like Swapper don’t.

Can you provide us with an example here of a dApp and/or dApp provider on Algorand that uses an ARC to identify itself or its dApp?

We have, for example, FIFA-collect with
tx: CS5B5YZJHLDBYLTDRV5E35HGAUGGGXMQRFIGF3X5QZJUBZL7KLZA

AlgoMart/v1:j{"t":"ctpf","a":888995843,"s":["arc2"]}
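For reference, the note in the example above follows the ARC-2 convention `<dapp-name>:<format><data>`, where the format character is one of `m` (MsgPack), `j` (JSON), `b` (bytes) or `u` (UTF-8). A rough sketch of a parser follows; the dapp-name pattern below is an approximation of the one in the ARC-2 spec, not a verbatim copy:

```python
import json
import re

# Approximation of the ARC-2 dapp-name pattern; check the spec for the exact rule.
ARC2_RE = re.compile(r"^([a-zA-Z0-9][a-zA-Z0-9_/@.-]{4,31}):([mjbu])(.*)$", re.DOTALL)

def parse_arc2_note(note: str):
    """Return (dapp_name, data_format, data) for an ARC-2 note, or None."""
    match = ARC2_RE.match(note)
    if match is None:
        return None
    name, fmt, data = match.groups()
    if fmt == "j":
        data = json.loads(data)  # "j" marks JSON-encoded data
    return name, fmt, data
```

Applied to the AlgoMart note above, this yields the dapp name `AlgoMart/v1` and the decoded JSON payload.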

But now that I am looking at it, we need to spread ARC-2; it is not used by every dApp.

Thanks for sharing this point.

But now that I am looking at it, we need to spread ARC-2; it is not used by every dApp.

Yes, that’s the starting point for sure. ARC-2 implementation is trivial, and still people don’t use it. We have managed to bypass that problem for most dApps, but for some of them, like those swap providers, there is no way to identify them.

And that’s an enormous problem for our NFT engine in ASA Stats, as swapping NFTs is just another sale: if we have a sold price for one of the items in a swap, we can define the price for its counterpart(s). Providing that information to our future AI NFT price engine will greatly increase the accuracy of our NFT price information.

The problem @grzracz brought to the table is kind of like learning to fly while we still struggle to walk. As I already said, a new ARC won’t solve dApp providers’ problems; it will only help the projects that analyze those dApps, so providers will avoid using it. The Foundation needs to find a way to motivate providers to implement a new ARC. And the same goes for ARC-2…

Thanks for sharing this point.

You’re very welcome, thanks for understanding!


What @ipaleka says is actually very interesting. I think what will end up happening is that we’re going to go the open-repo route first. By that I mean we’ll open up HOW we’re decoding state and provide a few examples for the top protocols, and then once we have that ball rolling, codify it into an ARC with a bit of a “proven” track record, making sure that, as @StephaneBarroso says, it respects the ARC-4 & ARC-22 standards. Otherwise we’ll end up with a never-ending set of fragmented standards that individual dApps ignore because they find them too cumbersome.

At the end of the day, if we find that our way of doing things differs too much from the way ASA Stats does it, I’m sure we can reach a compromise that satisfies both of us and achieves the intended goal of having a network standard everyone can abide by.


Bear in mind that we have a completely custom solution in production right now, created immediately after various dApps were hitting mainnet, so refactoring is both needed and expected.

How we refactor our code depends entirely on the standard we’re going to have, and we’re not going to refactor a single line of code before the system is defined by the joint efforts of @grzracz, the ARC discussion, etc.

This solution is written in JS/TS, correct? We do have a repo set up with templates and an implementation standard, but our approach would be to have it written in Python.

I guess to make everyone happy we would need two files with the exact same logic in two language implementations. Once it is implemented in one of the two, it shouldn’t be difficult to translate it to the other.

Yeah, I completely understand and wouldn’t expect it to be any different. I think that’s the best way to approach it, to make sure it’s a standard with actual sticking power. At the moment, the best course would be for each of us to build our own solution, then convene on the rationale behind them and build from that a standard that satisfies both of us (and others trying to build similar products in the ecosystem, although I don’t know of many; maybe AlgoGator?), fully aware that some refactoring will be needed and that a solution as language-agnostic as possible would need to be reached.

We can have an ARC Discord Meeting this month if you guys are ready to discuss it with each other.

It’s way better to have everyone on the same call for 2 hours on one topic.

I usually hold them on Wednesday or Thursday at:

  • 1pm NY
  • 5pm UTC
  • 7pm PARIS

If you can just list which dApps I need to add to the call, I might be able to get most of them.

I also added a channel #arc-decoding-on-chain-state if you want to talk about this subject on discord.

Added a brief description to the Discord channel; it might be easier to continue the discussion there 🙂

This solution is written in JS/TS, correct? We do have a repo set up with templates and an implementation standard, but our approach would be to have it written in Python.

I guess to make everyone happy we would need two files with the exact same logic in two language implementations. Once it is implemented in one of the two, it shouldn’t be difficult to translate it to the other.

Only Python here, I’m afraid.

And the plan is to develop that standardized code in Python too, we simply don’t have other resources for now.

Added a brief description to the Discord channel; it might be easier to continue the discussion there 🙂

The worst decision ever regarding ASA Stats was that we didn’t start with GitHub from the very beginning; we picked Discord instead, and now a sea of valuable comments and discussions is lost in the void.

So please, only public spaces for actual discussions. I mean, if we continue on Discord, I’ll be copying/pasting valuable comments somewhere in a public space…

From Algorand Discord discussion:


BunsanMuchi | Vestige.fi — 10/07/2022

As for dApps off the top of my head that would be interested in setting a standard, I would say all the ones that are building P. Managers (from the directory, those that I believe are still active; sorry if I missed one):

-AlgoGator

-ASAPortfolio

-ASAStats

-Defly

-Headline (AlgoCloud, but unsure if development has halted)

-Upside Finance

-Vestige

And dApps that would have an interest in having their state decoded that are already on MainNet (again, sorry if I’m missing someone):

-AlgoFi

-Cometa

-FolksFinance

-GARD

-Humble

-Pact

-Tinyman

-Yieldly


ipaleka — Yesterday at 10:03 PM

Hi everybody!

First of all, what are we trying to achieve? This channel is named with the prefix “#arc-”, but nothing has gone that way so far. I’m here on behalf of ASA Stats, and we’re going to prepare an open-source version of our community-driven system, where Algorand users request a new dApp to be implemented in ASA Stats, and afterwards we do the research and implementation. Our governance seats are full of people who contributed that way, and we have confirmed that the path from users to tracker providers is the only way. Whichever standard we bring, tracking providers will always need to do their own implementation. Or they will simply wait for our open-source implementation next summer, or for some other provider’s implementation if they publish it before then.

Our business model is set to open-source our codebase next summer. We’d be delighted to publish such an open-source codebase based on a standardized format, and we’re looking forward to the creation of such a standard.

That bloated code there in that repo is the very first burden in achieving a standard. The second burden is moving the discussion here into a non-public space. That original thread on the Forum is a far better choice if you have decided not to move it to GitHub - algorandfoundation/ARCs: Algorand Requests for Comments.


ipaleka — Yesterday at 10:21 PM

Process

Before submitting a new ARC, please have a look at ARC-0.


Stef | ARC Manager — Today at 10:56 AM

Thanks for coming here.

With this Discord channel, I intend to let people who don’t read the forum know that we have an interesting discussion about this subject.

I also added a channel arc-decoding-on-chain-state if you want to talk about this subject on Discord.

You guys can still continue the discussion on the forum if you think it fits the use case better.


Stef | ARC Manager — Today at 11:00 AM

You can take the template and fill in each part.


ipaleka — Today at 11:17 AM

Scattered discussion is a problem. The people involved have to have all the data in order to provide valuable outcomes. One solution is to copy/paste valuable comments into a central thread, but who will do that in this case?


ipaleka — Today at 11:18 AM

You can take the template and fill in each part.

What to fill in should be a topic for this discussion. Code should go as the very last and optional step.

Hi there, one of the Defly devs here. I saw this discussion on Discord and thought this looks interesting & useful, thanks for getting this started!

Figuring out how to make sense of the global/local state of a new protocol is certainly a problem that we also faced and it would be great if there was a better way to get this done. We’d be happy to join and help with drafting such a standard.

That said, I like the proposed bottom-up approach (starting a repository of common snippets that people can contribute to and that is eventually standardized) more than a top-down approach (starting from a standard that then likely is not used).

It would be interesting to hear from protocol creators as well. What would we as protocol integrators expect them to provide to facilitate our work?


That said, I like the proposed bottom-up approach (starting a repository of common snippets that people can contribute to and that is eventually standardized) more than a top-down approach (starting from a standard that then likely is not used).

The thing is that you, @grzracz, and a few others of us have insights: we know what is needed and what variations exist in the ecosystem.

Once people start using code that has missing parts, we’ll create more damage than benefit, and the developer community won’t gain at all: those people will simply choose to fork an existing open-source solution once it arrives (the ASA Stats repo is going to be open-sourced in the summer).

This discussion should start, for example, by defining what qualifies a dApp for inclusion in the standard, rather than just presenting “STKE” or “LEND” as something people will use as a starting point.

“What is staking?” simply has to be among the first topics for this discussion.

You guys may use that code from @grzracz, but don’t expect that we at ASA Stats will ever use it; we have our own codebase (just like a few of you have your own), and we don’t have the need or the time to refactor it into something we don’t yet know is better or worse than what we already have.