Hello,
There are now several Smart Contracts in the ecosystem that all do related tasks (farming, staking, lending). In order to monitor them all, only two things are required:
- a method to parse the local state (found in the user wallet);
- a method to parse the global state (found in the application);
each of which turns raw state into actual usable information.
What I am proposing is the creation of an ARC that specifies the outputs of those functions given application state as input.
Ideally, those methods would also not use any additional indexer/node calls, so as to avoid spamming the network.
The only thing these applications have in common thus far is that their state can all be reversed back to asset balances (positive or negative).
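For example (asset IDs and amounts invented for illustration), a lending position could decode to a positive supplied balance and a negative borrowed one:

```python
# illustrative only: 5 USDC supplied (asset 31566704 on MainNet),
# 2 ALGO borrowed (conventionally represented as asset 0)
asset_balances = {31566704: 5_000_000, 0: -2_000_000}
```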
A good starting point would be:
```python
from typing import Optional, TypedDict

# "from" is a reserved keyword in Python, so the functional TypedDict
# syntax is needed to use it as a field name; total=False since the
# timestamp/round fields may be omitted
StateOutputARC = TypedDict(
    "StateOutputARC",
    {
        "asset_balances": dict[int, int],  # {asset_id: balance (base units, no decimals)}
        "from": Optional[int],             # unix timestamp
        "to": Optional[int],               # unix timestamp
        "round_from": Optional[int],
        "round_to": Optional[int],
    },
    total=False,
)


def fetch_applications() -> list[int]:
    application_ids: list[int] = []
    # fetch a list of applications that adhere to the state parsing
    # specification below; this function can call any indexer/node
    # endpoint (preferable) or off-chain API
    return application_ids


def parse_state(state: dict) -> StateOutputARC:
    state_output: StateOutputARC = {"asset_balances": {}}
    # parse each key of state to find meaningful information:
    # - for local state, return the user's asset balances
    # - for global state, return the entirety of the SC's information
    # the function should work with partial state and handle errors
    # gracefully when expected keys are missing
    # smart contracts written in Reach (or any other language that
    # optimizes its state memory) must have that memory decoded from
    # the given state alone, without additional calls
    return state_output
```
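To show how these two pieces might fit together in a tracker, here is a minimal, hypothetical sketch assuming py-algorand-sdk and a public indexer endpoint; the address, URL, and the key-value flattening are illustrative, not part of the proposal:

```python
from algosdk.v2client import indexer

# placeholder indexer endpoint and address, for illustration only
client = indexer.IndexerClient("", "https://mainnet-idx.algonode.cloud")
tracked = set(fetch_applications())

account = client.account_info("USER_ADDRESS")["account"]
for app in account.get("apps-local-state", []):
    if app["id"] in tracked:
        # the indexer returns local state as a list of {key, value}
        # pairs; flatten it into the dict that parse_state() expects
        state = {kv["key"]: kv["value"] for kv in app.get("key-value", [])}
        print(app["id"], parse_state(state)["asset_balances"])
```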
I believe this would be very beneficial to user experience, since integrating different Smart Contracts into a portfolio would become trivial. This is so far a pretty big problem, and having a standard would simplify the development of new solutions - as the main developer of Vestige, I would find this especially useful, as would any new product seeking to include these applications in their UI.
I'm not sure adding it to ARC4 would make sense. This is more specific to the piece of code that handles parsing of the application than to the application itself - a general standard, so that a huge open-source repository of these snippets could be created, developed by SC owners and then easily integrated by other developers.
For anyone that wants their SCs widely integrated this would be a must-have.
The general problem is that everyone has their own idea of what their state should look like, which means any new SC is a completely new problem to solve. Most developers don't even explain their state in the docs, so it is literally a trial-and-error process until it works.
The alternative right now is to have each of the parties create their own system for this (new lending protocol? every portfolio tracker needs to spend development time understanding its state), which is unsustainable once we account for the growth of the ecosystem.
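To make the trial-and-error concrete: raw application state comes back as base64-encoded keys with untyped values, so without documentation an integrator is left guessing at meanings. A small sketch (the key and value here are invented):

```python
import base64

# one key-value entry as an algod/indexer endpoint returns it;
# the contents are made up for illustration
raw = [{"key": "dG90YWxfc3Rha2Vk", "value": {"type": 2, "uint": 1500000}}]

for kv in raw:
    name = base64.b64decode(kv["key"]).decode()  # -> "total_staked"
    value = kv["value"]
    if value["type"] == 2:  # uint64
        print(name, value["uint"])
    else:  # bytes: an address? a packed struct? anyone's guess without docs
        print(name, base64.b64decode(value["bytes"]))
```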
I'm not sure adding it to ARC4 would make sense.
Definitely, the ApplicationSpec would contain the ARC4 spec but with additional information like, for example, how to interpret given state values.
Additionally, it should provide things like expected default arguments or other hints to help client applications.
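As a purely illustrative shape (the field names are hypothetical, not the actual Beaker schema), such an extended spec could look like:

```python
# hypothetical application spec carrying state-interpretation hints;
# every field name here is illustrative, not a defined standard
app_spec = {
    "contract": {},  # the ARC-4 contract description would go here
    "state": {
        "global": {
            "total_staked": {"type": "uint64", "descr": "total stake, base units"},
        },
        "local": {
            "user_stake": {"type": "uint64", "descr": "user's stake, base units"},
        },
    },
    "hints": {
        "stake(uint64)void": {"default_arguments": {}},  # expected defaults per method
    },
}
```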
The version of the application spec that Beaker currently produces can be used to autogenerate a TypeScript client via GitHub - algorand-devrel/beaker-ts.
I completely agree with the goal of allowing a complete stranger to easily use and understand an application they've never seen before.
Check out beaker/beaker-ts and please do comment on the app spec issue if you have any other thoughts on it
A new or existing ARC won't solve the problem. Almost nobody uses ARCs, and with the growth of the ecosystem that group will only shrink in relative terms. We have ASA swapping providers and some of them don't even fill the note field, let alone use ARC-2 appropriately.
ARC19 is used because it solves people's problems. dApp providers don't have a problem with their dApps not being categorized; it is we who use their dApps that have such a problem.
As far as I'm concerned, the only way to solve such a problem is the way we in ASA Stats have planned to: by open-sourcing our codebase and so allowing the users of dApps (like we are) to not spend their time on development.
Your motivation is benevolent, and we in ASA Stats would be glad to assist you in defining the standard to be used in our codebase for exposing various dApps. Nevertheless, as I already told you, our business model can't keep up with open-sourcing our codebase before next summer.
But now that I am looking at it, we need to spread this ARC-2; it is not used by every dApp.
Yes, that's the starting point for sure. ARC2 implementation is trivial and still people don't use it. We have managed to bypass that problem for most dApps, but for some of them, like those swap providers, there is no way to identify them.
And that's an enormous problem for our NFT engine in ASA Stats, as an NFT swap is just another sale: if we have a sold price for one of the items in a swap, then we can define the price for the counterpart(s) in the swap. And feeding that information to our future AI NFT price engine will greatly increase the accuracy of our NFT price information.
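For reference, producing an ARC-2 note really is a one-liner: prefix the note with the dApp name and a format marker ("j" for JSON below); the dApp name and payload here are made up for illustration:

```python
import json

# ARC-2 note layout: "<dapp-name>:<format><data>"
payload = {"type": "swap", "offer_asset": 123, "ask_asset": 456}
note = f"myswapdapp:j{json.dumps(payload)}".encode()
# attach `note` as the transaction's note field so analytics
# providers can identify which dApp produced the transaction
```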
The problem @grzracz brought to the table is kind of learning to fly while we still struggle to walk. As I already said, a new ARC won't solve the dApp providers' problem - it will only help the projects who analyze those dApps - so they will avoid using it. The Foundation needs to find a way to motivate the providers to implement a new ARC. And the same goes for ARC2...
What @ipaleka says is actually very interesting. I think what will end up happening is that we're going to go the open-repo route first. By which I mean we'll open up HOW we're decoding state and provide a few examples for the top protocols, and then, once we have that ball rolling, codify it into an ARC with a bit of a "proven" track record, making sure that, as @StephaneBarroso says, it respects the ARC-4 & ARC-22 standards. Otherwise we'll end up with a never-ending set of fragmented standards that individual dApps ignore because they find them too cumbersome.
At the end of the day, if we find that our way of doing things differs too much from the way ASAStats does it, then I'm sure we can reach some compromise that satisfies both of us and achieves the intended goal of having a network standard everyone can abide by.
Bear in mind that we have a completely custom solution in production right now, created immediately after various dApps hit the mainnet, so refactoring is both needed and expected.
How we're going to refactor our code completely depends on the standard we're going to have, and we're not going to refactor a single line of code before the system is defined by the joint efforts of @grzracz, the ARC discussion, etc.
This solution is written in JS/TS, correct? We do have a repo setup with templates & an implementation standard, but our approach would be to have it written in Python.
I guess to make everyone happy we would need to have two files with the exact same logic in two language implementations. Shouldn't be difficult, once it is implemented in one of the two, to translate it to the other.
Yeah, I completely understand and wouldn't expect it to be any different. I think that's the best way to approach it, to make sure it's a standard that has actual sticking power. I would say at the moment the best course is for each of us to build our own solutions and then convene on the rationale behind them, and from that build a standard that satisfies both of us (and others that try to build similar products in the eco, although I don't know of many, MAYBE AlgoGator?), fully aware that some refactoring will be needed and a solution "as language agnostic as possible" would need to be reached.
This solution is written in JS/TS, correct? We do have a repo setup with templates & an implementation standard, but our approach would be to have it written in Python.
I guess to make everyone happy we would need to have two files with the exact same logic in two language implementations. Shouldn't be difficult, once it is implemented in one of the two, to translate it to the other.
Only Python here, Iām afraid.
And the plan is to develop that standardized code in Python too; we simply don't have other resources for now.
Added a brief description to the Discord channel; it might be easier to continue the discussion there.
The worst decision ever regarding ASA Stats was that we didn't start with GitHub from the very beginning; we picked Discord instead, and now a sea of valuable comments and discussions is lost in the void.
So, please, only public spaces for actual discussions. I mean, if we continue in Discord, I'll be copying/pasting valuable comments somewhere in a public space...
As for dApps off the top of my head that would be interested in setting a standard, I would say all the ones that are building portfolio managers (from the directory, those that I believe are still active; sorry if I missed one):
- AlgoGator
- ASAPortfolio
- ASAStats
- Defly
- Headline (AlgoCloud, but unsure if development has halted)
- Upside Finance
- Vestige
And dApps that would have an interest in having their state decoded and that are already on MainNet (again, sorry if I'm missing someone):
- AlgoFi
- Cometa
- FolksFinance
- GARD
- Humble
- Pact
- Tinyman
- Yieldly
ipaleka - Yesterday at 10:03 PM
Hi everybody!
First of all, what are we trying to achieve? This channel is named with the prefix "#arc-", but nothing has gone that way so far. I'm here on behalf of ASA Stats, and we're going to prepare an open-source version of our community-driven system where Algorand users request a new dApp to be implemented in ASA Stats, after which we do the research and implementation. Our governance seats are full of people who contributed that way, and we have confirmed that the path from users to tracker providers is the only way. Whichever standard we bring, tracking providers will always need to do their own implementation. Or they will simply wait for our open-source implementation next summer, or for some other provider's implementation if they publish it first.
Our business model is set to open-source our codebase next summer. We'd be delighted to publish such an open-source codebase based on a standardized format, and we're looking forward to the creation of such a standard.
That bloated code there in that repo is the very first burden in achieving a standard. The second burden is moving the discussion here into a non-public space. The original thread on the Forum is a far better choice if you have decided not to move it to GitHub - algorandfoundation/ARCs: Algorand Requests for Comments.
ipaleka - Yesterday at 10:21 PM
Process
Before submitting a new ARC, please have a look at ARC-0.
Stef | ARC Manager - Today at 10:56 AM
Thanks for coming here.
With this discord channel, I intend to reach people that don't read the forum, so they know that we have an interesting discussion about this subject.
I also added a channel arc-decoding-on-chain-state if you want to talk about this subject on discord.
You guys can still continue the discussion on the forum if you think it fits the use case better.
Stef | ARC Manager - Today at 11:00 AM
You can take the template and fill in each part.
ipaleka - Today at 11:17 AM
Scattered discussion is a problem. The people involved have to have all the data in order to provide valuable outcomes. One solution is to copy/paste valuable comments into a central thread, but who will do that in this case?
ipaleka - Today at 11:18 AM
You can take the template and fill in each part.
What to fill in should be a topic for this discussion. Code should come as the very last, optional step.
Hi there, one of the Defly devs here. I saw this discussion on Discord and thought this looks interesting & useful, thanks for getting this started!
Figuring out how to make sense of the global/local state of a new protocol is certainly a problem that we also faced and it would be great if there was a better way to get this done. We'd be happy to join and help with drafting such a standard.
That said, I like the proposed bottom-up approach (starting a repository of common snippets that people can contribute to and that is eventually standardized) more than a top-down approach (starting from a standard that then likely is not used).
It would be interesting to hear from protocol creators as well. What would we as protocol integrators expect them to provide to facilitate our work?
That said, I like the proposed bottom-up approach (starting a repository of common snippets that people can contribute to and that is eventually standardized) more than a top-down approach (starting from a standard that then likely is not used).
The thing is that you, @grzracz, and a few others of us have insights; we know what is needed and what variations exist in the ecosystem.
Once people start using code that carries missing parts, we'll 100% create more damage than benefit and the developer community won't benefit at all; those people will simply choose to fork an existing open-source solution once it arrives (the ASA Stats repo is going to be open-source in the summer).
This discussion should start, for example, by defining what qualifies a dApp for inclusion in the standard, rather than just presenting "STKE" or "LEND" as something people will use as a starting point.
"What is staking?" simply has to be among the first topics for this discussion.
You guys may use that code from @grzracz, but don't expect that we in ASA Stats will ever use it; we have our own codebase (just like a few of you have your own) and we don't have the need or time to refactor it into something we don't yet know is better or worse than what we already have.