A Turing-complete protocol can perform any computation or data manipulation. Ethereum is Turing complete; Algorand is not. Does this matter? Is it a strike against Algorand, or is Algorand Turing “complete enough” to perform all envisioned DeFi use cases - stablecoins, CBDCs, tokenization, etc.? Is Turing completeness an advantage for Ethereum, or does it mean Ethereum is bloatware?
@markmc, Ethereum and Algorand were designed with very different approaches, and hence ended up with very different solutions.
The Ethereum virtual machine executes EVM bytecode as long as you’ve provided the proper amount of gas (fee). So while it’s true that it’s Turing complete, some types of programs can be inherently too expensive to execute. Current Ethereum gas prices are proof of that.
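To illustrate the point, here is a toy sketch in Python (not the real EVM; the per-step cost is made up): a program that would otherwise loop forever still halts, because every step spends gas.

```python
# Toy illustration of gas metering (not the real EVM).
# The loop has no built-in bound, yet it always halts: each
# iteration is charged a hypothetical STEP_COST of gas.

def run_with_gas(gas: int) -> tuple[int, str]:
    """Run an unbounded counting loop under a gas budget."""
    STEP_COST = 3              # made-up gas cost per iteration
    counter = 0
    while True:                # no loop bound in the program itself
        if gas < STEP_COST:
            return counter, "out of gas"   # execution halts here
        gas -= STEP_COST
        counter += 1

print(run_with_gas(gas=30))    # → (10, 'out of gas')
```

The flip side of this design is the one mentioned above: a computation that needs many steps needs a proportionally large (and at today’s prices, expensive) gas budget.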
Algorand took a very different approach. The core contract language, TEAL, is not Turing complete. Nevertheless, it still allows you to implement (almost) any computation needed for DeFi applications. That allows Algorand to operate at high speed, providing a low-latency L1 contracting solution. For a few examples, look here - Algorand Developer Portal.
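A rough sketch of why a language without backward branches always terminates (illustrative Python, not actual TEAL opcodes): if branches can only jump forward, the program counter strictly increases, so a program of N instructions executes at most N steps.

```python
# Sketch of a forward-only stack machine. Opcodes ("push", "add",
# "bz") and the programs below are illustrative, not real TEAL.
# Because every branch target must lie ahead of the current
# instruction, execution can never revisit an instruction.

def eval_forward_only(program):
    stack = []
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "bz":                 # branch-if-zero, forward only
            target = args[0]
            assert target > pc, "backward branches are rejected"
            if stack.pop() == 0:
                pc = target
                continue
        pc += 1                          # pc only ever moves forward
    return stack

print(eval_forward_only([("push", 2), ("push", 3), ("add",)]))
# → [5], in a number of steps bounded by the program length

print(eval_forward_only([("push", 0), ("bz", 3), ("push", 99), ("push", 1)]))
# → [1], the "push 99" was skipped by the forward branch
```

Termination is guaranteed by construction, so there is no need for a gas auction to stop runaway programs.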
That L1 solution would meet the needs of 99% of users / applications. For the remainder, there are two different approaches being explored (no official products around these yet): one is to create a co-chain that would exchange tokens with the mainnet. This would allow the co-chain to run slower than the mainnet and perform heavy computations. Another is to implement a layer 2 (L2) application that would interact with some L1 primitives to ensure operational atomicity.
Last, there are companies that attempt to bridge the gap by providing a smooth transition between building for Ethereum and building for Algorand. See Reach - https://algorand.foundation/news/reach-development
Thank you @tsachi. Maybe I’m stating the obvious, but the fact that 99% of use cases and applications can be achieved in Layer 1 smart contracts (with no gas equivalent) is a massive advantage for Algorand over all other programmable blockchains. The cost to Ethereum of being Turing complete - and therefore able to address the remaining 1% of applications - is steep in terms of complexity. Moreover, isn’t Ethereum doomed to be a victim of its own success? As the price of ETH climbs, the full capability of the EVM will become cost-prohibitive. And this will continue to be the case even under Ethereum 2.0, when it becomes PoS. If I understand Algorand correctly, it seems to dodge both bullets (unnecessary complexity & costly contract execution) elegantly, and only at the cost of being unable to address a tiny percentage of applications.
@tsachi, I completely agree with Algorand’s approach of being non-Turing-complete by design. Some other DSLs have also followed this approach, like DAML & Google CEL. Having said that, I see that in the post-2.6.0 master branch, where the TEAL 4 code resides, it seems to me that TEAL 4 is Turing complete.
My reason for this is the following constant in the code:

```go
const backBranchEnabledVersion = 4
```

The branch opcodes & assembler programs also check this and allow back branching, thus making it Turing complete. There are opcodes added for function calls as well.
My question is: what made Silvio & team rethink the Turing completeness aspects? Is there any blog or doc you can point me to? Otherwise, correct me if I’m reading the code out of context.
I don’t know if you noticed, but the code around that implementation is still gated by a consensus release, so it won’t be available for use until a consensus release that includes that feature becomes available.
The goal in implementing this feature (along with a few additional features, some still pending review) was to improve programmability on Algorand. I have a hard time qualifying this as Turing-complete, since a program’s runtime is strictly bounded by a specific cost, and that cost (as of this time) is pretty low.
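To sketch why a cost bound rules out unbounded runtime even once backward branches are allowed (illustrative Python, not the real AVM cost model): every step spends budget, so the total step count can never exceed the budget, loops or not.

```python
# Sketch: a stack machine that permits backward branches (real
# loops), but charges a flat, made-up cost of 1 unit per opcode.
# Opcodes and the sample program are illustrative, not real TEAL.

def eval_with_budget(program, budget):
    pc, stack = 0, []
    while pc < len(program):
        if budget <= 0:
            raise RuntimeError("cost budget exceeded")
        budget -= 1                      # every opcode spends budget
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0]); pc += 1
        elif op == "dec":
            stack.append(stack.pop() - 1); pc += 1
        elif op == "bnz":                # branch-if-nonzero, may go backward
            pc = args[0] if stack[-1] != 0 else pc + 1
    return stack

# A countdown loop: "bnz 1" jumps BACK to the "dec" instruction.
countdown = [("push", 3), ("dec",), ("bnz", 1)]
print(eval_with_budget(countdown, budget=20))   # → [0], loop ran and halted
# eval_with_budget(countdown, budget=4) raises: budget exhausted mid-loop
```

So looping programs become expressible, but the bounded (and cheap) cost budget still guarantees every execution halts, which is why I hesitate to call it Turing-complete in any practical sense.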
From an academic perspective, it might qualify as Turing-complete, but note that this wasn’t the direct intent here. I believe that one of the goals was code reusability, which would allow more efficient programs to be written within the program size constraints.