The Next Step For Decentralized AI

This article will explore unique use cases for decentralized AI systems powered by AVS.
In the next installments of this series, we will explore technical details and introduce how the Othentic Stack can serve as a framework for building the next wave of decentralized AI solutions.

To learn more about AVS, check out Yarden Shechter’s post.

Introduction

Artificial Intelligence has reached unprecedented sophistication, with large language models and generative AI becoming deeply integrated into everyday services. Companies like OpenAI, Google, and Anthropic have developed powerful AI systems capable of human-like interactions and complex problem-solving, setting the stage for the next evolution in AI infrastructure - one that leverages the unique advantages of blockchain technology.

Blockchain's inherent properties supercharge AI systems and enable new possibilities, most notably:

Self-enforceability and verifiability - enable transparent systems with automated rule enforcement through smart contracts and cryptographic verification. “Code is Law” for the age of AI.

Cryptoeconomic coordination - aligns participant behavior through token mechanisms such as staking, slashing, rewards, and penalties.

Financial interoperability - gives AI agents programmable financial rails to perform autonomous, intelligence-driven actions.

Decentralized governance - ensures democratic control and community-driven standards without central authority.

Shared security protocols (e.g., EigenLayer) accelerate the bootstrapping stage of distributed systems and pave the way for open innovation - developing AI infrastructure and applications on decentralized foundations.

Building decentralized products has traditionally been a complex and resource-heavy process, requiring deep knowledge of low-level infrastructure development. 

In addition, AI systems (e.g., inference networks and federated learning models) are inherently complex and demand substantial development resources.
Developing these systems as decentralized protocols adds another layer of complexity and introduces development barriers.

Builders should focus on what matters most - building novel products.

Enter the Othentic Stack.

Leveraging a library of production-ready components, developers can focus on their core service logic while the stack abstracts away complexities around consensus, operators, networking, messaging, and attestations.

The Othentic Stack expands the design space of decentralized AI, enables efficient development of self-enforceable and verifiable systems, and provides the foundation for a wide range of innovations.

Agent Authorization Network

Demand for autonomous AI agents is rapidly increasing. AI Agents are autonomous programs designed to perform specific tasks on behalf of users, leveraging artificial intelligence and machine learning techniques. These agents operate with minimal human intervention, independently working towards predefined goals. 

AI agents require authentication mechanisms and defined access-control policies to ensure secure and controlled operation. Authentication verifies the agent's identity and permissions, while authorization policies determine what actions the agent can perform. Role-based access control (RBAC) and attribute-based access control (ABAC) are common frameworks for managing agent permissions.
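As a rough illustration, the sketch below contrasts an RBAC check (permissions derived from the agent's role) with an ABAC check (permissions derived from the agent's attributes). The role names, attributes, and policy values are hypothetical and purely illustrative, not part of any existing system.

```python
# Illustrative RBAC vs. ABAC checks for an AI agent. All roles, attributes,
# and thresholds below are placeholders.

ROLE_PERMISSIONS = {
    "trading-agent": {"swap", "read_balance"},
    "reporting-agent": {"read_balance"},
}

def rbac_allows(role: str, action: str) -> bool:
    """RBAC: the action is allowed if the agent's role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(attributes: dict, action: str) -> bool:
    """ABAC: the action is allowed if the agent's attributes satisfy the policy."""
    if action == "swap":
        return attributes.get("kyc_tier", 0) >= 2 and attributes["daily_volume_usd"] < 10_000
    return True

agent = {"role": "trading-agent", "kyc_tier": 2, "daily_volume_usd": 2_500}
print(rbac_allows(agent["role"], "swap") and abac_allows(agent, "swap"))  # True
```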

AI Agents launching on Virtuals, such as GAME, Luna, and AIXBT, are trained to autonomously execute tasks based on predefined policies and can leverage access controls to open up space for new ideas and innovations. The recent Freysa experiment sheds light on how agents can use access control to act as independent financial actors.

As DeFi continues to expand with increasingly complex protocols and opportunities, navigating this landscape manually becomes practically impossible. DeFi Agents will become essential for anyone looking to participate effectively in the ecosystem. These intelligent agents will revolutionize how we interact with DeFi by automatically analyzing prices and protocol states to find optimal strategies. From yield farming optimization and risk management to airdrop hunting, portfolio rebalancing, and cross-chain operations, these agents will handle complex operations with an efficiency that human traders simply cannot match.

One key advantage of building these agents on blockchains is financial interoperability. Crypto’s programmable nature enables seamless execution of conditional trades and automated strategies—features that are often limited in TradFi.

Need For This

In the AI agent market, there is a need for an authorization and security layer that lets users delegate wallet access to agents in a secure yet flexible way. Existing approaches such as Smart Accounts and Account Abstraction can achieve this, but they are limited in functionality, require users to move their assets to a new account, confine them to the nascent Smart Account ecosystem, and introduce challenges in cross-chain interactions. This makes them a less-than-ideal solution.

Hence, we propose an alternative: an off-chain authorization network backed by economic security, designed to safeguard users’ private keys and execute agentic transactions on users’ behalf.

Potential Architecture

The AVS network functions as a multi-party computation (MPC) authentication layer by splitting keys into parts and distributing them across different nodes (a process known as key sharding) to maintain decentralization. Each node contributes a partial signature using its key share, and the partial signatures are combined into a complete transaction signature. The nodes can be configured to approve an AI agent's task only when specific user-defined conditions are met.
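To make the sharding idea concrete, here is a toy Python sketch of an n-of-n XOR split of a private key. A production network would instead use a proper threshold signature scheme (e.g., threshold ECDSA or BLS) so that the full key is never reconstructed in one place; this sketch only illustrates splitting and recombination.

```python
# Toy n-of-n key sharding via XOR secret sharing. For illustration only;
# real MPC networks use threshold signatures so the key is never rebuilt.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; all n shares are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)      # the user's private key
shares = split_key(key, n=5)       # one share per AVS operator
assert reconstruct(shares) == key  # only the full set of shares recovers it
```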


This network enables:

  1. Decentralized and economically secure storage of a user’s assets.
  2. Secure wallet delegation to AI Agents for autonomous tasks.
  3. Setting preset conditions for task execution to avoid AI hallucinations.

We can create these AI-delegated EOAs in one of two ways:

  1. Allow the user to shard an existing private key they own and distribute the shares across the AVS network; or:
  2. Leverage MPC technology to request a fresh EOA address from the AVS network.

Once users are assigned an EOA managed by the AVS network, they can delegate access to an AI agent and set specific rules for its operation. These ground rules ensure the AI Agent acts within boundaries and pre-defined policies, which can include parameters such as rate limits, spending caps, and contract whitelisting. Imagine an agent tasked with executing a flash loan by identifying arbitrage opportunities across multiple DEXs: the agent finds the optimal path to generate the most profit, while the user sets preferences such as whitelisted DEXs, limits on the amount of funds used, and more.
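A minimal sketch of the kind of per-node policy check that could gate an agent's transaction before a node contributes its partial signature. The contract addresses, limits, and field names are placeholders, not part of any existing Othentic interface.

```python
# Hypothetical per-node policy check run before co-signing an agent transaction.
# Addresses, limits, and field names are illustrative placeholders.
import time

POLICY = {
    "whitelisted_contracts": {"0xUniswapRouter", "0xAaveV3Pool"},  # placeholder addresses
    "max_spend_wei": 10**18,       # 1 ETH per transaction
    "max_tx_per_hour": 20,
}

recent_tx_timestamps: list[float] = []

def approve(tx: dict) -> bool:
    """Return True only if the agent's transaction satisfies the user's policy."""
    now = time.time()
    # Drop timestamps older than one hour, then apply the rate limit.
    recent_tx_timestamps[:] = [t for t in recent_tx_timestamps if now - t < 3600]
    if tx["to"] not in POLICY["whitelisted_contracts"]:
        return False
    if tx["value_wei"] > POLICY["max_spend_wei"]:
        return False
    if len(recent_tx_timestamps) >= POLICY["max_tx_per_hour"]:
        return False
    recent_tx_timestamps.append(now)
    return True

print(approve({"to": "0xUniswapRouter", "value_wei": 5 * 10**17}))  # True
```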

Inference Network

AI inference is the process of using a trained AI model to make predictions or decisions based on new data. It’s essentially the "execution" phase of AI, where the model applies its learned knowledge to solve real-world tasks.

There are three major problems with executing inference in a centralized manner:

  1. Lack of verifiability - results may be manipulated, censored, or generated using a less accurate model.
  2. Risk of data misuse - users must hand their raw data to a single provider.
  3. A single point of failure - one breach jeopardizes all the datasets.

Potential Architecture

We need a solution that ensures users’ data privacy while maintaining the verifiability of the models being executed. Trusted Execution Environments (TEEs) provide an ideal solution: they create isolated, secure environments where sensitive computations take place, preventing exposure of data or model parameters to other participants, even in potentially untrusted environments. With the recent release of NVIDIA Confidential Computing, TEEs became available on high-end GPUs, making it possible to run LLMs in a privacy-preserving way that protects the data from any party, including the node operator.

In the proposed AI inference AVS, when an operator executes an inference request, the network initializes the TEE environment and generates an asymmetric key pair (public and private keys) for secure communication. Users encrypt their data with the public key, and only the enclave can decrypt it.
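The flow might look like the Python sketch below, using PyNaCl sealed boxes; in the real system the key pair would be generated inside the enclave and the private key would never leave it.

```python
# Sketch of the encryption flow with PyNaCl sealed boxes. In practice the
# keypair is generated inside the TEE and the private key never leaves it.
from nacl.public import PrivateKey, SealedBox

# Inside the enclave: generate the keypair and publish the public key.
enclave_key = PrivateKey.generate()
published_public_key = enclave_key.public_key

# On the user's side: encrypt the inference input against the enclave's public key.
ciphertext = SealedBox(published_public_key).encrypt(b"sensitive inference input")

# Back inside the enclave: only the private key can recover the plaintext.
plaintext = SealedBox(enclave_key).decrypt(ciphertext)
assert plaintext == b"sensitive inference input"
```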

TEEs ensure that proprietary models and personal data remain confidential while verifying that no unauthorized changes or malicious modifications have been made to the models. In addition, TEEs ensure that the user data cannot be accessed by any party, including the node operators themselves.

TEE attestations can be verified on-chain, allowing users to independently verify that node operators are running trusted software on verified hardware. However, ensuring node liveness is a challenge in this model. It can be addressed by a sufficiently large, decentralized pool of operators backed by cryptoeconomic guarantees derived from the AVS network. Operators run models within secure TEE environments, go through leader elections, and continuously generate proofs of execution, maintaining network liveness and reliability.
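A hypothetical sketch of that verification logic: an allow-list of enclave measurements (which could live in an on-chain registry) plus a check that the attestation report commits to the public key users encrypt against. A real verifier would also validate the hardware vendor's signature chain; everything here is a placeholder.

```python
# Hypothetical attestation check. The registry, report fields, and digests are
# placeholders; real verification also validates the vendor signature chain.
import hashlib

ALLOWED_MEASUREMENTS = {"measurement-digest-model-v1", "measurement-digest-model-v2"}

def verify_attestation(report: dict) -> bool:
    """Accept a report only if its measurement is allow-listed and it commits
    to the enclave public key that users will encrypt their data against."""
    measurement_ok = report["measurement"] in ALLOWED_MEASUREMENTS
    key_binding_ok = report["report_data"] == hashlib.sha256(report["public_key"]).hexdigest()
    return measurement_ok and key_binding_ok

report = {
    "measurement": "measurement-digest-model-v1",
    "public_key": b"enclave-public-key-bytes",
    "report_data": hashlib.sha256(b"enclave-public-key-bytes").hexdigest(),
}
print(verify_attestation(report))  # True
```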


Building the network as an AVS provides two key improvements:

  1. Privacy-preserving AI inference: users can process sensitive data and receive personalized insights while maintaining data security, and businesses can develop custom AI solutions without compromising their intellectual property.
  2. Verifiable outputs: the AVS strengthens reliability, making these applications suitable for critical sectors like healthcare and finance.

Several teams are working on providing verifiable and private inference in this space, including Secret Network, Atoma, Ritual, Modulus, and EZKL, leveraging zkML, opML, and TEEs.

Model Training Network

This section builds on ideas shared by Altan and Advait in their tweets about Federated Learning AVS design. A warm acknowledgment to them for leading the way.

In 2017, Google introduced Federated Learning (FL) for efficient AI model training: devices train models locally on their own data and send only local model updates, rather than raw data, to a central server. The server aggregates these updates into a global model, coordinates training, and distributes the improved model back to the devices for further refinement. This approach enhances privacy and leverages centralized control for model optimization while using fewer resources.
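For intuition, a minimal federated averaging (FedAvg) step looks roughly like the sketch below, where each client's update is weighted by the number of samples it trained on. Plain Python lists stand in for real model weight tensors.

```python
# Minimal FedAvg step: combine client updates into a global model, weighted by
# how many samples each client trained on. Lists stand in for weight tensors.

def fed_avg(client_updates: list[tuple[list[float], int]]) -> list[float]:
    """client_updates: (model_weights, num_local_samples) per client."""
    total_samples = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_weights = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            global_weights[i] += w * n / total_samples
    return global_weights

updates = [([0.1, 0.2], 100), ([0.3, 0.0], 300)]
print(fed_avg(updates))  # [0.25, 0.05]
```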


The issue with the current FL architecture is the centralized aggregator, which aggregates local model updates from individual devices and distributes the updated model back to them. This centralized aggregation introduces risks such as:

  1. A single point of failure
  2. No liveness guarantees
  3. Privacy vulnerabilities
  4. Susceptibility to data poisoning attacks - an adversary can guide the learning process in their desired direction.

Potential Architecture

A decentralized network architecture for the aggregation process can solve the issues listed above. Designing this network as an AVS introduces decentralized model aggregation, peer-to-peer consensus for model verification, and cryptoeconomic security to ensure the integrity and liveness of participating nodes.

Since aggregation (typically a weighted average of the model updates) is comparatively lightweight, all network operators can fetch the local model updates and run the aggregation independently. The network can then reach consensus on the aggregation result, and nodes that submit dishonest results can be penalized or slashed accordingly.
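One simple way the network could reach that consensus, sketched below: every operator aggregates locally, publishes a hash commitment of its result, the majority commitment is accepted, and operators that deviate become candidates for slashing. The operator names and slashing rule are illustrative only.

```python
# Illustrative consensus on the aggregation result via hash commitments.
# Operator names and the slashing rule are placeholders.
import hashlib
from collections import Counter

def commitment(weights: list[float]) -> str:
    return hashlib.sha256(repr([round(w, 8) for w in weights]).encode()).hexdigest()

operator_results = {
    "op-1": [0.25, 0.05],
    "op-2": [0.25, 0.05],
    "op-3": [0.99, 0.99],   # dishonest or faulty aggregation
}

commitments = {op: commitment(w) for op, w in operator_results.items()}
majority_hash, _ = Counter(commitments.values()).most_common(1)[0]

accepted = [op for op, h in commitments.items() if h == majority_hash]
slashable = [op for op, h in commitments.items() if h != majority_hash]
print(accepted, slashable)  # ['op-1', 'op-2'] ['op-3']
```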

Data Labeling Network

Data labeling is the process of annotating raw data (e.g., text, images, videos, or audio) with labels to form datasets usable for training AI models.

This step is critical for creating accurate and reliable AI systems but is often plagued by inconsistencies, bias, and a lack of transparency in the labeling process.

A decentralized, verifiable architecture backed by cryptoeconomic ground rules offers a transformative solution to these challenges. Distributing the labeling process across a network of participants, with economic incentives for coordination, makes it tamper-resistant and transparent and enables traceability for every modification or addition.

Decentralization prevents single points of failure and democratizes access to labeling tasks, while the inherent verifiability guarantees data integrity. These features collectively enhance trust and accountability, making an AVS an ideal foundation for improving the quality and scalability of AI data labeling.

Potential Architecture

To ensure accurate, high-quality, and tamper-resistant data labeling, the network must keep biased data out by distributing the labeling process across a network of participants and penalizing harmful contributors.

The AVS functions as a distributed labeling network: it organizes data into specialized clusters (e.g., DePIN, automotive, medical) based on specific use cases and efficiently monitors individual data contributions. Each cluster owner controls data contributors' access based on quality standards and economic ground rules while defining the required data format, attributes, and benchmarks.

Once the dataset is used for inference, an attribution summary is recorded on-chain to credit the original contributors. The network then distributes rewards based on data contributions, with individual cluster owners setting their own reward structures. 
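Reward distribution from the attribution summary could be as simple as a pro-rata split, as in the sketch below; the contributor addresses, attribution weights, and reward pool are placeholders.

```python
# Illustrative pro-rata reward split from an attribution summary.
# Addresses, weights, and the reward pool are placeholders.

def distribute_rewards(attribution: dict[str, float], reward_pool: float) -> dict[str, float]:
    """attribution maps contributor address -> share of the data used."""
    total = sum(attribution.values())
    return {addr: reward_pool * share / total for addr, share in attribution.items()}

attribution_summary = {"0xLabelerA": 120.0, "0xLabelerB": 60.0, "0xClusterOwner": 20.0}
print(distribute_rewards(attribution_summary, reward_pool=1_000.0))
# {'0xLabelerA': 600.0, '0xLabelerB': 300.0, '0xClusterOwner': 100.0}
```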

The decentralized system ensures proper economic incentives while enforcing labeling verification. 

A notable example is OpenLedger, which is pioneering AI-ready datasets, verification, and labeling.

Conclusion

For teams working on building AI systems, this article serves as a good entry point for understanding the value proposition of AVSs. By leveraging programmable validation of computation and cryptoeconomic mechanisms, AVSs pave the way for a new generation of AI systems.

This article is the first in a series exploring how the Othentic Stack can serve as a framework for building the next wave of decentralized AI solutions, with subsequent articles diving deeper into technicalities.

Meanwhile, at Othentic, we're excited by the convergence of open innovation and AI systems. If you're interested in building decentralized AI systems or want to discuss this exciting future, reach out to us by filling out this Form.

Special thanks to Altan Tutar, Advait Jayant, and Alex Zaidelson for feedback and review.

About Othentic

Othentic Stack orchestrates the development of low-level AVS infrastructure.

Imagine a canvas with all the essential building blocks to spin up an AVS, highly configurable and versatile in all aspects of AVS development, while underlying complexities around consensus, operators, networking, messaging, and attestations are abstracted away.

Docs - https://docs.othentic.xyz 

Website - https://www.othentic.xyz/ 

Twitter - https://x.com/0xOthentic 

Discord - https://discord.com/invite/za9tpCdSzs