AI x Blockchain

Challenges arising from the AI x Blockchain integration

Integration Challenges

Directly integrating inference into modern blockchains is challenging and infeasible due to infrastructure limitations. See our Medium article for more context.

The same infamous trilemma that plagues the blockchain ecosystem applies to the problem of taking AI on-chain, yielding an AI x Blockchain trilemma:

Decentralization (AI Computational Limitations): Model inference can be computationally expensive. Running on-chain inference at scale on modern blockchain architectures would be infeasible without steeply raising the computational requirements for running a full node, which would be a detriment to decentralization.

Security (AI Inference Integrity): Because AI models are often black boxes, heavy reliance on their inferences can expose protocols to new attack vectors. Relying on oracles to bring model inference results on-chain provides no guarantee of the integrity of the inference and leaves room for potential exploits.

Scalability (Replication of State through AI Inference): Model inference can be computationally expensive, and in traditional proof-of-stake consensus full nodes re-execute transactions to validate blocks and the resulting state roots. Even if native inferencing were implemented, each inference would therefore be re-executed hundreds of thousands of times, which is computationally inefficient and could severely congest the network.
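
To give a rough sense of scale, the Python sketch below estimates the compute for a single LLM inference and the blow-up when every full node re-executes it. The model size, token count, node count, and throughput figures are illustrative assumptions, not measurements of any real network.

```python
# Back-of-the-envelope cost of naive on-chain inference.
# Every figure below is an illustrative assumption, not a benchmark.

PARAMS = 7e9                     # assumed model size: a 7B-parameter LLM
TOKENS = 256                     # assumed tokens generated per inference request
FLOPS_PER_TOKEN = 2 * PARAMS     # rule of thumb: ~2 FLOPs per parameter per token

single_inference_flops = FLOPS_PER_TOKEN * TOKENS       # ~3.6e12 FLOPs

FULL_NODES = 100_000             # assumed number of full nodes replicating execution
replicated_flops = single_inference_flops * FULL_NODES  # ~3.6e17 FLOPs for one inference

CPU_FLOPS_PER_SEC = 1e11         # rough sustained throughput of a consumer CPU

print(f"one inference:            {single_inference_flops:.1e} FLOPs")
print(f"re-executed by all nodes: {replicated_flops:.1e} FLOPs")
print(f"per-node wall time:       {single_inference_flops / CPU_FLOPS_PER_SEC:.0f} s")
```

Under these assumptions, a single inference already costs a commodity full node tens of seconds of compute, and replicated execution multiplies that work by the number of nodes, which is exactly the decentralization and scalability pressure described above.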

The Vanna Labs Solution

Vanna Labs is creating a scalable and secure AI execution layer that resolves the challenges above.

Scalability - The Vanna Network scales through powerful inference nodes featuring hardware acceleration designed for running inference.

Decentralization - The Vanna Network achieves decentralization through Validation-Computation Separation (VCS). Hardware requirements for validator nodes remain low because the computationally intensive inference is performed by inference nodes; validator nodes focus solely on validating the proofs that inference nodes generate.
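
As a minimal sketch of that separation (the class and method names here are hypothetical, not the Vanna Network's actual interface), the inference node performs the expensive model execution and emits a proof, while the validator only checks the proof:

```python
from dataclasses import dataclass

# Hypothetical sketch of Validation-Computation Separation (VCS).
# Names, signatures, and stub bodies are illustrative placeholders.

@dataclass
class InferenceResult:
    output: bytes   # model output to be posted on-chain
    proof: bytes    # proof/attestation that the output is correct


class InferenceNode:
    """Heavyweight role: runs the model on accelerated hardware."""

    def run(self, model_id: str, inputs: bytes) -> InferenceResult:
        output = self._run_model(model_id, inputs)      # expensive: full model forward pass
        proof = self._prove(model_id, inputs, output)   # zk proof, fraud-proof data, or TEE attestation
        return InferenceResult(output, proof)

    def _run_model(self, model_id: str, inputs: bytes) -> bytes:
        return b"placeholder-output"                    # stand-in for real inference

    def _prove(self, model_id: str, inputs: bytes, output: bytes) -> bytes:
        return b"placeholder-proof"                     # stand-in for a real proof system


class ValidatorNode:
    """Lightweight role: never runs the model, only checks the artifact."""

    def validate(self, model_id: str, inputs: bytes, result: InferenceResult) -> bool:
        # A real verifier would bind the proof to the model, inputs, and output;
        # the point is that this check is cheap regardless of model size.
        return result.proof == b"placeholder-proof"


# Usage: one node does the heavy lifting, many validators do cheap checks.
result = InferenceNode().run("example-model", b"prompt bytes")
assert ValidatorNode().validate("example-model", b"prompt bytes", result)
```

The key property is that the validator's work is proof checking rather than model execution, so validator hardware requirements stay low even as inference nodes adopt heavier accelerators; in optimistic modes a disputed inference may still need re-execution, but the common path remains cheap.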

Security - The Vanna Network achieves secure computation through a variety of cryptographic schemes that validate the computation done by inference nodes. For more information on these inference modes, see:

  • 🏗️Architecture
  • Validation-Computation Separation (VCS)
  • ⚙️Inference Modes
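
As a purely hypothetical sketch of how those modes differ on the verification side (every function below is a placeholder, not Vanna's API, and the one-line characterizations are a loose summary; see the Inference Modes page for the actual mechanisms), the same accept-or-reject decision can be dispatched on the chosen mode:

```python
from enum import Enum, auto

# Hypothetical dispatch over verification schemes; all checks are placeholders.

class InferenceMode(Enum):
    VANILLA = auto()   # no cryptographic check; trust the inference node
    ZKML = auto()      # succinct zero-knowledge proof of the model execution
    ZKFP = auto()      # optimistic acceptance backed by zk fraud proofs
    OPML = auto()      # optimistic acceptance backed by interactive fraud proofs
    TEE = auto()       # attestation from a trusted execution environment

def zk_proof_is_valid(output: bytes, proof: bytes) -> bool:
    return bool(proof)             # placeholder for a zk verifier

def fraud_proof_submitted(output: bytes) -> bool:
    return False                   # placeholder for the dispute-window check

def attestation_is_valid(attestation: bytes) -> bool:
    return bool(attestation)       # placeholder for TEE attestation checking

def accept(mode: InferenceMode, output: bytes, artifact: bytes) -> bool:
    if mode is InferenceMode.VANILLA:
        return True                                  # integrity rests on node honesty
    if mode is InferenceMode.ZKML:
        return zk_proof_is_valid(output, artifact)
    if mode in (InferenceMode.ZKFP, InferenceMode.OPML):
        return not fraud_proof_submitted(output)     # challengeable during the window
    if mode is InferenceMode.TEE:
        return attestation_is_valid(artifact)
    return False

print(accept(InferenceMode.ZKML, b"model output", b"proof bytes"))  # True under the stubs
```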