
The AI One-Stop-Shop

Vanna is the one-stop shop for AI.


Last updated 1 year ago

Vanna is your one-stop shop for on-chain AI and ML inference:

  1. Model Selection/Hosting - Browse models on the VannaNet UI and choose one that suits your use case, or upload your own with one click.

  2. Data Collection - Use oracle contracts on Vanna, or identify smart contracts on other chains that can be accessed through Vanna's interchain queries. Smart-contract composability lets you combine AI models like building blocks.

  3. Data Processing - Use the precompiles available on the Vanna blockchain to preprocess the collected raw data.

  4. Inference Execution/Validation - Execution and validation of inference are handled by Vanna's infrastructure.

  5. Data Provenance - Results can be published cross-chain, and the results plus any artifacts are batch-posted to the data availability (DA) layer.
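The five steps above can be sketched as a single pipeline. This is a minimal illustration only: `VannaClient`, its methods, and the model catalog are hypothetical stand-ins, not an actual Vanna SDK.

```python
# Hypothetical sketch of the five-step inference workflow.
# Every name here (VannaClient, model IDs, method signatures) is
# an illustrative assumption, not a real Vanna API.
from dataclasses import dataclass, field


@dataclass
class InferenceResult:
    model_id: str
    output: list
    artifacts: dict = field(default_factory=dict)


class VannaClient:
    """Hypothetical client mirroring the workflow steps above."""

    def select_model(self, use_case: str) -> str:
        # Step 1: browse/choose a model (stubbed as a lookup table).
        catalog = {"price-prediction": "model-42", "sentiment": "model-7"}
        return catalog[use_case]

    def collect_data(self, oracle_feeds: list) -> list:
        # Step 2: pull raw data from oracle contracts / interchain queries
        # (stubbed here as plain callables standing in for feeds).
        return [feed() for feed in oracle_feeds]

    def preprocess(self, raw: list) -> list:
        # Step 3: preprocess via precompiles (stubbed as max-scaling).
        top = max(raw)
        return [x / top for x in raw]

    def run_inference(self, model_id: str, features: list) -> InferenceResult:
        # Step 4: execution + validation happen on Vanna's infrastructure;
        # a placeholder computation stands in for the model output.
        output = [sum(features) / len(features)]
        return InferenceResult(model_id, output)

    def publish(self, result: InferenceResult) -> dict:
        # Step 5: publish cross-chain and batch-post results + artifacts
        # to the DA layer (stubbed as a receipt dict).
        return {"model": result.model_id,
                "output": result.output,
                "posted_to_da": True}


client = VannaClient()
model_id = client.select_model("price-prediction")
raw = client.collect_data([lambda: 100.0, lambda: 250.0])
features = client.preprocess(raw)
receipt = client.publish(client.run_inference(model_id, features))
print(receipt)
```

Each method corresponds to one numbered step, so a consumer integration only needs to swap the stubbed bodies for real oracle reads, precompile calls, and inference submissions.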

Below is a more detailed diagram of an end-to-end inference consumer workflow:
