Computational Verifiability

Importance of computational verifiability and integrity

As AI becomes increasingly ubiquitous, computational verifiability becomes increasingly important for ensuring honest behavior. Consider the following scenario:

"A user types in a query on ChatGPT and submits it, the query is sent as an API request to OpenAI's servers where OpenAI inferences their GPT model with the query, then returns the result back to the client (user) where the result is displayed on the site"

OpenAI claims that they run the query through GPT-4, but the user has no way of knowing whether they used the correct query or, more importantly, the correct model. The user is completely trusting the company to do the right thing. This is dangerous because running inference on large AI models is extremely expensive, so it is entirely possible that companies could bait-and-switch and quietly use cheaper, smaller models to reduce their costs. After all, who would know?

That's why Vanna Labs is developing cryptographic schemes as enshrined primitives on the network that give inference consumers the option to have their inference requests cryptographically secured. For more information about Vanna's security models around inference, check out the ⚙️Inference Modes section.
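
As a rough sketch of what that option could look like from the consumer's side, the Python below shows a node returning a result together with a proof and a commitment to the model it ran, and the consumer checking that commitment against a registry before accepting the result. Every name here (InferenceReceipt, verify_zk_proof, REGISTERED_MODEL_COMMITMENTS) is illustrative rather than part of any Vanna API, and the proof check is a stub standing in for a real zkML verifier.

```python
from dataclasses import dataclass
import hashlib


@dataclass
class InferenceReceipt:
    model_id: str          # model the node claims to have run
    model_commitment: str  # commitment (here, a hash) to the model weights
    input_hash: str        # hash of the submitted query
    output: str            # inference result returned to the consumer
    proof: bytes           # proof that output = model(input)


# Stand-in for an on-chain registry of model weight commitments.
REGISTERED_MODEL_COMMITMENTS = {
    "example-llm": hashlib.sha256(b"example-llm weight bytes").hexdigest(),
}


def verify_zk_proof(proof: bytes, commitment: str, input_hash: str, output: str) -> bool:
    """Placeholder for a real zkML verifier (a succinct proof check)."""
    return len(proof) > 0  # stand-in only; a real verifier checks the proof itself


def consumer_accepts(receipt: InferenceReceipt, query: str) -> bool:
    # 1. The claimed model must match a commitment the consumer trusts.
    expected = REGISTERED_MODEL_COMMITMENTS.get(receipt.model_id)
    if expected is None or receipt.model_commitment != expected:
        return False
    # 2. The proof must be bound to *this* query, not some other input.
    if hashlib.sha256(query.encode()).hexdigest() != receipt.input_hash:
        return False
    # 3. The proof itself must verify against (model, input, output).
    return verify_zk_proof(receipt.proof, receipt.model_commitment,
                           receipt.input_hash, receipt.output)


# Example: accept only if the commitment, binding, and proof all check out.
query = "What is the risk score for account 42?"
receipt = InferenceReceipt(
    model_id="example-llm",
    model_commitment=REGISTERED_MODEL_COMMITMENTS["example-llm"],
    input_hash=hashlib.sha256(query.encode()).hexdigest(),
    output="low risk",
    proof=b"\x01",
)
print(consumer_accepts(receipt, query))  # True
```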

In addition to the cryptographic proofs that are validated on the Vanna Network, inference results and artifacts will all be posted to a data availability layer, where anyone can inspect or verify them.
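
To illustrate how such a posting could be audited, the sketch below recomputes a hash commitment over a posted artifact and compares it with the value recorded when the inference was accepted. The artifact layout and field names are assumptions made for the sake of a self-contained example, and the recorded commitment is simulated locally here rather than read from the chain.

```python
import hashlib
import json


def commitment(artifact: dict) -> str:
    # Deterministic serialization so every auditor hashes identical bytes.
    canonical = json.dumps(artifact, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


# Artifact as it might be posted to the DA layer when an inference is accepted.
posted_artifact = {
    "model_id": "example-llm",
    "input_hash": hashlib.sha256(b"What is the risk score for account 42?").hexdigest(),
    "output": "low risk",
    "proof_id": "0xabc123",
}

# Commitment recorded at posting time (simulated locally in this sketch).
recorded_commitment = commitment(posted_artifact)

# Any third party holding the posted bytes can re-derive and compare it.
fetched_artifact = json.loads(json.dumps(posted_artifact))  # "downloaded" copy
print(commitment(fetched_artifact) == recorded_commitment)  # True
```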

Computational verifiability becomes increasingly important as the stakes of an inference result grow.

What if a fintech company uses a cheaper model and produces inaccurate risk assessments? What if a healthcare AI company uses a cheaper model and returns incorrect diagnoses? The Vanna Network, with zkML enshrined as a primitive, helps guarantee computational verifiability and protect against such scenarios.
