Supported LLMs

Valence supports a number of capable and versatile LLMs out of the box. You can use any of these models with the runLlm function defined in the Inference API (see the sketch after the table below).

You can find the list below:

| Model               | Model ID                            |
| ------------------- | ----------------------------------- |
| Llama-3-8B-Instruct | meta-llama/Meta-Llama-3-8B-Instruct |
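
As a quick illustration, here is a minimal sketch of a contract calling runLlm with the Model ID from the table. The IVannaInference interface name, the runLlm signature, and the deployment address are assumptions for illustration only; the actual interface is defined in the Inference API page.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical interface; the real runLlm definition lives in the
// Inference API page of these docs.
interface IVannaInference {
    // Assumed signature: a Model ID from the table above plus a prompt,
    // returning the model's completion as a string.
    function runLlm(string calldata modelId, string calldata prompt)
        external
        returns (string memory);
}

contract LlmExample {
    IVannaInference public immutable inference;

    constructor(address inferenceAddress) {
        inference = IVannaInference(inferenceAddress);
    }

    function ask(string calldata prompt) external returns (string memory) {
        // Pass the Model ID exactly as it appears in the table above.
        return inference.runLlm("meta-llama/Meta-Llama-3-8B-Instruct", prompt);
    }
}
```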
