Inference API
Describes the interfaces Valence exposes for using inference in smart contracts.
Inference Precompile
interface IValenceInference {
    // Execution modes for generic model inference: plain (VANILLA) or
    // zero-knowledge verified (ZK).
    enum ModelInferenceMode { VANILLA, ZK }

    // Execution modes for LLM inference: plain (VANILLA) or attested by a
    // trusted execution environment (TEE).
    enum LlmInferenceMode { VANILLA, TEE }

    // Runs a generic model inference request and returns the model output.
    function runModel(
        ModelInferenceMode mode,
        ModelInferenceRequest memory request
    ) external returns (ModelOutput memory);

    // Runs an LLM inference request and returns the LLM response.
    function runLlm(
        LlmInferenceMode mode,
        LlmInferenceRequest memory request
    ) external returns (LlmResponse memory);
}
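The sketch below illustrates how a contract might call the precompile. It is a minimal sketch, not part of the interface above: the import path, the consumer contract, and the precompile address are placeholders, and the request and response structs (LlmInferenceRequest, LlmResponse) are assumed to be declared alongside the interface; their fields are not shown here.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical import path; the interface and the struct types it references
// are assumed to be published by Valence in a single file.
import {IValenceInference, LlmInferenceRequest, LlmResponse} from "./IValenceInference.sol";

contract InferenceConsumer {
    // Placeholder address: the actual precompile address is chain-specific.
    address constant INFERENCE_PRECOMPILE = 0x0000000000000000000000000000000000000100;

    // Forwards a caller-built LLM request to the precompile in TEE mode.
    function askLlm(LlmInferenceRequest memory request)
        external
        returns (LlmResponse memory response)
    {
        response = IValenceInference(INFERENCE_PRECOMPILE).runLlm(
            IValenceInference.LlmInferenceMode.TEE,
            request
        );
    }
}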
LLM Inference Requests
Generic Inference Requests
Simulation Results