Computational Verifiability

Importance of computational verifiability and integrity

As AI becomes increasingly ubiquitous, computational verifiability becomes increasingly important for ensuring honest behavior. Consider the following scenario:

"A user types in a query on ChatGPT and submits it, the query is sent as an API request to OpenAI's servers where OpenAI inferences their GPT model with the query, then returns the result back to the client (user) where the result is displayed on the site"

OpenAI claims that it runs the query through GPT-4, but the user has no way to confirm that the correct query, or more importantly, the correct model was used. The user is completely trusting the company to do the right thing. This is dangerous because running inference on large AI models is extremely expensive, so it is entirely possible that companies could bait-and-switch, quietly substituting cheaper or smaller models to cut costs. After all, who would know?

That's why Vanna Labs is developing cryptographic schemes as enshrined primitives on the network that give inference consumers the option to have their inference requests cryptographically secured. For more information about Vanna's security models around inference, see:

⚙️ Inference Modes
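To make the model bait-and-switch concrete, here is a minimal, hypothetical sketch of commitment-based model attestation in Python. Everything in it (the stand-in weight bytes, the `commit` helper) is illustrative, not Vanna's actual interface; a real zkML proof goes further by proving the output was produced by the committed weights, rather than trusting a self-reported digest.

```python
import hashlib

def commit(data: bytes) -> str:
    """SHA-256 commitment over a byte string (e.g. serialized model weights)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical setup: the network records a commitment to the model the
# provider promised to serve (stand-in bytes here, real weights in practice).
promised_weights = b"gpt-4-weights"
onchain_commitment = commit(promised_weights)

# At inference time the provider attests to the weights it actually loaded.
served_weights = b"gpt-3.5-weights"  # a bait-and-switch
attested_commitment = commit(served_weights)

# Anyone can compare the two commitments; a mismatch exposes the swap.
assert attested_commitment != onchain_commitment
```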

In addition to cryptographic proofs that are validated on the Vanna Network, the inference results and artifacts will all be posted to a data availability layer, where anyone can inspect or verify them.
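As a rough illustration of why posting artifacts helps, the sketch below shows one plausible shape for a posted inference record and how a third party could recompute its digest. The record fields and `artifact_digest` function are assumptions for this example; the actual artifact format is not specified here.

```python
import hashlib
import json

def artifact_digest(query: str, output: str, model_commitment: str) -> str:
    """Digest binding the input, output, and model commitment together,
    as one plausible shape for a DA-layer inference record."""
    record = json.dumps(
        {"query": query, "output": output, "model": model_commitment},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(record).hexdigest()

# Hypothetical record as posted to the data availability layer.
posted = {
    "query": "What is 2 + 2?",
    "output": "4",
    "model": "c0ffee...",  # model weight commitment
}
posted["digest"] = artifact_digest(posted["query"], posted["output"], posted["model"])

# Any third party can recompute the digest and confirm the record is intact;
# tampering with the query, output, or model field would change the digest.
assert posted["digest"] == artifact_digest(posted["query"], posted["output"], posted["model"])
```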

Computational verifiability matters more as the stakes of an inference result grow.

What if a fintech company uses a cheaper model and produces inaccurate risk assessments? What if a healthcare AI company uses a cheaper model and delivers incorrect diagnoses? By enshrining zkML as a network primitive, the Vanna Network helps guarantee computational verifiability and protect against such scenarios.
