The combination of machine learning and artificial intelligence with zero-knowledge proofs is unlocking powerful tools for web3. Here’s how Spectral is approaching zero-knowledge machine learning (zkML) and where we are today.
The Oracle Problem has long been a serious risk in web3. How can you trust that a feed of price information on a decentralized application hasn’t been corrupted? Or, to use an example a little closer to home, how can you trust that Spectral is really using on-chain information and a sophisticated machine-learning algorithm without being able to see the details? In the past, you couldn’t. But a series of technological breakthroughs involving a cryptographic technique known as zero-knowledge proofs has made it possible to verify, quickly and cheaply, that a complex model has been applied to blockchain data.
Zero Knowledge Proofs and Rollups
Imagine being the only person capable of seeing color in a colorblind universe. If you wanted to demonstrate that you could distinguish between red and blue, you could have someone test your ability by marking one of two otherwise indistinguishable pieces of paper and testing whether or not you could identify the marked piece correctly. Choosing correctly between the two might be dismissed as a coincidence the first time, but eventually, as they tested you more and more, your accuracy would be impossible to deny.
On a more technical level, zero-knowledge proofs (ZKPs) can be thought of as similar to passwords. A password is turned into a hash and stored securely in a database. It’s easy to compute a hash Y from a password, but difficult to run that computation in reverse and recover the password from Y. In more mathematical terms: if I know a password P such that hash(P) = Y, I can generate a proof and send it to a verifier, convincing them that I know P without ever revealing it. The mathematical statement being proven is encoded in what’s known as a zero-knowledge circuit.
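To make the hash relation concrete, here is a minimal Python sketch using only the standard library. Note that this is an ordinary hash commitment, not a zero-knowledge proof: the verifier here learns the password itself, whereas a real ZK circuit lets them check that hash(P) = Y without ever seeing P.

```python
import hashlib

def commit(password: str) -> str:
    # Publish only Y = hash(password); the password itself stays private.
    return hashlib.sha256(password.encode()).hexdigest()

def verify(claimed_password: str, public_hash: str) -> bool:
    # A verifier checks a claim by recomputing the hash. (In a real ZKP,
    # this check happens inside a circuit, so the password is never
    # revealed to the verifier.)
    return commit(claimed_password) == public_hash

Y = commit("correct horse battery staple")
```

Reversing commit(), i.e. recovering the password from Y, is computationally infeasible; that one-way asymmetry is exactly what the zero-knowledge circuit exploits.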
Creating a zero-knowledge circuit proving knowledge of a password is relatively easy, but proving a machine learning algorithm has been run on a set of data, for example, proving that one of our MACRO Scores has been generated from our machine learning model, would require a very complicated circuit and a lot of computational power (and gas).
Using ZKPs, a number of companies like StarkWare and Matter Labs have begun batching numerous Ethereum transactions together, compressing the on-chain activity of dozens or even hundreds of transactions into a single (gas-paying) on-chain transaction, in a construction known as a zk-rollup.
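The compression at the heart of batching can be illustrated with a Merkle tree, which folds any number of transactions into a single fixed-size commitment. This is a stdlib sketch of the batching idea only; an actual zk-rollup also posts a validity proof for the batched state transition.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[str]) -> bytes:
    # Hash each transaction, then repeatedly pair-and-hash until one
    # 32-byte digest commits to the entire batch.
    level = [sha(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

batch = [f"tx {i}: transfer 1 ETH" for i in range(200)]
root = merkle_root(batch)  # 200 transactions, one 32-byte digest
```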
A recent article by Aligned estimated that web3 services alone would require 90 billion zero-knowledge proofs by 2030, delivered at 83,000 transactions per second, creating a market worth approximately $10 billion.
There were once practical limits to what zk-rollups could do: many required a signing ceremony, custom software, and serious programming expertise. A series of recent technological developments by Modulus Labs and EZKL amount to a breakthrough: practical zkML is now commercially feasible.
Machine learning is being used to decide which ads you see on YouTube and the order in which your favorite social network shows your feed. Offline, it answers even more impactful questions: whether or not you should be given a mortgage, how much your credit score should change, and, increasingly, whether you deserve bail or which medical services your insurer will cover.
The concept is simple. Given basic instructions, an algorithm can be created that looks for patterns or makes predictions from a set of data. There are a variety of techniques for training models (most often supervised, unsupervised, or reinforcement learning), but they all involve investing compute to produce an ever more precisely honed set of weights, and models that use those weights to draw inferences from data.
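As a toy illustration of that loop (a two-parameter linear fit, nothing like a production model), here is a pure-Python gradient descent that spends compute honing weights and then uses them to make an inference:

```python
# Supervised learning in miniature: learn y = w*x + b from labeled examples.
data = [(float(x), 2.0 * x + 1.0) for x in range(10)]  # ground truth: w=2, b=1

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):  # compute invested in honing the weights
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

inference = w * 20 + b  # prediction on an unseen input, close to 41
```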
For example, Spectral’s on-chain credit score, the MACRO Score, is generated from a wallet’s on-chain data by a sophisticated and constantly evolving machine learning model, detailed in A Deeper Look at the MACRO Score.
As researcher Daniel Kang points out, models are increasingly kept behind closed APIs. “There are good reasons for this: model weights may be unable to be revealed for privacy reasons if they are trained on user data (e.g., medical data) and companies want to protect trade secrets. For example, Twitter recently open-sourced their “For You” timeline ranking algorithm but couldn’t release the weights for privacy reasons. OpenAI has also not released weights for GPT-3 or 4.”
For now, users have to trust that when they send their information to an API they’ll receive what they’ve been told they’re receiving or that their data will be protected.
Zero Knowledge Machine Learning and the End of Trust Assumptions
Aside from providing a way to improve capital efficiency with on-chain protocols, our on-chain MACRO Score is intended to demonstrate how blockchain data can be used in concert with a machine-learning network. At the moment, our users have no way to be certain that the machine learning model we detail in our documentation is actually the one used when they receive a MACRO Score. The same goes for any other API: a developer has to take it on faith that they’ll receive the data they expect. With zkML, you can mathematically verify that you’re getting what you requested.
If someone wanted us to prove that the data we trained our model on came from the blockchain, and wasn’t simply a credit score ported from one of the three bureaus, we could create a zero-knowledge circuit around model training, i.e., a circuit proving that f(old model, training data) = new model. We may also want to prove that we used our model to generate a specific MACRO Score without revealing any details of the model itself. In that case, we could create a circuit around inference, i.e., a circuit proving that f(model, input) = inference.
To go beyond simply scoring on-chain wallets, for example, to create a decentralized machine learning network that anyone could participate in without revealing any unnecessary details of the models being used, zero-knowledge machine learning is essential.
EZKL and zkML
EZKL, named after the Biblical prophet who was granted visions of the future, is bringing artificial intelligence on-chain using a library that allows developers to create zero-knowledge proofs of machine learning models imported using the Open Neural Network Exchange (ONNX).
We spoke to EZKL’s creator, Jason Morton, about the project. “There was no silver bullet breakthrough making it all possible,” he said, “Zero-knowledge is more of a lead bullet field, it’s a matter of picking the ones that will work.”
He cites Neal Stephenson’s Fall; or, Dodge in Hell (2019) as partial inspiration for his interest in zero knowledge. “There was a concept called PURDAH, which was the ability to sign something even after death, and I wondered if it were possible to build something like it,” Morton says. “So I went down the stack until I found something I could do.”
To perform zkML, first, a machine learning model must be trained using a dataset. After the training phase, the machine learning model parameters are converted into a format that can be used with zero-knowledge proofs. EZKL allows users to convert ONNX models into Halo 2 circuits. Halo 2 is a recursive proof system that works without a trusted setup, and has constant-sized proofs and efficient verification times. EZKL also includes layout optimization, quantization, and the ability to deploy proofs on the Ethereum network.
This iteration of EZKL was tested on MobileNet V2, a lightweight convolutional neural network (CNN) architecture designed for efficient mobile and edge computing, and optimized for low-latency and low-power applications such as image recognition on mobile devices. This is enough performance to allow Spectral to safeguard machine learning models and, potentially, safeguard off-chain credit information while incorporating it into our models.
For more information about Morton’s startup, Zkonduit, visit and experiment with their GitHub repo at https://github.com/zkonduit/ezkl, or read a recent post about LLMs here: https://hackmd.io/mGwARMgvSeq2nGvQWLL2Ww
Modulus Labs and zkML
In January 2023, Modulus Labs released their first paper, The Cost of Intelligence, the “first work to ever benchmark ZK-proof systems across a common suite of AI primitives.” Their question: if the ZK-rollup paradigm was poised to solve general compute costs for Ethereum, could it also bring artificial intelligence inference to the decentralized internet? What might it take to build a zero-knowledge circuit for a MACRO Score?
Part of the answer was a prover called Plonky2. Even so, it is extremely expensive and time-consuming to “snark” (a reference to the SNARK, a common form of zero-knowledge circuit) even a small artificial intelligence model. We spoke to Modulus Labs founder Daniel Schorr and software developer Nicholas Cosby, who ballparked the cost of verifying even the smallest part of a smart contract at 300K gas (around $20 per transaction at current prices) on Ethereum. Recursive proofs allow them to compress a model’s computation, and with batching, thousands of transactions can be rolled into a single on-chain transaction. They hope to eventually reduce the cost of a zk-inference to near zero.
Modulus is working on two projects, the first is RockyBot, the zero-knowledge secured fighting game where human players train AIs to battle one another. Zero-knowledge allows players to trust that their opponents have really trained the way they’ve said they have. The second project is Leela vs the World, the first-ever on-chain AI game.
Daniel Kang and zkML
Daniel Kang and Edward Gan recently wrote about releasing their open-source framework for using zkML to generate zero-knowledge proofs. Their open-source framework is the first to produce zero-knowledge proofs of large ML models, including GPT-2, the ML model used for Twitter recommendations, and state-of-the-art image classification models.
These proofs don’t require any additional interaction, nor do they require a prover to re-execute an operation. Better yet, they are extremely small: “even for large models, the proofs are typically less than 5kb,” Kang writes. They work like this:
Given a set of public inputs (x) and private inputs (w), ZK-SNARKs can prove that a relation F(x, w) holds between the values without revealing the private inputs.
He cites a Sudoku puzzle as an example. In this case, the public inputs are the starting squares and the private ones are the remaining ones. For ML, model weights are the private input. For the public input, there are the model input features F and an output O. To identify the model they, “also include a model commitment C in the public input. The model commitment functions like a hash, so that with high probability if the weights were modified, the commitments would differ as well. Thus x = (C,F,O). Then the relation we want to prove is that for some private weight value w, having commitment C, the model outputs O on inputs F.”
That means that if a verifier is given the proof and x, they can verify that a specific model ran honestly.
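Here is a stdlib Python sketch of the statement being proven. SHA-256 stands in for a SNARK-friendly commitment scheme, and we check the relation by re-running the model directly, whereas a real verifier checks a succinct proof and never sees w at all:

```python
import hashlib
import json

def model_commitment(weights: list[float]) -> str:
    # C: binds the public statement to one specific set of weights.
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def model(weights: list[float], features: list[float]) -> float:
    # A toy linear model standing in for the private computation.
    return sum(w * f for w, f in zip(weights, features))

def relation_holds(C: str, F: list[float], O: float, w: list[float]) -> bool:
    # The relation: w matches commitment C AND the model outputs O on F.
    # A zk-SNARK proves this holds for some w without revealing w.
    return model_commitment(w) == C and model(w, F) == O

w_private = [0.5, -1.0, 2.0]     # hidden weights
F = [1.0, 2.0, 3.0]              # public input features
O = model(w_private, F)          # public output: 0.5 - 2.0 + 6.0 = 4.5
C = model_commitment(w_private)  # public commitment, so x = (C, F, O)
```

Tampering with even one weight changes the commitment, so a proof for modified weights will not verify against the original C.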
Check out their repo for more details and to use the code!
Use Cases for zkML
“There are a lot of people thinking about using zkML for marketplaces,” says Jason Morton. “It’s often Kaggle-like projects, where we ship the compiler and they keep everything local. You can keep the architecture mostly private that way; certainly you leak some information out, but only a little. There’s no way to glean how many layers there are or the shape of them.”
Performance-wise, he anticipates rapid progress. “Of course, things can always slow down,” he says, “but things are going very fast, I gave a talk last September (2022) and already ezkl is 4,000 times faster than the state of the art back then.”
The biggest problem facing zero-knowledge experts, he says, is thinking through data provenance and ecosystem provenance, which he sees as fundamentally a social, rather than a technological, problem.
Privacy-preserving model evaluation: Businesses and organizations can use ZKML to demonstrate a machine learning model's accuracy without revealing its parameters. Buyers can then verify the model's performance on a randomly chosen test set before purchasing, ensuring that they are investing in a legitimate and effective product. A few examples from Worldcoin: decentralized Kaggle, proving that a model has greater than X% accuracy on test data without revealing weights, or medical diagnostics on private patient data with only the patient seeing the result.
Computational Integrity (validity ML): zkML can be used to prove that a computation has happened correctly; for example, an online trading bot can use it to prove that certain functions have been executed. Other examples include Lyra Finance’s options-protocol AMM with intelligent features, Astraly’s AI-based reputation system, and Aztec Protocol’s contract-level compliance tools. zkML can also be used to verify that an output is the product of a given model and input pair, allowing ML models to be run off-chain. Giza is working with DeFi yield aggregator Yearn Finance on this.
Machine Learning as a Service (MLaaS) transparency: zkML can be used to prove that a service provider is actually providing the model they say they’re providing.
On-chain verification: In the context of blockchain and distributed ledger technologies, zkML can enable secure and privacy-preserving verification of machine learning models. This can help improve trust and transparency in decentralized applications and smart contracts that rely on artificial intelligence.
Legal discovery and auditing: zkML can be utilized for conducting audits or legal discovery processes without revealing sensitive data. By allowing auditors and investigators to verify the accuracy and compliance of machine learning models without accessing the raw data, zkML helps maintain data privacy while ensuring regulatory adherence. This auditing could also extend to smart contracts where zk-proofs could guarantee a contract fits certain predetermined criteria.
Other examples of important but opaque algorithmic processes that affect our lives every day include Twitter and other social media feeds (although there has been an effort to open some of these details to the public), bail determinations, tax auditing, pension fund investment strategies, and countless other decisions. There are other cryptographic methods for concealing information, such as fully homomorphic encryption for machine learning (used, for example, in text autofill on an iPhone) and other zero-knowledge primitives for validity machine learning. But for use cases that require computational integrity, heuristic optimization, and privacy simultaneously, only zkML fits the bill, allowing an algorithm to run on a blockchain network while remaining scalable, secure, and decentralized.
How Zero Knowledge Machine Learning (zkML) fits into an Accessible, Equitable, Transparent Financial Future
In April 2022, a lawsuit was filed alleging that one of the three major credit bureaus had unintentionally provided hundreds of thousands of inaccurate credit scores. “As many as 300,000 people experienced a score shift of 25 points or more, enough to swing a borrower’s credit rating from good to fair, or fair to poor.” (NBC) That the algorithmic apparatus behind the scores had gone wrong was completely invisible to outsiders.
“Right now, some of the least trustworthy people in society are running machine learning models with the broadest financial impacts.” (Yi Sun, “Scaling up Trustless Neural Network Inference with Zero-Knowledge Proofs,” EthBogota 2022)
Credit scores are unfortunately typical of the way big data and society intersect: American consumers are tracked without their consent, and their lending and borrowing behavior is weighted by sophisticated machine learning algorithms into a single score that can change abruptly, with serious financial consequences. Blockchains allow public data to be processed by algorithms securely and transparently, but they sacrifice privacy, because on-chain data is open to anyone, and scalability, because running complex machine-learning algorithms on a blockchain is slow and very expensive.
Today, web3 is missing key components such as privacy safeguards and a consistent, persistent, user-owned identity that is universally recognized (i.e., a reputation primitive). Big platforms like Google, Facebook, and Twitter can fill in the gaps in the meantime. But to truly decentralize, and to reap the benefits of delegating certain decisions to machine learning, we must be able to conceal the models we’re using while still being able to vet them, ensure they’re being used fairly, and safeguard the data and users affected by those decisions.
Challenges Remaining for zkML
Even using EZKL, zero knowledge still demands substantial compute and adds complexity to software development. Dante Camuto, EZKL’s CTO, points to a subtle technical issue. “Most ML models are trained using floating-point arithmetic,” he says. “So when you go into PyTorch you get parameters that look something like [1.234234, 1.585858, 9.5465665, ...]. When we enter ZK land, we’re performing operations over fields (basically integers), and from what I can tell most if not all zkML approaches use fixed-point arithmetic. You’re basically quantizing the model, so you can lose a bit on accuracy.”
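That quantization step can be sketched in a few lines of Python. The scale factor here is an illustrative choice (8 fractional bits); real zkML pipelines tune precision per model and map the resulting integers into a finite field:

```python
SCALE = 2 ** 8  # 8 fractional bits (illustrative choice)

def to_fixed(x: float) -> int:
    # Map a float to a fixed-point integer, the kind of value a
    # field-arithmetic circuit can actually work with.
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    return n / SCALE

weights = [1.234234, 1.585858, 9.5465665]
quantized = [to_fixed(w) for w in weights]  # [316, 406, 2444]
errors = [abs(w - from_fixed(q)) for w, q in zip(weights, quantized)]
# Rounding error is bounded by 1 / (2 * SCALE), about 0.002 per weight:
# the "bit of accuracy" lost that Camuto describes.
```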
A few other challenges include:
Operator support: The current implementation of EZKL supports only a subset of the 1500+ ONNX operators, limiting the types of models that can be converted into zero-knowledge proofs. However, the package is continuously improving to accommodate a wider range of operators.
Model complexity: The complexity of machine learning models and the number of parameters impact the feasibility of generating zero-knowledge proofs. While there is no fixed limit on the number of parameters, more complex models will require more time and computational resources to generate proofs.
Training: While it is possible to implement zkML for training, it will be significantly slower and more expensive than traditional methods. As the field evolves, proof system innovations may make training more feasible, but the benefits must be carefully weighed against the costs.
Scalability and optimization: Generating zero-knowledge proofs for machine learning models requires careful optimization and balancing of trade-offs between prover time, verifier time, and proof size. As the field advances, a better understanding of these trade-offs will help improve the scalability of zkML solutions.
- Worldcoin's Introduction to zkML
- zkML: Evolving the Intelligence of Smart Contracts Through Zero-Knowledge Cryptography
- Checks and balances: Machine learning and zero-knowledge proofs
- Bridging the Gap: How zk-SNARKS bring transparency to private ML models with zkML
Access Spectral's API
Explore Spectral’s on-chain MACRO Scores and other Wallet Signals using our partner dashboard. A free trial with 1,000 credits is available, as well as a custom solutions lab.