Modeling with Spectral: A Deep Dive into Our First Challenge and Modeling Process

Spectral is launching v0.1.0 of our Machine Intelligence Network for verified ML inference feeds.

On December 5th, we announced the launch of our Machine Intelligence Network, a four-party ecosystem designed to generate high-quality, consumption-ready ML inferences and lay the foundation for what we’re calling the Inference Economy. Our network is made up of Creators, who initiate data science challenges, Modelers, who craft models to solve these challenges, Validators, who verify models’ integrity and performance, and Consumers, who utilize the inference feeds produced by verified models. 

We appeal to top data scientists and ML engineers to develop highly performant models by offering significant rewards and upholding tamper-proof, provably-fair model validation, achieved through a transparent, on-chain process. And, in an industry-first move, we secure Modelers’ intellectual property with cutting-edge zero-knowledge Machine Learning (zkML), allowing them to control how to monetize their models. We also give Creators and Validators a share of revenue for posting and authenticating challenges, and stream inference feeds to Consumers in an easy, intuitive format. By combining interconnected incentives with frictionless inference integration, our network lets businesses everywhere turn their data science needs into challenges, and get inference solutions delivered to them, directly and ready-to-use, through smart contracts.

You’ll find a roadmap below, detailing our upcoming product milestones and hints at what’s over the horizon. But for now, let’s start with our first challenge!

What’s new?

If you signed up for our waitlist before December 5th, you’ll find below an overview of how the latest version of our platform works (v0.1.0). Or if you’re new to Spectral, read on!

Our platform kicks off with a problem significant to many in web3: how to build a decentralized credit score to improve capital efficiency on-chain. We’re offering $100,000 to top participants whose models’ ability to predict liquidation risk can meet our performance benchmarks. In addition, top participants will earn an 85% share of all revenues from consumption calls made for their model’s inferences. 

We’re excited to announce that we’re adding an additional $50K bounty — to be paid out starting Feb 19 — for Modelers who can submit their models by Jan 16. Early submitters will also remain eligible for the original $100K reward. 

To help you get started on this challenge, we’ve created a Modeler Handbook and Starter Kit with guidance, tooling, and documentation.

Participating in challenges

We’ve made it easy to take part in our challenge by streamlining the process into six simple steps. 

1. Set up

Go to the Spectral challenge details page and register for an account by clicking “Get Started.” Registration automatically sets up a wallet for you with Privy. Then download the Spectral CLI.

2. Train model

Fetch Spectral’s custom training dataset and create a machine learning model capable of forecasting the probabilities and target labels specified by the challenge. Our dataset encompasses half a million borrow transactions from the Aave and Compound lending protocols spanning nearly four years, and is custom-built to be easy to pre-process for feature engineering. In our effort to foster accurate, rigorous model development, we’re making this dataset freely accessible to Modelers, providing a rich resource to enhance analysis and prediction.
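As a rough sketch of this step, the snippet below trains a classifier to output liquidation probabilities and binary target labels. It uses synthetic stand-in data; the feature names, the `liquidated` target column, and the model choice are illustrative assumptions, not the actual schema of Spectral’s training dataset.

```python
# Hypothetical sketch: feature names, the "liquidated" target, and the model
# choice are illustrative assumptions, not Spectral's actual dataset schema.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real borrow-transaction dataset.
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "borrow_amount_usd": rng.lognormal(8, 1, n),   # assumed feature
    "collateral_ratio": rng.uniform(1.0, 3.0, n),  # assumed feature
    "wallet_age_days": rng.integers(1, 1500, n),   # assumed feature
})
# Toy label: lower collateral ratios imply higher liquidation risk.
df["liquidated"] = (rng.random(n) < 1.0 / df["collateral_ratio"] - 0.3).astype(int)

X_train, X_val, y_train, y_val = train_test_split(
    df.drop(columns="liquidated"), df["liquidated"], test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
val_probs = model.predict_proba(X_val)[:, 1]  # predicted probabilities
val_labels = (val_probs >= 0.5).astype(int)   # predicted target labels
print(f"Validation AUC: {roc_auc_score(y_val, val_probs):.3f}")
```

Any model family works here; what the challenge actually scores is the submitted probabilities and labels against the performance benchmarks.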

3. Submit model commitment and predictions

Submit the predicted probabilities and target labels your model outputs, along with your model commitment. After you’ve committed your models, they are verified using zkML, which makes it possible to mathematically prove their inferences without revealing your proprietary modeling methodologies. This ensures the models' performance and integrity before we make their inferences available for consumption. This entire process is managed by an automated and transparent mechanism, performed by independent Validators.

For more information, please see How Spectral and EZKL are implementing zero-knowledge machine learning.

We’ve put together a comprehensive step-by-step guide in our documentation. Access it here.

Remember that your ultimate goal is to create a functioning oracle capable of delivering a stream of insights to users. The preceding section described the actions you’ll be taking on the Spectral platform. This section will take you through evaluation, verification, and consumption, providing color on the steps Spectral takes to examine and rank your models, and giving you a glimpse into the time it takes to go through this process.

4. Model Evaluation

Each challenge features a set of performance benchmarks. They serve both as a performance index for Validators to gauge whether a model is performing up to expectations, and as a floor: the minimum standard the challenge Creator requires for a model to qualify for a share of the reward bounty.

Transparently calculated after a 30-day forward testing time window, our performance benchmarks for this first challenge are detailed in Introducing Challenge #1: Decentralized Credit Scoring.  

5. Model Verification

After training and evaluation, models that pass performance benchmarks undergo a retroactive zkML verification process, which guarantees model integrity and makes verified models eligible to receive bounties and revenue-share. At the same time, zkML serves as a privacy-preserving technique, keeping model details private so winning Modelers can continuously capture value from their intellectual property.

6. Model Consumption

Please note that only models that pass performance benchmarks will become eligible for consumption. We will notify Modelers who submit models that don’t pass performance benchmarks.

During our first challenge, the basic idea behind payouts is that your model must be online at the time of consumption calls to qualify for, and retrieve, your bounty; that is, when someone sends a request for consumption, your model must respond. The more calls that come in, the more bounty is unlocked. Your revenue share depends on how many other Modelers are responding. If there are ten Modelers, but you are the only one whose model is responding to consumption calls in a given period, you will receive all of the payout.

To make it a little simpler: if a consumption call costs one dollar, and ten Modelers are responding, everyone gets 10 cents. If there’s only one Modeler online, that Modeler receives the entire dollar.
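The per-call split above can be sketched in a few lines. This is a toy illustration of the arithmetic as described here; the actual payout mechanics live in Spectral’s smart contracts.

```python
# Toy illustration of the per-call revenue split described above; the real
# payout logic is handled by Spectral's smart contracts.
def split_call_fee(fee_usd: float, responding_modelers: int) -> float:
    """Each Modeler online for a consumption call gets an equal share."""
    if responding_modelers <= 0:
        raise ValueError("no modelers responded; nothing to pay out")
    return fee_usd / responding_modelers

print(split_call_fee(1.00, 10))  # ten Modelers online -> $0.10 each
print(split_call_fee(1.00, 1))   # one Modeler online -> the full $1.00
```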

Bounty + revenue-share payout and requirements

Modelers must qualify for both bounty payouts and the revenue sharing that follows. The rules are the same in each case: the top ten Modelers who pass performance benchmarks and are online will receive payment.

Payment mechanisms

  • Bounty unlocks only once inferences are consumed
  • Model must be online at the time of consumption
  • Bounties progressively unlock as more inferences are consumed
  • Each Modeler’s portion depends on how many other Modelers are responding to the call

As the challenge bounty is being withdrawn, Modelers are simultaneously earning income through revenue-sharing: getting a portion of ongoing payments received from Consumers and Creators ingesting model inferences (for Challenge 1, it's 85%). This is part of what differentiates Spectral from other competitive ML platforms — “perpetual challenges.”

Perpetual challenge lifecycle 

The what and why of perpetual challenges

One of the biggest reasons a traditional competitive ML platform won’t function as well as a machine intelligence network is that it can only offer winner-take-all challenges. Spectral offers continuous or “perpetual” challenges. Essentially, a leaderboard is kept, tallying which Modelers are passing performance benchmarks. The leaderboard is dynamic, so any Modeler can submit a model and earn a place on it. If their model lands in the top ten, they’ll begin receiving a share of the bounty or consumption fees.

The idea behind making every challenge “perpetual” is that it makes winning models and their inferences more readily available to Creators and Consumers, and gives Modelers ongoing opportunities to tap into income via revenue sharing.

Rolling leaderboards

With perpetual challenges, we make every challenge open to Modelers who may have missed the initial “bounty” window, but who are still interested in fine-tuning solutions to an interesting problem, and would like to earn while doing so. These Modelers can submit new models, which we’ll continuously evaluate through our rolling forward testing windows against new datasets. Modelers can submit new models to continuously outrank existing top models, and if they can “unseat” a top model, their model can take its place, become consumption-eligible, and start generating revenue through inferences. Modelers who wish to keep their models consumption-ready can “fend off” competition as well by submitting again — but with a higher-performing model that improves on their original.
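The rolling top-ten mechanics can be sketched as follows. The modeler names, the benchmark value, and the scoring scheme are illustrative assumptions, not Spectral’s actual ranking logic.

```python
# Hedged sketch of a rolling top-10 leaderboard with "unseating"; the
# benchmark value and scoring scheme are assumptions for illustration only.
TOP_N = 10
BENCHMARK = 0.70  # assumed minimum performance floor

def submit(leaderboard: list[tuple[str, float]], modeler: str, score: float):
    """Add a qualifying submission, keep only the top N, best score first."""
    if score < BENCHMARK:
        return leaderboard  # fails the performance benchmark; not listed
    board = [e for e in leaderboard if e[0] != modeler]  # resubmission replaces
    board.append((modeler, score))
    board.sort(key=lambda e: e[1], reverse=True)
    return board[:TOP_N]

board = []
board = submit(board, "alice", 0.82)
board = submit(board, "bob", 0.65)    # below benchmark, not listed
board = submit(board, "carol", 0.91)  # outranks alice
print(board)  # [('carol', 0.91), ('alice', 0.82)]
```

A resubmission from an existing Modeler simply replaces their earlier entry, which is how a top Modeler “fends off” challengers with an improved model.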

Why is this beneficial? It’s a quality guarantee for Creators and Consumers, it’s equitable for Modelers who join “later,” and it’s a way for original Modelers to stay motivated and involved.

Empowered by the Spectral Community

A cornerstone of web3 has been its ability to foster vital communities around shared goals, where members can solicit help, draw inspiration, or simply network. Our Discord and Slack communities are places where Modelers, Creators, and Consumers can stay engaged, support one another, ask questions, keep up with the latest news, and make valuable connections with other data science and machine learning experts — or seek out technical support!

Just the beginning - Spectral roadmap and challenges to come 


Upcoming features

Whether you’re a Modeler, Creator, or Consumer, joining our community gives you a voice as we grow our platform to its next phase, which includes:

  • Proof submission engine
  • Model consumption engine
  • Notification system
  • Challenge creation wizard
  • Inference consumption page

Upcoming challenges

We plan to expand to other challenges in the near future. As a company rooted in web3, we will first apply our products to a familiar ecosystem, working with web3 players to tackle pressing data gaps and challenges in DeFi, blockchain security, and beyond. But our ambition is to bring decentralized on-chain ML past web3 and integrate it into the broader data science ecosystem, making Spectral a hub for a diverse spectrum of topics and themes.

We believe that regardless of whether a data problem is native to web2 or web3, our unique technologies and the mechanisms of zkML can impart the benefits of decentralization, privacy, and perpetual IP protections to Modelers, Creators, and Consumers alike. 

We want to bring these benefits to more use cases. If you represent a web3 company and have an idea for a challenge, reach out to us and we may sponsor 100% of your challenge bounty for being an early Creator partner!

Upcoming collaborations

We’re hoping to establish relationships with prospective Creators and Consumers. Here’s how you can get started.

Building the Future of AI and ML with an Inference Economy

Spectral’s Machine Intelligence Network offers data scientists and ML engineers the opportunity to earn significant rewards for their work without compromising their intellectual property. Our network is designed for web3, bringing inferences to smart contracts with the security and verifiability of zkML. Inferences no longer represent a single point of failure — whether they feed smart contracts or allow artificially intelligent agents to communicate with each other and trust that the information provided isn’t a hallucination or out of date.

This is the beginning of a long road. We’re excited for Creators, Consumers, and Modelers to join us on our journey as we prepare the Inference Economy for the next evolution of the Information Revolution, fulfilling the promise of AI, ML, and networked systems.