We would like to thank Mo Dong from Celer Network and Brevis for the insightful discussion on ZK Coprocessor’s core concepts and use cases that inspired this blog series. 

The ZK Coprocessor is an exciting innovation in the blockchain space. Pioneered by projects like Brevis, Axiom, Lagrange and Herodotus, it promises to revolutionize how we develop applications on the blockchain. With ZK Coprocessors, developers can craft data-driven dApps that utilize the comprehensive history of omnichain data and perform intricate computations without any additional trust assumptions. Even more significantly, it ushers in a novel paradigm: the asynchronous application architecture. This approach introduces a level of efficiency and scalability previously unattainable in the Web 3.0 software framework.

In this blog series, we aim to demystify ZK Coprocessors for you, whether you're keen on the concept, its practical applications, underlying mechanics, challenges, market strategies, or comparisons between various projects.

The Case of the Missing VIP Trader Programs on DEXes

To understand the basic idea of ZK Coprocessors, we need to start with real-world motivating examples. 

One stark difference between centralized (CEX) and decentralized exchanges (DEX) is the presence of trading-volume-based fee schedules, commonly known as "VIP trader loyalty programs." These programs serve as powerful tools to retain traders, boost liquidity, and ultimately enhance revenue for the exchanges.

Interestingly, while every CEX boasts at least one such program, DEXes lack them entirely. Why?

This is because implementing this feature in DEXes is substantially more challenging and expensive than in CEXes. 

In a CEX, implementing a loyalty program simply requires:

  • Logging all users’ trading history in a centralized database — a dirt-cheap task that allows for easy querying in the future.
  • Running a straightforward monthly query on the highly performant centralized database to determine each user's trading volume and fee tier from that history.

However, DEXes face significant challenges when trying to follow the same steps:

  • Directly storing every user's trading history in smart contracts isn't feasible due to the exorbitant storage costs associated with blockchains. Implementing such logic means ~4X higher gas costs for each of a user's trades. 
  • Even if one could justify the ongoing data-recording costs, it is even more expensive to run statistical queries and calculations on this data. For example, calculating the volume for a single user with 10K trades would cost 156M gas (yes, we calculated). 

You might say: “Wait a minute, WTF are you talking about? On the blockchain, every user’s every trade is already automatically stored (well because it’s blockchain!). A smart contract, born and raised in the blockchain, should be able to access all this data whenever needed right? right???”

Well, unfortunately no.

The data stored by blockchains and the data accessible to smart contracts within the blockchain VM are two completely different things.

Full/archive nodes of blockchains store a wealth of data across the chain's entire history. With these nodes, you can easily access:

  • The state of the entire blockchain at any given point in history (e.g. who the first owner of CryptoPunk #1 was).
  • The transactions, and the events emitted as a result of transactions, at any given point in history (e.g. Charlie swapped 1000 USDC for 0.5 ETH).

In fact, popular off-chain data indexing or analytic tools such as Nansen and Dune Analytics harness this extensive dataset to derive insightful analytics. 

Yet, for smart contracts embedded within the blockchain VM, data access is much more restrictive. They cannot use the data generated by off-chain indexing solutions, because doing so would introduce additional trust assumptions on those external and often centralized indexers. 

In fact, a smart contract can only easily and trust-freely access:

  • Data stored in the VM state (excluding transaction or event data).
  • Data from the most recent block (historical data access is constrained).
  • Data from other smart contracts that's made public via "view" functions (excluding private or internal contract data).

A key nuance of the above statement lies in the word “easily”. 

It’s not like a smart contract has absolutely no clue about the full breadth of data on the blockchain. In the EVM, a smart contract can access the block header hashes of the latest 256 blocks. These headers encapsulate all the activity on the blockchain up to that block, condensed into a 32-byte hash value via Merkle trees and Keccak hashes. 
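As a rough illustration (a Python sketch, with SHA-256 standing in for Ethereum's Keccak-256 and a plain binary Merkle tree in place of the real header structure), here is how a Merkle tree folds an arbitrary number of items into one 32-byte commitment:

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 as a stand-in for Ethereum's Keccak-256
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold any number of leaves into a single 32-byte commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels by duplicating the last node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# 10,000 "transactions" collapse into one 32-byte value, much like a
# block header commits to every transaction and state change in a block
txs = [f"tx-{i}".encode() for i in range(10_000)]
root = merkle_root(txs)
assert len(root) == 32
```

Changing any single leaf changes the root, which is what lets a 32-byte hash stand in for the entire history.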

What’s compressed, can be decompressed. Just not easily 😂

Imagine wanting to access specific data from a prior block trust-freely, leveraging a recent block header. The approach involves fetching the data off-chain from an archive node, then constructing a Merkle Tree and a block validity proof to ascertain the data's authenticity within the blockchain. This validity proof is then processed by the EVM for validation and interpretation. Such operations are substantial and daunting, and they can consume tens of millions of gas units just to retrieve a few past token balances. 
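A toy version of that flow, under the same simplifying assumptions (a binary Merkle tree and SHA-256 instead of Keccak-256; real EVM proofs run over Merkle-Patricia tries and are far heavier), looks like this — the prover builds the sibling path off-chain, and `verify` is the replay work the contract must do:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for Keccak-256

def merkle_root_and_proof(leaves: list, index: int):
    """Off-chain: build the root plus the sibling path for one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf: bytes, proof, root: bytes) -> bool:
    """On-chain (conceptually): hash the path back up and compare to the root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

trades = [b"trade-0", b"trade-1", b"trade-2", b"trade-3"]
root, proof = merkle_root_and_proof(trades, index=2)
assert verify(b"trade-2", proof, root)         # authentic data checks out
assert not verify(b"trade-fake", proof, root)  # tampered data is rejected
```

Even this toy needs one hash per tree level per data item; doing it in the EVM, over the real trie encoding, is where the tens of millions of gas go.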

The root of this challenge lies in the fact that blockchain VMs are inherently ill-equipped to handle data-heavy and intensive computations, such as the aforementioned decompression tasks.

ZK Coprocessor Architecture

source: Brevis’s presentation slides in ETHSG

It would be ideal if there existed a magical thing allowing blockchains to delegate such data-intensive and cumbersome computations, receiving results promptly, at a low cost, and without any additional trust assumptions.

Well my friend, that is exactly what a ZK Coprocessor is built for. 

The name “Coprocessor” draws its inspiration from the evolution of computer architecture. For example, the GPU was introduced as the CPU's coprocessor: the CPU offloads certain important computation tasks (e.g. graphics rendering or AI training) that are expensive and ill-suited to run on itself to a “companion processor”, a.k.a. the GPU. 

But what about the "ZK" in the ZK Coprocessor? Before delving into the intricate technicalities, let's first zoom out and appreciate the broader implications and potential of this innovative technology.

We need data-driven dApps in Web 3.0

The trading fee rebate serves as a prime example. Following this line of thinking, with the ZK Coprocessor, one can seamlessly introduce various loyalty programs across numerous DeFi primitives.

However, this is much bigger than just DeFi loyalty programs. You can probably see by now that the same kind of issue exists in every other domain of Web 3.0. If you think about it, all modern Web 2.0 applications are data-driven, while none of the Web 3.0 applications are. To build “killer apps” with user experiences rivaling those of traditional Internet applications, this data-driven approach is indispensable.

Let’s take a look at another example in the DeFi space: improving liquidity efficiency by redesigning the liquidity mining reward mechanisms. 

Presently, liquidity incentives on AMM DEXes function on a “pay-as-you-go” model. Here, farming rewards are instantaneously distributed to LPs as they contribute liquidity. This model, however, is far from optimal. Expert farmers, sensing market volatility, can promptly withdraw their liquidity to sidestep impermanent losses. In doing so, they offer minimal value to the protocol but still reap significant rewards.

The ideal AMM liquidity incentive would retrospectively assess the steadfastness of LPs, especially during significant market fluctuations. Those who consistently support the pool during such times should receive the highest rewards. Yet, accessing historical LP behavior data, crucial for this model, remains unfeasible today.

You need ZK Coprocessors to do that. 

There are a lot of similar examples we can make within the DeFi domain, whether it's about active LP position management with predefined algorithms and rules, establishing a credit line using non-tokenized liquidity positions, or determining dynamic liquidation preferences on loans based on past repayment behaviors. The potential of ZK Coprocessors, however, stretches beyond just DeFi.

Building on-chain gaming with great UX powered by ZK Coprocessors

An example of a Web 2.0 gaming LiveOps feature

The moment you dive into a newly installed Web 2.0 game, every move you make is intricately recorded. This data doesn’t just sit idle; it actively shapes your gaming journey. It decides when to offer you in-game purchase options, when to introduce a bonus game, when to send you push notifications with carefully crafted wording, which opponents you're matched with, and much more. These are all parts of what the gaming industry calls Live Operations (LiveOps), a cornerstone of enhancing player involvement and revenue streams.

For fully on-chain games to rival the user experience of Web 2.0 classics, they need these LiveOps capabilities. They should be based on players’ historical interactions and transactions with the game’s smart contracts.

Unfortunately, these kinds of features are either completely missing or are still driven by centralized solutions in blockchain-powered games. The reason is exactly the same as in the DEX example: the difficulty of tapping into and computing over your historical gaming data on the blockchain.

Yes, again, you need ZK Coprocessors to do that. 

Web 3.0 social and identity applications are another domain that simply won’t work without the support of ZK Coprocessors. 

In the blockchain world, your digital identity is a tapestry woven from your past actions:

  • Want to prove you are an NFT OG? Show me you are one of the original minters of CryptoPunks.
  • Bragging about being a big-time trader? Prove to me that you have paid more than $1M in transaction fees on DEXes.
  • Close to Vitalik? Show me that his address sent funds to your address. 

Off-chain systems, be it humans or Web 2.0 apps, can easily generate such proof because just like in the trading volume example, they can access archive nodes containing all that data. 

Identity proofs based on this direct data access require a strong wallet-address association, and therefore come with a privacy downside, but they work.

However, just like in the trading volume example, if you want to convince a smart contract about your OG status to get early access to some cool games without introducing additional trust, there is simply no good way to do that. 

With ZK Coprocessors, you can stitch together a solid identity proof, a testament to your past behaviors, one that any smart contract would accept without a shadow of a doubt. Your interactions across different applications and even various blockchains can be artfully merged to form this proof.

What's even more appealing is the inherent privacy ZK offers. Your wallet address needn't be openly associated with your identity. You could, for instance, attest to owning a CryptoPunk NFT without disclosing the specific wallet address. Or you might demonstrate having executed 10,000 trades on Uniswap without revealing the underlying details.

While we could easily go on with more great use cases, you are hopefully convinced that ZK Coprocessors open an entirely new realm of data-driven dApp building. 

But it’s more than that. 

Beyond the Data-Driven Paradigm: Charting the Asynchronous Horizon of Web 3.0 with ZK Coprocessors

The data-driven dApp paradigm, while notable, is merely the tip of the iceberg.

The inception of ZK Coprocessors is poised to revolutionize the way we view blockchain computations, ushering in an era where asynchronous processing becomes the standard for Web 3.0. This shift redefines how tasks are handled, with specialized processors operating independently for heightened efficiency.

Let’s first give an ELI5 on what asynchronous processing is.

Imagine a synchronous restaurant where one person juggles the roles of chef and waiter. You place an order, and he proceeds to prepare it, leaving you waiting. He can only attend to another guest after serving your dish. While this setup might cater to you, it’s hardly efficient for others.

By contrast, in an asynchronous restaurant, a distinct chef and a distinct waiter work in tandem. After taking your order, the waiter swiftly conveys it to the chef while simultaneously serving other customers. Upon completing the dish, the chef signals the waiter, who then serves your meal.

In a computer system:

Synchronous architecture is like the first restaurant where one person waits for each task to finish before moving on. It's straightforward but can be slow because it handles tasks one at a time. It’s also likely that the person might be a good waiter but not a good cook.

Asynchronous architecture is like the second restaurant where there are decoupled and specialized system components which send messages and tasks to each other as a way of coordination. This allows each component to manage their own line of tasks at the same time. While it may require a more intricate management approach, this architecture can be faster and more efficient.
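The restaurant analogy maps directly onto ordinary async code. In this Python sketch (an illustration, not a blockchain-specific design), the "waiter" coroutine hands all orders to the "chef" concurrently instead of waiting on each one in turn:

```python
import asyncio

async def chef(order: str) -> str:
    # Cooking happens off to the side; the waiter is free meanwhile
    await asyncio.sleep(0.1)
    return f"{order} ready"

async def waiter() -> list:
    # Dispatch all three orders at once; dishes finish concurrently,
    # so total wait is ~0.1s instead of ~0.3s in the synchronous case
    orders = ["pasta", "soup", "salad"]
    return await asyncio.gather(*(chef(o) for o in orders))

dishes = asyncio.run(waiter())
print(dishes)
```

In the blockchain setting, the "chef" role is played by the off-chain coprocessor and the "signal back to the waiter" is the proof delivered to the contract.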

Every single modern Internet application is built on asynchronous architecture for efficiency and scalability; we argue that Web 3.0 applications should be too. 

ZK Coprocessors are set to be the trailblazers in this transformative shift. For dApp developers, the blockchain acts as the waiter in our asynchronous restaurant: it should primarily handle computations that directly alter blockchain states, such as asset ownership changes. All other computations should be offloaded to robust ZK Coprocessors, the proficient chefs who efficiently cook up results and send them back to the waiter, all through the power of asynchronous processing.

Specifically, if a computation in a blockchain application matches one of the following two “viable conditions”, the use of ZK Coprocessor should be considered. 

ZK Coprocessor Viable Conditions:

  • On-chain computation cost > off-chain ZK Coprocessor computation cost (inclusive of proof generation) + on-chain verification cost
  • On-chain computation latency > off-chain ZK Coprocessor computation latency (inclusive of proof generation) + on-chain verification latency

Even if a computation matches one of these conditions, the trade-off is still worth weighing on a case-by-case basis. 
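As a back-of-the-envelope sketch (every number below is a hypothetical assumption, not a measurement), the two conditions reduce to a simple comparison:

```python
def zk_viable(on_chain_cost: float, prover_cost: float, verify_cost: float,
              on_chain_latency: float, prover_latency: float,
              verify_latency: float) -> bool:
    """True if either viable condition holds (illustrative only;
    costs in gas-equivalent units, latencies in seconds)."""
    cheaper = on_chain_cost > prover_cost + verify_cost
    faster = on_chain_latency > prover_latency + verify_latency
    return cheaper or faster

# Hypothetical numbers for the earlier 10K-trade volume query:
# ~156M gas on-chain vs. modest off-chain proving cost plus a few
# hundred thousand gas to verify a succinct proof on-chain.
print(zk_viable(156_000_000, 2_000_000, 500_000, 12, 300, 12))  # True
```

Here the cost condition holds even though proving adds latency, which is typical: coprocessors win on heavy computations, not on trivial ones.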

Now you can see that it is much more than just data-driven dApps! It’s a brand-new way to bring high-caliber generalized computing, such as ML, to the blockchain; more importantly, it introduces a paradigm-shifting asynchronous architecture for building dApps that were simply not possible before. 

In the next episode….

If we have managed to convince you that the ZK Coprocessor is an idea that will have a profound impact, it’s probably time we talk about how they work. In the next blog, we will explore the key architecture of the ZK Coprocessor and discuss some of the biggest remaining technical challenges in this space. 

Disclaimer: Press release sponsored by our commercial partners.
