---
title: "hyperscale-rs"
path: "/contents/tech/research/hyperscale-rs"
version: "3.0.0"
author: "Hydrate"
createdAt: "2026-02-17T21:53:28.659Z"
updatedAt: "2026-05-05T11:34:00.322Z"
---

# hyperscale-rs

<Infobox>
| **hyperscale-rs** |
| Type | Consensus Protocol Implementation |
| Language | Rust |
| Status | Foundational consensus complete; [Xi'an RFC](https://radixtalk.com/t/rfc-xian-delivering-hyperscale-for-radix/2280) under community review |
| Crates | 29 (Cargo workspace) |
| Commits | 1,200+ |
| Lead | [flightofthefox](https://github.com/flightofthefox) ([proven.network](https://proven.network)) |
| GitHub | [hyperscalers/hyperscale-rs](https://github.com/hyperscalers/hyperscale-rs) |
| Telegram | [t.me/hyperscale_rs](https://t.me/hyperscale_rs) (421 members) |
| Stars / Forks | 23 / 13 |
| Contributors | 5 |
| Licence | Source-available (Apache 2.0 on RFC acceptance) |
| Platforms | Linux (x86_64), macOS (ARM64) |
</Infobox>

## Introduction

**hyperscale-rs** is a [community-built Rust implementation](https://github.com/hyperscalers/hyperscale-rs) of the Hyperscale consensus protocol for the Radix DLT ecosystem. The project's stated goal is to produce a viable [Xi'an candidate](https://www.radixdlt.com/blog/radix-labs-roadmap---to-hyperscale-and-beyond) — the next-generation sharded consensus layer that will enable Radix to become what its community describes as "the world's first linearly scalable Layer 1 network."

Led by **flightofthefox** of [proven.network](https://proven.network), the project was [publicly announced](https://t.me/hyperscale_rs) in late 2025 with the opening of its source code and an invitation for community review and contribution. By April 2026 the lead developer reported that "all major foundational consensus pieces are pretty solid now," and a formal [RFC for delivering Xi'an](https://radixtalk.com/t/rfc-xian-delivering-hyperscale-for-radix/2280) was submitted to the Radix governance forum on 20 April 2026 with an 18-month mainnet target.

The project significantly diverges from both the original Cerberus design and the Foundation's Hyperscale reference implementation. As the lead developer put it: "It's very different and throws out almost all designs from both Cerberus and the original Hyperscale repo... which is probably surprising to people if they think this is only a rust port."

## Background: Hyperscale & Xi'an

Radix's long-term roadmap centres on achieving linear scalability — the ability to increase throughput proportionally by adding more shards to the network. The [Hyperscale Alpha consensus mechanism](https://radixecosystem.com/news/hyperscale-alpha-part-i-the-inception-of-a-hybrid-consensus-mechanism-the-radix-blog-radix-dlt) (formerly known as Cassandra) represents Radix's approach to this problem, combining principles from Nakamoto consensus and classical Byzantine Fault Tolerant (BFT) protocols.

In public testing, the Foundation's Hyperscale reference implementation [sustained over 500,000 transactions per second](https://getradix.com/updates/news/hyperscale-update-500k-public-test-done-the-radix-blog-radix-dlt) with peaks exceeding 700,000 TPS across more than 590 participating nodes. Private testing demonstrated linear scaling at roughly 250,000 TPS on 64 shards and maintained the same per-shard throughput at 500,000 TPS on 128 shards. However, those tests used very small per-shard committees; hyperscale-rs targets meaningful committee sizes (~100 validators per shard), which fundamentally changes the design space.

The [Xi'an production track](https://www.radixdlt.com/blog/radix-labs-roadmap---to-hyperscale-and-beyond) carries this hybrid consensus mechanism into a production network candidate. Since the [interim Hyperscale phase closed](https://www.radixdlt.com/blog/interim-hyperscale-closing-the-chapter) in February 2026, hyperscale-rs has emerged as the leading community-led candidate to deliver Xi'an.

## Architecture

hyperscale-rs is architected as a [pure consensus layer](https://github.com/hyperscalers/hyperscale-rs) — deliberately containing no I/O, no locks, and no async code in the consensus core. This makes deterministic simulation testing a first-class design principle: the entire consensus logic can be exercised without non-determinism from network or disk operations.

A defining feature of the codebase is the systematic pairing of production and simulation backends behind common traits. This includes `network-libp2p` and `network-memory`, `storage-rocksdb` and `storage-memory`, `dispatch-pooled` and `dispatch-sync`, and `metrics-prometheus` and `metrics-memory` — meaning the same consensus code can be exercised either against real infrastructure or inside a deterministic harness that can inject faults, partitions, and adversarial timing.
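The pairing can be illustrated with a minimal sketch. The trait and type names below are illustrative only, not hyperscale-rs's actual API; the point is that consensus code depends on a trait, and either a production or a simulation backend is plugged in behind it:

```rust
use std::collections::BTreeMap;

// Hypothetical sketch of the production/simulation backend pairing.
// The consensus core depends only on traits like this, never on real I/O.
trait Storage {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
}

// Simulation-side backend: deterministic, in-memory, no disk access.
struct MemoryStorage {
    map: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl MemoryStorage {
    fn new() -> Self {
        MemoryStorage { map: BTreeMap::new() }
    }
}

impl Storage for MemoryStorage {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.map.insert(key, value);
    }
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.map.get(key).cloned()
    }
}

// Consensus logic is generic over the backend, so the same code runs
// against RocksDB in production or against this map inside the simulator.
fn persist_block<S: Storage>(store: &mut S, height: u64, block: &[u8]) {
    store.put(height.to_be_bytes().to_vec(), block.to_vec());
}
```

A production `RocksDbStorage` would implement the same trait, which is what lets the deterministic harness inject faults and adversarial timing without touching the consensus code itself.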

### Consensus Mechanism

The protocol uses a **two-chain commit** rule derived from **HotStuff-2**, which commits in fewer rounds than the three-chain rule of the original HotStuff. Key consensus features include:

- **Optimistic pipelining** — proposers can submit new blocks immediately after quorum certificate (QC) formation, without waiting for the previous block to be fully committed

- **One-round finality** — BFT provides finality with no possibility of reorganisation after QC

- **Decoupled execution and consensus** — transactions can start at block height N and finalise at N+1 or later; blocks contain certificates so execution does not need to occur before voting on a block

- **[Two-phase commit](https://pprogrammingg.github.io/web3_modules/hyperscale-rs/module-01b-tx-flow.html)** for cross-shard atomicity, where a coordinator sends prepare messages, shards lock resources, then commit or abort

- **Aggregated provisions** — one provision message per block per height covering all touching transactions, with proofs aggregated via [verkle-tree](https://en.wikipedia.org/wiki/Verkle_tree) inclusion proofs
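The two-chain commit rule at the heart of this list can be sketched in a few lines. The `Qc` structure and function below are hypothetical simplifications, not the project's real types:

```rust
// A quorum certificate, reduced to the two fields the commit rule needs.
#[derive(Clone, Copy, Debug)]
struct Qc {
    block_height: u64,
    parent_height: u64, // height of the block the certified block extends
}

// Two-chain rule (HotStuff-2 style): block B is committed once a QC
// exists for B and a QC exists for a direct child of B at the next
// height. The child's QC proves a quorum has seen B's QC, so B can
// never be reorganised out.
fn committed_height(child_qc: Qc, parent_qc: Qc) -> Option<u64> {
    if child_qc.parent_height == parent_qc.block_height
        && child_qc.block_height == parent_qc.block_height + 1
    {
        Some(parent_qc.block_height)
    } else {
        None // a height gap (e.g. after a view change) defers the commit
    }
}
```

Optimistic pipelining falls out of this shape: a proposer can build on a block as soon as its QC forms, with the commit of the grandparent happening as a side effect.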

### Fault Model and Committees

Each shard runs a committee of approximately 100 validators using the **n = 3f+1** model, requiring 2f+1 (~67) votes for QC formation. A separate beacon-chain-like control plane of all validators tracks which shards exist, which committees are responsible for each, and which portion of the state space each owns. Validators on a shard act as light clients of remote shards: they verify cross-shard messages using BLS signatures on gossiped headers and state-root checks, removing the single point of failure on remote proposers.
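The quorum arithmetic is mechanical and worth making explicit. A sketch, using checked arithmetic in the spirit of the integer-overflow hardening the changelog mentions:

```rust
// Maximum Byzantine validators f tolerated by a committee of n = 3f + 1.
fn max_faults(n: u64) -> Option<u64> {
    n.checked_sub(1).map(|m| m / 3)
}

// Votes needed for a quorum certificate: 2f + 1. For n = 100 this is
// f = 33 and a 67-vote threshold, matching the ~67 figure above.
fn quorum_threshold(n: u64) -> Option<u64> {
    let f = max_faults(n)?;
    f.checked_mul(2)?.checked_add(1)
}
```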

### Radix Engine Integration

Unlike the Foundation's reference implementation, hyperscale-rs integrates directly with the real Radix Engine for smart contract execution, providing actual transaction processing rather than simulated execution. It also underpins the lead developer's case for atomic composability: in a sharded system without atomic commit, developers reliably collapse all their components into a single shard, defeating the purpose of sharding.

## Crate Structure

The project is organised as a [Cargo workspace of 29 crates](https://github.com/hyperscalers/hyperscale-rs/tree/main/crates), each handling a specific responsibility. The structure has matured significantly through April–May 2026 as runner crates were broken out and networking, sync, and dispatch concerns were extracted into dedicated modules.

### Core Consensus

- **types** — fundamental data structures: cryptographic hashes, blocks, votes, quorum certificates

- **core** — trait-based architecture foundation and state machines

- **bft** — Byzantine fault-tolerant consensus mechanics: block proposal, voting, view changes, committee enforcement

- **messages** — wire format for inter-validator communication

- **topology** — committee membership and shard topology snapshots

### Execution

- **engine** — Radix Engine integration adapters

- **execution** — transaction processing with two-phase commit coordination

- **mempool** — transaction pool administration and shard-specific queuing

- **provisions** — batched cross-shard provision messages and verification

- **remote-headers** — header sync for light-client verification of remote shards

### Networking

- **network** — common networking traits

- **network-libp2p** — production networking via [libp2p](https://libp2p.io/) with Yamux auto-tuning

- **network-memory** — in-process simulation networking with fault injection

### Storage

- **storage** — storage abstraction layer

- **storage-rocksdb** — production persistence via [RocksDB](https://rocksdb.org/)

- **storage-memory** — in-memory storage for simulation

- **jmt** — Jellyfish Merkle Tree for state-root computation

### Dispatch & Metrics

- **dispatch**, **dispatch-pooled**, **dispatch-sync** — task scheduling abstraction with parallel and synchronous backends

- **metrics**, **metrics-prometheus**, **metrics-memory** — observability with Prometheus integration and in-memory recording for simulations

### Node, Simulation, Tooling

- **node** — integrates all subsystems into a complete validator

- **production** — production wiring of node + libp2p + RocksDB

- **simulation**, **simulator** — deterministic simulation harness with configurable network conditions, used for routine 300-node validation runs

- **spammer** — load generation utility for performance evaluation and benchmarking

- **test-helpers** — shared utilities across test suites

## Performance

The project routinely runs full simulations against the deterministic harness to validate design changes. A [300-node simulation reported on 13 April 2026](https://t.me/hyperscale_rs) demonstrated the operating characteristics targeted for production:

| **Nodes** | 300 |
| **Transactions submitted** | 30,000 |
| **Transactions completed** | 23,950 (within 30s window) |
| **Average TPS** | 798 |
| **Peak TPS** | 1,147 |
| **Latency p50** | 6.35 s |
| **Latency p99** | 7.75 s |
| **Lock contention** | 0.00% |
| **Total messages** | 9,438,658 |

Per-shard throughput targets are roughly **1,000 TPS at ~5 second finality**, which the lead developer describes as a deliberate trade-off: rather than chase sub-second finality (which is bounded by the irreducible complexity of atomic commit), the project targets validator hardware that home users on consumer fibre can operate, with running costs estimated at "a few dollars a month." Higher per-shard throughput (10,000 TPS) is technically configurable but would require fibre and multi-core machines beyond the home-validator profile. Linear network scale comes from adding shards rather than scaling individual shards harder.
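The linear-scaling arithmetic behind this trade-off is simple enough to state as code. A sketch (the figures are the document's targets, not measurements):

```rust
// Network-wide throughput under linear scaling is just per-shard
// throughput multiplied by the shard count.
fn network_tps(per_shard_tps: u64, shards: u64) -> u64 {
    per_shard_tps * shards
}

// Shards required to reach a network-wide target, rounding up.
fn shards_for_target(target_tps: u64, per_shard_tps: u64) -> u64 {
    target_tps.div_ceil(per_shard_tps)
}
```

At the home-validator target of 1,000 TPS per shard, matching the reference implementation's 500,000 TPS headline takes 500 shards; capacity grows by adding shards, not by pushing individual shards harder.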

## Transaction Flow

The [transaction lifecycle](https://pprogrammingg.github.io/web3_modules/hyperscale-rs/module-01b-tx-flow.html) follows a 14-step pipeline from user submission to finality, spanning three phases:

### Pre-Consensus (Steps 1–6)

A user signs a transaction externally, which is submitted via an RPC gateway. The node receives the raw bytes, converts them to internal events, and performs cross-shard analysis to determine which NodeIDs (components, resources, packages, accounts) are touched. Transactions enter shard-specific mempools; cross-shard transactions are propagated to all involved shards via libp2p Gossipsub.
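The cross-shard analysis step can be sketched as follows. The hash-based partitioning here is an assumption for illustration — the document only says each committee owns a portion of the state space, not how NodeIDs are mapped to it:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeSet;
use std::hash::{Hash, Hasher};

// Map a NodeID to the shard that owns it. Hash-based partitioning is
// an assumption here; the real state-space split may differ.
fn shard_for(node_id: &str, shard_count: u64) -> u64 {
    let mut h = DefaultHasher::new();
    node_id.hash(&mut h);
    h.finish() % shard_count
}

// Cross-shard analysis: a transaction touching NodeIDs that map to
// more than one shard must be propagated to every shard involved.
fn involved_shards(touched: &[&str], shard_count: u64) -> BTreeSet<u64> {
    touched.iter().map(|id| shard_for(id, shard_count)).collect()
}
```

If the resulting set has a single element, the transaction stays in one shard's mempool; otherwise it is gossiped to every involved shard.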

### BFT Consensus (Steps 7–11)

Proposer selection occurs deterministically per round (e.g., round-robin by validator identity). The proposer builds a block from mempool transactions. Validators check the block and broadcast their votes. A quorum certificate is formed once 2f+1 (~67 of 100) votes are collected — notably, the QC is not sent as a separate message but is assembled by the next proposer from the collected votes. The block is committed once the commit rule is satisfied.
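Deterministic proposer selection means every honest validator computes the same proposer with no extra communication. A minimal round-robin sketch — the real scheme may differ (e.g. stake weighting or shuffling):

```rust
// Round-robin proposer selection over a fixed committee: the proposer
// for a round is simply the validator at index (round mod committee size).
// Panics on an empty committee, which a real implementation would reject
// at topology-construction time.
fn proposer_for_round<'a>(round: u64, committee: &[&'a str]) -> &'a str {
    committee[(round % committee.len() as u64) as usize]
}
```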

### Execution & Finality (Steps 12–14)

Committed transactions are executed per shard using the Radix Engine. Cross-shard coordination uses a two-phase commit protocol where the coordinator sends prepare messages, shards lock resources without visible state changes, then commit or abort with state applied in an agreed order. BFT provides one-round finality with no possibility of reorganisation.
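The coordinator's commit-or-abort decision reduces to unanimity over the prepare votes. A hedged sketch — the types are hypothetical, and real cross-shard 2PC also carries proofs and certificates:

```rust
#[derive(PartialEq, Debug)]
enum Outcome {
    Commit, // every involved shard locked its resources
    Abort,  // at least one shard refused; all locks are released
}

// Phase 1 collects one prepare vote per involved shard; phase 2 commits
// only if every vote is yes. Unanimity is what makes the cross-shard
// transaction atomic: state is applied everywhere or nowhere.
fn decide(prepare_votes: &[bool]) -> Outcome {
    if !prepare_votes.is_empty() && prepare_votes.iter().all(|&v| v) {
        Outcome::Commit
    } else {
        Outcome::Abort
    }
}
```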

## Improvements Over Reference Implementation

According to the project's design notes, hyperscale-rs aims to improve upon the official Hyperscale reference implementation in several areas:

- **Better architected** — modular workspace of 29 crates with paired production/simulation backends, replacing monolithic "kitchen drawer" crates

- **Better tested** — deterministic simulation harness was the first thing written for the project, and 300-node simulations are part of routine iteration cycles rather than rare set-piece events

- **Validators as light clients of remote shards** — only the remote proposer sends provisions (not M-to-N), with header-and-state-root verification removing the single point of failure on the proposer

- **Aggregated provisions and verkle-tree inclusion proofs** — one provision message per block per height carries all transactions, dramatically reducing M-to-N messaging overhead at large committee sizes

- **Real Radix Engine** — uses the actual Radix Engine for smart contract execution rather than an alternative environment, so atomic composability across shards behaves the same as same-shard composition from a developer's perspective

- **Decoupled consensus and execution** — durable persistence is batched into a single fsync per block and execution is moved off the consensus dispatch pool; voting on a block does not require executing it first

- **Safety-violation fixes** — addresses several serious safety violations identified in the reference implementation

The lead developer notes that the reference implementation could only sustain its headline numbers with very small per-shard committees, where the inter-shard messaging cost is hidden. At hyperscale-rs's target committee size (~100 nodes per shard), the older design would saturate the bandwidth of a single data centre at a fraction of the throughput hyperscale-rs achieves on consumer hardware.

## Xi'an RFC and Funding (April 2026)

On 20 April 2026, flightofthefox posted an [RFC for delivering Xi'an for Radix](https://radixtalk.com/t/rfc-xian-delivering-hyperscale-for-radix/2280) to the governance forum. The proposal frames hyperscale-rs as the production candidate for the Xi'an mainnet release and structures funding around six delivery milestones, with the existing codebase to be relicensed under **Apache 2.0** on acceptance.

### Headline Terms

| **Total budget** | $300,000 USD-equivalent across acceptance and five delivery milestones, paid in XRD (30-day TWAP per payment) |
| **Bonus** | 50M XRD on Milestone 6 (mainnet launch) |
| **Timeline** | 18 months to mainnet-ready delivery, plus a 12-month post-launch support window. Best case: testnet Q4 2026, mainnet Q1 2027 |
| **Governance** | Milestones signed off by the [Radix Accountability Council](/community/radix-accountability-council); neutral third-party arbitration for disputes |
| **Termination** | No exit fees if the project is paused or terminated at any milestone boundary |

### Milestones

- **M1** — Validator lifecycle and consensus engine: dynamic topology, node shuffling, validator joining/leaving, the staking model

- **M2** — Engine–Gateway alignment: API breaking changes required for sharded operation

- **M3** — Gateway rewrite for sharded data

- **M4** — Alpha desktop validator application (cross-platform), targeted at home validators on consumer hardware

- **M5** — Post-quantum cryptography integration

- **M6** — Mainnet launch with 12-month support window

### Solo-Developer Risk

The proposal acknowledges that the project remains a solo effort and that adding contributors mid-flight would extend the timeline rather than accelerate it. Three options are offered for managing key-man risk: a larger proposal employing a second person, DAO-purchased key-man insurance for the duration of the project, or accepting the single point of failure given the milestone-based payment structure. The lead developer's stated preference is that hyperscale-rs's documentation and design discussions remain public so that other parties can build the depth of knowledge to maintain the network long-term — "we have to get to a world where multiple entities have a stake (and the knowledge to) maintain the network."

### Status

As of early May 2026, the RFC is under active community review on radixtalk, with the [RAC](/community/radix-accountability-council)'s roadmap stepping through legal review, entity setup, and a community vote. Discussion in late April 2026 confirmed that the existing Radix Foundation can issue a Milestone 1 grant payment ahead of the formal MIDAO entity coming into existence, removing the dependency on completed DAO formation as a blocker for kicking off work.

## Development & Community

hyperscale-rs is developed openly with an [active Telegram community](https://t.me/hyperscale_rs) that has grown from 319 members at the project's public unveiling in February 2026 to **421 members** by May 2026. Day-to-day development is tracked through automated GitHub commit notifications in the channel via a Kit-Watcher bot.

### Contributors

- **flightofthefox** ([proven.network](https://proven.network)) — lead developer, 1,100+ commits, sole driver of the architecture

- **kaldeberger** — secondary contributor, ~70 commits

- **shambupujar**, **cyril88888**, **pprogrammingg** — community contributors

- **wizzl0r** — channel owner, PR reviews and testing infrastructure

- **Radical** — code reviewer and author of the [community transaction-flow documentation](https://pprogrammingg.github.io/web3_modules/hyperscale-rs/module-01b-tx-flow.html)

### Recent Development (March–May 2026)

Between 20 March and 5 May 2026, the project recorded **376 commits**, all on the `main` branch and almost entirely from flightofthefox. CI had logged 642 passing runs by mid-April. Headline themes:

- **Foundational consensus declared complete** (13 April 2026) — "all the major foundational consensus pieces are pretty solid now. Not anticipating any more major changes just light tidy ups and doc work from here"

- **Networking refactor** — extraction of networking from runner crates into dedicated `network-libp2p` / `network-memory` crates; switch to default Yamux configuration with auto-tuning

- **Persistence and sync** — decoupled consensus advancement from durable persistence, batched block persistence into a single fsync, reorganised `protocols` module into sibling sync and fetch modules, reclamation of freed sync slots

- **BFT hardening** — vote validation rejects out-of-range committee indices, topology snapshots enforce committee/validator-info consistency, block transactions wrapped in `Arc` to remove duplicate allocations, integer-overflow safety in quorum arithmetic

- **Storage** — separated SST and WAL paths in RocksDB to allow operators to host them on different filesystems, refactored typed column-family encoding into `DbEncode` + `DbCodec`

- **Dispatch** — moved state-root verification onto its own dispatch pool, removed direct `rayon` dependency from the production build, eliminated double mutex locks in `SubstateOverlay` rebuild

### Next Major Spike: Dynamic Topology

With foundational consensus settled, the next major work item is **dynamic topology** — node shuffling between shards across epochs, validator joining and leaving, and the staking-and-jailing rules required for home-validator-class hardware. This work is the headline content of [Milestone 1 of the Xi'an RFC](https://radixtalk.com/t/rfc-xian-delivering-hyperscale-for-radix/2280) and requires social consensus on staking changes, so it is being specced in the proposal rather than landed unilaterally.

### Funding History

Until April 2026 the project was sustained by community donations through the **FoxFund** initiative. In April 2026 the lead developer asked supporters to redirect their support into voting for the Xi'an RFC rather than continuing direct donations: *"no more donations tho lads — just support foxy proposal for xi'an if you want to give back. it will have a hefty enough price tag."*

## External Links

- [GitHub Repository — hyperscalers/hyperscale-rs](https://github.com/hyperscalers/hyperscale-rs)

- [RFC: Xi'an — Delivering Hyperscale for Radix (radixtalk)](https://radixtalk.com/t/rfc-xian-delivering-hyperscale-for-radix/2280)

- [Telegram Community — hyperscale-rs for Radix](https://t.me/hyperscale_rs)

- [Transaction Flow Documentation](https://pprogrammingg.github.io/web3_modules/hyperscale-rs/module-01b-tx-flow.html)

- [Radix Labs Roadmap — To Hyperscale and Beyond](https://www.radixdlt.com/blog/radix-labs-roadmap---to-hyperscale-and-beyond)

- [Hyperscale Alpha Part I — The Inception of a Hybrid Consensus Mechanism](https://radixecosystem.com/news/hyperscale-alpha-part-i-the-inception-of-a-hybrid-consensus-mechanism-the-radix-blog-radix-dlt)

- [Hyperscale Update — 500k+ Public Test Done](https://getradix.com/updates/news/hyperscale-update-500k-public-test-done-the-radix-blog-radix-dlt)

- [Wiki: Radix Mainnet (Xi'an)](/contents/tech/releases/radix-mainnet-xian)

- [Wiki: Radix Accountability Council](/community/radix-accountability-council)