# ZisK Prover Setup
Raiko uses the ZisK GPU backend to generate ZK proofs for Surge blocks. This guide covers installation, configuration, and operation.
## How Real-Time Proving Works

1. Catalyst (the orchestrator) receives a new L2 block.
2. Raiko generates a ZK proof using the ZisK GPU backend (~10-17 seconds).
3. The proof is submitted atomically with the block proposal to the `RealTimeInbox` L1 contract.
4. The block is finalized immediately on L1.

No bonds, no proving windows, no on-chain prover registration.
## Hardware Requirements
| Component | Minimum | Recommended |
|---|---|---|
| GPU | CUDA-capable GPU | NVIDIA RTX 5090 or L40 |
| GPU VRAM | 32GB | 48GB+ |
| RAM | 128GB | 256GB |
| CPU | 8 cores | 16 cores |
| Disk | 150GB SSD | 300GB NVMe |
## Performance
| GPU Config | Proof Time |
|---|---|
| 1x L40 | ~27s |
| 2x L40 | ~20s |
| 3x L40 | ~16s |
| 4x L40 | ~15s |
| 8x L40 | ~13-14s |
| 8x RTX 5090 | ~10-11s |
Times are for proving a single block. Our Gnosis deployment uses 8x RTX 5090 GPUs (~10-11s per proof).
## Setup

You can run Raiko on bare metal (recommended for production) or with Docker.

### Bare Metal (Production)
#### System Requirements
- Ubuntu 24.04 (GLIBC >= 2.36 required by ZisK toolchain)
- CUDA drivers installed (our setup uses CUDA 13.0)
- 150GB+ free disk space (for ZisK proving keys)
- Rust toolchain (installed automatically by the script)
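Before building, a quick preflight can catch the most common environment problems. The snippet below is a sketch, not part of Raiko: it assumes a glibc-based distro, and the thresholds simply mirror the requirements above.

```bash
# Preflight sketch: check GLIBC version and free disk space.
# ZisK's toolchain needs GLIBC >= 2.36 and ~150GB free for proving keys.
glibc_version=$(ldd --version | head -n1 | grep -oE '[0-9]+\.[0-9]+$')
echo "GLIBC: ${glibc_version} (need >= 2.36)"

# Free space (GiB) on the filesystem holding $HOME
free_gib=$(df -BG --output=avail "$HOME" | tail -n1 | tr -dc '0-9')
echo "Free under \$HOME: ${free_gib}G (need >= 150G)"

# GPU/driver check (requires NVIDIA drivers to be installed):
# nvidia-smi --query-gpu=name,memory.total --format=csv
```

If the filesystem holding `$HOME` is too small, use the `ZISK_DIR` override described below.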
#### Install System Dependencies

```bash
# Base build tools
sudo apt-get update && sudo apt-get install -y \
  build-essential cmake pkg-config git curl wget nasm \
  gcc-riscv64-unknown-elf \
  libgmp-dev libssl-dev libsodium-dev \
  libomp-dev libomp5 \
  libopenmpi-dev openmpi-bin \
  nlohmann-json3-dev protobuf-compiler \
  clang libclang-dev

# Symlink libomp as libiomp5 (required by the zisk-distributed-worker linker)
sudo ln -sf /usr/lib/x86_64-linux-gnu/libomp.so.5 /usr/lib/x86_64-linux-gnu/libiomp5.so
sudo ldconfig
```
#### Clone and Build

```bash
git clone https://github.com/NethermindEth/raiko.git
cd raiko
```

Install the ZisK backend (installs `.zisk` and `.sp1` with proving keys):

```bash
TARGET=zisk make install
```

If your home directory lacks space, install to an alternate path:

```bash
ZISK_DIR=/path/to/large/disk TARGET=zisk make install
```

Installation takes a while and needs 150GB+ of disk space.

Compile the guest program:

```bash
TARGET=zisk make guest
```
#### Configure

Copy `chain_spec_list.json` from your protocol deployment (generated by `deploy-surge-full.sh`):

```bash
cp /path/to/simple-surge-node/configs/chain_spec_list_default.json host/config/devnet/chain_spec_list.json
```

The `chain_spec_list.json` file is generated during protocol deployment. Copy it as-is; don't modify it unless you know what you're doing.
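As a quick sanity check after copying, you can confirm the file parses as JSON before starting the host. This is a sketch using Python's stdlib; the `validate_spec` helper is illustrative, not part of Raiko.

```bash
# Sketch: verify a chain spec file is valid JSON using python3's stdlib.
validate_spec() {
  local spec="$1"
  if python3 -m json.tool "$spec" > /dev/null 2>&1; then
    echo "OK: $spec parses as JSON"
  else
    echo "ERROR: $spec is missing or not valid JSON" >&2
    return 1
  fi
}

# Usage:
# validate_spec host/config/devnet/chain_spec_list.json
```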
#### Run

```bash
nohup env RUST_LOG=info \
  cargo run --release --features zisk -- \
  --config-path=host/config/devnet/config.json \
  --chain-spec-path=host/config/devnet/chain_spec_list.json \
  > raiko.log 2>&1 &
```

#### Check Logs

```bash
less -R raiko.log
```
### Docker

#### Prerequisites

- Docker and Docker Compose
- NVIDIA drivers installed (`nvidia-smi` should work)
- NVIDIA Container Toolkit
#### Install NVIDIA Container Toolkit

**Ubuntu/Debian:**

```bash
# Add the NVIDIA Container Toolkit repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
**CentOS/RHEL:**

```bash
# Add the NVIDIA Container Toolkit repository
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

# Install the toolkit
sudo yum install -y nvidia-container-toolkit

# Configure Docker to use the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
Verify the installation:

```bash
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```
#### Clone and Configure

```bash
git clone https://github.com/NethermindEth/raiko.git
cd raiko
```

Copy chain specs from your protocol deployment:

```bash
cp /path/to/simple-surge-node/configs/chain_spec_list_default.json host/config/devnet/chain_spec_list.json
```
#### Build and Run

```bash
docker compose -f docker-compose-zk.yml up -d
```

#### Verify

```bash
# Check container status
docker compose -f docker-compose-zk.yml ps

# Check logs
docker compose -f docker-compose-zk.yml logs -f
```
#### Get Batch VKey

On first run, this takes 4-5 minutes:

```bash
curl localhost:8080/guest_data
```

This warms up the ZisK prover and returns the ZisK batch verification key, which must be registered in the SurgeVerifier contract.
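Because the first request can take minutes, a small retry wrapper helps when scripting this step. This is a sketch: the `fetch_guest_data` name, retry counts, and timeouts are illustrative, not part of Raiko.

```bash
# Sketch: fetch guest data with retries and a generous timeout,
# since the first response arrives only after the prover warms up.
fetch_guest_data() {
  local url="$1" out="$2" attempts="${3:-5}" delay="${4:-30}"
  local i
  for i in $(seq 1 "$attempts"); do
    # -f: fail on HTTP errors; --max-time: allow for the slow first response
    if curl -sf --max-time 600 "$url" -o "$out"; then
      echo "saved guest data to $out"
      return 0
    fi
    echo "attempt $i failed; retrying in ${delay}s" >&2
    sleep "$delay"
  done
  return 1
}

# Usage:
# fetch_guest_data http://localhost:8080/guest_data guest_data.json
```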
## Integration with Catalyst

Once Raiko is running, point Catalyst at it:

```bash
RAIKO_URL=http://<raiko-host>:8080
```
Catalyst will automatically send proving requests to Raiko when new L2 blocks arrive.
## Known Issues

### Intermittent ZisK Failures
ZisK sometimes fails on certain transaction types (especially bridge-related ones due to a keccak256 issue). Catalyst retries automatically on failure.
### lib-float Build Failure on RTX 5090

The `float.o` file can disappear mid-archive during parallel compilation. Fix:

```bash
CARGO_BUILD_JOBS=1 TARGET=zisk make install
```
### Cold Start

The first proof takes ~70s due to ZisK and ELF initialization. Subsequent proofs run at ~10-17s.
### GPU Memory

Heavy blocks (many transactions) need more VRAM. Monitor with:

```bash
nvidia-smi -l 1
```

Keep in mind that `nvidia-smi` can briefly lock the GPUs, which may increase proof times.
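For unattended monitoring, a small threshold alert can be scripted instead of watching `nvidia-smi -l 1` interactively. The `check_vram` helper below is hypothetical: the 28000 MiB threshold is just an example, and the 60s sampling interval limits how often `nvidia-smi` touches the GPUs.

```bash
# Hypothetical helper: flag GPUs whose used VRAM exceeds a threshold (MiB).
# Reads one memory.used value per line, as produced by:
#   nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits
check_vram() {
  local threshold="$1" idx=0 used
  while read -r used; do
    if [ "$used" -gt "$threshold" ]; then
      echo "GPU $idx: ${used} MiB used (over ${threshold} MiB)"
    fi
    idx=$((idx + 1))
  done
}

# Usage (sampling every 60s to limit nvidia-smi's impact on the prover):
# while true; do
#   nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits | check_vram 28000
#   sleep 60
# done
```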
### VKey Changes

If the guest code changes, the verification key changes too. Re-fetch and re-register it:

```bash
curl localhost:8080/guest_data
```