
Common Devnet Issues

This page lists common issues you may hit while setting up the devnet, along with their solutions.

Why isn't an L1 -> L2 bridged tx processed until a new L2 block is produced?

  1. L2 receives L1's state via an anchor transaction (anchorV4WithSignalSlots), which carries the state root as of the latest L1 block. To pick up the state root of the L1 block containing the bridging txn, a new L2 block (and thus a new anchor) must be produced after it. If we are not proposing empty blocks, at least one L2 tx needs to be sent to force a new block to be generated.

  2. Also note: this extra L2 block must be part of the L1 block that follows the L1 block containing the bridging txn. At the moment this happens implicitly, since Kurtosis blocks are fast.
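The simplest way to force that extra L2 block is to send any L2 transaction. A minimal sketch using Foundry's `cast` (the RPC URL, private key, and recipient below are placeholders, not values from this devnet):

```shell
# Placeholder values -- substitute your devnet's L2 RPC and a funded key.
L2_RPC="http://localhost:8547"

# Zero-value transfer to an arbitrary address; its only purpose is to
# trigger production of a new L2 block (and with it, a fresh anchor tx).
cast send 0x0000000000000000000000000000000000000001 \
  --value 0 \
  --rpc-url "$L2_RPC" \
  --private-key "$PRIVATE_KEY"
```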

For an L2 -> L1 bridged tx to be processed, there must already be L1 -> L2 bridged txs. Why?

  1. When 1 ETH is bridged from L1 to L2 and later bridged back from L2 -> L1, a small relayer fee must also be paid. That fee is added on top of the 1 ETH.
  2. Unless other users have bridged in and the L1 pool holds more than 1 ETH, the withdrawal cannot be processed: the bridge holds exactly 1 ETH, so the fee can never be paid.
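The constraint above is simple arithmetic over the bridge's L1 balance. A sketch in shell with illustrative wei amounts (the fee value is an assumption for illustration, not a protocol constant):

```shell
# Illustrative amounts in wei.
POOL=1000000000000000000    # bridge holds exactly the 1 ETH the user locked
AMOUNT=1000000000000000000  # 1 ETH the user wants back on L1
FEE=10000000000000000       # 0.01 ETH relayer fee, paid on top (example value)

NEEDED=$((AMOUNT + FEE))
if [ "$POOL" -lt "$NEEDED" ]; then
  echo "stuck: pool holds $POOL wei but payout needs $NEEDED wei"
fi
```

With more L1 -> L2 deposits, `POOL` grows past `NEEDED` and the withdrawal can clear.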

Zisk prover fails intermittently

Zisk may occasionally fail on certain transaction types, particularly bridge-related transactions. This is a known issue with the Zisk proving backend. Workarounds:

  1. Catalyst will automatically retry failed proof requests.
  2. If failures persist, check GPU memory usage with `nvidia-smi` -- insufficient VRAM can cause failures.
  3. On RTX 5090, if the build itself fails (particularly `lib-float` archive issues), set `CARGO_BUILD_JOBS=1` to force sequential compilation.

GPU not detected in Docker container

  1. Ensure the NVIDIA Container Toolkit is installed: `nvidia-ctk --version`
  2. Verify Docker is configured for GPUs: `docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi`
  3. If the above fails, restart Docker after configuring the NVIDIA runtime: `sudo nvidia-ctk runtime configure --runtime=docker && sudo systemctl restart docker`

Driver fails to sync

  1. Ensure the driver is started with `--fork realtime --realtimeInbox <address>`.
  2. Check that the `RealTimeInbox` contract address matches the one from your protocol deployment (found in `deployment/deploy_l1.json`).
  3. Verify the L1 WebSocket endpoint is accessible and the driver can reach it.
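Steps 1-2 can be combined by pulling the inbox address straight from the deployment artifact. The snippet below uses a throwaway file under `/tmp` for illustration; the real file is `deployment/deploy_l1.json`, and the JSON field name `realTimeInbox` is an assumption -- check your artifact for the actual key:

```shell
# Hypothetical artifact for illustration only.
cat > /tmp/deploy_l1.json <<'EOF'
{"realTimeInbox": "0x1111111111111111111111111111111111111111"}
EOF

# Extract the address and echo the flags the driver should be started with.
INBOX=$(jq -r '.realTimeInbox' /tmp/deploy_l1.json)
echo "start the driver with: --fork realtime --realtimeInbox $INBOX"
```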

deploy-surge-full.sh protocol deployment fails

  1. Verify `.env` has valid, reachable L1 RPC and WebSocket endpoints.
  2. Check that the deployer account has enough ETH for gas; the script deploys ~10 contracts.
  3. The script saves progress -- you can safely re-run it after fixing the issue.
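A quick pre-flight balance check, sketched with illustrative numbers (on a live devnet, fetch the real balance with `cast balance`; the per-contract gas estimate and gas price are rough assumptions, not measured values):

```shell
# Hypothetical balance; on a live devnet use:
#   BALANCE=$(cast balance "$DEPLOYER" --rpc-url "$L1_RPC")
BALANCE=2000000000000000000  # 2 ETH in wei

# Rough requirement: ~10 deployments at ~5M gas each, 1 gwei gas price.
NEEDED=$((10 * 5000000 * 1000000000))

if [ "$BALANCE" -lt "$NEEDED" ]; then
  echo "top up deployer: have $BALANCE wei, want at least $NEEDED wei"
else
  echo "deployer balance looks sufficient"
fi
```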

Bare-metal Zisk install runs out of disk space

The Zisk backend needs 150GB+ for proving keys. Check available space with `df -h`. Use `ZISK_DIR=/path/to/large/disk TARGET=zisk make install` to install on a different partition.

lib-float build failure on RTX 5090

Parallel cargo compilation can race on the `float.o` archive. Fix: force sequential compilation:

```shell
CARGO_BUILD_JOBS=1 TARGET=zisk make install
```

Guest compilation fails (SYSROOT not found)

If `TARGET=zisk make guest` fails because it can't find the RISC-V libc headers, try the manual build:

```shell
cd provers/zisk/agent/guest
# Note: a tilde inside quotes is not expanded by the shell; use $HOME.
SYSROOT="$HOME/.sp1/riscv/riscv64-unknown-elf/include"
CC_riscv64ima_zisk_zkvm_elf="riscv64-unknown-elf-gcc -march=rv64ima -mabi=lp64 -mstrict-align -falign-functions=2" \
CFLAGS_riscv64ima_zisk_zkvm_elf="-isystem $SYSROOT" \
RUSTFLAGS='--cfg getrandom_backend="custom"' \
cargo-zisk build --release
cp target/riscv64ima-zisk-zkvm-elf/release/zisk-batch elf
```

Cold start proof takes ~70 seconds

This is expected. The first proof initializes the PROOFMAN and ASM microservices inside the Zisk backend. Subsequent warm proofs take ~10-17 seconds, depending on the GPU.

VKey mismatch after rebuilding Raiko

Different guest code produces a different verification key. After any guest rebuild:

  1. Get the new VKey: `curl localhost:8080/guest_data`
  2. Register the new VKey in the SurgeVerifier contract on L1
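A hedged sketch of both steps. The `/guest_data` route is from this page, but the `.vkey` field in its response and the `setVKey(bytes32)` setter on SurgeVerifier are assumptions -- inspect the raw response and the contract ABI before using:

```shell
# Step 1: fetch the new VKey from Raiko.
# ASSUMPTION: the response exposes it under ".vkey"; check the raw JSON first.
VKEY=$(curl -s localhost:8080/guest_data | jq -r '.vkey')
echo "new vkey: $VKEY"

# Step 2: register it on L1.
# ASSUMPTION: "setVKey(bytes32)" is a hypothetical function name --
# use the actual setter from your SurgeVerifier ABI.
# cast send "$SURGE_VERIFIER" "setVKey(bytes32)" "$VKEY" \
#   --rpc-url "$L1_RPC" --private-key "$PRIVATE_KEY"
```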

MaxAnchorBlockTooOld error

The proposal references an L1 block that is more than 256 blocks old. This happens when Catalyst takes too long to submit (the EVM's `blockhash()` opcode only covers the most recent 256 blocks). Check Catalyst logs for proof-generation delays or network issues.
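A quick way to confirm you're hitting this limit: compare the current L1 head against the anchor block the proposal references. Sketch with hypothetical block numbers (on a live devnet, get the head with `cast block-number`):

```shell
# Hypothetical values; on a live devnet use:
#   CURRENT=$(cast block-number --rpc-url "$L1_RPC")
CURRENT=1300
ANCHOR=1000  # anchor block referenced by the proposal

AGE=$((CURRENT - ANCHOR))
if [ "$AGE" -gt 256 ]; then
  echo "anchor too old: $AGE blocks behind (blockhash() window is 256)"
fi
```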

SignalSlotNotSent error

A signal slot in the proposal doesn't exist on L1 SignalService. This means the UserOp execution didn't emit the expected SignalSent event. Check:

  1. The UserOp calldata is correct
  2. The L1 Bridge contract is properly deployed
  3. L1 RPC is returning current state (not stale)
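One way to verify the event actually fired is to query L1 logs for `SignalSent` on the SignalService. The event signature below follows the upstream Taiko SignalService and is an assumption for Surge; `$L1_RPC`, `$SIGNAL_SERVICE`, and the block range are placeholders:

```shell
# ASSUMPTION: upstream Taiko signature -- verify against the Surge contract.
TOPIC=$(cast keccak "SignalSent(address,bytes32,bytes32,bytes32)")

# List matching events; an empty result means the signal was never emitted.
cast logs --rpc-url "$L1_RPC" \
  --address "$SIGNAL_SERVICE" \
  --from-block 0 --to-block latest \
  "$TOPIC"
```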

SS_SIGNAL_NOT_RECEIVED during L2 block execution

The anchor's signal slots don't match what was committed in the proposal. Verify:

  1. `chain_spec_list.json` has the correct L2 contract addresses
  2. The Raiko prover is using the same chain spec as the Driver
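A quick consistency check is to dump the name/chain-ID pairs from the chain spec and compare them against the Driver's config. The snippet runs on a throwaway file with hypothetical values; point `jq` at the real `chain_spec_list.json` instead, and note that the field names (`name`, `chain_id`) are assumptions about its schema:

```shell
# Hypothetical two-entry spec for illustration only.
cat > /tmp/chain_spec_list.json <<'EOF'
[
  {"name": "surge_l1", "chain_id": 3151908},
  {"name": "surge_l2", "chain_id": 763374}
]
EOF

# Print one "name chain_id=..." line per entry for eyeball comparison.
jq -r '.[] | "\(.name) chain_id=\(.chain_id)"' /tmp/chain_spec_list.json
```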