Deploy Surge on Ethereum Sepolia
Deploy your own Surge rollup with real-time proving on the Ethereum Sepolia network.
Prerequisites
- An L1 network with HTTP, WebSocket, and beacon chain endpoints (we use Ethereum Sepolia with QuickNode)
- Docker and Docker Compose
- Git
- An L1 account with ETH for gas (deployer private key)
- A separate machine with CUDA-capable GPU for the ZisK prover (see Prover Setup)
1. Clone and Configure
git clone https://github.com/NethermindEth/simple-surge-node.git
cd simple-surge-node
git checkout realtime
cp .env.devnet .env
Edit .env and set your L1 connection:
| Variable | Description | Example |
|---|---|---|
| `L1_ENDPOINT_HTTP` | L1 RPC endpoint | https://your-provider.com/rpc |
| `L1_ENDPOINT_WS` | L1 WebSocket endpoint | wss://your-provider.com/ws |
| `L1_BEACON_HTTP` | L1 beacon chain endpoint | https://your-provider.com/beacon |
Everything else in .env.devnet has sensible defaults for a devnet deployment, including pre-funded deployer, operator, and submitter keys. For production deployments, override PRIVATE_KEY, OPERATOR_PRIVATE_KEY, and SUBMITTER_PRIVATE_KEY with your own funded accounts.
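For a production deployment, the overrides in .env might look like the following sketch (the `0x...` values are placeholders, not real keys):

```shell
# Placeholder values; substitute the private keys of your own funded accounts.
PRIVATE_KEY=0x...           # deployer account
OPERATOR_PRIVATE_KEY=0x...  # operator account
SUBMITTER_PRIVATE_KEY=0x... # submitter account
```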
If you are running the full stack with a prover, make sure to set RAIKO_HOST_ZKVM to the prover endpoint. By default this is the IP of the VM the prover runs on, followed by port 8080, e.g. http://ip:8080.
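For example, assuming the prover VM is reachable at 203.0.113.10 (a placeholder address), the .env entry would look like:

```shell
# Placeholder IP; replace with the address of the VM your prover runs on.
RAIKO_HOST_ZKVM=http://203.0.113.10:8080
```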
2. Run the Deployment
./deploy-surge-full.sh
The script runs interactively through 7 phases:
Phase 1: Environment
Choose a local or remote deployment. If you are deploying to a remote VM, pick remote (a remote VM is strongly recommended).
Phase 2: L1 General Protocol Contracts
Deploys RealTimeInbox, SurgeVerifier, Bridge, SignalService, token vaults, Multicall, and UserOpsSubmitter to L1. It also generates the L2 genesis, chainspec, and prover configuration files.
Phase 3: Prepare Prover
Registers the prover vkey. Make sure RAIKO_HOST_ZKVM is set and that the ZisK Prover setup has been completed using the prover configuration files generated in the previous phase.
Phase 4: L2 Stack
Deploys the L2 execution client (Nethermind), the Driver, and Catalyst. Choose:
- Option 1: Driver only
- Option 2: Driver + Catalyst (default, needed for real-time proving)
Phase 5: Cross-Chain DEX Protocol Contracts
Deploys L1_VAULT, L2_VAULT, L2_DEX, L2_TOKEN, and links vaults.
Phase 6: DEX UI
Deploys the DEX UI with the provided environment variables. (The DEX UI is built with dynamic configuration, so this step can take ~10 minutes.)
Phase 7: Verification
Runs health checks and prints a deployment summary with all endpoints.
The script saves progress. You can safely cancel with Ctrl+C and resume later.
3. Verify
After deployment:
# Check L2 containers are running
docker compose ps
# Test L2 RPC
curl -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' \
http://localhost:8547
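To confirm that blocks are actually being produced, you can poll eth_blockNumber twice and check that the number increases. A minimal sketch, assuming the L2 RPC listens on localhost:8547 as configured above (`hex_to_dec` and `block_number` are helper names introduced here):

```shell
# Convert a hex block number (e.g. "0x1a") to decimal.
hex_to_dec() { printf '%d' "$(( 16#${1#0x} ))"; }

# Fetch the current L2 block number from the RPC (requires a running node).
block_number() {
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    http://localhost:8547 | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p'
}

# Usage (run against a live deployment):
#   first=$(hex_to_dec "$(block_number)"); sleep 10
#   second=$(hex_to_dec "$(block_number)")
#   [ "$second" -gt "$first" ] && echo "blocks are being produced"
```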
If the Driver is running with --fork realtime, you should see blocks being produced once Catalyst starts proposing.
Our Deployment
For reference, here's what we use for the Gnosis mainnet deployment at *.realtime.surge.wtf:
| Component | Hardware | Notes |
|---|---|---|
| L1 | Gnosis mainnet | QuickNode RPC provider |
| L2 Stack | Standard VM | NMC, Driver, Catalyst |
| Prover | NVIDIA RTX 5090 GPU | Bare-metal Raiko + ZisK (~10-11s proofs) |
| Secondary prover | L40s GPU cluster | ~13-14s proofs |
4. Next Steps
- Set up the ZisK Prover for real-time proving (required for block finalization)
- Deploy a dApp on your Surge network
Cleanup
To tear down everything:
./remove-surge-full.sh --force
This removes all containers, data, and configurations. Add --remove-env true to also delete the .env file.