Why this platform doesn’t suck (and when it does)
Serverless promised fewer servers and more sleep. What we got instead: cold starts, regional bottlenecks, mystery timeouts, and bills that read like plot twists. Workers flips that script. Code runs on V8 isolates in Cloudflare’s global network, spins up in milliseconds, and you can choose to place compute next to your data. It’s genuinely fast for read-heavy web3 APIs and pragmatic enough for write flows when you couple it with Durable Objects and Queues.
This post covers:
- How Workers fits web3: reads from RPCs at the edge, writes without nonce hell, IPFS/Ethereum gateways
- A concrete starter you can deploy today: an edge RPC cache + ERC‑20 balance API with viem, KV, and the Cache API
- A safe(ish) path for server-side signing using a Durable Object for nonce coordination
- Hyperdrive for low-latency Postgres, plus when to pick D1, KV, or R2
- Smart Placement, Queues, Cron/Alarms, and a GitHub Actions CI that won’t make you cry
You’ll see my notes on the rough edges too: eventual consistency in KV, Node polyfills, and when to use viem over ethers.
TL;DR architecture
- Worker API at the edge
- GET /balance/:address — reads via viem HTTP transport from your Ethereum RPC; caches responses in Cache API and KV for a short TTL
- POST /tx — optional write path; enqueues a transaction request to Queues; a Durable Object coordinates nonces; a consumer signs and broadcasts
- Data
- KV for config and short-lived cache keys
- D1 for simple relational bits (usage logs, rate limits)
- Hyperdrive to your existing Postgres for heavier queries (read-through caching at network edge)
- R2 if you need blobs (token lists, JSON artifacts, images)
- Placement: enable Smart Placement so back-end heavy routes execute near your RPC/database
Prerequisites
- Node.js 18+ and npm/pnpm
- Cloudflare account
- Wrangler v3+
- An Ethereum JSON-RPC endpoint (Cloudflare Ethereum Gateway or Alchemy/Infura)
# Create a Workers project
npm create cloudflare@latest edge-web3-api
cd edge-web3-api
npm i viem
Wrangler config
Use TOML if you like, or JSONC (new features show up there first). Here’s a TOML that runs today:
# wrangler.toml
name = "edge-web3-api"
main = "src/index.ts"
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]
[placement]
mode = "smart" # Run close to your RPC/DB when it helps latency
[vars]
ETH_CHAIN_ID = "1"
CACHE_TTL_SECONDS = "15" # API cache TTL
[[kv_namespaces]]
binding = "CONFIG"
id = "<kv-id>"
[[d1_databases]]
binding = "DB"
database_name = "edgeweb3"
database_id = "<d1-id>"
[[durable_objects.bindings]]
name = "NONCE"
class_name = "NonceManager"
# Durable Object classes must be declared in a migration on first deploy
[[migrations]]
tag = "v1"
new_classes = ["NonceManager"]
[[queues.producers]]
queue = "tx-queue"
binding = "TX_OUT"
[[queues.consumers]]
queue = "tx-queue"
max_batch_size = 50
max_retries = 3
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<hyperdrive-id>"
Set your secrets:
npx wrangler secret put ETH_RPC_URL
npx wrangler secret put PRIVATE_KEY_HEX # only if you accept server-side signing
Tip: prefer client-side signing where possible. If you must sign server-side, treat keys as short-lived and scoped; rotate, and keep the “blast radius” tiny.
Edge reads that fly: viem + Cache API + KV
Workers use Web standard APIs, so viem’s HTTP transport runs cleanly. We’ll build a tiny API that returns an ERC‑20 balance, caches it for a few seconds, and avoids hammering your RPC during spikes.
// src/index.ts
import { createPublicClient, http, formatUnits, getAddress } from "viem";
import { mainnet } from "viem/chains";
// Minimal ERC-20 ABI fragment
const erc20 = [
{ "type": "function", "name": "decimals", "stateMutability": "view", "inputs": [], "outputs": [{"type":"uint8"}] },
{ "type": "function", "name": "balanceOf", "stateMutability": "view", "inputs": [{"name":"owner","type":"address"}], "outputs": [{"type":"uint256"}] }
] as const;
export default {
async fetch(req: Request, env: Env, ctx: ExecutionContext) {
const url = new URL(req.url);
const cache = caches.default;
// route: /balance/:token/:address
const parts = url.pathname.split("/").filter(Boolean);
if (parts[0] === "balance" && parts.length === 3) {
const [_, token, addrRaw] = parts;
let address: `0x${string}`;
try {
address = getAddress(addrRaw); // normalises checksum; throws on invalid input
} catch {
return new Response("Invalid address", { status: 400 });
}
const cacheKey = new Request(req.url, req);
const cached = await cache.match(cacheKey);
if (cached) return cached;
const client = createPublicClient({
chain: mainnet,
transport: http(env.ETH_RPC_URL)
});
const [decimals, balance] = await Promise.all([
client.readContract({ address: token as `0x${string}`, abi: erc20, functionName: "decimals" }),
client.readContract({ address: token as `0x${string}`, abi: erc20, functionName: "balanceOf", args: [address] })
]);
const human = formatUnits(balance, Number(decimals));
const body = JSON.stringify({ token, address, balance: balance.toString(), human });
const res = new Response(body, { headers: { "content-type": "application/json", "cache-control": `public, max-age=${env.CACHE_TTL_SECONDS || 15}` } });
ctx.waitUntil(cache.put(cacheKey, res.clone()));
return res;
}
return new Response("Not found", { status: 404 });
},
};
type Env = {
ETH_RPC_URL: string;
CACHE_TTL_SECONDS: string;
};
Why this works well:
- No Node websockets, no vendor SDK oddities; just HTTP JSON‑RPC
- Cloudflare’s Cache API stores the hot responses in the POP that served the request
- Optional: store a copy in KV and return stale‑while‑revalidate for longer TTLs
Gotcha: POP caches are local. Don’t expect a cache hit across continents. If you need strong consistency, route through a Durable Object or DB.
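The optional KV layer mentioned above could be a small stale-while-revalidate helper. This is a sketch, not the implementation from the starter: swrGet, the soft TTL, and the 300-second hard expiry are all illustrative, and KVLike just mirrors the slice of the KV binding we touch so the function is testable outside a Worker.

```typescript
// Sketch: stale-while-revalidate on top of KV (names and TTLs are illustrative)
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

type Cached<T> = { value: T; storedAt: number };

export async function swrGet<T>(
  kv: KVLike,
  key: string,
  softTtlMs: number,
  fetcher: () => Promise<T>,
  waitUntil: (p: Promise<unknown>) => void // pass ctx.waitUntil in a Worker
): Promise<T> {
  const raw = await kv.get(key);
  if (raw) {
    const entry = JSON.parse(raw) as Cached<T>;
    // Fresh enough: serve straight from KV
    if (Date.now() - entry.storedAt < softTtlMs) return entry.value;
    // Stale: refresh out-of-band, but return the stale value immediately
    waitUntil(
      fetcher().then((v) =>
        kv.put(key, JSON.stringify({ value: v, storedAt: Date.now() }), { expirationTtl: 300 })
      )
    );
    return entry.value;
  }
  // Cold miss: fetch, store, return
  const value = await fetcher();
  await kv.put(key, JSON.stringify({ value, storedAt: Date.now() }), { expirationTtl: 300 });
  return value;
}
```

Because KV writes propagate eventually, two POPs may briefly serve different stale values; that’s the trade you accept for longer TTLs.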
Server-side writes without nonce chaos
You can keep writes on the client. If you must sign server-side (bots, automations), you need nonce coordination. Enter a tiny Durable Object that keeps the next nonce per key and serialises sends.
// src/nonce.ts
import { createWalletClient, http, parseEther } from "viem";
import { mainnet } from "viem/chains";
import { privateKeyToAccount } from "viem/accounts";
export class NonceManager {
state: DurableObjectState; env: Env;
constructor(state: DurableObjectState, env: Env) {
this.state = state; this.env = env;
}
async fetch(req: Request) {
const { to, valueEth, data } = await req.json<{
to: `0x${string}`; valueEth?: string; data?: `0x${string}`
}>();
const account = privateKeyToAccount(this.env.PRIVATE_KEY_HEX as `0x${string}`);
const wallet = createWalletClient({ account, chain: mainnet, transport: http(this.env.ETH_RPC_URL) });
// Wallet clients don't expose getTransactionCount; ask the RPC via the
// EIP-1193 request method, then reconcile with our local counter and pick the max
const net = Number(await wallet.request({ method: "eth_getTransactionCount", params: [account.address, "pending"] }));
const local = (await this.state.storage.get<number>("nonce")) ?? net;
const nonce = Math.max(net, local);
const txHash = await wallet.sendTransaction({
to,
nonce,
value: valueEth ? parseEther(valueEth) : undefined,
data
});
await this.state.storage.put("nonce", nonce + 1);
return Response.json({ txHash, nonce });
}
}
export interface Env { ETH_RPC_URL: string; PRIVATE_KEY_HEX: string; }
Wire it from the Worker and push writes through a Queue so your fetch path isn’t blocked by a slow RPC:
// excerpt from src/index.ts
export { NonceManager } from "./nonce"; // DO classes must be exported from the Worker's main module
export default {
async fetch(req, env) {
const url = new URL(req.url);
if (req.method === "POST" && url.pathname === "/tx") {
const payload = await req.json();
await env.TX_OUT.send(payload); // enqueue
return new Response(null, { status: 202 });
}
return new Response("Not found", { status: 404 });
},
// Queue consumer runs out-of-band
async queue(batch: MessageBatch<{ to: string; valueEth?: string; data?: string }>, env: Env) {
for (const msg of batch.messages) {
const id = env.NONCE.idFromName("signer-1");
const stub = env.NONCE.get(id);
const res = await stub.fetch("https://nonce/tx", { method: "POST", body: JSON.stringify(msg.body) });
if (!res.ok) throw new Error(`tx failed: ${await res.text()}`);
}
}
};
Production notes
- Set per-key Durable Objects if you juggle multiple signers
- Back-pressure with Queues if RPCs throttle you
- Log to D1: hash, nonce, status; reconcile periodically
- Rotate keys. If this makes you nervous, that’s correct. Use an external signer/HSM/MPC when you can.
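The “log to D1” note above can be a single prepared statement. A sketch, assuming a hypothetical tx_log table (hash, nonce, status, created_at) and the DB binding from wrangler.toml; D1Like mirrors only the slice of the D1 API used here so the helper is testable in isolation:

```typescript
// Sketch: record each broadcast so you can reconcile against the chain later.
// The tx_log schema is an assumption, not part of the starter.
interface D1Like {
  prepare(sql: string): { bind(...args: unknown[]): { run(): Promise<unknown> } };
}

export async function logTx(db: D1Like, hash: string, nonce: number, status: string) {
  await db
    .prepare("INSERT INTO tx_log (hash, nonce, status, created_at) VALUES (?1, ?2, ?3, ?4)")
    .bind(hash, nonce, status, Date.now())
    .run();
}
```

Call it from the queue consumer right after sendTransaction resolves, and from a Cron job that re-checks pending rows.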
Hyperdrive: global reads to your regional Postgres
If you already have a single-region Postgres, Hyperdrive turns it into a globally fast read endpoint by pooling connections and caching common queries on the Cloudflare network. Use pg or postgres.js as normal; just point the connection string at the Hyperdrive binding.
// src/db.ts
import { Client } from "pg";
export const queryUsers = async (env: Env) => {
const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
await client.connect();
const { rows } = await client.query("select id, address, label from users limit 100");
await client.end();
return rows;
};
type Env = { HYPERDRIVE: { connectionString: string } };
If you don’t have Postgres, start with D1 for simple relational needs and upgrade later.
Picking storage on Workers
- KV: ultra-fast global reads, eventual consistency (~seconds). Great for feature flags, cached JSON, token metadata that rarely changes.
- D1: SQLite semantics, serverless, easy. Good for app data, rate limits, audit trails.
- Durable Objects: stateful actors with strongly consistent KV attached. Use for coordination (locks, nonces, websockets, rooms).
- R2: S3-compatible object storage with zero egress fees. Stick blobs here; pair with Cache API for hot assets.
- Vectorize: vector DB if you’re sprinkling AI into your app.
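The R2-plus-Cache-API pairing from the list above can be as small as this. A sketch under assumptions: the key "tokenlist.json" and the one-hour max-age are illustrative, and R2Like mirrors just the get() slice of the R2 binding; in a Worker you’d wrap the call with caches.default exactly as in the balance route earlier.

```typescript
// Sketch: serve a JSON blob out of R2 with cache headers so the POP keeps it hot.
interface R2Like {
  get(key: string): Promise<{ text(): Promise<string> } | null>;
}

export async function serveBlob(bucket: R2Like, key: string): Promise<Response> {
  const obj = await bucket.get(key);
  if (!obj) return new Response("Not found", { status: 404 });
  return new Response(await obj.text(), {
    headers: {
      "content-type": "application/json",
      "cache-control": "public, max-age=3600", // let the edge cache absorb repeat reads
    },
  });
}
```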
Deployments that feel adult
Environments
Define [env.staging] and [env.prod] in Wrangler, attach different bindings and routing per env. Deploy with -e staging and you get an isolated Worker name and resources.
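A sketch of what those per-env overrides might look like in wrangler.toml (the names and IDs are illustrative; each env gets its own resources):

```toml
# appended to wrangler.toml
[env.staging]
name = "edge-web3-api-staging"

[env.staging.vars]
ETH_CHAIN_ID = "11155111" # Sepolia instead of mainnet

[[env.staging.kv_namespaces]]
binding = "CONFIG"
id = "<staging-kv-id>"
```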
Gradual deployments
Use Gradual Deployments to shift 1% → 100% traffic and auto-roll back if error budgets go red.
CI with GitHub Actions
# .github/workflows/deploy.yml
name: deploy
on:
push:
branches: [ main ]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with: { node-version: 20 }
- run: npm ci
- uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
workingDirectory: .
command: deploy
Troubleshooting and gotchas
- KV is eventually consistent. If you need atomic increments or immediate reads, use a Durable Object or D1.
- Node APIs: enable nodejs_compat and set a recent compatibility_date if you use drivers/ORMs.
- Cache is per-POP: design for locality. If you need cross-POP coherence, coordinate through a Durable Object or database.
- RPC client choice: viem works cleanly on edge. If you use ethers v5, set skipFetchSetup; v6 is ESM-first but check bundling.
- Smart Placement: enable it on back-end heavy routes. It won’t help globally distributed backends or logging sidecars.
- WebSockets: terminate in a Durable Object for chat/streams. Workers support WS pairs and DO hibernation.
- Ports/TLS: Workers only talk to normal HTTPS ports; weird ports can fail. Keep origins sane.
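For the “atomic increments” case in the first bullet, the core of a Durable Object counter is tiny because storage operations within one DO are serialised. A sketch: StorageLike mirrors the get/put slice of this.state.storage, and the limit of 100 is illustrative — inside a real DO you’d construct this in the constructor and call hit() from fetch().

```typescript
// Sketch: strongly consistent counter, the thing KV can't give you.
interface StorageLike {
  get<T>(key: string): Promise<T | undefined>;
  put(key: string, value: unknown): Promise<void>;
}

export class WindowCounter {
  constructor(private storage: StorageLike, private limit = 100) {}

  // Read-modify-write is safe here: a Durable Object runs one request at a time
  async hit(): Promise<{ count: number; allowed: boolean }> {
    const count = ((await this.storage.get<number>("count")) ?? 0) + 1;
    await this.storage.put("count", count);
    return { count, allowed: count <= this.limit };
  }
}
```

Add an alarm that resets "count" every window and you have a fixed-window rate limiter.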
What this looks like in production
- Public RPC cache: a Worker that fronts your RPC provider with 10–30s TTLs per method. KV keys for recent blocks. Cache API for hot paths.
- Wallet analytics: a read-only API that aggregates balances via viem, stores summaries in D1, and pushes heavy joins to Postgres through Hyperdrive.
- Mint backend: client signs, Worker verifies, writes to a Queue, a Durable Object serialises writes, a consumer broadcasts. No 3 am nonce collisions.
Final thoughts
I’ve built NFT mints, Telegram bots, and silly-but-fun automation on too many platforms. Workers is the one I keep coming back to because it behaves like the web platform, it’s globally present, and the pieces click together without theatrical YAML.
Is it perfect? No. But if you want edge reads that scream, writes that don’t trample each other, and deployments that feel modern, this is the serverless that finally keeps up with you.