Building APIs With Cloudflare Workers: A Practical Introduction
Cloudflare Workers run JavaScript at the edge, meaning your code executes in data centres close to your users rather than in a single region. For APIs, this translates to lower latency for geographically distributed users. I’ve been building production APIs on Workers for the past year, and the platform has matured significantly.
Getting Started
The Wrangler CLI is your primary tool for developing and deploying Workers:
npm create cloudflare@latest my-api
cd my-api
npx wrangler dev
This scaffolds a new Worker project and starts a local development server. The dev server emulates the Workers runtime locally, including bindings to Cloudflare services like KV, D1, and R2.
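Those bindings are declared in the project's wrangler.toml. A minimal sketch, assuming one D1 database, one KV namespace, and one R2 bucket (the binding names, resource names, dates, and placeholder IDs here are illustrative; Wrangler reports the real IDs when you create each resource):

```toml
name = "my-api"
main = "src/index.ts"
compatibility_date = "2024-01-01"

# Each binding becomes a property on the `env` object your handler receives.
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "<your-database-id>"

[[kv_namespaces]]
binding = "CONFIG_KV"
id = "<your-namespace-id>"

[[r2_buckets]]
binding = "STORAGE"
bucket_name = "my-bucket"
```

`npx wrangler dev` reads this file and provides local emulations of each bound service.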
A basic API handler looks like this:
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === '/api/health') {
      return Response.json({ status: 'ok' });
    }

    if (url.pathname === '/api/users' && request.method === 'GET') {
      const users = await env.DB.prepare('SELECT * FROM users').all();
      return Response.json(users.results);
    }

    return new Response('Not Found', { status: 404 });
  },
};
For anything beyond trivial routing, you’ll want a framework. Hono is the most popular choice for Workers:
import { Hono } from 'hono';

const app = new Hono();

app.get('/api/health', (c) => c.json({ status: 'ok' }));

app.get('/api/users', async (c) => {
  const users = await c.env.DB.prepare('SELECT * FROM users LIMIT 50').all();
  return c.json(users.results);
});

app.post('/api/users', async (c) => {
  const body = await c.req.json();
  await c.env.DB.prepare('INSERT INTO users (name, email) VALUES (?, ?)')
    .bind(body.name, body.email)
    .run();
  return c.json({ success: true }, 201);
});

export default app;
Hono is lightweight (under 14KB), designed for edge runtimes, and provides Express-like routing with TypeScript support.
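The POST handler above trusts the request body as-is. A minimal validation sketch; the field names match the handler, while the email regex is a deliberate simplification, not an RFC-compliant check:

```typescript
interface NewUser {
  name: string;
  email: string;
}

// Type guard: narrow an unknown request body to NewUser.
function validateUser(body: unknown): body is NewUser {
  if (typeof body !== 'object' || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.name === 'string' &&
    b.name.trim().length > 0 &&
    typeof b.email === 'string' &&
    /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(b.email)
  );
}
```

In the route, a failed check would return `c.json({ error: 'invalid body' }, 400)` before touching the database.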
D1: SQLite at the Edge
Cloudflare D1 is a serverless SQL database built on SQLite, accessed from your Worker through a binding. It’s the most natural storage option for Workers-based APIs.
Define your database schema with SQL migrations:
-- migrations/0001_create_users.sql
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL,
  created_at TEXT DEFAULT (datetime('now'))
);
Apply migrations with Wrangler:
npx wrangler d1 migrations apply my-database
D1 routes all writes to a single primary instance; with read replication enabled, reads can be served from replicas closer to your users. For read-heavy APIs this architecture works well. For write-heavy workloads, write latency depends on the distance to the primary.
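One way to soften write latency is to avoid multiple round trips to the primary: D1's `batch()` runs a list of prepared statements in a single round trip and a single implicit transaction. A sketch, with the interfaces trimmed to just the members used here (the real types come from `@cloudflare/workers-types`, and the `audit_log` table is hypothetical):

```typescript
// Structural types covering only the D1 members this sketch uses.
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
}
interface D1Database {
  prepare(query: string): D1PreparedStatement;
  batch(statements: D1PreparedStatement[]): Promise<unknown[]>;
}

// Insert a user and an audit row in one round trip to the primary.
async function createUserWithAudit(
  db: D1Database,
  name: string,
  email: string,
): Promise<void> {
  await db.batch([
    db.prepare('INSERT INTO users (name, email) VALUES (?, ?)').bind(name, email),
    db.prepare("INSERT INTO audit_log (action) VALUES ('user_created')"),
  ]);
}
```

If either statement fails, the whole batch rolls back, which also makes this a convenient substitute for explicit transactions (which D1 does not expose over this API).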
KV for Caching
Cloudflare KV (Key-Value) is an eventually consistent store that’s ideal for caching API responses:
app.get('/api/config', async (c) => {
  const cached = await c.env.CONFIG_KV.get('app-config', 'json');
  if (cached) return c.json(cached);

  const config = await fetchConfigFromSource();
  await c.env.CONFIG_KV.put('app-config', JSON.stringify(config), {
    expirationTtl: 300, // 5 minutes
  });
  return c.json(config);
});
KV reads are fast (often single-digit milliseconds) because values are cached in the Cloudflare locations that read them. Writes are slower and eventually consistent: a change can take up to a minute to become visible in every location, so don’t use KV for data that needs immediate consistency.
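The read-through pattern in that route generalizes into a small helper. A sketch, with the KV surface trimmed to the two methods used (real bindings are typed by `@cloudflare/workers-types`):

```typescript
// Minimal slice of the KV API used by the helper.
interface KVNamespaceLike {
  get(key: string, type: 'json'): Promise<unknown>;
  put(key: string, value: string, options?: { expirationTtl?: number }): Promise<void>;
}

// Return the cached value if present; otherwise compute it, cache it
// with a TTL, and return it.
async function cachedJson<T>(
  kv: KVNamespaceLike,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await kv.get(key, 'json');
  if (hit !== null && hit !== undefined) return hit as T;

  const value = await compute();
  await kv.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  return value;
}
```

Note that KV enforces a minimum `expirationTtl` of 60 seconds.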
R2 for File Storage
Cloudflare R2 is S3-compatible object storage with no egress fees. For APIs that handle file uploads:
app.post('/api/upload', async (c) => {
  const formData = await c.req.formData();
  const file = formData.get('file');
  if (!(file instanceof File)) {
    return c.json({ error: 'file field is required' }, 400);
  }

  const key = `uploads/${Date.now()}-${file.name}`;
  await c.env.STORAGE.put(key, file.stream(), {
    httpMetadata: { contentType: file.type },
  });
  return c.json({ key, url: `/api/files/${key}` });
});
The zero egress fees make R2 particularly attractive if your API serves files to users. With S3, serving a terabyte of images costs around $90 in bandwidth alone. With R2, it’s free.
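The upload route stores objects under a key; the matching download route streams them back out. A sketch of that handler's core, with the R2 surface trimmed to the members used (real types come from `@cloudflare/workers-types`):

```typescript
// Structural types covering only the R2 members this sketch uses.
interface R2ObjectBody {
  body: ReadableStream;
  httpMetadata?: { contentType?: string };
}
interface R2BucketLike {
  get(key: string): Promise<R2ObjectBody | null>;
}

// Stream an object out of the bucket, preserving its stored content type.
async function serveFile(bucket: R2BucketLike, key: string): Promise<Response> {
  const object = await bucket.get(key);
  if (object === null) return new Response('Not Found', { status: 404 });

  return new Response(object.body, {
    headers: {
      'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream',
    },
  });
}
```

Because the body is a stream, the Worker never buffers the whole file in memory, which matters given the limits below.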
Limitations to Know
CPU time limits. Workers have a 30-second CPU time limit on the paid plan (10ms on free). This is CPU time, not wall-clock time — waiting for database queries or HTTP requests doesn’t count. But CPU-intensive operations like image processing or heavy JSON parsing can hit the limit.
No persistent connections. Workers don’t maintain persistent connections between requests. Each request is isolated. This means connection pooling to external databases doesn’t work in the traditional sense. Use D1 or Hyperdrive (Cloudflare’s connection pooling service) for database access.
Runtime compatibility. Workers use the V8 isolate model, not Node.js. Some Node.js APIs aren’t available, though Cloudflare has added compatibility layers for many common ones. Check the compatibility list before depending on Node-specific APIs.
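For many Node built-ins (`node:buffer`, `node:crypto`, and others), the compatibility layer is opt-in via a flag in wrangler.toml:

```toml
# Enables Cloudflare's Node.js compatibility layer for this Worker.
compatibility_flags = ["nodejs_compat"]
```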
Deployment
Deploying is a single command:
npx wrangler deploy
Your API is live globally in seconds. There’s no container to build, no server to provision, no load balancer to configure. For the Team400 clients I’ve seen adopt edge architectures, this deployment simplicity is often cited as the biggest practical advantage.
When to Use Workers
Workers are excellent for APIs that are read-heavy, geographically distributed, and don’t require long-running computations. REST APIs, webhook handlers, authentication services, and content delivery APIs are all good fits.
For APIs that need complex background processing, persistent WebSocket connections, or heavy computation, a traditional server or container-based deployment remains more appropriate.
The edge computing model isn’t right for everything, but when it fits, the combination of global distribution, simple deployment, and competitive pricing makes Workers hard to beat.