Goodbye REST, Hello CQRS

At a networking meetup last week, something struck me. The people building content - apps, sites, tools - weren’t building for a broad audience anymore. They were building for an audience of one: themselves. Or more accurately, their own companies.

This went hand in hand with the news in early February that Claude-based agents had wiped roughly a trillion dollars off global stock markets. Investors ran the numbers on SaaS offerings and decided the value proposition no longer stacks up. If an AI agent can build (or simply perform) a bespoke solution that delivers the required output for a single organisation, why pay for a generic SaaS product? Especially one where you use only 25% of the features, or one that fits only 80% of your needs. At the end of the day, the backend is just data, and the presentation layer is an easily duplicated - and then customised - UX/UI layer that a simple prompt can generate.

That got me thinking about something I know well: how we segregate audiences on the internet. As a backend engineer in payments whose coding superpower has always been identity and authorisation, I’ve spent years thinking about who sees what, and how we structure our domains to serve different consumers. What I’m realising now is that we have a new consumer we’ve never had to design for before.

The Way We’ve Always Done It

If you’ve ever looked at a URL and understood what it was doing, you already know how audience segregation works on the internet. We use domains, subdomains and subfolders to direct different audiences to different experiences. It’s so fundamental that we barely think about it.

Take Apple as an example. You visit apple.com and you get the global storefront. Navigate to apple.com/au/ and you get content tailored for an Australian audience - local pricing, local availability, local regulatory information. Head to store.apple.com and you’re in a completely different user experience optimised for purchasing. Same company, same data underneath, but different audiences served by different surfaces for different experiences.

We do the same with any utility or application these days. Your typical SaaS product has app.example.com hosting the frontend - the rich, interactive, complicated user interface that humans interact with. Behind it sits api.example.com, powering the data feed for reactive updates. Humans interact with app, and app interacts with api.

This pattern is underpinned by identity and authorisation. JWTs, OAuth tokens, API keys - they all serve the same purpose: verifying who is making the request and what they’re allowed to see. The domain structure and the auth layer work together to ensure the right audience gets the right experience. A browser session on app.example.com carries a user’s JWT, which the app uses to talk to api.example.com. The JWT itself lists an issuer, an audience and a set of claims that allow that request to be authorised (or not!).
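The issuer/audience/claims check at the heart of that flow can be sketched in a few lines. This is a minimal illustration, assuming the JWT’s signature has already been verified upstream; the issuer, audience and scope names are invented for the example, not taken from any real service.

```python
import time

# Illustrative values - a real service would load these from configuration.
EXPECTED_ISSUER = "https://auth.example.com"
EXPECTED_AUDIENCE = "api.example.com"

def authorize(claims: dict, required_scope: str) -> bool:
    """Return True only if the decoded JWT claims permit this request."""
    if claims.get("iss") != EXPECTED_ISSUER:
        return False  # token was minted by a different issuer
    if EXPECTED_AUDIENCE not in claims.get("aud", []):
        return False  # token was intended for a different surface
    if claims.get("exp", 0) < time.time():
        return False  # session has expired
    # Scopes are commonly a space-separated string in the "scope" claim.
    return required_scope in claims.get("scope", "").split()

claims = {
    "iss": "https://auth.example.com",
    "aud": ["api.example.com"],
    "exp": time.time() + 3600,
    "scope": "user:read user:write",
}
print(authorize(claims, "user:write"))  # True
print(authorize(claims, "admin"))       # False
```

The same three-part check - who issued this, who was it for, what may it do - applies whether the credential is a user JWT or an M2M API key.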

A further integration calling downstream-data.example.com carries an API key and this has been our traditional M2M, or machine-to-machine, communication. Different consumers, different credentials, different surfaces.

This has worked brilliantly for twenty years. But it was designed with an assumption that is no longer true: that every consumer either is a human, or was built by a human developer who read your API documentation and wrote integration code against it.

The New Audience

Now we have agents. And agents are a fundamentally different kind of consumer.

An agent doesn’t need a frontend. It doesn’t need a beautifully designed React application with responsive layouts and loading spinners. It doesn’t need your carefully crafted user experience. What it needs is a way to discover what your service can do, understand how to do it, and then do it.

In practical terms, an agent needs a Swagger file. Or better yet, an MCP instruction set - a structured, machine-readable description of your service’s capabilities that an AI model can consume and act on without a human developer writing integration code in between.

This changes everything about how we think about our domain model. We’ve always had two audiences: end users on app and developers on api. Now we have a third: agents. And agents don’t fit neatly into either of the existing buckets. They’re not end users who need a UI. But they’re not traditional API consumers either, because there’s no human developer sitting between the agent and your service, translating documentation into code. And they don’t have the authority to access the downstream data layer directly either, because they sit beyond the control of the data owner and act on behalf of a human consumer. So where do they fit?

Prior to 2026 the developer was the translator. They would read your REST API docs, understand the resource model, write the integration, handle the edge cases, and build the error handling. The agent skips that entire step. It goes straight from “I need to update a user” to calling your service. There’s no human in the loop to interpret your API’s quirks and conventions, yet the agent needs to interact with your service just as competently as a human developer would. The difference is that the agent doesn’t always understand your REST conventions - but it does understand commands and capabilities.

Goodbye REST

And this is where REST starts to show its age.

REST - Representational State Transfer - was a brilliant convention for its time. It gave human developers a predictable mental model: resources as nouns, HTTP methods as verbs. You have a /users endpoint. GET retrieves, POST creates, PUT replaces, PATCH updates, DELETE removes. It’s intuitive, well-documented, and universally understood by developers.

But that’s the key point: it was designed to be understood by developers. REST’s conventions - the distinction between PUT and PATCH, the use of HTTP status codes to convey meaning, the nested resource paths like /users/123/orders/456/items/789 - these are all patterns that make sense when a human is reading documentation and writing code. They’re ergonomic for people.

Agents don’t think in terms of HTTP verbs and resource hierarchies. An agent thinks in terms of actions: “create a user”, “update a user’s email”, “list all orders for this customer”. The mapping from intent to REST endpoint is a translation step that exists purely because REST was designed for human developers. As Gothmog declares in The Return of the King, “The age of Men is over.” We’re moving to the time of machines.

Hello CQRS

CQRS - Command Query Responsibility Segregation - isn’t new. It’s been around in enterprise architecture circles for years. But it maps almost perfectly to how agents want to interact with services.

Forget CRUD on /users with GET, POST, PUT, PATCH and DELETE. Instead, think about commands:

{
  "command": "createUser",
  "payload": {
    "name": "Daniel Bryar",
    "email": "daniel@example.com",
    "phone": "+1-123-555-4567"
  }
}
{
  "command": "updateUserEmail",
  "payload": {
    "email": "new@example.com"
  }
}

Every command is a POST to a single endpoint. We route each one to a command interpreter that invokes the same controller logic we used for REST. The underlying business logic doesn’t change; the interface does.
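A command interpreter like that can be very small. Here is a minimal sketch, assuming JSON bodies shaped like the examples above; the handler names and their return values are illustrative stand-ins for the existing controller logic.

```python
import json

def create_user(payload: dict) -> dict:
    # In a real service this would invoke the same controller logic
    # that previously sat behind POST /users.
    return {"status": "created", "user": payload["name"]}

def update_user_email(payload: dict) -> dict:
    # ...and this the logic behind PATCH /users/{id}.
    return {"status": "updated", "email": payload["email"]}

# The command registry: one name, one handler.
COMMANDS = {
    "createUser": create_user,
    "updateUserEmail": update_user_email,
}

def handle_post(body: str) -> dict:
    """Route a single POST body to the matching command handler."""
    envelope = json.loads(body)
    handler = COMMANDS.get(envelope["command"])
    if handler is None:
        return {"error": f"unknown command: {envelope['command']}"}
    return handler(envelope.get("payload", {}))

print(handle_post('{"command": "createUser", "payload": {"name": "Daniel Bryar"}}'))
```

The point is that the dispatch table is the whole interface: adding a capability means adding one entry, not designing a new resource path.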

What about discovery? A GET to the same endpoint produces a brief toolchain listing: the available commands, their descriptions, and their security requirements.

{
  "commands": [
    {
      "name": "createUser",
      "description": "Create a new user account",
      "auth": "bearer token with user:write scope"
    },
    {
      "name": "updateUserEmail",
      "description": "Update the email address for an existing user",
      "auth": "bearer token with user:manage scope, or profile scope for updating own email"
    }
  ]
}

Should the agent need more detail - the payload structure, validation rules, expected responses - it asks:

{
  "command": "help",
  "topic": "updateUserEmail"
}

And gets back everything it needs to structure a complete request. The payload schema, required fields, optional fields, validation constraints, example values. Everything a human developer would have found by reading your API documentation (or swagger file), but delivered in a format an agent can consume directly.
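Both the GET listing and the help command can be served from one registry of per-command metadata. A sketch, assuming the field names mirror the example responses above (the schema details are illustrative):

```python
# Each command registers its own description, auth requirement and schema.
REGISTRY = {
    "updateUserEmail": {
        "description": "Update the email address for an existing user",
        "auth": "bearer token with user:manage scope",
        "schema": {
            "required": ["email"],
            "properties": {"email": {"type": "string", "format": "email"}},
        },
    },
}

def handle_get() -> dict:
    """GET: the brief listing - names, descriptions and auth, no schemas."""
    return {"commands": [
        {"name": name, "description": meta["description"], "auth": meta["auth"]}
        for name, meta in REGISTRY.items()
    ]}

def handle_help(topic: str) -> dict:
    """The 'help' command: full detail for a single command."""
    meta = REGISTRY.get(topic)
    if meta is None:
        return {"error": f"unknown topic: {topic}"}
    return {"name": topic, **meta}

print(handle_get())
print(handle_help("updateUserEmail")["schema"]["required"])
```

Because the listing and the help text are generated from the same metadata the handlers register, the discovery surface can never drift out of sync with the implementation - the self-description is the source of truth.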

This is what MCP looks like under the hood. It’s not a coincidence. MCP formalised what CQRS architects have known for years: that separating commands from queries and making capabilities self-describing is a cleaner model for machine-to-machine interaction than resource-oriented REST.

What This Means

We’re not replacing anything. Humans still need frontends. Human developers still need REST APIs (at least for now). But we are accommodating a new audience, and our architecture needs to reflect that.

Just as we learned to create app.example.com for end users and api.example.com for developers, we now need to think about how we expose our services on agent.example.com. That might be an MCP server. It might be a CQRS-style command endpoint with self-describing capabilities. It might be something we haven’t invented yet.

But one thing is clear: the internet now has two distinct audiences - humans and machine agents - and the way we structure our domains, our APIs, and our auth layers needs to evolve to serve both. The developers and service providers who recognise this shift early and start designing for it will have a significant advantage. The ones who don’t will find their carefully crafted REST APIs being scraped, reverse-engineered, and poorly consumed by agents that were never meant to use them.

The audience is changing. Time to change what we’re building.


Update: From Question to Specification

When I wrote this post, I said the answer “might be something we haven’t invented yet.” So I went and invented it.

OpenCALL - Open Command And Lifecycle Layer - is an open specification that takes the operation-based principles described above and builds them into a complete, transport-agnostic API protocol. One endpoint (POST /call), one envelope, one self-describing operation registry at /.well-known/ops. It serves both human-facing frontends and LLM-powered agents through the same contract - no more maintaining REST for one audience and MCP for another.

The spec goes well beyond the simple command examples in this post: versioned operations (v1:orders.getItem) with a deprecation lifecycle and contractual sunset dates, asynchronous execution with poll-based retrieval, push-based streaming for sensors and telemetry, media ingress for both browsers (native multipart) and agents (pre-signed URIs), chunked data integrity, and transport bindings for HTTP, WebSocket, MQTT, Kafka, WebRTC and QUIC.
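To give a feel for the single-endpoint shape, here is a purely hypothetical client call. Only the POST /call endpoint and the versioned operation name (v1:orders.getItem) come from the description above; the envelope field names ("op", "input") and the order ID are my own placeholders, not taken from the spec - consult the repo for the real envelope.

```python
import json

def build_call(op: str, payload: dict) -> str:
    """Build a hypothetical single-endpoint envelope for POST /call.

    The field names here are placeholders, not the spec's actual envelope.
    """
    return json.dumps({"op": op, "input": payload})

body = build_call("v1:orders.getItem", {"orderId": "ord_123"})
print(body)
```

Whatever the final field names, the structural idea holds: the version lives in the operation name, so deprecating v1 never requires a new URL.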

The repo also includes reference API implementations in Go, Java, Python and TypeScript, all running in Docker containers against a shared Bun test suite that exercises every feature of the spec. Same tests, multiple backends - proving the spec is language-agnostic in practice, not just in theory.

If you have been thinking about these ideas, or have always suspected there must be a better way, check out the repo and contribute to the spec - or just drop a star if you want to see it shine.