Deck / Appendix F · The NatWest journey
Reference customer · NatWest Group

From bespoke glue to a standard door.

We didn't start with a product. We started as a consultancy solving this problem by hand — Ansible plays, custom APIs, pipeline-to-pipeline glue. Every integration was bespoke. NetOrca is what we built once we realised the interface was the thing that needed standardising, not the automation.

Customer changes · 22,000+ · delivered via NetOrca
Active service instances · 13,340 · patterns and firewall rules
Customer teams · ~500 · onboarded across all domains
Applications · 2,300+ · under automated delivery
F.1 — The five phases

We tried to solve this problem four different ways before we built NetOrca.

Each bespoke solution worked — for one team, one integration. The effort to do it again for the next team was almost the same as the first. That's what we standardised.

Watch it accrete — then collapse.

Architecture across the phases
[Architecture diagram spanning the phases. Phase 1: ~500 consumer teams raise load-balancer tickets; a service owner triages, then network-team load-balancer SMEs push the change via Ansible plays or click-and-SSH, averaging 13.4 days per change, mostly triage. Phase 2: a tripwire pipeline watches a file in the cloud team's legacy IaC repo and fires our automation when it changes. Phase 3: two bespoke microservices (one custom REST API, one driven only by an external CLI tool), our own LB IaC repo and automation pipeline, and a DB-backed status dashboard bolted on, all for load balancers only. Phases 4 and 5: consumer teams self-serve through a single service.yaml (service: loadbalancer, pattern: ha-https, backends, hostname); NetOrca's execute step plugs in the existing LB patterns, FW automations, Terraform and vendor APIs, run by the service-owner teams: network, security, cloud and platform.]
01
Phase 1 · The consultancy era · ~2019 — 2021

Every service, its own ticket queue. Every queue, a human.

Consumer teams file tickets — LB, FW, DNS, cloud — and each queue lands in the inbox of a different service-owner team. Those SMEs spend most of their time triaging: reading the request, chasing clarifications, figuring out impact, getting approvals. Only at the very end do they push the change — sometimes via an Ansible play they wrote, mostly via clicks and SSH. Average end-to-end: 13.4 days per change. The bespoke automation is just the final 5 minutes.

  • One ticket queue per service type — routed to a different SME team
  • Most of the 13.4 days is triage: clarifying requirements, approvals, scoping impact
  • Ansible plays are the final step, run by the SME at the infra edge — the consumer never touches them
  • Skilled engineers, tribal knowledge, no leverage
Breaking point: a second team asks for the same thing, and the answer is "we'll come and do it for you". That doesn't scale beyond the team that wrote it.
02
Phase 2 · Shim onto a legacy IaC repo · 2021 — 2022

Watch a file in the cloud team's IaC repo. Fire a pipeline when it changes.

Consumer teams were already committing to a legacy IaC repository owned by the cloud team — that was the incumbent workflow, and we weren't going to replace it. So we shimmed onto the side of it: a pipeline that watched a specific file, and when entries appeared or changed, our automation ran. No API of our own, no footprint inside the other team's repo. Just a tripwire.

  • Legacy IaC repo owned by the cloud team — we were a consumer of its events
  • One watched file per service — new service meant a new file and a new pipeline
  • No control over the file format, review flow, or branch policy of the upstream repo
  • Validation and audit bolted on after the fact, not enforced at intake
Breaking point: we owned none of the contract. Every new service reopened the same conversation with the cloud team about file shape, branch layout, and review policy. No leverage — just more surface.
03
Phase 3 · Automation spaghetti · 2022 — 2023

One IaC repo for load balancers. Four ways in. A bolted-on dashboard to track what happened.

All of this was just for the F5 / load-balancer use case. We built our own IaC repo for it and let everything flow in. The problem: each consumer wanted a different front door, and we kept agreeing. The Phase-2 shim stayed. Two bespoke microservices with their own APIs got added. One of those microservices was only ever called by a CLI deployment tool owned by a software-engineering team. And because none of those paths could tell a customer whether their request had landed, we built a DB-backed dashboard just to expose request status. Every piece worked. None of them agreed on anything. And every other service (FW, DNS, …) was still going through SMEs or the legacy shim — unsolved.

  • Path 1: GitLab pipeline from the legacy cloud-team IaC repo (Phase 2, still running — also doing cloud FW rules)
  • Path 2: Bespoke microservice A — its own API, its own schema
  • Path 3: Bespoke microservice B — API only ever called by one team's CLI deployment tool
  • Bolt-on: DB-backed customer dashboard so consumers could track whether their request had completed — because none of the intake paths carried state back
Breaking point: every new consumer pattern added another intake and another gap in the status story. The automation team was now maintaining a repo, a pipeline integration, two microservices, a CLI dependency, and a dashboard — and still doing the change work underneath. The plumbing had become the job.
04
Phase 4 · The realisation → NetOrca · 2023

The interface is the product. Standardise that. Keep the bespoke backend.

The pattern across three bespoke attempts was the same: consumer → declarative request → validation → execution against existing tooling. The bespoke bit was always just the last step. NetOrca standardises everything up to it — one schema language, one API, one state machine, one audit trail — and lets every team plug their existing automations into the "execute" step.

  • Schema-first declarative intent — owned by the consumer team
  • Validation, approvals, audit — enforced once
  • Execute step plugs into the Ansible / Terraform / vendor API you already have
  • Onboarding a new team or service is configuration, not code
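To make the request/execute split concrete, here is a minimal sketch of the kind of declarative request a consumer team commits, modelled on the service.yaml snippet shown in the phase diagram above; the field names (service, pattern, hostname, backends) are illustrative, not necessarily the exact schema a service owner would publish.

    # service.yaml · declarative intent committed by a consumer team
    # (illustrative sketch based on the diagram's example; the real schema
    #  is whatever the owning service-owner team publishes)
    service: loadbalancer
    pattern: ha-https                 # pre-approved pattern, validated against the owner's schema at intake
    hostname: api.bank
    backends: [10.2.0.4, 10.2.0.5]    # the execute step hands these to the existing LB automation

Everything before the execute step (schema validation, approval, audit) happens once, in the platform; the Ansible play, Terraform module or vendor API call that actually applies the change stays where it already lives.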
Result today: 30 services live, ~500 teams onboarded, 22,000+ changes, 0 change issues at scale (1 during transition).
05
Phase 5 · Today — a customer-facing automation team · 2024 — 2026

Engineering now runs like an internal SaaS. Not a ticket queue.

The automation team's role changed. They don't build one-offs any more. They build services — each with a schema, an SLO, a set of consumer teams, a roadmap. The dashboard replaced the inbox.

30 services live · 13,340 active instances · ~500 teams · 22,000+ changes delivered · cross-charged internally
What this unlocks next: consumer self-service through the Pack assistant, policy-as-code baked into schemas, and AI-assisted service authoring for the automation team itself.
Every bespoke solution solved the problem for one customer. NetOrca solved it for the interface. That's the only thing that turned into leverage.
— The lesson from four years of getting it wrong
F.2 — The growth curve

Once the door was standard, scale stopped being a people problem.

Sourced directly from the NetOrca usage statistics table. Each point is a month-end snapshot. Two curves: customer teams onboarded (left axis) and service items live in the platform (right axis).

Teams and service items onboarded · Sep 2023 — Mar 2026

[Chart: two curves · customer teams (left axis, 0–500) and service items (right axis, 0–14k) · monthly snapshots, Sep '23 to Mar '26 · marker for NetOrca go-live, autumn '23.]
Values are admin-panel snapshots; a gap at one point reflects a reporting reset. Straight-line growth across 30 months — not a hockey-stick fabrication, just an onboarding rhythm that held.
F.3 — Plan vs. reality

What we said we'd do. What we actually did.

The business case that unlocked the original contract. Two years in, the delivered numbers beat every promised number — most by a wide margin.

Original business case

What we said

Applications onboarded: 300 in 24 months
Service catalogue growth: Expand over term
Change risk: Replace manual with repeatable
Cost avoid · customer: £7.5m p.a.
Cost avoid · network toil: £3.5m p.a.
Cost avoid · dev rework: £1.0m p.a.
Delivered

What we did

Applications onboarded: 995 in 13 months
Service catalogue growth: +10 services added
Change risk: 0 issues at scale
Cost avoid · customer: hit — at 3× volume
Cost avoid · network toil: hit
Cost avoid · dev rework: hit
3× volume in half the time · 995 apps in 13 months vs 300 in 24 months
Change issues · 0 / 22,000+ · 1 during transition
Internal recharge recovery · £744k · NetOrca services, from Mar '24
Cost avoid · contract term · £12m · customer + toil + rework
"NetOrca will lower the risk of change by replacing the slow, manual change — subject to human error — with fully tested, automated, repeatable pattern changes."
— Original business case · 2023
F.4 — What changed inside the team

The automation function stopped being an order-taker. It became a product team.

Before

Ticket queue

Inbox of bespoke requests. Each one a conversation. SMEs re-explaining the same thing to different teams. Capacity = headcount.

During

Integration shop

Building glue per team. Each new consumer is a new webhook, a new payload, a new debug path. Capacity scales linearly with demand.

Today

Service catalogue

30 productised services — each with an owner, a schema, an SLO, consumer teams, usage telemetry. Capacity is decoupled from demand. New teams onboard through the door, not through the team.
