Watch it accrete — then collapse.
Every service, its own ticket queue. Every queue, a human.
Consumer teams file tickets — LB, FW, DNS, cloud — and each queue lands in the inbox of a different service-owner team. Those SMEs spend most of their time triaging: reading the request, chasing clarifications, scoping impact, getting approvals. Only at the very end do they push the change — sometimes via an Ansible play they wrote, mostly via clicks and SSH. Average end-to-end: 13.4 days per change. The bespoke automation is just the final 5 minutes.
- One ticket queue per service type — routed to a different SME team
- Most of the 13.4 days is triage: clarifying requirements, approvals, scoping impact
- Ansible plays are the final step, run by the SME at the infra edge — the consumer never touches them
- Skilled engineers, tribal knowledge, no leverage
Watch a file in the cloud team's IaC repo. Fire a pipeline when it changes.
Consumer teams were already committing to a legacy IaC repository owned by the cloud team — that was the incumbent workflow, and we weren't going to replace it. So we shimmed onto the side of it: a pipeline that watched a specific file, and when entries appeared or changed, our automation ran. No API of our own, no footprint inside the other team's repo. Just a tripwire.
- Legacy IaC repo owned by the cloud team — we were a consumer of its events
- One watched file per service — new service meant a new file and a new pipeline
- No control over the file format, review flow, or branch policy of the upstream repo
- Validation and audit bolted on after the fact, not enforced at intake
One IaC repo for load balancers. Four ways in. A bolted-on dashboard to track what happened.
All of this was just for the F5 / load-balancer use case. We built our own IaC repo for it and let everything flow in. The problem: each consumer wanted a different front door, and we kept saying yes. The Phase-2 shim stayed. Two bespoke microservices with their own APIs got added. One of those microservices was only ever called by a CLI deployment tool owned by a software-engineering team. And because none of those paths could tell a customer whether their request had landed, we built a DB-backed dashboard just to expose request status. Every piece worked. None of them agreed on anything. And every other service (FW, DNS, …) was still going through SMEs or the legacy shim — unsolved.
- Path 1: GitLab pipeline from the legacy cloud-team IaC repo (Phase 2, still running — also doing cloud FW rules)
- Path 2: Bespoke microservice A — its own API, its own schema
- Path 3: Bespoke microservice B — API only ever called by one team's CLI deployment tool
- Bolt-on: DB-backed customer dashboard so consumers could track whether their request had completed — because none of the intake paths carried state back
The interface is the product. Standardise that. Keep the bespoke backend.
The pattern across three bespoke attempts was the same: consumer → declarative request → validation → execution against existing tooling. The bespoke bit was always just the last step. NetOrca standardises everything up to it — one schema language, one API, one state machine, one audit trail — and lets every team plug their existing automations into the "execute" step.
- Schema-first declarative intent — owned by the consumer team
- Validation, approvals, audit — enforced once
- Execute step plugs into the Ansible / Terraform / vendor API you already have
- Onboarding a new team or service is configuration, not code
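The standardised pattern, reduced to a sketch: one intake function validates a declarative request against the owning service's schema, then hands off to whatever executor that team already has. All names here (`ServiceSchema`, `onboard`, `submit`) are hypothetical — this illustrates the shape of the pattern, not NetOrca's API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ServiceSchema:
    # Hypothetical declarative schema: the service owner declares what a
    # valid request must contain.
    name: str
    required: set[str]


# One registry for every service: schema plus the team's existing automation
# entry point (which might wrap an Ansible play, Terraform run, or vendor API).
SERVICES: dict[str, tuple[ServiceSchema, Callable[[dict], str]]] = {}


def onboard(schema: ServiceSchema, executor: Callable[[dict], str]) -> None:
    # Onboarding a new service is configuration, not code: register a schema
    # and point at the automation you already have.
    SERVICES[schema.name] = (schema, executor)


def submit(service: str, request: dict) -> str:
    # The single front door: validate once, audit here, then execute.
    schema, executor = SERVICES[service]
    missing = schema.required - request.keys()
    if missing:
        raise ValueError(f"invalid request, missing fields: {sorted(missing)}")
    return executor(request)
```

The point of the sketch is where the bespoke code lives: only inside `executor`. Everything before it — schema, validation, the intake path itself — is shared across every service.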
Engineering now runs like an internal SaaS. Not a ticket queue.
The automation team's role changed. They don't build one-offs any more. They build services — each with a schema, an SLO, a set of consumer teams, a roadmap. The dashboard replaced the inbox.