The legacy PM writes a PRD and waits three months.
The Rebound PM writes an eval and ships in a week.
Small teams. No cross-team dependencies. No process gates between a traveler's problem and a fix on their phone. At most companies, the same idea sits in a planning review, then a product meeting, then an engineering queue — and the traveler is still waiting.
"Most PMs were never actually bottlenecked by execution. They were bottlenecked by taste and judgment. Team capacity functioned as a governor that prevented bad ideas from shipping. Remove that governor and you discover who was driving and who was just steering."
Not Product Manager. Not Senior PM. The title names the upgraded work — a role that doesn't exist at the incumbents.
We hire operators. They ship production code in Cursor or Claude Code on a Tuesday afternoon and run it against last night's real disruption logs before dinner. They write their own eval suites in Braintrust. They read a LangSmith trace without asking for help. They prototype the next Cascade Recovery feature themselves instead of filing it as a ticket.
Above all, they have taste — the judgment to know what's worth shipping when capacity is infinite, and the voice to tell a traveler "we think the 8am rebook is better than the 11am — here's why" instead of hiding behind "our AI decided." Beginner's mind on AI tooling. The stack changed last month; this builder already tried the new eval framework, the new observability tool, the new agent primitive — before the team asked.
Prototypes the rebooking agent v2 in Claude Code. Runs it against last Sunday's Frankfurt strike logs before lunch.
Writes 20 evals in Braintrust against last week's failure logs. Defines failure modes — hallucinated hotels, stale pricing, wrong currency.
Ships the experiment to 10% of traffic. Opens a Linear issue with the eval run attached. No sprint, no review meeting.
Reviews eval deltas in LangSmith. Kills one branch. Doubles traffic on the one that beat the baseline on 17/20 scenarios.
Three Looms from three travelers who hit the failure mode. Watches them back. Files two new evals for Monday's build.
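The Tuesday step above, twenty evals against last week's failure logs, boils down to programmatic checks over logged agent outputs. Here is a minimal sketch in plain Python under stated assumptions: the hotel names, the freshness window, and every check function are hypothetical illustrations, not the company's actual Braintrust suite.

```python
# Hypothetical eval sketch. All names here (RebookingOutput, KNOWN_HOTELS,
# the check_* functions) are illustrative, not a real Braintrust project.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Stand-in inventory; a real suite would pull this from the booking system.
KNOWN_HOTELS = {"Hotel Hessischer Hof", "Steigenberger Frankfurt"}

@dataclass
class RebookingOutput:
    hotel: str
    price: float
    currency: str
    priced_at: datetime

def check_hallucinated_hotel(out: RebookingOutput) -> bool:
    # Failure mode 1: the agent offers a hotel that isn't in inventory.
    return out.hotel in KNOWN_HOTELS

def check_stale_pricing(out: RebookingOutput,
                        max_age: timedelta = timedelta(hours=6)) -> bool:
    # Failure mode 2: the quoted price is older than the freshness window.
    return datetime.now(timezone.utc) - out.priced_at <= max_age

def check_currency(out: RebookingOutput, expected: str = "EUR") -> bool:
    # Failure mode 3: a Frankfurt rebooking quoted in the wrong currency.
    return out.currency == expected

CHECKS = [check_hallucinated_hotel, check_stale_pricing, check_currency]

def run_evals(outputs: list) -> tuple:
    """Run every check against every logged case; return (passed, total)."""
    results = [check(out) for out in outputs for check in CHECKS]
    return sum(results), len(results)
```

The point of the sketch is the shape of the work: each named failure mode becomes one small deterministic check, and the suite is just those checks replayed over real logs, so "beat the baseline on 17/20 scenarios" is a count, not an opinion.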
Explicitly not Jira, not Confluence, not roadmap decks in Google Slides. No feature has shipped from a Gantt chart at this company.
They don't manage AI features. They delegate to agents, define failure modes, own the evals, and review the traces.