How do you teach PMs to review AI features without pretending to be engineers?
We want informed judgement, not people guessing at tensors. Looking for rituals, checklists, or training that actually stuck.
15 replies
We run a ninety-minute workshop on reading confusion matrices with real examples from our domain — demystified a lot.
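A rough sketch of the kind of exercise we walk through, with made-up labels and data (imagine a "flag risky transaction" model) so PMs read off the two user-visible failure modes instead of abstract cells:

```python
from collections import Counter

# (true label, model prediction) pairs -- illustrative only
examples = [
    ("risky", "risky"), ("risky", "safe"), ("safe", "safe"),
    ("safe", "risky"), ("risky", "risky"), ("safe", "safe"),
]

counts = Counter(examples)
tp = counts[("risky", "risky")]   # correctly flagged
fn = counts[("risky", "safe")]    # missed risk: the user is exposed
fp = counts[("safe", "risky")]    # false alarm: the user is blocked
tn = counts[("safe", "safe")]     # correctly ignored

print(f"missed risks (FN): {fn}, false alarms (FP): {fp}")
print(f"precision: {tp / (tp + fp):.2f}, recall: {tp / (tp + fn):.2f}")
```

The point of the exercise is that each cell maps to something a customer actually experiences, which is what the sign-off question should be about.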
PMs now sign off on user-visible failure modes, not on model architecture — clearer accountability.
Pairing a PM with a data scientist for a sprint beat any slide deck we tried before.
A simple 'what happens when the model is wrong' section is mandatory in every spec — drives better UX fallbacks.
We borrowed incident review format for bad model outputs — blameless and concrete.
Giving PMs access to a sandbox where they can break the model safely built intuition faster than lectures.
We banned 'just make it smarter' as feedback — reviewers must propose measurable acceptance criteria.
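To make that concrete, here is a sketch of what a "measurable acceptance criterion" can look like instead of "make it smarter"; the thresholds and names are invented for illustration, not anyone's actual bar:

```python
def meets_acceptance_criteria(y_true, y_pred) -> bool:
    """Accept the change only if recall on a small labelled eval set stays
    above 0.90 and the false-alarm rate stays below 0.05 (example numbers)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return recall >= 0.90 and false_alarm_rate <= 0.05
```

The reviewer doesn't need to know how the model works, only what numbers would convince them the feature is shippable.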
Story mapping now includes a lane for human-in-the-loop effort — surfaced understaffed review queues early.
PMs learned to ask about data freshness — turns out many bugs were stale features, not bad algorithms.
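A minimal sketch of the "is this feature stale?" question in code; the feature names, timestamps, and freshness budgets below are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# How old each input is allowed to be before we consider it stale (examples)
FRESHNESS_BUDGET = {
    "account_balance": timedelta(hours=1),
    "purchase_history": timedelta(days=1),
}

def stale_features(feature_timestamps, now=None):
    """Return the features whose last update exceeds their freshness budget."""
    now = now or datetime.now(timezone.utc)
    return [name for name, updated_at in feature_timestamps.items()
            if now - updated_at > FRESHNESS_BUDGET.get(name, timedelta(days=1))]

# Example: a prediction served with a balance last refreshed two hours ago
print(stale_features({
    "account_balance": datetime.now(timezone.utc) - timedelta(hours=2),
    "purchase_history": datetime.now(timezone.utc) - timedelta(hours=6),
}))
# -> ['account_balance']
```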
We share vendor changelogs in the product channel so PMs anticipate behaviour shifts customers will notice.
Role-playing support calls with wrong answers trained empathy for risk disclosure copy.
Checklist item: can a user complete the task if the model returns nothing — saved us from dead-end flows.
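One way to read that checklist item as code: the flow must still let the user finish the task when the model returns nothing. Everything below (function names, the fallback path) is invented for illustration:

```python
def basic_keyword_search(query):
    # Stand-in for a non-ML path, e.g. keyword search or a curated list
    return [f"results matching '{query}'"]

class EmptyModel:
    def predict(self, query):
        return []  # simulates the model returning nothing

def get_suggestions(query, model):
    """Return model suggestions, or a non-model path if the model comes back empty."""
    try:
        suggestions = model.predict(query)
    except Exception:
        suggestions = []
    if not suggestions:
        return {"source": "fallback", "items": basic_keyword_search(query)}
    return {"source": "model", "items": suggestions}

print(get_suggestions("running shoes", EmptyModel()))
# -> {'source': 'fallback', 'items': ["results matching 'running shoes'"]}
```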
Executive demos now include a deliberate failure slide so expectations stay grounded.
We rotate PMs through moderation duty for a day — eye-opening for what 'edge case' really means at scale.
Documentation in plain language beats jargon; our best PM wrote a glossary the whole company uses.