ML circle thread #3: when RAG beat fine-tuning for us
Ground this in something you measured or shipped — theory welcome, receipts preferred.
Thread index 3 — add your angle.
15 replies
We wrote it down in a retro and still repeated the mistake six months later. In ML practice: naming circle owners in the database export consistently reduces support tickets asking 'who can delete this', and writing a 'definition of ready' for tickets reduced rework at sprint starts.
The boring weekly hygiene ticket prevented the exciting weekend outage. In ML practice: we should have invested in shadow reads before switching the primary database. The mentor who said 'tell me the risk in one sentence' sharpened planning instantly.
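A minimal sketch of the shadow-read idea, under assumptions of my own: `primary` and `candidate` are hypothetical key-value clients with a `get` method, not anything from the poster's stack. Serve every read from the current primary, compare the candidate database on the side, and never let the candidate affect user traffic.

```python
import logging

log = logging.getLogger("shadow_reads")

def shadow_read(key, primary, candidate, mismatches):
    """Serve from the primary; quietly compare the candidate on the side."""
    value = primary.get(key)
    try:
        shadow = candidate.get(key)
        if shadow != value:
            # Record divergence for later analysis; the user still gets `value`.
            mismatches.append(key)
            log.warning("shadow mismatch for %r", key)
    except Exception:
        # The candidate must never break a user-facing read.
        log.exception("shadow read failed for %r", key)
    return value
```

Run this for a week before cutover and the mismatch list tells you whether the switch is actually safe.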
We should have deleted unused webhook signing secrets after rotating endpoints. In ML practice: the design that considered translation for circle descriptions measurably improved international join rates. Good defaults in CI catch honest mistakes; culture catches dishonest shortcuts.
The smallest improvement, adding timestamps to export filenames, reduced analyst overwrite mistakes. In ML practice: we should have named an owner for the cron job everyone assumed was automatic, and invested in backup verification jobs that restore to scratch weekly.
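For what it's worth, the timestamp fix is a few lines. A sketch assuming UTC stamps and a made-up `export_filename` helper (the naming pattern is my assumption, not the poster's):

```python
from datetime import datetime, timezone

def export_filename(base, ext="csv", now=None):
    """Append a UTC timestamp so each export lands in a fresh file."""
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%dT%H%M%SZ")  # sorts lexicographically by time
    return f"{base}_{stamp}.{ext}"
```

Lexicographic sort order doubles as chronological order, so analysts can also find the latest export at a glance.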
The integration that validated webhook signatures stopped a replay scare cold. In ML practice: the right default in config beats a thousand-page admin guide nobody reads. We learned that customers appreciate when you publish honest 'known issues' lists alongside releases.
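A hedged sketch of the kind of check that stops replays, assuming the sender signs `timestamp.body` with HMAC-SHA256; the exact signing scheme varies by provider, and the names here are illustrative, not the poster's integration:

```python
import hashlib
import hmac
import time

def verify_webhook(secret: bytes, timestamp: str, body: bytes,
                   signature: str, tolerance: int = 300) -> bool:
    """Reject stale timestamps (replays) and bad signatures."""
    if abs(time.time() - int(timestamp)) > tolerance:
        return False  # outside the replay window
    expected = hmac.new(secret, timestamp.encode() + b"." + body,
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)
```

The timestamp check is the part that actually defeats replays; the signature alone only proves the message was genuine once.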
The incident ended when we stopped optimising for blame and started restoring service. In ML practice: customers trust circles more when moderators publish clear norms and enforce them kindly and consistently. The integration contract that included timeouts quietly prevented hung workers.
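The 'timeouts in the contract' point can be enforced in the calling code too. A minimal sketch; `call_with_deadline` is a made-up wrapper, not a library API, and a real system would also add retries and circuit breaking:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

# One shared pool; a hung call occupies a worker thread but never blocks the caller.
_pool = ThreadPoolExecutor(max_workers=4)

def call_with_deadline(fn, *args, timeout=2.0, fallback=None):
    """Run an integration call with an explicit deadline and a fallback value."""
    future = _pool.submit(fn, *args)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        # Degrade gracefully instead of hanging the worker.
        return fallback
```

The deadline is part of the contract: downstream knows it has two seconds, and upstream knows what it gets when that budget is blown.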
The smallest improvement to date pickers reduced timezone bug reports from global users. In ML practice: we should have asked support what they hear before prioritising the roadmap. The hardest bug lived between two services owned by two teams with two backlogs.
We stopped treating the 'innovation budget' as a blank cheque and started attaching expected learning milestones each quarter. In ML practice: the database migration was fine; the application assumptions were not. We stopped treating 'busy' as a badge and started celebrating protected focus time.
The mentor who said 'show qualitative quotes alongside metrics' sharpened weekly product reviews for community features. In ML practice: the architecture review that asked about export portability for circle knowledge later won enterprise deals. Documentation written during onboarding beats documentation written for auditors.
The flaky deployment that ignored database migration order taught us to enforce ordering in CI. In ML practice: good leaders protect focus time; calendars are organisational debt too. We learned that customers trust circles when they can leave without shame and data export works on the first try.
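One way to enforce that ordering in CI, assuming migrations carry a four-digit sequence prefix like `0001_init.sql`; the naming convention and the `check_migration_order` helper are my assumptions, not what the poster's team shipped:

```python
import re

MIGRATION = re.compile(r"^(\d{4})_[a-z0-9_]+\.sql$")

def check_migration_order(filenames):
    """Fail the build if numbered migrations skip or duplicate a sequence number."""
    numbers = []
    for name in sorted(filenames):
        m = MIGRATION.match(name)
        if not m:
            raise ValueError(f"bad migration name: {name}")
        numbers.append(int(m.group(1)))
    for prev, cur in zip(numbers, numbers[1:]):
        if cur != prev + 1:
            raise ValueError(f"gap or duplicate between {prev:04d} and {cur:04d}")
    return numbers
```

Wired into CI, two branches that both grab the next number collide at merge time instead of at deploy time, which is exactly where you want the flake to surface.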
The mentor who said 'show me the circle health metrics' grounded weekly community product decisions. In ML practice: we merged on Friday once and the meme became policy faster than any memo. The architecture spike that time-boxed exploration prevented endless research tickets.
We learned that naming owners for analytics pipelines prevents mysterious metric drift nobody owns. In ML practice: the best teams I have seen argue with data and reconcile with food. We stopped treating 'zero bugs' as the goal and started treating 'known risk' as honesty.
We stopped shipping 'just internal' spreadsheets as databases; they always become databases anyway. In ML practice: we underestimated how much coordination tax N+1 microservices really add.
We learned that repeating the same retro topics means we are not learning. In ML practice: we stopped confusing 'more circles' with 'healthier network' when measuring product success. We learned that transparent backlogs reduce hallway rumours and politics.
Good defaults in CI catch honest mistakes; culture catches dishonest shortcuts. In ML practice: we stopped confusing 'engagement minutes' with 'valuable minutes' when evaluating circle health. Customers remember how you behave during failure more than during success.