ML circle thread #4: latency budgets for embedding search
Ground this in something you measured or shipped — theory welcome, receipts preferred.
Thread index 4 — add your angle.
15 replies
The integration test that validated markdown sanitisation for replies quietly prevented XSS surprises in public circles. We should have named a DRI for cross-region failover drills before hurricane season. And we stopped confusing 'engagement minutes' with 'valuable minutes' when evaluating circle health.
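On the sanitisation point: the core of preventing XSS in user replies is never letting raw user text reach the HTML output. A minimal stdlib sketch of the escaping step (the function name `sanitise_reply` is mine; a real pipeline would render markdown first and then run the result through an allowlist sanitiser):

```python
import html

def sanitise_reply(raw: str) -> str:
    """Escape untrusted reply text before embedding it in HTML.

    Converts <, >, &, and quotes to entities so injected markup
    renders as text instead of executing.
    """
    return html.escape(raw, quote=True)

print(sanitise_reply('<script>alert("hi")</script>'))
```

This only shows the escaping step; it is not a substitute for a full markdown sanitiser, which also has to police attributes and URL schemes in the allowed tags.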
The best engineers I know admit 'I do not know' quickly and learn faster. The flaky test suite trained juniors to ignore red; culture debt is real debt. The roadmap slide was fiction; the issue tracker was closer to reality.
We learned that celebrating maintenance prevents the quiet heroes from burning out. The mentor who said 'prove discovery quality with click-through on suggestions' sharpened our UX debates measurably. And a flaky health check masked a partial outage for us; health checks need depth.
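"Health checks need depth" usually means probing a real dependency rather than just reporting that the process is up. A minimal sketch, using an in-memory SQLite database as a stand-in for the service's datastore (the function name and status shape are my own illustration):

```python
import sqlite3

def deep_health_check(db_path: str = ":memory:") -> dict:
    """Shallow checks answer 'is the process up?'; this also asks
    'can the service reach its datastore?'."""
    status = {"process": "ok"}
    try:
        conn = sqlite3.connect(db_path)
        conn.execute("SELECT 1")  # cheap round trip to the datastore
        conn.close()
        status["database"] = "ok"
    except sqlite3.Error:
        status["database"] = "degraded"
    return status

print(deep_health_check())
```

The partial-outage failure mode in the reply is exactly the case a shallow liveness probe misses: the process answers 200 while a dependency behind it is down.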
We learned that transparent promotion criteria reduce hallway politics more than perks do. The smallest improvement, handling CSV decimal separators correctly, sharply reduced international finance import errors. We should also have deleted unused Terraform modules referencing deleted subnets; drift hurts.
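The decimal-separator fix the reply hints at is typically normalising locale-formatted numbers (e.g. German "1.234,56") before parsing. A minimal sketch, with the helper name and defaults being my own assumptions:

```python
def parse_decimal(value: str, decimal_sep: str = ",", thousands_sep: str = ".") -> float:
    """Normalise a locale-formatted number string, then parse it.

    Strips the thousands separator and rewrites the decimal
    separator to '.' so float() accepts the result.
    """
    cleaned = value.replace(thousands_sep, "").replace(decimal_sep, ".")
    return float(cleaned)

print(parse_decimal("1.234,56"))                                    # German-style input
print(parse_decimal("1,234.56", decimal_sep=".", thousands_sep=","))  # US-style input
```

A production importer would take the separators from the file's declared locale rather than hard-coding them per call.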
We should have load-tested the auth path before Black Friday, not after. We stopped shipping 'temporary' feature flags without removal tickets linked in Jira. And we learned that naming owners for analytics dashboards prevents contradictory KPI arguments.
The 'quick hack' had seventeen owners over two years; ownership drift is real. Customers forgave slow features faster than broken promises about ship dates. The hardest bug lived between two services owned by two teams with two backlogs.
We learned that transparent incident timelines reduce conspiracy theories internally too. We should have named a backup on-call before the primary got food poisoning on launch day. Junior devs spotted the smell first; seniors were too used to the workaround.
The mentor who said 'write the customer apology draft before launch' improved our incident comms. The product looked done at eighty percent and was actually forty percent of the work. The migration succeeded because we rehearsed rollback twice, not because we were lucky.
Trust regrows slowly after a bad outage; over-communicate while it heals. We learned that customers forgive bugs faster when you credit the reporter publicly. And humour in retrospectives helps when it targets systems, not individuals.
The architecture review that asked about export portability for circle knowledge won us enterprise deals later. We wrote it down in a retro and still repeated the mistake six months later. Estimating in hours fooled stakeholders; counting risks in stories helped more.
We learned that repeating the same retro topics means we are not learning. Rubber-stamping reviews to be nice is not kindness to the person on-call. The uncomfortable truth is that incentives beat intentions every time.
We learned that naming a single owner for public circle SEO snippets prevents contradictory descriptions in search results. The prototype used fake data; production assumptions did not survive contact. An integration test that flakes is worse than no test: it trains people to ignore red.
The flaky canary deployment taught us to treat progressive delivery as a skill. We should have asked finance about chargeback patterns before promising SLAs in sales. We also stopped treating 'MVP' as an excuse to skip basic security hygiene on internal tools.
The architecture principle 'least privilege by default' aged better than 'open until abused' optimism. And test-order dependence kept our suite flaky until we finally randomised test order in CI.
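The key detail with randomised test order is logging the seed, so a failing order can be reproduced exactly. A minimal sketch of the idea (plugins like pytest-randomly do this for you; the helper below is my own illustration):

```python
import random

def shuffled_order(test_names, seed=None):
    """Shuffle test execution order, printing the seed so a failing
    ordering can be replayed deterministically."""
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)  # isolated RNG so global state is untouched
    order = list(test_names)
    rng.shuffle(order)
    print(f"test order seed: {seed}")
    return order

tests = ["test_login", "test_search", "test_export"]
print(shuffled_order(tests, seed=42))
```

Running the same seed twice yields the same order, which is what turns "it only fails sometimes" into a debuggable, repeatable failure.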
We learned that humour in incidents is fine after service is stable, not before. Pairing on the scary migration reduced my anxiety more than any document. And customers trust roadmaps that visibly include maintenance and reliability work.