Build in public — week note #15: what I would redo about pricing
Sharing publicly — what feels scary to post, and what did you learn from reactions?
Thread index 15 — add your angle.
15 replies
Reading old tickets was archaeology that paid better than guessing anew. We learned that kindness plus accountability is the combo that actually ships quality. The smallest logging change, a correlation id, made cross-service debugging feel possible.
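The correlation-id point above fits in a few lines. A minimal sketch (Python; the names, header key, and context-variable setup are illustrative, not the poster's actual stack):

```python
import logging
import uuid
from contextvars import ContextVar

# Hypothetical holder for the current request's id; survives across awaits.
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Attach the current correlation id to every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()
        return True

def handle_request(payload: dict) -> None:
    # Reuse an upstream id if one arrived, otherwise mint one, so the same
    # id shows up in the logs of every service the request touches.
    correlation_id.set(payload.get("x-correlation-id", uuid.uuid4().hex))
    logging.getLogger(__name__).info("processing request")

logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(correlation_id)s %(message)s"))
handler.addFilter(CorrelationFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

handle_request({"x-correlation-id": "abc123"})
```

Once every service forwards the same id, one grep across log streams ties the whole request path together.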
We should have deleted unused IAM trust policies referencing old CI roles — least privilege hygiene wins. The smallest improvement to CSV delimiter handling reduced analyst rage substantially. We learned that writing 'non-goals' in RFCs prevents zombie scope resurrection.
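On the CSV delimiter point, one small improvement is sniffing the delimiter instead of hard-coding a comma. A hedged sketch (the function name and candidate set are assumptions; the stdlib `csv.Sniffer` does the detection):

```python
import csv
import io

def read_rows(text: str, candidates: str = ",;\t|") -> list[list[str]]:
    """Parse CSV text after detecting its delimiter.

    Falls back to the standard comma dialect when detection fails,
    so well-formed files still parse.
    """
    try:
        dialect = csv.Sniffer().sniff(text, delimiters=candidates)
    except csv.Error:
        dialect = csv.excel  # plain comma-separated default
    return list(csv.reader(io.StringIO(text), dialect))
```

With this in place, a semicolon-separated export from a European locale parses the same as a comma-separated one, which is exactly the class of analyst rage the reply describes.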
The mentor who said 'show me the circle health metrics' usefully grounded our weekly community product decisions. The smallest improvement to date formatting reduced international support confusion. A shared definition of 'severity' reduced pager noise overnight.
We learned that naming a 'rollback owner' in the migration ticket reduced panic on incident bridges. The migration succeeded because we rehearsed rollback twice, not because we were lucky. The integration that bounded payload sizes prevented a memory incident during uploads.
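The bounded-payload idea can be as simple as a streaming read with a hard cap, so a request body is never buffered in full before its size is known. A minimal sketch; the limit, chunk size, and names are chosen for illustration, not taken from the post:

```python
from io import BytesIO
from typing import BinaryIO

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # illustrative 10 MiB cap

class PayloadTooLarge(Exception):
    """Raised when an upload exceeds the configured byte limit."""

def read_bounded(stream: BinaryIO, limit: int = MAX_UPLOAD_BYTES) -> bytes:
    """Read at most `limit` bytes; fail fast instead of exhausting memory."""
    chunks: list[bytes] = []
    total = 0
    while True:
        chunk = stream.read(64 * 1024)  # fixed-size chunks, never the whole body
        if not chunk:
            return b"".join(chunks)
        total += len(chunk)
        if total > limit:
            raise PayloadTooLarge(f"payload exceeds {limit} bytes")
        chunks.append(chunk)
```

Rejecting at the boundary turns a would-be memory incident into a clean 413-style error the client can act on.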
We should have deleted the unused 'shadow' circles created by QA seed scripts; they measurably polluted discovery until we purged them. We stopped treating reliability work as invisible glue and started tracking it visibly. The smallest improvement to bulk edit confirmations prevented a costly mistaken archive.
We stopped confusing seniority with willingness to touch legacy code. The architecture decision to store circle threads separately from global feeds aged better than tempting shortcuts. We learned that naming owners for cron schedules prevents mysterious weekend changes.
We stopped confusing 'more circles' with 'healthier network' when honestly measuring product success each quarter. The boring weekly hygiene ticket prevented the exciting weekend outage. We learned that humour about printers is a bonding ritual with no downside, as long as it stays kind.
The smallest copy tweak clarified the cancellation policy and reduced chargebacks. The smallest improvement to bulk action confirmations prevented a costly mistaken delete. The quiet win was documenting which alerts wake humans and which only file tickets.
We should have invested in backup verification jobs that automatically restore to a scratch environment weekly. We stopped confusing motion with progress once we counted outcomes weekly. We should have named a DRI for dependency licence audits before the legal-review panic quarter.
We learned that small releases reduce the blast radius of being human and wrong. Rubber-stamping reviews to be nice is not kindness to the person on-call. We stopped shipping config toggles without an owner and expiry in the ticket.
We learned that customers trust changelog entries that credit external reporters by first name. The mentor who said 'sleep first' was right more often than my pride admitted. We learned that 'temporary' flags need owners and expiry dates in writing.
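The owner-and-expiry rule for toggles and 'temporary' flags can even be enforced in the type, so a flag without those fields simply cannot be constructed. A minimal sketch with assumed field names, not the poster's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Toggle:
    """A feature toggle that cannot exist without an owner and an expiry."""
    name: str
    owner: str       # who to ask before changing or deleting it
    expires: date    # the date after which 'temporary' stops being temporary
    enabled: bool = False

    def is_active(self, today: date) -> bool:
        # An expired toggle is treated as off, whatever its enabled bit says.
        return self.enabled and today <= self.expires

# Hypothetical example flag.
dark_mode = Toggle("dark-mode", owner="alice", expires=date(2024, 6, 30), enabled=True)
```

The point is less the code than the constraint: the ticket, the config schema, or the type system should all refuse a flag with no owner and no end date.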
We optimised for demo day metrics and regretted it during the first real spike. The vendor demo lied by omission; our staging environment told the truth.
We learned that small kindnesses in code review comments sometimes improve retention more than pizza does. We learned that writing a 'definition of done' with QA prevents last-minute thrash. The flaky canary analysis that ignored latency shifts once missed a partial outage; never again.
The quiet win was documenting which team owns each integration point. The mentor's 'measure twice, cut once' advice applied to migrations almost literally. We stopped treating 'tech debt paydown' as a bucket without measurable quarterly outcomes.
The mentor who said 'write the customer comms before you merge' improved launch discipline. We stopped treating accessibility as a polish pass and caught issues earlier. The hardest bug lived between two services owned by two teams with two backlogs.