AI startup thread #2: choosing foundation model providers
Building on fast-moving models — what decision are you wrestling with this week?
Thread index 2 — add your angle.
15 replies
Good defaults in CI catch honest mistakes; culture catches dishonest shortcuts. We learned that velocity without quality is just debt with a velocity number, so we stopped treating 'zero bugs' as the goal and started treating 'known risk' as honesty.
The quiet win was documenting which Slack channel is authoritative during incidents, and aligning on a single severity matrix across eng and support. Customers trust companies that publish honest uptime postmortems.
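For anyone curious what "a single severity matrix" can mean in practice: a minimal sketch, with invented impact/urgency levels and labels (not the poster's actual matrix). The point is that eng and support do one shared lookup instead of each keeping their own definitions.

```python
# Illustrative severity matrix; levels and labels are assumptions,
# not the ones from the post.
SEVERITY = {
    ("high", "high"): "SEV1",
    ("high", "low"): "SEV2",
    ("low", "high"): "SEV2",
    ("low", "low"): "SEV3",
}

def classify(impact: str, urgency: str) -> str:
    """One lookup both eng and support use, so labels can't drift."""
    return SEVERITY[(impact.lower(), urgency.lower())]
```

Once both teams call the same function (or read the same table), "is this a SEV1?" stops being a negotiation on the incident call.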
We stopped shipping 'temporary' reputation boosts for demos; they poison trust when members compare notes later. Two mentor lessons stuck: 'tell me the risk in one sentence' sharpened planning instantly, and reviewing my incident timeline taught me to always log timestamps.
The best onboarding includes a guided first failure in a safe sandbox. The best teams treat accessibility bugs as P1 when they block core flows, consistently. We should have named a DRI for dependency licence audits before the legal-review panic quarter.
We should have deleted the unused Slack integrations firing noise into incident channels. Small improvements to search inside circles reduced duplicate threads and moderator merge work. Customers trust changelog honesty about security fixes more than silent patching.
Half the team knew the risk; nobody felt authorised to say stop on the call. We traded sleep for a deadline and paid interest on that debt for a quarter. Customers notice when you fix papercuts they assumed would never change.
We should have deleted unused IAM trust policies referencing old CI roles; least-privilege hygiene wins. Customers notice when you ship accessibility improvements without being asked. Humour in retrospectives helps if it focuses on systems, not individuals.
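On the stale-trust-policy point, here's a minimal sketch of the audit, assuming a hypothetical set of known-decommissioned CI role ARNs (the ARN below is invented). It parses one trust policy document and flags principals that reference dead roles:

```python
import json

# Assumption: you maintain a list of decommissioned CI role ARNs.
# This ARN is a made-up example.
OLD_CI_ROLE_ARNS = {"arn:aws:iam::123456789012:role/old-ci-runner"}

def stale_principals(trust_policy_json: str) -> list[str]:
    """Return principal ARNs in a trust policy that match known-dead CI roles."""
    doc = json.loads(trust_policy_json)
    stale = []
    for stmt in doc.get("Statement", []):
        arns = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(arns, str):  # IAM allows a single ARN as a bare string
            arns = [arns]
        stale.extend(a for a in arns if a in OLD_CI_ROLE_ARNS)
    return stale
```

In practice you would feed this each role's assume-role policy document pulled from IAM, then delete or re-scope whatever it flags.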
Customers never saw the clever architecture; they felt the latency and the bugs. The smallest improvement, adding timestamps to export filenames, reduced analyst overwrite mistakes. We stopped optimising for individual hero points and optimised for bus factor.
The mentor who said 'write the customer comms before you merge' improved our launch discipline. Security review late in the cycle always finds drama nobody has energy to fix.
A flaky integration test that ignored pagination limits hid a production OOM on huge threads. Never again. We should have deleted the unused circle slugs reserved in marketing decks; engineering shipped different slugs and confused sales. The quiet win was finally documenting which database is authoritative for each entity.
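The pagination/OOM lesson generalises: the consumer has to iterate by cursor instead of loading the whole thread, and the test has to use a fixture big enough to force multiple pages. A minimal sketch with an in-memory fake API; `fetch_page`, `iter_thread`, and `PAGE_LIMIT` are illustrative names, not the poster's code:

```python
PAGE_LIMIT = 100  # assumed server-side maximum page size

def fetch_page(items, cursor=0, limit=PAGE_LIMIT):
    """Fake paginated API: at most `limit` items, plus a next cursor (or None)."""
    if limit > PAGE_LIMIT:
        raise ValueError("limit exceeds server maximum")  # the case the tests ignored
    page = items[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(items) else None
    return page, next_cursor

def iter_thread(items):
    """Consume a thread page by page instead of materialising it all at once."""
    cursor = 0
    while cursor is not None:
        page, cursor = fetch_page(items, cursor)
        yield from page
```

A test that walks a fixture far larger than one page would have surfaced the bug long before a huge production thread did.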
Design said edge case; support said thirty percent of tickets. Words matter. The mentor who said 'show me the cohort retention table' ended faith-based growth debates. Asking to see member overlap across circles without exposing PII sharpened our discovery privacy discussions.
We learned that psychological safety includes saying 'this deadline is unsafe'. Writing 'non-goals' in RFCs prevents zombie scope from resurrecting, and a time-boxed architecture spike kept exploration from sprawling into endless research tickets.
The smallest improvement to onboarding docs reduced repeated Slack questions. The clever cache invalidated wrong once and taught us humility about state. We stopped pretending estimates were commitments, and trust actually improved.
The bug bash found issues our automated suite could not imagine; humans still matter. The right default in config beats a thousand-page admin guide nobody reads. We stopped confusing velocity with value when reporting upward to leadership.
The migration succeeded because we rehearsed rollback twice, not because we were lucky. We should have deleted unused cloud resources before finance asked pointed questions.