for incident review · investigator-side
Five thirty-minute interviews running in parallel, with the picture forming while they’re still underway.
Lacudelph is the interview surface for engineers and ICs running post-incident reviews on a clock. Each respondent gets the same structured conductor; you get a cohort view of what converged, what diverged, and which premises nobody questioned — without running every conversation yourself.
The current shape of the work
Post-incident review compresses into the days right after the failure. Five to twelve people who were near it each get interviewed by one of three investigators on Zoom. Notes diverge. The synthesis happens in someone’s head and lands as a Confluence doc that’s out of date by the post-mortem.
- Investigators’ time is the bottleneck. Each interview is 45 minutes plus 30 minutes of write-up.
- Comparability is hand-rolled — three investigators each ask different probes, so cross-respondent signal is the synthesizer’s problem.
- Respondents only see the draft incident summary days later and can’t correct misreadings inline.
- The hidden assumptions (“we all knew the deploy was fragile”) never get tested, because putting the same hypothesis to every respondent is too tedious.
How Lacudelph changes it
1
Investigator writes one brief
Five fields about what you want to learn from each respondent. Lacudelph generates the moderator persona, the probe sequence, the hidden-assumptions list to check, and the takeaway shape.
2
Send the link to all respondents
They open the URL and the AI conductor walks each of them through the same objectives, one question at a time. It adapts: drilling down where they say something concrete, moving on where they don't.
3
Each respondent gets a private reflection
At session close they get a structured artefact of what they themselves named, what shifted during the conversation, and which question they didn't resolve. A trust mechanism, not one-way data extraction.
4
You get the cohort aggregate
Cross-respondent patterns — convergent root causes, divergent framings, hedging shapes that recur across multiple respondents, and routing recommendations for follow-ups. Pro tier.
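To make the aggregate concrete, here is a minimal sketch of the report's shape in TypeScript. Every name in it is an assumption made for illustration, not Lacudelph's actual schema.

```typescript
// Illustrative only: one plausible shape for the cohort aggregate.
// Field names are assumptions for this sketch, not Lacudelph's API.
interface CohortReport {
  convergentRootCauses: string[];   // hypotheses most respondents affirmed
  divergentFramings: Array<{
    topic: string;                  // what the accounts split on
    framings: string[];             // the competing readings
  }>;
  recurringHedges: string[];        // hedging shapes seen across respondents
  followUpRouting: Array<{
    respondent: string;             // who to talk to next
    reason: string;                 // and what to ask about
  }>;
}
```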
What an incident-review brief looks like
A worked example for an outage post-mortem, with a structured sketch after the list. Plug in your own incident’s facts; the conductor handles the rest.
- Goal: Surface the actual cluster of contributing factors, not just the trigger event. Test which of our hypotheses (gaps in tooling, on-call runbooks, deploy practices) each respondent considers load-bearing.
- Audience: Engineers and ICs who were on-call, paged, or actively debugged in the first 90 minutes after the alert fired.
- Hypotheses to check: (a) Deploy froze a state none of us understood; (b) Runbook step 4 didn't actually exist when it was needed; (c) We were all reading the same dashboard and missed the lateral failure.
- What respondents get back: A reflection tailored to their account, with sections picked from what they actually said: what they named, where their account converged with or diverged from the cohort, and any factor they almost-but-didn't surface. Theirs to keep, not extracted.
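Expressed as structured fields, the brief above might look like the sketch below. The `Brief` type and its field names are illustrative assumptions, not Lacudelph's actual format (the product's brief has five fields; this sketch shows the four from the example).

```typescript
// Hypothetical shape for illustration; not Lacudelph's real brief format.
interface Brief {
  goal: string;               // what the round should learn
  audience: string;           // who qualifies as a respondent
  hypotheses: string[];       // hidden assumptions to check with everyone
  respondentArtifact: string; // what each respondent gets back
}

const outageBrief: Brief = {
  goal: "Surface the cluster of contributing factors, not just the trigger event.",
  audience: "Engineers and ICs on-call, paged, or debugging in the first 90 minutes.",
  hypotheses: [
    "Deploy froze a state none of us understood.",
    "Runbook step 4 didn't actually exist when it was needed.",
    "We were all reading the same dashboard and missed the lateral failure.",
  ],
  respondentArtifact:
    "A reflection tailored to their account: what they named, where they " +
    "converged with or diverged from the cohort, near-surfaced factors.",
};
```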
Common questions
How is this different from a Confluence post-mortem with a list of testimonies?
A post-mortem doc collects testimonies after the fact, written by whoever had time. Lacudelph runs every respondent through the same structured conductor (same objectives, same probing strategies, same hidden-assumption checks) so the cross-respondent comparison is built in. The cohort report names convergent root-cause hypotheses, surfaces splits between engineering and product framings, and flags 'inherited' framings (e.g. 'retry storm') that respondents picked up from a standup rather than observed themselves.
Can the conductor handle technical depth — system internals, dashboards, terminology?
The brief-generation step takes your domain context (incident timeline, dashboards, runbook references) and produces an interviewer persona that speaks the technical register. The conductor then probes Socratically: it doesn't replace your investigators' judgment; it parallelises the testimony-gathering layer.
What tier do I need for the cohort report?
Cohort rounds + cross-cohort aggregation are Pro-tier features ($99/mo). Free and Solo tiers can run individual interviews and produce per-session takeaways but don't aggregate across the cohort. See pricing.
Is participant data sent to Anthropic?
Yes — interview turns are processed through Anthropic's API (US region) to produce the next conductor turn and the takeaway. We don't share data with third parties beyond the named sub-processors in the DPA. Participants see the consent statement before starting and the AI-authorship disclosure on the takeaway.
Does it work for blameless post-mortems?
The conductor's persona explicitly avoids attribution of cause to individuals — its frame is 'co-noticing structural patterns' rather than 'identifying who was responsible'. The synthesis surfaces what converged and where the framing diverged across respondents, not who said what.
Run your next post-incident on Lacudelph
Free tier covers a single brief and 5 sessions — enough for one investigation. Cohort aggregation lands on Pro at $99/mo.