for user research · indie + SMB
Twenty real user interviews without spending two weeks recruiting and synthesising.
Lacudelph is the interview surface for indie founders and solo PMs who need real qualitative signal without recruiting through Userlytics or paying $20K/seat for enterprise UX tooling. Send the link to your existing users; the AI conductor handles the Socratic work; you watch the cohort picture form turn by turn while the conversations are still running.
What user research costs you today
You either book ten 30-minute Zooms over a fortnight (and listen back, and synthesise yourself), or you pay an enterprise UX research platform $10K-$25K to do the same with better polish. Neither fits an indie team running quarterly discovery on a real budget.
- The DIY route burns calendar weeks. By the time you’ve synthesised, the question that motivated the research has shifted.
- Survey tools (Maze, Hotjar) collect what users will type into a box, not what they’ll discover when probed Socratically.
- Enterprise platforms (Listen Labs, Outset) start at $10K per study — overhead and seat fees built for an enterprise UX team you don’t have.
- Asking 20 users the same set of probes yourself means twenty near-identical conversations you personally have to run, transcribe, and compare.
How Lacudelph changes it
1
Write one brief in four fields
Audience, goal, hypotheses, and what users get back at the end. Lacudelph turns it into the moderator persona + objective sequence + meta-noticing rules.
2
Refine it in conversation (optional)
The platform interviews you about your brief — surfaces audience definitions that drift, objectives that overlap, assumptions you hadn't named. Eight minutes; tightens the brief before users see it.
3
Send the link to your users
They open the URL, the AI conductor adapts to each one — drills where they say something concrete, moves on where they don't. One question at a time, never two. Each interview takes ~12 minutes for the user.
4
Watch the cohort form turn by turn
Every user gets their own structured reflection — sections picked from what they actually said. Trust mechanism, not extraction. The cohort picture builds while interviews are still running: convergent themes, divergent framings, recurring hedges, routing recommendations. Readable while the cycle's still active, not after it closes.
What a UX-research brief looks like
A worked example for a small B2B SaaS validating a new onboarding flow. Substitute your own product’s context.
- Goal
- Find out where the new onboarding flow loses users between sign-up and first successful action — and which step felt like genuine work vs busywork.
- Audience
- Users who signed up in the last 14 days. Both completers (made it to first action) and dropouts (signed up, didn't return). 10-20 of each.
- Hypotheses to check
- (a) Step 3's permissions request feels invasive without explanation; (b) the ‘invite a teammate’ nudge is mistimed; (c) users don't understand what success looks like by the end of the flow.
- What users get back
- A reflection tailored to their answer — sections picked from what they actually said: what they named as the hardest moment, what would have made it 30 seconds shorter, and one micro-experiment they could try in their own product onboarding. Theirs to keep, not extracted.
Common questions
How is this different from Maze, Hotjar, or Userlytics?
Maze and Hotjar collect what users type into a survey or click-test box — structured input, no probing. Userlytics is recruitment + recordings, but the interview itself is whatever the moderator scripts; no cross-turn memory inside the conversation. Lacudelph runs an adaptive multi-turn conductor for every respondent — drilling where they say something concrete, moving on through generic answers — and produces a cohort report that names convergent themes, divergent framings, and recurring hedge shapes across all sessions.
Can I run unmoderated UX research with it?
Yes — that's the default mode. The host writes one brief and sends the link; participants open it asynchronously and the AI conductor handles the conversation without a moderator on the call. The host watches the cohort picture form turn by turn from the dashboard.
Does Lacudelph recruit participants for me?
No — recruitment is yours. Lacudelph is the interview surface; you bring the cohort. Send the link to your existing users (in-product email, Intercom outreach, customer Slack). Participants who don't already know your product probably aren't the right cohort for product discovery anyway.
What tier do I need for cross-cohort aggregation?
Pro tier ($99/mo) unlocks cohort rounds and cross-cohort aggregation. Free and Solo tiers can run individual interviews; Pro is the tier you need for the cohort report — convergent themes, divergent framings, signal-strength bars per objective, routing recommendations.
Is participant data sent to Anthropic?
Yes — interview turns are processed through Anthropic's API (US region) to produce the next conductor turn and the takeaway. We don't share data with third parties beyond the named sub-processors in the DPA. Participants see the consent statement before starting and the AI-authorship disclosure on the takeaway.
Run your next discovery cycle on Lacudelph
Free tier: 1 brief, 5 sessions/mo. Solo $29/mo: 5 briefs, 20 sessions. Pro $99/mo: unlimited briefs, 40 sessions, plus cross-cohort aggregation.