
CASE STUDY — Part VIII: Turning Vague Requirements Into a Shipped System

SCENARIO

PM says:

"We need better onboarding."

No metrics. No clarity. High urgency.

Senior execution means converting ambiguity into delivery without chaos.

More scenario detail: The request lands in Slack or a sprint planning meeting. "Better onboarding" could mean: fewer steps, more guidance, faster time-to-value, higher activation rate, or a complete redesign. Without clarification, engineers build the wrong thing—or build everything and ship nothing on time. The PM is busy; the stakeholder wants results yesterday. Your job is to extract a testable hypothesis and a measurable outcome in one focused conversation.

Examples of vague requirements in the wild:

  • "Make it more intuitive."

  • "Improve the user experience."

  • "We need to be more scalable."

  • "Add AI to the product."

  • "Reduce friction in the funnel."

Each is a direction, not a destination. Senior engineers turn them into: "We believe that [change] will increase [metric] by [target] within [timeframe]."


CLARIFY THE OUTCOME

Clarification interview template: questions to ask the PM, and why each matters.

  • What metric moves if we succeed? → Activation rate? Retention? Support tickets? NPS? If they can't name one, propose one.

  • What does success look like in 30 days? → Forces a near-term, measurable target: "more users complete onboarding" only counts once "complete" is defined.

  • What's out of scope for this iteration? → Prevents scope creep. "We're not touching pricing or billing."

  • Who is the primary user? → New user vs. power user? Enterprise vs. SMB? The answer changes the design.

  • What's the biggest drop-off point today? → Focuses effort. If 60% leave at step 3, fix step 3 first.

  • What would we ship if we had half the time? → Reveals priorities. The "half" version is often the right MVP.

Define a measurable goal: By the end of the conversation, you should have: "We will increase 7-day activation from 22% to 28% by simplifying the first 3 steps and adding a progress indicator." That's a hypothesis you can test.
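It's worth pinning down how the metric itself is computed, so "22% to 28%" means the same thing to everyone. A minimal sketch in TypeScript; the record shape and field names are illustrative, not from any particular analytics tool:

// Hypothetical shape: one record per signed-up user in the cohort.
interface CohortUser {
  signedUpAt: Date;
  firstActionAt: Date | null; // null if the user never reached "first action"
}

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// 7-day activation: share of the cohort whose first action happened
// within 7 days of signup.
function sevenDayActivation(cohort: CohortUser[]): number {
  if (cohort.length === 0) return 0;
  const activated = cohort.filter(
    (u) =>
      u.firstActionAt !== null &&
      u.firstActionAt.getTime() - u.signedUpAt.getTime() <= SEVEN_DAYS_MS
  ).length;
  return activated / cohort.length; // e.g. 0.22 for the 22% baseline
}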


DESIGN DOC (SMALL, FAST)

Write 1–2 pages. Not a novel. Not a wiki that grows forever.

Sections:

  • Hypothesis: "We believe that [X] will increase [metric] by [Y]."

  • User journey: Current flow (3–5 steps). Proposed flow. What changes?

  • Constraints: Tech limits, timeline, dependencies. What we're not doing.

  • Proposed changes: Bullet list. UI copy, flow order, new components.

  • Risks + rollout: What could go wrong? How do we roll out (flag, %, rollback)?

Mini example:

## Hypothesis
We believe that reducing onboarding from 7 steps to 4 and adding a progress bar
will increase 7-day activation from 22% to 28%.

## Current flow
1. Sign up → 2. Email verify → 3. Profile → 4. Team invite → 5. Import data →
6. Connect integrations → 7. First action

## Proposed flow
1. Sign up + email → 2. Profile (name only) → 3. First action (guided) → 4. Rest optional

## Constraints
- No backend auth changes this quarter
- Must support existing "skip" behavior for power users

## Rollout
- Feature flag `simplified_onboarding`
- 10% internal, then 25% new users, then 100%
- Rollback: revert flag

SLICE THE WORK (SAFE INCREMENTS)

Work slicing framework:

  1. Instrument first. Add analytics for the current funnel (step completion, drop-off). You can't prove improvement without a baseline; see the instrumentation sketch below.

  2. Ship UI copy + flow improvements behind flag. Change button text, reorder steps, add progress bar. No backend changes yet. Fast to ship, easy to revert.

  3. Add backend changes if needed. Only if the hypothesis requires it (e.g., new "quick start" API). Each slice is independently deployable.

Increment map: For each slice, define: input (what exists), output (what ships), validation (how you know it works). Slice 1: Instrumentation. Slice 2: Progress bar + copy. Slice 3: Step consolidation. Each slice = 1–2 weeks max.
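For slice 1, instrumentation can be one event per funnel step. A sketch assuming a generic `track(event, properties)` client; the client, event name, and step list are illustrative stand-ins for whatever analytics tool you already run:

// Illustrative analytics client; substitute your real one (Segment, Amplitude, etc.).
declare function track(event: string, properties: Record<string, unknown>): void;

const ONBOARDING_STEPS = [
  "signup", "email_verify", "profile", "team_invite",
  "import_data", "connect_integrations", "first_action",
] as const;

type OnboardingStep = (typeof ONBOARDING_STEPS)[number];

// Fire one event per completed step; the dashboard computes
// step-over-step drop-off from these.
function trackStepCompleted(userId: string, step: OnboardingStep): void {
  track("onboarding_step_completed", {
    userId,
    step,
    stepIndex: ONBOARDING_STEPS.indexOf(step), // stable ordering for funnel charts
    completedAt: new Date().toISOString(),
  });
}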

Senior rule:

Ship the smallest thing that validates the hypothesis.


LAUNCH + MONITOR

  • Feature flag: Ship behind `simplified_onboarding`. No big-bang deploy. (A minimal gating sketch follows this list.)

  • Dashboard ready: Before flipping the flag, have a dashboard with: funnel by step, activation rate, cohort comparison (flag on vs off).

  • Rollback plan: If activation drops or support tickets spike, revert the flag. Document the trigger: "Rollback if 7-day activation drops by more than 2 percentage points or support tickets increase by more than 20%." (The check is sketched below.)
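The gate itself stays small. A sketch assuming a hypothetical `flags.isEnabled(flag, userId)` client, where the percentage targeting (10% → 25% → 100%) lives in the flag service's config rather than in application code:

// Hypothetical flag client; substitute LaunchDarkly, Unleash, or your in-house service.
declare const flags: {
  isEnabled(flag: string, userId: string): boolean;
};

function getOnboardingFlow(userId: string): "simplified" | "legacy" {
  // One branch point; rollback is flipping the flag off, no deploy needed.
  return flags.isEnabled("simplified_onboarding", userId) ? "simplified" : "legacy";
}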

Success metrics: Define before launch. "We'll declare success if 7-day activation increases by ≥ 3 percentage points after 2 weeks at 50% traffic." If you hit it, go to 100%. If not, iterate or pivot.

Monitoring: Alert on error rate, latency, and support ticket volume for the new flow. Don't wait for the PM to notice something is broken.
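The rollback trigger is mechanical enough to encode, so nobody has to debate it under pressure. A sketch using the thresholds documented above; the input shape and names are illustrative, fed from whatever your dashboard already computes:

interface LaunchMetrics {
  baselineActivation: number;   // e.g. 0.22 (flag-off cohort)
  treatmentActivation: number;  // flag-on cohort
  baselineTicketsPerDay: number;
  treatmentTicketsPerDay: number;
}

// Encodes the documented trigger: rollback if 7-day activation drops by
// more than 2 percentage points or support tickets rise more than 20%.
function shouldRollback(m: LaunchMetrics): boolean {
  const activationDrop = m.baselineActivation - m.treatmentActivation;
  const ticketIncrease =
    (m.treatmentTicketsPerDay - m.baselineTicketsPerDay) / m.baselineTicketsPerDay;
  return activationDrop > 0.02 || ticketIncrease > 0.2;
}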


EXERCISE

Write the design doc you'd send for: "We need better onboarding."

Deliverables:

  1. Problem statement: One paragraph. What's broken? What's the cost?

  2. Hypothesis: "We believe that [X] will increase [metric] by [Y]."

  3. Metrics: Primary (activation), secondary (time-to-first-action, support tickets). How will you measure?

  4. Proposed changes: 3–5 concrete changes. Prioritized.

  5. Rollout plan: Phases, percentages, gates. How you'll know it worked.

  6. Risks + rollback: What could go wrong? When do you revert?

  7. Out of scope: What you're explicitly not doing this iteration.

Common pitfalls: (1) Building the "perfect" solution instead of the testable one. (2) Skipping instrumentation: you can't prove impact without a baseline. (3) Letting scope creep in: every "while we're at it" delays the core hypothesis. (4) No rollback trigger: define when you revert before you launch.


🏁 END — PART VIII CASE STUDY