Part XV (a) - Capstone Track

HARD TRUTH: MASTERY IS PROVEN IN SHIPPING, NOT READING

Reading creates understanding. Shipping creates capability.

This capstone track is where all previous parts are tested together under realistic constraints.

Each capstone must show technical judgment, delivery discipline, reliability engineering, and measurable impact.


CAPSTONE 1: HIGH-SCALE API PLATFORM

Objective:

  • Build an API system that remains reliable under load and failure.

Minimum scope:

  • SLO and error budget definition
  • Rate limiting and backpressure strategy
  • Idempotent write design
  • Observability dashboard for latency and errors
  • Incident runbook
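The idempotent-write item above is usually implemented with a client-supplied idempotency key. The following is a minimal in-memory sketch (class and names are hypothetical); a real service would persist the key-to-result mapping in the same transaction as the write itself:

```python
import threading

class IdempotentWriter:
    """Deduplicate writes by client-supplied idempotency key.

    Hypothetical in-memory sketch; a production system would store the
    key -> result mapping durably, in the same transaction as the write.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._results = {}  # idempotency_key -> result of the first apply

    def write(self, idempotency_key, payload, apply_fn):
        with self._lock:
            if idempotency_key in self._results:
                # Retry of an already-applied write: return the original
                # result instead of applying the side effect again.
                return self._results[idempotency_key]
            result = apply_fn(payload)
            self._results[idempotency_key] = result
            return result
```

With this shape, a client can safely retry the same request after a timeout: the side effect runs at most once and every retry sees the first result, which is exactly the "no data corruption under retries" criterion below.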

Success criteria:

  • Stable p95/p99 latency at the target load
  • No data corruption under retries
  • Demonstrable recovery flow from induced failures
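One common shape for the rate-limiting and backpressure item in the scope list is a token bucket; here is a minimal sketch (names and parameters are illustrative, not prescribed by the capstone):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/sec up to `capacity`.

    Illustrative sketch; a shared service would typically keep this state
    in a central store rather than in process memory.
    """

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should shed load or signal backpressure
```

Returning `False` rather than queueing is the backpressure half of the strategy: the caller gets an explicit signal (for an API, typically a 429) instead of an unbounded queue.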

Required evidence:

  • Architecture diagram with bottleneck notes
  • Load-test results against target traffic
  • Error budget and alert policy
  • Runbook for partial outage or dependency failure
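The error-budget evidence follows directly from the SLO target. Assuming, purely for illustration, a 99.9% availability SLO over a 30-day window:

```python
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Minutes of allowed unavailability in the window for a given SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of budget;
# alert policy can then be phrased as burn rate against this number.
budget = error_budget_minutes(0.999)
```

Phrasing alerts as budget burn ("we are consuming the monthly budget 10x too fast") tends to page on what matters, rather than on every transient error spike.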

CAPSTONE 2: FULLSTACK CORRECTNESS SYSTEM

Objective:

  • Build a user-facing workflow where correctness matters across UI and backend.

Minimum scope:

  • Explicit state machine for user flow
  • Concurrency conflict handling
  • Reconciliation job for mismatch repair
  • Clear user messaging for pending/failure states
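The explicit state machine called for above can be as small as a transition table; this sketch uses a hypothetical order flow (the state and event names are illustrative):

```python
# Transition table for a hypothetical order flow: state -> {event -> next state}.
TRANSITIONS = {
    "draft":    {"submit": "pending"},
    "pending":  {"confirm": "complete", "fail": "failed"},
    "failed":   {"retry": "pending"},
    "complete": {},  # terminal: no events accepted
}

def advance(state, event):
    """Apply `event` to `state`; reject anything not in the table."""
    allowed = TRANSITIONS[state]
    if event not in allowed:
        raise ValueError(f"illegal transition: {state} --{event}-->")
    return allowed[event]
```

Because every legal transition passes through one function, illegal transitions fail loudly instead of silently corrupting state, and each accepted transition can be appended to an audit log, which gives you the "auditable state transitions" criterion below almost for free.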

Success criteria:

  • Correct behavior under duplicate submissions and races
  • Deterministic recovery from partial failures
  • Auditable state transitions
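Deterministic recovery is easier when reconciliation is a plain diff between the two views. A minimal sketch, assuming both views are dicts of id-to-state and treating the backend as the source of truth (an assumption for illustration, not a requirement of the capstone):

```python
def reconcile(ui_records, backend_records):
    """Diff two views of the same entities and report mismatches to repair.

    Both inputs are dicts of record id -> state. The backend view is
    treated as authoritative in this sketch.
    """
    repairs = []
    for rec_id, backend_state in backend_records.items():
        ui_state = ui_records.get(rec_id)
        if ui_state != backend_state:
            # (id, observed state, expected state) -> feed to a repair step.
            repairs.append((rec_id, ui_state, backend_state))
    return repairs
```

Running this as a periodic job, and alerting when the repair list is non-empty, turns silent divergence into a visible, fixable backlog.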

Required evidence:

  • State diagram and invariants list
  • Duplicate-submission test cases
  • Reconciliation or repair workflow
  • User-facing failure-state screenshots

CAPSTONE 3: AI SYSTEM IN PRODUCTION

Objective:

  • Build an AI-assisted system that is evaluated and governed, not only demo-ready.

Minimum scope:

  • Retrieval and grounding approach
  • Evaluation dataset and metrics
  • Monitoring for drift and quality regressions
  • Safety and governance controls
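The evaluation dataset and the release decision can be wired together with very little code. A minimal sketch, assuming a labeled dataset of (input, label) pairs and an exact-match metric (real systems would use richer metrics):

```python
def evaluate(predict, dataset):
    """Fraction of examples where `predict(input)` matches the label."""
    correct = sum(1 for x, label in dataset if predict(x) == label)
    return correct / len(dataset)

def release_gate(candidate_score, baseline_score, min_gain=0.0):
    """Go/no-go: the candidate must not score below baseline + min_gain."""
    return candidate_score >= baseline_score + min_gain
```

Scoring both the candidate and the current baseline on the same frozen dataset is what makes "measured improvement against baseline" a checkable claim rather than an impression; the same gate, run continuously on fresh samples, doubles as a regression monitor.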

Success criteria:

  • Measured improvement against baseline
  • Controlled behavior under failure and ambiguity
  • Clear rollback path for bad model behavior

Required evidence:

  • Evaluation dataset and release rubric
  • Monitoring dashboard with owner
  • Safety policy and fallback behavior
  • Rollback trigger list

CAPSTONE 4: PLATFORM MIGRATION

Objective:

  • Execute a migration with minimal user disruption and clear deprecation plan.

Minimum scope:

  • Strangler migration plan
  • Dual-run or shadow validation strategy
  • Rollout gates with go/no-go criteria
  • Consumer communication and sunset process
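Dual-run or shadow validation can be expressed as a wrapper that always serves the legacy path while diffing the new one. A minimal sketch, assuming both handlers are pure functions of the request (in practice the shadow call would run asynchronously and mismatches would go to a log or metric):

```python
def shadow_compare(request, legacy_handler, new_handler, mismatches):
    """Serve the legacy path; run the new path in shadow and diff results.

    `mismatches` collects (request, legacy_result, new_result_or_error)
    tuples for later parity analysis. Users only ever see the legacy
    response, so a buggy new path cannot hurt them.
    """
    legacy_result = legacy_handler(request)
    try:
        new_result = new_handler(request)
        if new_result != legacy_result:
            mismatches.append((request, legacy_result, new_result))
    except Exception as exc:
        # A crash in the shadow path is itself a parity finding.
        mismatches.append((request, legacy_result, repr(exc)))
    return legacy_result
```

The mismatch log is the parity evidence for the rollout gates: a sustained empty log over representative traffic is the go signal; any non-trivial mismatch rate is a no-go.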

Success criteria:

  • Migration progress with low incident impact
  • Verified parity between old and new paths
  • Clean decommission of legacy component

Required evidence:

  • Migration plan with stages and kill switch
  • Validation results for parity checks
  • Stakeholder communication plan
  • Legacy shutdown checklist

EXECUTION RULES

Every capstone must include:

  • Design doc before implementation
  • Weekly risk register updates
  • Launch checklist before release
  • Postmortem after completion

Field rule: do not skip these artifacts. They are part of the skill being evaluated.


SCORING EXPECTATIONS

Capstones should be judged on:

  • decision quality under constraints
  • correctness and reliability under failure
  • observability and operational readiness
  • clarity of communication and artifacts
  • evidence of tradeoff reasoning

A polished demo without these signals should not score highly.


War-Story Mini-Case: Capstone Without Constraints Was Misleading

Timeline:

  • Day 0: Candidate submits polished capstone demo with strong UX and feature completeness.
  • Day 1 review: Evaluation finds no SLO targets, rollback path, or incident response plan.
  • Day 3: Capstone is re-run with production-like constraints and failure injection.
  • Day 7: Candidate provides runbook, scorecard, and postmortem after simulated outage.
  • Day 8: Panel reassesses based on resilience, decision quality, and recovery execution.

Key decisions:

  • Rejected demo-only evaluation as insufficient for senior-level assessment.
  • Added mandatory operational artifacts to scoring criteria.
  • Included controlled failure scenario to test real engineering behavior.

Outcome:

  • Assessment quality improved from presentation skill to production-readiness evaluation.
  • Pass/fail signal became more predictive of on-the-job performance.

OUTPUT ARTIFACT

For each capstone, publish:

  • Public case study
  • Architecture diagrams
  • Key metrics before and after
  • Incident or failure handling summary
  • Lessons learned and next iteration plan

This turns project work into portable evidence of top-tier engineering capability.