
# prd-v03-outcome-definition

by mattgierhart

PRD-driven Context Engineering: A systematic approach to building AI-powered products using progressive documentation and context-aware development workflows


## SKILL.md


---
name: prd-v03-outcome-definition
description: Define measurable success metrics (KPIs) tied to product type during PRD v0.3 Commercial Model. Triggers on requests to define success metrics, set KPI targets, determine what to measure, establish go/no-go thresholds, or when the user asks "how do we measure success?", "what metrics matter?", "what's our target?", "how do we know if this works?", "define KPIs", "success criteria". Consumes Product Type Classification (BR-) from v0.2. Outputs KPI- entries with thresholds, evidence sources, and downstream gate linkages.
---

# Outcome Definition

Position in HORIZON workflow: v0.2 Product Type Classification → v0.3 Outcome Definition → v0.3 Pricing Model Selection

## Metric Quality Hierarchy

Not all metrics are equal. Use this tier system:

| Tier | Metric Types | Why It Matters |
|------|--------------|----------------|
| Tier 1 | Revenue (MRR, first dollar, ACV), Churn (logo, NRR), LTV:CAC | Revenue validates market fit. "First dollar IS the proof." |
| Tier 2 | Conversion rates (trial→paid, lead→customer), Time to Value, Activation | Leading indicators that predict Tier 1 outcomes |
| Tier 3 | Engagement (DAU, sessions), Feature adoption, NPS | "Nice to know" — only track if tied to Tier 1/2 |

Rule: Every product needs at least one Tier 1 metric. Tier 3 metrics without Tier 1/2 correlation are vanity metrics.
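This rule is mechanical enough to lint. Below is a minimal Python sketch of such a check; the `Kpi` record and tier encoding are hypothetical illustrations, not part of the HORIZON spec.

```python
from dataclasses import dataclass

@dataclass
class Kpi:
    kpi_id: str  # e.g., "KPI-001"
    name: str    # e.g., "Time to First Revenue"
    tier: int    # 1 = revenue/churn, 2 = conversion/TTV, 3 = engagement

def validate_kpi_set(kpis: list[Kpi]) -> None:
    """Enforce the rule: every product needs at least one Tier 1 metric."""
    if not any(k.tier == 1 for k in kpis):
        raise ValueError("No Tier 1 metric: add revenue, churn, or LTV:CAC.")

# Passes: the set contains a Tier 1 (revenue) metric.
validate_kpi_set([Kpi("KPI-001", "Time to First Revenue", tier=1)])
```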

## Product Type × Metric Selection

Metrics must align with the product type from the v0.2 classification:

| Product Type | Primary Metrics | Anti-Metrics (Avoid) |
|---|---|---|
| Clone | Feature parity score, Price delta vs. leader, TTFV vs. leader | Generic engagement (doesn't prove you beat the leader) |
| Undercut | Price per [unit] vs. leader, Niche conversion rate, CAC in target segment | Broad market share (you're niche by design) |
| Unbundle | Category NPS vs. platform, Vertical retention, Feature depth usage | Platform-level metrics (irrelevant to your slice) |
| Slice | Marketplace ranking, Install→activate rate, Platform retention lift | TAM metrics (the platform owns the market) |
| Wrapper | Time saved per workflow, API reliability, Integration adoption | Standalone usage (the value is in the connection) |
| Innovation | Education→activation conversion, Behavioral change rate, Reference customers | User counts without activation (people try but don't convert) |
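In tooling, the table above could become a lookup that flags anti-metrics at selection time. A sketch follows; the snake_case metric identifiers are my own shorthand for the table entries, not canonical names.

```python
# Hypothetical mapping from v0.2 product type to metric guidance,
# mirroring the table above. Identifiers are illustrative shorthand.
METRIC_GUIDE = {
    "Clone":      {"primary": {"feature_parity_score", "price_delta_vs_leader", "ttfv_vs_leader"},
                   "avoid":   {"generic_engagement"}},
    "Undercut":   {"primary": {"price_per_unit_vs_leader", "niche_conversion_rate", "segment_cac"},
                   "avoid":   {"broad_market_share"}},
    "Unbundle":   {"primary": {"category_nps_vs_platform", "vertical_retention", "feature_depth_usage"},
                   "avoid":   {"platform_level_metrics"}},
    "Slice":      {"primary": {"marketplace_ranking", "install_to_activate_rate", "platform_retention_lift"},
                   "avoid":   {"tam_metrics"}},
    "Wrapper":    {"primary": {"time_saved_per_workflow", "api_reliability", "integration_adoption"},
                   "avoid":   {"standalone_usage"}},
    "Innovation": {"primary": {"education_to_activation", "behavioral_change_rate", "reference_customers"},
                   "avoid":   {"raw_user_counts"}},
}

def classify_metric(product_type: str, metric: str) -> str:
    guide = METRIC_GUIDE[product_type]
    if metric in guide["avoid"]:
        return "anti-metric for this product type"
    return "primary" if metric in guide["primary"] else "unlisted: justify via a Tier 1/2 link"

print(classify_metric("Slice", "tam_metrics"))  # anti-metric for this product type
```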

## Leading vs. Lagging Framework

Every product needs BOTH:

**Leading Indicators** (actionable now, predict outcomes):

- Sequences sent, open rates, trial starts
- Time to first value, activation rate
- Feature adoption in first 7 days

**Lagging Indicators** (confirm strategy worked):

- MRR, churn rate, LTV:CAC
- Net Revenue Retention (NRR)
- Customer count, logo churn

Pattern: Track leading weekly, lagging monthly. If leading indicators fail, you can pivot before lagging indicators confirm disaster.
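As a sketch, that cadence can sit next to each KPI as configuration. The intervals below are read off the pattern above; nothing here is prescribed by HORIZON.

```python
from datetime import timedelta

# Leading indicators are reviewed weekly so a failing strategy is caught
# early; lagging indicators are reviewed monthly to confirm the outcome.
REVIEW_CADENCE = {
    "Leading": timedelta(weeks=1),
    "Lagging": timedelta(days=30),
}

def review_due(category: str, days_since_last_review: int) -> bool:
    return timedelta(days=days_since_last_review) >= REVIEW_CADENCE[category]

print(review_due("Leading", 8))  # True: the weekly check is overdue
print(review_due("Lagging", 8))  # False: the monthly check is not yet due
```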

## Target-Setting Rules

Targets must be evidence-based, never arbitrary.

**Good targets** (use these approaches):

- Competitor benchmark × safety margin: "SMB churn benchmark is 3-5% → use 5%"
- Revenue gates: "First dollar by Day 14" (Signal → $1: 14 days)
- Ratio thresholds: "LTV:CAC ≥ 3:1"
- Time bounds: "TTFV < 5 minutes for self-serve"

**Bad targets** (anti-patterns):

- Round numbers without evidence: "10% improvement"
- Engagement without a revenue tie: "1,000 DAU"
- Aspirational without a baseline: "best-in-class retention"
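These rules reduce to a small lint over proposed targets. A sketch, assuming each target records an evidence source and a baseline (the field names and the `lint_target` helper are hypothetical):

```python
def lint_target(value: str, evidence: str | None, baseline: str | None) -> list[str]:
    """Flag the anti-patterns above: no evidence source, no baseline."""
    problems = []
    if not evidence:
        problems.append("no evidence source: cite a CFD-XXX entry or a benchmark")
    if not baseline:
        problems.append("no baseline: aspirational targets are unmeasurable")
    return problems

def ltv_cac_ok(ltv: float, cac: float) -> bool:
    """Ratio threshold from the examples above: LTV:CAC >= 3:1."""
    return cac > 0 and ltv / cac >= 3.0

print(lint_target("10% improvement", evidence=None, baseline=None))
print(ltv_cac_ok(ltv=3600.0, cac=1000.0))  # True: 3.6:1 clears the gate
```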

## Output Template

Create KPI- entries in this format:

```
KPI-XXX: [Metric Name]
Type: [Tier 1 | Tier 2 | Tier 3]
Category: [Leading | Lagging]
Definition: [Exact calculation formula]
Target: [Specific threshold with evidence source]
Evidence: [CFD-XXX or benchmark source]
Downstream Gate: [Which decision uses this — e.g., "v0.5 Red Team kill criteria"]
Measurement: [How/when measured — e.g., "Weekly via Mixpanel"]
```

Example KPI- entry:

```
KPI-001: Time to First Revenue
Type: Tier 1
Category: Lagging
Definition: Days from market signal identification to first paying customer
Target: ≤14 days (GearHeart standard: Signal → $1: 14 days)
Evidence: BR-001 (GearHeart methodology)
Downstream Gate: v0.5 Red Team — if not hit by Day 21, evaluate pivot
Measurement: Manual tracking in PRD changelog
```
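Because entries are line-oriented `Field: value` pairs, they are easy to keep machine-readable. A minimal parser sketch (an illustration, not an official HORIZON tool):

```python
def parse_kpi_entry(text: str) -> dict[str, str]:
    """Parse a KPI- entry in the template format above into a dict."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    kpi_id, _, name = lines[0].partition(":")  # "KPI-001: Time to First Revenue"
    entry = {"id": kpi_id.strip(), "name": name.strip()}
    for line in lines[1:]:
        field, _, value = line.partition(":")  # split on the first colon only
        entry[field.strip().lower()] = value.strip()
    return entry

entry = parse_kpi_entry("""
KPI-001: Time to First Revenue
Type: Tier 1
Category: Lagging
""")
print(entry["type"])  # Tier 1
```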

## Anti-Patterns to Avoid

  1. Vanity metrics as primary: "50K users" means nothing if only 500 pay
  2. Traffic without quality: High volume + low engagement = quality problem
  3. Arbitrary targets: "10% improvement" without baseline or benchmark
  4. All lagging, no leading: Can't course-correct if you only see outcomes monthly
  5. Ignoring product type: Clone metrics ≠ Innovation metrics
  6. Unmeasurable outcomes: "Better experience" — how do you know?

## Downstream Connections

KPI- entries feed into:

| Consumer | What It Uses | Example |
|---|---|---|
| v0.5 Red Team | Kill thresholds | "If KPI-001 not hit by Day 21, pivot" |
| v0.7 Build Execution | EPIC acceptance criteria | "EPIC complete when KPI-002 validated" |
| v0.9 GTM | Launch dashboard | Track KPI-001, KPI-003 post-launch |
| BR- Business Rules | Derived constraints | "BR-XXX: No launch if LTV:CAC < 3:1" |
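The first row's kill threshold is concrete enough to express in code. A sketch: the 14-day target and Day 21 cutoff come from KPI-001 above, while the function itself is illustrative.

```python
from datetime import date

def red_team_gate(signal: date, first_revenue: date | None, today: date) -> str:
    """v0.5 Red Team check for KPI-001: Signal -> $1 in 14 days, pivot review at Day 21."""
    if first_revenue is not None and (first_revenue - signal).days <= 14:
        return "pass: first dollar within 14 days"
    if (today - signal).days > 21:
        return "kill threshold tripped: evaluate pivot"
    return "pending: still inside the Day 21 window"

print(red_team_gate(date(2026, 1, 1), None, date(2026, 1, 25)))
# kill threshold tripped: evaluate pivot
```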

## Detailed References

- Good/bad examples: see references/examples.md
- Benchmark sources: see references/benchmarks.md
- KPI template worksheet: see assets/kpi.md
