prd-v05-risk-discovery-interview

by mattgierhart

PRD-driven Context Engineering: A systematic approach to building AI-powered products using progressive documentation and context-aware development workflows

---
name: prd-v05-risk-discovery-interview
description: Surface risks through guided questioning, helping users consider pivots, constraints, and prioritization during PRD v0.5 Red Team Review. Triggers on requests to identify risks, stress-test the idea, perform red team review, or when the user asks "what could go wrong?", "identify risks", "red team", "risk assessment", "challenge assumptions", "stress test the idea". Consumes all prior IDs (CFD-, BR-, FEA-, PER-, UJ-, SCR-) as interview context. Outputs RISK- entries with owner decisions and mitigations. Feeds v0.5 Technical Stack Selection.
---

Risk Discovery Interview

Position in workflow: v0.4 Screen Flow Definition → v0.5 Risk Discovery Interview → v0.5 Technical Stack Selection

This is an interactive interview skill. The AI asks questions, the user reflects and decides. The goal is to surface risks so the user can mitigate or accept them—not to kill ideas.

Design Principles

  1. Interview, not inquisition — Facilitate discovery, don't interrogate
  2. Inform, not kill — Surface risks so user can mitigate, not abandon
  3. User owns decisions — AI facilitates, user assigns severity and response
  4. Actionable outputs — Every risk has a mitigation path or explicit "accept"

Risk Categories

| Category | Focus Area | Example Questions |
| --- | --- | --- |
| Market | Competitors, timing, demand | "What if [competitor] launches this feature next month?" |
| Technical | Complexity, unknowns, dependencies | "Which feature has the most technical uncertainty?" |
| Adoption | User behavior, activation, retention | "What's the biggest friction point in onboarding?" |
| Resource | Team, budget, time | "If you had to cut scope by 50%, what stays?" |
| Dependency | External factors, integrations, partners | "What external factor could block launch?" |
| Timing | Deadlines, market windows, seasonality | "Is there a deadline we must hit? Why?" |

Interview Flow

Phase 1: Context Review

Before asking questions, the AI reviews:

  • CFD- evidence from v0.1-v0.2
  • FEA- features and their priorities
  • UJ- journeys and their complexity
  • BR- business rules and constraints

Phase 2: Guided Questions

Ask questions from each category, adapting based on product context:

Market Risks:

  • "What happens if [competitor] launches something similar in 60 days?"
  • "What market assumption are you least confident about?"
  • "What would cause users to choose a competitor instead?"

Technical Risks:

  • "Which feature has the most technical uncertainty?"
  • "What technology choice are you least confident about?"
  • "Is there anything you've never built before?"

Adoption Risks:

  • "What's the biggest friction point in [UJ-001 onboarding journey]?"
  • "What behavior change are you asking users to make?"
  • "What would cause a user to churn in the first week?"

Resource Risks:

  • "If you had only 2 developers, what would you cut?"
  • "What skill does the team lack?"
  • "What's your runway for validation?"

Dependency Risks:

  • "What external API or service could break your product?"
  • "What partner relationship is critical?"
  • "What regulatory requirement could block launch?"

Timing Risks:

  • "Is there a hard deadline? What happens if you miss it?"
  • "Is there a market window closing?"
  • "What seasonal factor affects launch timing?"

Phase 3: Risk Documentation

For each identified risk, create a RISK- entry; the user assesses severity (impact and likelihood) and chooses the response.

Phase 4: Priority & Review

  • Force-rank risks by Impact × Likelihood (see the scoring sketch after this list)
  • Identify top 3-5 that require active mitigation
  • Document "accept" decisions explicitly

Interview Techniques

| Technique | How to Use | When to Use |
| --- | --- | --- |
| Pre-mortem | "It's 6 months from now and the product failed. Why?" | Opening question |
| Constraint forcing | "If you only had [X], what would you cut?" | Resource discovery |
| Dependency mapping | "What external factor could block launch?" | Dependency discovery |
| Assumption surfacing | "What must be true for this to work?" | Any category |
| Devil's advocate | "Let me argue the opposite—what if [X]?" | Challenge weak evidence |

RISK- Output Template

RISK-XXX: [Risk Title]
Category: [Market | Technical | Adoption | Resource | Dependency | Timing]
Description: [What could go wrong]
Trigger: [What would cause this to happen]
Impact: [High | Medium | Low] — User assessed
Likelihood: [High | Medium | Low] — User assessed
Priority: [Impact × Likelihood ranking]

Early Signal: [How we'd know this is happening]
Response: [Mitigate | Accept | Avoid | Transfer]
Mitigation: [Specific action if Response = Mitigate]
Owner: [Who is responsible for monitoring]

Linked IDs: [FEA-XXX, UJ-XXX, BR-XXX affected]
Review Date: [When to reassess this risk]

Example RISK- entry:

RISK-001: Primary API Dependency (Stripe) Outage
Category: Dependency
Description: Stripe API outage would block all payment processing
Trigger: Stripe infrastructure failure or rate limiting
Impact: High — All revenue blocked during outage
Likelihood: Low — Stripe has 99.99% uptime SLA
Priority: 3 (High × Low)

Early Signal: Stripe status page, payment failure rate spike
Response: Mitigate
Mitigation:
  - Implement graceful degradation (queue payments for retry)
  - Add status page monitoring alert
  - Document manual billing fallback process
Owner: Tech Lead

Linked IDs: FEA-020 (payments), UJ-005 (checkout), BR-030 (pricing)
Review Date: Before launch, quarterly thereafter
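
If the register is kept as data rather than prose, the template maps onto a plain record. A minimal sketch in Python; the field names simply mirror the template above and are illustrative, since this skill defines no schema.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    # Illustrative field names mirroring the RISK- template; not a
    # schema defined by this skill.
    id: str                      # "RISK-001"
    title: str
    category: str                # Market | Technical | Adoption | Resource | Dependency | Timing
    description: str             # what could go wrong
    trigger: str                 # what would cause this to happen
    impact: str                  # High | Medium | Low (user assessed)
    likelihood: str              # High | Medium | Low (user assessed)
    early_signal: str            # how we'd know this is happening
    response: str                # Mitigate | Accept | Avoid | Transfer
    mitigations: list[str] = field(default_factory=list)
    owner: str = ""              # who is responsible for monitoring
    linked_ids: list[str] = field(default_factory=list)  # FEA-/UJ-/BR- references
    review_date: str = ""        # when to reassess

# The RISK-001 example above, expressed as a record.
stripe_outage = Risk(
    id="RISK-001",
    title="Primary API Dependency (Stripe) Outage",
    category="Dependency",
    description="Stripe API outage would block all payment processing",
    trigger="Stripe infrastructure failure or rate limiting",
    impact="High",
    likelihood="Low",
    early_signal="Stripe status page, payment failure rate spike",
    response="Mitigate",
    mitigations=[
        "Implement graceful degradation (queue payments for retry)",
        "Add status page monitoring alert",
        "Document manual billing fallback process",
    ],
    owner="Tech Lead",
    linked_ids=["FEA-020", "UJ-005", "BR-030"],
    review_date="Before launch, quarterly thereafter",
)
```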

Risk Response Types

| Response | When to Use | Example |
| --- | --- | --- |
| Mitigate | Can reduce impact or likelihood | Add fallback provider, implement retry logic |
| Accept | Low impact or unavoidable | "Competitor might copy us—we accept" |
| Avoid | Change plan to eliminate risk | Remove feature with high technical uncertainty |
| Transfer | Someone else owns the risk | Use managed service instead of self-hosting |

Severity Matrix

|  | Low Impact | Medium Impact | High Impact |
| --- | --- | --- | --- |
| High Likelihood | Monitor | Mitigate | Mitigate urgently |
| Medium Likelihood | Accept | Monitor/Mitigate | Mitigate |
| Low Likelihood | Accept | Accept/Monitor | Monitor |
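
Because the matrix is small and fixed, it can be encoded directly as a lookup table. A minimal sketch, using the same level names and actions as the table above:

```python
# The Severity Matrix as a (likelihood, impact) -> recommended-action lookup.
SEVERITY_MATRIX = {
    ("High", "Low"): "Monitor",
    ("High", "Medium"): "Mitigate",
    ("High", "High"): "Mitigate urgently",
    ("Medium", "Low"): "Accept",
    ("Medium", "Medium"): "Monitor/Mitigate",
    ("Medium", "High"): "Mitigate",
    ("Low", "Low"): "Accept",
    ("Low", "Medium"): "Accept/Monitor",
    ("Low", "High"): "Monitor",
}

def recommended_action(likelihood: str, impact: str) -> str:
    return SEVERITY_MATRIX[(likelihood, impact)]

print(recommended_action("Low", "High"))  # "Monitor"
```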

Anti-Patterns to Avoid

| Anti-Pattern | Signal | Fix |
| --- | --- | --- |
| Risk theater | 50+ risks documented | Focus on the top 10 that matter |
| All high severity | Everything is critical | Force rank; allow at most 3-5 "High" |
| No owner | Risks without accountability | Every RISK- needs an owner |
| Mitigation = "be careful" | Vague responses | Require specific, testable actions |
| Interview becomes lecture | AI talks more than the user | Ask, listen, summarize |
| Killing ideas | Every risk leads to "don't do it" | Frame as "how to succeed despite it" |

Quality Gates

Before proceeding to Technical Stack Selection, confirm the following (a scripted check is sketched after this list):

  • All 6 risk categories explored
  • Maximum 10-15 RISK- entries (focused, not exhaustive)
  • Force-ranked by priority (Impact × Likelihood)
  • Top 5 risks have specific mitigation plans
  • "Accept" decisions are explicit, not accidental
  • Every RISK- has an owner
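
Most of these gates are mechanically checkable. A standalone sketch, assuming each risk is a dict whose keys mirror the RISK- template (the "explicit accept" gate still needs human review):

```python
CATEGORIES = {"Market", "Technical", "Adoption", "Resource", "Dependency", "Timing"}
LEVEL = {"High": 3, "Medium": 2, "Low": 1}  # same assumed weights as the ranking sketch

def gate_failures(risks: list[dict]) -> list[str]:
    """Return gate failures; an empty list means proceed to stack selection."""
    failures = []
    missing = CATEGORIES - {r["category"] for r in risks}
    if missing:
        failures.append(f"categories not explored: {sorted(missing)}")
    if len(risks) > 15:
        failures.append("more than 15 RISK- entries (keep it focused, not exhaustive)")
    top5 = sorted(
        risks, key=lambda r: LEVEL[r["impact"]] * LEVEL[r["likelihood"]], reverse=True
    )[:5]
    if any(r["response"] == "Mitigate" and not r.get("mitigation") for r in top5):
        failures.append("a top-5 risk lacks a specific mitigation plan")
    if any(not r.get("owner") for r in risks):
        failures.append("a RISK- entry has no owner")
    return failures
```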

Downstream Connections

RISK- entries feed into:

| Consumer | What It Uses | Example |
| --- | --- | --- |
| v0.5 Technical Stack Selection | RISK- constraints affect tech choices | RISK-003 (latency) → choose edge hosting |
| v0.6 Architecture Design | Risk mitigations become architecture requirements | RISK-005 → add circuit breaker |
| v0.7 Build Execution | Risk monitoring in EPIC | Track RISK-001 early signals |
| KPI- Thresholds | Kill criteria from risks | "If RISK-002 triggers, evaluate pivot" |

Detailed References

  • Interview question bank: See references/question-bank.md
  • RISK- entry template: See assets/risk.md
  • Example risk register: See references/examples.md
