review-feedback-schema

by existential-birds

Claude Code plugin for code review skills and verification workflows. Python, Go, React, FastAPI, BubbleTea, and AI frameworks (Pydantic AI, LangGraph, Vercel AI SDK).


SKILL.md


---
name: review-feedback-schema
description: Schema for tracking code review outcomes to enable feedback-driven skill improvement. Use when logging review results or analyzing review quality.
---

Review Feedback Schema

Purpose

Structured format for logging code review outcomes. This data enables:

  1. Identifying rules that produce false positives
  2. Tracking skill accuracy over time
  3. Automated skill improvement via pattern analysis

Schema

date,file,line,rule_source,category,severity,issue,verdict,rationale

| Field | Type | Description | Example Values |
|---|---|---|---|
| date | ISO date | When review occurred | 2025-12-23 |
| file | path | Relative file path | amelia/agents/developer.py |
| line | string | Line number(s) | 128, 190-191 |
| rule_source | string | Skill and rule that triggered issue | python-code-review/common-mistakes:unused-variables, pydantic-ai-common-pitfalls:tool-decorator |
| category | enum | Issue taxonomy | type-safety, async, error-handling, style, patterns, testing, security |
| severity | enum | As flagged by reviewer | critical, major, minor |
| issue | string | Brief description | Return type list[Any] loses type safety |
| verdict | enum | Human decision | ACCEPT, REJECT, DEFER, ACKNOWLEDGE |
| rationale | string | Why verdict was chosen | pydantic-ai docs explicitly support this pattern |
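
For illustration, a row in this format could be appended with Python's standard csv module. This is a minimal sketch; the log filename review-feedback.csv and the helper name append_feedback are assumptions, not part of the skill.

```python
import csv
from pathlib import Path

# Column order matches the schema header above.
FIELDS = ["date", "file", "line", "rule_source", "category",
          "severity", "issue", "verdict", "rationale"]

def append_feedback(row: dict[str, str], log_path: str = "review-feedback.csv") -> None:
    """Append one review outcome to the CSV log, writing the header on first use."""
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)  # fields containing commas are quoted automatically

append_feedback({
    "date": "2025-12-27",
    "file": "amelia/agents/developer.py",
    "line": "128",
    "rule_source": "python-code-review:type-safety",
    "category": "type-safety",
    "severity": "major",
    "issue": "Return type list[Any] loses type safety",
    "verdict": "ACCEPT",
    "rationale": "Changed to list[AgentMessage]",
})
```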

Verdict Types

| Verdict | Meaning | Action |
|---|---|---|
| ACCEPT | Issue is valid, will fix | Code change made |
| REJECT | Issue is invalid/wrong | No change; may improve skill |
| DEFER | Valid but not fixing now | Tracked for later |
| ACKNOWLEDGE | Valid but intentional | Document why it's intentional |
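
If verdicts are handled programmatically, they map naturally onto a small enum. The sketch below mirrors the table above; the Verdict and FOLLOW_UP names are illustrative, not defined by this skill.

```python
from enum import Enum

class Verdict(Enum):
    ACCEPT = "ACCEPT"
    REJECT = "REJECT"
    DEFER = "DEFER"
    ACKNOWLEDGE = "ACKNOWLEDGE"

# Follow-up action for each verdict, mirroring the table above.
FOLLOW_UP = {
    Verdict.ACCEPT: "Make the code change",
    Verdict.REJECT: "No change; consider improving the skill rule",
    Verdict.DEFER: "Track for later",
    Verdict.ACKNOWLEDGE: "Document why the code is intentional",
}
```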

When to Use Each

ACCEPT: The reviewer correctly identified a real issue.

2025-12-27,amelia/agents/developer.py,128,python-code-review:type-safety,type-safety,major,Return type list[Any] loses type safety,ACCEPT,Changed to list[AgentMessage]

REJECT: The reviewer was wrong - the code is correct.

2025-12-23,amelia/drivers/api/openai.py,102,python-code-review:line-length,style,minor,Line too long (104 > 100),REJECT,ruff check passes - no E501 violation exists

DEFER: Valid issue but out of scope for current work.

2025-12-22,api/handlers.py,45,fastapi-code-review:error-handling,error-handling,minor,Missing specific exception type,DEFER,Refactoring planned for Q1

ACKNOWLEDGE: Intentional design decision.

2025-12-21,core/cache.py,89,python-code-review:optimization,patterns,minor,Using dict instead of dataclass,ACKNOWLEDGE,Performance-critical path - intentional

Rule Source Format

Format: skill-name/section:rule-id or skill-name:rule-id

Examples:

  • python-code-review/common-mistakes:unused-variables
  • pydantic-ai-common-pitfalls:tool-decorator
  • fastapi-code-review:dependency-injection
  • pytest-code-review:fixture-scope

Use the skill folder name and identify the specific rule or section that triggered the issue.
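
Both forms can be split mechanically. A minimal sketch, where parse_rule_source is a hypothetical helper:

```python
def parse_rule_source(rule_source: str) -> tuple[str, str | None, str]:
    """Split 'skill-name[/section]:rule-id' into (skill, section, rule_id)."""
    skill_part, _, rule_id = rule_source.rpartition(":")
    skill, _, section = skill_part.partition("/")
    return skill, section or None, rule_id

assert parse_rule_source("python-code-review/common-mistakes:unused-variables") == (
    "python-code-review", "common-mistakes", "unused-variables"
)
assert parse_rule_source("fastapi-code-review:dependency-injection") == (
    "fastapi-code-review", None, "dependency-injection"
)
```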

Category Taxonomy

| Category | Description | Examples |
|---|---|---|
| type-safety | Type annotation issues | Missing types, incorrect types, Any usage |
| async | Async/await issues | Blocking in async, missing await |
| error-handling | Exception handling | Bare except, missing error handling |
| style | Code style/formatting | Line length, naming conventions |
| patterns | Design patterns | Anti-patterns, framework misuse |
| testing | Test quality | Missing coverage, flaky tests |
| security | Security issues | Injection, secrets exposure |
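
If log rows are validated in code, the category and severity enums can be expressed as Literal types. A sketch with illustrative names:

```python
from typing import Literal, get_args

Category = Literal["type-safety", "async", "error-handling",
                   "style", "patterns", "testing", "security"]
Severity = Literal["critical", "major", "minor"]

def is_valid_category(value: str) -> bool:
    """Check a CSV cell against the category taxonomy."""
    return value in get_args(Category)
```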

Writing Good Rationales

For ACCEPT

Explain what you fixed:

  • "Changed Exception to (FileNotFoundError, OSError)"
  • "Fixed using model_copy(update={...})"
  • "Removed unused Any import"

For REJECT

Explain why the issue is invalid:

  • "ruff check passes - no E501 violation exists" (linter authoritative)
  • "pydantic-ai docs explicitly support this pattern" (framework idiom)
  • "Intentional optimization documented in code comment" (documented decision)

For DEFER

Explain when/why it will be addressed:

  • "Tracked in issue #123"
  • "Refactoring planned for Q1"
  • "Blocked on dependency upgrade"

For ACKNOWLEDGE

Explain why it's intentional:

  • "Performance-critical path per CLAUDE.md"
  • "Legacy API compatibility requirement"
  • "Matches upstream library pattern"

Example Log

date,file,line,rule_source,category,severity,issue,verdict,rationale
2025-12-20,tests/integration/test_cli_flows.py,407,pytest-code-review:parametrization,testing,minor,Unused extra_args parameter in parametrization,ACCEPT,Fixed - removed dead parameter
2025-12-20,tests/integration/test_cli_flows.py,237-242,pytest-code-review:coverage,testing,major,Missing review --local in git repo error test,REJECT,Not applicable - review uses different error path
2025-12-21,amelia/server/orchestrator/service.py,1702,python-code-review:immutability,patterns,critical,Direct mutation of frozen ExecutionState,ACCEPT,Fixed using model_copy(update={...})
2025-12-23,amelia/drivers/api/tools.py,48-53,pydantic-ai-common-pitfalls:tool-decorator,patterns,major,Misleading RunContext pattern - should use decorators,REJECT,pydantic-ai docs explicitly support passing raw functions with RunContext to Agent(tools=[])
2025-12-23,amelia/drivers/api/openai.py,102,python-code-review:line-length,style,minor,Line too long (104 > 100),REJECT,ruff check passes - no E501 violation exists
2025-12-27,amelia/core/orchestrator.py,190-191,python-code-review:exception-handling,error-handling,major,Generic exception handling in get_code_changes_for_review,ACCEPT,"Changed Exception to (FileNotFoundError, OSError)"
2025-12-27,amelia/agents/developer.py,128,python-code-review:type-safety,type-safety,major,Return type list[Any] loses type safety,ACCEPT,Changed to list[AgentMessage] and removed unused Any import

Pre-Review Verification Checklist

Before reporting ANY finding, reviewers MUST verify:

Verification Steps

  1. Confirm the issue exists: Read the actual code, don't infer from context
  2. Check surrounding code: The issue may be handled elsewhere (guards, earlier checks)
  3. Trace state/variable usage: Search for all references before claiming "unused" (see the sketch after this list)
  4. Verify assertions: If claiming "X is missing", confirm X isn't present
  5. Check framework handling: Many frameworks handle validation/errors automatically
  6. Validate syntax understanding: Verify against current docs (Tailwind v4, TS 5.x, etc.)
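
To make step 3 concrete, a reviewer (or a helper script) can count references to an identifier before flagging it as unused. A rough sketch using a plain text search over Python files; the function name, the threshold, and the extra_args example are illustrative, and a real review would use the project's own search tooling.

```python
from pathlib import Path

def count_references(name: str, root: str = ".") -> int:
    """Count textual occurrences of an identifier across Python files."""
    total = 0
    for path in Path(root).rglob("*.py"):
        total += path.read_text(encoding="utf-8", errors="ignore").count(name)
    return total

# One occurrence is just the definition itself; anything more means
# the "unused" claim needs a closer look before it is reported.
if count_references("extra_args") <= 1:
    print("Possibly unused - verify manually before flagging")
```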

Common False Positive Patterns

| Pattern | Root Cause | Prevention |
|---|---|---|
| "Unused variable" | Variable used elsewhere | Search all references |
| "Missing validation" | Framework validates | Check Pydantic/Zod/etc. |
| "Type assertion" | Actually an annotation | Confirm `as` vs `:` |
| "Memory leak" | Cleanup exists | Check effect returns |
| "Wrong syntax" | New framework version | Verify against current docs |
| "Style issue" | Preference, not a rule | Both approaches valid |

Signals of False Positive Risk

If you're about to flag any of these, double-check:

  • "This variable appears unused" → Search for ALL references first
  • "Missing error handling" → Check parent/framework handling
  • "Should use X instead of Y" → Both may be valid
  • "This syntax looks wrong" → Verify against current version docs

Reference: see review-verification-protocol for the full verification workflow.

How This Feeds Into Skill Improvement

  1. Aggregate by rule_source: Identify which rules have high REJECT rates (see the sketch below)
  2. Analyze rationales: Find common themes in rejections
  3. Update skills: Add exceptions, clarifications, or verification steps
  4. Track impact: Measure if changes reduce rejection rate

See review-skill-improver skill for the full analysis workflow.
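
A minimal sketch of step 1, computing a per-rule REJECT rate from the log; the filename review-feedback.csv and the 50% threshold are assumptions.

```python
import csv
from collections import Counter

reviews = Counter()  # rows per rule_source
rejects = Counter()  # REJECT rows per rule_source

with open("review-feedback.csv", newline="") as f:
    for row in csv.DictReader(f):
        reviews[row["rule_source"]] += 1
        if row["verdict"] == "REJECT":
            rejects[row["rule_source"]] += 1

# Rules with a high REJECT rate are candidates for skill improvement.
for rule, total in reviews.most_common():
    rate = rejects[rule] / total
    if rate >= 0.5:
        print(f"{rule}: {rejects[rule]}/{total} rejected ({rate:.0%})")
```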

Improvement Signals

| Pattern | Skill Improvement |
|---|---|
| "linter passes" rejections | Add linter verification step before flagging style issues |
| "docs support this" rejections | Add exception for documented framework patterns |
| "intentional" rejections | Add codebase context check before flagging |
| "wrong code path" rejections | Add code tracing step before claiming gaps |
