
verification-first
by gr8monk3ys
An AI chatbot application for character interaction
⭐ 8 · 🍴 4 · 📅 Jan 22, 2026
SKILL.md
name: verification-first
description: |
  WHEN to auto-invoke: Finishing tasks, claiming "done" or "complete", before marking work finished, when asserting code works, wrapping up implementations.
  WHEN NOT to invoke: During active development, exploration phases, planning discussions, research tasks.
Verification-First Skill
Never claim completion without evidence. Verify before asserting.
Core Principle
"Trust, but verify" - Don't assume code works; prove it works.
Verification Protocol
Before Claiming "Done"
1. Run the Code
   - Execute the actual code path
   - Don't just read it and assume it works
   - Test with real inputs, not just mental simulation
2. Check the Output
   - Verify the output matches expectations
   - Look at actual results, not just "no errors"
   - Compare against the acceptance criteria
3. Test Edge Cases (see the sketch after this list)
   - What happens with empty input?
   - What happens with invalid input?
   - What happens at boundaries?
4. Verify in Context
   - Does it work in the actual environment?
   - Does it integrate correctly?
   - Are there side effects?
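To make steps 1–3 concrete, here is a minimal sketch using Node's built-in `node:test` runner; `parseQuantity` is a hypothetical function standing in for whatever code path you are actually verifying.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under verification -- replace with the real code path.
function parseQuantity(input: string): number {
  if (input.trim() === "") throw new RangeError("empty quantity");
  const n = Number(input);
  if (!Number.isFinite(n) || n < 0) throw new RangeError(`invalid quantity: ${input}`);
  return n;
}

test("happy path: real input produces the expected output", () => {
  assert.equal(parseQuantity("42"), 42);
});

test("edge case: empty input is rejected, not silently coerced", () => {
  assert.throws(() => parseQuantity(""), RangeError);
});

test("edge case: invalid input fails loudly", () => {
  assert.throws(() => parseQuantity("abc"), RangeError);
});

test("boundary: zero is accepted", () => {
  assert.equal(parseQuantity("0"), 0);
});
```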
Verification Checklist
Before marking complete:
- [ ] Code compiles/transpiles without errors
- [ ] Tests pass (existing + new)
- [ ] Manual verification performed
- [ ] Edge cases checked
- [ ] Error handling works
- [ ] Integration verified
Anti-Patterns to Avoid
Don't Say
- ❌ "This should work..."
- ❌ "I believe this is correct..."
- ❌ "This looks right to me..."
- ❌ "Based on my understanding..."
Instead Say
- ✅ "I've verified this works by..."
- ✅ "Tests confirm this behavior..."
- ✅ "I ran this and observed..."
- ✅ "The output shows..."
Verification Methods
For Code Changes
# Type check
npx tsc --noEmit
# Run tests
npm test
# Build verification
npm run build
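The three commands above can be chained into a single entry point so verification is one step and stops at the first failure. This is a sketch only, assuming a Node/npm project; the `verify.ts` filename and running it via `npx tsx verify.ts` are assumptions, not part of this skill.

```typescript
// verify.ts -- run the verification commands in order, fail fast on the first error.
import { execSync } from "node:child_process";

const steps: Array<[label: string, command: string]> = [
  ["Type check", "npx tsc --noEmit"],
  ["Tests", "npm test"],
  ["Build", "npm run build"],
];

for (const [label, command] of steps) {
  console.log(`\n▶ ${label}: ${command}`);
  try {
    execSync(command, { stdio: "inherit" }); // stream output so the evidence is visible
  } catch {
    console.error(`✖ ${label} failed -- do not claim completion.`);
    process.exit(1);
  }
}

console.log("\n✔ All checks passed -- evidence collected above.");
```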
For Bug Fixes
- Reproduce the bug first
- Apply the fix
- Verify bug no longer occurs
- Verify no regression elsewhere (see the sketch below)
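A hedged sketch of that workflow: the first test reproduces the reported behavior and must fail before the fix is applied, the second guards against regressions. `formatPrice` and the bug itself are illustrative, not taken from this repository.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function with the reported bug: an earlier version dropped the
// minus sign on negative amounts. The version below contains the fix.
function formatPrice(amount: number): string {
  const sign = amount < 0 ? "-" : "";
  return `${sign}$${Math.abs(amount).toFixed(2)}`;
}

// Reproduce first: this test must FAIL against the unfixed code, then pass after the fix.
test("regression: negative amounts keep their sign", () => {
  assert.equal(formatPrice(-5), "-$5.00");
});

// No regression: existing behaviour for the common case is unchanged.
test("positive amounts are unaffected by the fix", () => {
  assert.equal(formatPrice(19.9), "$19.90");
});
```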
For New Features
- Write failing test first
- Implement feature
- Verify test passes
- Test manually in the UI or API (a test-first sketch follows below)
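For example, a test-first sketch around a hypothetical `slugify` feature: the test encodes the acceptance criteria and is written before the implementation, so it fails until the feature exists.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Written first: these assertions are the acceptance criteria.
test("slugify turns a title into a URL-safe slug", () => {
  assert.equal(slugify("Hello, World!"), "hello-world");
  assert.equal(slugify("  spaced   out  "), "spaced-out");
});

// Added only after watching the test fail.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric into "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}
```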
For Refactoring
- Ensure tests exist (add if needed)
- Make refactoring change
- Verify all tests still pass
- Verify behavior is unchanged (see the sketch below)
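A sketch of behavior-preserving verification: the same characterization test runs against the code before and after the refactor. `totalWithTax` and both versions are hypothetical stand-ins.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Before: imperative loop.
function totalWithTaxV1(prices: number[], taxRate: number): number {
  let sum = 0;
  for (const p of prices) sum += p;
  return Math.round(sum * (1 + taxRate) * 100) / 100;
}

// After: same behaviour, expressed with reduce.
function totalWithTaxV2(prices: number[], taxRate: number): number {
  const sum = prices.reduce((acc, p) => acc + p, 0);
  return Math.round(sum * (1 + taxRate) * 100) / 100;
}

// The characterization test pins down behaviour; both versions must pass it.
for (const [name, fn] of [["before", totalWithTaxV1], ["after", totalWithTaxV2]] as const) {
  test(`${name}: behaviour is unchanged by the refactor`, () => {
    assert.equal(fn([10, 20], 0.1), 33);
    assert.equal(fn([], 0.1), 0);
  });
}
```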
Evidence Collection
When claiming completion, provide evidence:
## Verification Evidence
**What was verified:**
- Feature X works correctly
**How it was verified:**
- Ran `npm test` - 45/45 tests passing
- Manually tested in browser at /feature-x
- Checked error handling with invalid input
**Results observed:**
- Success case: Shows expected output
- Error case: Displays user-friendly message
- Edge case: Handles empty state gracefully
Confidence Levels
| Level | Meaning | Evidence Required |
|---|---|---|
| Certain | Verified with tests + manual | Tests + screenshots/logs |
| High | Verified with tests | Passing test output |
| Medium | Manually verified | Description of manual test |
| Low | Code review only | Should verify more |
| None | Assumption only | Must verify before claiming |
Integration with Workflow
Write code
↓
Run verification
↓
Evidence collected? ─No→ Verify more
↓ Yes
Claim completion with evidence
Remember
- No verification = No completion claim
- "It should work" is not verification
- Evidence beats confidence
- When in doubt, verify again
Score
Total Score: 65/100
Based on repository quality metrics

| Check | Criterion | Points |
|---|---|---|
| ✓ SKILL.md | A SKILL.md file is included | +20 |
| ✓ LICENSE | A license is set | +10 |
| ○ Description | Description of 100+ characters | 0/10 |
| ○ Popularity | 100+ GitHub stars | 0/15 |
| ✓ Recent activity | Updated within the last month | +10 |
| ○ Forks | Forked 10+ times | 0/5 |
| ✓ Issue management | Fewer than 50 open issues | +5 |
| ✓ Language | A programming language is set | +5 |
| ✓ Tags | At least one tag is set | +5 |