
writing-skills
by helmecke
My dotfiles
SKILL.md
name: writing-skills description: This skill should be used when the user asks to "create a skill", "write a skill", "edit a skill", "verify a skill", "test a skill with subagents", or needs guidance on TDD for documentation, skill structure, bulletproofing against rationalization, or Claude Search Optimization. license: MIT compatibility: opencode metadata: category: development version: 1.0.0
Writing Skills
Overview
Writing skills IS Test-Driven Development applied to process documentation.
Personal skills live in ~/.config/opencode/skill/ for OpenCode.
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
REQUIRED BACKGROUND: You MUST understand test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
Official guidance: For Anthropic's official skill authoring best practices, see references/anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
What is a Skill?
A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
TDD Mapping for Skills
| TDD Concept | Skill Creation |
|---|---|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
When to Create a Skill
Create when:
- Technique wasn't intuitively obvious to you
- You'd reference this again across projects
- Pattern applies broadly (not project-specific)
- Others would benefit
Don't create for:
- One-off solutions
- Standard practices well-documented elsewhere
- Project-specific conventions (put in .opencode/AGENTS.md or similar)
- Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)
Skill Types
Technique
Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
Pattern
Way of thinking about problems (flatten-with-flags, test-invariants)
Reference
API docs, syntax guides, tool documentation
Directory Structure (OpenCode)
~/.config/opencode/skill/
skill-name/
SKILL.md # Main reference (required)
references/ # Detailed documentation
examples/ # Working examples
scripts/ # Utility scripts
Separate files for:
- Heavy reference (100+ lines) - API docs, comprehensive syntax →
references/ - Reusable tools - Scripts, utilities, templates →
scripts/ - Working examples - Complete runnable examples →
examples/
Keep inline:
- Principles and concepts
- Code patterns (< 50 lines)
- Quick reference material
SKILL.md Structure (OpenCode)
Frontmatter (YAML):
- Required:
nameanddescription - Optional:
license,compatibility,metadata - Max 1024 characters for description
name: Use letters, numbers, and hyphens only (no parentheses, special chars)description: Third-person, describes ONLY when to use (NOT what it does)- Start with "This skill should be used when..." for OpenCode
- Include specific triggering conditions and symptoms
- NEVER summarize the skill's process or workflow (see CSO section for why)
- Keep under 500 characters if possible
---
name: skill-name
description: This skill should be used when the user asks to "trigger 1", "trigger 2", or needs guidance on specific capability.
compatibility: opencode
---
# Skill Name
## When to Use This Skill
[Triggering conditions]
## Overview
What is this? Core principle in 1-2 sentences.
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Common Mistakes
What goes wrong + fixes
## Additional Resources
- **`references/detailed-guide.md`** - Comprehensive reference
- **`examples/working-example.sh`** - Complete example
- **`scripts/utility.py`** - Automation script
Claude Search Optimization (CSO)
Critical for discovery: Future Claude needs to FIND your skill
1. Rich Description Field
Purpose: Claude reads description to decide which skills to load for a given task.
Format: Start with "This skill should be used when..." for OpenCode
CRITICAL: Description = When to Use, NOT What the Skill Does
The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.
Why this matters: Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content.
The trap: Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: This skill should be used when executing plans - dispatches subagent per task with code review between tasks
# ✅ GOOD: Just triggering conditions, no workflow summary
description: This skill should be used when executing implementation plans with independent tasks in the current session
Content:
- Use concrete triggers, symptoms, and situations that signal this skill applies
- Write in third person (injected into system prompt)
- NEVER summarize the skill's process or workflow
2. Keyword Coverage
Use words Claude would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: Actual commands, library names, file types
3. Descriptive Naming
Use active voice, verb-first:
- ✅
creating-skillsnotskill-creation - ✅
condition-based-waitingnotasync-test-helpers
Gerunds (-ing) work well for processes:
creating-skills,testing-skills,debugging-with-logs
4. Cross-Referencing Other Skills
When writing documentation that references other skills:
Use skill name only, with explicit requirement markers:
- ✅ Good:
**REQUIRED SUB-SKILL:** Use test-driven-development - ✅ Good:
**REQUIRED BACKGROUND:** You MUST understand systematic-debugging - ❌ Bad:
See skills/testing/test-driven-development(unclear if required)
Why no @ links: @ syntax force-loads files immediately, consuming context before you need them.
Flowchart Usage
digraph when_flowchart {
"Need to show information?" [shape=diamond];
"Decision where I might go wrong?" [shape=diamond];
"Use markdown" [shape=box];
"Small inline flowchart" [shape=box];
"Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
"Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
"Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
Use flowcharts ONLY for:
- Non-obvious decision points
- Process loops where you might stop too early
- "When to use A vs B" decisions
Never use flowcharts for:
- Reference material → Tables, lists
- Code examples → Markdown blocks
- Linear instructions → Numbered lists
See references/graphviz-conventions.dot for graphviz style rules.
Visualizing for your human partner: Use scripts/render-graphs.js to render a skill's flowcharts to SVG:
./scripts/render-graphs.js /path/to/skill # Each diagram separately
./scripts/render-graphs.js /path/to/skill --combine # All diagrams in one SVG
Code Examples
One excellent example beats many mediocre ones
Choose most relevant language:
- Testing techniques → TypeScript/JavaScript
- System debugging → Shell/Python
- Data processing → Python
Good example:
- Complete and runnable
- Well-commented explaining WHY
- From real scenario
- Shows pattern clearly
- Ready to adapt (not generic template)
Don't:
- Implement in 5+ languages
- Create fill-in-the-blank templates
- Write contrived examples
You're good at porting - one great example is enough.
File Organization
Self-Contained Skill
defense-in-depth/
SKILL.md # Everything inline
When: All content fits, no heavy reference needed
Skill with Reusable Tool
condition-based-waiting/
SKILL.md # Overview + patterns
examples/
example.ts # Working helpers to adapt
When: Tool is reusable code, not just narrative
Skill with Heavy Reference
pptx/
SKILL.md # Overview + workflows
references/
pptxgenjs.md # 600 lines API reference
ooxml.md # 500 lines XML structure
scripts/ # Executable tools
When: Reference material too large for inline
The Iron Law (Same as TDD)
NO SKILL WITHOUT A FAILING TEST FIRST
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over. Edit skill without testing? Same violation.
No exceptions:
- Not for "simple additions"
- Not for "just adding a section"
- Not for "documentation updates"
- Don't keep untested changes as "reference"
- Don't "adapt" while running tests
- Delete means delete
REQUIRED BACKGROUND: The test-driven-development skill explains why this matters. Same principles apply to documentation.
Testing All Skill Types
Different skill types need different test approaches. For complete testing methodology, see references/testing-skills-with-subagents.md.
Discipline-Enforcing Skills (rules/requirements)
Examples: TDD, verification-before-completion, designing-before-coding
Test with:
- Academic questions: Do they understand the rules?
- Pressure scenarios: Do they comply under stress?
- Multiple pressures combined: time + sunk cost + exhaustion
- Identify rationalizations and add explicit counters
Success criteria: Agent follows rule under maximum pressure
Technique Skills (how-to guides)
Examples: condition-based-waiting, root-cause-tracing, defensive-programming
Test with:
- Application scenarios: Can they apply the technique correctly?
- Variation scenarios: Do they handle edge cases?
- Missing information tests: Do instructions have gaps?
Success criteria: Agent successfully applies technique to new scenario
Pattern Skills (mental models)
Examples: reducing-complexity, information-hiding concepts
Test with:
- Recognition scenarios: Do they recognize when pattern applies?
- Application scenarios: Can they use the mental model?
- Counter-examples: Do they know when NOT to apply?
Success criteria: Agent correctly identifies when/how to apply pattern
Reference Skills (documentation/APIs)
Examples: API documentation, command references, library guides
Test with:
- Retrieval scenarios: Can they find the right information?
- Application scenarios: Can they use what they found correctly?
- Gap testing: Are common use cases covered?
Success criteria: Agent finds and correctly applies reference information
Common Rationalizations for Skipping Testing
| Excuse | Reality |
|---|---|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
All of these mean: Test before deploying. No exceptions.
Bulletproofing Skills Against Rationalization
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
Psychology note: Understanding WHY persuasion techniques work helps you apply them systematically. See references/persuasion-principles.md for research foundation on authority, commitment, scarcity, social proof, and unity principles.
Close Every Loophole Explicitly
Don't just state the rule - forbid specific workarounds. See references/persuasion-principles.md for detailed techniques.
Build Rationalization Table
Capture rationalizations from baseline testing. Every excuse agents make goes in the table.
Create Red Flags List
Make it easy for agents to self-check when rationalizing.
RED-GREEN-REFACTOR for Skills
Follow the TDD cycle:
RED: Write Failing Test (Baseline)
Run pressure scenario with subagent WITHOUT the skill. Document exact behavior:
- What choices did they make?
- What rationalizations did they use (verbatim)?
- Which pressures triggered violations?
This is "watch the test fail" - you must see what agents naturally do before writing the skill.
GREEN: Write Minimal Skill
Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
Run same scenarios WITH skill. Agent should now comply.
REFACTOR: Close Loopholes
Agent found new rationalization? Add explicit counter. Re-test until bulletproof.
Testing methodology: See references/testing-skills-with-subagents.md for the complete testing methodology:
- How to write pressure scenarios
- Pressure types (time, sunk cost, authority, exhaustion)
- Plugging holes systematically
- Meta-testing techniques
Anti-Patterns
❌ Narrative Example
"In session 2025-10-03, we found empty projectDir caused..." Why bad: Too specific, not reusable
❌ Multi-Language Dilution
example-js.js, example-py.py, example-go.go Why bad: Mediocre quality, maintenance burden
❌ Code in Flowcharts
step1 [label="import fs"];
step2 [label="read file"];
Why bad: Can't copy-paste, hard to read
❌ Generic Labels
helper1, helper2, step3, pattern4 Why bad: Labels should have semantic meaning
STOP: Before Moving to Next Skill
After writing ANY skill, you MUST STOP and complete the deployment process.
Do NOT:
- Create multiple skills in batch without testing each
- Move to next skill before current one is verified
- Skip testing because "batching is more efficient"
The deployment checklist below is MANDATORY for EACH skill.
Deploying untested skills = deploying untested code. It's a violation of quality standards.
Skill Creation Checklist (TDD Adapted)
IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.
RED Phase - Write Failing Test:
- Create pressure scenarios (3+ combined pressures for discipline skills)
- Run scenarios WITHOUT skill - document baseline behavior verbatim
- Identify patterns in rationalizations/failures
GREEN Phase - Write Minimal Skill:
- Name uses only letters, numbers, hyphens (no parentheses/special chars)
- YAML frontmatter with name, description, and OpenCode fields
- Description starts with "This skill should be used when..." and includes specific triggers/symptoms
- Description written in third person
- Keywords throughout for search (errors, symptoms, tools)
- Clear overview with core principle
- Address specific baseline failures identified in RED
- Code inline OR link to separate file in references/examples/scripts/
- One excellent example (not multi-language)
- Run scenarios WITH skill - verify agents now comply
REFACTOR Phase - Close Loopholes:
- Identify NEW rationalizations from testing
- Add explicit counters (if discipline skill)
- Build rationalization table from all test iterations
- Create red flags list
- Re-test until bulletproof
Quality Checks:
- Small flowchart only if decision non-obvious
- Quick reference table
- Common mistakes section
- No narrative storytelling
- Supporting files in references/, examples/, or scripts/ directories
Deployment:
- Commit skill to git and push (if configured)
- Consider contributing back via PR (if broadly useful)
Discovery Workflow
How future Claude finds your skill:
- Encounters problem ("tests are flaky")
- Finds SKILL (description matches)
- Scans overview (is this relevant?)
- Reads patterns (quick reference table)
- Loads example (only when implementing)
Optimize for this flow - put searchable terms early and often.
Additional Resources
Reference Files
references/anthropic-best-practices.md- Anthropic's official skill authoring best practicesreferences/persuasion-principles.md- Research on persuasion techniques for bulletproofing skillsreferences/graphviz-conventions.dot- Graphviz style guide for flowchartsreferences/testing-skills-with-subagents.md- Complete testing methodology with pressure scenarios
Examples
examples/CLAUDE_MD_TESTING.md- Complete test campaign example
Scripts
scripts/render-graphs.js- Utility to render flowcharts from SKILL.md to SVG
The Bottom Line
Creating skills IS TDD for process documentation.
Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.
Score
Total Score
Based on repository quality metrics
SKILL.mdファイルが含まれている
ライセンスが設定されている
100文字以上の説明がある
GitHub Stars 100以上
1ヶ月以内に更新
10回以上フォークされている
オープンIssueが50未満
プログラミング言語が設定されている
1つ以上のタグが設定されている
Reviews
Reviews coming soon


