checking-skill-best-practices

by onukura


⭐ 16 · 🍴 0 · 📅 Jan 13, 2026

SKILL.md


---
name: checking-skill-best-practices
description: Evaluates Claude skills against official best practices from Anthropic documentation. Use when reviewing skill quality, ensuring compliance with guidelines, or improving existing skills.
---

Checking Skill Best Practices

Evaluates a skill against the latest official guidelines from Anthropic. Always fetches current documentation to ensure accurate, up-to-date assessment.

When to Use

  • Reviewing skill quality before finalization
  • User asks to check compliance with best practices
  • Improving or refactoring existing skills

Evaluation Process

1. Fetch Latest Guidelines

Start here every time:

fetch_webpage("https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices")

Extract current evaluation criteria from the fetched content.
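The extraction step could be paired with a small parser that pulls candidate criteria out of the fetched HTML. A minimal sketch in Python, assuming the criteria appear as section headings on the page (`HeadingCollector` and `extract_criteria` are hypothetical helper names, not part of the skill):

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collects the text of h2/h3 tags as candidate evaluation criteria."""
    def __init__(self):
        super().__init__()
        self.criteria = []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "h3"):
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h2", "h3"):
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading and data.strip():
            self.criteria.append(data.strip())

def extract_criteria(html: str) -> list[str]:
    """Return the section headings found in the fetched documentation page."""
    parser = HeadingCollector()
    parser.feed(html)
    return parser.criteria
```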

2. Read Target Skill

read_file(".claude/skills/[skill-name]/SKILL.md")

3. Evaluate Against Fetched Guidelines

Compare skill against criteria from the documentation:

  • Core principles (conciseness, appropriate freedom, testing)
  • Skill structure (frontmatter, naming, description)
  • Content guidelines (terminology, time-sensitivity, patterns)
  • Anti-patterns to avoid
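A few of these criteria are mechanical enough to check in code. A hedged Python sketch (the function name, regex, and thresholds are illustrative; judgment-heavy criteria like "appropriate freedom" still need review against the fetched guidelines):

```python
import re

def evaluate_skill(skill_md: str, name: str) -> dict:
    """Run a handful of mechanical checks against a SKILL.md body."""
    lines = skill_md.splitlines()
    return {
        # YAML frontmatter must open the file and be closed by a second ---
        "has_frontmatter": lines[:1] == ["---"] and "---" in lines[1:],
        # Naming convention: gerund form, e.g. "checking-...", "adding-..."
        "gerund_name": bool(re.match(r"^[a-z]+ing(-[a-z0-9]+)*$", name)),
        # Conciseness: SKILL.md should stay under ~500 lines
        "under_500_lines": len(lines) <= 500,
        # Description quality: should say both what it does and when to use it
        "description_mentions_when": "use when" in skill_md.lower(),
    }
```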

4. Generate Report

Provide structured findings with specific recommendations:

Evaluation Report Template

## Skill Evaluation: [skill-name]

**Overall Score**: X/10
**Guideline Version**: [Date from fetched doc]

### ✅ Strengths
- [What follows best practices]

### ⚠️ Issues Found

#### Critical (Must Fix)
- [ ] [Issue with specific fix]

#### Recommended (Should Fix)
- [ ] [Improvement suggestion]

### 🔧 Actionable Steps
1. [Highest priority fix]
2. [Next improvement]

### 📚 Reference
[Relevant sections from fetched documentation]
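Filling the template programmatically from collected findings might look like the sketch below (`render_report` and its parameters are illustrative names, not part of the skill):

```python
def render_report(name: str, score: int, strengths: list[str],
                  critical: list[str], recommended: list[str]) -> str:
    """Fill the evaluation report template from collected findings."""
    out = [f"## Skill Evaluation: {name}", "",
           f"**Overall Score**: {score}/10", "",
           "### ✅ Strengths"]
    out += [f"- {s}" for s in strengths]
    out += ["", "### ⚠️ Issues Found", "", "#### Critical (Must Fix)"]
    out += [f"- [ ] {c}" for c in critical]
    out += ["", "#### Recommended (Should Fix)"]
    out += [f"- [ ] {r}" for r in recommended]
    return "\n".join(out)
```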

Usage Example

User: "Check if adding-new-metric follows best practices"

1. fetch_webpage(best-practices-url)
   → Extract current criteria

2. read_file(".claude/skills/adding-new-metric/SKILL.md")
   → Get skill content

3. Compare against extracted criteria:
   - Name format (gerund form?)
   - Description quality (what + when?)
   - Conciseness (≤500 lines?)
   - Progressive disclosure used?
   - Consistent terminology?

4. Generate report with specific fixes

Key Evaluation Areas

From the fetched documentation, focus on:

Critical:

  • YAML frontmatter correctness
  • Naming convention compliance
  • Description effectiveness

Important:

  • Conciseness (every token justified?)
  • Progressive disclosure (reference files?)
  • Consistent terminology

Code-specific (if applicable):

  • Unix-style paths
  • Error handling
  • MCP tool naming

Iteration Pattern

  1. Evaluate → 2. Report issues → 3. Apply fixes → 4. Re-evaluate

Use multi_replace_string_in_file for efficient corrections.
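The loop itself can be expressed generically: repeat evaluation and fixing until no issues remain or a round budget runs out. A minimal sketch with caller-supplied `evaluate` and `fix` callables (names and signatures are illustrative):

```python
from typing import Callable

def iterate_until_clean(text: str,
                        evaluate: Callable[[str], list[str]],
                        fix: Callable[[str, list[str]], str],
                        max_rounds: int = 3) -> str:
    """Evaluate -> fix -> re-evaluate until no issues remain or rounds run out."""
    for _ in range(max_rounds):
        issues = evaluate(text)
        if not issues:
            break
        text = fix(text, issues)
    return text
```

The round budget guards against a fixer that never converges.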

Score

Total Score

65/100

Based on repository quality metrics

  • SKILL.md — contains a SKILL.md file: +20
  • LICENSE — a license is set: +10
  • Description — has a description of 100+ characters: 0/10
  • Popularity — 100+ GitHub stars: 0/15
  • Recent activity — updated within the last month: +10
  • Forks — forked 10+ times: 0/5
  • Issue management — fewer than 50 open issues: +5
  • Language — a programming language is set: +5
  • Tags — at least one tag is set: +5
