
researching-on-the-internet

by ed3dai

Ed's repo of Claude Code plugins, centered around a research-plan-implement workflow. Only a tiny bit cursed. If you're lucky.

72🍴 3📅 Jan 23, 2026

SKILL.md


---
name: researching-on-the-internet
description: Use when planning features and need current API docs, library patterns, or external knowledge; when testing hypotheses about technology choices or claims; when verifying assumptions before design decisions - gathers well-sourced, current information from the internet to inform technical decisions
---

Researching on the Internet

Overview

Gather accurate, current, well-sourced information from the internet to inform planning and design decisions. Test hypotheses, verify claims, and find authoritative sources for APIs, libraries, and best practices.

When to Use

Use for:

  • Finding current API documentation before integration design
  • Testing hypotheses ("Is library X faster than Y?", "Does approach Z work with version N?")
  • Verifying technical claims or assumptions
  • Comparing libraries and researching alternatives
  • Finding best practices and current community consensus

Don't use for:

  • Information already in codebase (use codebase search)
  • General knowledge within Claude's training (just answer directly)
  • Project-specific conventions (check CLAUDE.md)

Core Research Workflow

  1. Define question clearly - specific beats vague
  2. Search official sources first - docs, release notes, changelogs
  3. Cross-reference - verify claims across multiple sources
  4. Evaluate quality - tier sources (official → verified → community)
  5. Report concisely - lead with answer, provide links and evidence
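The five steps above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the `research` function, the `TIER_BY_KIND` mapping, and the input/result shapes are all assumptions; real search results come from whatever tools are available.

```python
# Hypothetical sketch of steps 4-5: tier the gathered sources, then lead
# with the answer from the best-tier source and attach evidence links.

TIER_BY_KIND = {"official": 1, "verified": 2, "community": 3}

def research(question, results):
    """Rank raw search results by source tier and lead with the best answer."""
    # Step 4: evaluate quality - official beats verified beats community
    ranked = sorted(results, key=lambda r: TIER_BY_KIND.get(r["kind"], 3))
    # Step 5: report concisely - answer first, links and evidence after
    best = ranked[0]
    return {
        "question": question,
        "answer": best["claim"],
        "evidence": [{"url": r["url"], "tier": TIER_BY_KIND[r["kind"]]} for r in ranked],
    }

report = research(
    "Does library X support streaming?",
    [
        {"kind": "community", "url": "https://example.com/forum", "claim": "maybe"},
        {"kind": "official", "url": "https://example.com/docs", "claim": "yes, since v2.1"},
    ],
)
print(report["answer"])  # -> yes, since v2.1
```

The point is the ordering: the report leads with the tier-1 claim even though the community source was found first.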

Hypothesis Testing

When given a hypothesis to test:

  1. Identify falsifiable claims - break hypothesis into testable parts
  2. Search for supporting evidence - what confirms this?
  3. Search for disproving evidence - what contradicts this?
  4. Evaluate source quality - weight evidence by tier
  5. Report findings - supported/contradicted/inconclusive with evidence
  6. Note confidence level - strong consensus vs single source vs conflicting info

Example:

Hypothesis: "Library X is faster than Y for large datasets"

Search for:
✓ Benchmarks comparing X and Y
✓ Performance documentation for both
✓ GitHub issues mentioning performance
✓ Real-world case studies

Report:
- Supported: [evidence with links]
- Contradicted: [evidence with links]
- Conclusion: [supported/contradicted/mixed] with [confidence level]
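The supported/contradicted/mixed verdict can be made mechanical. A hedged sketch, assuming illustrative tier weights and thresholds (the `WEIGHT` table, the verdict names, and the confidence cutoff are all arbitrary choices, not fixed rules):

```python
# Weigh both sides of a hypothesis, with higher-tier sources counting more.

WEIGHT = {1: 3, 2: 2, 3: 1}  # tier 1 (official) counts most

def judge(evidence):
    """evidence: list of (tier, supports_hypothesis) pairs."""
    support = sum(WEIGHT[t] for t, s in evidence if s)
    against = sum(WEIGHT[t] for t, s in evidence if not s)
    if support and against:
        verdict = "mixed"
    elif support:
        verdict = "supported"
    elif against:
        verdict = "contradicted"
    else:
        verdict = "inconclusive"
    confidence = "strong" if abs(support - against) >= 3 else "weak"
    return verdict, confidence

# One official benchmark supports, one old forum post disagrees.
print(judge([(1, True), (3, False)]))  # -> ('mixed', 'weak')
```

Note that a single tier-3 dissent is enough to downgrade the verdict to "mixed" - conflicting evidence should surface in the report, not be averaged away.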

Quick Reference

| Task | Strategy |
| --- | --- |
| API docs | Official docs → GitHub README → Recent tutorials |
| Library comparison | Official sites → npm/PyPI stats → GitHub activity |
| Best practices | Official guides → Recent posts → Stack Overflow |
| Troubleshooting | Error search → GitHub issues → Stack Overflow |
| Current state | Release notes → Changelog → Recent announcements |
| Hypothesis testing | Define claims → Search both sides → Weight evidence |

Source Evaluation Tiers

| Tier | Sources | Usage |
| --- | --- | --- |
| 1 - Most reliable | Official docs, release notes, changelogs | Primary evidence |
| 2 - Generally reliable | Verified tutorials, maintained examples, reputable blogs | Supporting evidence |
| 3 - Use with caution | Stack Overflow, forums, old tutorials | Check dates, cross-verify |

Always note source tier in findings.
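A first-pass tier assignment can be sketched as a host lookup. The domain sets below are examples only, and the tier-2 default is an assumption; real tiering needs judgment (an unmaintained tutorial on a reputable host is still tier 3), not just a table.

```python
# Hedged sketch: classify a source URL into a tier by its host.

from urllib.parse import urlparse

OFFICIAL = {"docs.python.org", "nodejs.org"}   # tier 1: official docs
CAUTION = {"stackoverflow.com", "reddit.com"}  # tier 3: check dates, cross-verify

def source_tier(url):
    host = urlparse(url).hostname or ""
    if host in OFFICIAL:
        return 1
    if host in CAUTION:
        return 3
    return 2  # everything else: treat as "generally reliable" pending review

print(source_tier("https://docs.python.org/3/library/json.html"))  # -> 1
```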

Search Strategies

Multiple approaches:

  • WebSearch for overview and current information
  • WebFetch for specific documentation pages
  • Check MCP servers (Context7, search tools) if available
  • Follow links to authoritative sources
  • Search official documentation before community resources

Cross-reference:

  • Verify claims across multiple sources
  • Check publication dates - prefer recent
  • Flag breaking changes or deprecations
  • Note when information might be outdated
  • Distinguish stable APIs from experimental features
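The date check above can be reduced to a one-line predicate. The 18-month cutoff here is an arbitrary assumption for illustration; the right threshold depends on how fast the ecosystem moves.

```python
# Flag any source older than a cutoff as possibly stale.

from datetime import date

def outdated(published: date, today: date, max_age_days: int = 548) -> bool:
    """True if the source is older than ~18 months and should be flagged."""
    return (today - published).days > max_age_days

today = date(2026, 1, 23)
print(outdated(date(2023, 5, 1), today))   # -> True: flag as possibly stale
print(outdated(date(2025, 11, 2), today))  # -> False: recent enough
```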

Reporting Findings

Lead with answer:

  • Direct answer to question first
  • Supporting details with source links second
  • Code examples when relevant (with attribution)

Include metadata:

  • Version numbers and compatibility requirements
  • Publication dates for time-sensitive topics
  • Security considerations or best practices
  • Common gotchas or migration issues
  • Confidence level based on source consensus
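One way to keep that metadata together is a small record type. The field names below are assumptions mirroring the checklist above, not a required schema:

```python
# A finding record: answer first, then sources, versions, dates, confidence.

from dataclasses import dataclass, field

@dataclass
class Finding:
    question: str
    answer: str                                   # direct answer first
    sources: list = field(default_factory=list)   # (url, tier) pairs
    version: str = ""                             # version/compatibility notes
    published: str = ""                           # date, for time-sensitive topics
    confidence: str = "unknown"                   # based on source consensus

f = Finding(
    question="Minimum Node version for library X?",
    answer="18+ according to the docs",
    sources=[("https://example.com/docs", 1)],
    confidence="high",
)
```

Keeping the fields explicit makes it obvious when a report is missing its version, date, or confidence note.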

Handle uncertainty clearly:

  • "No official documentation found for [topic]" is valid
  • Explain what you searched and where you looked
  • Distinguish "doesn't exist" from "couldn't find reliable information"
  • Present what you found with appropriate caveats
  • Suggest alternative search terms or approaches

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Searching only one source | Cross-reference minimum 2-3 sources |
| Ignoring publication dates | Check dates, flag outdated information |
| Treating all sources equally | Use tier system, weight accordingly |
| Reporting before verification | Verify claims across sources first |
| Vague hypothesis testing | Break into specific falsifiable claims |
| Skipping official docs | Always start with tier 1 sources |
| Over-confident with single source | Note source tier and look for consensus |
