
mcp-chaining

by parcadei

Context management for Claude Code. Hooks maintain state via ledgers and handoffs. MCP execution without context pollution. Agent orchestration with isolated context windows.

3,352 stars · 252 forks · updated Jan 23, 2026

Use Cases

  • 🔗 MCP Server Integration: AI tool integration using the Model Context Protocol with mcp-chaining.
  • 🔗 API Integration: easily build API integrations with external services.
  • 🔄 Data Synchronization: automatically sync data between multiple systems.

SKILL.md


---
name: mcp-chaining
description: Research-to-implement pipeline chaining 5 MCP tools with graceful degradation
allowed-tools: [Bash, Read]
user-invocable: false
---

MCP Chaining Pipeline

A research-to-implement pipeline that chains 5 MCP tools for end-to-end workflows.

When to Use

  • Building multi-tool MCP pipelines
  • Understanding how to chain MCP calls with graceful degradation
  • Debugging MCP environment variable issues
  • Learning the tool naming conventions for different MCP servers

What We Built

A pipeline that chains these tools:

Step  Server    Tool ID                          Purpose
1     nia       nia__search                      Search library documentation
2     ast-grep  ast-grep__find_code              Find AST code patterns
3     morph     morph__warpgrep_codebase_search  Fast codebase search
4     qlty      qlty__qlty_check                 Code quality validation
5     git       git__git_status                  Git operations
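All of the tool IDs in the table above follow the `<server>__<tool>` naming convention, so the owning server can be recovered by splitting on the first double underscore. A minimal sketch (the helper name `server_for` is illustrative, not part of the skill):

```python
def server_for(tool_id: str) -> str:
    """Return the MCP server that owns a namespaced tool ID."""
    # Split on the first "__" only, since tool names may themselves
    # contain underscores (e.g. qlty__qlty_check).
    return tool_id.split("__", 1)[0]

for tool_id in ("nia__search", "ast-grep__find_code",
                "morph__warpgrep_codebase_search",
                "qlty__qlty_check", "git__git_status"):
    print(f"{server_for(tool_id):10} <- {tool_id}")
```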

Key Files

  • scripts/research_implement_pipeline.py - Main pipeline implementation
  • scripts/test_research_pipeline.py - Test harness with isolated sandbox
  • workspace/pipeline-test/sample_code.py - Test sample code

Usage Examples

# Dry-run pipeline (preview plan without changes)
uv run python -m runtime.harness scripts/research_implement_pipeline.py \
    --topic "async error handling python" \
    --target-dir "./workspace/pipeline-test" \
    --dry-run --verbose

# Run tests
uv run python -m runtime.harness scripts/test_research_pipeline.py --test all

# View the pipeline script
cat scripts/research_implement_pipeline.py

Critical Fix: Environment Variables

The MCP SDK's get_default_environment() includes only basic variables (PATH, HOME, etc.), not the full os.environ. We fixed src/runtime/mcp_client.py to pass the full environment:

# In _connect_stdio method:
full_env = {**os.environ, **(resolved_env or {})}

This ensures API keys from ~/.claude/.env reach subprocesses.
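The merge order matters: the parent environment goes first so server-specific overrides win on conflict. A runnable sketch of the same pattern (the helper name `build_subprocess_env` is hypothetical, not the actual function in mcp_client.py):

```python
import os

def build_subprocess_env(resolved_env=None):
    """Merge the parent process environment with per-server overrides.

    os.environ is spread first so keys loaded into the parent process
    (e.g. from ~/.claude/.env) are inherited; values in resolved_env
    take precedence on conflict.
    """
    return {**os.environ, **(resolved_env or {})}

# Server-specific key overrides anything inherited from the parent.
env = build_subprocess_env({"NIA_API_KEY": "sk-example"})
```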

Graceful Degradation Pattern

Each tool is optional. If a tool is unavailable (server disabled, missing API key, etc.), the pipeline skips that step and continues:

async def check_tool_available(tool_id: str) -> bool:
    """Check if an MCP tool's server is configured and enabled."""
    # manager is the pipeline's MCP client manager (defined elsewhere
    # in the script); its _config holds the parsed server registry.
    server_name = tool_id.split("__")[0]
    server_config = manager._config.get_server(server_name)
    if not server_config or server_config.disabled:
        return False
    return True

# In step function:
if not await check_tool_available("nia__search"):
    return StepResult(status=StepStatus.SKIPPED, message="Nia not available")
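The snippets above reference StepResult, StepStatus, and a manager that are defined elsewhere in the pipeline script. A self-contained sketch of the same degradation pattern, with minimal stand-in types and a plain dict in place of manager._config (synchronous for brevity; the real check is async):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any

class StepStatus(Enum):          # member names taken from the snippets above
    SUCCESS = "success"
    SKIPPED = "skipped"
    FAILED = "failed"

@dataclass
class StepResult:
    status: StepStatus
    message: str = ""
    data: Any = None
    error: str = ""

# Stand-in for the manager's server registry.
SERVERS = {"nia": {"disabled": False}, "qlty": {"disabled": True}}

def check_tool_available(tool_id: str) -> bool:
    """A tool is usable only if its server is configured and enabled."""
    cfg = SERVERS.get(tool_id.split("__")[0])
    return bool(cfg) and not cfg["disabled"]

def run_step(tool_id: str) -> StepResult:
    # Unavailable tools produce SKIPPED, never an exception.
    if not check_tool_available(tool_id):
        return StepResult(StepStatus.SKIPPED, f"{tool_id} not available")
    return StepResult(StepStatus.SUCCESS, f"{tool_id} ran")
```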

Tool Name Reference

nia (Documentation Search)

nia__search              - Universal documentation search
nia__nia_research        - Research with sources
nia__nia_grep            - Grep-style doc search
nia__nia_explore         - Explore package structure

ast-grep (AST Pattern Search)

ast-grep__find_code      - Find code by AST pattern
ast-grep__find_code_by_rule - Find by YAML rule
ast-grep__scan_code      - Scan with multiple patterns

morph (Fast Text Search + Edit)

morph__warpgrep_codebase_search  - 20x faster grep
morph__edit_file                 - Smart file editing

qlty (Code Quality)

qlty__qlty_check         - Run quality checks
qlty__qlty_fmt           - Auto-format code
qlty__qlty_metrics       - Get code metrics
qlty__smells             - Detect code smells

git (Version Control)

git__git_status          - Get repo status
git__git_diff            - Show differences
git__git_log             - View commit history
git__git_add             - Stage files

Pipeline Architecture

                +----------------+
                |   CLI Args     |
                |  (topic, dir)  |
                +--------+-------+
                         |
                +--------v-------+
                | PipelineContext|
                | (shared state) |
                +--------+-------+
                         |
    +---------+----------+---------+---------+
    |         |          |         |         |
+---v---+ +---v----+ +---v---+ +---v---+ +---v--+
|  nia  | |ast-grep| | morph | | qlty  | | git  |
| search| |pattern | |search | | check | |status|
+---+---+ +---+----+ +---+---+ +---+---+ +---+--+
    |         |          |         |         |
    +---------+----------+---------+---------+
                         |
                +--------v-------+
                |  StepResult[]  |
                |  (aggregated)  |
                +----------------+
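The architecture above can be sketched as a sequential runner that threads one shared context object through each step and aggregates the outcomes. A minimal sketch (field and step names are illustrative, not the skill's actual code):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class PipelineContext:
    """Shared state that flows through every step."""
    topic: str
    results: list = field(default_factory=list)
    errors: list = field(default_factory=list)

async def step_search(ctx: PipelineContext) -> None:
    # A real step would call an MCP tool; here we just record output.
    ctx.results.append(("nia", f"docs for {ctx.topic}"))

async def step_check(ctx: PipelineContext) -> None:
    ctx.results.append(("qlty", "0 issues"))

async def run_pipeline(topic: str) -> PipelineContext:
    """Run steps in order; each reads and writes the shared context."""
    ctx = PipelineContext(topic=topic)
    for step in (step_search, step_check):
        await step(ctx)
    return ctx

ctx = asyncio.run(run_pipeline("async error handling"))
print(f"{len(ctx.results)} step results, {len(ctx.errors)} errors")
```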

Error Handling

The pipeline captures errors without failing the entire run:

try:
    result = await call_mcp_tool("nia__search", {"query": topic})
    return StepResult(status=StepStatus.SUCCESS, data=result)
except Exception as e:
    ctx.errors.append(f"nia: {e}")
    return StepResult(status=StepStatus.FAILED, error=str(e))
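That try/except block repeats once per step, so it can be factored into a wrapper. A sketch using plain dicts in place of the skill's StepResult type (the decorator name `capture_errors` is an assumption, not part of the skill):

```python
import asyncio
import functools

def capture_errors(step_name):
    """Wrap an async step so exceptions become a failed result
    instead of aborting the whole pipeline."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(ctx, *args, **kwargs):
            try:
                data = await fn(ctx, *args, **kwargs)
                return {"status": "success", "data": data}
            except Exception as e:
                # Record the error on the shared context and keep going.
                ctx["errors"].append(f"{step_name}: {e}")
                return {"status": "failed", "error": str(e)}
        return wrapper
    return decorator

@capture_errors("nia")
async def search_step(ctx):
    raise RuntimeError("connection refused")  # simulate a tool outage

ctx = {"errors": []}
result = asyncio.run(search_step(ctx))
```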

Creating Your Own Pipeline

  1. Copy the pattern from scripts/research_implement_pipeline.py
  2. Define your steps as async functions
  3. Use check_tool_available() for graceful degradation
  4. Chain results through PipelineContext
  5. Aggregate with print_summary()
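Steps 2 through 5 can be condensed into a runnable skeleton. This is a sketch only: `AVAILABLE` stands in for check_tool_available(), the dict context stands in for PipelineContext, and print_summary here just formats a string:

```python
import asyncio

AVAILABLE = {"nia", "git"}  # stand-in for check_tool_available()

async def my_step(ctx, server, work):
    """Step 2: steps are async functions taking the shared context."""
    if server not in AVAILABLE:          # step 3: graceful degradation
        ctx["skipped"].append(server)
        return
    ctx["data"][server] = work           # step 4: chain via the context

def print_summary(ctx):                  # step 5: aggregate the outcomes
    return f"{len(ctx['data'])} ran, {len(ctx['skipped'])} skipped"

async def main():
    ctx = {"data": {}, "skipped": []}
    await my_step(ctx, "nia", "docs")
    await my_step(ctx, "qlty", "checks")
    await my_step(ctx, "git", "status")
    return ctx

ctx = asyncio.run(main())
summary = print_summary(ctx)
```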

Score

Total Score

95/100

Based on repository quality metrics

  • SKILL.md (+20): includes a SKILL.md file
  • LICENSE (+10): a license is set
  • Description (+10): the description is 100+ characters
  • Popularity (+15): 1,000+ GitHub stars
  • Recent activity (+10): updated within the past month
  • Forks (+5): forked 10+ times
  • Issue management (+5): fewer than 50 open issues
  • Language (+5): a programming language is set
  • Tags (+5): one or more tags are set
