
tdd-loop
by mpuig
Agent-native workflow orchestration platform that separates intelligence (agents) from infrastructure (state, logging, caching, retries)
0 stars · 1 fork · Jan 10, 2026
SKILL.md
name: tdd-loop
description: Test-driven development loop for workflows - write tests first, then implementation
TDD Loop Skill
Use this skill to follow test-driven development practices for workflow implementation.
TDD Cycle
- Write Test First: Define expected behavior in test.py
- Run Test (Red): Verify test fails with clear error
- Write Implementation: Add minimal code to make test pass
- Run Test (Green): Verify test passes
- Refactor: Clean up code while keeping tests green
- Repeat: Move to next test case
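Each Red/Green check is just a scoped pytest run. The snippet below is a minimal sketch of driving that check from Python; the test file and test name are placeholders, not part of this skill.
# Minimal sketch of a single Red/Green check; test file and test name are placeholders.
import sys

import pytest

# Run only the test currently being driven; -x stops at the first failure.
exit_code = pytest.main(["-x", "test.py", "-k", "test_workflow_basic"])

# Red step: expect a non-zero exit code (the test should fail before the implementation exists).
# Green step: expect 0 once the minimal implementation is in place.
sys.exit(exit_code)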
Workflow Testing Pattern
test.py Structure
#!/usr/bin/env python3
"""Tests for workflow."""
from pathlib import Path

from workflow_name import WorkflowClass, WorkflowParams


def test_workflow_basic():
    """Test basic workflow execution."""
    params = WorkflowParams()
    workflow = WorkflowClass(params, workflow_dir=Path(__file__).parent)
    result = workflow.run()
    assert result == 0  # Success
    # Add more assertions


def test_workflow_with_params():
    """Test workflow with specific parameters."""
    params = WorkflowParams(param1="value")
    workflow = WorkflowClass(params, workflow_dir=Path(__file__).parent)
    result = workflow.run()
    assert result == 0
    # Verify outputs, side effects, etc.


def test_workflow_error_handling():
    """Test workflow handles errors gracefully."""
    params = WorkflowParams(invalid="value")
    workflow = WorkflowClass(params, workflow_dir=Path(__file__).parent)
    # Should handle error, not crash
    result = workflow.run()
    assert result != 0  # Non-zero exit code for errors
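As more tests are added, the repeated setup above can be pulled into a pytest fixture. This is only a sketch that reuses the WorkflowClass and WorkflowParams names from the example; adapt it to the actual workflow module.
# Sketch of a shared fixture for the tests above; names mirror the example.
from pathlib import Path

import pytest

from workflow_name import WorkflowClass, WorkflowParams


@pytest.fixture
def make_workflow():
    """Build a workflow instance with optional parameter overrides."""
    def _make(**overrides):
        params = WorkflowParams(**overrides)
        return WorkflowClass(params, workflow_dir=Path(__file__).parent)
    return _make


def test_workflow_basic_with_fixture(make_workflow):
    assert make_workflow().run() == 0
The factory-style fixture keeps parameter overrides at the call site, so each test still documents its own inputs.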
dry_run.py Testing
The dry run is a form of integration testing with mocks:
#!/usr/bin/env python3
"""Dry run with mock data."""
from raw_runtime import DryRunContext


def mock_external_api(ctx: DryRunContext):
    """Mock API call that would normally fetch real data."""
    return {
        "data": "mock_value",
        "status": "success",
    }


def mock_file_write(ctx: DryRunContext):
    """Mock file writing - don't actually write."""
    ctx.log("Would write to file: results/output.json")
    return True
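The mock handlers can also be unit-tested without the real runtime. The sketch below assumes dry_run.py is importable as a module and relies only on the log method used above; the actual DryRunContext API may offer more.
# Sketch: exercising the dry-run mocks with a minimal stand-in context.
# Assumes dry_run.py is importable; only the log() method used above is relied on.
from dry_run import mock_external_api, mock_file_write


class FakeContext:
    """Minimal stand-in for DryRunContext that records log messages."""

    def __init__(self):
        self.messages = []

    def log(self, message: str) -> None:
        self.messages.append(message)


def test_mock_external_api_shape():
    result = mock_external_api(FakeContext())
    assert result["status"] == "success"


def test_mock_file_write_logs_intent():
    ctx = FakeContext()
    assert mock_file_write(ctx) is True
    assert "results/output.json" in ctx.messages[0]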
TDD Benefits
- Clear Requirements: Tests document expected behavior
- Regression Prevention: Existing tests catch breaking changes
- Refactoring Safety: Change internals without breaking API
- Design Feedback: Hard-to-test code signals design issues
When to Use
- Adding new workflow functionality
- Fixing bugs (write a failing test, then fix; see the regression-test sketch after this list)
- Refactoring existing code
- Integrating new tools
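For the bug-fixing case, the failing test written first doubles as a regression test once the fix lands. A hypothetical example, reusing the workflow test pattern above; the empty-symbol scenario and parameter name are illustrative, not from this repository.
# Hypothetical regression test for a bug fix; the parameter and scenario are illustrative.
def test_empty_symbol_is_rejected():
    """Bug: an empty symbol used to crash the workflow; it should exit non-zero instead."""
    params = WorkflowParams(symbol="")
    workflow = WorkflowClass(params, workflow_dir=Path(__file__).parent)
    assert workflow.run() != 0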
Red-Green-Refactor Example
# 1. RED: Write failing test
def test_fetch_stock_data():
    result = fetch_stock_data("AAPL")
    assert result["symbol"] == "AAPL"
    assert "price" in result

# Run: pytest test.py -k test_fetch (FAILS - function doesn't exist)


# 2. GREEN: Minimal implementation
def fetch_stock_data(symbol: str) -> dict:
    return {"symbol": symbol, "price": 150.0}  # Hardcoded for now

# Run: pytest test.py -k test_fetch (PASSES)


# 3. REFACTOR: Real implementation
def fetch_stock_data(symbol: str) -> dict:
    from tools.yahoo_finance import get_quote

    data = get_quote(symbol)
    return {"symbol": data.symbol, "price": data.current_price}

# Run: pytest test.py -k test_fetch (PASSES)
Score
Total Score: 65/100
Based on repository quality metrics
✓ SKILL.md: includes a SKILL.md file (+20)
○ LICENSE: a license is set (0/10)
✓ Description: description is at least 100 characters long (+10)
○ Popularity: 100 or more GitHub stars (0/15)
✓ Recent activity: updated within the last 3 months (+5)
○ Forks: forked 10 or more times (0/5)
✓ Issue management: fewer than 50 open issues (+5)
✓ Language: a programming language is set (+5)
✓ Tags: at least one tag is set (+5)