
control-loop-extraction

by aiskillstore


⭐ 102 · 🍴 3 · 📅 Jan 23, 2026

SKILL.md


---
name: control-loop-extraction
description: Extract and analyze agent reasoning loops, step functions, and termination conditions. Use when needing to (1) understand how an agent framework implements reasoning (ReAct, Plan-and-Solve, Reflection, etc.), (2) locate the core decision-making logic, (3) analyze loop mechanics and termination conditions, (4) document the step-by-step execution flow of an agent, or (5) compare reasoning patterns across frameworks.
---

# Control Loop Extraction

Extracts and documents the core agent reasoning loop from framework source code.

## Process

1. **Locate the loop** - Find the main agent execution loop
2. **Classify the pattern** - Identify ReAct, Plan-and-Solve, Reflection, or Tree-of-Thoughts
3. **Extract the step function** - Document the LLM → Parse → Decide flow
4. **Map termination** - Catalog all loop exit conditions
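Step 2 can be approximated mechanically. As an illustration only, here is a toy keyword-based classifier; the `SIGNATURES` map and `classify_loop` helper are invented for this sketch, and real extraction should inspect the AST rather than raw text:

```python
import re

# Hypothetical heuristic: classify a framework's reasoning pattern by
# counting signature identifiers in its source text.
SIGNATURES = {
    "ReAct": [r"\bthought\b", r"\bobservation\b", r"parse_action"],
    "Plan-and-Solve": [r"\bplan\b", r"\breplan\b", r"execute_step"],
    "Reflection": [r"\bcritique\b", r"\breflect"],
    "Tree-of-Thoughts": [r"evaluate\(", r"select_best"],
}

def classify_loop(source: str) -> str:
    """Return the pattern whose signatures match the source most often."""
    scores = {
        name: sum(bool(re.search(p, source, re.IGNORECASE)) for p in patterns)
        for name, patterns in SIGNATURES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unknown"
```

A classifier like this is only a first pass for triage; ambiguous or hybrid loops still need manual reading.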

## Reasoning Pattern Identification

### Pattern Signatures

#### ReAct (Reason + Act)

```python
# Signature: Thought → Action → Observation cycle
while not done:
    thought = llm.generate(prompt)       # Reasoning
    action = parse_action(thought)       # Action selection
    observation = execute(action)        # Environment feedback
    prompt = update_prompt(observation)  # Loop continuation
```
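To make the cycle concrete, here is a self-contained toy version in which a scripted function stands in for the LLM; `scripted_llm`, `execute`, and the `finish[...]` action syntax are all invented for this sketch:

```python
# Minimal runnable ReAct loop with a scripted stand-in for the LLM.
def scripted_llm(prompt: str) -> str:
    # Pretend model: look up a fact first, then finish.
    if "Observation" not in prompt:
        return "Thought: need the population\nAction: lookup[france]"
    return "Thought: done\nAction: finish[68 million]"

def execute(action: str) -> str:
    return {"lookup[france]": "68 million"}.get(action, "unknown")

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        thought = scripted_llm(prompt)        # Reasoning
        action = thought.split("Action: ")[1] # Action selection
        if action.startswith("finish["):
            return action[len("finish["):-1]  # Terminal answer
        observation = execute(action)         # Environment feedback
        prompt += f"\n{thought}\nObservation: {observation}"  # Continuation
    return "max steps reached"
```

Note the `max_steps` bound even in this toy: the termination catalog below applies to every pattern.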

#### Plan-and-Solve

```python
# Signature: Upfront planning, then execution
plan = llm.generate("Create a plan for...")
for step in plan.steps:
    result = execute_step(step)
    if needs_replan(result):
        plan = replan(...)
```
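A runnable sketch of the same shape, with the planner and a deliberately failing step stubbed out; all step names and the replanning rule are invented for illustration:

```python
# Toy Plan-and-Solve: plan upfront, execute, replan on failure.
def plan_and_solve(goal: str) -> list[str]:
    plan = ["gather", "compute", "report"]  # stubbed upfront plan
    results, failed_once = [], False
    i = 0
    while i < len(plan):
        step = plan[i]
        ok = not (step == "compute" and not failed_once)  # first compute fails
        if not ok:
            failed_once = True
            # Replan: swap the failing step for a fallback, keep the rest.
            plan = plan[:i] + ["fallback_compute"] + plan[i + 1:]
            continue
        results.append(f"{step}: done")
        i += 1
    return results
```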

#### Reflection

```python
# Signature: Act → Self-critique → Adjust
while not done:
    action = llm.generate(prompt)
    result = execute(action)
    critique = llm.generate(f"Evaluate: {result}")
    if critique.needs_adjustment:
        prompt = adjust_approach(critique)
```
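The act/critique/adjust cycle in miniature, with a trivial string-matching critic standing in for the second LLM call; `reflection` and its stub critic are invented for this sketch:

```python
# Toy Reflection loop: draft, self-critique, revise, bounded by max_rounds.
def reflection(task: str, max_rounds: int = 3) -> str:
    draft = f"draft of {task}"
    for _ in range(max_rounds):
        # Stub critic: accept only once the draft has been revised.
        critique = "ok" if "revised" in draft else "too vague"
        if critique == "ok":
            return draft
        draft = f"revised {draft}"  # adjust based on critique
    return draft
```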

#### Tree-of-Thoughts

```python
# Signature: Branch → Evaluate → Select
thoughts = [generate_thought() for _ in range(n)]
scores = [evaluate(t) for t in thoughts]
best = select_best(thoughts, scores)
```
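One level of the branch/evaluate/select step, runnable end to end; the integer "thoughts" and distance-to-target scorer are stand-ins for LLM generation and evaluation:

```python
# Toy Tree-of-Thoughts expansion over integers.
def generate_thoughts(state: int, n: int = 3) -> list[int]:
    return [state + k for k in range(1, n + 1)]  # branch

def evaluate(thought: int, target: int = 10) -> float:
    return -abs(target - thought)                # closer to target = better

def best_next(state: int) -> int:
    thoughts = generate_thoughts(state)
    scores = [evaluate(t) for t in thoughts]
    return max(zip(scores, thoughts))[1]         # select highest-scored branch
```

A full ToT search repeats this expansion per level, typically keeping the top-k branches rather than a single best.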

## Step Function Analysis

The "step function" is the atomic unit of agent execution. Extract:

1. **Input Assembly** - How context is constructed for the LLM
2. **LLM Invocation** - The actual model call
3. **Output Parsing** - How raw output becomes structured actions
4. **Action Dispatch** - Tool execution vs. final response routing

### Key Code Patterns

```python
# Common step function structure
def step(self, state):
    # 1. Assemble input
    messages = self._build_messages(state)

    # 2. Call LLM
    response = self.llm.invoke(messages)

    # 3. Parse output
    parsed = self._parse_response(response)

    # 4. Dispatch
    if parsed.is_tool_call:
        return self._execute_tool(parsed.tool, parsed.args)
    else:
        return AgentFinish(parsed.final_answer)
```
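The same four stages as a concrete, runnable sketch; the `Parsed` dataclass, `fake_llm`, and the `TOOL`/`FINAL` wire format are all invented here, not from any real framework:

```python
from dataclasses import dataclass

@dataclass
class Parsed:
    is_tool_call: bool
    tool: str = ""
    args: str = ""
    final_answer: str = ""

def fake_llm(messages: list[str]) -> str:
    # Stub model: call a tool on a question, finish once it has a result.
    return "TOOL add 2 3" if "?" in messages[-1] else "FINAL 5"

def parse_response(text: str) -> Parsed:
    # 3. Parse raw output into a structured action.
    if text.startswith("TOOL"):
        _, tool, *args = text.split()
        return Parsed(True, tool, " ".join(args))
    return Parsed(False, final_answer=text.removeprefix("FINAL "))

def step(history: list[str]):
    response = fake_llm(history)        # 2. Call LLM (1. input = history)
    parsed = parse_response(response)   # 3. Parse output
    if parsed.is_tool_call:             # 4. Dispatch
        a, b = map(int, parsed.args.split())
        history.append(f"tool result: {a + b}")
        return None                     # loop continues
    return parsed.final_answer          # AgentFinish equivalent
```

Returning `None` versus a final answer is the dispatch decision the outer loop keys on.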

## Termination Condition Catalog

### Common Termination Patterns

| Condition | Implementation | Risk |
|---|---|---|
| Step limit | `if step_count >= max_steps` | May cut off valid execution |
| Token limit | `if total_tokens >= max_tokens` | May truncate mid-thought |
| Explicit finish | `if action.type == "finish"` | Relies on LLM cooperation |
| Timeout | `if elapsed > timeout` | Wall-clock unpredictable |
| Loop detection | `if state in seen_states` | Requires state hashing |
| Error threshold | `if error_count >= max_errors` | May exit on recoverable errors |
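In practice several of these conditions are checked together at the top of each iteration. A minimal sketch; the field names, thresholds, and the `should_terminate` helper are assumptions for illustration:

```python
import time

# Illustrative combined guard covering four rows of the table above.
def should_terminate(state: dict):
    if state["steps"] >= state["max_steps"]:
        return "step limit"
    if state["tokens"] >= state["max_tokens"]:
        return "token limit"
    if time.monotonic() - state["start"] > state["timeout"]:
        return "timeout"
    if state["fingerprint"] in state["seen"]:
        return "loop detected"          # revisited state => cycle
    state["seen"].add(state["fingerprint"])
    return None                         # keep going
```

Returning a reason string rather than a bare boolean makes the exit path auditable in logs.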

### Anti-Pattern: No Termination Guard

```python
# DANGEROUS: No exit condition
while True:
    result = agent.step()
    if result.is_done:  # What if LLM never outputs done?
        break
```

**Fix**: Always include a step counter:

```python
for step in range(max_steps):
    result = agent.step()
    if result.is_done:
        break
else:
    logger.warning("Hit max steps limit")
```
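The `for`/`else` guard can be exercised with a stub agent that never signals completion; the `StubAgent` and `run` wrapper here are invented for the demo (`else` on a `for` loop runs only when the loop exhausts without `break`):

```python
import logging

logger = logging.getLogger("agent")

class StubResult:
    is_done = False  # never finishes

class StubAgent:
    def __init__(self):
        self.calls = 0
    def step(self) -> StubResult:
        self.calls += 1
        return StubResult()

def run(agent, max_steps: int = 3) -> bool:
    for _ in range(max_steps):
        result = agent.step()
        if result.is_done:
            return True
    else:
        # Runs only if the loop exhausted without finishing.
        logger.warning("Hit max steps limit")
        return False
```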

## Output Template

```markdown
## Control Loop Analysis: [Framework Name]

### Reasoning Topology
- **Pattern**: [ReAct | Plan-and-Solve | Reflection | Tree-of-Thoughts | Hybrid]
- **Location**: `path/to/agent.py:L45-L120`

### Step Function
- **Input Assembly**: [Description of context building]
- **LLM Call**: [Method and parameters]
- **Parser**: [How output is structured]
- **Dispatch Logic**: [Tool vs Finish decision]

### Termination Conditions
1. [Condition 1 with code reference]
2. [Condition 2 with code reference]
3. ...

### Loop Detection
- **Method**: [Heuristic | State hash | None]
- **Implementation**: [Code reference or N/A]
```

## Integration Points

- **Prerequisite**: `codebase-mapping` to identify agent files
- **Feeds into**: `comparative-matrix` for pattern comparison
- **Feeds into**: `architecture-synthesis` for new loop design

## Score

**Total Score: 60/100** (based on repository quality metrics)

| Criterion | Requirement | Points |
|---|---|---|
| SKILL.md | Contains a SKILL.md file | +20 |
| LICENSE | A license is set | 0/10 |
| Description | Description of 100+ characters | 0/10 |
| Popularity | 100+ GitHub stars | +5 |
| Recent activity | Updated within the last month | +10 |
| Forks | Forked 10+ times | 0/5 |
| Issue management | Fewer than 50 open issues | +5 |
| Language | A programming language is set | +5 |
| Tags | At least one tag is set | +5 |
