
# research-pipeline-runner

by WILLOSCAR
## SKILL.md

name: research-pipeline-runner
description: |
  Run this repo's Units+Checkpoints research pipelines end-to-end (survey/综述/review/调研/教程/系统综述/审稿), with workspaces + checkpoints.
Trigger: run pipeline, kickoff, 继续执行 (resume), 自动跑 (auto-run), 写一篇 (write one), survey/综述/review/调研/教程/系统综述/审稿.
Use when: the user wants the flow run end-to-end (create `workspaces/<name>/`, generate and execute `UNITS.csv`, stop and wait at HUMAN checkpoints).
Skip if: the user explicitly wants to execute units one by one by hand (use unit-executor), or you should not auto-advance to the prose stage.
Network: depends on the selected pipeline (arXiv/PDF/citation verification may need network; offline import is supported where available).
Guardrail: respect checkpoints (no prose without an Approve); stop and wait at HUMAN units; never create workspace artifacts in the repo root.
# Research Pipeline Runner

Goal: let a user trigger a full pipeline with one natural-language request, while keeping the run auditable (Units + artifacts + checkpoints).

This skill is coordination only:

- semantic work is done by the relevant skills' `SKILL.md`
- scripts are deterministic helpers (scaffold/validate/compile), not the author
## Inputs

- User goal (one sentence is enough), e.g. "给我写一个 agent 的 latex-survey" ("write me a latex-survey on agents")
- Optional:
  - explicit pipeline path (e.g., `pipelines/arxiv-survey-latex.pipeline.md`)
  - constraints (time window, language: EN/Chinese, evidence_mode: abstract/fulltext)
## Outputs

- A workspace under `workspaces/<name>/` containing:
  - `STATUS.md`, `GOAL.md`, `PIPELINE.lock.md`, `UNITS.csv`, `CHECKPOINTS.md`, `DECISIONS.md`
  - pipeline-specific artifacts (papers/outline/sections/output/latex)
## Non-negotiables

- Use `UNITS.csv` as the execution contract; one unit at a time.
- Respect checkpoints (`CHECKPOINTS.md`): no long prose until required approvals are recorded in `DECISIONS.md` (survey default: `C2`).
- Stop at HUMAN checkpoints and wait for explicit sign-off.
- Never create workspace artifacts in the repo root; always use `workspaces/<name>/`.
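The C0 initialization implied by these rules can be sketched as follows. This is a minimal illustration, not the repo's implementation: `init_workspace` is a hypothetical helper, and the file contents are placeholders; only the file names come from this document.

```python
from pathlib import Path


def init_workspace(name: str, goal: str, pipeline: str,
                   root: Path = Path("workspaces")) -> Path:
    """Hypothetical C0 scaffold: create workspaces/<name>/ with the contract files."""
    ws = root / name
    ws.mkdir(parents=True, exist_ok=False)  # never scaffold in the repo root
    (ws / "GOAL.md").write_text(f"# Goal\n\n{goal}\n")
    (ws / "PIPELINE.lock.md").write_text(f"# Locked pipeline\n\n{pipeline}\n")
    (ws / "STATUS.md").write_text("# Status\n\nC0: workspace initialized\n")
    (ws / "UNITS.csv").write_text("unit_id,skill,status,outputs\n")  # column names illustrative
    (ws / "CHECKPOINTS.md").write_text("# Checkpoints\n")
    (ws / "DECISIONS.md").write_text("# Decisions\n")
    return ws
```

`exist_ok=False` makes a re-run fail loudly instead of silently mixing two runs in one workspace.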
## Decision tree: pick a pipeline

User goal → choose:

- Survey/综述/调研 + Markdown draft → `pipelines/arxiv-survey.pipeline.md`
- Survey/综述/调研 + PDF output → `pipelines/arxiv-survey-latex.pipeline.md`
- Snapshot/速览 → `pipelines/lit-snapshot.pipeline.md`
- Tutorial/教程 → `pipelines/tutorial.pipeline.md`
- Systematic review/系统综述 → `pipelines/systematic-review.pipeline.md`
- Peer review/审稿 → `pipelines/peer-review.pipeline.md`
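The decision tree above can be sketched as a keyword router. The matching strategy here is an assumption (the skill presumably routes semantically, not by substring); only the pipeline paths and trigger keywords come from this document.

```python
def pick_pipeline(goal: str) -> str:
    """Hypothetical router mirroring the decision tree; substring matching is illustrative."""
    g = goal.lower()
    # Check the more specific phrases first: "systematic review" also contains "review".
    if any(k in g for k in ("systematic review", "系统综述")):
        return "pipelines/systematic-review.pipeline.md"
    if any(k in g for k in ("peer review", "审稿")):
        return "pipelines/peer-review.pipeline.md"
    if any(k in g for k in ("tutorial", "教程")):
        return "pipelines/tutorial.pipeline.md"
    if any(k in g for k in ("snapshot", "速览")):
        return "pipelines/lit-snapshot.pipeline.md"
    if any(k in g for k in ("survey", "综述", "调研")):
        # PDF output implies the LaTeX variant.
        if any(k in g for k in ("pdf", "latex")):
            return "pipelines/arxiv-survey-latex.pipeline.md"
        return "pipelines/arxiv-survey.pipeline.md"
    raise ValueError(f"no pipeline matched: {goal!r}")
```

For example, "write me a latex-survey about agents" resolves to the LaTeX survey pipeline because both the survey keyword and the PDF/LaTeX hint are present.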
## Recommended run loop (skills-first)

- Initialize workspace (C0):
  - create `workspaces/<name>/`
  - write `GOAL.md`, lock the pipeline (`PIPELINE.lock.md`), seed `queries.md`
- Execute units sequentially:
  - follow each unit's `SKILL.md` to produce the declared outputs
  - only mark `DONE` when acceptance criteria are satisfied and outputs exist
- Stop at HUMAN checkpoints:
  - the default survey checkpoint is `C2` (scope + outline)
  - write a concise approval request in `DECISIONS.md` and wait
- Writing-stage self-loop (when drafts look thin/template-y):
  - prefer local fixes over rewriting everything:
    - `writer-context-pack` (C4→C5 bridge) makes packs debuggable
    - `subsection-writer` writes per-file units
    - `writer-selfloop` fixes only failing `sections/*.md`
    - `draft-polisher` removes generator voice without changing citation keys
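The execute-and-stop loop above can be sketched like this. The `UNITS.csv` column names and the `HUMAN` skill tag are assumptions inferred from this document; the real executor follows each unit's `SKILL.md` rather than a callback.

```python
import csv
from pathlib import Path


def next_unit(ws: Path):
    """Return the first unit not yet DONE, or None (assumed CSV columns)."""
    with (ws / "UNITS.csv").open(newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] != "DONE":
                return row
    return None


def run_until_human(ws: Path, execute):
    """Drive units one at a time; stop and return the unit at a HUMAN checkpoint."""
    while (unit := next_unit(ws)) is not None:
        if unit["skill"] == "HUMAN":
            return unit  # wait for explicit sign-off recorded in DECISIONS.md
        # The caller performs the unit and marks it DONE only when acceptance
        # criteria are met and the declared outputs exist.
        execute(unit)
    return None
```

The loop re-reads `UNITS.csv` each iteration, so a unit that fails to reach `DONE` is retried rather than skipped, and a HUMAN unit always halts the run.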
## Strict-mode behavior (by design)

In `--strict` runs, several semantic C3/C4 artifacts are treated as scaffolds until explicitly marked refined.
This is intentional: it prevents bootstrap JSONL from silently passing into C5 writing (a major source of hollow/templated prose).
Create these markers only after you have manually refined/spot-checked the artifacts:

- `outline/subsection_briefs.refined.ok`
- `outline/chapter_briefs.refined.ok`
- `outline/evidence_bindings.refined.ok`
- `outline/evidence_drafts.refined.ok`
- `outline/anchor_sheet.refined.ok`
- `outline/writer_context_packs.refined.ok`

The runner may BLOCK even if the JSONL exists; add the marker after refinement, then rerun/resume the unit.
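A strict-mode gate over these markers might look like the following sketch. The function is hypothetical; only the marker paths come from the list above.

```python
from pathlib import Path

# Marker files that gate C5 writing in --strict runs (paths from this document).
REFINED_MARKERS = [
    "outline/subsection_briefs.refined.ok",
    "outline/chapter_briefs.refined.ok",
    "outline/evidence_bindings.refined.ok",
    "outline/evidence_drafts.refined.ok",
    "outline/anchor_sheet.refined.ok",
    "outline/writer_context_packs.refined.ok",
]


def missing_refinement_markers(ws: Path) -> list[str]:
    """Hypothetical strict-mode gate: markers that still BLOCK C5 writing."""
    return [m for m in REFINED_MARKERS if not (ws / m).exists()]
```

An empty return means every artifact has been spot-checked and the runner may proceed to prose; any non-empty list names exactly which refinements are still owed.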
- Finish:
  - merge → audit → (optional) LaTeX scaffold/compile
## Optional CLI helpers (debug only)

- Kickoff + run (optional; convenient, not required):
  `python scripts/pipeline.py kickoff --topic "<topic>" --pipeline <pipeline-name> --run --strict`
- Resume:
  `python scripts/pipeline.py run --workspace <ws> --strict`
- Approve checkpoint:
  `python scripts/pipeline.py approve --workspace <ws> --checkpoint C2`
- Mark refined unit:
  `python scripts/pipeline.py mark --workspace <ws> --unit-id <U###> --status DONE --note "LLM refined"`
## Handling common blocks

- HUMAN approval required: summarize the produced artifacts, ask for approval, then record it and resume.
- Quality gate blocked (`output/QUALITY_GATE.md` exists): treat current outputs as scaffolding; refine per the unit's `SKILL.md`; mark `DONE`; resume.
- No network: use offline imports (`papers/imports/` or `arxiv-search --input`).
- Weak coverage: broaden queries or reduce/merge subsections (`outline-budgeter`) before writing.
## Quality checklist

- `UNITS.csv` statuses reflect actual outputs (no `DONE` without outputs).
- No prose is written unless `DECISIONS.md` explicitly approves it.
- The run stops at HUMAN checkpoints with clear next questions.
- In strict mode, scaffold/stub outputs do not get marked `DONE` without refinement.
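The first checklist item can be audited mechanically. This is a hypothetical sketch: the `outputs` column and its `;`-separated path format are assumptions, not the repo's documented schema.

```python
import csv
from pathlib import Path


def done_units_missing_outputs(ws: Path) -> list[str]:
    """Audit sketch: DONE units whose declared output files are absent.

    Assumes an 'outputs' column of ';'-separated workspace-relative paths.
    """
    bad = []
    with (ws / "UNITS.csv").open(newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] != "DONE":
                continue
            outputs = [p for p in row.get("outputs", "").split(";") if p]
            if not all((ws / p).exists() for p in outputs):
                bad.append(row["unit_id"])
    return bad
```

Any unit this returns violates the contract and should be demoted from `DONE` before the run continues.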

