
# consult-codex

by centminmod

Shared starter template configuration and CLAUDE.md memory bank system for Claude Code.
## SKILL.md
```yaml
name: consult-codex
description: Compare OpenAI Codex GPT-5.2 and code-searcher responses for comprehensive dual-AI code analysis. Use when you need multiple AI perspectives on code questions.
```
### Dual-AI Consultation: Codex GPT-5.2 vs Code-Searcher

You orchestrate a consultation between OpenAI's Codex GPT-5.2 and Claude's code-searcher agent, then compare their responses to produce a comprehensive analysis.
#### When to Use This Skill
**High-value queries:**
- Complex code analysis requiring multiple perspectives
- Debugging difficult issues
- Architecture/design questions
- Code review requests
- Finding specific implementations across a codebase
**Lower-value queries** (a single AI may suffice):
- Simple syntax questions
- Basic file lookups
- Straightforward documentation queries
#### Workflow
When the user asks a code question:
**1. Build Enhanced Prompt**

Wrap the user's question with structured output requirements:
```
[USER_QUESTION]

=== Analysis Guidelines ===

**Structure your response with:**
1. **Summary:** 2-3 sentence overview
2. **Key Findings:** bullet points of discoveries
3. **Evidence:** file paths with line numbers (format: `file:line` or `file:start-end`)
4. **Confidence:** High/Medium/Low with reasoning
5. **Limitations:** what couldn't be determined

**Line Number Requirements:**
- ALWAYS include specific line numbers when referencing code
- Use format: `path/to/file.ext:42` or `path/to/file.ext:42-58`
- For multiple references: list each with its line number
- Include brief code snippets for key findings

**Examples of good citations:**
- "The authentication check at `src/auth/validate.ts:127-134`"
- "Configuration loaded from `config/settings.json:15`"
- "Error handling in `lib/errors.ts:45, 67-72, 98`"
```
**2. Invoke Both Analyses in Parallel**

Launch both simultaneously in a single message with multiple tool calls:
- **For Codex GPT-5.2:** use the Bash tool directly (NOT the Task tool with the codex-cli agent; the agent intercepts queries).

  macOS:

  ```bash
  zsh -i -c "codex -p readonly exec 'ENHANCED_PROMPT' --json"
  ```

  Linux:

  ```bash
  bash -i -c "codex -p readonly exec 'ENHANCED_PROMPT' --json"
  ```

  Replace `ENHANCED_PROMPT` with the actual prompt (escape single quotes as `'\''`).

- **For Code-Searcher:** use the Task tool with `subagent_type: "code-searcher"` and the same enhanced prompt.
This parallel execution significantly improves response time.
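To make the quoting rule concrete, here is a minimal sketch using a hypothetical question that contains a single quote (the question text is illustrative, not part of the skill):

```bash
# Hypothetical question containing an apostrophe.
# Inside the single-quoted prompt, each literal ' is written as '\''.
zsh -i -c "codex -p readonly exec 'Where is the user'\''s token refreshed?' --json"
```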
**3. Handle Errors**
- If one agent fails or times out, still present the successful agent's response
- Note the failure in the comparison: "Agent X failed to respond: [error message]"
- Provide analysis based on the available response
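If you want the Bash invocation itself to enforce a time limit, one hedged option is GNU coreutils `timeout`; this wrapper and the 120-second bound are assumptions, not part of the skill:

```bash
# Sketch: bound the Codex call so a hung process doesn't stall the comparison.
# timeout(1) is GNU coreutils; 120s is an arbitrary illustrative limit.
timeout 120 bash -i -c "codex -p readonly exec 'ENHANCED_PROMPT' --json"
status=$?
if [ "$status" -ne 0 ]; then
  echo "Codex failed to respond or timed out (exit $status)"
fi
```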
**4. Create Comparison Analysis**

Use this exact format:
```markdown
## Codex (GPT-5.2) Response

[Raw output from the Codex CLI call]

## Code-Searcher (Claude) Response

[Raw output from the code-searcher agent]

## Comparison Table

| Aspect | Codex (GPT-5.2) | Code-Searcher (Claude) |
|---|---|---|
| File paths | [Specific/Generic/None] | [Specific/Generic/None] |
| Line numbers | [Provided/Missing] | [Provided/Missing] |
| Code snippets | [Yes/No + details] | [Yes/No + details] |
| Unique findings | [List any] | [List any] |
| Accuracy | [Note discrepancies] | [Note discrepancies] |
| Strengths | [Summary] | [Summary] |

## Agreement Level

- **High Agreement:** both AIs reached similar conclusions; higher confidence in findings
- **Partial Agreement:** some overlap with unique findings; investigate differences
- **Disagreement:** contradicting findings; manual verification recommended

[State which level applies and explain]

## Key Differences

- **Codex GPT-5.2:** [unique findings, strengths, approach]
- **Code-Searcher:** [unique findings, strengths, approach]

## Synthesized Summary

[Combine the best insights from both sources into a unified analysis. Prioritize findings that are:
- Corroborated by both agents
- Supported by specific file:line citations
- Backed by verifiable code snippets]

## Recommendation

[Which source was more helpful for this specific query and why. Consider:
- Accuracy of file paths and line numbers
- Quality of code snippets provided
- Completeness of analysis
- Unique insights offered]
```
