analysis-codebase by deepeshBodh

SPEC-first multi-agent framework for Claude Code.
---
name: analysis-codebase
description: This skill should be used when the user asks to "analyze codebase", "scan project", "detect tech stack", or mentions "codebase analysis", "existing code", "collision risk", "brownfield", or "project context". Provides systematic extraction of tech stack, conventions, entities, and patterns from existing codebases.
---

# Analyzing Codebase

## Purpose

Systematically analyze existing codebases to extract structural information. Supports three modes:

1. Context Mode: Gather project characteristics to inform constitution authoring
2. Brownfield Mode: Extract entities, endpoints, and collision risks for planning
3. Setup-Brownfield Mode: Comprehensive analysis for `/humaninloop:setup`, producing `codebase-analysis.md`

## Mode Selection

| Mode | When to Use | Output |
| --- | --- | --- |
| Context | Setting up constitution, understanding project DNA | Markdown report for humans |
| Brownfield | Planning new features against existing code | JSON inventory with collision risks |
| Setup-Brownfield | `/humaninloop:setup` on existing codebase | `codebase-analysis.md` with inventory + assessment |

## Project Type Detection

Identify project type from package manager files:

| File | Project Type |
| --- | --- |
| `package.json` | Node.js/JavaScript/TypeScript |
| `pyproject.toml` / `requirements.txt` | Python |
| `go.mod` | Go |
| `Cargo.toml` | Rust |
| `pom.xml` / `build.gradle` | Java |
| `Gemfile` | Ruby |
| `pubspec.yaml` | Flutter/Dart |
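
The mapping above can be sketched as a small shell helper. This is an illustration only; the name `detect_project_type` is hypothetical, and the real detection lives in `scripts/detect-stack.sh`.

```shell
# Map well-known package-manager files to a project type.
# Checks are ordered; the first match wins. Hypothetical helper.
detect_project_type() {
  local dir="${1:-.}"
  if   [ -f "$dir/package.json" ]; then echo "nodejs"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then echo "python"
  elif [ -f "$dir/go.mod" ];       then echo "go"
  elif [ -f "$dir/Cargo.toml" ];   then echo "rust"
  elif [ -f "$dir/pom.xml" ] || [ -f "$dir/build.gradle" ]; then echo "java"
  elif [ -f "$dir/Gemfile" ];      then echo "ruby"
  elif [ -f "$dir/pubspec.yaml" ]; then echo "flutter"
  else echo "unknown"
  fi
}
```

Polyglot repos may match several rows; report every match rather than stopping at the first when ambiguity matters.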

## Framework Detection

### Web Frameworks

| Framework | Indicators |
| --- | --- |
| Express | `express()`, `router.get()`, `app.use()` |
| FastAPI | `@app.get()`, `FastAPI()`, `APIRouter` |
| Django | `urls.py`, `views.py`, `models.py` pattern |
| Flask | `@app.route()`, `@bp.route()` |
| Rails | `routes.rb`, `app/models/`, `app/controllers/` |
| Spring | `@RestController`, `@GetMapping`, `@Entity` |
| Gin/Echo | `r.GET()`, `e.GET()` |
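
Indicator tables like this one translate directly into grep checks. The helpers below are a minimal sketch (the names `has_indicator` and `detect_web_framework` are hypothetical); `scripts/detect-stack.sh` may use different heuristics.

```shell
# Return success if any file under $1 matches the extended-regex pattern $2.
has_indicator() {
  grep -rqE "$2" "$1" 2>/dev/null
}

# Check indicator patterns from the table, first match wins.
detect_web_framework() {
  local dir="${1:-.}"
  if   has_indicator "$dir" 'express\(\)|router\.get\(|app\.use\('; then echo "express"
  elif has_indicator "$dir" '@app\.get\(|FastAPI\(|APIRouter';      then echo "fastapi"
  elif has_indicator "$dir" '@app\.route\(|@bp\.route\(';           then echo "flask"
  elif has_indicator "$dir" '@RestController|@GetMapping';          then echo "spring"
  else echo "unknown"
  fi
}
```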

### ORM/Database Frameworks

| Framework | Indicators |
| --- | --- |
| Prisma | `schema.prisma`, `@prisma/client` |
| TypeORM | `@Entity()`, `@Column()`, `DataSource` |
| SQLAlchemy | `Base`, `db.Model`, `Column()` |
| Django ORM | `models.Model`, `models.CharField` |
| GORM | `gorm.Model`, `db.AutoMigrate` |
| Mongoose | `mongoose.Schema`, `new Schema({` |
| ActiveRecord | `ApplicationRecord`, `has_many` |

## Architecture Pattern Recognition

| Pattern | Indicators |
| --- | --- |
| Layered | `src/models/`, `src/services/`, `src/controllers/` |
| Feature-based | `src/auth/`, `src/users/`, `src/tasks/` |
| Microservices | Multiple package files, docker compose |
| Serverless | `serverless.yml`, `lambda/`, `functions/` |
| MVC | `models/`, `views/`, `controllers/` |
| Clean/Hexagonal | `domain/`, `application/`, `infrastructure/` |
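
Directory-layout checks are cheap and deterministic, so they make a good first pass. A minimal sketch (the helper name is hypothetical, and real projects may mix patterns, so treat the result as a starting hypothesis):

```shell
# Classify architecture from directory layout; first matching rule wins.
detect_architecture() {
  local root="${1:-.}"
  if   [ -d "$root/src/domain" ] && [ -d "$root/src/application" ]; then echo "clean-architecture"
  elif [ -d "$root/src/models" ] && [ -d "$root/src/services" ];    then echo "layered"
  elif [ -d "$root/models" ] && [ -d "$root/views" ] && [ -d "$root/controllers" ]; then echo "mvc"
  elif [ -f "$root/serverless.yml" ];                               then echo "serverless"
  else echo "feature-based-or-unknown"
  fi
}
```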

## Mode: Context Gathering

For constitution authoring - gather broad project characteristics.

What to Extract:

- Tech stack with versions
- Linting/formatting conventions
- CI/CD quality gates
- Team signals (test coverage, required approvals, CODEOWNERS)
- Existing governance docs (CODEOWNERS, ADRs, CONTRIBUTING.md)

Output: Project Context Report (markdown)

See CONTEXT-GATHERING.md for detailed guidance.

## Mode: Brownfield Analysis

For planning - extract structural details for collision detection.

What to Extract:

- Entities with fields and relationships
- Endpoints with handlers
- Collision risks against proposed spec

Output: Codebase Inventory (JSON)

See BROWNFIELD-ANALYSIS.md for detailed guidance.
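
As an illustration only, an inventory covering entities, endpoints, and collision risks might look like the following; the field names here are hypothetical, and the authoritative schema lives in BROWNFIELD-ANALYSIS.md.

```json
{
  "entities": [
    {"name": "User", "fields": ["id", "email"], "relationships": ["has_many:Task"]}
  ],
  "endpoints": [
    {"method": "GET", "path": "/users/:id", "handler": "src/users/controller.ts"}
  ],
  "collision_risks": [
    {"target": "User.email", "severity": "high", "reason": "proposed spec redefines an existing field"}
  ]
}
```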

## Mode: Setup Brownfield

For `/humaninloop:setup` - comprehensive analysis combining Context + Brownfield with Essential Floor assessment.

What to Extract:

- Everything from Context mode (tech stack, conventions, architecture)
- Everything from Brownfield mode (entities, relationships)
- Essential Floor assessment (Security, Testing, Error Handling, Observability)
- Inconsistencies and strengths assessment

Output: `.humaninloop/memory/codebase-analysis.md` following `codebase-analysis-template.md`

## Essential Floor Analysis

Assess each of the four essential floor categories:

### Security Assessment

| Check | How to Detect | Status Values |
| --- | --- | --- |
| Auth at boundaries | Middleware patterns (`authenticate`, `authorize`, `requireAuth`) | present/partial/absent |
| Secrets from env | `.env.example` exists, no hardcoded credentials in code | present/partial/absent |
| Input validation | Schema validation libraries, input checking patterns | present/partial/absent |

Indicators to search:

```bash
# Auth middleware
grep -r "authenticate\|authorize\|requireAuth\|isAuthenticated" src/ 2>/dev/null

# Environment variables
ls .env.example .env.sample 2>/dev/null
grep -r "process.env\|os.environ\|os.Getenv" src/ 2>/dev/null

# Validation
grep -r "zod\|yup\|joi\|pydantic\|validator" package.json pyproject.toml 2>/dev/null
```

### Testing Assessment

| Check | How to Detect | Status Values |
| --- | --- | --- |
| Test framework configured | Config files (`jest.config.*`, `pytest.ini`, `vitest.config.*`) | present/partial/absent |
| Test files present | Files matching `*.test.*`, `*_test.*`, `test_*.*` | present/partial/absent |
| CI runs tests | Test commands in workflow files | present/partial/absent |

Indicators to search:

```bash
# Test config
ls jest.config.* vitest.config.* pytest.ini pyproject.toml 2>/dev/null

# Test files
find . -name "*.test.*" -o -name "*_test.*" -o -name "test_*.*" 2>/dev/null | head -5

# CI test commands
grep -r "npm test\|yarn test\|pytest\|go test" .github/workflows/ 2>/dev/null
```

### Error Handling Assessment

| Check | How to Detect | Status Values |
| --- | --- | --- |
| Explicit error types | Custom error classes/types defined | present/partial/absent |
| Context preservation | Error messages include context, stack traces logged | present/partial/absent |
| Appropriate status codes | API responses use correct HTTP status codes | present/partial/absent |

Indicators to search:

```bash
# Custom errors
grep -r "class.*Error\|extends Error\|Exception" src/ 2>/dev/null | head -5

# Error logging
grep -r "error.*context\|error.*stack\|logger.error" src/ 2>/dev/null | head -3

# Status codes
grep -r "status(4\|status(5\|HttpStatus\|status_code" src/ 2>/dev/null | head -3
```

### Observability Assessment

| Check | How to Detect | Status Values |
| --- | --- | --- |
| Structured logging | Logger config (winston, pino, structlog, logrus) | present/partial/absent |
| Correlation IDs | Request ID middleware, trace ID patterns | present/partial/absent |
| No PII in logs | Log sanitization, no email/password in log statements | present/partial/absent |

Indicators to search:

```bash
# Logger config
grep -r "winston\|pino\|structlog\|logrus\|zap" package.json pyproject.toml go.mod 2>/dev/null

# Correlation IDs
grep -r "requestId\|correlationId\|traceId\|x-request-id" src/ 2>/dev/null | head -3

# PII check (negative - should NOT find these in logs)
grep -r "logger.*email\|logger.*password\|log.*password" src/ 2>/dev/null
```

## Setup-Brownfield Quality Checklist

Before finalizing setup-brownfield analysis:

- Project identity complete (name, language, framework, entry points)
- Directory structure documented with purposes
- Architecture pattern identified with evidence
- Naming conventions documented (files, variables, functions, classes)
- All four Essential Floor categories assessed
- Domain entities extracted with relationships
- External dependencies documented
- Strengths to preserve identified (minimum 2-3)
- Inconsistencies documented with severity
- Recommendations provided for constitution focus

## Detection Script

Run the automated detection script for fast, deterministic stack identification:

```bash
bash scripts/detect-stack.sh /path/to/project
```

Output:

```json
{
  "project_type": "nodejs",
  "package_manager": "npm",
  "frameworks": ["express"],
  "orms": ["prisma"],
  "architecture": ["feature-based"],
  "ci_cd": ["github-actions"],
  "files_found": {...}
}
```

The script detects:

- Project type: nodejs, python, go, rust, java, ruby, flutter, elixir
- Package manager: npm, yarn, pnpm, pip, poetry, cargo, etc.
- Frameworks: express, fastapi, django, nextjs, gin, rails, spring-boot, etc.
- ORMs: prisma, typeorm, sqlalchemy, mongoose, gorm, activerecord, etc.
- Architecture: clean-architecture, mvc, layered, feature-based, serverless, microservices
- CI/CD: github-actions, gitlab-ci, jenkins, circleci, etc.

Usage pattern:

  1. Run script first for deterministic baseline
  2. Use script output to guide deeper LLM analysis
  3. Script findings are ground truth; LLM adds nuance
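
The usage pattern above can be scripted by parsing the detector's JSON with jq. Shown here with a captured sample so it is self-contained; field names assume the output shape shown earlier.

```shell
# In practice, capture live output:
#   out=$(bash scripts/detect-stack.sh /path/to/project)
out='{"project_type":"nodejs","frameworks":["express"],"orms":["prisma"]}'

# Pull individual fields to decide where deeper LLM analysis should focus.
project_type=$(echo "$out" | jq -r '.project_type')
primary_framework=$(echo "$out" | jq -r '.frameworks[0] // "unknown"')

echo "Deep-dive target: $project_type / $primary_framework"
```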

## Manual Detection Commands

For cases where script detection is insufficient:

```bash
# Tech stack detection
cat package.json | jq '{name, engines, dependencies}'
cat pyproject.toml
cat .tool-versions .nvmrc .python-version 2>/dev/null

# Architecture detection
ls -d src/domain src/application src/features 2>/dev/null

# CI/CD detection
ls .github/workflows/*.yml .gitlab-ci.yml 2>/dev/null

# Governance detection
ls CODEOWNERS .github/CODEOWNERS docs/CODEOWNERS 2>/dev/null
cat CODEOWNERS 2>/dev/null | head -20

# Test structure
ls -d test/ tests/ spec/ __tests__/ 2>/dev/null
```

## Quality Checklist

Before finalizing analysis:

Both Modes:

- Project type and framework correctly identified
- Architecture pattern documented
- File paths cited for all findings

Context Mode:

- Existing linting/formatting config extracted
- CI quality gates analyzed
- Existing governance docs checked (CODEOWNERS, ADRs, CONTRIBUTING.md)
- Approvers identified (from CODEOWNERS or team structure)
- Recommendations provided

Brownfield Mode:

- All entity directories scanned
- All route directories scanned
- Collision risks classified by severity

Setup-Brownfield Mode:

- All Context Mode checks completed
- All four Essential Floor categories assessed
- Strengths and inconsistencies documented
- Output written to `.humaninloop/memory/codebase-analysis.md`

## Anti-Patterns

| Anti-Pattern | Problem | Fix |
| --- | --- | --- |
| Assuming framework | Guessing without evidence | Verify with code patterns |
| Missing directories | Only checking standard paths | Projects vary, explore |
| Over-extracting | Analyzing every file | Focus on config and patterns |
| Ignoring governance | Missing existing decisions | Check README, CLAUDE.md, ADRs |
| Inventing findings | Documenting assumptions | Only report what's found |
