Reviewing Your AI Usage with Claude Code /insights | A New Feature That Analyzes Your Past Month
Claude Code has a new command: /insights. It reads your past month of session history and generates an HTML report covering usage patterns, friction points, and actionable improvement suggestions.
If you want an objective look at how you've been using Claude Code, this is a remarkably practical feature. This article walks through how to use /insights, how to read the report, and how to get the most out of it.
/insights Is "AI-Generated Feedback About You"
/insights is a slash command you run inside Claude Code's terminal. It automatically reads your past month of session logs and generates an analysis report.
The report is saved as an HTML file at ~/.claude/usage-data/report.html, which you can open in any browser. All you need to do is type /insights in a session -- no setup required.
```shell
# Just run this inside a Claude Code session
/insights

# The report is saved here
~/.claude/usage-data/report.html
```
Developers who have received the report describe it as "close to getting feedback from a well-informed manager." It doesn't just praise you -- it also delivers pointed observations like "you're abandoning conversations midway too often."
The 4 Sections of the Report
The /insights report is structured into four main sections. Here's what each one tells you.
What's Working -- Things Going Well
This section summarizes the patterns where your Claude Code usage is effective. For example, it might note that you handle multi-file changes well, or that your debugging instructions are precise.
The interesting part is seeing your own strengths articulated -- patterns you may not have consciously recognized.
What's Hindering -- Where You're Getting Stuck
This may be the most valuable section. It separates issues into Claude's side (misunderstanding instructions, taking the wrong approach) and your side (insufficient context, misconfigured environment).
It gives you those "oh, I've been hitting the same wall every time" moments.
Quick Wins -- Improvements You Can Try Right Away
This section suggests Claude Code features you haven't used yet -- MCP Servers, Custom Skills, Hooks -- tailored to your actual workflow. It includes ready-to-use commands and code examples you can copy and paste.
Ambitious Workflows -- Pushing Further
These are more advanced usage suggestions that are achievable with current model capabilities. It presents ideas like parallel agents, test-driven iteration, and autonomous workflows, complete with concrete prompt examples.
Sample Report Excerpt
What's Working: You've developed an effective pattern of breaking complex tasks into focused sessions. Your debugging workflow -- describing the bug, letting Claude investigate, then iterating on fixes -- is particularly efficient.
What's Hindering: Claude's side: Misunderstood your project structure in 3 sessions, leading to edits in wrong files. Your side: Starting sessions without specifying which directory to work in.
Quick Wins: Try adding a Custom Skill for your commit workflow. You've told Claude "always run tests before committing" in 4 separate sessions -- this should be in your CLAUDE.md instead.
Under the Hood: The 7-Step Analysis Pipeline
/insights doesn't simply read logs. The report is generated through a multi-stage analysis pipeline.
1. Session Log Collection -- Reads all session logs under ~/.claude/projects/.
2. Filtering -- Excludes sub-agent sessions, sessions with fewer than 2 messages, and sessions shorter than 1 minute.
3. Metadata Extraction -- Extracts structured data including token counts, tools used, Git operations, files changed, languages, and interruption counts.
4. Long Session Summarization -- Sessions exceeding 30,000 characters are split into 25,000-character chunks, summarized individually, then fed into the analysis.
5. Facet (Qualitative Assessment) Extraction -- The Haiku model analyzes each session's goal category, completion level, satisfaction, and friction types. Up to 50 sessions are evaluated, and results are cached.
6. Integrated Analysis -- A dedicated prompt generates project domains, interaction styles, success patterns, friction analysis, and improvement suggestions.
7. HTML Report Generation -- All analysis results are rendered into an interactive HTML report.
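The filtering and summarization steps can be sketched in Python. The thresholds (2 messages, 1 minute, 30,000 and 25,000 characters) come from the pipeline description above, but the data shapes, such as the `message_count` field, are assumptions for illustration, not Claude Code's actual internals:

```python
# Hypothetical sketch of the filtering and chunking steps. Thresholds follow
# the article; session dict fields are assumed, not Claude Code's real schema.

def keep_session(session: dict) -> bool:
    """Filtering step: drop sub-agent sessions, sessions with fewer than
    2 messages, and sessions shorter than 1 minute."""
    return (
        not session.get("is_subagent", False)
        and session.get("message_count", 0) >= 2
        and session.get("duration_seconds", 0) >= 60
    )

def chunk_transcript(text: str, limit: int = 30_000,
                     chunk_size: int = 25_000) -> list[str]:
    """Summarization step: transcripts over `limit` characters are split
    into `chunk_size` pieces, each summarized before analysis."""
    if len(text) <= limit:
        return [text]
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

A short transcript passes through whole, while a 30,001-character one would be split into a 25,000-character chunk and a 5,001-character remainder.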
Facets from previously analyzed sessions are cached in ~/.claude/usage-data/facets/. On subsequent runs, only new sessions are analyzed, making them faster than the first run.
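The caching behavior amounts to a simple check-before-analyze pattern. In this hedged sketch, the per-session file names and JSON layout are assumptions; only the cache directory and the "skip already-analyzed sessions" behavior come from the article:

```python
# Hypothetical facet cache: results keyed by session ID under the facets
# directory, so reruns only call the model for new sessions.
import json
from pathlib import Path

def get_facets(session_id: str, analyze, cache_dir: Path) -> dict:
    """Return cached facets if present; otherwise analyze and cache."""
    cache_file = cache_dir / f"{session_id}.json"
    if cache_file.exists():
        # Cache hit: reuse the stored result, skipping the model call
        return json.loads(cache_file.read_text())
    facets = analyze(session_id)  # cache miss: run the (Haiku) analysis
    cache_dir.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(json.dumps(facets))
    return facets
```

Calling `get_facets` twice for the same session performs only one analysis; the second call reads the cached JSON.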
11 Metrics Analyzed Per Session
/insights extracts a wide range of information from each session. Here are the key metrics.
| Category | What's Analyzed |
|---|---|
| Goal Category | Feature implementation / Bug fix / Refactoring / Debugging / Test writing / Documentation, and 13 types total |
| Completion Level | Fully achieved / Mostly achieved / Partially achieved / Not achieved (4 levels) |
| User Satisfaction | Determined from explicit signals (e.g., "perfect!", "try again") |
| Friction Types | Misunderstood instructions / Wrong approach / Buggy code / Excessive changes, and 12 types total |
| Claude's Contribution | 5 levels from "unhelpful" to "essential" |
| Session Type | Single task / Multi-task / Iteration / Exploration / Quick question |
| Success Patterns | Accurate search / Precise code edits / Clear explanations / Multi-file changes, etc. |
| Tools Used | Call counts for each tool (top 8) |
| Code Changes | Lines added, lines deleted, files changed |
| Git Operations | Number of commits and pushes |
| Feature Usage | Whether Task Agent / MCP / Web Search / Web Fetch were used |
Practical Tips from Using /insights
Based on feedback from developers who have tried /insights, here are the most effective ways to put it to use.
Improving Your CLAUDE.md
The report detects patterns where you're giving the same instructions repeatedly. Writing those into your CLAUDE.md eliminates the repetition. The report even provides ready-made additions you can paste directly into your CLAUDE.md.
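For instance, the repeated "always run tests before committing" instruction from the sample report above could become a CLAUDE.md entry. This is a hypothetical snippet; adapt the wording and section heading to your own project:

```markdown
## Workflow rules

- Always run the test suite before creating a commit.
- Never commit if tests fail; fix the failures first.
```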
Discovering Features You Haven't Used
For features like Hooks, Custom Skills, and Headless Mode that you may know about but haven't tried, the report suggests them in the context of your actual workflow. The copy-paste-ready commands make it easy to experiment.
Running It Periodically for Comparison
Since cached facets carry over, subsequent runs are faster. Running it every 1-2 weeks and comparing reports gives you a tangible sense of improvement. Note that recent sessions tend to be weighted more heavily.
Having a Conversation with Claude About the Report
After the report is generated, you can ask Claude questions about the findings or push back on them. Asking "why did you suggest this?" prompts Claude to explain using specific session examples, leading to deeper insights.
Important Caveats
This is a useful feature, but it's not infallible. Here are some things to keep in mind.
Watch for Bias in the Analysis
If you spent a concentrated period on a specific task (say, browser automation testing), the report may treat it as your primary use case. Recent sessions seem to carry more weight, so if the report feels skewed, try running it again after some time has passed.
How to Receive the Feedback
Even if the report says "you abandon conversations too often," you may have been intentionally cutting sessions short for exploratory purposes. Don't take everything at face value -- weigh it against your own context. Learning to process AI feedback critically rather than blindly is becoming an increasingly important skill.
The First Run Takes Time
Since it analyzes a full month of sessions at once, the first run can take several minutes if you have many sessions. Subsequent runs skip cached facets, so they complete faster.
Who This Is For
The more you've used Claude Code, the more accurate the report becomes and the more you'll get out of it. It's especially useful for:
- Developers who use Claude Code daily
- People who want to improve their workflow but aren't sure where to start
- Those who haven't fully explored Skills, Hooks, or MCP yet
- Anyone looking to optimize their CLAUDE.md
- Team leads who want to share Claude Code best practices
Conversely, if you've just started using Claude Code and don't have many sessions yet, the analysis may not be very accurate. It's best to use it for 2-3 weeks before giving /insights a try.
Conclusion: A Feature for "Meta-Improving" Your AI Usage
/insights is a feature for improving how you use Claude Code itself. Beyond writing code with AI, being able to objectively observe your own "collaboration patterns with AI" is a fresh experience.
The standout quality is that it gives you clear next steps -- from specific additions to your CLAUDE.md to concrete Custom Skills setup examples. It's not just reflection; it's actionable.
The process of receiving feedback from AI and deciding what to incorporate on your own terms is likely to become an essential skill in AI-assisted work. /insights feels like an ideal way to practice exactly that.