
quality
by Harery
Enterprise 8-Phase Software Development Lifecycle Framework with Quality Gates, Multi-Agent Orchestration, and AI-Assisted Development using Claude Code. Perfect for regulated industries (HIPAA, SOC 2, PCI DSS, SOX, GDPR).
SKILL.md
name: "quality" description: "Shared quality framework across all phases. Quality principles, testing strategy, quality gates, defect management, and continuous improvement. Warm, story-driven Expert Mentor style for 2026." type: "shared" used_by: ["all_phases"]
✓ Quality Framework
What You'll Learn
By the time you finish this guide, you'll understand:
- How to build quality into every phase (not test at the end)
- Which tests actually matter (and which are waste)
- What tools to use in 2026 (with free alternatives)
- How to deploy confidently (any day of the week)
- How to continuously improve (get better over time)
Time Investment: 40 minutes to read, a career to master
Difficulty Level: Medium (we explain everything in plain English)
Emotional Difficulty: Low (we replace anxiety with confidence)
Quick Navigation
New to Quality? Start here:
- Quality Principles - The mindset that prevents bugs
- Quality Gates - Your protection against bad software
Ready to Build? Jump to:
- Quality by Phase - Your action plan for each phase
- Testing Levels - The different types of tests
Need Specifics? Go to:
- Test Coverage - How much testing is enough
- Defect Management - How to handle bugs
- Quality Tools - What to use (with free options)
Quality Principles: The Foundation
Quality Gates (Don't Proceed Until Ready)
The Principle: Each phase has entry and exit criteria. You cannot proceed without meeting quality gates.
Why It Matters:
- Prevents accumulating technical debt
- Catches issues early (when they're cheap to fix)
- Provides go/no-go decision points
- Builds confidence in each phase
Real-World Analogy: Think of quality gates like airport security checkpoints. You can't board the plane (next phase) until you've passed security (quality gate). If you try to bypass, you get sent back (or worse, something bad happens).
Emotional Reality: 😤
"Quality gates slow us down!"
Here's the truth: Rework slows you down more. Quality gates prevent rework. They're not speed bumps—they're guardrails.
Shift Left (Test Early, Test Often)
The Principle: Quality activities start as early as Phase 2 (requirements), not Phase 6 (testing).
Why It Matters:
- Finding a bug in requirements: ~$100 to fix
- Finding a bug in design: ~$1,000 to fix
- Finding a bug in development: ~$10,000 to fix
- Finding a bug in production: ~$100,000+ to fix (plus reputational damage)
The exact dollars vary by team; the point is that fix cost grows roughly tenfold with each phase you wait.
Real-World Analogy:
- Shift Right (traditional): Build a house, then realize you forgot plumbing. Now you have to tear down walls.
- Shift Left: Plan plumbing before you start building. Much cheaper.
The Shift Left Timeline:
Phase 1: Define quality metrics
Phase 2: Write testable requirements
Phase 3: Design for testability
Phase 4: Plan testing approach
Phase 5: Test while coding (TDD)
Phase 6: Comprehensive testing
Phase 7: Pre-deployment validation
Phase 8: Monitor quality in production
Continuous Quality (Not a Final Step)
The Principle: Quality is continuous, not a final step before deployment.
Why It Matters:
- Automated testing in CI/CD pipeline catches issues immediately
- Continuous monitoring in production catches issues quickly
- Quality becomes a habit, not a hurdle
The Continuous Quality Loop:
Code → Test → Deploy → Monitor → Learn → Improve → back to Code (a continuous loop)
Key Practices:
- Automated tests run on every commit
- Code reviews include quality checks
- Staging environment mirrors production
- Production monitoring catches issues early
- Postmortems drive improvements
Test-Driven Development (TDD)
The Principle: Write tests before code (encouraged, not required).
The TDD Cycle:
1. RED: Write a failing test
2. GREEN: Write minimum code to pass
3. REFACTOR: Improve code while keeping tests green
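Here's what one pass through the cycle looks like in practice. A minimal sketch using Jest; the applyDiscount function and its 10% rule are made up for illustration:
// 1. RED: write a failing test first (applyDiscount doesn't exist yet)
test('applies a 10% discount', () => {
  expect(applyDiscount(100, 0.1)).toBe(90);
});
// 2. GREEN: write the minimum code that makes the test pass
function applyDiscount(price, rate) {
  return price - price * rate;
}
// 3. REFACTOR: improve names, extract helpers, remove duplication,
//    while the test stays green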
Why It Matters:
- Forces you to think about requirements before coding
- Guarantees test coverage (every feature has tests)
- Makes refactoring safe (tests catch regressions)
- Provides living documentation (tests show how code works)
Emotional Reality: 😅
"TDD takes too long!"
Here's the data: studies of teams adopting TDD report roughly 15-35% more upfront development time but 40-90% fewer defects. The net result for most teams: TDD is faster overall.
Recommendation: Try TDD for one feature. See if it works for you. If not, that's okay—just write tests alongside code (better than no tests).
🚦 Quality Gates: Don't Proceed Until Ready
Phase Quality Gates
Phase 1 Quality Gates (Vision & Strategy)
- ☐ Quality metrics defined
- ☐ Success criteria defined
- ☐ Technical feasibility confirmed
Phase 2 Quality Gates (Requirements & Scope)
- ☐ Testable requirements (every requirement can be verified)
- ☐ Acceptance criteria defined (explicit pass/fail criteria)
- ☐ Requirements traceability matrix created (map requirements to tests)
Phase 3 Quality Gates (Architecture & Design)
- ☐ Testability designed in (architecture supports testing)
- ☐ Quality attributes defined (performance, reliability, scalability)
- ☐ Performance targets defined (specific, measurable metrics)
Phase 4 Quality Gates (Development Planning)
- ☐ Test strategy defined (what we'll test and how)
- ☐ Test coverage targets set (70-80% for unit tests)
- ☐ Testing tools selected (and team trained)
Phase 5 Quality Gates (Development Execution)
- ☐ Unit tests passing (70-80% coverage)
- ☐ Code reviews completed (peer approved)
- ☐ CI/CD tests passing (automated test suite)
Phase 6 Quality Gates (Quality & Security Validation)
- ☐ Functional tests passing (all features work)
- ☐ Integration tests passing (all integrations work)
- ☐ Performance tests meeting SLAs (performance targets met)
- ☐ Security tests passing (no critical vulnerabilities)
- ☐ UAT signed off (users approve)
Phase 7 Quality Gates (Deployment & Release)
- ☐ Pre-deployment tests passing (smoke tests)
- ☐ Staging validation passing (staging works like production)
- ☐ Post-deployment tests passing (smoke tests in production)
- ☐ Production validation passing (production is healthy)
Phase 8 Quality Gates (Operations & Maintenance)
- ☐ Monitoring active (we can see what's happening)
- ☐ Quality metrics within targets (no degradation)
- ☐ Customer satisfaction acceptable (users are happy)
🔄 Quality by Phase: When to Do What
Phase 1: Vision & Strategy - "What Does Success Look Like?"
Quality Activities:
- Define quality metrics and KPIs (Key Performance Indicators)
- Identify quality requirements (performance, reliability, usability)
- Define success criteria (explicit, measurable goals)
Deliverables:
- Quality metrics and KPIs (what we'll measure)
- Success criteria (what success looks like)
Owner: Product Owner + QA Lead
⏱️ Time Investment: 4-8 hours
Phase 2: Requirements & Scope - "Can We Test This?"
Quality Activities:
- Define testable requirements (every requirement must be verifiable)
- Define acceptance criteria (explicit pass/fail criteria for each requirement)
- Define non-functional requirements (performance, security, reliability)
- Create requirements traceability matrix (map requirements to tests)
Common Mistake to Avoid: 🚫
"The system shall be fast." (Not testable!)
Better: "The system shall load the dashboard in under 2 seconds for 95% of users." (Testable!)
Deliverables:
- Testable requirements (every requirement can be verified)
- Acceptance criteria (explicit pass/fail for each requirement)
- Non-functional requirements (performance, security, etc.)
- Requirements traceability matrix (map requirements to tests)
Owner: QA Lead + Product Owner
⏱️ Time Investment: 1-2 weeks
Phase 3: Architecture & Design - "Can We Test This Architecture?"
Quality Activities:
- Design for testability (architecture supports testing)
- Design for performance (architecture meets performance targets)
- Design for reliability (architecture supports high availability)
- Define quality attributes (specific quality characteristics)
Designing for Testability:
Testable Architecture Characteristics:
- Modularity: Components are loosely coupled, easy to test in isolation
  - Example: Instead of one giant function, break into smaller functions
- Dependency Injection: Dependencies can be replaced with test doubles
  - Example: Instead of hardcoding the database, inject a database interface
- Interface Segregation: Clear interfaces between components
  - Example: Well-defined API contracts, easy to mock
- Observability: System exposes internal state for testing
  - Example: Health check endpoints, metrics endpoints (see the sketch after this list)
- State Management: State can be reset between tests
  - Example: Tests can clean up after themselves
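To make the observability point concrete, here's a minimal health check endpoint. Express is an assumption here, not a framework requirement:
const express = require('express');
const app = express();

// exposes internal state so tests and monitors can verify the system is up
app.get('/health', (req, res) => {
  res.json({ status: 'ok', uptimeSeconds: process.uptime() });
});

app.listen(3000);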
Real-World Example:
Hard to Test:
// Database is hardcoded, hard to replace with a test double
function getUser(userId) {
  const db = new Database('production-db');
  return db.query('SELECT * FROM users WHERE id = ?', [userId]);
}
Easy to Test:
// Database is injected, easy to replace with a test double
function getUser(userId, database) {
  return database.query('SELECT * FROM users WHERE id = ?', [userId]);
}
// In production:
getUser(123, productionDatabase);
// In tests:
getUser(123, mockDatabase);
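And here's how the injected version pays off in a test. A sketch using Jest's built-in mocks; the row data is made up:
test('getUser queries the database by id', async () => {
  const mockDatabase = {
    query: jest.fn().mockResolvedValue([{ id: 123, name: 'Ada' }]),
  };
  await getUser(123, mockDatabase);
  expect(mockDatabase.query).toHaveBeenCalledWith(
    'SELECT * FROM users WHERE id = ?',
    [123]
  );
});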
Deliverables:
- Testability design (architecture supports testing)
- Quality attributes (specific quality characteristics)
- Performance targets (specific, measurable metrics)
Owner: CTA + QA Lead
⏱️ Time Investment: 2-3 weeks
Phase 4: Development Planning - "How Will We Ensure Quality?"
Quality Activities:
- Define quality and test strategy (overall approach to quality)
- Define test coverage targets (how much we'll test)
- Define testing tools and frameworks (what we'll use)
- Define test data management strategy (how we'll manage test data)
- Define defect management process (how we'll handle bugs)
Deliverables:
- Quality and test strategy (overall approach)
- Test coverage targets (70-80% for unit tests)
- Testing tools and framework selection (specific tools)
- Test data management strategy (test data approach)
- Defect management process (bug tracking approach)
Owner: QA Lead
⏱️ Time Investment: 1-2 weeks
Phase 5: Development Execution - "Building Quality In"
Quality Activities:
- Write unit tests (TDD encouraged, not required)
- Conduct code reviews (peer reviews)
- Maintain test coverage (70-80% target)
- Continuous integration testing (automated tests on every commit)
- Quality in development practices (clean code, refactoring)
Code Review Checklist:
Functionality:
- ☐ Code does what it's supposed to do
- ☐ Edge cases are handled (null, empty, error cases)
- ☐ No obvious bugs or logic errors
Testing:
- ☐ Tests are included (unit tests)
- ☐ Tests cover happy path and edge cases
- ☐ Tests are passing (all green)
Code Quality:
- ☐ Code is readable (clear names, good comments)
- ☐ Code is maintainable (not overly complex)
- ☐ Code follows team conventions (style guide)
Security:
- ☐ No hardcoded secrets (API keys, passwords)
- ☐ Input validation (all user input validated; see the sketch after this checklist)
- ☐ Output encoding (prevent XSS, injection)
Performance:
- ☐ No obvious performance issues (N+1 queries, etc.)
- ☐ Efficient algorithms (appropriate for data size)
- ☐ No unnecessary computations
Documentation:
- ☐ Complex logic is explained (comments)
- ☐ Public APIs are documented
- ☐ Changes are documented (commit message, PR description)
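To make the security items concrete, here's a small sketch of a handler that passes them. An Express-style request/response is assumed, and PAYMENT_API_KEY is a hypothetical name:
// secret comes from the environment, never hardcoded in source
const API_KEY = process.env.PAYMENT_API_KEY;

function createUser(req, res) {
  const { email } = req.body;
  // validate all user input before using it
  if (typeof email !== 'string' || !email.includes('@')) {
    return res.status(400).json({ error: 'Invalid email address' });
  }
  // ...create the user with the validated input...
  return res.status(201).json({ email });
}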
Deliverables:
- Unit tests (passing, 70-80% coverage)
- Code review records (peer approved)
- Test coverage reports (actual coverage metrics)
- CI/CD test results (automated test suite)
Owner: Developers + Tech Lead + QA Lead
⏱️ Time Investment: Ongoing (throughout development)
Phase 6: Quality & Security Validation - "Testing Everything"
Quality Activities:
- Functional testing (all features work as specified)
- Integration testing (all integrations work correctly)
- System testing (end-to-end workflows)
- Performance testing (meets performance targets)
- Security testing (no critical vulnerabilities)
- User acceptance testing (users approve)
- Defect management and resolution (bugs tracked and fixed)
The Testing Pyramid:
- E2E Tests (top of the pyramid, ~10%): slow, expensive, fragile; test critical user workflows
- Integration Tests (middle, ~30%): medium speed, cost, and fragility; test component interactions
- Unit Tests (base, ~60%): fast, cheap, reliable; test individual functions and classes
Why This Balance?
- Many Unit Tests: Fast, cheap, catch most bugs
- Fewer Integration Tests: Slower, catch integration bugs
- Fewest E2E Tests: Slowest, most expensive, catch workflow bugs
Common Mistake: 🚫
"Let's just write E2E tests for everything!"
Problem: E2E tests are slow, fragile, and expensive. You'll have a slow test suite that breaks constantly.
Recommendation: Follow the pyramid. 60% unit tests, 30% integration tests, 10% E2E tests.
Deliverables:
- Test plan and test cases (what we'll test and how)
- Test execution results (test results, pass/fail)
- Performance test report (meets SLAs)
- Security test report (no critical vulnerabilities)
- UAT sign-off (users approve)
- Defect register (all bugs tracked)
- Test summary report (overall quality assessment)
Owner: QA Lead
⏱️ Time Investment: 4-8 weeks
Phase 7: Deployment & Release - "Deploying with Confidence"
Quality Activities:
- Pre-deployment testing (smoke tests before deploy)
- Staging validation (test in production-like environment)
- Post-deployment testing (smoke tests after deploy)
- Smoke testing in production (critical paths work)
- Production validation (production is healthy)
Pre-Deployment Checklist:
Before Deployment:
- ☐ All tests passing (unit, integration, E2E)
- ☐ Code reviews approved (peer approved)
- ☐ Security scan clean (no critical vulnerabilities)
- ☐ Performance tested (meets SLAs)
- ☐ Staging validated (staging works correctly)
- ☐ Rollback plan ready (can undo if needed)
- ☐ Monitoring configured (will know if something breaks)
- ☐ On-call engineer assigned (someone available to respond)
During Deployment:
- ☐ Deploy to staging first (validate before production)
- ☐ Run smoke tests in staging (critical paths work)
- ☐ Deploy to production (blue/green or canary recommended)
- ☐ Run smoke tests in production (critical paths work)
- ☐ Monitor metrics (no regressions)
- ☐ Monitor error rates (no spikes)
After Deployment:
- ☐ Monitor for 1 hour (immediate issues)
- ☐ Monitor for 24 hours (delayed issues)
- ☐ Monitor customer feedback (user-reported issues)
- ☐ Post-deployment review (what went well, what didn't)
Red Light: If ANY check fails, don't deploy. Fix it first.
Yellow Light: If some checks are warnings, document risk and get explicit approval.
Green Light: All checks pass. Deploy with confidence.
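Smoke tests don't have to be elaborate. A minimal post-deployment sketch using Playwright Test; the BASE_URL variable and the /health endpoint are assumptions for illustration:
const { test, expect } = require('@playwright/test');

const BASE_URL = process.env.BASE_URL || 'https://staging.example.com';

test('health endpoint responds', async ({ request }) => {
  const response = await request.get(`${BASE_URL}/health`);
  expect(response.ok()).toBeTruthy();
});

test('home page renders', async ({ page }) => {
  await page.goto(BASE_URL);
  await expect(page.locator('body')).toBeVisible();
});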
Deliverables:
- Pre-deployment test results (smoke tests passing)
- Staging validation report (staging works)
- Post-deployment test results (smoke tests passing)
- Production validation report (production healthy)
Owner: QA Lead + SRE
⏱️ Time Investment: 1-2 weeks
Phase 8: Operations & Maintenance - "Quality in Production"
Quality Activities:
- Monitor quality metrics (track quality over time)
- Track defect trends (identify recurring issues)
- Conduct root cause analysis (learn from incidents)
- Continuous improvement (get better over time)
- Customer satisfaction monitoring (ensure users are happy)
Quality Metrics to Track:
| Metric | Target | Purpose |
|---|---|---|
| Test Coverage | 70-80% | Code quality |
| Defect Escape Rate | <5% | Testing effectiveness |
| Defect Reopen Rate | <5% | Fix quality |
| Code Review Pass Rate | >80% | Code quality |
| Automated Test Pass Rate | >95% | Test stability |
| Customer Satisfaction | >4.5/5 | User perception |
| Mean Time to Recovery (MTTR) | <1 hour | Incident response |
| Deployment Success Rate | >95% | Release quality |
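A couple of these metrics are simple ratios you can compute from your tracker's data. A sketch using the common definitions; adjust to however your team counts defects and deploys:
// defects that escaped to production / all defects found, as a percentage
function defectEscapeRate(foundInProduction, foundTotal) {
  return (foundInProduction / foundTotal) * 100; // target: < 5%
}

// deployments that completed without rollback / all deployments
function deploymentSuccessRate(successful, total) {
  return (successful / total) * 100; // target: > 95%
}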
Deliverables:
- Quality metrics dashboard (visible to team)
- Defect trend analysis (identifying patterns)
- Root cause analysis reports (learning from incidents)
- Improvement backlog (actionable improvements)
- Customer satisfaction reports (user feedback)
Owner: SRE + Support Lead
⏱️ Time Investment: Ongoing (5-10% of operational time)
Testing Levels: What to Test When
Unit Testing
Who: Developers
What: Individual functions, classes, modules
When: During development (TDD or alongside code)
Tools: Jest (JavaScript), pytest (Python), JUnit (Java), etc.
Coverage: 70-80%
Speed: Fast (milliseconds per test)
Cost: Low (automated, runs frequently)
Example:
test('calculates total price correctly', () => {
  const result = calculateTotal(100, 0.1); // $100, 10% tax
  expect(result).toBe(110); // should be $110
});
Integration Testing
Who: QA Engineers
What: Component interactions, API integrations
When: After unit testing
Tools: Postman, REST Assured, Supertest
Coverage: 100% of integrations
Speed: Medium (seconds per test)
Cost: Medium (requires test environment)
Example:
test('API creates user and returns user data', async () => {
  const response = await api.post('/users', {
    name: 'John Doe',
    email: 'john@example.com'
  });
  expect(response.status).toBe(201);
  expect(response.data.id).toBeDefined();
});
System Testing
Who: QA Engineers
What: End-to-end workflows
When: After integration testing
Tools: Playwright, Cypress, Selenium
Coverage: 100% of user workflows
Speed: Slow (minutes per test)
Cost: High (requires full test environment)
Example:
test('user can complete purchase workflow', async () => {
  await page.goto('/products/123');
  await page.click('button:has-text("Add to Cart")');
  await page.click('button:has-text("Checkout")');
  await page.fill('#email', 'test@example.com');
  await page.click('button:has-text("Place Order")');
  await expect(page.locator('.success-message')).toBeVisible();
});
Performance Testing
Who: Performance Engineer
What: Load, stress, endurance testing
When: After system testing
Tools: k6, JMeter, Gatling
Coverage: All SLAs
Speed: Variable (depends on test duration)
Cost: High (requires performance testing environment)
Example:
import { check } from 'k6';
import http from 'k6/http';

export default function () {
  const response = http.get('https://api.example.com/products');
  check(response, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  });
}
Security Testing
Who: Security Lead, Security Architect
What: Vulnerability scanning, penetration testing
When: After system testing
Tools: OWASP ZAP, Burp Suite, Snyk
Coverage: OWASP Top 10
Speed: Variable
Cost: High (especially for penetration testing)
User Acceptance Testing (UAT)
Who: Users, Business Stakeholders
What: Business workflows and requirements
When: After all testing complete
Tools: Manual testing, production-like environment
Coverage: All acceptance criteria
Speed: Variable (depends on user availability)
Cost: Medium (user time, coordination)
UAT Sign-Off Criteria:
- ☐ All acceptance criteria met
- ☐ No critical bugs outstanding
- ☐ Users can complete their workflows
- ☐ Users approve the software
Test Coverage Targets
| Test Type | Coverage Target | Why This Target? |
|---|---|---|
| Unit Tests | 70-80% | Balance between coverage and effort (100% is diminishing returns) |
| Integration Tests | 100% of integrations | All integrations must work (integration bugs are expensive) |
| System Tests | 100% of user workflows | All critical paths must work (workflow bugs break user experience) |
| API Tests | 100% of API endpoints | All APIs must work (API contracts must be honored) |
| Security Tests | OWASP Top 10 | Cover most common security risks |
| Performance Tests | All SLAs | All performance requirements must be met |
Is 100% Coverage Worth It?
Short Answer: No, not for unit tests.
Why 100% Unit Test Coverage Isn't Worth It:
- Diminishing Returns: Last 20% costs 80% of effort
- False Confidence: 100% coverage doesn't mean 100% bug-free
- Maintenance Burden: More tests = more maintenance
- Opportunity Cost: Time spent on 100% coverage could be spent on integration/E2E tests
When to Aim for 100% Coverage:
- Critical systems: Medical devices, aviation, financial transactions
- Regulated industries: Some regulations require 100% coverage
- High-risk code: Security-related, encryption, authentication
When 70-80% Is Enough:
- Most business applications: CRUD apps, dashboards, reports
- UI code: Hard to test, low business risk
- Configuration: Simple data structures, no logic
Recommendation: Aim for 70-80% unit test coverage, 100% integration test coverage for critical integrations.
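If you use Jest, you can enforce these targets automatically. A minimal sketch; coverageThreshold fails the test run whenever coverage drops below the numbers you set:
// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 70,
      branches: 70,
      functions: 70,
      lines: 70,
    },
  },
};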
🐛 Defect Management: When Bugs Happen
Defect Severity Levels
| Severity | Description | Example | Resolution Target |
|---|---|---|---|
| P1 - Critical | System down, data loss | Login failure, data corruption | 4 hours |
| P2 - High | Major feature broken | Cannot save, checkout fails | 24 hours |
| P3 - Medium | Workaround available | UI issue, minor error | 1 week |
| P4 - Low | Cosmetic, enhancement | Spelling, formatting | Next release |
Defect Lifecycle
The Journey of a Bug:
New → Assigned → In Progress → Fixed → Verified → Closed
(Reopened if a fix doesn't work: back to In Progress)
States:
- New: Defect logged (triage needed)
- Assigned: Assigned to owner (developer)
- In Progress: Being fixed (developer working on it)
- Fixed: Fix implemented (ready for verification)
- Verified: Fix verified by QA (tests confirm fix)
- Closed: Defect closed (resolved)
- Reopened: Fix didn't work (back to In Progress)
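If you ever script around your bug tracker, the lifecycle is just a small state machine. An illustrative sketch of the states and transitions above; adapt it to your tracker's workflow:
const transitions = {
  New: ['Assigned'],
  Assigned: ['In Progress'],
  'In Progress': ['Fixed'],
  Fixed: ['Verified', 'Reopened'],
  Verified: ['Closed', 'Reopened'],
  Reopened: ['In Progress'],
  Closed: [],
};

function canTransition(from, to) {
  return (transitions[from] || []).includes(to);
}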
Root Cause Analysis
For P1 and P2 defects, conduct root cause analysis:
5 Whys Technique:
Problem: Checkout failed for customers using Firefox
Why 1: Why did checkout fail? → JavaScript error in checkout form
Why 2: Why was there a JavaScript error? → Code used Firefox-incompatible API
Why 3: Why did code use Firefox-incompatible API? → Developer only tested in Chrome
Why 4: Why did developer only test in Chrome? → Team doesn't have Firefox testing environment
Why 5: Why doesn't team have Firefox testing environment? → No requirement to test in multiple browsers
Root Cause: No cross-browser testing requirement
Solution: Add cross-browser testing to requirements, set up Firefox testing environment
Result: Similar bugs prevented in future
Fishbone Diagram:
Another root cause analysis technique that diagrams potential causes across categories:
- People: Training, staffing, communication
- Process: Procedures, policies, workflows
- Technology: Tools, systems, architecture
- Environment: Physical environment, conditions
Quality Tools: 2026 Edition
Tool Recommendations by Category
| Category | Recommended Tools | Free Alternatives | When to Upgrade |
|---|---|---|---|
| Test Management | Jira, TestRail | Jira (free tier) | When you need advanced test reporting |
| Unit Testing | Jest, pytest, JUnit | Jest, pytest, JUnit (all free) | Already free, no upgrade needed |
| Integration Testing | Postman, REST Assured | Postman (free tier) | When you need advanced features |
| E2E/UI Testing | Playwright, Cypress | Playwright, Cypress (all free) | Already free, no upgrade needed |
| Performance Testing | k6, JMeter | k6, JMeter (all free) | Already free, no upgrade needed |
| API Testing | Postman, REST Assured | Postman (free tier) | When you need advanced features |
| Test Automation | Playwright, Cypress | Playwright, Cypress (all free) | Already free, no upgrade needed |
| Code Coverage | JaCoCo, Coverage.py, Istanbul | JaCoCo, Coverage.py, Istanbul (all free) | Already free, no upgrade needed |
| Code Quality | SonarQube, ESLint, PyLint | SonarQube Community, ESLint, PyLint (free) | When you need enterprise features |
2025-2026 Testing Framework Highlights
E2E Testing:
- Playwright - Fastest, best for modern web apps, excellent cross-browser support
- Cypress - Great DX, component + E2E testing, good for React apps
- Selenium - Legacy support, most browser support but slower
Unit Testing:
- Jest - Most popular for JavaScript/TypeScript
- pytest - Python standard, excellent features
- JUnit - Java standard, mature ecosystem
Performance Testing:
- k6 - Modern, scriptable with JavaScript, developer-friendly
- JMeter - Traditional choice, comprehensive
- Gatling - High-performance scenarios, Scala-based
Expected Outcomes
By following this quality framework, you will:
- Catch bugs early (when they're cheap to fix)
- Deploy with confidence (any day of the week)
- Reduce rework (get it right the first time)
- Improve customer satisfaction (fewer bugs, better experience)
- Build maintainable software (tests enable safe refactoring)
- Sleep better at night (knowing your software is tested)
Quality is not a destination. It's a journey of continuous improvement. This framework gives you the map—you just need to walk the path.
💬 Final Thoughts
Quality is an investment, not an expense.
Every hour you spend on quality:
- Saves 10 hours of rework later
- Prevents customer complaints (support cost)
- Protects your reputation (hard to quantify, but invaluable)
- Enables faster development (tests make refactoring safe)
You don't need to be perfect. You just need to be thoughtful, consistent, and continuously improving.
Start somewhere. Start with unit tests. Add integration tests. Set up CI/CD. Implement code reviews. Each practice adds quality.
Remember: Quality is your safety net. It catches mistakes before your customers do.
Resources and Further Learning
Free Resources
Learning:
- Testing Best Practices: martinfowler.com/bliki/TestPyramid.html - Test pyramid explained
- TDD Guide: testdriven.io - TDD tutorials
- Performance Testing: k6.io/docs - k6 documentation and guides
Tools:
- Jest: jestjs.io - JavaScript testing framework
- Playwright: playwright.dev - E2E testing framework
- k6: k6.io - Performance testing tool
Communities:
- r/qualityassurance on Reddit: QA community
- Software Testing Help: softwaretestinghelp.com - QA articles and tutorials
Templates and Checklists
See ./templates/ for:
- Test Plan Template - Document your testing approach
- Test Case Template - Document individual tests
- Defect Report Template - Document bugs consistently
- Quality Metrics Template - Track quality over time
This shared skill is referenced by all phase skills.
Transformed by: OCTALUME EXPERT MENTOR
Transformation: Complete rewrite to Expert Mentor style (warm, story-driven, emotionally intelligent, progressive disclosure, plain language, 2026 trends)
Version 1.0.0 | OCTALUME Enterprise Lifecycle Framework
