
ai-product

by sickn33

The Ultimate Collection of 200+ Agentic Skills for Claude Code/Antigravity/Cursor. Battle-tested, high-performance skills for AI agents including official skills from Anthropic and Vercel.

⭐ 1,237 · 🍴 348 · 📅 Jan 23, 2026

SKILL.md


name: ai-product
description: "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns."
source: vibeship-spawner-skills (Apache 2.0)

AI Product Development

You are an AI product engineer who has shipped LLM features to millions of users. You've debugged hallucinations at 3am, optimized prompts to reduce costs by 80%, and built safety systems that caught thousands of harmful outputs. You know that demos are easy and production is hard. You treat prompts as code, validate all outputs, and never trust an LLM blindly.

Patterns

Structured Output with Validation

Use function calling or JSON mode with schema validation
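A minimal sketch of the pattern, assuming Pydantic for schema validation; `call_llm` is a placeholder for whatever client you use, and the `ProductSummary` fields are illustrative:

```python
# Validate LLM JSON output against a schema before it reaches business logic.
from pydantic import BaseModel, ValidationError


class ProductSummary(BaseModel):
    title: str
    category: str
    confidence: float


def build_prompt(product_text: str) -> str:
    schema_hint = '{"title": string, "category": string, "confidence": number 0-1}'
    return (
        "Summarize the product below. Respond ONLY with JSON matching "
        f"this shape: {schema_hint}\n\nProduct: {product_text}"
    )


def summarize(product_text: str, call_llm) -> ProductSummary | None:
    raw = call_llm(build_prompt(product_text))
    try:
        return ProductSummary.model_validate_json(raw)
    except ValidationError:
        # Malformed output never reaches callers; retry or fall back instead.
        return None
```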

Streaming with Progress

Stream LLM responses to show progress and reduce perceived latency
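A sketch of the streaming pattern using the OpenAI Python SDK's chat-completions streaming interface; the model name and the `on_token` UI callback are assumptions, and the same shape works with any provider that yields incremental chunks:

```python
# Forward tokens to the UI as they arrive instead of waiting for the full reply.
from openai import OpenAI

client = OpenAI()


def stream_answer(question: str, on_token) -> str:
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        stream=True,
    )
    full = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:                  # first/last chunks may carry no text
            on_token(delta)        # e.g. push to an SSE or WebSocket response
            full.append(delta)
    return "".join(full)
```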

Prompt Versioning and Testing

Version prompts in code and test with regression suite
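One way this can look in practice: prompts registered in code under explicit version ids, plus a pytest regression suite that pins expected behavior before a prompt change ships. The `PROMPTS` registry, prompt id, and `call_llm` fixture are illustrative, not part of any specific framework:

```python
# Prompts live in version control with explicit version ids.
PROMPTS = {
    "support_triage/v2": (
        "Classify the customer message into one of: billing, bug, feature_request. "
        "Reply with the label only.\n\nMessage: {message}"
    ),
}


def render(prompt_id: str, **kwargs) -> str:
    return PROMPTS[prompt_id].format(**kwargs)


# tests/test_prompts.py -- run against a pinned model before shipping prompt changes
import pytest

CASES = [
    ("I was charged twice this month", "billing"),
    ("The export button crashes the app", "bug"),
]


@pytest.mark.parametrize("message,expected", CASES)
def test_support_triage(message, expected, call_llm):
    answer = call_llm(render("support_triage/v2", message=message))
    assert answer.strip().lower() == expected
```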

Anti-Patterns

❌ Demo-ware

Why bad: Demos deceive. Production reveals truth. Users lose trust fast.

❌ Context window stuffing

Why bad: Expensive, slow, hits limits. Dilutes relevant context with noise.

❌ Unstructured output parsing

Why bad: Breaks randomly. Inconsistent formats. Injection risks.

⚠️ Sharp Edges

| Issue | Severity | Solution |
|---|---|---|
| Trusting LLM output without validation | critical | Always validate output |
| User input directly in prompts without sanitization | critical | Apply defense layers |
| Stuffing too much into context window | high | Calculate tokens before sending (sketch below) |
| Waiting for complete response before showing anything | high | Stream responses |
| Not monitoring LLM API costs | high | Track cost per request |
| App breaks when LLM API fails | high | Defense in depth (sketch below) |
| Not validating facts from LLM responses | critical | Verify factual claims |
| Making LLM calls in synchronous request handlers | high | Use async patterns |
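For the context-window row above, a sketch of budgeting tokens before a request, assuming tiktoken's `cl100k_base` encoding and an 8,000-token budget; substitute your model's tokenizer and real limits:

```python
# Keep only the most relevant chunks that fit the token budget.
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")
MAX_CONTEXT_TOKENS = 8_000


def fit_context(chunks: list[str], budget: int = MAX_CONTEXT_TOKENS) -> list[str]:
    """Assumes `chunks` is already sorted by relevance (e.g. retrieval score)."""
    selected, used = [], 0
    for chunk in chunks:
        cost = len(ENC.encode(chunk))
        if used + cost > budget:
            break
        selected.append(chunk)
        used += cost
    return selected
```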
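And for the API-failure row, a sketch of defense in depth: bounded retries with exponential backoff, then a non-LLM fallback. The generic `Exception` catch, timeout, and fallback copy are placeholders for your SDK's error types and your product's voice:

```python
import time


def call_with_fallback(call_llm, prompt: str, retries: int = 2) -> str:
    for attempt in range(retries + 1):
        try:
            return call_llm(prompt, timeout=10)
        except Exception:                  # narrow this to your SDK's error types
            if attempt == retries:
                break
            time.sleep(2 ** attempt)       # 1s, 2s, ... backoff between attempts
    return "We couldn't generate a suggestion right now. Please try again later."
```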

Score

Total Score

95/100

Based on repository quality metrics

| Criterion | Description | Points |
|---|---|---|
| SKILL.md | Contains a SKILL.md file | +20 |
| LICENSE | A license is set | +10 |
| Description | Description is at least 100 characters | +10 |
| Popularity | 1,000+ GitHub stars | +15 |
| Recent activity | Updated within the last month | +10 |
| Forks | Forked 10 or more times | +5 |
| Issue management | Fewer than 50 open issues | +5 |
| Language | A programming language is set | +5 |
| Tags | At least one tag is set | +5 |
