
observability
by SylphxAI
AI development platform with MEP architecture - stop writing prompts, start building with 90% less typing
SKILL.md
name: observability
description: Observability - logging, metrics, tracing. Use when adding monitoring.
Observability Guideline
Tech Stack
- Error Tracking: Sentry
- Analytics: PostHog
- Platform: Vercel
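A minimal sketch of wiring this stack together at startup. The package names are the public SDKs (`@sentry/node`, `posthog-node`); the environment variable names and sample rate are assumptions, not prescribed by this guideline.

```typescript
// Hypothetical bootstrap; env var names (SENTRY_DSN, POSTHOG_API_KEY)
// and the 10% trace sample rate are illustrative assumptions.
import * as Sentry from "@sentry/node";
import { PostHog } from "posthog-node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0.1, // sample a fraction of traces; tune per traffic
});

const posthog = new PostHog(process.env.POSTHOG_API_KEY!, {
  host: "https://us.i.posthog.com",
});
```

Initialize both as early as possible in the process so errors thrown during startup are captured too.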
Non-Negotiables
- Correlation IDs must exist end-to-end (request → job → webhook)
- Alerts must exist for critical failures (webhook failures, auth attacks, drift)
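One way to satisfy the first non-negotiable: carry the correlation ID through the async call chain with Node's built-in `AsyncLocalStorage`, so every log line in a request, job, or webhook handler shares one ID. This is a sketch using only the standard library; the header name and function names are illustrative.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

// Holds the correlation ID for the current async call chain.
const correlationStore = new AsyncLocalStorage<string>();

// Reuse an incoming ID (e.g. from an `x-correlation-id` header) or mint
// one, so the same ID follows the request into jobs and webhooks.
function withCorrelationId<T>(incomingId: string | undefined, fn: () => T): T {
  const id = incomingId ?? randomUUID();
  return correlationStore.run(id, fn);
}

// Every log line carries the ID, making a request traceable end-to-end.
function log(message: string, fields: Record<string, unknown> = {}): string {
  return JSON.stringify({
    correlationId: correlationStore.getStore() ?? "unknown",
    message,
    ...fields,
  });
}

// Example: a request handler that enqueues a job; both share one ID.
const line = withCorrelationId("req-123", () => {
  console.log(log("request received"));
  return log("job enqueued", { job: "sync-user" });
});
```

The same ID can then be attached as a Sentry tag or PostHog property so error reports and analytics events join up with the logs.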
Context
Observability is about answering questions when things go wrong. It's 3am, something is broken, users are complaining: can you figure out what happened? How fast?
Good observability makes debugging easy. Bad observability means you're guessing, adding log lines, redeploying, and hoping. Consider: what questions would you need to answer during an incident, and can you answer them today?
Driving Questions
- If something breaks in production right now, how would we find out?
- What blind spots exist where errors go unnoticed?
- How long would it take to trace a user's request through the entire system?
- What alerts exist, and do they fire for the right things?
- Where do we have noise that's training people to ignore alerts?
- What production issue in the last month was hard to debug, and why?
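The noise question above has a mechanical answer: only page on sustained breaches, not one-off blips. A minimal sketch, assuming per-minute error/request counts; the 5% threshold and three-window requirement are illustrative, not prescribed.

```typescript
// Fire only when the error rate stays above a threshold for N
// consecutive windows, so a single bad minute doesn't page anyone.
interface Window {
  errors: number;
  total: number;
}

function shouldAlert(
  windows: Window[], // most recent window last
  threshold = 0.05, // 5% error rate (assumption)
  consecutive = 3, // sustained breach required (assumption)
): boolean {
  if (windows.length < consecutive) return false;
  return windows
    .slice(-consecutive)
    .every((w) => w.total > 0 && w.errors / w.total > threshold);
}

// One bad minute among good ones: no page.
const blip = shouldAlert([
  { errors: 1, total: 100 },
  { errors: 9, total: 100 },
  { errors: 1, total: 100 },
]); // → false

// Three bad minutes in a row: page.
const sustained = shouldAlert([
  { errors: 8, total: 100 },
  { errors: 9, total: 100 },
  { errors: 7, total: 100 },
]); // → true
```

The same shape applies to the critical failures listed above (webhook failures, auth attacks, drift): alert on sustained or absolute conditions, and route blips to a dashboard instead.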

