---
name: security-patterns
description: Security Pattern Detector - Real-time detection of dangerous coding patterns during file edits. Based on Anthropic's official security-guidance hook. Activates proactively when writing code. Warns about command injection, XSS, unsafe deserialization, dynamic code execution.
allowed-tools: Read, Grep, Glob
---

# Security Pattern Detector Skill

## Overview

This skill provides real-time security pattern detection based on Anthropic's official security-guidance plugin. It identifies potentially dangerous coding patterns BEFORE they're committed.

## Detection Categories

### 1. Command Injection Risks

#### GitHub Actions Workflow Injection

```yaml
# DANGEROUS - User input directly in run command
run: echo "${{ github.event.issue.title }}"

# SAFE - Use environment variable
env:
  TITLE: ${{ github.event.issue.title }}
run: echo "$TITLE"
```

#### Node.js Child Process Execution

```javascript
// DANGEROUS - Shell command with user input
exec(`ls ${userInput}`);
spawn('sh', ['-c', userInput]);

// SAFE - Array arguments, no shell
execFile('ls', [sanitizedPath]);
spawn('ls', [sanitizedPath], { shell: false });
```

#### Python OS Commands

```python
# DANGEROUS
os.system(f"grep {user_input} file.txt")
subprocess.call(user_input, shell=True)

# SAFE
subprocess.run(['grep', sanitized_input, 'file.txt'], shell=False)
```

### 2. Dynamic Code Execution

#### JavaScript eval-like Patterns

```javascript
// DANGEROUS - All of these execute arbitrary code
eval(userInput);
new Function(userInput)();
setTimeout(userInput, 1000);  // When a string is passed
setInterval(userInput, 1000); // When a string is passed

// SAFE - Use parsed data, not code
const config = JSON.parse(configString);
```
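When dynamic behavior (not just data) is genuinely needed, a table of pre-registered handlers gives similar flexibility without ever executing user-supplied strings. A minimal sketch; the `dispatch` helper and action names below are hypothetical, not part of the skill:

```typescript
// Hypothetical dispatch table: user input selects a handler by name,
// but only pre-registered functions can ever run.
const actions = new Map<string, (payload: string) => void>([
  ["greet", (payload) => console.log(`Hello, ${payload}`)],
  ["echo", (payload) => console.log(payload)],
]);

function dispatch(name: string, payload: string): void {
  const handler = actions.get(name); // only pre-registered handlers
  if (!handler) throw new Error(`Unknown action: ${name}`);
  handler(payload);
}

dispatch("greet", "world"); // Hello, world
```

Using a `Map` rather than a plain object also sidesteps prototype-chain lookups such as `actions["constructor"]`.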

### 3. DOM-based XSS Risks

#### React dangerouslySetInnerHTML

```jsx
// DANGEROUS - Renders arbitrary HTML
<div dangerouslySetInnerHTML={{ __html: userContent }} />

// SAFE - Use proper sanitization
import DOMPurify from 'dompurify';
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(userContent) }} />
```

#### Direct DOM Manipulation

```javascript
// DANGEROUS
element.innerHTML = userInput;
document.write(userInput);

// SAFE
element.textContent = userInput;
element.innerText = userInput;
```

### 4. Unsafe Deserialization

#### Python Pickle

```python
# DANGEROUS - Pickle can execute arbitrary code
import pickle
data = pickle.loads(user_provided_bytes)

# SAFE - Use JSON for untrusted data
import json
data = json.loads(user_provided_string)
```

#### JavaScript Unsafe Deserialization

```javascript
// DANGEROUS with untrusted input
const obj = eval('(' + jsonString + ')');

// SAFE
const obj = JSON.parse(jsonString);
```

### 5. SQL Injection

#### String Interpolation in Queries

```javascript
// DANGEROUS
const query = `SELECT * FROM users WHERE id = ${userId}`;
db.query(`SELECT * FROM users WHERE name = '${userName}'`);

// SAFE - Parameterized queries
const query = 'SELECT * FROM users WHERE id = $1';
db.query(query, [userId]);
```
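One caveat: placeholders only bind values. Identifiers such as table or column names cannot be parameterized, so when those come from user input, an allowlist check is the usual fallback. A hedged sketch with hypothetical column names; note this is exactly the kind of context-dependent case covered under False Positive Handling below, since the interpolated value is constrained to known literals:

```typescript
// Hypothetical allowlist for a user-selected sort column. Identifiers
// can't be bound as query parameters, so only known-safe names pass.
const SORTABLE_COLUMNS = new Set(["name", "created_at", "email"]);

function buildOrderedQuery(sortColumn: string): string {
  if (!SORTABLE_COLUMNS.has(sortColumn)) {
    throw new Error(`Unsupported sort column: ${sortColumn}`);
  }
  // Safe despite interpolation: sortColumn is one of the literals above.
  return `SELECT * FROM users ORDER BY ${sortColumn}`;
}
```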

### 6. Path Traversal

#### Unsanitized File Paths

```javascript
// DANGEROUS
const filePath = `./uploads/${userFilename}`;
fs.readFile(filePath); // User could pass "../../../etc/passwd"

// SAFE - basename strips directory components:
// path.basename('../../../etc/passwd') === 'passwd'
const uploadsDir = path.resolve('./uploads');
const safePath = path.resolve(uploadsDir, path.basename(userFilename));
if (!safePath.startsWith(uploadsDir + path.sep)) throw new Error('Invalid path');
```

## Pattern Detection Rules

| Pattern | Category | Severity | Action |
|---------|----------|----------|--------|
| `eval(` | Code Execution | CRITICAL | Block |
| `new Function(` | Code Execution | CRITICAL | Block |
| `dangerouslySetInnerHTML` | XSS | HIGH | Warn |
| `innerHTML =` | XSS | HIGH | Warn |
| `document.write(` | XSS | HIGH | Warn |
| `exec(` + string concat | Command Injection | CRITICAL | Block |
| `spawn(` + `shell: true` | Command Injection | HIGH | Warn |
| `pickle.loads(` | Deserialization | CRITICAL | Warn |
| `${{ github.event` | GH Actions Injection | CRITICAL | Warn |
| Template literal in SQL | SQL Injection | CRITICAL | Block |
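
The skill's actual matching logic isn't specified here, but the table above could be encoded as simple regex rules. A minimal illustrative sketch; the `Rule` objects and `scan` function are assumptions, not the real implementation:

```typescript
// Illustrative encoding of a few rules from the table above: each entry
// pairs a regex with its category, severity, and action.
type Action = "Block" | "Warn";

interface Rule {
  pattern: RegExp;
  category: string;
  severity: "CRITICAL" | "HIGH";
  action: Action;
}

const RULES: Rule[] = [
  { pattern: /\beval\s*\(/, category: "Code Execution", severity: "CRITICAL", action: "Block" },
  { pattern: /new\s+Function\s*\(/, category: "Code Execution", severity: "CRITICAL", action: "Block" },
  { pattern: /dangerouslySetInnerHTML/, category: "XSS", severity: "HIGH", action: "Warn" },
  { pattern: /pickle\.loads\s*\(/, category: "Deserialization", severity: "CRITICAL", action: "Warn" },
];

// Return every rule whose pattern appears anywhere in the edited file.
function scan(source: string): Rule[] {
  return RULES.filter((rule) => rule.pattern.test(source));
}
```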

## Response Format

When a pattern is detected, respond in this format:

````markdown
⚠️ **Security Warning**: [Pattern Category]

**File**: `path/to/file.ts:123`
**Pattern Detected**: `eval(userInput)`
**Risk**: Remote Code Execution - Attacker-controlled input can execute arbitrary JavaScript

**Recommendation**:
1. Never use eval() with user input
2. Use JSON.parse() for data parsing
3. Use safe alternatives for dynamic behavior

**Safe Alternative**:
```typescript
// Instead of eval(userInput), use:
const data = JSON.parse(userInput);
```
````

## Integration with Code Review

This skill should be invoked:
1. During PR reviews when new code is written
2. As part of security audits
3. When flagged by the code-reviewer skill

## False Positive Handling

Some patterns may be false positives:
- `dangerouslySetInnerHTML` with DOMPurify is safe
- `eval` in build tools (not user input) may be acceptable
- `exec` with hardcoded commands is lower risk

Always check the context before blocking.
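
For example, a context check might downgrade a finding that a reviewer has explicitly vetted. The suppression-comment convention below is purely illustrative, not part of the skill:

```typescript
// Illustrative only: assume a reviewer can mark a vetted line with a
// "security-reviewed" comment, and the skill skips it instead of blocking.
type Verdict = "Block" | "Warn" | "Skip";

function effectiveAction(action: Verdict, line: string): Verdict {
  const suppressed = /\/\/\s*security-reviewed:/.test(line);
  return suppressed ? "Skip" : action;
}
```

Suppressions like this deserve review themselves: anyone who can edit the code can also add the comment.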
