
kafka-stream-designer
by phatpham9
🚀 Automated development environment setup for macOS and Ubuntu
⭐ 5 · 🍴 2 · 📅 Jan 18, 2026
SKILL.md
name: kafka-stream-designer
description: 'Design Kafka topics, partitions, consumer groups, producers with idempotency, retry strategies, dead letter queues, exactly-once semantics, and schema registry integration'
Purpose
Design reliable, scalable Kafka-based event streaming architectures with proper partitioning, ordering, and error handling strategies.
When to Use
Use this skill when the task involves:
- Designing Kafka topics and partition strategies
- Implementing producers or consumers
- Setting up consumer groups for scaling
- Handling failures with retries and DLQs
- Configuring exactly-once or at-least-once semantics
- Integrating with Schema Registry (Avro, Protobuf)
Constraints
- Enable idempotent producer (enable.idempotence=true) by default
- Use Schema Registry for message contracts in production
- Design partition keys based on ordering requirements
- Implement dead letter queues for poison messages
- Set appropriate retention based on replay needs
- Use transactional producers for exactly-once across topics
- Monitor consumer lag as a key health metric
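The retry-then-DLQ constraint can be sketched as a small wrapper around message processing: retry with exponential backoff a bounded number of times, and let the caller route to the dead letter topic once attempts are exhausted. This is an illustrative helper, not part of kafkajs; the names withRetries, maxAttempts, and baseDelayMs are assumptions.

```typescript
// Bounded retry with exponential backoff. If every attempt fails, the last
// error is rethrown so the caller can route the message to the DLQ.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Backoff doubles per attempt: base, 2x base, 4x base, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
        );
      }
    }
  }
  throw lastError;
}
```

Keeping retries bounded matters: an unbounded retry on a poison message blocks its entire partition, which is exactly what the DLQ exists to prevent.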
Execution Steps
- Identify events — List domain events with their schemas
- Design topics — Naming conventions, partition count, replication factor
- Define partition keys — Choose keys that ensure ordering where needed
- Implement producers — Idempotent with proper serializers
- Build consumers — Consumer groups, offset management, error handling
- Add DLQ — Configure dead letter topic for failed messages
- Set up monitoring — Consumer lag alerts, throughput dashboards
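Step 3 above relies on deterministic key hashing: every message with the same key maps to the same partition, which is what preserves per-key ordering. A minimal sketch of the idea, using a simple FNV-1a hash as a stand-in for Kafka's actual murmur2 partitioner:

```typescript
// Deterministic key-to-partition mapping (FNV-1a, illustrative only).
// Same key in, same partition out, so per-customer ordering holds as long
// as the partition count does not change.
function partitionForKey(key: string, numPartitions: number): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, 32-bit unsigned
  }
  return hash % numPartitions;
}
```

kafkajs's DefaultPartitioner hashes the key bytes with murmur2 rather than FNV-1a; the point here is only that the mapping is stable. It also shows why adding partitions later reshuffles keys and silently breaks ordering guarantees, so size the partition count up front.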
Examples
Input
Design a Kafka topic for order events that maintains per-customer ordering
Output
// Topic Configuration
const topicConfig = {
  topic: 'orders.events.v1',
  partitions: 12, // Scale based on throughput needs
  replicationFactor: 3,
  config: {
    'retention.ms': 7 * 24 * 60 * 60 * 1000, // 7 days
    'cleanup.policy': 'delete',
    'min.insync.replicas': 2,
  },
};
// Producer with idempotency
import { Kafka, Partitioners } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: ['kafka:9092'],
});

const producer = kafka.producer({
  idempotent: true,
  maxInFlightRequests: 1, // kafkajs allows at most 1 in-flight request when idempotent
  createPartitioner: Partitioners.DefaultPartitioner,
});

await producer.connect();

interface OrderEvent {
  orderId: string;
  customerId: string;
  type: 'CREATED' | 'UPDATED' | 'CANCELLED';
  payload: Record<string, unknown>;
  timestamp: string;
}

async function publishOrderEvent(event: OrderEvent): Promise<void> {
  await producer.send({
    topic: 'orders.events.v1',
    messages: [
      {
        key: event.customerId, // Partition by customerId for ordering
        value: JSON.stringify(event),
        headers: {
          'event-type': event.type,
          'correlation-id': event.orderId,
        },
      },
    ],
  });
}
// Consumer with error handling
const consumer = kafka.consumer({
  groupId: 'order-processor',
  maxWaitTimeInMs: 100,
  retry: { retries: 3 },
});

await consumer.connect();
await consumer.subscribe({ topic: 'orders.events.v1', fromBeginning: false });

await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    try {
      const event = JSON.parse(message.value?.toString() || '{}');
      await processOrderEvent(event);
    } catch (error) {
      // Route the poison message to the DLQ so the partition is not blocked
      await sendToDLQ('orders.events.v1.dlq', message, error);
    }
  },
});
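The consumer above assumes a sendToDLQ helper that is not defined. One possible shape, hypothetical and with the producer passed in explicitly, forwards the failed record unchanged and attaches error context as headers so the DLQ consumer can triage without re-parsing payloads:

```typescript
// Structural types so the sketch stands alone; in real code these would be
// kafkajs's KafkaMessage and Producer.
type RawMessage = {
  key: Buffer | null;
  value: Buffer | null;
  headers?: Record<string, unknown>;
};

type DLQProducer = {
  send(record: { topic: string; messages: object[] }): Promise<unknown>;
};

// Copy the failed record as-is and annotate it with error context.
function toDLQMessage(message: RawMessage, error: Error) {
  return {
    key: message.key,
    value: message.value,
    headers: {
      ...(message.headers ?? {}),
      'dlq-error-message': error.message,
      'dlq-failed-at': new Date().toISOString(),
    },
  };
}

async function sendToDLQ(
  topic: string,
  message: RawMessage,
  error: Error,
  producer: DLQProducer,
): Promise<void> {
  await producer.send({ topic, messages: [toDLQMessage(message, error)] });
}
```

Preserving the original key keeps DLQ messages on a stable partition per customer, and keeping the value as raw bytes avoids losing data when the failure was a deserialization error.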
Related Skills
- nest-backend-service-builder — Integrate Kafka with NestJS
- database-schema-designer — Outbox pattern for reliability