ai-services

by digitalocean-labs

Claude/Agent Skills for DigitalOcean App Platform - deployment, migration, networking, database configuration, and troubleshooting

Updated: Jan 23, 2026
AI Services Skill

Configure DigitalOcean Gradient AI Platform for App Platform applications.

Tip: This is one specialized skill in the App Platform library. For complex multi-step projects, consider using the planner skill to generate a staged approach. For an overview of all available skills, see the root SKILL.md.


Quick Decision

```text
What do you need?
├── Simple LLM API calls → Serverless Inference
│   OpenAI-compatible API, no agent management
│
└── Full AI agents → Agent Development Kit (ADK)
    Knowledge bases, RAG, guardrails, multi-agent routing
```
| Need | Solution | Reference |
|---|---|---|
| Call LLM models directly | Serverless Inference | serverless-inference.md |
| Build agents with knowledge bases | ADK | agent-development-kit.md |
| Content filtering / guardrails | ADK | agent-development-kit.md |
| Multi-agent workflows | ADK | agent-development-kit.md |

Credential Handling

Model access keys follow the standard credential hierarchy:

  1. GitHub Secrets (recommended): User creates key → adds to GitHub Secrets → app spec references
  2. App Platform Secrets: Set via doctl apps update with type: SECRET
```yaml
# App Spec pattern
envs:
  - key: MODEL_ACCESS_KEY
    scope: RUN_TIME
    type: SECRET
    value: ${MODEL_ACCESS_KEY}   # From GitHub Secrets
```

Key creation: Control Panel → Serverless Inference → Model Access Keys

Keys are shown only once after creation; store them securely.
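Since a missing or empty `MODEL_ACCESS_KEY` only surfaces as a `401` on the first API call, it can help to validate configuration at startup. A minimal sketch; the helper name is ours, not part of any SDK:

```python
import os

# Variables the quick starts below rely on.
REQUIRED_VARS = ("MODEL_ACCESS_KEY", "INFERENCE_ENDPOINT")

def missing_env_vars(env, required=REQUIRED_VARS):
    """Return the names of required settings absent or empty in `env`."""
    return [name for name in required if not env.get(name)]

# At service startup you might fail fast:
#   missing = missing_env_vars(os.environ)
#   if missing:
#       raise RuntimeError(f"Missing env vars: {', '.join(missing)}")
```

Failing at boot makes a misconfigured GitHub Secret visible in deploy logs instead of as a runtime `401`.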


Quick Start: Serverless Inference

```yaml
# .do/app.yaml
services:
  - name: api
    envs:
      - key: MODEL_ACCESS_KEY
        scope: RUN_TIME
        type: SECRET
        value: ${MODEL_ACCESS_KEY}
      - key: INFERENCE_ENDPOINT
        value: https://inference.do-ai.run
```

```python
# Python SDK (OpenAI-compatible)
from openai import OpenAI
import os

client = OpenAI(
    base_url=os.environ["INFERENCE_ENDPOINT"] + "/v1",
    api_key=os.environ["MODEL_ACCESS_KEY"],
)

response = client.chat.completions.create(
    model="llama3.3-70b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

Full guide: See serverless-inference.md
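The SDK call above returns a response in the OpenAI chat-completions shape. A small helper (hypothetical, not part of the SDK) can pull out the assistant's text; a minimal sketch:

```python
def reply_text(response):
    """Extract the assistant message from a chat-completions response.

    Accepts either the OpenAI SDK response object or an equivalent
    plain dict, so it can be exercised offline in unit tests.
    """
    choice = response["choices"][0] if isinstance(response, dict) else response.choices[0]
    message = choice["message"] if isinstance(choice, dict) else choice.message
    return message["content"] if isinstance(message, dict) else message.content
```

With the quick start above, `reply_text(response)` would yield the model's reply string.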


Quick Start: Agent Development Kit

```bash
# Install and configure
pip install gradient-adk
gradient agent configure

# Run locally
gradient agent run
# → http://localhost:8080/run

# Deploy to DigitalOcean
gradient agent deploy
```

```python
# Agent entrypoint
from gradient_adk import entrypoint

@entrypoint
def entry(payload, context):
    query = payload["prompt"]
    return {"response": "Hello from agent!"}
```

Full guide: See agent-development-kit.md
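Because the entrypoint body is a plain `(payload, context)` function, its logic can be unit-tested without `gradient_adk` installed or a local server running. A minimal sketch, assuming the payload contract shown above (the decorator is omitted and the response body is illustrative):

```python
# Same shape as the decorated entrypoint, testable in isolation.
def entry(payload, context):
    query = payload["prompt"]
    return {"response": f"You said: {query}"}

# Direct call, no HTTP round-trip to localhost:8080/run needed.
result = entry({"prompt": "Hello!"}, None)
```

Keeping agent logic in plain functions like this makes it easy to test before `gradient agent deploy`.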


Available Models

| Model | Use Case |
|---|---|
| llama3.3-70b-instruct | General purpose, high quality |
| llama3-8b | Faster, lower cost |
| mistral-7b | Efficient, multilingual |

```bash
# List all available models
doctl genai list-models
```

Check Gradient AI Models for current availability.


Reference Files

  • serverless-inference.md: Serverless Inference guide
  • agent-development-kit.md: Agent Development Kit (ADK) guide

Quick Troubleshooting

| Error | Cause | Fix |
|---|---|---|
| 401 Unauthorized | Invalid model access key | Verify key in GitHub Secrets |
| Model not found | Invalid model ID | Run `doctl genai list-models` |
| Rate limit exceeded | Too many requests | Implement exponential backoff |
| ADK deploy fails | Missing token scopes | Ensure genai CRUD + project read scopes |
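For the rate-limit row above, the usual pattern is capped exponential backoff with jitter. A minimal sketch; the helper names and parameters are ours:

```python
import random
import time

def backoff_delays(attempts=5, base=1.0, cap=30.0):
    """Capped exponential delays with full jitter: uniform(0, min(cap, base*2^n))."""
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

def call_with_retry(fn, attempts=5, base=1.0, cap=30.0):
    """Retry `fn` on exception, sleeping per the backoff schedule.

    After the retries are exhausted, make one final attempt and let
    its exception propagate to the caller.
    """
    for delay in backoff_delays(attempts, base, cap):
        try:
            return fn()
        except Exception:
            time.sleep(delay)
    return fn()
```

In practice you would retry only on rate-limit errors (HTTP 429) rather than on every exception, and wrap the `client.chat.completions.create(...)` call from the quick start.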

Integration with Other Skills

  • → designer: Add AI service environment variables to app spec
  • → deployment: Model access key stored in GitHub Secrets
  • → devcontainers: Test AI integrations locally before deployment
  • → planner: Plan AI-enabled app deployments
