streamable-http-mcp-server

by syeda-hoorain-ali

TaskFlow is an innovative todo application that demonstrates the complete evolution of software development from a simple in-memory Python console app to a sophisticated, AI-powered, cloud-native application deployed on Kubernetes.

0🍴 0📅 January 21, 2026

SKILL.md


---
name: streamable-http-mcp-server
description: Creates and configures Streamable HTTP Model Context Protocol (MCP) server connections for OpenAI Agents SDK
---

Streamable HTTP MCP Server Skill

This skill helps create and configure Streamable HTTP Model Context Protocol (MCP) server connections for OpenAI Agents SDK.

Purpose

  • Create MCPServerStreamableHttp configurations
  • Configure HTTP connection parameters and authentication
  • Set up caching and retry mechanisms
  • Connect to HTTP-based MCP servers with direct connection management

MCPServerStreamableHttp Constructor Parameters

  • params (MCPServerStreamableHttpParams): Connection parameters for the server
    • url (str): The URL of the server
    • headers (dict[str, str], optional): The headers to send to the server
    • timeout (timedelta | float, optional): The timeout for the HTTP request (default: 5 seconds)
    • sse_read_timeout (timedelta | float, optional): The timeout for the SSE connection (default: 5 minutes)
    • terminate_on_close (bool, optional): Whether to terminate the server session when the connection is closed
    • httpx_client_factory (HttpClientFactory, optional): Custom HTTP client factory for configuring httpx.AsyncClient behavior
  • cache_tools_list (bool): Whether to cache the list of available tools (default: False)
  • name (str | None): A readable name for the server (default: None, auto-generated from URL)
  • client_session_timeout_seconds (float | None): Read timeout for the MCP ClientSession (default: 5)
  • tool_filter (ToolFilter): Filter controlling which MCP tools the agent can see and call (default: None)
  • use_structured_content (bool): Whether to use tool_result.structured_content when calling an MCP tool (default: False)
  • max_retry_attempts (int): Number of times to retry failed list_tools/call_tool calls (default: 0)
  • retry_backoff_seconds_base (float): The base delay, in seconds, for exponential backoff between retries (default: 1.0)
  • message_handler (MessageHandlerFnT | None): Optional handler invoked for session messages (default: None)
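
As a sketch of how the optional parameters above can be combined: the URL, bearer token, and tool names below are illustrative placeholders, and create_static_tool_filter is the Agents SDK helper for static allow/block lists.

from agents.mcp import MCPServerStreamableHttp, create_static_tool_filter

server = MCPServerStreamableHttp(
    name="Internal Tools",  # readable label instead of the auto-generated URL-based name
    params={
        "url": "https://tools.example.com/mcp",          # placeholder URL
        "headers": {"Authorization": "Bearer <token>"},  # placeholder credentials
        "timeout": 10,            # HTTP request timeout in seconds (default: 5)
        "sse_read_timeout": 300,  # SSE read timeout in seconds (default: 5 minutes)
    },
    cache_tools_list=True,           # fetch the tools list once and reuse it
    tool_filter=create_static_tool_filter(allowed_tool_names=["add", "subtract"]),
    max_retry_attempts=3,            # retry failed list_tools/call_tool up to 3 times
    retry_backoff_seconds_base=1.0,  # base delay for exponential backoff between retries
)

The instance still has to be connected (entered as an async context manager, as in the Basic Example below) before an agent can call its tools.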

Usage Context

Use this skill when:

  • Managing HTTP connections yourself
  • Running servers locally or remotely with direct connection management
  • Keeping latency low by running the server on your own infrastructure

Basic Example

import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp
from agents.model_settings import ModelSettings


async def main() -> None:
    # Read the bearer token for the MCP server from the environment.
    token = os.environ["MCP_SERVER_TOKEN"]

    # Open the Streamable HTTP connection; the context manager handles
    # connect/cleanup, and the tools list is cached after the first fetch.
    async with MCPServerStreamableHttp(
        name="Streamable HTTP Python Server",
        params={
            "url": "http://localhost:8000/mcp",
            "headers": {"Authorization": f"Bearer {token}"},
            "timeout": 10,
        },
        cache_tools_list=True,
        max_retry_attempts=3,
    ) as server:
        # Expose the server's tools to the agent and force tool use.
        agent = Agent(
            name="Assistant",
            instructions="Use the MCP tools to answer the questions.",
            mcp_servers=[server],
            model_settings=ModelSettings(tool_choice="required"),
        )

        result = await Runner.run(agent, "Add 7 and 22.")
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
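
For reference, a minimal server that this example could connect to is sketched below, assuming the official mcp Python SDK's FastMCP helper; by default it serves Streamable HTTP on port 8000 at the /mcp path, matching the client URL above. The server name and the add tool are illustrative.

from mcp.server.fastmcp import FastMCP

# Minimal MCP server exposing a single "add" tool over Streamable HTTP.
mcp = FastMCP("Streamable HTTP Python Server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Serve the MCP endpoint over the Streamable HTTP transport.
    mcp.run(transport="streamable-http")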

Score

Overall score

75/100

Evaluation based on repository quality metrics

  • SKILL.md — a SKILL.md file is included: +20
  • LICENSE — a license is set: +10
  • Description — the description is at least 100 characters: +10
  • Popularity — 100+ GitHub stars: 0/15
  • Recent activity — updated within the last month: +10
  • Forks — forked 10 or more times: 0/5
  • Issue management — fewer than 50 open issues: +5
  • Language — a programming language is set: +5
  • Tags — at least one tag is set: +5
