
Qiniu MCP Server

Official
by qiniu

live_streaming_create_bucket

Create a new bucket for live streaming storage using S3-style API to organize and manage streaming content on Qiniu Cloud.

Instructions

Create a new bucket in LiveStreaming using S3-style API. The bucket will be created at https://<bucket>.<endpoint_url>

Input Schema

| Name   | Required | Description               | Default |
|--------|----------|---------------------------|---------|
| bucket | Yes      | LiveStreaming bucket name |         |
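
A tool call therefore carries a single required argument. A minimal sketch of a conforming call payload (the bucket name `my-live-assets` is illustrative, not from the source):

```python
import json

# Arguments for a live_streaming_create_bucket call; only "bucket"
# is required by the input schema, and it must be a string.
arguments = {"bucket": "my-live-assets"}
assert isinstance(arguments["bucket"], str)

payload = json.dumps({"name": "live_streaming_create_bucket", "arguments": arguments})
print(payload)
```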

Implementation Reference

  • The primary handler function for the 'live_streaming_create_bucket' MCP tool. It defines the tool metadata (including input schema) and executes the logic by delegating to LiveStreamingService.create_bucket.
    @tools.tool_meta(
        types.Tool(
            name="live_streaming_create_bucket",
            description="Create a new bucket in LiveStreaming using S3-style API. The bucket will be created at https://<bucket>.<endpoint_url>",
            inputSchema={
                "type": "object",
                "properties": {
                    "bucket": {
                        "type": "string",
                        "description": _BUCKET_DESC,
                    },
                },
                "required": ["bucket"],
            },
        )
    )
    async def create_bucket(self, **kwargs) -> list[types.TextContent]:
        result = await self.live_streaming.create_bucket(**kwargs)
        return [types.TextContent(type="text", text=str(result))]
  • The register_tools function where the live_streaming_create_bucket handler is registered into the MCP tools registry via auto_register_tools.
    def register_tools(live_streaming: LiveStreamingService):
        tool_impl = _ToolImpl(live_streaming)
        tools.auto_register_tools(
            [
                tool_impl.create_bucket,
                tool_impl.create_stream,
                tool_impl.bind_push_domain,
                tool_impl.bind_play_domain,
                tool_impl.get_push_urls,
                tool_impl.get_play_urls,
                tool_impl.query_live_traffic_stats,
                tool_impl.list_buckets,
                tool_impl.list_streams,
            ]
        )
  • Core helper implementation in LiveStreamingService that performs the HTTP PUT request to create the bucket, handles authentication, and processes the response.
    async def create_bucket(self, bucket: str) -> Dict[str, Any]:
        """
        Create a bucket using S3-style API

        Args:
            bucket: The bucket name to create

        Returns:
            Dict containing the response status and message
        """
        url = self._build_bucket_url(bucket)
        body_json = json.dumps({})
        auth_headers = self._get_auth_header(method="PUT", url=url, content_type="application/json", body=body_json)
        headers = {"Content-Type": "application/json"}
        # Merge in the authentication headers, if any
        if auth_headers:
            headers.update(auth_headers)

        logger.debug(f"HTTP request: PUT {url}, headers: {headers}, body: {body_json}")
        logger.info(f"Creating bucket: {bucket} at {url}")

        async with aiohttp.ClientSession() as session:
            async with session.put(url, headers=headers, data=body_json) as response:
                status = response.status
                text = await response.text()
                logger.debug(f"HTTP response: status {status}, body: {text}")

                if status in (200, 201):
                    logger.info(f"Successfully created bucket: {bucket}")
                    return {
                        "status": "success",
                        "bucket": bucket,
                        "url": url,
                        "message": f"Bucket '{bucket}' created successfully",
                        "status_code": status,
                    }
                logger.error(f"Failed to create bucket: {bucket}, status: {status}, response: {text}")
                return {
                    "status": "error",
                    "bucket": bucket,
                    "url": url,
                    "message": f"Failed to create bucket: {text}",
                    "status_code": status,
                }
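
The success/error branching above reduces to a pure function of the HTTP status code, which makes the response contract easy to test in isolation. A hypothetical sketch (the helper name `shape_result` is not part of the source):

```python
from typing import Any, Dict

def shape_result(status: int, text: str, bucket: str, url: str) -> Dict[str, Any]:
    """Mirror create_bucket's response contract: 200/201 means success."""
    if status in (200, 201):
        outcome = "success"
        message = f"Bucket '{bucket}' created successfully"
    else:
        outcome = "error"
        message = f"Failed to create bucket: {text}"
    return {
        "status": outcome,
        "bucket": bucket,
        "url": url,
        "message": message,
        "status_code": status,
    }

print(shape_result(201, "", "demo", "https://demo.example")["status"])
# → success
```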
  • Helper method to construct the S3-style bucket URL used in the create_bucket implementation.
    def _build_bucket_url(self, bucket: str) -> str:
        """Build S3-style bucket URL"""
        if not self.live_endpoint:
            self.live_endpoint = "mls.cn-east-1.qiniumiku.com"
    
        # Remove protocol if present in live_endpoint
        endpoint = self.live_endpoint
        if endpoint.startswith("http://"):
            endpoint = endpoint[7:]
        elif endpoint.startswith("https://"):
            endpoint = endpoint[8:]
    
        # Build URL in format: https://<bucket>.<endpoint>
        return f"https://{bucket}.{endpoint}"
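
The endpoint normalization above can be exercised standalone. A minimal sketch of the same logic outside the service class (the endpoint values below are illustrative; the default matches the one in the source):

```python
from typing import Optional

def build_bucket_url(bucket: str, endpoint: Optional[str]) -> str:
    """Build an S3-style virtual-hosted URL: https://<bucket>.<endpoint>."""
    if not endpoint:
        endpoint = "mls.cn-east-1.qiniumiku.com"  # default from the source
    # Strip any scheme so it is not duplicated in the result.
    for scheme in ("http://", "https://"):
        if endpoint.startswith(scheme):
            endpoint = endpoint[len(scheme):]
            break
    return f"https://{bucket}.{endpoint}"

print(build_bucket_url("my-live", "https://mls.cn-east-1.qiniumiku.com"))
# → https://my-live.mls.cn-east-1.qiniumiku.com
```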
  • The load function that instantiates LiveStreamingService and calls register_tools to register all live streaming tools including live_streaming_create_bucket.
    def load(cfg: config.Config):
        live = LiveStreamingService(cfg)
        register_tools(live)

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool creates a bucket and provides the resulting URL format, but lacks critical details: it doesn't specify if this is a mutating operation (implied but not explicit), what permissions are required, whether the bucket name must be unique, error handling, or rate limits. For a creation tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two sentences that directly state the tool's purpose and outcome. Every word earns its place: 'Create a new bucket' defines the action, 'in LiveStreaming' specifies the context, 'using S3-style API' adds technical detail, and the URL format clarifies the result. There is no redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a creation tool with no annotations and no output schema, the description is incomplete. It lacks essential context: it doesn't explain what a 'bucket' is in this system, what happens after creation (e.g., default settings), potential side effects, or return values. The URL format hint is helpful but insufficient for an agent to fully understand the tool's behavior and implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'bucket' documented as 'LiveStreaming bucket name'. The description adds minimal value beyond the schema by mentioning the resulting URL format ('https://<bucket>.<endpoint_url>'), which implies the bucket name is used in the URL. However, it doesn't provide additional semantics like naming constraints or examples. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new bucket') and resource ('in LiveStreaming'), and specifies the API style ('using S3-style API'). It distinguishes from siblings like 'list_buckets' and 'live_streaming_list_buckets' by focusing on creation rather than listing. However, it doesn't explicitly differentiate from other bucket-related tools beyond the creation aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing permissions or existing infrastructure, or when not to use it (e.g., for updating or deleting buckets). With siblings like 'list_buckets' and 'live_streaming_list_buckets', there's no explicit comparison or context for choosing this tool over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
