get_object
Retrieve the contents of an object from Qiniu Cloud Storage by providing the bucket name and object key. The tool is exposed through the Qiniu MCP Server and can be invoked by AI large-model clients.
Instructions
Get the contents of an object from a Qiniu Cloud bucket. In the GetObject request, specify the full key name of the object.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| bucket | Yes | Qiniu Cloud Storage bucket name | |
| key | Yes | Key of the object to get. | |
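As a sketch, an MCP client would invoke this tool with a standard `tools/call` request; the bucket and key values below are hypothetical placeholders:

```json
{
  "method": "tools/call",
  "params": {
    "name": "get_object",
    "arguments": {
      "bucket": "my-bucket",
      "key": "docs/readme.txt"
    }
  }
}
```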
Implementation Reference
- The primary handler for the `get_object` MCP tool. It fetches the object content from the storage service, checks `ContentType` to determine whether the object is an image, and returns images as base64-encoded `ImageContent` or decodes other bodies into `TextContent`.

```python
async def get_object(self, **kwargs) -> list[ImageContent] | list[TextContent]:
    response = await self.storage.get_object(**kwargs)
    file_content = response["Body"]
    content_type = response.get("ContentType", "application/octet-stream")

    # Return a different response type depending on the content type
    if content_type.startswith("image/"):
        base64_data = base64.b64encode(file_content).decode("utf-8")
        return [
            types.ImageContent(
                type="image", data=base64_data, mimeType=content_type
            )
        ]

    if isinstance(file_content, bytes):
        text_content = file_content.decode("utf-8")
    else:
        text_content = str(file_content)
    return [types.TextContent(type="text", text=text_content)]
```
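The handler's branching can be isolated into a small self-contained sketch. The `classify_object` helper below is hypothetical (not part of the source), but mirrors the same logic: image content types become a base64 payload, everything else is decoded as UTF-8 text.

```python
import base64


def classify_object(body: bytes, content_type: str) -> dict:
    """Mirror the handler's branching on ContentType."""
    if content_type.startswith("image/"):
        # Images are base64-encoded so they can travel as JSON-safe text
        return {
            "type": "image",
            "data": base64.b64encode(body).decode("utf-8"),
            "mimeType": content_type,
        }
    # Everything else is treated as UTF-8 text
    return {"type": "text", "text": body.decode("utf-8")}


print(classify_object(b"hello", "text/plain"))
# {'type': 'text', 'text': 'hello'}
```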
- The schema definition for the `get_object` tool, declaring `bucket` and `key` as required string parameters.

```python
types.Tool(
    name="get_object",
    description=(
        "Get an object contents from Qiniu Cloud bucket. "
        "In the GetObject request, specify the full key name for the object."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "bucket": {
                "type": "string",
                "description": _BUCKET_DESC,
            },
            "key": {
                "type": "string",
                "description": "Key of the object to get.",
            },
        },
        "required": ["bucket", "key"],
    },
)
```
- src/mcp_server/core/storage/tools.py:236-247 (registration) — The `register_tools` function creates a `_ToolImpl` instance and auto-registers the `get_object` handler along with the other storage tools.

```python
def register_tools(storage: StorageService):
    tool_impl = _ToolImpl(storage)
    tools.auto_register_tools(
        [
            tool_impl.list_buckets,
            tool_impl.list_objects,
            tool_impl.get_object,
            tool_impl.upload_text_data,
            tool_impl.upload_local_file,
            tool_impl.get_object_url,
        ]
    )
```
- src/mcp_server/core/storage/__init__.py:7-11 (registration) — The `load` function in the storage module's `__init__` instantiates `StorageService` and calls `register_tools`, registering the MCP tools including `get_object`.

```python
def load(cfg: config.Config):
    storage = StorageService(cfg)
    register_tools(storage)
    register_resource_provider(storage)
```
- The helper method in `StorageService` that performs the actual S3 `get_object` operation, reads the full body into memory, and returns the response dict.

```python
async def get_object(self, bucket: str, key: str) -> Dict[str, Any]:
    if self.config.buckets and bucket not in self.config.buckets:
        logger.warning(f"Bucket {bucket} not in configured bucket list")
        return {}

    async with self.s3_session.client(
        "s3",
        aws_access_key_id=self.config.access_key,
        aws_secret_access_key=self.config.secret_key,
        endpoint_url=self.config.endpoint_url,
        region_name=self.config.region_name,
        config=self.s3_config,
    ) as s3:
        # Get the object and its stream
        response = await s3.get_object(Bucket=bucket, Key=key)
        stream = response["Body"]

        # Read the entire stream in chunks
        chunks = []
        async for chunk in stream:
            chunks.append(chunk)

        # Replace the stream with the complete data
        response["Body"] = b"".join(chunks)
        return response
```