
Image Processing MCP Server

by duke0317

get_performance_stats

Retrieve performance metrics and statistics for image processing operations to monitor efficiency and optimize workflows.

Instructions

获取性能统计信息 ("Get performance statistics")

Input Schema

No arguments

Output Schema

Name: result (required)

Implementation Reference

  • main.py:793-806 (handler)
    MCP tool handler and registration for 'get_performance_stats'. Calls the utility function from utils.performance and formats the result as a JSON string.
    @mcp.tool()
    def get_performance_stats() -> str:
        """获取性能统计信息"""  # "Get performance statistics"
        try:
            stats = utils_get_performance_stats()
            return json.dumps({
                "success": True,
                "data": stats
            }, ensure_ascii=False, indent=2)
        except Exception as e:
            return json.dumps({
                "success": False,
                # "Failed to get performance statistics: <error>"
                "error": f"获取性能统计失败: {str(e)}"
            }, ensure_ascii=False, indent=2)
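The handler's success/error JSON envelope can be exercised in isolation. A minimal standalone sketch, with a hypothetical `fake_stats` standing in for `utils_get_performance_stats` (which is not reproduced here):

```python
import json


def fake_stats():
    # Stand-in for utils_get_performance_stats(); the real function
    # aggregates monitor, cache, and resource statistics.
    return {"monitor": {}, "cache": {"hits": 3}, "resources": {}, "timestamp": 0.0}


def get_performance_stats_tool() -> str:
    """Mirror of the handler's envelope logic, detached from MCP."""
    try:
        stats = fake_stats()
        return json.dumps({"success": True, "data": stats},
                          ensure_ascii=False, indent=2)
    except Exception as e:
        return json.dumps({"success": False, "error": f"Failed: {e}"},
                          ensure_ascii=False, indent=2)


result = json.loads(get_performance_stats_tool())
```

Because the tool always returns a JSON string (never raises), an agent can parse the result unconditionally and branch on the `success` flag.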
  • utils/performance.py (helper)
    Core helper function that aggregates performance statistics from the global monitor, cache, and resource manager instances.
    def get_performance_stats() -> Dict[str, Any]:
        """获取完整的性能统计"""  # "Get complete performance statistics"
        return {
            "monitor": performance_monitor.get_stats(),
            "cache": image_cache.get_stats(),
            "resources": resource_manager.get_stats(),
            "timestamp": time.time()
        }
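The helper fans out to three global provider objects, each exposing a `get_stats()` method. The providers themselves are not shown in the excerpt; a hypothetical minimal sketch of one (the `PerformanceMonitor` name and fields are assumptions) illustrates the interface the aggregator relies on:

```python
from typing import Any, Dict


class PerformanceMonitor:
    """Hypothetical minimal monitor matching the get_stats() contract."""

    def __init__(self) -> None:
        self.op_count = 0
        self.total_seconds = 0.0

    def record(self, seconds: float) -> None:
        # Called after each image operation to accumulate timing data.
        self.op_count += 1
        self.total_seconds += seconds

    def get_stats(self) -> Dict[str, Any]:
        avg = self.total_seconds / self.op_count if self.op_count else 0.0
        return {"operations": self.op_count, "avg_seconds": avg}


monitor = PerformanceMonitor()
monitor.record(0.2)
monitor.record(0.4)
stats = monitor.get_stats()
```

Keeping each provider behind a uniform `get_stats()` method lets the aggregator stay a flat dict merge, with no knowledge of how individual metrics are collected.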
  • main.py:85-85 (registration)
    Import of the performance stats utility function used by the MCP tool handler.
    from utils.performance import get_performance_stats as utils_get_performance_stats, reset_performance_stats as utils_reset_performance_stats
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. The description only states what the tool does ('get performance statistics') without revealing any behavioral traits: it doesn't specify whether this is a read-only operation, what permissions might be needed, whether it's resource-intensive, what format the statistics are returned in, or if there are any side effects. For a tool with no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single phrase in Chinese. For a zero-parameter tool that presumably returns performance data, this brevity is appropriate. There's no wasted language or unnecessary elaboration, and the meaning is immediately clear (though incomplete as noted in other dimensions).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has 0 parameters (simple input), no annotations, but has an output schema (which should document return values), the description is minimally adequate but has clear gaps. The output schema will handle return value documentation, so the description doesn't need to explain that. However, in a server with many image-processing sibling tools, it should better distinguish itself and provide more behavioral context given the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100% (since there are no parameters to describe). With no parameters, the description doesn't need to compensate for schema gaps. The baseline for zero parameters is 4, as there's no parameter semantics to explain beyond what the empty schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description '获取性能统计信息' (Get performance statistics) is a tautology that essentially restates the tool name 'get_performance_stats' in Chinese. It doesn't specify what kind of performance statistics (image processing? system? application?), what resource they apply to, or how this differs from sibling tools like 'get_image_info' or 'reset_performance_stats'. The purpose is vague and lacks differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of context, prerequisites, or comparisons to sibling tools like 'get_image_info' (which might provide different metadata) or 'reset_performance_stats' (which appears to be a related mutation tool). Without any usage instructions, the agent has no basis for selecting this tool appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
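The Purpose and Usage Guidelines findings above point toward a fuller tool description. One possible rewrite, purely illustrative (the wording and the claims about sibling tools are assumptions based on the tool names visible in this review):

```python
# Illustrative replacement docstring addressing the review's gaps:
# specific resource, read-only disclosure, and cross-tool guidance.
IMPROVED_DESCRIPTION = (
    "Return cumulative performance statistics for this server's image "
    "processing operations (operation timings, cache statistics, resource "
    "usage) as a JSON string. Read-only: takes no arguments and has no "
    "side effects. Use reset_performance_stats to clear the counters; "
    "use get_image_info for metadata about a specific image."
)
```

A description like this costs a few extra tokens but lets an agent choose between this tool, `reset_performance_stats`, and `get_image_info` without guessing.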
