get_health

Check the health status of the Prefect workflow automation server to monitor system availability and ensure workflows can execute properly.

Instructions

Get health status of the Prefect server.

Returns: Health status information

Input Schema


No arguments

Implementation Reference

  • The get_health tool handler, decorated with @mcp.tool. It checks the Prefect server's health by calling client.hello() and returns the status or error as TextContent.
    from typing import List, Union

    import mcp.types as types       # MCP content types
    from prefect import get_client  # Prefect's async orchestration client

    # `mcp` is the server's FastMCP instance, defined elsewhere in the module.
    @mcp.tool
    async def get_health() -> List[Union[types.TextContent, types.ImageContent, types.EmbeddedResource]]:
        """
        Get health status of the Prefect server.
        
        Returns:
            Health status information
        """
        try:
            # Test connection to Prefect by calling the health endpoint
            async with get_client() as client:
                health_status = await client.hello()
                
            return [types.TextContent(type="text", text=str(health_status))]
        
        except Exception as e:
            error_status = {
                "status": "unhealthy",
                "message": f"Error connecting to Prefect server: {str(e)}"
            }
            
            return [types.TextContent(type="text", text=str(error_status))]
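One detail worth noting for consumers: the handler serializes both payloads with `str(...)` rather than `json.dumps`, so the error dict comes back as a Python repr (single quotes), not JSON. A minimal sketch of recovering it on the client side, assuming the error shape shown above:

```python
import ast

# The handler returns str(error_status), i.e. a Python dict repr like
# "{'status': 'unhealthy', 'message': '...'}" -- single quotes, not JSON.
text = str({
    "status": "unhealthy",
    "message": "Error connecting to Prefect server: timeout",
})

# json.loads(text) would raise on the single quotes; ast.literal_eval
# safely parses a repr built from plain literals back into a dict.
payload = ast.literal_eval(text)
assert payload["status"] == "unhealthy"
```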
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns 'Health status information,' but doesn't specify what that entails (e.g., uptime, metrics, error details), whether it's a read-only operation, or any potential side effects. For a diagnostic tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
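The gap identified here maps directly onto the behavioral hint fields the MCP specification defines for tool annotations. As a sketch (the field names come from the spec; the values are an assumption about what this read-only health check would declare):

```python
# Hypothetical annotations for get_health, using the hint fields the MCP
# spec defines; values reflect the read-only behavior seen in the handler.
annotations = {
    "readOnlyHint": True,      # only reads server state, never writes
    "destructiveHint": False,  # nothing is modified or deleted
    "idempotentHint": True,    # repeated calls change nothing further
    "openWorldHint": True,     # reaches out to an external Prefect server
}
```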

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief with two sentences, but the second sentence ('Returns: Health status information') is redundant and adds little value beyond the first. It could be more efficiently structured by combining or omitting the return statement, though it's not overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimal but adequate for basic understanding. However, it lacks details on what 'health status' includes, how to interpret results, or any error handling, which could be important for effective use in a server monitoring context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics beyond what the schema provides. A baseline score of 4 is appropriate as it avoids redundancy while clearly indicating no inputs are required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get health status of the Prefect server.' It specifies the verb ('Get') and resource ('health status'), making the action unambiguous. However, it doesn't differentiate from siblings beyond the obvious health focus, as no other tools appear to serve this diagnostic function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing, or context for checking server health, nor does it reference any sibling tools that might overlap or be preferred in certain scenarios. Usage is implied only by the tool's name and purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
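As an illustration of the guidance this dimension asks for, here is a hypothetical rewrite of the tool description; the wording is invented for the sketch and does not come from the actual server.

```python
# A hypothetical, guidance-rich description for get_health; illustrative
# only -- this text is not taken from the mcp-prefect repository.
IMPROVED_DESCRIPTION = (
    "Check connectivity and health of the configured Prefect server. "
    "Read-only, with no side effects beyond a single API call. "
    "Call this first when other Prefect tools fail, to distinguish a "
    "server outage from a bad request."
)
```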


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/allen-munsch/mcp-prefect'
