
Excalidraw MCP Server

by jeel00dev

check_llm_status

Check if your local llama.cpp server is running and reachable to ensure offline diagram generation from natural language.

Instructions

Check whether the local llama.cpp server is running and reachable.

Input Schema

No arguments.

Output Schema

Name | Required | Description | Default
result | Yes | - | -
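
Both schemas are derived from the function signature rather than declared explicitly. For a no-argument tool returning a string, the derived shapes are roughly as follows (an illustrative sketch; the exact field names the server emits may differ):

```python
# Illustrative sketch of the schemas derived from
# `async def check_llm_status() -> str` -- an assumption about the
# generated shape, not the server's literal output.
input_schema = {
    "type": "object",
    "properties": {},  # the tool takes no arguments
    "required": [],
}
output_schema = {
    "type": "object",
    "properties": {"result": {"type": "string"}},
    "required": ["result"],  # matches the required 'result' field above
}
print(output_schema["required"])  # ['result']
```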

Implementation Reference

  • The tool handler function for 'check_llm_status'. Checks if the llama.cpp server is running by delegating to check_llm_health() and returns a human-readable status message.
    async def check_llm_status() -> str:
        """Check whether the local llama.cpp server is running and reachable."""
        if await check_llm_health():
            return "llama.cpp server is running at localhost:8080"
        return (
            "llama.cpp server is NOT running.\n"
            "Start it with:\n"
            "  ./build/bin/llama-server -m models/your-model.gguf --port 8080 -c 8192"
        )
  • Registration as an MCP tool via the @mcp.tool() decorator on the check_llm_status function.
    @mcp.tool()
    async def check_llm_status() -> str:
  • The check_llm_health() helper performs an HTTP GET to {LLAMA_BASE_URL}/health (localhost:8080 in this setup) and returns True if the status is 200.
    import httpx

    LLAMA_BASE_URL = "http://localhost:8080"  # assumed default; matches the localhost:8080 referenced above

    async def check_llm_health() -> bool:
        """Return True if the llama.cpp server is reachable."""
        try:
            async with httpx.AsyncClient(timeout=5.0) as client:
                r = await client.get(f"{LLAMA_BASE_URL}/health")
                return r.status_code == 200
        except Exception:
            return False
  • Schema is implicitly defined by the function signature: no input params, returns a string. The docstring serves as the description.
    async def check_llm_status() -> str:
        """Check whether the local llama.cpp server is running and reachable."""
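
The helper's behavior can be exercised without httpx: a minimal self-contained sketch of the same check, using only the standard library and a throwaway local server (the function name, handler, and port handling here are illustrative, not part of the server's code):

```python
import http.server
import threading
import urllib.request

def check_health(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if GET {base_url}/health answers 200, mirroring check_llm_health()."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as r:
            return r.status == 200
    except Exception:
        return False

class _HealthHandler(http.server.BaseHTTPRequestHandler):
    """Throwaway server: /health answers 200, anything else 404."""
    def do_GET(self):
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()
    def log_message(self, *args):  # keep output quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), _HealthHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{srv.server_address[1]}"

up = check_health(base)    # server answering: True
srv.shutdown()
srv.server_close()
down = check_health(base)  # port closed, connection refused: False
print(up, down)
```

As in the real helper, any exception (connection refused, timeout) is swallowed and reported as "not reachable" rather than raised to the caller.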
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description indicates a read-only check, but does not describe behavior such as the request timeout, error handling, or what counts as 'reachable'. It is not misleading, however.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence that conveys the full purpose with no extraneous words. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and the existence of an output schema, the description is mostly complete. It could mention the expected return value format (e.g., boolean or status object), but the output schema presumably covers that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is 100%. The description adds meaning beyond the schema by explaining the tool's purpose. With zero parameters, baseline is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks if a local llama.cpp server is running and reachable. It uses a specific verb ('check') and resource ('local llama.cpp server'). This purpose is distinct from sibling tools (generate_diagram, list_diagrams).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use the tool, when not to, or what alternatives exist. The context implies it should be called before other server-dependent tools, but the description does not say so.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
