health_check

Verify the operational status of the local llama-server to ensure it is running and responsive for use with Claude Desktop.

Instructions

Check if the llama-server is running and responsive
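In practice the tool simply probes llama-server's /health endpoint. A standalone sketch of the same probe, assuming a server at llama-server's default address of http://localhost:8080 (`describeHealth` is a hypothetical helper that echoes the tool's healthy/unhealthy wording, not part of this server):

```typescript
// Standalone probe against a llama-server instance. The URL is an
// assumption (llama-server's default bind address); override via env.
const LLAMA_URL = process.env.LLAMA_URL ?? "http://localhost:8080";

// Map an HTTP result to the same healthy/unhealthy wording the tool reports.
function describeHealth(ok: boolean, status: number): string {
  return ok ? "healthy" : `unhealthy (HTTP ${status})`;
}

fetch(`${LLAMA_URL}/health`, { signal: AbortSignal.timeout(5000) })
  .then(res => console.log(describeHealth(res.ok, res.status)))
  .catch(err => console.log(`unreachable: ${err.name}`));
```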

Input Schema

No arguments
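Because the tool takes no parameters, a client invokes it with an empty `arguments` object. A `tools/call` request in the MCP JSON-RPC wire format might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "health_check",
    "arguments": {}
  }
}
```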

Implementation Reference

  • The asynchronous handler function for the 'health_check' tool. It fetches the /health endpoint with a 5-second timeout, optionally fetches /props for additional server info, and returns formatted status text or error content. (Its source appears in full within the registration listing below.)
  • The schema definition for the 'health_check' tool, including title, description, and empty inputSchema (no parameters required).
    title: "Check LibreModel Server Health",
    description: "Check if the llama-server is running and responsive",
    inputSchema: {}
  • src/index.ts:159-215 (registration)
    The registration of the 'health_check' tool in the setupTools method using this.server.registerTool, including the schema object and handler function.
    // Server health check
    this.server.registerTool("health_check", {
      title: "Check LibreModel Server Health",
      description: "Check if the llama-server is running and responsive",
      inputSchema: {}
    }, async () => {
      try {
        // Create abort controller for timeout
        const abortController = new AbortController();
        const timeoutId = setTimeout(() => abortController.abort(), 5000);
    
        const healthResponse = await fetch(`${this.config.url}/health`, {
          method: "GET",
          signal: abortController.signal
        });
        
        clearTimeout(timeoutId);
    
        const isHealthy = healthResponse.ok;
        const status = healthResponse.status;
    
        // Try to get server props if available
        let serverInfo = "No additional info available";
        try {
          const propsResponse = await fetch(`${this.config.url}/props`);
          if (propsResponse.ok) {
            const props = await propsResponse.json();
            serverInfo = JSON.stringify(props, null, 2);
          }
        } catch (e) {
          // Props endpoint might not exist, that's OK
        }
    
        return {
          content: [
            {
              type: "text",
              text: `**LibreModel Server Health Check:**\n\n**Status:** ${isHealthy ? "✅ Healthy" : "❌ Unhealthy"}\n**HTTP Status:** ${status}\n**Server URL:** ${this.config.url}\n\n**Server Information:**\n\`\`\`json\n${serverInfo}\n\`\`\``
            }
          ]
        };
      } catch (error) {
        const errorMessage = error instanceof Error && error.name === 'AbortError' 
          ? 'Request timed out after 5 seconds'
          : error instanceof Error ? error.message : String(error);
          
        return {
          content: [
            {
              type: "text",
              text: `**Health check failed:**\n❌ Cannot reach LibreModel server at ${this.config.url}\n\n**Error:** ${errorMessage}\n\n**Troubleshooting:**\n- Is llama-server running?\n- Is it listening on ${this.config.url}?\n- Check firewall/network settings`
            }
          ],
          isError: true
        };
      }
    });
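The AbortController + setTimeout pattern in the handler is the portable way to bound a fetch; on Node 17.3+ the built-in `AbortSignal.timeout()` expresses the same 5-second cutoff in one line. A minimal sketch (`fetchWithTimeout` is a hypothetical helper, not part of this server):

```typescript
// Hypothetical helper expressing the handler's 5-second cutoff with
// AbortSignal.timeout() (Node 17.3+) instead of AbortController + setTimeout.
async function fetchWithTimeout(url: string, ms = 5000): Promise<Response> {
  // On timeout this rejects with an exception named "TimeoutError",
  // not the "AbortError" produced by a manual abortController.abort().
  return fetch(url, { signal: AbortSignal.timeout(ms) });
}

// The signal's state is observable without any network call:
const signal = AbortSignal.timeout(10);
setTimeout(() => {
  console.log(signal.aborted);       // true once the timeout has fired
  console.log(signal.reason?.name);  // "TimeoutError"
}, 100);
```

Note the differing error name: code that special-cases 'AbortError' (as the handler's catch block does) would report a timeout from `AbortSignal.timeout()` as a generic error unless it also checks for 'TimeoutError'.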