
health_check

Verifies that the LM Studio server is running and responding, enabling function discovery and system diagnostics for Claude Desktop integration.

Instructions

Check if LM Studio is running and responding

  • WORKFLOW: System diagnostics and function discovery
  • TIP: Start with health_check, then use list_functions to explore capabilities
  • SAVES: Claude context for strategic decisions

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| detailed | No | Include detailed information about the loaded model and server status | false |
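In practice, a client invokes the tool through an MCP `tools/call` request. The sketch below shows the shape of such a payload; the `id` value and surrounding client wiring are illustrative assumptions, not part of this server:

```typescript
// Hypothetical MCP tools/call payload for health_check.
// "detailed" is the tool's only parameter and defaults to false when omitted.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "health_check",
    arguments: { detailed: true },
  },
};

console.log(JSON.stringify(request.params));
```

Omitting `arguments` entirely yields the compact, non-detailed response.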

Implementation Reference

  • Core handler function that executes the health_check tool logic: connects to LM Studio, lists loaded models, retrieves context length, and formats response using ResponseFactory.
```typescript
async execute(params: any, llmClient: any) {
  return await withSecurity(this, params, llmClient, async (secureParams) => {
    try {
      const client = new LMStudioClient({
        baseUrl: config.lmStudioUrl || 'ws://localhost:1234',
      });

      // Try to connect and get basic info
      const models = await client.llm.listLoaded();

      // Get context length from the active model if available
      let contextLength: number | undefined = undefined;
      if (models.length > 0) {
        try {
          const activeModel = models[0];
          // Try to get context length using the LM Studio SDK
          contextLength = await activeModel.getContextLength();
        } catch (error) {
          // If getContextLength fails, leave it undefined
          console.warn('Could not retrieve context length from model:', error);
        }
      }

      // Use ResponseFactory for consistent, spec-compliant output
      ResponseFactory.setStartTime();
      return ResponseFactory.createHealthCheckResponse(
        'healthy',
        'established',
        config.lmStudioUrl || 'ws://localhost:1234',
        undefined,
        undefined,
        secureParams.detailed
          ? {
              loadedModels: models.map(model => ({
                path: model.path,
                identifier: model.identifier,
                architecture: (model as any).architecture || 'unknown',
                contextLength: contextLength // Same for all models since only the first is checked
              })),
              modelCount: models.length,
              hasActiveModel: models.length > 0,
              contextLength: contextLength, // Add context length to response
              serverInfo: {
                url: config.lmStudioUrl,
                protocol: 'websocket'
              },
              activeModel: models.length > 0
                ? {
                    path: models[0].path,
                    identifier: models[0].identifier,
                    architecture: (models[0] as any).architecture || 'unknown',
                    contextLength: contextLength // Add context length to active model
                  }
                : undefined
            }
          : undefined, // Don't provide details in non-detailed mode
        contextLength // Pass contextLength as a separate parameter
      );
    } catch (error: any) {
      return ResponseFactory.createHealthCheckResponse(
        'unhealthy',
        'failed',
        config.lmStudioUrl || 'ws://localhost:1234',
        error.message || 'Failed to connect to LM Studio',
        'Please ensure LM Studio is running and a model is loaded'
      );
    }
  });
}
```
  • Input schema definition for the health_check tool parameters.
```typescript
parameters = {
  detailed: {
    type: 'boolean' as const,
    description: 'Include detailed information about the loaded model and server status',
    default: false,
    required: false
  }
};
```
  • Output schema interface HealthCheckResponse defining the structure of health_check responses.
```typescript
export interface HealthCheckResponse extends BaseResponse {
  data: {
    status: "healthy" | "unhealthy";
    connection: "established" | "failed";
    lmStudioUrl: string;
    timestamp: string;
    error?: string;
    suggestion?: string;
    contextLength?: number; // Context length of the loaded model
    details?: {
      loadedModels: Array<{
        path: string;
        identifier: string;
        architecture: string;
        contextLength?: number; // Context length for each model
      }>;
      modelCount: number;
      hasActiveModel: boolean;
      contextLength?: number; // Context length of active model
      serverInfo: {
        url: string;
        protocol: string;
      };
      activeModel?: {
        path: string;
        identifier: string;
        architecture: string;
        contextLength?: number; // Context length of active model
      };
    };
  };
}
```
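A consumer of this interface will typically branch on the `status` field before using the optional `contextLength`. The helper below is a hypothetical illustration, not part of the project:

```typescript
type Status = "healthy" | "unhealthy";

interface MinimalHealthData {
  status: Status;
  contextLength?: number;
}

// Hypothetical helper: treat the server as usable only when it reports
// healthy AND a numeric context length was actually retrieved.
function canRoutePrompts(data: MinimalHealthData): boolean {
  return data.status === "healthy" && typeof data.contextLength === "number";
}

const ok = canRoutePrompts({ status: "healthy", contextLength: 4096 });   // true
const notOk = canRoutePrompts({ status: "healthy" });                     // false
```

Because `contextLength` is optional at every level of the interface, callers should not assume it is present even on a healthy response.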
  • src/index.ts:156-172 (registration)
    Dynamic registration of system plugins including HealthCheckPlugin via pluginLoader.registerPlugin during server initialization.
```typescript
private async loadSystemPlugin(filePath: string): Promise<void> {
  try {
    // Use ES module dynamic import with a proper file URL
    const fileUrl = pathToFileURL(filePath).href;
    const module = await import(fileUrl);
    const PluginClass =
      module.default ||
      module.HealthCheckPlugin ||
      module.PathResolverPlugin ||
      Object.values(module)[0];

    if (PluginClass && typeof PluginClass === 'function') {
      const plugin = new PluginClass();
      this.pluginLoader.registerPlugin(plugin);
      // console.log removed to avoid interfering with JSON-RPC over stdio
    }
  } catch (error) {
    // Silent error handling to avoid JSON-RPC interference
    // console.error(`[Plugin Server] Error loading system plugin ${filePath}:`, error);
  }
}
```
  • Helper factory method to create standardized HealthCheckResponse objects used by the handler.
```typescript
static createHealthCheckResponse(
  status: "healthy" | "unhealthy",
  connection: "established" | "failed",
  lmStudioUrl: string,
  error?: string,
  suggestion?: string,
  details?: HealthCheckResponse['data']['details'],
  contextLength?: number
): HealthCheckResponse {
  return {
    success: status === "healthy",
    timestamp: new Date().toISOString(),
    modelUsed: details?.activeModel?.identifier || 'none',
    executionTimeMs: this.getExecutionTime(),
    data: {
      status,
      connection,
      lmStudioUrl,
      timestamp: new Date().toISOString(),
      error,
      suggestion,
      contextLength,
      details
    }
  };
}
```
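To illustrate the factory's core behaviour, here is a standalone sketch of the same shape. It mirrors the fields above but omits `modelUsed` and `executionTimeMs` (which depend on class state), so it is illustrative only, not the project's actual module:

```typescript
// Simplified stand-in for ResponseFactory.createHealthCheckResponse.
interface SketchResponse {
  success: boolean;
  timestamp: string;
  data: {
    status: "healthy" | "unhealthy";
    connection: "established" | "failed";
    lmStudioUrl: string;
    error?: string;
    suggestion?: string;
  };
}

function makeHealthCheckResponse(
  status: "healthy" | "unhealthy",
  connection: "established" | "failed",
  lmStudioUrl: string,
  error?: string,
  suggestion?: string
): SketchResponse {
  return {
    success: status === "healthy", // top-level success simply mirrors the health status
    timestamp: new Date().toISOString(),
    data: { status, connection, lmStudioUrl, error, suggestion },
  };
}

const unhealthy = makeHealthCheckResponse(
  "unhealthy",
  "failed",
  "ws://localhost:1234",
  "Failed to connect to LM Studio",
  "Please ensure LM Studio is running and a model is loaded"
);
```

Note that `success` is derived entirely from `status`, so a failed connection always produces `success: false` with the error and suggestion carried in `data`.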


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/houtini-ai/lm'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.