llm_status
Check connection status and list available models for OpenAI-compatible LLM servers to verify API accessibility and model options.
Instructions
Checks the connection status with the LLM server and lists the available models
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| baseURL | No | OpenAI-compatible server URL (e.g. http://localhost:1234/v1, http://localhost:11434/v1) | |
| apiKey | No | API key (required for OpenAI/Azure, optional for local servers) | |
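
Both parameters are optional, so the tool can be called with an empty arguments object to check the default server. As a sketch, a client invoking this tool over MCP would send a `tools/call` request shaped like the following (the `baseURL` value is just an example pointing at a local Ollama instance):

```json
{
  "method": "tools/call",
  "params": {
    "name": "llm_status",
    "arguments": {
      "baseURL": "http://localhost:11434/v1"
    }
  }
}
```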
Implementation Reference
- src/tools.ts:231-263 (handler) — the handler function that executes the llm_status tool logic: it connects to the LLM server, checks its status, lists models, and returns a formatted response.

```typescript
async llm_status(args: z.infer<typeof ConnectionConfigSchema> = {}) {
  const client = getClient(args);
  const usedBaseURL = args.baseURL || defaultConfig.baseURL;
  const status = await client.getServerStatus();
  if (status.connected) {
    const models = await client.listModels();
    return {
      content: [
        {
          type: "text" as const,
          text:
            `✅ **LLM Server Conectado**\n\n` +
            `- URL: ${usedBaseURL}\n` +
            `- Modelos disponibles: ${status.models}\n\n` +
            `**Modelos:**\n${models.map(m => `- ${m.id}`).join("\n") || "Ninguno"}`,
        },
      ],
    };
  } else {
    return {
      content: [
        {
          type: "text" as const,
          text:
            `❌ **LLM Server No Conectado**\n\n` +
            `No se pudo conectar a ${usedBaseURL}\n\n` +
            `Verifica que:\n` +
            `1. El servidor LLM está ejecutándose\n` +
            `2. La URL es correcta\n` +
            `3. El puerto está accesible`,
        },
      ],
    };
  }
},
```
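The success branch above only does string assembly once the model list is in hand, so it is easy to exercise in isolation. A minimal sketch (the helper name `formatStatusText` is hypothetical, not part of the source; it mirrors the success-branch template literal):

```typescript
// Hypothetical helper mirroring the success branch of llm_status:
// builds the Markdown status text from a base URL and a model list.
function formatStatusText(baseURL: string, models: { id: string }[]): string {
  return (
    `✅ **LLM Server Conectado**\n\n` +
    `- URL: ${baseURL}\n` +
    `- Modelos disponibles: ${models.length}\n\n` +
    // An empty model list falls back to "Ninguno" ("None"), as in the handler.
    `**Modelos:**\n${models.map(m => `- ${m.id}`).join("\n") || "Ninguno"}`
  );
}

const text = formatStatusText("http://localhost:1234/v1", [{ id: "llama-3.1-8b" }]);
console.log(text);
```

Note that the fallback (`|| "Ninguno"`) kicks in because `[].join("\n")` is the empty string, which is falsy.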
- src/tools.ts:5-8 (schema) — Zod schema defining the input parameters for connection configuration, used by the llm_status handler.

```typescript
export const ConnectionConfigSchema = z.object({
  baseURL: z.string().optional().describe("URL del servidor LM Studio (ej: http://localhost:1234/v1)"),
  apiKey: z.string().optional().describe("API Key opcional"),
});
```
- src/tools.ts:83-93 (registration) — MCP tool registration entry defining the name, description, and input schema for llm_status.

```typescript
{
  name: "llm_status",
  description: "Verifica el estado de conexión con el servidor LLM y lista los modelos disponibles",
  inputSchema: {
    type: "object" as const,
    properties: {
      ...connectionProperties,
    },
    required: [],
  },
},
```
- src/index.ts:55-56 (dispatch) — dispatch of the llm_status handler in the main server request handler's switch statement.

```typescript
case "llm_status":
  return await toolHandlers.llm_status(args as any);
```