
llm_status

Check connection status and list available models for OpenAI-compatible LLM servers to verify API accessibility and model options.

Instructions

Checks the connection status with the LLM server and lists the available models.

Input Schema

Name     Required  Description                                                                                     Default
baseURL  No        URL of the OpenAI-compatible server (e.g. http://localhost:1234/v1, http://localhost:11434/v1)  —
apiKey   No        API key (required for OpenAI/Azure; optional for local servers)                                 —
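
Both parameters are optional. As a usage illustration, here is a minimal sketch of calling the tool from a TypeScript MCP client using the official @modelcontextprotocol/sdk; the spawn command, the dist/index.js entry-point path, and the client name are assumptions, not taken from this server's documentation:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Spawn the bridge over stdio (the entry-point path is an assumption).
    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/index.js"],
    });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Both arguments are optional; omitting baseURL falls back to the server's default.
    const result = await client.callTool({
      name: "llm_status",
      arguments: { baseURL: "http://localhost:1234/v1" },
    });
    console.log(result.content);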

Implementation Reference

  • The handler function that executes the llm_status tool logic: it connects to the LLM server, checks the status, lists the models, and returns a formatted response. getClient, defaultConfig, and the client methods are defined elsewhere in the repository; see the sketch after this list.

    async llm_status(args: z.infer<typeof ConnectionConfigSchema> = {}) {
      const client = getClient(args);
      const usedBaseURL = args.baseURL || defaultConfig.baseURL;
      const status = await client.getServerStatus();
      if (status.connected) {
        const models = await client.listModels();
        // Connected: report the URL, model count, and model IDs (user-facing text is Spanish).
        return {
          content: [
            {
              type: "text" as const,
              text:
                `✅ **LLM Server Conectado**\n\n` +
                `- URL: ${usedBaseURL}\n` +
                `- Modelos disponibles: ${status.models}\n\n` +
                `**Modelos:**\n${models.map(m => `- ${m.id}`).join("\n") || "Ninguno"}`,
            },
          ],
        };
      } else {
        // Not connected: explain the failure and list checks (server running, URL correct, port reachable).
        return {
          content: [
            {
              type: "text" as const,
              text:
                `❌ **LLM Server No Conectado**\n\n` +
                `No se pudo conectar a ${usedBaseURL}\n\n` +
                `Verifica que:\n` +
                `1. El servidor LLM está ejecutándose\n` +
                `2. La URL es correcta\n` +
                `3. El puerto está accesible`,
            },
          ],
        };
      }
    },
  • Zod schema defining the input parameters for connection configuration, used in the llm_status handler.

    export const ConnectionConfigSchema = z.object({
      baseURL: z.string().optional().describe("URL del servidor LM Studio (ej: http://localhost:1234/v1)"),
      apiKey: z.string().optional().describe("API Key opcional"),
    });
  • src/tools.ts:83-93 (registration)
    MCP tool registration entry defining the name, description, and input schema for llm_status. connectionProperties is spread in from shared connection options; see the sketch after this list.

    {
      name: "llm_status",
      description: "Verifica el estado de conexión con el servidor LLM y lista los modelos disponibles",
      inputSchema: {
        type: "object" as const,
        properties: {
          ...connectionProperties,
        },
        required: [],
      },
    },
  • src/index.ts:55-56 (registration)
    Dispatch of the llm_status handler in the main server's request-handler switch statement; see the wiring sketch after this list.

    case "llm_status":
      return await toolHandlers.llm_status(args as any);


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ramgeart/llm-mcp-bridge'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.