
quick_test

Verify LibreModel MCP Server connectivity by sending a quick diagnostic prompt (hello, math, creative, or knowledge) and reporting the model's response along with token-count metrics.

Instructions

Run a quick test to see if LibreModel is responding

Input Schema

| Name | Required | Description | Default |
|-----------|----------|---------------------|---------|
| test_type | No | Type of test to run | hello |
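
For example, an MCP client would invoke the tool by name with these arguments. This is a sketch of the `tools/call` parameters only (the field names follow the MCP convention; the surrounding JSON-RPC envelope is omitted):

```json
{
  "name": "quick_test",
  "arguments": {
    "test_type": "math"
  }
}
```

Omitting `test_type` runs the default `hello` test.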

Implementation Reference

  • The handler function for the 'quick_test' tool. It selects a predefined prompt based on the `test_type` parameter, sends it to LibreModel via `callLlamaServer`, and returns the response with performance metrics, or an error on failure.

```typescript
}, async (args) => {
  const testPrompts = {
    hello: "Hello! Can you introduce yourself?",
    math: "What is 15 + 27?",
    creative: "Write a short haiku about artificial intelligence.",
    knowledge: "What is the capital of France?"
  };
  const testPrompt = testPrompts[args.test_type as keyof typeof testPrompts] || testPrompts.hello;

  try {
    const response = await this.callLlamaServer({
      message: testPrompt,
      temperature: 0.7,
      max_tokens: 256,
      top_p: 0.95,
      top_k: 40,
      system_prompt: ""
    });

    return {
      content: [
        {
          type: "text",
          text: `**${args.test_type} test result:**\n\n**Prompt:** ${testPrompt}\n\n**LibreModel Response:**\n${response.content}\n\n**Performance:**\n- Tokens generated: ${response.tokens_predicted}\n- Tokens evaluated: ${response.tokens_evaluated}\n- Success: ✅`
        }
      ]
    };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: `**${args.test_type} test failed:**\n${error instanceof Error ? error.message : String(error)}`
        }
      ],
      isError: true
    };
  }
});
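The handler's prompt-selection fallback (`testPrompts[...] || testPrompts.hello`) can be sketched in isolation. The `selectPrompt` helper below is hypothetical, not part of the server's source; it only mirrors the lookup-with-fallback behavior:

```typescript
// Hypothetical standalone re-creation of the handler's prompt selection.
const testPrompts = {
  hello: "Hello! Can you introduce yourself?",
  math: "What is 15 + 27?",
  creative: "Write a short haiku about artificial intelligence.",
  knowledge: "What is the capital of France?"
} as const;

type TestType = keyof typeof testPrompts;

function selectPrompt(testType: string): string {
  // Unknown values fall back to the "hello" prompt, mirroring
  // the `|| testPrompts.hello` fallback in the handler.
  return testPrompts[testType as TestType] ?? testPrompts.hello;
}

console.log(selectPrompt("math"));  // "What is 15 + 27?"
console.log(selectPrompt("bogus")); // falls back to the hello prompt
```

Because `z.enum([...]).default("hello")` validates input before the handler runs, the runtime fallback is belt-and-braces rather than the primary guard.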
  • The schema definition for the 'quick_test' tool, including title, description, and an inputSchema with a `test_type` enum.

```typescript
title: "Quick LibreModel Test",
description: "Run a quick test to see if LibreModel is responding",
inputSchema: {
  test_type: z.enum(["hello", "math", "creative", "knowledge"])
    .default("hello")
    .describe("Type of test to run")
}
```
  • src/index.ts:112-157 (registration)

    The registration of the 'quick_test' tool using `server.registerTool`, passing the schema and handler function.

```typescript
this.server.registerTool("quick_test", {
  title: "Quick LibreModel Test",
  description: "Run a quick test to see if LibreModel is responding",
  inputSchema: {
    test_type: z.enum(["hello", "math", "creative", "knowledge"])
      .default("hello")
      .describe("Type of test to run")
  }
}, async (args) => {
  const testPrompts = {
    hello: "Hello! Can you introduce yourself?",
    math: "What is 15 + 27?",
    creative: "Write a short haiku about artificial intelligence.",
    knowledge: "What is the capital of France?"
  };
  const testPrompt = testPrompts[args.test_type as keyof typeof testPrompts] || testPrompts.hello;

  try {
    const response = await this.callLlamaServer({
      message: testPrompt,
      temperature: 0.7,
      max_tokens: 256,
      top_p: 0.95,
      top_k: 40,
      system_prompt: ""
    });

    return {
      content: [
        {
          type: "text",
          text: `**${args.test_type} test result:**\n\n**Prompt:** ${testPrompt}\n\n**LibreModel Response:**\n${response.content}\n\n**Performance:**\n- Tokens generated: ${response.tokens_predicted}\n- Tokens evaluated: ${response.tokens_evaluated}\n- Success: ✅`
        }
      ]
    };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: `**${args.test_type} test failed:**\n${error instanceof Error ? error.message : String(error)}`
        }
      ],
      isError: true
    };
  }
});
```
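The catch branch normalizes whatever was thrown before reporting it: `Error` instances yield their `.message`, while any other value is stringified. A minimal sketch of that pattern (the `formatError` helper is hypothetical, extracted here for illustration):

```typescript
// Hypothetical helper mirroring the handler's catch-branch normalization:
// `error instanceof Error ? error.message : String(error)`.
function formatError(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}

console.log(formatError(new Error("connection refused"))); // "connection refused"
console.log(formatError(42));                              // "42"
```

This matters because TypeScript types a caught value as `unknown`; non-`Error` throws (strings, numbers, rejected promises with arbitrary reasons) would otherwise produce unhelpful output like `[object Object]`.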

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/openconstruct/llama-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.