get_server_status

Check PromptArchitect MCP server availability and performance metrics, including AI service status, cache efficiency, and response latency.

Instructions

Get PromptArchitect server status and performance metrics.

Use this tool to check:

• Whether AI (Gemini) is available
• Cache hit rate and request statistics
• Average response latency

Input Schema

No arguments.
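Because the tool takes no arguments, a call is just the tool name with an empty arguments object. As a hedged sketch (the framing follows the standard JSON-RPC shape used by MCP `tools/call`; the `id` value is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_server_status",
    "arguments": {}
  }
}
```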

Implementation Reference

  • The handler logic for the 'get_server_status' tool. It retrieves performance statistics using getStats(), checks API availability with isApiClientAvailable(), constructs a status object including server info, performance metrics, and overall status, then returns it as JSON text content.
    case 'get_server_status': {
      const stats = getStats();
      const status = {
        server: 'promptarchitect-mcp',
        version: '1.0.0',
        apiAvailable: isApiClientAvailable(),
        performance: {
          totalRequests: stats.totalRequests,
          cacheHits: stats.cacheHits,
          cacheHitRate: stats.totalRequests > 0 
            ? `${Math.round((stats.cacheHits / stats.totalRequests) * 100)}%` 
            : 'N/A',
          avgLatencyMs: stats.avgLatencyMs,
        },
        status: isApiClientAvailable() ? 'ready' : 'degraded (using fallbacks)',
      };
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(status, null, 2),
          },
        ],
      };
    }
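The handler's status construction is easy to verify in isolation. The sketch below mirrors the field names and cache-hit-rate formatting from the handler above; the `buildStatus` helper itself is hypothetical and takes the stats and availability flag as plain inputs for illustration:

```typescript
// Hypothetical helper mirroring the handler's status construction.
interface Stats {
  totalRequests: number;
  cacheHits: number;
  avgLatencyMs: number;
}

function buildStatus(stats: Stats, apiAvailable: boolean) {
  return {
    server: 'promptarchitect-mcp',
    version: '1.0.0',
    apiAvailable,
    performance: {
      totalRequests: stats.totalRequests,
      cacheHits: stats.cacheHits,
      // Rounded percentage; 'N/A' before the first request avoids dividing by zero.
      cacheHitRate: stats.totalRequests > 0
        ? `${Math.round((stats.cacheHits / stats.totalRequests) * 100)}%`
        : 'N/A',
      avgLatencyMs: stats.avgLatencyMs,
    },
    status: apiAvailable ? 'ready' : 'degraded (using fallbacks)',
  };
}

// Example: 3 cache hits out of 10 requests
const s = buildStatus({ totalRequests: 10, cacheHits: 3, avgLatencyMs: 42 }, true);
console.log(s.performance.cacheHitRate); // → 30%
console.log(s.status); // → ready
```

Note the degraded path: when the API client is unavailable the tool still responds, but reports `'degraded (using fallbacks)'` rather than failing.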
  • src/server.ts:182-195 (registration)
    Registration of the 'get_server_status' tool in the ListTools response, including its name, description, and empty input schema (no parameters required).
            {
              name: 'get_server_status',
              description: `Get PromptArchitect server status and performance metrics.
    
    Use this tool to check:
    • Whether AI (Gemini) is available
    • Cache hit rate and request statistics
    • Average response latency`,
              inputSchema: {
                type: 'object',
                properties: {},
                required: [],
              },
            },
  • Input schema definition for the 'get_server_status' tool, which requires no parameters.
    inputSchema: {
      type: 'object',
      properties: {},
      required: [],
    },
  • Helper function getStats() that returns Gemini API performance metrics: total requests, cache hits, and average latency, using module-level counters.
    export function getStats(): { totalRequests: number; cacheHits: number; avgLatencyMs: number } {
      return {
        totalRequests,
        cacheHits,
        avgLatencyMs: totalRequests > 0 ? Math.round(totalLatencyMs / totalRequests) : 0,
      };
    }
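The module-level counters that `getStats()` reads are not shown in the excerpt. A minimal sketch under assumptions (the `recordRequest` helper and the idea that it is called once per request are not from the source):

```typescript
// Hypothetical module-level counters feeding getStats().
let totalRequests = 0;
let cacheHits = 0;
let totalLatencyMs = 0;

// Assumed helper: called once per request with its latency and cache outcome.
function recordRequest(latencyMs: number, cacheHit: boolean): void {
  totalRequests += 1;
  totalLatencyMs += latencyMs;
  if (cacheHit) cacheHits += 1;
}

function getStats(): { totalRequests: number; cacheHits: number; avgLatencyMs: number } {
  return {
    totalRequests,
    cacheHits,
    // Guard against division by zero before any request has been recorded.
    avgLatencyMs: totalRequests > 0 ? Math.round(totalLatencyMs / totalRequests) : 0,
  };
}

// Example: one cache miss at 120 ms, one cache hit at 80 ms
recordRequest(120, false);
recordRequest(80, true);
console.log(getStats()); // → { totalRequests: 2, cacheHits: 1, avgLatencyMs: 100 }
```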
  • Helper function isApiClientAvailable() that checks if the PromptArchitect API client is configured by verifying the apiUrl.
    export function isApiClientAvailable(): boolean {
      return !!apiUrl;
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what information the tool returns (AI availability, cache metrics, latency), which is helpful behavioral context. However, it does not disclose potential limitations like rate limits, authentication requirements, or whether this is a read-only operation, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with a clear opening sentence stating the purpose followed by a bulleted list of specific checks. Every sentence earns its place by adding value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple status check with no parameters) and lack of annotations/output schema, the description is adequate but has gaps. It explains what metrics are returned, which is good, but does not cover behavioral aspects like safety or performance implications, making it minimally viable rather than fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and outputs. A baseline score of 4 applies to zero-parameter tools, since there is nothing further to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get', 'check') and resources ('server status and performance metrics'). It distinguishes itself from sibling tools (analyze_prompt, generate_prompt, refine_prompt) by focusing on system monitoring rather than prompt operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('to check' AI availability, cache statistics, and latency). However, it does not explicitly state when NOT to use it or name specific alternatives among the sibling tools, which would be needed for a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MerabyLabs/promptarchitect-mcp'
