Glama
Luminaire1337

MTA:SA Documentation MCP Server

get_cache_stats

Retrieve statistics on the MTA:SA documentation cache, including cache size and hit rates. Use it to monitor cache performance and verify data freshness.

Instructions

Get statistics about the MTA:SA documentation cache.

Input Schema

No arguments: the input schema is an empty object.
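Since the tool takes no arguments, a client invocation is minimal. A hypothetical sketch of the JSON-RPC `tools/call` request an MCP client would send (the `id` value is arbitrary and chosen for illustration):

```typescript
// Hypothetical JSON-RPC 2.0 request invoking get_cache_stats;
// the tool accepts no input, so `arguments` is an empty object.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_cache_stats", arguments: {} },
};

console.log(JSON.stringify(request));
```

The server responds with a `content` array containing a single markdown `text` block, as shown in the handler below.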

Implementation Reference

  • The handler function that executes the get_cache_stats tool logic. It queries the database for document count and database size, then returns cache statistics including file path and cache duration.
    async (): Promise<CallToolResult> => {
      const count = queries.countDocs().get() as { count: number };
      const dbStats = db
        .prepare(
          "SELECT page_count * page_size as size FROM pragma_page_count(), pragma_page_size()",
        )
        .get() as { size: number };
    
      return {
        content: [
          {
            type: "text",
            text:
              `# Cache Statistics\n\n` +
              `- Cached MTA:SA docs: ${count.count}\n` +
              `- Database size: ${(dbStats.size / 1024 / 1024).toFixed(2)} MB\n` +
              `- Database path: ${DB_PATH}\n` +
              `- Cache duration: ${CACHE_DURATION / 1000 / 60 / 60 / 24} days`,
          },
        ],
      };
    },
  • src/index.ts:597-626 (registration)
    Tool registration for get_cache_stats using server.registerTool() with inputSchema set to empty object (no inputs required).
    // Register tool: get_cache_stats
    server.registerTool(
      "get_cache_stats",
      {
        description: "Get statistics about the MTA:SA documentation cache.",
        inputSchema: {},
      },
      async (): Promise<CallToolResult> => {
        const count = queries.countDocs().get() as { count: number };
        const dbStats = db
          .prepare(
            "SELECT page_count * page_size as size FROM pragma_page_count(), pragma_page_size()",
          )
          .get() as { size: number };
    
        return {
          content: [
            {
              type: "text",
              text:
                `# Cache Statistics\n\n` +
                `- Cached MTA:SA docs: ${count.count}\n` +
                `- Database size: ${(dbStats.size / 1024 / 1024).toFixed(2)} MB\n` +
                `- Database path: ${DB_PATH}\n` +
                `- Cache duration: ${CACHE_DURATION / 1000 / 60 / 60 / 24} days`,
            },
          ],
        };
      },
    );
  • Input schema for get_cache_stats - an empty object since the tool takes no arguments.
    {
      description: "Get statistics about the MTA:SA documentation cache.",
      inputSchema: {},
    },
  • The countDocs query used by the handler to count documents in the function_docs table.
    countDocs: () =>
      db.prepare(`
      SELECT COUNT(*) as count FROM function_docs
    `),
  • Constants file providing DB_PATH (temporary directory path) and CACHE_DURATION (30 days) used in the handler's output message.
    export const DB_PATH = path.join(
      os.tmpdir(),
      "mtasa-mcp-cache",
      "mtasa_docs.db",
    );
    
    export const CACHE_DURATION = 30 * 24 * 60 * 60 * 1000; // 30 days
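The constants above drive two small unit conversions in the handler's output: `CACHE_DURATION` (milliseconds) is rendered as days, and the database size (SQLite's `page_count * page_size`, in bytes) is rendered as MB with two decimals. A sketch of both conversions, using a hypothetical 5 MiB database size:

```typescript
// CACHE_DURATION is stored in milliseconds; divide out ms -> s -> min -> h -> days.
const CACHE_DURATION = 30 * 24 * 60 * 60 * 1000;
const days = CACHE_DURATION / 1000 / 60 / 60 / 24;
console.log(`Cache duration: ${days} days`); // Cache duration: 30 days

// SQLite reports size in bytes (page_count * page_size); format as MB.
const sizeBytes = 5 * 1024 * 1024; // hypothetical 5 MiB database
console.log(`Database size: ${(sizeBytes / 1024 / 1024).toFixed(2)} MB`); // Database size: 5.00 MB
```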
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden of disclosure. It does not say whether this is a lightweight read operation or whether it has side effects; however, the name implies a safe get operation, making it minimally transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, directly states purpose with no extra words. Efficiently front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should say which statistics are included (e.g., cache timestamps, entry counts). It lacks this detail, reducing completeness for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no parameters, the schema covers 100% of the input surface, so the baseline is 3. The description adds no detail about what statistics are returned, missing an opportunity to clarify the output.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves statistics about a specific cache, using a specific verb and resource. It distinguishes itself from sibling tools like clear_cache or search functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. For example, it does not indicate that it's for monitoring cache health or that it should be polled sparingly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
