run-tests

Verifies all tools in this server by running each and reporting the results.

Instructions

Returns a test plan for verifying all garmin-connect-mcp tools work. Call each tool listed and report results.

Input Schema

No arguments (empty input schema).
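For reference, an empty MCP input schema typically serializes to a trivial JSON Schema object. The shape below is an assumption for illustration; the exact serialized output depends on the SDK version:

```typescript
// Assumed serialized form of the empty input schema ({} in server.tool).
// Illustrative only; the SDK derives this from the empty shape object.
const emptyInputSchema = {
  type: "object" as const,
  properties: {} as Record<string, never>,
};

// A client validating arguments against this schema accepts only {}.
console.log(Object.keys(emptyInputSchema.properties).length); // 0
```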

Implementation Reference

  • The 'run-tests' tool handler registered via server.tool(). It simply returns a static markdown test plan instructing the LLM/agent to call other tools manually and report results. No schema params needed (empty object {}).
      server.tool(
        "run-tests",
        "Returns a test plan for verifying all garmin-connect-mcp tools work. Call each tool listed and report results.",
        {},
        async () => {
          return textResult(`# garmin-connect-mcp Test Plan

Execute each tool below and verify it returns valid data (not an error).
Report results as a table: tool name, status (PASS/FAIL), notes.

## Prerequisites
1. Call check-session first. If it fails, call garmin-login to authenticate.

## Tests (run in order)

### Session
- check-session -> should return { status: "ok", profile: { ... } }

### Activities
- list-activities (limit: 3) -> should return array of 3 activities
- get-activity (use activityId from above) -> should return activity object with summaryDTO
- get-activity-details (same ID) -> should return metricDescriptors + metrics
- get-activity-splits (same ID) -> should return lapDTOs array
- get-activity-hr-zones (same ID) -> should return array of 5 zones with secsInZone
- get-activity-polyline (same ID) -> should return polyline data (may fail for indoor activities)
- get-activity-weather (same ID) -> should return weather data (may fail for indoor activities)

### Daily Health (use today's date or omit for default)
- get-daily-summary -> should return steps, calories, distance fields
- get-daily-heart-rate -> should return heartRateValues array
- get-daily-stress -> should return stressValuesArray
- get-daily-summary-chart -> should return chart data object
- get-daily-intensity-minutes -> should return intensity minutes data
- get-daily-movement -> should return movement data
- get-daily-respiration -> should return respiration data

### Sleep / Body Battery / HRV
- get-sleep -> should return sleep score, duration, sleep stages
- get-body-battery -> should return charged/drained values
- get-hrv -> should return HRV data (may return { noData: true } if no overnight data yet)

### Weight / Records / Fitness
- get-weight (startDate: 30 days ago, endDate: today) -> should return weight data (may be empty array)
- get-personal-records -> should return personal records with history
- get-fitness-stats (startDate: 30 days ago, endDate: today) -> should return activity stats by type
- get-vo2max -> should return VO2 max estimate
- get-hr-zones-config -> should return HR zone boundaries
- get-user-profile -> should return user settings with userData

### Download
- download-fit (use activityId from list, outputDir: /tmp/garmin-test) -> should save .fit file and return path

## Expected Acceptable Failures
- get-activity-polyline / get-activity-weather may fail for indoor activities (no GPS/weather data)
- get-hrv may return { noData: true } for today if overnight data hasn't synced yet
- get-weight may return empty array if no weight entries recorded

## Report
Present results as a markdown table: | Tool | Status | Notes |
Count total passed vs failed at the end.`);
        }
      );
  • src/tools.ts:41-41 (registration)
    The function registerTools(server) is where all tools, including 'run-tests', are registered on the MCP server.
    export function registerTools(server: McpServer): void {
  • The textResult helper function used by the 'run-tests' handler to return markdown content.
    function textResult(text: string) {
      return { content: [{ type: "text" as const, text }] };
    }
    
    function errorResult(msg: string) {
      return { content: [{ type: "text" as const, text: msg }], isError: true };
    }
  • The 'run-tests' tool has no input schema (empty object {}) — it takes no arguments.
    {},
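These helpers build plain result objects and can be exercised standalone. A minimal sketch, re-declaring the two functions shown above:

```typescript
// Standalone copies of the helpers from src/tools.ts shown above.
function textResult(text: string) {
  return { content: [{ type: "text" as const, text }] };
}

function errorResult(msg: string) {
  return { content: [{ type: "text" as const, text: msg }], isError: true };
}

// A success payload carries no isError flag; an error payload sets it.
const plan = textResult("# garmin-connect-mcp Test Plan");
const failure = errorResult("check-session failed: not authenticated");
console.log(plan.content[0].type); // "text"
console.log(failure.isError);      // true
```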
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only says 'Returns a test plan', without stating whether the tool is read-only or has any side effects. The agent cannot infer safety or behavioral traits beyond the minimal stated outcome.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
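One way to close this gap without lengthening the description is the MCP spec's optional tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint). The object below is a hypothetical sketch of what run-tests could declare; it is not part of the server:

```typescript
// Hypothetical annotations for run-tests, using the MCP spec's hint fields.
// The handler returns a static string, so every hint can be conservative.
const runTestsAnnotations = {
  title: "Run Tests",
  readOnlyHint: true,      // no writes or side effects
  destructiveHint: false,  // nothing is deleted or modified
  idempotentHint: true,    // same plan returned on every call
  openWorldHint: false,    // no network calls of its own
};

console.log(runTestsAnnotations.readOnlyHint); // true
```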

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the action ('Returns a test plan'), and contains no wasted words. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description is mostly adequate. It specifies the purpose and action. However, it could provide more detail on the structure of the test plan or how to interpret results. Still, it is sufficient for a simple meta-tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, and the input schema is empty. Per guidelines, baseline for 0 parameters is 4. The description adds no further meaning beyond the schema, which already covers 100% of parameters (none).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns a test plan for verifying all tools, which is a specific verb ('returns') and resource ('test plan for verifying all garmin-connect-mcp tools'). This clearly distinguishes it from all sibling tools, which perform specific operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by saying 'Call each tool listed', but it does not explicitly state when to use this tool versus alternatives, nor does it provide any exclusionary context. Since there are no other meta-tools, the usage is implied but lacks explicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
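Putting the review's points together, a revised description could disclose behavior and usage explicitly. The wording below is illustrative only, not the server's actual description:

```typescript
// Hypothetical revised description addressing the behavior and
// usage-guidance gaps noted in the review above.
const revisedDescription =
  "Returns a static markdown test plan for verifying all garmin-connect-mcp " +
  "tools. Read-only; makes no network calls and has no side effects. " +
  "Use it to audit tool health: call each listed tool yourself, then report " +
  "results as a PASS/FAIL table.";

console.log(revisedDescription.includes("Read-only")); // true
```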
