# Compare to Baseline

## compare_to_baseline

### Instructions

Compare current server state to a recorded baseline and explain the differences.

### Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| connection | Yes | | |
| baseline_label | No | | default |
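Since the schema leaves the shape of `connection` undocumented, here is a minimal sketch of what an invocation payload might look like. The fields inside `connection` (`host`, `port`, `username`, `private_key_path`) are assumptions for illustration, not documented by the server:

```python
import json

# Hypothetical arguments for a compare_to_baseline call.
# The connection fields below are assumptions: the published
# schema does not document the nested object's shape.
arguments = {
    "connection": {
        "host": "203.0.113.10",                   # assumed field
        "port": 22,                               # assumed field
        "username": "ops",                        # assumed field
        "private_key_path": "~/.ssh/id_ed25519",  # assumed field
    },
    "baseline_label": "default",  # the schema's default when omitted
}

# A tools/call request would wrap these arguments with the tool name.
print(json.dumps({"name": "compare_to_baseline", "arguments": arguments}, indent=2))
```

Whether the credential object accepts a key path, an inline key, or a password is exactly the kind of detail the review below flags as missing.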
### Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

It adds that the tool "explain[s] the differences" (interpretive behavior not captured in the annotations). However, despite `openWorldHint=true` and SSH connection parameters in the schema, the description fails to disclose the remote-server access requirement or network dependencies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
### Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded and concise, but severely underspecified for a complex tool with nested objects and multiple authentication options.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
### Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for the tool's complexity: SSH connectivity, the nested credential object, and the comparison logic all require elaboration. The absence of an output schema compounds the incompleteness; the description should indicate what "explaining differences" entails (format, scope).

Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales expectations accordingly.
### Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% top-level description coverage (`connection` and `baseline_label` lack descriptions), and the description adds no parameter context: it does not mention SSH, credentials, or that `baseline_label` selects which saved snapshot to compare against.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
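As a sketch of the per-parameter descriptions the review asks for, the schema could carry `description` keywords like the following. The wording is illustrative, not the server's actual schema:

```python
# Hypothetical improved input schema for compare_to_baseline.
# Descriptions are illustrative suggestions, not published by the server.
input_schema = {
    "type": "object",
    "properties": {
        "connection": {
            "type": "object",
            "description": (
                "SSH connection details for the target server "
                "(host and credentials); required for every call."
            ),
        },
        "baseline_label": {
            "type": "string",
            "description": (
                "Label of a previously recorded baseline to compare "
                "against; baselines are created via record_baseline."
            ),
            "default": "default",
        },
    },
    "required": ["connection"],
}

# Every parameter now carries intent, not just structure.
for name, prop in input_schema["properties"].items():
    print(name, "->", prop["description"])
```

Even two short sentences like these would tell an agent what the nested object is for and where the default label comes from.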
### Does the description clearly state what the tool does and how it differs from similar tools?

It uses a clear verb ("Compare") and names its resources ("current server state" vs. "recorded baseline"). It implicitly distinguishes itself from the sibling `record_baseline` by referencing pre-existing recorded baselines.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
### Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus siblings such as `analyze_server`, and no mention of the prerequisite that a baseline must first exist via `record_baseline`. The only clue is "recorded baseline", which implies prior creation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/oaslananka/mcp-infra-lens'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.