# Take Metric Snapshot

Tool: `snapshot`

## Instructions

Collect and save current server metrics without analysis.

## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| connection | Yes | | |
**Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?**
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that metrics are 'saved' (persisted) and clarifies the lack of analysis, but omits details about the SSH-based interaction implied by openWorldHint=true or where data is stored.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
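The gap between structured annotations and prose can be made concrete. Below is a minimal sketch of the same tool with a description that surfaces what the annotations leave implicit. The annotation values match the review above; the expanded description wording is illustrative, not the server's actual text:

```python
# Hypothetical tool definition. The annotation values come from the review
# (readOnlyHint=True, destructiveHint=False, openWorldHint=True); the
# expanded description is an illustrative rewrite, not the server's own.
snapshot_tool = {
    "name": "snapshot",
    "annotations": {
        "readOnlyHint": True,       # does not modify the target server
        "destructiveHint": False,   # no destructive side effects
        "openWorldHint": True,      # reaches out to external systems
    },
    # The description restates the hints in prose and adds what they
    # cannot express: transport, credentials, and where data lands.
    "description": (
        "Collect current server metrics over SSH (password or key auth "
        "required) and save them as a local snapshot file. Read-only on "
        "the target host; performs no analysis."
    ),
}

def description_discloses(tool: dict, keywords: list[str]) -> bool:
    """Check that every keyword of interest appears in the description."""
    text = tool["description"].lower()
    return all(k in text for k in keywords)

print(description_discloses(snapshot_tool, ["ssh", "auth", "save"]))  # True
```

A check like `description_discloses` is a crude proxy, but it illustrates the point: an agent reading only the description should still learn about the transport, the credential requirement, and the persistence side effect.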
**Is the description appropriately sized, front-loaded, and free of redundancy?**

The single-sentence description is front-loaded and efficient, avoiding redundancy. However, given the tool's complexity (SSH connections, multiple auth methods), it may be overly terse, sacrificing necessary detail for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
**Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?**
For a tool requiring SSH authentication to external servers (openWorldHint=true) with no output schema, the description is incomplete. It omits the transport mechanism (SSH), credential requirements, and the destination/format of the saved snapshot.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
**Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?**
With schema coverage at 0%, the description must carry the burden of explaining the complex SSH connection object (host, port, auth credentials). The description mentions 'server' but provides no guidance on the nested connection parameters, authentication methods (password vs. key), or required vs. optional fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
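With no parameter descriptions in the schema, an agent must guess at the shape of the connection object. A hedged sketch of what such a schema might look like follows; the field names (`host`, `port`, `username`, `password`, `key_path`) are assumptions drawn from the review's mention of hosts, ports, and password-vs-key authentication:

```python
# Hypothetical JSON Schema for the `connection` parameter. Field names
# are assumptions based on the review's mention of SSH hosts, ports,
# and two authentication methods; the actual server's schema may differ.
connection_schema = {
    "type": "object",
    "required": ["host"],
    "properties": {
        "host": {"type": "string", "description": "SSH host to sample"},
        "port": {"type": "integer", "default": 22},
        "username": {"type": "string"},
        # Exactly one of the two auth methods should be supplied.
        "password": {"type": "string"},
        "key_path": {"type": "string"},
    },
    # oneOf expresses the password-vs-key interaction that the prose
    # description would otherwise have to explain.
    "oneOf": [
        {"required": ["password"]},
        {"required": ["key_path"]},
    ],
}

def missing_descriptions(schema: dict) -> list[str]:
    """List property names whose schema entry has no description."""
    return [
        name
        for name, prop in schema["properties"].items()
        if "description" not in prop
    ]

print(missing_descriptions(connection_schema))
# ['port', 'username', 'password', 'key_path']
```

Even with per-field descriptions, a constraint like the `oneOf` above is easy for agents to miss, which is why the review expects the prose description to call out the password-vs-key choice explicitly.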
**Does the description clearly state what the tool does and how it differs from similar tools?**
The description clearly states the tool collects and saves server metrics, using specific verbs. It effectively distinguishes from the sibling 'analyze_server' by explicitly stating it operates 'without analysis', clarifying its scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
**Does the description explain when to use this tool, when not to, or what alternatives exist?**
The phrase 'without analysis' implies this is not for analysis use cases, providing implied guidance. However, it fails to explicitly name 'analyze_server' as the alternative or state prerequisites like when to use this vs. 'record_baseline'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
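Putting that guidance into practice, a description following the "use X instead of Y when Z" pattern might read as below. The sibling tool names `analyze_server` and `record_baseline` come from the review above; the exact wording is an assumption, not the server's actual text:

```python
# An illustrative rewrite that names its alternatives explicitly.
# The sibling tool names come from the review; the wording is hypothetical.
improved_description = (
    "Collect and save current server metrics without analysis. "
    "Use analyze_server instead when you need the metrics interpreted; "
    "use record_baseline instead when establishing a reference point "
    "for later comparison."
)

# A crude check that both alternatives are actually named.
alternatives = ["analyze_server", "record_baseline"]
print(all(alt in improved_description for alt in alternatives))  # True
```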
We provide all the information about MCP servers via our MCP API:

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/oaslananka/mcp-infra-lens'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.