
benchmark (Read-only)

Measure URL response times with min/avg/max statistics over multiple iterations to analyze web performance.

Instructions

Benchmark fetching URLs with timing statistics.

Measures min/avg/max response times over multiple iterations.

Returns: Benchmark results with timing statistics.
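The behavior the description claims (timed fetches repeated over several iterations, reduced to min/avg/max) can be sketched roughly as below. The function name, signature, and result shape are illustrative assumptions, not the tool's actual implementation:

```python
import statistics
import time
from urllib.request import urlopen

def benchmark(urls, iterations, fetch=None):
    """Return min/avg/max response times (in seconds) per URL.

    A minimal sketch of the measurement described by the tool;
    the real tool's units and result shape may differ.
    """
    # Default fetcher performs a real HTTP GET; callers (or tests)
    # can inject a stub to avoid network access.
    fetch = fetch or (lambda url: urlopen(url).read())
    results = {}
    for url in urls:
        timings = []
        for _ in range(iterations):
            start = time.perf_counter()
            fetch(url)
            timings.append(time.perf_counter() - start)
        results[url] = {
            "min": min(timings),
            "avg": statistics.mean(timings),
            "max": max(timings),
        }
    return results
```

Note that each URL is fetched `iterations` times, so the total number of requests is `len(urls) * iterations`; whether the actual tool distributes iterations the same way is not documented.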

Input Schema

Name       | Required | Description | Default
iterations | Yes      | -           | -
urls       | Yes      | -           | -

Output Schema

Name    | Required | Description | Default
results | Yes      | -           | -
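The output schema only names a required `results` field without describing its contents; a plausible (but entirely hypothetical) payload consistent with the description might look like:

```json
{
  "results": [
    {
      "url": "https://example.com",
      "iterations": 5,
      "min_ms": 82.4,
      "avg_ms": 97.1,
      "max_ms": 130.6
    }
  ]
}
```

The field names and units here are guesses; the actual shape is undocumented.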
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses specific metrics measured (min/avg/max response times) and execution pattern (multiple iterations) beyond what annotations provide. Annotations declare readOnlyHint/openWorldHint; description adds the actual measurement behavior. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences, front-loaded with purpose. The 'Returns' sentence is slightly redundant given that an output schema exists, but it provides useful confirmation. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a 2-parameter tool with output schema. Covers purpose, behavior, and return type. Could benefit from parameter format details given 0% schema coverage, but sufficient for agent selection and basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no parameter descriptions). The description mentions 'URLs' and 'iterations' conceptually, indicating their purposes, but does not clarify format constraints (e.g., how multiple URLs are passed, or how the iteration count is applied across URLs). It adds minimal value over the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description pairs a specific verb ('Benchmark') with a resource ('URLs') and a scope ('timing statistics'). It clearly distinguishes this tool from its siblings 'fetch' and 'fetch_batch' by emphasizing performance measurement rather than content retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage through 'Benchmark' and 'timing statistics' (indicating performance testing use case), but lacks explicit comparison to siblings like 'fetch' or guidance on when to prefer this over simple fetching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
