list_monitors
Retrieve all monitors from Postman, with optional filtering by workspace.
Instructions
Get all monitors
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| workspace | No | Return only monitors found in the given workspace | (none) |
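Assuming this tool proxies Postman's public `GET /monitors` endpoint (an assumption; the upstream call is not documented here), a minimal sketch of how the optional `workspace` parameter would map onto the request URL — `build_monitors_url` and `example-workspace-id` are hypothetical names for illustration:

```python
from urllib.parse import urlencode

def build_monitors_url(workspace=None,
                       base="https://api.getpostman.com/monitors"):
    """Build the GET /monitors URL, appending the optional workspace filter."""
    if workspace:
        return f"{base}?{urlencode({'workspace': workspace})}"
    return base

# No workspace: all monitors visible to the API key.
print(build_monitors_url())
# → https://api.getpostman.com/monitors

# Workspace given: results are scoped to that workspace.
print(build_monitors_url(workspace="example-workspace-id"))
# → https://api.getpostman.com/monitors?workspace=example-workspace-id
```

When the parameter is omitted, the request falls back to the unfiltered listing, which is consistent with the schema marking `workspace` as optional with no default.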
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only says 'Get all monitors' without detailing output format, pagination, auth needs, or performance implications. This is insufficient for a list operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at three words, but that brevity sacrifices necessary detail. It is front-loaded but under-specified, making it only minimally acceptable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and only one optional parameter, the description should clarify scope (e.g., workspace-level or account-level). It does not address return value or filtering behavior beyond the parameter hint. Incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameter is fully documented in the schema. The description does not add extra meaning beyond the parameter's own description. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get all monitors' clearly states the verb and resource, and the name matches the action. However, it does not differentiate this tool from similar ones such as list_mocks or the single-resource get_monitor.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as get_monitor (single monitor) or other list tools. The description lacks context for choosing this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/delano/postman-mcp-server'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.