Postman MCP Server
Server Quality Checklist
Latest release: v1.0.0
- Disambiguation 5/5: With only one tool, there is no possibility of ambiguity or overlap between tools. The tool's purpose is clearly defined as running a Postman Collection using Newman, leaving no room for misselection.
- Naming Consistency 5/5: The single tool name 'run-collection' follows a clear verb-noun pattern, using a hyphen for separation. Since there is only one tool, consistency is inherently perfect with no deviations or mixed conventions.
- Tool Count 2/5: A single tool is too few for a server named 'Postman MCP Server', which suggests a broader scope related to Postman API testing. This minimal toolset feels thin and underdeveloped, limiting functionality to just running collections without supporting other common Postman operations.
- Completeness 2/5: The tool surface is severely incomplete for the Postman domain. It covers only running collections, omitting core operations such as creating, managing, and listing collections, environments, and variables; these gaps in Postman workflow coverage would cause significant agent failures.
Average 2.9/5 across 1 of 1 tools scored.
See the Tool Scores section below for per-tool breakdowns.
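For context, the server's single tool wraps Newman, Postman's command-line collection runner. Below is a minimal sketch of what such a wrapper can look like, using Newman's real Node.js API; the option names mirror Newman's own and are not the tool's actual schema, which is not reproduced on this page.

```typescript
import newman from "newman";

// Hypothetical wrapper in the spirit of the run-collection tool.
function runCollection(opts: {
  collection: string;       // path or URL of the collection JSON
  environment?: string;     // optional environment file
  iterationCount?: number;  // how many times to run the collection
  folder?: string;          // run only a single named folder
}): Promise<{ assertions: number; failed: number }> {
  return new Promise((resolve, reject) => {
    newman.run({ reporters: "cli", ...opts }, (err, summary) => {
      if (err) return reject(err);
      // Summarize the run by its assertion counts.
      resolve({
        assertions: summary.run.stats.assertions.total,
        failed: summary.run.stats.assertions.failed,
      });
    });
  });
}
```

A call like `runCollection({ collection: "./tests.postman_collection.json" })` would resolve with the run's assertion counts once Newman finishes.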
- 0 of 1 issues responded to in the last 6 months
- No commit activity data available
- No stable releases found
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI status not available
This repository is licensed under the MIT License.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
If you are the author, you can claim the server directly. If the server belongs to an organization, first add glama.json to the root of your repository:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

then claim the server. Browse the existing examples for reference.
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
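As an illustration of that arithmetic, the sketch below wires the stated weights together in TypeScript and feeds in this server's published scores. The simple weighted-sum combination is an assumption based on the description above, not Glama's actual implementation:

```typescript
// Tool Definition Quality (TDQS) dimension weights, per the description above.
const TDQS_WEIGHTS = {
  purpose: 0.25,
  usage: 0.20,
  behavior: 0.20,
  parameters: 0.15,
  conciseness: 0.10,
  completeness: 0.10,
} as const;

type Scores = Record<keyof typeof TDQS_WEIGHTS, number>;

// Weighted sum of the six dimensions for a single tool.
const tdqs = (s: Scores): number =>
  (Object.keys(TDQS_WEIGHTS) as (keyof Scores)[])
    .reduce((sum, dim) => sum + TDQS_WEIGHTS[dim] * s[dim], 0);

// Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
function definitionQuality(tools: Scores[]): number {
  const all = tools.map(tdqs);
  const mean = all.reduce((a, b) => a + b, 0) / all.length;
  return 0.6 * mean + 0.4 * Math.min(...all);
}

// Overall: 70% definition quality + 30% coherence (equal-weight average).
function overallScore(tools: Scores[], coherence: number[]): number {
  const c = coherence.reduce((a, b) => a + b, 0) / coherence.length;
  return 0.7 * definitionQuality(tools) + 0.3 * c;
}

// run-collection's per-dimension scores (see Tool Scores below): TDQS = 2.95,
// matching the reported 2.9/5 average.
const runCollection: Scores = {
  purpose: 4, usage: 2, behavior: 2, parameters: 3, conciseness: 5, completeness: 2,
};
// Coherence: Disambiguation 5, Naming Consistency 5, Tool Count 2, Completeness 2.
console.log(overallScore([runCollection], [5, 5, 2, 2]).toFixed(2)); // ≈ 3.1
```

On these inputs the overall score works out to roughly 3.1, which would fall in tier B (≥3.0) under the thresholds above.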
Tool Scores
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only states the basic action. It fails to mention critical behavioral traits such as whether this is a read-only or destructive operation, authentication requirements, rate limits, execution time, or what happens during the run (e.g., output format, error handling).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It is appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of running a Postman collection (which involves execution, potential side effects, and output), the lack of annotations and output schema means the description is incomplete. It does not address what the tool returns, error conditions, or behavioral implications, leaving significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting all four parameters. The description adds no additional semantic information about parameters beyond what the schema provides, so it meets the baseline of 3 for adequate but not additive value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Run') and the resource ('a Postman Collection using Newman'), providing a specific verb+resource combination. However, since there are no sibling tools mentioned, it cannot demonstrate differentiation from alternatives, which prevents a perfect score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites, or contextual constraints. It simply states what the tool does without offering any usage instructions or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
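Taken together, these scores suggest what a stronger definition would disclose. The sketch below is purely illustrative: the wording and parameter schema are assumptions, not the server's actual run-collection definition. It shows purpose, usage guidance, and behavioral disclosure front-loaded into the description, using the standard MCP tool shape (name, description, inputSchema):

```typescript
// Illustrative tool definition only — the schema and wording are assumptions,
// not the server's actual run-collection implementation.
const runCollectionTool = {
  name: "run-collection",
  description: [
    "Run a Postman Collection with Newman and return pass/fail assertion results.",
    "Use this to execute existing collection files; it cannot create, edit, or",
    "list collections. It is read-only with respect to Postman data, but the",
    "requests inside the collection DO execute against their targets, so side",
    "effects depend on the collection's contents. Long collections may take",
    "minutes to complete.",
  ].join(" "),
  inputSchema: {
    type: "object",
    properties: {
      collection: {
        type: "string",
        description: "Path or URL of the collection JSON to execute.",
      },
      environment: {
        type: "string",
        description:
          "Optional path or URL of an environment file; omit to run without one.",
      },
    },
    required: ["collection"],
  },
};
```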
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/shannonlal/mcp-postman'
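The same endpoint can be queried from code. Here is a minimal fetch sketch; the response schema is not documented on this page, so no particular fields are assumed:

```typescript
// Fetch this server's directory entry from the Glama MCP API.
async function getServerEntry(): Promise<unknown> {
  const res = await fetch(
    "https://glama.ai/api/mcp/v1/servers/shannonlal/mcp-postman"
  );
  if (!res.ok) throw new Error(`Glama API request failed: ${res.status}`);
  return res.json(); // Inspect the payload to see the available fields.
}

getServerEntry().then(console.log).catch(console.error);
```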
If you have feedback or need assistance with the MCP directory API, please join our Discord server.