Next Role MCP Proxy
Server Quality Checklist
- Disambiguation 5/5
Each tool serves a distinct purpose with clear boundaries: get_credits checks account balance, get_pricing retrieves the product catalog, and place_order executes transactions. No functional overlap exists between the three operations.
- Naming Consistency 5/5
All tools follow a consistent verb_noun snake_case convention. The naming clearly distinguishes between retrieval operations (get_) and the transactional operation (place_).
- Tool Count 4/5
Three tools is minimal but reasonable for a focused ordering workflow. While the surface is thin, it covers the essential path from balance check to order completion without unnecessary bloat.
- Completeness 3/5
The toolset supports order creation but lacks order management capabilities such as status checking, order history retrieval, or cancellation. Once place_order is called, the agent has no visibility into order progress, creating a dead end for follow-up queries.
Average 4.1/5 across 3 of 3 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
This server provides 3 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
get_credits
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds valuable domain context explaining what credits are used for (1 per CV/cover letter order), but lacks operational details such as error handling, what happens if the phone number is not found, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly concise with two sentences. First states purpose immediately; second provides essential domain context about credit costs. Zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read operation without output schema, the description is nearly complete. It explains the credit system which is essential domain context. Minor gap: doesn't hint at return value structure or error states.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with phoneNumber fully documented. Description mentions 'customer' which loosely maps to the parameter, but adds no additional semantics, format constraints, or examples beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specifies the verb 'Check', resource 'credits', and scope 'remaining'. The second sentence distinguishes the domain context (CV/cover letter tailoring) which differentiates this from generic balance checking tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by explaining that orders cost 1 credit, suggesting this should be checked before placing orders. However, lacks explicit when-to-use guidance or direct comparison to siblings (get_pricing, place_order).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
place_order
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, effectively disclosing key behavioral traits: processing time (~15 minutes), notification mechanism (two SMS messages), and cost (1 credit). It omits idempotency or error handling details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences with zero waste: purpose, timing, notifications, and cost. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 6-parameter complexity and lack of annotations/output schema, the description is reasonably complete, covering the user journey (order → SMS confirmation → SMS completion). It could strengthen by noting the prerequisite check for credits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description text does not add parameter-specific semantics (e.g., explaining markdown format or productId sourcing), relying entirely on the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Place') and clear resource ('order for a tailored CV and cover letter'), immediately distinguishing it from the read-only sibling tools get_credits and get_pricing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Costs 1 credit per order,' implying a prerequisite to check credits, but lacks explicit guidance on when to use versus alternatives or a required workflow (e.g., calling get_pricing first to obtain the productId).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the business logic (career phases) and workflow ordering (must precede place_order), but lacks technical behavioral traits such as whether the operation is idempotent, cached, or rate-limited, and provides only a high-level description of return values without structural details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficiently structured sentences: the first defines the core action, the second provides business context for selection, and the third states the workflow prerequisite. Every sentence earns its place with no redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description partially compensates by explaining that the tool returns 'career-level tiers and their product IDs'. Combined with the explicit workflow integration (prerequisite for place_order), this provides sufficient context for a zero-parameter lookup tool, though specific return structure details would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description appropriately does not mention parameters since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves 'career-level tiers and their product IDs' using the specific verb 'Get'. It distinguishes itself from sibling tools by explaining its role as a prerequisite for place_order (getting productId), clearly differentiating it from get_credits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit workflow guidance: 'You must call this before placing an order to get the correct productId.' It also includes selection criteria ('customer should pick the tier that best matches where they are in their career'), giving clear context on when and how to use the results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
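Before committing the file, a quick local sanity check can catch malformed JSON or a missing maintainers entry. The sketch below checks only the two fields shown in the example above; it is an illustration, not Glama's actual schema validation (the schema at the $schema URL is authoritative):

```python
import json
from pathlib import Path

def check_glama_json(path="glama.json"):
    """Minimal sanity check for a glama.json at the repository root.

    Only verifies the fields used in the example above; the published
    server.json schema may require or allow more.
    """
    data = json.loads(Path(path).read_text())
    assert data.get("$schema") == "https://glama.ai/mcp/schemas/server.json", \
        "unexpected $schema value"
    maintainers = data.get("maintainers")
    assert isinstance(maintainers, list) and maintainers, \
        "maintainers must be a non-empty list"
    assert all(isinstance(m, str) for m in maintainers), \
        "each maintainer must be a GitHub username string"
    return data
```

Run it from the repository root; an exception means the file needs fixing before you authenticate.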
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
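As a sketch of how these weights combine, the calculation described above can be reproduced with this report's own per-tool scores as inputs. The dimension weights and aggregation formulas come from the text; rounding behavior and any unlisted adjustments on Glama's side are assumptions, so the printed number may differ slightly from the score shown on the server page:

```python
# Dimension weights for Tool Definition Quality, as listed above.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(scores):
    """Weighted 1-5 score for a single tool."""
    return sum(DIM_WEIGHTS[d] * s for d, s in scores.items())

def overall(tool_scores, coherence_scores):
    per_tool = [tdqs(s) for s in tool_scores]
    # Server-level definition quality: 60% mean + 40% minimum.
    tdq = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    # Server coherence: four dimensions weighted equally.
    coherence = sum(coherence_scores) / len(coherence_scores)
    # Overall: 70% definition quality + 30% coherence.
    return 0.7 * tdq + 0.3 * coherence

def tier(score):
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"

tools = [
    # get_credits, place_order, get_pricing (scores from this report)
    {"behavior": 3, "conciseness": 5, "completeness": 4,
     "parameters": 3, "purpose": 5, "usage": 3},
    {"behavior": 4, "conciseness": 5, "completeness": 4,
     "parameters": 3, "purpose": 5, "usage": 3},
    {"behavior": 3, "conciseness": 5, "completeness": 4,
     "parameters": 4, "purpose": 5, "usage": 5},
]
coherence = [5, 5, 4, 3]  # disambiguation, naming, tool count, completeness

score = overall(tools, coherence)
print(round(score, 2), tier(score))
```

Note how the 40% minimum term works: the weakest tool (get_credits at 3.8) drags the server-level definition quality below the plain mean, which is why improving the worst-described tool moves the score more than polishing an already strong one.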
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bats64mgutsi/nextrole-mcp-proxy'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.