Server Quality Checklist
- Disambiguation: 2/5. Severe overlap between the atomic extraction tools (extract_skills_structured, extract_experience_structured, classify_entities) and the comprehensive wrapper (analyze_resume_comprehensive). Agents cannot determine whether to use the individual pipeline steps or the all-in-one alternative. Additionally, manage_candidates and ats_manage_candidates appear functionally identical but have unclear scope differences.
- Naming Consistency: 3/5. Generally follows the verb_noun pattern (parse_resume, extract_keywords, compute_similarity), and the ats_ prefix is applied consistently to ATS management tools. However, the outlier manage_candidates breaks the ats_ convention without clear justification, and the mix of atomic (inspect_pipeline) and comprehensive (analyze_resume_comprehensive) naming doesn't clearly signal their hierarchical relationship.
- Tool Count: 3/5. 24 tools is borderline heavy for this scope. The count is inflated by exposing every pipeline stage as a separate tool (extract_keywords, extract_skills_structured, detect_patterns, classify_entities) alongside comprehensive alternatives that duplicate their functionality. These stages should be parameters of unified tools rather than discrete endpoints; see the consolidation sketch after the score summary below.
- Completeness: 3/5. The surface technically covers the full HR lifecycle (resume ingestion, ATS management, interview scheduling, offers), but the abundance of overlapping extraction tools creates selection gaps: agents will struggle to choose the correct parsing path. The ATS management suite is comprehensive (jobs, candidates, interviews, offers, notes, feedback), but there is no clear integration between the resume parsing domain and the ATS domain (e.g., no direct import from parse_resume into ats_manage_candidates).
Tool definition quality averages 4/5 across 24 of 24 tools scored.
See the tool scores section below for per-tool breakdowns.
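To make the consolidation recommendation concrete, here is a sketch of how the four extraction stages could collapse into a single parameterized tool. The tool name extract_resume_data and its schema are hypothetical; only the four existing stage tools named above come from the server itself.

// Hypothetical consolidation (TypeScript): the four pipeline-stage tools
// become one tool whose "extractors" parameter selects the stages to run.
// This name and schema are illustrative, not part of the reviewed server.
const extractResumeData = {
  name: "extract_resume_data",
  description:
    "Run one or more extraction stages over a parsed resume. " +
    "Replaces extract_keywords, extract_skills_structured, " +
    "detect_patterns, and classify_entities.",
  inputSchema: {
    type: "object",
    properties: {
      resume_text: {
        type: "string",
        description: "Plain-text resume body, e.g. the output of parse_resume",
      },
      extractors: {
        type: "array",
        items: { enum: ["keywords", "skills", "patterns", "entities"] },
        description: "Which extraction stages to run",
      },
    },
    required: ["resume_text", "extractors"],
  },
};

One tool with an enum parameter gives agents a single decision point instead of four near-duplicates, and analyze_resume_comprehensive could then be documented as the run-all-extractors case rather than a parallel code path.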
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v0.1.3
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- This repository includes a glama.json configuration file.
- This server provides 24 tools.
- No known security issues or vulnerabilities reported.
- This server has been verified by its author.
- Add related servers to improve discoverability.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy the card badge snippet shown on the server page into your README.md.
Score Badge
Copy the score badge snippet shown on the server page into your README.md.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository:
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
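As a sanity check on the arithmetic, here is a small sketch of the formula as described above, applied to this server's published numbers (mean TDQS 4.0; coherence scores 2, 3, 3, 3). The per-tool minimum TDQS is not published on this page, so the value used below is a placeholder assumption.

// Sketch of the quality-score arithmetic described above (TypeScript).
// Weights come from the text; minTdqs is a placeholder assumption.
function overallScore(meanTdqs: number, minTdqs: number, coherence: number[]): number {
  // Server-level tool definition quality: 60% mean + 40% minimum,
  // so one poorly described tool drags the whole component down.
  const tdq = 0.6 * meanTdqs + 0.4 * minTdqs;
  // Server coherence: the four dimensions are weighted equally.
  const coh = coherence.reduce((a, b) => a + b, 0) / coherence.length;
  // Overall: 70% tool definition quality, 30% coherence.
  return 0.7 * tdq + 0.3 * coh;
}

function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}

// This server: mean TDQS 4.0, coherence (2 + 3 + 3 + 3) / 4 = 2.75.
// With an assumed minimum TDQS of 3, the overall score is
// 0.7 * (0.6 * 4.0 + 0.4 * 3) + 0.3 * 2.75 = 2.52 + 0.825 = 3.345, tier B.
const score = overallScore(4.0, 3, [2, 3, 3, 3]);
console.log(score.toFixed(3), tier(score)); // "3.345 B"

Note that the Disambiguation score of 2/5 alone pulls coherence down to 2.75, which costs about 0.375 points of overall score relative to scoring 4/5 across the board.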