
cv-mirror-mcp

analyze_cv

Check your CV against 5 real ATS parsers (Workday, Greenhouse, Lever, Taleo, iCIMS). Get per-vendor lint findings, parse risk scores, and concrete fixes to improve ATS compatibility.

Instructions

Analyse a CV (PDF or DOCX) against 5 real ATS parsers (Workday, Greenhouse, Lever, Taleo, iCIMS). Returns per-vendor lint findings, parse risk score, and concrete fixes. Use when the user asks 'is my CV ATS-friendly', 'will my resume pass [vendor]', or 'why am I not getting interviews' (with a file path).

Input Schema

Name    Required    Description                                    Default
path    Yes         Absolute path to the CV file (PDF or DOCX).    (none)
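Concretely, a client invocation of analyze_cv with its single required parameter could look like the sketch below. The envelope follows the standard MCP tools/call JSON-RPC shape; the file path is a hypothetical example, not taken from the page.

```python
import json

# Hypothetical JSON-RPC request an MCP client would send to invoke
# analyze_cv. The tool name and the "path" argument come from the
# schema above; the request envelope follows the MCP tools/call shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_cv",
        "arguments": {
            "path": "/home/user/cv.pdf",  # hypothetical absolute path
        },
    },
}

print(json.dumps(request, indent=2))
```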
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so the description carries the full burden. It discloses that the tool reads PDF/DOCX files, runs them against 5 parsers, and returns findings. It does not mention file size limits, processing duration, or whether the file is uploaded elsewhere, but it is largely transparent about the tool's operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two focused sentences: the first defines the action and output, the second provides usage examples. No unnecessary words. Excellent front-loading of purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the lack of an output schema, the description explains the return types (per-vendor lint findings, risk score, fixes) and mentions the supported file types. It could add error-handling details (e.g., behavior on a missing file), but is otherwise complete for a single-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single 'path' parameter. The tool description adds no further detail beyond the schema, but the schema itself is sufficient, so the baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
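As an illustration of the enrichment this dimension rewards, the 'path' property could carry machine-checkable constraints alongside its description. The pattern and the expanded wording below are hypothetical additions for the sake of example, not part of the real server's schema.

```python
import re

# Hypothetical enriched input schema for analyze_cv. The "pattern"
# keyword and the extra sentences in the description are illustrative
# additions; the real server only documents the path briefly.
input_schema = {
    "type": "object",
    "required": ["path"],
    "properties": {
        "path": {
            "type": "string",
            "description": (
                "Absolute path to the CV file (PDF or DOCX). "
                "Relative paths are rejected; the file must exist "
                "and be readable by the server process."
            ),
            # Absolute path ending in a supported extension.
            "pattern": r"^/.*\.(pdf|docx)$",
        },
    },
}

# Quick sanity check of the hypothetical pattern.
pattern = input_schema["properties"]["path"]["pattern"]
print(bool(re.match(pattern, "/home/user/cv.pdf")))
```

A constraint like this lets an agent reject a bad argument before calling the tool, instead of discovering the problem from an error response.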

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states that the tool analyzes CVs against 5 ATS parsers, returning per-vendor lint findings, a risk score, and fixes. It clearly distinguishes itself from sibling tools by covering multiple vendors (vs. lint_for_vendor, which likely targets one).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit user-query triggers ('is my CV ATS-friendly', 'will my resume pass [vendor]', 'why am I not getting interviews') and mentions the file-path requirement. It lacks an explicit when-not-to-use note and any mention of sibling alternatives, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/goofypluto999/cv-mirror-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.