Laddro Career
Server Details
Resume tailoring, cover letters, PDF export, and job search via the Laddro Career API
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: laddro-app/laddro-career-mcp
- GitHub Stars: 0
- Server Listing: laddro-career-mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 18 of 18 tools scored. Lowest: 2.9/5.
Each tool targets a distinct resource (coverLetters, resumes, templates, etc.) with clear actions, so agents can easily distinguish purposes.
All tools follow the consistent dot-separated pattern laddro.<resource>.<action>, making naming predictable and unambiguous.
18 tools cover a broad but focused set of functionalities for resume and cover letter management, neither too few nor excessive.
While the tool set covers creation, listing, and rendering, it lacks update and delete operations for both cover letters and resumes, which are notable gaps for a complete lifecycle.
Available Tools
18 tools

laddro.coverLetters.create (A)
Create a new cover letter manually with provided contact details and letter content
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Applicant's email address | |
| phone | No | Applicant's phone number | |
| title | No | Internal title for this cover letter | |
| address | No | Applicant's address | |
| fullName | Yes | Applicant's full name | |
| jobTitle | No | Applicant's current or target job title | |
| companyName | No | Name of the company being applied to | |
| hiringManager | No | Name of the hiring manager | |
| letterContent | Yes | Cover letter body content as HTML |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | No | |
| title | No | |
| createdAt | No | |
| updatedAt | No |
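A minimal sketch of an input payload for laddro.coverLetters.create, based on the parameter table above. Field values are illustrative, and how the payload is submitted depends on your MCP client; only the field names and required/optional split come from the schema.

```python
# Hypothetical payload for laddro.coverLetters.create.
# fullName and letterContent are the only required fields per the schema.
payload = {
    "fullName": "Jane Doe",                          # required
    "letterContent": "<p>Dear hiring team, ...</p>", # required, HTML body
    "title": "Backend role at Acme",                 # optional internal title
    "companyName": "Acme Corp",                      # optional
    "email": "jane@example.com",                     # optional
}

# Sanity check: every required field is present.
required = {"fullName", "letterContent"}
assert required <= payload.keys()
```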
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false, so the description's 'Create' aligns but adds no additional behavioral details about mutation consequences, permissions, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no filler words, directly communicating the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with an output schema, the description adequately covers intent, but could mention the expected response or creation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description summarizes parameters as 'contact details and letter content' but adds no new semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Create a new cover letter manually', specifying both the action (create) and the resource (cover letter), and distinguishes from sibling tools like generate (AI-generated) by emphasizing 'manually'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for manual creation via 'manually', contrasting with generate, but does not explicitly state when to use this tool over get, list, or other siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.coverLetters.generate (C)
AI-generate a personalized cover letter based on a resume and job description. Returns a PDF.
| Name | Required | Description | Default |
|---|---|---|---|
| font | No | Font family name | |
| jobUrl | No | URL to the job posting (alternative to jobDescription) | |
| colorId | No | Color scheme identifier | |
| language | No | Output language code (e.g. en, de, fr) | |
| resumeId | No | Resume UUID to base the cover letter on (uses default if omitted) | |
| templateId | No | Template identifier for PDF output | |
| positionName | Yes | Job title or position name being applied for | |
| jobDescription | No | Full job description text |
Output Schema
| Name | Required | Description |
|---|---|---|
| content | No | Base64-encoded PDF data |
| mimeType | No |
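Per the schemas above, generate takes positionName as its only required input (jobUrl and jobDescription are alternatives, so supply one of the two) and returns base64-encoded PDF data. A sketch of building the request and decoding the response, using a mocked response body since the real call mechanism depends on your MCP client:

```python
import base64

# Hypothetical request for laddro.coverLetters.generate.
request = {
    "positionName": "Senior Python Developer",  # required
    "jobDescription": "We are looking for ...", # or jobUrl, not both
    "language": "en",
}

# Mocked response in the shape of the output schema: base64 PDF + MIME type.
mock_response = {
    "content": base64.b64encode(b"%PDF-1.4 fake pdf bytes").decode("ascii"),
    "mimeType": "application/pdf",
}

# The content field must be decoded before the PDF can be saved or viewed.
pdf_bytes = base64.b64decode(mock_response["content"])
assert pdf_bytes.startswith(b"%PDF")
```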
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, implying state modification, but the description does not clarify if the generated letter is saved or its lifetime. Missing details on side effects, authentication, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with one sentence covering purpose and output. However, it could briefly mention parameter roles without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters and sibling tools, the description lacks context on required inputs, optional resume selection, and difference between jobUrl and jobDescription. Output schema may cover return format, but usage context is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters have descriptions. The tool description adds no extra parameter context beyond the schema, meeting baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a cover letter based on resume and job description and returns a PDF. However, it does not differentiate from sibling 'create' tool, which may cause confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'create' or 'render'. Missing context about prerequisites or suitable scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.coverLetters.get (A, read-only)
Get metadata and content for a specific cover letter by its ID
| Name | Required | Description | Default |
|---|---|---|---|
| coverLetterId | Yes | Cover letter UUID identifier |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | No | |
| title | No | |
| createdAt | No | |
| updatedAt | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description adds value by specifying what is returned ('metadata and content'). This provides context beyond annotations, though it does not disclose other traits like authorization requirements or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence with no extraneous words. Every part contributes to understanding the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter, an output schema, and annotations, the description is sufficiently complete. It captures the core functionality (get by ID, return metadata and content) without missing critical information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already fully describes the single parameter (coverLetterId) with a clear description. The tool description does not add additional meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and identifies the resource as 'cover letter', explicitly mentioning it retrieves 'metadata and content' by ID. This clearly distinguishes it from sibling tools like list (returns multiple) and render (returns formatted version).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need a specific cover letter by ID, but does not explicitly state when not to use it or provide alternatives. It lacks guidance on when to prefer this over list or render.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.coverLetters.list (A, read-only)
List the authenticated user's cover letters with pagination support
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (default 20) | |
| offset | No | Number of results to skip for pagination |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | No | |
| coverLetters | No |
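The limit/offset parameters and the total field in the output make it straightforward to page through all results. A sketch, where call_tool stands in for however your MCP client invokes laddro.coverLetters.list (the paging logic is the point, not the client API):

```python
# Page through all cover letters using limit/offset until `total` is reached.
def fetch_all(call_tool, limit=20):
    items, offset = [], 0
    while True:
        page = call_tool({"limit": limit, "offset": offset})
        items.extend(page["coverLetters"])
        offset += limit
        if offset >= page["total"]:
            return items

# Simulated backend with 45 letters, served 20 at a time.
data = [{"id": str(i)} for i in range(45)]

def fake_call(params):
    start = params["offset"]
    return {"total": len(data),
            "coverLetters": data[start:start + params["limit"]]}

assert len(fetch_all(fake_call)) == 45
```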
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond the 'readOnlyHint' annotation by specifying the scope (authenticated user) and highlighting pagination support. It does not detail rate limits or error behaviors, but provides sufficient behavioral insight for a read action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no extraneous words. Every piece of information (list, authenticated user, pagination) earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the essential purpose, scope, and pagination feature. With an output schema present, it is sufficient for an agent to understand the tool's function without needing to explain return values. Lacks mention of sorting or filtering, but these are not typical for a list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter meanings (limit, offset) are already defined in the input schema. The description merely mentions 'pagination support', which aligns but does not add new semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists cover letters for the authenticated user with pagination support, using a specific verb+resource pair. It distinguishes itself from siblings like 'get' (single letter) and 'create'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions pagination support but does not explicitly state when to use this tool versus alternatives (e.g., using 'get' for a single cover letter or 'render' for generating a PDF). No guidance on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.coverLetters.render (A, read-only)
Render a saved cover letter as PDF with template and styling settings. Costs 1 API credit.
| Name | Required | Description | Default |
|---|---|---|---|
| font | No | Font family name | |
| locale | No | Language/locale code | |
| margin | No | Page margin in millimeters | |
| colorId | No | Color scheme identifier | |
| spacing | No | Line spacing multiplier | |
| fontSize | No | Base font size in points | |
| templateId | Yes | Template identifier (e.g. GRAPHITE) | |
| coverLetterId | Yes | Cover letter UUID to render | |
| pageNumbering | No | Page numbering style |
Output Schema
| Name | Required | Description |
|---|---|---|
| content | No | Base64-encoded PDF data |
| mimeType | No |
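A hypothetical render request based on the table above: coverLetterId and templateId are the only required fields, and the styling options are optional overrides. The UUID is a placeholder, and units follow the schema descriptions.

```python
# Hypothetical payload for laddro.coverLetters.render (costs 1 API credit).
render_request = {
    "coverLetterId": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "templateId": "GRAPHITE",  # required; example value from the schema
    "fontSize": 11,            # points
    "margin": 20,              # millimeters
    "spacing": 1.15,           # line spacing multiplier
}

# Both required fields must be present before calling the tool.
assert {"coverLetterId", "templateId"} <= render_request.keys()
```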
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, which is consistent with the description. The description adds the cost of 1 API credit, a useful behavioral detail. However, other aspects like permission requirements or potential for modifying state are not addressed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys purpose and cost. No redundant or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters and an output schema, the description is sparse. It covers the core purpose but lacks details on parameter defaults, ordering, or how styling settings are applied. The output schema likely compensates for return value details, but parameter context is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description adds no additional meaning to parameters, such as their purpose or interactions beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (render), resource (saved cover letter), and output format (PDF). It distinguishes from sibling tools like create and generate that create new cover letters rather than rendering existing ones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage after a cover letter is saved, but provides no explicit guidance on when to use this tool versus alternatives (e.g., generate for AI creation, export for resumes). Missing prerequisites or scenarios where rendering might be inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.fonts.list (A, read-only)
List all available font families for resume and cover letter rendering
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fonts | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, so the agent knows it's safe. The description adds that it lists 'all available font families' with the specific context, which is adequate. No additional behavioral traits needed given zero parameters and output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the core action ('List all available font families') and includes relevant context. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with zero parameters and an output schema, the description covers the purpose and scope adequately. It is complete enough for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, with 100% coverage. The description does not need to add parameter info, so the baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and the resource 'all available font families', with the specific context 'for resume and cover letter rendering'. This distinguishes it from sibling list tools like laddro.templates.list or laddro.resumes.list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when listing fonts for resume/cover letter rendering) but does not provide explicit when-not or alternatives. Since siblings are clearly different resources, the intent is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.languages.list (A, read-only)
List all 14 supported languages and locales for resume content
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| languages | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint true, indicating safe read. The description adds that it lists exactly 14 languages/locales for resume content, providing specific behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 10 words, no redundancy. Front-loaded with verb and resource. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, output schema present, and annotations, the description is fully sufficient. No missing information for an agent to decide to invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. The description does not add parameter information, but none is needed. Baseline 4 for zero parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'supported languages and locales', and the context 'for resume content', with a specific count of 14. It distinguishes from sibling tools which handle different resources or actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives is provided. However, the tool is simple and the only way to list languages, so usage context is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.models.list (A, read-only)
List all supported AI providers and models for Bring Your Own Key (BYOK)
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| providers | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so no need to state read-only. The description adds the BYOK context but no additional behavioral details like pagination or completeness guarantees. With output schema present, return format is covered, so 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no filler words, front-loaded with verb and resource. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter, read-only list tool with an output schema, the description is fully complete. It tells the agent what the tool does, and the scope is clear. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description does not need to add parameter info. Baseline is 4 for zero-parameter tools, and the description does not waste words on nonexistent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action ('list') and the resource ('supported AI providers and models') with a specific context ('Bring Your Own Key'). It distinguishes itself from sibling list tools that list other entities like cover letters or fonts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to query available AI models for BYOK, but does not explicitly state when to use this tool versus alternatives or provide exclusion criteria. Sibling tools are for different resources, so context is clear but not formalized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.resumes.export (A, read-only)
Export a resume as a downloadable PDF file with optional template and styling settings. Costs 1 API credit.
| Name | Required | Description | Default |
|---|---|---|---|
| font | No | Font family name | |
| locale | No | Language/locale code (e.g. en, de, fr) | |
| margin | No | Page margin in millimeters | |
| colorId | No | Color scheme identifier | |
| spacing | No | Line spacing multiplier | |
| fontSize | No | Base font size in points | |
| resumeId | Yes | Resume UUID to export | |
| templateId | No | Template identifier (e.g. GRAPHITE) | |
| pageNumbering | No | Page numbering style |
Output Schema
| Name | Required | Description |
|---|---|---|
| content | No | Base64-encoded PDF data |
| mimeType | No |
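Like generate and render, export returns base64-encoded PDF data rather than a file, so the client is responsible for decoding and writing it to disk. A sketch with a mocked response body (the real bytes come from the tool call):

```python
import base64
import pathlib
import tempfile

# Mocked laddro.resumes.export response in the shape of the output schema.
mock_response = {
    "content": base64.b64encode(b"%PDF-1.4 fake resume").decode("ascii"),
    "mimeType": "application/pdf",
}

# Decode the base64 payload and write it out as a downloadable PDF file.
out = pathlib.Path(tempfile.mkdtemp()) / "resume.pdf"
out.write_bytes(base64.b64decode(mock_response["content"]))
assert out.read_bytes().startswith(b"%PDF")
```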
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation 'readOnlyHint: true' is consistent with export (no state mutation). The description adds value by stating 'Costs 1 API credit', a behavioral trait not covered by annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the core purpose and a cost note. Every word earns its place; no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with full schema descriptions and an output schema, the description covers the essential info (PDF export, cost). Could mention sync/async or file handling, but not required due to output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully documented in schema. The description only summarizes 'optional template and styling settings', adding no extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool exports a resume as a PDF, with optional template/styling. The verb 'export' and resource 'resume' are specific, and it is distinct from siblings like 'render' or 'tailor'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for PDF export with styling, but lacks explicit guidance on when to choose this tool over similar siblings like 'laddro.resumes.render' or 'laddro.resumes.get'. No alternatives or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.resumes.get (A, read-only)
Get metadata and content for a specific resume by its ID
| Name | Required | Description | Default |
|---|---|---|---|
| resumeId | Yes | Resume UUID identifier |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | No | |
| title | No | |
| createdAt | No | |
| updatedAt | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true. The description adds that it retrieves metadata and content, but lacks further behavioral context such as rate limits or required permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, front-loaded sentence with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with an output schema, the description is sufficient for use. It could mention error scenarios but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description does not add meaning beyond the schema; it merely restates that the tool gets a resume by its ID.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'metadata and content for a specific resume by its ID', distinguishing it from siblings like list or export.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a resume ID is known, but does not explicitly state when to use or not use this tool versus alternatives like list or render.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
laddro.resumes.list (A, Read-only)
List the authenticated user's resumes with pagination support
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | 20 |
| offset | No | Number of results to skip for pagination |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | No | |
| resumes | No |
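The limit/offset pair supports the usual paging loop. A minimal sketch, assuming the documented response shape ({total, resumes}); `fetch_page` is a stub standing in for a real MCP client call, with made-up data:

```python
def fetch_page(limit: int, offset: int) -> dict:
    """Stub for a laddro.resumes.list call, returning the documented shape."""
    all_resumes = [{"id": str(i), "title": f"Resume {i}"} for i in range(45)]
    return {"total": len(all_resumes), "resumes": all_resumes[offset:offset + limit]}

def list_all_resumes(limit: int = 20) -> list:
    """Advance offset by limit until every resume has been collected."""
    resumes, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        resumes.extend(page["resumes"])
        offset += limit
        if offset >= page["total"]:
            return resumes

print(len(list_all_resumes()))  # collects all 45 stubbed resumes
```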
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows it is safe. The description adds context about authentication scope (the user's resumes) and pagination behavior. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and resource, no unnecessary words. Efficient and clear.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, return format is covered. Missing default sort order but acceptable for a list operation. Sufficient for tool selection and invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with adequate descriptions for limit and offset. The description mentions 'pagination support' but does not add new meaning beyond the schema. Baseline 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), resource (resumes), scope (authenticated user), and feature (pagination support). It distinguishes from sibling tools like get, export, render, and tailor.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions pagination but does not provide explicit guidance on when to use this tool vs alternatives like 'get' for a single resume or 'export' for downloading. No when-not-to-use or filtering advice.
laddro.resumes.render (B, Read-only)
Render a resume as PDF with specific template and styling settings. Costs 1 API credit.
| Name | Required | Description | Default |
|---|---|---|---|
| font | No | Font family name (e.g. Inter, Roboto) | |
| locale | No | Language/locale code (e.g. en, de, fr) | |
| margin | No | Page margin in millimeters | |
| colorId | No | Color scheme identifier for the template | |
| spacing | No | Line spacing multiplier (e.g. 1.0, 1.15, 1.5) | |
| fontSize | No | Base font size in points | |
| resumeId | Yes | Resume UUID to render | |
| templateId | Yes | Template identifier (e.g. GRAPHITE) | |
| pageNumbering | No | Page numbering style |
Output Schema
| Name | Required | Description |
|---|---|---|
| content | No | Base64-encoded PDF data |
| mimeType | No |
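Since `content` is base64-encoded PDF data, a caller must decode it before writing a file. A minimal sketch; the sample bytes below are a fake stand-in, not a real render result:

```python
import base64

# Fake response matching the output schema above; not a real render result.
fake_result = {
    "content": base64.b64encode(b"%PDF-1.7 fake bytes").decode("ascii"),
    "mimeType": "application/pdf",
}

def save_pdf(result: dict, path: str) -> int:
    """Decode the base64 payload and write it to disk; returns bytes written."""
    data = base64.b64decode(result["content"])
    with open(path, "wb") as f:
        f.write(data)
    return len(data)

written = save_pdf(fake_result, "resume.pdf")
```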
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already include readOnlyHint=true, which is consistent. Description adds the cost disclosure, but no other behavioral traits beyond what annotations provide.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise: one sentence for purpose and one for cost. No wasted words, but slightly more detail could improve clarity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 parameters and an output schema, the description is minimal. It doesn't mention required parameters, defaults, or template selection guidance, leaving some gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters have descriptions in the input schema. The description adds no additional meaning or context beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Render a resume as PDF' with template and styling settings, which is specific. However, it does not explicitly differentiate from sibling tools like laddro.resumes.export, which might also produce PDFs.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Only mentions cost ('Costs 1 API credit'), but provides no guidance on when to use this tool versus alternatives (e.g., export, tailor) or when not to use it.
laddro.resumes.tailor (A)
AI-tailor a resume for a specific job posting. Rewrites content to match the job description and returns a PDF. Provide either jobDescription or jobUrl.
| Name | Required | Description | Default |
|---|---|---|---|
| font | No | Font family name | |
| mode | No | Tailoring mode: standard modifies existing, new creates from scratch | |
| jobUrl | No | URL to the job posting (alternative to jobDescription) | |
| colorId | No | Color scheme identifier | |
| language | No | Output language code (e.g. en, de, fr) | |
| resumeId | No | Resume UUID to tailor (uses user's default resume if omitted) | |
| templateId | No | Template identifier for PDF output | |
| positionName | Yes | Job title or position name being applied for | |
| jobDescription | No | Full job description text to tailor against | |
| includeCoverLetter | No | Also generate a matching cover letter (returns ZIP with both PDFs) |
Output Schema
| Name | Required | Description |
|---|---|---|
| content | No | Base64-encoded PDF data |
| mimeType | No |
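Because neither jobDescription nor jobUrl is marked required in the schema, the either/or constraint exists only in prose. An agent can enforce it with a pre-flight check like this hypothetical sketch; the parameter names come from the table above, and the example values are made up:

```python
def validate_tailor_args(args: dict) -> list:
    """Return a list of problems; an empty list means the call looks valid."""
    problems = []
    if not args.get("positionName"):
        problems.append("positionName is required")
    if not args.get("jobDescription") and not args.get("jobUrl"):
        problems.append("provide either jobDescription or jobUrl")
    return problems

ok = validate_tailor_args(
    {"positionName": "Backend Engineer", "jobUrl": "https://example.com/jobs/123"}
)
bad = validate_tailor_args({"positionName": "Backend Engineer"})
print(ok, bad)  # [] ['provide either jobDescription or jobUrl']
```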
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a write operation (readOnlyHint=false). The description adds that it rewrites content and returns a PDF, but omits behavioral details such as the fallback to the user's default resume when resumeId is omitted and the potential to return a ZIP when includeCoverLetter is true. No contradiction with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, front-loads the core action, and contains no extraneous information. Every phrase adds meaning.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, optional cover letter), the description covers the core purpose and main input constraint. However, it fails to mention the optional cover letter generation, which changes the output from a single PDF to a ZIP. The existence of an output schema lessens the burden, but this omission reduces completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema describes each parameter. The description adds value by explicitly stating that either 'jobDescription' or 'jobUrl' should be provided, a constraint not enforced by schema (neither is required). This clarifies selection logic for the agent.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'tailor' and resource 'resume', clearly states action (rewrites content) and output (PDF). It distinguishes from sibling tools as none other specialize in tailoring. The instruction 'Provide either jobDescription or jobUrl' reinforces the tool's purpose.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives usage guidance by specifying that the input should be either a job description or job URL, which clarifies how to invoke it. However, it does not indicate when to prefer this tool over alternatives like render or export, nor does it specify when not to use it (e.g., if no job posting is available).
laddro.settings.deleteModel (A, Destructive)
Remove the saved AI provider configuration, reverting to Laddro's default AI model
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| message | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true. The description adds that it reverts to default but does not disclose additional behavioral traits (e.g., error state if no config exists).
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words, front-loaded with key action and effect.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given an output schema exists, the description is adequate but lacks details on edge cases (e.g., no saved config) or confirmation behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist; baseline is 4. The description does not require parameter details.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Remove' and the resource 'saved AI provider configuration', with the effect 'reverting to default'. It distinguishes from siblings like updateModel.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The purpose implies usage for removing a custom AI provider config, but no explicit when-to-use or when-not-to-use guidance is given. Siblings like updateModel provide context, but the description does not compare them.
laddro.settings.get (A, Read-only)
Get the current AI provider and model configuration for the authenticated user
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| model | No | |
| hasKey | No | |
| provider | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
ReadOnlyHint annotation already indicates safe read. Description adds scope (authenticated user) and specifics (AI provider/model), but does not elaborate on behavior beyond what annotations convey.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is direct and free of fluff. It earns its place by stating the resource and scope.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple getter with no parameters and an output schema, the description is complete. It states what is retrieved and for whom.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so baseline 4. Description explains what is retrieved, sufficient given zero params.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves the AI provider and model configuration for the authenticated user, using a specific verb+resource. It distinguishes from sibling update/delete tools which modify settings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While no explicit when-to-use guidance is given, the description implies reading current settings for the user, which is intuitive. Siblings like updateModel and deleteModel provide clear alternatives for modification.
laddro.settings.updateModel (A)
Configure the AI provider and model for BYOK (Bring Your Own Key). Saves an encrypted API key for the chosen provider.
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | Model identifier (uses provider's recommended model if omitted) | |
| apiKey | Yes | Your API key for the chosen provider | |
| provider | Yes | AI provider name (e.g. Anthropic, OpenAI, Google, DeepSeek) |
Output Schema
| Name | Required | Description |
|---|---|---|
| message | No |
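Since apiKey is a secret, it should never reach logs even though it travels in the tool arguments. A hypothetical sketch of redacting it before logging; `redact_secrets` is not part of the Laddro API, and the provider/model values are made-up placeholders:

```python
def redact_secrets(args: dict, secret_keys: tuple = ("apiKey",)) -> dict:
    """Return a copy of the arguments safe for logging, with secrets masked."""
    return {k: ("***" if k in secret_keys else v) for k, v in args.items()}

# Example arguments mirroring the parameter table; values are placeholders.
args = {"provider": "Anthropic", "model": "example-model", "apiKey": "sk-..."}
print(redact_secrets(args))
```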
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a write operation (readOnlyHint=false) and non-destructiveness. The description adds value by stating that the API key is saved encrypted, providing context beyond annotations. However, it doesn't cover whether the operation is idempotent or overwrites existing settings.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the primary purpose ('Configure the AI provider and model'), no redundant information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is adequate for a simple configuration tool with well-documented parameters and an existing output schema. It covers the core functionality (provider/model config, key storage) but omits details like error conditions or side effects.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add additional meaning beyond the schema's parameter descriptions, which already cover provider, model (optional), and apiKey.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's main action: configuring AI provider and model for BYOK, and saving an encrypted API key. It distinguishes itself from sibling tools like settings.get and settings.deleteModel.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for BYOK configuration but does not provide explicit when-to-use or when-not-to guidance, nor does it mention alternative tools or prerequisites.
laddro.templates.get (A, Read-only)
Get full details for a template including color schemes, fonts, and preview images
| Name | Required | Description | Default |
|---|---|---|---|
| templateId | Yes | Template identifier (e.g. GRAPHITE, ONYX, MARBLE) |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | No | |
| name | No | |
| layout | No | |
| atsScore | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description does not need to restate that. The description adds context about the return content (color schemes, fonts, preview images), but does not disclose additional behavioral traits like authorization needs or performance considerations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys all necessary information without redundancy. It is front-loaded with the core purpose and concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter and an output schema, the description sufficiently explains what the tool returns. No additional information is needed given the annotations and schema richness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the single parameter with a description, and the description mentions templateId implicitly as 'a template'. Since schema coverage is 100%, the description adds no extra semantic value beyond the schema's parameter description.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get full details for a template' specifying the action (get) and resource (template). It enumerates what details are included (color schemes, fonts, preview images), making the purpose unambiguous and differentiating it from sibling tools like list.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving a single template's full details, but does not explicitly state when to use this tool over alternatives such as laddro.templates.list. No when-not-to-use or alternative guidance is provided.
laddro.templates.list (A, Read-only)
List all available resume templates with ATS scores and layout types
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| templates | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true. Description adds behavioral context by specifying return data (ATS scores, layout types). No gaps or contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with verb and resource. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully adequate for a parameterless list tool with output schema. Clearly states purpose and return data.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters; baseline 4. Description doesn't need to add param info.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' with specific resource 'all available resume templates' and outputs 'ATS scores and layout types'. Distinguishes from sibling laddro.templates.get.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage as a list operation, but no explicit guidance on when to use vs alternatives like laddro.templates.get. No exclusions or when-not-to-use.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
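Before publishing, the file can be sanity-checked locally. A hypothetical sketch; only the `maintainers` and `email` field names come from the snippet above, and `check_connector_file` is not an official tool:

```python
import json

# Inline copy of the example file content from above.
raw = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""

def check_connector_file(doc: dict) -> bool:
    """True if there is a non-empty maintainers list and every entry has an email."""
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

doc = json.loads(raw)
print(check_connector_file(doc))  # True
```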
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.