storylenses
Server Details
AI cover letter generation for agents. Job analysis, profile matching, narrative letters.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: benediktgirz/storylenses-mcp-server
- GitHub Stars: 0
- Server Listing: StoryLenses MCP Server
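The server speaks Streamable HTTP, so any MCP client that supports that transport can connect. Below is a minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the endpoint URL is a placeholder, since the listing above does not show one. Later snippets on this page reuse this client.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing above does not show the server's URL.
const SERVER_URL = new URL("https://storylenses.example.com/mcp");

const client = new Client({ name: "storylenses-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(SERVER_URL));

// Should return the five storylenses_* tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```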
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 5 of 5 tools scored. Lowest: 3.2/5.
Each tool has a clearly distinct purpose with no overlap: job analysis, letter generation, archetype listing, profile matching, and quality checking. The descriptions specify unique operations (e.g., extract fields vs. generate letter vs. match profile), making misselection unlikely.
All tools follow a consistent 'storylenses_verb_noun' pattern with snake_case (e.g., storylenses_analyze_job, storylenses_generate_letter). This predictable naming scheme enhances readability and usability across the set.
With 5 tools, the server is well-scoped for its purpose of job application support. Each tool earns its place by covering distinct aspects like analysis, generation, matching, and evaluation, avoiding bloat or thin coverage.
The toolset provides strong coverage for job application workflows, including analysis, matching, generation, and quality checking. A minor gap exists in direct editing or customization of cover letters beyond generation and feedback, but agents can work around this with the available tools.
Available Tools
5 tools

storylenses_analyze_job (Grade: A, Read-only)
Extract 15+ structured fields from a job posting — role requirements, company challenges, culture signals, recruiter priorities
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Response language (en, de, es, or pt) | en |
| job_url | No | URL of the job posting to analyze | |
| job_text | No | Raw text of the job posting (use if no URL) | |
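As an illustration, here is how an agent might call this tool, reusing the client from the connection sketch near the top of the page; the sample job text is invented.

```typescript
// Sketch: analyze a posting from raw text; pass job_url instead when you
// have a link. Both parameters are optional, but one of them is needed.
const jobAnalysis = await client.callTool({
  name: "storylenses_analyze_job",
  arguments: {
    job_text: "Senior Backend Engineer at Acme GmbH. Requirements: ...", // invented sample
    locale: "en", // en, de, es, or pt (default: en)
  },
});
```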
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe, read-only operation with open-world assumptions. The description adds context by specifying the types of fields extracted, but does not disclose additional behavioral traits such as rate limits, authentication needs, or what happens with invalid inputs. There is no contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Extract 15+ structured fields from a job posting') and follows with specific examples of what is extracted. Every part of the sentence adds value without redundancy or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (analysis with multiple field types), annotations cover safety and openness, and schema fully describes parameters. However, there is no output schema, so the description could benefit from mentioning the format or structure of the extracted fields. It is mostly complete but has a minor gap in output clarification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all three parameters (locale, job_url, job_text). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining the interaction between job_url and job_text or detailing the extraction process. Baseline score of 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Extract 15+ structured fields') and the resource ('from a job posting'), with explicit details on what types of fields are extracted (role requirements, company challenges, culture signals, recruiter priorities). It distinguishes itself from sibling tools like 'storylenses_generate_letter' or 'storylenses_match_profile' by focusing on analysis rather than generation or matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for analyzing job postings to extract structured data, which provides clear context. However, it does not explicitly state when to use this tool versus alternatives like 'storylenses_quality_check' or 'storylenses_list_archetypes', nor does it mention any exclusions or prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
storylenses_generate_letter (Grade: A)
Generate a story-driven cover letter using matched data and a narrative archetype. Supports en/de/es/pt.
| Name | Required | Description | Default |
|---|---|---|---|
| tone | No | Writing tone — professional, conversational, confident, etc. | professional |
| length | No | Letter length — short (150-200 words), medium (250-350), or full (400-500) | medium |
| locale | No | Output language (en, de, es, or pt) | en |
| archetype | No | Narrative archetype ID (use storylenses_list_archetypes to see options) | golden-fleece |
| match_data | Yes | Match data from storylenses_match_profile | |
| job_analysis | Yes | Job analysis from storylenses_analyze_job | |
| candidate_name | Yes | Candidate's full name for the letter greeting | |
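An illustrative call, assuming jobAnalysis and matchData hold the outputs of storylenses_analyze_job and storylenses_match_profile (both documented further down this page). The server publishes no output schema, so passing those tool results through unchanged is an assumption.

```typescript
const letter = await client.callTool({
  name: "storylenses_generate_letter",
  arguments: {
    candidate_name: "Jane Doe",  // required: used in the greeting
    job_analysis: jobAnalysis,   // required: from storylenses_analyze_job (shape assumed)
    match_data: matchData,       // required: from storylenses_match_profile (shape assumed)
    archetype: "golden-fleece",  // default; see storylenses_list_archetypes
    tone: "professional",        // default
    length: "medium",            // 250-350 words
    locale: "en",                // output language
  },
});
```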
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive tool. The description adds useful context about language support ('Supports en/de/es/pt') and the story-driven approach, but doesn't disclose behavioral traits like rate limits, authentication needs, or output format details beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences: the first states the core function and inputs, the second adds language support. Every word earns its place with zero redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 7 parameters, nested objects, and no output schema, the description is adequate but minimal. It covers the basic purpose and language support, but doesn't explain the story-driven approach, how archetypes affect output, or what the generated letter looks like, leaving gaps given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 7 parameters thoroughly. The description adds minimal value by mentioning 'matched data' and 'narrative archetype' (which map to match_data and archetype parameters), but doesn't provide additional semantic context beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate a story-driven cover letter'), the resources used ('matched data and a narrative archetype'), and distinguishes from siblings by specifying its unique function (cover letter generation vs analysis, listing, matching, or quality checking tools).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by mentioning it uses 'matched data and a narrative archetype,' which implicitly references sibling tools (storylenses_match_profile and storylenses_list_archetypes). However, it doesn't explicitly state when to use this tool versus alternatives or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
storylenses_list_archetypes (Grade: B, Read-only)
Return available narrative archetypes with descriptions so the agent or user can select a style
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Description language (en, de, es, or pt) | en |
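An illustrative call with the same client; worth making before generation, since storylenses_generate_letter falls back to the golden-fleece archetype when none is chosen.

```typescript
// Returns archetype IDs and descriptions for use as the `archetype`
// argument of storylenses_generate_letter.
const archetypes = await client.callTool({
  name: "storylenses_list_archetypes",
  arguments: { locale: "en" }, // description language (default: en)
});
```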
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds minimal behavioral context by implying a list of archetypes with descriptions, but doesn't detail format, pagination, or other traits like rate limits or auth needs. It doesn't contradict annotations, so a baseline 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states the purpose clearly without unnecessary words. It's front-loaded with the main action; it could be made slightly more informative by explicitly mentioning the parameter or output format, but overall it's concise and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, high schema coverage, no output schema), the description is adequate but minimal. It covers the basic purpose but lacks details on usage context or behavioral nuances, making it just sufficient for a straightforward list operation without being fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the locale parameter fully documented in the schema (including enum values and default). The description doesn't add any parameter-specific information beyond what the schema provides, so it meets the baseline score of 3 for high coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('return available narrative archetypes') and the resource ('narrative archetypes'), and mentions the purpose ('so the agent or user can select a style'). However, it doesn't explicitly differentiate from sibling tools like 'storylenses_match_profile' or 'storylenses_analyze_job', which might also involve archetypes in different contexts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context (e.g., before generating content), or exclusions, leaving the agent to infer usage from the purpose alone without explicit direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
storylenses_match_profile (Grade: A, Read-only)
Match a candidate profile/CV against job data — identifies fit score, matching skills, career gaps, and strongest narrative angle
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Response language (en, de, es, or pt) | en |
| candidate_cv | Yes | Candidate's CV or resume as plain text | |
| job_analysis | Yes | Job analysis output from storylenses_analyze_job | |
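An illustrative call, assuming jobAnalysis holds the result of an earlier storylenses_analyze_job call (with no output schema published, the payload shape is an assumption); the CV text is invented.

```typescript
const matchData = await client.callTool({
  name: "storylenses_match_profile",
  arguments: {
    candidate_cv: "Jane Doe. Eight years of backend engineering at ...", // plain-text CV (invented)
    job_analysis: jobAnalysis, // output of storylenses_analyze_job (shape assumed)
    locale: "en",              // response language (default: en)
  },
});
```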
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe, non-destructive operation. The description adds value by specifying outputs (fit score, matching skills, career gaps, narrative angle), but does not disclose additional behavioral traits like rate limits, authentication needs, or error handling beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and outcomes without wasted words, making it quick to parse and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, nested objects, no output schema) and rich annotations, the description is mostly complete. It explains the purpose and outputs well, but could benefit from more detail on usage context or prerequisites to fully compensate for the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no specific parameter semantics beyond what the schema provides, such as format details for 'candidate_cv' or structure of 'job_analysis', meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('match', 'identifies') and resources ('candidate profile/CV', 'job data'), and distinguishes it from siblings by focusing on matching rather than analyzing jobs, generating letters, listing archetypes, or quality checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'job data' and referencing 'storylenses_analyze_job' in the schema, but it does not explicitly state when to use this tool versus alternatives like 'storylenses_generate_letter' or provide exclusions, leaving some guidance gaps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
storylenses_quality_check (Grade: A, Read-only)
Score and evaluate a cover letter for relevance, narrative strength, and completeness. Returns score 0-100 with actionable feedback.
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Feedback language (en, de, es, or pt) | en |
| letter_text | Yes | The cover letter text to evaluate (minimum 200 characters) | |
| job_analysis | Yes | Job analysis from storylenses_analyze_job for context | |
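An illustrative call to close the loop, assuming letterText holds the plain text of a letter produced by storylenses_generate_letter (how that text is extracted from the tool result is an assumption, as no output schema is published).

```typescript
const review = await client.callTool({
  name: "storylenses_quality_check",
  arguments: {
    letter_text: letterText,   // must be at least 200 characters
    job_analysis: jobAnalysis, // same analysis used for generation
    locale: "en",              // feedback language (default: en)
  },
});
// The result carries a 0-100 score plus actionable feedback.
```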
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds valuable behavioral context beyond annotations by specifying the output format ('score 0-100 with actionable feedback') and evaluation criteria ('relevance, narrative strength, and completeness'), which helps the agent understand what to expect. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and output without any wasted words. Every element ('score and evaluate', dimensions, output) earns its place by providing essential information concisely.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (evaluating cover letters with job context), rich annotations (readOnlyHint, etc.), and 100% schema coverage, the description is mostly complete. It specifies evaluation dimensions and output format, but lacks details on scoring methodology or feedback structure, which could be useful since there's no output schema. However, it provides enough context for basic agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain the structure of 'job_analysis' or provide examples). With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('score and evaluate') and resource ('a cover letter'), specifying the evaluation dimensions ('relevance, narrative strength, and completeness') and output ('score 0-100 with actionable feedback'). It distinguishes from siblings like storylenses_analyze_job (which analyzes jobs) and storylenses_generate_letter (which creates letters).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'job_analysis from storylenses_analyze_job for context' in the schema, suggesting this tool should be used after analyzing a job. However, it lacks explicit guidance on when to use this tool versus alternatives like storylenses_match_profile or storylenses_generate_letter, and doesn't specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
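Any static file host can serve this; purely as one illustration, here is a minimal Node sketch that answers at the exact well-known path (it would need to sit behind TLS on the server's domain).

```typescript
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

// Serve ./glama.json at the path Glama polls during verification.
createServer(async (req, res) => {
  if (req.url === "/.well-known/glama.json") {
    res.setHeader("Content-Type", "application/json");
    res.end(await readFile("glama.json", "utf8"));
  } else {
    res.writeHead(404).end();
  }
}).listen(8080); // expose via TLS on your server's domain
```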
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.