Archetype
Server Details
Score sales candidates against a proprietary evaluation framework from 10,000+ real interviews. Two tools: generate custom interview scripts and score transcripts with ADVANCE/HOLD/PASS verdicts across 8 signal dimensions.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
2 tools
archetype_prep (Tool Definition Quality: B)
Generate a custom interview script tailored to a specific candidate and role across five revenue functions: Sales, CS, Marketing, BD, and Ops. Built on 10,000+ real interviews with function-specific frameworks, anti-pattern detection, and scoring calibration.
| Name | Required | Description | Default |
|---|---|---|---|
| function | Yes | Revenue function: sales, cs, marketing, bd, or ops. | |
| role_type | Yes | Role type. Sales: ae/enterprise. CS: csm/enterprise_csm. Marketing: marketing_mgr/marketing_leader. BD: bd_mgr/bd_leader. Ops: ops_mgr/ops_leader. | |
| resume_text | Yes | Full resume or LinkedIn text. Not URLs. | |
| candidate_name | Yes | Name of the candidate | |
| additional_context | No | Optional context about the company and role |
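The listing does not show a worked request, so here is a minimal sketch of an MCP `tools/call` invocation of archetype_prep over JSON-RPC. Every argument value below is an illustrative placeholder, not data from the listing; only the parameter names come from the schema above.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "archetype_prep",
    "arguments": {
      "function": "sales",
      "role_type": "ae",
      "candidate_name": "Jordan Example",
      "resume_text": "Pasted resume or LinkedIn profile text, not a URL...",
      "additional_context": "Series B fintech; first AE hire on a new mid-market team"
    }
  }
}
```

Note that `resume_text` must be the pasted text itself; the schema explicitly rejects URLs.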
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable methodological context ('10,000+ real interviews', 'anti-pattern detection', 'scoring calibration') indicating output quality and content traits, but omits operational behaviors like idempotency, side effects, storage persistence, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences with zero waste. The first sentence front-loads the core action and scope; the second provides supporting credibility context. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate but with clear gaps. Given no output schema exists, the description should ideally specify the return format (structured JSON vs plain text script) and whether the generated script includes scoring rubrics or just questions. Without annotations, behavioral disclosure is also incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description mentions 'tailored to a specific candidate and role' which aligns with the parameters but does not add syntax details, format constraints, or examples beyond what the schema already provides (e.g., schema already specifies 'Not URLs' for resume_text).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Generate' and resource 'custom interview script', clearly defining the scope across five revenue functions. However, it lacks explicit differentiation from sibling tool 'archetype_score' (e.g., stating this is for preparation while the sibling is for evaluation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, prerequisites (e.g., having a resume ready), or when-not-to-use scenarios. The agent must infer usage solely from the 'prep' naming convention and the sibling 'archetype_score'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
archetype_score (Tool Definition Quality: A)
Score a candidate from an interview transcript. Returns a full evaluation with 8 trait scores, qualitative analysis, and ADVANCE/HOLD/PASS verdict across five revenue functions: Sales, CS, Marketing, BD, and Ops. Each function uses its own scoring framework with unique traits, weights, and anti-pattern detection.
| Name | Required | Description | Default |
|---|---|---|---|
| function | Yes | Revenue function: sales, cs, marketing, bd, or ops. | |
| role_type | Yes | Role type within the function. | |
| resume_text | No | Optional resume text | |
| candidate_name | Yes | Name of the candidate | |
| hiring_criteria | No | Optional specific requirements to evaluate | |
| transcript_text | Yes | Full interview transcript | |
| additional_context | No | Optional context |
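As with archetype_prep, here is a hedged sketch of a `tools/call` request, again with placeholder values. Only `function`, `role_type`, `candidate_name`, and `transcript_text` are required; the optional fields are included just to show the shape.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "archetype_score",
    "arguments": {
      "function": "cs",
      "role_type": "csm",
      "candidate_name": "Jordan Example",
      "transcript_text": "Interviewer: Walk me through a renewal you saved...\nCandidate: ...",
      "hiring_criteria": "Must have owned a book of 50+ accounts",
      "additional_context": "Backfill for a senior CSM covering EMEA"
    }
  }
}
```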
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses key behavioral traits: returns structured evaluation with 8 trait scores, qualitative analysis, and ternary verdict. Uniquely explains that each function employs distinct scoring frameworks with 'unique traits, weights, and anti-pattern detection'—critical context for an evaluation tool. Missing operational characteristics (idempotency, data persistence, rate limits).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. First sentence establishes core purpose. Second details output structure and scope. Third explains behavioral variation across functions. Information-dense with no filler, well-structured with purpose front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by detailing return structure (8 trait scores, qualitative analysis, verdict). Explains domain context (revenue functions) and evaluation methodology (anti-patterns). With 7 parameters and 100% schema coverage, input side is well-handled. Lacks only operational metadata (error handling, idempotency) to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description adds marginal semantic value by framing the 'function' parameter values as 'revenue functions' and explaining that 'role_type' selections trigger different scoring frameworks with unique weights and anti-patterns. Does not add syntax details or parameter interaction rules beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Score' and clear resource 'candidate from an interview transcript'. It distinguishes scope by specifying five revenue functions (Sales, CS, Marketing, BD, Ops) and detailed output format (8 trait scores, ADVANCE/HOLD/PASS verdict), clearly differentiating from sibling 'archetype_prep' which likely handles pre-interview preparation rather than post-interview evaluation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies temporal context through 'interview transcript' (suggesting post-interview use), but lacks explicit when-to-use guidance versus sibling 'archetype_prep' or prerequisites. No mention of whether to use this for initial screening vs final evaluation, or when to prefer one function/role_type over another.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
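Because archetype_score publishes no output schema (a gap the assessment above flags), the response sketch below is purely an assumption assembled from the description's stated contents: 8 trait scores, qualitative analysis, and a ternary verdict. The field names and the two trait names shown are invented for illustration; the real traits vary by function, and the actual shape may differ entirely.

```json
{
  "candidate_name": "Jordan Example",
  "function": "cs",
  "role_type": "csm",
  "verdict": "ADVANCE",
  "trait_scores": {
    "discovery_depth": 4,
    "coachability": 3
  },
  "qualitative_analysis": "Free-text narrative on strengths, risks, and any anti-patterns observed..."
}
```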
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
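If you want to verify reachability yourself before contacting support, one option is to POST an MCP initialize request to the server's Streamable HTTP endpoint (with an `Accept: application/json, text/event-stream` header, per the MCP spec) and check whether a result with serverInfo comes back. A minimal request body, assuming the 2025-03-26 protocol revision:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "health-check", "version": "0.0.1" }
  }
}
```

An HTTP error, timeout, or auth challenge here maps to the outage, wrong-URL, or missing-credentials cases above.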
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.