Greenhouse
Server Details
Greenhouse MCP Pack — wraps the Greenhouse Harvest API v1
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-greenhouse
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.9/5.
The tools are mostly distinct: ask_pipeworx is a natural language interface, discover_tools is for tool discovery, and the Greenhouse tools clearly target specific ATS operations. However, ask_pipeworx's description mentions it 'picks the right tool', which could overlap with discover_tools in purpose, causing slight ambiguity.
The naming is mixed: Greenhouse tools use a consistent 'greenhouse_verb_noun' pattern, but ask_pipeworx and discover_tools use verb phrases, and remember/recall/forget use single verbs. This inconsistency in style makes the set feel less coherent.
With 10 tools, the count is reasonable for the apparent scope. The Greenhouse subset (5 tools) feels slightly limited for a full ATS, but the inclusion of memory tools and the general-purpose ask_pipeworx and discover_tools justifies the count.
The Greenhouse coverage includes basic get and list operations but lacks create, update, or delete for candidates/jobs/applications, which are notable gaps. The memory tools (remember/recall/forget) form a complete small subsystem, but the overall server feels incomplete for managing a full ATS workflow.
Available Tools
10 tools

ask_pipeworx (grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
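As a sketch of what a call to this tool looks like on the wire (assuming a standard MCP `tools/call` request; the payload below is illustrative, not taken from the server's documentation):

```python
import json

# Illustrative MCP tools/call request for ask_pipeworx.
# Only "question" is required; there are no other parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```

The question string is free-form natural language; Pipeworx picks the downstream tool and fills its arguments itself.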
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that Pipeworx picks the right tool and fills arguments, implying delegation and possibly rate limits or access to external data. It could be more explicit about limitations, but the description is clear about the delegation behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences), front-loaded with the core functionality, and uses examples to illustrate usage without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter (1 string) and no output schema, the description is largely complete. It explains what the tool does, how to use it, and gives examples. However, it could mention that the tool may have latency or access constraints due to delegation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'question' is fully covered by the schema description. The description adds examples and context that clarify the parameter's usage (natural language request), but since schema coverage is 100%, the description adds modest value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers plain English questions by selecting the best data source and filling arguments, with concrete examples like 'What is the US trade deficit with China?' This distinguishes it from sibling tools which are specific to Greenhouse or memory operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides guidance to use this tool instead of browsing other tools or schemas, and gives examples of appropriate questions. However, it does not explicitly state when NOT to use this tool or mention alternative tools for specific cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
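A minimal sketch of a `tools/call` payload for this tool, assuming the documented limit cap; the request is illustrative, not from the server's docs:

```python
import json

# Illustrative tools/call request for discover_tools.
# "query" is required; "limit" defaults to 20 and is capped at 50,
# so clamp any larger requested value before sending.
requested_limit = 80
arguments = {
    "query": "find trade data between countries",
    "limit": min(requested_limit, 50),  # respect the documented max
}
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "discover_tools", "arguments": arguments},
}
print(json.dumps(request))
```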
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral transparency. It explains that the tool returns relevant tools with names and descriptions, which is a basic behavioral description. However, it does not disclose details like whether the search is semantic or keyword-based, if there are any side effects, or if authentication is needed. Given no annotations, a score of 3 is appropriate as it provides some behavioral context but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: three sentences that are front-loaded with the core action, followed by usage guidance and a clear call to action. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no nested objects), the description is nearly complete. It explains what the tool does, when to use it, and the key parameter. A slight gap is the lack of information about the return format (e.g., does it return full descriptions or just names?), but overall it's sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, so the description does not need to elaborate on parameters. However, the description adds value by explaining the purpose of the tool in relation to the query parameter (describe what you need). It also clarifies the limit parameter by noting default (20) and max (50) values, which goes beyond the schema description. This is helpful but not fully essential, earning a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions.' It uses a specific verb ('Search') and resource ('Pipeworx tool catalog'), and distinguishes itself from sibling tools like ask_pipeworx or greenhouse_* tools by focusing on tool discovery, not data retrieval or management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on the tool's role as a starting point for tool selection, with a specific condition (500+ tools) and a clear directive ('Call this FIRST').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. The description clearly states that the tool deletes a memory, implying a destructive action. However, it does not disclose any additional behavioral traits (e.g., whether deletion is permanent, cascading effects, or confirmation required). It meets the minimum by clearly indicating the operation type.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with one sentence and no wasted words. It is front-loaded and clearly communicates the action and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description is nearly complete. It covers the action and the required identifier. The only minor gap is that it does not explain the return value or error cases, but given the simplicity, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single required parameter 'key' and its description. The description adds minimal meaning beyond the schema—it simply restates that deletion is by key. With full schema coverage, a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Delete') and specifies the resource ('stored memory') and the identifier ('by key'). It distinguishes itself from sibling tools like 'remember' (store) and 'recall' (retrieve) by stating the delete operation explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (when you need to delete a memory), but does not provide guidance on when not to use it or alternatives. Since sibling tools include 'remember' and 'recall', the usage is clear but no exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
greenhouse_get_candidate (grade: C)
Get full candidate profile by ID. Returns resume, contact info, application history, interviews, and notes.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID | |
| _apiKey | Yes | Greenhouse Harvest API key | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states it retrieves a single candidate but doesn't mention read-only nature, error handling, or rate limits. The description is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with purpose. It is concise and easy to parse, but could be slightly more informative without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple parameters, the description is minimal. It doesn't explain return format, error responses, or candidate structure. For a simple retrieval tool, more completeness is expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both 'id' and '_apiKey' have descriptions). The description adds 'single candidate by ID' context but doesn't enhance parameter meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a single candidate by ID from Greenhouse, which is specific. However, it doesn't differentiate from sibling tools like 'greenhouse_list_candidates', which lists candidates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. It lacks context about prerequisites (e.g., needing the candidate ID) or when listing might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
greenhouse_get_job (grade: C)
Get complete job details by ID. Returns description, requirements, hiring team, and linked applications.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID | |
| _apiKey | Yes | Greenhouse Harvest API key | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries full burden. It does not disclose any behavioral traits such as idempotency, error handling, or what happens if the ID doesn't exist. Only states a basic retrieval action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no wasted words, directly states the purpose. Appropriate length for a simple retrieval tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and simple parameters, but the description is minimal. For a retrieval tool, it lacks details like return format or behavior on error; a single additional sentence covering those would noticeably improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description adds minimal value. It doesn't clarify parameter semantics beyond what the schema provides (e.g., the 'id' is a Job ID). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get'), the resource ('a single job'), and the source ('from Greenhouse'), providing good purpose clarity. It does not differentiate from siblings like 'greenhouse_list_jobs', but the singular nature is implied.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. alternatives like 'greenhouse_list_jobs'. No context about prerequisites or exclusions, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
greenhouse_list_applications (grade: B)
View job applications across your pipeline. Returns applicant names, job IDs, application status, and submission dates. Filter by job or stage (e.g., 'screening', 'interview').
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | 1 |
| job_id | No | Filter by job ID (optional) | |
| status | No | Filter by status: active, converted, hired, rejected (optional) | |
| _apiKey | Yes | Greenhouse Harvest API key | |
| per_page | No | Results per page (max 500, default 50) | 50 |
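A sketch of a `tools/call` payload combining the optional filters; the job ID and API key values are placeholders, and the request shape assumes standard MCP, not anything server-specific:

```python
import json

# Allowed values per the status filter's schema description.
ALLOWED_STATUSES = {"active", "converted", "hired", "rejected"}

args = {
    "_apiKey": "YOUR_HARVEST_API_KEY",  # placeholder, not a real key
    "job_id": "12345",                  # hypothetical job ID
    "status": "active",
    "page": 1,
    "per_page": min(200, 500),          # per_page is capped at 500
}
assert args["status"] in ALLOWED_STATUSES  # validate before sending

request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "greenhouse_list_applications", "arguments": args},
}
print(json.dumps(request))
```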
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It says 'list applications' implying a read operation, which is clear. However, it doesn't disclose pagination behavior, rate limits, or authentication specifics beyond what the schema indicates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Front-loaded with the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, no output schema, and no annotations, the description is minimal. It covers the basic purpose but lacks details about pagination, filtering behavior, or expected output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explain parameter semantics beyond what the schema provides. Since schema coverage is 100%, baseline is 3. The description adds no additional meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists job applications from Greenhouse ATS, which is clear and specific. It distinguishes itself from sibling tools like greenhouse_list_candidates and greenhouse_list_jobs, but could be more explicit about the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like greenhouse_list_candidates or greenhouse_list_jobs. No mention of prerequisites or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
greenhouse_list_candidates (grade: C)
Search candidates in your ATS. Returns names, IDs, email addresses, and application status. Use greenhouse_get_candidate for full profile details.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | 1 |
| _apiKey | Yes | Greenhouse Harvest API key | |
| per_page | No | Results per page (max 500, default 50) | 50 |
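Since the description leaves pagination behavior implicit, here is one plausible client-side paging loop, with the actual MCP round trip replaced by a stub; `fetch_page` and the candidate count are stand-ins, not part of the server:

```python
# Sketch of paging through greenhouse_list_candidates results.
# fetch_page is a stand-in for an MCP tools/call round trip.
def fetch_page(page, per_page=50):
    # Stub: pretend the server holds 120 candidates in total.
    total = 120
    start = (page - 1) * per_page
    return [f"candidate-{i}" for i in range(start, min(start + per_page, total))]

def list_all_candidates(per_page=50):
    page, results = 1, []
    while True:
        batch = fetch_page(page, per_page)
        results.extend(batch)
        if len(batch) < per_page:  # a short page means we've reached the end
            break
        page += 1
    return results

print(len(list_all_candidates()))  # 120 with the stub above
```

A short final page is the usual stop condition when an API reports no total count; if the real server returns an explicit count or next-page link, prefer that instead.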
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose any behavioral traits beyond what the input schema provides. There are no annotations (e.g., readOnlyHint, destructiveHint), so the description carries full burden but fails to mention that listing is a read-only operation, whether it requires specific permissions, or how pagination behaves. The schema indicates pagination via page and per_page, but the description does not confirm pagination behavior or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded. It wastes no words and immediately conveys the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 3 parameters (including a required API key) and no output schema. The description is minimal and does not explain return format, error conditions, or how pagination works. For a list tool with pagination, more context is needed to ensure correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters have descriptions in the schema. The description 'List candidates from Greenhouse ATS.' adds no extra meaning beyond the schema. Baseline 3 is appropriate as the schema already documents parameters well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List candidates from Greenhouse ATS.' clearly states the action (list) and the resource (candidates from Greenhouse ATS). It is distinct from sibling tools like 'greenhouse_get_candidate' which focuses on a single candidate, and 'greenhouse_list_applications' which lists applications.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For instance, it does not explain when to use 'greenhouse_list_candidates' instead of 'greenhouse_get_candidate' or 'greenhouse_list_applications'. The description lacks any usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
greenhouse_list_jobs (grade: B)
Browse open and closed job postings. Returns titles, IDs, departments, statuses, and posting dates. Use greenhouse_get_job for full details and candidate pipeline.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | 1 |
| status | No | Filter by status: open, closed, draft (optional) | |
| _apiKey | Yes | Greenhouse Harvest API key | |
| per_page | No | Results per page (max 500, default 50) | 50 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It does not disclose pagination behavior, rate limits, or whether the API key requires specific permissions. However, the schema provides parameter details, partially compensating.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no waste. Could be slightly more informative without harming conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple list function, the description is adequate but missing details on pagination and filtering. It does not describe return format or edge cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little beyond listing the resource. It does not explain parameter relationships (e.g., pagination with page/per_page) or status filter behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('list') and resource ('jobs from Greenhouse ATS'). While it distinguishes the tool from siblings like greenhouse_get_job, it could be more specific about the scope (e.g., all jobs vs. filtered).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like greenhouse_get_job or greenhouse_list_applications. The description lacks context for usage decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. Discloses that memory persists across sessions, but does not specify if retrieval modifies memory or returns metadata (like timestamps). Adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with core action, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description could specify return format (e.g., string or object). However, parameter is simple and behavioral expectations are mostly met.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear description for 'key'. Description adds context that omitting key lists all memories, which is useful beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves a memory by key or lists all if key omitted. Distinguishes from 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use (retrieve context saved earlier) and hints at alternative (omit key for listing). Lacks explicit when-not-to-use compared to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
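The three memory tools form a store/retrieve/delete lifecycle. A sketch of the full sequence as MCP `tools/call` payloads; the key and value are hypothetical examples, and the request shape assumes standard MCP:

```python
import json

# Build an MCP tools/call payload for a given tool and arguments.
def call(name, arguments, id_):
    return {
        "jsonrpc": "2.0",
        "id": id_,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# remember -> recall -> list all -> forget, using a hypothetical key.
calls = [
    call("remember", {"key": "target_ticker", "value": "AAPL"}, 1),
    call("recall", {"key": "target_ticker"}, 2),
    call("recall", {}, 3),  # omit key to list all stored keys
    call("forget", {"key": "target_ticker"}, 4),
]
for c in calls:
    print(json.dumps(c))
```

Note the asymmetry the descriptions document: `remember` and `forget` require a key, while `recall` treats a missing key as "list everything".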
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses behavioral traits: key-value storage, session memory, persistence differences (authenticated persistent vs 24-hour anonymous). This is sufficient for a simple memory tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. First sentence defines purpose, second sentence gives usage guidance, third sentence adds behavioral context. Front-loaded with the essential action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (simple key-value store, 2 required params, no output schema), description is complete. It explains what to store, why, and persistence behavior. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of storing findings and context, though it doesn't elaborate on parameter formats beyond the schema. The example values in the schema already provide good guidance, so the description's added value is moderate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool stores a key-value pair in session memory, specifying the verb 'store' and the resource 'session memory'. It distinguishes from sibling 'recall' by implication (store vs retrieve) and from 'forget' (opposite operation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides explicit usage context: save intermediate findings, user preferences, or context across tool calls. It mentions memory persistence differences for authenticated vs anonymous users, but does not explicitly state when not to use it or compare to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
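Before publishing, you can sanity-check the file's structure locally. The sketch below validates only the fields documented above (`maintainers` with at least one `email`); `validate_manifest` is a hypothetical helper, not part of any Glama tooling.

```python
def validate_manifest(manifest: dict) -> bool:
    """Check the minimal glama.json structure documented above:
    a non-empty "maintainers" list whose entries each carry an email."""
    maintainers = manifest.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        return False
    return all(
        isinstance(m.get("email"), str) and "@" in m["email"]
        for m in maintainers
    )

example = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
print(validate_manifest(example))  # True
```

Remember that Glama also checks that the listed email matches your Glama account, which a local structural check cannot verify.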
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!