BambooHR
Server Details
BambooHR MCP Pack — wraps the BambooHR API v1
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-bamboohr
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 10 of 10 tools scored. Lowest: 2.4/5.
Most BambooHR tools are distinct (e.g., directory vs. employee details vs. time-off), but ask_pipeworx and discover_tools overlap in purpose (both are for finding tools or information), and the memory tools (forget, recall, remember) are unrelated to HR, which causes confusion.
BambooHR tools follow a consistent verb_noun pattern (e.g., bamboohr_get_directory), but non-BambooHR tools (ask_pipeworx, discover_tools, memory tools) break the pattern with different naming conventions.
10 tools is a reasonable count, but half are unrelated to BambooHR (the Pipeworx and memory tools), making the set feel padded. A dedicated HR server would be better served by 5-7 focused tools.
The HR tools cover basic directory and employee retrieval but lack critical operations like creating/updating employees, time-off management (only listing), or file uploads. The unrelated tools don't fill the gaps.
Available Tools
10 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
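Since ask_pipeworx takes a single natural-language argument, an agent's invocation reduces to a standard MCP tools/call request. A minimal sketch of that payload, assuming the JSON-RPC framing the MCP protocol uses (the id and question values are placeholders):

```python
import json

# Illustrative MCP "tools/call" request an agent might send to invoke
# ask_pipeworx; the id and question values are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What is the US trade deficit with China?",
        },
    },
}

payload = json.dumps(request)
```

The server then routes the question to whichever underlying data source it judges most relevant.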
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains the tool's behavior: it picks the right tool, fills arguments, and returns the result. It does not contradict any annotations (none provided). It provides enough transparency for an agent to understand the delegated nature of this tool, though it could mention any limitations (e.g., what data sources are available).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the core purpose. It includes examples to clarify usage. It could be slightly more structured (e.g., separate the examples) but is effective and not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is complete enough. It explains the core functionality and provides examples. It does not need to detail return values as the response is dynamic. The tool's delegated nature is well communicated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'question' with a description. The description adds value by explaining that the question should be in natural language and that the tool handles the rest. With 100% schema coverage, the baseline is 3, and the description does not add new parameter-level details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answering plain English questions by automatically selecting the best data source and filling arguments. It provides concrete examples that illustrate the scope, making it easy to distinguish from sibling tools like 'bamboohr_get_employee' which are specific to BambooHR.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to 'just describe what you need' and gives examples, implying when to use it: for any question where the user wants the system to choose the tool. However, it does not explicitly say when not to use it or mention alternatives, though the sibling tools are clearly different.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bamboohr_get_directory (Grade: B)
Get complete employee directory with names, titles, departments, contact info, and manager assignments for all staff.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | BambooHR API key | |
| _subdomain | Yes | BambooHR subdomain | |
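Given that this pack wraps BambooHR API v1, the tool presumably maps onto the documented directory endpoint. A minimal sketch of that underlying call, assuming the standard v1 gateway URL and Basic-auth scheme (the subdomain and API key are placeholders):

```python
import base64
import urllib.request

# Sketch of the BambooHR v1 request this tool likely wraps. BambooHR
# uses HTTP Basic auth with the API key as the username and any string
# (conventionally "x") as the password; values below are placeholders.
subdomain = "mycompany"
api_key = "YOUR_API_KEY"

url = f"https://api.bamboohr.com/api/gateway.php/{subdomain}/v1/employees/directory"
token = base64.b64encode(f"{api_key}:x".encode()).decode()
req = urllib.request.Request(url, headers={
    "Authorization": f"Basic {token}",
    "Accept": "application/json",  # the API defaults to XML otherwise
})
# urllib.request.urlopen(req) would return the JSON directory.
```

The _apiKey and _subdomain parameters above correspond directly to the two pieces of this URL and header.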
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates it returns 'basic info for all employees', which implies a read-only operation and broad scope. However, with no annotations present, the description carries the full burden. It does not disclose details like pagination, response format, or whether the operation might be slow for large directories. The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the core purpose efficiently. It is front-loaded with the verb and resource. No unnecessary words. However, it could benefit from a brief usage note.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 simple params, no output schema, no annotations), the description is mostly sufficient. It states the tool returns a directory of all employees with basic info, which is reasonable. However, without any output schema, a brief note on what 'basic info' includes would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameters are fully described in the schema. The description does not add any additional meaning beyond what the schema provides (e.g., no mention of required credentials or how to obtain them). Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves an employee directory from BambooHR with basic info for all employees, specifying both the source system and the scope (all employees). It uses a specific verb ('Get') and resource ('employee directory'), distinguishing it from siblings like bamboohr_get_employee which targets a single employee.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives. It fails to mention when to prefer this over bamboohr_list_employees or bamboohr_get_employee. No usage context or exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bamboohr_get_employee (Grade: B)
Get detailed employee info by ID (e.g., "12345"). Specify fields like firstName, lastName, email, department. Returns requested data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Employee ID | |
| fields | Yes | Comma-separated field names (e.g., "firstName,lastName,department,jobTitle,workEmail") | |
| _apiKey | Yes | BambooHR API key | |
| _subdomain | Yes | BambooHR subdomain | |
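The id and fields parameters map naturally onto BambooHR's single-employee endpoint. A hypothetical sketch of the request URL this tool likely builds, reusing the example values from the parameter table (subdomain is a placeholder):

```python
from urllib.parse import urlencode

# Hypothetical construction of the v1 employee lookup this tool appears
# to wrap; the id and field list mirror the examples in the description.
subdomain = "mycompany"
employee_id = "12345"
fields = "firstName,lastName,department,jobTitle,workEmail"

url = (
    f"https://api.bamboohr.com/api/gateway.php/{subdomain}"
    f"/v1/employees/{employee_id}?" + urlencode({"fields": fields})
)
```

Only the fields named in the comma-separated list are returned, which is why the tool's response shape varies per call.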
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It indicates this is a read operation (get details) and that fields can be specified, which is basic. However, it does not disclose potential side effects (none expected), rate limits, or authentication requirements beyond the schema's required parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short at one sentence, which is efficient. It conveys the core purpose immediately. It could be slightly improved by front-loading the most critical info, but it's already concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (get employee details), the description is adequate but minimal. There is no output schema, so the agent might wonder about the return format. However, the description together with the schema provides enough to use the tool correctly. Lacks details like error handling or behavior if ID is invalid.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters clearly. The description adds no additional meaning beyond stating to 'specify which fields to retrieve', which is already clear from the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'details for a specific employee by ID', and distinguishes it from sibling tools like 'bamboohr_list_employees' and 'bamboohr_get_directory'. It specifies the action and the scope (by ID) with no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention that to get a list of employees first you would use 'bamboohr_list_employees', nor does it specify any prerequisites (e.g., knowing the employee ID).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bamboohr_get_employee_files (Grade: C)
Get files in an employee's profile by ID. Returns file names, upload dates, and file types.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Employee ID | |
| _apiKey | Yes | BambooHR API key | |
| _subdomain | Yes | BambooHR subdomain | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose any behavioral traits. Annotations are empty, so the description carries the full burden. It does not mention read-only nature, pagination, or data format. Since annotations are missing, the description should compensate but fails to.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loading the purpose. It is appropriately short, though it could include more detail without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is insufficient. The tool requires authentication parameters but the description does not mention authentication steps or context. The tool's complexity is low, but completeness is lacking for a file list retrieval.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameters are described with basic descriptions (Employee ID, API key, subdomain). However, the description does not add any additional meaning beyond what the schema provides, missing the chance to explain how to obtain the employee ID or any constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a list of files for an employee, but does not specify what information about the files is returned (e.g., file names, IDs). It distinguishes itself from sibling tools by focusing on employee files, but lacks detail about the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus other BambooHR tools, such as when to use this vs. bamboohr_get_employee. There are no prerequisites or context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bamboohr_list_employees (Grade: C)
List all employees with directory info. Returns IDs, names, departments, job titles, and contact details.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | BambooHR API key | |
| _subdomain | Yes | BambooHR subdomain (e.g., "mycompany" from mycompany.bamboohr.com) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It does not disclose any behavioral traits such as whether it returns all fields, pagination, or rate limits. Simply states it returns a directory, which is vague.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very short, but could be more informative. The second sentence is somewhat redundant with the first. Could benefit from additional context without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no output schema, no nested objects), the description is minimally adequate but lacks details on what 'directory' includes (e.g., fields returned). With sibling tools that may overlap, more clarity is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented in the schema. The description adds no additional semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it lists employees from BambooHR and returns a directory, which is clear. However, it doesn't distinguish this from sibling tool bamboohr_get_directory, which likely also returns employee directory info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like bamboohr_get_directory or bamboohr_get_employee. The description lacks any context on usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bamboohr_list_timeoff (Grade: C)
Search time-off requests by date range (e.g., "2024-01-01" to "2024-12-31"). Returns approved/pending requests with employee names and absence types.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End date (YYYY-MM-DD) | |
| start | Yes | Start date (YYYY-MM-DD) | |
| _apiKey | Yes | BambooHR API key | |
| _subdomain | Yes | BambooHR subdomain | |
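Since both dates must be YYYY-MM-DD and define a range, a caller can cheaply validate the arguments before invoking the tool. A minimal sketch, assuming only the format stated in the parameter table (the helper name is hypothetical):

```python
from datetime import date

# Hypothetical pre-flight check for the start/end arguments; the
# YYYY-MM-DD format comes from the parameter descriptions above.
def timeoff_params(start: str, end: str) -> dict:
    s, e = date.fromisoformat(start), date.fromisoformat(end)
    if e < s:
        raise ValueError("end must not precede start")
    return {"start": start, "end": end}
```

date.fromisoformat raises ValueError on malformed input, so an agent can catch both format and ordering mistakes in one place.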
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries full burden. It does not disclose behavioral traits such as whether the tool is read-only, pagination, sorting, or authentication requirements beyond what schema provides.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff. Could be slightly more informative without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 4 required params, no output schema, and empty annotations, the description is minimal. It covers the core action but lacks details on response format, error handling, or use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds no additional meaning to parameters; it only mentions date range but not formats or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states the verb 'list' and resource 'time-off requests' with a date range constraint. It clearly distinguishes from siblings like bamboohr_get_employee, which retrieves a single employee.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like bamboohr_list_employees or other time-off-related tools. No mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions it 'Returns the most relevant tools with names and descriptions,' giving a clear expectation of what the output contains. With no annotations provided, this is valuable behavioral context. It could further explain if the search is semantic or keyword-based, but the current disclosure is strong.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long and front-loaded with the core action. It is concise but could be slightly more structured; however, every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It covers the search intent, usage context, and expected results. The sibling list and context signals further support that no additional information is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds context for the query parameter by providing example natural language queries (e.g., 'analyze housing market trends'), which helps the agent understand the expected input format. It also mentions the limit parameter's default and max values, supplementing the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb+resource combination: 'Search the Pipeworx tool catalog' and clearly states its purpose to find tools by describing needs. It distinguishes itself from siblings like 'ask_pipeworx' by specifying that it searches for tools, not answers questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance and prioritization context, which is especially valuable given the large number of sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavioral traits. It states deletion by key but lacks details on persistence, irreversibility, or side effects. Adequate for a simple delete operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, perfectly concise, front-loaded with action and object. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple single-parameter delete with no output schema or nested objects, description is mostly complete. Could mention confirmation or error behavior, but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (key described in both schema and description). Description adds no extra meaning beyond schema, so baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Delete' and resource 'stored memory by key', clearly distinguishing from siblings like 'recall' (retrieve) and 'remember' (store).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. Sibling tools like 'recall' and 'remember' imply complementary operations, but description doesn't guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description clarifies behavior: key optional, listing all if omitted. Indicates memory persistence across sessions. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two clear sentences, front-loaded with primary action, then listing behavior and usage context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 optional param, no output schema, and simple behavior, description is nearly complete. Could mention return format (string?) but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage 100% but description adds meaning: 'key to retrieve (omit to list all keys)' explains optionality and listing behavior beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories. The verb 'Retrieve' and resource 'stored memory' are specific, and the description distinguishes from 'remember' and 'forget' siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Retrieve context you saved earlier'. Does not exclude scenarios, but no explicit mention of when not to use or alternatives beyond implicit differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry the burden. It discloses memory persistence duration (24 hours for anonymous) and authentication benefits, but doesn't mention potential size limits, overwrite behavior, or whether keys are case-sensitive. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The first sentence defines the action and resource, the second provides usage context and persistence details. Front-loaded with core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store with 2 parameters and no output schema, the description covers purpose, usage guidelines, and key behavioral traits (persistence). Lacks mention of overwrite behavior or value size limits, but overall complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with examples for both key and value, so the description adds little beyond restating the purpose. Baseline 3 is appropriate as schema already documents parameters well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, with specific verbs and resources. It differentiates from siblings like 'recall' (retrieves) and 'forget' (removes).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use (save intermediate findings, user preferences, context across calls) and notes persistence differences for authenticated vs anonymous users, helping the agent decide contextually.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
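Before publishing, it is worth sanity-checking the payload locally so Glama's verification does not fail on a malformed file. A minimal sketch, checking only the structure shown above (the email is a placeholder):

```python
import json

# Local sanity check of a glama.json payload before publishing; the
# required keys follow the structure shown above, email is a placeholder.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert all("@" in m["email"] for m in doc["maintainers"])
```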
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.