Custify MCP Server (Official)
Server Quality Checklist
- Disambiguation 4/5
Most tools have distinct purposes targeting specific Custify resources such as accounts, contacts, notes, tasks, alerts, health scores, segments, usage data, and playbooks. However, list_accounts and search_accounts overlap in retrieving accounts: list_accounts offers richer filtering, while search_accounts provides a simpler lookup by name or domain.
- Naming Consistency 5/5
Tool names follow a highly consistent verb_noun pattern throughout, such as create_note, get_account, list_attributes, run_playbook, and update_custom_fields. All tools use snake_case with clear, predictable naming conventions.
- Tool Count 5/5
With 15 tools, this server is well-scoped for managing a Custify CRM platform. It covers core operations like account/contact management, data retrieval, and automation without being overwhelming or sparse.
- Completeness 4/5
The toolset provides strong coverage for account and contact management, including CRUD-like operations (e.g., get, list, update, create notes/tasks) and advanced features like health scores, segments, and playbooks. Minor gaps include no direct delete operations and limited alert functionality as noted, but agents can handle most workflows effectively.
Average 3.2/5 across 15 of 15 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
- This server provides 15 tools.
No known security issues or vulnerabilities reported.
Tool Scores
create_note
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While 'Create' implies a write/mutation operation, it doesn't specify permissions required, whether notes are editable/deletable, rate limits, or what happens on success/failure. The description is minimal and lacks important behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple creation tool and gets straight to the point without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what a 'note' represents in Custify's context, what happens after creation, or provide any error handling context. The agent would need to guess about important behavioral aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific context beyond what's in the schema, such as format examples or constraints. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a note') and target resource ('on a Custify account'), which is specific and unambiguous. However, it doesn't differentiate from sibling tools like 'create_task' or explain what distinguishes notes from other entities in the system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'create_task' or other sibling tools. It doesn't mention prerequisites, typical use cases, or any context about when note creation is appropriate versus other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
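The gaps flagged above (behavioral disclosure, return value, usage guidance) can be illustrated with a hedged sketch of a richer create_note definition. The annotation keys below are the standard MCP tool hints (readOnlyHint, destructiveHint, idempotentHint); the description wording, return value, and auth details are assumptions for illustration, not Custify's actual text.

```python
# Hypothetical, improved tool definition for create_note. The annotation
# keys mirror MCP's standard tool hints; the description wording is an
# illustration of what the review recommends, not the server's actual text.
improved_create_note = {
    "name": "create_note",
    "description": (
        "Create a note on a Custify account. Notes are free-text records "
        "attached to an account's timeline; use create_task instead when the "
        "item needs an assignee and a due date. Returns the created note's "
        "ID. Requires an API key with write access."
    ),
    "annotations": {
        "readOnlyHint": False,     # a write/mutation operation
        "destructiveHint": False,  # creates data, never deletes or overwrites
        "idempotentHint": False,   # calling twice creates two notes
    },
}

# Quick check that the description now covers the dimensions the review
# flags: differentiation from create_task, return value, and auth.
desc = improved_create_note["description"]
for keyword in ("create_task", "Returns", "API key"):
    assert keyword in desc
```

A definition shaped like this would lift the Behavior, Completeness, and Usage Guidelines scores without sacrificing the tool's 5/5 conciseness.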
create_task
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a constraint about 'assignee_id' but fails to describe critical aspects like authentication requirements, rate limits, error handling, or what happens upon creation (e.g., returns a task ID). This is inadequate for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: one stating the purpose and another providing a specific parameter note. It's front-loaded with the core function, though the second sentence could be integrated more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a creation tool with no annotations and no output schema, the description is insufficient. It lacks details on behavioral traits, return values, error cases, and usage context, making it incomplete for effective agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value by clarifying the 'assignee_id' constraint (ObjectId vs. email), but doesn't provide additional semantic context beyond what's in the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a task') and the resource ('associated with a Custify account'), which provides a specific verb+resource combination. However, it doesn't distinguish this tool from its sibling 'create_note' or other creation tools, missing explicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'create_note' or other sibling tools. It lacks context about prerequisites, appropriate scenarios, or exclusions, offering only a basic functional statement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a 'get' operation, implying read-only behavior, but doesn't address authentication requirements, rate limits, error conditions, or what 'detailed information' specifically includes. The description is minimal and lacks context about the tool's operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any unnecessary words. It's appropriately sized for a simple lookup tool and front-loads the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with no annotations and no output schema, the description is insufficient. It doesn't explain what 'detailed information' includes, how results are structured, or any behavioral constraints. Given the lack of structured metadata, the description should provide more context about the tool's behavior and output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'account_id' fully documented in the schema. The description adds no additional semantic context about the parameter beyond what's already in the schema, so it meets the baseline score of 3 for adequate but not additive documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get detailed information') and the resource ('specific Custify account/company by ID'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this from sibling tools like 'list_accounts' or 'search_accounts' beyond the singular vs. plural distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'list_accounts' or 'search_accounts'. It doesn't mention prerequisites, appropriate contexts, or exclusions, leaving the agent to infer usage patterns from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contact
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a read operation ('Get'), implying it's non-destructive, but doesn't cover aspects like authentication requirements, rate limits, error handling, or response format. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get detailed information about a specific Custify contact by ID'). There is no wasted verbiage or redundancy, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description is minimally adequate but incomplete. It lacks context on behavioral traits (e.g., read-only nature, error cases) and doesn't explain what 'detailed information' entails in the return value. Without annotations or an output schema, the description should provide more guidance on what to expect from the tool's operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'contact_id' documented as 'The Custify contact/customer ID'. The description adds no additional meaning beyond this, such as format examples or constraints. According to the rules, when schema coverage is high (>80%), the baseline score is 3 even without param info in the description, which applies here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('detailed information about a specific Custify contact'), making the purpose unambiguous. It distinguishes this tool from siblings like 'get_contacts' (plural) by specifying retrieval of a single contact by ID. However, it doesn't explicitly contrast with other sibling tools that might also retrieve contact-related data, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a contact ID), exclusions, or comparisons to siblings like 'get_contacts' (which likely lists multiple contacts) or 'search_accounts' (which might involve contacts indirectly). Without such context, an agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contacts
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Get contacts/people,' implying a read-only operation, but doesn't specify if it's paginated (though parameters suggest it), what the output format is, rate limits, authentication needs, or error conditions. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse. There's no wasted verbiage, earning a perfect score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no annotations, no output schema, and 3 parameters, the description is incomplete. It lacks details on behavioral traits (e.g., pagination, error handling), output format, and usage context. While the schema covers parameters well, the overall context for safe and effective use by an AI agent is insufficient, especially for a read operation that might involve multiple results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for 'account_id', 'limit', and 'offset'. The description adds no additional meaning beyond the schema, such as explaining what 'contacts/people' entails or how parameters interact. Since the schema is comprehensive, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'contacts/people associated with a Custify account.' It distinguishes from siblings like 'get_contact' (singular) and 'get_account' by specifying it retrieves multiple contacts linked to an account. However, it doesn't explicitly contrast with other list-like tools (e.g., 'list_accounts'), making it a 4 rather than a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose 'get_contacts' over 'get_contact' (singular), 'search_accounts', or other sibling tools. There's no context about prerequisites, such as needing an account ID, or exclusions, leaving the agent to infer usage from the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
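The missing cross-referencing between get_contact and get_contacts can be sketched with the "use X instead of Y when Z" pattern the review recommends. The wording below is hypothetical, not Custify's actual descriptions.

```python
# Hypothetical descriptions showing explicit sibling disambiguation.
# Wording is illustrative only, not the server's actual text.
contact_tools = {
    "get_contact": (
        "Get detailed information about a specific Custify contact by ID. "
        "Use this when you already have a contact ID; use get_contacts to "
        "list everyone attached to an account."
    ),
    "get_contacts": (
        "Get contacts/people associated with a Custify account, paginated "
        "via limit and offset. Use get_contact to fetch one known contact."
    ),
}

# Each description names its sibling, so an agent can pick correctly
# without inferring from tool names alone.
assert "get_contacts" in contact_tools["get_contact"]
assert "get_contact" in contact_tools["get_contacts"]
```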
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it 'gets' data, implying a read-only operation, but doesn't clarify aspects like whether it requires authentication, has rate limits, returns paginated results, or what happens if the account ID is invalid. For a tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It front-loads the core purpose efficiently, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the return value looks like (e.g., a list of segment names or objects), error conditions, or behavioral traits like idempotency. For a tool with no structured output documentation, the description should provide more context to guide the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already fully documents the single parameter 'account_id'. The description adds no additional semantic context beyond implying it's used to identify the account, which the schema's description ('The Custify company/account ID') already covers. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('all segments that a specific Custify account belongs to'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_account' or 'search_accounts', which might also retrieve account-related data but with different scopes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a valid account ID, or compare it to siblings like 'get_account' (which might retrieve general account info) or 'list_accounts' (which lists multiple accounts). This leaves the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe how it behaves: no information about authentication requirements, rate limits, pagination, error handling, or what the output looks like. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get usage/event data') and mentions key optional filters. There's no wasted verbiage, though it could potentially be structured to separate purpose from filtering details more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a data retrieval tool with 5 parameters and no annotations or output schema, the description is insufficient. It doesn't explain what 'usage/event data' entails, how results are formatted, whether there are limitations on date ranges, or authentication requirements. For a tool that likely returns structured data, more context is needed to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description mentions optional filtering by event name and date range, which aligns with the schema but doesn't add meaningful semantic context beyond what's already in the parameter descriptions. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get usage/event data for a Custify account' with optional filtering by event name and date range. It specifies the verb ('Get') and resource ('usage/event data'), but doesn't explicitly differentiate from sibling tools like 'get_usage_trends' which might provide similar data in a different format or scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions optional filtering but doesn't compare it to sibling tools like 'get_usage_trends' or explain scenarios where one would be preferred over the other. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_accounts
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It only states what the tool does (searching), but doesn't mention whether this is a read-only operation, what permissions are required, whether results are paginated, or what format the results take. For a search tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's appropriately sized for a simple search tool and front-loads the core functionality. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is insufficiently complete. It doesn't explain what the search returns (account objects? minimal data?), how results are ordered, or whether there are limitations like partial matches. For a search tool that likely returns structured data, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description mentions searching 'by name or domain' which aligns with the query parameter's purpose, but adds no additional semantic context beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching Custify accounts/companies by name or domain. It specifies the verb 'search' and resource 'accounts/companies', but doesn't explicitly differentiate from sibling 'list_accounts' which might serve a similar listing function without search capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of sibling tools like 'list_accounts' or 'get_account', nor any context about when search is preferred over direct retrieval. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_custom_fields
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is an update operation (implying mutation) but doesn't mention permission requirements, whether changes are reversible, rate limits, error conditions, or what happens to existing fields not included in the update. For a mutation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a tool with good schema documentation and no complex behavioral nuances to explain.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what the tool returns (success confirmation? updated object? error details?), doesn't mention side effects, and provides no context about the update operation's behavior beyond the basic purpose statement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no parameter semantics beyond the schema: it doesn't explain field naming conventions or validation rules, and provides no examples of the 'fields' object structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and target ('custom attribute fields on a Custify account or contact'), making the purpose immediately understandable. It doesn't distinguish from siblings like 'list_attributes' or 'get_account', but the verb+resource combination is specific enough for basic understanding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'list_attributes' (which might show existing custom fields) or 'run_playbook' (which might automate updates). There's no mention of prerequisites, constraints, or typical use cases beyond the basic operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds important context about API limitations ('may not work as expected — the underlying alerts API has limited support'), which is valuable behavioral information. However, it doesn't describe response format, error handling, authentication requirements, rate limits, or whether this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with a version limitation warning that may distract from the core functionality. The main purpose statement is clear but could be more concise. The two-sentence structure is reasonable, but the warning takes up significant space relative to the functional description.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic functionality but lacks important context. It mentions API limitations (helpful) but doesn't describe what the tool returns, error conditions, or authentication requirements. For a tool with 4 parameters and no structured output documentation, this leaves significant gaps in understanding how to use it effectively.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 4 parameters. The description mentions 'optionally filtered by status' which aligns with the 'status' parameter in the schema, but adds no additional semantic context beyond what's already in the parameter descriptions. This meets the baseline for high schema coverage.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get alerts/signals from Custify for a specific account, optionally filtered by status.' It specifies the verb ('Get'), resource ('alerts/signals'), and scope ('for a specific account'), but doesn't explicitly differentiate from sibling tools like 'get_account' or 'get_contacts' beyond the resource type.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it mentions optional filtering by status, it doesn't explain when to use this tool over other sibling tools like 'get_account' or 'search_accounts' for alert-related queries. The version limitation note is a warning, not usage guidance.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states it's a 'Get' operation, implying read-only, but doesn't disclose behavioral traits like authentication needs, rate limits, error conditions, or what format the health scores are returned in. The description adds minimal context beyond the basic purpose.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and includes only essential details about what's included, making it appropriately sized and well-structured.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a simple single-parameter input, the description is minimally adequate. It covers the purpose but lacks behavioral details like return format or error handling. For a read-only tool with low complexity, it's passable but leaves gaps in completeness.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'account_id'. The description adds no additional parameter semantics beyond implying it's for a 'specific Custify account,' which aligns with the schema. Baseline 3 is appropriate as the schema handles the heavy lifting.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get health scores') and target resource ('for a specific Custify account'), with additional detail about what's included ('global and individual score breakdowns'). It distinguishes from siblings like get_account or get_usage_data by focusing specifically on health scores, though it doesn't explicitly contrast them.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While it specifies 'for a specific Custify account,' it doesn't mention prerequisites, when-not scenarios, or direct comparisons to siblings like get_account or get_usage_data that might provide related information.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool retrieves historical data with optional filtering, but doesn't cover critical aspects like rate limits, authentication requirements, data format, pagination, or error handling. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get historical health score values/trends over time') and includes key details (specific metric, optional filtering). There is no wasted language, making it highly concise and well-structured.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is moderately complete for a read-only tool with full parameter documentation. It covers the purpose and basic usage but lacks details on behavioral traits (e.g., response format, limits) that would be crucial for an agent to use it effectively. It's adequate but has clear gaps in context.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (health_score_id, account_id, limit) with clear descriptions. The description adds minimal value beyond implying the tool focuses on historical trends and optional account filtering, but doesn't provide additional syntax or format details. This meets the baseline of 3 for high schema coverage.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get historical health score values/trends over time') and resource ('for a specific health score metric'), with optional filtering by account. It distinguishes from siblings like 'get_health_scores' (likely current scores) and 'get_usage_data' (different data type), though not explicitly named. However, it lacks explicit sibling differentiation, keeping it at 4 rather than 5.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for historical trends of health scores, with optional account filtering. It doesn't explicitly state when to use this tool versus alternatives like 'get_health_scores' (which might provide current scores) or 'get_usage_data' (which might be for different metrics). No exclusions or prerequisites are mentioned, leaving usage context somewhat vague.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds some behavioral context by detailing the filter format and providing examples, but it does not disclose key traits like whether this is a read-only operation, pagination behavior beyond limit/offset, rate limits, or authentication needs. The description compensates partially but leaves gaps for a tool with 5 parameters.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. The filter examples are detailed but necessary for clarity. It avoids redundancy, though the filter format explanation could be slightly more streamlined. Overall, most sentences earn their place in aiding tool selection.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters, no annotations, no output schema), the description is moderately complete. It covers filter usage well but lacks details on behavioral aspects like read/write nature, error handling, or output format. Without annotations or output schema, more context on what the tool returns or its operational constraints would improve completeness.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the filter format with examples and referencing 'list_attributes' for field discovery, which enhances understanding beyond the schema's technical definitions. However, it does not provide additional semantics for other parameters like sorting or pagination.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List Custify accounts/companies with optional filters.' It specifies the resource (accounts/companies) and verb (list), and distinguishes it from siblings like 'search_accounts' by emphasizing the filter format and referencing 'list_attributes' for field discovery, making it specific and differentiated.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by explaining the filter format and referencing 'list_attributes' to discover fields. It implies when to use this tool (for listing with structured filters) but does not explicitly state when not to use it or name alternatives like 'search_accounts' from the sibling list, which could enhance guidance further.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return format ('field names, display names, and field types') and the tool's read-only, non-destructive nature through context, but lacks details on permissions, rate limits, or error handling.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage context, all in two efficient sentences with zero wasted words, making it easy to scan and understand quickly.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter and no output schema, the description is mostly complete, covering purpose, usage, and return values. However, it could benefit from mentioning any limitations (e.g., pagination) or authentication requirements to be fully comprehensive.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, so the baseline is 3. The description adds no additional parameter semantics beyond what the schema provides (e.g., it doesn't explain the implications of choosing 'account' vs. 'contact'), but doesn't detract from the schema's clarity.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('List all available company/account attributes') and resources ('attributes'), and explicitly distinguishes it from sibling tools by mentioning its preparatory role for 'list_accounts' and 'filtering and sorting'.
Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use this tool ('before building filters for list_accounts') and distinguishes it from alternatives by implying it's for discovery rather than direct data retrieval, unlike siblings like 'list_accounts' or 'search_accounts'.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's trigger mechanism constraints and API limitations, though it doesn't mention potential side effects, rate limits, authentication requirements, or what happens after triggering (e.g., asynchronous execution, notifications).
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first states the core purpose, the second provides critical usage constraints. There is zero wasted text and it's front-loaded with essential information.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (triggering automated workflows), no annotations, and no output schema, the description does an excellent job covering purpose and constraints. However, it doesn't describe what happens after triggering (success/failure indicators, response format, or execution behavior), leaving some gaps for a mutation tool.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing clear documentation for both parameters. The description doesn't add any additional parameter semantics beyond what's in the schema, so it meets the baseline score of 3 where the schema does the heavy lifting.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Trigger a manually-started Custify playbook') on a specific resource ('on a specific account'), distinguishing it from all sibling tools which involve creating, getting, listing, updating, or searching data rather than triggering automated workflows.
Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Only playbooks with trigger type "manually started" can be triggered via the API') and when not to use it ('segment-based and event-based playbooks run automatically and cannot be triggered this way'), providing clear alternatives and exclusions.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
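As a rough illustration, the scoring scheme described above can be sketched in Python. The dimension weights, the 60/40 mean-vs-minimum split, the 70/30 overall blend, and the tier cutoffs are taken from the text; the function and key names are our own and purely illustrative.

```python
# Sketch of the quality-score weighting described above.
# Weights and tier cutoffs come from the text; names are illustrative.

DIM_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(scores[d] * w for d, w in DIM_WEIGHTS.items())

def server_definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS, so one weak tool drags the score down."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% tool definition quality + 30% server coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

Because 40% of the definition-quality score comes from the minimum TDQS, a server with tools scoring [5.0, 3.0] lands at 0.6 × 4.0 + 0.4 × 3.0 = 3.6, not at the plain mean of 4.0, and a score of 3.2 falls in tier B.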
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/CustifyOfficial/custify-mcp'
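For scripted access, the same endpoint can be called with only the Python standard library. This is a minimal sketch: the URL pattern mirrors the curl example above, but the JSON response fields are not documented in this excerpt, so the sketch simply parses the body and returns the raw dict.

```python
# Sketch of querying the MCP directory API using only the standard library.
# The URL pattern mirrors the curl example above; the response shape is not
# documented here, so we return the parsed JSON as-is.
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given server, e.g. CustifyOfficial/custify-mcp."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record and parse the JSON body."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)
```

Calling `fetch_server("CustifyOfficial", "custify-mcp")` would retrieve the record for this server, equivalent to the curl command shown above.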
If you have feedback or need assistance with the MCP directory API, please join our Discord server.