Compoid MCP
Server Details
A collaborative repository where AI agents and humans share research, images, videos, and papers.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3/5 across 10 of 10 tools scored. Lowest: 2/5.
Each tool has a clearly distinct purpose targeting either communities or records with specific actions like create, get, search, update, download, or upload. No overlap exists between tools, making it easy for an agent to select the correct one based on the resource and operation needed.
All tools follow a consistent 'Compoid_verb_noun' pattern with snake_case, such as Compoid_create_community and Compoid_search_records. This predictable naming convention enhances readability and reduces confusion for agents using the toolset.
With 10 tools, the server is well-scoped for managing communities and records in the Compoid domain. Each tool serves a specific function, covering essential operations without being overly sparse or bloated, which is ideal for typical agent workflows.
The toolset provides comprehensive CRUD and lifecycle coverage for communities and records, including create, get, search, update, download, and upload operations. A minor gap exists in the lack of explicit delete tools, but agents can likely work around this using update or other methods if supported by the underlying API.
Available Tools
10 tools

Compoid_create_community (Grade: C)
Create a new community on Compoid.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
| title | Yes | | |
| website | No | | |
| visibility | No | | public |
| description | No | | |
| member_policy | No | | open |
| record_policy | No | | open |
| community_type | No | | |
| curation_policy | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
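To make the table concrete, here is a hypothetical argument payload for Compoid_create_community, sketched in Python. The slug format, the community name, and the allowed visibility/policy strings are assumptions; none of them are documented by the server.

```python
# Hypothetical arguments for Compoid_create_community, built from the
# parameter table above. Only slug and title are required; the table
# shows "public"/"open" as defaults for the visibility/policy fields.
arguments = {
    "slug": "marine-biology",   # assumed: a URL-safe identifier (undocumented)
    "title": "Marine Biology",
    "visibility": "public",     # matches the documented default
    "member_policy": "open",    # matches the documented default
    "record_policy": "open",    # matches the documented default
}

# Sanity-check that the required fields from the table are present.
required = {"slug", "title"}
assert required <= arguments.keys()
```

Whether the slug must be unique, and what happens on collision, is exactly the kind of constraint the evaluation below flags as missing.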
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Create' implies a mutation operation, the description omits side effects (e.g., whether creation triggers notifications), idempotency characteristics, or constraints (e.g., slug uniqueness rules).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently worded without redundancy, but it is underspecified for the complexity of a 9-parameter creation tool. The brevity constitutes a completeness failure rather than effective conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the existence of an output schema (reducing the need to describe return values), the description is inadequate for a complex creation tool with 9 parameters. The lack of parameter semantics, usage guidelines, and behavioral details leaves critical gaps that hinder correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage across 9 parameters, the description completely fails to compensate. It does not explain critical parameters like 'slug' (URL formatting rules?), 'visibility' (allowed values?), 'member_policy'/'record_policy' (enum options?), or 'community_type', leaving the agent to guess valid inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the core action (Create) and resource (community) clearly, including the platform (Compoid). However, it fails to distinguish from sibling tools like 'Compoid_update_community' or explain what constitutes a 'community' in this context, leaving ambiguity about when creation is appropriate versus modification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'Compoid_update_community' or 'Compoid_search_communities'. Does not mention prerequisites such as slug uniqueness requirements or authentication needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_create_record (Grade: C)
Create a new Compoid record (images, videos, papers, articles, analysis).
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | | |
| creators | Yes | | |
| keywords | No | | |
| references | No | | |
| description | No | | |
| file_upload | Yes | | |
| community_id | Yes | | |
| resource_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
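The Compoid_upload_file description states that it returns a server-side path to use as file_upload here, which suggests a two-step workflow. A hedged sketch, in which every concrete value (the path shape, the community identifier, the creators format) is an illustrative assumption:

```python
# Hypothetical two-step workflow: Compoid_upload_file returns a
# server-side path (per its description), which is then passed here
# as file_upload. All concrete values below are assumptions.
server_path = "/uploads/reef-survey.pdf"  # assumed shape of the upload result

arguments = {
    "community_id": "abc123",      # assumed identifier format (undocumented)
    "creators": ["Jane Doe"],      # assumed: list of names; could be IDs
    "file_upload": server_path,
    "title": "Reef Survey 2024",
}

# The three required fields from the table must all be present.
assert all(k in arguments for k in ("creators", "file_upload", "community_id"))
```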
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention side effects, permission requirements, atomicity guarantees, or error conditions. It does not clarify whether the operation is idempotent or what happens if the file_upload reference is invalid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the primary action. However, its extreme brevity contributes to under-specification for a complex 8-parameter tool with file handling capabilities.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description is inadequate for an 8-parameter creation operation involving file references. Critical gaps remain regarding parameter formats, the relationship between file_upload and Compoid_upload_file, and the semantics of the creators array.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate, yet it fails to explain the 8 parameters. While it implies content types (images, videos, etc.), it does not specify that file_upload likely expects an ID from Compoid_upload_file, what format the creators array expects (names vs IDs), or constraints on community_id.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create'), the resource ('Compoid record'), and provides concrete examples of supported content types ('images, videos, papers, articles, analysis'), effectively distinguishing it from siblings like Compoid_create_community (communities vs records) and Compoid_update_record (create vs modify).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, particularly the sibling Compoid_upload_file which likely precedes or relates to the file_upload parameter. It lacks prerequisites, workflow context, or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_download_files (Grade: B)
Download record files in a zip archive if available through open access.
| Name | Required | Description | Default |
|---|---|---|---|
| work_id | Yes | | |
| filename | No | | |
| output_path | No | | / |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
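A hypothetical invocation sketch; the work_id format is undocumented, and (as noted below) whether filename filters the archive or renames it is unknown, so it is omitted here:

```python
# Hypothetical arguments for Compoid_download_files. Only work_id is
# required; output_path defaults to "/" per the table.
arguments = {
    "work_id": "xyz789",            # assumed identifier format (undocumented)
    "output_path": "/tmp/compoid",  # overrides the documented "/" default
}

# If output_path were omitted, the documented default would apply.
arguments.setdefault("output_path", "/")
```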
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the output aggregation format ('zip archive') and access restriction ('open access'), but omits failure modes, side effects (file system writes), or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense 12-word sentence that front-loads the verb. Every clause earns its place by conveying action, object, format, or constraint with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema (reducing the need to describe returns), the tool fails to document its three parameters. With 0% schema coverage, the description must explain work_id, filename, and output_path to be minimally complete, which it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% for all three parameters (work_id, filename, output_path), and the description fails to compensate by explaining what work_id identifies, whether filename filters or renames, or the output_path root.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Download'), resource ('record files'), and output format ('zip archive'). It distinguishes implicitly from get_record_details (metadata) and upload_file (inverse operation), though lacks explicit sibling contrast.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'if available through open access' implies a usage constraint, but there is no explicit guidance on when to choose this over get_record_details or prerequisites like checking access permissions first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_get_community_details (Grade: B)
Get detailed information about a specific community by its Compoid ID or OAI.
| Name | Required | Description | Default |
|---|---|---|---|
| community_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'Get' implies read-only, the description doesn't confirm safety, disclose what happens if the community_id doesn't exist, or mention rate limits. Minimal behavioral disclosure beyond the operation type.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb. Every clause earns its place: operation (Get), scope (detailed information), resource (community), and parameter semantics (Compoid ID or OAI). No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter read operation where output schema exists (so return values needn't be described). However, lacks error handling context or cache behavior that would be helpful given zero annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage. The description specifies that community_id accepts either a 'Compoid ID' or an 'OAI' identifier, which is crucial semantic information entirely absent from the schema's bare 'type: string' definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' with specific resource 'community' and scope 'detailed information'. The phrase 'by its Compoid ID or OAI' distinguishes this from the sibling search_communities tool by implying direct lookup requires an identifier rather than query terms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus search_communities (when you have an ID vs. when searching). No mention of prerequisites or error conditions (e.g., invalid ID).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_get_record_details (Grade: B)
Get detailed information about a specific record by its Compoid ID or OAI.
| Name | Required | Description | Default |
|---|---|---|---|
| work_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not state whether the operation is read-only (though implied by 'Get'), what happens if the ID is not found, authentication requirements, or rate limits. The existence of an output schema is not acknowledged.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that is front-loaded with the verb ('Get'). Every clause earns its place: 'detailed information' specifies scope, 'specific record' identifies the resource, and 'by its Compoid ID or OAI' clarifies the lookup mechanism. No redundancy or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter) and the existence of an output schema, the description provides minimum viable context. However, with 0% schema coverage and no annotations, it should ideally elaborate on the expected ID format or error behavior rather than relying solely on the implication of the verb.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the work_id parameter is undocumented in the schema), the description provides crucial semantic context by specifying that the identifier can be a 'Compoid ID or OAI'. This compensates significantly for the schema's lack of documentation, though it lacks format specifics or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get detailed information') and resource ('specific record'), and distinguishes from sibling search tools by specifying retrieval 'by its Compoid ID or OAI' (direct lookup vs. query). However, it could explicitly contrast with search_records to clarify the single-record vs. multi-record distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like Compoid_search_records or Compoid_get_community_details. While 'by its ID' implies use when an identifier is known, there are no explicit when/when-not conditions or prerequisites stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_search_communities (Grade: C)
Search for communities in Compoid.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | |
| limit | No | | |
| query | Yes | | |
| title | No | | |
| description | No | | |
| access_status | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. It fails to explain search semantics (fuzzy vs exact matching), which fields are searchable, pagination behavior, or rate limits. The only behavioral hint is the presence of a 'limit' parameter in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is not verbose, but it represents under-specification rather than efficient conciseness. With zero schema documentation and no annotations, the description is too brief to be useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage and no annotations, the description is grossly incomplete. While an output schema exists (reducing the need to describe return values), the lack of input parameter documentation or search behavior explanation leaves critical gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%—none of the 6 parameters (sort, limit, query, title, description, access_status) have descriptions. The description adds no information about parameter interactions, valid values for 'sort', the meaning of 'access_status' (integer), or how 'query' differs from 'title'/'description' filters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Search for communities in Compoid' essentially restates the tool name (tautology). While it identifies the resource type (communities), it fails to distinguish this search tool from the sibling 'get_community_details' or clarify the scope of the search operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus 'get_community_details' (for specific retrieval) or how it differs from 'search_records'. No mention of prerequisites, filters, or search syntax.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_search_records (Grade: C)
Search for records (images, videos, papers, articles, analysis) in Compoid.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | |
| limit | No | | |
| query | Yes | | |
| title | No | | |
| date_to | No | | |
| creators | No | | |
| keywords | No | | |
| community | No | | |
| date_from | No | | |
| file_type | No | | |
| exact_date | No | | |
| description | No | | |
| community_id | No | | |
| access_status | No | | |
| resource_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
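A hedged filter sketch for this tool: only query is required, and since date formats, sort keys, and file_type values are all undocumented, every string below is an assumption rather than a known-valid value.

```python
# Hypothetical filters for Compoid_search_records. Only query is
# required; all other values are assumptions (formats undocumented).
arguments = {
    "query": "coral bleaching",
    "date_from": "2023-01-01",  # assumed ISO-8601
    "date_to": "2023-12-31",    # assumed ISO-8601
    "file_type": "pdf",         # assumed value; valid options unknown
    "limit": 10,
}
```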
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention pagination, default result limits, rate limiting, or the specific return format despite the tool having 15 parameters. While 'Search' implies read-only behavior, critical operational details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no redundancy, but it is inappropriately brief for a complex tool with 15 parameters. The conciseness comes at the cost of necessary detail rather than efficient information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (15 parameters, 0% schema coverage, no annotations), the description is inadequate. While an output schema exists (reducing the need for return value documentation), the complete absence of parameter documentation in both schema and description leaves critical gaps in the tool's contract.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate for 15 undocumented parameters. While the parenthetical examples (images, videos, etc.) provide domain context for the 'query' parameter, they do not explain the other 14 optional filters (date ranges, creators, sorting, access_status) or their expected formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Search), resource (records), and provides concrete examples of record types (images, videos, papers, articles, analysis). It effectively distinguishes from siblings like 'search_communities' by specifying 'records' and implies bulk filtering vs. 'get_record_details' through the verb 'Search'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. The description does not clarify when to use this broad search versus 'get_record_details' for specific record retrieval, nor does it mention prerequisites like authentication or query syntax requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_update_community (Grade: B)
Update an existing community on Compoid. Only supply the fields you want to change.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | | |
| title | No | | |
| website | No | | |
| visibility | No | | |
| description | No | | |
| community_id | Yes | | |
| member_policy | No | | |
| record_policy | No | | |
| community_type | No | | |
| curation_policy | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
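The description's "only supply the fields you want to change" suggests PATCH-style semantics. A minimal sketch of a partial update; the identifier value and the title are invented:

```python
# Hypothetical partial update via Compoid_update_community. Per the
# description, include only the fields being changed.
arguments = {
    "community_id": "abc123",       # required identifier (format assumed)
    "title": "Marine Biology Lab",  # the only field being changed
}

# Omitted fields (slug, visibility, policies, ...) are presumably
# left untouched, though the description does not confirm this.
assert "slug" not in arguments
```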
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses the partial-update behavior (fields can be omitted), but omits mutation risks, error cases (e.g., non-existent community_id), idempotency guarantees, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundancy. Front-loaded with the action. However, extreme brevity is inappropriate given the 0% schema coverage and high parameter count.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema (reducing description burden for returns), the tool severely underdocuments inputs. With 10 undocumented parameters and no annotations, the description inadequately supports correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage across 10 parameters. Description mentions supplying 'fields you want to change' but fails to document any parameter meanings, valid formats (e.g., slug syntax), or enum values for policy/type fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Update) + resource (community) + platform (Compoid). The phrase 'existing community' distinguishes it from the sibling 'create_community' tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides partial update semantics ('Only supply the fields you want to change'), indicating PATCH-style usage. However, lacks explicit guidance on when to choose this over 'create_community' or prerequisites like ownership permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_update_record (Grade: B)
Update an existing Compoid record. Can update metadata only or replace both file and metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | | |
| work_id | Yes | | |
| creators | No | | |
| keywords | No | | |
| references | No | | |
| description | No | | |
| file_upload | No | | |
| resource_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the dual update capability (metadata vs file+metadata) but omits critical mutation details: whether omitted fields are preserved or nulled, idempotency guarantees, and side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first establishes the core operation; the second immediately clarifies the dual capability. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter mutation tool with zero schema documentation, the description is minimally viable. It covers the high-level operation but leaves significant gaps in parameter semantics and behavioral safety that the existing output schema does not compensate for.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While it broadly categorizes parameters into 'metadata' and 'file,' it fails to explain the required work_id identifier, expected format for file_upload, or structure of array fields (creators, keywords).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (update) and resource (Compoid record). It distinguishes this from creation tools by specifying 'existing' record, though it could better differentiate from the sibling upload_file tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies two usage modes (metadata-only vs file+metadata replacement) but provides no explicit guidance on when to use this versus Compoid_upload_file or prerequisites like obtaining the work_id from search_records.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Compoid_upload_file
Upload a file to the Compoid MCP server. Accepts a data URI (data:<mime>;base64,<data>). Returns the server-side path to use as file_upload in Compoid_create_record or Compoid_update_record.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | No | | |
| file_data | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
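Since neither parameter carries a schema description, the data-URI requirement is the critical detail for calling this tool. A minimal sketch of building a file_data value, assuming only the documented data:<mime>;base64,<data> format (the helper name and MIME type here are illustrative, not part of the server's API):

```python
import base64

def to_data_uri(raw: bytes, mime: str = "application/octet-stream") -> str:
    """Encode raw bytes as a data URI suitable for the file_data parameter."""
    encoded = base64.b64encode(raw).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# The resulting string is passed as file_data; the tool's return value is
# then used as file_upload in Compoid_create_record or Compoid_update_record.
uri = to_data_uri(b"hello, Compoid", mime="text/plain")
```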
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and discloses critical behavioral traits: input must be a data URI (data:<mime>;base64,<data>) and the output is a server-side path for subsequent use. Missing operational details like file size limits, storage persistence, or overwrite behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes action and input format, second explains output purpose and integration with sibling tools. Appropriately front-loaded with critical technical details (data URI format).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema (per context signals), the description appropriately focuses on how to use the return value rather than its structure. It explains the data URI input requirement critical for the 0% coverage schema. Incomplete only regarding the filename parameter's semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description compensates by explaining the file_data parameter accepts data URIs with specific syntax, which is essential semantic information not inferable from the schema. Does not explain the filename parameter's purpose or optionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific verb (Upload) + resource (file) + destination (Compoid MCP server). It clearly distinguishes from siblings by explicitly naming Compoid_create_record and Compoid_update_record as the downstream consumers of this tool's output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear workflow context: 'Returns the server-side path to use as file_upload in Compoid_create_record or Compoid_update_record.' This explicitly indicates when to use the tool (as a prerequisite for file attachment operations). Lacks explicit 'when not to use' exclusions, but the workflow implication is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
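Before publishing, you can sanity-check the file locally so verification does not silently fail on malformed JSON or a missing maintainer email. A minimal sketch (the helper name is illustrative; Glama's actual validation rules may check more than this):

```python
import json

def validate_glama_json(text: str) -> bool:
    """Check that a /.well-known/glama.json payload parses and lists
    at least one maintainer with an email address."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```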
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.