智睦云打印 (Zhimu Cloud Print)
Official
Server Quality Checklist
- Disambiguation 3/5
The four update_printer_* tools are named as if they configure printer hardware settings, but they actually modify roaming task parameters. This creates dangerous ambiguity with query_printer_detail (which queries actual printer capabilities), forcing agents to rely entirely on descriptions to distinguish printer properties from job configuration.
- Naming Consistency 3/5
While most tools follow verb_noun patterns (create_roaming_task, upload_file), direct_print_document breaks convention by using an adjective-noun structure. More critically, the update_printer_* prefix is domain-inaccurate since these modify tasks, not printers, creating inconsistency with the actual object model.
- Tool Count 4/5
Ten tools is a reasonable count for cloud printing functionality, covering environment checks, printer discovery, file handling, and dual print workflows. However, the granularity of four separate single-property update tools feels slightly excessive compared to a unified task update operation.
- Completeness 3/5
Basic printing and discovery are covered, but the roaming task workflow lacks lifecycle management: you can create and modify tasks, but cannot submit, check status, cancel, or list jobs. This leaves agents unable to track print completion or handle failures in the roaming workflow.
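One way to close this lifecycle gap would be a read-only status tool alongside the existing create/update operations. The following MCP tool definition is a sketch only: the tool name, field descriptions, and annotation values are illustrative assumptions, not part of the current server.

```json
{
  "name": "query_roaming_task_status",
  "description": "Query the status of an existing roaming print task created via create_roaming_task. Read-only; call after creating a task to poll for completion or failure.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "task_id": {
        "type": "string",
        "description": "Identifier returned by create_roaming_task."
      }
    },
    "required": ["task_id"]
  },
  "annotations": {
    "readOnlyHint": true
  }
}
```

Companion cancel and list tools would follow the same shape, giving agents a way to track and recover from failed roaming jobs.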
Average 3.1/5 across 10 of 10 tools scored. Lowest: 2.3/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.2
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 10 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
direct_print_document
- Behavior 1/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description adds zero behavioral context. It does not disclose whether the operation is synchronous, what happens if the printer is offline, required permissions, or what the return value indicates (no output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with no redundancy. However, given the high parameter complexity and lack of schema documentation, this brevity represents under-specification rather than appropriate conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 undocumented required parameters, no annotations, and no output schema, the description is severely incomplete. It establishes the core action but leaves the agent without sufficient information to correctly populate parameters or handle responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 1/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage across 5 required parameters, the description completely fails to compensate. Critical semantic gaps remain unresolved: the distinction between device_name and control_sn, valid media_format values, and whether url requires a specific protocol.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sends a document to a specific printer, distinguishing it from sibling tools that query printer details (query_printer_detail) or update settings (update_printer_*). However, it uses 'Send' rather than 'Print', which slightly weakens the specific action intent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus upload_file (a sibling tool) or whether the document must be pre-uploaded. There are no prerequisites, exclusions, or workflow context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
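The Parameters gaps above could be closed in the schema itself. The sketch below is a hedged illustration: only the four parameter names mentioned in this review (device_name, control_sn, media_format, url) are taken from the actual tool; all descriptions, the example values, and the format constraint are assumptions the server authors would need to confirm.

```json
{
  "inputSchema": {
    "type": "object",
    "properties": {
      "device_name": {
        "type": "string",
        "description": "Human-readable printer name, as returned by query_printers."
      },
      "control_sn": {
        "type": "string",
        "description": "Serial number of the control unit; identifies the physical device when names collide."
      },
      "media_format": {
        "type": "string",
        "description": "Paper size preset, e.g. 'A4' or 'Letter'."
      },
      "url": {
        "type": "string",
        "format": "uri",
        "description": "Publicly reachable HTTP(S) URL of the document, e.g. one returned by upload_file."
      }
    }
  }
}
```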
create_roaming_task
- Behavior 2/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds the constraint that the URL must be 'public,' but fails to disclose whether the operation is asynchronous, what the return value is (likely a task ID given the 'create task' naming), or how to check task status. It does not indicate if the operation is destructive or idempotent.
- Conciseness 3/5
The description is a single efficient sentence with no redundancy. However, given that all three parameters lack schema descriptions and no annotations exist, the description is underspecified. It prioritizes brevity over necessary detail, making it 'too concise' for the complexity of the tool.
- Completeness 2/5
For a mutation tool with three undocumented parameters, no annotations, and no output schema, the description is insufficient. It lacks guidance on the 'roaming' workflow, expected 'media_format' values, the purpose of 'file_name' when a URL is provided, and how to interpret or use the return value (presumably a task identifier).
- Parameters 2/5
The schema has 0% description coverage for all three parameters (file_name, url, media_format). The description only partially compensates by implying the 'url' parameter requires a public document URL. It completely omits explanation of 'file_name' (is it a destination name or metadata?) and 'media_format' (expected values like 'A4', 'Letter', 'PDF'?).
- Purpose 4/5
The description uses a specific verb ('Create') and resource ('roaming print task'), and includes the source ('public document URL'). The term 'roaming' helps distinguish it from the sibling 'direct_print_document', though it could further clarify what 'roaming' implies in this context.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus siblings like 'direct_print_document' or 'upload_file'. It mentions 'public document URL', which hints at a requirement, but does not explicitly state selection criteria or prerequisites for using this specific tool.
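Most of the Behavior gaps noted above are exactly what MCP tool annotations exist to express. A sketch of the annotations create_roaming_task could declare follows; the hint values are assumptions about the server's actual behavior (e.g. whether repeated calls create duplicate tasks), not confirmed facts.

```json
{
  "annotations": {
    "title": "Create roaming print task",
    "readOnlyHint": false,
    "destructiveHint": false,
    "idempotentHint": false,
    "openWorldHint": true
  }
}
```

Pairing annotations like these with a description that names the return value ("returns a task_id for use with the update_printer_* tools") would resolve both the Behavior and Completeness findings.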
query_printer_detail
- Behavior 2/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to specify what 'capabilities' are returned, error handling behavior (e.g., printer not found), or whether this operation requires specific permissions. The term 'Query' implies read-only, but explicit confirmation is absent.
- Conciseness 4/5
The single-sentence description is efficiently structured and front-loaded with the action and target. However, extreme brevity becomes a liability given the lack of schema documentation and annotations, leaving critical information unstated.
- Completeness 2/5
Inadequate for the tool's complexity. With three optional parameters (suggesting multiple query patterns), zero schema descriptions, no output schema, and no annotations, the description should explain parameter relationships, return value structure, and lookup precedence. It provides none of these.
- Parameters 2/5
Given 0% schema description coverage, the description must compensate but only partially succeeds. It mentions 'specific printer or shared device', implying 'printer_name' and 'share_sn', but does not explain the distinction between them, valid values for 'device_type', or that all parameters are optional with null defaults.
- Purpose 4/5
The description provides a clear verb ('Query') and resource ('printer capabilities'), and distinguishes from sibling 'query_printers' by specifying 'for a specific printer'. However, it could better differentiate from the 'update_printer_*' siblings by explicitly stating this is a read-only information retrieval operation.
- Usage Guidelines 2/5
No guidance provided on when to use this versus 'query_printers', or how to select between the identification parameters ('printer_name' vs 'share_sn'). The description does not indicate that all parameters are optional or suggest how to identify the target device.
update_printer_color
- Behavior 2/5
No annotations are provided, so the description carries the full burden. It indicates mutation via 'Update' and specifies the roaming task context, but fails to disclose valid color values (string|null is ambiguous), error behavior for missing tasks, or whether updates are immediate.
- Conciseness 4/5
The single sentence is front-loaded and contains no redundant words. However, it is arguably too concise given the lack of schema coverage and annotations, leaving necessary context undocumented.
- Completeness 2/5
For a mutation tool with 2 parameters and 0% schema coverage, the description is insufficient. It fails to specify valid color input values, explain the null default behavior, or describe what confirms a successful update (no output schema exists to compensate).
- Parameters 3/5
With 0% schema description coverage, the description partially compensates by identifying 'color' as a 'mode' and 'task_id' as referring to an 'existing roaming task'. However, it omits valid color enumerations or formats (e.g., 'color' vs 'monochrome'), which is critical given the lack of schema documentation.
- Purpose 4/5
The description clearly states the verb (Update) and resource (color mode for an existing roaming task). It distinguishes from siblings like update_printer_copies by specifying 'roaming task' rather than printer hardware, and links to create_roaming_task.
- Usage Guidelines 2/5
While 'existing roaming task' implies a prerequisite (likely created via create_roaming_task), there is no explicit when-to-use guidance or differentiation from other update_printer_* siblings. No mention of what constitutes valid color values.
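The missing color enumeration could be pinned down directly in the schema. A sketch follows; the accepted values ('color', 'monochrome') and the null semantics are assumptions, since the review notes they are undocumented on the actual server.

```json
{
  "color": {
    "type": ["string", "null"],
    "enum": ["color", "monochrome", null],
    "description": "Print color mode for the task; null leaves the task's current setting unchanged."
  }
}
```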
update_printer_copies
- Behavior 2/5
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the mutation action but fails to disclose idempotency, error conditions, side effects on the print job state, or whether this triggers immediate reprocessing.
- Conciseness 4/5
The single sentence contains no redundant words and front-loads the action, though its extreme brevity contributes to informational gaps given the lack of supporting schema descriptions and annotations.
- Completeness 2/5
Given zero schema descriptions, no annotations, and no output schema, the description is insufficiently complete. It omits behavioral implications of changing copy counts mid-process and lacks parameter specifications necessary for correct invocation.
- Parameters 2/5
With 0% schema description coverage, the description must compensate for both parameters. It maps 'copy count' to the 'copies' parameter and implies 'task_id' via 'existing roaming task', but provides no format constraints, valid ranges (e.g., max copies), or whether task_id is a UUID or integer.
- Purpose 4/5
The description provides a specific verb ('Update') and resource ('copy count') and scopes it to 'existing roaming task', clearly distinguishing it from sibling tools like update_printer_color or update_printer_paper, which handle different attributes.
- Usage Guidelines 3/5
Mentioning 'existing roaming task' implicitly signals that the task must already exist (suggesting create_roaming_task is a prerequisite), but lacks explicit guidance on when to use this versus direct_print_document or other alternatives.
update_printer_* (simplex/duplex settings)
- Behavior 2/5
With no annotations provided, the description carries the full disclosure burden but provides minimal behavioral context. It doesn't specify valid values for the side parameter (e.g., 'simplex' vs 'duplex'), explain the default null behavior, indicate idempotency, or mention permission requirements for modifying tasks.
- Conciseness 4/5
A single sentence of 9 words with zero redundancy. The information is front-loaded with the action verb 'Update'. However, given the 0% schema coverage and lack of annotations, the brevity contributes to under-documentation rather than efficient communication.
- Completeness 2/5
For a 2-parameter mutation tool with no output schema and no annotations, the description is insufficient. It omits critical details like valid enum values for 'side', the effect of null, error conditions, or whether the update is persistent or immediate.
- Parameters 2/5
Schema description coverage is 0%, requiring the description to compensate. While 'simplex or duplex' hints at valid values for the 'side' parameter, it doesn't confirm exact string values, explain the null default, or describe what 'task_id' represents (format/source). Insufficient compensation for the complete lack of schema documentation.
- Purpose 4/5
The description clearly states the tool updates 'simplex or duplex settings' (specific resource) for an 'existing roaming task' (scope). It effectively distinguishes from sibling update_printer_* tools (color, copies, paper) by specifying the duplex/simplex domain, though it assumes familiarity with 'roaming task' from create_roaming_task.
- Usage Guidelines 3/5
The mention of 'existing roaming task' implies a prerequisite (the task must exist first), suggesting usage order. However, it lacks explicit guidance on when to use this versus direct_print_document or other update tools, and doesn't clarify whether a null side might reset to the default.
Environment check tool
- Behavior 2/5
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it implies a status check ('whether...fully configured'), it fails to specify the return format (boolean, percentage, list of missing components), caching behavior, or what criteria define 'fully configured.'
- Conciseness 5/5
The description is a single, efficient sentence of nine words with no redundant information. It is appropriately front-loaded with the action verb and wastes no space on tautological restatements of the tool name.
- Completeness 3/5
Given zero parameters and no output schema, the description is minimally adequate for tool selection but leaves significant gaps. It does not explain what constitutes 'fully configured,' what the response structure looks like, or how this relates to the sibling printer management tools, which would be valuable given the lack of structured metadata.
- Parameters 4/5
The input schema has zero parameters, which per the scoring rules establishes a baseline of 4. There are no parameters requiring semantic explanation beyond what the schema (empty properties object) already conveys.
- Purpose 4/5
The description uses the specific verb 'Check' and identifies the resource 'user's cloud print environment' with the scope 'fully configured.' It distinguishes from siblings like query_printers (which lists specific printers) by focusing on overall environment configuration status rather than individual device queries.
- Usage Guidelines 2/5
The description states what the tool does but provides no guidance on when to use it versus alternatives. It does not indicate whether this should be called before other operations, how it relates to query_printers, or prerequisites for invocation.
query_printers
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It discloses user-scoping ('current user'), implying authorization context, but does not describe the return format, pagination behavior, or whether the listing includes offline printers.
- Conciseness 5/5
The description consists of a single, efficient sentence with no extraneous words. It is appropriately front-loaded with the core action and subject.
- Completeness 4/5
Given that this is a simple zero-parameter listing tool without an output schema, the description is minimally complete. It could be improved by mentioning that it returns a collection/list of printer objects, but the current description is sufficient for invocation.
- Parameters 4/5
The input schema has zero parameters. Per the scoring rules, zero parameters establishes a baseline score of 4. The description appropriately reflects that no filtering parameters are required.
- Purpose 4/5
The description provides a clear verb ('List') and resource ('printers') with scope ('available to the current user'). However, it does not explicitly differentiate from the sibling 'query_printer_detail' (which likely retrieves specific printer information) in the text itself.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus siblings like 'query_printer_detail', 'direct_print_document', or the various update_printer_* tools. It does not state prerequisites or conditions for use.
upload_file
- Behavior 3/5
With no annotations provided, the description carries the full burden. It discloses that the tool returns a 'public URL' (important security context) and implies a side effect (upload). However, it omits critical mutation details: URL persistence, file cleanup policies, size limits, or authentication requirements.
- Conciseness 5/5
The description is a single 13-word sentence with zero waste. It front-loads the action ('Upload') and specifies the input ('local file'), output ('public URL'), and domain context ('print service') with no redundant phrases.
- Completeness 3/5
For a single-parameter tool without output schema or annotations, the description adequately covers the basic contract (input file → output URL). However, it leaves operational gaps regarding the 'public URL' security implications, longevity, and whether the upload is temporary or persistent.
- Parameters 3/5
Schema coverage is 0% (file_path has no description). The text adds minimal semantic value by implying file_path is a local filesystem path ('local file'), but fails to specify format constraints, absolute vs. relative paths, or supported file types needed to fully compensate for the schema gap.
- Purpose 5/5
The description uses the specific verb 'Upload' with the clear resource 'local file' and distinguishes itself from printer-management siblings by specifying the outcome is 'a public URL that the print service can read.' This clearly positions it as a file preparation step for printing.
- Usage Guidelines 3/5
The description provides implied usage context by mentioning 'print service can read,' suggesting when to use it (when files need to be made accessible for printing). However, it lacks explicit when-to-use guidance versus direct_print_document, or prerequisites.
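A description carrying the usage guidance this dimension asks for might read as follows. This is a sketch built from the review's own findings; the instruction to fall back to create_roaming_task or direct_print_document when a public URL already exists is an assumption about the intended workflow, not documented server behavior.

```json
{
  "name": "upload_file",
  "description": "Upload a local file and return a public URL that the print service can read. Use this first when the document exists only on the local filesystem; if you already have a public URL, pass it directly to create_roaming_task or direct_print_document instead. Note: the returned URL is publicly accessible."
}
```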
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully explains the polymorphic input behavior (preset name string vs. custom dimensions object), but fails to mention mutation effects, return values, error conditions, or whether changes are immediate or queued.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is tightly constructed with zero redundancy: 'Update paper size' (action), 'for an existing roaming task' (scope), and 'using a preset name or custom dimensions in millimeters' (parameter semantics). Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter update tool with no output schema, the description adequately covers the core functionality and input formats. However, it should ideally mention what constitutes a successful response or common error conditions (e.g., invalid task_id or unsupported preset names) to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description compensates effectively by explaining that 'paper' accepts either a preset name (string) or custom dimensions in millimeters (object), clarifying the anyOf schema structure. It implies task_id references an existing roaming task, though explicit parameter naming would strengthen this further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
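To illustrate the polymorphic 'paper' parameter described above, the two accepted argument shapes might look like the following. Note that the field names, the preset value, and the task_id format are assumptions inferred from the description, not taken from the tool's actual schema:

```json
{
  "task_id": "task-123",
  "paper": "A4"
}
```

or, with custom dimensions in millimeters:

```json
{
  "task_id": "task-123",
  "paper": { "width": 210, "height": 297 }
}
```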
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Update paper size'), the target resource ('existing roaming task'), and distinguishes itself from sibling tools like update_printer_color or update_printer_copies by specifying 'paper size' as the particular attribute being modified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'existing roaming task' implies this tool modifies previously created tasks (likely via create_roaming_task), providing implicit workflow context. However, it lacks explicit guidance on when to use this tool versus direct_print_document, and on what state the roaming task must be in beforehand.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
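The arithmetic above can be sketched as a few lines of Python. This is a minimal illustration of the published weights and thresholds, not Glama's actual implementation; the function names are hypothetical:

```python
def tool_tdqs(purpose, usage, transparency, parameters, conciseness, completeness):
    """Weighted 1-5 score for one tool, using the six stated dimension weights."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * transparency
            + 0.15 * parameters + 0.10 * conciseness + 0.10 * completeness)

def server_quality(tool_scores, coherence_scores):
    """Overall score: 70% tool definition quality, 30% server coherence.

    Definition quality blends 60% mean TDQS with 40% minimum TDQS,
    so one poorly described tool drags the whole server down.
    """
    tdq = 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * tdq + 0.3 * coherence

def tier(score):
    """Map an overall score to its letter tier (B and above is passing)."""
    if score >= 3.5:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"
```

For example, a server whose tools score [3.65, 2.3] with coherence scores [3, 3, 4, 3] would land at roughly 2.87 overall, tier C, despite a mean tool score near 3.0.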
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/zimsoft/webprinter-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.