mcp-synology
Server Quality Checklist
- Disambiguation 5/5
Each tool has a clearly distinct purpose with explicit guardrails. search_files explicitly directs users to list_files for directory contents, preventing overlap. Listing, searching, metadata retrieval, sizing, downloading, and recycle-bin operations are cleanly separated.
- Naming Consistency 5/5
All tools follow consistent snake_case verb_noun conventions. The 'list_' prefix is used consistently for collection enumeration (list_files, list_shares, list_recycle_bin), 'get_' for retrieval operations, and action verbs (download, search) clearly describe the operation.
- Tool Count 5/5
Seven tools strike an appropriate balance for a focused NAS file browsing and download utility. The set covers discovery (list_shares), navigation (list_files), search (search_files), metadata (get_file_info, get_dir_size), transfer (download_file), and lifecycle visibility (list_recycle_bin) without bloat.
- Completeness 3/5
The surface is read-only with respect to NAS state: it covers download, listing, and search but lacks complementary write operations. Notable gaps include upload_file, delete_file, restore_recycle_bin (despite being able to list the recycle bin), and move/rename operations, limiting the server to browse-only workflows.
Average 4.1/5 across 6 of 7 tools scored. Lowest: 3.3/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.4.1
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 7 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
list_files
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/destructive profile. Description adds behavioral context about pagination, sorting, and filtering capabilities, though it lacks details on recursion depth, error handling, or maximum limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences. First establishes core purpose, second enumerates capabilities. No redundant or wasted text; front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Because the output schema covers return values and the annotations cover safety, the description meets minimum needs, but with 0% schema coverage across 7 parameters it leaves significant gaps in parameter documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description compensates partially by mapping 'glob pattern' to pattern, 'file type' to filetype, etc., but omits crucial semantics for the required 'path' parameter and valid values for sort_by/filetype enums.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' and resource 'files and folders' with scope 'in a directory'. It implies directory enumeration rather than global search, distinguishing it from sibling search_files, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like search_files (which might be better for recursive finding) or get_file_info (for single file details). Simply lists capabilities without contextual selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_recycle_bin
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/non-destructive safety profile; description adds valuable domain context that items are restorable. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences. First establishes scope, second adds restorability context. No redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a listing tool with output schema present, but gaps remain: 0% schema coverage necessitates parameter hints (especially sorting/filtering options) which are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, requiring description compensation. Description mentions 'shared folder' (hints at 'share' parameter) but fails to explain 'pattern' filtering, 'sort_by'/'sort_direction' options, or 'limit' despite complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' with clear resource 'shared folder's recycle bin'. Second sentence clarifies content scope (recently deleted/restorable files), distinguishing from sibling 'list_files' which presumably lists active files.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'can be restored' (recovery scenarios), but lacks explicit guidance on when to use versus 'list_files' or 'search_files' siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_file_info
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only safety, but the description adds valuable return value semantics by listing specific metadata fields returned (size, owner, timestamps, permissions, real path), including the notable 'real path' indicating symlink resolution. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with efficient colon-separated list of attributes. Front-loaded action ('Get detailed metadata') followed by scoping clause. No redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for complexity: output schema exists so return format needs minimal description, annotations cover safety profile, and the listed metadata fields provide sufficient expectations. Could note batch/array capability given plural 'paths' parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate adequately. It mentions 'specific files or folders' but does not clarify that 'paths' accepts an array, whether absolute/relative paths are required, or shell globbing support. The mapping between the parameter name and description is implicit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with resource 'detailed metadata' and clearly defines scope as 'files or folders'. It distinguishes from siblings like download_file (content), list_files (directory enumeration), and search_files (discovery) by emphasizing metadata retrieval for specific targets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies specificity ('specific files or folders') distinguishing it from directory listing or search tools, but provides no explicit when-to-use guidance, prerequisites, or alternatives for when paths are unknown.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
download_file
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false. The description adds critical behavioral context: the overwrite protection default and the configuration dependency for dest_folder, which agents need to handle errors correctly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, all earning their place. Front-loaded with the core action. Slightly imperative tone ('Provide the NAS file path') rather than descriptive, but efficient and readable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (no need to document return values) and annotations cover safety profile, the description captures the essential complexities: the config-dependent optional parameter and the file collision behavior. Missing only what happens on filename conflicts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description must compensate. It explains 'path' ('NAS file path') and 'dest_folder' (optional condition), and implies 'overwrite' behavior. However, it completely omits 'filename' and doesn't describe parameter formats or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Download'), clear resource ('NAS file'), and destination ('local directory'), distinguishing it from sibling listing/info tools like list_files and get_file_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage conditions: 'dest_folder is optional if default_download_dir is configured' and safety behavior 'Does not overwrite existing local files by default'. Lacks explicit comparison to when to use list_files vs download_file, though the distinction is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dir_size
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true. Description adds valuable behavioral context: it discloses recursive traversal ('including all files and subdirectories') and enumerates return values ('total size, file count, and directory count') not visible in annotations. Does not mention performance characteristics or permission requirements, but adequately supplements annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 defines action and scope, sentence 2 specifies return values, sentence 3 provides usage guidance. Logical front-loaded structure where every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter tool with output schema present. Description covers the operation, recursive behavior, and return value summary without needing to duplicate full output schema documentation. Annotations cover safety profile. No gaps requiring elaboration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (path parameter has title/type but no description), so description must compensate. It minimally compensates by implying the input is a directory path ('Calculate... of a directory'), but does not explicitly document the path parameter semantics, expected format, or examples. This meets the minimum viable threshold given the single parameter and clear tool name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Calculate') with resource ('total size of a directory') and scope ('including all files and subdirectories'). The phrase 'best tool for answering how much space does X use' effectively distinguishes from siblings like get_file_info (single file) and list_files (enumeration without size aggregation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('best tool for answering how much space does X use questions'), providing clear contextual guidance. However, lacks explicit 'when not to use' or named alternatives (e.g., does not mention to use get_file_info instead for single files).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_shares
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm read-only safety (readOnlyHint=true), while the description adds valuable workflow context about discovery and navigation entry points. It appropriately notes that this reveals 'available paths' for subsequent operations, adding conceptual behavior beyond the raw annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences. The first states the core function; the second provides usage context. Every word earns its place with no redundancy or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple purpose, existing output schema, and clear read-only annotations, the description is complete. It adequately covers the discovery purpose and workflow position without needing to elaborate return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to compensate. It does not mention the 'sort_by' or 'sort_direction' parameters at all, leaving the agent uninformed about sorting capabilities despite the schema lacking descriptions for these optional fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') with explicit resource ('shared folders on the NAS') and scope ('all'). It clearly distinguishes from sibling tools like 'list_files' by establishing this as the root-level discovery operation for shared folders rather than directory contents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'This is the starting point for file navigation — call this first to discover available paths.' This clearly positions the tool in the workflow hierarchy and specifies when to invoke it before using other file navigation siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_files
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only safety; description adds crucial operational details: pattern uses 'substring match (not glob)' and accepts 'human-readable sizes like 500MB'. Could mention result limits or pagination behavior, but covers primary query semantics well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, all value-add: purpose statement, pattern semantics, extension example, size format, and sibling alternatives. No redundancy or generic fluff. Front-loaded with the recursive search capability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with zero schema descriptions, the description successfully covers the critical functional parameters (pattern, extension, size range, folder_path). Minor gap: filetype and exclude_pattern are undocumented, though output schema exists to describe results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, description compensates by explaining pattern (substring vs glob), extension (with mkv example), and size parameters (human-readable formats). Missing documentation for filetype, exclude_pattern, and limit parameters, but covers the essential search criteria adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb (Search) + resource (files) + scope (recursively) + filtering dimensions (keyword, extension, size range). Clearly distinguishes from siblings like list_files (for directory contents) and get_dir_size (for sizes).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly names alternatives and when to use them: 'For directory contents or sizes, use list_files or get_dir_size instead.' Also clarifies the distinction between pattern and extension parameters to prevent misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
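The formula above can be checked with a short worked example. The sketch below plugs in this report's own figures (mean TDQS 4.1, lowest TDQS 3.3, coherence scores 5/5/5/3); the dimension weights come from the text, while the helper names are our own, not part of any Glama API.

```python
# Worked example of the quality-score formula, using this report's figures.
DIM_WEIGHTS = {  # per-tool dimension weights (sum to 1.0)
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted Tool Definition Quality Score for one tool (1-5 scale)."""
    return sum(DIM_WEIGHTS[dim] * value for dim, value in scores.items())

def overall_score(mean_tdqs, min_tdqs, coherence_dims):
    # 60% mean + 40% minimum, so one poorly described tool drags the score down
    definition_quality = 0.6 * mean_tdqs + 0.4 * min_tdqs
    coherence = sum(coherence_dims) / len(coherence_dims)
    return 0.7 * definition_quality + 0.3 * coherence  # 70/30 blend

score = overall_score(4.1, 3.3, [5, 5, 5, 3])
tier = ("A" if score >= 3.5 else "B" if score >= 3.0
        else "C" if score >= 2.0 else "D" if score >= 1.0 else "F")
print(round(score, 2), tier)  # 4.0 A
```

With these inputs the blend comes out just under 4.0, comfortably in the A tier, which matches the scores shown on this page.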
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/cmeans/mcp-synology'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.
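The same endpoint can be called from any HTTP client. A minimal Python sketch is shown below; only the URL is taken from this page, so treat the response shape (and any field you read from it) as an assumption to verify against the API docs.

```python
# Sketch: build and fetch a server's MCP directory API URL from Python.
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner, repo):
    """URL for one server's directory entry, e.g. cmeans/mcp-synology."""
    return f"{BASE}/{owner}/{repo}"

def fetch_server(owner, repo):
    """GET the entry and decode the JSON body."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Equivalent to the curl command above (uncomment to perform the request):
# info = fetch_server("cmeans", "mcp-synology")
```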