mcpHydroSSH
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.4
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 10 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
ssh_remove_server
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. While 'remove' implies deletion, the description fails to clarify whether the removal is permanent, whether active connections are affected, or whether the operation can be undone. For a destructive config operation, this is insufficient disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 2/5
Is the description appropriately sized, front-loaded, and free of redundancy?
At four words, the description is undersized rather than appropriately concise. It front-loads nothing beyond the bare action statement, wasting no words but also failing to earn its place with sufficient actionable information for a destructive tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, and considering this is a destructive operation, the description should disclose behavioral traits like permanence and side effects. With only one parameter, complexity is low, but the safety-critical nature of config deletion demands more context than provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('Server ID to remove'), so the schema fully documents the parameter. The description adds no additional semantic context about the parameter format or valid server ID sources, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action (remove) and target (server from config), preventing confusion with physical server deletion. However, it lacks the specificity of which config or what constitutes a 'server' in this context, falling short of clearly distinguishing this as SSH config management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 1/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus siblings like ssh_disconnect (which terminates sessions but preserves config) or ssh_update_server (which modifies existing entries). No prerequisites mentioned, such as whether the server must be disconnected first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
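To make the critique above concrete, here is a sketch of what a fuller definition for the remove tool could look like, written as a Python dict because the server's actual implementation language is unknown. The tool name, wording, and schema are illustrative assumptions inferred from the report, not the server's actual code:

```python
# Hypothetical rewrite of a terse "remove server" description, sketching the
# disclosures the evaluation asks for. All names and wording are illustrative.
improved_tool = {
    "name": "ssh_remove_server",  # assumed name, inferred from sibling tools
    "description": (
        "Permanently delete a server entry from the stored SSH config. "
        "This cannot be undone; re-add the server with ssh_add_server if needed. "
        "Active sessions are not terminated; use ssh_disconnect first. "
        "Obtain valid server IDs from ssh_list_servers."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "serverId": {"type": "string", "description": "Server ID to remove"},
        },
        "required": ["serverId"],
    },
}
```

A description in this shape addresses the Behavior, Usage Guidelines, and Purpose gaps at once: it discloses permanence, names the sibling tools, and states the prerequisite ID lookup.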
ssh_update_server
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. 'Update' implies mutation, but the description doesn't disclose whether omitted fields are preserved or reset, whether changes are validated immediately, idempotency guarantees, or what the operation returns. These are critical gaps for a destructive-adjacent configuration operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at five words with no redundancy. However, the brevity comes at the cost of omitting necessary context; every word earns its place, but the description needs additional sentences to be functionally complete.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter mutation tool with partial update semantics, conditional required fields (privateKeyPath vs password based on authMethod), and no output schema, the description is insufficient. It fails to explain the partial update behavior, field interdependencies, or operation side effects necessary for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 8 parameters (serverId, name, host, port, username, authMethod, privateKeyPath, password) adequately documented in the schema itself. The description adds no parameter-specific guidance beyond the generic 'server config' reference, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 2/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Update') and target ('server config'), but 'config' is ambiguous (SSH connection settings vs remote server configuration). It fails to distinguish from sibling 'ssh_view_config' (read vs write) or clarify that this modifies stored connection parameters (host, port, auth) rather than remote server files.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this vs 'ssh_add_server' (create new) or prerequisites like obtaining serverId from 'ssh_list_servers'. The schema implies partial updates are supported (only serverId required), but the description doesn't confirm this critical usage pattern.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ssh_connect
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but discloses almost nothing about behavior: it does not state whether the connection persists for subsequent tool calls, what authentication method is used, what error conditions to expect, or what the return value indicates (no output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief at four words with no filler, but arguably too terse for the complexity of SSH connection management. The single sentence is front-loaded with the verb, earning points for structure despite under-specification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a connection-oriented tool with no output schema and no annotations. Critical missing context includes: session persistence model (does it affect subsequent ssh_exec calls?), connection lifecycle, and relationship to the sibling tool ssh_disconnect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting both serverId (referencing ssh_list_servers) and timeout. The description adds no additional parameter context, syntax guidance, or examples beyond what the schema already provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
States the basic action (Connect) and resource (SSH server) clearly, but fails to differentiate from siblings like ssh_add_server or clarify that this establishes a session to an existing configured server rather than creating a new configuration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives (e.g., ssh_exec which may handle its own connection), nor does it mention prerequisites such as needing to call ssh_add_server first or obtain a serverId from ssh_list_servers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ssh_get_status
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but reveals almost nothing. It doesn't specify what status values are returned (boolean, string enum, object?), whether this performs an active health check or passive lookup, or that omitting connectionId returns all connections (a key behavioral trait only documented in the schema).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words with no redundancy. However, it may be excessively concise—sacrificing necessary context for brevity. The single sentence is front-loaded but undersized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of output schema and annotations, the description should explain what 'status' means and what data structure is returned. It also omits the important behavior that this can function as a list-all-connections tool when no ID is provided, leaving a significant gap for an AI agent determining how to monitor connection states.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting that connectionId is optional and controls filtering vs. listing all connections. The description adds no semantic information beyond what the schema already provides, but meets the baseline expectation given the complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a basic verb ('Get') and resource ('SSH connection status'), but 'status' remains ambiguous—it could mean connection health, configuration state, or active socket status. It fails to distinguish from ssh_list_servers (which lists configured servers) or clarify whether this checks established sessions vs. server configurations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like ssh_connect (which might implicitly check status) or ssh_list_servers. It doesn't mention prerequisites (e.g., needing an existing connectionId from a previous connection) or when status checking is necessary before ssh_exec.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ssh_exec
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It lacks disclosure about return values (stdout/stderr/exit code), streaming behavior, side effects on the remote server, or what happens if the connection drops. 'Execute' implies mutation but specifics are absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is efficient but undersized for a 4-parameter tool with complex lifecycle implications. While not wasting words, it fails to front-load critical operational context (prerequisites, output format) that would help an agent invoke this correctly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Incomplete for a command execution tool lacking annotations and output schema. Missing: prerequisite connection state, return value structure, error conditions, and side effect disclosure. The description only identifies the tool without operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline 3 is appropriate. The description adds no parameter-specific context, but the schema adequately documents all four parameters including units (milliseconds) and optional behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Execute' with clear resource 'command on an SSH server'. However, it does not clarify whether this requires a pre-existing connection (implied by connectionId parameter but not stated) or how it differs from ssh_connect in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or prerequisites mentioned. Fails to state that an active connection is required (inferred from connectionId parameter) or when to prefer this over ssh_connect. No mention of error handling or timeout considerations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ssh_add_server
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It fails to specify idempotency behavior (what happens if 'id' already exists?), persistence characteristics, validation rules (does it verify host reachability?), or return values. 'Add' implies mutation but lacks safety context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently structured with zero redundancy. However, given the tool's complexity (8 parameters with interdependent conditional logic), the description is arguably undersized rather than optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 8 parameters, complex conditional requirements between authMethod and credential fields, and no output schema, the description is inadequate. It omits return value specification, error handling scenarios, and the critical relationship between authentication method selection and required credential parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for all 8 parameters including enum values and conditional requirements. The description adds no parameter-specific context, but baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Add' with resource 'SSH server' and destination 'to config', distinguishing it from siblings like ssh_connect (session establishment) and ssh_update_server (modification). However, it lacks specificity about what 'config' entails (persistence model, scope).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus ssh_update_server (for existing entries) or prerequisites like ID uniqueness. The conditional parameter requirements (privateKeyPath required only when authMethod is 'key') are not mentioned despite being critical for correct invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
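One way to close the conditional-requirement gap flagged above is to encode the authMethod/credential dependency directly in the input schema using JSON Schema if/then keywords, in addition to stating it in the description. A minimal sketch follows; the parameter names are taken from the report, but the schema itself is an assumption, shown here as a Python dict:

```python
# Sketch: making the authMethod/credential dependency explicit in the schema.
# Parameter names follow the report; the schema content is illustrative only.
add_server_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string", "description": "Unique server ID"},
        "host": {"type": "string", "description": "Hostname or IP address"},
        "authMethod": {"type": "string", "enum": ["key", "password"]},
        "privateKeyPath": {
            "type": "string",
            "description": "Path to private key; required when authMethod is 'key'",
        },
        "password": {
            "type": "string",
            "description": "Required when authMethod is 'password'",
        },
    },
    "required": ["id", "host", "authMethod"],
    # JSON Schema conditional: key auth requires privateKeyPath, else password.
    "if": {"properties": {"authMethod": {"const": "key"}}},
    "then": {"required": ["privateKeyPath"]},
    "else": {"required": ["password"]},
}
```

Even where a client ignores if/then, repeating the dependency in the per-parameter descriptions (as above) gives an agent enough to pick the right credential field on the first attempt.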
ssh_disconnect
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but discloses nothing about side effects. It omits whether this gracefully terminates ongoing commands, whether it affects other connections, and what cleanup occurs (e.g., does the connection also disappear from ssh_get_status?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse at four words. While no words are wasted, the extreme brevity contributes to the lack of behavioral context and guidelines. A single additional sentence about lifecycle or defaults would improve utility without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimum viable for a simple lifecycle operation. However, given the complexity of the SSH tool ecosystem (9 siblings including exec and connect), the description should clarify session lifecycle implications—specifically whether disconnecting affects running exec commands or server configuration persistence.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds no parameter details, but the schema adequately documents the optional connectionId behavior without needing expansion in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb (disconnect) and resource (SSH server), making the basic purpose clear. However, it misses the opportunity to distinguish from sibling 'ssh_remove_server' (which deletes configuration) by clarifying this terminates active sessions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus alternatives, or prerequisites (e.g., 'use after completing ssh_exec operations'). The optional nature of the connectionId parameter and its default behavior (most recent) is left entirely to the schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ssh_list_servers
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'List' implies a read-only operation, the description fails to specify what 'configured' entails (persistent storage vs. active connections), return format, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence 'List all configured SSH servers' is optimally concise with no redundant words. The key verb and resource appear immediately at the front.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool without annotations or output schema, the description meets minimum viability by stating the core operation. However, it lacks context about what properties are returned for each server or what constitutes a 'configured' server in this system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per the evaluation rules. The description appropriately requires no additional parameter clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'List' with the clear resource 'configured SSH servers', distinguishing it from connection-oriented siblings like ssh_connect and ssh_exec. However, it doesn't clarify the distinction from ssh_view_config, which might also access server information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like ssh_view_config or ssh_get_status. There are no 'when-not-to-use' caveats or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ssh_view_config
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'View' implies a read-only operation, the description fails to disclose return format, potential errors, whether credentials are exposed, or any side effects. The agent lacks information about what 'configuration' actually contains beyond the mention of servers and settings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single efficient sentence that is appropriately front-loaded with the action verb. There is no redundant or wasteful text, and the length is appropriate for a parameterless inspection tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description minimally compensates by mentioning 'servers and settings' as content, but fails to indicate return format (JSON, text, structured object). For a simple read-only tool with no parameters, the description is adequate but has a clear gap regarding output expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which per evaluation rules establishes a baseline score of 4. No parameter documentation is required or expected given the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('View') and resource ('SSH configuration') with scope indicators ('full', 'including servers and settings'). It implicitly distinguishes from sibling ssh_list_servers by emphasizing 'full configuration' rather than just a list, though it could explicitly mention this distinction for clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no guidance on when to use it versus alternatives. It does not indicate when to prefer this over ssh_list_servers or whether it should be called before other operations like ssh_connect.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains the tool displays help content, but lacks disclosure on return format (text vs structured), safety profile, or whether this operates offline without server configuration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 7 words with zero redundancy. Information is front-loaded and appropriately sized for a simple utility tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple help utility with one optional parameter and no output schema. While it could briefly mention the available topic categories, the description successfully conveys the tool's function without unnecessary verbosity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'topic' parameter fully documented including enum values. The description does not mention the parameter, but per calibration guidelines, high schema coverage establishes a baseline of 3 without requiring compensation from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Show') and resource ('help and usage examples'), clearly distinguishing this from operational siblings like ssh_connect or ssh_exec. It precisely scopes the tool to the mcpHydroSSH system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, or when to use specific topics (config, auth, etc.). No mention of whether this requires an active connection or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
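The weighting described above can be sketched as a short calculation. The dimension weights, component weights, and tier cutoffs are taken from the text; the function and variable names are illustrative:

```python
# Sketch of the quality-score formula described above. Weights and tier
# thresholds come from the text; all names here are illustrative.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)

def definition_quality(tdqs_list):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    mean = sum(tdqs_list) / len(tdqs_list)
    return 0.6 * mean + 0.4 * min(tdqs_list)

def overall_score(definition_quality_score, coherence_score):
    """Overall: 70% tool definition quality + 30% server coherence."""
    return 0.7 * definition_quality_score + 0.3 * coherence_score

def tier(score):
    """Map an overall score to a letter tier; B and above is passing."""
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return grade
    return "F"
```

Applied to the first tool scored above (Purpose 3, Usage Guidelines 1, Behavior 2, Parameters 3, Conciseness 2, Completeness 2), `tool_tdqs` gives 2.2, and the 40% minimum term shows why one weak tool drags the server-level score down.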
MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/hydroCoderClaud/mcpHydroSSH'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.