SSH MCP Server
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.2.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 7 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
ssh_upload
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries full disclosure burden. While it identifies SFTP as the protocol, it fails to disclose whether the operation overwrites existing files, creates parent directories automatically, or how it handles authentication precedence when both password and key are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 10-word sentence with zero redundancy and proper front-loading. However, given the tool's complexity (multiple authentication methods, path handling), the extreme brevity arguably underserves the user despite being technically concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter file transfer operation with complex authentication options (key vs password), no output schema, and no annotations, a 10-word description is inadequate. Missing critical context: overwrite semantics, directory creation behavior, and success/failure indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema completely documents all 7 parameters including authentication options and paths. The description adds minimal semantic value beyond the schema, only mentioning 'local file' and 'remote host' which correspond to existing parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb (Upload), clear resource (local file to remote host), and mechanism (SFTP). It implicitly distinguishes from siblings like ssh_download (directionality) and ssh_write_file (local file path vs inline content) by emphasizing 'local file' transfer via SFTP protocol.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to prefer this over ssh_write_file (which likely writes content directly) versus uploading existing local files. No mention of prerequisites like SSH access requirements, file existence checks, or authentication method selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
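The gaps flagged above could be closed in the tool definition itself. Below is a hypothetical sketch, not the server's actual metadata: the overwrite, directory-creation, and auth-precedence claims are illustrative assumptions, and the annotation fields follow MCP's tool-annotation convention.

```json
{
  "name": "ssh_upload",
  "description": "Upload a local file to a remote host via SFTP. Overwrites the remote file if it already exists and does not create missing parent directories. If both privateKeyPath and password are provided, the key is tried first. Prefer ssh_write_file when the content is in memory rather than on disk.",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "openWorldHint": true
  }
}
```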
ssh_exec
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It partially compensates by disclosing the return structure (stdout, stderr, exit code) in lieu of an output schema. However, it omits critical behavioral traits: that execution is destructive/mutative, that it requires network connectivity, and how authentication failures are handled.
- Conciseness 5/5
Two sentences efficiently structured: first defines the operation, second describes the return value. No redundant phrases or unnecessary verbiage. Every sentence earns its place.
- Completeness 3/5
For a high-risk remote execution tool with 7 parameters and no annotations/output schema, the description is minimally adequate. It covers the return structure but misses security implications (arbitrary code execution), error handling behaviors, and side-effect warnings that would be expected for this complexity level.
- Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description adds no parameter-specific guidance beyond the schema (e.g., no syntax examples for privateKeyPath, no guidance on timeout units beyond the schema's 'milliseconds').
- Purpose 4/5
The description clearly states the specific verb 'Execute' and resource 'command on a remote host via SSH'. While it implicitly distinguishes from file-centric siblings (ssh_download, ssh_upload, etc.) by specifying 'command execution' versus file operations, it lacks explicit differentiation from ssh_diagnose.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives like ssh_diagnose for troubleshooting. It also lacks prerequisites such as authentication requirements (despite auth parameters existing) or security warnings about executing arbitrary commands.
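The missing disclosures noted above could live in the definition itself. A hypothetical sketch follows; the behavioral claims are illustrative assumptions rather than documented facts, and the annotation fields follow MCP's tool-annotation convention.

```json
{
  "name": "ssh_exec",
  "description": "Execute a command on a remote host via SSH. Returns stdout, stderr, and the exit code. The command runs with the full privileges of the authenticated user and can modify the remote system; there is no sandboxing. If the connection fails, run ssh_diagnose before retrying.",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "openWorldHint": true
  }
}
```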
- Behavior 2/5
With no annotations provided, the description carries the full burden but omits critical behavioral details: it does not disclose the read-only/safe nature of the operation, the output format/structure (crucial for a listing tool without an output schema), connection timeouts, or authentication failure behaviors. It mentions only the SFTP transport.
- Conciseness 5/5
Single sentence, front-loaded with action verb, zero redundancy. Efficiently conveys core purpose without extraneous text.
- Completeness 3/5
Adequate for a listing tool but has clear gaps: missing output specification (file names only? metadata? recursive?), no mention of pagination for large directories, and no safety/permission context given the lack of annotations and output schema.
- Parameters 3/5
Schema coverage is 100% with clear descriptions for all 6 parameters (host, port, username, etc.). The description adds no additional parameter semantics, meeting the baseline for high-coverage schemas.
- Purpose 5/5
Excellent specificity: 'List files' is a precise verb phrase, 'directory on remote host' identifies the resource, and 'via SFTP' distinguishes the transport method from siblings like ssh_exec (which uses SSH) and ssh_read_file (which reads file content).
- Usage Guidelines 3/5
Provides implied usage through clear naming (list vs download/read/write), but lacks explicit guidance on when to prefer this over ssh_exec (which could also run `ls`) or how it relates to ssh_diagnose. No 'when-not' or alternative recommendations stated.
ssh_write_file
- Behavior 3/5
Discloses critical destructive behavior ('overwrites') and transport method ('SFTP') since no annotations are provided. However, it omits important behavioral details: whether parent directories are created if missing, authentication failure modes, atomicity of the write operation, and encoding handling.
- Conciseness 5/5
Two short sentences with zero redundancy. Front-loaded action verb immediately establishes purpose. No filler words or redundant explanations. Every word earns its place.
- Completeness 3/5
Minimal but functional for a file write operation with well-documented schema. However, for a 7-parameter destructive operation with complex authentication options (key vs password) and no output schema, the description lacks important operational context such as directory creation behavior, permission defaults, and error conditions.
- Parameters 3/5
With 100% schema description coverage, the schema documents all 7 parameters (host, port, username, privateKeyPath, password, path, content) adequately. The description references 'content' which aligns with the schema, but adds no additional semantic detail beyond what the parameter descriptions already provide. Baseline 3 is appropriate given high schema coverage.
- Purpose 4/5
States specific verb ('Write'), resource ('file on a remote host'), and protocol ('via SFTP'). The phrase 'Creates or overwrites' clearly distinguishes this from append operations and clarifies destructive behavior. However, it does not explicitly differentiate from sibling 'ssh_upload' (which likely transfers local files versus writing string content).
- Usage Guidelines 3/5
The 'Creates or overwrites' clause provides implicit guidance on idempotent usage (use when you want to replace file contents entirely). However, it lacks explicit guidance on when to prefer this over 'ssh_upload' for local file transfers, or prerequisites like connection requirements.
ssh_diagnose
- Behavior 4/5
With no annotations provided, description carries full burden and discloses specific diagnostic behaviors: checks ssh-agent, keys, known_hosts, config, and performs test connection. Deducting one point as it doesn't explicitly confirm read-only safety or describe output format, though 'diagnose' and 'checks' imply non-destructive behavior.
- Conciseness 5/5
Perfectly structured: sentence 1 states purpose, sentence 2 details specific checks, sentence 3 gives usage timing. Zero redundancy; every sentence earns its place.
- Completeness 4/5
Strong completeness given 2-param tool with no output schema. Description adequately scopes what gets diagnosed. Minor gap: doesn't hint at return value structure (e.g., whether it returns a report vs boolean), though listed checks imply diagnostic output.
- Parameters 3/5
Schema coverage is 100% (host and port fully described), establishing baseline 3. Description doesn't add param-specific semantics, but doesn't need to given complete schema documentation.
- Purpose 5/5
Excellent specific purpose: 'Diagnose SSH connectivity issues' with specific verb (diagnose) and resource. Clearly distinguishes from operational siblings (ssh_exec, ssh_upload, etc.) by listing diagnostic checks (agent, keys, known_hosts, config) rather than actions.
- Usage Guidelines 5/5
Exceptional guidance with explicit temporal logic: 'Use this BEFORE attempting SSH operations... or AFTER a failed SSH operation'. Clearly positions the tool relative to alternatives and failure scenarios.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
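As a worked check of the arithmetic above, here is a small Python sketch that applies the published weights to the ssh_upload dimension scores from this page; the coherence value passed in is a placeholder, not the server's real coherence score.

```python
# Weights from the scoring description: Purpose 25%, Usage Guidelines 20%,
# Behavioral Transparency 20%, Parameter Semantics 15%, Conciseness 10%,
# Contextual Completeness 10%.
WEIGHTS = {
    "purpose": 0.25,
    "usage": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores):
    """Tool Definition Quality Score: weighted mean over the six dimensions."""
    return sum(WEIGHTS[d] * s for d, s in scores.items())

def overall(tool_scores, coherence):
    """70% definition quality (60% mean TDQS + 40% min TDQS) + 30% coherence."""
    per_tool = [tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * sum(per_tool) / len(per_tool) + 0.4 * min(per_tool)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    # Tier cutoffs: A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0, F below.
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# ssh_upload dimension scores as reported above.
ssh_upload = {"purpose": 5, "usage": 2, "behavior": 2,
              "parameters": 3, "conciseness": 4, "completeness": 2}

print(round(tdqs(ssh_upload), 2))        # 3.1
print(tier(overall([ssh_upload], 3.0)))  # B (with the placeholder coherence)
```

Note how the 40% weight on the minimum TDQS means one weak tool drags the server-level score down even when the mean is healthy.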
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/YawLabs/ssh-mcp'
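The endpoint above follows an owner/repository pattern; a minimal Python sketch for building such URLs (the helper name is illustrative, not part of any SDK):

```python
from urllib.parse import quote

def server_api_url(owner: str, repo: str) -> str:
    # Build the Glama MCP directory URL for a given server slug,
    # percent-encoding any characters unsafe in a path segment.
    base = "https://glama.ai/api/mcp/v1/servers"
    return f"{base}/{quote(owner)}/{quote(repo)}"

print(server_api_url("YawLabs", "ssh-mcp"))
# https://glama.ai/api/mcp/v1/servers/YawLabs/ssh-mcp
```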
If you have feedback or need assistance with the MCP directory API, please join our Discord server.