Proxmox MCP Server
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 8 tools.
No known security issues or vulnerabilities reported.
Tool Scores
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It successfully documents the auto-detect behavior for the node parameter and lists return fields (status, uptime, CPU, etc.). However, it lacks safety disclosure (read-only vs destructive), error handling, or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear Args/Returns sections. First sentence establishes purpose efficiently. The Returns section may be redundant, since context signals indicate an output schema exists, but it provides a readable summary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter tool with simple types. Covers parameter semantics and return values, but lacks safety annotations (readOnlyHint equivalent) or error scenarios that would help an agent handle failures gracefully.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (only titles 'Vmid', 'Node'). Description compensates by defining vmid as 'The VM or container ID' and node as 'The Proxmox node name (optional, will auto-detect)', clarifying optionality and behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Get' and resource 'current runtime status of a VM or container'. The term 'runtime' helps distinguish from sibling get_vm_info (likely static configuration), though it doesn't explicitly contrast with get_vm_metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus siblings like get_vm_metrics or get_vm_info. No prerequisites or conditions mentioned (e.g., VM must exist, requires monitoring permissions).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
With no annotations provided, the description carries the full burden. It successfully documents the output structure (name, description, creation time, RAM state) and mentions the auto-detect behavior for the node parameter. However, it lacks operational details like error handling, performance characteristics, or confirmation that this is a read-only operation.
- Conciseness: 5/5
The description is perfectly structured with clear 'Args:' and 'Returns:' sections, uses bullet points for the output fields, and contains zero wasted words. The purpose statement is front-loaded in the first sentence.
- Completeness: 4/5
For a tool with 2 parameters and simple output, the description is appropriately complete. It documents inputs (compensating for empty schema) and outputs (despite existence of output schema), providing sufficient information for invocation. It could be improved by mentioning error cases (e.g., VM not found).
- Parameters: 4/5
Given 0% schema description coverage, the description effectively compensates by documenting both parameters: 'vmid' is identified as 'The VM ID' and 'node' is explained as 'The Proxmox node name' with the critical behavioral note that it is optional and 'will auto-detect'. This adds necessary context missing from the schema.
- Purpose: 4/5
The description clearly states the action ('List') and resource ('snapshots for a VM'), making the tool's function immediately obvious. However, it does not explicitly differentiate from siblings like 'get_vm_info' or 'get_vm_status' which also retrieve VM-related data.
- Usage Guidelines: 2/5
The description provides no guidance on when to use this tool versus alternatives (e.g., when to use 'get_vm_status' instead). While the return value description implies it's for historical snapshot inspection, there are no explicit when/when-not recommendations.
- Behavior: 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what information is returned (node name, status, resources, uptime), but does not explicitly state safety characteristics (read-only, non-destructive) or performance constraints despite being a zero-parameter query operation.
- Conciseness: 5/5
The description is efficiently structured with a clear purpose statement followed by a bulleted list of return values. Every sentence earns its place with no redundancy or extraneous information.
- Completeness: 4/5
Given the tool's simplicity (zero parameters) and the existence of an output schema, the description provides adequate completeness by summarizing the key returned fields. It appropriately focuses on scope clarification rather than parameter documentation.
- Parameters: 4/5
The tool has zero input parameters, which establishes a baseline score of 4. The description correctly implies no filtering or configuration is needed for this cluster-wide list operation, consistent with the empty input schema.
- Purpose: 5/5
The description clearly states the specific action (List) and resource (Proxmox nodes in the cluster), distinguishing it from sibling tools like list_vms (virtual machines) and get_cluster_status (overall status vs individual node details).
- Usage Guidelines: 3/5
While there are no explicit when-to-use or when-not-to-use statements, the detailed list of returned fields (CPU, memory, VM count) implies appropriate use cases for resource monitoring and node inventory. However, it does not explicitly compare against alternatives like get_cluster_status.
- Behavior: 4/5
No annotations provided, so description carries full burden. It effectively discloses return structure (cluster name, quorum status, node list, total resources), though it omits explicit safety/side-effect statements (implied by 'Get').
- Conciseness: 5/5
Optimal structure: single purpose sentence followed by bullet-point return value documentation. Every element conveys necessary information without redundancy.
- Completeness: 4/5
Complements the existing output schema well by summarizing the key data fields returned. Adequate for a parameterless read operation, though explicit idempotency or caching notes would perfect it.
- Parameters: 4/5
Zero parameters present, meeting the baseline expectation. No parameter semantics needed or provided, which is appropriate for an empty schema.
- Purpose: 5/5
States specific verb ('Get') and resource ('Proxmox cluster status and health'), clearly distinguishing from VM-focused siblings (get_vm_*, list_vms) by emphasizing cluster-wide scope and quorum concepts.
- Usage Guidelines: 3/5
Provides implied usage through specificity ('overall cluster status'), but lacks explicit guidance on when to prefer this over list_nodes or how it differs from aggregating individual node statuses.
- Behavior: 4/5
No annotations provided, so description carries full burden. It successfully discloses the external dependency (QEMU guest agent), installation requirements, and enumerates return fields (mount points, total size, used/free space, filesystem type). Could be improved by describing error behavior when the agent is unavailable.
- Conciseness: 4/5
Well-structured with clear sections (purpose, prerequisites, args, returns). Front-loaded with the core action. Installation commands are slightly verbose but provide necessary troubleshooting context. The 'Args:' and 'Returns:' sections effectively compensate for the undescribed schema without excessive wordiness.
- Completeness: 4/5
Given zero annotations and zero schema description coverage, the description successfully covers the critical prerequisite (guest agent), parameter meanings, and output structure. The context signals indicate an output schema exists, so the description appropriately summarizes rather than exhaustively documents return values.
- Parameters: 4/5
Schema has 0% description coverage, requiring full compensation by the description. The text adequately documents both parameters: 'vmid: The VM ID' and 'node: The Proxmox node name (optional, will auto-detect)'. This provides sufficient semantic meaning for the agent to understand parameter purposes, though it lacks format constraints or examples.
- Purpose: 5/5
Excellent specificity with verb 'Get' + resource 'filesystem/disk space information from inside a VM' + mechanism 'using the QEMU guest agent'. The mention of the guest agent clearly distinguishes this from sibling tools like get_vm_info or get_vm_metrics which likely operate at the hypervisor level.
- Usage Guidelines: 4/5
Strong prerequisite disclosure: explicitly states 'requires the qemu-guest-agent to be installed and running' and provides installation commands for Debian/Ubuntu and RHEL/CentOS. This effectively signals when the tool will/won't work. Lacks explicit mention of alternatives if the agent is unavailable.
- Behavior: 4/5
No annotations provided, so description carries full disclosure burden. Effectively documents search behavior (cross-node when node omitted) and return structure (hardware, settings, status categories). Missing explicit safety confirmation (read-only nature) or performance notes about cross-node searches.
- Conciseness: 5/5
Well-structured with clear Args/Returns sections. Front-loaded purpose statement followed by parameter details and structured return value list. Every element provides value; no redundancy.
- Completeness: 4/5
Given output schema exists, the detailed return value list is helpful bonus context rather than required. Parameter documentation is complete despite empty schema. Sufficient for a 2-parameter read operation, though could mention Proxmox platform context explicitly.
- Parameters: 5/5
Schema has 0% description coverage (titles only). Description fully compensates by providing concrete example for vmid ('100, 101') and clear behavioral semantics for node ('If not provided, will search all nodes'). Adds essential context missing from schema.
- Purpose: 5/5
States specific action ('Get detailed information') and resource ('specific VM or container'). Implicitly distinguishes from sibling list_vms (single vs list) and get_vm_status (detailed config vs simple status) by emphasizing 'detailed configuration' including hardware, settings, and status.
- Usage Guidelines: 3/5
Provides useful guidance that omitting 'node' parameter searches all nodes, but lacks explicit comparison to siblings like get_vm_status or get_vm_metrics. Does not clarify when to use this comprehensive tool versus narrower alternatives.
- Behavior: 4/5
With no annotations provided, the description carries the full disclosure burden. It successfully documents the return structure (time-series data for CPU, Memory, Network, Disk I/O) and timeframe granularity options. It could improve by mentioning data aggregation behavior or error conditions for invalid VM IDs.
- Conciseness: 5/5
The description is well-structured with clear Args and Returns sections. Every sentence adds value—purpose is front-loaded, parameter details are precise, and output specification is comprehensive without verbosity.
- Completeness: 4/5
For a 3-parameter tool with zero schema annotations, the description is nearly complete. It covers inputs and outputs effectively. A minor gap is the lack of behavioral context regarding data resolution or potential latency when querying large timeframes like 'year'.
- Parameters: 5/5
Given 0% schema description coverage, the description provides essential compensation by fully documenting all three parameters: vmid (VM or container ID), node (optional Proxmox node name), and timeframe (with complete enum descriptions and default value noted).
- Purpose: 5/5
The description opens with a clear, specific purpose: 'Get historical metrics/performance data for a VM.' The verb 'Get' and resource 'historical metrics/performance data' distinctly differentiate this tool from siblings like get_vm_status (current state) and get_vm_info (configuration).
- Usage Guidelines: 3/5
The word 'historical' implies the intended use case (trend analysis vs. point-in-time status), but there is no explicit guidance on when to choose this over get_vm_status or get_vm_info, nor any mention of prerequisites like VM existence.
- Behavior: 4/5
With no annotations provided, the description carries the full burden and comprehensively documents the return structure (specific fields like vmid, type, maxmem). However, it omits operational context such as potential performance implications of listing all resources or required permissions.
- Conciseness: 5/5
The description is efficiently structured with a clear purpose statement followed by a detailed breakdown of return fields. No sentences are wasted; the content is front-loaded and information-dense.
- Completeness: 5/5
Given that an output schema exists and the tool has zero parameters, the description provides complete coverage by detailing the return structure and clearly defining the tool's scope relative to the Proxmox environment.
- Parameters: 4/5
The input schema contains zero parameters, establishing a baseline of 4. The description correctly requires no additional parameter documentation since the tool accepts no arguments.
- Purpose: 5/5
The description clearly states the action ('List') and scope ('all VMs and containers across all Proxmox nodes'), effectively distinguishing it from sibling tools like get_vm_info (single item retrieval) and list_nodes (infrastructure rather than workloads).
- Usage Guidelines: 3/5
While the description implies bulk enumeration use cases through phrases like 'all VMs' and 'across all nodes', it lacks explicit guidance on when to prefer this over get_vm_info for specific VM details or performance considerations when listing large clusters.
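To illustrate how the recurring gaps above (missing read-only disclosure, little when-to-use guidance, no error scenarios) could be closed, here is a hypothetical docstring for a tool like get_vm_status. The signature and wording below are invented for this sketch, not taken from the server's actual code:

```python
from typing import Optional


def get_vm_status(vmid: int, node: Optional[str] = None) -> dict:
    """Get the current runtime status of a VM or container.

    Read-only: makes no changes to the VM or cluster.
    Use get_vm_metrics for historical data and get_vm_info for
    static configuration; use this tool for a point-in-time check.

    Args:
        vmid: The VM or container ID (e.g. 100).
        node: The Proxmox node name. Optional; when omitted, all
            nodes are searched to auto-detect where the VM runs.

    Raises:
        ValueError: If no VM or container with the given vmid exists.
    """
    ...  # implementation elided in this sketch
```

A description in this shape addresses Behavior (read-only disclosure), Usage Guidelines (when to prefer sibling tools), and error handling in a few extra lines.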
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
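The weighting described above can be sketched as a small calculation. The dimension weights, the 60/40 mean/min split, and the 70/30 overall split are taken from the text; the per-tool sample scores are made up for illustration:

```python
# Per-tool dimension weights from the rubric above.
WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_score(dims):
    """Weighted 1-5 score (TDQS) for a single tool."""
    return sum(WEIGHTS[k] * v for k, v in dims.items())

def definition_quality(scores):
    """Server-level definition quality: 60% mean + 40% minimum,
    so one poorly described tool drags the result down."""
    return 0.6 * (sum(scores) / len(scores)) + 0.4 * min(scores)

def overall(defn, coherence):
    """Overall score: 70% tool definition quality, 30% coherence."""
    return 0.7 * defn + 0.3 * coherence

# Two hypothetical tools:
t1 = tool_score({"purpose": 5, "usage": 3, "behavior": 4,
                 "parameters": 4, "conciseness": 5, "completeness": 4})
t2 = tool_score({"purpose": 4, "usage": 2, "behavior": 3,
                 "parameters": 4, "conciseness": 4, "completeness": 3})
score = overall(definition_quality([t1, t2]), coherence=4.0)
# score is about 3.69, which lands in tier A (>= 3.5)
```

Note how the minimum term works: even if the mean is high, a single tool scoring 2.0 would contribute 0.4 × 2.0 to the definition-quality component, capping the server's tier.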
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/teomarcdhio/proxmox-mcp'
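For scripted access, the same endpoint can be called from code. This is a minimal sketch: only the endpoint shape shown in the curl example above is assumed, and the response body is treated as opaque JSON.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, repo: str) -> str:
    """Build the directory-API URL for a server slug."""
    return f"{API_BASE}/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """Fetch a server's metadata from the MCP directory API."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live network request when uncommented):
# info = fetch_server("teomarcdhio", "proxmox-mcp")
```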
If you have feedback or need assistance with the MCP directory API, please join our Discord server.