Moku MCP Server
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 8 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
Tool 1: set_routing
- Behavior (1/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this operation is destructive (overwrites existing routes), idempotent, requires specific privileges, or what errors might occur (e.g., invalid source/destination validation).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness (2/5): Is the description appropriately sized, front-loaded, and free of redundancy?
While the three-word description contains no filler, it is inappropriately undersized for a configuration tool handling signal routing. It fails the test that 'every sentence should earn its place' because it provides insufficient information to justify its brevity given the operational complexity implied by the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness (1/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, annotations, and the write-operation nature of 'set' commands, the description is grossly incomplete. It omits critical context such as whether changes are immediate, if they persist across reboots, or what the MCC context refers to, leaving the agent unprepared to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters (3/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('List of routing connections'), establishing a baseline of 3. The description adds no further semantics about what constitutes valid 'source' or 'destination' strings (e.g., hardware channels, logical identifiers), nor does it explain the MCC routing domain, but it does not need to compensate for schema gaps since coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose (2/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Configure MCC signal routing' is essentially a tautology that restates the tool name 'set_routing' with minimal expansion. While it mentions 'MCC' (undefined acronym) and 'signal routing', it fails to distinguish this tool from siblings like 'push_config' or 'get_config', leaving the specific scope ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines (1/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidance is provided. The description does not indicate when to use this tool versus alternatives like 'push_config', nor does it mention prerequisites (e.g., whether a device must be attached first) or warn about potential disruption to existing signals when re-routing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
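Taken together, the critiques above imply what a stronger set_routing definition might look like. The sketch below is hypothetical: the stated behaviors (overwrite semantics, non-persistence, the attach_moku prerequisite) are assumptions synthesized from the review, not verified against the actual server.

```python
# Hypothetical fuller MCP tool definition for set_routing, assembled from the
# critiques above. Behavioral claims in the description (immediate effect,
# no persistence across reboots, attach_moku prerequisite) are assumptions
# for illustration only. The nested source/destination item shape is likewise
# assumed from the review's mention of those fields.
set_routing_tool = {
    "name": "set_routing",
    "description": (
        "Configure MCC signal routing on the currently attached Moku device. "
        "Replaces the affected routes immediately; changes are assumed not to "
        "persist across reboots. Requires a prior attach_moku call. To deploy "
        "a full device configuration instead of individual routes, use "
        "push_config."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "connections": {
                "type": "array",
                "description": "List of routing connections",
                "items": {
                    "type": "object",
                    "properties": {
                        "source": {"type": "string"},
                        "destination": {"type": "string"},
                    },
                },
            }
        },
        "required": ["connections"],
    },
    # MCP ToolAnnotations hints: marks the write operation explicitly so the
    # description no longer carries the full behavioral burden.
    "annotations": {"destructiveHint": True, "idempotentHint": True},
}
```

A definition along these lines would address most of the low scores at once: Behavior (destructiveHint plus overwrite disclosure), Usage Guidelines (the push_config contrast), and Completeness (prerequisite and persistence).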
Tool 2: push_config
- Behavior (2/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. 'Deploy' implies a write operation but doesn't disclose atomicity, validation behavior, error handling, or whether this overwrites existing device state. For a configuration deployment tool, this lack of behavioral context is a significant gap.
- Conciseness (4/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse (6 words) but efficient. No wasted words, though the brevity borders on under-specification given the complexity of hardware configuration deployment. Front-loaded structure is appropriate.
- Completeness (2/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a deployment tool with nested object parameters and no output schema. Missing: prerequisites for device connection, success/failure indicators, relationship to get_config (inverse operation), and whether deployment is immediate or requires subsequent activation.
- Parameters (3/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description doesn't add parameter semantics beyond the schema (which already describes config_dict as 'MokuConfig serialized as dict'), nor does it provide examples, constraints, or format details for the nested object structure.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Deploy') and resource ('MokuConfig'), with explicit target ('connected device'). However, it fails to distinguish from sibling get_config (which presumably retrieves config), leaving the bidirectional relationship implicit rather than explicit.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use versus alternatives (e.g., set_routing for partial changes), nor prerequisites (e.g., whether attach_moku must be called first). The description assumes the agent already knows the workflow.
Tool 3: get_config
- Behavior (2/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but discloses minimal behavioral traits. While 'Retrieve' implies read-only access, it does not explicitly confirm safety, describe the return format, or explain error conditions (e.g., if called before device attachment).
- Conciseness (4/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at four words. The single sentence is front-loaded and contains no filler, though it borders on under-specification given the lack of supporting annotations or output schema.
- Completeness (2/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for operational use given no output schema exists. Fails to mention that device attachment is likely required (per sibling 'attach_moku'), does not describe configuration scope, and omits return value structure.
- Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, establishing baseline 4 per rubric. No parameter semantics are required or provided.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
States a specific verb ('Retrieve') and resource ('device configuration'), making the basic function clear. However, it does not explicitly differentiate from sibling 'get_device_info', leaving ambiguity between configuration and general device information.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'get_device_info' or 'push_config'. Does not mention prerequisites (e.g., requiring 'attach_moku' first) or typical workflows.
Tool 4: instrument slot listing tool (name not shown)
- Behavior (2/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what data format it returns, or any side effects.
- Conciseness (4/5): Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely terse (4 words) and front-loaded with the verb. While efficient, it borders on under-specification given the lack of annotations or output schema.
- Completeness (2/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should indicate what the tool returns (e.g., slot identifiers, configurations). Currently insufficient for an agent to understand the operation's value without invoking it.
- Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and 100% schema coverage of those zero parameters, meeting the baseline of 4. No parameter semantics are needed or provided.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('List') and specific resource ('configured instrument slots'), distinguishing it from siblings like get_config or discover_mokus. However, it lacks domain context explaining what 'instrument slots' represent in the Moku device system.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like get_config or get_device_info, nor any prerequisites or conditions for invocation.
Tool 5: attach_moku
- Behavior (3/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. The phrase 'assume ownership' hints at exclusive access semantics, but fails to explain failure modes (what happens if another client owns it without 'force'?), session persistence, or cleanup requirements. The mention of ownership is valuable but insufficient for a stateful connection tool.
- Conciseness (4/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is front-loaded with the core action. However, given the lack of annotations and the complex stateful nature of device ownership, the description is overly minimal rather than appropriately concise—leaving critical behavioral gaps.
- Completeness (2/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a stateful connection tool with ownership semantics, the description is incomplete. It lacks: prerequisites (discovery), success/failure outcomes, the exclusive nature of ownership, cleanup obligations, and the fact that this is required before using configuration siblings. No output schema exists to compensate.
- Parameters (3/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The main description adds no parameter-specific guidance (e.g., when to use 'force=true', or format examples for 'device_id' like IP vs serial number), but the schema adequately documents both parameters.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Connect', 'assume ownership') and identifies the resource ('Moku device'). However, it doesn't explicitly distinguish from sibling tools like 'discover_mokus' (which finds devices) or 'release_moku' (which drops ownership), though 'assume ownership' implies the state transition.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Missing crucial workflow context: it should mention that 'discover_mokus' typically precedes this call, that this must be called before configuration tools like 'push_config', and that 'release_moku' should be called when finished.
Tool 6: discover_mokus
- Behavior (2/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'zeroconf' indicating network broadcast behavior but fails to disclose what the tool returns (a list of devices?), whether it blocks for the full timeout duration, or that it generates network traffic. Critical behavioral context for a network discovery tool is missing.
- Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with six words. It front-loads the action ('Discover') and contains zero waste. Every word earns its place by conveying the operation, target, and method.
- Completeness (3/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one optional parameter and no output schema, the description adequately explains the operation but leaves a significant gap regarding return values. Since no output schema exists to document what 'discover' returns (likely device identifiers/IPs), the description should mention this to be complete.
- Parameters (3/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('Discovery timeout in seconds'), so the schema already fully documents the parameter. The description adds no additional parameter context, but given the high schema coverage, the baseline score of 3 is appropriate as no compensation is needed.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Discover') and resource ('Moku devices'), and specifies the mechanism ('via zeroconf'). It implicitly distinguishes from sibling tools like 'attach_moku' and 'push_config' by describing a read-only discovery operation versus state-changing actions, though it could explicitly state this is the first step before attaching.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no guidance on when to use it versus alternatives. It does not mention that this should be used before 'attach_moku' to find available devices, nor does it specify prerequisites like network access or when to adjust the timeout parameter.
Tool 7: get_device_info
- Behavior (2/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'Query' implies read-only, description does not explicitly confirm lack of side effects, error conditions (e.g., device not attached), or whether data is cached versus live. Insufficient disclosure for a hardware interaction tool.
- Conciseness (4/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with no wasted words. Front-loaded with verb and resource. However, given lack of annotations and output schema, the extreme brevity leaves significant gaps, suggesting it may be undersized for the tool's implicit complexity.
- Completeness (3/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimum viable for a zero-parameter tool: states what it returns. However, fails to compensate for missing output schema (no return format description) and missing annotations (no behavioral constraints). Does not address device attachment state requirements implied by sibling attach_moku.
- Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present per schema, establishing baseline of 4. Description correctly omits parameter discussion as none exist, avoiding unnecessary elaboration.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Query' with clear resource 'device metadata' and provides concrete examples (name, serial, IP). Distinguishes reasonably from sibling get_config by using 'metadata' versus configuration terminology, though explicit contrast would strengthen it further.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use versus alternatives (e.g., discover_mokus for finding devices vs getting current device info). Critical omission: fails to mention prerequisite requirements (e.g., whether attach_moku must be called first given the sibling tool exists).
Tool 8: release_moku
- Behavior (3/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the ownership release mechanism, implying exclusive access patterns typical of hardware devices. However, it omits critical behavioral details such as idempotency, whether device operations stop upon disconnection, error conditions if called without an active connection, or side effects on configuration state.
- Conciseness (5/5): Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence with zero redundancy. Every phrase earns its place: 'Disconnect' establishes the action, 'Moku' identifies the target, and 'release ownership' specifies the critical resource management outcome.
- Completeness (3/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, no output schema) and lack of annotations, the description covers the basic operation but remains incomplete regarding the connection lifecycle. It should clarify that this is a cleanup operation paired with attach_moku and indicate whether the device remains running or stops after release.
- Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool accepts zero parameters. Per the scoring rules, this establishes a baseline of 4. The description does not need to compensate for parameter documentation, and the empty schema is self-explanatory.
- Purpose (4/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Disconnect') and resource ('Moku') with the specific outcome ('release ownership'). While it implies the inverse of attach_moku through the verb choice, it does not explicitly differentiate from the sibling tool or state that this terminates a connection established by attach_moku.
- Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it indicate that this should be called after operations complete or to clean up resources. There is no mention of prerequisites (e.g., requiring an active connection) or sequencing relative to attach_moku.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Two badge variants are available, a Card Badge and a Score Badge; copy the snippet for either into your README.md.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5 as a weighted average of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
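The formula above can be sketched directly. A minimal Python rendering of the published weights and blends (the per-tool scores used in any example are illustrative, not this server's actual numbers):

```python
# Sketch of the published quality-score formula. The dimension weights, the
# 60/40 mean/minimum blend, the 70/30 quality/coherence blend, and the tier
# cutoffs all come from the text above.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool across the six dimensions."""
    return sum(DIM_WEIGHTS[d] * s for d, s in scores.items())

def definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the score down."""
    per_tool = [tdqs(s) for s in tool_scores]
    return 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

def overall(defn_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * defn_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier; B and above is passing."""
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, one weak tool (TDQS 1.65) alongside one strong tool (TDQS 4.0) yields a definition quality of about 2.36 rather than the 2.83 a plain mean would give, which is exactly how the 40% minimum term penalizes a single poorly described tool.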
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/sealablab/moku-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.
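The same endpoint shown in the curl example can be queried from Python. A minimal sketch using only the standard library; the response schema is not documented here, so the result is treated as an opaque JSON object:

```python
# Minimal sketch of calling the Glama MCP directory API for this server.
# The endpoint path comes from the curl example above; the shape of the
# returned JSON is an assumption (undocumented on this page).
import json
import urllib.request

SERVER_URL = "https://glama.ai/api/mcp/v1/servers/sealablab/moku-mcp"

def fetch_server_info(url: str = SERVER_URL) -> dict:
    """GET the server record and parse the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

Calling `fetch_server_info()` requires network access; inspect the returned dict to discover the actual fields the API exposes.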