yade-mcp
Server Quality Checklist
- Disambiguation 5/5: Tools have clearly distinct purposes with no overlap. The two execution tools (sync vs async) are explicitly differentiated by use case and warnings. The two API documentation tools serve different discovery modes (hierarchical browsing vs keyword search). Task lifecycle tools cover non-overlapping operations.
- Naming Consistency 5/5: All tools follow the consistent yade_verb_noun pattern in snake_case. The prefix is uniform, verbs are action-oriented (browse, check, execute, interrupt, list, query), and compound names use underscores consistently (task_status, not taskStatus).
- Tool Count 5/5: Seven tools is optimal for this domain. The set covers documentation access (2 tools), synchronous execution (1), and asynchronous task lifecycle management (4), with no redundancy or bloat. Each tool earns its place in the YADE workflow.
- Completeness 4/5: Covers the main YADE interaction patterns well: API discovery, immediate REPL-style execution, and the full async job lifecycle (submit, poll, interrupt, list). Minor gaps include task cleanup/removal and environment reset capabilities, but core workflows are fully supported.
Average tool score: 4/5 across all 7 tools. Lowest-scoring tool: 3.1/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.1
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 7 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Tool Scores
yade_check_task_status
- Behavior 2/5. Does the description disclose side effects, auth requirements, rate limits, or destructive behavior? No annotations are provided, yet the description fails to disclose critical behavioral traits: the long-polling capability (wait_seconds, up to 3600 s), paginated log retrieval (skip_newest/limit), and whether repeated calls are safe and idempotent. Agents need to know what a tool does to the world before calling it; descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5. Is the description appropriately sized, front-loaded, and free of redundancy? A single nine-word sentence, front-loaded with no waste; an appropriate length for the information conveyed. Shorter descriptions cost fewer tokens and are easier for agents to parse; every sentence should earn its place.
- Completeness 3/5. Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt? Minimal but acceptable, given that an output schema exists and the input schema documents all five parameters; however, it lacks high-level context about pagination patterns and long-polling behavior that would aid agent decision-making. Complex tools with many parameters or behaviors need more documentation; simple tools need less, and this dimension scales expectations accordingly.
- Parameters 3/5. Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides? The schema has 100% description coverage with clear parameter documentation; the description adds no further syntax details, usage examples, or cross-parameter relationships. Input schemas describe structure but not intent; descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5. Does the description clearly state what the tool does and how it differs from similar tools? It states the specific verb 'Check' and the resources 'status and output' of a 'submitted YADE task', clearly distinguishing it from sibling execution tools such as yade_execute_task by implying a polling/retrieval pattern on existing tasks. Agents choose between tools based on descriptions; a clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5. Does the description explain when to use this tool, when not to, or what alternatives exist? There is no explicit guidance on when to use it versus alternatives (e.g., yade_list_tasks for bulk listing) or on workflow prerequisites (e.g., needing a task_id from yade_execute_task). Agents often have multiple tools that could apply; explicit guidance like "use X instead of Y when Z" prevents misuse.
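The long-polling behavior flagged above can be sketched with a short polling loop. Everything here is illustrative: `check_task_status` is a hypothetical stand-in for the real MCP tool call, and the canned state sequence replaces an actual YADE task.

```python
import itertools

# Hypothetical stand-in for the yade_check_task_status tool: each call
# returns the next state of a fake task (running, running, then done).
_fake_states = itertools.chain(["running", "running"], itertools.repeat("done"))

def check_task_status(task_id: str, wait_seconds: int = 0) -> dict:
    # A real long-poll would block server-side for up to wait_seconds;
    # this stub just pops the next canned state.
    return {"task_id": task_id, "state": next(_fake_states)}

def wait_for_completion(task_id: str, poll_wait: int = 60, max_polls: int = 100) -> dict:
    """Poll until the task leaves the 'running' state."""
    for _ in range(max_polls):
        status = check_task_status(task_id, wait_seconds=poll_wait)
        if status["state"] != "running":
            return status
    raise TimeoutError(f"task {task_id} still running after {max_polls} polls")

result = wait_for_completion("task-123")
print(result["state"])  # → done
```

A description that spelled out this loop-until-done usage pattern would address the Behavior and Usage Guidelines gaps noted above.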
yade_list_tasks
- Behavior 3/5. With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'pagination' (useful context for the parameter mechanism) and 'tracked' (suggesting filtered results), but does not clarify the sort order (reverse-chronological is only implied by 'skip_newest'), whether data is real-time or cached, or what 'tracked' specifically means in the YADE context.
- Conciseness 4/5. Extremely concise at six words with no filler, front-loaded with the action verb. Given the lack of annotations and output schema documentation, it may be overly terse: it could accommodate one additional sentence of usage guidance without sacrificing clarity.
- Completeness 3/5. With an output schema present, the description appropriately omits return-value details. However, for a tool with no annotations and multiple task-related siblings, it lacks necessary context on what distinguishes 'tracked' tasks, the YADE system context, and how the listing relates to the task lifecycle (e.g., does it include completed tasks?).
- Parameters 3/5. Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'pagination', which contextualizes the limit/skip_newest parameters, but does not explain the unusual 'skip newest' semantics (reverse-chronological ordering) or provide usage examples that would help an agent understand the pagination pattern.
- Purpose 4/5. Clearly states the action ('List') and resource ('tracked YADE tasks') and mentions pagination, distinguishing it from a simple unconstrained list. However, it does not explicitly differentiate itself from the sibling yade_check_task_status (single-task lookup vs. bulk listing) or explain what 'YADE' refers to.
- Usage Guidelines 2/5. No guidance is provided on when to use this versus alternatives such as yade_check_task_status for specific task lookups or yade_execute_task for creating new tasks; there are no stated prerequisites, exclusions, or workflow positioning hints.
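The 'skip newest' pagination semantics questioned above can be illustrated with a small sketch. The parameter names follow the review text; the newest-first ordering is an assumption inferred from `skip_newest`, not confirmed server behavior.

```python
def list_tasks(tasks, skip_newest=0, limit=10):
    # Assumed semantics: tasks are ordered newest-first, skip_newest drops
    # the most recent entries, and limit caps the page size.
    newest_first = sorted(tasks, key=lambda t: t["created"], reverse=True)
    return newest_first[skip_newest:skip_newest + limit]

# Synthetic task list: higher "created" means newer, so task-5 is newest.
tasks = [{"id": f"task-{i}", "created": i} for i in range(1, 6)]

page1 = list_tasks(tasks, skip_newest=0, limit=2)  # the two newest tasks
page2 = list_tasks(tasks, skip_newest=2, limit=2)  # the next two
print([t["id"] for t in page1])  # ['task-5', 'task-4']
print([t["id"] for t in page2])  # ['task-3', 'task-2']
```

Spelling out exactly this paging contract in the description would resolve the Parameters and Behavior gaps noted above.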
yade_interrupt_task
- Behavior 3/5. With no annotations provided, the description carries the full disclosure burden. 'Graceful' adds valuable behavioral context (clean shutdown rather than immediate termination), but the description does not explain side effects, final task states, or what the operation returns, despite the existence of an output schema.
- Conciseness 5/5. A single, efficient six-word sentence that front-loads the core action; no words are wasted on tautology or restating the tool name.
- Completeness 3/5. For a single-parameter tool with 100% schema coverage and an output schema, the description covers the essential action. However, it lacks context on interruption eligibility (can pending tasks be interrupted?) and post-interruption state that would help an agent handle the response correctly.
- Parameters 3/5. Schema description coverage is 100% (task_id is fully documented as returned by yade_execute_task). The description adds no explicit parameter guidance, which matches the baseline score of 3 when the schema already fully documents inputs.
- Purpose 4/5. Provides a specific verb ('Request graceful interruption') and resource ('running YADE task'), clearly distinguishing it from execution and monitoring tools, though it does not explicitly name siblings such as yade_check_task_status or yade_execute_task in the text itself.
- Usage Guidelines 3/5. The phrase 'graceful interruption of a running YADE task' implies when to use it (to stop an active task) but provides no explicit when-not-to-use guidance, prerequisites, or alternative approaches (e.g., force kill vs. graceful stop).
yade_browse_api
- Behavior 3/5. With no annotations provided, the description carries the full burden. It successfully explains the hierarchical navigation model and progressive-disclosure behavior (what happens at each path level), but omits auxiliary behavioral traits such as rate limiting, caching, or an explicit read-only statement (though 'browse documentation' implies one).
- Conciseness 5/5. The core purpose is front-loaded in the first sentence, followed immediately by the filesystem analogy. The bullet points efficiently demonstrate the three-level navigation pattern without redundancy; every sentence and example earns its place.
- Completeness 5/5. With an output schema handling return-value documentation and 100% schema parameter coverage, the description provides complete contextual coverage: the hierarchical browsing concept is fully explained, and the filesystem analogy gives an agent a sufficient mental model to use the tool effectively.
- Parameters 4/5. While the input schema has 100% coverage with detailed examples, the description adds semantic context beyond the schema by framing the path parameter as filesystem-like navigation; the 'ls /' and 'cat' metaphors help agents understand the parameter as a navigation pointer rather than just a string value.
- Purpose 5/5. Clearly states that the tool 'Browse[s] YADE Python API documentation hierarchically', with a specific verb and resource. The filesystem analogy effectively distinguishes it from siblings such as yade_query_api (keyword search) and the execution tools, establishing a unique navigation/exploration role.
- Usage Guidelines 4/5. Provides excellent implicit guidance through the progressive-disclosure examples (empty path → categories → classes → details), but lacks explicit guidance on when to choose this over yade_query_api for API discovery; the hierarchical 'browse' metaphor only implies exploration versus direct querying.
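The three-level progressive disclosure pattern praised above (`ls /` for listing, `cat` for reading) can be sketched over a toy documentation tree. The tree contents and the function name are invented placeholders, not the real YADE API or tool implementation.

```python
# Toy documentation tree: category -> class -> docstring (placeholder content).
API_DOCS = {
    "materials": {"FrictMat": "Frictional material (placeholder docs)."},
    "engines":   {"NewtonIntegrator": "Explicit integrator (placeholder docs)."},
}

def browse_api(path: str = ""):
    """Resolve a filesystem-like path to the matching documentation level."""
    parts = [p for p in path.strip("/").split("/") if p]
    if not parts:                 # "" -> list top-level categories (like `ls /`)
        return sorted(API_DOCS)
    if len(parts) == 1:           # "category" -> list classes within it
        return sorted(API_DOCS[parts[0]])
    category, cls = parts[:2]     # "category/Class" -> full docs (like `cat`)
    return API_DOCS[category][cls]

print(browse_api(""))                    # ['engines', 'materials']
print(browse_api("materials"))           # ['FrictMat']
print(browse_api("materials/FrictMat"))  # placeholder docstring
```

Each call narrows the view one level, which is exactly the mental model the filesystem analogy gives an agent.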
yade_execute_task
- Behavior 4/5. With no annotations provided, the description carries the full burden and successfully discloses the key behavioral traits: immediate return of a task_id, background execution, and the need for companion tools to check status and interrupt. It could still improve by mentioning error-handling behavior, output persistence, or concurrency constraints.
- Conciseness 5/5. Optimally structured and front-loaded: purpose statement first, return behavior second, lifecycle tooling third, use cases fourth, and the alternative tool last. Every sentence serves a distinct function, and the bulleted companion-tool list compresses what could have been verbose prose.
- Completeness 5/5. Given an output schema (which absolves the description from detailing return values), 100% parameter schema coverage, and the complexity of async task management, the description is complete: it covers the execution model, lifecycle-management integration, appropriate use cases, and explicit differentiation from the synchronous alternative without over-specifying what the schemas handle.
- Parameters 3/5. The input schema has 100% description coverage, documenting 'entry_script' as an absolute path and 'description' as a brief purpose statement. The description refers to a 'Python script' generally but adds no semantic details, constraints, or examples beyond the schema; the baseline of 3 is appropriate given the schema's completeness.
- Purpose 5/5. Opens with a specific verb ('Submit') and clearly identifies the resource (a Python script) and execution model (asynchronous, in YADE). It effectively distinguishes itself from the sibling yade_execute_code by contrasting 'production simulation runs/long cycles' with 'quick queries and REPL-style testing', so the agent can select the correct tool for long-running vs. interactive work.
- Usage Guidelines 5/5. Provides explicit when-to-use guidance ('production simulation runs, long O.run() cycles, and any operation that may take minutes or longer') and explicitly names the alternative ('For quick queries... use yade_execute_code'). It also maps the task-lifecycle ecosystem by listing the three companion tools for polling, interrupting, and browsing history.
yade_execute_code
- Behavior 5/5. With no annotations provided, the description carries the full burden and discloses the critical behavioral traits: synchronous, blocking execution ('fire-and-return'), stdout as the return channel, persistent side effects ('side effects persist'), the execution environment ('yade modules are already imported'), and operational constraints ('cannot be cancelled'). A warning section highlights timeout risks.
- Conciseness 5/5. Exceptionally well structured, with clear sections for execution model, return behavior, typical uses, sibling differentiation, and warnings. Front-loaded with the synchronous nature and immediate return; no redundancy or filler.
- Completeness 5/5. With an output schema present, the description appropriately keeps return-value explanation to 'Returns stdout immediately' while comprehensively covering the execution environment, blocking behavior, and sibling relationships, which is sufficient for a code-execution tool with complex side effects.
- Parameters 3/5. Schema description coverage is 100%, establishing the baseline of 3; the schema already documents 'Python code to execute in YADE process' and timeout details. The description's usage examples (querying O.bodies) contextualize the code parameter but add no syntactic or semantic detail beyond the schema definitions.
- Purpose 5/5. Opens with a specific verb and resource ('Execute Python code synchronously in the running YADE process') and explicitly distinguishes itself from the sibling yade_execute_task by contrasting the 'fire-and-return' model with tracked, asynchronous execution, listing concrete operations (query simulation state, create bodies, set properties) that clarify scope.
- Usage Guidelines 5/5. Explicitly names the alternative tool (yade_execute_task) for long simulations and states when not to use this tool ('Avoid long-running calls... They block until completion'). It clearly states limitations ('NOT tracked by yade_list_tasks', 'cannot be interrupted or polled'), and the typical-uses section provides positive guidance.
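The sync-versus-async guidance above amounts to a simple dispatch rule. This sketch encodes it with a hypothetical helper; the 30-second budget is an invented threshold for illustration, since the real choice is the agent's judgment, not a coded rule.

```python
def pick_execution_tool(estimated_seconds: float, sync_budget: float = 30.0) -> str:
    """Encode the documented guidance: quick REPL-style work goes to the
    blocking yade_execute_code, anything that may run long goes to the
    tracked, interruptible yade_execute_task. sync_budget is an assumed cutoff."""
    if estimated_seconds <= sync_budget:
        return "yade_execute_code"
    return "yade_execute_task"

print(pick_execution_tool(2))     # quick query of O.bodies -> yade_execute_code
print(pick_execution_tool(1800))  # long O.run() cycle -> yade_execute_task
```

The rule only works because both descriptions name each other as alternatives, which is what earns the 5/5 Usage Guidelines scores here.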
yade_query_api
- Behavior 4/5. With no annotations provided, the description carries the full burden. It discloses the return format ('matching class/function names with descriptions'), ranking behavior ('ranked by relevance'), and matching semantics ('like grep'), but does not mention how empty result sets are handled.
- Conciseness 5/5. Excellent structure: a front-loaded purpose statement, followed by the return-value description, bulleted usage guidelines, and a related-tools section. No redundant text; every sentence provides actionable context.
- Completeness 5/5. With 100% schema coverage and an output schema, the description appropriately focuses on usage context and sibling differentiation rather than parameter mechanics, covering purpose, returns, usage scenarios, and alternatives comprehensively for a search tool of this complexity.
- Parameters 4/5. Schema coverage is 100%, establishing the baseline of 3. The description adds value with the 'like grep' analogy, which clarifies substring matching for the 'query' parameter, and provides concrete domain examples ('friction material', 'hertz mindlin') that supplement the schema examples.
- Purpose 5/5. A specific verb ('Search'), resource ('YADE API documentation'), and mechanism ('like grep'). It clearly distinguishes keyword search from the sibling yade_browse_api's full documentation retrieval.
- Usage Guidelines 5/5. An explicit 'When to use' section with a concrete scenario ('You have keywords but don't know the exact class name') and domain-specific examples, and it directly names yade_browse_api as the alternative for the complementary use case.
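The grep-like substring matching and relevance ranking described above can be sketched as follows. The corpus entries and the ranking heuristic (count of query-term hits) are illustrative assumptions, not the server's actual algorithm.

```python
def query_api(query: str, docs: dict[str, str]) -> list[str]:
    """Rank entries by how many query keywords appear in the name or description."""
    terms = query.lower().split()
    scores = {
        name: sum(term in (name + " " + desc).lower() for term in terms)
        for name, desc in docs.items()
    }
    hits = [name for name, score in scores.items() if score > 0]
    # Higher hit count first; ties broken alphabetically for determinism.
    return sorted(hits, key=lambda name: (-scores[name], name))

docs = {
    "FrictMat": "Elastic frictional material",
    "HertzMindlin": "Hertz-Mindlin contact law",
    "NewtonIntegrator": "Explicit Newton integrator",
}
print(query_api("friction material", docs))  # ['FrictMat']
```

This is the "keywords but no exact class name" scenario the description calls out; exact-path lookups belong to yade_browse_api instead.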
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
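The weighting above can be made concrete with a worked example using figures from this report: mean TDQS 4.0 and minimum 3.1 (from the tool scores), and coherence dimensions 5, 5, 5, 4 (from the checklist). The formulas follow the stated weights exactly.

```python
def overall_score(mean_tdqs: float, min_tdqs: float, coherence_dims: list[int]) -> float:
    # Definition quality: 60% mean TDQS + 40% minimum TDQS.
    definition_quality = 0.6 * mean_tdqs + 0.4 * min_tdqs
    # Server coherence: equal-weight mean of the four dimensions.
    coherence = sum(coherence_dims) / len(coherence_dims)
    # Overall: 70% definition quality + 30% coherence.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    return ("A" if score >= 3.5 else "B" if score >= 3.0 else
            "C" if score >= 2.0 else "D" if score >= 1.0 else "F")

score = overall_score(4.0, 3.1, [5, 5, 5, 4])  # this server's figures
print(round(score, 2), tier(score))  # → 3.97 A
```

Note how the 40% minimum term works: the single 3.1 tool drags definition quality from 4.0 down to 3.64 before coherence is blended in.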
MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/yusong652/yade-mcp'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.