Roam Research
Server Quality Checklist
Latest release: v2.18.0
- Disambiguation 5/5
Each tool has a clearly distinct purpose: creation tools are separated by content type (page, outline, table, todo), search tools are specialized by query type (text, tag, date, status, hierarchy, block refs), and other tools handle specific operations like fetching, batch actions, renaming, and memory. Descriptions are thorough and explicitly compare similar tools, minimizing ambiguity.
- Naming Consistency 5/5
All tools follow a consistent `roam_` prefix with a clear verb_noun pattern (e.g., `roam_create_outline`, `roam_search_by_text`, `roam_fetch_block`). Even compound verbs like `find_pages_modified_today` maintain the pattern. No mixing of styles or abbreviations.
- Tool Count 4/5
24 tools is on the higher side but appropriate for the broad scope of Roam Research, which includes content creation, multiple search types, batch operations, queries, memory, and import/export. Each tool serves a specific function, though a few could potentially be merged (e.g., the search tools might be consolidated).
- Completeness 4/5
The tool set covers most core workflows: creating pages (including outlines, tables, todos), fetching content, advanced searching, batch editing, renaming, and memory. Missing dedicated delete operations are handled through batch actions. Some advanced features like attribute management are absent but not critical for basic usage.
Average 3.9/5 across 24 of 24 tools scored. Lowest: 2.7/5.
See the Tool Scores section below for per-tool breakdowns.
- 1 of 3 issues responded to in the last 6 months
- No commit activity data available
- Last stable release on
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI status not available
This repository is licensed under the MIT License.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
If you are the author, simply claim the server.
If the server belongs to an organization, first add glama.json to the root of your repository:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}
```

Then claim the server. Browse examples.
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
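Expressed as code, the calculation above reads roughly as follows. This is a minimal sketch: the dimension weights, blend ratios, and tier cut-offs are taken directly from this answer, while the function and field names are ours.

```typescript
// Per-tool TDQS: weighted mean of the six dimension scores (each 1-5).
function toolDefinitionQuality(d: {
  purpose: number; usage: number; behavior: number;
  parameters: number; conciseness: number; completeness: number;
}): number {
  return (
    0.25 * d.purpose +
    0.20 * d.usage +
    0.20 * d.behavior +
    0.15 * d.parameters +
    0.10 * d.conciseness +
    0.10 * d.completeness
  );
}

// Server-level score: 70% definition quality + 30% coherence, where
// definition quality blends the mean and the minimum per-tool TDQS.
function overallScore(toolScores: number[], coherence: number): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  const min = Math.min(...toolScores);
  const definitionQuality = 0.6 * mean + 0.4 * min; // one weak tool drags this down
  return 0.7 * definitionQuality + 0.3 * coherence;
}

function tier(score: number): "A" | "B" | "C" | "D" | "F" {
  return score >= 3.5 ? "A" : score >= 3.0 ? "B" : score >= 2.0 ? "C" : score >= 1.0 ? "D" : "F";
}
```

For the numbers reported above (mean 3.9, lowest 2.7), the definition-quality component works out to 0.6 × 3.9 + 0.4 × 2.7 = 3.42, which illustrates how much weight a single poorly described tool carries.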
Tool Scores
- Behavior 1/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits (e.g., read-only, error handling, permissions). It simply states it fetches content, which is insufficient for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (one sentence) and effectively front-loaded, but it could include more detail without harming conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, multiple format options, and no output schema, the description is too brief and does not cover important aspects like return structure or error states.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents parameters thoroughly. The description adds no meaningful extra semantics beyond what the schema provides, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a page by title and returns content in a specified format, but it does not differentiate from sibling tools like roam_fetch_page_full_view or roam_fetch_block.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives; it lacks context on prerequisites or scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
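For concreteness, a call to the page-fetch tool reviewed above (evidently roam_fetch_page_by_title, given the sibling comparisons) might look like the following through the MCP TypeScript SDK. Only a title and a format option are confirmed by the review; the argument names and values below are assumptions, not taken from the actual schema. Later sketches in this section reuse the `client` created here.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to the Roam server over stdio (command is illustrative).
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(new StdioClientTransport({ command: "roam-research-mcp" }));

// Hypothetical call: fetch a page by title in one of the supported formats.
const page = await client.callTool({
  name: "roam_fetch_page_by_title",
  arguments: {
    title: "Project Notes",   // assumed argument name
    format: "markdown",       // assumed: one of the "multiple format options"
  },
});
```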
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It only states the rename action and identification method, but omits critical details like side effects (e.g., cascading renames), required permissions, graph selection, or write key requirements. The behavioral impact is underdescribed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences that convey the core action without extraneous information. It is front-loaded with the purpose and identification method, achieving high efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters (including graph and write_key) and no output schema, the description lacks completeness. It does not explain the graph/write_key parameters, return behavior, or confirmations. A user would need to infer from the schema alone, which is insufficient for confident use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameter descriptions in the schema are already comprehensive. The description adds marginal value by restating the identification via title or UID, but does not clarify relationships between parameters or usage nuances beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (rename a page) and the resource (a page identified by title or UID). It is specific and uses a verb+resource structure. However, it does not differentiate from sibling tools like roam_create_page or roam_move_block, which could also involve page titles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't explain when to use this instead of roam_create_page or roam_update_page_markdown. The description lacks context for appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It does not disclose any behavioral traits beyond the basic search action, such as performance, pagination, error handling, or what happens with no results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with key information. Efficient but could be slightly expanded without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 6 parameters and no output schema, the description is minimal. It does not explain output format, error scenarios, or how multiple filters interact, leaving gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context that page_title_uid can be title or UID, but others like include/exclude are not elaborated beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for blocks by status (TODO/DONE) and specifies the scope (all pages or specific page). It distinguishes from siblings by focusing on status but doesn't explicitly contrast with other search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternative search tools (e.g., roam_search_by_text, roam_search_by_date). No exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
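A hypothetical invocation of the status-search tool reviewed above (roam_search_by_status is named elsewhere in this report), reusing the client from the first sketch. The `page_title_uid`, `include`, and `exclude` parameters are named in the review; the value formats are assumptions.

```typescript
// Find TODO blocks, optionally scoped to one page and filtered by text.
const todos = await client.callTool({
  name: "roam_search_by_status",
  arguments: {
    status: "TODO",                   // or "DONE"
    page_title_uid: "Project Notes",  // optional scope: page title or UID
    include: "urgent",                // assumed: keep only matches containing this text
  },
});
```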
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It implies a read-only search but does not disclose side effects, authentication needs, or return behavior. The presence of a write_key parameter is not explained in context of this search tool, potentially confusing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundant information. Every word carries meaning. Ideal conciseness for a simple search tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description should explain return format. It does not mention what is returned (blocks, UIDs, etc.) or any pagination/limitations. For a tool with 6 parameters, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-described in the schema. The description adds minimal value, only echoing the direction concept. Baseline 3 is appropriate as the description does not significantly enhance parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches for parent or child blocks in the block hierarchy, specifying direction up or down. It distinguishes from sibling tools by focusing on hierarchy, though it could be more explicit about the starting block.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like roam_search_block_refs or roam_search_by_text. It does not specify prerequisites or exclusions, leaving the agent to infer usage from the description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
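A sketch of the hierarchy-search tool reviewed above (roam_search_hierarchy is named later in this report). The review confirms a direction concept ("up" for parents, "down" for children); the starting-block argument name is an assumption.

```typescript
// Walk the block hierarchy downward from a starting block.
const descendants = await client.callTool({
  name: "roam_search_hierarchy",
  arguments: {
    block_uid: "abc123XYZ",  // assumed name for the starting block
    direction: "down",       // "up" for parents, "down" for children
  },
});
```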
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses some behaviors (heading creation, parent_uid precedence, default for include_memories_tag) but does not explain idempotency, error handling, or side effects like duplicate memory handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise with a front-loaded purpose sentence. The markdown note adds necessary context, though it could be more tightly coupled. Overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers main functionality but lacks output/return specification and error handling details. Given no output schema, this is a gap. Adequate for basic use but incomplete for robust tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully describes parameters. The description adds marginal value (e.g., markdown note) but does not significantly enhance parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: adding a memory stored on the daily page with a tag. While it implicitly distinguishes from roam_recall (retrieval), it does not explicitly differentiate from other add tools like roam_add_todo.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear context for usage (adding memories) and includes a prerequisite (load Roam Markdown Cheatsheet). However, it lacks guidance on when to avoid using this tool or direct comparisons with alternatives like roam_recall.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
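A heavily hedged sketch of the memory-add tool reviewed above. The review names only its retrieval counterpart roam_recall, so the tool name below is an assumption; `parent_uid` (which takes precedence over the daily page) and `include_memories_tag` (which has a default) are mentioned in the review, while the memory-text argument name is ours.

```typescript
// Store a memory; by default it lands on today's daily page with a tag.
await client.callTool({
  name: "roam_remember", // assumed name: the counterpart to roam_recall
  arguments: {
    memory: "Prefers weekly summaries on Fridays", // assumed argument name
    include_memories_tag: true,  // defaulted per the description
    // parent_uid: "abc123XYZ",  // optional: takes precedence over the daily page
  },
});
```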
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for behavioral disclosure. It accurately describes the read nature of the operation but does not mention permissions, rate limits, or error handling. The description is adequate but not detailed enough to fully inform the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise: two sentences that front-load the core purpose and quickly elaborate on search capabilities. No superfluous information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should mention what the tool returns (e.g., a list of block references or UIDs). It currently only describes input behavior, leaving a significant gap for the agent regarding expected results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage and includes descriptions for each parameter. The tool's general description adds no additional meaning beyond what the schema already provides. Thus, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching for block references within a page or across the entire graph. It specifies the types of searches (specific block, page title, all references), which effectively distinguishes it from sibling search tools that focus on text, dates, or tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description outlines when to use the tool (for block references) but does not provide explicit guidance on when not to use it or suggest alternatives. For example, it could clarify that roam_search_by_text is more appropriate for general text searches. This lack of exclusions limits the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It only describes core function and one exclusion, lacking details on behaviors like return format, pagination, or performance implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences that front-load the purpose and add a key exclusion. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no annotations or output schema, the description is minimal. It lacks details on result format, limits, and edge cases, leaving the agent under-informed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters. The description adds no additional parameter-level context beyond the schema, meeting baseline but not exceeding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches for blocks/pages based on dates, and explicitly excludes daily pages with ordinal date titles, providing a specific verb-resource-scope and distinguishing from potential confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when not to use (daily pages with ordinal date titles), but does not provide alternatives or when to use this tool over other search siblings like roam_search_by_text or roam_search_by_status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses returned fields (UID, text, order, children, optional ancestors) and optional parameters, but omits error behavior, auth requirements (write_key mentioned but not explained), or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place. First sentence states action and resource; second details return values. No redundant or filler text. Front-loaded with key info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description provides essential info but lacks details on error handling, rate limits, or the role of write_key in a fetch operation. It covers core functionality but not edge cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the purpose of depth and include_ancestors, including default values and meaning of deduced data (e.g., 'chain to the page root'). This goes beyond the schema's plain field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch a block by its UID' with optional children/ancestors, which is a specific verb+resource combination. It distinguishes from sibling tools like roam_fetch_page_by_title and roam_fetch_page_full_view by focusing on blocks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor any when-not-to-use conditions. Sibling tool names suggest differentiation, but the description itself lacks explicit usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
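A sketch of a roam_fetch_block call (the quoted description is "Fetch a block by its UID"). The review confirms `depth` and `include_ancestors`; the block-identifier argument name is an assumption.

```typescript
// Fetch a block with two levels of children and its ancestor chain.
const block = await client.callTool({
  name: "roam_fetch_block",
  arguments: {
    block_uid: "abc123XYZ",   // assumed argument name
    depth: 2,                 // levels of children to include
    include_ancestors: true,  // also return the chain to the page root
  },
});
// Per the description, the result carries UID, text, order, children,
// and (optionally) ancestors.
```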
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It only states it 'moves a block', indicating mutation, but lacks details on side effects, reversibility, or any constraints beyond the schema. For a mutation tool, this is insufficient disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the action and purpose. Every word is necessary, no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple move operation with a clear schema and relation to a sibling, the description is adequate but does not explain output or error conditions. It meets minimum completeness for a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema, which already fully documents all 5 parameters with their descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it moves a block to a new parent/position, using a specific verb ('Move') and resource ('block'), and distinguishes itself from the sibling tool `roam_process_batch_actions` by noting it's a convenience wrapper for single block moves.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
It indicates it's for single block moves as a wrapper around `roam_process_batch_actions`, implying when to use it (single move) vs. the batch alternative. However, it does not explicitly state when not to use it or other criteria like required permissions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
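A sketch of a single-block move with roam_move_block, which the review describes as a convenience wrapper around `roam_process_batch_actions`. The tool reportedly takes 5 parameters; all argument names below are assumptions.

```typescript
// Move one block under a new parent at a given position.
await client.callTool({
  name: "roam_move_block",
  arguments: {
    block_uid: "abc123XYZ",   // block to move (assumed name)
    parent_uid: "def456UVW",  // new parent (assumed name)
    order: 0,                 // assumed: position under the new parent
  },
});
```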
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that each item becomes an actionable block with todo status and provides markdown formatting rules. However, it does not explain behavioral aspects such as irreversibility, required permissions, or effect on existing blocks. Since no annotations exist, more behavioral context would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: three sentences covering the main function, markdown notes, and a prerequisite. It is well-structured with the key action first, followed by important usage notes. No extraneous information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the essential information for using the tool: what it does, how to format todos, and a prerequisite. However, it omits details on output/return behavior and potential side effects. Given the lack of output schema and annotations, the description could be slightly more comprehensive, but it is largely complete for a focused tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already provides clear parameter definitions. The description does not add significant meaning beyond the schema, aside from implicitly associating the 'todos' array with the tool's purpose. Therefore, the description adds minimal extra value for parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function using a specific verb 'Add' and resource 'list of todo items' to a specific location 'today's daily page'. This distinguishes it from sibling tools that create pages, outlines, or tables, making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to choose this tool over alternatives. While it implies use for daily journal todos, it lacks exclusions or comparisons to sibling tools like roam_create_outline or roam_update_page_markdown. The prerequisite about the markdown cheatsheet is helpful but does not address usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
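A sketch of a roam_add_todo call. The review confirms a `todos` array whose items land on today's daily page as actionable TODO blocks; the string-per-item shape is an assumption.

```typescript
// Add two todo items to today's daily page.
await client.callTool({
  name: "roam_add_todo",
  arguments: {
    todos: ["Review pull requests", "Draft the weekly summary"],
  },
});
```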
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It explains pagination, scope behavior, and case-sensitivity, but lacks details on rate limits, authentication, error handling, or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with core purpose, no wasted words. Efficient and scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description should specify the return format (e.g., block UIDs, texts). It does not, leaving a significant gap for an agent to understand what the response looks like.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining how scope affects search behavior and pagination, but most parameters are already well-described in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for blocks containing text, with a special scope for page titles ('page_titles'). It distinguishes from siblings like roam_search_block_refs and roam_search_by_date by focusing on text content and namespace prefix matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the 'page_titles' scope and mentions pagination. However, it does not explicitly state when not to use this tool (e.g., for date-based search), though sibling names provide context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
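A sketch of a roam_search_by_text call. The `page_titles` scope value is quoted in the review (it switches matching to page titles, including namespace prefixes); the text and scope argument names are assumptions.

```typescript
// Search page titles for a term; omit scope to search block text instead.
const hits = await client.callTool({
  name: "roam_search_by_text",
  arguments: {
    text: "retrospective",  // assumed argument name
    scope: "page_titles",   // per the description; supports namespace prefix matching
  },
});
```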
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description clearly indicates the temporal scope (today since midnight) and mentions pagination/sorting. It does not detail return format or performance, but is sufficiently transparent for a read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence front-loads the core purpose and key features. Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 parameters and no output schema or annotations, the description is brief but adequate for a simple retrieval tool. It misses context like what a page constitutes or how modification time is determined, but remains functional.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 5 parameters have descriptions in the schema (100% coverage), so the description adds little beyond summarizing 'pagination and sorting'. The description does not introduce new semantics not already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds pages modified today and includes pagination and sorting. It uses specific verb and resource, differentiating from siblings like roam_search_by_text or roam_fetch_page_by_title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving today's modified pages but does not explicitly state when not to use or compare with alternatives like roam_search_by_date. Adequate but lacks differentiation guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. The description implies a read-only, non-destructive operation by stating it provides reference information. However, it does not explicitly confirm safety, permissions, or rate limits. The 'API efficiency tips' mention adds some behavioral context, but overall transparency is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise given the extensive list of topics covered. It is front-loaded with the main purpose, followed by a bullet-like list, and ends with an important usage note. Every sentence adds value, though the list could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description fully explains what the tool provides and includes a usage directive. Given no output schema, the description does not specify return format, but the nature of a cheatsheet makes this acceptable. The tool's role among siblings is clear, and the description is complete enough for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for its two parameters (graph and write_key), with clear descriptions. The description does not add any extra meaning beyond the schema. Baseline of 3 is appropriate since schema already handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'provides the comprehensive Roam syntax reference' with a specific verb and resource. It covers a wide range of topics, distinguishing it from the sibling tools which are all action-oriented (create, fetch, search, etc.) rather than reference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit usage guideline: 'Always load this cheatsheet before creating or updating Roam content.' This gives clear context for when to use the tool. It does not mention when not to use or alternatives, but since this is the only reference tool among siblings, exclusion is not necessary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description explains key behaviors: it combines data from two sources (page and tags), deduplicates, and optionally filters/sorts. This adds useful context beyond a simple retrieval, though it does not mention safety or side effects (likely read-only).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the main action. It is concise but could be slightly tighter by removing redundancy with parameter descriptions. Still efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given moderate complexity (4 params, no output schema, no annotations), the description covers the main functionality, data sources, deduplication, and filtering/sorting. It does not explain error handling or prerequisites, but is reasonably complete for an agent to use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions already cover all parameters (100% coverage). The description restates filtering and sorting options, adding no new meaning. It does explain the output structure (deduplicated list), which is slightly helpful but not parameter-specific.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves stored memories from a specific page or tagged blocks, and returns a combined, deduplicated list. It uses a specific verb ('Retrieve') and resource ('stored memories'), distinguishing it from sibling tools like roam_fetch_block or roam_search_block_refs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for accessing stored memories, but it does not explicitly mention when to use alternatives or when not to use it. No guidance on exclusions or preferred use cases is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It explains that the tool handles nested block structures, validates consistency, and converts empty cells. However, it does not disclose that it is a write operation, nor does it mention authorization or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections and front-loaded purpose. Slightly verbose in the example but justified given complexity. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers core usage well with example and parameter details. But does not explain return value (no output schema) or error handling. Given 6 parameters and no output schema, somewhat incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds value beyond schema: explains typical first header is empty, cells count must match headers, order can be first/last. Provides an example demonstrating usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it creates a table in Roam with headers and rows, and explains why this tool is needed (abstracts complex nested structure). It distinguishes from sibling tools like roam_create_outline by focusing specifically on tables.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a 'Why use this tool' section explaining when to use it. Includes an important prerequisite about loading a resource. Does not explicitly list alternatives or when not to use, but the specificity implies appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
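A sketch of a roam_create_table call reflecting the constraints the review quotes: the first header is typically empty, each row's cell count must match the header count, and order can be "first" or "last". The argument shapes are assumptions.

```typescript
// Create a 2-row table; cell counts match the 3 headers.
await client.callTool({
  name: "roam_create_table",
  arguments: {
    headers: ["", "Owner", "Status"],        // first header typically empty
    rows: [
      { cells: ["Task A", "Ana", "TODO"] },
      { cells: ["Task B", "Ben", "DONE"] },
    ],
    order: "last",                           // "first" or "last" per the review
  },
});
```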
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description explains pagination, defaults, and case_sensitive behavior but lacks details on performance impact (beyond limit=-1 note) or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each providing distinct information: purpose, parameter usage, and specific use case. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters, no output schema, and no annotations, the description covers core functionality and parameter usage. It lacks info on return format or error handling but is sufficient for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description reinforces the schema but doesn't add significant new information beyond what is already in parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches for blocks containing a specific tag, and mentions a specific use case (ROAM_MEMORIES_TAG). This distinguishes it from sibling tools like roam_search_by_text or roam_search_block_refs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains how to use primary_tag and optional page_title_uid, and mentions pagination. It does not explicitly contrast with other search tools, but provides enough guidance for typical usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
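A sketch of the tag-search tool reviewed above. The tool name below is an assumption (the review does not state it); `primary_tag`, `page_title_uid`, `case_sensitive`, and the limit=-1 behavior are all named in the review.

```typescript
// Find blocks carrying a tag, optionally scoped to one page.
const tagged = await client.callTool({
  name: "roam_search_for_tag", // assumed name
  arguments: {
    primary_tag: "memories",
    page_title_uid: "Project Notes", // optional scope: page title or UID
    case_sensitive: false,
    limit: 50, // the review notes limit=-1 has performance implications
  },
});
```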
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It mentions creation of a new page and batch efficiency, but fails to disclose behavior for existing pages (overwrite? error?), idempotency, or return value. Write key hint indicates it's a write operation, but not enough for full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with bullet points and bold text for key information. Front-loaded with purpose. Each sentence adds value, though description could be slightly more concise without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (outlines, tables, nesting, headings) and no output schema, description is fairly complete. Explains content array structure, level, heading, and table details. Mentions required cheatsheet. Missing info on existing page behavior or return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds value by explaining content structure, nesting levels, and table format. Provides use-case context for parameters like 'content' array items and 'level' for tables. However, some details repeat schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool creates a new standalone page with optional content including outlines and tables. It distinguishes from siblings like roam_create_outline and roam_create_table by noting it's the preferred method for creating a page with an outline in one step. Also references roam_process_batch_actions for adding content to existing pages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'Best for:' list provides clear use cases. Includes an 'Efficiency Tip' and an 'IMPORTANT' note about loading the cheatsheet. Specifies when not to use (for adding content to existing pages) and directs to alternative tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
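A sketch of a roam_create_page call. The review confirms a `content` array whose items carry a nesting `level` and optional `heading`; the exact item shape is an assumption.

```typescript
// Create a page with a small nested outline in one step.
await client.callTool({
  name: "roam_create_page",
  arguments: {
    title: "Quarterly Plan",
    content: [
      { text: "Goals", level: 1, heading: 2 },        // rendered as a heading block
      { text: "Ship v2.19.0", level: 2 },             // nested under "Goals"
      { text: "Improve tool descriptions", level: 2 },
    ],
  },
});
```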
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description fully carries burden. Details matching mechanism (title prefix), automatic trailing slash, optional filters, and content inclusion. For a read operation, sufficiently discloses behavior without contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with main purpose upfront. Each sentence adds useful detail. Could be slightly tightened, but overall concise and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains what is returned (list of sub-pages, optionally with content). Covers main behavior, optional filters, and multi-graph parameter. Adequately complete for the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters have schema descriptions (100% coverage), but description adds value: explains prefix as namespace matching, filter_tag matches both #tag and [[tag]], and defaults. This provides context beyond raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Fetch all sub-pages (namespace children) of a given page prefix', providing a specific verb and resource. Also distinguishes from sibling roam_search_hierarchy by explaining the difference in matching (title prefix vs block parent/child).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use (to get sub-pages by prefix) and explicitly differentiates from roam_search_hierarchy. Does not explicitly state when not to use or list alternatives, but the differentiation is sufficient for guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains the algorithm in 4 steps (fetch, match, generate ops, preserve UIDs) and states that it generates minimal changes and preserves references. Since no annotations are provided, the description carries the full burden. It does not disclose error handling or rollback behavior, but the core behavioral traits are well-covered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear summary, bullet-pointed use cases, and a numbered algorithm. It is concise (no unnecessary words) and front-loads the core purpose. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the process thoroughly but does not document the return value or output schema. Given the complexity and the lack of output schema, the agent might need to know what the tool returns (e.g., success status or diff preview). The prerequisite warning is helpful but does not fully compensate for the missing output documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining how dry_run works ('returns the planned actions without executing them') and how graph and write_key are used in multi-graph mode. This extra context justifies a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update an existing page with new markdown content using smart diff' which is a specific verb+resource combination. It distinguishes from siblings like roam_create_page (create) and roam_import_markdown (import) by emphasizing preservation of block UIDs and minimal changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases ('Syncing external markdown files to Roam', 'AI-assisted content updates that preserve references', 'Batch content modifications without losing block references') and includes a critical prerequisite (ensuring the "Roam Markdown Cheatsheet" resource is loaded into context). However, it does not explicitly list when not to use this tool or mention alternatives like roam_import_markdown for fresh imports.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
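The four-step algorithm summarized above (fetch, match, generate ops, preserve UIDs) can be sketched in a few lines. This is an illustrative reconstruction only, with made-up Block/Op shapes and a crude matcher; it is not the server's actual implementation, which is certainly more sophisticated.

```typescript
type Block = { uid: string; text: string };
type Op =
  | { action: "update"; uid: string; text: string }
  | { action: "create"; text: string }
  | { action: "delete"; uid: string };

// Crude token-overlap similarity — a stand-in for whatever matcher the
// server really uses.
function similarity(a: string, b: string): number {
  const ta = new Set(a.split(/\s+/));
  const tb = new Set(b.split(/\s+/));
  const common = [...ta].filter((t) => tb.has(t)).length;
  return common / Math.max(ta.size, tb.size, 1);
}

function smartDiff(existing: Block[], incoming: string[]): Op[] {
  const ops: Op[] = [];
  const pool = [...existing]; // blocks not yet matched to a new line
  for (const text of incoming) {
    const exact = pool.findIndex((b) => b.text === text);
    if (exact >= 0) {
      pool.splice(exact, 1); // unchanged block: its UID (and all refs to it) survive untouched
      continue;
    }
    // Edited block: update in place so block references keep resolving,
    // instead of delete + create, which would orphan them.
    const close = pool.findIndex((b) => similarity(b.text, text) > 0.5);
    if (close >= 0) {
      ops.push({ action: "update", uid: pool.splice(close, 1)[0].uid, text });
    } else {
      ops.push({ action: "create", text });
    }
  }
  for (const b of pool) ops.push({ action: "delete", uid: b.uid }); // no counterpart in the new markdown
  return ops;
}
```

A dry_run mode like the one the description documents would simply return the planned ops without applying them.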
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the output (page content, plus linked references with breadcrumbs and children expanded) and mentions parameters like max_references that prevent timeouts. It does not explicitly state that the operation is read-only, though the nature of 'fetch' implies no side effects. The write_key parameter is included but described as being for write operations, which may confuse agents calling a fetch tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the tool's purpose and key features. Every sentence adds value: the first defines what it does, the second states when to use it. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description covers the main aspects: what is returned (page content, linked references with breadcrumb context and children), and key parameter roles (max_references for timeouts, children_depth for expansion). It does not detail output structure or error handling, but for a fetch tool with schema descriptions, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds extra guidance: for the title parameter it specifies the date-page format ('January 2nd, 2025'), and for max_references it explains that the cap prevents timeouts on heavily referenced pages. This additional context improves parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a complete page view mirroring Roam's UI, including page content and all linked references with breadcrumb context. This distinguishes it from simpler tools like roam_fetch_page_by_title (a sibling) that likely only fetch page content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this when you need the full picture of a page' and describes both page content and backlinks. It does not explicitly mention when not to use or directly name alternatives, but the context is clear. For a more explicit guideline, it could have suggested roam_fetch_page_by_title for simpler cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
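For illustration, a call using the parameters discussed above might look like the following. The tool name is not quoted in this review, so both it and the field names should be treated as assumptions.

```typescript
// Hypothetical arguments for the full-page-view fetch tool reviewed above.
const fetchArgs = {
  title: "January 2nd, 2025", // date pages use this exact format, per the description
  max_references: 50,         // caps linked references to avoid timeouts on heavily referenced pages
  children_depth: 2,          // assumed meaning: how deep to expand children under each reference
};
```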
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses the verification queries run after creation, rate-limit concerns, the risk of duplicate blocks, and the prerequisite of loading the markdown cheatsheet. It lacks error/feedback details but provides substantial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured, with a front-loaded action followed by sections on nesting, best-for cases, alternatives, and notes. Every sentence contributes, though the description runs slightly long.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, parameters, alternatives, prerequisites, and behavioral notes. It omits the shape of a successful response but is otherwise thorough for a tool with no output schema and six parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds the nesting rules (the first item must be level 1, and levels may not be skipped), the order options, and a write_key clarification, adding value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Add a structured outline to an existing page or block' with clear verb and resource. It distinguishes from roam_create_page (for new pages) and roam_process_batch_actions (for complex nesting).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use: 'Best for adding supplementary content...' and when-not-to-use: 'For complex nesting... consider roam_process_batch_actions instead.' Also warns about large outlines and rate limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
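A hedged sketch of a roam_create_outline call follows. The nesting rule is taken from the review; the field names themselves are assumptions.

```typescript
// Illustrative roam_create_outline payload. Field names are assumptions;
// the nesting rule (start at level 1, no skipped levels) comes from the review.
const outlineArgs = {
  page_title: "Project Alpha",     // assumed name for the target page/block field
  outline: [
    { text: "Goals", level: 1 },   // the first item must be level 1...
    { text: "Ship v1", level: 2 }, // ...and levels may not skip (1 -> 2, never 1 -> 3)
    { text: "Risks", level: 1 },
  ],
  order: "last",                   // assumed: one of the "order options" the description explains
};
```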
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses non-transactional execution, ordered actions, the UID placeholder mechanism, and the need for valid UIDs. It could be more explicit that deletes are destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is well-structured with bold headings and examples, front-loading key usage guidance. Slightly long but each part contributes meaningfully.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations or output schema, the description comprehensively covers efficiency, ordering, UID placeholders, prerequisites, and alternative tools, plus a reference to the markdown cheatsheet.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value with UID placeholder examples, an explanation of action ordering, and the Roam syntax supported in the string field.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it executes a sequence of block actions (create, update, move, delete) in a single batch, distinguishing it from sibling tools like roam_create_outline for simpler outlines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly recommends using this tool for efficiency with multiple operations, notes it is non-transactional, suggests alternatives like roam_create_outline for simpler outlines, and advises obtaining UIDs via other tools first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
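To illustrate the ordering and UID-placeholder behavior described above, here is a hypothetical roam_process_batch_actions payload. The placeholder syntax and field names are guesses; consult the tool's schema for the real shape.

```typescript
// Hypothetical batch payload. Actions run in order and are NOT transactional:
// a mid-batch failure leaves the earlier actions applied.
const batchArgs = {
  actions: [
    { action: "create-block", parent_uid: "abc123def", string: "New section", as: "$sec" }, // "$sec" is an assumed placeholder syntax
    { action: "create-block", parent_uid: "$sec", string: "Child bullet" },                 // references the block created above
    { action: "delete-block", uid: "oldUid999" },                                           // destructive: removed immediately
  ],
};
```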
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite having no annotations, the description discloses case sensitivity; lists the supported namespaces, attributes, predicates, and aggregates; and offers tips. It doesn't explicitly state that the tool is read-only (likely safe), but the context implies data retrieval. It lacks details on error handling and performance, but is transparent overall.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is long but well-structured with headings, bulleted lists, and an example. It front-loads the primary purpose and then provides layered detail. A few sentences could be trimmed, but overall clear and organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (custom query language, multiple parameters with dependencies, no output schema), the description is remarkably complete. It covers use cases, data model, predicates, aggregates, and practical tips, leaving little ambiguity for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds an example query, explains the regex filter's behavior (applied client-side, to specific target fields), gives graph/write_key context, and includes a tips section. This significantly enriches the meaning of the parameters beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it executes a custom Datomic query for advanced data retrieval beyond available search tools. It distinguishes itself from sibling tools like roam_search_by_text by targeting complex, custom queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists optimal use cases (advanced filtering, regex, complex boolean logic, arbitrary sorting, proximity search) with formatting that contrasts with simpler tools. Provides actionable guidance on when to use this tool over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
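As a concrete illustration of the kind of call this tool accepts, here is a hypothetical query payload. The wrapper's tool and argument names are assumptions; the Datalog itself uses standard Roam attributes (:node/title, :block/string, :block/page).

```typescript
// Hypothetical payload for the Datomic query tool (tool name not quoted in
// this review). The query finds titles of pages containing a block with
// "TODO" — note that string matching is case-sensitive, per the review.
const queryArgs = {
  query: `[:find ?title
           :where [?p :node/title ?title]
                  [?b :block/page ?p]
                  [?b :block/string ?s]
                  [(clojure.string/includes? ?s "TODO")]]`,
  regex_filter: "urgent|asap", // applied client-side to result fields, per the review
};
```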
- Behavior 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses key behaviors: creation of blocks if parent_string not found, verification by fetching full structure after import (which impacts rate limits), and the return of a nested structure. This fully informs the agent of side effects and performance considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with three short paragraphs, using bold for the API usage note and important prerequisite. Every sentence adds value: core function, alternative suggestion, prerequisite. No wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of importing markdown with parent location and creation, the description is complete: it covers the location methods, creation behavior, verification, the alternative tool, and the prerequisite. No output schema is needed, as the return value is described.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions. The description adds workflow semantics: explains the two-location method (UID vs string), creation behavior of parent_string, and ordering default. This enhances understanding beyond the schema, earning a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Import nested markdown content into Roam under a specific block.' It distinguishes from sibling roam_process_batch_actions by suggesting that alternative for large imports. The use of UID vs string matching is explained.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage guidance: suggests using roam_process_batch_actions for large imports or rate limit concerns. Also instructs to load the 'Roam Markdown Cheatsheet' resource before use. Provides clear context on when to prefer UID vs string matching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
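Finally, a hedged sketch of a roam_import_markdown call. parent_string is named directly in the review; the other field names are assumptions.

```typescript
// Illustrative roam_import_markdown payload.
const importArgs = {
  parent_string: "Imported notes",                 // located by text match; created if not found, per the review
  // parent_uid: "abc123def",                      // alternative location method: an existing block's UID
  markdown: "- Topic\n  - Detail A\n  - Detail B", // assumed field name for the nested content
  order: "last",                                   // assumed: the ordering default the description explains
};
```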
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
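For either badge, the exact embed snippet is provided by the copy widget on the server page. As a representative example only (the badge URL path here is an assumption, not confirmed by this page), a README embed might look like:

```markdown
[![Roam Research MCP server](https://glama.ai/mcp/servers/2b3pro/roam-research-mcp/badge)](https://glama.ai/mcp/servers/2b3pro/roam-research-mcp)
```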
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/2b3pro/roam-research-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.