xmp4 — Semantic code knowledge for your stack
Server Details
OSS libraries in your stack, as they are actually used: source, tests, callers. C#, Java, TS, Python, Rust, PHP, and more.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: 0ics-srls/lsai-xmp4.public
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- **Full call logging**: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- **Tool access control**: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- **Managed credentials**: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- **Usage analytics**: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 16 of 16 tools scored. Lowest: 2.8/5.
Each tool targets a distinct code analysis operation: callers/callees, dependencies, grep, hierarchy, info, outline, projects, search, source, symbol lookup, tests, usages, file viewing, and a guide. No two tools have overlapping purposes; even similar tools like xmp4_callees and xmp4_callers are clearly opposite directions.
All tools share the uniform 'xmp4_' prefix and follow a consistent lowercase_underscore convention. The names are consistently verb_noun (e.g., xmp4_search, xmp4_source) or noun-based (e.g., xmp4_callees, xmp4_deps) with no mixing of camelCase or other styles.
16 tools is slightly above the typical 3-15 range, but the count is justified by the comprehensive code analysis domain. Each tool serves a distinct and necessary function, so the set feels well-scoped rather than bloated.
The tool surface covers the full lifecycle of code exploration: project discovery, symbol search, definitions, call graphs, type hierarchy, usages, tests, grep, file viewing, and server info. There are no obvious gaps for a read-only semantic code knowledge server.
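To make that lifecycle concrete, here is a minimal sketch in Python of how an agent might chain these tools. The `call_tool` helper and the project/symbol names are illustrative assumptions, not part of xmp4:

```python
# Stand-in for a real MCP client invocation; replace with your client's API.
def call_tool(name: str, args: dict) -> dict:
    print(f"-> {name}({args})")
    return {}

# 1. Discover the project, 2. find a symbol, 3. read it, 4. walk its callers.
call_tool("xmp4_projects", {"query": "django", "language": "Python"})
call_tool("xmp4_search", {"project": "django/Django/Python", "query": "QuerySet"})
call_tool("xmp4_source", {"project": "django/Django/Python", "symbol_name": "QuerySet"})
call_tool("xmp4_callers", {"project": "django/Django/Python", "symbol_name": "QuerySet.filter"})
```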
Available Tools
16 tools
xmp4_callees (Grade A)
Find direct callees (methods called by) a symbol in a project. Navigate step-by-step by calling xmp4_callees again on interesting results.
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
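A minimal sketch of the step-by-step navigation the description suggests, written as Python argument payloads; the project id and symbol names are illustrative assumptions:

```python
# Hypothetical xmp4_callees arguments; field names follow the table above.
first_hop = {
    "project": "django/Django/Python",  # language suffix only to disambiguate
    "symbol_name": "QuerySet.filter",   # illustrative symbol
    "page_size": 20,
    "output_format": "Compact",
}
# Going one level deeper means calling xmp4_callees again with a callee
# picked from the previous response.
second_hop = {**first_hop, "symbol_name": "QuerySet._filter_or_exclude"}
```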
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool is for 'direct callees' and suggests iterative use, but it does not say whether the tool is read-only or rate-limited, nor how results are paginated (though pagination params exist). With no annotations, behavioral disclosure is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences, no wasted words. It front-loads the main action and includes a practical usage tip. Every sentence provides value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple navigation tool with no output schema, the description covers the core functionality and suggests iterative use. It could mention return format or pagination behavior, but given the schema describes page-related params, it's reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description adds limited value beyond the schema. The description does not explain parameter details; it focuses on purpose and usage. According to guidelines, baseline is 3 when coverage is high and param info is minimal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action: 'Find direct callees (methods called by) a symbol in a project.' It uses a specific verb ('find') and resource ('direct callees'), and it distinguishes from siblings like xmp4_callers (which would find callers) by specifying 'callees'. The tool name and description align perfectly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises to 'Navigate step-by-step by calling xmp4_callees again on interesting results,' implying a recursive usage pattern. However, it does not explicitly exclude alternatives or specify when not to use it compared to siblings like xmp4_hierarchy or xmp4_deps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_callers (Grade A)
Find direct callers of a symbol in a project. Navigate step-by-step by calling xmp4_callers again on interesting results.
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The page and page_size parameters imply pagination, and output_format shapes the results, but the description neither states explicitly that results are paginated nor describes the return format beyond the schema. With no annotations, it covers basic safety (a read-only operation is assumed) but never states outright that there are no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences, no wasted words. It effectively conveys the primary purpose and a key usage hint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple query tool with a single output and no output schema, the description is sufficient to understand usage. The schema covers all parameters. However, it could mention that pagination is supported (implied by parameters) and that results show callers in a list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description adds no additional meaning beyond the schema; it does not explain how parameters like docs, file_path, or output_format affect results. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds direct callers of a symbol in a project, which is specific and distinguishes it from siblings like xmp4_callees (which finds callees) and xmp4_usages (which finds all usages).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to navigate step-by-step by calling xmp4_callers again on interesting results, providing a clear usage pattern and distinguishing it from alternatives for iterative exploration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_deps (Grade B)
List external dependencies of a project.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| page_size | No | Results per page (default 20, max 100) | |
| output_format | No | Output format: Compact (default) or Verbose |
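For illustration, paging through a long dependency list would look like the following; the project id is an assumption:

```python
# Hypothetical xmp4_deps arguments; defaults are page=1, page_size=20 (max 100).
page_1 = {"project": "django/Django/Python", "page": 1, "page_size": 100}
page_2 = {**page_1, "page": 2}  # request the next slice if page 1 fills up
```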
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes a read operation without mentioning side effects, which is appropriate for a list tool. However, it does not disclose important behavioral traits such as pagination (though parameters indicate pagination), rate limits, or output behavior (e.g., how dependencies are returned). The description is adequate but could be more transparent about the output format or ordering.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that clearly states the tool's purpose. It is appropriately sized for a simple listing operation and avoids unnecessary detail. It could be slightly improved by adding a brief note about pagination or output format, but it is not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the parameter count (4), schema coverage (100%), no output schema, and the simplicity of the listing operation, the description is complete enough for basic usage. However, it omits context about what constitutes an 'external dependency' (e.g., versus internal dependencies) and does not mention that results are paginated (though parameters hint at it). With no output schema, the description could briefly explain the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 4 parameters with 100% description coverage, meaning each parameter has a clear schema-level description. The tool description does not add additional meaning beyond what the schema already provides (e.g., it doesn't explain that 'project' must follow the 'xmp4_projects' format or that 'output_format' options affect verbosity). Since schema coverage is high, the baseline is 3; no extra value is added by the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List external dependencies of a project' clearly identifies the verb (List) and resource (external dependencies of a project). It distinguishes this tool from siblings like xmp4_callees and xmp4_callers, which focus on internal call relationships, and from xmp4_info, which provides general metadata. It does not explicitly differentiate from siblings like xmp4_outline or xmp4_search, but the purpose is specific enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing external dependencies (opposed to internal ones), but it does not provide explicit guidance on when to use this tool versus alternatives (e.g., xmp4_callees for internal calls, xmp4_info for project metadata). It does not mention prerequisites or exclusions. The agent might infer usage from tool name and sibling set, but explicit guidance is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_grep (Grade A)
Server-side regex text search over indexed project source files. Free tier: requires file_path (single file). Premium tier (XMP4_PREMIUM_GREP_WALK=true): allows file_glob multi-file walk. Prefer xmp4_tests_for/xmp4_usages for SCIP symbols — grep is for text not indexed (comments, literals, config keys).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| pattern | Yes | Regex pattern (case-insensitive by default) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_glob | No | Optional file glob for multi-file walk (premium tier). Set XMP4_PREMIUM_GREP_WALK=true to enable. | |
| file_path | No | Single-file grep target, repo-relative (e.g. 'src/foo.rs'). Required in free tier. | |
| page_size | No | Results per page (default 20, max 100) | |
| max_results | No | Maximum number of hits to collect before pagination (default 50, max 1000) | |
| case_sensitive | No | If true, search is case-sensitive (default false) |
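A sketch of the two tiers as argument payloads; the project id, patterns, and paths are illustrative assumptions:

```python
# Free tier: file_path is required, so the regex runs over a single file.
free_tier = {
    "project": "tokio/tokio/Rust",
    "pattern": r"TODO|FIXME",      # case-insensitive unless case_sensitive=True
    "file_path": "src/lib.rs",     # repo-relative, as the schema requires
}
# Premium tier (server env XMP4_PREMIUM_GREP_WALK=true): file_glob walks many files.
premium_tier = {
    "project": "tokio/tokio/Rust",
    "pattern": r"unsafe\s+fn",
    "file_glob": "src/**/*.rs",
    "case_sensitive": True,
    "max_results": 200,            # hits collected before pagination (max 1000)
}
```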
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains tier-dependent behavior (free vs premium) and mentions regex pattern behavior (case-insensitive by default). Could add more on pagination or performance, but given zero annotations, this is strong.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose and tiers, second gives usage guidance. No fluff, every sentence provides critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters (all described in schema), no output schema, and no annotations, the description covers purpose, alternatives, and tier logic well. Slight gap: doesn't explain pagination behavior or output format, but schema covers params sufficiently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by explaining free tier requires file_path, premium tier enables file_glob, and gives examples. It doesn't detail every parameter but adds context beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description begins with 'Server-side regex text search over indexed project source files,' providing a specific verb and resource. It distinguishes from siblings like xmp4_tests_for and xmp4_usages by noting those are for SCIP symbols, while grep is for text not indexed like comments and literals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly states when to use grep vs alternatives: 'Prefer xmp4_tests_for/xmp4_usages for SCIP symbols — grep is for text not indexed (comments, literals, config keys).' Also explains free vs premium tier conditions with environment variable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_guide (Grade A)
Usage guide for xmp4 tools — read this first to learn the correct workflow
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states that this tool provides a 'usage guide' and should be used to 'learn the correct workflow', indicating it is a read-only, informational tool. Since no annotations are provided, the description carries full burden and adequately conveys the non-destructive, educational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that conveys the tool's purpose and usage guidance without any waste. It is front-loaded with the key term 'Usage guide' and immediately tells the agent to read it first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this tool has no input parameters, no output schema, and no annotations, the description is perfectly complete. It tells the agent everything needed: that it's a guide to be read first. Nothing is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description doesn't need to explain any. The schema description coverage is 100%, and the task of a guide with zero params is inherently self-explanatory. The description adds no param info because none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states this is a 'usage guide' that should be read first to learn the correct workflow, clearly distinguishing it from other xmp4 tools. The verb 'guide' and the resource 'workflow' make the purpose immediately clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to 'read this first', providing clear guidance on when to use this tool (before other tools) to learn the correct workflow. This effectively guides the agent's usage order among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_hierarchy (Grade B)
Get type hierarchy (base, interfaces, derived) for a symbol in a project.
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It states 'Get' which implies a read operation, but does not disclose any behavioral traits like pagination, error handling, or performance. Given the schema has pagination parameters (page, page_size), the description should mention that results can be paginated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is clear and front-loaded. It could be slightly expanded to include key parameter effects, but remains efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with full schema coverage but no output schema, the description is complete enough for a retrieval tool. However, it lacks guidance on pagination and output format, which are important for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond what the schema provides; it does not explain how parameters like 'docs', 'file_path', or 'output_format' affect the hierarchy output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'type hierarchy (base, interfaces, derived)' for a symbol in a project. It distinguishes from siblings by focusing on hierarchy rather than callers, callees, or usages. However, it could be more specific about what 'hierarchy' includes (e.g., inheritance chain).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when needing type hierarchy information, but does not explicitly state when to use this tool vs alternatives like xmp4_info or xmp4_outline. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_info (Grade A)
Get detailed information about a symbol in a project.
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
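Since the docs parameter is honored by xmp4_info only, a typical call pulls documentation along with the metadata. A sketch with an assumed project and symbol:

```python
# Hypothetical xmp4_info arguments; docs accepts none (default), summary, or full.
args = {
    "project": "django/Django/Python",
    "symbol_name": "Model.save",  # illustrative symbol
    "docs": "full",               # include complete documentation in the response
}
```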
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states that the tool returns information, but it does not mention that this is a safe, read-only operation, or whether specific permissions are required. The description is neutral but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that states the function. It is concise and front-loaded with the key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters, 100% schema coverage, and no output schema, the description adequately summarizes the tool's purpose. However, it does not explain that the output is a detailed symbol-info object, leaving the agent to infer the return shape. It is complete enough for a basic understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no value for parameters beyond the schema, but the schema descriptions are self-contained. However, the description's mention of 'detailed information' implies the output includes docs, which is consistent with the 'docs' parameter. A slight plus for aligning with schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'get' and resource 'detailed information about a symbol in a project'. It is distinct from siblings like xmp4_source (source code) and xmp4_usages (usages). However, it could be more specific by mentioning that it retrieves metadata (type, signature, docs) rather than just 'detailed information'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need info on a symbol, but does not explicitly state when to use this vs siblings like xmp4_callees or xmp4_view. No alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_outline (Grade B)
Get outline (all symbols) for a file in a project.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | Yes | File path | |
| page_size | No | Results per page (default 20, max 100) | |
| output_format | No | Output format: Compact (default) or Verbose |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It states the action (get outline) and input (file, project), but does not mention any side effects, permissions, or pagination behavior beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, concise and front-loaded with key information. However, it could include usage guidance without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a retrieval tool with complete schema coverage and no output schema, the description is adequate but does not clarify what an 'outline' entails (e.g., is it a list of symbol names?) and offers no documentation of the return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions. The description adds no additional meaning beyond the schema, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and resource 'outline (all symbols)', clearly indicating it retrieves the symbol outline for a file. It distinguishes from siblings like xmp4_symbol_at or xmp4_hierarchy by focusing on all symbols vs specific ones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool over others. While the sibling tools have different purposes, the description does not provide context or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_projects (Grade A)
Search and browse projects by language and name. Use this first to discover projects, then use other tools with the repo name.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| repo | No | Filter by repository name | |
| query | No | Search by project name (case-insensitive contains) | |
| language | No | Filter by language (e.g., Python, TypeScript, CSharp) | |
| page_size | No | Results per page (default 20, max 100) |
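All parameters are optional, so discovery can start broad and narrow down. A sketch (the query values are assumptions):

```python
# Hypothetical xmp4_projects arguments.
browse_all = {"page": 1, "page_size": 100}            # widest page allowed
narrowed = {"query": "django", "language": "Python"}  # case-insensitive contains
by_repo = {"repo": "django"}                          # filter by repository name
```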
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral transparency. It does not mention any behavioral traits like pagination behavior, authentication needs, or side effects. However, the input schema covers pagination with page and page_size, so the description's omission is partially mitigated by schema coverage. Still, additional context on return format or limits would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences with zero wasted words. It front-loads the purpose and immediately follows with usage guidance. Every sentence is essential and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, all optional, no output schema, no nested objects), the description is largely complete. It explains the tool's role in a workflow and leaves parameter details to the schema. A minor gap is the lack of information about what the tool returns (e.g., a list of project objects), but since there's no output schema, the description could hint at the return structure. Overall, adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning every parameter has a description in the schema. The tool description adds no new parameter meaning beyond what's in the schema. Therefore, baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to search and browse projects by language and name. It uses a specific verb ('Search and browse') and identifies the resource ('projects'). Among siblings like xmp4_repos and xmp4_search, this tool is distinguished as a project discovery tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: 'Use this first to discover projects, then use other tools with the repo name.' This provides clear workflow guidance and indirectly suggests when not to use it (e.g., after you have a repo name, use more specific tools).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_search (Grade A)
Search symbols in a project. Use xmp4_projects first to find the project identifier.
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | Filter by kind: Class, Method, Function, etc. | |
| page | No | Page number (1-based, default 1) | |
| query | Yes | Search query (symbol name or pattern) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| page_size | No | Results per page (default 20, max 100) | |
| max_results | No | Maximum results to return (default: 50). Kept as per-page upper bound alongside page_size. | |
| output_format | No | Output format: Compact (default) or Verbose |
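A sketch of a filtered symbol search, with an assumed project id resolved beforehand via xmp4_projects:

```python
# Hypothetical xmp4_search arguments; kind narrows by symbol category.
args = {
    "project": "django/Django/Python",
    "query": "get_object",  # symbol name or pattern
    "kind": "Method",       # e.g. Class, Method, Function
    "max_results": 50,      # per-page upper bound alongside page_size
}
```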
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions the prerequisite step but does not disclose whether the search is case-sensitive, supports regex, or any pagination behavior beyond the parameters. For a search tool, more behavioral details would be helpful, but the description is not misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first defines the tool's purpose, the second gives a usage tip. Both are concise and front-loaded. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 params, 2 required, no output schema, no annotations), the description is adequately complete for a search tool with a well-defined prerequisite. It could mention that results include symbol details, but it is sufficient for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it just repeats the project parameter's purpose. The schema itself documents all 7 parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Search symbols in a project' with a specific verb and resource. It clearly distinguishes from sibling tools like xmp4_grep (which likely searches file content) and xmp4_info (which gets symbol details). The search scope is explicitly tied to a project, narrowing the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use xmp4_projects first to find the project identifier', providing a clear prerequisite and guiding the agent on the correct workflow. This distinguishes it from other search tools that may not require a project context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_server (Grade A)
Get server information, version, and capabilities
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It states what it returns (info, version, capabilities) but doesn't detail performance, rate limits, or side effects. With no annotations, more context would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and direct, one sentence covering the purpose. It's appropriately sized for a tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately explains what the tool does. However, for a server info tool, it could mention that it returns JSON or provide example fields to be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema description coverage is 100%. The description adds context about the return data, which is helpful since there's no output schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves server information, version, and capabilities. However, it could better differentiate from sibling tools like xmp4_info, which might overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies it's for getting general server details, but doesn't specify when to use it over similar tools like xmp4_info or xmp4_guide, nor explicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_source (Grade A)
Extract source code for a symbol in a project.
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It does not disclose side effects, auth requirements, or pagination behavior. The description is neutral; no contradictions, but missing details about return format or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, front-loaded with the action. No wasted words, though slightly terse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters and no output schema, the description is minimal. It should explain that the output is source code lines, and possibly mention pagination or error handling. The description is adequate but could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% as all 7 parameters have descriptions. The tool description adds no extra meaning beyond what the schema provides. Baseline score of 3 is appropriate since description doesn't clarify parameter relationships (e.g., when 'file_path' is needed for disambiguation).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Extract' and the resource 'source code for a symbol in a project'. It distinguishes from siblings like xmp4_info (which extracts information) and xmp4_view (which views files), but could more explicitly contrast with xmp4_symbol_at.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for extracting source code of a symbol, but does not specify when to use it vs alternatives like xmp4_info or xmp4_symbol_at. No guidance on prerequisites (e.g., project must exist) or context when file_path is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_symbol_at (Grade A)
LSP-style position→symbol lookup. Given (file_path, line, column) returns the symbol whose occurrence covers that cursor position. line is 1-based; column is 0-based. Use after xmp4_outline/xmp4_view when you know coordinates and want the canonical symbol.
| Name | Required | Description | Default |
|---|---|---|---|
| line | No | Line number (1-based) | |
| column | No | Column number (0-based, default 0 = beginning of line) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | Yes | File path inside the project, relative to repo root |
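The mixed indexing conventions are the easiest thing to get wrong here, so a sketch with assumed coordinates:

```python
# Hypothetical xmp4_symbol_at arguments.
args = {
    "project": "django/Django/Python",
    "file_path": "django/db/models/query.py",  # repo-relative, illustrative
    "line": 42,    # 1-based
    "column": 8,   # 0-based; 0 means beginning of line
}
```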
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains that line is 1-based and column is 0-based, which is helpful, but it does not disclose behavior such as error handling (e.g., if coordinates don't correspond to a symbol), performance characteristics, or response structure. The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short paragraph that conveys purpose, usage context, and indexing conventions in an efficient manner. Every sentence is informative, and it is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (position->symbol lookup), has no output schema, and has 100% schema coverage, the description provides sufficient context for an AI agent to understand and invoke the tool correctly. The only minor gap is the lack of behavioral disclosure on error cases, but overall it is complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters with descriptions. The description adds the detail that column defaults to 0 (beginning of line) and reinforces indexing conventions, but these are already present in the schema. Thus, the description adds minimal semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: given file_path, line, and column, it returns the symbol whose occurrence covers that cursor position. It explicitly distinguishes from siblings by mentioning it is an 'LSP-style position→symbol lookup' and suggests using it after xmp4_outline/xmp4_view when you know coordinates and want the canonical symbol.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: 'Use after xmp4_outline/xmp4_view when you know coordinates and want the canonical symbol.' This tells the agent when to use this tool versus alternatives, offering clear guidance on prerequisites and intended workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_tests_for (Grade A)
Find direct tests that exercise a given symbol (direct callers filtered to test-file candidates per language pattern: CSharp/Java/PHP: *Test(s).*; Python: test_*.py / *_test.py; TypeScript/JavaScript: *.spec/test.{ts,js}; Rust: *_tests.rs / tests/; etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
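For illustration, a lookup of the tests covering one symbol; the names are assumptions:

```python
# Hypothetical xmp4_tests_for arguments; the server filters direct callers
# down to files matching the per-language test patterns listed above.
args = {
    "project": "django/Django/Python",
    "symbol_name": "QuerySet.filter",  # illustrative symbol
    "page_size": 50,                   # widen the page; tests can be numerous
}
```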
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses filtering behavior (to test-file candidates per language patterns) and scope (direct callers only), which is valuable behavioral context. Since no annotations are provided, the description carries full burden; it does well but could add more about return format or pagination. No contradictions with missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with parenthetical examples, which is concise and front-loaded with the key action. The language patterns are packed into parentheses, which is efficient. It could be slightly more structured (e.g., bullet points), but it's not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (7 parameters, no output schema), the description adequately explains what it does and the filtering rules. However, it does not explain the return format or how pagination works (page/page_size parameters), which would be helpful for a complete understanding. Since no output schema exists, some description of output would increase completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all 7 parameters. The description adds meaning for the symbol_name parameter (the target symbol) and implies project context from the required 'project' parameter, but does not add significant detail beyond what the schema provides for other parameters. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Find' and identifies the resource as 'direct tests that exercise a given symbol', clearly distinguishing it from siblings like xmp4_callers (which finds direct callers) and xmp4_usages (which finds all usages). The description also provides language-specific patterns for test files, adding precision.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Find direct tests that exercise a given symbol') and implies when not to use it (when not looking for direct test coverage). It contrasts with siblings that find all callers (xmp4_callers) or general usages (xmp4_usages), providing clear alternatives, though the description itself does not name them; the sibling list supplies that context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_usages (Grade C)
Find all usages/references of a symbol in a project.
| Name | Required | Description | Default |
|---|---|---|---|
| docs | No | Include docs: none (default), summary, or full (xmp4_info only) | |
| page | No | Page number (1-based, default 1; ignored by xmp4_info/xmp4_source) | |
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python', '/CSharp', '/Java', etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses; call xmp4_projects(query=...) once, then retry. | |
| file_path | No | File path to disambiguate | |
| page_size | No | Results per page (default 20, max 100) | |
| symbol_name | Yes | Symbol name | |
| output_format | No | Output format: Compact (default) or Verbose |
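A sketch showing file_path used for disambiguation, with assumed names:

```python
# Hypothetical xmp4_usages arguments; file_path pins down which definition
# is meant when a symbol name appears in more than one file.
args = {
    "project": "django/Django/Python",
    "symbol_name": "Manager",                     # deliberately ambiguous name
    "file_path": "django/db/models/manager.py",  # selects this definition
    "output_format": "Verbose",
}
```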
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the full burden. It states the tool 'Find[s]' usages, implying a read-only operation, but does not disclose any other behavioral traits such as whether it searches across files, respects file_path for disambiguation, or any performance considerations. The description is too brief to fully inform the agent of side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise but could include more context without becoming verbose. It front-loads the verb, which is good, but the sentence is minimal and does not elaborate on scope or limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a usage-finding tool with 7 parameters, the description is insufficient. It does not explain the optional parameters (e.g., docs, page, page_size, output_format) or how they influence results. There is no output schema to supplement behavior, so the description should say more about how the tool resolves symbols, how file_path disambiguates, and what 'usages' means (all references, only imports, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter has a schema description. However, the tool description does not add meaning beyond the schema. For example, it does not explain how 'docs' or 'output_format' affect results, or that 'page' and 'page_size' control pagination. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb 'Find' and the resource 'usages/references of a symbol in a project,' clearly indicating the tool's purpose. The name is distinguishable from siblings like xmp4_symbol_at (which likely resolves a specific symbol) and xmp4_search (which performs text search), but the description does not explicitly differentiate it from other reference-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives (e.g., xmp4_callees, xmp4_callers). The description implies it's for finding symbol usages, but there is no information on prerequisites or when to prefer other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
xmp4_view — A
Read a raw file excerpt from an indexed project by line range. Use after xmp4_search/xmp4_outline locates the region of interest, or to expand a truncated xmp4_source snippet. Hard cap of 500 lines per call.
| Name | Required | Description | Default |
|---|---|---|---|
| project | Yes | Project id: 'repo/project' or 'repo/project/language'. Case-insensitive prefix match. Append '/Python'|'/CSharp'|'/Java'|etc. only to disambiguate multi-language projects (e.g. 'django/Django/Python' vs 'django/Django/JavaScript'). 1 match → proceeds; N → warning lists candidates; 0 → do NOT iterate guesses, call xmp4_projects(query=...) once then retry. | |
| to_line | No | Ending line (inclusive, default from_line+49, hard cap 500 lines per call) | |
| file_path | Yes | File path inside the project, relative to repo root | |
| from_line | No | Starting line (1-based, default 1) |
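Following the same sketch convention, a hypothetical tools/call request for xmp4_view; the file path is invented for illustration, and the 50-line window matches the documented default of to_line = from_line + 49:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "xmp4_view",
    "arguments": {
      "project": "django/Django/Python",
      "file_path": "django/db/models/query.py",
      "from_line": 120,
      "to_line": 169
    }
  }
}
Omitting to_line would return the same 50-line window, and no single call can span more than the documented 500-line hard cap, so longer excerpts require multiple calls.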
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the hard cap of 500 lines, which is a key behavioral trait. It doesn't describe performance, error handling, or authentication needs, but the cap is a significant detail. It also omits any mention of result ordering or the line-numbering convention (1-based appears in the schema but not the description).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences: the first states the action, the second provides usage context, and the third a constraint. The description is information-dense with no wasted words. It could be slightly more structured (e.g., bullet points for constraints) but is functional.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity is low (a simple file read by line range), the description covers the essential purpose, usage context, and the key constraint (500 lines). No output schema exists, but the tool returns raw file content, which is straightforward. The description doesn't specify the return format (e.g., lines as an array or a single string), but that is inferable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds implicit meaning by mentioning 'by line range' and associating it with siblings, but doesn't elaborate on parameter formats or constraints beyond the schema. Default values for from_line and to_line are in the schema but not repeated.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Read a raw file excerpt from an indexed project by line range,' specifying the verb (read), resource (file excerpt), and scope (by line range). It also distinguishes itself from siblings by referencing xmp4_search/xmp4_outline and xmp4_source, making its role among the sibling tools unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: after xmp4_search/xmp4_outline locates a region of interest, or to expand a truncated xmp4_source snippet. This provides clear context and alternatives, and the hard cap of 500 lines gives practical usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
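If you want to distinguish the first two causes from the third before contacting support, you can POST a standard MCP initialize message to the server URL yourself, with Content-Type: application/json and an Accept header of application/json, text/event-stream, as the Streamable HTTP transport expects. A minimal sketch of that message; nothing in it is specific to this connector:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "health-probe", "version": "0.1" }
  }
}
A connection failure suggests an outage or a wrong URL; an HTTP 401 or 403 response points to missing or invalid credentials.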
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.