usageReferences
Find where ABAP code symbols are referenced in your development environment to analyze usage patterns and dependencies.
Instructions
Find symbol references
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
| line | No | | |
| column | No | | |
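Since the listing gives no examples, a call sketch helps show how the three parameters fit together. Everything below is an assumption: the URL format (an ADT-style source path) and the meaning of line/column are not documented by the server.

```typescript
// Hypothetical usageReferences call payload. The URL is an assumed
// ADT-style source path; line and column are optional and point at
// the symbol whose references should be found.
const usageReferencesCall = {
  name: "usageReferences",
  arguments: {
    url: "/sap/bc/adt/programs/programs/ztest/source/main", // assumed format
    line: 42,   // optional: line of the symbol
    column: 10, // optional: column within that line
  },
};
```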
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure but fails completely. It doesn't indicate whether this is a read-only operation, what permissions might be required, whether it's resource-intensive, what format results come in, or any other behavioral characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
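For comparison, the MCP specification defines behavioral annotation hints a server can attach to a tool. A sketch of what this tool could plausibly declare (a where-used lookup is presumably read-only, though that is an inference, not something the server states):

```typescript
// Sketch: MCP-style tool annotations this server could provide.
// The values are assumptions about a read-only where-used lookup.
const annotations = {
  title: "Find ABAP symbol references",
  readOnlyHint: true,      // presumably does not modify the ABAP system
  destructiveHint: false,  // no irreversible effects expected
  idempotentHint: true,    // repeated calls should return the same result
  openWorldHint: false,    // operates only on the connected system
};
```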
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is maximally concise at just three words. While this represents severe under-specification rather than ideal conciseness, from a pure structural perspective there's zero wasted verbiage and it's front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, no annotations, no output schema, and 0% schema description coverage, the description is completely inadequate. It doesn't explain what the tool actually does beyond the tautological name restatement, provides no parameter guidance, and offers no behavioral context, leaving the agent with insufficient information to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 3 parameters (url, line, column), the description provides zero information about what these parameters mean or how they should be used. The schema shows url is required while line and column are optional, but the description offers no context about what constitutes a valid URL or how line/column parameters affect the search.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
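A sketch of what the input schema could look like with per-parameter descriptions filled in. The semantics stated below (ADT object URL, position-based narrowing) are illustrative assumptions, not documented behavior of the actual server:

```typescript
// Sketch: the same three parameters with descriptions added.
// All described semantics are assumed, not confirmed by the server.
const inputSchema = {
  type: "object",
  properties: {
    url: {
      type: "string",
      description: "ADT object URL of the source containing the symbol",
    },
    line: {
      type: "number",
      description: "Line of the symbol; when omitted, the whole object may be searched",
    },
    column: {
      type: "number",
      description: "Column of the symbol within `line`",
    },
  },
  required: ["url"],
};
```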
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Find symbol references' is a tautology that essentially restates the tool name 'usageReferences' without providing meaningful specificity. It doesn't clarify what type of symbols, what context they're found in, or what constitutes a 'reference', making it vague about the actual operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides absolutely no guidance about when to use this tool versus alternatives. Given the many sibling tools (like findDefinition, findObjectPath, searchObject, etc.), there's no indication of how this tool differs or when it's the appropriate choice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
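To make the point concrete, here is a sketch of a description rewritten to disambiguate this tool from its siblings. The sibling tool names are taken from the text above; the wording itself is illustrative only, not the server's actual documentation:

```typescript
// Sketch: a description with explicit use-this-not-that guidance.
// Wording is hypothetical; sibling tool names come from the review text.
const improvedDescription = [
  "Find all locations that reference (use) an ABAP symbol, given its",
  "source URL and an optional line/column position.",
  "Use findDefinition to jump to where a symbol is declared, and",
  "searchObject to locate objects by name; use usageReferences for",
  "where-used analysis of an already-identified symbol.",
].join(" ");
```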