jons-mcp-java

by jonmmease

references

Locate all references to Java symbols in your codebase by specifying file, line, and character position. This tool helps developers understand symbol usage and track dependencies across Java projects.

Instructions

Find all references to the symbol at the given position.

Args:
    file_path: Absolute path to the Java file
    line: 0-indexed line number
    character: 0-indexed character position
    include_declaration: Whether to include the declaration in results

Returns: Dictionary with 'locations' array or 'status'/'message' if initializing
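For illustration, a successful call could return a payload shaped like the following (the path and positions here are hypothetical; the field names follow the normalized location format produced by the implementation shown below):

```python
# Hypothetical 'references' result for a symbol with one usage.
result = {
    "locations": [
        {
            "path": "/workspace/src/main/java/com/example/Foo.java",
            "uri": "file:///workspace/src/main/java/com/example/Foo.java",
            "line": 41,            # 0-indexed start line
            "character": 8,        # 0-indexed start column
            "end_line": 41,
            "end_character": 15,
        }
    ]
}
print(len(result["locations"]))
```

While the language server is still indexing, the tool instead returns a dictionary with "status" and "message" keys rather than "locations".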

Input Schema

Name                 Required  Description                                     Default
file_path            Yes       Absolute path to the Java file
line                 Yes       0-indexed line number
character            Yes       0-indexed character position
include_declaration  No        Whether to include the declaration in results   True

Output Schema


No arguments

Implementation Reference

  • The core handler function for the 'references' MCP tool. It uses the JDT.LS client to send an LSP 'textDocument/references' request and formats the locations.
    @mcp.tool()
    async def references(
        file_path: str,
        line: int,
        character: int,
        include_declaration: bool = True,
    ) -> dict:
        """
        Find all references to the symbol at the given position.
    
        Args:
            file_path: Absolute path to the Java file
            line: 0-indexed line number
            character: 0-indexed character position
            include_declaration: Whether to include the declaration in results
    
        Returns:
            Dictionary with 'locations' array or 'status'/'message' if initializing
        """
        manager = get_manager()
        if manager is None:
            return {"status": "error", "message": "Server not initialized"}
    
        client, status = await manager.get_client_for_file_with_status(Path(file_path))
    
        if client is None:
            return {"status": "initializing", "message": status}
    
        await client.ensure_file_open(file_path)
    
        response = await client.request(
            LSP_TEXT_DOCUMENT_REFERENCES,
            {
                "textDocument": {"uri": path_to_uri(file_path)},
                "position": {"line": line, "character": character},
                "context": {"includeDeclaration": include_declaration}
            }
        )
    
        return format_locations(response)
  • Imports the tools modules, which triggers registration of the @mcp.tool()-decorated 'references' function via FastMCP.
    # Import tools to register them
    from jons_mcp_java.tools import navigation, symbols, diagnostics, info  # noqa: E402, F401
  • Helper function used by the 'references' handler (and others) to normalize LSP location responses into a standard {'locations': [...]} format.
    def format_locations(response: dict | list | None) -> dict:
        """
        Normalize LSP Location response to a consistent format.
    
        LSP methods like definition can return:
        - null/None
        - Single Location object
        - Array of Location objects
        - Array of LocationLink objects
    
        This normalizes to: {"locations": [...]}
        """
        if response is None:
            return {"locations": []}
    
        if isinstance(response, dict):
            # Single Location or LocationLink
            return {"locations": [_normalize_location(response)]}
    
        if isinstance(response, list):
            return {"locations": [_normalize_location(loc) for loc in response]}
    
        return {"locations": []}
    
    
    def _normalize_location(loc: dict) -> dict:
        """Normalize a Location or LocationLink to a common format."""
        # LocationLink has targetUri/targetRange, Location has uri/range
        if "targetUri" in loc:
            # LocationLink
            uri = loc["targetUri"]
            range_obj = loc.get("targetSelectionRange") or loc.get("targetRange", {})
        else:
            # Location
            uri = loc.get("uri", "")
            range_obj = loc.get("range", {})
    
        # Convert URI to path for easier consumption
        try:
            path = str(uri_to_path(uri))
        except ValueError:
            path = uri
    
        start = range_obj.get("start", {})
        end = range_obj.get("end", {})
    
        return {
            "path": path,
            "uri": uri,
            "line": start.get("line", 0),
            "character": start.get("character", 0),
            "end_line": end.get("line", 0),
            "end_character": end.get("character", 0),
        }
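As a sketch, the LocationLink branch of the normalization above behaves roughly as follows (a standalone reimplementation for illustration, omitting the URI-to-path conversion):

```python
def normalize_location(loc: dict) -> dict:
    """Minimal standalone version of the normalization logic above."""
    if "targetUri" in loc:  # LocationLink
        uri = loc["targetUri"]
        range_obj = loc.get("targetSelectionRange") or loc.get("targetRange", {})
    else:  # Location
        uri = loc.get("uri", "")
        range_obj = loc.get("range", {})
    start = range_obj.get("start", {})
    end = range_obj.get("end", {})
    return {
        "uri": uri,
        "line": start.get("line", 0),
        "character": start.get("character", 0),
        "end_line": end.get("line", 0),
        "end_character": end.get("character", 0),
    }


# A LocationLink as some LSP servers return it for reference queries.
link = {
    "targetUri": "file:///tmp/Foo.java",
    "targetSelectionRange": {
        "start": {"line": 3, "character": 4},
        "end": {"line": 3, "character": 10},
    },
}
print(normalize_location(link))
```

Note that `targetSelectionRange` (the symbol name itself) is preferred over `targetRange` (the whole declaration), which keeps results pointed at the identifier rather than the enclosing block.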
  • LSP method constant used in the 'references' tool request.
    LSP_TEXT_DOCUMENT_REFERENCES = "textDocument/references"
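The `path_to_uri` and `uri_to_path` helpers referenced above are not shown on this page; a minimal standard-library sketch of what such helpers might look like:

```python
from pathlib import Path
from urllib.parse import unquote, urlparse


def path_to_uri(path: str) -> str:
    """Convert an absolute filesystem path to a file:// URI."""
    return Path(path).as_uri()


def uri_to_path(uri: str) -> Path:
    """Convert a file:// URI back to a filesystem path."""
    parsed = urlparse(uri)
    if parsed.scheme != "file":
        raise ValueError(f"Not a file URI: {uri}")
    return Path(unquote(parsed.path))
```

`Path.as_uri()` requires an absolute path, which matches the tool's contract that `file_path` be absolute.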
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It mentions the return format ("Dictionary with 'locations' array or 'status'/'message' if initializing"), which adds useful context beyond basic functionality. However, it doesn't disclose important behavioral aspects such as error conditions, performance characteristics, or what "initializing" means in the return context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by parameter explanations and return information. Every sentence serves a purpose, though the return statement could be slightly more concise. The information is appropriately front-loaded with the core functionality first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and the detailed parameter explanations in the description, the tool is well-documented for its complexity. The description covers the essential what, how (via parameters), and what to expect in return. The main gap is lack of usage guidance relative to sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides excellent parameter semantics despite 0% schema description coverage. It clearly explains each parameter's purpose: 'file_path: Absolute path to the Java file', 'line: 0-indexed line number', 'character: 0-indexed character position', and 'include_declaration: Whether to include the declaration in results'. This fully compensates for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find all references') and target ('to the symbol at the given position'), distinguishing it from sibling tools like 'definition' or 'implementation' which serve different purposes. It uses precise technical language appropriate for a code analysis tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'definition' or 'implementation'. While the purpose is clear, there's no mention of typical use cases, prerequisites, or comparison with sibling tools that might overlap in functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
