
list_identifiers

Display tracked Python identifiers in your session. Filter by type like functions or variables to organize code elements.

Instructions

List all tracked identifiers in the current session.

Optionally filter by type (function, variable, class, method, constant).

Input Schema

Name         Required  Description  Default
type_filter  No        (none)       (none)

Output Schema

No arguments
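For orientation, a sketch of the shape a successful call might return. The field names are taken from the handler under Implementation Reference below; the sample values themselves are hypothetical:

```python
# Hypothetical response for list_identifiers(type_filter="function").
response = {
    "count": 1,
    "identifiers": [
        {
            "name": "parse_config",
            "type": "function",
            "occurrences": 3,
            "first_seen": "2024-05-01T12:00:00",  # ISO 8601, per .isoformat()
            "last_seen": "2024-05-01T12:45:00",
            "signatures": ["parse_config(path: str) -> dict"],
            "files": ["src/config.py"],
        }
    ],
}

# The handler derives count from the filtered list, so these always agree.
assert response["count"] == len(response["identifiers"])
```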

Implementation Reference

  • The MCP tool handler function 'list_identifiers' that processes requests and returns identifier data.
    @mcp.tool()
    async def list_identifiers(
        type_filter: str | None = None, context: Context | None = None
    ) -> dict[str, Any]:
        """
        List all tracked identifiers in the current session.
    
        Optionally filter by type (function, variable, class, method, constant).
        """
        identifiers = session_tracker.list_identifiers(id_type=type_filter)
    
        return {
            "count": len(identifiers),
            "identifiers": [
                {
                    "name": info.name,
                    "type": info.type,
                    "occurrences": info.occurrences,
                    "first_seen": info.first_seen.isoformat(),
                    "last_seen": info.last_seen.isoformat(),
                    "signatures": info.signatures,
                    "files": list(info.file_locations),
  • The underlying logic in 'SessionTracker' that retrieves and filters identifiers from the session data.
    def list_identifiers(self, id_type: str | None = None) -> list[IdentifierInfo]:
        """List all tracked identifiers, optionally filtered by type."""
        if id_type:
            return [info for info in self.identifiers.values() if info.type == id_type]
        return list(self.identifiers.values())
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states that the tool lists tracked identifiers, implying a read-only operation, but doesn't clarify whether the call is safe to repeat, whether it requires specific permissions, or how it handles large result sets (e.g., pagination). The mention of 'current session' adds some context, but key behavioral traits such as side effects and performance characteristics go undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by an optional feature. Both sentences are essential and waste-free, making it highly efficient and easy to scan. The structure is logical and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which should cover return values), the description doesn't need to explain outputs. However, with no annotations and low schema coverage, it partially compensates by detailing the parameter's semantics. It's adequate for a simple list tool but lacks guidance on usage and behavioral context, making it minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'type_filter' can filter by categories like function, variable, class, method, or constant, which clarifies the parameter's purpose beyond the schema's generic title. However, it doesn't specify allowed values or format details, leaving some gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('tracked identifiers in the current session'), making the purpose understandable. The listing verb also distinguishes it from siblings such as 'track_identifier', which creates entries rather than reading them. However, it doesn't explicitly contrast this tool with the other read operations among its siblings, which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions optional filtering by type but doesn't specify scenarios for filtering or when to choose this over other sibling tools like 'check_code' or 'suggest_fix'. There's no mention of prerequisites or exclusions, leaving usage context vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
