Glama

locate_concept

Find ranked entry points—key functions, classes, and files—to understand a concept, reducing manual search. Includes contrastive concepts to clarify boundaries. Use for 'how does X work' or 'where is X defined'.

Instructions

Find the best entry points for understanding a concept — returns a ranked shortlist of key functions, classes, and files to read, plus contrastive concepts that clarify boundaries. Saves reading dozens of grep matches by surfacing the most important locations first. Use when asked 'how does X work', 'where should I look for X', or 'where is X defined'.

Input Schema

Name    Required    Description                             Default
term    Yes         Concept to locate (e.g. 'transform')    -
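Given the schema above, a sketch of the JSON-RPC 2.0 `tools/call` request an MCP client would send to invoke this tool. The envelope-building helper here is illustrative, not part of the plugin source:

```typescript
// Illustrative only: the shape of a JSON-RPC 2.0 tools/call request for
// locate_concept, which takes a single required `term` string.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: { term: string } };
}

function buildLocateConceptCall(term: string, id = 1): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: "locate_concept", arguments: { term } },
  };
}

const req = buildLocateConceptCall("transform");
// req.params.arguments is { term: "transform" }
```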

Implementation Reference

  • Tool definition for 'locate_concept' including its MCP name, label, description, promptSnippet, and parameter schema (term: string). This is the TypeScript type/schema definition.
    {
      mcpName: "locate_concept",
      label: "Locate Concept",
      description:
        "Find the best entry points for understanding a concept — ranked " +
        "shortlist of key functions, classes, and files to read.",
      promptSnippet:
        "ontomics_locate_concept: find key entry points for a concept",
      parameters: Type.Object({
        term: Type.String({
          description: "Concept to locate (e.g. 'transform')",
        }),
      }),
    },
  • Registration of all tools including 'locate_concept'. The tool is registered via pi.registerTool with the name 'ontomics_locate_concept'. The execute handler delegates to an external ontomics MCP binary via the McpClient class.
    for (const def of toolDefs()) {
      pi.registerTool({
        name: `ontomics_${def.mcpName}`,
        label: def.label,
        description: def.description,
        promptSnippet: def.promptSnippet,
        promptGuidelines: [
          "Use ontomics tools BEFORE grep/glob for semantic codebase questions.",
        ],
        parameters: def.parameters,
        async execute(_toolCallId, params, _signal, onUpdate, _ctx) {
          onUpdate?.({
            content: [{ type: "text", text: `Querying ontomics: ${def.mcpName}...` }],
          });
          try {
            const mcp = await getClient();
            const text = await mcp.callTool(def.mcpName, cleanArgs(params));
            return { content: [{ type: "text", text }] };
          } catch (err) {
            throw new Error(
              `ontomics ${def.mcpName} failed: ${err instanceof Error ? err.message : String(err)}`,
            );
          }
        },
      });
    }
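The registration loop above passes params through `cleanArgs` before forwarding them. That helper is not shown in the source; a plausible minimal sketch is a filter that strips undefined values, so omitted optional parameters never reach the binary:

```typescript
// Hypothetical sketch of cleanArgs (the real implementation is not shown):
// drop undefined entries so optional parameters are omitted from the
// JSON-RPC arguments object instead of being serialized as null/undefined.
function cleanArgs(params: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) out[key] = value;
  }
  return out;
}
```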
  • Generic execute handler for all tools (including locate_concept), shown inline in the registration snippet above. It sends a JSON-RPC 'tools/call' request to the external ontomics binary over stdio, passing the tool name 'locate_concept' and cleaned parameters.
  • McpClient.callTool method that performs the actual JSON-RPC call to the ontomics binary. This is what the execute handler uses to invoke 'locate_concept' on the Rust backend.
    async callTool(
      name: string,
      args: Record<string, unknown>,
    ): Promise<string> {
      const result = (await this.request("tools/call", {
        name,
        arguments: args,
      })) as { content?: Array<{ text?: string }> };
      const text = result.content?.[0]?.text ?? JSON.stringify(result);
      return text;
    }
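To make the extraction on the last two lines of `callTool` concrete, here is the same fallback logic pulled out as a standalone function with two worked cases (the function name is ours, not the plugin's):

```typescript
// Same logic as callTool's result handling: the first text block wins,
// and any result without a usable content array falls back to raw JSON.
type ToolResult = { content?: Array<{ text?: string }> };

function extractText(result: ToolResult): string {
  return result.content?.[0]?.text ?? JSON.stringify(result);
}

extractText({ content: [{ text: "ranked entry points..." }] }); // "ranked entry points..."
extractText({}); // "{}" — the JSON.stringify fallback
```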
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description implies a read-only operation that ranks results. It does not disclose potential side effects, rate limits, or response structure beyond the shortlist. Adequate, but it could explicitly state that the tool is non-destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundant information. First sentence defines function and output, second gives use cases. Efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is complete: it explains purpose, typical use cases, and the nature of the result (ranked shortlist). No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'term' is fully described in the schema, which also supplies an inline example ('transform'). The description explains how the parameter is used to find entry points, adding meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool finds the best entry points for understanding a concept, returning a ranked shortlist of key functions, classes, and files, plus contrastive concepts. It is distinguished from sibling tools like 'trace_concept' by its focus on initial exploration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases ('how does X work', 'where should I look for X'), giving clear context for when to invoke. Lacks explicit guidance on when not to use or direct alternatives, but the specificity is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
