Glama

query_concept

Look up semantic concepts in codebases to find variants, related terms, naming conventions, function signatures, and file locations in one query.

Instructions

Semantic concept lookup — returns all variants (including abbreviations like trf for transform), related concepts, naming conventions, function signatures, and file locations in one call. Richer than grep: querying 'transform' also finds 'trf', 'spatial_transform', 'apply_transform', and related concepts like 'displacement'. Use when asked 'what is X', 'what does X mean', or 'where is X used'.

Input Schema

Name            | Required | Description                                    | Default
----------------|----------|------------------------------------------------|--------
term            | Yes      | The concept term to look up (e.g. 'transform') | (none)
max_related     | No       | Max related concepts to return                 | 10
max_occurrences | No       | Max occurrence locations to return             | 5
max_variants    | No       | Max variant identifiers to return              | 20
max_signatures  | No       | Max function signatures to return              | 5
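To make the schema concrete, here is a minimal sketch of what a `tools/call` request for this tool might look like from an MCP client. The JSON-RPC envelope is the standard MCP shape; the argument names come from the schema above, and only `term` is required (the `max_*` limits fall back to their documented defaults when omitted):

```python
import json

# Hypothetical MCP tools/call payload for query_concept.
# Only "term" is required; the max_* arguments shown are optional
# and merely override the defaults listed in the schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_concept",
        "arguments": {
            "term": "transform",   # required: the concept to look up
            "max_related": 10,     # optional (default: 10)
            "max_occurrences": 5,  # optional (default: 5)
        },
    },
}

print(json.dumps(request, indent=2))
```

The exact transport and envelope depend on the client library in use; this only illustrates how the documented parameters map onto a request.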
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It effectively describes behavioral traits: it's a semantic lookup that returns multiple types of information (variants, related concepts, conventions, signatures, locations) in one call. However, it doesn't mention performance characteristics, error conditions, or authentication needs, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first clause, followed by specific capabilities, comparison to alternatives, and usage guidelines. Every sentence earns its place by adding distinct value; there is no redundancy or wasted wording. The structure flows logically from what the tool does to when to use it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a semantic lookup tool with no annotations and no output schema, the description provides good context about what information is returned and when to use it. However, it doesn't describe the return format or structure, which would be helpful given the absence of an output schema. The description compensates well for missing annotations but could better address output expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all five parameters thoroughly. The description adds minimal parameter semantics beyond the schema; it only illustrates 'term' with the example 'transform'. This meets the baseline of 3 since the schema does the heavy lifting, but the description does not add significant value regarding parameter meaning or usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('lookup', 'returns') and resources ('variants', 'related concepts', 'naming conventions', 'function signatures', 'file locations'). It distinguishes from siblings by explicitly comparing to 'grep' and listing richer capabilities, making it easy to differentiate from tools like 'locate_concept' or 'list_concepts'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use when asked "what is X", "what does X mean", or "where is X used".' It also distinguishes from alternatives by stating it's 'Richer than grep' and gives concrete examples of what it finds, helping the agent choose this over simpler lookup tools like 'locate_concept'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/EtienneChollet/ontomics'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.