quick_query

Analyze code and files using Gemini's large context window for security, architecture, or performance insights. Ask questions about specific files or repository code to get focused analysis.

Instructions

Analyze code/files quickly using Gemini's large context window. Preferred when questions mention specific files or require reading repository code. Example: {prompt: 'Explain @src/auth.ts security approach', focus: 'security', responseStyle: 'concise'}

Input Schema

Name          | Required | Description                           | Default
------------- | -------- | ------------------------------------- | -------
prompt        | Yes      | Research question or analysis request |
focus         | No       | Optional focus area to guide analysis |
responseStyle | No       | Desired verbosity of response         | normal
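For reference, the schema above can be sketched as a payload builder. This is a minimal illustration, not the server's implementation: the JSON-RPC envelope follows the standard MCP `tools/call` shape, and the handling of `responseStyle` is an assumption based on the example and the documented default.

```python
# Sketch: build an MCP tools/call request body for quick_query.
# Only 'prompt' is required; 'focus' and 'responseStyle' are optional,
# with 'responseStyle' defaulting to "normal" per the schema.

def build_quick_query_call(prompt, focus=None, response_style="normal"):
    """Return a tools/call payload satisfying the quick_query input schema."""
    if not prompt:
        raise ValueError("prompt is required")
    arguments = {"prompt": prompt}
    if focus is not None:
        arguments["focus"] = focus
    if response_style != "normal":
        arguments["responseStyle"] = response_style
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "quick_query", "arguments": arguments},
    }

# Mirrors the example from the tool's instructions.
call = build_quick_query_call(
    "Explain @src/auth.ts security approach",
    focus="security",
    response_style="concise",
)
```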
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'quickly' and 'Gemini's large context window,' which gives some context about speed and capability, but doesn't cover important behavioral aspects like rate limits, authentication needs, error handling, or what the output looks like. The example helps illustrate usage but doesn't fully compensate for the lack of structured behavioral information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured. It starts with the core purpose, provides usage guidelines, and includes a concrete example—all in three sentences. Every sentence adds value: the first defines the tool, the second gives usage context, and the third illustrates with an example. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (analysis tool with 3 parameters, no annotations, no output schema), the description is somewhat complete but has gaps. It covers purpose and usage well, but lacks details on behavioral traits (e.g., what 'quickly' means operationally, error cases) and output format. The example helps, but without an output schema, the agent doesn't know what to expect from the tool's response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value beyond the schema: it mentions 'questions mention specific files' which relates to the 'prompt' parameter, and the example shows usage of all parameters. However, it doesn't provide additional semantic context or constraints beyond what's in the schema descriptions and enums.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze code/files quickly using Gemini's large context window.' It specifies the verb ('analyze'), the resource ('code/files'), and the key capability ('Gemini's large context window'). It also distinguishes itself from sibling tools by noting it is 'preferred when questions mention specific files or require reading repository code,' which helps differentiate it from tools like analyze_directory or deep_research.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Preferred when questions mention specific files or require reading repository code.' This tells the agent when to use this tool versus alternatives. While it doesn't explicitly name sibling tools, their names elsewhere in the listing (e.g., 'analyze_directory', 'deep_research') combined with this guidance help the agent distinguish appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/capyBearista/gemini-researcher'
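The same lookup can be prepared with Python's standard library. This is a sketch only: the endpoint comes from the curl command above, and the commented-out response handling assumes a JSON body, which the API documentation would need to confirm.

```python
import urllib.request

# Build the same GET request as the curl command above. Nothing is sent
# until urlopen() is called, so this only prepares the request object.
url = "https://glama.ai/api/mcp/v1/servers/capyBearista/gemini-researcher"
req = urllib.request.Request(url, method="GET")

# To actually fetch and decode the response (assumed to be JSON):
# import json
# with urllib.request.urlopen(req) as resp:
#     server_info = json.load(resp)
```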
