Glama

Server Details

Google Drive MCP Pack

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-google_drive
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 10 of 10 tools scored.

Server Coherence: B

Disambiguation: 3/5

Tools like drive_list_files and drive_search both search for files using Drive query syntax, creating overlapping functionality. The ask_pipeworx tool is vague and could be confused with other data-retrieval tools. However, memory tools (remember, recall, forget) are distinct.

Naming Consistency: 3/5

Google Drive tools follow a consistent drive_verb_noun pattern (drive_create_file, drive_get_content, etc.), but the memory tools (remember, recall, forget) and Pipeworx tools (ask_pipeworx, discover_tools) break this pattern.

Tool Count: 4/5

With 10 tools, the count is appropriate for a Google Drive server. The inclusion of memory and Pipeworx tools expands the scope beyond Drive but remains manageable.

Completeness: 2/5

The server lacks basic CRUD operations for Drive files: there is no update (edit file metadata or content) or delete tool. Search and list overlap, while the memory and Pipeworx tools are unrelated to Drive, leaving gaps in core Drive functionality.

Available Tools

10 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)

question (required): Your question or request in natural language
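For orientation, a minimal sketch of what a call to this tool looks like over a plain JSON-RPC 2.0 transport (the `tools/call` method and params shape follow the MCP specification; the example question is taken from the tool's own description):

```python
import json

# Sketch of an MCP "tools/call" request for ask_pipeworx. The only
# required argument is the natural-language question string.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```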
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool internally selects the best source and fills in arguments, giving insight into its autonomous behavior; with no annotations, there is nothing for the description to contradict.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is three sentences with key information front-loaded, including examples. Every sentence adds value, no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema and no output schema, the description adequately covers what the tool does and how to use it. Could mention that it may return text from various sources, but sufficient for the use case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and description adds no parameter-level detail beyond the schema's 'Your question or request in natural language', but the overall use case is clear; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts plain English questions and returns answers by routing to the best data source, distinguishing it from other tools that require schema knowledge or manual tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to ask questions in natural language, provides examples, and implies not needing to browse tools or learn schemas, but does not explicitly state when not to use it or contrast with siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)

limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
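The limit constraints above can be enforced client-side. A hypothetical helper (the function name and clamping behavior are illustrative, not part of the server):

```python
# Builds discover_tools arguments; "limit" defaults to 20 and is capped
# at 50 per the schema. The helper itself is a hypothetical sketch.
def discover_args(query: str, limit: int = 20) -> dict:
    return {"query": query, "limit": max(1, min(limit, 50))}

args = discover_args("find trade data between countries", limit=80)
print(args)  # limit is clamped to 50
```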
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It states the tool searches and returns results, which is appropriate for a read-only operation. However, it doesn't disclose any behavioral traits such as whether it modifies state, whether results are ranked, or if there are any limitations (e.g., search quality). The description is clear but minimal for behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long with zero wasted words. It front-loads the purpose and immediately follows with the usage directive. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 simple params, no output schema, no nested objects), the description covers the essential information: what it does, when to use it, and how to use the query parameter. The lack of output schema means the agent doesn't know what the return format is, but the description implies it returns tool names and descriptions, which is likely sufficient for the task of selecting tools. It could be more complete by mentioning the result format, but the context signals show a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of the query parameter with concrete examples ('analyze housing market trends', etc.) and specifying default and max values for limit. This goes beyond the schema's basic description, providing practical guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('search') and resource ('tool catalog'), and explains it returns 'the most relevant tools with names and descriptions.' The phrase 'by describing what you need' adds specificity. It also distinguishes from siblings by being a catalog search tool, not a direct data access or file tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides a clear usage directive and implies when not to use (e.g., when you already know the tool). No alternatives are named, but the context of having many tools justifies calling this first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

drive_create_file: A

Create a new file in Drive with specified name, content, and type (e.g., 'text/plain', 'application/pdf'). Returns the file ID for future reference.

Parameters (JSON Schema)

name (required): Name for the new file
content (required): Text content of the file
mime_type (required): MIME type of the file (e.g., "text/plain", "application/json", "text/html")
parent_folder_id (optional): ID of the parent folder (defaults to root)
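A hypothetical argument builder makes the required/optional split concrete; when parent_folder_id is omitted, the server defaults to the Drive root:

```python
# Hypothetical helper for drive_create_file arguments. Only the three
# required parameters are always sent; parent_folder_id is included
# only when the caller supplies it.
def create_file_args(name, content, mime_type, parent_folder_id=None):
    args = {"name": name, "content": content, "mime_type": mime_type}
    if parent_folder_id is not None:
        args["parent_folder_id"] = parent_folder_id
    return args

args = create_file_args("report.txt", "Q3 summary", "text/plain")
print(args)
```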
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates a mutation (create), but does not disclose behavioral traits like whether it overwrites existing files, permission requirements, or error conditions. No annotations are provided, so the description carries the burden, and it only partially fulfills it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Effectively communicates the core action and key parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description is adequate but leaves gaps (e.g., default behavior for parent folder, return value). It provides the essential purpose but not full context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description need not add much. However, it mentions 'name, content, and MIME type' which matches the required parameters, but does not add meaning beyond what the schema already provides. The optional 'parent_folder_id' is not mentioned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Create') and resource ('new file in Google Drive'), and mentions key attributes (name, content, MIME type). It clearly distinguishes from sibling tools like 'drive_list_files' or 'drive_get_content'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for creating files, but lacks explicit guidance on when to use this vs alternatives (e.g., drive_get_file for reading, drive_list_files for listing). No mention of when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

drive_get_content: A

Download file content from Drive. Export Google Docs/Sheets/Slides to PDF, Word, Excel, etc., or retrieve raw content from other files.

Parameters (JSON Schema)

file_id (required): The ID of the file to download
export_mime_type (optional): MIME type to export Google Workspace files to (e.g., "text/plain", "application/pdf", "text/csv"). Required for Google Docs/Sheets/Slides.
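The "required only for Workspace files" rule can be encoded client-side. The MIME types below are Google's documented Workspace types; the helper function itself is a hypothetical sketch:

```python
# Google Workspace MIME types that cannot be downloaded raw and must be
# exported via export_mime_type, per the schema note above.
WORKSPACE_TYPES = {
    "application/vnd.google-apps.document",      # Google Docs
    "application/vnd.google-apps.spreadsheet",   # Google Sheets
    "application/vnd.google-apps.presentation",  # Google Slides
}

def get_content_args(file_id, mime_type, export_mime_type="application/pdf"):
    # Hypothetical helper: send export_mime_type only for Workspace files;
    # all other files are fetched as raw content.
    args = {"file_id": file_id}
    if mime_type in WORKSPACE_TYPES:
        args["export_mime_type"] = export_mime_type
    return args

print(get_content_args("abc123", "application/vnd.google-apps.document"))
```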
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states that the tool returns raw content or exports to a format, which covers the core behavior. However, it does not mention whether the operation is read-only (likely, but not stated), any side effects, rate limits, or auth requirements. The description is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Each sentence adds essential information: first sentence defines the action and scope, second sentence clarifies the two modes of operation. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (2 params, no output schema, no nested objects), the description covers the essential purpose and parameter semantics. It could mention that the return value is binary or the exported content, but since there is no output schema, this is a minor gap. Overall, the description is nearly complete for this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds context that export_mime_type is required for Google Workspace files, which reinforces the schema but does not add new meaning beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Download or export'), the resource ('content of a Google Drive file'), and distinguishes between two use cases: exporting Google Docs/Sheets/Slides to a format, and getting raw content for binary files. This differentiates it from sibling tools like drive_get_file (which presumably retrieves metadata) and drive_create_file.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use export_mime_type (for Google Workspace files) but does not explicitly exclude scenarios, provide prerequisites, or mention alternatives. It lacks explicit when-to-use or when-not-to-use guidance, which is acceptable given the tool's narrow scope but could be improved.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

drive_get_file: A

Get metadata for a specific Drive file by ID. Returns name, type, size, owners, permissions, creation date, and last modified time.

Parameters (JSON Schema)

fields (optional): Comma-separated list of fields to include (default: id,name,mimeType,size,createdTime,modifiedTime,owners,webViewLink)
file_id (required): The ID of the file to retrieve
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It discloses that the tool returns metadata (name, mimeType, etc.) and does not modify or delete, but lacks details on access requirements (e.g., permission scope), rate limits, or behavior when file_id is invalid. With no annotations, a score of 3 is appropriate as it provides basic but incomplete behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the purpose and lists the key returned fields with no fluff; it is concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only metadata retrieval with no output schema, the description is adequate but lacks guidance on error handling (e.g., missing file) or rate limits. Given tool simplicity, it's minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both parameters described). The description adds no additional meaning beyond the schema for file_id, but does not elaborate on fields (e.g., format, available values). Baseline 3 is correct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get metadata' and resource 'Google Drive file by ID', clearly distinguishing from siblings like drive_get_content (which retrieves file content) and drive_list_files (which lists files without a specific ID). The listed fields (name, mimeType, size, etc.) further clarify the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when metadata is needed for a specific file ID, but does not explicitly state when to use this over alternatives. For example, it doesn't contrast with drive_search or drive_list_files, nor does it mention prerequisites like having the file ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

drive_list_files: A

List files in your Google Drive. Optionally filter by name, type, owner, or modified date (e.g., 'name contains "report"'). Returns file names, IDs, types, and metadata.

Parameters (JSON Schema)

q (optional): Drive search query (e.g., "name contains 'report'" or "mimeType='application/pdf'")
order_by (optional): Sort order (e.g., "modifiedTime desc", "name")
page_size (optional): Maximum number of files to return (default 10, max 100)
page_token (optional): Token for fetching the next page of results
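The page_token parameter implies the standard drain-the-pages loop. A sketch under assumptions: call_tool stands in for whatever MCP client invocation you use, and the "files" / "nextPageToken" result shape is assumed (the server publishes no output schema):

```python
def list_all_files(call_tool, q=None, page_size=100):
    # Drain all pages from drive_list_files. call_tool is any function
    # that sends the MCP call and returns a parsed result dict; the
    # "files"/"nextPageToken" keys are assumed, not documented.
    files, token = [], None
    while True:
        args = {"page_size": page_size}
        if q:
            args["q"] = q
        if token:
            args["page_token"] = token
        result = call_tool("drive_list_files", args)
        files.extend(result.get("files", []))
        token = result.get("nextPageToken")
        if not token:
            return files

# Fake two-page transport, for illustration only.
pages = [
    {"files": [{"id": "1"}, {"id": "2"}], "nextPageToken": "t1"},
    {"files": [{"id": "3"}]},
]
def fake(name, args):
    return pages.pop(0)

result_files = list_all_files(fake, q="name contains 'report'")
print(len(result_files))  # 3
```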
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavior. It mentions optional filtering but lacks details like default sorting, pagination behavior, or any side effects. The description is minimal but not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise and front-loaded, with no wasted words. A sentence on when to use the tool could be added, but the brevity is appropriate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with full schema coverage and no output schema, the description is adequate but could mention pagination behavior or default order. It's not incomplete, but not rich.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining the 'q' parameter uses Drive query syntax and gives examples in the schema, which is helpful.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists files in Google Drive and optionally filters with a search query, which is specific and distinct from sibling tools like drive_create_file or drive_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (listing files, optionally filtering), but does not explicitly contrast with siblings like drive_search or provide when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: A

Delete a stored memory by key.

Parameters (JSON Schema)

key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It states 'Delete' which implies irreversible mutation, but does not disclose whether confirmation is needed, if the key must exist, or any side effects. A 3 is appropriate for adequate but not detailed transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, short sentence with no filler. Every word is necessary and front-loaded with the action 'Delete'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete tool with one required parameter and no output schema, the description is minimally complete. It does not explain return behavior (e.g., success vs. not found) or error states, but the tool's simplicity reduces need for more detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one required parameter 'key' described as 'Memory key to delete'. The description reinforces that the key identifies what to delete, adding no new info but confirming the schema. With high coverage, baseline is 3, but the alignment earns a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Delete') and resource ('stored memory by key'), clearly distinguishing it from sibling tools like 'remember' (create) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies deletion by key, which is clear usage context. However, no explicit when-not-to-use or alternatives are given. Sibling names hint at distinctions but the description itself does not elaborate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. States behavior (list all when key omitted) but does not disclose read-only nature or potential side effects. However, given the tool's simple nature, the description is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with purpose, then usage context. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple input schema (1 optional param), no output schema, and no annotations, the description is complete. It explains the two modes of operation. Could mention that return format is a list or single string, but not required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and description adds meaning by explaining that omitting key lists all memories. The parameter description in schema is clear, so additional detail is minimal but sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves a memory by key or lists all memories when key is omitted. Uses specific verbs ('Retrieve', 'list') and resource ('memory'), and distinguishes from sibling tools like 'remember' (which stores) and 'forget' (which deletes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit context: 'Retrieve context you saved earlier in the session or in previous sessions.' While it doesn't explicitly state when not to use it, the purpose is clear enough to avoid confusion with other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
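Taken together with recall and forget, the three memory tools form a simple key-value lifecycle. An illustrative call sequence (tool names and argument shapes come from the descriptions on this page; the lifecycle itself is a sketch):

```python
# Illustrative sequence of memory-tool calls (tool name, arguments).
# "recall" with no key lists all stored keys, per its description.
lifecycle = [
    ("remember", {"key": "target_ticker", "value": "AAPL"}),
    ("recall", {"key": "target_ticker"}),  # fetch one memory
    ("recall", {}),                        # omit key to list all keys
    ("forget", {"key": "target_ticker"}),  # delete when no longer needed
]
for name, args in lifecycle:
    print(name, args)
```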
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses persistence behavior (authenticated vs. anonymous) and session duration (24 hours for anonymous), which is valuable beyond the input schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, concise and front-loaded with the core action. Every sentence adds value, and the use cases are stated efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (two string params, no output schema, no nested objects), the description is complete. It covers purpose, usage, and persistence behavior adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add parameter-specific details beyond what the schema already provides, but the examples in the schema are good. No extra semantic guidance is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'store a key-value pair in your session memory', which is a specific verb and resource. It distinguishes itself from sibling tools like 'forget' and 'recall' by focusing on writing data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use: 'save intermediate findings, user preferences, or context across tool calls.' It also differentiates between authenticated and anonymous sessions, but does not explicitly mention when not to use it or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

